AviSynth.VideoFrame

The AviSynth.VideoFrame class provides access to a video frame.

Constructors

VideoFrames cannot currently be created directly. They can only be obtained by using the getFrame(frameNumber) method on an existing Clip.

Fields

There are two ways to access video frame data, and the fields available depend on which one is being used.

If interleaved is true (which it will be for RGB and YUY2), then you have:

Otherwise, if planar is true, those fields are instead available inside:

ArrayBuffer is part of a new JavaScript feature that V8 supports called JavaScript typed arrays. Essentially, the ArrayBuffer is an object that represents the raw data to JavaScript, while the various TypedArray objects provide access to that data. (Yes, that link is to the Mozilla documentation. Chrome doesn't provide documentation directly. However, the Mozilla documentation covers the standard and, as long as you avoid the Mozilla-specific features, covers the API available through JSVSynth.)

This means that in order to access the pixel data, you need to wrap it in a typed array view. The simplest is Uint8Array, which provides byte-level access to the pixel data.
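For illustration, here is a minimal sketch of wrapping an ArrayBuffer in a Uint8Array view. A small stand-in buffer is used here in place of an actual frame's data:

```javascript
// Stand-in buffer; with a real frame this would be the frame's pixel data.
var buffer = new ArrayBuffer(4);
// A Uint8Array view exposes the buffer one byte at a time.
var bytes = new Uint8Array(buffer);
bytes[0] = 255;       // write a single byte
var first = bytes[0]; // read it back
```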

However, for RGB32 data, you might find it easier to use Uint32Array, which gives you access to an entire pixel at a time.
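As a sketch (again with a stand-in buffer), both views can share the same buffer, and writing a single 32-bit "pixel" through the Uint32Array view fills four bytes at once:

```javascript
var buffer = new ArrayBuffer(8);       // room for two 4-byte "pixels"
var bytes = new Uint8Array(buffer);    // byte-level view
var pixels = new Uint32Array(buffer);  // whole-pixel view of the same buffer
pixels[0] = 0xFFFFFFFF;                // one write touches 4 bytes
```

Note that with a Uint32Array view the per-pixel index becomes y * (pitch / 4) + x, since pitch counts bytes, not 32-bit elements.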

In AviSynth, a planar data source will be either YV12 or I420 (which differ only in the order of the chroma planes, but the difference is detectable). This means that bitPerPixel will be 12, and bytesPerPixel will be an inaccurate 1.
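The 12 bits per pixel follow from the subsampling arithmetic: the Y plane stores 8 bits per pixel, while the U and V planes are each sampled at half resolution in both dimensions, contributing 2 bits per pixel apiece. A quick sketch (the frame size here is arbitrary):

```javascript
var width = 640, height = 480;           // arbitrary example size
var ySize = width * height;              // full-resolution luma plane, in bytes
var uSize = (width / 2) * (height / 2);  // chroma plane, subsampled 2x2
var vSize = uSize;
// Total bits divided by pixel count gives the advertised 12 bits per pixel.
var bitsPerPixel = ((ySize + uSize + vSize) * 8) / (width * height);
```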

NOTE: Accessing the data element at all will force JSVSynth to make the frame "writeable" which will effectively create an extra copy of the frame. There is currently no way to get read-only access to a frame.

Functions

Accessing a Pixel

Or, "what's the difference between pitch and rowSize?"

Basically, data that is provided through the various data arrays may be "padded" along the side. The simplest reason is when a clip is cropped - rather than adjust the frame data, AviSynth just marks the new edges and reuses the frame data. (There are other reasons for this that have to do with data alignment, but that's beyond the scope of this page.)

So rowSize gets you the number of "visible" bytes in a row, while pitch is the number of bytes you need to move forward to the next row of pixels.
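To make the distinction concrete, here is a sketch using a hypothetical padded frame (all field values here are invented for illustration): the inner loop walks only the visible rowSize bytes, while pitch is used to advance to the next row.

```javascript
// Hypothetical 2x2 RGB32 frame with 8 bytes of padding per row.
var frame = {
  height: 2,
  bytesPerPixel: 4,
  rowSize: 8,                   // 2 pixels * 4 bytes: the visible bytes per row
  pitch: 16,                    // bytes to move forward to the next row
  data: new Uint8Array(2 * 16)  // stand-in for the real frame data
};
var visiblePixels = 0;
for (var y = 0; y < frame.height; y++) {
  for (var x = 0; x < frame.rowSize; x += frame.bytesPerPixel) {
    frame.data[y * frame.pitch + x] = 0xFF; // first byte of each visible pixel
    visiblePixels++;
  }
}
```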

This means accessing a specific pixel is done via:

frame.data[y * frame.pitch + x * frame.bytesPerPixel]

Technically, that just gets you the first byte. So, for example, to access all four channels in an RGB32 clip (RGBA), you'd use:

var offset = y * frame.pitch + x * frame.bytesPerPixel
var r = frame.data[offset + 2]
var g = frame.data[offset + 1]
var b = frame.data[offset]
var a = frame.data[offset + 3]

This makes more sense when you know that the x86 chips AviSynth runs on are little-endian, and the data is stored in memory as BGRABGRABGRA..., meaning that reading a single 32-bit value from a pixel location yields a value whose bytes are in ARGB order, which is the order used for color in BlankClip.
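That byte order can be sketched with a stand-in buffer in place of real frame data: store BGRA bytes individually, then read them back as one little-endian 32-bit value.

```javascript
var buffer = new ArrayBuffer(4);
var bytes = new Uint8Array(buffer);
bytes[0] = 0x00; // B
bytes[1] = 0x80; // G
bytes[2] = 0xFF; // R
bytes[3] = 0xFF; // A
// On a little-endian machine the 32-bit read comes back in ARGB order:
// 0xFFFF8000 here, i.e. A=0xFF, R=0xFF, G=0x80, B=0x00.
var pixel = new Uint32Array(buffer)[0];
```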