This document proposes some additional ImageData methods to asynchronously convert between HTMLImageElement, ImageData and Blob.
This is an unofficial draft to propose these features for the public-webapps mailing list.
Modern web apps need asynchronous functions to process image data without "janking" the browser UI. The ImageData interface was originally specified for use with the canvas [[2dcontext]], and as such all conversions involving it generally require synchronous use of an intermediate CanvasRenderingContext2D (which may introduce intermediate copies and additional memory overhead). Web apps may also need to convert images to and from Blobs representing images in their compressed form (such as a PNG file). This proposal adds new methods to circumvent synchronous use of the canvas and avoid any intermediate copies of images for these use cases, as well as methods to test which image formats can be encoded and decoded.
The first proposed method is toBlob() on ImageData. It is intended to work identically to the existing HTMLCanvasElement toBlob() method, as if the ImageData had first been put in a canvas with putImageData and then toBlob called on the canvas with the same arguments. The encoding type defaults to image/png; that type is also used if the given type isn't supported. A second parameter provides the encoder options for the given type. If image/png is used, this parameter is ignored. Other formats use the dictionary to specify the encoder options, e.g. JPEG quality. (TODO)
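As a usage sketch, and assuming the encoder-options argument is a dictionary with a quality member (the exact shape is the TODO noted above), encoding to JPEG might look like:

// Hypothetical usage sketch of the proposed toBlob() method.
// The { quality: 0.8 } dictionary is an assumption for illustration;
// encoder options for non-PNG formats are not yet specified (TODO above).
function ImageDataToJpegBlob(imageData) {
    return imageData.toBlob("image/jpeg", { quality: 0.8 });
};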
A static ImageData.create() method accepts an HTMLImageElement or a Blob and asynchronously returns a new ImageData object with the pixel data. This is intended to work identically to drawing the HTMLImageElement to a canvas and then using getImageData. In the case of passing a Blob it would work as with the HTMLImageElement, by first obtaining a URL to the Blob and setting the src attribute of a new image to the blob URL.
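As an illustrative sketch, decoding directly from an image element might look like the following (the element id is hypothetical):

// Sketch of the proposed static ImageData.create() with an
// HTMLImageElement source. "photo" is a hypothetical element id.
let img = document.getElementById("photo");

ImageData.create(img).then(imageData => {
    // imageData.data exposes the decoded RGBA pixels
    console.log(imageData.width + "x" + imageData.height);
});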
Finally, two feature-detection methods are proposed. One reports whether a given image MIME type can be decoded, and is intended to be the image equivalent of HTMLMediaElement.canPlayType. The other reports whether a given MIME type can be used with the toBlob methods. This allows easy feature detection of support for encoding new image formats such as WebP or any other future formats, and is intended to be the image equivalent of MediaRecorder.canRecordMimeType.
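The intended feature-detection pattern might look like the sketch below; canEncodeType is only a placeholder name, since this draft does not fix the method names:

// Sketch of encode feature detection before using a newer format.
// canEncodeType is a placeholder name; this draft does not fix the
// names of the feature-detection methods, only their intent.
function ImageDataToWebPOrPngBlob(imageData) {
    let type = ImageData.canEncodeType("image/webp") ? "image/webp" : "image/png";
    return imageData.toBlob(type);
};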
The following example demonstrates decompressing a Blob to ImageData via an intermediate canvas. Note that the steps involving the canvas, in particular getImageData, are synchronous.
// note: looks asynchronous because it necessarily returns a promise
// due to loading the blob as an Image, but the important parts are
// still synchronous.
function BlobToImageData(blob) {
    let blobUrl = URL.createObjectURL(blob);

    return new Promise((resolve, reject) => {
        let img = new Image();
        img.onload = () => resolve(img);
        img.onerror = err => reject(err);
        img.src = blobUrl;
    }).then(img => {
        URL.revokeObjectURL(blobUrl);

        let w = img.width;
        let h = img.height;

        let canvas = document.createElement("canvas");
        canvas.width = w;
        canvas.height = h;
        let ctx = canvas.getContext("2d");
        ctx.drawImage(img, 0, 0);

        return ctx.getImageData(0, 0, w, h);    // some browsers synchronously decode image here
    });
};
With this proposal it can be simplified to:
function BlobToImageData(blob) {
    return ImageData.create(blob);
};
The next example demonstrates compressing an ImageData to a Blob via an intermediate canvas.
function ImageDataToBlob(imageData) {
    let w = imageData.width;
    let h = imageData.height;

    let canvas = document.createElement("canvas");
    canvas.width = w;
    canvas.height = h;
    let ctx = canvas.getContext("2d");
    ctx.putImageData(imageData, 0, 0);    // synchronous

    return new Promise((resolve, reject) => {
        canvas.toBlob(resolve);    // implied image/png format
    });
};
With the new method this can be simplified to:
function ImageDataToBlob(imageData) {
    return imageData.toBlob();    // implied "image/png" format
};
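Combining the two proposed methods, a full decode-modify-encode round trip could be sketched as follows; the pixel inversion is just an arbitrary example of processing:

// Sketch: decode a Blob, invert the pixels, then re-encode to PNG,
// using only the proposed methods plus standard ImageData access.
function InvertBlob(blob) {
    return ImageData.create(blob).then(imageData => {
        let data = imageData.data;    // Uint8ClampedArray of RGBA values

        for (let i = 0; i < data.length; i += 4) {
            data[i]     = 255 - data[i];        // red
            data[i + 1] = 255 - data[i + 1];    // green
            data[i + 2] = 255 - data[i + 2];    // blue
            // alpha (data[i + 3]) left unchanged
        }

        return imageData.toBlob();    // implied "image/png" format
    });
};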
ImageData can already be converted to and from a canvas via getImageData and putImageData (which probably should remain synchronous to avoid racing with other drawing), and to ImageBitmap via createImageBitmap. Conversion back to an HTMLImageElement requires that there be a valid src for the image, so this should be done by first converting to a Blob and creating a URL to the blob.
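For example, the round trip back to an HTMLImageElement described above could be sketched like this, using the proposed toBlob() together with a blob URL:

// Sketch: convert an ImageData back to an HTMLImageElement by first
// encoding to a Blob, then loading a new image from a URL to that blob.
function ImageDataToImage(imageData) {
    return imageData.toBlob().then(blob => {
        let blobUrl = URL.createObjectURL(blob);

        return new Promise((resolve, reject) => {
            let img = new Image();
            img.onload = () => {
                URL.revokeObjectURL(blobUrl);
                resolve(img);
            };
            img.onerror = err => reject(err);
            img.src = blobUrl;
        });
    });
};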
Conversion from an ImageBitmap is omitted to avoid complications with the fact that implementations may store them on the GPU in premultiplied form. Converting an ImageBitmap to an ImageData could therefore require an expensive readback from the GPU, and make it difficult to avoid a lossy pass through a premultiplied format. Instead, whatever the ImageBitmap was created from should be converted directly to an ImageData, without using ImageBitmap as an intermediate stage.
Note this proposal does not cover animated formats, in particular GIF, APNG or MJPEG. It is suggested that ImageData remain a representation of a single static image, and that another interface providing access to an array of ImageData be specified for animated formats, with its own methods to convert to/from Blob. If an animated format is passed to the proposed methods, they should operate on the first frame only.
One suggestion was to use ImageBitmap as a "hub" of conversion, especially since createImageBitmap can already asynchronously convert a number of formats to ImageBitmap. However, the purpose of an ImageBitmap is to be able to render "without undue latency", which implementors may interpret as a GPU resource in premultiplied format. This makes it difficult to convert to a different format, since GPU readback is expensive and premultiplication is lossy. It is not specifically intended to make ImageData the "hub" of conversion, but it is the appropriate format when the image pixel data needs to be read or modified.
One alternative solution would be to add an asynchronous getImageData method to the 2d context. However, most modern implementations use GPU-accelerated 2d contexts, and such a method could require readback from the GPU, which has costly implications for performance. Use of a canvas also likely introduces additional memory use and possibly intermediate copies of the image data, which ought to be avoided for best performance and minimal memory overhead. GPU resources may also be premultiplied, which is lossy.
Further, the logical counterpart to an asynchronous getImageData method is an asynchronous putImageData method. It is hard to see how this could be implemented without creating race conditions if more synchronous drawing happens during the processing. Should drawing done while it is processing be queued up to happen after, or should the drawing be overwritten when putImageData completes? Neither solution seems great, and the proposed methods circumvent the canvas entirely, side-stepping this problem.
Another possible solution is to run the canvas in a worker, so that the synchronous use of the canvas for conversion purposes is processed in parallel to the main thread. However, implementing this efficiently requires that various types be transferable, which has further implications. It also does not help convert HTMLImageElement, since that is a DOM element and not available in workers. It would also likely require intermediate copies of the image data, or the additional memory overhead of the canvas.