W3C

Filter Effects 1.0

16 December 2011

Editors:
Vincent Hardy, Adobe Systems,
Dean Jackson, Apple Inc.,
Erik Dahlström, Opera Software ASA,
Authors:
The authors of this specification are the participants of the W3C CSS and SVG Working Groups.

Abstract

Filter effects are a way of processing an element's rendering before it is displayed in the document. Typically, rendering an element via CSS or SVG can conceptually be described as if the element, including its children, were drawn into a buffer (such as a raster image) and then that buffer composited into the element's parent. Filters apply an effect before the compositing stage. Examples of such effects are blurring, changing color intensity and warping the image.

Although originally designed for use in SVG, filter effects are a set of operations to apply on an image buffer and therefore can be applied to nearly any presentational environment, including CSS. They are triggered by a style instruction (the ‘filter’ property). This specification describes filters in a manner that allows them to be used in content styled by CSS, such as HTML and SVG. It also defines a CSS property value function that produces a CSS <image> value.

Status of This Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This document is the first public working draft of this specification.

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

The (archived) public mailing list public-fx@w3.org (see instructions) is preferred for discussion of this specification. When sending e-mail, please put the text “Filter Effects” in the subject, preferably like this: “[Filter Effects] …summary of comment…”.

This document was produced by the CSS Working Group (part of the Style Activity) and the SVG Working Group (part of the Graphics Activity).

This document was produced by groups operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures (CSS) and a public list of any patent disclosures (SVG) made in connection with the deliverables of each group; these pages also include instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

The list of changes made to this specification is available.

Table of contents

1. Introduction

A filter effect is a graphical operation that is applied to an element as it is drawn into the document. It is an image-based effect, in that it takes zero or more images as input, a number of parameters specific to the effect, and then produces an image as output. The output image is either rendered into the document instead of the original element, used as an input image to another filter effect, or provided as a CSS image value.

A simple example of a filter effect is a "flood". It takes no image inputs but has a parameter defining a color. The effect produces an output image that is completely filled with the given color. A slightly more complex example is an "inversion", which takes a single image input (typically an image of the element as it would normally be rendered into its parent) and adjusts each pixel so that it has the opposite color values.
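Expressed as SVG filter primitives, these two examples might look like the following sketch (the element ids are illustrative assumptions, not part of this specification):

<filter id="flood-example">
  <!-- No image input: fills the filter region with a solid color -->
  <feFlood flood-color="darkorange" flood-opacity="1"/>
</filter>

<filter id="invert-example">
  <!-- Single image input: maps each color channel value x to 1 - x -->
  <feComponentTransfer in="SourceGraphic">
    <feFuncR type="table" tableValues="1 0"/>
    <feFuncG type="table" tableValues="1 0"/>
    <feFuncB type="table" tableValues="1 0"/>
  </feComponentTransfer>
</filter>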

Filter effects are exposed with three levels of complexity:

  1. A small set of canned filter functions that are given by name. While not particularly powerful, these are convenient and easily understood and provide a simple approach to achieving common effects, such as blurring.
  2. A graph of individual filter effects described in markup that define an overall effect. The graph is agnostic to its input in that the effect can be applied to any content. While such graphs are the combination of effects that may be simple in isolation, the graph as a whole can produce complex effects. An example is given below.
  3. A customizable system that exposes a shading language allowing control over the geometry and pixel values of filtered output.
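In CSS syntax, the three levels might be exercised as follows (the selector names and resource URLs are illustrative assumptions):

.simple { filter: blur(4px); }                     /* 1: canned filter function */
.graph  { filter: url(filters.svg#MyFilter); }     /* 2: a filter element defining a graph */
.shader { filter: custom(url(warp.vs)); }          /* 3: a custom shader (see the custom() function) */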

The following shows an example of a filter effect.

Example filters01 - introducing filter effects.

The filter effect used in the example above is repeated here, with the reference number of each of the six filter primitives given as an XML comment immediately before the primitive:

<filter id="MyFilter" filterUnits="userSpaceOnUse" x="0" y="0" width="200" height="120">
  <desc>Produces a 3D lighting effect.</desc>
  <!-- 1 -->
  <feGaussianBlur in="SourceAlpha" stdDeviation="4" result="blur"/>
  <!-- 2 -->
  <feOffset in="blur" dx="4" dy="4" result="offsetBlur"/>
  <!-- 3 -->
  <feSpecularLighting in="blur" surfaceScale="5" specularConstant=".75" 
                      specularExponent="20" lighting-color="#bbbbbb" 
                      result="specOut">
    <fePointLight x="-5000" y="-10000" z="20000"/>
  </feSpecularLighting>
  <!-- 4 -->
  <feComposite in="specOut" in2="SourceAlpha" operator="in" result="specOut"/>
  <!-- 5 -->
  <feComposite in="SourceGraphic" in2="specOut" operator="arithmetic" 
               k1="0" k2="1" k3="1" k4="0" result="litPaint"/>
  <!-- 6 -->
  <feMerge>
    <feMergeNode in="offsetBlur"/>
    <feMergeNode in="litPaint"/>
  </feMerge>
</filter>

The following pictures show the intermediate image results from each of the six filter elements:

[Images: the original source graphic, followed by the intermediate result after each of filter primitives 1 through 6.]

  1. Filter primitive feGaussianBlur takes input SourceAlpha, which is the alpha channel of the source graphic. The result is stored in a temporary buffer named "blur". Note that "blur" is used as input to both filter primitives 2 and 3.
  2. Filter primitive feOffset takes buffer "blur", shifts the result in a positive direction in both x and y, and creates a new buffer named "offsetBlur". The effect is that of a drop shadow.
  3. Filter primitive feSpecularLighting uses buffer "blur" as a model of a surface elevation and generates a lighting effect from a single point source. The result is stored in buffer "specOut".
  4. Filter primitive feComposite masks out the result of filter primitive 3 by the original source graphic's alpha channel so that the intermediate result is no bigger than the original source graphic.
  5. Filter primitive feComposite composites the result of the specular lighting with the original source graphic.
  6. Filter primitive feMerge composites two layers together. The lower layer consists of the drop shadow result from filter primitive 2. The upper layer consists of the specular lighting result from filter primitive 5.

2. Reading This Document

Each section of this document is normative unless otherwise specified.

This document contains explicit conformance criteria that overlap with some RelaxNG definitions in requirements. If there is any conflict between the two, the explicit conformance criteria are the definitive reference.

Note that even though this specification references parts of SVG 1.1 it does not require an SVG 1.1 implementation. Add link to conformance classes here.

3. Definitions

When used in this specification, terms have the meanings assigned in this section.

null filter

The null filter output is all transparent black pixels. If applied to an element it means that the element (and its children, if any) becomes invisible. Note that it does not affect event processing.

transfer function elements

The set of elements, feFuncR, feFuncG, feFuncB, feFuncA, that define the transfer function for the feComponentTransfer filter primitive.

bounding client rect

The union of all CSS border-boxes for the element if formatted with the CSS box model, as defined by the CSS OM method getBoundingClientRect [CSSOM].

CSS bounding box

The union of all CSS border-boxes for the element and all its descendant elements, provided the element is formatted with the CSS box model [CSS].

current user coordinate system

For elements formatted with the CSS box model: the current user coordinate system has its origin at the top-left corner of the bounding client rect and one unit equals one CSS px. The viewport for resolving percentage values is defined by the width and height of the bounding client rect.

For elements using SVG layout see user coordinate system.

object bounding box units
For elements formatted with the CSS box model: the bounding box is defined by the CSS bounding box.

For elements using SVG layout the bounding box is defined by the SVG bounding box.

For both cases the bounding box defines the coordinate system in which to resolve values, as defined in object bounding box units.

<filter-primitive-reference>

A string that identifies a particular filter primitive's output.

filter primitives, filter primitive elements

The set of elements that control the output of a ‘filter’ element, namely: feDistantLight, fePointLight, feSpotLight, feBlend, feColorMatrix, feComponentTransfer, feComposite, feConvolveMatrix, feDiffuseLighting, feDisplacementMap, feFlood, feGaussianBlur, feImage, feMerge, feMorphology, feOffset, feSpecularLighting, feTile, feTurbulence, feDropShadow, feDiffuseSpecular, feUnsharpMask, feCustom.

4. Security

Should this section be merged with the CSS shaders security considerations section?

4.1. Timing Attacks

Since a filter effect is applying a processing operation on input values, it is vital that no private information leaks from that operation. The same rules for cross-origin restrictions and tainting of data values apply to filtered content. There are a number of extra cases that are called out here.

A timing attack is a method of obtaining information about content that is otherwise protected, based on studying the amount of time it takes for an operation to occur. For example, rendering is an operation that takes a significant amount of time, and that time depends on the complexity of the drawing operations involved. If, for example, red pixels took longer to draw than green pixels, one might be able to obtain an indication of the proportion of red to green in the element being rendered, even without ever having access to the content of the element. Taking that to its theoretical extreme, an attack may be able to modify content in a way that exposes such variations in timing over a long enough period.

Filter effects do not add new vulnerabilities to such attacks, but they possibly allow malicious code to be written that accelerates the process. For example, a filter effect might be able to modify input pixel values in a manner that amplifies the differences in rendering. It is essential that a filter effect expose as little private information to the system as possible. One of the well-documented security issues is exposing the user's browsing history to script based on detecting the color of link elements styled with the ‘visited’ pseudo-class.

A user agent must ensure that any content passed to a filter effect has discernible information removed. This includes, but is not limited to:

4.2. Origin Restrictions

Input to a filter effect must not include anything that would violate cross-origin restrictions. If cross-origin access is required, then the requested content should be explicitly marked with CORS data.

This restriction includes:

Content that falls under this restriction should not be rendered into the input image. For example, a filter effect applied to a cross-origin ‘iframe’ element would receive a completely blank input image.
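For example, an image intended for use as filter input across origins might be requested with CORS enabled; the URL below is an illustrative HTML sketch, not a normative requirement of this specification:

<!-- Served with an Access-Control-Allow-Origin header and requested with CORS,
     so the pixels may legitimately be passed to the filter. -->
<img src="https://example.com/photo.png" crossorigin="anonymous"
     style="filter: grayscale(100%);"/>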

5. The filter property

The description of the filter property is as follows:

filter
Value:   none | <filter-function> [ <filter-function> ]*
Initial:   none
Applies to:   All elements. In SVG 1.1 it applies only to "container elements (except ‘mask’) and graphics elements"
Inherited:   no
Percentages:   N/A
Media:   visual
Animatable:   yes

If the value of the ‘filter’ property is none then there is no filter effect applied. Otherwise, the list of functions (described below) is applied in order.

5.1. How the ‘filter’ property applies to content formatted with the CSS box model (e.g. HTML)

The application of the ‘filter’ property to an element formatted with the CSS box model establishes a pseudo-stacking-context the same way that CSS opacity does, and all the element's boxes are rendered together as a group with the filter effect applied to the group as a whole.

The ‘filter’ property has no effect on the geometry of the target element's CSS boxes, even though the filter can cause painting outside of an element's border-box.

The compositing model follows the SVG compositing model: first any filter effect is applied, then any clipping, masking and opacity. These effects all apply after any other CSS effects such as ‘border’. As per SVG, the application of the ‘filter’ property has no effect on hit-testing.
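As a sketch, the rule below applies a filter to an HTML element; as described above, the box geometry is unchanged even though the blur may paint outside the border-box (the class name is an assumption):

.card {
  border: 2px solid;
  filter: blur(5px);  /* the element's boxes render as a group;
                         painting may extend past the border-box */
}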


6. Filter Functions

The value of the ‘filter’ property is a list of <filter-function>s applied in the order provided. The individual filter functions are separated by whitespace. The set of allowed filter functions is given below.

<FuncIRI>
An IRI reference to a ‘filter’ element that defines the filter effect. For example "url(commonfilters.xml#large-blur)". If the IRI references a non-existent object or the referenced object is not a ‘filter’ element, then the null filter will be applied instead.
grayscale(amount)
Converts the input image to grayscale. The value of ‘amount’ defines the proportion of the conversion. A value of 100% is completely grayscale. A value of 0% leaves the input unchanged. Values between 0% and 100% are linear multipliers on the effect. If the ‘amount’ parameter is missing, a value of 100% is used. The markup equivalent of this function is given below.
sepia(amount)
Converts the input image to sepia. The value of ‘amount’ defines the proportion of the conversion. A value of 100% is completely sepia. A value of 0% leaves the input unchanged. Values between 0% and 100% are linear multipliers on the effect. If the ‘amount’ parameter is missing, a value of 100% is used. The markup equivalent of this function is given below.
saturate(amount)
Saturates the input image. The value of ‘amount’ defines the proportion of the conversion. A value of 0% is completely un-saturated. A value of 100% leaves the input unchanged. Other values are linear multipliers on the effect. Values of amount over 100% are allowed, providing super-saturated results. If the ‘amount’ parameter is missing, a value of 100% is used. The markup equivalent of this function is given below.
hue-rotate(angle)
Applies a hue rotation on the input image. The value of ‘angle’ defines the number of degrees around the color circle the input samples will be adjusted. A value of 0deg leaves the input unchanged. If the ‘angle’ parameter is missing, a value of 0deg is used. Implementations should not normalize this value in order to allow animations beyond 360deg. The markup equivalent of this function is given below.
invert(amount)
Inverts the samples in the input image. The value of ‘amount’ defines the proportion of the conversion. A value of 100% is completely inverted. A value of 0% leaves the input unchanged. Values between 0% and 100% are linear multipliers on the effect. If the ‘amount’ parameter is missing, a value of 100% is used. The markup equivalent of this function is given below.
opacity(amount)
Applies transparency to the samples in the input image. The value of ‘amount’ defines the proportion of the conversion. A value of 0% is completely transparent. A value of 100% leaves the input unchanged. Values between 0% and 100% are linear multipliers on the effect. This is equivalent to multiplying the input image samples by amount. If the ‘amount’ parameter is missing, a value of 100% is used. The markup equivalent of this function is given below.
brightness(amount)
Applies a linear multiplier to input image, making it appear more or less bright. A value of 0% will create an image that is completely black. A value of 100% leaves the input unchanged. Other values are linear multipliers on the effect. Values of amount over 100% are allowed, providing brighter results. If the ‘amount’ parameter is missing, a value of 100% is used. The markup equivalent of this function is given below.
contrast(amount)
Adjusts the contrast of the input. A value of 0% will create an image that is completely gray. A value of 100% leaves the input unchanged. Values of amount over 100% are allowed, providing results with more contrast. If the ‘amount’ parameter is missing, a value of 100% is used. The markup equivalent of this function is given below.
blur(radius)
Applies a Gaussian blur to the input image. The value of ‘radius’ defines the value of the standard deviation of the Gaussian function. If no parameter is provided, then a value of 0 is used. The parameter is specified as a CSS length, but does not accept percentage values. The markup equivalent of this function is given below.
drop-shadow(<shadow>)
Applies a drop shadow effect to the input image. A drop shadow is effectively a blurred, offset version of the input image's alpha mask drawn in a particular color, composited below the image. The function accepts a parameter of type <shadow> (defined in CSS3 Backgrounds), with the exception that the ‘inset’ keyword is not allowed. The markup equivalent of this function is given below.
custom(<vertex-shader> [wsp <fragment-shader>] [, <vertex-mesh>] [, <params>])
where:
<vertex-shader> = <uri> | none
<fragment-shader> = <uri> | none
<vertex-mesh> = +<integer>{1,2} [wsp <box>] [wsp 'detached']
<box> = filter-box | border-box | padding-box | content-box
<params> = see the ‘feCustom’ element's params attribute.
Add description to the filter here and reference to equivalent.
It might be clearer to name the custom() function the shader() function instead and introduce an feCustomShader filter primitive instead of feCustom.
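As one illustration of the markup equivalents referenced in the definitions above, grayscale(100%) can be expressed with a ‘feColorMatrix’ primitive; this is a sketch using Rec. 709 luma coefficients, and the filter id is an illustrative assumption:

<filter id="grayscale-100">
  <!-- Each output channel becomes the luminance of the input pixel -->
  <feColorMatrix type="matrix"
                 values="0.2126 0.7152 0.0722 0 0
                         0.2126 0.7152 0.0722 0 0
                         0.2126 0.7152 0.0722 0 0
                         0      0      0      1 0"/>
</filter>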

The above list is a collection of effects that can be easily defined in terms of SVG filters. However, there are many more interesting effects that can be considered for inclusion. If accepted, there will have to be equivalent XML elements for the effect. Effects considered include:

The first function in the list takes the element (SourceGraphic) as the input image. Subsequent operations take the output from the previous function as the input image. The exception is the function that references a ‘filter’ element, which can specify an alternate input, but still uses the previous output as its SourceGraphic.
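For example, in the following declaration (the selector and values are illustrative), grayscale takes the element itself as input and blur takes the grayscale output as its input:

.chained { filter: grayscale(100%) blur(2px); }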

7. The filter element

‘filter’
Categories:
None
Content model:
Any number of the following elements, in any order:
Attributes:
DOM Interfaces:

The description of the ‘filter’ element follows:

Attribute definitions:

filterUnits = "userSpaceOnUse | objectBoundingBox"
See filter effects region.
primitiveUnits = "userSpaceOnUse | objectBoundingBox"
Specifies the coordinate system for the various length values within the filter primitives and for the attributes that define the filter primitive subregion.
If primitiveUnits="userSpaceOnUse", any length values within the filter definitions represent values in the current user coordinate system in place at the time when the ‘filter’ element is referenced (i.e., the user coordinate system for the element referencing the ‘filter’ element via the ‘filter’ property).
If primitiveUnits="objectBoundingBox", then any length values within the filter definitions represent fractions or percentages of the bounding box on the referencing element (see object bounding box units). Note that if only one number was specified in a <number-optional-number> value this number is expanded out before the filter/primitiveUnits computation takes place.
The lacuna value for filter/primitiveUnits is userSpaceOnUse.
Animatable: yes.
x = "<coordinate>"
See filter effects region.
y = "<coordinate>"
See filter effects region.
width = "<length>"
See filter effects region.
height = "<length>"
See filter effects region.
filterRes = "<number-optional-number>"
See filter effects region.
xlink:href = "<IRI>"
An IRI reference to another ‘filter’ element within the current SVG document fragment. Any attributes which are defined on the referenced ‘filter’ element which are not defined on this element are inherited by this element. If this element has no defined filter nodes, and the referenced element has defined filter nodes (possibly due to its own href attribute), then this element inherits the filter nodes defined by the referenced ‘filter’ element. Inheritance can be indirect to an arbitrary level; thus, if the referenced ‘filter’ element inherits attributes or its filter node specification due to its own href attribute, then the current element can inherit those attributes or filter node specifications.
This attribute is deprecated and should not be used in new content; it is included for backwards-compatibility reasons only.

Animatable: yes.

Properties inherit into the ‘filter’ element from its ancestors; properties do not inherit from the element referencing the ‘filter’ element.

‘filter’ elements are never rendered directly; their only usage is as something that can be referenced using the ‘filter’ property. The ‘display’ property does not apply to the ‘filter’ element; thus, ‘filter’ elements are not directly rendered even if the ‘display’ property is set to a value other than none, and ‘filter’ elements are available for referencing even when the ‘display’ property on the ‘filter’ element or any of its ancestors is set to none.
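The deprecated inheritance mechanism might be used as in this sketch (the ids are illustrative): the second filter defines no primitives of its own, so it inherits the Gaussian blur from the first while overriding the region attributes:

<filter id="base-blur">
  <feGaussianBlur in="SourceGraphic" stdDeviation="3"/>
</filter>
<!-- Inherits feGaussianBlur from #base-blur; x/y/width/height are overridden -->
<filter id="wide-blur" xlink:href="#base-blur"
        x="-30%" y="-30%" width="160%" height="160%"/>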


8. Filter effects region

A ‘filter’ element can define a region on the canvas to which a given filter effect applies and can provide a resolution for any intermediate continuous tone images used to process any raster-based filter primitives. The ‘filter’ element has the following attributes, which work together to define the filter effects region:

filterUnits

Defines the coordinate system for attributes x, y, width, height.

If filterUnits="userSpaceOnUse", x, y, width, height represent values in the current user coordinate system in place at the time when the ‘filter’ element is referenced (i.e., the user coordinate system for the element referencing the ‘filter’ element via the ‘filter’ property).

If filterUnits="objectBoundingBox", then x, y, width, height represent fractions or percentages of the bounding box on the referencing element (see object bounding box units).

The lacuna value for filterUnits is objectBoundingBox.

Animatable: yes.

x, y, width, height

These attributes define a rectangular region on the canvas to which this filter applies.

The amount of memory and processing time required to apply the filter are related to the size of this rectangle and the filter/filterRes attribute of the filter.

The coordinate system for these attributes depends on the value for attribute filterUnits.

The bounds of this rectangle act as a hard clipping region for each filter primitive included with a given ‘filter’ element; thus, if the effect of a given filter primitive would extend beyond the bounds of the rectangle (this sometimes happens when using a feGaussianBlur filter primitive with a very large feGaussianBlur/stdDeviation), parts of the effect will get clipped.

The lacuna value for x and y is -10%.

The lacuna value for width and height is 120%.

Negative or zero values for width or height disable rendering of the element which referenced the filter.

Animatable: yes.

filter/filterRes

Defines the width and height of the intermediate images in pixels. If not provided, then the user agent will use reasonable values to produce a high-quality result on the output device.

Care should be taken when assigning a non-default value to this attribute. Too small of a value may result in unwanted pixelation in the result. Too large of a value may result in slow processing and large memory usage.

Non-integer values are truncated, i.e., rounded to the closest integer value towards zero.

Negative or zero values disable rendering of the element which referenced the filter.

Animatable: yes.

Note that both of the two possible values for filterUnits (i.e., objectBoundingBox and userSpaceOnUse) result in a filter region whose coordinate system has its X-axis and Y-axis each parallel to the X-axis and Y-axis, respectively, of the user coordinate system for the element to which the filter will be applied.

Sometimes implementers can achieve faster performance when the filter region can be mapped directly to device pixels; thus, for best performance on display devices, it is suggested that authors define their region such that the user agent can align the filter region pixel-for-pixel with the background. In particular, for best filter effects performance, avoid rotating or skewing the user coordinate system. Explicit values for attribute filter/filterRes can either help or harm performance. If filter/filterRes is smaller than the automatic (i.e., default) filter resolution, then the filter effect might have faster performance (usually at the expense of quality). If filter/filterRes is larger than the automatic (i.e., default) filter resolution, then filter effects performance will usually be slower.

It is often necessary to provide padding space because the filter effect might impact bits slightly outside the tight-fitting bounding box on a given object. For these purposes, it is possible to provide negative percentage values for x, y and percentage values greater than 100% for width, height. This, for example, is why the defaults for the filter effects region are x="-10%" y="-10%" width="120%" height="120%".
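The attributes above can be made explicit as in the following sketch, which simply spells out the lacuna values for the region (the filter id and the filterRes value are illustrative choices):

<filter id="padded-region"
        filterUnits="objectBoundingBox"
        x="-10%" y="-10%" width="120%" height="120%"
        filterRes="200 200">
  <!-- The blur may paint into the 10% padding on each side without clipping -->
  <feGaussianBlur in="SourceGraphic" stdDeviation="8"/>
</filter>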

9. Accessing the background image

Two possible pseudo input images for filter effects are BackgroundImage and BackgroundAlpha, which each represent an image snapshot of the canvas under the filter region at the time that the filter element is invoked. BackgroundImage represents both the color values and alpha channel of the canvas (i.e., RGBA pixel values), whereas BackgroundAlpha represents only the alpha channel.

Implementations will often need to maintain supplemental background image buffers in order to support the BackgroundImage and BackgroundAlpha pseudo input images. Sometimes, the background image buffers will contain an in-memory copy of the accumulated painting operations on the current canvas.

Because in-memory image buffers can take up significant system resources, content must explicitly indicate to the user agent that the document needs access to the background image before BackgroundImage and BackgroundAlpha pseudo input images can be used.

A background image is what has been rendered before the current element. The host language is responsible for defining what “rendered before” means in this context. For SVG, which uses the painter's algorithm, “rendered before” means all of the elements that precede the element to which the filter is applied in a pre-order traversal of the document.

The property which enables access to the background image is enable-background:

enable-background
Value:   accumulate | new | inherit
Initial:   accumulate
Applies to:   Typically elements that can contain renderable elements. The host language is responsible for defining the applicable set of elements. For SVG: container elements
Inherited:   no
Percentages:   N/A
Media:   visual
Animatable:   no

enable-background is only applicable to container elements and specifies how the SVG user agent manages the accumulation of the background image.

A value of new indicates two things:

A meaning of enable-background: accumulate (the initial/default value) depends on context:

If a filter effect specifies either the BackgroundImage or the BackgroundAlpha pseudo input images and no ancestor container element has a property value of enable-background:new, then the background image request is technically in error. Processing will proceed without interruption (i.e., no error message) and a transparent black image shall be provided in response to the request.
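A sketch of the opt-in pattern (ids illustrative): the container enables background accumulation with enable-background:new, and a filter applied inside it may then reference BackgroundImage:

<g enable-background="new">
  <rect width="100" height="100" fill="red"/>
  <filter id="bg-blur">
    <!-- Reads the accumulated background (here, the red rectangle) -->
    <feGaussianBlur in="BackgroundImage" stdDeviation="4"/>
  </filter>
  <g filter="url(#bg-blur)"/>
</g>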

9.1. Accessing the background image in SVG

This section only applies to the SVG definition of enable-background.

Assume you have an element E in the document and that E has a series of ancestors A1 (its immediate parent), A2, etc. (Note: A0 is E.) Each ancestor Ai will have a corresponding temporary background image offscreen buffer BUFi. The contents of the background image available to a filter referenced by E is defined as follows:

The example above contains five parts, described as follows:

  1. The first set is the reference graphic. The reference graphic consists of a red rectangle followed by a 50% transparent g element. Inside the g is a green circle that partially overlaps the rectangle and a blue triangle that partially overlaps the circle. The three objects are then outlined by a rectangle stroked with a thin blue line. No filters are applied to the reference graphic.
  2. The second set enables background image processing and adds an empty g element which invokes the ShiftBGAndBlur filter. This filter takes the current accumulated background image (i.e., the entire reference graphic) as input, shifts its offscreen down, blurs it, and then writes the result to the canvas. Note that the offscreen for the filter is initialized to transparent black, which allows the already rendered rectangle, circle and triangle to show through after the filter renders its own result to the canvas.
  3. The third set enables background image processing and instead invokes the ShiftBGAndBlur filter on the inner g element. The accumulated background at the time the filter is applied contains only the red rectangle. Because the children of the inner g (i.e., the circle and triangle) are not part of the inner g element's background and because ShiftBGAndBlur ignores SourceGraphic, the children of the inner g do not appear in the result.
  4. The fourth set enables background image processing and invokes the ShiftBGAndBlur on the polygon element that draws the triangle. The accumulated background at the time the filter is applied contains the red rectangle plus the green circle ignoring the effect of the opacity property on the inner g element. (Note that the blurred green circle at the bottom does not let the red rectangle show through on its left side. This is due to ignoring the effect of the opacity property.) Because the triangle itself is not part of the accumulated background and because ShiftBGAndBlur ignores SourceGraphic, the triangle does not appear in the result.
  5. The fifth set is the same as the fourth except that filter ShiftBGAndBlur_WithSourceGraphic is invoked instead of ShiftBGAndBlur. ShiftBGAndBlur_WithSourceGraphic performs the same effect as ShiftBGAndBlur, but then renders the SourceGraphic on top of the shifted, blurred background image. In this case, SourceGraphic is the blue triangle; thus, the result is the same as in the fourth case except that the blue triangle now appears.

10. Filter primitives overview

10.1. Overview

This section describes the various filter primitives that can be assembled to achieve a particular filter effect.

Unless otherwise stated, all image filters operate on premultiplied RGBA samples. Filters which work more naturally on non-premultiplied data (feColorMatrix and feComponentTransfer) will temporarily undo and redo premultiplication as specified. All raster effect filtering operations take 1 to N input RGBA images, additional attributes as parameters, and produce a single output RGBA image.

The RGBA result from each filter primitive will be clamped into the allowable ranges for colors and opacity values. Thus, for example, the result from a given filter primitive will have any negative color values or opacity values adjusted up to color/opacity of zero.

The color space in which a particular filter primitive performs its operations is determined by the value of the property color-interpolation-filters on the given filter primitive. A different property, color-interpolation, determines the color space for other color operations. Because these two properties have different initial values (color-interpolation-filters has an initial value of linearRGB whereas color-interpolation has an initial value of sRGB), in some cases to achieve certain results (e.g., when coordinating gradient interpolation with a filtering operation) it will be necessary to explicitly set color-interpolation to linearRGB or color-interpolation-filters to sRGB on particular elements. Note that the examples below do not explicitly set either color-interpolation or color-interpolation-filters, so the initial values for these properties apply to the examples.

Sometimes filter primitives result in undefined pixels. For example, filter primitive feOffset can shift an image down and to the right, leaving undefined pixels at the top and left. In these cases, the undefined pixels are set to transparent black.
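The premultiplication round-trip described above can be sketched as follows. This is an illustrative, non-normative Python sketch (the function names are not from the specification):

```python
def premultiply(r, g, b, a):
    """Convert a non-premultiplied RGBA sample (channels in 0..1)
    to premultiplied form: each color channel is scaled by alpha."""
    return (r * a, g * a, b * a, a)

def unpremultiply(r, g, b, a):
    """Undo premultiplication. A fully transparent pixel has no
    defined color, so it is left as transparent black."""
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    return (r / a, g / a, b / a, a)
```

Filters such as feColorMatrix and feComponentTransfer conceptually call the second function before operating and the first afterwards.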

10.2. Common attributes

The following attributes are available for most of the filter primitives:

Attribute definitions:

x = "<coordinate>"

The minimum x coordinate for the subregion which restricts calculation and rendering of the given filter primitive. See filter primitive subregion.

The lacuna value for x is 0%.

Animatable: yes.

y = "<coordinate>"

The minimum y coordinate for the subregion which restricts calculation and rendering of the given filter primitive. See filter primitive subregion.

The lacuna value for y is 0%.

Animatable: yes.

width = "<length>"

The width of the subregion which restricts calculation and rendering of the given filter primitive. See filter primitive subregion.

A negative or zero value disables the effect of the given filter primitive (i.e., the result is a transparent black image).

The lacuna value for width is 100%.

Animatable: yes.

height = "<length>"

The height of the subregion which restricts calculation and rendering of the given filter primitive. See filter primitive subregion.

A negative or zero value disables the effect of the given filter primitive (i.e., the result is a transparent black image).

The lacuna value for height is 100%.

Animatable: yes.

result = "<filter-primitive-reference>"

Assigned name for this filter primitive. If supplied, then graphics that result from processing this filter primitive can be referenced by an in attribute on a subsequent filter primitive within the same filter element. If no value is provided, the output will only be available for re-use as the implicit input into the next filter primitive if that filter primitive provides no value for its in attribute.

Note that a <filter-primitive-reference> is not an XML ID; instead, a <filter-primitive-reference> is only meaningful within a given filter element and thus has only local scope. It is legal for the same <filter-primitive-reference> to appear multiple times within the same filter element. When referenced, the <filter-primitive-reference> will use the closest preceding filter primitive with the given result.

Animatable: yes.

in = "SourceGraphic | SourceAlpha | BackgroundImage | BackgroundAlpha | FillPaint | StrokePaint | <filter-primitive-reference>"

Identifies input for the given filter primitive. The value can be either one of six keywords or a string which matches a previous result attribute value within the same filter element. If no value is provided and this is the first filter primitive, then this filter primitive will use SourceGraphic as its input. If no value is provided and this is a subsequent filter primitive, then this filter primitive will use the result from the previous filter primitive as its input.

If the value for result appears multiple times within a given filter element, then a reference to that result will use the closest preceding filter primitive with the given value for the result attribute. Forward references to results are not allowed, and will be treated as if no result was specified.

Definitions for the six keywords:

SourceGraphic

This keyword represents the graphics elements that were the original input into the filter element. For raster effects filter primitives, the graphics elements will be rasterized into an initially clear RGBA raster in image space. Pixels left untouched by the original graphic will be left clear. The image is specified to be rendered in linear RGBA pixels. The alpha channel of this image captures any anti-aliasing specified by SVG. (Since the raster is linear, the alpha channel of this image will represent the exact percent coverage of each pixel.)

SourceAlpha

This keyword represents the graphics elements that were the original input into the filter element. SourceAlpha has all of the same rules as SourceGraphic except that only the alpha channel is used. The input image is an RGBA image consisting of implicitly black color values for the RGB channels, but whose alpha channel is the same as SourceGraphic.

If this option is used, then some implementations might need to rasterize the graphics elements in order to extract the alpha channel.

BackgroundImage

This keyword represents an image snapshot of the canvas under the filter region at the time that the filter element was invoked. See accessing the background image.

BackgroundAlpha

Same as BackgroundImage except only the alpha channel is used. See SourceAlpha and accessing the background image.

FillPaint

This keyword represents the target element rendered filled.

For SVG, this keyword represents the value of the fill property on the target element for the filter effect.

For non-SVG cases FillPaint generates a transparent black image. ISSUE: Consider whether this should be e.g. the CSS bounding box filled with the current color, or if it makes sense to use the ‘fill’ property for this case too.

Note that text is generally painted filled, not stroked.

The FillPaint image has conceptually infinite extent. Frequently this image is opaque everywhere, but it might not be if the "paint" itself has alpha, as in the case of a gradient or pattern which itself includes transparent or semi-transparent parts.

StrokePaint

This keyword represents the target element rendered stroked.

For SVG, this keyword represents the value of the stroke property on the target element for the filter effect.

For non-SVG cases StrokePaint generates a transparent black image. ISSUE: Consider whether this should be e.g. the CSS bounding box filled with one of the border colors, or if it makes sense to use the ‘stroke’ property for this case too.

Note that text is generally painted filled, not stroked.

The StrokePaint image has conceptually infinite extent. Frequently this image is opaque everywhere, but it might not be if the "paint" itself has alpha, as in the case of a gradient or pattern which itself includes transparent or semi-transparent parts.

Animatable: yes.

10.3. Filter primitive subregion

ISSUE: Merge the CSS shaders processing model with this section or the filter regions section.

All filter primitives have attributes x, y, width and height which together identify a subregion which restricts calculation and rendering of the given filter primitive. The x, y, width and height attributes are defined according to the same rules as other filter primitive coordinate and length attributes and thus represent values in the coordinate system established by attribute primitiveUnits on the filter element.

x, y, width and height default to the union (i.e., tightest fitting bounding box) of the subregions defined for all referenced nodes. If there are no referenced nodes (e.g., for feImage or feTurbulence), or one or more of the referenced nodes is a standard input (one of SourceGraphic, SourceAlpha, BackgroundImage, BackgroundAlpha, FillPaint or StrokePaint), or for feTile (which is special because its principal function is to replicate the referenced node in X and Y and thereby produce a usually larger result), the default subregion is 0%, 0%, 100%, 100%, where as a special-case the percentages are relative to the dimensions of the filter region, thus making the default filter primitive subregion equal to the filter region.

If the filter primitive subregion has a negative or zero width or height, the effect of the filter primitive is disabled.

The filter primitive subregion acts as a hard clipping rectangle on both the filter primitive's input image(s) and the filter primitive result.

ISSUE: Consider making it possible to select between clip-input, clip-output, clip-both or none.

All intermediate offscreens are defined to not exceed the intersection of the filter primitive subregion with the filter region. The filter region and any of the filter primitive subregions are to be set up such that all offscreens are made big enough to accommodate any pixels which even partly intersect with either the filter region or the filter primitive subregions.

feTile references a previous filter primitive and then stitches the tiles together based on the filter primitive subregion of the referenced filter primitive in order to fill its own filter primitive subregion.

In the example above there are three rects that each have a cross and a circle in them. The circle element in each one has a different filter applied, but with the same filter primitive subregion. The filter output should be limited to the filter primitive subregion, so you should never see the circles themselves, just the rects that make up the filter primitive subregion.

  • The upper left rect shows an feFlood with flood-opacity="75%" so the cross should be visible through the green rect in the middle.
  • The lower left rect shows an feMerge that merges SourceGraphic with FillPaint. Since the circle has fill-opacity="0.5" it will also be transparent so that the cross is visible through the green rect in the middle.
  • The upper right rect shows an feBlend that has mode="multiply". Since the circle in this case isn't transparent the result is totally opaque. The rect should be dark green and the cross should not be visible through it.

11. Light source elements and properties

11.1. Introduction

The following sections define the elements that define a light source, feDistantLight, fePointLight and feSpotLight, and property lighting-color, which defines the color of the light.

11.2. Light source feDistantLight

Attribute definitions:

azimuth = "<number>"
Direction angle for the light source on the XY plane (clockwise), in degrees from the x axis.
The lacuna value for azimuth is 0.
Animatable: yes.
elevation = "<number>"
Direction angle for the light source from the XY plane towards the Z-axis, in degrees. Note that the positive Z-axis points towards the viewer.
The lacuna value for elevation is 0.
Animatable: yes.

The following diagram illustrates the angles which azimuth and elevation represent in an XYZ coordinate system.

Angles which azimuth and elevation represent
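Following the lighting equations defined for these primitives in SVG 1.1, azimuth and elevation determine the unit light vector. A non-normative Python sketch (the function name is illustrative):

```python
import math

def distant_light_direction(azimuth_deg, elevation_deg):
    """Unit light vector (Lx, Ly, Lz) for feDistantLight, per the
    SVG 1.1 lighting equations; both angles are in degrees."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (math.cos(az) * math.cos(el),
            math.sin(az) * math.cos(el),
            math.sin(el))
```

For example, azimuth 0 and elevation 0 yield light travelling along the positive X axis, while elevation 90 points the light straight out of the positive Z axis towards the viewer.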

11.3. Light source fePointLight

Attribute definitions:

x = "<number>"
X location for the light source in the coordinate system established by attribute primitiveUnits on the filter element.
The lacuna value for x is 0.
Animatable: yes.
y = "<number>"
Y location for the light source in the coordinate system established by attribute primitiveUnits on the filter element.
The lacuna value for y is 0.
Animatable: yes.
z = "<number>"
Z location for the light source in the coordinate system established by attribute primitiveUnits on the filter element, assuming that, in the initial coordinate system, the positive Z-axis comes out towards the person viewing the content and assuming that one unit along the Z-axis equals one unit in X and Y.
The lacuna value for z is 0.
Animatable: yes.

11.4. Light source feSpotLight

Attribute definitions:

x = "<number>"
X location for the light source in the coordinate system established by attribute primitiveUnits on the filter element.
The lacuna value for x is 0.
Animatable: yes.
y = "<number>"
Y location for the light source in the coordinate system established by attribute primitiveUnits on the filter element.
The lacuna value for y is 0.
Animatable: yes.
z = "<number>"
Z location for the light source in the coordinate system established by attribute primitiveUnits on the filter element, assuming that, in the initial coordinate system, the positive Z-axis comes out towards the person viewing the content and assuming that one unit along the Z-axis equals one unit in X and Y.
The lacuna value for z is 0.
Animatable: yes.
pointsAtX = "<number>"
X location in the coordinate system established by attribute primitiveUnits on the filter element of the point at which the light source is pointing.
The lacuna value for pointsAtX is 0.
Animatable: yes.
pointsAtY = "<number>"
Y location in the coordinate system established by attribute primitiveUnits on the filter element of the point at which the light source is pointing.
The lacuna value for pointsAtY is 0.
Animatable: yes.
pointsAtZ = "<number>"
Z location in the coordinate system established by attribute primitiveUnits on the filter element of the point at which the light source is pointing, assuming that, in the initial coordinate system, the positive Z-axis comes out towards the person viewing the content and assuming that one unit along the Z-axis equals one unit in X and Y.
The lacuna value for pointsAtZ is 0.
Animatable: yes.
specularExponent = "<number>"
Exponent value controlling the focus for the light source.
The lacuna value for specularExponent is 1.
Animatable: yes.
limitingConeAngle = "<number>"
A limiting cone which restricts the region where the light is projected. No light is projected outside the cone. limitingConeAngle represents the angle in degrees between the spot light axis (i.e., the axis between the light source and the point at which it is pointing) and the spot light cone. User agents should apply a smoothing technique such as anti-aliasing at the boundary of the cone.
If no value is specified, then no limiting cone will be applied.
Animatable: yes.

11.5. The lighting-color property

The lighting-color property defines the color of the light source for filter primitives feDiffuseLighting and feSpecularLighting.

lighting-color
Value:   currentColor |
<color> [<icccolor>] |
inherit
Initial:   white
Applies to:   feDiffuseLighting and feSpecularLighting elements
Inherited:   no
Percentages:   N/A
Media:   visual
Animatable:   yes

12. Filter primitive feBlend

This filter composites two objects together using commonly used imaging software blending modes. It performs a pixel-wise combination of two input images.

Attribute definitions:

mode = "normal | multiply | screen | darken | lighten"
One of the image blending modes (see table below). The lacuna value for mode is normal.
Animatable: yes.
in2 = "(see in attribute)"
The second input image to the blending operation. This attribute can take on the same values as the in attribute.
Animatable: yes.

For all feBlend modes, the result opacity is computed as follows:

qr = 1 - (1-qa)*(1-qb)

For the compositing formulas below, the following definitions apply:

image A = in
image B = in2
cr = Result color (RGB) - premultiplied 
qa = Opacity value at a given pixel for image A 
qb = Opacity value at a given pixel for image B 
ca = Color (RGB) at a given pixel for image A - premultiplied 
cb = Color (RGB) at a given pixel for image B - premultiplied 

The following table provides the list of available image blending modes:

ED: make table look nicer
Image Blending Mode   Formula for computing result color
normal                cr = (1 - qa) * cb + ca
multiply              cr = (1 - qa) * cb + (1 - qb) * ca + ca * cb
screen                cr = cb + ca - ca * cb
darken                cr = Min((1 - qa) * cb + ca, (1 - qb) * ca + cb)
lighten               cr = Max((1 - qa) * cb + ca, (1 - qb) * ca + cb)

The normal blend mode is equivalent to operator="over" on the feComposite filter primitive, matches the blending method used by feMerge and matches the simple alpha compositing technique used in SVG for all compositing outside of filter effects.
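The blending formulas above operate per channel on premultiplied values. A non-normative Python sketch, using the definitions from the table (qa, qb are opacities; ca, cb are premultiplied color channels; the function names are illustrative):

```python
def blend_channel(mode, ca, cb, qa, qb):
    """One premultiplied result channel cr for feBlend, following
    the formulas in the blending-mode table."""
    if mode == "normal":
        return (1 - qa) * cb + ca
    if mode == "multiply":
        return (1 - qa) * cb + (1 - qb) * ca + ca * cb
    if mode == "screen":
        return cb + ca - ca * cb
    if mode == "darken":
        return min((1 - qa) * cb + ca, (1 - qb) * ca + cb)
    if mode == "lighten":
        return max((1 - qa) * cb + ca, (1 - qb) * ca + cb)
    raise ValueError(mode)

def blend_opacity(qa, qb):
    """Result opacity, identical for all modes: qr = 1 - (1-qa)*(1-qb)."""
    return 1 - (1 - qa) * (1 - qb)
```

Note that with qa = 1 the normal mode reduces to cr = ca, i.e., a fully opaque image A completely hides image B, as expected for the over operator.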

13. Filter primitive feColorMatrix

This filter applies a matrix transformation:

| R' |     | a00 a01 a02 a03 a04 |   | R |
| G' |     | a10 a11 a12 a13 a14 |   | G |
| B' |  =  | a20 a21 a22 a23 a24 | * | B |
| A' |     | a30 a31 a32 a33 a34 |   | A |
| 1  |     |  0   0   0   0   1  |   | 1 |

on the RGBA color and alpha values of every pixel on the input graphics to produce a result with a new set of RGBA color and alpha values.

The calculations are performed on non-premultiplied color values. If the input graphics consists of premultiplied color values, those values are automatically converted into non-premultiplied color values for this operation.

These matrices often perform an identity mapping in the alpha channel. If that is the case, an implementation can avoid the costly undoing and redoing of the premultiplication for all pixels with A = 1.

Attribute definitions:

type = "matrix | saturate | hueRotate | luminanceToAlpha"
Indicates the type of matrix operation. The keyword matrix indicates that a full 5x4 matrix of values will be provided. The other keywords represent convenience shortcuts to allow commonly used color operations to be performed without specifying a complete matrix. The lacuna value for type is matrix.
Animatable: yes.
values = "list of <number>s"
The contents of values depends on the value of attribute type:
  • For type="matrix", values is a list of 20 matrix values (a00 a01 a02 a03 a04 a10 a11 ... a34), separated by whitespace and/or a comma. For example, the identity matrix could be expressed as:
    type="matrix" 
    values="1 0 0 0 0  0 1 0 0 0  0 0 1 0 0  0 0 0 1 0"
  • For type="saturate", values is a single real number value. A saturate operation is equivalent to the following matrix operation:

    | R' |     | (0.2126 + 0.7873s)  (0.7152 - 0.7152s)  (0.0722 - 0.0722s) 0  0 |   | R |
    | G' |     | (0.2126 - 0.2126s)  (0.7152 + 0.2848s)  (0.0722 - 0.0722s) 0  0 |   | G |
    | B' |  =  | (0.2126 - 0.2126s)  (0.7152 - 0.7152s)  (0.0722 + 0.9278s) 0  0 | * | B |
    | A' |     |           0                   0                   0        1  0 |   | A |
    | 1  |     |           0                   0                   0        0  1 |   | 1 |

    A value of 0 produces a fully desaturated (grayscale) filter result, while a value of 1 passes the filter input image through unchanged. Values outside the 0..1 range under- or oversaturate the filter input image, respectively.
  • For type="hueRotate", values is a single real number value (degrees). A hueRotate operation is equivalent to the following matrix operation:

    | R' |     | a00  a01  a02  0  0 |   | R |
    | G' |     | a10  a11  a12  0  0 |   | G |
    | B' |  =  | a20  a21  a22  0  0 | * | B |
    | A' |     | 0    0    0    1  0 |   | A |
    | 1  |     | 0    0    0    0  1 |   | 1 |

    where the terms a00, a01, etc. are calculated as follows:

    | a00 a01 a02 |     [0.2126 0.7152 0.0722]
    | a10 a11 a12 | =   [0.2126 0.7152 0.0722] +
    | a20 a21 a22 |     [0.2126 0.7152 0.0722]
    
                                                [ 0.7873 -0.7152 -0.0722]
                        cos(hueRotate value) *  [-0.2126  0.2848 -0.0722] +
                                                [-0.2126 -0.7152  0.9278]
    
                                                [-0.2126 -0.7152  0.9278]
                        sin(hueRotate value) *  [ 0.143   0.140  -0.283 ]
                                                [-0.7873  0.7152  0.0722]

    Thus, the upper left term of the hue matrix turns out to be:

    0.2126 + cos(hueRotate value) * 0.7873 - sin(hueRotate value) * 0.2126

  • For type="luminanceToAlpha", values is not applicable. A luminanceToAlpha operation is equivalent to the following matrix operation:

       | R' |     |      0        0        0  0  0 |   | R |
       | G' |     |      0        0        0  0  0 |   | G |
       | B' |  =  |      0        0        0  0  0 | * | B |
       | A' |     | 0.2126   0.7152   0.0722  0  0 |   | A |
       | 1  |     |      0        0        0  0  1 |   | 1 |

If the attribute is not specified, then the default behavior depends on the value of attribute type. If type="matrix", then this attribute defaults to the identity matrix. If type="saturate", then this attribute defaults to the value 1, which results in the identity matrix. If type="hueRotate", then this attribute defaults to the value 0, which results in the identity matrix.
Animatable: yes.
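Applied to a single pixel, the 5x4 matrix operates on non-premultiplied values as in this illustrative, non-normative Python sketch (results are clamped to the allowable [0,1] range as required of all primitives):

```python
def apply_color_matrix(values, rgba):
    """Apply a feColorMatrix type="matrix" 5x4 matrix (20 numbers in
    row-major order: a00..a04, a10..a14, ...) to one non-premultiplied
    RGBA sample. The implicit fifth input component is 1."""
    r, g, b, a = rgba
    vec = (r, g, b, a, 1.0)
    out = []
    for row in range(4):
        s = sum(values[row * 5 + col] * vec[col] for col in range(5))
        out.append(min(1.0, max(0.0, s)))  # clamp to the allowable range
    return tuple(out)

# The identity matrix leaves the pixel unchanged:
IDENTITY = [1, 0, 0, 0, 0,  0, 1, 0, 0, 0,  0, 0, 1, 0, 0,  0, 0, 0, 1, 0]

# luminanceToAlpha written out as a full matrix: RGB become zero and
# the alpha channel becomes the luminance of the input color.
LUM_TO_ALPHA = [0, 0, 0, 0, 0,  0, 0, 0, 0, 0,  0, 0, 0, 0, 0,
                0.2126, 0.7152, 0.0722, 0, 0]
```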

14. Filter primitive feComponentTransfer

This filter primitive performs component-wise remapping of data as follows:

R' = feFuncR( R )
G' = feFuncG( G )
B' = feFuncB( B )
A' = feFuncA( A )

for every pixel. It allows operations like brightness adjustment, contrast adjustment, color balance or thresholding.

The calculations are performed on non-premultiplied color values. If the input graphics consists of premultiplied color values, those values are automatically converted into non-premultiplied color values for this operation. (Note that the undoing and redoing of the premultiplication can be avoided if feFuncA is the identity transform and all alpha values on the source graphic are set to 1.)

The child elements of a feComponentTransfer element specify the transfer functions for the four channels:

  • feFuncR — transfer function for the red component of the input graphic
  • feFuncG — transfer function for the green component of the input graphic
  • feFuncB — transfer function for the blue component of the input graphic
  • feFuncA — transfer function for the alpha component of the input graphic

The following rules apply to the processing of the feComponentTransfer element:

The attributes below are the transfer function element attributes, which apply to the transfer function elements.

Attribute definitions:

type = "identity | table | discrete | linear | gamma"

Indicates the type of component transfer function. The type of function determines the applicability of the other attributes.

In the following, C is the initial component (e.g., feFuncR), C' is the remapped component; both in the closed interval [0,1].

  • For identity:
    C' = C
  • For table, the function is defined by linear interpolation between values given in the attribute tableValues. The table has n+1 values (i.e., v0 to vn) specifying the start and end values for n evenly sized interpolation regions. Interpolations use the following formula:

    For a value C < 1 find k such that:

    k/n <= C < (k+1)/n

    The result C' is given by:

    C' = vk + (C - k/n)*n * (vk+1 - vk)

    If C = 1 then:

    C' = vn.

  • For discrete, the function is defined by the step function given in the attribute tableValues, which provides a list of n values (i.e., v0 to vn-1) in order to identify a step function consisting of n steps. The step function is defined by the following formula:

    For a value C < 1 find k such that:

    k/n <= C < (k+1)/n

    The result C' is given by:

    C' = vk

    If C = 1 then:

    C' = vn-1.

  • For linear, the function is defined by the following linear equation:

    C' = slope * C + intercept

  • For gamma, the function is defined by the following exponential function:

    C' = amplitude * pow(C, exponent) + offset

Animatable: yes.
tableValues = "(list of <number>s)"
When type="table", the list of <number>s v0, v1, ... vn, separated by white space and/or a comma, which define the lookup table. An empty list results in an identity transfer function. If the attribute is not specified, then the effect is as if an empty list were provided.
Animatable: yes.
slope = "<number>"
When type="linear", the slope of the linear function.
The lacuna value for slope is 1.
Animatable: yes.
intercept = "<number>"
When type="linear", the intercept of the linear function.
The lacuna value for intercept is 0.
Animatable: yes.
amplitude = "<number>"
When type="gamma", the amplitude of the gamma function.
The lacuna value for amplitude is 1.
Animatable: yes.
exponent = "<number>"
When type="gamma", the exponent of the gamma function.
The lacuna value for exponent is 1.
Animatable: yes.
offset = "<number>"
When type="gamma", the offset of the gamma function.
The lacuna value for offset is 0.
Animatable: yes.
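The transfer function types above can be sketched in Python as follows. This is non-normative; the function name and keyword parameters are illustrative, with the table/discrete interpolation rules and the linear and gamma equations taken directly from the definitions above:

```python
def transfer(kind, C, table_values=(), slope=1.0, intercept=0.0,
             amplitude=1.0, exponent=1.0, offset=0.0):
    """Remap one component C in [0,1] per a transfer function element."""
    v = table_values
    if kind == "identity" or (kind in ("table", "discrete") and not v):
        return C  # an empty table is also an identity transfer
    if kind == "table":
        n = len(v) - 1          # n interpolation regions from n+1 values
        if n == 0 or C >= 1.0:
            return v[n]
        k = int(C * n)          # k such that k/n <= C < (k+1)/n
        return v[k] + (C - k / n) * n * (v[k + 1] - v[k])
    if kind == "discrete":
        n = len(v)              # n steps from n values
        if C >= 1.0:
            return v[n - 1]
        return v[int(C * n)]    # step function: C' = v_k
    if kind == "linear":
        return slope * C + intercept
    if kind == "gamma":
        return amplitude * (C ** exponent) + offset
    raise ValueError(kind)
```

For example, a two-entry table (0, 1) is an identity ramp, while type="gamma" with exponent 0.5 brightens mid-tones.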

15. Filter primitive feComposite

This filter performs the combination of the two input images pixel-wise in image space using one of the Porter-Duff [PORTERDUFF] compositing operations: over, in, atop, out, xor [SVG-COMPOSITING]. Additionally, a component-wise arithmetic operation (with the result clamped between [0..1]) can be applied.

The arithmetic operation is useful for combining the output from the feDiffuseLighting and feSpecularLighting filters with texture data. It is also useful for implementing dissolve. If the arithmetic operation is chosen, each result pixel is computed using the following formula:

result = k1*i1*i2 + k2*i1 + k3*i2 + k4
where:
  • i1 and i2 indicate the corresponding pixel channel values of the input image, which map to in and in2 respectively
  • k1, k2, k3 and k4 indicate the values of the attributes with the same name
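A non-normative Python sketch of the arithmetic operator, applied per channel with the result clamped to [0,1] (the function name is illustrative):

```python
def composite_arithmetic(i1, i2, k1, k2, k3, k4):
    """Per-channel arithmetic compositing for feComposite:
    result = k1*i1*i2 + k2*i1 + k3*i2 + k4, clamped to [0,1]."""
    result = k1 * i1 * i2 + k2 * i1 + k3 * i2 + k4
    return min(1.0, max(0.0, result))
```

For instance, k2 = k3 = 0.5 (with k1 = k4 = 0) produces an even dissolve between the two inputs.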

For this filter primitive, the extent of the resulting image might grow as described in the section that describes the filter primitive subregion.

Attribute definitions:

operator = "over | in | out | atop | xor | arithmetic"
The compositing operation that is to be performed. All of the operator types except arithmetic match the corresponding operation as described in [PORTERDUFF]. The arithmetic operator is described above. The lacuna value for operator is over.
Animatable: yes.
k1 = "<number>"
Only applicable if operator="arithmetic".
The lacuna value for k1 is 0.
Animatable: yes.
k2 = "<number>"
Only applicable if operator="arithmetic".
The lacuna value for k2 is 0.
Animatable: yes.
k3 = "<number>"
Only applicable if operator="arithmetic".
The lacuna value for k3 is 0.
Animatable: yes.
k4 = "<number>"
Only applicable if operator="arithmetic".
The lacuna value for k4 is 0.
Animatable: yes.
in2 = "(see in attribute)"
The second input image to the compositing operation. This attribute can take on the same values as the in attribute.
Animatable: yes.

16. Filter primitive feConvolveMatrix

feConvolveMatrix applies a matrix convolution filter effect. A convolution combines pixels in the input image with neighboring pixels to produce a resulting image. A wide variety of imaging operations can be achieved through convolutions, including blurring, edge detection, sharpening, embossing and beveling.

A matrix convolution is based on an n-by-m matrix (the convolution kernel) which describes how a given pixel value in the input image is combined with its neighboring pixel values to produce a resulting pixel value. Each result pixel is determined by applying the kernel matrix to the corresponding source pixel and its neighboring pixels. The basic convolution formula which is applied to each color value for a given pixel is:

COLOR(X,Y) = ( 
               SUM I=0 to [orderY-1] { 
                 SUM J=0 to [orderX-1] { 
                   SOURCE(X-targetX+J, Y-targetY+I) * kernelMatrix(orderX-J-1, orderY-I-1) 
                 } 
               } 
             ) / divisor + bias * ALPHA(X,Y)

ED: Consider making this into mathml

where "orderX" and "orderY" represent the X and Y values for the order attribute, "targetX" represents the value of the targetX attribute, "targetY" represents the value of the targetY attribute, "kernelMatrix" represents the value of the kernelMatrix attribute, "divisor" represents the value of the divisor attribute, and "bias" represents the value of the bias attribute.

Note in the above formulas that the values in the kernel matrix are applied such that the kernel matrix is rotated 180 degrees relative to the source and destination images in order to match convolution theory as described in many computer graphics textbooks.

To illustrate, suppose you have an input image which is 5 pixels by 5 pixels, whose color values for one of the color channels are as follows:

    0  20  40 235 235
  100 120 140 235 235
  200 220 240 235 235
  225 225 255 255 255
  225 225 255 255 255
ED: Consider making this into mathml

and you define a 3-by-3 convolution kernel as follows:

  1 2 3
  4 5 6
  7 8 9
ED: Consider making this into mathml

Let's focus on the color value at the second row and second column of the image (source pixel value is 120). Assuming the simplest case (where the input image's pixel grid aligns perfectly with the kernel's pixel grid) and assuming default values for attributes divisor, targetX and targetY, then the resulting color value will be:

(9*  0 + 8* 20 + 7* 40 +
6*100 + 5*120 + 4*140 +
3*200 + 2*220 + 1*240) / (9+8+7+6+5+4+3+2+1)
ED: Consider making this into mathml
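The worked example can be checked with a short, non-normative Python sketch. It assumes the default divisor (the sum of the kernel values), the default centered target position, and the 180-degree kernel rotation described above; edge handling is omitted, so the kernel must fit inside the image:

```python
def convolve_pixel(image, kernel, x, y):
    """Apply an orderX-by-orderY kernel at pixel (x, y), using the
    rotated-kernel indexing from the convolution formula and the
    default target (targetX = orderX // 2, targetY = orderY // 2)."""
    order_y, order_x = len(kernel), len(kernel[0])
    tx, ty = order_x // 2, order_y // 2
    total = 0
    for i in range(order_y):
        for j in range(order_x):
            total += (image[y - ty + i][x - tx + j]
                      * kernel[order_y - i - 1][order_x - j - 1])
    divisor = sum(sum(row) for row in kernel)  # default divisor
    return total / divisor

image = [[  0,  20,  40, 235, 235],
         [100, 120, 140, 235, 235],
         [200, 220, 240, 235, 235],
         [225, 225, 255, 255, 255],
         [225, 225, 255, 255, 255]]
kernel = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9]]
```

Evaluating convolve_pixel(image, kernel, 1, 1) reproduces the sum above, 3480 / 45.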

Because they operate on pixels, matrix convolutions are inherently resolution-dependent. To make feConvolveMatrix produce resolution-independent results, an explicit value should be provided for the filterRes attribute on the filter element, the kernelUnitLength attribute, or both.

kernelUnitLength, in combination with the other attributes, defines an implicit pixel grid in the filter effects coordinate system (i.e., the coordinate system established by the primitiveUnits attribute). If the pixel grid established by kernelUnitLength is not scaled to match the pixel grid established by attribute filterRes (implicitly or explicitly), then the input image will be temporarily rescaled to match its pixels with kernelUnitLength. The convolution happens on the resampled image. After applying the convolution, the image is resampled back to the original resolution.

When the image must be resampled to match the coordinate system defined by kernelUnitLength prior to convolution, or resampled to match the device coordinate system after convolution, it is recommended that high quality viewers make use of appropriate interpolation techniques, for example bilinear or bicubic. Depending on the speed of the available interpolants, this choice may be affected by the image-rendering property setting. Note that implementations might choose approaches that minimize or eliminate resampling when not necessary to produce proper results, such as when the document is zoomed out such that kernelUnitLength is considerably smaller than a device pixel.

Attribute definitions:

order = "<number-optional-number>"
Indicates the number of cells in each dimension for kernelMatrix. The values provided must be <integer>s greater than zero. The first number, <orderX>, indicates the number of columns in the matrix. The second number, <orderY>, indicates the number of rows in the matrix. If <orderY> is not provided, it defaults to <orderX>.
A typical value is order="3". It is recommended that only small values (e.g., 3) be used; higher values may result in very high CPU overhead and usually do not produce results that justify the impact on performance.
If the attribute is not specified, the effect is as if a value of "3" were specified.
Animatable: yes.
kernelMatrix = "<list of numbers>"
The list of <number>s that make up the kernel matrix for the convolution. Values are separated by space characters and/or a comma. The number of entries in the list must equal <orderX> times <orderY>.
Animatable: yes.
divisor = "<number>"
After applying the kernelMatrix to the input image to yield a number, that number is divided by divisor to yield the final destination color value. A divisor that is the sum of all the matrix values tends to have an evening effect on the overall color intensity of the result. If the specified divisor is zero then the default value will be used instead. The default value is the sum of all values in kernelMatrix, with the exception that if the sum is zero, then the divisor is set to 1.
Animatable: yes.
bias = "<number>"
After applying the kernelMatrix to the input image to yield a number and applying the divisor, the bias attribute is added to each component. One application of bias is when it is desirable to have a 0.5 gray value be the zero response of the filter. The bias property shifts the range of the filter. This allows representation of values that would otherwise be clamped to 0 or 1.
The lacuna value for bias is 0.
Animatable: yes.
targetX = "<integer>"
Determines the positioning in X of the convolution matrix relative to a given target pixel in the input image. The leftmost column of the matrix is column number zero. The value must be such that: 0 <= targetX < orderX. By default, the convolution matrix is centered in X over each pixel of the input image (i.e., targetX = floor ( orderX / 2 )).
Animatable: yes.
targetY = "<integer>"
Determines the positioning in Y of the convolution matrix relative to a given target pixel in the input image. The topmost row of the matrix is row number zero. The value must be such that: 0 <= targetY < orderY. By default, the convolution matrix is centered in Y over each pixel of the input image (i.e., targetY = floor ( orderY / 2 )).
Animatable: yes.
edgeMode = "duplicate | wrap | none"

Determines how to extend the input image as necessary with color values so that the matrix operations can be applied when the kernel is positioned at or near the edge of the input image.

"duplicate" indicates that the input image is extended along each of its borders as necessary by duplicating the color values at the given edge of the input image.

Original N-by-M image, where m=M-1 and n=N-1:
          11 12 ... 1m 1M
          21 22 ... 2m 2M
          .. .. ... .. ..
          n1 n2 ... nm nM
          N1 N2 ... Nm NM
Extended by two pixels using "duplicate":
  11 11   11 12 ... 1m 1M   1M 1M
  11 11   11 12 ... 1m 1M   1M 1M
  11 11   11 12 ... 1m 1M   1M 1M
  21 21   21 22 ... 2m 2M   2M 2M
  .. ..   .. .. ... .. ..   .. ..
  n1 n1   n1 n2 ... nm nM   nM nM
  N1 N1   N1 N2 ... Nm NM   NM NM
  N1 N1   N1 N2 ... Nm NM   NM NM
  N1 N1   N1 N2 ... Nm NM   NM NM
ED: Consider making this into mathml

"wrap" indicates that the input image is extended by taking the color values from the opposite edge of the image.

Extended by two pixels using "wrap":
  nm nM   n1 n2 ... nm nM   n1 n2
  Nm NM   N1 N2 ... Nm NM   N1 N2
  1m 1M   11 12 ... 1m 1M   11 12
  2m 2M   21 22 ... 2m 2M   21 22
  .. ..   .. .. ... .. ..   .. ..
  nm nM   n1 n2 ... nm nM   n1 n2
  Nm NM   N1 N2 ... Nm NM   N1 N2
  1m 1M   11 12 ... 1m 1M   11 12
  2m 2M   21 22 ... 2m 2M   21 22
ED: Consider making this into mathml

"none" indicates that the input image is extended with pixel values of zero for R, G, B and A.

The lacuna value for edgeMode is duplicate.

Animatable: yes.
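The three edgeMode keywords can be read as an index-mapping rule applied independently to each axis. The following Python sketch is our own illustration, not part of the specification; the function name and the None-for-transparent-black convention are ours:

```python
def extend_index(i, size, edge_mode):
    """Map a possibly out-of-range sample index i onto an input image
    axis of the given size, per the edgeMode keywords. Returns None
    for "none", meaning the sample is transparent black (zero for
    R, G, B and A)."""
    if 0 <= i < size:
        return i                          # in range: no extension needed
    if edge_mode == "duplicate":
        return min(max(i, 0), size - 1)   # clamp to the nearest edge pixel
    if edge_mode == "wrap":
        return i % size                   # take values from the opposite edge
    return None                           # "none": extend with zeros
```

Applied to both the column and the row index, this mapping reproduces the extension tables above.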

kernelUnitLength = "<number-optional-number>"
The first number is the <dx> value. The second number is the <dy> value. If the <dy> value is not specified, it defaults to the same value as <dx>. Indicates the intended distance in current filter units (i.e., units as determined by the value of attribute filter/primitiveUnits) between successive columns and rows, respectively, in the kernelMatrix. By specifying value(s) for kernelUnitLength, the kernel becomes defined in a scalable, abstract coordinate system. If kernelUnitLength is not specified, the default value is one pixel in the offscreen bitmap, which is a pixel-based coordinate system, and thus potentially not scalable. For some level of consistency across display media and user agents, it is necessary that a value be provided for at least one of filter/filterRes and kernelUnitLength. In some implementations, the most consistent results and the fastest performance will be achieved if the pixel grid of the temporary offscreen images aligns with the pixel grid of the kernel.
If a negative or zero value is specified the default value will be used instead.
Animatable: yes.
preserveAlpha = "false | true"
A value of false indicates that the convolution will apply to all channels, including the alpha channel. In this case the ALPHAX,Y of the convolution formula for a given pixel is:

ALPHAX,Y = (
              SUM I=0 to [orderY-1] {
                SUM J=0 to [orderX-1] {
                  SOURCE(X-targetX+J, Y-targetY+I) * kernelMatrix(orderX-J-1, orderY-I-1)
                }
              }
            ) / divisor + bias


A value of true indicates that the convolution will only apply to the color channels. In this case, the filter will temporarily unpremultiply the color component values, apply the kernel, and then re-premultiply at the end. In this case the ALPHAX,Y of the convolution formula for a given pixel is:

ALPHAX,Y = SOURCE(X,Y)

The lacuna value for preserveAlpha is false.
Animatable: yes.
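Putting the attributes together, the per-pixel convolution for one channel can be sketched in Python as follows. This is our own illustration (function and parameter names are ours); `source` is assumed to already apply the edgeMode extension and, for preserveAlpha="true", to return unpremultiplied color values:

```python
def convolve_pixel(source, kernel, x, y, order_x, order_y,
                   target_x, target_y, divisor, bias):
    """Evaluate the feConvolveMatrix formula for one channel of the
    pixel at (x, y). `source(x, y)` returns an edge-extended channel
    value; `kernel[i][j]` is kernelMatrix row i (of orderY rows) and
    column j (of orderX columns). The orderX-J-1 / orderY-I-1 indices
    in the formula mean the kernel is applied rotated 180 degrees."""
    total = 0.0
    for i in range(order_y):
        for j in range(order_x):
            total += (source(x - target_x + j, y - target_y + i)
                      * kernel[order_y - i - 1][order_x - j - 1])
    # divisor defaults to the sum of all kernel values (or 1 if that
    # sum is zero); this sketch assumes the caller has resolved that.
    return total / divisor + bias
```

With the identity kernel (a single 1 at the target position) and divisor 1, the formula returns the source pixel unchanged; a kernel of all ones with divisor orderX*orderY averages the neighborhood.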

17. Filter primitive feDiffuseLighting

This filter primitive lights an image using the alpha channel as a bump map. The resulting image is an RGBA opaque image based on the light color with alpha = 1.0 everywhere. The lighting calculation follows the standard diffuse component of the Phong lighting model. The resulting image depends on the light color, light position and surface geometry of the input bump map.

The light map produced by this filter primitive can be combined with a texture image using the multiply term of the arithmetic feComposite compositing method. Multiple light sources can be simulated by adding several of these light maps together before applying it to the texture image.

The formulas below make use of 3x3 filters. Because they operate on pixels, such filters are inherently resolution-dependent. To make feDiffuseLighting produce resolution-independent results, an explicit value should be provided for the filter/filterRes attribute on the filter element, the feDiffuseLighting/kernelUnitLength attribute, or both.

feDiffuseLighting/kernelUnitLength, in combination with the other attributes, defines an implicit pixel grid in the filter effects coordinate system (i.e., the coordinate system established by the filter/primitiveUnits attribute). If the pixel grid established by feDiffuseLighting/kernelUnitLength is not scaled to match the pixel grid established by attribute filter/filterRes (implicitly or explicitly), then the input image will be temporarily rescaled to match its pixels with feDiffuseLighting/kernelUnitLength. The 3x3 filters are applied to the resampled image. After applying the filter, the image is resampled back to its original resolution.

When the image must be resampled, it is recommended that high quality viewers make use of appropriate interpolation techniques, for example bilinear or bicubic. Depending on the speed of the available interpolants, this choice may be affected by the image-rendering property setting. Note that implementations might choose approaches that minimize or eliminate resampling when not necessary to produce proper results, such as when the document is zoomed out such that feDiffuseLighting/kernelUnitLength is considerably smaller than a device pixel.

For the formulas that follow, the Norm(Ax,Ay,Az) function is defined as:

ED: Consider making the following in mathml

Norm(Ax,Ay,Az) = sqrt(Ax^2+Ay^2+Az^2)

The resulting RGBA image is computed as follows:

Dr = kd * N.L * Lr
Dg = kd * N.L * Lg
Db = kd * N.L * Lb
Da = 1.0

where

kd = diffuse lighting constant
N = surface normal unit vector, a function of x and y
L = unit vector pointing from surface to light, a function of x and y in the point and spot light cases
Lr,Lg,Lb = RGB components of light, a function of x and y in the spot light case

N is a function of x and y and depends on the surface gradient as follows:

The surface described by the input alpha image I(x,y) is:

Z (x,y) = surfaceScale * I(x,y)

Surface normal is calculated using the Sobel gradient 3x3 filter. Different filter kernels are used depending on whether the given pixel is on the interior or an edge. For each case, the formula is:

Nx (x,y) = - surfaceScale * FACTORx *
           (Kx(0,0)*I(x-dx,y-dy) + Kx(1,0)*I(x,y-dy) + Kx(2,0)*I(x+dx,y-dy) +
            Kx(0,1)*I(x-dx,y)    + Kx(1,1)*I(x,y)    + Kx(2,1)*I(x+dx,y)    +
            Kx(0,2)*I(x-dx,y+dy) + Kx(1,2)*I(x,y+dy) + Kx(2,2)*I(x+dx,y+dy))
Ny (x,y) = - surfaceScale * FACTORy *
           (Ky(0,0)*I(x-dx,y-dy) + Ky(1,0)*I(x,y-dy) + Ky(2,0)*I(x+dx,y-dy) +
            Ky(0,1)*I(x-dx,y)    + Ky(1,1)*I(x,y)    + Ky(2,1)*I(x+dx,y)    +
            Ky(0,2)*I(x-dx,y+dy) + Ky(1,2)*I(x,y+dy) + Ky(2,2)*I(x+dx,y+dy))
Nz (x,y) = 1.0

N = (Nx, Ny, Nz) / Norm(Nx, Ny, Nz)

In these formulas, the dx and dy values (e.g., I(x-dx,y-dy)), represent deltas relative to a given (x,y) position for the purpose of estimating the slope of the surface at that point. These deltas are determined by the value (explicit or implicit) of attribute feDiffuseLighting/kernelUnitLength.

Top/left corner:

FACTORx=2/(3*dx)
Kx =
    |  0  0  0 |
    |  0 -2  2 |
    |  0 -1  1 |

FACTORy=2/(3*dy)
Ky =  
    |  0  0  0 |
    |  0 -2 -1 |
    |  0  2  1 |

Top row:

FACTORx=1/(3*dx)
Kx =
    |  0  0  0 |
    | -2  0  2 |
    | -1  0  1 |

FACTORy=1/(2*dy)
Ky =  
    |  0  0  0 |
    | -1 -2 -1 |
    |  1  2  1 |

Top/right corner:

FACTORx=2/(3*dx)
Kx =
    |  0  0  0 |
    | -2  2  0 |
    | -1  1  0 |

FACTORy=2/(3*dy)
Ky =  
    |  0  0  0 |
    | -1 -2  0 |
    |  1  2  0 |

Left column:

FACTORx=1/(2*dx)
Kx =
    | 0 -1  1 |
    | 0 -2  2 |
    | 0 -1  1 |

FACTORy=1/(3*dy)
Ky =  
    |  0 -2 -1 |
    |  0  0  0 |
    |  0  2  1 |

Interior pixels:

FACTORx=1/(4*dx)
Kx =
    | -1  0  1 |
    | -2  0  2 |
    | -1  0  1 |

FACTORy=1/(4*dy)
Ky =  
    | -1 -2 -1 |
    |  0  0  0 |
    |  1  2  1 |

Right column:

FACTORx=1/(2*dx)
Kx =
    | -1  1  0|
    | -2  2  0|
    | -1  1  0|

FACTORy=1/(3*dy)
Ky =  
    | -1 -2  0 |
    |  0  0  0 |
    |  1  2  0 |

Bottom/left corner:

FACTORx=2/(3*dx)
Kx =
    | 0 -1  1 |
    | 0 -2  2 |
    | 0  0  0 |

FACTORy=2/(3*dy)
Ky =  
    |  0 -2 -1 |
    |  0  2  1 |
    |  0  0  0 |

Bottom row:

FACTORx=1/(3*dx)
Kx =
    | -1  0  1 |
    | -2  0  2 |
    |  0  0  0 |

FACTORy=1/(2*dy)
Ky =  
    | -1 -2 -1 |
    |  1  2  1 |
    |  0  0  0 |

Bottom/right corner:

FACTORx=2/(3*dx)
Kx =
    | -1  1  0 |
    | -2  2  0 |
    |  0  0  0 |

FACTORy=2/(3*dy)
Ky =  
    | -1 -2  0 |
    |  1  2  0 |
    |  0  0  0 |

L, the unit vector from the image sample to the light, is calculated as follows:

For Infinite light sources it is constant:

Lx = cos(azimuth)*cos(elevation)
Ly = sin(azimuth)*cos(elevation)
Lz = sin(elevation)

For Point and spot lights it is a function of position:

Lx = Lightx - x
Ly = Lighty - y
Lz = Lightz - Z(x,y)

L = (Lx, Ly, Lz) / Norm(Lx, Ly, Lz)

where Lightx, Lighty, and Lightz are the input light position.

Lr,Lg,Lb, the light color vector, is a function of position in the spot light case only:

Lr = Lightr*pow((-L.S),specularExponent)
Lg = Lightg*pow((-L.S),specularExponent)
Lb = Lightb*pow((-L.S),specularExponent)

where S is the unit vector pointing from the light to the point (pointsAtX, pointsAtY, pointsAtZ) in the x-y plane:

Sx = pointsAtX - Lightx
Sy = pointsAtY - Lighty
Sz = pointsAtZ - Lightz

S = (Sx, Sy, Sz) / Norm(Sx, Sy, Sz)

If L.S is positive, no light is present. (Lr = Lg = Lb = 0). If feSpotLight/limitingConeAngle is specified, -L.S < cos(limitingConeAngle) also indicates that no light is present.
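For an interior pixel lit by a distant light source, the formulas above combine as in the following Python sketch. This is our own illustration: edge and corner pixels would use the other kernels listed above, azimuth and elevation are taken in radians, and point/spot lights are not modeled:

```python
import math

def diffuse_pixel(I, x, y, dx, dy, surface_scale, kd,
                  azimuth, elevation, light_rgb):
    """feDiffuseLighting for one interior pixel under a distant
    light. `I(x, y)` returns the input alpha value. Returns the
    resulting (Dr, Dg, Db, Da), with Da = 1.0 everywhere."""
    # Interior Sobel kernels with FACTORx = 1/(4*dx), FACTORy = 1/(4*dy).
    nx = -surface_scale * (1.0 / (4 * dx)) * (
        -1 * I(x - dx, y - dy) + 1 * I(x + dx, y - dy)
        - 2 * I(x - dx, y) + 2 * I(x + dx, y)
        - 1 * I(x - dx, y + dy) + 1 * I(x + dx, y + dy))
    ny = -surface_scale * (1.0 / (4 * dy)) * (
        -1 * I(x - dx, y - dy) - 2 * I(x, y - dy) - 1 * I(x + dx, y - dy)
        + 1 * I(x - dx, y + dy) + 2 * I(x, y + dy) + 1 * I(x + dx, y + dy))
    nz = 1.0
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    n = (nx / norm, ny / norm, nz / norm)
    # Distant light: L is constant across the image.
    L = (math.cos(azimuth) * math.cos(elevation),
         math.sin(azimuth) * math.cos(elevation),
         math.sin(elevation))
    n_dot_l = n[0] * L[0] + n[1] * L[1] + n[2] * L[2]
    lr, lg, lb = light_rgb
    return (kd * n_dot_l * lr, kd * n_dot_l * lg, kd * n_dot_l * lb, 1.0)
```

On a flat surface (constant alpha) the normal is (0, 0, 1), so a light directly overhead yields Dr = kd * Lr, as expected from the formulas.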

Attribute definitions:

surfaceScale = "<number>"
height of surface when Ain = 1.
If the attribute is not specified, then the effect is as if a value of 1 were specified.
Animatable: yes.
diffuseConstant = "<number>"
kd in Phong lighting model. In SVG, this can be any non-negative number.
If the attribute is not specified, then the effect is as if a value of 1 were specified.
Animatable: yes.
kernelUnitLength = "<number-optional-number>"
The first number is the <dx> value. The second number is the <dy> value. If the <dy> value is not specified, it defaults to the same value as <dx>. Indicates the intended distance in current filter units (i.e., units as determined by the value of attribute filter/primitiveUnits) for dx and dy, respectively, in the surface normal calculation formulas. By specifying value(s) for kernelUnitLength, the kernel becomes defined in a scalable, abstract coordinate system. If kernelUnitLength is not specified, the dx and dy values should represent very small deltas relative to a given (x,y) position, which might be implemented in some cases as one pixel in the intermediate image offscreen bitmap, which is a pixel-based coordinate system, and thus potentially not scalable. For some level of consistency across display media and user agents, it is necessary that a value be provided for at least one of filter/filterRes and kernelUnitLength. Discussion of intermediate images is in the Introduction and in the description of attribute filter/filterRes.
If a negative or zero value is specified the default value will be used instead.
Animatable: yes.

The light source is defined by one of the child elements feDistantLight, fePointLight or feSpotLight. The light color is specified by property lighting-color.

18. Filter primitive feDisplacementMap

This filter primitive uses the pixel values from the image from feDisplacementMap/in2 to spatially displace the image from in. This is the transformation to be performed:

 P'(x,y) ← P( x + scale * (XC(x,y) - .5), y + scale * (YC(x,y) - .5))
  

where P(x,y) is the input image, in, and P'(x,y) is the destination. XC(x,y) and YC(x,y) are the component values of the channel designated by the feDisplacementMap/xChannelSelector and feDisplacementMap/yChannelSelector. For example, to use the R component of feDisplacementMap/in2 to control displacement in x and the G component of feDisplacementMap/in2 to control displacement in y, set feDisplacementMap/xChannelSelector to "R" and feDisplacementMap/yChannelSelector to "G".

The displacement map, feDisplacementMap/in2, defines the inverse of the mapping performed.

The input image in is to remain premultiplied for this filter primitive. The calculations using the pixel values from feDisplacementMap/in2 are performed using non-premultiplied color values. If the image from feDisplacementMap/in2 consists of premultiplied color values, those values are automatically converted into non-premultiplied color values before performing this operation.

This filter can have arbitrary non-localized effect on the input which might require substantial buffering in the processing pipeline. However with this formulation, any intermediate buffering needs can be determined by feDisplacementMap/scale which represents the maximum range of displacement in either x or y.

When applying this filter, the source pixel location will often lie between several source pixels. In this case it is recommended that high quality viewers apply an interpolation on the surrounding pixels, for example bilinear or bicubic, rather than simply selecting the nearest source pixel. Depending on the speed of the available interpolants, this choice may be affected by the image-rendering property setting.

The color-interpolation-filters property only applies to the feDisplacementMap/in2 source image and does not apply to the in source image. The in source image must remain in its current color space.
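The transformation can be sketched per pixel in Python. This is our own illustration (names are ours): `P` samples the input image in, and `XC`/`YC` return the selected non-premultiplied channel of in2 in the range [0, 1]; a real implementation would also interpolate when the source location falls between pixels:

```python
def displace(P, XC, YC, scale, x, y):
    """feDisplacementMap sample lookup for one output pixel:
    P'(x,y) = P(x + scale * (XC(x,y) - 0.5),
                y + scale * (YC(x,y) - 0.5))."""
    src_x = x + scale * (XC(x, y) - 0.5)
    src_y = y + scale * (YC(x, y) - 0.5)
    return P(src_x, src_y)
```

Note that a channel value of 0.5 produces no displacement, which is why a scale of 0 (or a uniformly mid-gray map) leaves the source image unchanged.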

Attribute definitions:

scale = "<number>"
Displacement scale factor. The amount is expressed in the coordinate system established by attribute filter/primitiveUnits on the filter element.
When the value of this attribute is 0, this operation has no effect on the source image.

The lacuna value for feDisplacementMap/scale is 0.

Animatable: yes.
xChannelSelector = "R | G | B | A"
Indicates which channel from feDisplacementMap/in2 to use to displace the pixels in in along the x-axis. The lacuna value for feDisplacementMap/xChannelSelector is A.
Animatable: yes.
yChannelSelector = "R | G | B | A"
Indicates which channel from feDisplacementMap/in2 to use to displace the pixels in in along the y-axis. The lacuna value for feDisplacementMap/yChannelSelector is A.
Animatable: yes.
in2 = "(see in attribute)"
The second input image, which is used to displace the pixels in the image from attribute in. This attribute can take on the same values as the in attribute.
Animatable: yes.

19. Filter primitive feFlood

This filter primitive creates a rectangle filled with the color and opacity values from properties flood-color and flood-opacity. The rectangle is as large as the filter primitive subregion established by the feFlood element.

 

The flood-color property indicates what color to use to flood the current filter primitive subregion. The keyword currentColor and ICC colors can be specified in the same manner as within a <paint> specification for the fill and stroke properties.

flood-color
Value:   currentColor |
<color> [<icccolor>] |
inherit
Initial:   black
Applies to:   feFlood and feDropShadow elements
Inherited:   no
Percentages:   N/A
Media:   visual
Animatable:   yes

The flood-opacity property defines the opacity value to use across the entire filter primitive subregion.

flood-opacity
Value:   <opacity-value> | inherit
Initial:   1
Applies to:   feFlood and feDropShadow elements
Inherited:   no
Percentages:   N/A
Media:   visual
Animatable:   yes

20. Filter primitive feGaussianBlur

This filter primitive performs a Gaussian blur on the input image.

The Gaussian blur kernel is an approximation of the normalized convolution:

G(x,y) = H(x)I(y)

where

H(x) = exp(-x^2 / (2s^2)) / sqrt(2*pi*s^2)

and

I(y) = exp(-y^2 / (2t^2)) / sqrt(2*pi*t^2)

with ‘s’ being the standard deviation in the x direction and ‘t’ being the standard deviation in the y direction, as specified by stdDeviation.

The value of stdDeviation can be either one or two numbers. If two numbers are provided, the first number represents a standard deviation value along the x-axis of the current coordinate system and the second value represents a standard deviation in Y. If one number is provided, then that value is used for both X and Y.

Even if only one value is provided for stdDeviation, this can be implemented as a separable convolution.

For larger values of ‘s’ (s >= 2.0), an approximation can be used: Three successive box-blurs build a piece-wise quadratic convolution kernel, which approximates the Gaussian kernel to within roughly 3%.

let d = floor(s * 3*sqrt(2*pi)/4 + 0.5)

... if d is odd, use three box-blurs of size ‘d’, centered on the output pixel.

... if d is even, two box-blurs of size ‘d’ (the first one centered on the pixel boundary between the output pixel and the one to the left, the second one centered on the pixel boundary between the output pixel and the one to the right) and one box blur of size ‘d+1’ centered on the output pixel.

The approximation formula also applies correspondingly to ‘t’.
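Under those rules, the box sizes can be computed as in this Python sketch (our own illustration; the half-pixel left/right offsets for even d are noted in a comment rather than modeled):

```python
import math

def box_blur_sizes(s):
    """Box-blur sizes approximating a Gaussian of standard deviation
    s (the approximation is recommended for s >= 2.0). Returns the
    three box sizes; for even d, the first two blurs are centered on
    the pixel boundaries left and right of the output pixel."""
    d = math.floor(s * 3 * math.sqrt(2 * math.pi) / 4 + 0.5)
    if d % 2 == 1:
        return (d, d, d)      # three box-blurs of size d, centered
    return (d, d, d + 1)      # two offset blurs of d, one of d + 1
```
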

Frequently this operation will take place on alpha-only images, such as that produced by the built-in input, SourceAlpha. The implementation may notice this and optimize the single channel case. If the input has infinite extent and is constant (e.g., FillPaint), this operation has no effect. If the input has infinite extent and the filter result is the input to an feTile, the filter is evaluated with periodic boundary conditions.

Attribute definitions:

stdDeviation = "<number-optional-number>"
The standard deviation for the blur operation. If two <number>s are provided, the first number represents a standard deviation value along the x-axis of the coordinate system established by attribute filter/primitiveUnits on the filter element. The second value represents a standard deviation in Y. If one number is provided, then that value is used for both X and Y.
A value of zero disables the effect of the given filter primitive (i.e., the result is the filter input image).
If stdDeviation is 0 in only one of X or Y, then the effect is that the blur is only applied in the direction that has a non-zero value.
The lacuna value for stdDeviation is 0.
Animatable: yes.

The example at the start of this chapter makes use of the feGaussianBlur filter primitive to create a drop shadow effect.

21. Filter primitive feUnsharpMask

This filter primitive performs an image sharpening operation on the input image. This is traditionally known as an unsharp mask operation.

The filter first does a feGaussianBlur operation on the input image and then adds the difference between the input image and the blurred image back to the input image, which accentuates edges.

For controlling the result there are three attributes that can be used:

22. Filter primitive feImage

This filter primitive refers to a graphic external to this filter element, which is loaded or rendered into an RGBA raster and becomes the result of the filter primitive.

This filter primitive can refer to an external image or can be a reference to another piece of SVG. It produces an image similar to the built-in image source SourceGraphic except that the graphic comes from an external source.

If the xlink:href references a stand-alone image resource such as a JPEG, PNG or SVG file, then the image resource is rendered according to the behavior of the image element; otherwise, the referenced resource is rendered according to the behavior of the use element. In either case, the current user coordinate system depends on the value of attribute filter/primitiveUnits on the filter element. The processing of the preserveAspectRatio attribute on the feImage element is identical to that of the image element.

When the referenced image must be resampled to match the device coordinate system, it is recommended that high quality viewers make use of appropriate interpolation techniques, for example bilinear or bicubic. Depending on the speed of the available interpolants, this choice may be affected by the image-rendering property setting.

Attribute definitions:

xlink:href = "<IRI>"
An IRI reference to an image resource or to an element.
Animatable: yes.
preserveAspectRatio = "[defer] <align> [<meetOrSlice>]"

See preserveAspectRatio.

The lacuna value for preserveAspectRatio is xMidYMid meet.

Animatable: yes.

Example feImage illustrates how images are placed relative to an object. From left to right:

  • The default placement of an image. Note that the image is centered in the filter region and has the maximum size that will fit in the region consistent with preserving the aspect ratio.
  • The image stretched to fit the bounding box of an object.
  • The image placed using user coordinates. Note that the image is first centered in a box the size of the filter region and has the maximum size that will fit in the box consistent with preserving the aspect ratio. This box is then shifted by the given x and y values relative to the viewport the object is in.

23. Filter primitive feMerge

This filter primitive composites input image layers on top of each other using the over operator with Input1 (corresponding to the first feMergeNode child element) on the bottom and the last specified input, InputN (corresponding to the last feMergeNode child element), on top.

Many effects produce a number of intermediate layers in order to create the final output image. This filter allows us to collapse those into a single image. Although this could be done by using n-1 feComposite filter primitives, it is more convenient to have this common operation available in this form, and it offers the implementation some additional flexibility.

Each ‘feMerge’ element can have any number of ‘feMergeNode’ subelements, each of which has an in attribute.

The canonical implementation of feMerge is to render the entire effect into one RGBA layer, and then render the resulting layer on the output device. In certain cases (in particular if the output device itself is a continuous tone device), and since merging is associative, it might be a sufficient approximation to evaluate the effect one layer at a time and render each layer individually onto the output device bottom to top.
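The canonical bottom-to-top accumulation with the `over` operator can be sketched per pixel in Python. This is our own illustration, assuming premultiplied RGBA tuples with components in [0, 1]:

```python
def merge(layers):
    """Composite feMergeNode inputs with the `over` operator.
    layers[0] corresponds to the first feMergeNode child (bottom);
    the last entry ends up on top. Each layer is a premultiplied
    (r, g, b, a) tuple for one pixel."""
    result = (0.0, 0.0, 0.0, 0.0)          # start from transparent black
    for top in layers:
        ta = top[3]
        # Porter-Duff `over` for premultiplied values:
        # out = top + result * (1 - top_alpha)
        result = tuple(t + r * (1.0 - ta) for t, r in zip(top, result))
    return result
```

Because `over` is associative, the same result is obtained whether the layers are merged into one buffer first or rendered one at a time onto a continuous tone device, as noted above.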

If the topmost image input is SourceGraphic and this feMerge is the last filter primitive in the filter, the implementation is encouraged to render the layers up to that point, and then render the SourceGraphic directly from its vector description on top.

The example at the start of this chapter makes use of the feMerge filter primitive to composite two intermediate filter results together.

24. Filter primitive feMorphology

This filter primitive performs "fattening" or "thinning" of artwork. It is particularly useful for fattening or thinning an alpha channel.

The dilation (or erosion) kernel is a rectangle with a width of 2*x-radius and a height of 2*y-radius. In dilation, the output pixel is the individual component-wise maximum of the corresponding R,G,B,A values in the input image's kernel rectangle. In erosion, the output pixel is the individual component-wise minimum of the corresponding R,G,B,A values in the input image's kernel rectangle.

Frequently this operation will take place on alpha-only images, such as that produced by the built-in input, SourceAlpha. In that case, the implementation might want to optimize the single channel case.

If the input has infinite extent and is constant (e.g., FillPaint where the fill is a solid color), this operation has no effect. If the input has infinite extent and the filter result is the input to an feTile, the filter is evaluated with periodic boundary conditions.

Because feMorphology operates on premultiplied color values, it will always result in color values less than or equal to the alpha channel.

Attribute definitions:

operator = "erode | dilate"
A keyword indicating whether to erode (i.e., thin) or dilate (i.e., fatten) the source graphic. The lacuna value for operator is erode.
Animatable: yes.
radius = "<number-optional-number>"
The radius (or radii) for the operation. If two <number>s are provided, the first number represents an x-radius and the second value represents a y-radius. If one number is provided, then that value is used for both X and Y. The values are in the coordinate system established by attribute filter/primitiveUnits on the filter element.
A negative or zero value disables the effect of the given filter primitive (i.e., the result is a transparent black image).
If the attribute is not specified, then the effect is as if a value of 0 were specified.
Animatable: yes.
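A per-pixel sketch of the operation in Python, for one channel. This is our own illustration: for simplicity the kernel here is the (2*radius+1)-wide neighborhood centered on the pixel, clipped at the image edges, whereas an implementation may position the 2*x-radius by 2*y-radius rectangle differently:

```python
def morphology_pixel(img, x, y, rx, ry, op):
    """feMorphology for one pixel of one channel. `img` is a 2-D
    list (rows of premultiplied channel values). op is "dilate"
    (component-wise maximum over the kernel rectangle) or "erode"
    (component-wise minimum)."""
    h, w = len(img), len(img[0])
    samples = [img[j][i]
               for j in range(max(0, y - ry), min(h, y + ry + 1))
               for i in range(max(0, x - rx), min(w, x + rx + 1))]
    return max(samples) if op == "dilate" else min(samples)
```

Run over every pixel and channel, dilation fattens bright regions and erosion thins them, which is what makes the primitive useful on alpha channels.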

25. Filter primitive feOffset

This filter primitive offsets the input image relative to its current position in the image space by the specified vector.

This is important for effects like drop shadows.

When applying this filter, the destination location may be offset by a fraction of a pixel in device space. In this case a high quality viewer should make use of appropriate interpolation techniques, for example bilinear or bicubic. This is especially recommended for dynamic viewers where this interpolation provides visually smoother movement of images. For static viewers this is less of a concern. Close attention should be paid to the image-rendering property setting to determine the author's intent.

Attribute definitions:

dx = "<number>"
The amount to offset the input graphic along the x-axis. The offset amount is expressed in the coordinate system established by attribute filter/primitiveUnits on the filter element.
If the attribute is not specified, then the effect is as if a value of 0 were specified.
Animatable: yes.
dy = "<number>"
The amount to offset the input graphic along the y-axis. The offset amount is expressed in the coordinate system established by attribute filter/primitiveUnits on the filter element.
If the attribute is not specified, then the effect is as if a value of 0 were specified.
Animatable: yes.

The example at the start of this chapter makes use of the feOffset filter primitive to offset the drop shadow from the original source graphic.

26. Filter primitive feSpecularLighting

This filter primitive lights a source graphic using the alpha channel as a bump map. The resulting image is an RGBA image based on the light color. The lighting calculation follows the standard specular component of the Phong lighting model. The resulting image depends on the light color, light position and surface geometry of the input bump map. The result of the lighting calculation is meant to be added to a texture image. The filter primitive assumes that the viewer is at infinity in the z direction (i.e., the unit vector in the eye direction is (0,0,1) everywhere).

This filter primitive produces an image which contains the specular reflection part of the lighting calculation. Such a map is intended to be combined with a texture using the add term of the arithmetic feComposite method. Multiple light sources can be simulated by adding several of these light maps before applying it to the texture image.

The resulting RGBA image is computed as follows:

Sr = ks * pow(N.H, specularExponent) * Lr
Sg = ks * pow(N.H, specularExponent) * Lg
Sb = ks * pow(N.H, specularExponent) * Lb
Sa = max(Sr, Sg, Sb)

where

ks = specular lighting constant
N = surface normal unit vector, a function of x and y
H = "halfway" unit vector between eye unit vector and light unit vector

Lr,Lg,Lb = RGB components of light

See feDiffuseLighting for definition of N and (Lr, Lg, Lb).

The definition of H reflects our assumption of the constant eye vector E = (0,0,1):

H = (L + E) / Norm(L+E)

where L is the light unit vector.

Unlike feDiffuseLighting, the feSpecularLighting filter produces a non-opaque image. This is due to the fact that the specular result (Sr,Sg,Sb,Sa) is meant to be added to the textured image. The alpha channel of the result is the max of the color components, so that where the specular light is zero, no additional coverage is added to the image, and a fully white highlight will add opacity.

The feDiffuseLighting and feSpecularLighting filters will often be applied together. An implementation may detect this and calculate both maps in one pass, instead of two.
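For one pixel, the specular computation can be sketched in Python. This is our own illustration: `n` and `L` come from the feDiffuseLighting formulas, and the max(., 0) clamp on N.H is our addition to keep pow well-defined for negative bases, not something the specification states:

```python
import math

def specular_pixel(n, L, ks, exponent, light_rgb):
    """feSpecularLighting for one pixel. `n` is the surface normal
    unit vector, `L` the unit vector toward the light. Uses the
    fixed eye vector E = (0,0,1) to form the halfway vector H, and
    sets the result alpha to max(Sr, Sg, Sb)."""
    hx, hy, hz = L[0], L[1], L[2] + 1.0        # H direction = L + E
    norm = math.sqrt(hx * hx + hy * hy + hz * hz)
    H = (hx / norm, hy / norm, hz / norm)
    n_dot_h = n[0] * H[0] + n[1] * H[1] + n[2] * H[2]
    s = [ks * math.pow(max(n_dot_h, 0.0), exponent) * c for c in light_rgb]
    return (s[0], s[1], s[2], max(s))
```

With the normal and light both pointing straight at the viewer, N.H is 1 and the result is simply ks times the light color, with alpha equal to its largest component.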

Attribute definitions:

surfaceScale = "<number>"
height of surface when Ain = 1.
If the attribute is not specified, then the effect is as if a value of 1 were specified.
Animatable: yes.
specularConstant = "<number>"
ks in Phong lighting model. In SVG, this can be any non-negative number.
If the attribute is not specified, then the effect is as if a value of 1 were specified.
Animatable: yes.
specularExponent = "<number>"
Exponent for specular term, larger is more "shiny". Range 1.0 to 128.0.
If the attribute is not specified, then the effect is as if a value of 1 were specified.
Animatable: yes.
kernelUnitLength = "<number-optional-number>"
The first number is the <dx> value. The second number is the <dy> value. If the <dy> value is not specified, it defaults to the same value as <dx>. Indicates the intended distance in current filter units (i.e., units as determined by the value of attribute filter/primitiveUnits) for dx and dy, respectively, in the surface normal calculation formulas. By specifying value(s) for kernelUnitLength, the kernel becomes defined in a scalable, abstract coordinate system. If kernelUnitLength is not specified, the dx and dy values should represent very small deltas relative to a given (x,y) position, which might be implemented in some cases as one pixel in the intermediate image offscreen bitmap, which is a pixel-based coordinate system, and thus potentially not scalable. For some level of consistency across display media and user agents, it is necessary that a value be provided for at least one of filter/filterRes and kernelUnitLength. Discussion of intermediate images is in the Introduction and in the description of attribute filter/filterRes.
If a negative or zero value is specified the default value will be used instead.
Animatable: yes.

The light source is defined by one of the child elements feDistantLight, fePointLight or feSpotLight. The light color is specified by the lighting-color property.

The example at the start of this chapter makes use of the feSpecularLighting filter primitive to achieve a highly reflective, 3D glowing effect.

27. Filter primitive feTile

This filter primitive fills a target rectangle with a repeated, tiled pattern of an input image. The target rectangle is as large as the filter primitive subregion established by the feTile element.

Typically, the input image has been defined with its own filter primitive subregion in order to define a reference tile. feTile replicates the reference tile in both X and Y to completely fill the target rectangle. The top/left corner of each given tile is at location (x+i*width,y+j*height), where (x,y) represents the top/left of the input image's filter primitive subregion, width and height represent the width and height of the input image's filter primitive subregion, and i and j can be any integer value. In most cases, the input image will have a smaller filter primitive subregion than the feTile in order to achieve a repeated pattern effect.

Implementers must take appropriate measures in constructing the tiled image to avoid artifacts between tiles, particularly in situations where the user to device transform includes shear and/or rotation. Unless care is taken, interpolation can lead to edge pixels in the tile having opacity values lower or higher than expected due to the interaction of painting adjacent tiles which each have partial overlap with particular pixels.

 

28. Filter primitive feTurbulence

ISSUE: Consider phasing out this C algorithm in favor of Simplex noise, which is more HW friendly.

This filter primitive creates an image using the Perlin turbulence function. It allows the synthesis of artificial textures like clouds or marble. For a detailed description of the Perlin turbulence function, see "Texturing and Modeling", Ebert et al, AP Professional, 1994. The resulting image will fill the entire filter primitive subregion for this filter primitive.

It is possible to create bandwidth-limited noise by synthesizing only one octave.

The C code below shows the exact algorithm used for this filter effect. The filter primitive subregion is to be passed as the arguments fTileX, fTileY, fTileWidth and fTileHeight.

For fractalSum, you get a turbFunctionResult that is aimed at a range of -1 to 1 (the actual result might exceed this range in some cases). To convert to a color value, use the formula colorValue = ((turbFunctionResult * 255) + 255) / 2, then clamp to the range 0 to 255.

For turbulence, you get a turbFunctionResult that is aimed at a range of 0 to 1 (the actual result might exceed this range in some cases). To convert to a color value, use the formula colorValue = (turbFunctionResult * 255), then clamp to the range 0 to 255.
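The two conversion formulas above, with clamping, can be sketched as standalone helpers (the function names are illustrative):

```c
/* Clamp a value to the 8-bit color range. */
static double clamp255(double v)
{
    return v < 0.0 ? 0.0 : (v > 255.0 ? 255.0 : v);
}

/* fractalSum results aim at [-1, 1] (they may exceed that range):
   colorValue = ((turbFunctionResult * 255) + 255) / 2, then clamp. */
double fractal_sum_to_color(double turbFunctionResult)
{
    return clamp255(((turbFunctionResult * 255.0) + 255.0) / 2.0);
}

/* turbulence results aim at [0, 1] (they may exceed that range):
   colorValue = turbFunctionResult * 255, then clamp. */
double turbulence_to_color(double turbFunctionResult)
{
    return clamp255(turbFunctionResult * 255.0);
}
```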

The following order is used for applying the pseudo random numbers. An initial seed value is computed based on the seed attribute. Then the implementation computes the lattice points for R, then continues getting additional pseudo random numbers relative to the last generated pseudo random number and computes the lattice points for G, and so on for B and A.

The generated color and alpha values are in the color space determined by the color-interpolation-filters property:

/* Produces results in the range [1, 2**31 - 2].
Algorithm is: r = (a * r) mod m
where a = 16807 and m = 2**31 - 1 = 2147483647
See [Park & Miller], CACM vol. 31 no. 10 p. 1195, Oct. 1988
To test: the algorithm should produce the result 1043618065
as the 10,000th generated number if the original seed is 1.
*/
#define RAND_m 2147483647 /* 2**31 - 1 */
#define RAND_a 16807 /* 7**5; primitive root of m */
#define RAND_q 127773 /* m / a */
#define RAND_r 2836 /* m % a */
long setup_seed(long lSeed)
{
  if (lSeed <= 0) lSeed = -(lSeed % (RAND_m - 1)) + 1;
  if (lSeed > RAND_m - 1) lSeed = RAND_m - 1;
  return lSeed;
}
long random(long lSeed)
{
  long result;
  result = RAND_a * (lSeed % RAND_q) - RAND_r * (lSeed / RAND_q);
  if (result <= 0) result += RAND_m;
  return result;
}
#define BSize 0x100
#define BM 0xff
#define PerlinN 0x1000
#define NP 12 /* 2^NP = PerlinN */
#define NM 0xfff
static int uLatticeSelector[BSize + BSize + 2];
static double fGradient[4][BSize + BSize + 2][2];
struct StitchInfo
{
  int nWidth; // How much to subtract to wrap for stitching.
  int nHeight;
  int nWrapX; // Minimum value to wrap.
  int nWrapY;
};
static void init(long lSeed)
{
  double s;
  int i, j, k;
  lSeed = setup_seed(lSeed);
  for(k = 0; k < 4; k++)
  {
    for(i = 0; i < BSize; i++)
    {
      uLatticeSelector[i] = i;
      for (j = 0; j < 2; j++)
        fGradient[k][i][j] = (double)(((lSeed = random(lSeed)) % (BSize + BSize)) - BSize) / BSize;
      s = double(sqrt(fGradient[k][i][0] * fGradient[k][i][0] + fGradient[k][i][1] * fGradient[k][i][1]));
      fGradient[k][i][0] /= s;
      fGradient[k][i][1] /= s;
    }
  }
  while(--i)
  {
    k = uLatticeSelector[i];
    uLatticeSelector[i] = uLatticeSelector[j = (lSeed = random(lSeed)) % BSize];
    uLatticeSelector[j] = k;
  }
  for(i = 0; i < BSize + 2; i++)
  {
    uLatticeSelector[BSize + i] = uLatticeSelector[i];
    for(k = 0; k < 4; k++)
      for(j = 0; j < 2; j++)
        fGradient[k][BSize + i][j] = fGradient[k][i][j];
  }
}
#define s_curve(t) ( t * t * (3. - 2. * t) )
#define lerp(t, a, b) ( a + t * (b - a) )
double noise2(int nColorChannel, double vec[2], StitchInfo *pStitchInfo)
{
  int bx0, bx1, by0, by1, b00, b10, b01, b11;
  double rx0, rx1, ry0, ry1, *q, sx, sy, a, b, t, u, v;
  register int i, j;
  t = vec[0] + PerlinN;
  bx0 = (int)t;
  bx1 = bx0+1;
  rx0 = t - (int)t;
  rx1 = rx0 - 1.0f;
  t = vec[1] + PerlinN;
  by0 = (int)t;
  by1 = by0+1;
  ry0 = t - (int)t;
  ry1 = ry0 - 1.0f;
  // If stitching, adjust lattice points accordingly.
  if(pStitchInfo != NULL)
  {
    if(bx0 >= pStitchInfo->nWrapX)
      bx0 -= pStitchInfo->nWidth;
    if(bx1 >= pStitchInfo->nWrapX)
      bx1 -= pStitchInfo->nWidth;
    if(by0 >= pStitchInfo->nWrapY)
      by0 -= pStitchInfo->nHeight;
    if(by1 >= pStitchInfo->nWrapY)
      by1 -= pStitchInfo->nHeight;
  }
  bx0 &= BM;
  bx1 &= BM;
  by0 &= BM;
  by1 &= BM;
  i = uLatticeSelector[bx0];
  j = uLatticeSelector[bx1];
  b00 = uLatticeSelector[i + by0];
  b10 = uLatticeSelector[j + by0];
  b01 = uLatticeSelector[i + by1];
  b11 = uLatticeSelector[j + by1];
  sx = double(s_curve(rx0));
  sy = double(s_curve(ry0));
  q = fGradient[nColorChannel][b00]; u = rx0 * q[0] + ry0 * q[1];
  q = fGradient[nColorChannel][b10]; v = rx1 * q[0] + ry0 * q[1];
  a = lerp(sx, u, v);
  q = fGradient[nColorChannel][b01]; u = rx0 * q[0] + ry1 * q[1];
  q = fGradient[nColorChannel][b11]; v = rx1 * q[0] + ry1 * q[1];
  b = lerp(sx, u, v);
  return lerp(sy, a, b);
}
double turbulence(int nColorChannel, double *point, double fBaseFreqX, double fBaseFreqY,
          int nNumOctaves, bool bFractalSum, bool bDoStitching,
          double fTileX, double fTileY, double fTileWidth, double fTileHeight)
{
  StitchInfo stitch;
  StitchInfo *pStitchInfo = NULL; // Not stitching when NULL.
  // Adjust the base frequencies if necessary for stitching.
  if(bDoStitching)
  {
    // When stitching tiled turbulence, the frequencies must be adjusted
    // so that the tile borders will be continuous.
    if(fBaseFreqX != 0.0)
    {
      double fLoFreq = double(floor(fTileWidth * fBaseFreqX)) / fTileWidth;
      double fHiFreq = double(ceil(fTileWidth * fBaseFreqX)) / fTileWidth;
      if(fBaseFreqX / fLoFreq < fHiFreq / fBaseFreqX)
        fBaseFreqX = fLoFreq;
      else
        fBaseFreqX = fHiFreq;
    }
    if(fBaseFreqY != 0.0)
    {
      double fLoFreq = double(floor(fTileHeight * fBaseFreqY)) / fTileHeight;
      double fHiFreq = double(ceil(fTileHeight * fBaseFreqY)) / fTileHeight;
      if(fBaseFreqY / fLoFreq < fHiFreq / fBaseFreqY)
        fBaseFreqY = fLoFreq;
      else
        fBaseFreqY = fHiFreq;
    }
    // Set up initial stitch values.
    pStitchInfo = &stitch;
    stitch.nWidth = int(fTileWidth * fBaseFreqX + 0.5f);
    stitch.nWrapX = fTileX * fBaseFreqX + PerlinN + stitch.nWidth;
    stitch.nHeight = int(fTileHeight * fBaseFreqY + 0.5f);
    stitch.nWrapY = fTileY * fBaseFreqY + PerlinN + stitch.nHeight;
  }
  double fSum = 0.0f;
  double vec[2];
  vec[0] = point[0] * fBaseFreqX;
  vec[1] = point[1] * fBaseFreqY;
  double ratio = 1;
  for(int nOctave = 0; nOctave < nNumOctaves; nOctave++)
  {
    if(bFractalSum)
      fSum += double(noise2(nColorChannel, vec, pStitchInfo) / ratio);
    else
      fSum += double(fabs(noise2(nColorChannel, vec, pStitchInfo)) / ratio);
    vec[0] *= 2;
    vec[1] *= 2;
    ratio *= 2;
    if(pStitchInfo != NULL)
    {
      // Update stitch values. Subtracting PerlinN before the multiplication and
      // adding it afterward simplifies to subtracting it once.
      stitch.nWidth *= 2;
      stitch.nWrapX = 2 * stitch.nWrapX - PerlinN;
      stitch.nHeight *= 2;
      stitch.nWrapY = 2 * stitch.nWrapY - PerlinN;
    }
  }
  return fSum;
}
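The random-number generator used above can be verified against the published test value quoted in its comment block: starting from a seed of 1, the 10,000th generated number must be 1043618065. A standalone sketch of that check (function names are illustrative):

```c
/* Park-Miller minimal standard generator, as used in the turbulence
   code: r = (16807 * r) mod (2^31 - 1), computed without overflow via
   Schrage's decomposition with q = m / a and r = m % a. */
long park_miller_next(long seed)
{
    const long a = 16807, m = 2147483647L, q = 127773, r = 2836;
    long result = a * (seed % q) - r * (seed / q);
    if (result <= 0)
        result += m;
    return result;
}

/* Return the nth number generated from the given starting seed. */
long park_miller_nth(long seed, int n)
{
    int i;
    for (i = 0; i < n; i++)
        seed = park_miller_next(seed);
    return seed;
}
```

With a starting seed of 1, park_miller_nth(1, 10000) yields 1043618065, matching the test value stated in the comment above.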

Attribute definitions:

baseFrequency = "<number-optional-number>"

The base frequency (frequencies) parameter(s) for the noise function. If two <number>s are provided, the first number represents a base frequency in the X direction and the second value represents a base frequency in the Y direction. If one number is provided, then that value is used for both X and Y.

The lacuna value for baseFrequency is 0.

Negative values are unsupported.

Animatable: yes.

numOctaves = "<integer>"

The numOctaves parameter for the noise function.

The lacuna value for numOctaves is 1.

Negative values are unsupported.

Animatable: yes.

seed = "<number>"

The starting number for the pseudo random number generator.

The lacuna value for seed is 0.

When the seed number is handed over to the algorithm above it must first be truncated, i.e. rounded to the closest integer value towards zero.

Animatable: yes.

stitchTiles = "stitch | noStitch"

If stitchTiles="noStitch", no attempt is made to achieve smooth transitions at the border of tiles which contain a turbulence function. Sometimes the result will show clear discontinuities at the tile borders.
If stitchTiles="stitch", then the user agent will automatically adjust baseFrequency-x and baseFrequency-y values such that the feTurbulence node's width and height (i.e., the width and height of the current subregion) contains an integral number of the Perlin tile width and height for the first octave. The baseFrequency will be adjusted up or down depending on which way has the smallest relative (not absolute) change as follows: Given the frequency, calculate lowFreq=floor(width*frequency)/width and hiFreq=ceil(width*frequency)/width. If frequency/lowFreq < hiFreq/frequency then use lowFreq, else use hiFreq. While generating turbulence values, generate lattice vectors as normal for Perlin Noise, except for those lattice points that lie on the right or bottom edges of the active area (the size of the resulting tile). In those cases, copy the lattice vector from the opposite edge of the active area.

The lacuna value for stitchTiles is noStitch.

Animatable: yes.

type = "fractalNoise | turbulence"

Indicates whether the filter primitive should perform a noise or turbulence function.

The lacuna value for type is turbulence.

Animatable: yes.

29. Filter primitive feDropShadow

This filter creates a drop shadow of the input image. It is a shorthand filter, and is defined in terms of combinations of other filter primitives. The expectation is that it can be optimized more easily by implementations.

The result of a feDropShadow filter primitive is equivalent to the following:

  <feGaussianBlur in="alpha-channel-of-feDropShadow-in" stdDeviation="stdDeviation-of-feDropShadow"/> 
  <feOffset dx="dx-of-feDropShadow" dy="dy-of-feDropShadow" result="offsetblur"/> 
  <feFlood flood-color="flood-color-of-feDropShadow" flood-opacity="flood-opacity-of-feDropShadow"/> 
  <feComposite in2="offsetblur" operator="in"/> 
  <feMerge> 
    <feMergeNode/>
    <feMergeNode in="in-of-feDropShadow"/> 
  </feMerge>

The above divided into steps:

  1. Take the alpha channel of the input to the feDropShadow filter primitive and the feDropShadow/stdDeviation on the feDropShadow and do processing as if the following feGaussianBlur was applied:
     <feGaussianBlur in="alpha-channel-of-feDropShadow-in" stdDeviation="stdDeviation-of-feDropShadow"/>

  2. Offset the result of step 1 by feDropShadow/dx and feDropShadow/dy as specified on the feDropShadow element, equivalent to applying an feOffset with these parameters:
     <feOffset dx="dx-of-feDropShadow" dy="dy-of-feDropShadow" result="offsetblur"/>

  3. Do processing as if an feFlood element with flood-color and flood-opacity as specified on the feDropShadow was applied:
     <feFlood flood-color="flood-color-of-feDropShadow" flood-opacity="flood-opacity-of-feDropShadow"/>

  4. Composite the result of the feFlood in step 3 with the result of the feOffset in step 2 as if an feComposite filter primitive with operator=‘in’ was applied:
     <feComposite in2="offsetblur" operator="in"/>

  5. Finally merge the result of the previous step, doing processing as if the following feMerge was performed:
     <feMerge>
          <feMergeNode/>
          <feMergeNode in="in-of-feDropShadow"/>
      </feMerge>

Note that while the definition of the feDropShadow filter primitive says that it can be expanded into an equivalent tree, it is not required to be implemented that way. The expectation is that user agents can optimize the handling by not having to do all the steps separately.

Beyond the DOM interface SVGFEDropShadowElement there is no way of accessing the internals of the feDropShadow filter primitive, meaning if the filter primitive is implemented as an equivalent tree then that tree must not be exposed to the DOM.

Attribute definitions:

dx = "<number>"

The x offset of the drop shadow.

The lacuna value for feDropShadow/dx is 2.

This attribute is then forwarded to the feOffset/dx attribute of the internal feOffset element.

Animatable: yes.

dy = "<number>"

The y offset of the drop shadow.

The lacuna value for feDropShadow/dy is 2.

This attribute is then forwarded to the feOffset/dy attribute of the internal feOffset element.

Animatable: yes.

stdDeviation = "<number-optional-number>"

The standard deviation for the blur operation in the drop shadow.

The lacuna value for feDropShadow/stdDeviation is 2.

This attribute is then forwarded to the feGaussianBlur/stdDeviation attribute of the internal feGaussianBlur element.

Animatable: yes.

30. Filter primitive feDiffuseSpecular

The WG is looking at providing a shorthand for diffuse+specular.

31. Filter primitive feCustom

The Filter Effects specification does not define the feCustom element. This document proposes the following definition.

vertexShader: <uri>
The shader at <uri> provides the implementation for the feCustom vertex shader. If the shader cannot be retrieved, or if the shader cannot be loaded or compiled because it contains erroneous code, the shader is a pass through. Otherwise, the vertex shader is invoked for all the vertex mesh vertices.
fragmentShader: <uri>
The shader at <uri> provides the implementation for the feCustom fragment shader. If the shader cannot be retrieved, or if the shader cannot be loaded or compiled because it contains erroneous code, the shader is a pass through. Otherwise, the fragment shader is invoked for each of the pixels during the rasterization phase that follows the vertex shader processing.
vertexMesh: +<integer>{1,2}[wsp<box>][wsp'detached']
See the vertexMesh attribute discussion
params: [<param-def>[,<param-def>]*]
Parameters are passed as uniforms to both the vertex and the fragment shaders.
<param-def> <param-name>wsp<param-value>
<param-name> <ident>
<param-value> true|false[wsp+true|false]{0-3} |
<number>[wsp+<number>]{0-3} |
<array> |
<transform> |
<texture(<uri>)>
<array> array('<number>[wsp<number>]*')
<transform> <css-3d-transform> | <mat>
<css-3d-transform> <transform-function>[wsp<transform-function>]*
<mat> mat2('<number>(,<number>){3}') |
mat3('<number>(,<number>){8}') |
mat4('<number>(,<number>){15}')

There are two ways to specify a 4x4 matrix. They differ in how they are interpolated.

The <mat> values are in column major order. For example, mat2(1, 2, 3, 4) has [1, 2] in the first column and [3, 4] in the second one.
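Column-major order for the mat2(1, 2, 3, 4) example can be sketched as follows (the helper name is illustrative):

```c
/* mat2(1, 2, 3, 4) in column-major order: the first two numbers fill
   column 0, the next two fill column 1. */
void mat2_columns(const double values[4], double col0[2], double col1[2])
{
    col0[0] = values[0];  /* row 0, column 0 */
    col0[1] = values[1];  /* row 1, column 0 */
    col1[0] = values[2];  /* row 0, column 1 */
    col1[1] = values[3];  /* row 1, column 1 */
}
```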

There may be different ways to specify the <param-value> syntax. For example, it might be better to not have a texture() function and simply a <uri> for texture parameters. Or it might be better to not have a mat<n> prefixes for matrices.

The following document from Mozilla describes how WebGL vertex and fragment shaders can be defined in <script> elements.

CSS shaders can reference shaders defined in <script> elements, as shown in the following code snippet.

<script id="warp" type="x-shader/x-vertex" >
<!-- source code here -->
</script>

..
<style>
.shaded {
    filter: custom(url(#warp));
}

31.0.1. The ‘vertexMesh’ attribute

The feCustom element's ‘vertexMesh’ attribute defines the granularity of vertices in the shader mesh. By default, the vertex mesh is made of two triangles that encompass the filter region area.

+<integer>{1,2}[wsp<box>][wsp'detached']
One or two positive integers (zero is invalid) indicating the additional number of vertex lines and columns that will make the vertex mesh. With the initial value of ‘1 1’ there is a single line and a single column, resulting in a four-vertices mesh (top-left, top-right, bottom-right, bottom-left). If only one value is specified, the second (columns) value computes to the first value. In other words, a value of ‘n’ is equivalent to a value of ‘n n’.
A value of ‘n m’ results in a vertex mesh that has ‘n’ lines and ‘m’ columns and a total of (n + 1) × (m + 1) vertices, as illustrated in the following figure.

The optional <box> parameter defines the box on which the vertex mesh is stretched to. It defaults to the ‘filter-box’ value which is ‘border-box’ with the added filter margins. For elements that do not have padding or borders (e.g., SVG elements), the values ‘padding-box’ and ‘border-box’ are equivalent to ‘content-box’. For SVG elements, the ‘content-box’ is the object bounding box.

The optional ‘detached’ string specifies whether the mesh triangles are attached or detached. If the value is not specified, the triangles are attached. If ‘detached’ is specified, the triangles are detached. When triangles are attached, the geometry provided to the vertex shader is made of triangles which share the vertices of adjacent edges. In the ‘detached’ mode, the triangles do not share edges.

In the following figure, let us consider the top-left ‘tile’ in the shader mesh. In the detached mode, the vertex shader will receive the bottom-right and top-left vertices multiple times, once for each of the two triangles which make up the tile. Otherwise, the shader will receive these vertices only once, because they are shared by the ‘connected’ triangles.
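The vertex counts implied by the two modes can be sketched as follows. The attached count comes directly from the attribute definition above; the detached count is inferred from the description (each of the n·m tiles contributes two independent triangles of three vertices each):

```c
/* Number of vertices the vertex shader receives for a mesh with n
   lines and m columns.  Attached: vertices are shared, giving
   (n + 1) * (m + 1).  Detached: two independent triangles per tile,
   i.e. 6 vertices for each of the n * m tiles. */
int mesh_vertex_count(int n, int m, int detached)
{
    if (detached)
        return 6 * n * m;
    return (n + 1) * (m + 1);
}
```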

See the discussion on uniforms passed to shaders to understand how the shader programs can leverage that feature.

vertexMesh: 6 5

The above figure illustrates how a ‘vertexMesh’ value of ‘5 4’ adds vertices passed to the vertex shader. The red vertices are the default ones and the gray vertices are resulting from the ‘vertexMesh’ value.

The following example applies a vertex shader (‘distort.vs’) to elements with class ‘distorted’. The vertex shader will operate on a mesh that has 5 lines and 4 columns (because there are 4 additional lines and 3 additional columns).

    <style>
    .distorted {
        filter: custom(url(distort.vs), 4 3);
    }
    </style>

    ...
    <div class="distorted">
    ..
    </div>
which could also be written as:
<style>
.distorted {
    filter: url(#distort);
}
</style>

...

<filter id="distort">
    <feCustom vertexShader="url(distort.vs)" vertexMesh="4 3" />
</filter>

<div class="distorted">
..
</div>

31.1. Shader inputs in filter graph

When an feCustom filter primitive is used in a filter graph, a ‘texture’ parameter can take a value of ‘result(<name>)’ where ‘name’ is the output of another filter primitive.

<filter>
    <feGaussianBlur stdDeviation="8" result="blur" />
    <feTurbulence type="fractalNoise" baseFrequency="0.4" numOctaves="4" result="turbulence"/>
    <feCustom fragmentShader="url(complex.fs)" params="tex1 result(blur), tex2 result(turbulence)" />
</filter>

32. The filter CSS <image> value

The filter() function produces a CSS <image> value. It has the following syntax:

32.0.1. filter() syntax

<filter> = filter(
  <image>, 
  none | <filter-function> [ <filter-function> ]*
)

The function takes two parameters. The first is a CSS <image> value. The second is the value of a ‘filter’ property. The function takes the input image parameter, applies the filter rules and returns the processed image.

33. RelaxNG Schema for Filter Effects 1.0

The schema for Filter Effects 1.0 is written in RelaxNG [RelaxNG], a namespace-aware schema language that uses the datatypes from XML Schema Part 2 [Schema2]. This allows namespaces and modularity to be much more naturally expressed than using DTD syntax. The RelaxNG schema for Filter Effects 1.0 may be imported by other RelaxNG schemas, or combined with other schemas in other languages into a multi-namespace, multi-grammar schema using Namespace-based Validation Dispatching Language [NVDL].

Unlike a DTD, the schema used for validation is not hardcoded into the document instance. There is no equivalent to the DOCTYPE declaration. Simply point your editor or other validation tool to the IRI of the schema (or your local cached copy, as you prefer).

The RNG is under construction, and only the individual RNG snippets are available at this time. They have not yet been integrated into a functional schema. The individual RNG files are available here.

34. Shorthands defined in terms of the filter element

Below are the equivalents for each of the filter functions expressed in terms of the ‘filter element’ element. The parameters from the function are labelled with brackets in the following style: [amount]. In the case of parameters that are percentage values, they are converted to real numbers.

34.1. grayscale

 <filter id="grayscale">
    <feColorMatrix type="matrix"
               values="(0.2126 + 0.7874 * [1 - amount]) (0.7152 - 0.7152 * [1 - amount]) (0.0722 - 0.0722 * [1 - amount]) 0 0
                       (0.2126 - 0.2126 * [1 - amount]) (0.7152 + 0.2848 * [1 - amount]) (0.0722 - 0.0722 * [1 - amount]) 0 0
                       (0.2126 - 0.2126 * [1 - amount]) (0.7152 - 0.7152 * [1 - amount]) (0.0722 + 0.9278 * [1 - amount]) 0 0
                       0 0 0 1 0"/>
  </filter> 
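A quick way to sanity-check the matrix above: with [amount] = 1 every row reduces to the luminance coefficients (0.2126, 0.7152, 0.0722), and with [amount] = 0 it reduces to the identity. A sketch of the 3×3 color part (the helper name is illustrative):

```c
/* Build the 3x3 color part of the grayscale() matrix for a given
   amount, m[row][col] with rows and columns in R, G, B order. */
void grayscale_matrix(double amount, double m[3][3])
{
    double t = 1.0 - amount;  /* the [1 - amount] term in the markup */
    m[0][0] = 0.2126 + 0.7874 * t; m[0][1] = 0.7152 - 0.7152 * t; m[0][2] = 0.0722 - 0.0722 * t;
    m[1][0] = 0.2126 - 0.2126 * t; m[1][1] = 0.7152 + 0.2848 * t; m[1][2] = 0.0722 - 0.0722 * t;
    m[2][0] = 0.2126 - 0.2126 * t; m[2][1] = 0.7152 - 0.7152 * t; m[2][2] = 0.0722 + 0.9278 * t;
}
```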

34.2. sepia

 <filter id="sepia">
    <feColorMatrix type="matrix"
               values="(0.393 + 0.607 * [1 - amount]) (0.769 - 0.769 * [1 - amount]) (0.189 - 0.189 * [1 - amount]) 0 0
                       (0.349 - 0.349 * [1 - amount]) (0.686 + 0.314 * [1 - amount]) (0.168 - 0.168 * [1 - amount]) 0 0
                       (0.272 - 0.272 * [1 - amount]) (0.534 - 0.534 * [1 - amount]) (0.131 + 0.869 * [1 - amount]) 0 0
                       0 0 0 1 0"/>
  </filter> 

34.3. saturate

 <filter id="saturate">
    <feColorMatrix type="saturate"
               values="(1 - [amount])"/>
  </filter> 

34.4. hue-rotate

 <filter id="hue-rotate">
    <feColorMatrix type="hueRotate"
               values="[angle]"/>
  </filter> 

34.5. invert

 <filter id="invert">
    <feComponentTransfer>
        <feFuncR type="table" tableValues="[amount] (1 - [amount])"/>
        <feFuncG type="table" tableValues="[amount] (1 - [amount])"/>
        <feFuncB type="table" tableValues="[amount] (1 - [amount])"/>
    </feComponentTransfer>
  </filter> 

34.6. opacity

 <filter id="opacity">
    <feComponentTransfer>
        <feFuncA type="table" tableValues="0 [amount]"/>
    </feComponentTransfer>
  </filter> 

34.7. brightness

 <filter id="brightness">
    <feComponentTransfer>
        <feFuncR type="linear" slope="[amount]"/>
        <feFuncG type="linear" slope="[amount]"/>
        <feFuncB type="linear" slope="[amount]"/>
    </feComponentTransfer>
  </filter> 

34.8. contrast

 <filter id="contrast">
    <feComponentTransfer>
        <feFuncR type="linear" slope="[amount]" intercept="-(0.5 * [amount]) + 0.5"/>
        <feFuncG type="linear" slope="[amount]" intercept="-(0.5 * [amount]) + 0.5"/>
        <feFuncB type="linear" slope="[amount]" intercept="-(0.5 * [amount]) + 0.5"/>
    </feComponentTransfer>
  </filter> 
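The contrast intercept is chosen so that mid-gray is a fixed point: for c' = slope · c + intercept to give c' = 0.5 at c = 0.5 for any slope, the intercept must be -(0.5 · slope) + 0.5. A sketch of that per-channel transfer (the helper name is illustrative; clamping to [0, 1] is omitted):

```c
/* Linear contrast transfer for one channel: the intercept keeps
   mid-gray (0.5) unchanged for any slope. */
double contrast_channel(double c, double slope)
{
    double intercept = -(0.5 * slope) + 0.5;
    return slope * c + intercept;
}
```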

34.9. blur

 <filter id="blur">
    <feGaussianBlur stdDeviation="[radius radius]"/>
  </filter> 

34.10. drop-shadow

 <filter id="drop-shadow">
    <feGaussianBlur in="[alpha-channel-of-input]" stdDeviation="[radius]"/>
    <feOffset dx="[offset-x]" dy="[offset-y]" result="offsetblur"/>
    <feFlood flood-color="[color]"/>
    <feComposite in2="offsetblur" operator="in"/>
    <feMerge>
      <feMergeNode/>
      <feMergeNode in="input-image"/>
    </feMerge>
  </filter> 

34.11. custom

The custom() function has the following syntax:

custom(<vertex-shader>[wsp<fragment-shader>][,<vertex-mesh>][,<params>])
<vertex-shader> <uri> | none
<fragment-shader> <uri> | none
<vertex-mesh> +<integer>{1,2}[wsp<box>][wsp'detached']
where: <box> = filter-box | border-box | padding-box | content-box
<params> See the feCustom element's ‘params’ attribute.

The custom() function is a shorthand for the following filter effect:

  <filter>
      <feCustom vertexShader="vertex-shader" 
                fragmentShader="fragment-shader" 
                vertexMesh="vertex-mesh"
                params="params"/>
  </filter>
  

It can be used in combination with other filter shorthands, for example:

filter: sepia(0.5) custom(none url(add.fs), amount 0.2 0.2 0.2);

It might be clearer to name the custom() function the shader() function instead and introduce an feCustomShader filter primitive instead of feCustom.

35. Shading language

35.1. Precedents

There are many precedents for shading languages, for example:

35.2. Recommended shading language

This document recommends the adoption of the subset of GLSL ES defined in the WebGL 1.0 specification.

In particular, the same restrictions as defined in WebGL should apply to CSS shaders:

All the parameters specified in the <shader-params> values (e.g., the feCustom element's ‘params’ attribute, the custom(<uri>, <shader-params>) filter function or the shader property value) will be available as uniforms to the shader(s) referenced by the ‘shader’ property.

The group may consider applying further restrictions to the GLSL ES language to make it easier to write vertex and fragment shaders.

The OpenGL ES shading language provides a number of variables that can be passed to shaders, exchanged between shaders or set by shaders. In particular, a vertex shader can provide specific data to the fragment shader in the form of ‘varying’ parameters (parameters that vary per pixel). The following sections describe particular variables that are assumed for the vertex and fragment shaders in CSS shaders.

Even though this document recommends the GLSL ES shading language, there are other possible options to consider, for example:
  • Allow multiple shading languages, present or future (similar to how the <script> tag allows different scripting languages).
  • Define a shading language specific to custom filter effects.
The implementation could use the mime type of the url or <script> element to determine the shading language.

35.2.1. Vertex attribute variables

The following attribute variables are available to the vertex shader.
attribute vec4 a_position; The vertex coordinates in the filter region box. Coordinates are normalized to the [-0.5, 0.5] range along the x, y and z axes.
attribute vec2 a_texCoord; The vertex's texture coordinate. Coordinates are in the [0, 1] range on both axes.
attribute vec2 a_meshCoord; The vertex's coordinate in the mesh box. Coordinates are in the [0, 1] range on both axes.
attribute vec3 a_triangleCoord;

The x and y values provide the coordinate of the current ‘tile’ in the shader mesh. For example, (0, 0) for the top left tile in the mesh. The x and y values are in the [0, mesh columns] and [0, mesh rows] range, respectively.

The z coordinate is computed according to the following figure. The z coordinate value is provided for each vertex and corresponding triangle. For example, for the bottom right vertex of the top triangle, the z coordinate will be 2. For the bottom right vertex of the bottom triangle, the z coordinate will be 4.

The a_triangleCoord.z value

35.2.2. Shader uniform variables

The following uniform variables are set to specific values by the user agent:
uniform mat4 u_projectionMatrix The current projection matrix to the destination texture's coordinate space. Note that the ‘model matrix’, which the ‘transform’ property sets, is not passed to the shaders. It is applied to the filtered element's rendering.
uniform sampler2D u_texture The input texture. Includes transparent margins for the filter margins.
uniform sampler2D u_contentTexture A texture with the rendering of the filtered element. If the filter is the first in the filter chain, then, this texture is the same as the u_texture uniform. However, if there are preceding filters, this provides the rendering of the original filtered element, whereas u_texture provides the output of the preceding filter in the filter chain (or graph).
uniform vec2 u_textureSize The input texture's size. Includes the filter margins.
uniform vec4 u_meshBox The mesh box position and size in the filter box coordinate system. For example, if the mesh box is the filter box, the value will be (-0.5, -0.5, 1, 1).
uniform vec2 u_tileSize The size of the current mesh tile, in the same coordinate space as the vertices.
uniform vec2 u_meshSize The size of the current mesh in terms of tiles. The x coordinate provides the number of columns and the y coordinate provides the number of rows.

35.2.3. Varyings

When the author provides both a vertex and a fragment shader, there is no requirement on the varyings passed from the vertex shader to the fragment shader. If no vertex shader is provided, the fragment shader can expect the v_texCoord varying. If no fragment shader is provided, the vertex shader must compute a v_texCoord varying for the default shaders.

varying vec2 v_texCoord; The current pixel's texture coordinates (in u_texture).

35.2.4. Other uniform variables: the CSS shaders parameters

When parameters are passed to the custom() filter function or the feCustom filter primitive, the user agent passes uniforms of the corresponding name and type to the shaders.

The following table shows the mapping between CSS shader parameters and uniform types.

CSS param type                                    GLSL uniform type
true|false[wsp+true|false]{0-3}                   bool, bvec2, bvec3 or bvec4
<number>[wsp+<number>]{0-3}                       float, vec2, vec3 or vec4
<array>                                           float[n]
<css-3d-transform>                                mat4
mat2('<number>(,<number>){3}') |
mat3('<number>(,<number>){8}') |
mat4('<number>(,<number>){15}')                   mat2, mat3 or mat4
texture(<uri>)                                    sampler2D

The following code sample illustrates that mechanism.

  CSS

  .shaded {
      filter: custom(
                     url(distort.vs) url(tint.fs), 
                     distortAmount 0.5, lightVector 1.0 1.0 0.0, 
                     disp texture(disp.png)
                  );
  }

  Shader (vertex or fragment)
  ...

  uniform float distortAmount;
  uniform vec3 lightVector;
  uniform sampler2D disp;
  uniform vec2 dispSize;
  ...

As illustrated in the example, for each texture() parameter named <textureName>, an additional vec2 uniform named <textureName>Size (here, ‘dispSize’) is passed to the shaders with the size of the corresponding texture.

35.2.5. Default shaders

If no vertex shader is provided, the default one is as shown below.

  attribute vec4 a_position;
  attribute vec2 a_texCoord;

  uniform mat4 u_projectionMatrix;

  varying vec2 v_texCoord;

  void main()
  {        
      v_texCoord = a_texCoord;
      gl_Position = u_projectionMatrix * a_position;
  }


If no fragment shader is provided, the default one is shown below.

  varying vec2 v_texCoord;
  uniform sampler2D u_texture;

  void main()
  {  
      gl_FragColor = texture2D(u_texture, v_texCoord);  
  }        

35.2.6. Texture access

If shaders access texture values outside the [0, 1] range on either axis, the returned value is a fully transparent black pixel.
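For example, a fragment shader that samples one texel to the left will, for pixels in the leftmost column, read outside the [0, 1] range and receive transparent black. The sketch below assumes a u_textureSize uniform holding the dimensions of u_texture in texels (an assumption for this sketch).

```glsl
varying vec2 v_texCoord;

uniform sampler2D u_texture;
uniform vec2 u_textureSize; // assumed: dimensions of u_texture in texels

void main()
{
    // One texel to the left; for the leftmost column this coordinate
    // is negative, so the lookup yields vec4(0.0, 0.0, 0.0, 0.0).
    vec2 leftTexel = v_texCoord - vec2(1.0 / u_textureSize.x, 0.0);
    gl_FragColor = texture2D(u_texture, leftTexel);
}
```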

36. Integration with CSS Animations and CSS Transitions

The CSS ‘filter’ property is animatable. Interpolation happens between the filter functions only if the ‘filter’ values have the same number of filter functions and the same functions appear in the same order.

36.1. Interpolating filter functions parameters

This section has to be written.

36.2. Interpolating the shader-params component in the custom() function.

To interpolate between params values in a custom() filter function or between <feCustom> params attribute values, the user agent should interpolate between each of the [param-def] values according to its type. Lists of values need to be of the same length, matrices of the same dimension, and arrays of the same size.

Interpolation between shader params only happens if all the other shader properties are identical: vertex shader, fragment shader, filter margins and vertex mesh.

<number>[wsp+<number>]{0-3}          Interpolate between each of the values.
true|false[wsp+true|false]{0-3}      Interpolate between each of the values using a step function.
<array>                              Interpolate between the array elements.
<css-3d-transform>                   Follows the CSS 3D transform interpolation rules.
<mat>                                Interpolate between the matrix components (applies to mat2, mat3 and mat4).
As with the ‘transform’ property, it is not possible to animate the different components of the ‘shader-params’ property on different timelines or with different keyframes. This is a generic issue of animating properties that have multiple components to them.

37. DOM interfaces

The interfaces below will be made available in an IDL file for an upcoming draft.

37.1. Interface ImageData

37.2. Interface SVGFilterElement

37.3. Interface SVGFilterPrimitiveStandardAttributes

37.4. Interface SVGFEBlendElement

37.5. Interface SVGFEColorMatrixElement

37.6. Interface SVGFEComponentTransferElement

37.7. Interface SVGComponentTransferFunctionElement

37.8. Interface SVGFEFuncRElement

37.9. Interface SVGFEFuncGElement

37.10. Interface SVGFEFuncBElement

37.11. Interface SVGFEFuncAElement

37.12. Interface SVGFECompositeElement

37.13. Interface SVGFEConvolveMatrixElement

37.14. Interface SVGFEDiffuseLightingElement

37.15. Interface SVGFEDistantLightElement

37.16. Interface SVGFEPointLightElement

37.17. Interface SVGFESpotLightElement

37.18. Interface SVGFEDisplacementMapElement

37.19. Interface SVGFEFloodElement

37.20. Interface SVGFEGaussianBlurElement

37.21. Interface SVGFEImageElement

37.22. Interface SVGFEMergeElement

37.23. Interface SVGFEMergeNodeElement

37.24. Interface SVGFEMorphologyElement

37.25. Interface SVGFEOffsetElement

37.26. Interface SVGFESpecularLightingElement

37.27. Interface SVGFETileElement

37.28. Interface SVGFETurbulenceElement

37.29. Interface SVGFEDropShadowElement

38. References

38.1. Normative References

[CSS21]
Cascading Style Sheets Level 2 Revision 1 (CSS 2.1) Specification, Bert Bos, Tantek Çelik, Ian Hickson, Håkon Wium Lie, eds., W3C, 23 April 2009, (Candidate Recommendation)
[NVDL]
Document Schema Definition Languages (DSDL) — Part 4: Namespace-based Validation Dispatching Language — NVDL. ISO/IEC FCD 19757-4, See http://www.asahi-net.or.jp/~eb2m-mrt/dsdl/
[PORTERDUFF]
Compositing Digital Images, T. Porter, T. Duff, SIGGRAPH ‘84 Conference Proceedings, Association for Computing Machinery, Volume 18, Number 3, July 1984.
[SVG-COMPOSITING]
SVG Compositing Specification, A. Grasso, ed. World Wide Web Consortium, 30 April 2009.
This edition of SVG Compositing is http://www.w3.org/TR/2009/WD-SVGCompositing-20090430/.
The latest edition of SVG Compositing is available at http://www.w3.org/TR/SVGCompositing/.
[RelaxNG]
Document Schema Definition Languages (DSDL) — Part 2: Regular grammar-based validation — RELAX NG. ISO/IEC FDIS 19757-2:2002(E), J. Clark, 村田 真 (Murata M.), eds., 12 December 2002. See http://www.y12.doe.gov/sgml/sc34/document/0362_files/relaxng-is.pdf
[Schema2]
XML Schema Part 2: Datatypes Second Edition, P. Biron, A. Malhotra, eds. W3C, 28 October 2004 (Recommendation). Latest version available at http://www.w3.org/TR/xmlschema-2/. See also Processing XML 1.1 documents with XML Schema 1.0 processors.
[SVG11]
Scalable Vector Graphics (SVG) 1.1 Specification, Dean Jackson editor, W3C, 14 January 2003 (Recommendation). See http://www.w3.org/TR/2003/REC-SVG11-20030114/
[SVGT12]
Scalable Vector Graphics (SVG) Tiny 1.2 Specification, Dean Jackson editor, W3C, 22 December 2008 (Recommendation). See http://www.w3.org/TR/2008/REC-SVGTiny12-20081222/

38.2. Informative References

[HTML5]
HTML5, Ian Hickson editor, Google, 10 June 2008 (Working Draft). See http://www.w3.org/TR/2008/WD-html5-20080610/