Attribution Reporting

Draft Community Group Report

This version:
https://wicg.github.io/attribution-reporting-api
Issue Tracking:
GitHub
Inline In Spec
Editors:
(Google Inc.)
(Google Inc.)
(Google Inc.)

Abstract

An API to report that an event may have been caused by another cross-site event. These reports are designed to transfer little enough data between sites that the sites can’t use them to track individual users.

Status of this document

This specification was published by the Web Platform Incubator Community Group. It is not a W3C Standard nor is it on the W3C Standards Track. Please note that under the W3C Community Contributor License Agreement (CLA) there is a limited opt-out and other conditions apply. Learn more about W3C Community and Business Groups.

1. Introduction

This section is non-normative.

This specification describes how web browsers can provide a mechanism to the web that supports measuring and attributing conversions (e.g. purchases) to ads a user interacted with on another site. This mechanism should remove one need for cross-site identifiers like third-party cookies.

1.1. Overview

Pages/embedded sites are given the ability to register attribution sources and attribution triggers, which can be linked by the User Agent to generate and send attribution reports containing information from both of those events.

A reporter https://reporter.example embedded on https://source.example is able to measure whether an interaction on the page led to an action on https://destination.example by registering an attribution source with attribution destinations of « https://destination.example ». Reporters are able to register sources through a variety of surfaces, but ultimately the reporter is required to provide the User Agent with an HTTP response header which allows the source to be eligible for attribution.

At a later point in time, the reporter, now embedded on https://destination.example, may register an attribution trigger. Reporters can register triggers by sending an HTTP response header containing information about the action/event that occurred. Internally, the User Agent attempts to match the trigger to previously registered source events based on where the sources/triggers were registered and configurations provided by the reporter.

If the User Agent is able to attribute the trigger to a source, it will generate and send an attribution report to the reporter via an HTTP POST request at a later point in time.

2. HTML monkeypatches

2.1. API for elements

interface mixin HTMLAttributionSrcElementUtils {
    [CEReactions, SecureContext] attribute USVString attributionSrc;
};

HTMLAnchorElement includes HTMLAttributionSrcElementUtils;
HTMLImageElement includes HTMLAttributionSrcElementUtils;
HTMLScriptElement includes HTMLAttributionSrcElementUtils;

Add the following content attributes:

a

attributionsrc - A string containing zero or more URLs to which a background attributionsrc request will be made when the a element is navigated.

img

attributionsrc - A string containing zero or more URLs to which a background attributionsrc request will be made when set.

script

attributionsrc - A string containing zero or more URLs to which a background attributionsrc request will be made when set.

The IDL attribute attributionSrc must reflect the respective content attribute of the same name.

Whenever an img or a script element is created, or its attributionsrc attribute is set or changed, run make background attributionsrc requests with the element and "event-source-or-trigger".

More precisely specify which mutations are relevant for the attributionsrc attribute.

Modify update the image data as follows:

After the step

Set request’s priority to the current state...

add the step

  1. If the element has an attributionsrc attribute, set request’s Attribution Reporting Eligibility to "event-source-or-trigger".

A script fetch options has an associated Attribution Reporting eligibility (an eligibility). Unless otherwise stated it is "unset".

Modify prepare the script element as follows:

After the step

Let fetch priority be the current state of el’s fetchpriority content attribute.

add the step

  1. Let Attribution Reporting eligibility be "event-source-or-trigger" if el has an attributionsrc content attribute and "unset" otherwise.

Add "and Attribution Reporting eligibility is Attribution Reporting eligibility." to the step

Let options be a script fetch options whose...

Modify set up the classic script request and set up the module script request as follows:

Add "and its Attribution Reporting eligibility is options’s Attribution Reporting eligibility."

Modify follow the hyperlink as follows:

After the step

If subject’s link types includes...

add the steps

  1. Let navigationSourceEligible be false.

  2. If subject has an attributionsrc attribute:

    1. Set navigationSourceEligible to true.

    2. Make background attributionsrc requests with subject and "navigation-source".

Add "and navigationSourceEligible set to navigationSourceEligible" to the step

Navigate targetNavigable...

2.2. Window open steps

Modify the tokenize the features argument as follows:

Replace the step

Collect a sequence of code points that are not feature separators from features given position. Set value to the collected code points, converted to ASCII lowercase.

with

Collect a sequence of code points that are not feature separators from features given position. Set value to the collected code points, converted to ASCII lowercase. Set originalCaseValue to the collected code points.

Replace the step

If name is not the empty string, then set tokenizedFeatures[name] to value.

with the steps

  1. If name is not the empty string:

    1. Switch on name:

      "attributionsrc"

      Run the following steps:

      1. If tokenizedFeatures[name] does not exist, set tokenizedFeatures[name] to a new list.

      2. Append originalCaseValue to tokenizedFeatures[name].

      Anything else

      Set tokenizedFeatures[name] to value.

Modify the window open steps as follows:

After the step

Let tokenizedFeatures be the result of tokenizing features.

add the steps

  1. Let navigationSourceEligible be false.

  2. If tokenizedFeatures["attributionsrc"] exists:

    1. Assert: tokenizedFeatures["attributionsrc"] is a list.

    2. Set navigationSourceEligible to true.

    3. Set attributionSrcUrls to a new list.

    4. For each value of tokenizedFeatures["attributionsrc"]:

      1. If value is the empty string, continue.

      2. Let decodedSrcBytes be the result of percent-decoding value.

      3. Let decodedSrc be the UTF-8 decode without BOM of decodedSrcBytes.

      4. Parse decodedSrc relative to the entry settings object, and set urlRecord to the resulting URL record, if any. If parsing failed, continue.

      5. Append urlRecord to attributionSrcUrls.

Use attributionSrcUrls with make a background attributionsrc request.

In each step that calls navigate, set navigationSourceEligible to navigationSourceEligible.
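The attributionsrc feature-value handling in the steps above (percent-decode each value, then parse it as a URL) can be sketched as follows. This is non-normative; the function name `parseAttributionSrcFeature` and the explicit `baseUrl` parameter (standing in for the entry settings object's base URL) are illustrative, and `decodeURIComponent` is used as an approximation of percent-decoding even though, unlike the spec's percent-decode, it throws on malformed sequences.

```typescript
// Non-normative sketch of steps 4.1-4.5 above.
function parseAttributionSrcFeature(values: string[], baseUrl: string): URL[] {
  const urls: URL[] = [];
  for (const value of values) {
    if (value === "") continue; // step 4.1: skip empty values
    let decoded: string;
    try {
      // Steps 4.2-4.3: percent-decode and UTF-8 decode. Note that
      // decodeURIComponent throws on malformed sequences, whereas the
      // spec's percent-decoding never fails.
      decoded = decodeURIComponent(value);
    } catch {
      continue;
    }
    try {
      // Step 4.4: parse relative to a base URL; on failure, continue.
      urls.push(new URL(decoded, baseUrl));
    } catch {
      continue;
    }
  }
  return urls;
}
```

For example, a features string of `attributionsrc=https%3A%2F%2Freporter.example%2Fregister` yields a single URL record for https://reporter.example/register.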

Add the following item to navigation params:

navigationSourceEligible

A boolean indicating whether the navigation can register a navigation source in its response. Defaults to false.

Modify navigate as follows:

Add an optional boolean parameter called navigationSourceEligible, defaulting to false.

In the step

Set navigationParams to a new navigation params with...

add the property

navigationSourceEligible

navigationSourceEligible

Use/propagate navigationSourceEligible to the navigation request's Attribution Reporting eligibility.

3. Network monkeypatches

dictionary AttributionReportingRequestOptions {
  required boolean eventSourceEligible;
  required boolean triggerEligible;
};

partial dictionary RequestInit {
  AttributionReportingRequestOptions attributionReporting;
};

partial interface XMLHttpRequest {
  [SecureContext]
  undefined setAttributionReporting(AttributionReportingRequestOptions options);
};

A request has an associated Attribution Reporting eligibility (an eligibility). Unless otherwise stated it is "unset".

A request has an associated trigger verification metadata which is null or a trigger verification metadata.

To get an eligibility from AttributionReportingRequestOptions given an optional AttributionReportingRequestOptions options:

  1. If options is null, return "unset".

  2. Let eventSourceEligible be options’s eventSourceEligible.

  3. Let triggerEligible be options’s triggerEligible.

  4. If (eventSourceEligible, triggerEligible) is:

    (false, false)

    Return "empty".

    (false, true)

    Return "trigger".

    (true, false)

    Return "event-source".

    (true, true)

    Return "event-source-or-trigger".

Check permissions policy.
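The mapping from AttributionReportingRequestOptions to an eligibility can be sketched as follows (non-normative; the function name `getEligibility` is illustrative, while the option names match the IDL above):

```typescript
// Non-normative sketch of "get an eligibility from
// AttributionReportingRequestOptions".
type Eligibility =
  | "unset"
  | "empty"
  | "trigger"
  | "event-source"
  | "event-source-or-trigger";

interface AttributionReportingRequestOptions {
  eventSourceEligible: boolean;
  triggerEligible: boolean;
}

function getEligibility(
  options: AttributionReportingRequestOptions | null,
): Eligibility {
  if (options === null) return "unset";
  const { eventSourceEligible, triggerEligible } = options;
  if (!eventSourceEligible && !triggerEligible) return "empty";
  if (!eventSourceEligible && triggerEligible) return "trigger";
  if (eventSourceEligible && !triggerEligible) return "event-source";
  return "event-source-or-trigger";
}
```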

"Attribution-Reporting-Eligible" is a Dictionary Structured Header set on a request that indicates which registrations, if any, are allowed on the corresponding response. Its values are not specified and its allowed keys are:

"event-source"

An event source may be registered.

"navigation-source"

A navigation source may be registered.

"trigger"

A trigger may be registered.

To obtain a dictionary structured header value given a list of strings keys and a set of strings allowedKeys:

  1. For each key of allowedKeys, optionally append the concatenation of « "not-", key » to keys.

  2. Optionally, shuffle keys.

  3. Let entries be a new list.

  4. For each key of keys:

    1. Let value be true.

    2. Optionally, set value to a token corresponding to one of the strings in allowedKeys.

    3. Let params be an empty map.

    4. For each allowedKey of allowedKeys, optionally set params[allowedKey] to an arbitrary bare item.

    5. Append a structured dictionary member with the key key, the value value, and the parameters params to entries.

  5. Return a dictionary containing entries.

Note: The user agent MAY "grease" the dictionary structured headers according to the preceding algorithm to help ensure that recipients use a proper structured header parser, rather than naive string equality or contains operations, which makes it easier to introduce backwards-compatible changes to the header definition in the future. Including the allowed keys as dictionary values or parameters helps ensure that only the dictionary’s keys are interpreted by the recipient. Likewise, shuffling the dictionary members helps ensure that, e.g., "key1, key2" is treated equivalently to "key2, key1".

In the following example, only the "trigger" key should be interpreted by the recipient after the header has been parsed as a structured dictionary:

EXAMPLE: Greased Attribution-Reporting-Eligible header
Attribution-Reporting-Eligible: not-event-source, trigger=event-source;navigation-source=3
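A recipient that parses the header as a structured dictionary and then keeps only the allowed keys is robust against greasing. The sketch below is non-normative and uses a deliberately simplified, toy dictionary parser (function name `eligibleKeys` is illustrative); a production implementation should use a full RFC 8941 structured-field parser rather than this string splitting:

```typescript
// Non-normative sketch: interpret a (possibly greased)
// Attribution-Reporting-Eligible header by keeping only allowed keys.
const ALLOWED_KEYS = new Set(["event-source", "navigation-source", "trigger"]);

function eligibleKeys(headerValue: string): Set<string> {
  const result = new Set<string>();
  for (const member of headerValue.split(",")) {
    // A dictionary member is `key` or `key=value`, with optional
    // `;param` parts; the key is everything before the first `=` or `;`.
    const key = member.split(";")[0].split("=")[0].trim();
    if (ALLOWED_KEYS.has(key)) result.add(key);
  }
  return result;
}
```

Applied to the example header above, only "trigger" survives: "not-event-source" is not an allowed key, and "event-source"/"navigation-source" appear only as a value and a parameter, which a naive substring check would wrongly match.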

To set Attribution Reporting headers given a request request and an origin contextOrigin:

  1. Let headers be request’s header list.

  2. Let eligibility be request’s Attribution Reporting eligibility.

  3. Delete "Attribution-Reporting-Eligible" from headers.

  4. Delete "Attribution-Reporting-Support" from headers.

  5. If eligibility is "unset", return.

  6. Let keys be an empty list.

  7. If eligibility is:

    "empty"

    Do nothing.

    "event-source"

    Append "event-source" to keys.

    "navigation-source"

    Append "navigation-source" to keys.

    "trigger"

    Run the following steps:

    1. Append "trigger" to keys.

    2. Set trigger verification request headers with request and contextOrigin.

    "event-source-or-trigger"

    Append "event-source" and "trigger" to keys.

  8. Let dict be the result of obtaining a dictionary structured header value with keys and the set containing all the eligible keys.

  9. Set a structured field value given ("Attribution-Reporting-Eligible", dict) in headers.

  10. Set an OS-support header in headers.
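The way a request's eligibility selects the keys carried in the "Attribution-Reporting-Eligible" header (steps 5-7 above) can be sketched as follows. This is non-normative; the function name `eligibleHeaderKeys` is illustrative, and the trigger-verification and OS-support sub-steps are omitted:

```typescript
// Non-normative sketch: map an eligibility to the header's keys.
// `null` means no Attribution-Reporting-Eligible header is set at all;
// an empty array means the header is set but carries no keys.
function eligibleHeaderKeys(eligibility: string): string[] | null {
  switch (eligibility) {
    case "unset":
      return null;
    case "empty":
      return [];
    case "event-source":
      return ["event-source"];
    case "navigation-source":
      return ["navigation-source"];
    case "trigger":
      return ["trigger"];
    case "event-source-or-trigger":
      return ["event-source", "trigger"];
    default:
      throw new Error(`unknown eligibility: ${eligibility}`);
  }
}
```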

3.1. Fetch monkeypatches

Modify fetch as follows:

Add a Document parameter called document.

After the step

If request’s header list does not contain Accept...

add the step

  1. Set Attribution Reporting headers with request and document’s context origin.

Modify Request(input, init) as follows:

In the step

Set request to a new request with the following properties:

add the property

Attribution Reporting eligibility

request’s Attribution Reporting eligibility.

After the step

If init["priority"] exists, then:

add the step

  1. If init["attributionReporting"] exists, then set request’s Attribution Reporting eligibility to the result of get an eligibility from AttributionReportingRequestOptions with it.

3.2. XMLHttpRequest monkeypatches

An XMLHttpRequest object has an associated Attribution Reporting eligibility (an eligibility). Unless otherwise stated it is "unset".

The setAttributionReporting(options) method must run these steps:

  1. If this’s state is not opened, then throw an "InvalidStateError" DOMException.

  2. If this’s send() flag is set, then throw an "InvalidStateError" DOMException.

  3. Set this’s Attribution Reporting eligibility to the result of get an eligibility from AttributionReportingRequestOptions with options.

Modify send(body) as follows:

Add a Document parameter called document.

After the step:

Let req be a new request, initialized as follows...

Add the step:

  1. Set req’s Attribution Reporting eligibility to this’s Attribution Reporting eligibility.

  2. Set Attribution Reporting headers with req and document’s context origin.

4. Permissions Policy integration

This specification defines a policy-controlled feature identified by the string "attribution-reporting". Its default allowlist is *.

5. Clear Site Data integration

In clear DOM-accessible storage for origin, add the following step:

  1. Run clear site data with origin.

To clear site data given an origin origin:

  1. For each attribution source source of the attribution source cache:

    1. If source’s reporting origin and origin are same origin, remove source from the attribution source cache.

  2. For each event-level report report of the event-level report cache:

    1. If report’s reporting origin and origin are same origin, remove report from the event-level report cache.

  3. For each aggregatable report report of the aggregatable report cache:

    1. If report’s reporting origin and origin are same origin, remove report from the aggregatable report cache.

Note: We deliberately do not remove matching entries from the attribution rate-limit cache, as doing so would allow a site to reset and therefore exceed the intended rate limits at will.
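The clearing algorithm is the same same-origin filter applied to each of the three caches (but not, per the note above, to the rate-limit cache). A non-normative sketch, modeling origins as serialized strings so that "same origin" becomes string equality, and with the hypothetical helper name `clearSiteData`:

```typescript
// Non-normative sketch of "clear site data": drop every cache entry whose
// reporting origin is same origin with the cleared origin. The same filter
// is applied to the attribution source cache, the event-level report cache,
// and the aggregatable report cache.
interface HasReportingOrigin {
  reportingOrigin: string; // serialized origin, e.g. "https://reporter.example"
}

function clearSiteData<T extends HasReportingOrigin>(
  cache: T[],
  origin: string,
): T[] {
  return cache.filter((entry) => entry.reportingOrigin !== origin);
}
```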

6. Structures

6.1. Trigger state

A trigger state is a struct with the following items:

trigger data

A non-negative 64-bit integer.

report window

A report window.

6.2. Randomized response output configuration

A randomized response output configuration is a struct with the following items:

max attributions per source

A positive integer.

trigger specs

A trigger spec map.

6.3. Randomized source response

A randomized source response is null or a set of trigger states.

6.4. Attribution filtering

A filter value is an ordered set of strings.

A filter map is an ordered map whose keys are strings and whose values are filter values.

A filter config is a struct with the following items:

map

A filter map.

lookback window

Null or a positive duration.

6.5. Suitable origin

A suitable origin is an origin that is suitable.

6.6. Source type

A source type is one of the following:

"navigation"

The source was associated with a top-level navigation.

"event"

The source was not associated with a top-level navigation.

6.7. Report window

A report window is a struct with the following items:

start

A moment.

end

A moment, strictly greater than start.

A report window list is a list of report windows. It has the following constraints:

A report window list list’s total window is a report window struct with the following fields:

start

The start of list[0].

end

The end of list[list’s size - 1].

Note: The total window is conceptually a union of report windows, because there are no gaps in time between any of the windows.
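A minimal sketch of the total window (non-normative; moments are modeled here as millisecond timestamps, and the function name `totalWindow` is illustrative):

```typescript
// Non-normative sketch: a report window list's total window spans from
// the start of the first window to the end of the last.
interface ReportWindow {
  start: number; // a moment, as a millisecond timestamp
  end: number;   // strictly greater than start
}

function totalWindow(list: ReportWindow[]): ReportWindow {
  return { start: list[0].start, end: list[list.length - 1].end };
}
```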

6.8. Summary window operator

A summary window operator summarizes the triggers within a report window. Its value is one of the following:

"count"

Number of triggers attributed.

"value_sum"

Sum of the value of triggers.

6.9. Summary bucket

A summary bucket is a struct with the following items:

start

An unsigned 32-bit integer.

end

An unsigned 32-bit integer.

A summary bucket list is a list of summary buckets. It has the following constraints:

6.10. Trigger-data matching mode

A trigger-data matching mode is one of the following:

"exact"

Trigger data must be less than the default trigger data cardinality. Otherwise, no event-level attribution takes place.

"modulus"

Trigger data is taken modulo the default trigger data cardinality.

6.11. Trigger specs

A trigger spec is a struct with the following items:

event-level report windows

A report window list.

A trigger spec map is a map whose keys are unsigned 32-bit integers and values are trigger specs.

To find a matching trigger spec given an attribution source source and an unsigned 64-bit integer triggerData:

  1. Let specs be source’s trigger specs.

  2. If source’s trigger-data matching mode is:

    "exact"

    Run the following steps:

    1. If specs[triggerData] exists, return its entry.

    2. Return null.

    "modulus"

    Run the following steps:

    1. If specs is empty, return null.

    2. Let keys be specs’s keys.

    3. Let index be the remainder when dividing triggerData by keys’s size.

    4. Return the entry for specs[keys[index]].
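The matching algorithm above can be sketched as follows (non-normative; trigger specs are modeled as a Map from trigger-data values to opaque spec values, relying on the fact that Map iteration preserves insertion order, and the function name `findMatchingTriggerSpec` is illustrative):

```typescript
// Non-normative sketch of "find a matching trigger spec".
type TriggerDataMatchingMode = "exact" | "modulus";

function findMatchingTriggerSpec(
  specs: Map<number, unknown>,
  mode: TriggerDataMatchingMode,
  triggerData: number,
): [number, unknown] | null {
  if (mode === "exact") {
    // Return the entry for triggerData itself, if any.
    return specs.has(triggerData) ? [triggerData, specs.get(triggerData)] : null;
  }
  // "modulus": index into the spec map's keys by triggerData modulo the
  // number of keys, so any non-negative triggerData matches some entry.
  if (specs.size === 0) return null;
  const keys = [...specs.keys()];
  const key = keys[triggerData % keys.length];
  return [key, specs.get(key)];
}
```

For example, with spec keys « 0, 1, 2 » and triggerData 5, "exact" mode finds no match, while "modulus" mode selects the key at index 5 mod 3 = 2.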

6.12. Attribution source

An attribution source is a struct with the following items:

source identifier

A string.

source origin

A suitable origin.

event ID

A non-negative 64-bit integer.

attribution destinations

An ordered set of sites.

reporting origin

A suitable origin.

source type

A source type.

expiry

A duration.

trigger specs

A trigger spec map.

aggregatable report window

A report window.

priority

A 64-bit integer.

source time

A moment.

number of event-level reports

Number of event-level reports created for this attribution source.

max number of event-level reports

The maximum number of event-level reports that can be created for this attribution source.

event-level attributable (default true)

A boolean.

dedup keys

An ordered set of dedup keys associated with this attribution source.

randomized response

A randomized source response.

randomized trigger rate

A number between 0 and 1 (both inclusive).

filter data

A filter map.

debug key

Null or a non-negative 64-bit integer.

aggregation keys

An ordered map whose keys are strings and whose values are non-negative 128-bit integers.

aggregatable budget consumed

A non-negative integer, total value of all aggregatable contributions created with this attribution source.

aggregatable dedup keys

An ordered set of aggregatable dedup key values associated with this attribution source.

debug reporting enabled

A boolean.

number of aggregatable reports

Number of aggregatable reports created for this attribution source.

trigger-data matching mode

A trigger-data matching mode.

debug cookie set (default false)

A boolean.

An attribution source source’s expiry time is source’s source time + source’s expiry.

An attribution source source’s source site is the result of obtaining a site from source’s source origin.

6.13. Aggregatable trigger data

An aggregatable trigger data is a struct with the following items:

key piece

A non-negative 128-bit integer.

source keys

An ordered set of strings.

filters

A list of filter configs.

negated filters

A list of filter configs.

6.14. Aggregatable values configuration

An aggregatable values configuration is a struct with the following items:

values

A map whose keys are strings and whose values are non-negative 32-bit integers.

filters

A list of filter configs.

negated filters

A list of filter configs.

6.15. Aggregatable dedup key

An aggregatable dedup key is a struct with the following items:

dedup key

Null or a non-negative 64-bit integer.

filters

A list of filter configs.

negated filters

A list of filter configs.

6.16. Event-level trigger configuration

An event-level trigger configuration is a struct with the following items:

trigger data

A non-negative 64-bit integer.

dedup key

Null or a non-negative 64-bit integer.

priority

A 64-bit integer.

filters

A list of filter configs.

negated filters

A list of filter configs.

6.17. Aggregation coordinator

An aggregation coordinator is one of a user-agent-determined set of suitable origins that specifies which aggregation service deployment to use.

6.18. Aggregatable source registration time configuration

An aggregatable source registration time configuration is one of the following:

"exclude"

"source_registration_time" is excluded from an aggregatable report's shared info.

"include"

"source_registration_time" is included in an aggregatable report's shared info.

6.19. Attribution trigger

An attribution trigger is a struct with the following items:

attribution destination

A site.

trigger time

A moment.

reporting origin

A suitable origin.

filters

A list of filter configs.

negated filters

A list of filter configs.

debug key

Null or a non-negative 64-bit integer.

event-level trigger configurations

A set of event-level trigger configurations.

aggregatable trigger data

A list of aggregatable trigger data.

aggregatable values configurations

A list of aggregatable values configurations.

aggregatable dedup keys

A list of aggregatable dedup keys.

verifications

A list of trigger verification.

debug reporting enabled

A boolean.

aggregation coordinator

An aggregation coordinator.

aggregatable source registration time configuration

An aggregatable source registration time configuration.

trigger context ID

Null or a string.

6.20. Attribution report

An attribution report is a struct with the following items:

reporting origin

A suitable origin.

report time

A moment.

report ID

A string.

source debug key

Null or a non-negative 64-bit integer.

trigger debug key

Null or a non-negative 64-bit integer.

6.21. Event-level report

An event-level report is an attribution report with the following additional items:

event ID

A non-negative 64-bit integer.

source type

A source type.

trigger data

A non-negative 64-bit integer.

randomized trigger rate

A number between 0 and 1 (both inclusive).

trigger priority

A 64-bit integer.

trigger time

A moment.

source identifier

A string.

attribution destinations

An ordered set of sites.

6.22. Aggregatable contribution

An aggregatable contribution is a struct with the following items:

key

A non-negative 128-bit integer.

value

A non-negative 32-bit integer.

6.23. Aggregatable report

An aggregatable report is an attribution report with the following additional items:

source time

A moment.

contributions

A list of aggregatable contributions.

effective attribution destination

A site.

serialized private state token

A serialized private state token.

aggregation coordinator

An aggregation coordinator.

source registration time configuration

An aggregatable source registration time configuration.

is null report (default false)

A boolean.

trigger context ID

Null or a string.

6.24. Attribution rate-limits

A rate-limit scope is one of the following:

An attribution rate-limit record is a struct with the following items:

scope

A rate-limit scope.

source site

A site.

attribution destination

A site.

reporting origin

A suitable origin.

time

A moment.

expiry time

Null or a moment.

6.25. Attribution debug data

A debug data type is a non-empty string that specifies the set of data that is contained in the body of an attribution debug data.

A source debug data type is a debug data type for source registrations. Possible values are:

A trigger debug data type is a debug data type for trigger registrations. Possible values are:

An OS debug data type is a debug data type for OS registrations. Possible values are:

An attribution debug data is a struct with the following items:

data type

A debug data type.

body

A map whose fields are determined by the data type.

6.26. Attribution debug report

An attribution debug report is a struct with the following items:

data

A list of attribution debug data.

reporting origin

A suitable origin.

6.27. Serialized private state token

A serialized private state token is a forgiving-base64 encoding of a byte sequence.

6.28. Trigger verification

A trigger verification is a struct with the following items:

token

A serialized private state token.

id

The result of generating a random UUID over which the token is signed.

6.29. Trigger verification metadata

A trigger verification metadata is a struct with the following items:

pretokens

A byte sequence.

IDs

A list of strings, each generated using generating a random UUID.

6.30. Triggering result

A triggering status is one of the following:

Note: "noised" applies only to event-level attribution: it indicates that the trigger was attributed successfully but the report was dropped because noise was applied to the source.

A triggering result is a tuple with the following items:

status

A triggering status.

debug data

Null or an attribution debug data.

6.31. Destination rate-limit result

A destination rate-limit result is one of the following:

7. Storage

A user agent holds an attribution source cache, which is an ordered set of attribution sources.

A user agent holds an event-level report cache, which is an ordered set of event-level reports.

A user agent holds an aggregatable report cache, which is an ordered set of aggregatable reports.

A user agent holds an attribution rate-limit cache, which is an ordered set of attribution rate-limit records.

The above caches are collectively known as the attribution caches. The attribution caches are shared among all environment settings objects.

Note: This would ideally use storage bottles to provide access to the attribution caches. However attribution data is inherently cross-site, and operations on storage would need to span across all storage bottle maps.

8. Constants

Valid source expiry range is a 2-tuple of positive durations that controls the minimum and maximum value that can be used as an expiry, respectively. Its value is (1 day, 30 days).

Min report window is a positive duration that controls the minimum duration from an attribution source’s source time and any end in aggregatable report window or event-level report windows. Its value is 1 hour.

Max entries per filter data is a positive integer that controls the maximum size of an attribution source's filter data. Its value is 50.

Max values per filter data entry is a positive integer that controls the maximum size of each value of an attribution source's filter data. Its value is 50.

Max length per filter string is a positive integer that controls the maximum length of an attribution source's filter data's keys and its values's items. Its value is 25.

Attribution rate-limit window is a positive duration that controls the rate-limiting window for attribution. Its value is 30 days.

Max destinations per source is a positive integer that controls the maximum size of an attribution source's attribution destinations. Its value is 3.

Max settable event-level attributions per source is a positive integer that controls the maximum value of max number of event-level reports. Its value is 20.

Max settable event-level report windows is a positive integer that controls the maximum size of event-level report windows. Its value is 5.

Default event-level attributions per source is a map that controls how many times a single attribution source can create an event-level report by default. Its value is «[ navigation → 3, event → 1 ]».

Allowed aggregatable budget per source is a positive integer that controls the total required aggregatable budget of all aggregatable reports created for an attribution source. Its value is 65536.

Max aggregation keys per source registration is a positive integer that controls the maximum size of an attribution source's aggregation keys. Its value is 20.

Max length per aggregation key identifier is a positive integer that controls the maximum length of an attribution source's aggregation keys's keys, an attribution trigger's aggregatable values configurations's item's values's keys, and an aggregatable trigger data's source keys's items. Its value is 25.

Default trigger data cardinality is a map that controls the valid range of trigger data. Its value is «[ navigation → 8, event → 2 ]».

Max distinct trigger data per source is a positive integer that controls the maximum size of a trigger spec map for an attribution source. Its value is 32.

Max length per trigger context ID is a positive integer that controls the maximum length of an attribution trigger's trigger context ID. Its value is 64.

9. Vendor-Specific Values

Max pending sources per source origin is a positive integer that controls how many attribution sources can be in the attribution source cache per source origin.

Max settable event-level epsilon is a non-negative double that controls the default and maximum values that a source registration can specify for the epsilon parameter used by compute the channel capacity of a source and obtain a randomized source response.

Randomized null report rate excluding source registration time is a double between 0 and 1 (both inclusive) that controls the randomized number of null reports generated for an attribution trigger whose aggregatable source registration time configuration is "exclude". If automation local testing mode is true, this is 0.

Randomized null report rate including source registration time is a double between 0 and 1 (both inclusive) that controls the randomized number of null reports generated for an attribution trigger whose aggregatable source registration time configuration is "include". If automation local testing mode is true, this is 0.

Max event-level reports per attribution destination is a positive integer that controls how many event-level reports can be in the event-level report cache per site in attribution destinations.

Max aggregatable reports per attribution destination is a positive integer that controls how many aggregatable reports can be in the aggregatable report cache per effective attribution destination.

Max event-level channel capacity per source is a map that controls how many bits of information can be exposed in association with a single attribution source. The keys are «navigation, event». The values are non-negative integers.

Max aggregatable reports per source is a positive integer that controls how many aggregatable reports can be created by attribution triggers attributed to a single attribution source.

Max destinations covered by unexpired sources is a positive integer that controls the maximum number of distinct sites across all attribution destinations for unexpired attribution sources with a given (source site, reporting origin site).

Destination rate-limit window is a positive duration that controls the rate-limiting window for destinations.

Max destinations per rate-limit window is a tuple consisting of two integers. The first controls the maximum number of distinct sites across all attribution destinations for attribution sources with a given source site per destination rate-limit window. The second controls the maximum number of distinct sites across all attribution destinations for attribution sources with a given (source site, reporting origin site) per destination rate-limit window.

Max source reporting origins per rate-limit window is a positive integer that controls the maximum number of distinct reporting origins for a (source site, attribution destination) that can create attribution sources per attribution rate-limit window.

Max source reporting origins per source reporting site is a positive integer that controls the maximum number of distinct reporting origins for a (source site, reporting origin site) that can create attribution sources per origin rate-limit window.

Origin rate-limit window is a positive duration that controls the rate-limiting window for max source reporting origins per source reporting site.

Max attribution reporting origins per rate-limit window is a positive integer that controls the maximum number of distinct reporting origins for a (source site, attribution destination) that can create event-level reports per attribution rate-limit window.

Max attributions per rate-limit window is a positive integer that controls the maximum number of attributions for a (source site, attribution destination, reporting origin site) per attribution rate-limit window.

Randomized aggregatable report delay is a positive duration that controls the random delay to deliver an aggregatable report. If automation local testing mode is true, this is 0.

Default aggregation coordinator is the aggregation coordinator that controls how to obtain the public key for encrypting an aggregatable report by default.

Verification tokens per trigger is a positive integer that controls the number of encoded masked messages sent for signature when initiating trigger verification and the maximum number of serialized private state tokens that can be attached to an attribution trigger.

10. General Algorithms

10.1. Serialize an integer

To serialize an integer, represent it as a string of the shortest possible decimal number.

This would ideally be replaced by a more descriptive algorithm in Infra. See infra/201

10.2. Parsing JSON fields

Note: The "Attribution-Reporting-Register-Source" and "Attribution-Reporting-Register-Trigger" response headers contain JSON-encoded data, rather than structured values, because of limitations on nesting in the latter. The recursive nature of JSON makes it more amenable to future extensions.

To parse an optional 64-bit signed integer given a map map, a string key, and a possibly null 64-bit signed integer default:

  1. If map[key] does not exist, return default.

  2. If map[key] is not a string, return an error.

  3. Let value be the result of applying the rules for parsing integers to map[key].

  4. If value is an error, return an error.

  5. If value cannot be represented by a 64-bit signed integer, return an error.

  6. Return value.

To parse an optional 64-bit unsigned integer given a map map, a string key, and a possibly null 64-bit unsigned integer default:

  1. If map[key] does not exist, return default.

  2. If map[key] is not a string, return an error.

  3. Let value be the result of applying the rules for parsing non-negative integers to map[key].

  4. If value is an error, return an error.

  5. If value cannot be represented by a 64-bit unsigned integer, return an error.

  6. Return value.
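The two integer-parsing algorithms above can be sketched in Python. The names are illustrative, and the digit-string check is a simplification of HTML's rules for parsing non-negative integers (which also allow surrounding ASCII whitespace):

```python
# Hypothetical helper mirroring "parse an optional 64-bit unsigned integer".
# The spec operates on Infra maps; a plain dict stands in here.
U64_MAX = 2**64 - 1

def parse_optional_u64(m, key, default):
    """Return `default`, the parsed integer, or None (an error)."""
    if key not in m:
        return default
    v = m[key]
    if not isinstance(v, str):
        return None  # error: the value must be a string
    if not v.isdigit():  # simplified stand-in for the HTML parsing rules
        return None
    value = int(v, 10)
    if value > U64_MAX:
        return None  # error: not representable as a 64-bit unsigned integer
    return value
```

The signed variant differs only in accepting an optional leading sign and checking the value against the signed 64-bit range.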

10.3. Serialize attribution destinations

To serialize attribution destinations destinations, run the following steps:

  1. Assert: destinations is not empty.

  2. Let destinationStrings be a list.

  3. For each destination in destinations:

    1. Assert: destination is not an opaque origin.

    2. Append destination serialized to destinationStrings.

  4. If destinationStrings’s size is equal to 1, return destinationStrings[0].

  5. Return destinationStrings.

To check if a scheme is suitable given a string scheme:

  1. If scheme is "http" or "https", return true.

  2. Return false.

To check if an origin is suitable given an origin origin:

  1. If origin is not a potentially trustworthy origin, return false.

  2. If origin’s scheme is not suitable, return false.

  3. Return true.

10.4. Parsing filter data

To parse filter values given a value:

  1. If value is not a map, return null.

  2. Let result be a new filter map.

  3. For each filter → data of value:

    1. If filter starts with "_", return null.

    2. If data is not a list, return null.

    3. Let set be a new ordered set.

    4. For each d of data:

      1. If d is not a string, return null.

      2. Append d to set.

    5. Set result[filter] to set.

  4. Return result.

To parse filter data given a value:

  1. Let map be the result of running parse filter values with value.

  2. If map is null, return null.

  3. If map’s size is greater than the max entries per filter data, return null.

  4. For each filter → set of map:

    1. If filter’s length is greater than the max length per filter string, return null.

    2. If set’s size is greater than the max values per filter data entry, return null.

    3. For each s of set:

      1. If s’s length is greater than the max length per filter string, return null.

  5. Return map.
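The two filter-data algorithms above can be sketched in Python. The limit constants are placeholders for the vendor-specific values the spec references, and a list stands in for an Infra ordered set:

```python
# Illustrative sketch of "parse filter values" / "parse filter data".
MAX_ENTRIES = 50           # assumed stand-in for "max entries per filter data"
MAX_STRING_LEN = 25        # assumed stand-in for "max length per filter string"
MAX_VALUES_PER_ENTRY = 50  # assumed stand-in for "max values per filter data entry"

def parse_filter_values(value):
    if not isinstance(value, dict):
        return None
    result = {}
    for filter_name, data in value.items():
        if filter_name.startswith("_"):
            return None  # names beginning with "_" are reserved
        if not isinstance(data, list):
            return None
        ordered_set = []
        for d in data:
            if not isinstance(d, str):
                return None
            if d not in ordered_set:  # ordered-set append deduplicates
                ordered_set.append(d)
        result[filter_name] = ordered_set
    return result

def parse_filter_data(value):
    m = parse_filter_values(value)
    if m is None or len(m) > MAX_ENTRIES:
        return None
    for filter_name, s in m.items():
        if len(filter_name) > MAX_STRING_LEN or len(s) > MAX_VALUES_PER_ENTRY:
            return None
        if any(len(x) > MAX_STRING_LEN for x in s):
            return None
    return m
```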

To parse filter config given a value:

  1. If value is not a map, return null.

  2. Let lookbackWindow be null.

  3. If value["_lookback_window"] exists:

    1. If value["_lookback_window"] is not a positive integer, return null.

    2. Set lookbackWindow to the duration of value["_lookback_window"] seconds.

    3. Remove value["_lookback_window"].

  4. Let map be the result of running parse filter values with value.

  5. If map is null, return null.

  6. Let filter be a filter config with the items:

    map: map

    lookback window: lookbackWindow

  7. Return filter.

10.5. Parsing filters

To parse filters given a value:

  1. Let filtersList be a new list.

  2. If value is a map, then:

    1. Let filterConfig be the result of running parse filter config with value.

    2. If filterConfig is null, return null.

    3. Append filterConfig to filtersList.

    4. Return filtersList.

  3. If value is not a list, return null.

  4. For each data of value:

    1. Let filterConfig be the result of running parse filter config with data.

    2. If filterConfig is null, return null.

    3. Append filterConfig to filtersList.

  5. Return filtersList.

To parse a filter pair given a map map:

  1. Let positive be a list of filter configs, initially empty.

  2. If map["filters"] exists, set positive to the result of running parse filters with map["filters"].

  3. If positive is null, return null.

  4. Let negative be a list of filter configs, initially empty.

  5. If map["not_filters"] exists, set negative to the result of running parse filters with map["not_filters"].

  6. If negative is null, return null.

  7. Return the tuple (positive, negative).

10.6. Cookie-based debugging

To check if cookie-based debugging is allowed given a suitable origin reportingOrigin and a site contextSite:

  1. Assert: contextSite is not an opaque origin.

  2. Let domain be the canonicalized domain name of reportingOrigin’s host.

  3. Let contextDomain be the canonicalized domain name of contextSite’s host.

  4. If the User Agent’s cookie policy or user controls do not allow cookie access for domain on contextDomain within a third-party context, return blocked.

  5. For each cookie of the user agent’s cookie store:

    1. If cookie’s name is not "ar_debug", continue.

    2. If cookie’s http-only-flag is false, continue.

    3. If cookie’s secure-flag is false, continue.

    4. If cookie’s same-site-flag is not "None", continue.

    5. If cookie’s host-only-flag is true and domain is not identical to cookie’s domain, continue.

    6. If cookie’s host-only-flag is false and domain does not domain-match cookie’s domain, continue.

    7. If "/" does not path-match cookie’s path, continue.

    8. Return allowed.

  6. Return blocked.

Ideally this would use the cookie-retrieval algorithm, but it cannot: There is no way to consider only cookies whose http-only-flag is true and whose same-site-flag is "None"; there is no way to prevent the last-access-time from being modified; and the return value is a string that would have to be further processed to check for the "ar_debug" cookie.

10.7. Obtaining context origin

To obtain the context origin of a node node, return node’s node navigable's top-level traversable's active document's origin.

10.8. Obtaining a randomized response

To obtain a randomized response given trueValue, a set possibleValues, and a double randomPickRate:

  1. Assert: randomPickRate is between 0 and 1 (both inclusive).

  2. Let r be a random double between 0 (inclusive) and 1 (exclusive) with uniform probability.

  3. If r is less than randomPickRate, return a random item from possibleValues with uniform probability.

  4. Otherwise, return trueValue.
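The steps above can be sketched in Python (the function name is illustrative):

```python
import random

# Sketch of "obtain a randomized response": with probability random_pick_rate,
# a uniformly random member of possible_values replaces the true value.
def obtain_randomized_response(true_value, possible_values, random_pick_rate):
    assert 0.0 <= random_pick_rate <= 1.0
    r = random.random()  # uniform double in [0, 1)
    if r < random_pick_rate:
        return random.choice(list(possible_values))
    return true_value
```

Note that the randomly picked item may coincide with the true value; the pick rate is the probability of randomizing, not the probability of lying.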

10.9. Parsing aggregation key piece

To parse an aggregation key piece given a string input, perform the following steps. This algorithm will return either a non-negative 128-bit integer or an error.

  1. If input’s code point length is not between 3 and 34 (both inclusive), return an error.

  2. If the first character is not a U+0030 DIGIT ZERO (0), return an error.

  3. If the second character is not a U+0058 LATIN CAPITAL LETTER X character (X) and not a U+0078 LATIN SMALL LETTER X character (x), return an error.

  4. Let value be the code point substring from 2 to the end of input.

  5. If the characters within value are not all ASCII hex digits, return an error.

  6. Interpret value as a hexadecimal number and return as a non-negative 128-bit integer.
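These steps can be sketched in Python (ASCII-only input is assumed, so string length matches code point length):

```python
# Sketch of "parse an aggregation key piece": a "0x"/"0X"-prefixed hex string
# of at most 32 hex digits, i.e. a non-negative 128-bit integer.
def parse_aggregation_key_piece(s):
    """Return a non-negative 128-bit integer, or None on error."""
    if not (3 <= len(s) <= 34):
        return None
    if s[0] != "0" or s[1] not in ("x", "X"):
        return None
    hex_part = s[2:]
    if not all(c in "0123456789abcdefABCDEF" for c in hex_part):
        return None
    return int(hex_part, 16)  # at most 32 hex digits, so < 2**128
```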

10.10. Can attribution rate-limit record be removed

Given an attribution rate-limit record record and a moment now:

  1. If the duration from record’s time to now is less than or equal to the attribution rate-limit window, return false.

  2. If record’s scope is "attribution", return true.

  3. If record’s expiry time is after now, return false.

  4. Return true.

10.11. Obtaining and delivering an attribution debug report

To obtain and deliver a debug report given a list of attribution debug data data and a suitable origin reportingOrigin:

  1. Let debugReport be an attribution debug report with the items:

    data: data

    reporting origin: reportingOrigin

  2. Queue a task to attempt to deliver a verbose debug report with debugReport.

10.12. Making a background attributionsrc request

An eligibility is one of the following:

"unset"

Depending on context, a trigger may or may not be registered.

"empty"

Neither a source nor a trigger may be registered.

"event-source"

An event source may be registered.

"navigation-source"

A navigation source may be registered.

"trigger"

A trigger may be registered.

"event-source-or-trigger"

An event source or a trigger may be registered.

A registrar is one of the following:

"web"

The user agent supports web registrations.

"os"

The user agent supports OS registrations.

To validate a background attributionsrc eligibility given an eligibility eligibility:

  1. Assert: eligibility is "navigation-source" or "event-source-or-trigger".

To make a background attributionsrc request given a URL url, an origin contextOrigin, an eligibility eligibility, and a Document document:

  1. Validate eligibility.

  2. If url’s origin is not suitable, return.

  3. If contextOrigin is not suitable, return.

  4. Let context be document’s relevant settings object.

  5. If context is not a secure context, return.

  6. If document is not allowed to use the "attribution-reporting" feature, return.

  7. Let supportedRegistrars be the result of getting supported registrars.

  8. If supportedRegistrars is empty, return.

  9. Let request be a new request with the following properties:

    method: "GET"

    URL: url

    keepalive: true

    Attribution Reporting eligibility: eligibility

  10. Fetch request with processResponse being process an attributionsrc response with contextOrigin, eligibility, and request’s trigger verification metadata.

Audit other properties on request and set them properly.

Support header-processing on redirects. Due to atomic HTTP redirect handling, we cannot process registrations through integration with fetch. [Issue #839]

Check for transient activation with "navigation-source".

To make background attributionsrc requests given an HTMLAttributionSrcElementUtils element and an eligibility eligibility:

  1. Let attributionSrc be element’s attributionSrc.

  2. Let tokens be the result of splitting attributionSrc on ASCII whitespace.

  3. For each token of tokens:

    1. Parse token, relative to element’s node document. If that is not successful, continue. Otherwise, let url be the resulting URL record.

    2. Run make a background attributionsrc request with url, element’s context origin, eligibility, and element’s node document.

Consider allowing the user agent to limit the size of tokens.

To process an attributionsrc response given a suitable origin contextOrigin, an eligibility eligibility, a trigger verification metadata triggerVerificationMetadata, and a response response:

  1. Validate eligibility.

  2. Run process an attribution eligible response with contextOrigin, eligibility, and response.

To process an attribution eligible response given a suitable origin contextOrigin, an eligibility eligibility, and a response response:

  1. The user-agent MAY ignore the response; if so, return.

    Note: The user-agent may prevent attribution for a number of reasons, such as user opt-out. In these cases, it is preferred to abort the API flow at response time rather than at request time so this state is not immediately detectable. Attribution may also be blocked if the reporting origin is not enrolled.

  2. Queue a task on the networking task source to proceed with the following steps.

    Note: This algorithm can be invoked from steps running in parallel.

  3. Assert: eligibility is "navigation-source" or "event-source" or "event-source-or-trigger".

  4. Let reportingOrigin be response’s URL's origin.

  5. If reportingOrigin is not suitable, return.

  6. Let sourceHeader be the result of getting "Attribution-Reporting-Register-Source" from response’s header list.

  7. Let triggerHeader be the result of getting "Attribution-Reporting-Register-Trigger" from response’s header list.

  8. Let osSourceRegistrations be the result of getting OS registrations from response’s header list with "Attribution-Reporting-Register-OS-Source".

  9. Let osTriggerRegistrations be the result of getting OS registrations from response’s header list with "Attribution-Reporting-Register-OS-Trigger".

  10. If eligibility is:

    "navigation-source"
    "event-source"

    Run the following steps:

    1. If sourceHeader and osSourceRegistrations are both null or both not null, return.

    2. If sourceHeader is not null:

      1. Let sourceType be "navigation".

      2. If eligibility is "event-source", set sourceType to "event".

      3. Let source be the result of running parse source-registration JSON with sourceHeader, contextOrigin, reportingOrigin, sourceType, and current wall time.

      4. If source is not null, process source.

    3. If osSourceRegistrations is not null and the user agent supports OS registrations:

      1. Process osSourceRegistrations according to an implementation-defined algorithm.

      2. Run obtain and deliver debug reports on OS registrations with "os-source-delegated", osSourceRegistrations, and contextOrigin.

    "event-source-or-trigger"

    Run the following steps:

    1. If the number of non-null entries in «sourceHeader, triggerHeader, osSourceRegistrations, osTriggerRegistrations» is not 1, return.

    2. If sourceHeader is not null:

      1. Let source be the result of running parse source-registration JSON with sourceHeader, contextOrigin, reportingOrigin, "event", and current wall time.

      2. If source is not null, process source.

    3. If triggerHeader is not null:

      1. Let destinationSite be the result of obtaining a site from contextOrigin.

      2. Let triggerVerifications be the result of receiving trigger verification tokens with response’s URL's origin, response’s header list, and triggerVerificationMetadata.

      3. Let trigger be the result of running create an attribution trigger with triggerHeader, destinationSite, reportingOrigin, triggerVerifications, and current wall time.

      4. If trigger is not null:

        1. Maybe defer and then complete trigger attribution with trigger.

    4. If osSourceRegistrations is not null and the user agent supports OS registrations:

      1. Process osSourceRegistrations according to an implementation-defined algorithm.

      2. Run obtain and deliver debug reports on OS registrations with "os-source-delegated", osSourceRegistrations, and contextOrigin.

    5. If osTriggerRegistrations is not null and the user agent supports OS registrations:

      1. Process osTriggerRegistrations according to an implementation-defined algorithm.

      2. Run obtain and deliver debug reports on OS registrations with "os-trigger-delegated", osTriggerRegistrations, and contextOrigin.

11. Source Algorithms

11.1. Obtaining a randomized source response

To obtain a set of possible trigger states given a randomized response output configuration config:

  1. Let possibleTriggerStates be a new empty set.

  2. For each triggerData → spec of config’s trigger specs:

    1. For each reportWindow of spec’s event-level report windows:

      1. Let state be a new trigger state with the items:

        trigger data: triggerData

        report window: reportWindow

      2. Append state to possibleTriggerStates.

  3. Let possibleValues be a new empty set.

  4. For each integer attributions of the range 0 to config’s max attributions per source, inclusive:

    1. Append to possibleValues all distinct attributions-length combinations of possibleTriggerStates.

  5. Return possibleValues.

To obtain a randomized source response pick rate given a randomized response output configuration config and a double epsilon:

  1. Let possibleValues be the result of obtaining a set of possible trigger states with config.

  2. Let numPossibleValues be the size of possibleValues.

  3. Return numPossibleValues / (numPossibleValues - 1 + e^epsilon), where e is Euler’s number.

To obtain a randomized source response given a randomized response output configuration config and a double epsilon:

  1. Let possibleValues be the result of obtaining a set of possible trigger states with config.

  2. Let pickRate be the result of obtaining a randomized source response pick rate with config and epsilon.

  3. Return the result of obtaining a randomized response with null, possibleValues, and pickRate.
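The pick-rate formula can be sketched in Python. With k possible output states and privacy parameter epsilon, the source response is randomized with probability k / (k − 1 + e^ε):

```python
import math

# Sketch of "obtain a randomized source response pick rate".
def randomized_source_response_pick_rate(num_possible_values, epsilon):
    k = num_possible_values
    return k / (k - 1 + math.exp(epsilon))
```

At epsilon = 0 the rate is 1 (the output is always a uniformly random state); as epsilon grows, the rate tends to 0 and the true value is almost always reported.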

11.2. Computing channel capacity

To compute the channel capacity of a source given a randomized response output configuration config and a double epsilon:

  1. Let pickRate be the result of obtaining a randomized source response pick rate with config and epsilon.

  2. Let states be the size of the result of obtaining a set of possible trigger states with config.

  3. If states is 1, return 0.

  4. Let p be pickRate * (states - 1) / states.

  5. Return log2(states) - h(p) - p * log2(states - 1) where h is the binary entropy function [BIN-ENT].

Note: This algorithm computes the channel capacity [CHAN] of a q-ary symmetric channel [Q-SC].
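The computation can be sketched in Python (function names are illustrative):

```python
import math

def binary_entropy(p):
    """Binary entropy function h(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Sketch of "compute the channel capacity of a source": the capacity of a
# q-ary symmetric channel with q = num_states and flip probability p.
def channel_capacity(num_states, pick_rate):
    if num_states == 1:
        return 0.0
    p = pick_rate * (num_states - 1) / num_states
    return math.log2(num_states) - binary_entropy(p) - p * math.log2(num_states - 1)
```

With no randomization (pick rate 0) the capacity is simply log2 of the number of states; with full randomization (pick rate 1) it is 0 bits.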

11.3. Parsing source-registration JSON

To parse an attribution destination from a string str:

  1. Let url be the result of running the URL parser on str.

  2. If url is failure or null, return null.

  3. If url’s origin is not suitable, return null.

  4. Return the result of obtaining a site from url’s origin.

To parse attribution destinations from a map map:

  1. If map["destination"] does not exist, return null.

  2. Let val be map["destination"].

  3. If val is a string, set val to « val ».

  4. If val is not a list, return null.

  5. Let result be an ordered set.

  6. For each value of val:

    1. If value is not a string, return null.

    2. Let destination be the result of parse an attribution destination with value.

    3. If destination is null, return null.

    4. Append destination to result.

  7. If result is empty or its size is greater than the max destinations per source, return null.

  8. Return result.

To parse a duration given a map map, a string key, and a tuple of durations (clampStart, clampEnd):

  1. Assert: clampStart < clampEnd.

  2. Let seconds be null.

  3. If map[key] exists and is a non-negative integer, set seconds to map[key].

  4. Otherwise, set seconds to the result of running parse an optional 64-bit unsigned integer with map, key, and null.

  5. If seconds is an error, return an error.

  6. If seconds is null, return clampEnd.

  7. Let duration be the duration of seconds seconds.

  8. If duration is less than clampStart, return clampStart.

  9. If duration is greater than clampEnd, return clampEnd.

  10. Return duration.

Consider rejecting out-of-bounds values instead of silently clamping.
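The parse-and-clamp behavior can be sketched in Python. The digit-string branch is a simplification of the optional-64-bit-unsigned-integer path, and booleans are ignored for brevity:

```python
from datetime import timedelta

# Sketch of "parse a duration": a missing value yields the upper clamp,
# and out-of-range values are silently clamped into [clamp_start, clamp_end].
def parse_duration(m, key, clamp_start, clamp_end):
    assert clamp_start < clamp_end
    v = m.get(key)
    if v is None:
        return clamp_end
    if isinstance(v, int) and v >= 0:
        seconds = v
    elif isinstance(v, str) and v.isdigit():
        seconds = int(v, 10)
    else:
        return None  # error
    return min(max(timedelta(seconds=seconds), clamp_start), clamp_end)
```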

To parse aggregation keys given an ordered map map:

  1. Let aggregationKeys be a new ordered map.

  2. If map["aggregation_keys"] does not exist, return aggregationKeys.

  3. Let values be map["aggregation_keys"].

  4. If values is not an ordered map, return null.

  5. If values’s size is greater than the max aggregation keys per source registration, return null.

  6. For each key → value of values:

    1. If key’s length is greater than the max length per aggregation key identifier, return null.

    2. If value is not a string, return null.

    3. Let keyPiece be the result of running parse an aggregation key piece with value.

    4. If keyPiece is an error, return null.

    5. Set aggregationKeys[key] to keyPiece.

  7. Return aggregationKeys.

To obtain default effective windows given a source type sourceType, a moment sourceTime, and a duration eventReportWindow:

  1. Let deadlines be «» if sourceType is "event", else « 2 days, 7 days ».

  2. Remove all elements in deadlines that are greater than or equal to eventReportWindow.

  3. Append eventReportWindow to deadlines.

  4. Let lastEnd be sourceTime.

  5. Let windows be «».

  6. For each deadline of deadlines:

    1. Let window be a new report window whose items are

      start: lastEnd

      end: sourceTime + deadline

    2. Append window to windows.

    3. Set lastEnd to sourceTime + deadline.

  7. Return windows.
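These steps can be sketched in Python, assuming each deadline is an offset from the source time, so consecutive windows tile the interval up to the event report window. A timedelta of 0 stands in for the source-time moment:

```python
from datetime import timedelta

DAY = timedelta(days=1)

# Sketch of "obtain default effective windows".
def default_effective_windows(source_type, source_time, event_report_window):
    # "event" sources get a single window; "navigation" sources also get
    # interim deadlines at 2 and 7 days.
    deadlines = [] if source_type == "event" else [2 * DAY, 7 * DAY]
    deadlines = [d for d in deadlines if d < event_report_window]
    deadlines.append(event_report_window)
    windows = []
    last_end = source_time
    for deadline in deadlines:
        windows.append((last_end, source_time + deadline))
        last_end = source_time + deadline
    return windows
```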

To parse top-level report windows given a map map, a moment sourceTime, a source type sourceType, and a duration expiry:

  1. If map["event_report_window"] exists and map["event_report_windows"] exists, return an error.

  2. If map["event_report_window"] exists:

    1. Let eventReportWindow be the result of running parse a duration with map, "event_report_window", and (min report window, expiry).

    2. If eventReportWindow is an error, return eventReportWindow.

    3. Return the result of obtaining default effective windows given sourceType, sourceTime, and eventReportWindow.

  3. If map["event_report_windows"] does not exist, return the result of obtaining default effective windows given sourceType, sourceTime, and expiry.

  4. Return the result of parsing report windows with map["event_report_windows"], sourceTime, and expiry.

To parse report windows given a value, a moment sourceTime, and a duration expiry:

  1. If value is not a map, return an error.

  2. Let startDuration be 0 seconds.

  3. If value["start_time"] exists:

    1. Let start be value["start_time"].

    2. If start is not a non-negative integer, return an error.

    3. Set startDuration to start seconds.

    4. If startDuration is greater than expiry, return an error.

  4. If value["end_times"] does not exist or is not a list, return an error.

  5. Let endDurations be value["end_times"].

  6. If the size of endDurations is greater than max settable event-level report windows, return an error.

  7. If endDurations is empty, return an error.

  8. Let windows be an empty list.

  9. For each end of endDurations:

    1. If end is not a positive integer, return an error.

    2. Let endDuration be end seconds.

    3. If endDuration is greater than expiry, set endDuration to expiry.

    4. If endDuration is less than min report window, set endDuration to min report window.

    5. If endDuration is less than or equal to startDuration, return an error.

    6. Let window be a new report window whose items are

      start: sourceTime + startDuration

      end: sourceTime + endDuration

    7. Append window to windows.

    8. Set startDuration to endDuration.

  10. Return windows.

The user-agent has an associated boolean experimental Flexible Event support (default false) that exposes non-normative behavior described in the Flexible event-level configurations proposal.

To parse summary window operator given a map map:

  1. Let value be "count".

  2. If map["summary_window_operator"] exists:

    1. If map["summary_window_operator"] is not a string, return an error.

    2. If map["summary_window_operator"] is not a summary window operator, return an error.

    3. Set value to map["summary_window_operator"].

  3. Return value.

To parse summary buckets given a map map and an integer maxEventLevelReports:

  1. Let values be the range 1 to maxEventLevelReports, inclusive.

  2. If map["summary_buckets"] exists:

    1. If map["summary_buckets"] is not a list, is empty, or its size is greater than maxEventLevelReports, return an error.

    2. Set values to map["summary_buckets"].

  3. Let prev be 0.

  4. Let summaryBuckets be an empty list.

  5. For each item of values:

    1. If item is not an integer or cannot be represented by an unsigned 32-bit integer, or is less than or equal to prev, return an error.

    2. Let summaryBucket be a new summary bucket whose items are

      start: prev

      end: item - 1

    3. Append summaryBucket to summaryBuckets.

    4. Set prev to item.

  6. Return summaryBuckets.
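These steps can be sketched in Python, with buckets represented as (start, end) pairs:

```python
# Sketch of "parse summary buckets": a strictly increasing list of values
# becomes inclusive (start, end) pairs with end = next_value - 1. Absent the
# field, the default buckets are 1 through max_event_level_reports.
U32_MAX = 2**32 - 1

def parse_summary_buckets(m, max_event_level_reports):
    values = m.get("summary_buckets",
                   list(range(1, max_event_level_reports + 1)))
    if (not isinstance(values, list) or not values
            or len(values) > max_event_level_reports):
        return None  # error
    prev = 0
    buckets = []
    for item in values:
        if (not isinstance(item, int) or not (0 <= item <= U32_MAX)
                or item <= prev):
            return None  # error: not strictly increasing 32-bit integers
        buckets.append((prev, item - 1))
        prev = item
    return buckets
```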

To parse trigger data into a trigger spec map given a triggerDataList, a trigger spec spec, a trigger spec map specs, and a boolean allowEmpty:

  1. If triggerDataList is not a list or its size is greater than max distinct trigger data per source, return false.

  2. If allowEmpty is false and triggerDataList is empty, return false.

  3. For each triggerData of triggerDataList:

    1. If triggerData is not an integer or cannot be represented by an unsigned 32-bit integer, or specs[triggerData] exists, return false.

    2. Set specs[triggerData] to spec.

    3. If specs’s size is greater than max distinct trigger data per source, return false.

  4. Return true.

To parse trigger specs given a map map, a moment sourceTime, a source type sourceType, a duration expiry, and a trigger-data matching mode matchingMode:

  1. Let defaultReportWindows be the result of parsing top-level report windows with map, sourceTime, sourceType, and expiry.

  2. If defaultReportWindows is an error, return an error.

  3. Let specs be a new trigger spec map.

  4. If experimental Flexible Event support is true and map["trigger_specs"] exists:

    1. If map["trigger_data"] exists, return an error.

    2. If map["trigger_specs"] is not a list or its size is greater than max distinct trigger data per source, return an error.

    3. For each item of map["trigger_specs"]:

      1. If item is not a map, return an error.

      2. Let spec be a new trigger spec with the following items:

        event-level report windows: defaultReportWindows

      3. If item["event_report_windows"] exists:

        1. Let reportWindows be the result of parsing report windows with item["event_report_windows"], sourceTime, and expiry.

        2. If reportWindows is an error, return it.

        3. Set spec’s event-level report windows to reportWindows.

      4. If item["trigger_data"] does not exist, return an error.

      5. Let allowEmpty be false.

      6. If the result of running parse trigger data into a trigger spec map with item["trigger_data"], spec, specs, and allowEmpty is false, return an error.

  5. Otherwise:

    1. Let spec be a new trigger spec with the following items:

      event-level report windows: defaultReportWindows

    2. If map["trigger_data"] exists:

      1. Let allowEmpty be true.

      2. If the result of running parse trigger data into a trigger spec map with map["trigger_data"], spec, specs, and allowEmpty is false, return an error.

    3. Otherwise:

      1. For each integer triggerData of the range 0 to default trigger data cardinality[sourceType], exclusive:

        1. Set specs[triggerData] to spec.

  6. If matchingMode is "modulus":

    1. Let i be 0.

    2. For each triggerData of specs’s keys:

      1. If triggerData does not equal i, return an error.

      2. Set i to i + 1.

  7. Return specs.

Invoke parse summary buckets and parse summary window operator from this algorithm.

To parse source-registration JSON given a byte sequence json, a suitable origin sourceOrigin, a suitable origin reportingOrigin, a source type sourceType, and a moment sourceTime:

  1. Let value be the result of running parse JSON bytes to an Infra value with json.

  2. If value is not an ordered map, return null.

  3. Let attributionDestinations be the result of running parse attribution destinations with value.

  4. If attributionDestinations is null, return null.

  5. Let sourceEventId be the result of running parse an optional 64-bit unsigned integer with value, "source_event_id", and 0.

  6. If sourceEventId is an error, return null.

  7. Let expiry be the result of running parse a duration with value, "expiry", and valid source expiry range.

  8. If expiry is an error, return null.

  9. If sourceType is "event", round expiry away from zero to the nearest day (86400 seconds).

  10. Let priority be the result of running parse an optional 64-bit signed integer with value, "priority", and 0.

  11. If priority is an error, return null.

  12. Let filterData be a new filter map.

  13. If value["filter_data"] exists:

    1. Set filterData to the result of running parse filter data with value["filter_data"].

    2. If filterData is null, return null.

    3. If filterData["source_type"] exists, return null.

  14. Set filterData["source_type"] to « sourceType ».

  15. Let debugKey be the result of running parse an optional 64-bit unsigned integer with value, "debug_key", and null.

  16. If debugKey is an error, set debugKey to null.

  17. Let debugCookieSet be false.

  18. Let sourceSite be the result of obtaining a site from sourceOrigin.

  19. If the result of running check if cookie-based debugging is allowed with reportingOrigin and sourceSite is allowed, set debugCookieSet to true.

  20. If debugCookieSet is false, set debugKey to null.

  21. Let aggregationKeys be the result of running parse aggregation keys with value.

  22. If aggregationKeys is null, return null.

  23. Let maxAttributionsPerSource be default event-level attributions per source[sourceType].

  24. Set maxAttributionsPerSource to value["max_event_level_reports"] if it exists.

  25. If maxAttributionsPerSource is not a non-negative integer, or is greater than max settable event-level attributions per source, return null.

  26. Let aggregatableReportWindowEnd be the result of running parse a duration with value, "aggregatable_report_window", and (min report window, expiry).

  27. If aggregatableReportWindowEnd is an error, return null.

  28. Let debugReportingEnabled be false.

  29. If value["debug_reporting"] exists and is a boolean, set debugReportingEnabled to value["debug_reporting"].

  30. Let aggregatableReportWindow be a new report window with the following items:

    start: sourceTime

    end: sourceTime + aggregatableReportWindowEnd

  31. Let triggerDataMatchingMode be "modulus".

  32. If value["trigger_data_matching"] exists:

    1. If value["trigger_data_matching"] is not a string, return null.

    2. If value["trigger_data_matching"] is not a trigger-data matching mode, return null.

    3. Set triggerDataMatchingMode to value["trigger_data_matching"].

  33. Let triggerSpecs be the result of parsing trigger specs with value, sourceTime, sourceType, expiry, and triggerDataMatchingMode.

  34. If triggerSpecs is an error, return null.

  35. Let randomizedResponseConfig be a new randomized response output configuration whose items are:

    max attributions per source

    maxAttributionsPerSource

    trigger specs

    triggerSpecs

  36. Let epsilon be the user agent’s max settable event-level epsilon.

  37. Set epsilon to value["event_level_epsilon"] if it exists.

  38. If epsilon is not a double, is less than 0, or is greater than the user agent’s max settable event-level epsilon, return null.

  39. If automation local testing mode is true, set epsilon to ∞.

  40. If the result of computing the channel capacity of a source with randomizedResponseConfig and epsilon is greater than max event-level channel capacity per source[sourceType], return null.

  41. Let source be a new attribution source struct whose items are:

    source identifier

    A new unique string

    source origin

    sourceOrigin

    event ID

    sourceEventId

    attribution destinations

    attributionDestinations

    reporting origin

    reportingOrigin

    expiry

    expiry

    trigger specs

    triggerSpecs

    aggregatable report window

    aggregatableReportWindow

    priority

    priority

    source time

    sourceTime

    source type

    sourceType

    max number of event-level reports

    maxAttributionsPerSource

    randomized response

    The result of obtaining a randomized source response with randomizedResponseConfig and epsilon.

    randomized trigger rate

    The result of obtaining a randomized source response pick rate with randomizedResponseConfig and epsilon.

    filter data

    filterData

    debug key

    debugKey

    aggregation keys

    aggregationKeys

    aggregatable budget consumed

    0

    debug reporting enabled

    debugReportingEnabled

    trigger-data matching mode

    triggerDataMatchingMode

    debug cookie set

    debugCookieSet

  42. Return source.

Determine proper charset-handling for the JSON header value.

11.4. Processing an attribution source

To check if an attribution source exceeds the time-based destination limits given an attribution source source, run the following steps:

  1. Let matchingSources be all attribution rate-limit records record in the attribution rate-limit cache where all of the following are true:

  2. Let matchingSameReportingSources be all the records in matchingSources whose associated reporting origin is same site with source’s reporting origin.

  3. Let destinations be the set of every attribution destination in matchingSources, unioned with source’s attribution destinations.

  4. Let sameReportingDestinations be the set of every attribution destination in matchingSameReportingSources, unioned with source’s attribution destinations.

  5. Let hitRateLimit be whether destinations’s size is greater than max destinations per rate-limit window[0].

  6. Let hitSameReportingRateLimit be whether sameReportingDestinations’s size is greater than max destinations per rate-limit window[1].

  7. If (hitRateLimit, hitSameReportingRateLimit) is

    (false, false)

    Return "allowed".

    (false, true)

    Return "hit reporting limit".

    (true, false)

    Return "hit global limit".

    (true, true)

    Return "hit reporting limit".

Note: We do not emit an explicit source debug data type for "hit global limit", we only emit a source-success type. For this reason, when both limits are hit, just interpret it as "hit reporting limit" to ensure that the most useful report is sent.

To check if an attribution source exceeds the unexpired destination limit given an attribution source source, run the following steps:

  1. Let unexpiredSources be all attribution rate-limit records record in the attribution rate-limit cache where all of the following are true:

  2. Let unexpiredDestinations be an empty set.

  3. For each attribution rate-limit record unexpiredRecord of unexpiredSources:

    1. Append unexpiredRecord’s attribution destination to unexpiredDestinations.

  4. Let newDestinations be the result of taking the union of unexpiredDestinations and source’s attribution destinations.

  5. Return whether newDestinations’s size is greater than the user agent’s max destinations covered by unexpired sources.
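The unexpired-destination-limit check above reduces to a set-union size test. The following is a minimal non-normative sketch in Python; the record filtering (by source site, reporting site, and expiry) is elided, and the limit constant is illustrative rather than a value this specification mandates.

```python
# Illustrative stand-in for the user agent's
# "max destinations covered by unexpired sources".
MAX_DESTINATIONS_COVERED_BY_UNEXPIRED_SOURCES = 100

def exceeds_unexpired_destination_limit(unexpired_records, new_destinations):
    """unexpired_records: matching rate-limit records, each with a
    'destination' field; new_destinations: the candidate source's
    attribution destinations."""
    destinations = {r["destination"] for r in unexpired_records}
    destinations |= set(new_destinations)
    return len(destinations) > MAX_DESTINATIONS_COVERED_BY_UNEXPIRED_SOURCES
```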

To obtain a fake report given an attribution source source and a trigger state triggerState:

  1. Let specEntry be the entry for source’s trigger specs[triggerState’s trigger data].

  2. Let triggerTime be the greatest moment that is strictly less than triggerState’s report window's end.

  3. Let priority be 0.

  4. Let fakeReport be the result of running obtain an event-level report with source, triggerTime, triggerDebugKey set to null, priority, and specEntry.

  5. Assert: fakeReport’s report time is equal to triggerState’s report window's end.

  6. Return fakeReport.

To check if debug reporting is allowed given a source debug data type dataType and a boolean debugCookieSet:

  1. If dataType is:

    "source-destination-limit"
    "source-destination-rate-limit"

    Return allowed.

    "source-noised"
    "source-storage-limit"
    "source-success"
    "source-unknown-error"
    1. If debugCookieSet is true, return allowed.

    2. Otherwise, return blocked.
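The allowed/blocked decision above is a two-way switch: destination-limit data types are always reportable, while the remaining types require the debug cookie. A non-normative sketch:

```python
# Data types reportable without the debug cookie, per the switch above.
ALWAYS_ALLOWED = {"source-destination-limit", "source-destination-rate-limit"}

def debug_reporting_allowed(data_type, debug_cookie_set):
    if data_type in ALWAYS_ALLOWED:
        return "allowed"
    return "allowed" if debug_cookie_set else "blocked"
```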

To obtain and deliver a debug report on source registration given a source debug data type dataType and an attribution source source:

  1. If source’s debug reporting enabled is false, return.

  2. If the result of running check if debug reporting is allowed with dataType and source’s debug cookie set is blocked, return.

  3. Let body be a new map with the following key/value pairs:

    "attribution_destination"

    source’s attribution destinations, serialized.

    "source_event_id"

    source’s event ID, serialized.

    "source_site"

    source’s source site, serialized.

  4. If source’s debug key is not null, set body["source_debug_key"] to source’s debug key, serialized.

  5. If dataType is:

    "source-destination-limit"

    Set body["limit"] to the user agent’s max destinations covered by unexpired sources, serialized.

    "source-destination-rate-limit"

    Set body["limit"] to the user agent’s max destinations per rate-limit window[1], serialized.

    "source-storage-limit"

    Set body["limit"] to the user agent’s max pending sources per source origin, serialized.

  6. Let data be a new attribution debug data with the items:

    data type

    dataType

    body

    body

  7. Run obtain and deliver a debug report with « data » and source’s reporting origin.

To process an attribution source given an attribution source source:

  1. Let destinationRateLimitResult be the result of running check if an attribution source exceeds the time-based destination limit with source.

  2. If destinationRateLimitResult is "hit reporting limit":

    1. Run obtain and deliver a debug report on source registration with "source-destination-rate-limit" and source.

    2. Return.

  3. If destinationRateLimitResult is "hit global limit":

    1. Run obtain and deliver a debug report on source registration with "source-success" and source.

    2. Return.

  4. Let cache be the user agent’s attribution source cache.

  5. Remove all attribution sources entry in cache where entry’s expiry time is less than source’s source time.

  6. Let pendingSourcesForSourceOrigin be the set of all attribution sources pendingSource of cache where pendingSource’s source origin and source’s source origin are same origin.

  7. If pendingSourcesForSourceOrigin’s size is greater than or equal to the user agent’s max pending sources per source origin:

    1. Run obtain and deliver a debug report on source registration with "source-storage-limit" and source.

    2. Return.

  8. If the result of running check if an attribution source exceeds the unexpired destination limit with source is true:

    1. Run obtain and deliver a debug report on source registration with "source-destination-limit" and source.

    2. Return.

  9. For each destination in source’s attribution destinations:

    1. Let rateLimitRecord be a new attribution rate-limit record with the items:

      scope

      "source"

      source site

      source’s source site

      attribution destination

      destination

      reporting origin

      source’s reporting origin

      time

      source’s source time

      expiry time

      source’s expiry time

    2. If the result of running should processing be blocked by reporting-origin limit with rateLimitRecord is blocked:

      1. Run obtain and deliver a debug report on source registration with "source-success" and source.

      2. Return.

    3. Append rateLimitRecord to the attribution rate-limit cache.

  10. Remove all attribution rate-limit records entry from the attribution rate-limit cache if the result of running can attribution rate-limit record be removed with entry and source’s source time is true.

  11. Let debugDataType be "source-success".

  12. If source’s randomized response is not null and is a set:

    1. For each trigger state triggerState of source’s randomized response:

      1. Let fakeReport be the result of running obtain a fake report with source and triggerState.

      2. Append fakeReport to the event-level report cache.

    2. If source’s randomized response is not empty, then set source’s event-level attributable value to false.

    3. For each destination in source’s attribution destinations:

      1. Let rateLimitRecord be a new attribution rate-limit record with the items:

        scope

        "attribution"

        source site

        source’s source site

        attribution destination

        destination

        reporting origin

        source’s reporting origin

        time

        source’s source time

        expiry time

        null

      2. Append rateLimitRecord to the attribution rate-limit cache.

    4. Set debugDataType to "source-noised".

  13. Run obtain and deliver a debug report on source registration with debugDataType and source.

  14. Append source to cache.

Note: Because a fake report does not have a "real" effective destination, we need to subtract from the privacy budget of all possible destinations.

12. Triggering Algorithms

12.1. Creating an attribution trigger

To parse event triggers given an ordered map map:

  1. Let eventTriggers be a new set.

  2. If map["event_trigger_data"] does not exist, return eventTriggers.

  3. Let values be map["event_trigger_data"].

  4. If values is not a list, return null.

  5. For each value of values:

    1. If value is not an ordered map, return null.

    2. Let triggerData be the result of running parse an optional 64-bit unsigned integer with value, "trigger_data", and 0.

    3. If triggerData is an error, return null.

    4. Let dedupKey be the result of running parse an optional 64-bit unsigned integer with value, "deduplication_key", and null.

    5. If dedupKey is an error, return null.

    6. Let priority be the result of running parse an optional 64-bit signed integer with value, "priority", and 0.

    7. If priority is an error, return null.

    8. Let filterPair be the result of running parse a filter pair with value.

    9. If filterPair is null, return null.

    10. Let eventTrigger be a new event-level trigger configuration with the items:

      trigger data

      triggerData

      dedup key

      dedupKey

      priority

      priority

      filters

      filterPair[0]

      negated filters

      filterPair[1]

    11. Append eventTrigger to eventTriggers.

  6. Return eventTriggers.
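The parsing steps above can be sketched as follows. This is a simplified non-normative model: integers are taken as plain JSON numbers (the spec parses 64-bit values via "parse an optional 64-bit unsigned integer"), filter-pair parsing is omitted, and `None` stands for the spec's null.

```python
def parse_uint64(m, key, default):
    # Simplified stand-in for "parse an optional 64-bit unsigned integer".
    if key not in m:
        return default
    v = m[key]
    if not isinstance(v, int) or not (0 <= v < 2**64):
        return "error"
    return v

def parse_event_triggers(m):
    triggers = []
    if "event_trigger_data" not in m:
        return triggers
    values = m["event_trigger_data"]
    if not isinstance(values, list):
        return None
    for value in values:
        if not isinstance(value, dict):
            return None
        trigger_data = parse_uint64(value, "trigger_data", 0)
        if trigger_data == "error":
            return None
        dedup_key = parse_uint64(value, "deduplication_key", None)
        if dedup_key == "error":
            return None
        triggers.append({"trigger_data": trigger_data, "dedup_key": dedup_key})
    return triggers
```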

To parse aggregatable trigger data given an ordered map map:

  1. Let aggregatableTriggerData be a new list.

  2. If map["aggregatable_trigger_data"] does not exist, return aggregatableTriggerData.

  3. Let values be map["aggregatable_trigger_data"].

  4. If values is not a list, return null.

  5. For each value of values:

    1. If value is not an ordered map, return null.

    2. If value["key_piece"] does not exist or is not a string, return null.

    3. Let keyPiece be the result of running parse an aggregation key piece with value["key_piece"].

    4. If keyPiece is an error, return null.

    5. Let sourceKeys be a new ordered set.

    6. If value["source_keys"] exists:

      1. If value["source_keys"] is not a list, return null.

      2. For each sourceKey of value["source_keys"]:

        1. If sourceKey is not a string, return null.

        2. If sourceKey’s length is greater than the max length per aggregation key identifier, return null.

        3. Append sourceKey to sourceKeys.

    7. Let filterPair be the result of running parse a filter pair with value.

    8. If filterPair is null, return null.

    9. Let aggregatableTrigger be a new aggregatable trigger data with the items:

      key piece

      keyPiece

      source keys

      sourceKeys

      filters

      filterPair[0]

      negated filters

      filterPair[1]

    10. Append aggregatableTrigger to aggregatableTriggerData.

  6. Return aggregatableTriggerData.

To parse aggregatable key-values given a map map:

  1. For each key → value of map:

    1. If key’s length is greater than the max length per aggregation key identifier, return null.

    2. If value is not an integer, return null.

    3. If value is less than or equal to 0, return null.

    4. If value is greater than allowed aggregatable budget per source, return null.

  2. Return map.
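The key-value validation above can be sketched as below. The two constants are illustrative stand-ins for the user agent's max length per aggregation key identifier and allowed aggregatable budget per source, not values this specification fixes.

```python
MAX_KEY_ID_LENGTH = 25        # illustrative
AGGREGATABLE_BUDGET = 65536   # illustrative

def parse_aggregatable_key_values(m):
    for key, value in m.items():
        if len(key) > MAX_KEY_ID_LENGTH:
            return None
        # bool is a subclass of int in Python; exclude it explicitly.
        if not isinstance(value, int) or isinstance(value, bool):
            return None
        if value <= 0 or value > AGGREGATABLE_BUDGET:
            return None
    return m
```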

To parse aggregatable values given an ordered map map:

  1. If map["aggregatable_values"] does not exist, return a new empty list.

  2. Let values be map["aggregatable_values"].

  3. If values is neither an ordered map nor a list, return null.

  4. Let aggregatableValuesConfigurations be a list of aggregatable values configurations, initially empty.

  5. If values is an ordered map:

    1. Let aggregatableKeyValues be the result of running parse aggregatable key-values with values.

    2. If aggregatableKeyValues is null, return null.

    3. Let aggregatableValuesConfiguration be a new aggregatable values configuration with the items:

      values

      aggregatableKeyValues

      filters

      «»

      negated filters

      «»

    4. Append aggregatableValuesConfiguration to aggregatableValuesConfigurations.

    5. Return aggregatableValuesConfigurations.

  6. For each value of values:

    1. If value is not an ordered map, return null.

    2. If value["values"] does not exist, return null.

    3. Let aggregatableKeyValues be the result of running parse aggregatable key-values with value["values"].

    4. If aggregatableKeyValues is null, return null.

    5. Let filterPair be the result of running parse a filter pair with value.

    6. If filterPair is null, return null.

    7. Let aggregatableValuesConfiguration be a new aggregatable values configuration with the items:

      values

      aggregatableKeyValues

      filters

      filterPair[0]

      negated filters

      filterPair[1]

    8. Append aggregatableValuesConfiguration to aggregatableValuesConfigurations.

  7. Return aggregatableValuesConfigurations.

To parse aggregatable dedup keys given an ordered map map:

  1. Let aggregatableDedupKeys be a new list.

  2. If map["aggregatable_deduplication_keys"] does not exist, return aggregatableDedupKeys.

  3. Let values be map["aggregatable_deduplication_keys"].

  4. If values is not a list, return null.

  5. For each value of values:

    1. If value is not an ordered map, return null.

    2. Let dedupKey be the result of running parse an optional 64-bit unsigned integer with value, "deduplication_key", and null.

    3. If dedupKey is an error, return null.

    4. Let filterPair be the result of running parse a filter pair with value.

    5. If filterPair is null, return null.

    6. Let aggregatableDedupKey be a new aggregatable dedup key with the items:

      dedup key

      dedupKey

      filters

      filterPair[0]

      negated filters

      filterPair[1]

    7. Append aggregatableDedupKey to aggregatableDedupKeys.

  6. Return aggregatableDedupKeys.

To create an attribution trigger given a byte sequence json, a site destination, a suitable origin reportingOrigin, a list of trigger verification triggerVerifications, and a moment triggerTime:

  1. Let value be the result of running parse JSON bytes to an Infra value with json.

  2. If value is not an ordered map, return null.

  3. Let eventTriggers be the result of running parse event triggers with value.

  4. If eventTriggers is null, return null.

  5. Let aggregatableTriggerData be the result of running parse aggregatable trigger data with value.

  6. If aggregatableTriggerData is null, return null.

  7. Let aggregatableValuesConfigurations be the result of running parse aggregatable values with value.

  8. If aggregatableValuesConfigurations is null, return null.

  9. Let aggregatableDedupKeys be the result of running parse aggregatable dedup keys with value.

  10. If aggregatableDedupKeys is null, return null.

  11. Let debugKey be the result of running parse an optional 64-bit unsigned integer with value, "debug_key", and null.

  12. If debugKey is an error, set debugKey to null.

  13. If the result of running check if cookie-based debugging is allowed with reportingOrigin and destination is blocked, set debugKey to null.

  14. Let filterPair be the result of running parse a filter pair with value.

  15. If filterPair is null, return null.

  16. Let debugReportingEnabled be false.

  17. If value["debug_reporting"] exists and is a boolean, set debugReportingEnabled to value["debug_reporting"].

  18. Let aggregationCoordinator be default aggregation coordinator.

  19. If value["aggregation_coordinator_origin"] exists:

    1. If value["aggregation_coordinator_origin"] is not a string, return null.

    2. Let url be the result of running the URL parser on value["aggregation_coordinator_origin"].

    3. If url is failure or null, return null.

    4. If url’s origin is not an aggregation coordinator, return null.

    5. Set aggregationCoordinator to url’s origin.

  20. Let aggregatableSourceRegTimeConfig be "exclude".

  21. If value["aggregatable_source_registration_time"] exists:

    1. If value["aggregatable_source_registration_time"] is not a string, return null.

    2. If value["aggregatable_source_registration_time"] is not an aggregatable source registration time configuration, return null.

    3. Set aggregatableSourceRegTimeConfig to value["aggregatable_source_registration_time"].

  22. Let triggerContextID be null.

  23. If value["trigger_context_id"] exists:

    1. If value["trigger_context_id"] is not a string, return null.

    2. If value["trigger_context_id"]'s length is 0 or is greater than the max length per trigger context ID, return null.

    3. If aggregatableSourceRegTimeConfig is not "exclude", return null.

    4. Set triggerContextID to value["trigger_context_id"].

  24. Let trigger be a new attribution trigger with the items:

    attribution destination

    destination

    trigger time

    triggerTime

    reporting origin

    reportingOrigin

    filters

    filterPair[0]

    negated filters

    filterPair[1]

    debug key

    debugKey

    event-level trigger configurations

    eventTriggers

    aggregatable trigger data

    aggregatableTriggerData

    aggregatable values configurations

    aggregatableValuesConfigurations

    aggregatable dedup keys

    aggregatableDedupKeys

    verifications

    triggerVerifications

    debug reporting enabled

    debugReportingEnabled

    aggregation coordinator

    aggregationCoordinator

    aggregatable source registration time configuration

    aggregatableSourceRegTimeConfig

    trigger context ID

    triggerContextID

  25. Return trigger.

Determine proper charset-handling for the JSON header value.

12.2. Does filter data match

To match filter values given a filter value a and a filter value b:

  1. If b is empty, then:

    1. If a is empty, then return true.

    2. Otherwise, return false.

  2. Let i be the intersection of a and b.

  3. If i is empty, then return false.

  4. Return true.

To match filter values with negation given a filter value a and a filter value b:

  1. If b is empty, then:

    1. If a is not empty, then return true.

    2. Otherwise, return false.

  2. Let i be the intersection of a and b.

  3. If i is not empty, then return false.

  4. Return true.
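The two predicates above are simple set-intersection tests: an empty filter value matches only an empty source value, and negation inverts both branches. A non-normative sketch:

```python
def match_filter_values(a, b):
    # Empty filter value b matches only an empty source value a.
    if not b:
        return not a
    return bool(set(a) & set(b))

def match_filter_values_negated(a, b):
    # Negated: empty b matches only a non-empty a; otherwise the
    # intersection must be empty.
    if not b:
        return bool(a)
    return not (set(a) & set(b))
```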

To match an attribution source against a filter config given an attribution source source, a filter config filter, a moment moment, and a boolean isNegated:

  1. Let lookbackWindow be filter’s lookback window.

  2. If lookbackWindow is not null:

    1. If the duration between moment and source’s source time is greater than lookbackWindow:

      1. If isNegated is false, return false.

    2. Else if isNegated is true, return false.

    Note: If non-negated, the source must have been registered inside of the lookback window. If negated, it must be outside of the lookback window.

  3. Let filterMap be filter’s map.

  4. Let sourceData be source’s filter data.

  5. For each key → filterValues of filterMap:

    1. If sourceData[key] does not exist, continue.

    2. Let sourceValues be sourceData[key].

    3. If isNegated is:

      false
      If the result of running match filter values with sourceValues and filterValues is false, return false.
      true
      If the result of running match filter values with negation with sourceValues and filterValues is false, return false.
  6. Return true.

To match an attribution source against filters given an attribution source source, a list of filter configs filters, a moment moment, and a boolean isNegated:

  1. If filters is empty, return true.

  2. For each filter of filters:

    1. If the result of running match an attribution source against a filter config with source, filter, moment, and isNegated is true, return true.

  3. Return false.

To match an attribution source against filters and negated filters given an attribution source source, a list of filter configs filters, a list of filter configs notFilters, and a moment moment:

  1. If the result of running match an attribution source against filters with source, filters, moment, and isNegated set to false is false, return false.

  2. If the result of running match an attribution source against filters with source, notFilters, moment, and isNegated set to true is false, return false.

  3. Return true.
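The composition above treats a list of filter configs as a disjunction (any config may match; an empty list vacuously matches), and a trigger matches only if both its filters and its negated filters pass. A non-normative sketch, where `match_config` stands in for "match an attribution source against a filter config":

```python
def match_filters(source, filters, match_config, negated):
    # Empty filter list matches vacuously; otherwise OR over configs.
    if not filters:
        return True
    return any(match_config(source, f, negated) for f in filters)

def match_filters_and_negated(source, filters, not_filters, match_config):
    return (match_filters(source, filters, match_config, False)
            and match_filters(source, not_filters, match_config, True))
```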

12.3. Should attribution be blocked by attribution rate limit

Given an attribution trigger trigger and an attribution source sourceToAttribute:

  1. Let matchingRateLimitRecords be all attribution rate-limit records record of attribution rate-limit cache where all of the following are true:

  2. If matchingRateLimitRecords’s size is greater than or equal to max attributions per rate-limit window, return blocked.

  3. Return allowed.

12.4. Should processing be blocked by reporting-origin limit

Given an attribution rate-limit record newRecord:

  1. Let max be max source reporting origins per rate-limit window.

  2. If newRecord’s scope is "attribution", set max to max attribution reporting origins per rate-limit window.

  3. Let matchingRateLimitRecords be all attribution rate-limit records record in the attribution rate-limit cache where all of the following are true:

  4. Let distinctReportingOrigins be the set of all reporting origins in matchingRateLimitRecords, unioned with « newRecord’s reporting origin ».

  5. If distinctReportingOrigins’s size is greater than max, return blocked.

    NOTE: source scopes have an auxiliary max source reporting origins per source reporting site rate limit that also must be enforced.

  6. If newRecord’s scope is "attribution", return allowed.

  7. Let matchingRateLimitRecords be all attribution rate-limit records record in the attribution rate-limit cache where all of the following are true:

  8. Let distinctReportingOrigins be the set of all reporting origins in matchingRateLimitRecords, unioned with « newRecord’s reporting origin ».

  9. If distinctReportingOrigins’s size is greater than max source reporting origins per source reporting site, return blocked.

  10. Return allowed.
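The core of the reporting-origin limit is counting distinct reporting origins among the matching records plus the new record's own origin. A non-normative sketch, with the record matching elided and an illustrative limit:

```python
MAX_REPORTING_ORIGINS = 10  # illustrative stand-in for the rate-limit max

def reporting_origin_limit(matching_records, new_origin):
    distinct = {r["reporting_origin"] for r in matching_records}
    distinct.add(new_origin)
    return "blocked" if len(distinct) > MAX_REPORTING_ORIGINS else "allowed"
```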

12.5. Should attribution be blocked by rate limits

Given an attribution trigger trigger, an attribution source sourceToAttribute, and an attribution rate-limit record newRecord:

  1. If the result of running should attribution be blocked by attribution rate limit with trigger and sourceToAttribute is blocked:

    1. Let debugData be the result of running obtain debug data on trigger registration with "trigger-attributions-per-source-destination-limit", trigger, sourceToAttribute and report set to null.

    2. Return the triggering result ("dropped", debugData).

  2. If the result of running should processing be blocked by reporting-origin limit with newRecord is blocked:

    1. Let debugData be the result of running obtain debug data on trigger registration with "trigger-reporting-origin-limit", trigger, sourceToAttribute and report set to null.

    2. Return the triggering result ("dropped", debugData).

  3. Return null.

12.6. Creating aggregatable contributions

To create aggregatable contributions from aggregation keys and aggregatable values given a map aggregationKeys and a map aggregatableValues, run the following steps:

  1. Let contributions be an empty list.

  2. For each id → key of aggregationKeys:

    1. If aggregatableValues[id] does not exist, continue.

    2. Let contribution be a new aggregatable contribution with the items:

      key

      key

      value

      aggregatableValues[id]

    3. Append contribution to contributions.

  3. Return contributions.

To create aggregatable contributions given an attribution source source and an attribution trigger trigger, run the following steps:

  1. Let aggregationKeys be the result of cloning source’s aggregation keys.

  2. For each triggerData of trigger’s aggregatable trigger data:

    1. If the result of running match an attribution source against filters and negated filters with source, triggerData’s filters, triggerData’s negated filters, and trigger’s trigger time is false, continue.

    2. For each sourceKey of triggerData’s source keys:

      1. If aggregationKeys[sourceKey] does not exist, continue.

      2. Set aggregationKeys[sourceKey] to aggregationKeys[sourceKey] bitwise-OR triggerData’s key piece.

  3. Let aggregatableValuesConfigurations be trigger’s aggregatable values configurations.

  4. For each aggregatableValuesConfiguration of aggregatableValuesConfigurations:

    1. If the result of running match an attribution source against filters and negated filters with source, aggregatableValuesConfiguration’s filters, aggregatableValuesConfiguration’s negated filters, and trigger’s trigger time is true:

      1. Return the result of running create aggregatable contributions from aggregation keys and aggregatable values with aggregationKeys and aggregatableValuesConfiguration’s values.

  5. Return a new empty list.
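The contribution-creation steps can be sketched as follows: matching aggregatable trigger data ORs its key piece into the named source keys, then the selected values map determines which keys produce contributions. This non-normative model omits the filter matching (all trigger data is assumed to match).

```python
def create_contributions(aggregation_keys, trigger_data_list, values):
    """aggregation_keys: source key id -> key piece (int);
    trigger_data_list: entries with 'source_keys' and 'key_piece';
    values: key id -> contribution value."""
    keys = dict(aggregation_keys)  # clone, as the spec does
    for td in trigger_data_list:
        for source_key in td["source_keys"]:
            if source_key in keys:
                keys[source_key] |= td["key_piece"]  # bitwise-OR key pieces
    # Only keys with a matching value produce a contribution.
    return [{"key": keys[i], "value": values[i]} for i in keys if i in values]
```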

12.7. Can source create aggregatable contributions

To check if an attribution source can create aggregatable contributions given an aggregatable report report and an attribution source sourceToAttribute, run the following steps:

  1. Let remainingAggregatableBudget be allowed aggregatable budget per source minus sourceToAttribute’s aggregatable budget consumed.

  2. Assert: remainingAggregatableBudget is greater than or equal to 0.

  3. If report’s required aggregatable budget is greater than remainingAggregatableBudget, return false.

  4. Return true.
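The budget check above is a single subtraction and comparison. A non-normative sketch with an illustrative budget constant:

```python
AGGREGATABLE_BUDGET = 65536  # illustrative allowed budget per source

def can_create_contributions(required_budget, budget_consumed):
    remaining = AGGREGATABLE_BUDGET - budget_consumed
    assert remaining >= 0
    return required_budget <= remaining
```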

12.8. Obtaining debug data on trigger registration

To obtain debug data body on trigger registration given a trigger debug data type dataType, an attribution trigger trigger, an optional attribution source sourceToAttribute, and an optional attribution report report:

  1. Let body be a new empty map.

  2. If dataType is:

    "trigger-attributions-per-source-destination-limit"

    Set body["limit"] to the user agent’s max attributions per rate-limit window, serialized.

    "trigger-reporting-origin-limit"

    Set body["limit"] to the user agent’s max attribution reporting origins per rate-limit window, serialized.

    "trigger-event-storage-limit"

    Set body["limit"] to max event-level reports per attribution destination, serialized.

    "trigger-aggregate-storage-limit"

    Set body["limit"] to max aggregatable reports per attribution destination, serialized.

    "trigger-aggregate-insufficient-budget"

    Set body["limit"] to allowed aggregatable budget per source, serialized.

    "trigger-aggregate-excessive-reports"

    Set body["limit"] to max aggregatable reports per source, serialized.

    "trigger-event-low-priority"
    "trigger-event-excessive-reports"
    1. Assert: report is not null and is an event-level report.

    2. Return the result of running obtain an event-level report body with report.

  3. Set body["attribution_destination"] to trigger’s attribution destination, serialized.

  4. If trigger’s debug key is not null, set body["trigger_debug_key"] to trigger’s debug key, serialized.

  5. If sourceToAttribute is not null:

    1. Set body["source_event_id"] to sourceToAttribute’s event ID, serialized.

    2. Set body["source_site"] to sourceToAttribute’s source site, serialized.

    3. If sourceToAttribute’s debug key is not null, set body["source_debug_key"] to sourceToAttribute’s debug key, serialized.

  6. Return body.

To obtain debug data on trigger registration given a trigger debug data type dataType, an attribution trigger trigger, an optional attribution source sourceToAttribute, and an optional attribution report report:

  1. If trigger’s debug reporting enabled is false, return null.

  2. If the result of running check if cookie-based debugging is allowed with trigger’s reporting origin and trigger’s attribution destination is blocked, return null.

  3. If sourceToAttribute is not null and sourceToAttribute’s debug cookie set is false, return null.

  4. Let data be a new attribution debug data with the items:

    data type

    dataType

    body

    The result of running obtain debug data body on trigger registration with dataType, trigger, sourceToAttribute and report.

  5. Return data.

12.9. Triggering event-level attribution

An event-level report a is lower-priority than an event-level report b if any of the following are true:

An event-level-report-replacement result is one of the following:

"add-new-report"

The new report should be added.

"drop-new-report-none-to-replace"

The new report should be dropped because the attributed source has reached its report limit and there is no pending report to consider for replacement.

"drop-new-report-low-priority"

The new report should be dropped because the attributed source has reached its report limit and the new report is lower-priority than all pending reports.

To maybe replace event-level report given an attribution source sourceToAttribute and an event-level report report:

  1. Assert: sourceToAttribute’s number of event-level reports is less than or equal to sourceToAttribute’s max number of event-level reports.

  2. If sourceToAttribute’s number of event-level reports is less than sourceToAttribute’s max number of event-level reports, return "add-new-report".

  3. Let matchingReports be a new list whose elements are all the elements in the event-level report cache whose report time and source identifier are equal to report’s, sorted in ascending order using is lower-priority than.

  4. If matchingReports is empty:

    1. Set sourceToAttribute’s event-level attributable value to false.

    2. Return "drop-new-report-none-to-replace".

  5. Assert: sourceToAttribute’s number of event-level reports is greater than or equal to matchingReports’s size.

  6. Let lowestPriorityReport be matchingReports[0].

  7. If report is lower-priority than lowestPriorityReport, return "drop-new-report-low-priority".

  8. Remove lowestPriorityReport from the event-level report cache.

  9. Decrement sourceToAttribute’s number of event-level reports value by 1.

  10. Return "add-new-report".

This algorithm is not compatible with the behavior proposed for experimental Flexible Event support with differing event-level report windows for a given source.
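The replacement steps above can be sketched in Python. This is illustrative only: the names (`Source`, `maybe_replace_event_level_report`) are not part of the spec, and the is-lower-priority ordering is passed in as a predicate rather than reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Source:
    # Minimal stand-in for an attribution source's bookkeeping fields.
    num_reports: int
    max_reports: int
    event_level_attributable: bool = True

def maybe_replace_event_level_report(source, new_report, matching_reports,
                                     is_lower_priority):
    """matching_reports: pending cached reports with the same report time
    and source identifier, sorted ascending by is_lower_priority."""
    assert source.num_reports <= source.max_reports
    if source.num_reports < source.max_reports:
        return "add-new-report"
    if not matching_reports:
        source.event_level_attributable = False
        return "drop-new-report-none-to-replace"
    assert source.num_reports >= len(matching_reports)
    lowest_priority_report = matching_reports[0]
    if is_lower_priority(new_report, lowest_priority_report):
        return "drop-new-report-low-priority"
    matching_reports.pop(0)  # evict lowestPriorityReport from the cache
    source.num_reports -= 1
    return "add-new-report"
```

Note how the "none to replace" branch also clears the source's event-level attributable flag, permanently ending event-level attribution for that source.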

To trigger event-level attribution given an attribution trigger trigger, an attribution source sourceToAttribute, and an attribution rate-limit record rateLimitRecord, run the following steps:

  1. If trigger’s event-level trigger configurations is empty, return the triggering result ("dropped", null).

  2. If sourceToAttribute’s randomized response is not null and is not empty:

    1. Assert: sourceToAttribute’s event-level attributable is false.

    2. Let debugData be the result of running obtain debug data on trigger registration with "trigger-event-noise", trigger, sourceToAttribute and report set to null.

    3. Return the triggering result ("dropped", debugData).

  3. Let matchedConfig be null.

  4. For each event-level trigger configuration config of trigger’s event-level trigger configurations:

    1. If the result of running match an attribution source against filters and negated filters with sourceToAttribute, config’s filters, config’s negated filters, and trigger’s trigger time is true:

      1. Set matchedConfig to config.

      2. Break.

  5. If matchedConfig is null:

    1. Let debugData be the result of running obtain debug data on trigger registration with "trigger-event-no-matching-configurations", trigger, sourceToAttribute and report set to null.

    2. Return the triggering result ("dropped", debugData).

  6. If matchedConfig’s dedup key is not null and sourceToAttribute’s dedup keys contains it:

    1. Let debugData be the result of running obtain debug data on trigger registration with "trigger-event-deduplicated", trigger, sourceToAttribute and report set to null.

    2. Return the triggering result ("dropped", debugData).

  7. Let specEntry be the result of finding a matching trigger spec with sourceToAttribute and matchedConfig’s trigger data.

  8. If specEntry is null:

    1. Let debugData be the result of running obtain debug data on trigger registration with "trigger-event-no-matching-trigger-data", trigger, sourceToAttribute and report set to null.

    2. Return the triggering result ("dropped", debugData).

  9. Let windowResult be the result of check whether a moment falls within a window with trigger’s trigger time and specEntry’s value's event-level report windows's total window.

  10. If windowResult is falls before:

    1. Let debugData be the result of running obtain debug data on trigger registration with "trigger-event-report-window-not-started", trigger, sourceToAttribute and report set to null.

    2. Return the triggering result ("dropped", debugData).

  11. If windowResult is falls after:

    1. Let debugData be the result of running obtain debug data on trigger registration with "trigger-event-report-window-passed", trigger, sourceToAttribute and report set to null.

    2. Return the triggering result ("dropped", debugData).

  12. Assert: windowResult is falls within.

  13. Let numMatchingReports be the number of entries in the event-level report cache whose attribution destinations contains trigger’s attribution destination.

  14. If numMatchingReports is greater than or equal to the user agent’s max event-level reports per attribution destination:

    1. Let debugData be the result of running obtain debug data on trigger registration with "trigger-event-storage-limit", trigger, sourceToAttribute and report set to null.

    2. Return the triggering result ("dropped", debugData).

  15. If the result of running should attribution be blocked by rate limits with trigger, sourceToAttribute, and rateLimitRecord is not null, return it.

  16. Let report be the result of running obtain an event-level report with sourceToAttribute, trigger’s trigger time, trigger’s debug key, matchedConfig’s priority, and specEntry.

  17. If sourceToAttribute’s event-level attributable value is false:

    1. Let debugData be the result of running obtain debug data on trigger registration with "trigger-event-excessive-reports", trigger, sourceToAttribute and report.

    2. Return the triggering result ("dropped", debugData).

  18. If the result of running maybe replace event-level report with sourceToAttribute and report is:

    "add-new-report"

    Do nothing.

    "drop-new-report-none-to-replace"
    1. Let debugData be the result of running obtain debug data on trigger registration with "trigger-event-excessive-reports", trigger, sourceToAttribute and report.

    2. Return the triggering result ("dropped", debugData).

    "drop-new-report-low-priority"
    1. Let debugData be the result of running obtain debug data on trigger registration with "trigger-event-low-priority", trigger, sourceToAttribute and report.

    2. Return the triggering result ("dropped", debugData).

  19. Let triggeringStatus be "attributed".

  20. Let debugData be null.

  21. If sourceToAttribute’s randomized response is:

    null

    Append report to the event-level report cache.

    not null
    1. Set triggeringStatus to "noised".

    2. Set debugData to the result of running obtain debug data on trigger registration with "trigger-event-noise", trigger, sourceToAttribute and report set to null.

  22. Increment sourceToAttribute’s number of event-level reports value by 1.

  23. If matchedConfig’s dedup key is not null, append it to sourceToAttribute’s dedup keys.

  24. If triggeringStatus is "attributed" and report’s source debug key is not null and report’s trigger debug key is not null, queue a task to attempt to deliver a debug report with report.

  25. Return the triggering result (triggeringStatus, debugData).

12.10. Triggering aggregatable attribution

To trigger aggregatable attribution given an attribution trigger trigger, an attribution source sourceToAttribute, and an attribution rate-limit record rateLimitRecord, run the following steps:

  1. If the result of running check if an attribution trigger contains aggregatable data with trigger is false, return the triggering result ("dropped", null).

  2. Let windowResult be the result of check whether a moment falls within a window with trigger’s trigger time and sourceToAttribute’s aggregatable report window.

  3. If windowResult is falls after:

    1. Let debugData be the result of running obtain debug data on trigger registration with "trigger-aggregate-report-window-passed", trigger, sourceToAttribute and report set to null.

    2. Return the triggering result ("dropped", debugData).

  4. Assert: windowResult is falls within.

  5. Let matchedDedupKey be null.

  6. For each aggregatable dedup key aggregatableDedupKey of trigger’s aggregatable dedup keys:

    1. If the result of running match an attribution source against filters and negated filters with sourceToAttribute, aggregatableDedupKey’s filters, aggregatableDedupKey’s negated filters, and trigger’s trigger time is true:

      1. Set matchedDedupKey to aggregatableDedupKey’s dedup key.

      2. Break.

  7. If matchedDedupKey is not null and sourceToAttribute’s aggregatable dedup keys contains it:

    1. Let debugData be the result of running obtain debug data on trigger registration with "trigger-aggregate-deduplicated", trigger, sourceToAttribute and report set to null.

    2. Return the triggering result ("dropped", debugData).

  8. Let report be the result of running obtain an aggregatable report with sourceToAttribute and trigger.

  9. If report’s contributions is empty:

    1. Let debugData be the result of running obtain debug data on trigger registration with "trigger-aggregate-no-contributions", trigger, sourceToAttribute and report set to null.

    2. Return the triggering result ("dropped", debugData).

  10. Let numMatchingReports be the number of entries in the aggregatable report cache whose effective attribution destination equals trigger’s attribution destination and whose is null report is false.

  11. If numMatchingReports is greater than or equal to the user agent’s max aggregatable reports per attribution destination:

    1. Let debugData be the result of running obtain debug data on trigger registration with "trigger-aggregate-storage-limit", trigger, sourceToAttribute and report set to null.

    2. Return the triggering result ("dropped", debugData).

  12. If the result of running should attribution be blocked by rate limits with trigger, sourceToAttribute, and rateLimitRecord is not null, return it.

  13. If sourceToAttribute’s number of aggregatable reports value is equal to max aggregatable reports per source, then:

    1. Let debugData be the result of running obtain debug data on trigger registration with "trigger-aggregate-excessive-reports", trigger, sourceToAttribute, and report.

    2. Return the triggering result ("dropped", debugData).

  14. If the result of running check if an attribution source can create aggregatable contributions with report and sourceToAttribute is false:

    1. Let debugData be the result of running obtain debug data on trigger registration with "trigger-aggregate-insufficient-budget", trigger, sourceToAttribute and report set to null.

    2. Return the triggering result ("dropped", debugData).

  15. Add report to the aggregatable report cache.

  16. Increment sourceToAttribute’s number of aggregatable reports value by 1.

  17. Increment sourceToAttribute’s aggregatable budget consumed value by report’s required aggregatable budget.

  18. If matchedDedupKey is not null, append it to sourceToAttribute’s aggregatable dedup keys.

  19. Run generate null reports and assign private state tokens with trigger and report.

  20. If report’s source debug key is not null and report’s trigger debug key is not null, queue a task to attempt to deliver a debug report with report.

  21. Return the triggering result ("attributed", null).

12.11. Triggering attribution

To obtain and deliver a debug report on trigger registration given a trigger debug data type dataType, an attribution trigger trigger and an optional attribution source sourceToAttribute:

  1. Let debugData be the result of running obtain debug data on trigger registration with dataType, trigger, sourceToAttribute and report set to null.

  2. If debugData is null, return.

  3. Run obtain and deliver a debug report with « debugData » and trigger’s reporting origin.

To find matching sources given an attribution trigger trigger:

  1. Let matchingSources be a new empty list.

  2. For each source of the attribution source cache:

    1. If source’s attribution destinations does not contain trigger’s attribution destination, continue.

    2. If source’s reporting origin and trigger’s reporting origin are not same origin, continue.

    3. If source’s expiry time is less than or equal to trigger’s trigger time, continue.

    4. Append source to matchingSources.

  3. Set matchingSources to the result of sorting matchingSources in descending order, with a being less than b if any of the following are true:

  4. Return matchingSources.
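The matching steps can be sketched as a filter followed by a sort. This is a rough illustration: sources and triggers are modeled as plain dicts, the same-origin check is simplified to string equality, and the descending sort order (elided in this excerpt) is supplied as a hypothetical `sort_key` parameter.

```python
def find_matching_sources(trigger, source_cache, sort_key):
    """Return cached sources eligible to be matched against trigger,
    best candidate first. sort_key is a stand-in for the spec's ordering."""
    matching = [
        s for s in source_cache
        # Destination must match, reporting origins must be same origin
        # (simplified here to equality), and the source must not have expired.
        if trigger["destination"] in s["destinations"]
        and s["reporting_origin"] == trigger["reporting_origin"]
        and s["expiry_time"] > trigger["trigger_time"]
    ]
    return sorted(matching, key=sort_key, reverse=True)  # descending
```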

To check if an attribution trigger contains aggregatable data given an attribution trigger trigger, run the following steps:

  1. If trigger’s aggregatable trigger data is not empty, return true.

  2. If any of trigger’s aggregatable values configurations's values is not empty, return true.

  3. Return false.

To trigger attribution given an attribution trigger trigger, run the following steps:

  1. Let hasAggregatableData be the result of checking if an attribution trigger contains aggregatable data with trigger.

  2. If trigger’s event-level trigger configurations is empty and hasAggregatableData is false, return.

  3. Let matchingSources be the result of running find matching sources with trigger.

  4. If matchingSources is empty:

    1. Run obtain and deliver a debug report on trigger registration with "trigger-no-matching-source", trigger and sourceToAttribute set to null.

    2. If hasAggregatableData is true, then run generate null reports and assign private state tokens with trigger and report set to null.

    3. Return.

  5. Let sourceToAttribute be matchingSources[0].

  6. If the result of running match an attribution source against filters and negated filters with sourceToAttribute, trigger’s filters, trigger’s negated filters, and trigger’s trigger time is false:

    1. Run obtain and deliver a debug report on trigger registration with "trigger-no-matching-filter-data", trigger, and sourceToAttribute.

    2. If hasAggregatableData is true, then run generate null reports and assign private state tokens with trigger and report set to null.

    3. Return.

  7. Remove sourceToAttribute from matchingSources.

  8. For each item of matchingSources:

    1. Remove item from the attribution source cache.

  9. Let rateLimitRecord be a new attribution rate-limit record with the items:

    scope

    "attribution"

    source site

    sourceToAttribute’s source site

    attribution destination

    trigger’s attribution destination

    reporting origin

    sourceToAttribute’s reporting origin

    time

    sourceToAttribute’s source time

    expiry time

    null

  10. Let eventLevelResult be the result of running trigger event-level attribution with trigger, sourceToAttribute, and rateLimitRecord.

  11. Let aggregatableResult be the result of running trigger aggregatable attribution with trigger, sourceToAttribute, and rateLimitRecord.

  12. Let eventLevelDebugData be eventLevelResult’s debug data.

  13. Let aggregatableDebugData be aggregatableResult’s debug data.

  14. Let debugDataList be an empty list.

  15. If eventLevelDebugData is not null, then append eventLevelDebugData to debugDataList.

  16. If aggregatableDebugData is not null:

    1. If debugDataList is empty or aggregatableDebugData’s data type does not equal eventLevelDebugData’s data type, then append aggregatableDebugData to debugDataList.

  17. If debugDataList is not empty, then run obtain and deliver a debug report with debugDataList and trigger’s reporting origin.

  18. If hasAggregatableData is true and aggregatableResult’s status is "dropped", run generate null reports and assign private state tokens with trigger and report set to null.

  19. If both eventLevelResult’s status and aggregatableResult’s status are "dropped", return.

  20. If neither eventLevelResult’s status nor aggregatableResult’s status is "attributed", return.

  21. Append rateLimitRecord to the attribution rate-limit cache.

  22. For each attribution rate-limit record entry of the attribution rate-limit cache: if the result of running can attribution rate-limit record be removed with entry and trigger’s trigger time is true, remove entry from the attribution rate-limit cache.

12.12. Establishing report delivery time

To check whether a moment falls within a window given a moment moment and a report window window:

  1. If moment is less than window’s start, return falls before.

  2. If moment is greater than or equal to window’s end, return falls after.

  3. Return falls within.

To obtain an event-level report delivery time given a report window list windows and a moment triggerTime:

  1. If automation local testing mode is true, return triggerTime.

  2. For each window of windows:

    1. If the result of check whether a moment falls within a window with triggerTime and window is falls within, return window’s end.

  3. Assert: not reached.
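The window check and event-level delivery-time steps translate directly to code. In this sketch (names are illustrative, not from the spec), a report window is a `(start, end)` pair with an inclusive start and exclusive end, matching the comparisons above.

```python
def check_window(moment, window):
    """Classify moment against a (start, end) report window."""
    start, end = window
    if moment < start:
        return "falls before"
    if moment >= end:        # end is exclusive
        return "falls after"
    return "falls within"

def event_level_delivery_time(windows, trigger_time):
    # Deliver at the end of whichever window contains the trigger time.
    for window in windows:
        if check_window(trigger_time, window) == "falls within":
            return window[1]
    raise AssertionError("not reached")
```

Because each window's end is exclusive and the next window starts where the previous ends, a trigger time on a boundary falls into the later window.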

To obtain an aggregatable report delivery time given an attribution trigger trigger, perform the following steps. They return a moment.

  1. Let triggerTime be trigger’s trigger time.

  2. If trigger’s trigger context ID is not null, return triggerTime.

  3. Let r be a random double between 0 (inclusive) and 1 (exclusive) with uniform probability.

  4. Return triggerTime + r * randomized aggregatable report delay.
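The aggregatable delivery-time steps amount to adding a uniform random delay unless a trigger context ID pins the report to the trigger time. A sketch, with the delay constant assumed to be 600 seconds purely for illustration (the spec's actual "randomized aggregatable report delay" value is not given in this section):

```python
import random

RANDOMIZED_AGGREGATABLE_REPORT_DELAY = 600  # assumed value, in seconds

def aggregatable_delivery_time(trigger_time, trigger_context_id,
                               rng=random.random):
    # A non-null trigger context ID forces immediate scheduling.
    if trigger_context_id is not None:
        return trigger_time
    # Otherwise add a uniform random delay in [0, DELAY).
    return trigger_time + rng() * RANDOMIZED_AGGREGATABLE_REPORT_DELAY
```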

12.13. Obtaining an event-level report

To obtain an event-level report given an attribution source source, a moment triggerTime, an optional non-negative 64-bit integer triggerDebugKey, a 64-bit integer priority priority, and a trigger spec map entry specEntry:

  1. Let reportTime be the result of running obtain an event-level report delivery time with specEntry’s value's event-level report windows and triggerTime.

  2. Let report be a new event-level report struct whose items are:

    event ID

    source’s event ID.

    trigger data

    specEntry’s key

    randomized trigger rate

    source’s randomized trigger rate.

    reporting origin

    source’s reporting origin.

    attribution destinations

    source’s attribution destinations.

    report time

    reportTime

    trigger priority

    priority.

    trigger time

    triggerTime.

    source identifier

    source’s source identifier.

    report id

    The result of generating a random UUID.

    source debug key

    source’s debug key.

    trigger debug key

    triggerDebugKey.

  3. Return report.

12.14. Obtaining an aggregatable report’s required budget

An aggregatable report report’s required aggregatable budget is the total value of report’s contributions.

12.15. Obtaining an aggregatable report

To obtain an aggregatable report given an attribution source source and an attribution trigger trigger:

  1. Let reportTime be the result of running obtain an aggregatable report delivery time with trigger.

  2. Let report be a new aggregatable report struct whose items are:

    reporting origin

    source’s reporting origin.

    effective attribution destination

    trigger’s attribution destination.

    source time

    source’s source time.

    report time

    reportTime.

    report id

    The result of generating a random UUID.

    source debug key

    source’s debug key.

    trigger debug key

    trigger’s debug key.

    contributions

    The result of running create aggregatable contributions with source and trigger.

    serialized private state token

    null.

    aggregation coordinator

    trigger’s aggregation coordinator.

    source registration time configuration

    trigger’s aggregatable source registration time configuration.

    trigger context ID

    trigger’s trigger context ID

  3. Return report.

12.16. Generating randomized null reports

To obtain a null report given an attribution trigger trigger and a moment sourceTime:

  1. Let reportTime be the result of running obtain an aggregatable report delivery time with trigger.

  2. Let report be a new aggregatable report struct whose items are:

    reporting origin

    trigger’s reporting origin

    effective attribution destination

    trigger’s attribution destination

    source time

    sourceTime

    report time

    reportTime

    report id

    The result of generating a random UUID

    source debug key

    null

    trigger debug key

    trigger’s debug key

    contributions

    «»

    serialized private state token

    null

    aggregation coordinator

    trigger’s aggregation coordinator

    source registration time configuration

    trigger’s aggregatable source registration time configuration

    is null report

    true

    trigger context ID

    trigger’s trigger context ID

  3. Return report.

To obtain rounded source time given a moment sourceTime, return sourceTime in seconds since the UNIX epoch, rounded down to a multiple of a whole day (86400 seconds).

To determine if a randomized null report is generated given a double randomPickRate:

  1. Assert: randomPickRate is between 0 and 1 (both inclusive).

  2. Let r be a random double between 0 (inclusive) and 1 (exclusive) with uniform probability.

  3. If r is less than randomPickRate, return true.

  4. Otherwise, return false.
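The two small algorithms above, rounding a source time down to a whole day and flipping a biased coin for null-report generation, can be sketched as:

```python
import random

DAY = 86400  # seconds in a whole day

def rounded_source_time(source_time):
    # Seconds since the UNIX epoch, rounded down to a multiple of a day.
    return (int(source_time) // DAY) * DAY

def randomized_null_report_generated(random_pick_rate, rng=random.random):
    assert 0 <= random_pick_rate <= 1
    # rng() is uniform in [0, 1), so the result is True with
    # probability random_pick_rate.
    return rng() < random_pick_rate
```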

To generate null reports given an attribution trigger trigger and an optional aggregatable report report defaulting to null:

  1. Let nullReports be a new empty list.

  2. If trigger’s aggregatable source registration time configuration is "exclude":

    1. Let randomizedNullReportRate be randomized null report rate excluding source registration time.

    2. If trigger’s trigger context ID is not null, set randomizedNullReportRate to 1.

    3. If report is null and the result of determining if a randomized null report is generated with randomizedNullReportRate is true:

      1. Let nullReport be the result of obtaining a null report with trigger and trigger’s trigger time.

      2. Append nullReport to the aggregatable report cache.

      3. Append nullReport to nullReports.

  3. Otherwise:

    1. Assert: trigger’s trigger context ID is null.

    2. Let maxSourceExpiry be valid source expiry range[1].

    3. Round maxSourceExpiry away from zero to the nearest day (86400 seconds).

    4. Let roundedAttributedSourceTime be null.

    5. If report is not null, set roundedAttributedSourceTime to the result of obtaining rounded source time with report’s source time.

    6. For each integer day of the range 0 to the number of days in maxSourceExpiry, inclusive:

      1. Let fakeSourceTime be trigger’s trigger time - day days.

      2. If roundedAttributedSourceTime is not null and equals the result of obtaining rounded source time with fakeSourceTime:

        1. Continue.

      3. If the result of determining if a randomized null report is generated with randomized null report rate including source registration time is true:

        1. Let nullReport be the result of obtaining a null report with trigger and fakeSourceTime.

        2. Append nullReport to the aggregatable report cache.

        3. Append nullReport to nullReports.

  4. Return nullReports.
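The per-day loop in the "include source registration time" branch can be sketched as follows. Names and the dependency-injected helpers (`rounded`, `should_generate`) are illustrative; the real algorithm draws from the "randomized null report rate including source registration time" constant, which is parameterized away here.

```python
DAY = 86400  # seconds in a whole day

def fake_source_times(trigger_time, max_source_expiry_days,
                      attributed_source_time, rounded, should_generate):
    """One candidate fake source time per day of the maximum expiry,
    skipping the day of the real attributed source (if any)."""
    times = []
    rounded_real = (None if attributed_source_time is None
                    else rounded(attributed_source_time))
    for day in range(max_source_expiry_days + 1):  # 0..N inclusive
        fake_time = trigger_time - day * DAY
        # Never emit a null report that collides with the real source's
        # rounded registration day.
        if rounded_real is not None and rounded(fake_time) == rounded_real:
            continue
        if should_generate():
            times.append(fake_time)
    return times
```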

To shuffle a list list, reorder list’s elements such that each possible permutation has equal probability of appearance.

To assign private state tokens given a list of aggregatable reports reports and an attribution trigger trigger:

  1. If reports is empty, return.

  2. Let verifications be trigger’s verifications.

  3. If verifications is empty, return.

  4. Shuffle reports.

  5. Shuffle verifications.

  6. Let n be the minimum of reports’s size and verifications’s size.

  7. For each integer i of the range 0 to n, exclusive:

    1. Set reports[i]'s report ID to verifications[i]'s id.

    2. Set reports[i]'s serialized private state token to verifications[i]'s token.
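The token-assignment steps pair shuffled reports with shuffled verifications up to the shorter list's length; Python's `zip` gives that truncation for free. A sketch, with reports and verifications modeled as hypothetical dicts:

```python
import random

def assign_private_state_tokens(reports, verifications):
    """Pair each of min(len(reports), len(verifications)) shuffled
    reports with a shuffled verification's id and token."""
    if not reports or not verifications:
        return
    # Shuffling both lists breaks any correlation between report order
    # and verification order.
    random.shuffle(reports)
    random.shuffle(verifications)
    for report, verification in zip(reports, verifications):
        report["report_id"] = verification["id"]
        report["token"] = verification["token"]
```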

To generate null reports and assign private state tokens given an attribution trigger trigger and an optional aggregatable report report defaulting to null:

  1. Let reports be the result of generating null reports with trigger and report.

  2. If report is not null:

    1. Append report to reports.

  3. Run assign private state tokens with reports and trigger.

12.17. Deferring trigger attribution

To maybe defer and then complete trigger attribution given an attribution trigger trigger, run the following steps in parallel:

  1. Let navigation be the navigation that landed on the document from which trigger’s registration was initiated.

  2. If navigation is null, return.

  3. Let sources be all source registrations originating from background attributionsrc requests initiated by navigation.

  4. If sources is empty, return.

  5. Wait until all sources are processed.

  6. Queue a task on the networking task source to trigger attribution with trigger.

Specify this in terms of Navigation

13. Report delivery

The user agent MUST periodically run queue reports for delivery on the event-level report cache and aggregatable report cache.

To queue reports for delivery given a set of attribution reports cache, run the following steps:

  1. For each report of cache:

    1. If report’s report time is greater than the current wall time, continue.

    2. Remove report from cache.

      Note: In order to support sending, waiting, and retries across various forms of interruption, including shutdown, the user agent may need to persist reports that are in the process of being sent in some other storage.

    3. Run the following steps in parallel:

      1. Wait an implementation-defined random non-negative duration.

        Note: On startup, it is possible the user agent will need to send many reports whose report times passed while the browser was closed. Adding random delay prevents temporal joining of reports from different source origins.

      2. Optionally, wait a further implementation-defined duration.

        Note: This is intended to allow user agents to optimize device resource usage.

      3. Run attempt to deliver a report with report.

13.1. Encode an unsigned k-bit integer

To encode an unsigned k-bit integer, represent it as a big-endian byte sequence of length k / 8, left padding with zero as necessary.
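This encoding maps directly onto Python's built-in `int.to_bytes`, which produces exactly the big-endian, left-zero-padded byte sequence described (a sketch; the function name is illustrative):

```python
def encode_unsigned(value, k):
    """Encode value as an unsigned k-bit big-endian byte sequence."""
    assert k % 8 == 0 and 0 <= value < 2 ** k
    # to_bytes left-pads with zero bytes to the requested length.
    return value.to_bytes(k // 8, byteorder="big")
```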

13.2. Obtaining an aggregatable report’s debug mode

An aggregatable report report’s debug mode is the result of running the following steps:

  1. If report’s source debug key is null, return disabled.

  2. If report’s trigger debug key is null, return disabled.

  3. Return enabled.

13.3. Obtaining an aggregatable report’s shared info

An aggregatable report report’s shared info is the result of running the following steps:

  1. Let reportingOrigin be report’s reporting origin.

  2. Let sharedInfo be an ordered map of the following key/value pairs:

    "api"

    "attribution-reporting"

    "attribution_destination"

    report’s effective attribution destination, serialized

    "report_id"

    report’s report ID

    "reporting_origin"

    reportingOrigin, serialized

    "scheduled_report_time"

    report’s report time in seconds since the UNIX epoch, serialized

    "version"

    A string, API version.

  3. If report’s debug mode is enabled, set sharedInfo["debug_mode"] to "enabled".

  4. If report’s source registration time configuration is:

    "include"

    Set sharedInfo["source_registration_time"] to the result of obtaining rounded source time with report’s source time, serialized.

    "exclude"

    Set sharedInfo["source_registration_time"] to "0".

  5. Return the string resulting from executing serialize an infra value to a JSON string on sharedInfo.
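The shared-info construction above can be sketched as building and serializing a JSON object. This is illustrative: the report is a plain dict, the `api_version` placeholder stands in for the spec's API version string (not given in this excerpt), and the debug-mode rule from §13.2 (enabled only when both debug keys are set) is inlined.

```python
import json

DAY = 86400

def shared_info(report, api_version="placeholder-version"):
    info = {
        "api": "attribution-reporting",
        "attribution_destination": report["destination"],
        "report_id": report["report_id"],
        "reporting_origin": report["reporting_origin"],
        "scheduled_report_time": str(report["report_time"]),
        "version": api_version,
    }
    # Debug mode is enabled only when both debug keys are present (§13.2).
    if (report.get("source_debug_key") is not None
            and report.get("trigger_debug_key") is not None):
        info["debug_mode"] = "enabled"
    if report["source_registration_time_config"] == "include":
        rounded = (report["source_time"] // DAY) * DAY
        info["source_registration_time"] = str(rounded)
    else:  # "exclude"
        info["source_registration_time"] = "0"
    return json.dumps(info)
```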

13.4. Obtaining an aggregatable report’s aggregation service payloads

To obtain the public key for encryption given an aggregation coordinator aggregationCoordinator:

  1. Let url be a new URL record.

  2. Set url’s scheme to aggregationCoordinator’s scheme.

  3. Set url’s host to aggregationCoordinator’s host.

  4. Set url’s port to aggregationCoordinator’s port.

  5. Set url’s path to «".well-known", "aggregation-service", "v1", "public-keys"».

  6. Return a user-agent-determined public key from url or an error in the event that the user agent failed to obtain the public key from url. This step may be asynchronous.

Specify this in terms of fetch.

Note: The user agent might enforce weekly key rotation. If there are multiple keys, the user agent might independently pick a key uniformly at random for every encryption operation. The key should be uniquely identifiable.

An aggregatable report report’s plaintext payload is the result of running the following steps:

  1. Let payloadData be a new empty list.

  2. Let contributions be report’s contributions.

  3. While contributions’s size is less than max aggregation keys per source registration:

    1. Let nullContribution be a new aggregatable contribution with the items:

      key

      0

      value

      0

    2. Append nullContribution to contributions.

  4. For each contribution of contributions:

    1. Let contributionData be a map of the following key/value pairs:

      "bucket"

      contribution’s key, encoded

      "value"

      contribution’s value, encoded

    2. Append contributionData to payloadData.

  5. Let payload be a map of the following key/value pairs:

    "data"

    payloadData

    "operation"

    "histogram"

  6. Return the byte sequence resulting from CBOR encoding payload.
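The padding and payload-assembly steps can be sketched as below. This builds only the pre-CBOR map; the final CBOR encoding would use a library (e.g. cbor2, not shown), and the per-field integer encoding of bucket and value from §13.1 is elided for clarity.

```python
def plaintext_payload_map(contributions, max_aggregation_keys):
    """Assemble the payload map for an aggregatable report.

    contributions: list of {"key": int, "value": int} dicts (illustrative
    shape, not from the spec)."""
    padded = list(contributions)
    # Pad with zero-valued contributions so every report carries the same
    # number of entries, hiding the true contribution count.
    while len(padded) < max_aggregation_keys:
        padded.append({"key": 0, "value": 0})
    data = [{"bucket": c["key"], "value": c["value"]} for c in padded]
    return {"data": data, "operation": "histogram"}
```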

To obtain the encrypted payload given an aggregatable report report and a public key pkR, run the following steps:

  1. Let plaintext be report’s plaintext payload.

  2. Let encodedSharedInfo be report’s shared info, encoded.

  3. Let info be the concatenation of «"aggregation_service", encodedSharedInfo».

  4. Set up HPKE sender’s context with pkR and info.

  5. Return the byte sequence or an error resulting from encrypting plaintext with the sender’s context.

To obtain the aggregation service payloads given an aggregatable report report, run the following steps:

  1. Let pkR be the result of running obtain the public key for encryption with report’s aggregation coordinator.

  2. If pkR is an error, return pkR.

  3. Let encryptedPayload be the result of running obtain the encrypted payload with report and pkR.

  4. If encryptedPayload is an error, return encryptedPayload.

  5. Let aggregationServicePayloads be a new empty list.

  6. Let aggregationServicePayload be a map of the following key/value pairs:

    "payload"

    encryptedPayload, base64 encoded

    "key_id"

    A string identifying pkR

  7. If report’s debug mode is enabled, set aggregationServicePayload["debug_cleartext_payload"] to report’s plaintext payload, base64 encoded.

  8. Append aggregationServicePayload to aggregationServicePayloads.

  9. Return aggregationServicePayloads.

13.5. Serialize attribution report body

To obtain an event-level report body given an attribution report report, run the following steps:

  1. Let data be a map of the following key/value pairs:

    "attribution_destination"

    report’s attribution destinations, serialized.

    "randomized_trigger_rate"

    report’s randomized trigger rate

    "source_type"

    report’s source type

    "source_event_id"

    report’s event ID, serialized

    "trigger_data"

    report’s trigger data, serialized

    "report_id"

    report’s report ID

    "scheduled_report_time"

    report’s report time in seconds since the UNIX epoch, serialized

  2. If report’s source debug key is not null, set data["source_debug_key"] to report’s source debug key, serialized.

  3. If report’s trigger debug key is not null, set data["trigger_debug_key"] to report’s trigger debug key, serialized.

  4. Return data.

To serialize an event-level report report, run the following steps:

  1. Let data be the result of running obtain an event-level report body with report.

  2. Return the byte sequence resulting from executing serialize an infra value to JSON bytes on data.

To serialize an aggregatable report report, run the following steps:

  1. Assert: report’s effective attribution destination is not an opaque origin.

  2. Let aggregationServicePayloads be the result of running obtain the aggregation service payloads with report.

  3. If aggregationServicePayloads is an error, return aggregationServicePayloads.

  4. Let data be a map of the following key/value pairs:

    "shared_info"

    report’s shared info

    "aggregation_service_payloads"

    aggregationServicePayloads

    "aggregation_coordinator_origin"

    report’s aggregation coordinator, serialized

  5. If report’s source debug key is not null, set data["source_debug_key"] to report’s source debug key, serialized.

  6. If report’s trigger debug key is not null, set data["trigger_debug_key"] to report’s trigger debug key, serialized.

  7. If report’s trigger context ID is not null, set data["trigger_context_id"] to report’s trigger context ID.

  8. Return the byte sequence resulting from executing serialize an infra value to JSON bytes on data.

To serialize an attribution report report, run the following steps:

  1. If report is an:

    event-level report
    Return the result of running serialize an event-level report with report.
    aggregatable report
    Return the result of running serialize an aggregatable report with report.

Note: The inclusion of "report_id" in the report body is intended to allow the report recipient to perform deduplication and prevent double counting, in the event that the user agent retries reports on failure. To prevent the report recipient from learning additional information about whether a user is online, retries might be limited in number and subject to random delays.

13.6. Serialize attribution debug report body

To serialize an attribution debug report report, run the following steps:

  1. Let collection be an empty list.

  2. For each debugData of report’s data:

    1. Let data be a map of the following key/value pairs:

      "type"

      debugData’s data type

      "body"

      debugData’s body

    2. Append data to collection.

  3. Return the byte sequence resulting from executing serialize an infra value to JSON bytes on collection.

13.7. Get report request URL

To generate a report URL given a suitable origin reportingOrigin and a list of strings path:

  1. Let reportUrl be a new URL record.

  2. Set reportUrl’s scheme to reportingOrigin’s scheme.

  3. Set reportUrl’s host to reportingOrigin’s host.

  4. Set reportUrl’s port to reportingOrigin’s port.

  5. Let fullPath be «".well-known", "attribution-reporting"».

  6. Extend fullPath with path.

  7. Set reportUrl’s path to fullPath.

  8. Return reportUrl.

To generate an attribution report URL given an attribution report report and an optional boolean isDebugReport (default false):

  1. Let path be an empty list.

  2. If isDebugReport is true, append "debug" to path.

  3. If report is an:

    event-level report
    Append "report-event-attribution" to path.
    aggregatable report
    Append "report-aggregate-attribution" to path.
  4. Return the result of running generate a report URL with report’s reporting origin and path.

To generate an attribution debug report URL given an attribution debug report report:

  1. Let path be «"debug", "verbose"».

  2. Return the result of running generate a report URL with report’s reporting origin and path.
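The URL-generation algorithms above can be sketched as follows. This is a simplified illustration that assumes the reporting origin is given as a serialized origin string; the spec operates on URL records instead.

```python
from urllib.parse import urlsplit, urlunsplit

def generate_report_url(reporting_origin: str, path: list[str]) -> str:
    # Reports are delivered under the origin's well-known path.
    parts = urlsplit(reporting_origin)
    full_path = [".well-known", "attribution-reporting", *path]
    return urlunsplit((parts.scheme, parts.netloc,
                       "/" + "/".join(full_path), "", ""))

def generate_attribution_report_url(reporting_origin: str,
                                    report_type: str,
                                    is_debug_report: bool = False) -> str:
    path = []
    if is_debug_report:
        path.append("debug")
    path.append("report-event-attribution" if report_type == "event-level"
                else "report-aggregate-attribution")
    return generate_report_url(reporting_origin, path)

def generate_attribution_debug_report_url(reporting_origin: str) -> str:
    # Verbose debug reports use the fixed «"debug", "verbose"» path.
    return generate_report_url(reporting_origin, ["debug", "verbose"])
```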

13.8. Creating a report request

To create a report request given a URL url, a byte sequence body, and a header list newHeaders (defaults to an empty list):

  1. Let headers be a new header list containing a header named "Content-Type" whose value is "application/json".

  2. For each header in newHeaders:

    1. Append header to headers.

  3. Let request be a new request with the following properties:

    method

    "POST"

    URL

    url

    header list

    headers

    body

    A body whose source is body.

    referrer

    "no-referrer"

    client

    null

    origin

    url’s origin

    window

    "no-window"

    service-workers mode

    "none"

    initiator

    ""

    mode

    "same-origin"

    unsafe-request flag

    set

    credentials mode

    "omit"

    cache mode

    "no-store"

  4. Return request.

13.9. Get report request headers

To generate attribution report headers given an attribution report report:

  1. Let newHeaders be a new header list.

  2. If report is an aggregatable report:

    1. If report’s serialized private state token is not null, append a new header named "Sec-Attribution-Reporting-Private-State-Token" to newHeaders whose value is report’s serialized private state token.

  3. Return newHeaders.

13.10. Issuing a report request

This algorithm constructs a request and attempts to deliver it to a suitable origin.

To attempt to deliver a report given an attribution report report, run the following steps:

  1. Assert: Neither the event-level report cache nor the aggregatable report cache contains report.

  2. The user agent MAY ignore the report; if so, return.

  3. Let url be the result of executing generate an attribution report URL on report.

  4. Let data be the result of executing serialize an attribution report on report.

  5. If data is an error, return.

  6. Let headers be the result of executing generate attribution report headers on report.

  7. Let request be the result of executing create a report request on url, data, and headers.

  8. Queue a task to fetch request.

This fetch should use a network partition key for an opaque origin. [Issue #220]

A user agent MAY retry this algorithm in the event that there was an error.

13.11. Issuing a debug report request

To attempt to deliver a debug report given an attribution report report:

  1. The user agent MAY ignore the report; if so, return.

  2. Let url be the result of executing generate an attribution report URL on report with isDebugReport set to true.

  3. Let data be the result of executing serialize an attribution report on report.

  4. If data is an error, return.

  5. Let headers be the result of executing generate attribution report headers on report.

  6. Let request be the result of executing create a report request on url, data, and headers.

  7. Fetch request.

13.12. Issuing a verbose debug request

To attempt to deliver a verbose debug report given an attribution debug report report:

  1. The user agent MAY ignore the report; if so, return.

  2. Let url be the result of executing generate an attribution debug report URL on report.

  3. Let data be the result of executing serialize an attribution debug report on report.

  4. Let request be the result of executing create a report request on url and data.

  5. Fetch request.

This fetch should use a network partition key for an opaque origin. [Issue #220]

A user agent MAY retry this algorithm in the event that there was an error.

14. Cross App and Web Algorithms

An OS registration is a struct with the following items:

URL

A URL

debug reporting enabled

A boolean

To get OS registrations from a header list given a header list headers and a header name name:

  1. Let values be the result of getting name from headers with a type of "list".

  2. If values is not a list, return null.

  3. Let registrations be a new list.

  4. For each value of values:

    1. If value is not a string, continue.

    2. Let url be the result of running the URL parser on value.

    3. If url is failure or null, continue.

    4. Let debugReporting be false.

    5. Let params be the parameters associated with value.

    6. If params["debug-reporting"] exists and params["debug-reporting"] is a boolean, set debugReporting to params["debug-reporting"].

    7. Let registration be a new OS registration struct whose items are:

      URL

      url

      debug reporting enabled

      debugReporting

    8. Append registration to registrations.

  5. Return registrations.
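A toy version of the parsing steps above is sketched below. Real user agents parse the header as an RFC 8941 structured-field list; this simplified parser handles only quoted-string items with an optional boolean debug-reporting parameter, and is an illustration rather than a conformant parser.

```python
from urllib.parse import urlsplit

def get_os_registrations(header_value: str) -> list[dict]:
    registrations = []
    for member in header_value.split(","):
        # Split each list member into its item and its parameters.
        item, _, params = member.strip().partition(";")
        item = item.strip()
        if not (item.startswith('"') and item.endswith('"')):
            continue  # step 4.1: not a string item
        url = item[1:-1]
        if not urlsplit(url).scheme:
            continue  # step 4.3: URL parsing failed
        # Step 4.6: boolean "debug-reporting" parameter (?1 is SF true).
        debug = "debug-reporting=?1" in params.replace(" ", "")
        registrations.append({"url": url, "debug_reporting_enabled": debug})
    return registrations
```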

To get supported registrars:

  1. Let supportedRegistrars be an empty list.

  2. If the user agent supports web registrations, append "web" to supportedRegistrars.

  3. If the user agent supports OS registrations, append "os" to supportedRegistrars.

  4. Return supportedRegistrars.

"Attribution-Reporting-Support" is a Dictionary Structured Header set on a request that indicates which registrars, if any, the corresponding response can use. Its values are not specified and its allowed keys are the registrars.

To set an OS-support header given a header list headers:

  1. Let supportedRegistrars be the result of getting supported registrars.

  2. Let dict be the result of obtaining a dictionary structured header value with supportedRegistrars and the set containing all the registrars.

  3. Set a structured field value given ("Attribution-Reporting-Support", dict) in headers.
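The header-setting steps above can be sketched as follows. An RFC 8941 dictionary whose members are bare keys (implicit boolean true) serializes as a comma-separated key list, so a user agent supporting both registrars would send "web, os"; treat the serialization details here as a simplified assumption.

```python
def set_os_support_header(headers: dict, supports_web: bool,
                          supports_os: bool) -> None:
    # "Get supported registrars": collect whichever registrars apply.
    supported = []
    if supports_web:
        supported.append("web")
    if supports_os:
        supported.append("os")
    # Bare dictionary keys with implicit true values, per RFC 8941.
    headers["Attribution-Reporting-Support"] = ", ".join(supported)
```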

15. Report Verification Algorithms

"Sec-Attribution-Reporting-Private-State-Token" is a structured header used to orchestrate report verification.

15.1. Initiate trigger verification

To set trigger verification request headers given a request request and a suitable origin destination:

  1. Let issuer be request’s URL's origin.

  2. If issuer is not suitable, return.

  3. Let (issuerKeys, pretokens, cryptoProtocolVersion) be the result of looking up the key commitments for issuer.

  4. If issuerKeys is null, return.

  5. Let ids be a new list.

  6. Let base64EncodedMaskedMessages be a new list.

  7. While ids’s size is less than verification tokens per trigger:

    1. Let id be the result of generating a random UUID.

    2. Append id to ids.

    3. Let message be the concatenation of id and destination.

    4. Let maskedMessage be the result of generating masked tokens with issuerKeys and message.

    5. Let base64EncodedMaskedMessage be the forgiving-base64 encoding of maskedMessage.

    6. Let fieldValue be the result of serializing a string with base64EncodedMaskedMessage.

    7. Append fieldValue to base64EncodedMaskedMessages.

  8. Set request’s trigger verification metadata to a trigger verification metadata with the items:

    pretokens

    pretokens

    IDs

    ids

  9. Set a structured field value given ("Sec-Attribution-Reporting-Private-State-Token", base64EncodedMaskedMessages) in request’s header list.

  10. Set a structured field value given ("Sec-Private-State-Token-Crypto-Version", cryptoProtocolVersion) in request’s header list.

Link to a generating masked tokens algorithm that receives messages.

15.2. Handle trigger verification

To receive trigger verification tokens given an origin issuer, a header list headers, and a trigger verification metadata metadata:

  1. If metadata is null, return a new list.

  2. If metadata’s IDs's size is not equal to verification tokens per trigger, return a new list.

  3. Let pretokens be metadata’s pretokens.

  4. Let issuerKeys be the result of looking up the key commitments for issuer.

  5. If issuerKeys is null, return a new list.

  6. Let fieldValue be the result of getting a structured field value given ("Sec-Attribution-Reporting-Private-State-Token", "list") from headers.

  7. Let base64EncodedMaskedTokens be the result of parsing a list with fieldValue.

  8. Delete "Sec-Attribution-Reporting-Private-State-Token" from headers.

  9. If base64EncodedMaskedTokens’s size is greater than metadata’s IDs's size, return a new list.

  10. Let base64EncodedPrivateStateTokens be a new list.

  11. For each base64EncodedMaskedToken of base64EncodedMaskedTokens:

    1. Let base64EncodedMaskedTokenValue be the result of parsing a string with base64EncodedMaskedToken.

    2. Let maskedToken be the forgiving-base64 decoding of base64EncodedMaskedTokenValue.

    3. Let token be the result of unmasking tokens given issuerKeys, pretokens, and maskedToken.

    4. If token is null, return a new list.

    5. Let privateStateToken be the result of running a cryptographic redemption procedure with token.

    6. If privateStateToken is null, return a new list.

    7. Let base64EncodedPrivateStateToken be the forgiving-base64 encoding of privateStateToken.

    8. Append base64EncodedPrivateStateToken to base64EncodedPrivateStateTokens.

  12. Let triggerVerifications be a new list.

  13. For each integer i of base64EncodedPrivateStateTokens’s indices:

    1. Let triggerVerification be a trigger verification with the items:

      token

      base64EncodedPrivateStateTokens[i]

      id

      metadata’s IDs[i]

    2. Append triggerVerification to triggerVerifications.

  14. Return triggerVerifications.

Specify the cryptographic redemption procedure.

To obtain and deliver debug reports on OS registrations given an OS debug data type dataType, a list of OS registrations registrations, and an origin contextOrigin:

  1. If registrations is empty, return.

  2. Let contextSite be the result of obtaining a site from contextOrigin.

  3. For each registration of registrations:

    1. If registration’s debug reporting enabled is false, continue.

    2. Let origin be registration’s URL's origin.

    3. If origin is not suitable, continue.

    4. Let body be a new map with the following key/value pairs:

      "context_site"

      contextSite, serialized.

      "registration_url"

      registration’s URL, serialized.

    5. Let data be a new attribution debug data with the items:

      data type

      dataType

      body

      body

    6. Run obtain and deliver a debug report with « data » and origin.
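The body built in step 4 above can be sketched as below. Note that "obtain a site" involves the registrable domain, which the standard library cannot compute; using scheme + host here is a simplifying assumption, not the spec's site definition.

```python
from urllib.parse import urlsplit

def os_debug_report_body(context_origin: str, registration_url: str) -> dict:
    parts = urlsplit(context_origin)
    # Simplified "site": scheme + host (the spec uses the registrable domain).
    context_site = f"{parts.scheme}://{parts.hostname}"
    return {
        "context_site": context_site,
        "registration_url": registration_url,
    }
```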

16. User-Agent Automation

The user-agent has an associated boolean automation local testing mode (default false).

For the purposes of user-agent automation and website testing, this document defines the below [WebDriver] extension commands to control the API configuration.

16.1. Set local testing mode

HTTP Method URI Template
POST /session/{session id}/ara/localtestingmode

The remote end steps are:

  1. If parameters is not a JSON-formatted Object, return a WebDriver error with error code invalid argument.

  2. Let enabled be the result of getting a property named "enabled" from parameters.

  3. If enabled is undefined or is not a boolean, return a WebDriver error with error code invalid argument.

  4. Set automation local testing mode to enabled.

  5. Return success with data null.

Note: Without this, reports would be subject to noise and delays, making testing difficult.

17. Security considerations

17.1. Same-Origin Policy

This section is non-normative.

Writes to the attribution source cache, event-level report cache, and aggregatable report cache are partitioned by reporting origin, and reports sent to a given origin are generated only from data that that origin itself wrote via HTTP response headers.

However, the attribution rate-limit cache is not fully partitioned by origin. Reads from that cache involve grouping together data submitted by multiple origins. This is the case for the following limits:

These limits are explicit relaxations of the Same-Origin Policy, in that they allow different origins to influence the API’s behavior. In particular, one risk that is introduced with these shared limits is denial of service attacks, where a group of origins could collude to intentionally hit a rate-limit, causing subsequent origins to be unable to access the API.

This trades off security for privacy, in that the limits are there to reduce the efficacy of many origins colluding together to violate privacy. API deployments should monitor for abuse using these vectors to evaluate the trade-off.

The generation of attribution debug reports involves reads from the attribution source cache, event-level report cache, aggregatable report cache, and attribution rate-limit cache. The attribution debug data sent to a given origin may therefore encode non-same-origin data produced by grouping together data submitted by multiple origins, e.g. failures due to rate-limits, which is not fully compliant with the Same-Origin Policy. This is of greater concern for source registrations, as the source origin could intentionally hit a rate-limit to identify sensitive user data; such attribution debug data cannot be reported explicitly and may instead be reported as a source-success attribution debug report. This is a tradeoff between security and utility, and mitigates the security concern with respect to the Same-Origin Policy. The risk is of less concern for trigger registrations, as attribution sources have to be registered to start with, which requires browsing activity on multiple sites.

17.2. Opting in to the API

This section is non-normative.

As a general principle, the API cannot be used purely at the HTTP layer without some level of opt-in from JavaScript or HTML. For HTML, this opt-in is in the form of the attributionSrc attribute, and for JavaScript, it is the various modifications to fetch, XMLHttpRequest, and the window open steps.

However, this principle is only strictly applied to registering attribution sources. For triggering attribution, we waive this requirement for the sake of compatibility with existing systems; see issue 347 for context.

18. Privacy considerations

18.1. Clearing site data

The attribution caches contain data about a user’s web activity. As such, the user agent MAY expose controls that allow the user to delete data from them.

18.2. Cross-site information disclosure

This section is non-normative.

The API is concerned with protecting arbitrary cross-site information from being passed from one site to another. For a given attribution source, any outcome associated with it is considered cross-site information. This includes:

The information embedded in the API output is arbitrary but can include things like browsing history and other cross-site activity. The API aims to provide some protection for this information:

18.2.1. Event-level reports

Any given attribution source has a set of possible trigger states. The choice of trigger state may encode cross-site information. To protect against cross-site information disclosure, each attribution source is subject to a randomized response mechanism [RR], which chooses a state at random with a pick rate dependent on the source’s event-level epsilon, which is bounded above by the user agent’s max settable event-level epsilon.

This introduces some level of plausible deniability into the resulting event-level reports (or lack thereof), as there is always a chance that the output was generated from a random process. We can reason about the protection this gives an individual attribution source from the lens of differential privacy [DP].
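The mechanism can be sketched as follows. The pick rate p = k / (k − 1 + e^ε) (for k possible trigger states) is the standard k-ary randomized-response rate achieving ε-differential privacy; treat the exact formula as an assumption about the spec's normative algorithm rather than a quotation of it.

```python
import math
import random

def randomized_response(true_state: int, num_states: int, epsilon: float,
                        rng: random.Random) -> int:
    # Probability of discarding the true state and picking uniformly at
    # random; epsilon -> infinity gives pick_rate -> 0 (no noise), and
    # epsilon = 0 gives pick_rate = 1 (output independent of the input).
    pick_rate = num_states / (num_states - 1 + math.exp(epsilon))
    if rng.random() < pick_rate:
        # Uniformly random trigger state (possibly equal to the true one).
        return rng.randrange(num_states)
    return true_state
```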

Additionally, event-level reports limit the amount of relative cross-site information associated with a particular attribution source. We model this using the notion of channel capacity [CHAN]. For every attribution source, it is possible to model its output as a noisy channel. The number of input/output symbols is governed by its associated set of possible trigger states. With the randomized response mechanism, this allows us to analyze the output as a q-ary symmetric channel [Q-SC], with q equal to the size of the set of possible trigger states. This is normatively defined in the compute the channel capacity of a source algorithm.

Note that navigation attribution sources and event attribution sources may have different channel capacities, given that event attribution sources can be registered without user activation or top-level navigation. Maximum capacity for each type is governed by the vendor-defined max event-level channel capacity per source.
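The q-ary symmetric channel capacity referenced above has a closed form: with q symbols and total flip probability p spread uniformly over the q − 1 wrong symbols, C = log₂(q) + (1 − p)·log₂(1 − p) + p·log₂(p / (q − 1)) bits. A small sketch:

```python
import math

def q_ary_symmetric_capacity(q: int, flip_prob: float) -> float:
    # Capacity in bits of a q-ary symmetric channel [Q-SC] with total
    # probability flip_prob of outputting a symbol other than the input.
    if flip_prob == 0.0:
        return math.log2(q)  # noiseless channel: full log2(q) bits
    p = flip_prob
    return (math.log2(q)
            + (1 - p) * math.log2(1 - p)
            + p * math.log2(p / (q - 1)))
```

For a source subject to randomized response with pick rate p_pick over q states, the induced flip probability is p_pick · (q − 1) / q, since a uniform random pick lands on a wrong symbol with probability (q − 1) / q.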

18.2.2. Aggregatable reports

Aggregatable reports protect against cross-site information disclosure in two primary ways:

  1. For a given attribution trigger, whether it is attributed to a source is subject to one-way noise via generating null reports with some probability. Note that because the noise does not drop true reports, this is only a partial mitigation, as if an attribution source never generates an aggregatable report, an adversary can learn with 100% certainty that an attribution source was never matched with an attribution trigger.

  2. Cross-site information embedded in an aggregatable report's contributions is encrypted with a public key, ensuring that individual contributions cannot be accessed until an aggregation service subjects them to aggregation and an additive noise process.
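The one-way noise in item 1 can be sketched as below. The null-report probability is vendor- and configuration-dependent, so the rate here is a free parameter, not a spec constant; the key property is that true reports are never dropped.

```python
import random

def reports_to_send(has_attributed_report: bool, null_report_rate: float,
                    rng: random.Random) -> list[str]:
    reports = []
    if has_attributed_report:
        reports.append("real")  # true reports always go out (one-way noise)
    elif rng.random() < null_report_rate:
        reports.append("null")  # indistinguishable from a real report on the wire
    return reports
```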

add links to the aggregation service noise addition algorithm.

model the channel capacity of a trigger registration.

18.2.3. Debug reports

Fine-grained cross-site information may be embedded in attribution reports sent with isDebugReport being true and certain types of attribution debug reports. These reports will only be allowed when third-party cookies are available, in which case the API caller had the capability to learn the underlying information.

18.3. Protecting against cross-site recognition

This section is non-normative.

A primary privacy goal of the API is to make linking identity between two different top-level sites difficult. This happens when either a request or a JavaScript environment has two user IDs from two different sites simultaneously. Both event-level reports and aggregatable reports were designed to make this kind of recognition difficult:

18.3.1. Event-level reports

Event-level reports carry a fine-grained event ID that can uniquely identify the source event, which may be joinable with a user’s identity. As such, to protect against the cross-site recognition risk, event-level reports contain only a small amount (measured via channel capacity) of relative cross-site information from any of the attribution destinations. By limiting the amount of relative cross-site information embedded in event-level reports, we make it difficult for an identifier to be passed through this channel alone to enable cross-site recognition.

18.3.2. Aggregatable reports

Aggregatable reports only contain fine-grained cross-site information in encrypted form. In cleartext, they contain only coarse-grained information from the source site and effective attribution destination. This makes it difficult for an aggregatable report to be associated with a user from either site.

The cross-site recognition risk of the data encrypted in "aggregation_service_payloads" is mitigated by the additive noise applied in the aggregation service.

18.4. Mitigating against repeated API use

fill in this section

18.5. Protecting against browsing history reconstruction

fill in this section

18.6. Reporting-delay concerns

This section is non-normative.

Sending reports some time after attribution occurs enables side-channel leakage in some situations.

18.6.1. Cross-network reporting-origin leakage

A report may be stored while the browser is connected to one network but sent while the browser is connected to a different network, potentially enabling cross-network leakage of the reporting origin.

Example: A user runs the browser with a particular browsing profile on their home network. An attribution report with a particular reporting origin is stored with a scheduled report time in the future. After the scheduled report time is reached, the user runs the browser with the same browsing profile on their employer’s network, at which point the browser sends the report to the reporting origin. Although the report itself may be sent over HTTPS, the reporting origin may be visible to the network administrator via DNS or the TLS client hello (which can be mitigated with ECH). Some reporting origins may be known to operate only or primarily on sensitive sites, so this could leak information about the user’s browsing activity to the user’s employer without their knowledge or consent.

Possible mitigations:

  1. Only send reports with a given reporting origin when the browser has already made a request to that origin on the same network. This prevents the network administrator from gaining additional information from the Attribution Reporting API. However, it increases report loss and report delays, which reduces the utility of the API for the reporting origin. It might also increase the effectiveness of timing attacks, as the origin may be able to better link the report with the user’s request that allowed the report to be released.

  2. Send reports immediately: This reduces the likelihood of a report being stored and sent on different networks. However, it increases the likelihood that the reporting origin can correlate the original request made to the reporting origin for attribution to the report, which weakens the attribution-side privacy controls of the API. In particular, this destroys the differential privacy framework we have for event-level reports. It would also make the trigger priority functionality impossible, as there would be no way to replace a lower-priority report that was already sent.

  3. Use a trusted proxy server to send reports: This effectively moves the reporting origin into the report body, so only the proxy server would be visible to the network administrator.

  4. Require DNS over HTTPS: This effectively hides the reporting origin from the network administrator, but is likely impractical to enforce and is itself perhaps circumventable by the network administrator.

18.6.2. User-presence tracking

The browser only tries to send reports while it is running and has internet connectivity (even without an explicit connectivity check, the report will naturally fail to be sent if there is none), so receiving or not receiving an event-level report at the expected time leaks information about the user’s presence. Additionally, because the report request inherently includes an IP address, it could reveal the user’s IP-derived whereabouts to the reporting origin, such as at-home vs. at-work or approximate real-world geolocation, or reveal patterns in the user’s browsing activity.

Possible mitigations:

  1. Send reports immediately: This effectively eliminates the presence tracking, as the original request made to the reporting origin is in close temporal proximity to the report request. However, it increases the likelihood that the reporting origin can correlate the two requests, which weakens the attribution-side privacy controls of the API. It would also make the trigger priority functionality impossible, as there would be no way to replace a lower-priority report that was already sent.

  2. Send reports immediately to a trusted proxy server, which would itself send the report to the reporting origin with additional delay. This would effectively hide both the user’s IP address and their online-offline presence from the reporting origin. Compared to the previous mitigation, the proxy server could itself handle the trigger priority functionality, at the cost of increased complexity in the proxy.

Conformance

Document conventions

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

Index

Terms defined by this specification

Terms defined by reference

References

Normative References

[CLEAR-SITE-DATA]
Mike West. Clear Site Data. URL: https://w3c.github.io/webappsec-clear-site-data/
[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[ECMASCRIPT]
ECMAScript Language Specification. URL: https://tc39.es/ecma262/multipage/
[ENCODING]
Anne van Kesteren. Encoding Standard. Living Standard. URL: https://encoding.spec.whatwg.org/
[FENCED-FRAME]
Fenced Frame. Draft Community Group Report. URL: https://wicg.github.io/fenced-frame/
[FETCH]
Anne van Kesteren. Fetch Standard. Living Standard. URL: https://fetch.spec.whatwg.org/
[HR-TIME-3]
Yoav Weiss. High Resolution Time. URL: https://w3c.github.io/hr-time/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[INFRA]
Anne van Kesteren; Domenic Denicola. Infra Standard. Living Standard. URL: https://infra.spec.whatwg.org/
[PERMISSIONS-POLICY-1]
Ian Clelland. Permissions Policy. URL: https://w3c.github.io/webappsec-permissions-policy/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://datatracker.ietf.org/doc/html/rfc2119
[RFC8949]
C. Bormann; P. Hoffman. Concise Binary Object Representation (CBOR). December 2020. Internet Standard. URL: https://www.rfc-editor.org/rfc/rfc8949
[RFC9180]
R. Barnes; et al. Hybrid Public Key Encryption. February 2022. Informational. URL: https://www.rfc-editor.org/rfc/rfc9180
[SECURE-CONTEXTS]
Mike West. Secure Contexts. URL: https://w3c.github.io/webappsec-secure-contexts/
[URL]
Anne van Kesteren. URL Standard. Living Standard. URL: https://url.spec.whatwg.org/
[WebDriver]
Simon Stewart; David Burns. WebDriver. URL: https://w3c.github.io/webdriver/
[WEBDRIVER2]
Simon Stewart; David Burns. WebDriver. URL: https://w3c.github.io/webdriver/
[WEBIDL]
Edgar Chen; Timothy Gu. Web IDL Standard. Living Standard. URL: https://webidl.spec.whatwg.org/
[XHR]
Anne van Kesteren. XMLHttpRequest Standard. Living Standard. URL: https://xhr.spec.whatwg.org/

Informative References

[BIN-ENT]
Binary entropy function. URL: https://en.wikipedia.org/wiki/Binary_entropy_function
[CHAN]
Channel capacity. URL: https://en.wikipedia.org/wiki/Channel_capacity
[DP]
Differential privacy. URL: https://en.wikipedia.org/wiki/Differential_privacy
[Q-SC]
Claudio Weidmann; Gottfried Lechner. q-ary symmetric channel. URL: https://arxiv.org/pdf/0909.2009.pdf
[RR]
Randomized response. URL: https://en.wikipedia.org/wiki/Randomized_response
[STORAGE]
Anne van Kesteren. Storage Standard. Living Standard. URL: https://storage.spec.whatwg.org/

IDL Index

interface mixin HTMLAttributionSrcElementUtils {
    [CEReactions, SecureContext] attribute USVString attributionSrc;
};

HTMLAnchorElement includes HTMLAttributionSrcElementUtils;
HTMLImageElement includes HTMLAttributionSrcElementUtils;
HTMLScriptElement includes HTMLAttributionSrcElementUtils;

dictionary AttributionReportingRequestOptions {
  required boolean eventSourceEligible;
  required boolean triggerEligible;
};

partial dictionary RequestInit {
  AttributionReportingRequestOptions attributionReporting;
};

partial interface XMLHttpRequest {
  [SecureContext]
  undefined setAttributionReporting(AttributionReportingRequestOptions options);
};

Issues Index

More precisely specify which mutations are relevant for the attributionsrc attribute.
Use attributionSrcUrls with make a background attributionsrc request.
Use/propagate navigationSourceEligible to the navigation request's Attribution Reporting eligibility.
Check permissions policy.
This would ideally be replaced by a more descriptive algorithm in Infra. See infra/201
Ideally this would use the cookie-retrieval algorithm, but it cannot: There is no way to consider only cookies whose http-only-flag is true and whose same-site-flag is "None"; there is no way to prevent the last-access-time from being modified; and the return value is a string that would have to be further processed to check for the "ar_debug" cookie.
Audit other properties on request and set them properly.
Support header-processing on redirects. Due to atomic HTTP redirect handling, we cannot process registrations through integration with fetch. [Issue #839]
Check for transient activation with "navigation-source".
Consider allowing the user agent to limit the size of tokens.
Consider rejecting out-of-bounds values instead of silently clamping.
Invoke parse summary buckets and parse summary window operator from this algorithm.
Determine proper charset-handling for the JSON header value.
Determine proper charset-handling for the JSON header value.
This algorithm is not compatible with the behavior proposed for experimental Flexible Event support with differing event-level report windows for a given source.
Specify this in terms of Navigation
Specify this in terms of fetch.
This fetch should use a network partition key for an opaque origin. [Issue #220]
This fetch should use a network partition key for an opaque origin. [Issue #220]
Link to a generating masked tokens algorithm that receives messages.
Specify the cryptographic redemption procedure.
add links to the aggregation service noise addition algorithm.
model the channel capacity of a trigger registration.
fill in this section
fill in this section