1. Introduction
This section is non-normative.
This specification describes how web browsers can provide a mechanism to the web that supports measuring and attributing conversions (e.g. purchases) to ads a user interacted with on another site. This mechanism should remove one need for cross-site identifiers like third-party cookies.
1.1. Overview
Pages/embedded sites are given the ability to register attribution sources and attribution triggers, which can be linked by the User Agent to generate and send attribution reports containing information from both of those events.
A reporter https://reporter.example embedded on https://source.example is able to measure whether an interaction on the page led to an action on https://destination.example by registering an attribution source with attribution destinations of « https://destination.example ». Reporters are able to register sources through a variety of surfaces, but ultimately the reporter is required to provide the User Agent with an HTTP response header which allows the source to be eligible for attribution.
At a later point in time, the reporter, now embedded on https://destination.example, may register an attribution trigger. Reporters can register triggers by sending an HTTP response header containing information about the action/event that occurred. Internally, the User Agent attempts to match the trigger to previously registered source events based on where the sources/triggers were registered and the configurations provided by the reporter.
If the User Agent is able to attribute the trigger to a source, it will generate and send an attribution report to the reporter via an HTTP POST request at a later point in time.
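For illustration, the reporter's responses might carry registrations in the "Attribution-Reporting-Register-Source" and "Attribution-Reporting-Register-Trigger" headers used later in this specification. The JSON fields other than "destination" are shown for illustration only and are not defined in this section:
EXAMPLE: Source registration response served on https://source.example
Attribution-Reporting-Register-Source: {"destination": "https://destination.example", "source_event_id": "412444888111012"}
EXAMPLE: Trigger registration response served on https://destination.example
Attribution-Reporting-Register-Trigger: {"event_trigger_data": [{"trigger_data": "3"}]}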
2. HTML monkeypatches
2.1. API for elements
interface mixin HTMLAttributionSrcElementUtils {
  [CEReactions, SecureContext] attribute USVString attributionSrc;
};

HTMLAnchorElement includes HTMLAttributionSrcElementUtils;
HTMLImageElement includes HTMLAttributionSrcElementUtils;
HTMLScriptElement includes HTMLAttributionSrcElementUtils;
Add the following content attributes:
a
- attributionsrc
  A string containing zero or more URLs to which a background attributionsrc request will be made when the a is navigated.
img
- attributionsrc
  A string containing zero or more URLs to which a background attributionsrc request will be made when set.
script
- attributionsrc
  A string containing zero or more URLs to which a background attributionsrc request will be made when set.
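For illustration, markup on https://source.example might use these attributes as follows (the URLs are placeholders):
EXAMPLE: attributionsrc content attributes
<a href="https://destination.example/landing"
   attributionsrc="https://reporter.example/register-source">Buy now</a>
<img src="https://reporter.example/pixel"
     attributionsrc="https://reporter.example/register">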
The IDL attribute attributionSrc
must reflect the respective content attribute of the same
name.
Whenever an img or script element is created, or such an element’s attributionsrc attribute is set or changed, run make background attributionsrc requests with the element, "event-source-or-trigger", and the current state of the element’s referrerpolicy content attribute.
More precisely specify which mutations are relevant for the attributionsrc attribute.
Modify update the image data as follows:
After the step
Set request’s priority to the current state...
add the step
- If the element has an attributionsrc attribute, set request’s Attribution Reporting eligibility to "event-source-or-trigger".
A script fetch options has an associated Attribution Reporting eligibility (an eligibility). Unless otherwise stated it is "unset".
Modify prepare the script element as follows:
After the step
Let fetch priority be the current state of el’s fetchpriority content attribute.
add the step
- Let Attribution Reporting eligibility be "event-source-or-trigger" if el has an attributionsrc content attribute and "unset" otherwise.
Add "and Attribution Reporting eligibility is Attribution Reporting eligibility." to the step
Let options be a script fetch options whose...
Modify set up the classic script request and set up the module script request as follows:
Add "and its Attribution Reporting eligibility is options’s Attribution Reporting eligibility."
Modify follow the hyperlink as follows:
After the step
If subject’s link types includes...
add the steps
- Let navigationSourceEligible be false.
- If subject has an attributionsrc attribute:
  - Set navigationSourceEligible to true.
  - Make background attributionsrc requests with subject, "navigation-source", and referrerPolicy.
Add "and navigationSourceEligible set to navigationSourceEligible" to the step
Navigate targetNavigable...
2.2. Window open steps
Modify the tokenize the features argument as follows:
Replace the step
Collect a sequence of code points that are not feature separators code points from features given position. Set value to the collected code points, converted to ASCII lowercase.
with
Collect a sequence of code points that are not feature separators code points from features given position. Set value to the collected code points, converted to ASCII lowercase. Set originalCaseValue to the collected code points.
Replace the step
If name is not the empty string, then set tokenizedFeatures[name] to value.
with the steps
- If name is not the empty string:
Modify the window open steps as follows:
After the step
Let tokenizedFeatures be the result of tokenizing features.
add the steps
- Let navigationSourceEligible be false.
- If tokenizedFeatures["attributionsrc"] exists:
  - Set navigationSourceEligible to true.
  - Set attributionSrcUrls to a new list.
  - For each value of tokenizedFeatures["attributionsrc"]:
    - If value is the empty string, continue.
    - Let decodedSrcBytes be the result of percent-decoding value.
    - Let decodedSrc be the UTF-8 decode without BOM of decodedSrcBytes.
    - Parse decodedSrc relative to the entry settings object, and set urlRecord to the resulting URL record, if any. If parsing failed, continue.
    - Append urlRecord to attributionSrcUrls.
  - Use attributionSrcUrls and referrerPolicy with make a background attributionsrc request.
In each step that calls navigate, set navigationSourceEligible to navigationSourceEligible.
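For illustration, a page might request navigation-source eligibility and a background registration request when opening a popup. The URLs are placeholders; the registration URL is percent-encoded because the algorithm above percent-decodes the feature value:
window.open(
  "https://destination.example/landing",
  "_blank",
  "attributionsrc=" + encodeURIComponent("https://reporter.example/register-source"));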
2.3. Navigation monkeypatches
Add the following item to navigation params:
- navigationSourceEligible
  A boolean indicating whether the navigation can register a navigation source in its response. Defaults to false.
Modify navigate as follows:
Add an optional boolean parameter called navigationSourceEligible (default false).
In the step
Set navigationParams to a new navigation params with...
add the property
- navigationSourceEligible
  navigationSourceEligible
Use/propagate navigationSourceEligible to the navigation request's Attribution Reporting eligibility.
Enforce attribution-scope privacy limits.
3. Network monkeypatches
dictionary AttributionReportingRequestOptions {
  required boolean eventSourceEligible;
  required boolean triggerEligible;
};

partial dictionary RequestInit {
  AttributionReportingRequestOptions attributionReporting;
};

partial interface XMLHttpRequest {
  [SecureContext] undefined setAttributionReporting(AttributionReportingRequestOptions options);
};
A request has an associated Attribution Reporting eligibility (an eligibility). Unless otherwise stated it is "unset".
To get an eligibility from AttributionReportingRequestOptions given an AttributionReportingRequestOptions options:
- Let eventSourceEligible be options’s eventSourceEligible.
- Let triggerEligible be options’s triggerEligible.
- If (eventSourceEligible, triggerEligible) is:
  - (false, false)
    Return "empty".
  - (false, true)
    Return "trigger".
  - (true, false)
    Return "event-source".
  - (true, true)
    Return "event-source-or-trigger".
"Attribution-Reporting-Eligible
" is a Dictionary Structured Header set on a request that indicates which registrations, if
any, are allowed on the corresponding response. Its values are not specified
and its allowed keys are:
- "
event-source
" - "
navigation-source
" -
A navigation source may be registered.
- "
trigger
" -
A trigger may be registered.
"Attribution-Reporting-Support
" is a Dictionary Structured Header set on a request that indicates which registrars, if
any, the corresponding response can use. Its values are not specified and
its allowed keys are the registrars.
To obtain a dictionary structured header value given a list of strings keys and a set of strings allowedKeys:
- For each key of allowedKeys, optionally append the concatenation of « "not-", key » to keys.
- Optionally, shuffle keys.
- Let entries be a new list.
- For each key of keys:
  - Let value be true.
  - Optionally, set value to a token corresponding to one of the strings in allowedKeys.
  - Let params be a new map.
  - For each key of allowedKeys, optionally set params[key] to an arbitrary bare item.
  - Append a structured dictionary member with the key key, the value value, and the parameters params to entries.
- Return a dictionary containing entries.
Note: The user agent MAY "grease" the
dictionary structured headers according to the preceding algorithm to help ensure that recipients
use a proper structured header parser, rather than naive string equality or contains
operations, which makes it easier to introduce backwards-compatible
changes to the header definition in the future. Including the allowed keys
as dictionary values or parameters helps ensure that only the dictionary’s
keys are interpreted by the recipient. Likewise, shuffling the dictionary
members helps ensure that, e.g., "key1, key2
" is treated equivalently to "key2, key1
".
In the following example, only the "trigger
" key should be interpreted by the
recipient after the header has been parsed as a structured dictionary:
EXAMPLE: Greased Attribution-Reporting-Eligible header
Attribution-Reporting-Eligible: not-event-source, trigger=event-source;navigation-source=3
In the following example, only the "os
" key should be interpreted by the
recipient after the header has been parsed as a structured dictionary:
EXAMPLE: Greased Attribution-Reporting-Support header
Attribution-Reporting-Support: os=web
To set Attribution Reporting headers given a request request:
- Let headers be request’s header list.
- Let eligibility be request’s Attribution Reporting eligibility.
- Delete "Attribution-Reporting-Eligible" from headers.
- Delete "Attribution-Reporting-Support" from headers.
- If eligibility is "unset", return.
- Let keys be a new list.
- If eligibility is:
  - "empty"
    Do nothing.
  - "event-source"
    Append "event-source" to keys.
  - "navigation-source"
    Append "navigation-source" to keys.
  - "trigger"
  - "event-source-or-trigger"
    Append "event-source" and "trigger" to keys.
- Let supportedRegistrars be the result of getting supported registrars.
- Let eligibleDict be the result of obtaining a dictionary structured header value with keys and the set containing all the eligible keys.
- Set a structured field value given ("Attribution-Reporting-Eligible", eligibleDict) in headers.
- Let supportDict be the result of obtaining a dictionary structured header value with supportedRegistrars and the set containing all the registrars.
- Set a structured field value given ("Attribution-Reporting-Support", supportDict) in headers.
3.1. Fetch monkeypatches
Modify fetch as follows:
After the step
If request’s header list does not contain Accept...
add the step
- Set Attribution Reporting headers with request.
Modify Request(input, init) as follows:
In the step
Set request to a new request with the following properties:
add the property
- Attribution Reporting eligibility
  request’s Attribution Reporting eligibility.
After the step
If init["priority"] exists, then:
add the step
- If init["attributionReporting"] exists, then set request’s Attribution Reporting eligibility to the result of get an eligibility from AttributionReportingRequestOptions with it.
3.2. XMLHttpRequest monkeypatches
An XMLHttpRequest object has an associated Attribution Reporting eligibility (an eligibility). Unless otherwise stated it is "unset".
The setAttributionReporting(options) method must run these steps:
- If this’s state is not opened, then throw an "InvalidStateError" DOMException.
- If this’s send() flag is set, then throw an "InvalidStateError" DOMException.
- Set this’s Attribution Reporting eligibility to the result of get an eligibility from AttributionReportingRequestOptions with options.
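A non-normative usage sketch (the URL is a placeholder); per the steps above, setAttributionReporting() must be called after open() and before send():
const xhr = new XMLHttpRequest();
xhr.open("GET", "https://reporter.example/conversion");
// State is "opened" and the send() flag is not yet set, so this does not throw.
xhr.setAttributionReporting({ eventSourceEligible: false, triggerEligible: true });
xhr.send();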
Modify send(body) as follows:
Add a Document parameter called document.
After the step:
Let req be a new request, initialized as follows...
Add the steps:
- Set req’s Attribution Reporting eligibility to this’s Attribution Reporting eligibility.
- Set Attribution Reporting headers with req and document’s context origin.
4. Permissions Policy integration
This specification defines a policy-controlled feature identified by the string "attribution-reporting". Its default allowlist is *.
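For illustration, a publisher page can delegate the feature to a cross-origin frame with the allow attribute; the default allowlist of * already permits this, and the attribute merely makes the delegation explicit (the frame URL is a placeholder):
EXAMPLE: Delegating the attribution-reporting feature to a frame
<iframe src="https://reporter.example/ad" allow="attribution-reporting"></iframe>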
5. Clear Site Data integration
In clear DOM-accessible storage for origin, add the following step:
Run clear site data with origin.
To clear site data given an origin origin:
- For each attribution source source of the attribution source cache:
  - If source’s reporting origin and origin are same origin, remove source from the attribution source cache.
- For each event-level report report of the event-level report cache:
  - If report’s reporting origin and origin are same origin, remove report from the event-level report cache.
- For each aggregatable attribution report report of the aggregatable attribution report cache:
  - If report’s reporting origin and origin are same origin, remove report from the aggregatable attribution report cache.
Note: We deliberately do not remove matching entries from the attribution rate-limit cache and aggregatable debug rate-limit cache, as doing so would allow a site to reset and therefore exceed the intended rate limits at will.
6. Structures
6.1. Registration info
A registration info is a struct with the following items:
- preferred platform (default null)
-
Null or a registrar.
- report header errors (default false)
-
A boolean.
6.2. Trigger state
A trigger state is a struct with the following items:
- trigger data
-
A non-negative 64-bit integer.
- report window
6.3. Randomized response output configuration
A randomized response output configuration is a struct with the following items:
- max attributions per source
-
A positive integer.
- trigger specs
6.4. Randomized source response
A randomized source response is null or a list of trigger states.
6.5. Attribution filtering
A filter value is a set of strings.
A filter map is a map whose keys are strings and whose values are filter values.
A filter config is a struct with the following items:
- map
-
A filter map.
- lookback window
-
Null or a positive duration.
6.6. Suitable origin
A suitable origin is an origin that is suitable.
6.7. Source type
A source type is one of the following:
- "
navigation
" -
The source was associated with a top-level navigation.
- "
event
" -
The source was not associated with a top-level navigation.
6.8. Report window
A report window is a struct with the following items:
A report window list is a list of report windows. It has the following constraints:
-
Elements are in ascending order based on their start.
-
Every element’s start is equal to the previous element’s end, if it exists.
-
There is at least one element in the list.
A report window list list’s total window is a report window struct with the following fields:
Note: The total window is conceptually a union of report windows, because there are no gaps in time between any of the windows.
6.9. Summary operator
A summary operator summarizes the triggers attributed to an attribution source. Its value is one of the following:
- "
count
" -
Number of triggers attributed.
- "
value_sum
" -
Sum of the value of triggers.
6.10. Summary bucket
A summary bucket is a struct with the following items:
- start
-
An unsigned 32-bit integer.
- end
-
An unsigned 32-bit integer.
A summary bucket list is a list of summary buckets. It has the following constraints:
-
Elements are strictly in ascending order based on their start.
-
Every element’s end is equal to the next element’s start - 1, if it exists.
-
There is at least one element in the list.
6.11. Trigger-data matching mode
A trigger-data matching mode is one of the following:
- "
exact
" -
Trigger data must be less than the default trigger data cardinality. Otherwise, no event-level attribution takes place.
- "
modulus
" -
Trigger data is taken modulo the default trigger data cardinality.
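For example, assuming the default trigger data cardinality of 8 for "navigation" sources (see § 8 Constants), a trigger data value of 10 produces no event-level attribution under "exact", whereas under "modulus" it is mapped to 10 mod 8 = 2.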
6.12. Trigger specs
A trigger spec is a struct with the following items:
- event-level report windows
A trigger spec map is a map whose keys are unsigned 32-bit integers and values are trigger specs.
To find a matching trigger spec given an attribution source source and an unsigned 64-bit integer triggerData:
-
Let specs be source’s trigger specs.
-
If source’s trigger-data matching mode is:
6.13. Aggregatable debug reporting config
An aggregatable debug reporting config is a struct with the following items:
- key piece (default 0)
-
A non-negative 128-bit integer.
- debug data (default an empty map)
-
A map whose keys are debug data types and whose values are aggregatable contributions.
- aggregation coordinator (default default aggregation coordinator)
6.14. Attribution scopes
An attribution scopes is a struct with the following items:
- limit
-
A positive 32-bit integer representing the number of distinct values allowed per attribution destination for the source’s reporting origin.
- values
- max event states
-
A positive integer representing the maximum number of trigger states for event sources per attribution destination for the source’s reporting origin.
6.15. Attribution source
An attribution source is a struct with the following items:
- internal ID
-
An internal ID.
- source origin
- event ID
-
A non-negative 64-bit integer.
- attribution destinations
- reporting origin
- source type
-
A source type.
- expiry
-
A duration.
- trigger specs
- aggregatable report window
- priority
-
A 64-bit integer.
- source time
-
A moment.
- number of event-level reports
-
Number of event-level reports created for this attribution source.
- max number of event-level reports
-
The maximum number of event-level reports that can be created for this attribution source.
- event-level attributable (default true)
-
A boolean.
- dedup keys
-
A set of dedup keys associated with this attribution source.
- randomized response (default null)
- randomized trigger rate (default 0)
-
A number between 0 and 1 (both inclusive).
- event-level epsilon
-
A double.
- filter data
-
A filter map.
- debug key
-
Null or a non-negative 64-bit integer.
- aggregation keys
-
A map whose keys are strings and whose values are non-negative 128-bit integers.
- remaining aggregatable attribution budget
-
A non-negative integer.
- aggregatable dedup keys
-
A set of aggregatable dedup key values associated with this attribution source.
- debug reporting enabled
-
A boolean.
- number of aggregatable attribution reports
-
Number of aggregatable attribution reports created for this attribution source.
- trigger-data matching mode
- debug cookie set (default false)
-
A boolean.
- fenced
-
A boolean.
- remaining aggregatable debug budget
-
A non-negative integer.
- number of aggregatable debug reports
-
Number of aggregatable debug reports created for this attribution source.
- aggregatable debug reporting config
- destination limit priority
-
A 64-bit integer.
- attribution scopes (default null)
-
Null or an attribution scopes.
An attribution source source’s expiry time is source’s source time + source’s expiry.
An attribution source source’s source site is the result of obtaining a site from source’s source origin.
6.16. Aggregatable trigger data
An aggregatable trigger data is a struct with the following items:
- key piece
-
A non-negative 128-bit integer.
- source keys
- filters
-
A list of filter configs.
- negated filters
-
A list of filter configs.
6.17. Aggregatable values configuration
An aggregatable key value is a struct with the following items:
- value
-
A non-negative 32-bit integer.
- filtering ID
-
A non-negative integer.
An aggregatable values configuration is a struct with the following items:
- values
-
A map whose keys are strings and whose values are aggregatable key values.
- filters
-
A list of filter configs.
- negated filters
-
A list of filter configs.
6.18. Aggregatable dedup key
An aggregatable dedup key is a struct with the following items:
- dedup key
-
Null or a non-negative 64-bit integer.
- filters
-
A list of filter configs.
- negated filters
-
A list of filter configs.
6.19. Event-level trigger configuration
An event-level trigger configuration is a struct with the following items:
- trigger data
-
A non-negative 64-bit integer.
- dedup key
-
Null or a non-negative 64-bit integer.
- priority
-
A 64-bit integer.
- filters
-
A list of filter configs.
- negated filters
-
A list of filter configs.
- value
-
A positive unsigned 32-bit integer.
6.20. Aggregation coordinator
An aggregation coordinator is one of a user-agent-determined set of suitable origins that specifies which aggregation service deployment to use.
6.21. Aggregatable source registration time configuration
An aggregatable source registration time configuration is one of the following:
- "
exclude
" -
"
source_registration_time
" is excluded from an aggregatable attribution report's shared info. - "
include
" -
"
source_registration_time
" is included in an aggregatable attribution report's shared info.
6.22. Attribution trigger
An attribution trigger is a struct with the following items:
- attribution destination
-
A site.
- trigger time
-
A moment.
- reporting origin
- filters
-
A list of filter configs.
- negated filters
-
A list of filter configs.
- debug key
-
Null or a non-negative 64-bit integer.
- event-level trigger configurations
- aggregatable trigger data
- aggregatable values configurations
- aggregatable dedup keys
-
A list of aggregatable dedup key.
- debug reporting enabled
-
A boolean.
- aggregation coordinator
- aggregatable source registration time configuration
- trigger context ID
-
Null or a string.
- fenced
-
A boolean.
- aggregatable filtering ID max bytes
-
A positive integer.
- aggregatable debug reporting config
- attribution scopes
6.23. Attribution report
An attribution report is a struct with the following items:
- reporting origin
- report time
-
A moment.
- external ID
-
A UUID formatted as a string.
- internal ID
-
An internal ID.
An attribution debug info is a tuple with the following items:
- source debug key
-
Null or a non-negative 64-bit integer.
- trigger debug key
-
Null or a non-negative 64-bit integer.
6.24. Event-level report
An event-level report is an attribution report with the following additional items:
- event ID
-
A non-negative 64-bit integer.
- source type
-
A source type.
- trigger data
-
A non-negative 64-bit integer.
- randomized trigger rate
-
A number between 0 and 1 (both inclusive).
- trigger priority
-
A 64-bit integer.
- trigger time
-
A moment.
- source ID
-
A string.
- attribution destinations
- attribution debug info
6.25. Aggregatable report
An aggregatable contribution is a struct with the following items:
- key
-
A non-negative 128-bit integer.
- value
-
A non-negative 32-bit integer.
- filtering ID
-
A non-negative integer.
An aggregatable report is an attribution report with the following additional items:
- contributions
- effective attribution destination
-
A site.
- aggregation coordinator
An aggregatable attribution report is an aggregatable report with the following additional items:
- source time
-
A moment.
- source registration time configuration
- is null report (default false)
-
A boolean.
- trigger context ID
-
Null or a string.
- filtering ID max bytes
-
A positive integer.
- attribution debug info
- source ID
-
Null or a string.
An aggregatable debug report is an aggregatable report.
6.26. Attribution rate-limits
A rate-limit scope is one of the following:
- "
source
" - "
event-attribution
" - "
aggregatable-attribution
"
An attribution rate-limit record is a struct with the following items:
- scope
- source site
-
A site.
- attribution destination
-
A site.
- reporting origin
- time
-
A moment.
- expiry time
-
Null or a moment.
- entity ID
-
Null for fake reports or an internal ID for an event-level report, aggregatable attribution report, or attribution source.
- deactivated for unexpired destination limit (default false)
-
A boolean.
- destination limit priority (default null)
-
Null or a 64-bit integer.
6.27. Aggregatable debug rate-limits
An aggregatable debug rate-limit record is a struct with the following items:
6.28. Attribution debug data
A debug data type is a non-empty string that specifies the set of data that is contained in a verbose debug report or in an aggregatable debug report.
A source debug data type is a debug data type for source registrations. Possible values are:
- "
source-channel-capacity-limit
" - "
source-destination-global-rate-limit
" - "
source-destination-limit
" - "
source-destination-limit-replaced
" - "
source-destination-per-day-rate-limit
" - "
source-destination-rate-limit
" - "
source-max-event-states-limit
" - "
source-noised
" - "
source-reporting-origin-limit
" - "
source-reporting-origin-per-site-limit
" - "
source-scopes-channel-capacity-limit
" - "
source-storage-limit
" - "
source-success
" - "
source-trigger-state-cardinality-limit
" - "
source-unknown-error
"
A trigger debug data type is a debug data type for trigger registrations. Possible values are:
- "
trigger-aggregate-attributions-per-source-destination-limit
" - "
trigger-aggregate-deduplicated
" - "
trigger-aggregate-excessive-reports
" - "
trigger-aggregate-insufficient-budget
" - "
trigger-aggregate-no-contributions
" - "
trigger-aggregate-report-window-passed
" - "
trigger-aggregate-storage-limit
" - "
trigger-event-attributions-per-source-destination-limit
" - "
trigger-event-deduplicated
" - "
trigger-event-excessive-reports
" - "
trigger-event-low-priority
" - "
trigger-event-no-matching-configurations
" - "
trigger-event-no-matching-trigger-data
" - "
trigger-event-noise
" - "
trigger-event-report-window-not-started
" - "
trigger-event-report-window-passed
" - "
trigger-event-storage-limit
" - "
trigger-no-matching-filter-data
" - "
trigger-no-matching-source
" - "
trigger-reporting-origin-limit
" - "
trigger-unknown-error
"
An OS debug data type is a debug data type for OS registrations. Possible values are:
- "
os-source-delegated
" - "
os-trigger-delegated
"
A header errors debug data type is a debug data type for registration header errors. Possible values are:
- "
header-parsing-error
"
6.29. Verbose debug report
A verbose debug data is a struct with the following items:
- data type
- body
A verbose debug report is a struct with the following items:
- data
-
A list of verbose debug data.
- reporting origin
6.30. Triggering result
A triggering status is one of the following:
- "
dropped
" - "
noised
" - "
attributed
"
Note: "noised
" only applies for triggering event-level attribution when it is attributed
successfully but dropped as the noise was applied to the source.
A trigger debug data is a tuple with the following items:
- data type
- report
-
Null or an attribution report.
A triggering result is a tuple with the following items:
- status
- debug data
-
Null or a trigger debug data.
6.31. Destination rate-limit result
A destination rate-limit result is one of the following:
- "
allowed
" - "
hit global limit
" - "
hit reporting limit
"
7. Storage
A user agent holds an attribution source cache, which is a set of attribution sources.
A user agent holds an event-level report cache, which is a set of event-level reports.
A user agent holds an aggregatable attribution report cache, which is a set of aggregatable attribution reports.
A user agent holds an attribution rate-limit cache, which is a set of attribution rate-limit records.
A user agent holds an aggregatable debug rate-limit cache, which is a set of aggregatable debug rate-limit records.
The above caches are collectively known as the attribution caches. The attribution caches are shared among all environment settings objects.
Note: This would ideally use storage bottles to provide access to the attribution caches. However attribution data is inherently cross-site, and operations on storage would need to span across all storage bottle maps.
An internal ID is an integer.
To get the next internal ID, return an internal ID strictly greater than any previously returned by this algorithm. The user agent MAY reset this sequence when no attribution cache entry contains an internal ID.
8. Constants
Valid source expiry range is a 2-tuple of positive durations that controls the minimum and maximum value that can be used as an expiry, respectively. Its value is (1 day, 30 days).
Min report window is a positive duration that controls the minimum duration from an attribution source’s source time and any end in aggregatable report window or event-level report windows. Its value is 1 hour.
Max entries per filter data is a positive integer that controls the maximum size of an attribution source's filter data. Its value is 50.
Max values per filter data entry is a positive integer that controls the maximum size of each value of an attribution source's filter data. Its value is 50.
Max length per filter string is a positive integer that controls the maximum length of an attribution source's filter data's keys and its values's items. Its value is 25.
Attribution rate-limit window is a positive duration that controls the rate-limiting window for attribution. Its value is 30 days.
Max destinations per source is a positive integer that controls the maximum size of an attribution source's attribution destinations. Its value is 3.
Max settable event-level attributions per source is a positive integer that controls the maximum value of max number of event-level reports. Its value is 20.
Max settable event-level report windows is a positive integer that controls the maximum size of event-level report windows. Its value is 5.
Default event-level attributions per source is a map that controls how many times a single attribution source can create an event-level report by default. Its value is «[ navigation → 3, event → 1 ]».
Allowed aggregatable budget per source is a positive integer that controls the total required aggregatable budget of all aggregatable reports created for an attribution source. Its value is 65536.
Max aggregation keys per source registration is a positive integer that controls the maximum size of an attribution source's aggregation keys. Its value is 20.
Max length per aggregation key identifier is a positive integer that controls the maximum length of an attribution source's aggregation keys's keys. Its value is 25.
Default trigger data cardinality is a map that controls the valid range of trigger data. Its value is «[ navigation → 8, event → 2 ]».
Max distinct trigger data per source is a positive integer that controls the maximum size of a trigger spec map for an attribution source. Its value is 32.
Max length per trigger context ID is a positive integer that controls the maximum length of an attribution trigger's trigger context ID. Its value is 64.
Default filtering ID value is a non-negative integer. Its value is 0. It is the default value for flexible contribution filtering of aggregatable reports.
Default filtering ID max bytes is a positive integer that controls the max bytes used if none is explicitly chosen. Its value is 1. The max bytes value limits the size of filtering IDs within an aggregatable attribution report.
Valid filtering ID max bytes range is a set of positive integers that controls the allowable values of max bytes. Its value is the range 1 to 8, inclusive.
Max contributions per aggregatable debug report is a positive integer that controls the maximum size of an aggregatable debug report's contributions. Its value is 2.
Aggregatable debug rate-limit window is a positive duration that controls the rate-limiting window for aggregatable debug reporting. Its value is 1 day.
Max aggregatable debug budget per rate-limit window is a tuple consisting of two positive integers. The first controls the total required aggregatable budget of all aggregatable debug reports with a given context site per aggregatable debug rate-limit window. The second controls the total required aggregatable budget of all aggregatable debug reports with a given (context site, reporting site) per aggregatable debug rate-limit window. Its value is (2^20, 65536).
Default max event states is a positive integer that controls the default max event states. Its value is 3.
Max length of attribution scope for source is a positive integer that controls the maximum length of an attribution scope from an attribution source's values. Its value is 50.
Max attribution scopes per source is a positive integer that controls the maximum size of an attribution source's values. Its value is 20.
9. Vendor-Specific Values
Max pending sources per source origin is a positive integer that controls how many attribution sources can be in the attribution source cache per source origin.
Max settable event-level epsilon is a non-negative double that controls the default and maximum values that a source registration can specify for the epsilon parameter used by compute the channel capacity of a source and obtain a randomized source response.
Max trigger-state cardinality is a positive integer that controls the maximum size of the set of possible trigger states for any one attribution source.
Randomized null attribution report rate excluding source registration time is a
double between 0 and 1 (both inclusive) that controls the randomized number of null attribution reports
generated for an attribution trigger whose aggregatable source registration time configuration is "exclude
". If automation local testing mode is true,
this is 0.
Randomized null attribution report rate including source registration time is a
double between 0 and 1 (both inclusive) that controls the randomized number of null attribution reports
generated for an attribution trigger whose aggregatable source registration time configuration is "include
". If automation local testing mode is true,
this is 0.
Max event-level reports per attribution destination is a positive integer that controls how many event-level reports can be in the event-level report cache per site in attribution destinations.
Max aggregatable attribution reports per attribution destination is a positive integer that controls how many aggregatable attribution reports can be in the aggregatable attribution report cache per effective attribution destination.
Max event-level channel capacity per source is a map that controls how many bits of information can be exposed associated with a single attribution source. The keys are «navigation, event». The values are non-negative doubles.
Max event-level attribution scopes channel capacity per source is a map that controls how many bits of information can be exposed due to attribution scopes for a single attribution source. The keys are «navigation, event». The values are non-negative doubles.
Max aggregatable reports per source is a tuple consisting of two positive integers. The first controls how many aggregatable attribution reports can be created by attribution triggers attributed to a single attribution source. The second controls how many aggregatable debug reports can be created for an attribution source.
Max destinations covered by unexpired sources is a positive integer that controls the maximum number of distinct sites across all attribution destinations for unexpired attribution sources with a given (source site, reporting origin site).
Destination rate-limit window is a positive duration that controls the rate-limiting window for destinations.
Max destinations per rate-limit window is a tuple consisting of two integers. The first controls the maximum number of distinct sites across all attribution destinations for attribution sources with a given source site per destination rate-limit window. The second controls the maximum number of distinct sites across all attribution destinations for attribution sources with a given (source site, reporting origin site) per destination rate-limit window.
Max destinations per source reporting site per day is an integer that controls the maximum number of distinct sites across all attribution destinations for attribution sources with a given (source site, reporting origin site) per day.
Max source reporting origins per rate-limit window is a positive integer that controls the maximum number of distinct reporting origins for a (source site, attribution destination) that can create attribution sources per attribution rate-limit window.
Max source reporting origins per source reporting site is a positive integer that controls the maximum number of distinct reporting origins for a (source site, reporting origin site) that can create attribution sources per origin rate-limit window.
Origin rate-limit window is a positive duration that controls the rate-limiting window for max source reporting origins per source reporting site.
Max attribution reporting origins per rate-limit window is a positive integer that controls the maximum number of distinct reporting origins for a (source site, attribution destination) that can create event-level reports per attribution rate-limit window.
Max attributions per rate-limit window is a positive integer that controls the maximum number of attributions for a (source site, attribution destination, reporting origin site) per attribution rate-limit window. This attribution limit is separate for event-level and aggregate reporting.
Randomized aggregatable attribution report delay is a positive duration that controls the random delay to deliver an aggregatable attribution report. If automation local testing mode is true, this is 0.
Default aggregation coordinator is the aggregation coordinator that controls how to obtain the public key for encrypting an aggregatable report by default.
10. General Algorithms
10.1. Serialize an integer
To serialize an integer, represent it as a string of the shortest possible decimal number.
This would ideally be replaced by a more descriptive algorithm in Infra. See infra/201
10.2. Parsing JSON fields
Note: The "Attribution-Reporting-Register-Source
" and
"Attribution-Reporting-Register-Trigger
" response headers contain JSON-encoded
data, rather than structured values,
because of limitations on nesting in the latter. The recursive
nature of JSON makes it more amenable to future extensions.
To parse an optional 64-bit signed integer given a map map, a string key, and a possibly null 64-bit signed integer default:
-
If map[key] does not exist, return default.
-
If map[key] is not a string, return an error.
-
Let value be the result of applying the rules for parsing integers to map[key].
-
If value is an error, return an error.
-
If value cannot be represented by a 64-bit signed integer, return an error.
-
Return value.
To parse an optional 64-bit unsigned integer given a map map, a string key, and a possibly null 64-bit unsigned integer default:
-
If map[key] does not exist, return default.
-
If map[key] is not a string, return an error.
-
Let value be the result of applying the rules for parsing non-negative integers to map[key].
-
If value is an error, return an error.
-
If value cannot be represented by a 64-bit unsigned integer, return an error.
-
Return value.
10.3. Serialize attribution destinations
To serialize attribution destinations destinations, run the following steps:
-
Assert: destinations is sorted in ascending order, with a being less than b if a, serialized, is less than b, serialized.
-
Let destinationStrings be a list.
-
For each destination in destinations:
-
Assert: destination is not an opaque origin.
-
Append destination serialized to destinationStrings.
-
-
If destinationStrings’s size is equal to 1, return destinationStrings[0].
-
Return destinationStrings.
Note: destinations is required to be sorted to avoid revealing extra
information about the original source registration, namely the order of the
"destination
" field in the
original JSON registration, which can be used to distinguish semantically
equivalent registrations.
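For example, « https://destination.example » serializes to the single string "https://destination.example", while « https://a.example, https://b.example » serializes to the list « "https://a.example", "https://b.example" » (the origins shown are placeholders).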
To check if a scheme is suitable given a string scheme:
- If scheme is "http" or "https", return true.
- Return false.
To check if an origin is suitable given an origin origin:
- If origin is not a potentially trustworthy origin, return false.
- Return true.
10.4. Parsing filter data
To parse filter values given a value:
-
If value is not a map, return an error.
-
Let result be a new filter map.
-
For each filter → data of value:
-
Return result.
To parse filter data given a value:
-
Let map be the result of running parse filter values with value.
-
If map is an error, return it.
-
If map’s size is greater than the max entries per filter data, return an error.
-
For each filter → set of map:
-
If filter’s length is greater than the max length per filter string, return an error.
-
If set’s size is greater than the max values per filter data entry, return an error.
-
For each s of set:
-
If s’s length is greater than the max length per filter string, return an error.
-
-
-
Return map.
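For illustration, a well-formed filter data value within the limits above might look like the following JSON; the keys "product" and "geo" are hypothetical:
{"product": ["1234", "5678"], "geo": ["us"]}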
To parse filter config given a value:
- If value is not a map, return an error.
- Let lookbackWindow be null.
- If value["_lookback_window"] exists:
- Let map be the result of running parse filter values with value.
- If map is an error, return it.
- Let filter be a filter config with the items:
  - map
    map
  - lookback window
    lookbackWindow
- Return filter.
10.5. Parsing filters
To parse filters given a value:
-
Let filtersList be a new list.
-
If value is a map, then:
-
Let filterConfig be the result of running parse filter config with value.
-
If filterConfig is an error, return it.
-
Append filterConfig to filtersList.
-
Return filtersList.
-
-
If value is not a list, return an error.
-
For each data of value:
-
Let filterConfig be the result of running parse filter config with data.
-
If filterConfig is an error, return it.
-
Append filterConfig to filtersList.
-
-
Return filtersList.
To parse a filter pair given a map map:
- Let positive be a list of filter configs, initially empty.
- If map["filters"] exists, set positive to the result of running parse filters with map["filters"].
- If positive is an error, return it.
- Let negative be a list of filter configs, initially empty.
- If map["not_filters"] exists, set negative to the result of running parse filters with map["not_filters"].
- If negative is an error, return it.
- Return the tuple (positive, negative).
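For illustration, a registration map containing both positive and negated filters might look like the following JSON; the keys "product" and "geo" are hypothetical:
{
  "filters": {"product": ["1234", "5678"]},
  "not_filters": {"geo": ["us"]}
}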
10.6. Parsing aggregation coordinator
To parse an aggregation coordinator given value:
-
If value is not a string, return an error.
-
Let url be the result of running the URL parser on value.
-
If url is failure or null, return an error.
-
If url’s origin is not an aggregation coordinator, return an error.
-
Return url’s origin.
10.7. Parsing aggregatable debug reporting config
An aggregatable-debug-reporting JSON key is one of the following:
- "
aggregation_coordinator_origin
" - "
debug_data
" - "
key_piece
" - "
types
" - "
value
"
To parse aggregatable debug reporting data given a dataList, a positive integer maxValue, and a set of debug data types supportedTypes:
-
Assert: maxValue is less than or equal to allowed aggregatable budget per source.
-
If dataList is not a list, return an error.
-
Let debugDataMap be a new map.
-
Let unknownTypes be a new set.
-
Let unspecifiedContribution be null.
-
For each data of dataList:
-
If data is not a map, return an error.
-
Let dataKeyPiece be the result of running parse an aggregation key piece with data["
key_piece
"]. -
If dataKeyPiece is an error, return an error.
-
If data["
value
"] is not an integer or is less than or equal to 0 or is greater than maxValue, return an error. -
Let contribution be a new aggregatable contribution with the items:
- key
-
dataKeyPiece
- value
-
data["
value
"] - filtering ID
-
Let dataTypes be data["
types
"]. -
For each type of dataTypes:
-
-
If unspecifiedContribution is null, return debugDataMap.
-
For each type of supportedTypes:
-
Return debugDataMap.
To parse an aggregatable debug reporting config given a map map, a positive integer maxValue, a set of debug data types supportedTypes, and an aggregatable debug reporting config default:
-
Let keyPiece be the result of running parse an aggregation key piece with map["
key_piece
"]. -
If keyPiece is an error, return default.
-
Let debugDataMap be a new map.
-
If map["
debug_data
"] exists:-
Set debugDataMap to the result of running parse aggregatable debug reporting data with map["
debug_data
"], maxValue, and supportedTypes. -
If debugDataMap is an error, return default.
-
-
Let aggregationCoordinator be default aggregation coordinator.
-
If map["
aggregation_coordinator_origin
"] exists:-
Set aggregationCoordinator to the result of parsing an aggregation coordinator.
-
If aggregationCoordinator is an error, return default.
-
-
Let aggregatableDebugReportingConfig be a new aggregatable debug reporting config with the items:
- key piece
-
keyPiece
- debug data
-
debugDataMap
- aggregation coordinator
-
aggregationCoordinator
-
Return aggregatableDebugReportingConfig.
Note: The parsing errors are intentionally ignored in this algorithm with default returned to avoid data loss from the optional debug reporting feature.
10.8. Getting registration info
To get registration info from a header list given a header list headers:
-
If headers does not contain "
Attribution-Reporting-Info
", return a new registration info. -
Let map be the result of getting "
Attribution-Reporting-Info
" from headers with a type of "dictionary
". -
If map is not a map, return an error.
-
Let preferredPlatform be null.
-
If map["
preferred-platform
"] exists:-
Let preferredPlatformValue be map["
preferred-platform
"][0]. -
If preferredPlatformValue is not a registrar, return an error.
-
Set preferredPlatform to preferredPlatformValue.
-
-
Let reportHeaderErrors be false.
-
If map["
report-header-errors
"] exists:-
Let reportHeaderErrorsValue be map["
report-header-errors
"][0]. -
If reportHeaderErrorsValue is not a boolean, return an error.
-
Set reportHeaderErrors to reportHeaderErrorsValue.
-
-
Let registrationInfo be a new registration info struct whose items are:
- preferred platform
-
preferredPlatform
- report header errors
-
reportHeaderErrors
-
Return registrationInfo.
Require preferredPlatformValue to be a token.
10.9. Cookie-based debugging
To check if cookie-based debugging is allowed given a suitable origin reportingOrigin and a site contextSite:
-
Assert: contextSite is not an opaque origin.
-
Let domain be the canonicalized domain name of reportingOrigin’s host.
-
Let contextDomain be the canonicalized domain name of contextSite’s host.
-
If the User Agent’s cookie policy or user controls do not allow cookie access for domain on contextDomain within a third-party context, return blocked.
-
For each cookie of the user agent’s cookie store:
-
If cookie’s name is not "
ar_debug
", continue. -
If cookie’s http-only-flag is false, continue.
-
If cookie’s secure-flag is false, continue.
-
If cookie’s same-site-flag is not "
None
", continue. -
If cookie’s host-only-flag is true and domain is not identical to cookie’s domain, continue.
-
If cookie’s host-only-flag is false and domain does not domain-match cookie’s domain, continue.
-
If "
/
" does not path-match cookie’s path, continue. -
Return allowed.
-
-
Return blocked.
Ideally this would use the cookie-retrieval algorithm,
but it cannot: There is no way to consider only cookies whose http-only-flag
is true and whose same-site-flag is "None
"; there is no way to prevent the
last-access-time from being modified; and the return value is a string that
would have to be further processed to check for the "ar_debug
" cookie.
10.10. Obtaining context origin
To obtain the context origin of a node node, return node’s node navigable's top-level traversable's active document's origin.
10.11. Obtaining a randomized response
To obtain a randomized response given trueValue, a set possibleValues, and a double randomPickRate:
-
Assert: randomPickRate is between 0 and 1 (both inclusive).
-
Let r be a random double between 0 (inclusive) and 1 (exclusive) with uniform probability.
-
If r is less than randomPickRate, return a random item from possibleValues with uniform probability.
-
Otherwise, return trueValue.
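A non-normative TypeScript sketch of this primitive, assuming possibleValues is materialized as an array (names are illustrative):
// With probability randomPickRate, returns a uniformly random member of
// possibleValues (which may coincide with trueValue); otherwise returns trueValue.
function obtainRandomizedResponse<T>(
  trueValue: T,
  possibleValues: readonly T[],
  randomPickRate: number,
): T {
  if (randomPickRate < 0 || randomPickRate > 1) {
    throw new RangeError("randomPickRate must be between 0 and 1");
  }
  const r = Math.random(); // uniform in [0, 1)
  if (r < randomPickRate) {
    return possibleValues[Math.floor(Math.random() * possibleValues.length)];
  }
  return trueValue;
}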
10.12. Parsing aggregation key piece
To parse an aggregation key piece given a string input, perform the following steps. This algorithm will return either a non-negative 128-bit integer or an error.
-
If input’s code point length is not between 3 and 34 (both inclusive), return an error.
-
If the first character is not a U+0030 DIGIT ZERO (0), return an error.
-
If the second character is not a U+0058 LATIN CAPITAL LETTER X character (X) and not a U+0078 LATIN SMALL LETTER X character (x), return an error.
-
Let value be the code point substring from 2 to the end of input.
-
If the characters within value are not all ASCII hex digits, return an error.
-
Interpret value as a hexadecimal number and return as a non-negative 128-bit integer.
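For example, "0x159" parses to the 128-bit integer 345 and "0X00ff" parses to 255, whereas "159" (missing the 0x prefix) and "0x" (shorter than 3 code points) are errors.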
10.13. Should processing be blocked by reporting-origin limit
Given an attribution rate-limit record newRecord:
-
Let max be max source reporting origins per rate-limit window.
-
Let scopeSet be « "
source
" ». -
If newRecord’s scope is "
event-attribution
" or "aggregatable-attribution
":-
Set max to max attribution reporting origins per rate-limit window.
-
Set scopeSet to « "
event-attribution
", "aggregatable-attribution
" ».
-
-
Let matchingRateLimitRecords be all attribution rate-limit records record in the attribution rate-limit cache where all of the following are true:
-
record’s source site and newRecord’s source site are equal
-
record’s attribution destination and newRecord’s attribution destination are equal
-
The duration from record’s time and newRecord’s time is <= attribution rate-limit window
-
Let distinctReportingOrigins be the set of all reporting origin in matchingRateLimitRecords, unioned with «newRecord’s reporting origin».
-
If distinctReportingOrigins’s size is greater than max, return blocked.
-
Return allowed.
10.14. Can attribution rate-limit record be removed
Given an attribution rate-limit record record and a moment now:
-
If the duration from record’s time and now is <= attribution rate-limit window , return false.
-
If record’s scope is "
event-attribution
" or "aggregatable-attribution
", return true. -
If record’s expiry time is after now, return false.
-
Return true.
10.15. Obtaining and delivering a verbose debug report
To obtain and deliver a verbose debug report given a list of verbose debug data data, a suitable origin reportingOrigin, and a boolean fenced:
-
If fenced is true, return.
-
Let debugReport be a verbose debug report with the items:
- data
-
data
- reporting origin
-
reportingOrigin
-
Queue a task to attempt to deliver a verbose debug report with debugReport.
10.16. Making a background attributionsrc request
An eligibility is one of the following:
- "
unset
" -
Depending on context, a trigger may or may not be registered.
- "
empty
" - "
event-source
" - "
navigation-source
" -
A navigation source may be registered.
- "
trigger
" -
A trigger may be registered.
- "
event-source-or-trigger
"
To validate a background attributionsrc eligibility given an eligibility eligibility:
-
Assert: eligibility is "
navigation-source
" or "event-source-or-trigger
".
To make a background attributionsrc request given a URL url, an origin contextOrigin, an eligibility eligibility, a boolean fenced,
a Document
document, and a referrer policy referrerPolicy:
-
Validate eligibility.
-
If contextOrigin is not suitable, return.
-
Let context be document’s relevant settings object.
-
If context is not a secure context, return.
-
If document is not allowed to use the "attribution-reporting" feature, return.
-
Let supportedRegistrars be the result of getting supported registrars.
-
If supportedRegistrars is empty, return.
-
Let request be a new request with the following properties:
- method
-
"
GET
" - URL
-
url
- keepalive
-
true
- Attribution Reporting eligibility
-
eligibility
- referrer policy
-
referrerPolicy
-
Fetch request with processResponse being process an attributionsrc response with contextOrigin, eligibility, and fenced.
Audit other properties on request and set them properly.
Support header-processing on redirects. Due to atomic HTTP redirect handling, we cannot process registrations through integration with fetch. [Issue #839]
Check for transient activation with "navigation-source
".
To make background attributionsrc requests given an HTMLAttributionSrcElementUtils
element, an eligibility eligibility,
and a referrer policy referrerPolicy:
-
Let attributionSrc be element’s
attributionSrc
. -
Let tokens be the result of splitting attributionSrc on ASCII whitespace.
-
For each token of tokens:
-
Parse token, relative to element’s node document. If that is not successful, continue. Otherwise, let url be the resulting URL record.
-
Let fenced be true if element’s node navigable is a fenced navigable, false otherwise.
-
Run make a background attributionsrc request with url, element’s context origin, eligibility, fenced, element’s node document, and referrerPolicy.
-
Consider allowing the user agent to limit the size of tokens.
To process an attributionsrc response given a suitable origin contextOrigin, an eligibility eligibility, a boolean fenced, and a response response:
-
Validate eligibility.
-
Run process an attribution eligible response with contextOrigin, eligibility, fenced, and response.
To get the registration platform given a header value or null webHeader, a header value or null osHeader, and a registrar or null preferredPlatform:
-
If webHeader and osHeader are both null, return null.
-
If preferredPlatform is null:
-
If preferredPlatform is:
To process an attribution source response given a suitable origin contextOrigin, a suitable origin reportingOrigin, a source type sourceType, a header value or null webSourceHeader, a header value or null osSourceHeader, a registration info registrationInfo, and a boolean fenced:
-
Let platform be the result of get the registration platform with webSourceHeader, osSourceHeader, and registrationInfo’s preferred platform.
-
If platform is null, return.
-
Let reportHeaderErrors be registrationInfo’s report header errors.
-
If platform is:
- "
web
" -
-
Let source be the result of running parse source-registration JSON with webSourceHeader, contextOrigin, reportingOrigin, sourceType, current wall time, and fenced.
-
If source is an error:
-
If reportHeaderErrors is true, run obtain and deliver debug reports on registration header errors with "
Attribution-Reporting-Register-Source
", webSourceHeader, reportingOrigin, contextOrigin, and fenced. -
Return.
-
-
If sourceType is "
navigation
", enforce the attribution-scope privacy limit. -
Process source.
-
- "
os
" -
-
Let osSourceRegistrations be the result of running get OS registrations from a header value with osSourceHeader.
-
If osSourceRegistrations is an error:
-
If reportHeaderErrors is true, run obtain and deliver debug reports on registration header errors with "
Attribution-Reporting-Register-OS-Source
", osSourceHeader, reportingOrigin, contextOrigin, and fenced. -
Return.
-
-
Process osSourceRegistrations according to an implementation-defined algorithm.
-
Run obtain and deliver debug reports on OS registrations with "
os-source-delegated
", osSourceRegistrations, contextOrigin, and fenced.
-
- "
To process an attribution trigger response given a suitable origin contextOrigin, a suitable origin reportingOrigin, a response response, a header value or null webTriggerHeader, a header value or null osTriggerHeader, a registration info registrationInfo, and a boolean fenced:
- Let platform be the result of get the registration platform with webTriggerHeader, osTriggerHeader, and registrationInfo’s preferred platform.
- If platform is null, return.
- Let reportHeaderErrors be registrationInfo’s report header errors.
- If platform is:
  - "web":
    - Let destinationSite be the result of obtaining a site from contextOrigin.
    - Let trigger be the result of running create an attribution trigger with webTriggerHeader, destinationSite, reportingOrigin, current wall time, and fenced.
    - If trigger is an error:
      - If reportHeaderErrors is true, run obtain and deliver debug reports on registration header errors with "Attribution-Reporting-Register-Trigger", webTriggerHeader, reportingOrigin, contextOrigin, and fenced.
      - Return.
    - Maybe defer and then complete trigger attribution with trigger.
  - "os":
    - Let osTriggerRegistrations be the result of running get OS registrations from a header value with osTriggerHeader.
    - If osTriggerRegistrations is an error:
      - If reportHeaderErrors is true, run obtain and deliver debug reports on registration header errors with "Attribution-Reporting-Register-OS-Trigger", osTriggerHeader, reportingOrigin, contextOrigin, and fenced.
      - Return.
    - Process osTriggerRegistrations according to an implementation-defined algorithm.
    - Run obtain and deliver debug reports on OS registrations with "os-trigger-delegated", osTriggerRegistrations, contextOrigin, and fenced.
To process an attribution eligible response given a suitable origin contextOrigin, an eligibility eligibility, a boolean fenced, and a response response:
- The user agent MAY ignore the response; if so, return.
  Note: The user agent may prevent attribution for a number of reasons, such as user opt-out. In these cases, it is preferred to abort the API flow at response time rather than at request time so that this state is not immediately detectable. Attribution may also be blocked if the reporting origin is not enrolled.
- Queue a task on the networking task source to proceed with the following steps.
  Note: This algorithm can be invoked while running in parallel.
- Assert: eligibility is "navigation-source", "event-source", or "event-source-or-trigger".
- Let reportingOrigin be response’s URL’s origin.
- If reportingOrigin is not suitable, return.
- Let sourceHeader be the result of getting "Attribution-Reporting-Register-Source" from response’s header list.
- Let triggerHeader be the result of getting "Attribution-Reporting-Register-Trigger" from response’s header list.
- Let osSourceHeader be the result of getting "Attribution-Reporting-Register-OS-Source" from response’s header list.
- Let osTriggerHeader be the result of getting "Attribution-Reporting-Register-OS-Trigger" from response’s header list.
- Let registrationInfo be the result of getting registration info from response’s header list.
- If registrationInfo is an error, return.
- If eligibility is:
  - "navigation-source"
  - "event-source":
    Run the following steps:
    - Let sourceType be "navigation".
    - If eligibility is "event-source", set sourceType to "event".
    - Run process an attribution source response with contextOrigin, reportingOrigin, sourceType, sourceHeader, osSourceHeader, registrationInfo, and fenced.
  - "event-source-or-trigger":
    Run the following steps:
    - Let hasSourceRegistration be false.
    - If sourceHeader or osSourceHeader is not null, set hasSourceRegistration to true.
    - Let hasTriggerRegistration be false.
    - If triggerHeader or osTriggerHeader is not null, set hasTriggerRegistration to true.
    - If both hasSourceRegistration and hasTriggerRegistration are true, return.
    - If hasSourceRegistration is true:
      - Run process an attribution source response with contextOrigin, reportingOrigin, "event", sourceHeader, osSourceHeader, registrationInfo, and fenced.
    - If hasTriggerRegistration is true:
      - Run process an attribution trigger response with contextOrigin, reportingOrigin, response, triggerHeader, osTriggerHeader, registrationInfo, and fenced.
To obtain and deliver debug reports on registration header errors given a header name headerName, a header value headerValue, a suitable origin reportingOrigin, a suitable origin contextOrigin, and a boolean fenced:
- Let contextSite be the result of obtaining a site from contextOrigin.
- Let body be a new map with the following key/value pairs:
  - "context_site": contextSite, serialized.
  - "header": headerName.
  - "value": headerValue.
- Let data be a new verbose debug data with the items:
- Run obtain and deliver a verbose debug report with « data », reportingOrigin, and fenced.
Note: The user agent may optionally include error details of any type in body["error"].
10.17. Attribution debugging
To check if attribution debugging can be enabled given an attribution debug info debugInfo:
- If debugInfo’s source debug key is null, return false.
- If debugInfo’s trigger debug key is null, return false.
- Return true.
To serialize an attribution debug info given a map data and an attribution debug info debugInfo:
- If the result of checking if attribution debugging can be enabled with debugInfo is false, return.
- Set data["source_debug_key"] to debugInfo’s source debug key, serialized.
- Set data["trigger_debug_key"] to debugInfo’s trigger debug key, serialized.
Note: We require both source and trigger debug keys to be present to avoid a privacy leak from one-sided third-party cookie access.
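As a non-normative illustration of the two algorithms above, the following Python sketch shows that neither debug key is serialized unless both are present; the dictionary and parameter names are illustrative only.

from typing import Optional

def serialize_debug_info(data: dict,
                         source_debug_key: Optional[int],
                         trigger_debug_key: Optional[int]) -> None:
    # Debugging can be enabled only if both keys are present.
    if source_debug_key is None or trigger_debug_key is None:
        return
    data["source_debug_key"] = str(source_debug_key)
    data["trigger_debug_key"] = str(trigger_debug_key)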
10.18. Obtaining and delivering an aggregatable debug report
To check if aggregatable debug reporting should be blocked by rate-limit given an aggregatable debug rate-limit record newRecord:
- Let matchingRecords be all aggregatable debug rate-limit records record of the aggregatable debug rate-limit cache where all of the following are true:
  - record’s context site and newRecord’s context site are equal
  - The duration from record’s time to newRecord’s time is less than or equal to the aggregatable debug rate-limit window
- Let totalBudget be newRecord’s consumed budget.
- Let totalSameReportingBudget be totalBudget.
- For each record of matchingRecords:
  - Increment totalBudget by record’s consumed budget.
  - If record’s reporting site and newRecord’s reporting site are equal, increment totalSameReportingBudget by record’s consumed budget.
- If totalBudget is greater than max aggregatable debug budget per rate-limit window[0], return blocked.
- If totalSameReportingBudget is greater than max aggregatable debug budget per rate-limit window[1], return blocked.
- Return allowed.
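The rate-limit check can be summarized with the following non-normative Python sketch. The window length and budget limits are illustrative assumptions rather than the normative values, and records are plain dataclasses rather than the spec's structs.

from dataclasses import dataclass

WINDOW_SECONDS = 24 * 60 * 60   # assumed aggregatable debug rate-limit window
MAX_BUDGET = (65536, 65536)     # assumed (global, same-reporting-site) limits

@dataclass
class DebugRateLimitRecord:
    context_site: str
    reporting_site: str
    time: float                 # seconds since some epoch
    consumed_budget: int

def should_block(cache: list[DebugRateLimitRecord],
                 new: DebugRateLimitRecord) -> bool:
    total = new.consumed_budget
    total_same_reporting = new.consumed_budget
    for record in cache:
        if record.context_site != new.context_site:
            continue
        if new.time - record.time > WINDOW_SECONDS:
            continue
        total += record.consumed_budget
        if record.reporting_site == new.reporting_site:
            total_same_reporting += record.consumed_budget
    return total > MAX_BUDGET[0] or total_same_reporting > MAX_BUDGET[1]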
To obtain and deliver an aggregatable debug report given a list of aggregatable contributions contributions, a suitable origin reportingOrigin, a site effectiveDestination, an aggregation coordinator aggregationCoordinator, and a moment now:
- Assert: effectiveDestination is not an opaque origin.
- Let report be a new aggregatable debug report with the items:
  - reporting origin: reportingOrigin
  - effective attribution destination: effectiveDestination
  - report time: now
  - external ID: the result of generating a random UUID
  - internal ID: the result of getting the next internal ID
  - contributions: contributions
  - aggregation coordinator: aggregationCoordinator
- Queue a task to attempt to deliver an aggregatable debug report with report.
To obtain and deliver an aggregatable debug report on registration given a list contributions, a site contextSite, an origin reportingOrigin, a possibly null attribution source source, a site effectiveDestination, an aggregation coordinator aggregationCoordinator, and a moment now:
- If contributions is empty:
  - Run obtain and deliver an aggregatable debug report with «», reportingOrigin, effectiveDestination, aggregationCoordinator, and now.
  - Return.
- Let remainingBudget be allowed aggregatable budget per source.
- Let numReports be 0.
- If source is not null:
  - Set remainingBudget to source’s remaining aggregatable debug budget.
  - Set numReports to source’s number of aggregatable debug reports.
- Let requiredBudget be the total value of contributions.
- If requiredBudget is greater than remainingBudget:
  - Run obtain and deliver an aggregatable debug report with «», reportingOrigin, effectiveDestination, aggregationCoordinator, and now.
  - Return.
- If numReports is equal to max aggregatable reports per source[1]:
  - Run obtain and deliver an aggregatable debug report with «», reportingOrigin, effectiveDestination, aggregationCoordinator, and now.
  - Return.
- Let rateLimitRecord be a new aggregatable debug rate-limit record with the items:
  - context site: contextSite
  - reporting site: the result of obtaining a site from reportingOrigin
  - time: now
  - consumed budget: requiredBudget
- If the result of running check if aggregatable debug reporting should be blocked by rate-limit with rateLimitRecord is blocked:
  - Run obtain and deliver an aggregatable debug report with «», reportingOrigin, effectiveDestination, aggregationCoordinator, and now.
  - Return.
- Run obtain and deliver an aggregatable debug report with contributions, reportingOrigin, effectiveDestination, aggregationCoordinator, and now.
- If source is not null:
  - Decrement source’s remaining aggregatable debug budget by requiredBudget.
  - Increment source’s number of aggregatable debug reports by 1.
- Append rateLimitRecord to the aggregatable debug rate-limit cache.
- Remove all aggregatable debug rate-limit records entry from the aggregatable debug rate-limit cache if the duration from entry’s time to now is greater than the aggregatable debug rate-limit window.
11. Source Algorithms
11.1. Obtaining a randomized source response
To obtain a set of possible trigger states given a randomized response output configuration config:
- Let possibleTriggerStates be a new set.
- For each triggerData → spec of config’s trigger specs:
  - For each reportWindow of spec’s event-level report windows:
    - Let state be a new trigger state with the items:
      - trigger data: triggerData
      - report window: reportWindow
    - Append state to possibleTriggerStates.
- Let possibleValues be a new set.
- For each integer attributions of the range 0 to config’s max attributions per source, inclusive:
- Return possibleValues.
To obtain a randomized source response pick rate given a positive integer states and a double epsilon:
- Return states / (states - 1 + e^epsilon).
To obtain a randomized source response given a set of possible trigger states possibleValues and a double epsilon:
- Let pickRate be the result of obtaining a randomized source response pick rate with possibleValues’s size and epsilon.
- Return the result of obtaining a randomized response with null, possibleValues, and pickRate.
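A non-normative Python sketch of the randomized-response selection follows. With k possible trigger states the pick rate is k / (k - 1 + e^epsilon); with that probability the truthful behaviour (represented here by None) is replaced by a uniformly random member of the possible values, whose representation is left opaque.

import math
import random

def pick_rate(num_states: int, epsilon: float) -> float:
    return num_states / (num_states - 1 + math.exp(epsilon))

def randomized_source_response(possible_values: list, epsilon: float):
    # With probability pick_rate, substitute a uniformly random fake
    # response; otherwise return None, meaning "respond truthfully".
    if random.random() < pick_rate(len(possible_values), epsilon):
        return random.choice(possible_values)
    return None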
11.2. Computing channel capacity
To compute the channel capacity of a source given a positive integer states and a double epsilon:
- If states is 1, return 0.
- If states is greater than the user agent’s max trigger-state cardinality, return an error.
- Let pickRate be the result of obtaining a randomized source response pick rate with states and epsilon.
- Let p be pickRate * (states - 1) / states.
- Return log2(states) - h(p) - p * log2(states - 1), where h is the binary entropy function [BIN-ENT].
Note: This algorithm computes the channel capacity [CHAN] of a q-ary symmetric channel [Q-SC].
To compute the scopes channel capacity of a source given a positive integer numTriggerStates, a positive integer attributionScopeLimit, and a positive integer maxEventStates:
- Let totalStates be numTriggerStates + maxEventStates * (attributionScopeLimit - 1).
- Return log2(totalStates).
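The two capacity computations above correspond to the following non-normative Python sketch; h is the binary entropy function and the first formula is the capacity of a q-ary symmetric channel.

import math

def binary_entropy(p: float) -> float:
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def channel_capacity(states: int, epsilon: float) -> float:
    if states == 1:
        return 0.0
    pick = states / (states - 1 + math.exp(epsilon))
    p = pick * (states - 1) / states
    return math.log2(states) - binary_entropy(p) - p * math.log2(states - 1)

def scopes_channel_capacity(num_trigger_states: int,
                            attribution_scope_limit: int,
                            max_event_states: int) -> float:
    total = num_trigger_states + max_event_states * (attribution_scope_limit - 1)
    return math.log2(total)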
11.3. Parsing source-registration JSON
A source-registration JSON key is one of the following:
- "aggregatable_debug_reporting"
- "aggregatable_report_window"
- "aggregation_keys"
- "attribution_scopes"
- "budget"
- "debug_key"
- "debug_reporting"
- "destination"
- "destination_limit_priority"
- "end_times"
- "event_level_epsilon"
- "event_report_window"
- "event_report_windows"
- "expiry"
- "filter_data"
- "limit"
- "max_event_level_reports"
- "max_event_states"
- "priority"
- "source_event_id"
- "start_time"
- "summary_buckets"
- "summary_operator"
- "trigger_data"
- "trigger_data_matching"
- "trigger_specs"
- "values"
To parse an attribution destination from a string str:
- Let url be the result of running the URL parser on str.
- If url is failure or null, return an error.
- Return the result of obtaining a site from url’s origin.
To parse attribution destinations from a map map:
- If map["destination"] does not exist, return an error.
- Let val be map["destination"].
- If val is a string, set val to « val ».
- If val is not a list, return an error.
- Let result be a set.
- For each value of val:
  - If value is not a string, return an error.
  - Let destination be the result of parse an attribution destination with value.
  - If destination is an error, return it.
  - Append destination to result.
- If result is empty or its size is greater than the max destinations per source, return an error.
- Sort result in ascending order, with a being less than b if a, serialized, is less than b, serialized.
- Return result.
Note: Sorting result helps ensure that registrations with the same set of destinations are equivalent, regardless of the order of sites in the registration JSON.
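The following non-normative Python sketch mirrors the destination parsing above. Computing a real site requires the registrable domain (eTLD+1); the sketch approximates a site as scheme://host, and MAX_DESTINATIONS is an illustrative stand-in for the max destinations per source.

from urllib.parse import urlparse

MAX_DESTINATIONS = 3  # illustrative stand-in for "max destinations per source"

def parse_destinations(registration: dict) -> list[str]:
    val = registration.get("destination")
    if val is None:
        raise ValueError('"destination" is required')
    if isinstance(val, str):
        val = [val]
    if not isinstance(val, list):
        raise ValueError('"destination" must be a string or a list')
    sites = set()
    for item in val:
        if not isinstance(item, str):
            raise ValueError("each destination must be a string")
        url = urlparse(item)
        if not url.scheme or not url.hostname:
            raise ValueError(f"invalid destination URL: {item}")
        # Approximation: scheme://host stands in for the site (no eTLD+1).
        sites.add(f"{url.scheme}://{url.hostname}")
    if not sites or len(sites) > MAX_DESTINATIONS:
        raise ValueError("invalid number of destinations")
    # Sorted so that registrations with the same destination set compare equal.
    return sorted(sites)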
To parse a duration given a map map, a string key, and a tuple of durations (clampStart, clampEnd):
- Assert: clampStart < clampEnd.
- Let seconds be null.
- If map[key] exists and is a non-negative integer, set seconds to map[key].
- Otherwise, set seconds to the result of running parse an optional 64-bit unsigned integer with map, key, and null.
- If seconds is an error, return an error.
- If seconds is null, return clampEnd.
- Let duration be the duration of seconds seconds.
- If duration is less than clampStart, return clampStart.
- If duration is greater than clampEnd, return clampEnd.
- Return duration.
Consider rejecting out-of-bounds values instead of silently clamping.
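A minimal, non-normative Python sketch of the clamping behaviour follows; it handles integer inputs only (the normative algorithm also accepts values via the 64-bit unsigned integer parser), and all quantities are in seconds.

from typing import Optional

def parse_duration(value: Optional[int], clamp_start: int, clamp_end: int) -> int:
    assert clamp_start < clamp_end
    if value is None:
        # A missing value falls back to the upper clamp.
        return clamp_end
    if not isinstance(value, int) or value < 0:
        raise ValueError("duration must be a non-negative integer")
    # Out-of-range values are clamped rather than rejected.
    return min(max(value, clamp_start), clamp_end)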
To parse aggregation keys given a map map:
- Let aggregationKeys be a new map.
- If map["aggregation_keys"] does not exist, return aggregationKeys.
- Let values be map["aggregation_keys"].
- If values is not a map, return an error.
- If values’s size is greater than the max aggregation keys per source registration, return an error.
- For each key → value of values:
  - If key’s length is greater than the max length per aggregation key identifier, return an error.
  - If value is not a string, return an error.
  - Let keyPiece be the result of running parse an aggregation key piece with value.
  - If keyPiece is an error, return it.
  - Set aggregationKeys[key] to keyPiece.
- Return aggregationKeys.
To obtain default effective windows given a source type sourceType, a moment sourceTime, and a duration eventReportWindow:
- Let deadlines be «» if sourceType is "event", else « 2 days, 7 days ».
- Remove all elements in deadlines that are greater than or equal to eventReportWindow.
- Append eventReportWindow to deadlines.
- Let lastEnd be sourceTime.
- Let windows be «».
- For each deadline of deadlines:
  - Let window be a new report window whose items are
  - Append window to windows.
  - Set lastEnd to lastEnd + deadline.
- Return windows.
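The deadline selection above can be sketched, non-normatively, as follows; the 2-day and 7-day defaults apply to navigation sources only, and durations are expressed in seconds.

DAY = 86400
NAVIGATION_DEADLINES = [2 * DAY, 7 * DAY]

def default_deadlines(source_type: str, event_report_window: int) -> list[int]:
    deadlines = [] if source_type == "event" else list(NAVIGATION_DEADLINES)
    # Drop deadlines at or beyond the configured event report window, then
    # make the window itself the final deadline.
    deadlines = [d for d in deadlines if d < event_report_window]
    deadlines.append(event_report_window)
    return deadlines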
To parse top-level report windows given a map map, a moment sourceTime, a source type sourceType, and a duration expiry:
-
If map["
event_report_window
"] exists and map["event_report_windows
"] exists, return an error. -
If map["
event_report_window
"] exists:-
Let eventReportWindow be the result of running parse a duration with map, "
event_report_window
", and (min report window, expiry). -
If eventReportWindow is an error, return eventReportWindow.
-
Return the result of obtaining default effective windows given sourceType, sourceTime, and eventReportWindow.
-
-
If map["
event_report_windows
"] does not exist, return the result of obtaining default effective windows given sourceType, sourceTime, and expiry. -
Return the result of parsing report windows with map["
event_report_windows
"], sourceTime, and expiry.
To parse report windows given a value, a moment sourceTime, and a duration expiry:
-
If value is not a map, return an error.
-
Let startDuration be 0 seconds.
-
If value["
start_time
"] exists:-
Let start be value["
start_time
"]. -
If start is not a non-negative integer, return an error.
-
Set startDuration to start seconds.
-
If startDuration is greater than expiry, return an error.
-
-
If value["
end_times
"] does not exist or is not a list, return an error. -
Let endDurations be value["
end_times
"]. -
If the size of endDurations is greater than max settable event-level report windows, return an error.
-
If endDurations is empty, return an error.
-
Let windows be a new list.
-
For each end of endDurations:
-
If end is not a positive integer, return an error.
-
Let endDuration be end seconds.
-
If endDuration is greater than expiry, set endDuration to expiry.
-
If endDuration is less than min report window, set endDuration to min report window.
-
If endDuration is less than or equal to startDuration, return an error.
-
Let window be a new report window whose items are
-
Append window to windows.
-
Set startDuration to endDuration.
-
-
Return windows.
The user-agent has an associated boolean experimental Flexible Event support (default false) that exposes non-normative behavior described in the Flexible event-level configurations proposal.
To parse summary operator given a map map:
-
Let value be "
count
". -
If map["
summary_operator
"] exists:-
If map["
summary_operator
"] is not a string, return an error. -
If map["
summary_operator
"] is not a summary operator, return an error. -
Set value to map["
summary_operator
"].
-
-
Return value.
To parse summary buckets given a map map and an integer maxEventLevelReports:
-
Let values be the range 1 to maxEventLevelReports, inclusive.
-
If map["
summary_buckets
"] exists:-
If map["
summary_buckets
"] is not a list, is empty, or its size is greater than maxEventLevelReports, return an error. -
Set values to map["
summary_buckets
"].
-
-
Let prev be 0.
-
Let summaryBuckets be a new list.
-
For each item of values:
-
If item is not an integer or cannot be represented by an unsigned 32-bit integer, or is less than or equal to prev, return an error.
-
Let summaryBucket be a new summary bucket whose items are
-
Append summaryBucket to summaryBuckets.
-
Set prev to item.
-
-
Return summaryBuckets.
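Non-normatively, the summary-bucket rules above reduce to the following Python sketch: buckets default to 1 through maxEventLevelReports and must otherwise be a non-empty, strictly increasing sequence of unsigned 32-bit integers no longer than maxEventLevelReports.

def parse_summary_buckets(registration: dict, max_event_level_reports: int) -> list[int]:
    if "summary_buckets" in registration:
        values = registration["summary_buckets"]
        if (not isinstance(values, list) or not values
                or len(values) > max_event_level_reports):
            raise ValueError("invalid summary_buckets")
    else:
        values = list(range(1, max_event_level_reports + 1))
    prev = 0
    for item in values:
        if not isinstance(item, int) or item > 0xFFFFFFFF or item <= prev:
            raise ValueError("summary_buckets must be strictly increasing uint32s")
        prev = item
    return values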
To parse trigger data into a trigger spec map given a triggerDataList, a trigger spec spec, a trigger spec map specs, and a boolean allowEmpty:
-
If triggerDataList is not a list or its size is greater than max distinct trigger data per source, return false.
-
If allowEmpty is false and triggerDataList is empty, return false.
-
For each triggerData of triggerDataList:
-
If triggerData is not an integer or cannot be represented by an unsigned 32-bit integer, or specs[triggerData] exists, return false.
-
Set specs[triggerData] to spec.
-
If specs’s size is greater than max distinct trigger data per source, return false.
-
-
Return true.
To parse trigger specs given a map map, a moment sourceTime, a source type sourceType, a duration expiry, and a trigger-data matching mode matchingMode:
-
Let defaultReportWindows be the result of parsing top-level report windows with map, sourceTime, sourceType, and expiry.
-
If defaultReportWindows is an error, return an error.
-
Let specs be a new trigger spec map.
-
If experimental Flexible Event support is true and map["
trigger_specs
"] exists:-
If map["
trigger_data
"] exists, return an error. -
If map["
trigger_specs
"] is not a list or its size is greater than max distinct trigger data per source, return an error. -
For each item of map["
trigger_specs
"]:-
If item is not a map, return an error.
-
Let spec be a new trigger spec with the following items:
- event-level report windows
-
defaultReportWindows
-
If item["
event_report_windows
"] exists:-
Let reportWindows be the result of parsing report windows with item["
event_report_windows
"], sourceTime, and expiry. -
If reportWindows is an error, return it.
-
Set spec’s event-level report windows to reportWindows.
-
-
If item["
trigger_data
"] does not exist, return an error. -
Let allowEmpty be false.
-
If the result of running parse trigger data into a trigger spec map with item["
trigger_data
"], spec, specs, and allowEmpty is false, return an error.
-
-
-
Otherwise:
-
Let spec be a new trigger spec with the following items:
- event-level report windows
-
defaultReportWindows
-
If map["
trigger_data
"] exists:-
Let allowEmpty be true.
-
If the result of running parse trigger data into a trigger spec map with map["
trigger_data
"], spec, specs, and allowEmpty is false, return an error.
-
-
Otherwise:
-
For each integer triggerData of the range 0 to default trigger data cardinality[sourceType], exclusive:
-
Set specs[triggerData] to spec.
-
-
-
-
If matchingMode is "
modulus
": -
Return specs.
Invoke parse summary buckets and parse summary operator from this algorithm.
To parse a source aggregatable debug reporting config given value, a non-negative integer defaultBudget, and an aggregatable debug reporting config defaultConfig:
-
If value is not a map, return the tuple (defaultBudget, defaultConfig).
-
If value["
budget
"] does not exist, return the tuple (defaultBudget, defaultConfig). -
Let budget be value["
budget
"]. -
If budget is not an integer or is less than or equal to 0 or is greater than allowed aggregatable budget per source, return the tuple (defaultBudget, defaultConfig).
-
Let supportedTypes be the set of all source debug data types.
-
Let config be the result of running parse an aggregatable debug reporting config with value, budget, supportedTypes, and defaultConfig.
-
Return the tuple (budget, config).
To parse attribution scopes from a map map:
-
If map["
attribution_scopes
"] does not exist, return null. -
Let value be map["
attribution_scopes
"]. -
If value is not a map, return an error.
-
Set limit to value["
limit
"]. -
If limit is not an integer, cannot be represented by an unsigned 32-bit integer, or is less than or equal to zero, return an error.
-
Let maxEventStates be the result of running parse max event states with value.
-
If maxEventStates is an error, return an error.
-
Let values be the result of running parse attribution scope values for source with value and limit.
-
If values is an error, return an error.
-
Let attributionScopes be a new attribution scopes with the items:
- limit
-
limit
- values
-
values
- max event states
-
maxEventStates
-
Return attributionScopes.
To parse max event states from a map map:
-
If map["
max_event_states
"] does not exist, return default max event states. -
Let maxEventStates be map["
max_event_states
"]. -
If maxEventStates is not an integer or is less than or equal to 0, return an error.
-
If maxEventStates is greater than the user agent’s max trigger-state cardinality, return an error.
-
Return maxEventStates.
To parse attribution scope values for source from a map map and a 32-bit positive integer limit:
-
Let result be a new set.
-
Let values be map["
values
"]. -
If values is not a list, return an error.
-
If values is empty, return an error.
-
For each value of values:
-
If value is not a string, return an error.
-
If value’s length is greater than max length of attribution scope for source, return an error.
-
Append value to result.
-
-
If result’s size is greater than limit or max attribution scopes per source, return an error.
-
Return result.
Note: Empty attribution scopes are not allowed if limit is set, to prevent the selection of both sources with and without scopes, which would effectively result in limit + 1 scopes.
To parse source-registration JSON given a byte sequence json, a suitable origin sourceOrigin, a suitable origin reportingOrigin, a source type sourceType, a moment sourceTime, and a boolean fenced:
-
Let value be the result of running parse JSON bytes to an Infra value with json.
-
If value is not a map, return an error.
-
Let attributionDestinations be the result of running parse attribution destinations with value.
-
If attributionDestinations is an error, return it.
-
Let sourceEventId be the result of running parse an optional 64-bit unsigned integer with value, "
source_event_id
", and 0. -
If sourceEventId is an error, return it.
-
Let expiry be the result of running parse a duration with value, "
expiry
", and valid source expiry range. -
If expiry is an error, return it.
-
If sourceType is "
event
", round expiry away from zero to the nearest day (86400 seconds). -
Let priority be the result of running parse an optional 64-bit signed integer with value, "
priority
", and -
If priority is an error, return it.
-
Let destinationLimitPriority be the result of running parse an optional 64-bit signed integer with value, "
destination_limit_priority
", and 0. -
If destinationLimitPriority is an error, return it.
-
Let filterData be a new filter map.
-
If value["
filter_data
"] exists:-
Set filterData to the result of running parse filter data with value["
filter_data
"]. -
If filterData is an error, return it.
-
If filterData["
source_type
"] exists, return an error.
-
-
Set filterData["
source_type
"] to « sourceType ». -
Let debugKey be the result of running parse an optional 64-bit unsigned integer with value, "
debug_key
", and null. -
If debugKey is an error, set debugKey to null.
-
Let debugCookieSet be false.
-
Let sourceSite be the result of obtaining a site from sourceOrigin.
-
If the result of running check if cookie-based debugging is allowed with reportingOrigin and sourceSite is allowed, set debugCookieSet to true.
-
If debugCookieSet is false, set debugKey to null.
-
Let aggregationKeys be the result of running parse aggregation keys with value.
-
If aggregationKeys is an error, return it.
-
Let maxAttributionsPerSource be default event-level attributions per source[sourceType].
-
Set maxAttributionsPerSource to value["
max_event_level_reports
"] if it exists. -
If maxAttributionsPerSource is not a non-negative integer, or is greater than max settable event-level attributions per source, return an error.
-
Let aggregatableReportWindowEnd be the result of running parse a duration with value, "
aggregatable_report_window
", and (min report window, expiry). -
If aggregatableReportWindowEnd is an error, return it.
-
Let debugReportingEnabled be false.
-
If value["
debug_reporting
"] exists and is a boolean, set debugReportingEnabled to value["debug_reporting
"]. -
Let aggregatableReportWindow be a new report window with the following items:
-
Let triggerDataMatchingMode be "
modulus
". -
If value["
trigger_data_matching
"] exists:-
If value["
trigger_data_matching
"] is not a string, return an error. -
If value["
trigger_data_matching
"] is not a trigger-data matching mode, return an error. -
Set triggerDataMatchingMode to value["
trigger_data_matching
"].
-
-
Let triggerSpecs be the result of parsing trigger specs with value, sourceTime, sourceType, expiry, and triggerDataMatchingMode.
-
If triggerSpecs is an error, return it.
-
Let attributionScopes be the result of running parse attribution scopes with value.
-
If attributionScopes is an error, return it.
-
Let epsilon be the user agent’s max settable event-level epsilon.
-
Set epsilon to value["
event_level_epsilon
"] if it exists: -
If epsilon is not a double, is less than 0, or is greater than the user agent’s max settable event-level epsilon, return an error.
-
Let aggregatableDebugBudget be 0.
-
Let aggregatableDebugReportingConfig be a new aggregatable debug reporting config.
-
If value["
aggregatable_debug_reporting
"] exists:-
Set (aggregatableDebugBudget, aggregatableDebugReportingConfig) to the result of running parse a source aggregatable debug reporting config with value["
aggregatable_debug_reporting
"], aggregatableDebugBudget, and aggregatableDebugReportingConfig.
-
-
Let aggregatableAttributionBudget be allowed aggregatable budget per source - aggregatableDebugBudget.
-
If automation local testing mode is true, set epsilon to ∞.
- Let source be a new attribution source struct whose items are:
  - internal ID: the result of getting the next internal ID
  - source origin: sourceOrigin
  - event ID: sourceEventId
  - attribution destinations: attributionDestinations
  - reporting origin: reportingOrigin
  - expiry: expiry
  - trigger specs: triggerSpecs
  - aggregatable report window: aggregatableReportWindow
  - priority: priority
  - source time: sourceTime
  - source type: sourceType
  - number of event-level reports: 0
  - max number of event-level reports: maxAttributionsPerSource
  - event-level epsilon: epsilon
  - filter data: filterData
  - debug key: debugKey
  - aggregation keys: aggregationKeys
  - remaining aggregatable attribution budget: aggregatableAttributionBudget
  - debug reporting enabled: debugReportingEnabled
  - trigger-data matching mode: triggerDataMatchingMode
  - debug cookie set: debugCookieSet
  - fenced: fenced
  - remaining aggregatable debug budget: aggregatableDebugBudget
  - aggregatable debug reporting config: aggregatableDebugReportingConfig
  - destination limit priority: destinationLimitPriority
  - attribution scopes: attributionScopes
- Return source.
Determine proper charset-handling for the JSON header value.
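As a non-normative illustration of the expiry handling above, the sketch below clamps the registered expiry and, for event sources, rounds it to the nearest whole day; the clamp bounds and the tie-breaking direction (ties rounded up, i.e. away from zero) are assumptions for illustration.

DAY = 86400
MIN_EXPIRY, MAX_EXPIRY = DAY, 30 * DAY   # assumed valid source expiry range

def effective_expiry(raw_seconds, source_type: str) -> int:
    # A missing value falls back to the maximum; otherwise clamp to the range.
    if raw_seconds is None:
        expiry = MAX_EXPIRY
    else:
        expiry = min(max(int(raw_seconds), MIN_EXPIRY), MAX_EXPIRY)
    if source_type == "event":
        # Round to the nearest whole day, ties rounded away from zero.
        days, remainder = divmod(expiry, DAY)
        if remainder * 2 >= DAY:
            days += 1
        expiry = days * DAY
    return expiry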
11.4. Processing an attribution source
To check if an attribution source exceeds the time-based destination limits given an attribution source source, run the following steps:
-
Let matchingSources be all attribution rate-limit records record in the attribution rate-limit cache where all of the following are true:
-
record’s source site and source’s source site are equal
-
record’s expiry time is greater than source’s source time
-
The duration from record’s time and source’s source time is less than destination rate-limit window
-
Let matchingSameReportingSources be all the records in matchingSources whose associated reporting origin is same site with source’s reporting origin.
-
Let destinations be the set of every attribution destination in matchingSources, unioned with source’s attribution destinations.
-
Let sameReportingDestinations be the set of every attribution destination in matchingSameReportingSources, unioned with source’s attribution destinations.
-
Let hitRateLimit be whether destinations’s size is greater than max destinations per rate-limit window[0].
-
Let hitSameReportingRateLimit be whether sameReportingDestinations’s size is greater than max destinations per rate-limit window[1].
-
If (hitRateLimit, hitSameReportingRateLimit) is
- (false, false)
-
Return "
allowed
". - (false, true)
-
Return "
hit reporting limit
". - (true, false)
-
Return "
hit global limit
". - (true, true)
-
Return "
hit reporting limit
".
Note: When both limits are hit, we interpret it as "hit reporting limit" for debug reporting.
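The outcome table above reduces to the following non-normative sketch: exceeding the per-reporting-site limit always reports as "hit reporting limit", even when the global limit is also exceeded.

def destination_rate_limit_result(hit_global_limit: bool,
                                  hit_reporting_limit: bool) -> str:
    if hit_reporting_limit:
        return "hit reporting limit"
    if hit_global_limit:
        return "hit global limit"
    return "allowed"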
To check if an attribution source exceeds the per day destination limits given an attribution source source, run the following steps:
-
Let matchingSources be all attribution rate-limit records record in the attribution rate-limit cache where all of the following are true:
-
record’s source site and source’s source site are equal
-
record’s reporting origin is same site with source’s reporting origin
-
record’s expiry time is greater than source’s source time
-
The duration from record’s time and source’s source time is less than 1 day
-
Let destinations be the set of all attribution destination in matchingSources, unioned with source’s attribution destinations.
-
Return whether destinations’s size is greater than max destinations per source reporting site per day.
To delete sources for unexpired destination limit given a set of internal IDs sourcesToDelete and a moment now:
-
If sourcesToDelete is empty, return.
-
For each attribution source source of the attribution source cache:
-
Remove source from the attribution source cache if sourcesToDelete contains source’s internal ID.
-
-
Let deletedEventLevelReports be a new set.
-
For each event-level report report of the event-level report cache:
-
If sourcesToDelete contains report’s source ID and report’s trigger time is greater than or equal to now:
-
Append report’s internal ID to deletedEventLevelReports.
-
Remove report from the event-level report cache.
-
Note: Leaking browsing history of destinations deactivated for unexpired destination limit from event-level reports whose trigger time is earlier than now is mitigated by the presence of fake reports. Event-level reports whose trigger time is greater than or equal to now must be deleted to avoid exposing whether an attribution source has a randomized response.
-
-
Let deletedAggregatableReports be a new set.
-
For each aggregatable attribution report report of the aggregatable attribution report cache:
-
If report’s source ID is not null and sourcesToDelete contains report’s source ID:
-
Append report’s internal ID to deletedAggregatableReports.
-
Remove report from the aggregatable attribution report cache.
-
-
-
For each attribution rate-limit record record of the attribution rate-limit cache:
-
If record’s scope is:
- "
source
" -
Set record’s deactivated for unexpired destination limit to true if sourcesToDelete contains record’s entity ID.
- "
event-attribution
" -
Remove record from the attribution rate-limit cache if deletedEventLevelReports contains record’s entity ID.
- "
aggregatable-attribution
" -
Remove record from the attribution rate-limit cache if deletedAggregatableReports contains record’s entity ID.
- "
-
A destination limit record is a struct with the following items:
- attribution destination
-
A site.
- priority
-
A 64-bit integer.
- time
-
A moment
- source ID
-
An internal ID.
To get sources to delete for the unexpired destination limit given an attribution source source, run the following steps:
-
Let destinationRecords be a new list.
-
For each attribution rate-limit record record of the attribution rate-limit cache:
-
If record’s deactivated for unexpired destination limit is true, continue.
-
If record’s source site and source’s source site are not equal, continue.
-
If record’s reporting origin and source’s reporting origin are not same site, continue.
-
If record’s expiry time is less than or equal to source’s source time, continue.
-
Assert: record’s destination limit priority is not null.
-
Let destinationRecord be a new destination limit record struct whose items are:
- attribution destination
-
record’s attribution destination
- priority
-
record’s destination limit priority
- time
-
record’s time
- source ID
-
record’s entity ID
-
Append destinationRecord to destinationRecords.
-
For each site destination of source’s attribution destinations:
-
Let destinationRecord be a new destination limit record struct whose items are:
- attribution destination
-
destination
- priority
-
source’s destination limit priority
- time
-
source’s source time
- source ID
-
record’s internal ID
-
Append destinationRecord to destinationRecords.
-
-
Sort destinationRecords in descending order, with a less than b if the following steps return true:
-
If a’s serialized attribution destination is less than b’s serialized attribution destination, return true.
-
Return false.
-
Let sourcesToDelete be a new set.
-
Let newDestinations be a new set.
-
For each destination limit record record of destinationRecords:
-
Let destination be record’s attribution destination.
-
If newDestinations’s size is less than the user agent’s max destinations covered by unexpired sources, append destination to newDestinations.
-
Otherwise, if newDestinations does not contain destination:
-
-
Return sourcesToDelete.
To check if an attribution source should be blocked by reporting-origin per site limit given an attribution source source:
-
Let matchingRateLimitRecords be all attribution rate-limit records record in the attribution rate-limit cache where all of the following are true:
-
record’s source site and source’s source site are equal
-
The duration from record’s time and source’s source time is <= origin rate-limit window
-
Let distinctReportingOrigins be the set of all reporting origin in matchingRateLimitRecords, unioned with «source’s reporting origin».
-
If distinctReportingOrigins’s size is greater than max source reporting origins per source reporting site, return blocked.
-
Return allowed.
To obtain a fake report given an attribution source source and a trigger state triggerState:
-
Let specEntry be the entry for source’s trigger specs[triggerState’s trigger data].
-
Let triggerTime be triggerState’s report window's start.
-
Let priority be 0.
-
Let fakeReport be the result of running obtain an event-level report with source, triggerTime, triggerDebugKey set to null, priority, and specEntry.
-
Assert: fakeReport’s report time is equal to triggerState’s report window's end.
-
Return fakeReport.
To obtain and deliver a verbose debug report on source registration given a source debug data types dataType, an attribution source source, a boolean isNoised, and a boolean destinationLimitReplaced:
-
If source’s debug reporting enabled is false, return.
-
If source’s debug cookie set is false, return.
-
Let body be a new map with the following key/value pairs:
- "
attribution_destination
" -
source’s attribution destinations, serialized.
- "
source_event_id
" -
source’s event ID, serialized.
- "
source_site
" -
source’s source site, serialized.
- "
-
If source’s debug key is not null, set body["
source_debug_key
"] to source’s debug key, serialized. -
Let dataTypeToReport be dataType.
-
If dataType is:
- "
source-destination-global-rate-limit
"- "
source-reporting-origin-limit
" - "
-
Set dataTypeToReport to "
source-success
".
- "
-
If dataTypeToReport is "
source-success
" and isNoised is true, set dataTypeToReport to "source-noised
". -
If dataTypeToReport is:
- "
source-destination-limit
" -
Set body["
limit
"] to the user agent’s max destinations covered by unexpired sources, serialized. - "
source-destination-rate-limit
" -
Set body["
limit
"] to the user agent’s max destinations per rate-limit window[1], serialized. - "
source-destination-per-day-rate-limit
" -
Set body["
limit
"] to the user agent’s max destinations per source reporting site per day, serialized. - "
source-storage-limit
" -
Set body["
limit
"] to the user agent’s max pending sources per source origin, serialized. - "
source-channel-capacity-limit
" -
-
Let sourceType be source’s source type.
-
Set body["
limit
"] to the user agent’s max event-level channel capacity per source[sourceType].
-
- "
source-scopes-channel-capacity-limit
" -
-
Assert: source’s attribution scopes is not null.
-
Let sourceType be source’s source type.
-
Set body["
limit
"] to the user agent’s max event-level attribution scopes channel capacity per source[sourceType].
-
- "
source-trigger-state-cardinality-limit
" -
Set body["
limit
"] to the user agent’s max trigger-state cardinality, serialized. - "
source-reporting-origin-per-site-limit
" -
Set body["
limit
"] to the user agent’s max source reporting origins per source reporting site, serialized. - "
source-max-event-states-limit
" -
-
Assert: source’s attribution scopes is not null.
-
Set body["
limit
"] to source’s attribution scopes's max event states.
-
- "
-
If destinationLimitReplaced is true, set body["
source_destination_limit
"] to the user agent’s max destinations covered by unexpired sources, serialized.
Note: The "source_destination_limit
" field may be included to indicate that max destinations covered by unexpired sources was hit, which is not
reported as "source-destination-limit
" to prevent side-channel
leakage of cross-origin data.
-
Let data be a new verbose debug data with the items:
-
Run obtain and deliver a verbose debug report with « data », source’s reporting origin, and source’s fenced.
To obtain and deliver an aggregatable debug report on source registration given a source debug data type dataType, an attribution source source, a boolean isNoised, and a boolean destinationLimitReplaced:
-
If source’s fenced is true, return.
-
Let config be source’s aggregatable debug reporting config.
-
Let debugDataMap be config’s debug data.
-
If debugDataMap is empty, return.
-
Let dataTypesToReport be a new set.
-
If dataType is "
source-success
" and isNoised is true, append "source-noised
" to dataTypesToReport. -
Otherwise, append dataType to dataTypesToReport.
-
If destinationLimitReplaced is true, append "
source-destination-limit-replaced
" to dataTypesToReport. -
Let contributions be a new list.
-
For each dataTypeToReport of dataTypesToReport:
-
If debugDataMap[dataTypeToReport] exists:
-
Let contribution be a new aggregatable contribution with items:
-
Append contribution to contributions.
-
-
-
Run obtain and deliver an aggregatable debug report on registration with contributions, source’s source site, source’s reporting origin, source, source’s attribution destinations[0], config’s aggregation coordinator, and source’s source time.
To obtain and deliver debug reports on source registration given a source debug data type dataType, an attribution source source, an optional boolean isNoised (default false), and an optional boolean destinationLimitReplaced (default false):
-
Run obtain and deliver a verbose debug report on source registration with dataType, source, isNoised, and destinationLimitReplaced.
-
Run obtain and deliver an aggregatable debug report on source registration with dataType, source, isNoised, and destinationLimitReplaced.
To delete expired sources given a moment now:
-
For each source of the attribution source cache:
-
If source’s expiry time is less than now, remove source from the attribution source cache.
-
To find sources with common destinations and reporting origin given an attribution source pendingSource:
-
Let matchingSources be a new list.
-
For each source of the user agent’s attribution source cache:
-
Let commonDestinations be the intersection of source’s attribution destinations and pendingSource’s attribution destinations.
-
If commonDestinations is empty, continue.
-
If source’s reporting origin and pendingSource’s reporting origin are not same origin, continue.
-
Append source to matchingSources.
-
-
Return matchingSources.
To remove associated event-level reports and rate-limit records given an internal ID sourceId and a moment minTriggerTime:
-
For each event-level report report of the event-level report cache:
-
If report’s source ID is not equal to sourceId, continue.
-
If report’s trigger time is less than minTriggerTime, continue.
-
Remove report from the event-level report cache.
-
Remove all attribution rate-limit records entry from the attribution rate-limit cache where entry’s entity ID is equal to report’s internal ID.
To remove sources with unselected attribution scopes for destination given a site destination and an attribution source pendingSource:
-
Let scopeRecords be a new list.
-
Let scopes be pendingSource’s attribution scopes's values.
-
For each source of the attribution source cache:
-
If source’s reporting origin and pendingSource’s reporting origin are not same origin, continue.
-
If source’s attribution destinations does not contain destination, continue.
-
If source’s attribution scopes's is null, continue.
-
For each scope in source’s attribution scopes's values:
-
-
Sort scopeRecords in ascending order with a being less than b if any of the following are true:
-
a[1]'s source time is greater than b[1]'s source time.
-
a[1]'s source time is equal to b[1]'s source time and a[0] is greater than b[0].
-
-
Let selectedScopes be scopes, cloned.
-
Let sourcesToRemove be a new set.
-
For each record of scopeRecords:
-
For each source of the sourcesToRemove:
-
Remove associated event-level reports and rate-limit records with source’s internal ID and pendingSource’s source time.
-
Remove source from the attribution source cache.
-
To remove sources with unselected attribution scopes given an attribution source pendingSource:
-
If pendingSource’s attribution scopes is null, return.
-
Assert: pendingSource’s attribution destinations is sorted in ascending order, with a being less than b if a, serialized, is less than b, serialized.
-
For each destination in pendingSource’s attribution destinations:
-
Remove sources with unselected attribution scopes for destination with destination and pendingSource.
-
To remove or update sources for attribution scopes given an attribution source pendingSource:
-
Let pendingScopes be pendingSource’s attribution scopes.
-
Let matchingSources be the result of running find sources with common destinations and reporting origin with pendingSource.
-
For each source of matchingSources:
-
Let existingScopes be source’s attribution scopes.
-
If pendingScopes is null:
-
Set source’s attribution scopes to null if existingScopes is not null.
-
-
Otherwise:
-
If existingScopes is null or existingScopes’s max event states is not equal to pendingScopes’s max event states or existingScopes’s limit is less than pendingScopes’s limit:
-
Remove associated event-level reports and rate-limit records with source’s internal ID and pendingSource’s source time.
-
Remove source from the attribution source cache.
-
-
-
-
Remove sources with unselected attribution scopes with pendingSource.
To process an attribution source given an attribution source source:
-
Delete expired sources with source’s source time.
-
Let randomizedResponseConfig be a new randomized response output configuration whose items are:
-
Let epsilon be source’s event-level epsilon.
-
Let possibleTriggerStates be the result of obtaining a set of possible trigger states with randomizedResponseConfig.
-
Let numPossibleTriggerStates be possibleTriggerStates’s size.
-
Let channelCapacity be the result of computing the channel capacity of a source with numPossibleTriggerStates and epsilon.
-
If channelCapacity is an error:
-
Run obtain and deliver debug reports on source registration with "
source-trigger-state-cardinality-limit
" and source. -
Return.
-
-
Let sourceType be source’s source type.
-
If channelCapacity is greater than max event-level channel capacity per source[sourceType]:
-
Run obtain and deliver debug reports on source registration with "
source-channel-capacity-limit
" and source. -
Return.
-
-
If source’s attribution scopes is not null:
-
Let attributionScopes be source’s attribution scopes.
-
If sourceType is "
event
" and numPossibleTriggerStates is greater than attributionScopes’s max event states:-
Run obtain and deliver debug reports on source registration with "
source-max-event-states-limit
" and source. -
Return.
-
-
Let scopesChannelCapacity be the result of computing the scopes channel capacity of a source with numPossibleTriggerStates, attributionScopes’s limit, and attributionScopes’s max event states.
-
If scopesChannelCapacity is greater than max event-level attribution scopes channel capacity per source[sourceType]:
-
Run obtain and deliver debug reports on source registration with "
source-scopes-channel-capacity-limit
" and source. -
Return.
-
-
-
Set source’s randomized response to the result of obtaining a randomized source response with possibleTriggerStates and epsilon.
-
Set source’s randomized trigger rate to the result of obtaining a randomized source response pick rate with numPossibleTriggerStates and epsilon.
-
Set source’s number of event-level reports to 0 if source’s randomized response is null, randomized response's size otherwise.
-
Let pendingSourcesForSourceOrigin be the set of all attribution sources pendingSource of the attribution source cache where pendingSource’s source origin and source’s source origin are same origin.
-
If pendingSourcesForSourceOrigin’s size is greater than or equal to the user agent’s max pending sources per source origin:
-
Run obtain and deliver debug reports on source registration with "
source-storage-limit
" and source. -
Return.
-
-
Let destinationRateLimitResult be the result of running check if an attribution source exceeds the time-based destination limits with source.
-
If destinationRateLimitResult is "
hit reporting limit
":-
Run obtain and deliver debug reports on source registration with "
source-destination-rate-limit
" and source. -
Return.
-
-
If the result of running check if an attribution source exceeds the per day destination limits with source is true:
-
Run obtain and deliver debug reports on source registration with "
source-destination-per-day-rate-limit
" and source. -
Return.
-
-
Let sourcesToDeleteForDestinationLimit be the result of running get sources to delete for the unexpired destination limit with source.
-
If sourcesToDeleteForDestinationLimit contains source’s internal ID:
-
Run obtain and deliver debug reports on source registration with "
source-destination-limit
" and source. -
Return.
-
-
Let destinationLimitReplaced be true if sourcesToDeleteForDestinationLimit is not empty, otherwise false.
-
Run delete sources for unexpired destination limit with sourcesToDeleteForDestinationLimit and source’s source time.
-
Remove or update sources for attribution scopes with source.
-
Let isNoised be true if source’s randomized response is not null, otherwise false.
-
If destinationRateLimitResult is "
hit global limit
":-
Run obtain and deliver debug reports on source registration with "
source-destination-global-rate-limit
", source, isNoised, and destinationLimitReplaced. -
Return.
-
-
Let newRateLimitRecords be a new set.
-
For each destination in source’s attribution destinations:
-
Let rateLimitRecord be a new attribution rate-limit record with the items:
- scope
-
"
source
" - source site
-
source’s source site
- attribution destination
-
destination
- reporting origin
-
source’s reporting origin
- time
-
source’s source time
- expiry time
-
source’s expiry time
- entity ID
-
source’s internal ID
- destination limit priority
-
source’s destination limit priority
-
If the result of running should processing be blocked by reporting-origin limit with rateLimitRecord is blocked:
-
Run obtain and deliver debug reports on source registration with "
source-reporting-origin-limit
", source, isNoised, and destinationLimitReplaced. -
Return.
-
-
Append rateLimitRecord to newRateLimitRecords.
-
-
For each record of newRateLimitRecords, append record to the attribution rate-limit cache.
-
Remove all attribution rate-limit records entry from the attribution rate-limit cache if the result of running can attribution rate-limit record be removed with entry and source’s source time is true.
-
If source’s randomized response is not null and is a list:
-
For each trigger state triggerState of source’s randomized response:
-
Let fakeReport be the result of running obtain a fake report with source and triggerState.
-
Append fakeReport to the event-level report cache.
-
-
If source’s randomized response is not empty, then set source’s event-level attributable value to false.
-
For each destination in source’s attribution destinations:
-
Let rateLimitRecord be a new attribution rate-limit record with the items:
- scope
- source site
-
source’s source site
- attribution destination
-
destination
- reporting origin
-
source’s reporting origin
- time
-
source’s source time
- expiry time
-
null
- entity ID
-
null
-
Append rateLimitRecord to the attribution rate-limit cache.
-
-
-
Run obtain and deliver debug reports on source registration with "
source-success
", source, isNoised, and destinationLimitReplaced. -
Append source to the attribution source cache.
Note: Because a fake report does not have a "real" effective destination, we need to subtract from the privacy budget of all possible destinations.
Note: The limits that are not reported as source-success in verbose debug reports should be checked before any limits that are reported implicitly as source-success (source-destination-global-rate-limit and source-reporting-origin-limit) to prevent side-channel leakage of cross-origin data. Furthermore, the verbose debug data should be fully determined regardless of the result of checks on implicitly reported limits.
12. Triggering Algorithms
A trigger-registration JSON key is one of the following:
- "aggregatable_debug_reporting"
- "aggregatable_deduplication_keys"
- "aggregatable_filtering_id_max_bytes"
- "aggregatable_source_registration_time"
- "aggregatable_trigger_data"
- "aggregatable_values"
- "aggregation_coordinator_origin"
- "attribution_scopes"
- "debug_key"
- "debug_reporting"
- "deduplication_key"
- "event_trigger_data"
- "filtering_id"
- "filters"
- "key_piece"
- "not_filters"
- "priority"
- "source_keys"
- "trigger_context_id"
- "trigger_data"
- "value"
- "values"
12.1. Creating an attribution trigger
To parse an event-trigger value given a map map:
-
If experimental Flexible Event support is false or map["
value
"] does not exist, return 1. -
Let value be map["
value
"]. -
If value is not an integer, cannot be represented by an unsigned 32-bit integer, or is less than or equal to zero, return an error.
-
Return value.
To parse event triggers given a map map:
-
Let eventTriggers be a new set.
-
If map["
event_trigger_data
"] does not exist, return eventTriggers. -
Let values be map["
event_trigger_data
"]. -
If values is not a list, return an error.
-
For each value of values:
-
If value is not a map, return an error.
-
Let triggerData be the result of running parse an optional 64-bit unsigned integer with value, "
trigger_data
", and 0. -
If triggerData is an error, return it.
-
Let dedupKey be the result of running parse an optional 64-bit unsigned integer with value, "
deduplication_key
", and null. -
If dedupKey is an error, return it.
-
Let priority be the result of running parse an optional 64-bit signed integer with value, "
priority
", and 0. -
If priority is an error, return it.
-
Let filterPair be the result of running parse a filter pair with value.
-
If filterPair is an error, return it.
-
Let triggerValue be the result of running parse an event-trigger value with value.
-
If triggerValue is an error, return it.
-
Let eventTrigger be a new event-level trigger configuration with the items:
- trigger data
-
triggerData
- dedup key
-
dedupKey
- priority
-
priority
- filters
-
filterPair[0]
- negated filters
-
filterPair[1]
- value
-
triggerValue
-
Append eventTrigger to eventTriggers.
-
-
Return eventTriggers.
To parse aggregatable trigger data given a map map:
-
Let aggregatableTriggerData be a new list.
-
If map["
aggregatable_trigger_data
"] does not exist, return aggregatableTriggerData. -
Let values be map["
aggregatable_trigger_data
"]. -
If values is not a list, return an error.
-
For each value of values:
-
If value is not a map, return an error.
-
If value["
key_piece
"] does not exist or is not a string, return an error. -
Let keyPiece be the result of running parse an aggregation key piece with value["
key_piece
"]. -
If keyPiece is an error, return it.
-
Let sourceKeys be a new set.
-
If value["
source_keys
"] exists:-
If value["
source_keys
"] is not a list, return an error. -
For each sourceKey of value["
source_keys
"]:
-
-
Let filterPair be the result of running parse a filter pair with value.
-
If filterPair is an error, return it.
-
Let aggregatableTrigger be a new aggregatable trigger data with the items:
- key piece
-
keyPiece
- source keys
-
sourceKeys
- filters
-
filterPair[0]
- negated filters
-
filterPair[1]
-
Append aggregatableTrigger to aggregatableTriggerData.
-
-
Return aggregatableTriggerData.
To parse aggregatable filtering ID max bytes given a map map:
-
Let maxBytes be default filtering ID max bytes.
-
If map["
aggregatable_filtering_id_max_bytes
"] exists:-
Set maxBytes to map["
aggregatable_filtering_id_max_bytes
"]. -
If maxBytes is a positive integer and is contained in the valid filtering ID max bytes range, return maxBytes.
-
Otherwise, return an error.
-
-
Return maxBytes.
To validate aggregatable key-values value given a value:
-
If value is not an integer, return false.
-
If value is less than or equal to 0, return false.
-
If value is greater than allowed aggregatable budget per source, return false.
-
Return true.
To parse aggregatable key-values given a map map and a positive integer maxBytes:
-
Let out be a new map.
-
For each key → value of map:
-
If value is not a map or an integer, return an error.
-
If value is an integer:
-
If the result of running validate aggregatable key-values value with value is false, return an error.
-
Set out[key] to a new aggregatable key value whose items are
-
-
If the result of running validate aggregatable key-values value with value["
value
"] is false, return an error. -
Let filteringId be default filtering ID value.
-
If value["
filtering_id
"] exists:-
Set filteringId to the result of applying the rules for parsing non-negative integers to value["
filtering_id
"]. -
If filteringId is an error, return it.
-
If filteringId is not in the range 0 to 256^maxBytes, exclusive, return an error.
-
-
Set out[key] to a new aggregatable key value whose items are
- value
-
value["
value
"] - filtering ID
-
filteringId
-
-
Return out.
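The key-value validation above is summarized in the following non-normative Python sketch. The per-source budget constant is an illustrative assumption, and values may be given either as bare integers or as maps carrying an optional filtering ID, which must fit in the declared number of bytes (i.e. be less than 256^maxBytes).

ALLOWED_AGGREGATABLE_BUDGET = 65536   # illustrative per-source budget

def parse_aggregatable_key_values(values: dict, max_bytes: int) -> dict:
    out = {}
    for key, value in values.items():
        if isinstance(value, dict):
            amount = value.get("value")
            filtering_id = int(value.get("filtering_id", 0))
        elif isinstance(value, int):
            amount = value
            filtering_id = 0    # default filtering ID value
        else:
            raise ValueError(f"value for {key} must be an integer or a map")
        if not isinstance(amount, int) or not (0 < amount <= ALLOWED_AGGREGATABLE_BUDGET):
            raise ValueError(f"invalid aggregatable value for {key}")
        if not (0 <= filtering_id < 256 ** max_bytes):
            raise ValueError(f"filtering_id out of range for {key}")
        out[key] = {"value": amount, "filtering_id": filtering_id}
    return out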
To parse aggregatable values given a map map and a positive integer maxBytes:
-
If map["
aggregatable_values
"] does not exist, return a new list. -
Let values be map["
aggregatable_values
"]. -
Let aggregatableValuesConfigurations be a list of aggregatable values configurations, initially empty.
-
If values is a map:
-
Let aggregatableKeyValues be the result of running parse aggregatable key-values with values and maxBytes.
-
If aggregatableKeyValues is an error, return it.
-
Let aggregatableValuesConfiguration be a new aggregatable values configuration with the items:
- values
-
aggregatableKeyValues
- filters
-
«»
- negated filters
-
«»
-
Append aggregatableValuesConfiguration to aggregatableValuesConfigurations.
-
Return aggregatableValuesConfigurations.
-
-
For each value of values:
-
If value is not a map, return an error.
-
Let aggregatableKeyValues be the result of running parse aggregatable key-values with value["
values
"] and maxBytes. -
If aggregatableKeyValues is an error, return it.
-
Let filterPair be the result of running parse a filter pair with value.
-
If filterPair is an error, return it.
-
Let aggregatableValuesConfiguration be a new aggregatable values configuration with the items:
- values
-
aggregatableKeyValues
- filters
-
filterPair[0]
- negated filters
-
filterPair[1]
-
Append aggregatableValuesConfiguration to aggregatableValuesConfigurations.
-
-
Return aggregatableValuesConfigurations.
To parse aggregatable dedup keys given a map map:
-
Let aggregatableDedupKeys be a new list.
-
If map["
aggregatable_deduplication_keys
"] does not exist, return aggregatableDedupKeys. -
Let values be map["
aggregatable_deduplication_keys
"]. -
If values is not a list, return an error.
-
For each value of values:
-
If value is not a map, return an error.
-
Let dedupKey be the result of running parse an optional 64-bit unsigned integer with value, "
deduplication_key
", and null. -
If dedupKey is an error, return it.
-
Let filterPair be the result of running parse a filter pair with value.
-
If filterPair is an error, return it.
-
Let aggregatableDedupKey be a new aggregatable dedup key with the items:
- dedup key
-
dedupKey
- filters
-
filterPair[0]
- negated filters
-
filterPair[1]
-
Append aggregatableDedupKey to aggregatableDedupKeys.
-
-
Return aggregatableDedupKeys.
To parse attribution scopes for trigger from a map map:
-
Let result be a new set.
-
If map["
attribution_scopes
"] does not exist, return result. -
Let values be map["
attribution_scopes
"]. -
If values is not a list, return an error.
-
For each value of values:
-
If value is not a string, return an error.
-
Append value to result.
-
-
Return result.
To create an attribution trigger given a byte sequence json, a site destination, a suitable origin reportingOrigin, a moment triggerTime, and a boolean fenced:
-
Let value be the result of running parse JSON bytes to an Infra value with json.
-
If value is not a map, return an error.
-
Let eventTriggers be the result of running parse event triggers with value.
-
If eventTriggers is an error, return it.
-
Let aggregatableTriggerData be the result of running parse aggregatable trigger data with value.
-
If aggregatableTriggerData is an error, return it.
-
Let filteringIdsMaxBytes be the result of parsing aggregatable filtering ID max bytes with value.
-
If filteringIdsMaxBytes is an error, return it.
-
Let aggregatableValuesConfigurations be the result of running parse aggregatable values with value and filteringIdsMaxBytes.
-
If aggregatableValuesConfigurations is an error, return it.
-
Let aggregatableDedupKeys be the result of running parse aggregatable dedup keys with value.
-
If aggregatableDedupKeys is an error, return it.
-
Let debugKey be the result of running parse an optional 64-bit unsigned integer with value, "
debug_key
", and null. -
If debugKey is an error, set debugKey to null.
-
If the result of running check if cookie-based debugging is allowed with reportingOrigin and destination is blocked, set debugKey to null.
-
Let filterPair be the result of running parse a filter pair with value.
-
If filterPair is an error, return it.
-
Let debugReportingEnabled be false.
-
If value["
debug_reporting
"] exists and is a boolean, set debugReportingEnabled to value["debug_reporting
"]. -
Let aggregationCoordinator be default aggregation coordinator.
-
If value["
aggregation_coordinator_origin
"] exists:-
Set aggregationCoordinator to the result of running parse an aggregation coordinator with value["
aggregation_coordinator_origin
"]. -
If aggregationCoordinator is an error, return it.
-
-
Let aggregatableSourceRegTimeConfig be "
exclude
". -
If value["
aggregatable_source_registration_time
"] exists:-
If value["
aggregatable_source_registration_time
"] is not a string, return an error. -
If value["
aggregatable_source_registration_time
"] is not an aggregatable source registration time configuration, return an error. -
Set aggregatableSourceRegTimeConfig to value["
aggregatable_source_registration_time
"].
-
-
Let triggerContextID be null.
-
If value["
trigger_context_id
"] exists:-
If value["
trigger_context_id
"] is not a string, return an error. -
If value["
trigger_context_id
"]'s length is greater than the max length per trigger context ID, return an error. -
Set triggerContextID to value["
trigger_context_id
"].
-
-
Let aggregatableDebugReportingConfig be a new aggregatable debug reporting config.
-
If value["
aggregatable_debug_reporting
"] exists:-
Let supportedTypes be the set of all trigger debug data types.
-
Set aggregatableDebugReportingConfig to the result of running parse an aggregatable debug reporting config with value["
aggregatable_debug_reporting
"], allowed aggregatable budget per source, supportedTypes, and aggregatableDebugReportingConfig.
-
-
Let attributionScopes be the result of running parse attribution scopes for trigger with value.
-
If attributionScopes is an error, return it.
-
Let trigger be a new attribution trigger with the items:
- attribution destination
-
destination
- trigger time
-
triggerTime
- reporting origin
-
reportingOrigin
- filters
-
filterPair[0]
- negated filters
-
filterPair[1]
- debug key
-
debugKey
- event-level trigger configurations
-
eventTriggers
- aggregatable trigger data
-
aggregatableTriggerData
- aggregatable values configurations
-
aggregatableValuesConfigurations
- aggregatable dedup keys
-
aggregatableDedupKeys
- debug reporting enabled
-
debugReportingEnabled
- aggregation coordinator
-
aggregationCoordinator
- aggregatable source registration time configuration
-
aggregatableSourceRegTimeConfig
- trigger context ID
-
triggerContextID
- fenced
-
fenced
- aggregatable filtering ID max bytes
-
filteringIdsMaxBytes
- aggregatable debug reporting config
-
aggregatableDebugReportingConfig
- attribution scopes
-
attributionScopes
-
If aggregatableSourceRegTimeConfig is not "
exclude
" and the result of running check if an aggregatable attribution report should be unconditionally sent with trigger is true, return an error. -
Return trigger.
Determine proper charset-handling for the JSON header value.
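Note: For orientation, a complete (non-normative) trigger registration might be delivered as a JSON response-header value resembling the one constructed below; every field value is made up for illustration.
import json

# Illustrative trigger registration; all values are made up.
trigger_registration = {
    "event_trigger_data": [
        {"trigger_data": "2", "priority": "100", "deduplication_key": "123"}
    ],
    "aggregatable_trigger_data": [
        {"key_piece": "0x400", "source_keys": ["campaignCounts"]}
    ],
    "aggregatable_values": {"campaignCounts": 32768},
    "debug_reporting": True,
}
header_value = json.dumps(trigger_registration)
print(header_value)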
12.2. Does filter data match
To match filter values given a filter value a and a filter value b:
-
If b is empty, then:
-
If a is empty, then return true.
-
Otherwise, return false.
-
-
Let i be the intersection of a and b.
-
If i is empty, then return false.
-
Return true.
To match filter values with negation given a filter value a and a filter value b:
-
If b is empty, then:
-
If a is not empty, then return true.
-
Otherwise, return false.
-
-
Let i be the intersection of a and b.
-
If i is not empty, then return false.
-
Return true.
To match an attribution source against a filter config given an attribution source source, a filter config filter, a moment moment, and a boolean isNegated:
-
Let lookbackWindow be filter’s lookback window.
-
If lookbackWindow is not null:
-
If the duration between the source’s source time and moment is greater than lookbackWindow:
-
If isNegated is false, return false.
-
-
Else if isNegated is true, return false.
Note: If non-negated, the source must have been registered inside of the lookback window. If negated, it must be outside of the lookback window.
-
-
Let filterMap be filter’s map.
-
Let sourceData be source’s filter data.
-
For each key → filterValues of filterMap:
-
Let sourceValues be sourceData[key].
-
If isNegated is:
- false
- If the result of running match filter values with sourceValues and filterValues is false, return false.
- true
- If the result of running match filter values with negation with sourceValues and filterValues is false, return false.
-
Return true.
To match an attribution source against filters given an attribution source source, a list of filter configs filters, a moment moment, and a boolean isNegated:
-
If filters is empty, return true.
-
For each filter of filters:
-
If the result of running match an attribution source against a filter config with source, filter, moment, and isNegated is true, return true.
-
-
Return false.
To match an attribution source against filters and negated filters given an attribution source source, a list of filter configs filters, a list of filter configs notFilters, and a moment moment:
-
If the result of running match an attribution source against filters with source, filters, moment, and isNegated set to false is false, return false.
-
If the result of running match an attribution source against filters with source, notFilters, moment, and isNegated set to true is false, return false.
-
Return true.
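Note: A non-normative Python sketch of the filter-matching algorithms above (lookback windows omitted). Treating a filter key that is absent from the source's filter data as vacuously matching is an assumption of this sketch.
# Non-normative sketch of "match filter values" and its negated variant.
def match_filter_values(a, b):
    if not b:
        return not a            # empty filter list: matches only an empty source list
    return bool(set(a) & set(b))

def match_filter_values_with_negation(a, b):
    if not b:
        return bool(a)          # empty negated filter list: matches only a non-empty source list
    return not (set(a) & set(b))

def match_source_against_filter_map(source_filter_data, filter_map, negated):
    matcher = match_filter_values_with_negation if negated else match_filter_values
    for key, filter_values in filter_map.items():
        source_values = source_filter_data.get(key)
        if source_values is None:
            continue            # assumption: keys absent from the source's filter data are ignored
        if not matcher(source_values, filter_values):
            return False
    return True

source_filter_data = {"product": ["1234"], "source_type": ["navigation"]}
print(match_source_against_filter_map(source_filter_data, {"product": ["1234", "5678"]}, negated=False))  # True
print(match_source_against_filter_map(source_filter_data, {"product": ["999"]}, negated=True))            # True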
12.3. Should send a report unconditionally
To check if an aggregatable attribution report should be unconditionally sent given an attribution trigger trigger:
-
If trigger’s trigger context ID is not null, return true.
-
If trigger’s aggregatable filtering ID max bytes is not equal to default filtering ID max bytes, return true.
-
Return false.
12.4. Should attribution be blocked by rate limits
To check if attribution should be blocked by attribution rate limit given an attribution trigger trigger, an attribution source sourceToAttribute, and a scope rateLimitScope:
-
Let matchingRateLimitRecords be all attribution rate-limit records record of attribution rate-limit cache where all of the following are true:
-
record’s scope is rateLimitScope
-
record’s source site and sourceToAttribute’s source site are equal
-
record’s attribution destination and trigger’s attribution destination are equal
-
record’s reporting origin and trigger’s reporting origin are same site
-
record’s time is greater than attribution rate-limit window before trigger’s trigger time
-
-
If matchingRateLimitRecords’s size is greater than or equal to max attributions per rate-limit window, return blocked.
-
Return allowed.
To check if attribution should be blocked by rate limits given an attribution trigger trigger, an attribution source sourceToAttribute, and an attribution rate-limit record newRecord:
-
If the result of running check if attribution should be blocked by attribution rate limit with trigger, sourceToAttribute, and newRecord’s scope is blocked:
-
Let debugDataType be "
trigger-event-attributions-per-source-destination-limit
". -
If newRecord’s scope is "
aggregatable-attribution
", set debugDataType to "trigger-aggregate-attributions-per-source-destination-limit
". -
Return the triggering result ("
dropped
", (debugDataType, null)).
-
-
If the result of running should processing be blocked by reporting-origin limit with newRecord is blocked:
-
Return the triggering result ("
dropped
", ("trigger-reporting-origin-limit
", null)).
-
-
Return null.
Consider performing should processing be blocked by reporting-origin limit from triggering attribution to avoid duplicate invocation from triggering event-level attribution and triggering aggregatable attribution. [Issue #1287]
12.5. Creating aggregatable contributions
To create aggregatable contributions from aggregation keys and aggregatable values given a map aggregationKeys and a map aggregatableValues, run the following steps:
-
Let contributions be an empty list.
-
For each id → key of aggregationKeys:
-
If aggregatableValues[id] does not exist, continue.
-
Let contribution be a new aggregatable contribution with the items:
- key
-
key
- value
-
aggregatableValues[id]'s value
- filtering ID
-
aggregatableValues[id]'s filtering ID
-
Append contribution to contributions.
-
Return contributions.
To create aggregatable contributions given an attribution source source and an attribution trigger trigger, run the following steps:
-
Let aggregationKeys be the result of cloning source’s aggregation keys.
-
For each triggerData of trigger’s aggregatable trigger data:
-
If the result of running match an attribution source against filters and negated filters with source, triggerData’s filters, triggerData’s negated filters, and trigger’s trigger time is false, continue.
-
For each sourceKey of triggerData’s source keys:
-
If aggregationKeys[sourceKey] exists, set aggregationKeys[sourceKey] to the result of aggregationKeys[sourceKey] bitwise-OR triggerData’s key piece.
-
-
Let aggregatableValuesConfigurations be trigger’s aggregatable values configurations.
-
For each aggregatableValuesConfiguration of aggregatableValuesConfigurations:
-
If the result of running match an attribution source against filters and negated filters with source, aggregatableValuesConfiguration’s filters, aggregatableValuesConfiguration’s negated filters, and trigger’s trigger time is true:
-
Return the result of running create aggregatable contributions from aggregation keys and aggregatable values with aggregationKeys and aggregatableValuesConfiguration’s values.
-
-
-
Return a new list.
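Note: The following non-normative sketch shows the overall shape of contribution creation: trigger key pieces are OR-ed into the source's aggregation keys, and the first aggregatable values configuration whose filters match supplies the values. The matches() callback stands in for the filter-matching algorithms above.
# Non-normative sketch of "create aggregatable contributions".
def create_contributions(source_aggregation_keys, aggregatable_trigger_data,
                         values_configurations, matches):
    keys = dict(source_aggregation_keys)             # {id -> integer key piece}
    for td in aggregatable_trigger_data:
        if not matches(td["filters"], td["not_filters"]):
            continue
        for source_key in td["source_keys"]:
            if source_key in keys:
                keys[source_key] |= td["key_piece"]  # OR the trigger key piece into the source key
    for config in values_configurations:
        if not matches(config["filters"], config["not_filters"]):
            continue
        contributions = []
        for key_id, key in keys.items():
            kv = config["values"].get(key_id)
            if kv is None:
                continue                             # ids without a declared value contribute nothing
            contributions.append({"key": key, "value": kv["value"],
                                  "filtering_id": kv["filtering_id"]})
        return contributions                         # only the first matching configuration is used
    return []

always = lambda filters, not_filters: True
print(create_contributions(
    {"campaignCounts": 0x159, "geoValue": 0x5},
    [{"key_piece": 0x400, "source_keys": ["campaignCounts"], "filters": [], "not_filters": []}],
    [{"values": {"campaignCounts": {"value": 32768, "filtering_id": 0}},
      "filters": [], "not_filters": []}],
    always))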
12.6. Can source create aggregatable contributions
To check if an attribution source can create aggregatable contributions given an aggregatable attribution report report and an attribution source sourceToAttribute, run the following steps:
-
Let remainingAggregatableBudget be sourceToAttribute’s remaining aggregatable attribution budget.
-
Assert: remainingAggregatableBudget is greater than or equal to 0.
-
If report’s required aggregatable budget is greater than remainingAggregatableBudget, return false.
-
Return true.
12.7. Obtaining verbose debug data on trigger registration
To obtain verbose debug data body on trigger registration given a trigger debug data type dataType, an attribution trigger trigger, a possibly null attribution source sourceToAttribute, and a possibly null attribution report report:
-
Let body be a new map.
-
If dataType is:
- "
trigger-event-attributions-per-source-destination-limit
"- "
trigger-aggregate-attributions-per-source-destination-limit
" - "
-
Set body["
limit
"] to the user agent’s max attributions per rate-limit window, serialized. - "
trigger-reporting-origin-limit
" -
Set body["
limit
"] to the user agent’s max attribution reporting origins per rate-limit window, serialized. - "
trigger-event-storage-limit
" -
Set body["
limit
"] to max event-level reports per attribution destination, serialized. - "
trigger-aggregate-storage-limit
" -
Set body["
limit
"] to max aggregatable attribution reports per attribution destination, serialized. - "
trigger-aggregate-insufficient-budget
" -
Set body["
limit
"] to allowed aggregatable budget per source, serialized. - "
trigger-aggregate-excessive-reports
" -
Set body["
limit
"] to max aggregatable reports per source[0], - "
trigger-event-low-priority
"- "
trigger-event-excessive-reports
" - "
-
-
Assert: report is not null and is an event-level report.
-
Return the result of running obtain an event-level report body with report.
-
- "
-
Set body["
attribution_destination
"] to trigger’s attribution destination, serialized. -
If trigger’s debug key is not null, set body["
trigger_debug_key
"] to trigger’s debug key, serialized. -
If sourceToAttribute is not null:
-
Set body["
source_event_id
"] to source’s event ID, serialized. -
Set body["
source_site
"] to source’s source site, serialized. -
If sourceToAttribute’s debug key is not null, set body["
source_debug_key
"] to sourceToAttribute’s debug key, serialized.
-
-
Return body.
To obtain verbose debug data on trigger registration given a trigger debug data type dataType, an attribution trigger trigger, a possibly null attribution source sourceToAttribute, and a possibly null attribution report report:
-
If trigger’s debug reporting enabled is false, return null.
-
If the result of running check if cookie-based debugging is allowed with trigger’s reporting origin and trigger’s attribution destination is blocked, return null.
-
If sourceToAttribute is not null and sourceToAttribute’s debug cookie set is false, return null.
-
Let data be a new verbose debug data with the items:
- data type
-
dataType.
- body
-
The result of running obtain verbose debug data body on trigger registration with dataType, trigger, sourceToAttribute, and report.
-
Return data.
12.8. Triggering event-level attribution
An event-level report a is lower-priority than an event-level report b if any of the following are true:
-
a’s trigger priority is less than b’s trigger priority.
-
a’s trigger priority is equal to b’s trigger priority and a’s trigger time is greater than b’s trigger time.
An event-level-report-replacement result is one of the following:
- "
add-new-report
" -
The new report should be added.
- "
drop-new-report-none-to-replace
" -
The new report should be dropped because the attributed source has reached its report limit and there is no pending report to consider for replacement.
- "
drop-new-report-low-priority
" -
The new report should be dropped because the attributed source has reached its report limit and the new report is lower-priority than all pending reports.
To maybe replace event-level report given an attribution source sourceToAttribute and an event-level report report:
-
Assert: sourceToAttribute’s number of event-level reports is less than or equal to sourceToAttribute’s max number of event-level reports.
-
If sourceToAttribute’s number of event-level reports is less than sourceToAttribute’s max number of event-level reports, return "
add-new-report
". -
Let matchingReports be a new list whose elements are all the elements in the event-level report cache whose report time and source ID are equal to report’s, sorted in ascending order using is lower-priority than.
-
If matchingReports is empty:
-
Set sourceToAttribute’s event-level attributable value to false.
-
Return "
drop-new-report-none-to-replace
".
-
-
Assert: sourceToAttribute’s number of event-level reports is greater than or equal to matchingReports’s size.
-
Let lowestPriorityReport be matchingReports[0].
-
If report is lower-priority than lowestPriorityReport, return "
drop-new-report-low-priority
". -
Remove lowestPriorityReport from the event-level report cache.
-
Decrement sourceToAttribute’s number of event-level reports value by 1.
-
Let rateLimitRecord be the element from attribution rate-limit cache whose entity ID is equal to lowestPriorityReport’s internal ID and scope is equal to "
event-attribution
". -
Assert: rateLimitRecord is not null.
Note: We are making an implicit assumption that attribution rate-limit window is greater than or equal to sourceToAttribute’s expiry. If this assumption does not hold then rateLimitRecord might be null.
-
Remove rateLimitRecord from the attribution rate-limit cache.
-
Return "
add-new-report
".
This algorithm is not compatible with the behavior proposed for experimental Flexible Event support with differing event-level report windows for a given source.
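Note: A non-normative sketch of the replacement decision above. The pending reports passed in are assumed to already be restricted to those with the same source ID and report time as the new report.
# Non-normative sketch of "maybe replace event-level report".
def is_lower_priority(a, b):
    if a["priority"] != b["priority"]:
        return a["priority"] < b["priority"]
    return a["trigger_time"] > b["trigger_time"]     # later reports lose ties

def maybe_replace(new_report, pending_reports, num_reports, max_reports):
    if num_reports < max_reports:
        return "add-new-report", None
    if not pending_reports:
        return "drop-new-report-none-to-replace", None
    lowest = min(pending_reports, key=lambda r: (r["priority"], -r["trigger_time"]))
    if is_lower_priority(new_report, lowest):
        return "drop-new-report-low-priority", None
    return "add-new-report", lowest                  # the lowest-priority pending report is evicted

pending = [{"priority": 0, "trigger_time": 10}, {"priority": 5, "trigger_time": 3}]
print(maybe_replace({"priority": 1, "trigger_time": 20}, pending, num_reports=2, max_reports=2))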
To trigger event-level attribution given an attribution trigger trigger and an attribution source sourceToAttribute, run the following steps:
-
If trigger’s event-level trigger configurations is empty, return the triggering result ("
dropped
", null). -
If sourceToAttribute’s randomized response is not null and is not empty:
-
Assert: sourceToAttribute’s event-level attributable is false.
-
Return the triggering result ("
dropped
", ("trigger-event-noise
", null)).
-
-
Let matchedConfig be null.
-
For each event-level trigger configuration config of trigger’s event-level trigger configurations:
-
If the result of running match an attribution source against filters and negated filters with sourceToAttribute, config’s filters, config’s negated filters, and trigger’s trigger time is true:
-
Set matchedConfig to config.
-
-
-
If matchedConfig is null:
-
Return the triggering result ("
dropped
", ("trigger-event-no-matching-configurations
", null)).
-
-
If matchedConfig’s dedup key is not null and sourceToAttribute’s dedup keys contains it:
-
Return the triggering result ("
dropped
", ("trigger-event-deduplicated
", null)).
-
-
Let specEntry be the result of finding a matching trigger spec with sourceToAttribute and matchedConfig’s trigger data.
-
If specEntry is an error:
-
Return the triggering result ("
dropped
", ("trigger-event-no-matching-trigger-data
", null)).
-
-
Let windowResult be the result of check whether a moment falls within a window with trigger’s trigger time and specEntry’s value's event-level report windows's total window.
-
If windowResult is falls before:
-
Return the triggering result ("
dropped
", ("trigger-event-report-window-not-started
", null)).
-
-
If windowResult is falls after:
-
Return the triggering result ("
dropped
", ("trigger-event-report-window-passed
", null)).
-
-
Assert: windowResult is falls within.
-
Let report be the result of running obtain an event-level report with sourceToAttribute, trigger’s trigger time, trigger’s debug key, matchedConfig’s priority, and specEntry.
-
If sourceToAttribute’s event-level attributable value is false:
-
Return the triggering result ("
dropped
", ("trigger-event-excessive-reports
", report)).
-
-
If the result of running maybe replace event-level report with sourceToAttribute and report is:
- "
add-new-report
" -
-
Do nothing.
-
- "
drop-new-report-none-to-replace
" -
-
Return the triggering result ("
dropped
", ("trigger-event-excessive-reports
", report)).
-
- "
drop-new-report-low-priority
" -
-
Return the triggering result ("
dropped
", ("trigger-event-low-priority
", report)).
-
- "
-
Let rateLimitRecord be a new attribution rate-limit record with the items:
- scope
-
"event-attribution"
- source site
-
sourceToAttribute’s source site
- attribution destination
-
trigger’s attribution destination
- reporting origin
-
sourceToAttribute’s reporting origin
- time
-
sourceToAttribute’s source time
- expiry time
-
null
- entity ID
-
report’s internal ID
-
If the result of running check if attribution should be blocked by rate limits with trigger, sourceToAttribute, and rateLimitRecord is not null, return it.
-
Let numMatchingReports be the number of entries in the event-level report cache whose attribution destinations contains trigger’s attribution destination.
-
If numMatchingReports is greater than or equal to the user agent’s max event-level reports per attribution destination:
-
Return the triggering result ("
dropped
", ("trigger-event-storage-limit
", null)).
-
-
Let triggeringStatus be "
attributed
". -
Let debugData be null.
-
If sourceToAttribute’s randomized response is:
- null
-
-
Append report to the event-level report cache.
-
Append rateLimitRecord to the attribution rate-limit cache.
-
- not null
-
-
Set triggeringStatus to "
noised
". -
Set debugData to ("
trigger-event-noise
", null).
-
-
Increment sourceToAttribute’s number of event-level reports value by 1.
-
If matchedConfig’s dedup key is not null, append it to sourceToAttribute’s dedup keys.
-
If triggeringStatus is "
attributed
" and the result of checking if attribution debugging can be enabled with report’s attribution debug info is true, queue a task to attempt to deliver a debug report with report. -
Return the triggering result (triggeringStatus, debugData).
12.9. Triggering aggregatable attribution
To trigger aggregatable attribution given an attribution trigger trigger and an attribution source sourceToAttribute, run the following steps:
-
If the result of running check if an attribution trigger contains aggregatable data is false, return the triggering result ("
dropped
", null). -
Let windowResult be the result of check whether a moment falls within a window with trigger’s trigger time and sourceToAttribute’s aggregatable report window.
-
If windowResult is falls after:
-
Return the triggering result ("
dropped
", ("trigger-aggregate-report-window-passed
", null)).
-
-
Assert: windowResult is falls within.
-
Let matchedDedupKey be null.
-
For each aggregatable dedup key aggregatableDedupKey of trigger’s aggregatable dedup keys:
-
If the result of running match an attribution source against filters and negated filters with sourceToAttribute, aggregatableDedupKey’s filters, aggregatableDedupKey’s negated filters, and trigger’s trigger time is true:
-
Set matchedDedupKey to aggregatableDedupKey’s dedup key.
-
-
If matchedDedupKey is not null and sourceToAttribute’s aggregatable dedup keys contains it:
-
Return the triggering result ("
dropped
", ("trigger-aggregate-deduplicated
", null)).
-
-
Let report be the result of running obtain an aggregatable attribution report with sourceToAttribute and trigger.
-
If report’s contributions is empty:
-
Return the triggering result ("
dropped
", ("trigger-aggregate-no-contributions
", null)).
-
-
Let numMatchingReports be the number of entries in the aggregatable attribution report cache whose effective attribution destination equals trigger’s attribution destination and whose is null report is false.
-
If numMatchingReports is greater than or equal to the user agent’s max aggregatable attribution reports per attribution destination:
-
Return the triggering result ("
dropped
", ("trigger-aggregate-storage-limit
", null)).
-
-
Let rateLimitRecord be a new attribution rate-limit record with the items:
- scope
-
"aggregatable-attribution"
- source site
-
sourceToAttribute’s source site
- attribution destination
-
trigger’s attribution destination
- reporting origin
-
sourceToAttribute’s reporting origin
- time
-
sourceToAttribute’s source time
- expiry time
-
null
- entity ID
-
null
-
If the result of running check if attribution should be blocked by rate limits with trigger, sourceToAttribute, and rateLimitRecord is not null, return it.
-
If sourceToAttribute’s number of aggregatable attribution reports value is equal to max aggregatable reports per source[0], then:
-
Return the triggering result ("
dropped
", ("trigger-aggregate-excessive-reports
", null)).
-
-
If the result of running check if an attribution source can create aggregatable contributions with report and sourceToAttribute is false:
-
Return the triggering result ("
dropped
", ("trigger-aggregate-insufficient-budget
", null)).
-
-
Append report to the aggregatable attribution report cache.
-
Increment sourceToAttribute’s number of aggregatable attribution reports value by 1.
-
Decrement sourceToAttribute’s remaining aggregatable attribution budget value by report’s required aggregatable budget.
-
If matchedDedupKey is not null, append it to sourceToAttribute’s aggregatable dedup keys.
-
Append rateLimitRecord to the attribution rate-limit cache.
-
Run generate null attribution reports with trigger and report.
-
If the result of checking if attribution debugging can be enabled with report’s attribution debug info is true, queue a task to attempt to deliver a debug report with report.
-
Return the triggering result ("
attributed
", null).
12.10. Triggering attribution
To obtain and deliver a verbose debug report on trigger registration given a set of trigger debug data dataSet, an attribution trigger trigger, and a possibly null attribution source sourceToAttribute:
-
Let debugDataList be a new list.
-
For each data of dataSet:
-
Let debugData be the result of running obtain verbose debug data on trigger registration with data’s data type, trigger, sourceToAttribute, and data’s report.
-
If debugData is not null, append debugData to debugDataList.
-
-
If debugDataList is empty, return.
-
Run obtain and deliver a verbose debug report with debugDataList, trigger’s reporting origin, and trigger’s fenced.
To obtain and deliver an aggregatable debug report on trigger registration given a set of trigger debug data dataSet, an attribution trigger trigger, and a possibly null attribution source sourceToAttribute:
-
If trigger’s fenced is true, return.
-
Let config be trigger’s aggregatable debug reporting config.
-
Let debugDataMap be config’s debug data.
-
If debugDataMap is empty, return.
-
Let sourceKeyPiece be 0.
-
If sourceToAttribute is not null, then set sourceKeyPiece to sourceToAttribute’s aggregatable debug reporting config's key piece.
-
Let contributions be a new list.
-
Let contextKeyPiece be sourceKeyPiece bitwise-OR config’s key piece.
-
For each data of dataSet:
-
Let type be data’s data type.
-
If debugDataMap[type] exists:
-
Let keyPiece be contextKeyPiece bitwise-OR debugDataMap[type]'s key.
-
Let contribution be a new aggregatable contribution with the items:
- key
-
keyPiece
- value
-
debugDataMap[type]'s value
- filtering ID
-
default filtering ID value
-
Append contribution to contributions.
-
-
-
Run obtain and deliver an aggregatable debug report on registration with contributions, trigger’s attribution destination, trigger’s reporting origin, sourceToAttribute, trigger’s attribution destination, config’s aggregation coordinator, and trigger’s trigger time.
To obtain and deliver debug reports on trigger registration given a set of trigger debug data dataSet, an attribution trigger trigger, and a possibly null attribution source sourceToAttribute:
-
Run obtain and deliver a verbose debug report on trigger registration with dataSet, trigger, and sourceToAttribute.
-
Run obtain and deliver an aggregatable debug report on trigger registration with dataSet, trigger, and sourceToAttribute.
To check if an attribution source and attribution trigger have matching attribution scopes given an attribution source source and an attribution trigger trigger:
-
If trigger’s attribution scopes is empty, return true.
-
Return whether the intersection of source’s attribution scopes's values and trigger’s attribution scopes is not empty.
An attribution source a is higher-priority than an attribution source b if the following steps return true:
-
If a’s source time is greater than b’s source time, return true.
-
Return false.
To find matching sources given an attribution trigger trigger:
-
Let matchingSources be a new list.
-
For each source of the attribution source cache:
-
If source’s attribution destinations does not contain trigger’s attribution destination, continue.
-
If source’s reporting origin and trigger’s reporting origin are not same origin, continue.
-
If source’s expiry time is less than or equal to trigger’s trigger time, continue.
-
Append source to matchingSources.
-
-
Let sourceToAttribute be null.
-
For each source of matchingSources:
-
If the result of checking if an attribution source and attribution trigger have matching attribution scopes with source and trigger is false, continue.
-
If sourceToAttribute is null or source is higher-priority than sourceToAttribute, set sourceToAttribute to source.
-
-
If sourceToAttribute is null, return the tuple (null, an empty list).
-
Remove sourceToAttribute from matchingSources.
-
Return the tuple (sourceToAttribute, matchingSources).
Note: We deliberately return all matching sources for deletion even if they don’t have matching attribution scopes with the attribution trigger to avoid creating multiple attribution reports from a single cross-site user interaction.
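Note: A non-normative sketch of source matching: candidates share an attribution destination and reporting origin with the trigger and have not expired; among those with matching attribution scopes, the most recently registered source wins, and the remaining matches are returned so the caller can delete them.
# Non-normative sketch of "find matching sources".
def find_matching_sources(sources, trigger):
    matching = [s for s in sources
                if trigger["destination"] in s["destinations"]
                and s["reporting_origin"] == trigger["reporting_origin"]
                and s["expiry_time"] > trigger["trigger_time"]]
    candidates = [s for s in matching
                  if not trigger["scopes"] or set(s["scopes"]) & set(trigger["scopes"])]
    if not candidates:
        return None, []
    chosen = max(candidates, key=lambda s: s["source_time"])  # prefer the most recent source
    matching.remove(chosen)
    return chosen, matching                                   # the rest are deleted by the caller

sources = [
    {"destinations": ["https://destination.example"], "reporting_origin": "https://reporter.example",
     "expiry_time": 100, "source_time": 10, "scopes": []},
    {"destinations": ["https://destination.example"], "reporting_origin": "https://reporter.example",
     "expiry_time": 100, "source_time": 20, "scopes": []},
]
trigger = {"destination": "https://destination.example", "reporting_origin": "https://reporter.example",
           "trigger_time": 50, "scopes": []}
print(find_matching_sources(sources, trigger)[0]["source_time"])  # 20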
To check if an attribution trigger contains aggregatable data given an attribution trigger trigger, run the following steps:
-
If trigger’s aggregatable trigger data is not empty, return true.
-
If any of trigger’s aggregatable values configurations's values is not empty, return true.
-
Return false.
To trigger attribution given an attribution trigger trigger, run the following steps:
-
Let hasAggregatableData be the result of checking if an attribution trigger contains aggregatable data with trigger.
-
If trigger’s event-level trigger configurations is empty and hasAggregatableData is false, return.
-
Let (sourceToAttribute, matchingSources) be the result of running find matching sources with trigger.
-
If sourceToAttribute is null:
-
Run obtain and deliver debug reports on trigger registration with « ("
trigger-no-matching-source
", null) », trigger, and sourceToAttribute set to null. -
If hasAggregatableData is true, then run generate null attribution reports with trigger and report set to null.
-
Return.
-
-
If the result of running match an attribution source against filters and negated filters with sourceToAttribute, trigger’s filters, trigger’s negated filters, and trigger’s trigger time is false:
-
Run obtain and deliver debug reports on trigger registration with « ("
trigger-no-matching-filter-data
", null) », trigger, and sourceToAttribute. -
If hasAggregatableData is true, then run generate null attribution reports with trigger and report set to null.
-
Return.
-
-
For each item of matchingSources:
-
Remove item from the attribution source cache.
-
-
Let eventLevelResult be the result of running trigger event-level attribution with trigger and sourceToAttribute.
-
Let aggregatableResult be the result of running trigger aggregatable attribution with trigger and sourceToAttribute.
-
Let debugDataSet be a new set.
-
If eventLevelResult’s debug data is not null, then append eventLevelResult’s debug data to debugDataSet.
-
If aggregatableResult’s debug data is not null, then append aggregatableResult’s debug data to debugDataSet.
-
Run obtain and deliver debug reports on trigger registration with debugDataSet, trigger, and sourceToAttribute.
-
If hasAggregatableData and aggregatableResult’s status is "
dropped
", run generate null attribution reports with trigger and report set to null. -
Remove each attribution rate-limit record entry from the attribution rate-limit cache for which the result of running can attribution rate-limit record be removed with entry and trigger’s trigger time is true.
Consider replacing debugDataSet with a list. [Issue #1287]
12.11. Establishing report delivery time
To check whether a moment falls within a window given a moment moment and a report window window:
-
If moment is less than window’s start, return falls before.
-
If moment is greater than or equal to window’s end, return falls after.
-
Return falls within.
To obtain an event-level report delivery time given a report window list windows and a moment triggerTime:
-
If automation local testing mode is true, return triggerTime.
-
For each window of windows:
-
If the result of check whether a moment falls within a window with triggerTime and window is falls within, return window’s end.
-
-
Assert: not reached.
To obtain an aggregatable attribution report delivery time given an attribution trigger trigger, perform the following steps. They return a moment.
-
Let triggerTime be trigger’s trigger time.
-
If the result of running check if an aggregatable attribution report should be unconditionally sent with trigger is true, return triggerTime.
-
Let r be a random double between 0 (inclusive) and 1 (exclusive) with uniform probability.
-
Return triggerTime + r * randomized aggregatable attribution report delay.
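Note: A non-normative sketch of delivery-time selection. The aggregatable delay constant below is an assumed placeholder, not a value defined in this section.
import random

# Non-normative sketch of report delivery-time selection.
def event_level_delivery_time(windows, trigger_time, local_testing_mode=False):
    # windows: list of (start, end) moments, assumed sorted and non-overlapping
    if local_testing_mode:
        return trigger_time
    for start, end in windows:
        if start <= trigger_time < end:
            return end
    raise AssertionError("trigger time outside every report window")

AGGREGATABLE_DELAY = 600  # assumed "randomized aggregatable attribution report delay", in seconds

def aggregatable_delivery_time(trigger_time, unconditional=False):
    if unconditional:
        return trigger_time
    return trigger_time + random.random() * AGGREGATABLE_DELAY

print(event_level_delivery_time([(0, 3600), (3600, 86400)], trigger_time=5000))  # 86400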
12.12. Obtaining an event-level report
To obtain an event-level report given an attribution source source, a moment triggerTime, a possibly null non-negative 64-bit integer triggerDebugKey, a 64-bit integer priority priority, and a trigger spec map entry specEntry:
-
Let reportTime be the result of running obtain an event-level report delivery time with specEntry’s value's event-level report windows and triggerTime.
-
Let report be a new event-level report struct whose items are:
- event ID
-
source’s event ID.
- trigger data
-
specEntry’s key
- randomized trigger rate
-
source’s randomized trigger rate.
- reporting origin
-
source’s reporting origin.
- attribution destinations
-
source’s attribution destinations.
- report time
-
reportTime
- trigger priority
-
priority.
- trigger time
-
triggerTime.
- source ID
-
source’s internal ID.
- external ID
-
The result of generating a random UUID.
- internal ID
-
The result of getting the next internal ID
- attribution debug info
-
(source’s debug key, triggerDebugKey).
-
Return report.
12.13. Obtaining an aggregatable report’s required budget
An aggregatable report report’s required aggregatable budget is the total value of report’s contributions.
12.14. Obtaining an aggregatable attribution report
To obtain an aggregatable attribution report given an attribution source source and an attribution trigger trigger:
-
Let reportTime be the result of running obtain an aggregatable attribution report delivery time with trigger.
-
Let report be a new aggregatable attribution report struct whose items are:
- reporting origin
-
source’s reporting origin.
- effective attribution destination
-
trigger’s attribution destination.
- source time
-
source’s source time.
- report time
-
reportTime.
- external ID
-
The result of generating a random UUID.
- internal ID
-
The result of getting the next internal ID
- attribution debug info
-
(source’s debug key, trigger’s debug key)
- contributions
-
The result of running create aggregatable contributions with source and trigger.
- aggregation coordinator
-
trigger’s aggregation coordinator.
- source registration time configuration
-
trigger’s aggregatable source registration time configuration.
- trigger context ID
-
trigger’s trigger context ID
- filtering ID max bytes
-
trigger’s aggregatable filtering ID max bytes
- source ID
-
source’s internal ID.
-
Return report.
12.15. Generating randomized null attribution reports
To obtain a null attribution report given an attribution trigger trigger and a moment sourceTime:
-
Let reportTime be the result of running obtain an aggregatable attribution report delivery time with trigger.
-
Let report be a new aggregatable attribution report struct whose items are:
- reporting origin
-
trigger’s reporting origin
- effective attribution destination
-
trigger’s attribution destination
- source time
-
sourceTime
- report time
-
reportTime
- external ID
-
The result of generating a random UUID
- internal ID
-
The result of getting the next internal ID
- attribution debug info
-
(null, trigger’s debug key)
- contributions
-
«»
- aggregation coordinator
-
trigger’s aggregation coordinator
- source registration time configuration
-
trigger’s aggregatable source registration time configuration
- is null report
-
true
- trigger context ID
-
trigger’s trigger context ID
- filtering ID max bytes
-
trigger’s aggregatable filtering ID max bytes
- source ID
-
Null
-
Return report.
To obtain rounded source time given a moment sourceTime, return sourceTime in seconds since the UNIX epoch, rounded down to a multiple of a whole day (86400 seconds).
To determine if a randomized null attribution report is generated given a double randomPickRate:
-
Assert: randomPickRate is between 0 and 1 (both inclusive).
-
Let r be a random double between 0 (inclusive) and 1 (exclusive) with uniform probability.
-
If r is less than randomPickRate, return true.
-
Otherwise, return false.
To generate null attribution reports given an attribution trigger trigger and a possibly null aggregatable attribution report report:
-
Let nullReports be a new list.
-
If trigger’s aggregatable source registration time configuration is "
exclude
":-
Let randomizedNullReportRate be randomized null attribution report rate excluding source registration time.
-
If the result of running check if an aggregatable attribution report should be unconditionally sent with trigger is true, set randomizedNullReportRate to 1.
-
If report is null and the result of determining if a randomized null attribution report is generated with randomizedNullReportRate is true:
-
Let nullReport be the result of obtaining a null attribution report with trigger and trigger’s trigger time.
-
Append nullReport to the aggregatable attribution report cache.
-
Append nullReport to nullReports.
-
-
-
Otherwise:
-
Assert: trigger’s trigger context ID is null.
-
Let maxSourceExpiry be valid source expiry range[1].
-
Round maxSourceExpiry away from zero to the nearest day (86400 seconds).
-
Let roundedAttributedSourceTime be null.
-
If report is not null, set roundedAttributedSourceTime to the result of obtaining rounded source time with report’s source time.
-
For each integer day of the range 0 to the number of days in maxSourceExpiry, inclusive:
-
Let fakeSourceTime be trigger’s trigger time - day days.
-
If roundedAttributedSourceTime is not null and equals the result of obtaining rounded source time with fakeSourceTime, continue.
Note: The day already covered by the attributed report is skipped; null reports are generated only for the other days.
-
If the result of determining if a randomized null attribution report is generated with randomized null attribution report rate including source registration time is true:
-
Let nullReport be the result of obtaining a null attribution report with trigger and fakeSourceTime.
-
Append nullReport to the aggregatable attribution report cache.
-
Append nullReport to nullReports.
-
-
-
-
Return nullReports.
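Note: A non-normative sketch of the null-report logic above; the two rate constants are assumed placeholders. With the "include" configuration, the day covered by the real report is skipped and every other day in the source-expiry range may independently yield a null report.
import random

# Non-normative sketch of "generate null attribution reports".
DAY = 86400
RATE_EXCLUDE = 0.05   # assumed rate excluding source registration time
RATE_INCLUDE = 0.008  # assumed rate including source registration time

def rounded_source_time(t):
    return (t // DAY) * DAY

def generate_null_reports(trigger_time, config, attributed_source_time,
                          max_expiry_days, unconditional):
    null_report_times = []
    if config == "exclude":
        rate = 1.0 if unconditional else RATE_EXCLUDE
        if attributed_source_time is None and random.random() < rate:
            null_report_times.append(trigger_time)
        return null_report_times
    rounded_attributed = (None if attributed_source_time is None
                          else rounded_source_time(attributed_source_time))
    for day in range(max_expiry_days + 1):
        fake_source_time = trigger_time - day * DAY
        if rounded_attributed is not None and rounded_attributed == rounded_source_time(fake_source_time):
            continue  # the real report already covers this day
        if random.random() < RATE_INCLUDE:
            null_report_times.append(fake_source_time)
    return null_report_times

print(generate_null_reports(10 * DAY + 100, "include", 9 * DAY + 50,
                            max_expiry_days=30, unconditional=False))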
To shuffle a list list, reorder list’s elements such that each possible permutation has equal probability of appearance.
12.16. Deferring trigger attribution
To maybe defer and then complete trigger attribution given an attribution trigger trigger, run the following steps in parallel:
-
Let navigation be the navigation that landed on the document from which trigger’s registration was initiated.
-
If navigation is null, return.
-
Let sources be all source registrations originating from background attributionsrc requests initiated by navigation.
-
If sources is empty, return.
-
Wait until all sources are processed.
-
Queue a task on the networking task source to trigger attribution with trigger.
Specify this in terms of Navigation
13. Report delivery
The user agent MUST periodically run queue reports for delivery on the event-level report cache and aggregatable attribution report cache.
To queue reports for delivery given a set of attribution reports cache, run the following steps:
-
Let reportsToSend be a new list.
-
For each report of cache:
-
If report’s report time is greater than the current wall time, continue.
-
Remove report from cache.
Note: In order to support sending, waiting, and retries across various forms of interruption, including shutdown, the user agent may need to persist reports that are in the process of being sent in some other storage.
-
Append report to reportsToSend.
-
-
Shuffle reportsToSend.
Note: Shuffling ensures event-level reports for the same source with the same report time are never sent in the order they were created. This results in less information gained from a single attribution source.
-
For each report of reportsToSend, run the following steps in parallel:
-
Wait an implementation-defined random non-negative duration.
Note: On startup, it is possible the user agent will need to send many reports whose report times passed while the browser was closed. Adding random delay prevents event IDs from different source origins from being joined based on the time they were received.
-
Optionally, wait a further implementation-defined duration.
Note: This is intended to allow user agents to optimize device resource usage.
-
Run attempt to deliver a report with report.
-
13.1. Encode an unsigned k-byte integer
To encode an unsigned k-byte integer given an integer integerToEncode and an integer byteLength, return the representation of integerToEncode as a big-endian byte sequence of length byteLength, left padding with zeroes as necessary.
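Note: A non-normative equivalent in Python:
# Non-normative sketch of "encode an unsigned k-byte integer".
def encode_unsigned(value, byte_length):
    return value.to_bytes(byte_length, byteorder="big")  # left-padded with zero bytes

print(encode_unsigned(1369, 4).hex())    # 00000559
print(encode_unsigned(0x559, 16).hex())  # 16-byte big-endian bucket key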
13.2. Obtaining an aggregatable report’s debug mode
An aggregatable report report’s debug mode is the result of running the following steps:
-
If report is an:
- aggregatable attribution report
-
-
If the result of checking if attribution debugging can be enabled with report’s attribution debug info is true, return enabled.
-
Return disabled.
-
- aggregatable debug report
-
Return disabled.
13.3. Obtaining an aggregatable report’s shared info
An aggregatable report report’s shared info is the result of running the following steps:
-
Let reportingOrigin be report’s reporting origin.
-
Let api be null.
-
If report is an:
- aggregatable attribution report
-
Set api to "
attribution-reporting
". - aggregatable debug report
-
Set api to "
attribution-reporting-debug
".
-
Let sharedInfo be a map of the following key/value pairs:
- "
api
" -
api
- "
attribution_destination
" -
report’s effective attribution destination, serialized
- "
report_id
" -
report’s external ID
Note: The inclusion of "
report_id
" in the shared info is intended to allow the report recipient to perform deduplication and prevent double counting, in the event that the user agent retries reports on failure.- "
reporting_origin
" -
reportingOrigin, serialized
- "
scheduled_report_time
" -
report’s report time in seconds since the UNIX epoch, serialized
- "
version
" -
"
1.0
"
Note: The "
version
" value needs to be bumped if the aggregation service upgrades. - "
-
If report’s debug mode is enabled, set sharedInfo["
debug_mode
"] to "enabled
". -
If report is an aggregatable attribution report and report’s source registration time configuration is "
include
": set sharedInfo["source_registration_time
"] to the result of obtaining rounded source time with report’s source time, serialized. -
Return the string resulting from executing serialize an infra value to a json string on sharedInfo.
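Note: Serialized, the shared info for an aggregatable attribution report might look like the (non-normative, made-up) example built below.
import json

# Illustrative shared info; every value is made up.
shared_info = {
    "api": "attribution-reporting",
    "attribution_destination": "https://destination.example",
    "report_id": "21abd97f-73e8-4b88-9389-a9fee6abda5e",
    "reporting_origin": "https://reporter.example",
    "scheduled_report_time": "1692000000",
    "version": "1.0",
}
# Present only when the source registration time configuration is "include":
shared_info["source_registration_time"] = "1691971200"
# Present only when the report's debug mode is enabled:
shared_info["debug_mode"] = "enabled"
print(json.dumps(shared_info))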
13.4. Obtaining an aggregatable report’s aggregation service payloads
To obtain the public key for encryption given an aggregation coordinator aggregationCoordinator:
-
Let url be a new URL record.
-
Set url’s scheme to aggregationCoordinator’s scheme and url’s host to aggregationCoordinator’s host.
-
Set url’s path to «"
.well-known
", "aggregation-service
", "v1
", "public-keys
"». -
Return a user-agent-determined public key from url or an error in the event that the user agent failed to obtain the public key from url. This step may be asynchronous.
Specify this in terms of fetch.
Note: The user agent might enforce weekly key rotation. If there are multiple keys, the user agent might independently pick a key uniformly at random for every encryption operation. The key should be uniquely identifiable.
An aggregatable report report’s plaintext payload is the result of running the following steps:
-
Let payloadData be a new list.
-
Let maxContributions be null.
-
If report is an:
- aggregatable attribution report
-
Set maxContributions to max aggregation keys per source registration.
- aggregatable debug report
-
Set maxContributions to max contributions per aggregatable debug report.
-
Let contributions be report’s contributions.
-
Assert: contributions’s size is less than or equal to maxContributions.
-
While contributions’s size is less than maxContributions:
-
Let nullContribution be a new aggregatable contribution with the items:
- key
-
0
- value
-
0
- filtering ID
-
0
-
Append nullContribution to contributions.
-
-
For each contribution of contributions:
-
Let contributionData be a map of the following key/value pairs:
- "
bucket
" -
The result of encoding an unsigned k-byte integer given contribution’s key and 16.
- "
value
" -
The result of encoding an unsigned k-byte integer given contribution’s value and 4.
- "
id
" -
The result of encoding an unsigned k-byte integer given contribution’s filtering ID and report’s filtering ID max bytes.
- "
-
Append contributionData to payloadData.
-
-
Let payload be a map of the following key/value pairs:
- "
data
" -
payloadData
- "
operation
" -
"
histogram
"
- "
-
Return the byte sequence resulting from CBOR encoding payload.
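Note: A non-normative sketch of payload assembly. The zero-valued padding contribution and the contribution limit passed in the usage example are assumptions; a real implementation would CBOR-encode the returned map before encryption.
# Non-normative sketch of the plaintext payload construction.
def encode_unsigned(value, byte_length):
    return value.to_bytes(byte_length, byteorder="big")

def plaintext_payload(contributions, max_contributions, filtering_id_max_bytes):
    padded = list(contributions)
    while len(padded) < max_contributions:
        # assumed padding contribution: zero key, zero value, zero filtering ID
        padded.append({"key": 0, "value": 0, "filtering_id": 0})
    data = [{
        "bucket": encode_unsigned(c["key"], 16),
        "value": encode_unsigned(c["value"], 4),
        "id": encode_unsigned(c["filtering_id"], filtering_id_max_bytes),
    } for c in padded]
    return {"data": data, "operation": "histogram"}  # CBOR-encode this map before encryption

payload = plaintext_payload([{"key": 0x559, "value": 32768, "filtering_id": 0}],
                            max_contributions=20, filtering_id_max_bytes=1)
print(len(payload["data"]), payload["data"][0]["bucket"].hex())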
To obtain the encrypted payload given an aggregatable report report and a public key pkR, run the following steps:
-
Let plaintext be report’s plaintext payload.
-
Let encodedSharedInfo be report’s shared info, encoded.
-
Let info be the concatenation of «"
aggregation_service
", encodedSharedInfo». -
Set up HPKE sender’s context with pkR and info.
-
Return the byte sequence or an error resulting from encrypting plaintext with the sender’s context.
To obtain the aggregation service payloads given an aggregatable report report, run the following steps:
-
Let pkR be the result of running obtain the public key for encryption with report’s aggregation coordinator.
-
If pkR is an error, return pkR.
-
Let encryptedPayload be the result of running obtain the encrypted payload with report and pkR.
-
If encryptedPayload is an error, return encryptedPayload.
-
Let aggregationServicePayloads be a new list.
-
Let aggregationServicePayload be a map of the following key/value pairs:
- "
payload
" -
encryptedPayload, base64 encoded
- "
key_id
" -
A string identifying pkR
- "
-
If report’s debug mode is enabled, set aggregationServicePayload["
debug_cleartext_payload
"] to report’s plaintext payload, base64 encoded. -
Append aggregationServicePayload to aggregationServicePayloads.
-
Return aggregationServicePayloads.
13.5. Serialize attribution report body
To obtain an event-level report body given an attribution report report, run the following steps:
-
Let data be a map of the following key/value pairs:
- "
attribution_destination
" -
report’s attribution destinations, serialized.
- "
randomized_trigger_rate
" -
report’s randomized trigger rate, rounded to 7 digits after the decimal point
- "
source_type
" -
report’s source type
- "
source_event_id
" -
report’s event ID, serialized
- "
trigger_data
" -
report’s trigger data, serialized
- "
report_id
" -
report’s external ID
Note: The inclusion of "
report_id
" in the report body is intended to allow the report recipient to perform deduplication and prevent double counting, in the event that the user agent retries reports on failure.- "
scheduled_report_time
" -
report’s report time in seconds since the UNIX epoch, serialized
- "
-
Run serialize an attribution debug info with data and report’s attribution debug info.
-
Return data.
To serialize an event-level report report, run the following steps:
-
Let data be the result of running obtain an event-level report body with report.
-
Return the byte sequence resulting from executing serialize an infra value to JSON bytes on data.
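Note: A serialized event-level report body might resemble the (non-normative, made-up) example below; the debug keys are present only when attribution debugging is enabled.
import json

# Illustrative event-level report body; all values are made up.
report_body = {
    "attribution_destination": "https://destination.example",
    "randomized_trigger_rate": 0.0024263,
    "source_type": "navigation",
    "source_event_id": "777777",
    "trigger_data": "2",
    "report_id": "a5e52fd1-05f2-4d28-9a4e-7b60d1f9b8e2",
    "scheduled_report_time": "1692003600",
    "source_debug_key": "347982378",
    "trigger_debug_key": "789923884",
}
print(json.dumps(report_body))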
To obtain an aggregatable report body given an aggregatable report report, run the following steps:
-
Assert: report’s effective attribution destination is not an opaque origin.
-
Let aggregationServicePayloads be the result of running obtain the aggregation service payloads.
-
If aggregationServicePayloads is an error, return aggregationServicePayloads.
-
Let data be a map of the following key/value pairs:
- "
shared_info
" -
report’s shared info
- "
aggregation_service_payloads
" -
aggregationServicePayloads
- "
aggregation_coordinator_origin
" -
report’s aggregation coordinator, serialized
- "
-
Return data.
To serialize an aggregatable attribution report report, run the following steps:
-
Let data be the result of running obtain an aggregatable report body with report.
-
Run serialize an attribution debug info with data and report’s attribution debug info.
-
If report’s trigger context ID is not null, set data["
trigger_context_id
"] to report’s trigger context ID. -
Return the byte sequence resulting from executing serialize an infra value to JSON bytes on data.
To serialize an aggregatable debug report report, run the following steps:
-
Let data be the result of running obtain an aggregatable report body with report.
-
Return the byte sequence resulting from executing serialize an infra value to JSON bytes on data.
To serialize an attribution report report, run the following steps:
-
Assert: report is an event-level report or an aggregatable attribution report.
-
If report is an:
- event-level report
-
Return the result of running serialize an event-level report with report.
- aggregatable attribution report
-
Return the result of running serialize an aggregatable attribution report with report.
13.6. Serialize verbose debug report body
To serialize a verbose debug report report, run the following steps:
-
Let collection be a new list.
-
For each data of report’s data:
-
Append a new map whose items are "type" → data’s data type and "body" → data’s body to collection.
-
-
Return the byte sequence resulting from executing serialize an Infra value to JSON bytes on collection.
13.7. Get report request URL
To generate a report URL given a suitable origin reportingOrigin and a list of strings path:
-
Let reportUrl be a new URL record.
-
Let fullPath be «"
.well-known
", "attribution-reporting
"». -
Append path to fullPath.
-
Set reportUrl’s path to fullPath.
-
Return reportUrl.
To generate an attribution report URL given an attribution report report and an optional boolean isDebugReport (default false):
-
Assert: report is an event-level report or an aggregatable attribution report.
-
Let path be a new list.
-
If isDebugReport is true, append "
debug
" to path. -
If report is an:
- event-level report
-
Append "
report-event-attribution
" to path. - aggregatable attribution report
-
Append "
report-aggregate-attribution
" to path.
-
Return the result of running generate a report URL with report’s reporting origin and path.
To generate a verbose debug report URL given a verbose debug report report:
-
Let path be «"
debug
", "verbose
"». -
Return the result of running generate a report URL with report’s reporting origin and path.
To generate an aggregatable debug report URL given an aggregatable debug report report:
-
Let path be «"
debug
", "report-aggregate-debug
"». -
Return the result of running generate a report URL with report’s reporting origin and path.
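Note: A non-normative sketch of the resulting request URLs:
# Non-normative sketch of the report URL construction.
def report_url(reporting_origin, path_segments):
    return reporting_origin.rstrip("/") + "/.well-known/attribution-reporting/" + "/".join(path_segments)

print(report_url("https://reporter.example", ["report-event-attribution"]))
print(report_url("https://reporter.example", ["debug", "report-aggregate-attribution"]))
print(report_url("https://reporter.example", ["debug", "verbose"]))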
13.8. Creating a report request
To create a report request given a URL url and a byte sequence body:
-
Let headers be a new header list containing a header named "
Content-Type
" whose value is "application/json
". -
Let request be a new request with the following properties:
- method
-
"
POST
" - URL
-
url
- header list
-
headers
- body
- referrer
-
"
no-referrer
" - client
-
null
- origin
-
url’s origin
- window
-
"
no-window
" - service-workers mode
-
"
none
" - initiator
-
""
- mode
-
"
same-origin
" - unsafe-request flag
-
set
- credentials mode
-
"
omit
" - cache mode
-
"
no-store
"
-
Return request.
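Note: Outside the browser, an equivalent delivery request could be approximated with the sketch below; it assumes the third-party "requests" library and is not a substitute for the fetch-based steps in this section.
import json
import requests  # assumed third-party HTTP library, used purely for illustration

def send_report(url, body):
    # A credential-less JSON POST with no referrer information, mirroring the request above.
    return requests.post(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        allow_redirects=False,
    )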
13.9. Issuing a report request
This algorithm constructs a request and attempts to deliver it to a suitable origin.
To attempt to deliver a report given an attribution report report, run the following steps:
-
Assert: Neither the event-level report cache nor the aggregatable attribution report cache contains report.
-
The user-agent MAY ignore the report; if so, return.
-
Let url be the result of executing generate an attribution report URL on report.
-
Let data be the result of executing serialize an attribution report on report.
-
If data is an error, return.
-
Let request be the result of executing create a report request on url and data.
-
Queue a task to fetch request.
This fetch should use a network partition key for an opaque origin. [Issue #220]
A user agent MAY retry this algorithm in the event that there was an error. To prevent the report recipient from learning additional information about whether a user is online, retries might be limited in number and subject to random delays.
13.10. Issuing a debug report request
To attempt to deliver a debug report given an attribution report report:
-
The user-agent MAY ignore the report; if so, return.
-
Let url be the result of executing generate an attribution report URL on report with isDebugReport set to true.
-
Let data be the result of executing serialize an attribution report on report.
-
If data is an error, return.
-
Let request be the result of executing create a report request on url and data.
-
Fetch request.
13.11. Issuing a verbose debug request
To attempt to deliver a verbose debug report given a verbose debug report report:
-
The user-agent MAY ignore the report; if so, return.
-
Let url be the result of executing generate a verbose debug report URL on report.
-
Let data be the result of executing serialize a verbose debug report on report.
-
Let request be the result of executing create a report request on url and data.
-
Fetch request.
13.12. Issuing an aggregatable debug request
To attempt to deliver an aggregatable debug report given an aggregatable debug report report:
-
The user-agent MAY ignore the report; if so, return.
-
Let url be the result of executing generate an aggregatable debug report URL on report.
-
Let data be the result of executing serialize an aggregatable debug report on report.
-
Let request be the result of executing create a report request on url and data.
-
Fetch request.
This fetch should use a network partition key for an opaque origin. [Issue #220]
A user agent MAY retry this algorithm in the event that there was an error.
14. Cross App and Web Algorithms
14.1. Get OS registrations
An OS registration is a struct with the following items:
- URL
-
A URL.
- debug reporting enabled
-
A boolean, initially false.
To get OS registrations from a header value given a header value header:
-
Let values be the result of parsing structured fields with input_string set to header and header_type set to "
list
". -
If parsing failed, return an error.
-
Let registrations be a new list.
-
For each value of values:
-
Let url be the result of running the URL parser on value.
-
If url is failure or null, continue.
-
Let debugReporting be false.
-
Let params be the parameters associated with value.
-
If params["
debug-reporting
"] exists and params["debug-reporting
"] is a boolean, set debugReporting to params["debug-reporting
"]. -
Let registration be a new OS registration struct whose items are:
- URL
-
url
- debug reporting enabled
-
debugReporting
-
Append registration to registrations.
-
If registrations is empty, return an error.
-
Return registrations.
14.2. Registrars
A registrar is one of the following:
- "
web
" -
The user agent supports web registrations.
- "
os
" -
The user agent supports OS registrations.
To get supported registrars:
-
Let supportedRegistrars be a new list.
-
If the user agent supports web registrations, append "
web
" to supportedRegistrars. -
If the user agent supports OS registrations, append "
os
" to supportedRegistrars. -
Return supportedRegistrars.
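A minimal sketch of this algorithm, treating the user agent's web and OS support as boolean inputs (an implementation-defined capability):

```ts
// Non-normative sketch of "get supported registrars".
type Registrar = "web" | "os";

function getSupportedRegistrars(supportsWeb: boolean, supportsOs: boolean): Registrar[] {
  const supported: Registrar[] = [];
  if (supportsWeb) supported.push("web");
  if (supportsOs) supported.push("os");
  return supported;
}
```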
14.3. Deliver OS registration debug reports
To obtain and deliver debug reports on OS registrations given an OS debug data type dataType, a list of OS registrations registrations, an origin contextOrigin, and a boolean fenced:
- Let contextSite be the result of obtaining a site from contextOrigin.
- For each registration of registrations:
  - If registration’s debug reporting enabled is false, continue.
  - Let body be a new map with the following key/value pairs:
    - "context_site": contextSite, serialized.
    - "registration_url": registration’s URL, serialized.
  - Let data be a new verbose debug data with the items:
    - data type: dataType
    - body: body
  - Run obtain and deliver a verbose debug report with « data », contextOrigin, and fenced.
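The per-registration loop above can be illustrated with the following non-normative sketch, which only builds the debug data; actual delivery and the exact wire format are governed by the verbose debug report algorithms, and the field layout shown here is an assumption for illustration.

```ts
// Non-normative sketch of building OS-registration debug data.
interface OsRegistrationInfo {
  url: URL;
  debugReportingEnabled: boolean;
}

interface VerboseDebugData {
  dataType: string;
  body: Record<string, string>;
}

function buildOsRegistrationDebugData(
  dataType: string,
  registrations: OsRegistrationInfo[],
  contextSite: string, // assumed to come from the "obtain a site" algorithm
): VerboseDebugData[] {
  const out: VerboseDebugData[] = [];
  for (const registration of registrations) {
    // Skip registrations that did not opt into debug reporting.
    if (!registration.debugReportingEnabled) continue;
    out.push({
      dataType,
      body: {
        context_site: contextSite,
        registration_url: registration.url.toString(),
      },
    });
  }
  return out;
}
```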
15. User-Agent Automation
The user agent has an associated boolean automation local testing mode (default false).
For the purposes of user-agent automation and website testing, this document defines the below [WebDriver] extension commands to control the API configuration.
15.1. Set local testing mode
| HTTP Method | URI Template |
| --- | --- |
| POST | /session/{session id}/ara/localtestingmode |
The remote end steps are:
- If parameters is not a JSON-formatted Object, return a WebDriver error with error code invalid argument.
- Let enabled be the result of getting a property named "enabled" from parameters.
- If enabled is undefined or is not a boolean, return a WebDriver error with error code invalid argument.
- Set automation local testing mode to enabled.
- Return success with data null.
Note: Without this, reports would be subject to noise and delays, making testing difficult.
15.2. Send pending reports
| HTTP Method | URI Template |
| --- | --- |
| POST | /session/{session id}/ara/sendpendingreports |
The remote end steps are:
- For each cache of « event-level report cache, aggregatable attribution report cache »:
  - For each report of cache:
    - Remove report from cache.
    - Attempt to deliver report.
- Return success with data null.
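For example, a test harness could invoke these extension commands over the WebDriver HTTP protocol roughly as follows; the endpoint and session id are placeholders for whatever the local WebDriver server exposes.

```ts
// Non-normative test-harness sketch for the two extension commands above.
const WEBDRIVER_URL = "http://localhost:4444";
const SESSION_ID = "<session id>";

async function araCommand(path: string, body: unknown): Promise<Response> {
  return fetch(`${WEBDRIVER_URL}/session/${SESSION_ID}/ara/${path}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
}

async function runAttributionTest(): Promise<void> {
  // Disable noise and report delays for the duration of the test.
  await araCommand("localtestingmode", { enabled: true });

  // ... exercise source and trigger registration in the page under test ...

  // Flush any pending reports so the test can assert on what was sent.
  await araCommand("sendpendingreports", {});
}
```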
16. Security considerations
16.1. Same-Origin Policy
This section is non-normative.
Writes to the attribution source cache, event-level report cache, and aggregatable attribution report cache are separated by the reporting origin, and reports sent to a given origin are generated only from data written by that origin via HTTP response headers.
However, the attribution rate-limit cache is not fully partitioned by origin. Reads from that cache involve grouping together data submitted by multiple origins. This is the case for the following limits:
These limits are explicit relaxations of the Same-Origin Policy, in that they allow different origins to influence the API’s behavior. In particular, one risk that is introduced with these shared limits is denial of service attacks, where a group of origins could collude to intentionally hit a rate-limit, causing subsequent origins to be unable to access the API.
This trades off security for privacy, in that the limits are there to reduce the efficacy of many origins colluding together to violate privacy. API deployments should monitor for abuse using these vectors to evaluate the trade-off.
The generation of verbose debug reports involves reads of the attribution source cache, event-level report cache, aggregatable attribution report cache, and attribution rate-limit cache. The verbose debug data sent to a given origin may therefore encode non-same-origin data produced by grouping together data submitted by multiple origins, e.g. failures due to rate-limits that are not fully compliant with the Same-Origin Policy. This is of greater concern for source registrations, as the source origin could intentionally hit a rate-limit to identify sensitive user data. Such verbose debug data cannot be reported explicitly and may instead be reported as a source-success verbose debug report. This is a tradeoff between security and utility, and mitigates the security concern with respect to the Same-Origin Policy. The risk is of less concern for trigger registrations, as attribution sources have to be registered to start with, which requires browsing activity on multiple sites.
The aggregatable debug reports may also encode non-Same-Origin data but in encrypted form. The security risk is further mitigated by the generation of null debug reports and the additive noise in the aggregation service.
16.2. Opting in to the API
This section is non-normative.
As a general principle, the API cannot be used purely at the HTTP layer without some level of opt-in from JavaScript or HTML. For HTML, this opt-in is in the form of the attributionSrc attribute, and for JavaScript, it is the various modifications to fetch, XMLHttpRequest, and the window open steps.
However, this principle is only strictly applied to registering attribution sources. For triggering attribution, we waive this requirement for the sake of compatibility with existing systems; see [Issue #347] for context.
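For example (non-normative), a page might opt in from JavaScript roughly as follows. The attributionsrc content attribute is the HTML opt-in described above, while the fetch option shown assumes the RequestInit extension (attributionReporting with eventSourceEligible / triggerEligible) that this specification adds to fetch.

```ts
// Non-normative sketch of opting into the API from JavaScript.

// HTML-style opt-in: setting the attributionsrc content attribute makes the
// image request eligible and triggers a background attributionsrc request.
const img = document.createElement("img");
img.setAttribute("attributionsrc", "https://reporter.example/register-source");
img.src = "https://reporter.example/pixel.gif";
document.body.append(img);

// fetch-style opt-in, assuming the attributionReporting RequestInit member.
// The cast is only needed where DOM type definitions predate this API.
await fetch("https://reporter.example/register", {
  attributionReporting: {
    eventSourceEligible: true,
    triggerEligible: false,
  },
} as RequestInit);
```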
17. Privacy considerations
17.1. Clearing site data
The attribution caches contain data about a user’s web activity. As such, the user agent MAY expose controls that allow the user to delete data from them.
17.2. Cross-site information disclosure
This section is non-normative.
The API is concerned with protecting arbitrary cross-site information from being passed from one site to another. For a given attribution source, any outcome associated with it is considered cross-site information. This includes:
- Whether the attribution source generates any attribution reports or not
- The contents of the associated attribution reports, if present
The information embedded in the API output is arbitrary but can include things like browsing history and other cross-site activity. The API aims to provide some protection for this information:
17.2.1. Event-level reports
Any given attribution source has a set of possible trigger states. The choice of trigger state may encode cross-site information. To protect against cross-site information disclosure, each attribution source is subject to a randomized response mechanism [RR], which will choose a state at random with a pick rate dependent on the source’s event-level epsilon, which has an upper bound of the user agent’s max settable event-level epsilon.
This introduces some level of plausible deniability into the resulting event-level reports (or lack thereof), as there is always a chance that the output was generated from a random process. We can reason about the protection this gives an individual attribution source through the lens of differential privacy [DP].
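As a non-normative illustration, k-ary randomized response can be sketched as follows. The pick-rate formula shown is the standard one for epsilon-differentially-private randomized response over numStates outputs and is an assumption here; the normative computation is given by the event-level algorithms of this specification.

```ts
// Non-normative sketch of k-ary randomized response over the set of
// possible trigger states.
function pickRate(numStates: number, epsilon: number): number {
  // With this rate, the mechanism satisfies epsilon-differential privacy
  // for a single source's choice among numStates trigger states.
  return numStates / (numStates - 1 + Math.exp(epsilon));
}

function randomizedResponse(trueState: number, numStates: number, epsilon: number): number {
  if (Math.random() < pickRate(numStates, epsilon)) {
    // Replace the true output with a uniformly random trigger state
    // (which may coincide with the true one).
    return Math.floor(Math.random() * numStates);
  }
  return trueState;
}
```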
Additionally, event-level reports limit the amount of relative cross-site information associated with
a particular attribution source. We model this using the notion of channel capacity [CHAN]. For every attribution source,
it is possible to model its output as a noisy channel. The number of input/output symbols is governed by its associated set of possible trigger states. With the randomized response mechanism,
this allows us to analyze the output as a q-ary symmetric channel [Q-SC], with q
equal to the size of the set of possible trigger states. This is normatively defined in the compute the channel capacity of a source algorithm.
Note that navigation attribution sources and event attribution sources may have different channel capacities, given that event attribution sources can be registered without user activation or top-level navigation. Maximum capacity for each type is governed by the vendor-defined max event-level channel capacity per source.
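For reference, the capacity (in bits) of a q-ary symmetric channel with total flip probability p, i.e. the probability that the reported state differs from the true one, can be computed as below. Relating p to a concrete pick rate is an illustrative assumption; the normative definition remains compute the channel capacity of a source.

```ts
// Capacity in bits of a q-ary symmetric channel with flip probability p.
// Under uniform randomized response with pick rate r over q states,
// p = r * (q - 1) / q (illustrative mapping).
function log2OrZero(x: number): number {
  return x === 0 ? 0 : Math.log2(x); // treat 0 * log2(0) as 0
}

function qarySymmetricChannelCapacity(q: number, p: number): number {
  if (q <= 1) return 0; // a single possible trigger state carries no information
  return Math.log2(q) + (1 - p) * log2OrZero(1 - p) + p * log2OrZero(p / (q - 1));
}
```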
17.2.2. Aggregatable attribution reports
Aggregatable attribution reports protect against cross-site information disclosure in two primary ways:
- For a given attribution trigger, whether it is attributed to a source is subject to one-way noise via generating null attribution reports with some probability. Note that because the noise does not drop true reports, this is only a partial mitigation: if an attribution source never generates an aggregatable attribution report, an adversary can learn with 100% certainty that the attribution source was never matched with an attribution trigger.
- Cross-site information embedded in an aggregatable attribution report's contributions is encrypted with a public key, ensuring that individual contributions cannot be accessed until an aggregation service subjects them to aggregation and an additive noise process.
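A non-normative sketch of the one-way noise in the first item above: real reports are always delivered, and an unattributed trigger additionally produces a null report with some probability. The rate and the report shape shown here are illustrative assumptions.

```ts
// Null reports contain no meaningful contributions; padding and encryption
// make them indistinguishable from real aggregatable reports on the wire.
interface AggregatablePayload {
  contributions: Array<{ bucket: string; value: number }>;
}

function maybeCreateNullReport(nullReportRate: number): AggregatablePayload | null {
  if (Math.random() < nullReportRate) {
    return { contributions: [] }; // padded and encrypted downstream
  }
  return null;
}
```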
Add links to the aggregation service noise addition algorithm.
Model the channel capacity of a trigger registration.
17.2.3. Debug reports
Fine-grained cross-site information may be embedded in attribution reports sent with isDebugReport being true and certain types of verbose debug reports. These reports will only be allowed when third-party cookies are available, in which case the API caller had the capability to learn the underlying information.
17.3. Protecting against cross-site recognition
This section is non-normative.
A primary privacy goal of the API is to make linking identity between two different top-level sites difficult. This happens when either a request or a JavaScript environment has two user IDs from two different sites simultaneously. Both event-level reports and aggregatable attribution reports were designed to make this kind of recognition difficult:
17.3.1. Event-level reports
Event-level reports come bearing a fine-grained event id that can uniquely identify the source event, which may be joinable with a user’s identity. As such, for event-level reports to protect the cross-site recognition risk, they contain only a small amount (measured via channel capacity) of relative cross-site information from any of the attribution destinations. By limiting the amount of relative cross-site information embedded in event-level reports, we make it difficult for an identifier to be passed through this channel to enable cross-site recognition alone.
17.3.2. Aggregatable attribution reports
Aggregatable attribution reports only contain fine-grained cross-site information in encrypted form. In cleartext, they contain only coarse-grained information from the source site and effective attribution destination. This makes it difficult for an aggregatable attribution report to be associated with a user from either site.
The cross-site recognition risk of the data encrypted in "aggregation_service_payloads" is mitigated by the additive noise applied in the aggregation service.
17.4. Mitigating against repeated API use
17.5. Protecting against browsing history reconstruction
17.6. Reporting-delay concerns
This section is non-normative.
Sending reports some time after attribution occurs enables side-channel leakage in some situations.
17.6.1. Cross-network reporting-origin leakage
A report may be stored while the browser is connected to one network but sent while the browser is connected to a different network, potentially enabling cross-network leakage of the reporting origin.
Example: A user runs the browser with a particular browsing profile on their home network. An attribution report with a particular reporting origin is stored with a scheduled report time in the future. After the scheduled report time is reached, the user runs the browser with the same browsing profile on their employer’s network, at which point the browser sends the report to the reporting origin. Although the report itself may be sent over HTTPS, the reporting origin may be visible to the network administrator via DNS or the TLS client hello (which can be mitigated with ECH). Some reporting origins may be known to operate only or primarily on sensitive sites, so this could leak information about the user’s browsing activity to the user’s employer without their knowledge or consent.
Possible mitigations:
- Only send reports with a given reporting origin when the browser has already made a request to that origin on the same network. This prevents the network administrator from gaining additional information from the Attribution Reporting API. However, it increases report loss and report delays, which reduces the utility of the API for the reporting origin. It might also increase the effectiveness of timing attacks, as the origin may be able to better link the report with the user’s request that allowed the report to be released.
- Send reports immediately: This reduces the likelihood of a report being stored and sent on different networks. However, it increases the likelihood that the reporting origin can correlate the original request made to the reporting origin for attribution to the report, which weakens the attribution-side privacy controls of the API. In particular, this destroys the differential privacy framework we have for event-level reports. It would also make the trigger priority functionality impossible, as there would be no way to replace a lower-priority report that was already sent.
- Use a trusted proxy server to send reports: This effectively moves the reporting origin into the report body, so only the proxy server would be visible to the network administrator.
- Require DNS over HTTPS: This effectively hides the reporting origin from the network administrator, but is likely impractical to enforce and is itself perhaps circumventable by the network administrator.
17.6.2. User-presence tracking
The browser only tries to send reports while it is running and has internet connectivity (even without an explicit connectivity check, the report will naturally fail to be sent if there is none), so receiving or not receiving an event-level report at the expected time leaks information about the user’s presence. Additionally, because the report request inherently includes an IP address, this could reveal the user’s IP-derived whereabouts to the reporting origin, including at-home vs. at-work status or approximate real-world geolocation, or reveal patterns in the user’s browsing activity.
Possible mitigations:
- Send reports immediately: This effectively eliminates the presence tracking, as the original request made to the reporting origin is in close temporal proximity to the report request. However, it increases the likelihood that the reporting origin can correlate the two requests, which weakens the attribution-side privacy controls of the API. It would also make the trigger priority functionality impossible, as there would be no way to replace a lower-priority report that was already sent.
- Send reports immediately to a trusted proxy server, which would itself send the report to the reporting origin with additional delay. This would effectively hide both the user’s IP address and their online-offline presence from the reporting origin. Compared to the previous mitigation, the proxy server could itself handle the trigger priority functionality, at the cost of increased complexity in the proxy.
17.7. Attribution scope
This section is non-normative.
It is possible for an adversary to register multiple navigation sources in response to a single navigation and to use these sources, each with a different attribution scopes value, to gain additional information about a user based on which attribution scope is chosen. To prevent this abuse, the number of unique attribution scope sets per reporting origin per navigation needs to be limited.
Proposed mitigation:
Limit to 1 unique attribution scope set per reporting origin per navigation. Extraneous sources will be dropped.