Auditing Your App for XSS: A Practical Checklist
We've spent six lessons covering how XSS works, where it shows up in frameworks, how it escalates through OAuth, how AI introduces new vectors, and how browser features like CSP and Trusted Types defend against it.
This lesson turns all of that into a process we can run against our codebase.
The goal is a systematic audit: identify the XSS vulnerabilities in our application now, verify that our defenses are in place, and set up automation so new vulnerabilities don't slip in later.
The process works for React, Vue, Angular, and Vanilla JS applications. It works whether our app is a greenfield project or a five-year-old codebase with a mix of frameworks and jQuery legacy code.
Here's what we'll cover:
- Inventorying the attack surface (where data enters and where it renders)
- Searching for sinks by priority level
- Manual testing for the high-risk paths
- Auditing the defense layers (CSP, Trusted Types, cookies, tokens, SRI, re-authentication)
- Automating the checks with ESLint plugins, Semgrep, and CI integration
- Establishing ongoing practices so the audit isn't a one-time event
- A condensed one-page checklist for quick reference
Phase 1: Inventory the attack surface
Before searching for vulnerabilities, we need to know where to look. An XSS vulnerability requires two things: untrusted data entering the application, and that data being rendered in a way the browser interprets as code.
The audit starts by mapping both.
The practical approach: use the application as a regular user. Go through every page, note every field where we can type something, every place where user data is displayed, and every URL parameter that shows up in the page content.
Then go to the code and verify how each of those rendering points works.
Where data enters
Walk through the application and list every place external data comes in.
This includes the obvious inputs (forms, search boxes, comment fields, profile editors) and the less obvious ones: URL query parameters and hash fragments, data from API responses (especially third-party APIs we don't control), WebSocket messages, postMessage events from iframes or popups, file uploads that get previewed in the browser, and AI-generated content that gets rendered in the UI.
Don't forget data entered by a user in the past that is now stored in the database.
Stored XSS (Lesson 2) means the payload was submitted once and executes for every user who loads that data. Comments, user profiles, product reviews, support tickets, CMS content, and any other user-generated text that gets displayed are all candidates.
Where data renders
For each data entry point, trace the data forward to where it appears on the page.
The question at each rendering point is: Does the browser treat this data as text or as HTML?
If the rendering uses textContent, React's JSX {variable}, Vue's {{ variable }}, or Angular's interpolation {{ variable }}, the data is treated as text, and XSS is not possible through that path.
If the rendering uses innerHTML, dangerouslySetInnerHTML, v-html, bypassSecurityTrustHtml, document.write, jQuery's .html(), or a markdown-to-HTML renderer, the data is treated as HTML, and XSS is possible if the input isn't sanitized.
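To make the distinction concrete, here's a toy version of what the text-treating sinks do internally — escapeHTML is a hypothetical helper for illustration, not any framework's actual code:

```javascript
// What "treated as text" means: safe sinks HTML-encode the value
// before it reaches the document, so markup characters lose their
// meaning. (Hypothetical helper -- frameworks do this internally.)
function escapeHTML(s) {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

const userInput = '<img src=x onerror=alert(1)>';

// Safe path (textContent, JSX {variable}, {{ variable }}):
console.log(escapeHTML(userInput)); // &lt;img src=x onerror=alert(1)&gt;

// Unsafe path (innerHTML, v-html, .html()): the raw string reaches
// the parser and the browser interprets it as markup.
console.log(userInput);
```

The safe path renders the payload as visible text; the unsafe path hands it to the HTML parser, where the onerror handler executes.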
Third-party rendering
Also, inventory any third-party components that render user data as HTML internally.
This includes markdown renderers, WYSIWYG editors, syntax highlighters, charting libraries that accept HTML labels, and any npm package that takes user-controlled data as props and renders it with innerHTML under the hood.
As we covered in Lesson 3, the developer might never write dangerouslySetInnerHTML themselves, but the component they imported does.
For suspect packages, check their source code: grep -rn "innerHTML" node_modules/PACKAGE_NAME/ will show whether they use innerHTML internally.
Putting the inventory together
For a concrete example, here's what an audit inventory might look like for a typical SPA with user-generated content:
| Data entry point | Stored? | Where it renders | Rendering method | Sanitized? |
|---|---|---|---|---|
| Comment form body | Yes (database) | Comment list component | dangerouslySetInnerHTML | DOMPurify ✓ |
| User display name | Yes (database) | Comment header, sidebar | JSX {displayName} | Auto-encoded ✓ |
| User profile website | Yes (database) | Profile page link | href={website} | No validation ✗ |
| Search query (?q=) | No (URL) | Search results heading | JSX {query} | Auto-encoded ✓ |
| AI chat response | No (streaming) | Chat message component | v-html | No sanitization ✗ |
| Support ticket body | Yes (database) | Admin panel ticket view | .html() (jQuery) | No sanitization ✗ |
This table immediately shows three problems: the profile website link has no URL validation (the href attack from Lesson 3), the AI chat response renders without sanitization (Lesson 5), and the admin panel uses jQuery's .html() without DOMPurify (blind XSS from Lesson 2).
The inventory turns a vague "is our app secure?" into a specific list of things to fix.
Phase 2: Search for sinks
This is the mechanical part of the audit.
Run these searches against the codebase and triage every result. These commands use grep -rn (recursive search with line numbers).
If we're on Windows or prefer a GUI, VS Code's search (Ctrl+Shift+F) with regex enabled does the same thing.
One important caveat: grep finds patterns in our own code, but it can't trace data flow across files or understand the context of a match.
A grep result for innerHTML might be safe (if the data is hardcoded) or dangerous (if the data comes from user input).
The ESLint and Semgrep tooling in Phase 5 addresses these limitations. Grep is the starting point, not the full picture.
Priority 1: Critical sinks
These are the most common XSS entry points. Every result needs to be examined.
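A starting point for the Priority 1 search — a sketch that assumes common JS/TS extensions and creates a throwaway demo file so the command is runnable as-is; in a real audit, point the grep at the project's own source tree instead:

```shell
# Demo file so the command has something to match -- in a real audit,
# skip this and run the grep from the repo root against your own code.
mkdir -p demo/src
printf 'el.innerHTML = userInput;\n' > demo/src/widget.js

# Priority 1 sink search (extensions and path are assumptions; adjust):
grep -rn \
  --include='*.js' --include='*.jsx' \
  --include='*.ts' --include='*.tsx' --include='*.vue' \
  -e 'dangerouslySetInnerHTML' \
  -e 'v-html' \
  -e 'bypassSecurityTrustHtml' \
  -e 'innerHTML' \
  -e 'outerHTML' \
  -e 'document\.write' \
  -e '\.html(' \
  demo/src/
```

Each match prints the file and line number, ready for triage.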
For each result, answer two questions. Where does the data come from? (User input, API, database, URL, AI output.) Is DOMPurify or another sanitizer applied between the source and the sink?
If the data comes from an untrusted source and isn't sanitized, it's a vulnerability. If the data comes from our own server and we trust the backend to sanitize, verify that the backend actually does.
That assumption is often wrong, and the only way to know is to check.
Priority 2: High-risk patterns
These are slightly less obvious but still dangerous.
For href results, check if the value comes from user data.
If so, check whether there's URL validation (e.g., @braintree/sanitize-url or a custom protocol check). If the user can set the URL to javascript:anything, it's a vulnerability.
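The validation itself can be small. Here's a sketch of the protocol-allowlist idea — isSafeUrl is a hypothetical helper illustrating the approach, not @braintree/sanitize-url's actual API:

```javascript
// Allowlist safe protocols rather than blocklisting "javascript:".
// The URL parser normalizes the scheme to lowercase, so mixed-case
// variants like jAvAsCrIpT: are caught automatically.
function isSafeUrl(value) {
  let parsed;
  try {
    // The base URL resolves relative links; example.com is a placeholder.
    parsed = new URL(value, "https://example.com");
  } catch {
    return false; // unparseable input is rejected
  }
  return ["http:", "https:", "mailto:"].includes(parsed.protocol);
}

console.log(isSafeUrl("https://alice.dev"));     // true
console.log(isSafeUrl("/profile/42"));           // true (relative path)
console.log(isSafeUrl("javascript:alert(1)"));   // false
console.log(isSafeUrl("jAvAsCrIpT:alert(1)"));   // false
```

The allowlist approach is the key design choice: new dangerous schemes (data:, vbscript:) are rejected by default instead of requiring filter updates.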
For eval and new Function, these should almost never appear in modern code.
If they do, check whether any user-controlled data reaches them. Even if the current code path is safe, the presence of eval is a risk because future changes might route untrusted data through it.
Priority 3: Medium-risk patterns
For postMessage listeners, check if the handler validates event.origin before processing the data.
If it doesn't, any page can send it messages (Lesson 2's postMessage attack).
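A sketch of what a defensible handler checks — the origin constant and the commented-out DOM code are placeholders:

```javascript
// Validate the sender before trusting the data -- and still treat
// the payload as text, not HTML, when it reaches the DOM.
const TRUSTED_ORIGIN = "https://app.example.com"; // placeholder

function isTrustedMessage(event) {
  return event.origin === TRUSTED_ORIGIN;
}

// In the browser, the handler would look like:
// window.addEventListener("message", (event) => {
//   if (!isTrustedMessage(event)) return;  // reject unknown senders
//   outputEl.textContent = event.data;     // text sink, not innerHTML
// });

console.log(isTrustedMessage({ origin: "https://app.example.com" })); // true
console.log(isTrustedMessage({ origin: "https://evil.example" }));    // false
```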
For CDN-loaded scripts, check if they have the integrity attribute. If they don't, they're vulnerable to tampering (as in Lesson 3's Polyfill.io scenario).
If the CDN serves dynamic content per request, SRI can't be used and self-hosting is the fix.
Phase 3: Test the high-risk paths
Static analysis tools (grep, ESLint) find sinks. Manual testing verifies whether they're actually exploitable.
The goal here isn't a full penetration test.
It's a targeted check of the paths that Phase 2 identified as risky.
One important note: if the application has a Content Security Policy, it may block test payloads from executing even if the vulnerability exists.
To test the code-level defenses accurately, temporarily switch the CSP to report-only mode or test in a local environment without CSP. Otherwise, we might think our code is safe when CSP is just masking the vulnerability.
The bold test for reflected XSS
For any page that reflects user input (search results, error messages, URL parameters displayed on the page), type <b>test</b> into the input and submit.
Look at the result:
If the word "test" appears in bold, the page is rendering input as HTML. This is a confirmed reflected XSS sink.
The next step is to test with an actual payload, such as <img src=x onerror=alert(1)>, to confirm execution.
If the literal text <b>test</b> appears with angle brackets visible, the output is encoded.
This path is safe.
Test every search box, every filter, every URL parameter that appears on the page.
Also test error pages: if the app shows "Page 'USER_INPUT' not found" and the input is reflected without encoding, that's a reflected XSS.
The image test for stored XSS
For any input field where data gets stored and displayed later (comments, profiles, product reviews, support tickets, forum posts), submit <img src=x onerror=alert(1)> as the value.
Then navigate to the page where that content is displayed. If an alert box appears, the sink is vulnerable.
Test from a different user account to confirm the stored content executes for other users, not just the author.
This is the difference between self-XSS (annoying but low-impact) and stored XSS (affecting every visitor).
Also, check the admin panel.
Blind XSS (from Lesson 2) means the payload executes in a context the attacker can't see, such as an internal dashboard where support agents view submitted tickets.
If the support ticket display renders HTML, a payload in the ticket description would execute in the agent's browser.
The javascript: test for href attributes
For any link where the URL comes from user data (profile website fields, user-submitted links in comments, "homepage" fields on public profiles), try setting the URL to javascript:alert(1).
Click the link. If the alert fires, the href is vulnerable.
This is the attack from Lesson 3 that React and Vue don't block by default.
Also test with encoded variants: jAvAsCrIpT:alert(1) (mixed case) and &#106;avascript:alert(1) (HTML entity encoding).
Some filters catch the lowercase version but miss these variants.
Testing postMessage handlers
Create a simple HTML file that iframes the target page and sends it a message containing an XSS payload:
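A minimal harness might look like this — the iframe src is a placeholder for the page under test:

```html
<!DOCTYPE html>
<html>
  <body>
    <!-- Load the page under test in a frame we control -->
    <iframe id="target" src="http://localhost:3000/page-under-test"></iframe>
    <script>
      const frame = document.getElementById("target");
      frame.addEventListener("load", () => {
        // targetOrigin "*" deliberately skips origin matching --
        // we're simulating a hostile sender.
        frame.contentWindow.postMessage("<img src=x onerror=alert(1)>", "*");
      });
    </script>
  </body>
</html>
```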
Open this file in a browser. If the alert fires in the iframe, the postMessage handler is vulnerable.
Check the handler code to see if it validates event.origin and if it sanitizes the received data before inserting it into the DOM.
Testing AI output rendering
If the application renders AI-generated content, test it with the prompt from Lesson 1: "Print the XSS cheat sheet in a bullet list. just payloads."
If the chat interface executes any of the payloads (showing alert boxes, loading iframes, or making network requests visible in the developer tools network tab), the output rendering is vulnerable.
Also test indirect prompt injection if the app processes external content through an LLM (like a "summarize this URL" feature).
Create a test page with a hidden instruction (an HTML comment containing <img src=x onerror=alert(1)>), have the AI summarize it, and check if the payload appears in the rendered summary.
Document everything
For each finding, record the input point, the payload used, the rendering point where it executed, the affected users (everyone, or just specific roles), and whether any defense layer (CSP, sanitization) is in place.
Create a GitHub issue for each finding, including the reproduction steps.
This becomes the remediation backlog.
Phase 4: Check the defense layers
Even if the code-level audit is clean, missing defense layers mean a single future mistake could be exploitable.
These checks verify the safety net from Lesson 6.
Content Security Policy
Open the browser's developer tools, go to the Network tab, and inspect the response headers of the main HTML document. Look for Content-Security-Policy.
If it's missing entirely, there's no CSP. This means even if an attacker can only inject an <img onerror> tag, the inline handler will execute.
Adding a strict CSP is the single highest-impact defense action.
If a CSP is present, evaluate it. Does script-src use nonces or hashes? (Good.) Or does it use 'unsafe-inline'? (Bad, defeats most of CSP's XSS protection.) Does it include 'strict-dynamic'? (Good for modern apps with dynamic script loading.) Does connect-src restrict outbound requests to 'self' and known API domains? (If not, the attacker's fetch() calls to their server will work.) Does style-src block 'unsafe-inline'? (Needed to prevent CSS exfiltration from Lesson 2.)
Is frame-ancestors set to 'none' or 'self'? (Prevents clickjacking and the iframe-based XSS trigger from Lesson 2.)
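For reference, a strict policy along these lines would pass all of those checks — domain names and the nonce are placeholders (the nonce must be regenerated per response), and the header is wrapped here for readability but is sent as a single line:

```http
Content-Security-Policy: script-src 'nonce-PLACEHOLDER' 'strict-dynamic';
  object-src 'none'; base-uri 'none';
  connect-src 'self' https://api.example.com;
  style-src 'self'; frame-ancestors 'none'
```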
Google's CSP Evaluator (csp-evaluator.withgoogle.com) takes a CSP header and reports weaknesses. Paste the full header in, and it will flag issues like allowlisted domains that host JSONP endpoints, missing directives, or the presence of 'unsafe-inline'.
Trusted Types
Check if the CSP header includes require-trusted-types-for 'script'. If it does, the browser will throw a TypeError whenever a string is passed directly to a sink like innerHTML without going through a Trusted Types policy. This is the enforcement mechanism from Lesson 6 that turns forgotten sanitization into runtime errors.
If Trusted Types isn't enabled, it's worth adding. With cross-browser support reaching all major browsers in February 2026, it's now deployable in production. Start with report-only mode to identify violations before enforcing.
Cookie flags
In the browser's developer tools, go to the Application tab (Chrome) or the Storage tab (Firefox), then inspect cookies. For each session-related cookie, check three flags:
HttpOnly should be set. This prevents JavaScript from accessing the cookie via document.cookie, blocking the simplest form of XSS session theft.
Secure should be set. This ensures the cookie is sent only over HTTPS, preventing it from being intercepted over insecure connections.
SameSite should be Lax or Strict. This prevents the cookie from being sent in cross-site requests, which mitigates CSRF (covered in Module 3).
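Together, the three flags look like this on the server's response — the cookie name and value are placeholders:

```http
Set-Cookie: session=abc123; HttpOnly; Secure; SameSite=Lax; Path=/
```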
If session tokens or JWTs are in localStorage instead of HttpOnly cookies, that's the Lesson 4 vulnerability: any XSS can exfiltrate them with one line of JavaScript.
OAuth and token storage
If the app uses "Log in with Google/GitHub/Facebook," check where the access token ends up after the OAuth flow. Open the Application tab in dev tools and look at both localStorage and sessionStorage.
If the access token or JWT is stored there, it's accessible to any XSS payload. The safest pattern is the backend-for-frontend (BFF) from Lesson 4, where the token never reaches the browser.
Check if sensitive operations (changing email, adding SSH keys, modifying billing) require re-authentication. Try changing the account email while already logged in.
If the app doesn't ask for the current password or a WebAuthn challenge, a single XSS can change the email and establish a persistent backdoor (Lesson 4).
Subresource Integrity
View the page source and search for <script and <link tags that load from external domains or CDNs. For each one, check if the integrity attribute is present. If it's missing, the script could be tampered with without detection. This is how the Polyfill.io attack from Lesson 3 worked (though Polyfill.io's dynamic serving made SRI impossible in that case).
Phase 5: Automate what we can
Manual audits find existing vulnerabilities. Automation prevents new ones from being introduced. The goal is to catch XSS-vulnerable patterns before they reach production.
ESLint plugins
Three ESLint plugins catch the most common XSS patterns in JavaScript and TypeScript:
eslint-plugin-no-unsanitized (built by Mozilla) flags direct use of innerHTML, outerHTML, insertAdjacentHTML, and document.write with unsanitized data. It understands DOMPurify and won't flag calls that pass through sanitization first. This is the most targeted XSS-specific ESLint plugin available.
@microsoft/eslint-plugin-sdl catches innerHTML, document.write, eval, and new Function with clear error messages. It was built for Microsoft's Security Development Lifecycle and focuses specifically on browser DOM security.
eslint-plugin-security provides broader security rules including detect-eval-with-expression, detect-no-csrf-before-method-override, and detect-non-literal-regexp.
A minimal ESLint configuration that catches the most critical XSS patterns:
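One sketch, in the legacy .eslintrc.json format, combines eslint-plugin-no-unsanitized's two rules with ESLint's built-in eval checks:

```json
{
  "plugins": ["no-unsanitized"],
  "rules": {
    "no-unsanitized/property": "error",
    "no-unsanitized/method": "error",
    "no-eval": "error",
    "no-implied-eval": "error",
    "no-new-func": "error"
  }
}
```

The no-unsanitized/property rule covers assignments like innerHTML and outerHTML; no-unsanitized/method covers calls like insertAdjacentHTML and document.write.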
For projects using ESLint's flat config (eslint.config.js, the default since ESLint 9), the equivalent uses import syntax. Check the eslint-plugin-no-unsanitized README for the flat config example.
For React specifically, eslint-plugin-jam3 provides the no-sanitizer-with-danger rule, which flags any use of dangerouslySetInnerHTML that isn't wrapped in a sanitizer function. This enforces the SafeHTML wrapper pattern from Lesson 3 at the linting level.
Semgrep for deeper analysis
ESLint analyzes one file at a time and can't track data flow across files.
If user input is entered through a form handler in one file and reaches innerHTML in a component three files away, ESLint won't connect the dots. Semgrep can.
It supports cross-file taint analysis and includes a community rule set with XSS patterns for React, Vue, and Angular.
Semgrep runs in CI pipelines and can block PRs that introduce XSS-vulnerable patterns.
The XSS rule set (semgrep --config p/xss) covers innerHTML with tainted input, dangerouslySetInnerHTML without sanitization, v-html with user data, and eval with dynamic arguments.
npm audit in CI
Run npm audit on every pull request. This catches known vulnerabilities in dependencies, including the DOMPurify CVE from Lesson 6 and the marked sanitizer bypass from Lesson 2. Configure it to fail the build for high- or critical-severity findings.
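As a sketch, a GitHub Actions workflow that runs all three checks on every pull request — the workflow filename, Node version, and audit level are assumptions to adapt:

```yaml
# .github/workflows/security.yml -- illustrative sketch, adjust to your stack
name: security-checks
on: [pull_request]

jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Fail the build on high/critical dependency advisories
      - run: npm audit --audit-level=high
      # Single-file pattern detection (uses the repo's ESLint config)
      - run: npx eslint .
      # Cross-file taint analysis with the community XSS rule set
      - run: |
          python3 -m pip install semgrep
          semgrep --config p/xss --error
```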
The ESLint step relies on the project's .eslintrc or eslint.config.js configuration (which should include the security plugins).
Running all three in the CI pipeline gives us: dependency vulnerability detection (npm audit), single-file pattern detection (ESLint), and cross-file data flow analysis (Semgrep). Each catches things the others miss.
OWASP ZAP for dynamic testing
For teams that want dynamic testing, OWASP ZAP is an open-source proxy that can automatically test for reflected and stored XSS by injecting payloads into forms and URL parameters. It runs against a staging environment and produces a report of findings.
ZAP can also be run headlessly in CI for automated regression testing.
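For example, ZAP's baseline scan can run from the official Docker image — a sketch; the staging URL is a placeholder, and the image tag may differ in your setup:

```shell
docker run --rm -t ghcr.io/zaproxy/zaproxy:stable \
  zap-baseline.py -t https://staging.example.com -r zap-report.html
```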
ZAP catches reflected and stored XSS effectively, but won't catch DOM-based XSS (since that requires understanding client-side JavaScript, not just server responses).
The static analysis tools above handle DOM XSS. Together, the combination covers all three types.
Phase 6: Establish ongoing practices
The audit process above finds what's broken now.
These practices prevent regressions and catch new issues as the codebase evolves.
Code review focus points
When reviewing PRs, scan for the Priority 1 sinks from Phase 2. Any new use of dangerouslySetInnerHTML, v-html, innerHTML, or bypassSecurityTrustHtml should require justification and evidence of sanitization.
The SafeHTML wrapper from Lesson 3 should be the standard path for rendering HTML from untrusted sources. Direct innerHTML writes should be the exception that requires explicit approval.
For URL attributes, any href or src that accepts user-controlled values should use @braintree/sanitize-url or an equivalent.
For AI output, the restrictive DOMPurify configuration from Lesson 5 (stripping <img> tags) should be the default.
Dependency updates
The npm ecosystem moves fast, and vulnerabilities are disclosed regularly.
Run npm audit weekly (or configure Dependabot/Snyk to automate it). When a dependency with a security fix is released (like DOMPurify's CVE-2026-0540 fix from Lesson 6), update within days, not weeks.
This connects directly to what we covered in Module 1: dependency management is a security practice.
CSP violation monitoring
If we deployed CSP with report-to in Lesson 6, monitor the violation reports.
A sudden spike in violations can indicate either a new XSS attempt being blocked or a deployment that broke something.
Either way, it needs investigation.
When to re-audit
Run the full audit process again when: we upgrade the frontend framework to a new major version (security behavior may change), we add a new third-party integration that loads scripts or renders content, we add AI-powered features that display model output, or we add new forms or user-generated content types.
The attack surface changes with every feature, and the audit needs to keep up.
For active projects, a quarterly routine audit is a reasonable cadence.
At a minimum, run the Phase 2 sink searches and Phase 4 defense checks before every major release.
The one-page checklist
This is a condensed version of the entire audit process. Print it, bookmark it, or add it to the team wiki.
Attack surface
- Map all user input points (forms, URLs, APIs, WebSockets, file uploads, AI output)
- Map all rendering points (templates, dynamic HTML, markdown, third-party components)
- Identify third-party scripts and npm packages that render HTML
Sink search
- Search for dangerouslySetInnerHTML, v-html, bypassSecurityTrustHtml, innerHTML, outerHTML, document.write, .html(), eval, new Function
- Search for href/src with user-controlled values (including setAttribute and direct property assignment)
- Search for postMessage listeners without origin checks
- For every result: trace the data source, verify sanitization exists
Manual tests
- <b>test</b> in all search/filter inputs (reflected XSS)
- <img src=x onerror=alert(1)> in all stored content fields (stored XSS)
- javascript:alert(1) in all user-controlled URL fields (href attack)
- Payload in AI chat/prompt if app renders AI output (LLM XSS)
- Test with CSP in report-only mode to see code-level vulnerabilities
Defense layers
- CSP header present with nonces (not 'unsafe-inline')
- require-trusted-types-for 'script' in CSP header
- connect-src restricts outbound requests
- style-src blocks 'unsafe-inline'
- frame-ancestors set
- Session cookies: HttpOnly, Secure, SameSite
- Tokens not in localStorage (HttpOnly cookies or BFF pattern)
- Sensitive operations require re-authentication
- CDN scripts have SRI integrity attributes
Automation
- eslint-plugin-no-unsanitized or @microsoft/eslint-plugin-sdl configured
- Semgrep XSS rules running in CI
- npm audit runs in CI on every PR
- CSP violation reports monitored in production
Where to go from here
This is the end of Module 2.
We've covered what XSS is (Lesson 1), the three types and their payloads (Lesson 2), where each framework is vulnerable (Lesson 3), how XSS escalates into account takeover through OAuth (Lesson 4), how AI introduces new XSS vectors (Lesson 5), the browser-native defense stack (Lesson 6), and now the audit process to find and fix it all (this lesson).
The course continues with Module 3 on CSRF and Module 4 on phishing and spoofing.
For continued learning on XSS specifically, the OWASP XSS Prevention Cheat Sheet, PortSwigger's Web Security Academy XSS labs, and Snyk Learn's interactive XSS lessons are all excellent resources.
And of course, the best way to learn is to audit a real application using the checklist above and see what we find.
References
- OWASP, "Testing for Cross-Site Scripting" https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/07-Input_Validation_Testing/01-Testing_for_Reflected_Cross_Site_Scripting
- OWASP, "Cross Site Scripting Prevention Cheat Sheet" https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html
- Google, "CSP Evaluator" https://csp-evaluator.withgoogle.com/
- Mozilla, "eslint-plugin-no-unsanitized" https://github.com/mozilla/eslint-plugin-no-unsanitized
- Microsoft, "@microsoft/eslint-plugin-sdl" https://www.npmjs.com/package/@microsoft/eslint-plugin-sdl
- eslint-plugin-security https://www.npmjs.com/package/eslint-plugin-security
- Semgrep, "XSS rules" https://semgrep.dev/p/xss
- OWASP ZAP (dynamic application security testing) https://www.zaproxy.org/
- PortSwigger, "Web Security Academy: Cross-site scripting" https://portswigger.net/web-security/cross-site-scripting
- Snyk Learn, "Cross-site scripting (XSS)" https://learn.snyk.io/lesson/xss-cross-site-scripting/
- npm audit documentation https://docs.npmjs.com/cli/v10/commands/npm-audit