Episode 18 · 57 minutes

Security at Scale – With Liran Tal (Snyk)

Key Takeaways from our conversation with Liran Tal

Liran Tal

Director of Developer Advocacy at Snyk, GitHub Star, Open Source Security Champion

Señors @ Scale host Neciu Dan sits down with Liran Tal, Director of Developer Advocacy at Snyk and GitHub Star, to unpack NPM malware, maintainer compromise, MCP attacks, toxic flows, and why AI-generated code is statistically insecure without the right guardrails. Liran shares real incidents from the Node and open source ecosystem, how Snyk and tools like NPQ help developers build safer workflows, and why security at scale starts with developers, not firewalls.

🎧 New Señors @ Scale Episode

This week, I spoke with Liran Tal, Director of Developer Advocacy at Snyk, longtime open source maintainer, and GitHub Star, about what security at scale really looks like when you are shipping JavaScript and Node.js into production in 2025.

Liran has been around security since the BBS and IRC days, but his focus has always been on developers and real software delivery. In this episode, we unpack NPM malware, maintainer compromise, MCP attacks, AI-generated code, and the uncomfortable gap between “we ship fast” and “we actually understand our risk surface.”

Rather than generic OWASP checklists, this conversation stays close to incidents, patterns, and the habits that make or break teams.

⚙️ Main Takeaways

1. Security at scale is now a developer problem, not just an AppSec problem

For a long time, security meant network perimeters, firewalls, and a security team that dropped a PDF on you every six months. The last decade flipped that model. The main attack surface today is application code, third party packages, and the tools developers use to build and deploy.

Developer-first security means bringing scanning and fixes into the CLI, IDE, and pull request. It removes the backlog and replaces it with fast, contextual feedback where work actually happens.

The core idea: security only scales when developers participate by default, not as an escalation path.

2. NPM supply chain risk is about people, not just packages

Vulnerable dependencies get headlines, but the modern attacks Liran describes target maintainers, not code.

Weak passwords, reused credentials, unprotected accounts, old maintainers with forgotten access — once attackers compromise a maintainer, they can publish malicious versions, harvest tokens, and infect other repositories in a chain reaction.

Even worse are internal workflows that increase the blast radius. A common example:
“Upgrade everything to latest” in CI.
It sounds efficient, but CI environments expose environment variables, private modules, and proprietary source to every install script. If a malicious package slips through, your CI pipeline becomes an exfiltration tool.

This is the real threat. Not just vulnerable code, but untrusted people, untrusted processes, and untrusted defaults.
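To make that blast radius concrete, here is a minimal, hypothetical sketch of what any package's install hook can see in a CI job. The variable names are illustrative examples, not taken from a real incident:

```javascript
// Illustrative only: any package's install hook runs with the full CI
// environment. This helper shows which variable names would be exposed.
// The names below are hypothetical, not from a real attack.
function collectLeakableEnv(env) {
  const sensitive = /TOKEN|SECRET|KEY|PASSWORD|CREDENTIAL/i;
  return Object.keys(env).filter((name) => sensitive.test(name));
}

const exposed = collectLeakableEnv({
  NPM_TOKEN: "npm_abc123",      // registry publish rights
  AWS_SECRET_ACCESS_KEY: "...", // cloud credentials
  CI: "true",
  PATH: "/usr/bin",
});
console.log(exposed); // ["NPM_TOKEN", "AWS_SECRET_ACCESS_KEY"]
```

Running installs in CI with npm's `--ignore-scripts` flag, and scoping registry tokens to read-only, shrinks exactly this surface.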

3. Healthy dependency habits are real security controls

Liran built NPQ, a small CLI that intercepts npm install and performs health checks before the package lands on your system.

Checks include:

  • recent publish date
  • known vulnerabilities
  • suspicious release patterns
  • activity and maintenance signals

It’s not meant to be perfect. It’s meant to stop developers from installing packages published seven hours ago or typosquatted variants that look legitimate.
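As a rough illustration of those checks (not NPQ's actual implementation), the heuristics amount to a few cheap tests over registry metadata. The field names here are assumptions for the sketch:

```javascript
// Hypothetical sketch of NPQ-style health checks. Field names are invented
// for illustration and do not match NPQ's real code or the registry API.
function auditPackage(meta, now = Date.now()) {
  const warnings = [];
  const ageMs = now - new Date(meta.publishedAt).getTime();
  if (ageMs < 24 * 60 * 60 * 1000) {
    warnings.push("published less than 24 hours ago");
  }
  if (meta.knownVulnerabilities > 0) {
    warnings.push("has known vulnerabilities");
  }
  if (meta.weeklyDownloads < 100) {
    warnings.push("very low adoption, possible typosquat");
  }
  return warnings; // empty array means no red flags
}
```

None of these checks is decisive on its own; together they catch the freshly published or barely downloaded packages a human would never think to question mid-install.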

There’s a broader lesson here:
small, lightweight guardrails outperform heavyweight audits.
Pin versions. Upgrade intentionally. Add friction in the right places.
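A pinning guardrail can be as small as a CI script that fails when a dependency range allows silent upgrades. This is a sketch against the standard package.json shape, not a published tool:

```javascript
// Minimal sketch: flag package.json dependencies that are not pinned to an
// exact version, so upgrades happen intentionally rather than silently.
function findUnpinned(dependencies) {
  const loose = /^[\^~]|[xX*]|[<>]|\|\|/; // carets, tildes, wildcards, ranges
  return Object.entries(dependencies)
    .filter(([, range]) => loose.test(range))
    .map(([name]) => name);
}

const deps = {
  express: "4.19.2",  // pinned: installs exactly this version
  lodash: "^4.17.21", // caret: allows any 4.x minor upgrade
  react: "~18.2.0",   // tilde: allows any 18.2.x patch upgrade
};
console.log(findUnpinned(deps)); // ["lodash", "react"]
```

npm also supports `save-exact=true` in .npmrc, which makes new installs pinned by default instead of caret-prefixed.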

4. AI, MCP, and prompt injection create new classes of security problems

This episode goes deep into MCP servers and AI agent security.

MCPs introduce multiple layers of risk:

  • malicious MCP servers poisoning tool behavior
  • legitimate MCP servers containing classic security bugs
  • prompt injection that alters agent logic or extracts protected data
  • toxic flows where data from one repo triggers actions in another

AI browsers amplify this. Invisible Unicode characters can carry instructions that humans never see (“Glassworm”). Shadowed tools can override intended commands. Prompt injection is not theoretical — it is inherent.
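One cheap defensive check suggested by the invisible-character trick: scan any text an agent will consume for zero-width and direction-control code points. A sketch, covering common ranges rather than every abusable character:

```javascript
// Flag invisible or direction-control Unicode that a human reviewer cannot
// see but an LLM will happily read as instructions. The ranges below cover
// common zero-width and bidi controls, not every possible hidden character.
function hasHiddenChars(text) {
  return /[\u200B-\u200F\u202A-\u202E\u2060-\u2064\uFEFF]/.test(text);
}

console.log(hasHiddenChars("Please review this PR"));        // false
console.log(hasHiddenChars("Please\u200B ignore previous")); // true
```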

Liran’s message is clear:
traditional AppSec patterns do not cover how agents and MCPs behave.
You need isolation, scanning, version pinning, and layered validation.

5. AI generated code is statistically insecure — you need a feedback loop

Models train on real code. Real code contains vulnerabilities. So AI-generated code will inevitably drift into insecure patterns like:

  • unsafe path concatenation
  • injection vulnerabilities
  • unsafe default configuration
  • exposure of sensitive data

Snyk has tested this across multiple models using prompts demanding secure output. The results still vary.

The fix is tying scanning directly into the agent loop:
agent writes code → Snyk scans → feedback returned → agent refactors → repeat until secure.

This shifts AI development from “trust the model” to “verify by construction.”
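The loop above can be sketched as a small orchestration function. `agent` and `scanner` here are hypothetical interfaces standing in for the model and the scan step, not Snyk's actual API:

```javascript
// Sketch of a scan-and-refactor loop. `agent` and `scanner` are hypothetical
// stand-ins: agent.write/refactor produce code, scanner.scan returns findings.
async function generateSecureCode(prompt, agent, scanner, maxRounds = 3) {
  let code = await agent.write(prompt);
  for (let round = 0; round < maxRounds; round++) {
    const findings = await scanner.scan(code);
    if (findings.length === 0) {
      return { code, secure: true, rounds: round };
    }
    // Feed the findings back so the agent rewrites the insecure spots.
    code = await agent.refactor(code, findings);
  }
  return { code, secure: false, rounds: maxRounds }; // needs human review
}
```

The key property is that "secure" is decided by the scanner, not by the prompt, and the loop has a bounded number of rounds before a human gets pulled in.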

6. Real incidents show how tiny details uncover massive backdoors

The stories Liran shares ground everything in reality:

  • EventStream showed how precise attackers can be when they understand the dependency graph.
  • XZ Utils revealed a years-long, social-engineering-driven supply chain attack that nearly compromised SSH on Linux. It was caught only because one engineer noticed a few hundred milliseconds of unexpected latency in SSH connections.

Security failures rarely arrive with alarms. They appear as small anomalies that curious engineers refuse to ignore.

Liran also shares more everyday issues: plaintext passwords discovered during a migration, XSS caused by lax UX permissions, and weak governance that let anonymous users write unsafe HTML.

The through-line:
security is a human practice, not a theoretical discipline.

🧠 What I Learned

  • Developer workflows shape your security posture more than any static checklist.
  • Supply chain risk is deeply tied to identity, trust, and maintainer security.
  • Guardrails like NPQ prevent entire classes of mistakes.
  • AI coding and MCPs create new threat surfaces that don’t map cleanly to OWASP.
  • Prompt-level instructions cannot ensure secure output — automated scanning can.
  • UX decisions can become attack vectors without anyone noticing.

💬 Favorite Quotes

“Security at scale is a complex challenge.”
“AI generated code is not always secure.”
“Security and UX must work together.”
“You probably don’t want to install something that was published seven hours ago.”
“If your CLI has command injection and the agent calls it, that’s a breach waiting to happen.”

🎯 Also in this Episode

  • How NPM became the highest-value target in modern software
  • Why local MCP servers are riskier than remote ones
  • Toxic flows and the GitHub and Cursor incidents
  • The mechanics of SQL injection and command injection inside MCP servers
  • Why Liran will not install browser extensions or AI browsers
  • What real maintainer compromise looks like in practice

Resources

More from Liran:
Node Security Books by Liran Tal
GitHub
NPQ Package Checker
Snyk Blog
LinkedIn

🎧 Listen Now

🎧 Spotify
📺 YouTube
🍏 Apple Podcasts

Episode Length: 57 minutes on modern security, supply chain risk, developer workflows, and how to ship safely with AI and open source.

Happy shipping,
Dan

🏆 SOLD OUT IN SINGAPORE · ATHENS · LONDON

From Lizard to Wizard

4-hour remote system design intensive.
Chat apps, microfrontends, BFF, SDUI, event-driven, observability.

€299 4-HOUR INTENSIVE
Save your seat →

Spots are vanishing. Don't be the one who waited.

💡 More Recent Takeaways

React Native at Scale with Kadi Kraman
Episode 35

Señors @ Scale host Neciu Dan sits down with Kadi Kraman, software developer at Expo working on the tools that make React Native development as smooth as possible. Kadi's path started with C++ in a university maths degree, took her through Angular 1, scientific programming for pharmaceutical and defense companies, five and a half years at Formidable, and finally to Expo itself. From the limitations of early React Native to development builds, EAS workflows, fingerprint-based repacks, and the right way to think about over-the-air updates, this is the React Native conversation most web developers never get.

Browser ML at Scale with Nico Martin
Episode 34

Señors @ Scale host Neciu Dan sits down with Nico Martin — open source ML engineer at Hugging Face working on Transformers.js, and Google Developer Expert in AI and web technology — to go deep on running machine learning models directly in the browser. Nico breaks down architectures vs. weights, quantization, tokenizers, ONNX, WebGPU, and why on-device AI is the right answer for a huge class of problems. He also shares the road from ski instructor and self-taught web developer to landing what he calls his dream job at Hugging Face.

Frontend Foundations at Scale with Giorgio Polvara
Episode 33

Señors @ Scale host Neciu Dan sits down with Giorgio Polvara, Staff Engineer at Perk (formerly TravelPerk), who joined when the company was 15 people in two flats with a hole knocked through the wall and helped build the frontend foundations that still hold up at unicorn scale. Giorgio covers the multi-year migration from a monolithic frontend to vertical micro-frontends, why their first attempt with single-spa didn't work, how they pulled off a full rebrand behind feature flags without leaking, and the staff engineer mindset of treating every feature as a system improvement.

Module Federation at Scale with Zack Chapple & Nestor
Episode 32

Señors @ Scale host Neciu Dan sits down with Zack Chapple, CEO and co-founder of Zephyr Cloud, and Nestor, the platform engineer building it, to go deep on module federation, microfrontends, and what it actually takes to go from code to global scale in seconds. They unpack why module federation is Docker for the frontend, how Zephyr composes applications at the edge in 80 milliseconds, and why the real unlock for enterprise teams isn't deployment — it's composition.

📻 Never Miss New Takeaways

Get notified when new episodes drop. Join our community of senior developers learning from real scaling stories.


Want More Insights Like This?

Subscribe to Señors @ Scale and never miss conversations with senior engineers sharing their scaling stories.