Episode 1 · 57 min

Performance at Scale - with Danilo Velasquez

Key Takeaways from our conversation with Danilo Velasquez

Danilo Velasquez

Staff Engineer at Adevinta

In this kickoff episode of Señors @ Scale, host Neciu Dan sits down with Danilo Velasquez — Staff Engineer at Adevinta and longtime frontend performance obsessive.


💡 Tracking performance manually doesn’t scale. Danilo initially spent 4–5 hours each week gathering Lighthouse scores for dozens of pages. Automating that process with the Lighthouse Node API cut the task to 15 minutes and opened the door to real-time Prometheus integration. The result? A living dashboard that made performance a continuous conversation, not a one-time audit.
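
To make the pipeline concrete, here is a minimal sketch of the reporting step: turning per-page Lighthouse scores into Prometheus exposition-format lines a scraper can collect. The real setup would obtain `results` from the Lighthouse Node API; the result shape and metric name below are hypothetical simplifications.

```javascript
// Hypothetical aggregation step: per-page Lighthouse scores in,
// Prometheus exposition-format lines out.
function toPrometheusLines(results) {
  return results
    .map(({ url, performance }) =>
      `lighthouse_performance_score{page="${url}"} ${performance}`)
    .join('\n');
}

// Example input, as if produced by programmatic Lighthouse runs:
const sample = [
  { url: '/home', performance: 0.92 },
  { url: '/search', performance: 0.78 },
];
console.log(toPrometheusLines(sample));
```

Expose those lines over an HTTP endpoint and Prometheus can scrape them on a schedule, which is what turns a weekly chore into a live dashboard.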

💡 Page ownership should align with team ownership. Danilo saw firsthand how abstract “domain boundaries” broke down when multiple teams had partial responsibility for the same pages. His solution: make each team responsible for its own application and page set. When ownership changes, move the code. It’s painful, but clear.

💡 Micro frontends led to micro-messes. In one system, they had 50+ frontend services powering a single feature. It looked modular on paper, but was impossible to maintain. Danilo now recommends grouping apps by team instead of slicing purely by domain—favoring practical boundaries over architectural purity.

💡 Old dependencies are a form of technical debt. Danilo advocates for building as close to the platform as possible—favoring native web components over frameworks when feasible. A five-year-old web component still runs today, but a five-year-old React app might be a minefield of outdated packages.

💡 Platform engineering is a force multiplier. Rather than shipping features directly, Danilo found more leverage in improving CI pipelines, optimizing DX, and standardizing tooling. Helping 10 teams ship better beats building a single product faster. It’s the clearest example of “scaling through others” in engineering.

💡 End-to-end tests are flaky for a reason—and it’s not just the tooling. Danilo points out that many e2e failures reflect real user bugs: delayed renders, missing elements, or race conditions. If your tests fail often, it might be telling you your frontend is too complex, not that Cypress is broken.
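
The usual mitigation for delayed renders is polling for a condition instead of sleeping a fixed amount. Cypress and Playwright build this in; the helper below is only a sketch of the idea, with hypothetical defaults.

```javascript
// Poll a condition until it passes or a deadline expires, instead of
// sleeping a fixed time and hoping the element has rendered by then.
async function waitFor(condition, { timeout = 2000, interval = 50 } = {}) {
  const deadline = Date.now() + timeout;
  while (Date.now() < deadline) {
    if (await condition()) return true;
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
  throw new Error(`condition not met within ${timeout}ms`);
}
```

The flip side of Danilo’s point: if you need ever-longer timeouts to get green builds, the test is measuring a real latency your users feel too.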

💡 Performance regressions should trigger incidents. In Danilo’s org, if a Web Vitals metric turns red, it’s treated like a production outage. That means war rooms, halted deployments, and dedicated fixes. It's a cultural shift: performance isn’t a “nice-to-have”—it’s infrastructure.
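
A gate like that needs a definition of “red.” The thresholds below are Google’s published Core Web Vitals boundaries; the function itself is just an illustrative classifier, not Danilo’s actual tooling.

```javascript
// Google's published Core Web Vitals thresholds: [good upper bound, poor lower bound].
const THRESHOLDS = {
  LCP: [2500, 4000], // ms
  INP: [200, 500],   // ms
  CLS: [0.1, 0.25],  // unitless layout-shift score
};

// Classify a metric value the way a regression gate or alert might.
function rate(metric, value) {
  const [good, poor] = THRESHOLDS[metric];
  if (value <= good) return 'good';
  return value <= poor ? 'needs-improvement' : 'poor';
}
```

Wire `rate(...) === 'poor'` into your alerting and “metric turned red” becomes a pageable event like any other outage.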

💡 Core Web Vitals data needs business context. Most teams track Web Vitals separately from conversion metrics—but that’s a missed opportunity. Danilo recommends wiring performance data into tools like Google Analytics to see if faster LCP actually improves sales, signups, or retention. Without that, you’re optimizing in a vacuum.
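
One way that wiring can look, sketched under assumptions: in a real page you would pass this payload to `gtag('event', ...)` from the `web-vitals` package’s `onLCP`/`onCLS`/`onINP` callbacks. The metric shape follows that library’s `{ name, value, id, rating }` fields; the event naming is hypothetical.

```javascript
// Shape a web-vitals metric into an analytics event so performance can be
// joined against conversion data downstream.
function toAnalyticsEvent(metric) {
  return {
    event_name: `web_vital_${metric.name.toLowerCase()}`,
    value: Math.round(metric.value),  // ms for LCP/INP; CLS is unitless
    metric_id: metric.id,             // unique per page load, enables dedupe
    metric_rating: metric.rating,
  };
}
```

With the metric ID in your analytics, you can ask the question the takeaway poses: do sessions with good LCP actually convert better?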

💡 Fonts can kill your LCP. One project had 12 separate font files for various weights and styles, all blocking the render. Switching to system fonts massively improved performance with zero user complaints. If users can’t tell the difference—but your metrics can—it’s worth rethinking your stack.

💡 The consent banner might be your LCP—and that’s okay. For EU users, the cookie/consent UI is often the first required interaction. Trying to hide or defer it might break compliance. Instead, Danilo suggests embracing it: load it early, strip down fonts, and make it snappy.

💡 Lighthouse results depend heavily on where and how you run them. Danilo discovered huge swings in scores when Lighthouse ran on shared cloud infrastructure vs. isolated VPS machines. To reduce variability, consider running it on dedicated, stable environments—especially if you’re gating PRs with performance checks.
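
Beyond dedicated hardware, a common complementary tactic is to run Lighthouse several times and score against the median, so one noisy run can’t fail a PR. A minimal median helper:

```javascript
// Median of several run scores: robust to a single outlier run in a way
// that a lone Lighthouse score is not.
function median(scores) {
  const sorted = [...scores].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}
```

Five runs with a median gate is a frequently used compromise between CI time and score stability.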

💡 Long tasks? Try setTimeout(0). If your TTI or FID is struggling, Danilo recommends identifying heavy JS operations and breaking them up with setTimeout(fn, 0), which defers each chunk to a new task and lets the browser handle input and paint in between. It’s a surgical fix, but it can yield major gains by unblocking the main thread faster.

💡 Mentorship isn’t about knowledge transfer—it’s about presence. Danilo believes the best mentors don’t just answer questions—they stick around, ask “why,” and help mentees think for themselves. That kind of leadership builds long-term capability, not short-term productivity.



