
The Future of Coding in the Age of AI is Git

AI writes the code. We review it. The real work happens in the diff, the history, and the branch—and that means Git is where you live now.

Neciu Dan

Hi there, it's Dan, a technical co-founder of an ed-tech startup, international speaker, and Staff Software Engineer. I'm here to share insights on combining technology and education to solve real problems.

I write about startup challenges, tech innovations, and frontend development. Subscribe to join me on this journey of transforming education through technology. Want to discuss tech, frontend, or startup life? Let's connect.


What a time to be a developer, huh?

A couple of weeks ago, Spotify co-CEO Gustav Söderström dropped a bomb during their Q4 earnings call. He said, and I quote, that their most experienced engineers “have not written a single line of code since December.” Not a single line. Instead, they’re using an internal system called Honk, which sits on top of Claude Code, to do everything. Their engineers open Slack on their phones during the morning commute, tell the AI to fix a bug or add a feature, and by the time they get to the office, a new build of the iOS app is waiting for them to review and merge.

They shipped over 50 new features in 2025 this way. Fifty.

And before you think this is just a Spotify thing – it’s not. Satya Nadella said AI now writes 20 to 30% of Microsoft’s code. Sundar Pichai confirmed that over 25% of Google’s new code is AI-assisted. Mark Zuckerberg went full send, saying he expects AI to write most of Meta’s code within the next year. These are not startups with three developers and a dream. These are the biggest engineering orgs on the planet.

Now here’s where it gets interesting. Addy Osmani, who works on developer experience at Google, has been tracking this shift obsessively. He wrote about what he calls “the 80% problem”: by late 2025, early adopters reported that AI was generating about 80% of their code. Sounds amazing, right? More code, faster, everybody goes home early.

Except that’s not what happened.

The data tells a different story. Faros AI and Google’s DORA report found that teams with high AI adoption merged 98% more pull requests – but review time ballooned by up to 91%. Let that sink in. Almost double the review time. Atlassian’s 2025 survey found that 99% of developers using AI saved 10+ hours per week, yet most reported no decrease in overall workload. The time we saved writing code? We now spend it reviewing code. The bottleneck didn’t disappear. It just moved.

And the numbers keep getting worse: PRs are about 18% larger, incidents per PR are up 24%, and change failure rates are up 30%. Around 45% of AI-generated code has security flaws.

So here we are. AI writes the code. We review it. We test it. We ship it. And when it breaks at 2 AM, we fix it. Our job didn’t get smaller – it changed shape. And the tools that matter most now are not the ones that generate code. They’re the ones that help us understand it, verify it, and manage it.

Which brings us to Git.


How does a British developer start version control for his side project?

git init, mate.


Git is the new programming language. Not in the sense that you write apps in it – but in the sense that this is where you’ll spend most of your time. When AI writes the code, your job is to understand what changed, why it changed, and whether it’s safe to ship. That happens in the diff. In the history. In the branch. The better you know Git – the commands, the workflows, the escape hatches – the better you can review what the AI produced and catch the mistakes before they hit production. The following sections are the Git you need for that job.


Undoing Commits Without Losing Work

Let’s start with something we’ve all done: committing too early. The message was wrong, we forgot to stage a file, or the AI spit out something we accepted too quickly and now we need to restructure before pushing.

git reset --soft HEAD~1

This undoes the last commit but keeps all changes staged exactly as they were. Working directory untouched. Fix what you need to fix, restage if necessary, recommit. Need to go back further? HEAD~2, HEAD~3, same thing.

The key thing to remember: --soft keeps everything. --hard nukes everything. Use --soft for local fixes before pushing, and treat --hard like you’d treat a loaded gun.

If the commit has already been pushed and your teammates pulled it, git reset will mess up their history. Use git revert instead – it creates a new commit that undoes the changes without rewriting history.
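A minimal sketch of that safer path (the hash is a placeholder for whatever commit needs undoing):

git revert <commit-hash>        # creates a new commit that reverses the changes
git revert --no-edit HEAD       # revert the most recent commit, keep the default message
git push origin main            # history only moves forward, nothing is rewritten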


The Reflog: Git’s Safety Net

Here’s something that changed how I think about Git: it almost never truly deletes anything. Even after a bad rebase, an accidental branch deletion, or a reset --hard you immediately regret – the data is still there, hiding in the reflog.

git reflog

The reflog records every position HEAD has ever pointed to. Unlike git log, which shows the commit history of a branch, the reflog shows your personal movement through the repo – every checkout, commit, reset, rebase. It’s your private timeline.

A typical reflog looks like this:

a1b2c3d HEAD@{0}: reset: moving to HEAD~2
e4f5g6h HEAD@{1}: commit: Add payment processing module
i7j8k9l HEAD@{2}: commit: Refactor auth middleware
m0n1o2p HEAD@{3}: checkout: moving from feature/auth to main

Accidentally reset too far and lost a commit? Find it in the reflog:

git checkout e4f5g6h         # inspect the lost commit
git checkout -b recovered    # save it to a new branch

Entries stick around for 30 to 90 days. Once you know the reflog exists, rebasing and resetting become way less scary. There’s always a way back.
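And if you'd rather move the branch itself back instead of branching off the lost commit, you can reset straight to a reflog entry. HEAD@{1} here matches the example above: where HEAD was one move ago.

git reset --hard HEAD@{1}    # put the branch back where it was before the last move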


Reading diffs with intention

git diff main..feature/new-auth

Shows every change between two branches. But for big AI-generated PRs, the raw diff is overwhelming. Break it down:

git diff main..feature/new-auth -- src/auth/       # scope to a directory
git diff main..feature/new-auth --stat             # summary: files changed, lines added/removed
git diff main..feature/new-auth --name-only        # just the filenames

I always start with --stat. It tells you the shape of the PR before you get into the weeds.

Reviewing commit by commit

AI-generated PRs often appear as a single giant commit. When they don’t, reviewing commit by commit is so much better:

git log --oneline main..feature/new-auth           # list all commits in the branch
git show <commit-hash>                              # inspect a single commit
git log -p main..feature/new-auth                  # full patch for every commit

Who changed what and when

When reviewing unfamiliar code or trying to understand why something exists:

git blame src/auth/middleware.js                    # line-by-line authorship
git log --follow -p -- src/auth/middleware.js       # full history of a single file

git blame is invaluable now because it shows whether a line was written by a human, committed through an AI workflow, or part of older code the AI may have modified without fully understanding the context.
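A few variations worth knowing. The line range is arbitrary, and the author filter assumes your AI workflow commits under a recognizable account name, which will differ per team (the name below is hypothetical):

git blame -L 40,75 src/auth/middleware.js              # blame only a specific line range
git blame -w -M src/auth/middleware.js                 # ignore whitespace, follow moved lines
git log --author="ai-agent" --oneline -- src/auth/     # commits from a hypothetical AI agent account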

Actually checking out the branch

I cannot stress this enough. Every PR now has lots of code, and 75% of it will be AI-generated – guaranteed. So it’s your duty to pull it locally, run it, test it, click around.

git fetch origin
git checkout feature/new-auth
npm test
npm run dev                                         # open the app, use the feature

If the PR added an auth flow, try logging in with bad credentials. If the PR added a payment form, see what happens when the network drops. The diff tells you what changed. Running the code tells you if it works.

Best practice: the PR author should write Given/When/Then acceptance criteria with a checkbox for each scenario. The reviewer can then pull the branch and tick each one off while walking through the actual flow – no guessing, no missed edge cases.
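In the PR description, that might look something like this (the scenarios are illustrative):

Acceptance criteria
- [ ] Given a registered user, when they log in with valid credentials, then they land on the dashboard
- [ ] Given a registered user, when they enter a wrong password, then they see an error and stay on the login page
- [ ] Given a locked account, when the user tries to log in, then they are pointed to account recovery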


Cherry-Picking with Precision

Most of us know git cherry-pick – grab a commit from one branch, apply it to another. The trick most people don’t know is the --no-commit flag:

git cherry-pick -n <commit-hash>

This pulls the changes into your working directory and staging area, but doesn’t commit. Super useful when you want to review changes before committing, combine multiple cherry-picks into a single clean commit, or verify that the changes work with what you already have before making them permanent.
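For example, combining two picks into a single clean commit might look like this (hashes and message are placeholders):

git cherry-pick -n <hash-1>
git cherry-pick -n <hash-2>
git diff --staged                                   # review the combined changes before committing
git commit -m "Backport auth fixes from feature/new-auth"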


Actually Using the Stash

Everyone knows stash exists. Almost nobody uses it properly. The typical workflow is:

git stash        # throw everything in
git stash pop    # get it back

That’s fine for simple stuff. But the stash is a full stack with names, indexes, and selective stashing.

Name your stashes

git stash push -m "WIP: auth flow refactor"
git stash push -m "experimental: new caching strategy"

Now git stash list actually makes sense:

stash@{0}: On feature/auth: experimental: new caching strategy
stash@{1}: On feature/auth: WIP: auth flow refactor

Target specific stashes

git stash pop stash@{1}     # apply and remove a specific stash
git stash apply stash@{0}   # apply but keep it around
git stash drop stash@{2}    # remove without applying

pop removes the stash after applying. apply keeps it. Use apply when you want to test the same changes on multiple branches.
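For instance, trying the same stashed change on two branches (branch names are placeholders):

git switch release/2.4
git stash apply stash@{0}    # test the change here
git switch feature/auth
git stash apply stash@{0}    # the stash is still around, so apply it again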

Stash only specific files

git stash push -m "just the config changes" -- config/ .env.local

Or go interactive and pick individual hunks:

git stash push -p -m "partial stash"

This walks through each change and asks whether you want to stash it, just like git add -p. Really handy when your working directory is a mess of changes from different tasks.


Finding Bugs with Bisect

Something is broken in production. It worked two weeks ago. Between then and now, there are 200 commits from a dozen developers and a handful of AI agents. Good luck checking each one manually.

git bisect uses binary search to find the exact commit that broke things. 200 commits? About 7 or 8 steps.

git bisect start
git bisect bad                   # current commit has the bug
git bisect good v2.3.0           # this tag was working

Git checks out a commit halfway between the two. Test it, tell Git:

git bisect good    # this one's fine
# or
git bisect bad     # this one's broken

It keeps halving until it finds the first bad commit. When you’re done:

git bisect reset

Automate it

If you have a test that catches the bug:

git bisect start HEAD v2.3.0
git bisect run npm test

Git automatically runs the test at each step. Zero manual work. In a world where AI generates dozens of PRs a day, automated bisecting is no longer optional.
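The run command accepts any script, not just the test suite. The convention: exit code 0 marks a commit good, 1 to 124 marks it bad, and 125 tells bisect to skip a commit it can't test. The script below is hypothetical:

git bisect start HEAD v2.3.0
git bisect run ./scripts/reproduce-bug.sh    # 0 = good, 1-124 = bad, 125 = skip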


Working on Multiple Branches with Worktrees

Context switching kills productivity. You’re deep in a code review, and someone pings you about a hotfix. The old way: stash, switch branches, do the work, switch back, pop. Tedious and error-prone.

git worktree lets you check out multiple branches into separate directories at the same time:

git worktree add ../hotfix main

Now you have two working directories sharing the same Git history.

# in ~/projects/my-app, currently reviewing feature/new-dashboard
git worktree add ../my-app-hotfix hotfix/login-bug
cd ../my-app-hotfix
# fix the bug, commit, push
cd ../my-app
git worktree remove ../my-app-hotfix

Worktrees pair amazingly well with bisect – spin up a dedicated worktree for the investigation so your review work stays untouched:

git worktree add --detach ../bisect-workspace HEAD
cd ../bisect-workspace
git bisect start HEAD v2.3.0
git bisect run npm test
git bisect reset
cd ../my-app
git worktree remove ../bisect-workspace

Keep things tidy:

git worktree list              # see all active worktrees
git worktree remove ../hotfix  # clean up
git worktree prune             # remove stale metadata

Rewriting History with Interactive Rebase

Before pushing a feature branch, the commit history usually looks something like this:

fix typo
WIP
actually fix the bug
add feature X
WIP part 2
fix tests

This is normal – and even more common when AI generates code in iterations. But it doesn’t need to be part of the permanent record.

git rebase -i HEAD~6

This opens an editor with the last 6 commits. Reorder, squash, reword, or drop:

  • squash (s): merge into the commit above, combine messages
  • fixup (f): same, but discard the message
  • reword (r): keep changes, edit the message
  • drop (d): remove entirely
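Applied to the branch above, the todo file might end up looking something like this after reordering (hashes are illustrative), collapsing six commits into two:

pick   a1b2c3d add feature X
fixup  e4f5g6h WIP
fixup  i7j8k9l WIP part 2
pick   m0n1o2p actually fix the bug
fixup  q3r4s5t fix typo
fixup  u6v7w8x fix tests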

Clean history isn’t cosmetic. It makes future reviews, bisecting, and debugging dramatically easier.

Never rebase commits that have already been pushed to a shared branch. Same principle as git reset – rewriting shared history causes chaos for everyone.


Seeing Who Knows What with Shortlog

When working on a big codebase, especially one you didn’t build, it helps to know who has context on what:

git shortlog -sne

-s for summary, -n for numerical sort, -e for email. Scope it to a directory:

git shortlog -sne -- src/auth/

Invaluable when you need to track down someone who actually understands the legacy code you’re reviewing.


Reviewer’s Cheat Sheet

Some extra commands specifically useful during code review:

# branches sorted by last commit date
git branch -a --sort=-committerdate --format='%(committerdate:short) %(refname:short)'

# compare a file across branches
git diff main:src/app.js feature:src/app.js

# search commits for a string (the "pickaxe")
git log -S "deprecated_function" --oneline

# history of a specific function
git log -L :functionName:src/file.js

# graphical branch history
git log --oneline --graph --all

# files changed in last N commits
git diff --name-only HEAD~10

The pickaxe (-S) is a personal favorite. When AI generates code that references a function you don’t recognize, the pickaxe traces where it came from.


Aliases That Save Your Sanity

git config --global alias.st status
git config --global alias.co checkout
git config --global alias.br branch
git config --global alias.undo "reset --soft HEAD~1"
git config --global alias.visual "log --oneline --graph --all"
git config --global alias.contributors "shortlog -sne"
git config --global alias.review "diff --stat"
git config --global alias.last "log -1 HEAD"
git config --global alias.changed "diff --name-only"

git undo is the one I use most. Small things, but across hundreds of reviews, they add up.


The Part That Actually Matters

Everything above is tooling. Commands. Syntax. The easy stuff to write about, the stuff that fits neatly into a blog post.

But here’s the uncomfortable part.

Our most important job now is not writing code. It’s understanding code. And the gap between those two things is getting wider by the day. Osmani himself talked about crossing a line he didn’t see coming – an AI agent implemented a feature he’d been putting off for days, the tests passed, he skimmed it, nodded, merged. Three days later, he couldn’t explain how it worked.

He and others call this “comprehension debt.” You can review code long after you’ve lost the ability to write it from scratch. But at some point, reviewing becomes nodding along. And that’s where things get dangerous.

The numbers back this up. PRs are bigger. Failure rates are climbing. Security vulnerabilities in AI-generated code sit at around 45%. Greg Foster from Graphite put it well: “If we’re shipping code that’s never actually read or understood by a fellow human, we’re running a huge risk.”

So what does responsible reviewing actually look like?

Check out the code and run it. Pull the branch, start the app, click through the feature, try to break it. The diff tells you what changed. Running the code tells you whether it actually works.

Read until you can explain it. Not until the tests pass – until you could explain it to a teammate. Until you could debug it at 2 AM when it breaks. If you can’t, the review isn’t done.

Question the architecture. AI is great at writing functions that do what they’re supposed to do. It’s terrible at understanding how those functions fit into a larger system. Is it duplicating logic? Introducing weird dependencies? Breaking patterns the team agreed on? Those are the things AI can’t catch.

Keep changes small. When AI can generate a thousand lines in minutes, it’s tempting to ship everything at once. Don’t. Break it into small, reviewable commits. Smaller PRs catch more bugs and are way easier to bisect when something goes wrong.

And finally, own the outcome. This is the part I really want to be direct about. When code reaches production, nobody cares if a human wrote it or an AI generated it. If you reviewed it and approved it, you are responsible for it. You, together with whoever (or whatever) wrote it.

The reviewer’s signature on a pull request is not a formality. It’s a statement: this code is correct, it’s secure, and it’s ready for production. That carries the same weight whether the author was a junior dev, a senior engineer, or an LLM.

The AI writes the code. You make sure it’s worth shipping.

Let’s fucking do this. 🚀

