Should Your AI Coding Session Live in Git?
Hacker News is debating whether AI sessions should be committed alongside code. It's the wrong answer to the right question.
A project called Memento hit the top of Hacker News this week with an interesting argument: if AI wrote your code, the AI session that produced it should be committed to your repo alongside the diff.
317 points. 297 comments. Engineers clearly have opinions.
The idea isn’t crazy. The problem it’s trying to solve is real. But I think it’s the wrong solution — and understanding why it’s wrong gets at something important about where AI-assisted development is actually breaking down.
The problem is real
Here’s what teams are running into: AI-generated code ships, and nobody can reconstruct why it was written the way it was.
There’s no PR description that captures the reasoning. No rubber-duck conversation where someone thought it through. No comment explaining the edge case the code is handling. Just… code, generated fast, merged, shipped.
Six months later, something breaks. The original engineer has moved on. The person on-call opens the file and has no idea what they’re looking at or what assumption the code was built on.
This happens constantly. I see it in teams I work with. The AI made architectural decisions that made sense in context — context that lives nowhere except in a chat transcript that got closed weeks ago.
So yes: there’s a real provenance problem in AI-assisted development. We’re shipping code with no institutional memory behind it.
But raw sessions aren’t the answer
A full AI coding session is noise. Thousands of tokens of “actually, let me reconsider” and “here’s a better approach” and “sorry, I misunderstood the requirement.”
That’s not documentation. That’s a process artifact. Committing it to your repo is like committing your browser history because you Googled something while writing the code.
The signal is buried. Nobody is going to read a 40k-token AI transcript to understand a 200-line function. They’re going to do what they always do when context is missing: guess, experiment, and probably make things worse.
What you actually need isn’t the session. It’s the thinking that was in the session, distilled into something a human can use.
The workflows that actually work
The best AI-assisted engineering I’ve seen follows variations of the same pattern.
Before coding starts, there’s a design document. What are we building? What are the constraints? What did we consider and reject? This doesn’t have to be long — a page is usually enough. But it has to exist, and it has to be committed.
Then there’s a plan. Not “implement the feature,” but: here’s the approach, here are the phases, here’s what done looks like. The AI helps generate it. The engineer iterates on it until it reflects their actual intent. Then the code gets written.
The artifact that gets committed alongside the code is the design doc and the plan — not the transcript. Future maintainers get the intent and the approach. They don’t get 40k tokens of AI second-guessing itself.
That’s the difference between documentation and transcription.
The deeper issue
Here’s what the Memento debate is really pointing at: we’ve automated code generation without automating comprehension.
The AI writes fast. The review is often cursory. The context window closes. And now you have code in production that nobody on the team fully understands, with no record of why it was written that way.
That gap — between generation and genuine ownership — is where teams are getting hurt.
The engineers navigating this well aren’t trying to read every line of AI-generated code. They’re doing something smarter: they’re setting up the process so the AI makes decisions they’d make themselves. Strong prompts. Clear constraints. Design docs before code, not after.
The AI is an implementation engine. The thinking has to come from somewhere human.
What this means if you’re building now
If you’re using AI coding tools and you’re not already doing this, try it this week:
→ Write a one-page design doc before you prompt. What are you building? What are the constraints? What are you explicitly not doing?
→ Have the AI generate a plan from the design doc. Iterate on the plan until it reflects your intent. This is the step most people skip — and it’s the most valuable.
→ Commit both alongside the code. Not the session. The thinking.
→ Add comments in the code explaining non-obvious choices. Not what the code does — why it does it that way.
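The four steps above can be sketched as a small shell session. This is a minimal illustration, not a prescribed layout — the repo, the feature, and every file name here (docs/design/login-rate-limit.md, docs/plans/login-rate-limit.md, src/rate_limit.py) are hypothetical stand-ins for your own project:

```shell
# Sketch of the workflow: design doc, plan, and code committed together.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
mkdir -p docs/design docs/plans src

# 1. One-page design doc: what, constraints, explicit non-goals.
cat > docs/design/login-rate-limit.md <<'EOF'
# Login rate limiting
What: throttle repeated failed logins per account.
Constraints: no new infrastructure; state must survive a restart.
Not doing: IP-based blocking, CAPTCHA.
EOF

# 2. The plan the AI generated and the engineer iterated on.
cat > docs/plans/login-rate-limit.md <<'EOF'
Phase 1: sliding-window failure counter in the existing session store.
Phase 2: lockout response and metrics.
Done: 5 failures in 15 minutes locks the account for 15 minutes.
EOF

# 3. The code, with a "why" comment for the non-obvious choice.
cat > src/rate_limit.py <<'EOF'
WINDOW_SECONDS = 900
# Why 5: matches the threshold in docs/design/login-rate-limit.md;
# lower values locked out users with flaky password managers.
MAX_FAILURES = 5
EOF

# 4. Commit the thinking alongside the code -- not the transcript.
git add docs src
git -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "feat: login rate limiting (design + plan alongside code)"
git ls-files
```

A future maintainer who runs `git log` on src/rate_limit.py lands in the same commit as the design doc and the plan, which is the whole point: the intent travels with the diff.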
It takes maybe 30 extra minutes per feature. It saves hours when something breaks.
The last mile is comprehension
The Memento idea is trying to solve an accountability problem with a storage solution. It won’t work. Nobody will read those transcripts.
Accountability in AI-assisted development isn’t about preserving the past — it’s about owning what you ship. That means understanding it well enough to explain it, debug it, and modify it without breaking something else.
That understanding has to happen before the code lands in main. No amount of committed session history will substitute for it.
That’s the last mile. It’s not glamorous. It’s not automated. But it’s what separates teams that ship AI code well from teams that just ship AI code fast.
One of those is a feature. The other is debt with a fuse.