Just a few years ago, most important technical insights were born in conversations with another person. Pair programming, code review, sitting together over a bug that "shouldn't be possible."
It all took longer, but it had one huge advantage: you had to explain something, defend it, sometimes go back three times to the same place in the code.
And since the brain had to work hard, the conclusions stuck.
Today, AI coding assistants do a huge part of that work for us. Faster, more efficiently, often better.
The problem is that we no longer always "go through" the cognitive process — we mostly observe it.
And that changes how we remember.
Why conversations with AI don't stick in your head like conversations with people
From a cognitive psychology perspective:
- conversation with a person requires real-time knowledge reconstruction,
- conversation with a coding assistant often comes down to recognizing correct answers, not constructing them.
The brain does different work.
AI produces:
- a massive amount of context,
- alternative solutions,
- explanations we understand in the moment.
But without the act of externalizing knowledge (writing it down in your own words, in your own system), that information never makes it into the working memory of a team or a person.
That's why a deliberate "save point" is needed after every coding session with AI.
Brain Dump — my answer to this problem
Brain Dump was born from a very simple need. Every "aha!" moment, every "this is worth remembering," every "okay, this explains everything"
cannot disappear into the depths of chat history.
I'm a visual learner. Over the past months, I had countless situations where I visually remembered a fragment of a conversation:
the layout of paragraphs, the way Claude explained something, the moment it "clicked."
But when I tried to go back to it later, the full context was missing.
The image didn't embed deeply enough to reconstruct along with the reasoning, constraints, and decisions made at the time.
I decided to do something about it and wrote a small project to prevent this from happening.
Brain Dump is the place where those "aha" moments land — before they fade from memory.
It's a record of specific discoveries that happened during sessions with Claude:
- why something only works in one version but not another,
- why a solution looks good but has hidden costs,
- which assumption turned out to be wrong.
This is exactly the kind of knowledge that would normally stay in a colleague's head during pair programming — or in my own head, if memory alone (even visual or line-of-code memory) actually sufficed.
The system is intentionally simple
This system wasn't built to be complicated and cover all edge cases.
It was built so I'd actually want to use it.
I didn't care about:
- an app,
- UI,
- sync,
- an "ideal knowledge structure."
What I cared about:
- fast capture,
- one-time setup,
- a format readable in 2 years,
- practically zero barrier to entry,
- version control to keep track of notes.
Markdown, directories, git, one command in the terminal.
That was the core of the idea, because if capturing knowledge is even slightly cumbersome, it stops working.
Step by step: Brain Dump configuration
Step 1: knowledge repository
I created a repository, e.g.:
~/Documents/repos/brain-dump
This is not a project repo.
This is a repo of my technical thinking.
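Setting it up takes a minute. A minimal bootstrap sketch (the path and category names are just the examples used in this article):

```shell
# A possible bootstrap for the knowledge repo; adjust the path to taste.
BRAIN_DUMP="${BRAIN_DUMP:-$HOME/Documents/repos/brain-dump}"
mkdir -p "$BRAIN_DUMP/react" "$BRAIN_DUMP/architecture" \
         "$BRAIN_DUMP/react-performance" "$BRAIN_DUMP/scripts"
git -C "$BRAIN_DUMP" init -q        # notes get version control from day one
printf '# Brain Dump\n' > "$BRAIN_DUMP/README.md"
```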
brain-dump/
├── react/ # e.g. rendering pitfalls
│ └── useeffect-cleanup-race-condition.md
├── architecture/ # architectural decisions
│ └── build-pipeline.md
├── react-performance/ # discovered bottlenecks
│ └── memo-vs-usememo-when-to-use-which.md
├── scripts/dump.sh # CLI script
└── README.md # auto-generated index

Step 2: the dump.sh script
The script serves three functions:
- saves knowledge (from keyboard or clipboard),
- lets you search it,
- automatically indexes the README.
Key design decisions:
- writing to Markdown,
- automatic commit,
- categories as directories,
- title as the first heading (easy grep).
This means the cost of capturing knowledge is minimal — and that's crucial.
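The article doesn't include the script itself, so here is a minimal sketch of what a dump.sh along these lines could look like. The slug logic, the exact header template, and the README indexing are my assumptions, not the author's actual implementation:

```shell
#!/usr/bin/env bash
# Sketch of a dump.sh: save an entry, rebuild the README index, commit.
set -euo pipefail

BRAIN_DUMP="${BRAIN_DUMP:-$HOME/Documents/repos/brain-dump}"

dump_entry() {
  local category="$1" title="$2" slug file
  # Turn the title into a filename-friendly slug, e.g.
  # "useEffect cleanup race condition" -> useeffect-cleanup-race-condition
  slug=$(printf '%s' "$title" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-')
  slug="${slug#-}"; slug="${slug%-}"
  file="$BRAIN_DUMP/$category/$slug.md"

  mkdir -p "$BRAIN_DUMP/$category"
  {
    printf '# %s\n\n' "$title"          # title as first heading (easy grep)
    printf '**Date:** %s\n' "$(date +%F)"
    printf '**Source:** claude-conversation\n'
    printf '**Tags:** %s\n\n---\n\n' "$category"
    cat                                  # entry body from stdin (keyboard or clipboard)
  } > "$file"

  # Rebuild a flat README index from every entry's first heading.
  {
    printf '# Brain Dump\n\n'
    grep -rh --include='*.md' --exclude='README.md' '^# ' "$BRAIN_DUMP" \
      | sed 's/^# /- /' | sort -u
  } > "$BRAIN_DUMP/README.md"

  git -C "$BRAIN_DUMP" add -A
  git -C "$BRAIN_DUMP" commit -qm "dump($category): $title"
}

# Usage (after sourcing, or via the dump() wrapper from step 3):
#   echo "note body" | dump_entry react "useEffect cleanup race condition"
```

Everything happens in one shot: write the file, refresh the index, commit. No editor, no prompts.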
Each entry is a Markdown file with a header:
# Entry title
**Date:** 2026-02-11
**Source:** claude-conversation
**Tags:** category
---
## Context
Why this matters.
## Problem
What went wrong or what was unclear.
## Solution
What works and why.
## Key Gotchas
- Things easy to overlook.

Step 3: global availability across every repo
To make the script callable from any other repo, I added to ~/.zshrc or ~/.bashrc:
export BRAIN_DUMP="$HOME/Documents/repos/brain-dump"
dump() {
"$BRAIN_DUMP/scripts/dump.sh" "$@"
}

The result:
dump react "useEffect cleanup race condition"
pbpaste | dump architecture "Build pipeline overview"
I don't switch context.
I don't leave the terminal.
I don't "write documentation."
I just save a thought.
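Retrieval is just as low-friction: because each entry's title is the first heading (the "easy grep" decision above), one grep finds past discoveries. A self-contained demo, using a throwaway directory in place of the real repo:

```shell
# Demo of retrieval by grep. The sample entry and the throwaway directory
# here are illustrative; in practice you'd point grep at $BRAIN_DUMP.
demo="$(mktemp -d)/brain-dump"
mkdir -p "$demo/react"
printf '# useEffect cleanup race condition\n\nCancel in-flight requests.\n' \
  > "$demo/react/useeffect-cleanup-race-condition.md"

# List every entry (file and matching line) that mentions the topic:
grep -rni --include='*.md' 'race condition' "$demo"
```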
Step 4: bringing Claude into the memory system
A CLAUDE.md file in each working repo serves as a cognitive instruction for the assistant:
- it informs that an external memory exists,
- suggests checking it before making decisions,
- encourages saving new discoveries.
## Developer Knowledge Base
This developer maintains a personal knowledge base
at ~/brain-dump/ with architectural decisions,
platform limitations, and solutions discovered
during development.
When making architectural decisions or debugging
platform-specific issues, check relevant entries
there first.
This is important:
AI stops being just a response generator and starts becoming a collaborator in a knowledge system.
Step 5: automating "save / retrieve"
A Skill (global, installed once) — a set of instructions that tells Claude when to save, when to search, and how to format entries.
The skill header looks like this:
---
name: brain-dump
description: >
Save and retrieve knowledge from a persistent
markdown knowledge base.
SAVE triggers: "dump this", "remember this",
"save this for later", or when a conversation
produces reusable insight about platform limitations,
architecture decisions, debugging discoveries.
RETRIEVE triggers: "what do we know about",
"check brain-dump", "remind me about",
or any question where past discoveries
from previous sessions would help.
---

Thanks to this:
- Claude knows when to save a discovery,
- knows when to reach for past conclusions,
- and you don't have to remember that something was already covered.
Knowledge lives in the conversation but disappears when it ends.
The Skill solves this problem — it automates the "save point."
What I gained from Brain Dump
The biggest change isn't technical, it's mental. I stopped feeling like: "I know I already figured this out once, but I can't remember the details."
Every session with Claude adds a building block that doesn't vanish the next day.
How I use Brain Dump daily
In practice, it's very simple. During a debugging session or right after:
- if something surprised me,
- if I changed my mind,
- if I discovered a limitation that will "come back" in six months,
— I dump it to Brain Dump.
To be precise, it's not me who does the dump — it's Claude at my command. I approve the changes.
Over time I noticed something even more interesting: Claude, having access to this knowledge base, starts reaching for previous entries on its own, asking about context, and building answers based on what we've already discovered together.
That's the moment when the assistant stops being a "one-time conversation partner" and starts acting like someone who has access to the broader history of the project and my thinking.
Summary
AI coding assistants don't take away our knowledge.
They take away our natural mechanism for retaining it.
If I don't build my own memory layer — I'm fast, but short-sighted.
Brain Dump is not a productivity tool.
It's a system for maintaining control over your own technical experience.
Knowledge you can't retrieve is, in practice, knowledge lost.
Image: Jorge Franganillo from Pixabay
