# Markdown Beats RAG: Building a Second Brain with LLM Wikis

Building a second brain has never worked for me. I'm good at capturing, terrible at sorting, and too lazy to maintain it. So it always becomes a mess.
I've spent months trying to build my second brain with modern AI tooling: vector embeddings of my content plus semantic search. It worked-ish, but it never felt great.
Then Andrej Karpathy's post on LLM wikis showed up, and it finally clicked. Let the LLM do the organizing, don't make it too complex, and keep each wiki to a specific topic.
Here's how I'm using it for one of my research topics: AI coding.
- I dump everything into a raw folder as markdown: articles, videos, chat transcripts.
- A scheduled process ingests it all and organizes it into topics like agents, models, IDEs, harnesses, or whatever I'm learning.
- It builds an index and hyperlinks everything back to the source.
- A linting process flags contradictions and issues, and I work them out with the model. Those conversations get folded back in.
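The index step is plain file plumbing. Here's a minimal sketch of what it could look like, assuming a layout like `wiki/<topic>/<note>.md` (the layout and names are my illustration, not a prescribed structure):

```python
from pathlib import Path

def build_index(wiki_dir: str) -> str:
    """Build a markdown index linking every note, grouped by topic folder."""
    root = Path(wiki_dir)
    lines = ["# Index", ""]
    for topic in sorted(p for p in root.iterdir() if p.is_dir()):
        lines.append(f"## {topic.name}")
        for note in sorted(topic.glob("*.md")):
            # Hyperlink back to the source file so I (or an agent) can jump to it.
            lines.append(f"- [{note.stem}]({topic.name}/{note.name})")
        lines.append("")
    return "\n".join(lines)
```

Write the result to `wiki/index.md` on each scheduled run and the index stays current for free; the topic sorting itself is the part I delegate to the LLM.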
The payoff: searchable knowledge I can browse myself or hand to an agent.
When I'm building, I point the agent at the wiki and ask how to approach the problem. Everything I read and every conversation I fold back in compounds, making the knowledge base better over time.
The best part? No RAG, no vector database, no infrastructure. Just markdown files. Lightweight, portable, works with any chatbot.
Three tips if you want to try it:
- Use a web clipper to grab full articles, not just links.
- Start with a solid kickoff prompt.
- Keep it to one topic. Broad wikis get unwieldy fast.
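For the kickoff prompt, something in this shape is what I mean (the wording here is my own illustration, not a quote from Karpathy's post):

```text
You maintain a personal wiki on AI coding, stored as markdown files.
Read everything in raw/, then:
1. Sort each note into a topic folder (create topics as needed).
2. Update index.md with links back to every source file.
3. Flag contradictions between notes instead of silently resolving them.
Keep the wiki scoped to AI coding only; park anything off-topic.
```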
This isn't just for AI coding. A patent lawyer, a researcher, anyone going deep on a domain could use it.
If you're building something like this, I'd love to compare notes.