if can_code == False: build_agent_anyway()  

A couple weeks ago, I built an AI agent using code. 

I don’t code. 

When I started the project, I assumed my guide (ChatGPT) would use the low-code, drag-and-drop tools our MIT teacher had shown us. In hindsight, I should have included that in the prompt. 

The first sign of trouble came early, when the bot confidently told me to open the Mac Terminal and start pasting Python snippets.  

The Terminal has always scared me. It feels like doing brain surgery on my computer. I’ve always assumed that one bad keystroke would brick my Mac and leak my passwords to the dark web. 

Python also gives me the willies. Every time I see it, I feel like a traveler who didn’t bother to learn a word of the language before getting on the plane. 

Yet here I was. Wiring up APIs. Assembling a toolchain. Building modules. And failing, and debugging, over and over again. 

Sample of our process: 

Me: OK. I ran it. I got this:  TypeError: 'NoneType' object is not subscriptable 

ChatGPT: Great catch. Nothing’s “broken”—that error just means the test didn’t load your `.env` file. By default, macOS doesn’t read `.env` automatically for ad-hoc Python one-liners... 

You get the picture. 
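For the curious, the error in that exchange is easy to reproduce. A minimal sketch (the variable name below is mine, not the project's): if nothing loads `.env`, the key lookup returns `None`, and slicing `None` raises exactly that `TypeError`.

```python
import os

# If nothing has loaded .env, the key simply isn't in the environment,
# so .get() returns None instead of raising:
api_key = os.environ.get("PL_FAKE_OPENAI_KEY")  # hypothetical variable name
print(api_key)  # None

# Slicing that None is what produced the error in the transcript:
try:
    api_key[:7]
except TypeError as exc:
    print(exc)  # 'NoneType' object is not subscriptable
```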

The idea for my agent seemed simple: scour my ChatGPT history, isolate prompts, score them. 

ChatGPT wrote the code, told me where to place it, and traced the source of every bug. I was the hands, the eyes, and the dude saying “that didn’t work.”  

The first version took six hours to build. Or, more accurately, it took two hours to build and another four to overcome parsing failures, embedding and rate-limit errors, Chroma metadata validation errors, Terminal freezes, and more. 

(I’ve included all the steps in the appendix, for anyone who wants to geek out and/or experience vertigo.) 

What kept me going? In spite of all the above complaints, I was in a flow state. It was FUN. 

Finally, after the sixth wave of debugging, I opened the browser window, typed a keyword into the search field, and saw all my related historical prompts in neat little boxes. With a click, the boxes expanded, revealing the context. 

Success!  

Kinda. 

My agent worked, but I didn't get the insights I’d hoped for. I was sure I’d unearth a library of my very own gold-standard prompts. Instead, I learned that I use a multi-shot, iterative prompting style. I riff, I flow, I discover. (TL;DR: I’m a Creative.) 

Still, an insight is an insight. I realized I need to go back and master formal prompting styles, even if I’m more comfortable riffing. 

And I learned a lot in those six hours. I’m more familiar with agents. I can read a little Python. I know what to do with APIs. The Terminal doesn’t terrify me. And my next agent will be easier. 

Because, swear to GOD, next time I’m gonna specify drag-and-drop. 


 APPENDIX: A SIX-HOUR BUILD JOURNEY TO V.1 

Here’s a blow-by-blow account of the steps we took to go from “zero” to “agent.” This list was generated by ChatGPT — which should be obvious, because I’ve already established that I don’t understand what a “Chroma Metadata Validation error” is.  

Think of this as a helpful, verbal diagram of maddening complexity.  

 

1. Vision & Scaffolding 

1.1 Defined the goal: 

A personal “Prompt Librarian” that ingests your ChatGPT export, extracts the good prompts, tags and scores them, embeds everything, and gives you a searchable library. 

1.2 Designed the project structure: 

  • pl/loaders/ 

  • pl/extract/ 

  • pl/analyze/ 

  • pl/store/ 

  • pl/cli.py 

  • plus outputs directories & environment config. 

1.3 Chose the toolchain: 

  • Python 3.11 (Homebrew) 

  • Virtual environment 

  • LangChain 0.2.12 

  • LangSmith 

  • langchain-openai 

  • langchain-community 

  • Chroma 

  • dotenv 

  • click CLI 

2. Clean Install + Environment Setup 

2.1 Created a clean project folder 

2.2 Created a fresh venv 

2.3 Installed all pinned dependencies 

2.4 Built a correct .env and loaded keys 

2.5 Verified Python version & imports 

2.6 Established a predictable folder layout 

3. First Build of Modules 

3.1 Wrote and installed: 

  • config.py 

  • openai_export.py 

  • candidate_selector.py 

  • normalization.py 

  • tag_patterns.py 

  • score_prompts.py 

  • index.py 

  • cli.py 

3.2 Ensured every file loaded .env correctly 

3.3 Standardized all OpenAI imports 

3.4 Converted the pipeline to LangChain 0.2 patterns 

4. First Ingest Attempt — Parsing Failures 

4.1 Hit AttributeError: 'list' object has no attribute 'get' 

 → Diagnosed that ChatGPT export structure changed 

 → Rewrote loader to detect list-level transcript format 

4.2 Re-tested ingest 

 → Loader worked 

 → Next failure moved downstream 

5. Second Wave Errors — Embedding & Rate Limits 

5.1 Got OpenAI 400: max_tokens_per_request 

 → Realized some extracted entries had huge text blobs 

 → Added batching to avoid token explosions 

5.2 Hit LangSmith rate limits 

 → Pipeline was generating thousands of traces 

 → Enabled billing to raise the quota 

 → Ingest began working again 
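The batching fix from 5.1 boils down to a few lines. This is illustrative only: the batch size here counts items, while the real constraint is tokens per request.

```python
def batched(items, size=100):
    """Split a long list of texts into fixed-size chunks so no single
    embedding request blows past the API's per-request token limit.
    (Sketch: a production version would count tokens, not items.)"""
    for start in range(0, len(items), size):
        yield items[start:start + size]

chunks = list(batched(list(range(250)), size=100))
# 250 items -> chunks of 100, 100, and 50
```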

6. Third Wave Errors — Chroma Metadata Validation 

6.1 Encountered: Expected metadata value to be a str, int, float… got [] 

Meaning: Chroma forbids lists inside metadata. 

6.2 Rewrote metadata to flatten or remove lists 

 → Kept tags in metadata as a simple string 

 → Embedded tags separately inside the document text 

6.3 Re-ran ingest 

 → Metadata error resolved 
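The metadata rewrite in 6.2 amounts to flattening any list values before handing records to Chroma, which only accepts scalar metadata. A sketch of that idea (the function name is mine):

```python
def flatten_for_chroma(metadata):
    """Chroma accepts only str/int/float/bool metadata values, so list
    values (like a tags list -- or the empty [] from the error message)
    must be flattened. Joining into a comma-separated string is the
    simple fix described in 6.2."""
    flat = {}
    for key, value in metadata.items():
        if isinstance(value, list):
            flat[key] = ", ".join(map(str, value))  # [] becomes ""
        else:
            flat[key] = value
    return flat

flatten_for_chroma({"tags": ["python", "agents"], "score": 7, "empty": []})
# -> {"tags": "python, agents", "score": 7, "empty": ""}
```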

7. Fourth Wave Errors — Terminal Freezes & Stuck Processes 

7.1 Streamlit & Python processes persisted 

 → Terminal got stuck with black dot in close button 

 → Resolved using Ctrl+C kills and process cleanup 

 → Reset environment between runs 

8. Fifth Wave — Chroma Deprecation Warnings 

8.1 Received warnings about Chroma moving to langchain-chroma 

 → Not fatal, but noted for later upgrade 

 → Proceeded with functioning implementation 

9. Sixth Wave — UI Import Errors 

9.1 Initial Streamlit UI lived inside pl/ 

 → Could not import pl.store.index 

 → Received ModuleNotFoundError: No module named 'pl' 

9.2 Attempted sys.path fix 

 → Partial success, but Streamlit still ran with a non-root working directory 

9.3 Final fix: 

 → Moved UI to project root (ui.py) 

 → Imports immediately worked with no hacks 
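For reference, the partial workaround from 9.2 looks roughly like this; moving `ui.py` to the project root (9.3) made it unnecessary, which is usually the cleaner fix anyway.

```python
# Sketch of the sys.path workaround attempted in 9.2: prepend the
# project root so `import pl.store.index` resolves even when Streamlit
# launches the script from a different working directory.
import os
import sys

# __file__ points at this script (ui.py in the original setup); fall
# back to the working directory when run interactively.
HERE = globals().get("__file__")
PROJECT_ROOT = os.path.dirname(os.path.abspath(HERE)) if HERE else os.getcwd()
if PROJECT_ROOT not in sys.path:
    sys.path.insert(0, PROJECT_ROOT)
```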

10. The Successful End-to-End Run 

10.1 Loaded ChatGPT export (conversations.json) 

10.2 Successfully ran: python -m pl.cli ingest exports/conversations.json 

10.3 Ingest completed: 3,017 items processed 

10.4 Verified semantic searches working in CLI 

10.5 Finally launched web-based UI: streamlit run ui.py 

10.6 Browser opened, clean interface, full semantic search 

10.7 Tested queries → beautiful, readable expandable cards 
 

This marked the moment the prototype was truly alive.

Why It Took Six Hours 

  • Two full rebuilds of the loader logic 

  • One full rebuild of the entire indexing pipeline 

  • A total rewrite of metadata compatibility for Chroma 

  • Token-limit handling & batch processing 

  • LangSmith rate limit resolution 

  • Multiple virtual environment resets 

  • Streamlit path handling quirks 

  • Cross-compatibility work between older Chroma API and modern LangChain 

  • Plus: hundreds of lines of code, and a dozen “silent” file edits 
