✍🏼 Anecdotal Experiences – Real‑World Stories
Below are some anecdotes from using AI over the past year.
The One‑Line Fix
I can’t tell you how many times I’ve asked an LLM for a regex or a complex SQL query and gotten a correct solution in seconds, saving hours of work!
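As an illustration, here is the kind of one-liner an LLM can hand back in seconds — a hypothetical prompt like "give me a regex to match ISO-8601 dates" and the resulting pattern (the pattern and sample text are my own example, not from any particular model):

```python
import re

# Regex of the sort an LLM might produce for "match ISO-8601 dates like 2024-03-15"
iso_date = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

text = "Deployed on 2024-03-15, rolled back 2024-03-16."
print(iso_date.findall(text))  # ['2024-03-15', '2024-03-16']
```

Writing and debugging this by hand is easy enough, but the LLM skips the trial-and-error loop entirely.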
LLM-based auto-complete is my favorite feature of all these tools (thank you Tabnine)! Once in a while you describe the method, hit the tab key, and something surprising appears that saves you days!
AI Can Tempt Veterans to Take Shortcuts 🤖☠️
I’ve seen good engineers rely on Claude like a crutch and stop thinking critically. I’ve even seen staff engineers open up PRs with embarrassing regressions, poorly written code, and tests that miss core parts of a feature. PRs created with AI must be read and reviewed critically, even those from more senior engineers.
Pair‑Programming with an LLM
I love to explore a new framework or unfamiliar part of a codebase with Claude. It’s great at summarizing a class or feature set. It can also be a very helpful “pair” when you prompt it with smaller questions and actively partner with it. I’ve also had great experiences working with Gemini in the browser, layering on increasingly complex queries.
It is Magic with Boilerplate Code 🪄
Recently I used Cursor to help a team transition from Rails ERB templates to React views by generating boilerplate code. I’ve also been able to verbally describe the shape and attributes of a JSON payload to the LLM, and used the result as example payloads while testing API endpoints.
This can save engineers from mind-numbing tasks.
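For example, a verbal description like "a user with an id, an email, and a list of roles" can come back from the LLM as a ready-made test fixture. The field names and values below are my own hypothetical sketch of that workflow, not output from any specific model:

```python
import json

# A payload shape described verbally to an LLM, returned as a concrete
# example, then reused as a fixture when testing an API endpoint.
example_payload = {
    "id": 42,
    "email": "dev@example.com",
    "roles": ["admin", "editor"],
}

print(json.dumps(example_payload, indent=2))
```

Hand-writing fixtures like this isn’t hard, but across dozens of endpoints the time savings add up.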
When the Model Hallucinates
More times than I care to admit, I have been excited to see an LLM suggest a perfect library, module, or piece of syntactic sugar to complement the feature I’m working on, only to find out it is non-existent. 😿
Try Using it in Weird Places
I like to use Cursor to make a first draft of an ERD diagram in Mermaid format. Sure I might have to adjust a few data types and relationships, but it provides a helpful first pass.
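To show what that first pass looks like, here is a hypothetical Mermaid ERD of the sort an LLM might draft — the tables and relationships are invented for illustration:

```mermaid
erDiagram
    USER ||--o{ ORDER : places
    ORDER ||--|{ LINE_ITEM : contains
    USER {
        int id PK
        string email
    }
    ORDER {
        int id PK
        int user_id FK
        datetime placed_at
    }
```

Even if a data type or cardinality is off, fixing a diagram like this is much faster than drawing it from scratch.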
I’ve also set up MCP (Model Context Protocol) integrations with Figma (design app), Notion (wiki), and Linear (ticketing manager) to read PRDs and the tickets in an epic. Usually the more context the better the outcome (up to around 100,000 lines of context). Often it will produce something that is at least useful as a first draft.
Fool Me Twice, Shame on Me
If it doesn’t create something useful in the first two passes, I will often just cut bait and do it on my own. Don’t waste your time!
If you are using these tools right, you are feeding them something small to medium that you could likely do mostly on your own in under 15 minutes. You’re not going to find the perfect magic incantation every time.
The Wile E. Coyote Effect 🐺🏔️
Sometimes while working with LLMs you can convince yourself you’re some kind of borg-minded genius.
Then days later, after the LLM-assisted code is deployed to production, you discover a subtle but major untested regression.
You realize at that moment that you are in fact Wile E. Coyote standing just off the edge of a cliff.