Behind the Buzzword: What Real AI-Assisted Engineering Actually Looks Like
Some feedback arrives in the best kind of way. A recruiter was reviewing ClickNBack not long ago—the system I’ve been writing about for the past couple of months—and he was also an engineer, which made the conversation feel more like a peer review than a screening call. He had good things to say. Then he offered a suggestion: the repository didn’t have an AGENTS.md or CLAUDE.md file, and those files are becoming the expected signal that you’re working with AI tools.
He was being genuinely helpful. He was also, understandably, looking in the wrong place.

What I Built While Learning
What he hadn’t seen was a docs/guidelines/ directory with a set of engineering principles governing how AI should approach the codebase: how to structure a new module, how to name and document an ADR, how to handle trade-off decisions, which patterns to follow and which to avoid. Alongside that, .github/prompts/—a collection of what everyone is currently calling skills: purpose-built prompts for specific recurring tasks, from scaffolding a new domain module consistently to generating ADR drafts that match the codebase’s conventions.
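To make "skills" concrete: each one is just a reusable, purpose-built prompt file checked into the repo. The file name and contents below are a purely illustrative sketch of the shape such a file might take, not the actual ClickNBack files:

```markdown
<!-- .github/prompts/new-domain-module.prompt.md — hypothetical example -->
# Skill: Scaffold a new domain module

## Context
- Follow the module layout described in docs/guidelines/.
- Domain logic stays framework-free; infrastructure concerns live in their own layer.

## Task
Given a module name, generate:
1. The module skeleton (entities, repository interface, service).
2. A matching test file with one placeholder test per public method.

## Constraints
- Match the naming and documentation conventions of the existing modules.
- Do not introduce new dependencies; reuse what the codebase already imports.
```

The point is that the constraints and conventions live in the file, so every generation starts from the same context instead of being re-explained in chat.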
Those didn’t appear by accident, but they also didn’t come from a master plan. They emerged because I was learning what actually works while shipping ClickNBack. It took roughly two weeks to fine-tune—not two weeks of building features, but two weeks of defining constraints, testing outputs, adjusting context, and iterating until the prompts produced reliable, idiomatic code without a manual cleanup pass after every generation. That’s where the real payoff came from: not from following a specific tool’s playbook, but from understanding what I needed and building toward it.
That investment is what compressed a multi-month project into a manageable pace. Watching what worked and what didn’t taught me more about how to actually work with AI than any tool documentation could have.
Learning Across a Shifting Landscape
The recruiter’s suggestion came from a real and legitimate place. AGENTS.md, CLAUDE.md, .cursorrules—these files have become recognizable artifacts. Claude Code, Cursor, and GitHub Copilot each have their own conventions, their own ecosystem, their own way of making AI collaboration visible in a repository. These conventions aren’t arbitrary. They’re the industry’s way of settling on what matters.
The thing is, I’m watching this settle in real time. Right now, I’m actively migrating the prompts and guidelines I built in isolation toward whatever conventions are emerging as the standard. A month ago, that direction wasn’t clear. Today, it’s getting clearer. Next month, it will probably be even clearer. The reason I didn’t start with .cursorrules or a tool-specific config wasn’t stubbornness—it was that there genuinely wasn’t a universal answer when I started. By the time I realized which direction the industry was settling, I already had a working system. Now I’m threading the needle: keeping what I learned while gradually aligning with the emerging standard.
This is the interesting part, actually. Standards don’t matter because they’re universal; they matter because enough people adopted them that the adoption itself creates value. Right now, the landscape is still fragmenting—different tools, different conventions, different “right ways” depending on who you ask. By next year, this will probably be more settled. But when you’re in the middle of rapid change, you can’t use the standard as your anchor. You have to use judgment.
The tools will keep changing, and the conventions will keep shifting as the winners become clearer. That’s not a bug; that’s how emerging technology works.
Adding the File, Evolving the Approach
The recruiter’s suggestion crystallized something: I should make my learning process visible to someone scanning the repo quickly. So I added an AGENTS.md file that summarizes the approach and links to the full documentation (guidelines, specs, design documents). That took ten minutes, and it was actually the right call.
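For anyone doing the same, a minimal AGENTS.md of this kind can be little more than a map. The contents below are an illustrative sketch, not the actual file:

```markdown
# AGENTS.md — illustrative sketch

AI agents working in this repository should read the following before
generating code:

- docs/guidelines/ — engineering principles: module structure, naming,
  ADR format, trade-off handling, patterns to follow and avoid.
- .github/prompts/ — purpose-built prompts ("skills") for recurring
  tasks such as scaffolding domain modules and drafting ADRs.

Generated code must match existing conventions. When a guideline and an
existing pattern conflict, prefer the guideline and flag the conflict.
```

A stub like this is cheap to maintain because the substance stays in the linked documents; the file itself is just the entry point a scanner (human or agent) finds first.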
It sent a signal I wanted to send—that I’m thinking deliberately about how I work with AI. And more than that, it meant I was ready to join the signal system the industry is settling on. I wasn’t against the standard; I was just waiting for the standard to exist. Now that it’s becoming clearer, there’s no reason to stay off to the side.
The version I have now isn’t the version I’ll have in six months. I’m learning what works, watching which conventions actually reduce friction, and adapting. The fact that I built something custom first doesn’t mean I’m stuck with it—it means I understand it deeply enough to know which parts actually matter when I move toward whatever standard emerges next.
The Real Skill
That conversation—and the conversations with the code while building ClickNBack—crystallized what actually matters about AI-assisted engineering. It’s not about which application you run or whether you’re ahead of or behind the adoption curve. It’s about whether you understand what AI does and doesn’t do well enough to stay present in your own work while letting AI handle what it’s genuinely good at.
Where AI excels—translating patterns, applying conventions, generating idiomatic boilerplate—is where good prompts compound and velocity increases meaningfully. Where it falls short—architectural judgment, trade-off selection, knowing which abstraction is load-bearing—is exactly where you have to stay fully present. The craft is knowing which is which, and designing a workflow that respects that boundary. That’s the skill that transfers across tools, across standards, across whatever emerges next.
I ship code every day from VS Code using Copilot and Claude Sonnet with a basic subscription plan (the cost of a usable Claude Code subscription is insane for a solo developer building a portfolio). I’m learning what works with that setup, I’m adapting as conventions become clearer, and I’m staying deliberate about what I’m delegating. Trends in tooling will shift. The ability to think clearly about what you’re delegating and what you’re keeping will not. That’s the real advantage, and it’s not something you get from reading the docs or copying a config file. It comes from building, learning, and staying present enough to understand why something worked.