I've set a few rules for working with coding agents:
1. If I use a coding agent to generate code, it should be something I am absolutely confident I can code correctly myself given the time (gun to my head test).
2. If it isn't, I can't move on until I completely understand what it is that has been generated, such that I would be able to recreate it myself.
3. I can create debt (I believe this is being called Cognitive Debt) by breaking rule 2, but it must be paid in full for me to declare a project complete.
Accumulating debt increases the chances that code I generate afterwards is of lower quality, and it also feels like the debt is compounding.
I'm also not really sure how these rules scale to serious projects. So far I've only been applying these to my personal projects. It's been a real joy to use agents this way though. I've been learning a lot, and I end up with a codebase that I understand to a comfortable level.
snowe2010 3 hours ago [-]
> The other change is simpler: I'm doing the design work myself, by hand, before any code gets written. Not a vague doc. Concrete interfaces, message types, ownership rules.
That’s the hard part of coding. If you have an architecture, then writing the code is dead simple. If you aren’t writing the code, you aren’t going to notice when you architected an API that allows nulls but your database doesn’t. Or that it does allow them, but there’s some other small issue you never accounted for.
I do not know how you can write this article and not realize the problem is the AI. Not that you let it architect, but that you weren’t paying attention to every single thing it does. It’s a glorified code generator. You need to be checking everything it does.
The hard part of software engineering was never writing code. Junior devs know how to write code. The hard part is everything else.
mikepurvis 2 hours ago [-]
I agree with what you're saying, but I think we do have a problem right now with definitions. There are a lot of people basically getting supercharged tab completions or running a chatbot or two in a parallel pane, but still clearly reviewing everything; and on the other side of things is freaking Steve Yegge pitching a whole new editor that lets you orchestrate a dozen or more agents all vibing away on code you're apparently never going to read more than a line or two of: https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
The first group are still thinking fairly deeply about design and interfaces and data structures, and are doing fairly heavy review in those areas. The second group are not, and those are the ones that I find a bit more worrisome.
RossBencina 1 hours ago [-]
> The first group are still thinking fairly deeply about design and interfaces and data structures, and are doing fairly heavy review in those areas.
I can't speak for others, but I'd go further and say that LLMs allow me to go deeper on the design side. I can survey alternative data structures, brainstorm conversationally, play design golf, work out a consistent domain taxonomy and from there function, data structure and field names, draft and redraft code, and then rewrite or edit the code myself when the AI cost/benefit trade off breaks down.
skydhash 1 hours ago [-]
> The first group are still thinking fairly deeply about design and interfaces and data structures, and are doing fairly heavy review in those areas
I worry about the first group too, because interfaces and data structures are the map, not the territory. When you create a glossary, it is to compose a message that transmits a specific idea. I find invariably that people who focus on code that much often forget the main purpose of the program in favor of small features (the ticket). And that has accelerated with LLM tooling.
I believe most of us who are not so keen on AI tooling are always thinking about the program first, then the various parts, then the code. If you focus on a specific part, you make sure that you have well-defined contracts with the other parts that guarantee the correctness of the whole. If you need to change a contract, you change it with regard to the whole thing, not the specific part.
The issue with most LLM tools is that they’re linear. They can follow patterns well, and agents can have feedback loops that correct them. But contracts are multi-dimensional forces that shape a solution. That solution appears more like a collapsing wave function than a linear prediction.
staplers 2 hours ago [-]
> You need to be checking everything it does.
This is what seems to be lost on so many. As someone with relatively little code experience, I find myself learning more than ever by checking the results and what went right/wrong.
This is also why I don't see it getting better anytime soon. So many people ask me "how do you get your claude to have such good output?" and the answer is always "I paid attention and spotted problems and asked claude to fix them." And it's literally that simple but I can see their eyes already glazing over.
Just as Google made finding information easier, it didn't fix the human element of distinguishing quality information from poor information.
skydhash 2 hours ago [-]
And when you get familiar with the other parts, you realize that writing code is the most enjoyable one. More often than not, you’re either balancing trade-offs or researching which factors you missed in the previous balancing. When you get to writing code, it’s with a sigh of relief, as it means you understand the problem well enough to try a possible solution.
You can skip all that and go directly to writing code. But that means you’ve replaced a few hours of planning with a few weeks of coding.
plastic041 3 hours ago [-]
Title says
> back to writing code by hand
But what they are doing is
> doing the __design work__ myself, by hand, before any code gets written.
So... Claude is still generating the code, I guess?
And seriously, I can't understand that they thought their vibe coded project works fine and even bought a domain for the project without ever looking at source code it generated, FOR 7 MONTHS??
dewey 1 hours ago [-]
I bought domains for projects minutes after the idea.
I don’t think it’s that weird to not look at the code if it’s a side project and you follow along incrementally via diffs. It’s definitely a different way of working but it’s not that crazy.
bayarearefugee 19 minutes ago [-]
> I don’t think it’s that weird to not look at the code if it’s a side project and you follow along incrementally via diffs.
It's not weird to not look at the code, as long as you're looking at the code? (diffs?)
Uh, ok
viceconsole 1 hours ago [-]
> Vibe-coding makes you feel like you have infinite implementation budget. You don't. You have infinite LINE budget (the AI will generate as much code as you want). But you have the same finite complexity budget as always.
This is a special case of a general fundamental point I'm struggling with.
Let's assume AI has reduced the marginal cost of code to zero. So our supply of code is now infinite.
Meanwhile, other critical factors continue to be finite: time in a day, attention, interest, goodwill, paying customers, money, energy.
So how do you choose what to build?
Like a genie, the tools give us the power to ask for whatever we want. And like a genie, it turns out we often don't really know what we want.
xantronix 3 hours ago [-]
So you're not actually writing code by hand? I'm very confused by the difference between the title and the conclusion here.
rane 53 minutes ago [-]
The point was to come up with a sensationalist headline that HN eats up, and the post flies to the front page.
shahbaby 3 hours ago [-]
This reads too much like it was LLM generated. I can't say for sure if it was but I have an allergic reaction to the short snappy know-it-all LLM writing style.
baxtr 29 minutes ago [-]
Writing code by hand, but blog posts are written by LLMs?
fromwilliam 1 hours ago [-]
yeah, it set off my llm radar too
erelong 4 hours ago [-]
Can't you just ask AI to break up large files into smaller ones and also explain how the code works so you can understand it, instead of starting over from scratch?
dropbox_miner 3 hours ago [-]
That was actually the first thing I tried. It did a good job at explaining the code base mess and the architecture. Then I ran 3-4 refactor attempts. Each one broke things in ways that were harder to debug than the original mess. The god object had so many implicit dependencies that pulling one thread unraveled something else. And each attempt burned through my daily Claude usage limit before the refactor was stable.
And I'm sure the rewrite is going to teach me a whole different set of lessons...
tres 2 hours ago [-]
What's your test coverage like?
Not sure why good coverage wouldn't mitigate risk in a refactor...
My mantra whenever I'm working with AI is that I want it to know what "point b" looks like and be able to tell by itself whether it's gotten there...
If you have a working implementation, it sounds like you have a basis for automated tests to be written... once you have that (assuming that the tests are written to test the interface rather than the implementation), then it should be fairly direct to have an agent extract and decompose...
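A minimal sketch of that distinction, using a hypothetical function rather than anything from the project: the test pins the public contract, so an agent is free to decompose the internals without breaking it.

```rust
// Hypothetical example: a test pinned to the public interface, not the
// implementation, so an agent can decompose the internals freely.
pub fn parse_resource(arg: &str) -> Option<(String, String)> {
    // Public contract: "kind/name" -> (kind, name), both parts non-empty.
    // How this is implemented internally is free to change in a refactor.
    let (kind, name) = arg.split_once('/')?;
    if kind.is_empty() || name.is_empty() {
        return None;
    }
    Some((kind.to_string(), name.to_string()))
}

fn main() {
    // Interface-level assertions survive any internal rewrite.
    assert_eq!(
        parse_resource("pods/api"),
        Some(("pods".to_string(), "api".to_string()))
    );
    assert_eq!(parse_resource("pods"), None);
    assert_eq!(parse_resource("/api"), None);
    println!("interface tests pass");
}
```

Tests written against `parse_resource`'s signature stay green no matter how the body is split up; tests written against its internals would not.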
striking 3 hours ago [-]
I'm currently working on the discovery phase of a larger refactor and have pretty quickly realized that AI can actually often be pretty useless even if you've encoded the rules in an unambiguous, programmatic way.
For example, consider a lint rule that bans Kysely queries on certain tables from existing outside of a specific folder. You'd write a rule like this in an effort to pull reads and writes on a certain domain into one place, hoping you can just hand the lint violations to your AI agent and it would split your queries into service calls as needed.
And at first, it will appear to have Just Worked™. You are feeling the AGI. Right up until you start to review the output carefully. Because there are now little discrepancies in the new queries written (like not distinguishing between calls to the primary vs. the replica, missing the point of a certain LIMIT or ORDER BY clause, failing to appropriately rewrite a condition or SELECT, etc.) You run a few more reviewer agent passes over it, but realize your efforts are entirely in vain... because even if the reviewer agent fixes 10 or 20 or 30 of the issues, you can still never fully trust the output.
As someone with experience doing this kind of thing before AI, I went back to doing it the old way: using a codemod to rewrite the code automatically using a series of rules. AI can write the codemod, AI can help me evaluate the results, but having it apply all of the few hundred changes directly left me unable to trust the output. And I suspect that will continue to be true for some time.
This industry needs a "verification layer" that, as far as I know, it does not have yet. Some part of me hopes that someone will reply to this comment with a counterexample, because I could sorely use one.
joshuanapoli 3 hours ago [-]
Rewrite following a new architecture plan could get finished pretty quickly, treating the original as a prototype.
SpicyLemonZest 3 hours ago [-]
When people talk about codebases being "incomprehensible", it's not always hyperbole. Sometimes the architecture literally cannot be broken up or understood.
whattheheckheck 3 hours ago [-]
I find that really hard to believe. It's not like curing cancer
NichoPaolucci 2 hours ago [-]
While I mostly agree - science is built on truths, but code has a large amount of creativity and freedom built into its decisions. Some codebases will be documented and follow rigorous conventions and deliberate design decisions. Others will just be an absolute legacy mess of 20 years of odd decisions made by people who may have not known what they were doing. Like an art piece that you don’t really “understand”.
pixl97 2 hours ago [-]
When you see some legacy C++ codebase with millions of lines of code, catching cancer and slowly dying from it is more human than trying to unscrew that mess.
A really screwed code base blows out your context window and just starts burning tokens as the AI works out a way to kill -9 itself to escape the hell you're subjecting it to.
chamomeal 3 hours ago [-]
No but it can be a rube goldberg machine of insanity
SpicyLemonZest 1 hours ago [-]
[flagged]
RuoqiJin 44 minutes ago [-]
This is Claude's problem.
Compared to GPT-5.5, Claude Code prefers to take shortcuts. I've tested having the Codex app (GPT-5.5) and Claude Code (Opus 4.7) do the same thing - if Claude Code followed GPT-5.5's requirements, its execution time for a task would stretch from 5 minutes to 40 minutes.
To solve macro-architecture problems, I use Lisp to write the entire program's framework. Lisp replaces architecture documents, because I believe it has high semantic density, syntax restrictions, and checkers to assist.
This way, at least I no longer have to rework anything. I've used this method to refactor 20+ of my projects.
archleaf 4 hours ago [-]
So what you really mean is you are going to do better and more detailed skills files so you can get an architecture that you've thought through rather than something random?
dropbox_miner 4 hours ago [-]
Partly, but the order matters. The CLAUDE.md constraints only work if you designed the architecture first. They're just how you communicate it to the AI. The mistake I made wasn't writing bad skills files, it was not designing anything at all and expecting the AI to make coherent structural decisions across 30 sessions.
The rewrite is me sitting down with a blank doc and drawing the boxes before any code exists. Then the CLAUDE.md enforces what I already decided. Whether that actually holds up as the project grows, I genuinely don't know yet.
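For illustration only, a hypothetical sketch of what constraints like these could look like in a CLAUDE.md (the module names are invented, not from the post):

```markdown
# Architecture (decided by hand, do not revisit)

- Strict Elm-style loop: every state change is a Msg handled in `update()`; `view()` is pure rendering.
- All Kubernetes API calls live in `src/client/`; views never touch the API directly.
- One owner per piece of state. No new top-level modules without updating this file first.
```

The point being that the file records decisions already made, rather than asking the model to make them.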
cpncrunch 4 hours ago [-]
Are you really saving any time using AI at all, then? If you have to write the architecture for it, write all the rules you want it to follow, check everything it's written, and then reprompt it because it's not how you want it?
SpicyLemonZest 3 hours ago [-]
Yes. I do all of this and I'd estimate 50-100% coding time savings. A lot of that comes from better multitasking over single-workstream throughput, which I suppose might compromise the gains depending on what you're doing. For me it amplifies the speedup by allowing some of my "coding time" to be spent on non-coding tasks too.
cpncrunch 3 hours ago [-]
But even if coding time is reduced by half, is that worth the downsides? Coding has never really been a major percentage of my time.
SpicyLemonZest 1 hours ago [-]
I could be wrong in some subtle way I'm not seeing, but I believe the model we're working in avoids the downsides. I actually think my review bar is slightly higher now, because I don't feel as much pressure to compromise my standards when I know Claude is capable of writing the code I want.
binyu 4 hours ago [-]
> I'm rewriting k10s in Rust. Not because Rust is better, but because it's the language I can steer. I've written enough of it to feel when something's wrong before I can articulate why. That instinct is the one thing vibe-coding can't replace. The AI hands you plausible-looking code. You need a nose for when it's garbage.
Isn't Golang relatively easier to read than Rust? I was under the impression that Rust is a more complex language syntactically.
> The other change is simpler: I'm doing the design work myself, by hand, before any code gets written. Not a vague doc. Concrete interfaces, message types, ownership rules. The architecture decisions that the AI kept making wrong are now made in writing before the first prompt.
This post is good to grasp the difference between "vibe-coding" and using the AI to help with design and architectural choices done by a competent programmer (I am not saying you are not one). Lately I feel that Opus 4.7 involves the user a lot more, even when given a prompt to one-shot a particular piece of software.
dropbox_miner 4 hours ago [-]
Go reads fine whether the architecture is good or bad, and I couldn't tell the difference until I was in trouble. Rust is harder to read but harder to misuse. The borrow checker would have caught that data race at compile time. I've also just written more Rust. That familiarity matters separately.
+1 on Opus 4.7 involving the user a lot more. Right now I'm trying to get to a state where I can codify my design + decision preferences as agent personas and push myself out of the dev loop.
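The compile-time catch described above can be sketched with a toy example (not code from k10s): Go will happily compile unsynchronized concurrent writes and only race at runtime, while Rust refuses to compile them unless the shared state is wrapped, e.g. in an `Arc<Mutex<_>>`.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Shared counter incremented from several threads. Remove the Mutex and
// try to take `&mut` to the integer from each thread, and this no longer
// compiles: the borrow checker rejects the data race outright.
fn count_in_parallel(threads: usize, increments: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..increments {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    println!("{}", count_in_parallel(4, 1000)); // 4000, every run
}
```

The equivalent Go program without a mutex compiles fine and miscounts nondeterministically; that difference is the "caught at compile time" claim.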
binyu 3 hours ago [-]
Gotcha, that implies you are going to read the code that the AI produces anyways.
> Go reads fine whether the architecture is good or bad
Were you reading the Golang code all along and got fooled or did you review it after it failed? Sorry I admit I didn't read the whole article.
williamstein 3 hours ago [-]
He was NOT reading the code: "For 7 months I'd been prompting and shipping without ever sitting down and actually reading the code Claude wrote."
binyu 3 hours ago [-]
Right, thank you. Personally I think reading all the code that the AI produces is impossible and kind of defeats the purpose of using it. The key is to devise a structured way to interact with it (skills and similar) and use extensive testing along the way to verify the work at all steps.
cortesoft 2 hours ago [-]
> Isn't Golang relatively easier to read than Rust? I was under the impression that Rust is a more complex language syntactically
It sounds like the author knows Rust, and might not be as familiar with Go.
A language that you are proficient in is always going to be easier to read than one you don’t know, even if the latter is an objectively easier language to read in general.
rnxrx 47 minutes ago [-]
I'm not sure we'll ever really be free of the GIGO (garbage in / garbage out) principle. Tools will get better and better, but can never be a substitute for a deep understanding of the thing we want to create.
> I typed :rs pods to switch back to the pods view. Nothing rendered. The table was empty...
> now something was fundamentally broken and I couldn't just prompt my way out of it.
Hey I don't want to over simplify, I'm sure it was complicated, but did the author have functional tests for these broken views? As long as there are functional tests passing on the previous commit I'd have thought that claude could look at the end situation and work out how to get the desired feature without breaking the other stuff.
TUIs aren't an exception, it's still essential to have a way to end-to-end test each view.
cortesoft 2 hours ago [-]
What has really allowed AI coding to keep working as the project got bigger was using speckit. It has been great at keeping the code consistent across features.
AI writes what you ask it to write; you need to talk to it about architecture. You should have an architecture doc so the AI can shape the code based on it, and you can have the AI draft the architecture doc too. If using Claude, you can use the software architecture mode for this.
jesse_dot_id 1 hours ago [-]
LLMs assist those of us who were apt to take blocks of code from StackOverflow, or wherever, to solve problems quickly and avoid as much of the aggravating and slow toil of trial and error as possible.
That trial and error process is still happening with a LLM, but much faster, and with instantaneous cross-references to various forms of documentation that I would be looking up myself otherwise. It produces code of a quality that is dependent on the engineer knowing what they want in the first place and prompting for it and refining its output correctly.
It's the exact same process of sculpting code that the majority of the industry was doing "by hand" prior to the release of LLMs, but faster, and the harnesses are only getting better. To "vibe code" is to prompt vaguely and ignore the quality of the output. You're coming to a forum full of professionals and essentially telling us that you're getting really frustrated with your Scratch project.
I don't know if you're trying to lead a charge or whatever but good luck with that. As a senior SWE, it is clear to me that this is the new paradigm until something better than LLMs comes along. My workflows and efficiency have been vastly improved. I will admit that I have never really been a "I made a SMTP server in 3k of Rust" kind of guy, though.
sakesun 2 hours ago [-]
A coder typing in code is not solely generating an outcome. It's part of an ongoing thinking process. Without this ongoing process, we have no material to keep iterating forward.
amelius 2 hours ago [-]
So how are people writing the specifications for AI?
Do they write empty functions and let AI fill them in?
Or do they use some kind of specification language?
Are people designing those languages?
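One common answer to those questions, offered as a hypothetical sketch rather than anything from the thread: write the interface and its invariants by hand, and leave bodies as `todo!()` stubs for the agent to fill in.

```rust
/// Hand-written spec: the trait is the contract, and the doc comments
/// carry the invariants an implementation must uphold.
trait PodView {
    /// Render one row per pod; must not panic on an empty cluster.
    fn rows(&self) -> Vec<String>;
}

struct StaticPods {
    names: Vec<String>,
}

impl PodView for StaticPods {
    fn rows(&self) -> Vec<String> {
        // A hand-written reference implementation; in the stub-first
        // workflow this body would start life as `todo!()`.
        self.names.iter().map(|n| format!("pod/{}", n)).collect()
    }
}

fn main() {
    let view = StaticPods {
        names: vec!["api".to_string(), "worker".to_string()],
    };
    assert_eq!(view.rows(), vec!["pod/api", "pod/worker"]);
    println!("{:?}", view.rows());
}
```

The type signatures and doc comments act as the "specification language", and the compiler enforces at least the structural half of them.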
d_silin 2 hours ago [-]
It absolutely looks like AI psychosis.
ipaddr 3 hours ago [-]
When he mentions pushing commits at work for as long as his tokens last, I can understand that. Managing tokens has become an important skill.
EMM_386 3 hours ago [-]
You don't need to go back to coding by hand if you know how to do it already. There is a middle ground.
If you understand good software architecture, architect it. Create a markdown document just as you would if you had a team of engineers working with you and would hand off to them. Be specific.
Let the AI do the implementation of your architecture.
apt-apt-apt-apt 3 hours ago [-]
Outright lie clickbait. As he states himself, he's doing the design work by hand, and will likely still use AI to write code.
ljoshua 2 hours ago [-]
> tl;dr: AI writes features, not architecture.
This. I definitely agree with this statement at this point in AI-assisted development. This gets at the "taste" factor that is still intrinsically human, especially in software engineering. If you can construct and guide the overall architecture of an application or system, AI can conceivably fill in the smaller feature bits, and do so well. But it must have a strong architecture and opinionated field in which to play.
mpurbo 4 hours ago [-]
Strict SDD might help to constrain and harness the process.
epec254 4 hours ago [-]
Not sure if just me, but this post feels AI written?
pipeline_peak 3 hours ago [-]
Feels a bit too long winded to be AI generated.
royal__ 2 hours ago [-]
The title is just flat out wrong. The author isn't going back to writing code by hand, they're plopping some new stuff into their CLAUDE.md to "fix" the issues they see AI is having.
nothinkjustai 3 hours ago [-]
Writing code "by hand" is redundant. You don’t write code with AI; AI doesn’t write, it generates.
nothinkjustai 16 minutes ago [-]
I don’t really think OP is writing code themselves, since they admit they still use agents for code gen. I’ve really scaled back the amount I use agents, though, because in the medium to long term I haven’t been getting good results with them. And it’s not enjoyable. That’s enough for me. I’ll do whatever for a job, because who cares - if the company wants slop I will gladly give them that - but for my own shit I’ve gone back to circa 2024 and am mostly just using them as a chatbot.
Inb4 “you’re gonna be replaced” god damn it I hope so, I do not want to spend the rest of my life behind a computer screen…
AIorNot 4 hours ago [-]
This doesn't make much sense; the article itself is AI-written.
It would have been easy to run a few AI agents to review the code and find these issues, and to architect it clearly.
scuff3d 2 hours ago [-]
I feel like this article was circling a point it never actually got to. All the advice in here (except controlling scope creep) is specific to a TUI with an Elm-like architecture.
But here's the thing: you almost never know what the architecture is up front. If you do, you probably aren't the one writing the actual code anymore. Writing the code, with or without an AI, is part of the design process. For most people it isn't until they've tried several times, fucked it up a bunch, and refactored or rewrote even more that they actually know what the architecture needs to be.
imperio59 2 hours ago [-]
Alternate title: "I did not understand the current limitations of AI and assumed it could do large software design and it generated spaghetti slop"
Yea, that's why engineers are still very important for now (until models can do this type of longer term designs and stick to them).
kypro 3 hours ago [-]
> I learned over these 7 months
7 months ago was early November. Coding assistants were getting very good back then, but they were still significantly poorer at making good architectural decisions in my experience. They tended to just force features into the existing code base without much thought or care.
Today I've noticed assistants tend to spot architectural smells while working and will ask you whether they should try to address it, but even then they're probably never going to suggest a full refactor of the codebase (which probably is generally the correct heuristic).
My guess is that if you built this today with AI, you wouldn't run into so many of these problems. That's not to say you should build blind, but the first thing that stood out to me was that you started building 7 months ago, when coding assistants were only just becoming decent, and undirected they would still generally generate total slop.
photochemsyn 3 hours ago [-]
Does ‘writing code by hand’ mean you’re not going to use compilers to generate assembly?
Now I do feel lucky that I started learning coding about four years before the LLM revolution, but these things are really just natural language compilers, aren’t they? We’re just in that period - the 1980s, the greybeards tell me - where companies charged thousands of dollars per compiler instance, right? And now, I myself have never paid for a compiler.
This whole investor bubble will blow up in the face of the rentier-finance capitalists and I’ll be laughing my head off while it happens.
green_wheel 2 hours ago [-]
Nondeterministic natural language compilers
photochemsyn 27 minutes ago [-]
Just because the trajectory is chaotic doesn't mean it’s not deterministic.
platevoltage 1 hours ago [-]
So C++ doesn't count as code now.
Jatin-Mali 34 minutes ago [-]
[flagged]
andrew_kwak 58 minutes ago [-]
[flagged]
genghot 2 hours ago [-]
[flagged]
vladsiu 1 hours ago [-]
[dead]
Decabytes 2 hours ago [-]
We should go back to designing UML diagrams for programs before we write them /s
eggplantemoji69 2 hours ago [-]
TL;DR: AI wrote tech debt slop because I vibed for 7 months; now I am taking a hybrid approach of defining strict constraints before vibing…
gjvc 4 hours ago [-]
have another drink and drive yourself home
1. If I use a coding agent to generate code, it should be something I am absolutely confident I can code correctly myself given the time (gun to my head test).
2. If it isn't, I can't move on until I completely understand what it is that has been generated, such that I would be able to recreate it myself.
3. I can create debt (I believe this is being called Cognitive Debt) by breaking rule 2, but it must be paid in full for me to declare a project complete.
Accumulating debt increases the chances that code I generate afterwards is of lower quality, and it also feels like the debt is compounding.
I'm also not really sure how these rules scale to serious projects. So far I've only been applying these to my personal projects. It's been a real joy to use agents this way though. I've been learning a lot, and I end up with a codebase that I understand to a comfortable level.
That’s the hard part of coding. If you have an architecture then writing the code is dead simple. If you aren’t writing the code you aren’t going to notice when you architected an API that allows nulls but then your database doesn’t. Or that it does allow that but you realize some other small issue you never accounted for.
I do not know how you can write this article and not realize the problem is the AI. Not that you let it architect, but that you weren’t paying attention to every single thing it does. It’s a glorified code generator. You need to be checking every thing it does.
The hard part of software engineering was never writing code. Junior devs know how to write code. The hard part is everything else.
The first group are still thinking fairly deeply about design and interfaces and data structures, and are doing fairly heavy review in those areas. The second group are not, and those are the ones that I find a bit more worrisome.
I can't speak for others, but I'd go further and say that LLMs allow me to go deeper on the design side. I can survey alternative data structures, brainstorm conversationally, play design golf, work out a consistent domain taxonomy and from there function, data structure and field names, draft and redraft code, and then rewrite or edit the code myself when the AI cost/benefit trade off breaks down.
I worry about the first group too, because interfaces and data structures are the map, not the territory. When you create a glossary, it is to compose a message, that transmit a specific idea. I find invariably that people that focus on code that much often forgot the main purpose of the program in favor of small features (the ticket). And that has accelerated with LLM tooling.
I believe most of us that are not so keen on AI tooling are always thinking about the program first, then the various parts, then the code. If you focus on a specific part, you make sure that you have well defined contracts to the orther parts that guarantees the correctness of the whole. If you need to change the contract, you change it with regard to the whole thing, not the specific part.
The issue with most LLM tools is that they’re linear. They can follow patterns well, and agents can have feedback loop that correct it. But contracts are multi dimensional forces that shapes a solution. That solution appears more like a collapsing wave function than a linear prediction.
This is also why I don't see it getting better anytime soon. So many people ask me "how do you get your claude to have such good output?" and the answer is always "I paid attention and spotted problems and asked claude to fix them." And it's literally that simple but I can see their eyes already glazing over.
Just as google made finding information easier, it didn't fix the human element of deciphering quality information from poor information.
You can skip that and go directly to writing code. But that meant you replaced a few hours of planning with a few weeks of coding.
> back to writing code by hand
But what they are doing is
> doing the __design work__ myself, by hand, before any code gets written.
So... Claude still is generating the code I guess?
And seriously, I can't understand that they thought their vibe coded project works fine and even bought a domain for the project without ever looking at source code it generated, FOR 7 MONTHS??
I don’t think it’s that weird to not look at the code if it’s a side project and you follow along incrementally via diffs. It’s definitely a different way of working but it’s not that crazy.
Its not weird to not look at the code, as long as you're looking at the code? (diffs?)
Uh, ok
This is a special case of a general fundamental point I'm struggling with.
Let's assume AI has reduced the marginal cost of code to zero. So our supply of code is now infinite.
Meanwhile, other critical factors continue to be finite: time in a day, attention, interest, goodwill, paying customers, money, energy.
So how do you choose what to build?
Like a genie, the tools give us the power to ask for whatever we want. And like a genie, it turns out we often don't really know what we want.
And I'm sure the rewrite is going to teach me a whole different set of lessons...
Not sure why good coverage wouldn't mitigate risk in a refactor...
My mantra whenever I'm working with AI is that I want it to know what "point b" looks like and be able to tell by itself whether it's gotten there...
If you have a working implementation, it sounds like you have a basis for automated tests to be written... once you have that (assuming that the tests are written to test the interface rather than the implementation), then it should be fairly direct to have an agent extract and decompose...
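To make the "test the interface rather than the implementation" distinction concrete, here is a minimal sketch. The `slugify` function and its test cases are hypothetical stand-ins; the point is that the assertions exercise only the public contract, so an agent can extract, decompose, or rewrite the body without the tests caring how it's done.

```typescript
// Hypothetical public interface: a title-to-URL-slug function. The tests
// below never look inside it, so any behavior-preserving refactor passes.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // any run of non-alphanumerics becomes one dash
    .replace(/^-+|-+$/g, "");    // trim leading/trailing dashes
}

// Contract tests: input/output pairs asserting observable behavior only.
const cases: [string, string][] = [
  ["Hello, World!", "hello-world"],
  ["  spaced  out  ", "spaced-out"],
  ["already-slugged", "already-slugged"],
];
```

With a suite like this passing against the working implementation, the agent's job becomes "keep these green while restructuring," which is a much more verifiable target.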
For example, consider a lint rule that bans Kysely queries on certain tables from existing outside of a specific folder. You'd write a rule like this in an effort to pull reads and writes on a certain domain into one place, hoping you can just hand the lint violations to your AI agent and have it split your queries into service calls as needed.
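A minimal sketch of that kind of rule, to make it concrete. A real version would be an ESLint rule walking the AST; this regex check only illustrates the idea, and the table names and allowed folder are hypothetical.

```typescript
// Flag Kysely-style query calls (e.g. db.selectFrom('users')) on restricted
// tables when they appear in files outside the one allowed folder.
const RESTRICTED_TABLES = ["users", "orders"]; // hypothetical domain tables
const ALLOWED_FOLDER = "src/services/";        // hypothetical service layer

function findViolations(filePath: string, source: string): string[] {
  if (filePath.startsWith(ALLOWED_FOLDER)) return []; // queries allowed here
  const violations: string[] = [];
  for (const table of RESTRICTED_TABLES) {
    // Matches calls like selectFrom('users'), insertInto("orders"), etc.
    const pattern = new RegExp(
      `\\.(selectFrom|insertInto|updateTable|deleteFrom)\\(['"]${table}['"]\\)`
    );
    if (pattern.test(source)) {
      violations.push(
        `${filePath}: direct query on '${table}' outside ${ALLOWED_FOLDER}`
      );
    }
  }
  return violations;
}
```

The rule's output is exactly the worklist you'd be tempted to hand to the agent.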
And at first, it will appear to have Just Worked™. You are feeling the AGI. Right up until you start to review the output carefully, because there are now little discrepancies in the newly written queries (like not distinguishing between calls to the primary vs. the replica, missing the point of a certain LIMIT or ORDER BY clause, failing to appropriately rewrite a condition or SELECT, etc.). You run a few more reviewer agent passes over it, but realize your efforts are entirely in vain... because even if the reviewer agent fixes 10 or 20 or 30 of the issues, you can still never fully trust the output.
As someone with experience doing this kind of thing before AI, I went back to the old way: using a codemod to rewrite the code automatically via a series of rules. AI can write the codemod, and AI can help me evaluate the results, but having it apply all of the few hundred changes itself left me unable to trust the output. And I suspect that will continue to be true for some time.
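A sketch of what that rule-driven codemod shape looks like: an ordered list of mechanical rewrites applied to source text, with a record of which rules fired so the results can be reviewed. The specific rewrites (routing direct queries through hypothetical `userService`/`orderService` calls) are illustrative; a real codemod would use an AST tool like jscodeshift or ts-morph rather than regexes.

```typescript
// Each rule is a deterministic transform: same input, same output, every time.
type Rule = { name: string; pattern: RegExp; replacement: string };

const rules: Rule[] = [
  {
    name: "users reads go through userService",
    pattern: /db\.selectFrom\(['"]users['"]\)/g,
    replacement: "userService.query()",
  },
  {
    name: "orders writes go through orderService",
    pattern: /db\.insertInto\(['"]orders['"]\)/g,
    replacement: "orderService.insert()",
  },
];

function applyRules(source: string): { output: string; applied: string[] } {
  const applied: string[] = [];
  let output = source;
  for (const rule of rules) {
    const next = output.replace(rule.pattern, rule.replacement);
    if (next !== output) applied.push(rule.name); // log which rules fired, for review
    output = next;
  }
  return { output, applied };
}
```

The trust difference is that the transform is inspectable once and then applies identically everywhere, instead of a few hundred independently generated edits.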
This industry needs a "verification layer" that, as far as I know, it does not have yet. Some part of me hopes that someone will reply to this comment with a counterexample, because I could sorely use one.
A really screwed code base blows out your context window and just starts burning tokens as the AI works out a way to kill -9 itself to escape the hell you're subjecting it to.
The rewrite is me sitting down with a blank doc and drawing the boxes before any code exists. Then the CLAUDE.md enforces what I already decided. Whether that actually holds up as the project grows, I genuinely don't know yet.
Isn't Golang relatively easier to read than Rust? I was under the impression that Rust is a more complex language syntactically.
> The other change is simpler: I'm doing the design work myself, by hand, before any code gets written. Not a vague doc. Concrete interfaces, message types, ownership rules. The architecture decisions that the AI kept making wrong are now made in writing before the first prompt.
This post is good for grasping the difference between "vibe-coding" and using the AI to assist with design and architectural choices made by a competent programmer (I am not saying you are not one). Lately I feel that Opus 4.7 involves the user a lot more, even when given a prompt to one-shot a particular piece of software.
+1 on Opus 4.7 involving the user a lot more. Right now I'm trying to get to a state where I can codify my design and decision preferences as agent personas and push myself out of the dev loop.
> Go reads fine whether the architecture is good or bad
Were you reading the Go code all along and got fooled, or did you only review it after it failed? Sorry, I admit I didn't read the whole article.
It sounds like the author knows Rust, and might not be as familiar with Go.
A language you are proficient in is always going to be easier to read than one you aren't, even if the latter is objectively easier to read in general.
Hey, I don't want to oversimplify, I'm sure it was complicated, but did the author have functional tests for these broken views? As long as there were functional tests passing on the previous commit, I'd have thought that claude could look at the end state and work out how to get the desired feature without breaking the other stuff.
TUIs aren't an exception: it's still essential to have a way to end-to-end test each view.
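One common way to do that is golden-frame testing: render the view to a plain string and compare it against a known-good snapshot. The task-list view below is hypothetical, but frameworks like Bubble Tea (Go) and Ink (TypeScript) expose similar render-to-string seams.

```typescript
// Hypothetical view model for a simple task-list TUI screen.
interface TaskModel {
  title: string;
  tasks: { label: string; done: boolean }[];
}

// Pure render function: model in, frame out. Keeping views pure makes them
// testable without a real terminal.
function renderTaskView(model: TaskModel): string {
  const lines = [model.title, "-".repeat(model.title.length)];
  for (const t of model.tasks) {
    lines.push(`[${t.done ? "x" : " "}] ${t.label}`);
  }
  return lines.join("\n");
}

// Golden frame: a layout or state-handling regression shows up as a diff
// against this expected output.
const golden = ["Today", "-----", "[x] write tests", "[ ] refactor"].join("\n");
```

With a golden frame per view, an agent (or a human) can verify that a refactor left every screen pixel-for-pixel identical.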
https://github.com/github/spec-kit
That trial and error process is still happening with a LLM, but much faster, and with instantaneous cross-references to various forms of documentation that I would be looking up myself otherwise. It produces code of a quality that is dependent on the engineer knowing what they want in the first place and prompting for it and refining its output correctly.
It's the exact same process of sculpting code that the majority of the industry was doing "by hand" prior to the release of LLMs, but faster, and the harnesses are only getting better. To "vibe code" is to prompt vaguely and ignore the quality of the output. You're coming to a forum full of professionals and essentially telling us that you're getting really frustrated with your Scratch project.
I don't know if you're trying to lead a charge or whatever but good luck with that. As a senior SWE, it is clear to me that this is the new paradigm until something better than LLMs comes along. My workflows and efficiency have been vastly improved. I will admit that I have never really been a "I made a SMTP server in 3k of Rust" kind of guy, though.
Do they write empty functions and let AI fill them in?
Or do they use some kind of specification language?
Are people designing those languages?
If you understand good software architecture, architect it. Create a markdown document just as you would if you had a team of engineers working with you and would hand off to them. Be specific.
Let the AI do the implementation of your architecture.
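As a hedged illustration, a fragment of such a handoff document might look like this (the service, interfaces, and rules here are entirely hypothetical):

```markdown
## Architecture: job queue service

### Interfaces
- `JobStore`: `enqueue(job)`, `claim(workerId)`, `complete(jobId)` — the only
  component allowed to touch the `jobs` table.
- `Worker`: pulls work via `JobStore.claim`; never queries the database directly.

### Message types
- `Job { id, kind, payload, attempts }` — `payload` is opaque JSON; workers
  dispatch on `kind` only.

### Ownership rules
- `JobStore` owns all job state transitions; workers may not set status fields.
- On failure, `JobStore` re-enqueues with a max of 3 attempts, then dead-letters.
```

The point is that every decision the AI tends to get wrong (boundaries, ownership, data shapes) is already made, in writing, before the first prompt.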
This. I definitely agree with this statement at this point in AI-assisted development. This gets at the "taste" factor that is still intrinsically human, especially in software engineering. If you can construct and guide the overall architecture of an application or system, AI can conceivably fill in the smaller feature bits, and do so well. But it must have a strong architecture and opinionated field in which to play.
Inb4 “you’re gonna be replaced”: god damn it, I hope so. I do not want to spend the rest of my life behind a computer screen…
It would have been easy to run a few AI agents to review the code, find these issues, and architect it cleanly as well.
But here's the thing: you almost never know what the architecture is up front. If you do, you probably aren't the one writing the actual code anymore. Writing the code, with or without an AI, is part of the design process. For most people, it isn't until they've tried several times, fucked it up a bunch, and refactored or rewritten even more that they actually know what the architecture needs to be.
Yea, that's why engineers are still very important for now (until models can do this type of longer-term design and stick to it).
7 months ago was early November. Coding assistants were getting very good back then, but they were still significantly poorer at making good architectural decisions in my experience. They tended to just force features into the existing code base without much thought or care.
Today I've noticed assistants tend to spot architectural smells while working and will ask you whether they should try to address them, but even then they're probably never going to suggest a full refactor of the codebase (which is generally the correct heuristic).
My guess is that if you built this today with AI, you wouldn't run into so many of these problems. That's not to say you should build blind, but the first thing that stood out to me was that you started building 7 months ago, and coding assistants were only just becoming decent at that time; undirected, they would still generally generate total slop.
Now I do feel lucky that I started learning to code about four years before the LLM revolution, but these things are really just natural language compilers, aren't they? We're just in that period (the 1980s, the greybeards tell me) where companies charged thousands of dollars per compiler instance, right? And now, I myself have never paid for a compiler.
This whole investor bubble will blow up in the face of the rentier-finance capitalists and I’ll be laughing my head off while it happens.