1000% agree. I am increasingly hesitant to believe Anthropic's continual war drum of "build for the capabilities of future models, they'll get better".
We've got a QA agent that needs to run through, say, 200 markdown files of requirements in a browser session. It's a cool system that has really helped improve our team's efficiency. For the longest time we tried everything to get a prompt like the following working: "Look in this directory at the requirements files. For each requirement file, create a todo list item to determine if the application meets the requirements outlined in that file". In other words: letting the model manage the high-level control flow.
This started breaking down after ~30 files. Sometimes it would miss a file. Sometimes it would triple-test a bundle of files and take 10 minutes instead of 3. An error in one file would convince it that it needed to re-test four previous files, for no reason. It was very frustrating. We quickly discovered during testing that there was no consistency to its (Opus 4.6 and GPT 5.4 IIRC) ability to actually orchestrate the workflow. Sometimes it would work, sometimes it wouldn't. I've also tested it once or twice against Opus 4.7 and GPT 5.5; not as extensively, but it seems to have the same problems.
We ended up creating a super basic deterministic harness around the model. For each test case, trigger the model to test that test case, store results in an array, write results to file. This has made the system a billion times more reliable. But it's also made the agent impossible to run on any managed agent platform (Cursor Cloud Agents, Anthropic, etc.) because they're all so gigapilled on "the agent has to run everything" that they can't see how valuable these systems can be if you just add a wee bit of determinism to them at the right place.
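The whole harness is basically the loop below (a minimal sketch, not our actual code; the paths and the `claude -p`-style CLI invocation are illustrative stand-ins for whatever model call you use):

```python
import json
import subprocess
from pathlib import Path

REQUIREMENTS_DIR = Path("requirements")  # illustrative paths
RESULTS_FILE = Path("qa_results.json")

def test_requirement(req_file: Path) -> dict:
    # One bounded model invocation per file; the harness, not the model,
    # owns the control flow.
    prompt = (
        "Verify in a browser session that the application meets the "
        "requirements below. Reply with JSON {\"pass\": bool, \"notes\": str}.\n\n"
        + req_file.read_text()
    )
    out = subprocess.run(["claude", "-p", prompt],
                         capture_output=True, text=True, timeout=600)
    return {"file": req_file.name, "result": out.stdout.strip()}

results = []
for req_file in sorted(REQUIREMENTS_DIR.glob("*.md")):  # every file, exactly once
    results.append(test_requirement(req_file))
    RESULTS_FILE.write_text(json.dumps(results, indent=2))  # checkpoint each case
```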
DrewADesign 12 hours ago [-]
I used to assume they pushed people into the prompt-only workflows because you’re paying them for the tokens, and not paying them for the scaffolding you built. However, I think what they’re really worried about is that a person needs to design and implement that stuff… It throws a wet blanket on their insistence that this will replace entire people in entire workflows or even projects, and I just don’t buy it. I do think it’s going to increase productivity enough to disastrously affect the developer job market/pay scale, but I just don’t think this particular version of this particular technology is going to actually do what they say it will. If they said they were spending this much money bootstrapping a super useful thingy that can reduce a big chunk of the busy work of a human dev team (what most developers really want, and most executives really don’t), a bunch of investors would make them walk the plank.
I also think having granular, tightly controlled steps is much friendlier to implementing smaller, cheaper, more specialized models rather than using some ginormous behemoth of a model that can automate your tests, or crank out 5 novels of CSI fan fic in a snap.
cogman10 11 hours ago [-]
> However, I think what they’re really worried about is that a person needs to design and implement that stuff… It throws a wet blanket on their insistence that this will replace entire people in entire workflows or even projects, and I just don’t buy it.
I think you are on to something. But I also think this sort of system lends itself to not needing really good LLMs to do impressive things. I've noticed that the quality of a lot of these LLMs just gets worse the more datapoints they need to track. But if you break it up into smaller and easier-to-consume chunks, all of a sudden you need a much less capable LLM to get results comparable to or better than the SOTA.
Why pay extra money for Opus 4.7 when you could run Qwen 3.6 35b for free and get similar results?
aleqs 3 hours ago [-]
Indeed. I've been experimenting with agent workflows for complicated tasks, where I essentially have a graph of agents with different roles/capabilities, including such things as breaking down complex tasks into simpler ones. There seems to be a point where a complex enough task is better performed by a group of cheaper agents/models than by one agent using one of the SOTA big models, in terms of both quality and cost.
devin 6 hours ago [-]
And then you realize that what you’re using the smaller models for is ALSO decomposable and part of it is just a few if statements, and then you realize that for this feature you don’t actually need or want a model because the performance, reliability, reproducibility are cheaper and better for you and your users.
jimbokun 5 hours ago [-]
So you have the model write the if statements and put itself out of a job.
tempest_ 8 hours ago [-]
It is also interesting because you get people arguing about the effectiveness of various models while doing very different things with them.
It's one thing for a model to be very clearly instructed to add a REST endpoint to an existing Django app and add a button connected to it on the front end, vs. "Design me a youtube". The smaller models can pretty dependably do the first and fall flat on the second.
pishpash 11 hours ago [-]
Aren't they just buying time to build you whatever harness you need? They want to be the only software engineering shop in the world.
user34283 10 hours ago [-]
The designing and implementing of a code harness in your workflow can be as simple as running something like /skill-builder.
You prompt for what you want it to do, and it will write e.g. Python scripts as needed for the looping part, and for example use claude -p for the LLM call.
You can build this in 10 minutes.
I don’t use a cloud platform, so I can’t comment on that part. I‘d say just run it on your own hardware, it’s probably cheaper too.
fny 4 hours ago [-]
Secret: "compile" that orchestration prompt. Determinism is solved by turning prompts into code that can in turn run agents or run code or both.
Everyone misses this pattern with skills: you can just drop code alongside a SKILL.md to guarantee certain behaviors, but for some reason everyone's addicted to writing prompts. You don't even need to build a CLI. A simple skill.py with tasks does it. You can even have helpers that call `claude -p`!
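i.e. something like this sitting next to the SKILL.md (a sketch; the task names and the ruff lint step are placeholders for whatever your skill actually needs to guarantee):

```python
# skill.py -- lives next to SKILL.md; the skill doc just tells the agent
# which task to run instead of describing the steps in prose.
import subprocess
import sys

def ask(prompt: str) -> str:
    # Helper that shells out to `claude -p` for the genuinely fuzzy parts.
    out = subprocess.run(["claude", "-p", prompt], capture_output=True, text=True)
    return out.stdout.strip()

def summarize(path: str) -> None:
    # Fuzzy step: delegated to the model.
    print(ask(f"Summarize the key requirements in:\n\n{open(path).read()}"))

def lint(path: str) -> None:
    # Deterministic step: no model involved at all.
    subprocess.run(["ruff", "check", path], check=True)

TASKS = {"summarize": summarize, "lint": lint}

if __name__ == "__main__":
    TASKS[sys.argv[1]](sys.argv[2])  # e.g. python skill.py lint src/app.py
```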
bob1029 9 hours ago [-]
I saw a major uplift in performance after I combined tools like apply_patch with check_compilation & run_unit_tests. I still call the tool "apply_patch", but it now returns additional information about the build & tests if the patch succeeds. The agent went from ~80% success rate to what seems to be deterministic (so far). I don't bother to describe the compilation and unit testing processes in my prompts anymore. All I need to do is return the results of these things after something triggers them to run as a dependency.
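The shape of that composite tool handler is roughly this (a sketch; `make build`/`make test` are placeholders for your actual build and test commands):

```python
import subprocess

def run(cmd: list[str]) -> str:
    p = subprocess.run(cmd, capture_output=True, text=True)
    return p.stdout + p.stderr

def apply_patch(patch: str) -> str:
    # Apply the patch, then piggyback build and test results on the tool
    # response so the agent never has to remember to ask for them.
    p = subprocess.run(["git", "apply", "-"], input=patch,
                       capture_output=True, text=True)
    if p.returncode != 0:
        return f"PATCH FAILED:\n{p.stderr}"
    return ("patch applied\n"
            "--- build ---\n" + run(["make", "build"]) +
            "--- tests ---\n" + run(["make", "test"]))
```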
I feel like I'm falling out of whatever is popular these days. I've been using prepaid tokens and custom harnesses for a long time now. It just seems to work. I can ignore most of the news. Copilot & friends are currently dead to me for the problems I've expressly targeted. For some codebases it's not even in the same room of performance anymore, despite using the exact same GPT5.4 base model.
woeirua 11 hours ago [-]
I have but one upvote, but yes. The only way to make these systems work reliably is to break the problems down into smaller chunks. Any internal consistency checks are just going to show you that LLMs are way less consistent than you’d expect.
jiehong 2 hours ago [-]
This might be inherent to how the models are benchmarked.
Aren’t some benchmarks giving the model multiple shots at a problem and only keeping the successful result if it appears, ignoring the failure rate?
andyferris 2 hours ago [-]
Good point. We need the mean, the “any 1 of 10”, and the “all 10 of 10” success rates in the metrics, so we can estimate reliability (the last one).
rdedev 11 hours ago [-]
I had to create a hypothesis-testing agent that gets a query like "is manufacturing parameter x significantly different this month than last month" and follows a flowchart to run a statistical test and return the answer.
At the time I had access to only 4o and there was no way to guarantee that the agent would follow the flowchart if I just mentioned it in its prompt. What I ended up doing was wrapping the agent in a loop that kept feeding it the next step in the flowchart. In a way, a custom harness for the agent.
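The loop itself is tiny; something like this (a sketch with made-up steps; `llm` stands in for whatever chat-completion call you use):

```python
# The harness, not the model, walks the flowchart; the model only ever
# sees one step at a time.
FLOWCHART = [
    "Identify the parameter and the two time windows in the user's query.",
    "Check both samples for normality and pick t-test vs. Mann-Whitney.",
    "Run the chosen test on the data and report the p-value.",
    "State whether the difference is significant at alpha = 0.05.",
]

def run_flowchart(query: str, llm) -> str:
    context = f"Query: {query}"
    for step in FLOWCHART:
        reply = llm(f"{context}\n\nNext step (do only this, nothing else): {step}")
        context += "\n\n" + reply  # each step's output feeds the next
    return context
```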
julianlam 9 hours ago [-]
> This started breaking down after ~30 files. Sometimes it would miss a file. Sometimes it would triple-test a bundle of files and take 10 minutes instead of 3. An error in one file would convince it that it needed to re-test four previous files, for no reason. It was very frustrating.
Sorry, you thought a prompt was a suitable replacement for a testing suite?
zapataband1 8 hours ago [-]
hey man it works great barely and also costs a bunch of money everytime we run it. we also can't trust the results, relax.
deadbabe 6 hours ago [-]
If you are invested in AI stocks, this is the way. You are basically funneling money from software companies into your brokerage account. Keep going.
mmis1000 12 hours ago [-]
> This started breaking down after ~30 files.
Codex's short context and todo-list system combined somehow help here, though. Because of the frequent compaction, the model is forced to recheck which todo-list items haven't been done yet and which workflow skill it has to use. I used to leave it for multiple hours to do a big cleanup and it finished without obvious issues.
swores 11 hours ago [-]
Is Codex willing to do "multi hour" tasks when used with a ChatGPT Plus subscription, or does it need something more expensive like Pro?
dns_snek 2 hours ago [-]
It's going to work the same regardless of how much you pay, but with Plus you'll run into the 5h usage limit rather quickly unless your "multi hour task" spends 90% of the time just waiting around for code to compile. Expect to get an hour or two of active work (single-threaded).
dnh44 11 hours ago [-]
I regularly get codex to do multi-hour tasks with a single prompt; I don't think that's a big deal anymore. But you don't want a single agent doing all the work. The root agent needs to delegate the work to sub-agents. For example, a sub-agent for context gathering, then one for planning, then one (or more) for implementation, then another for review. This way the root agent doesn't use up its context window and it just manages from a bird's eye view. I do have the $200 plan though.
krashidov 4 hours ago [-]
> We've got a QA agent that needs to run through, say, 200 markdown files of requirements in a browser session. It's a cool system that has really helped improve our team's efficiency. For the longest time we tried everything to get a prompt like the following working: "Look in this directory at the requirements files. For each requirement file, create a todo list item to determine if the application meets the requirements outlined in that file". In other words: letting the model manage the high-level control flow.
This is cool. Can you elaborate on it? Is it flaky? Does it take a long time?
cheshire_cat 7 hours ago [-]
Wouldn't it be more efficient to convert the requirements in these 200 markdown files into Playwright tests?
You could still use an LLM to write and extend the tests, but running the tests would be deterministic and would use less resources.
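e.g. one requirements file might distill down to a plain pytest-playwright test like this (URL and selectors are invented for the example):

```python
# test_login.py -- one markdown requirement turned into one deterministic test.
from playwright.sync_api import Page, expect

def test_user_can_log_in(page: Page):
    page.goto("https://app.example.com/login")
    page.fill("#email", "qa@example.com")
    page.fill("#password", "correct-horse")
    page.click("button[type=submit]")
    # The "requirement met?" judgment becomes an assertion, not a vibe.
    expect(page.locator("h1")).to_have_text("Dashboard")
```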
tharkun__ 7 hours ago [-]
This type of thing so much.
AI is being pushed so much at work right now. For non-dev stuff, even. The number of things that people think are "awesome, never seen this before" is staggering.
Just because you haven't seen file format X converted to file format Y before, and now you asked the LLM to do it and it worked, doesn't mean you needed an LLM for it, nor that it's remarkable. The LLM knew how to do it because it learned from a bazillion online sources for deterministic converters that cost nothing (and are open source). But now you're paying, every single time, for a non-deterministic version of it, and you find it cool. It's magic ...
But I guess they deserve it.
gofreddygo 5 hours ago [-]
> It's magic
you'll be surprised by how many people are comfortable attributing something they do not understand to Magic.
more than anything, AI lets people who couldn't, and wouldn't bother to, learn to write simple code sidestep the ones who can, and build solutions to scratch their own itch. that too, faster.
now human behavior kicks in, and they don't want to hand control back into the hands of people who can code to solve problems.
put this together and you have a good model to understand the AI sales pitch... it's magic.
like all magic, it's but a trick.
awongh 10 hours ago [-]
The other part of the question is exactly when the "build for the capabilities of future models" becomes the present.
Looking at the Mythos benchmarks, it doesn't seem like the models are that close to being truly reliable for agentic tasks.
Is it a year away, or five? That's a big difference in deciding what to build today.
otikik 45 minutes ago [-]
I never tell claude to "go over this bunch of files and do this".
I tell it "write a program that goes over this bunch of files and do this".
Sometimes "do this" can be invoking another claude instance.
sharperguy 11 hours ago [-]
So I wonder: could a more powerful agent harness have the agent basically write and execute its own deterministic code, which, when executed, spawns sub-agents for each of the subtasks?
So far we've seen agents spawn subagents directly, but that still means leaving the final flow control to the non-deterministic orchestrator model, and so your case is a perfect example of where it would probably fail.
tonylucas 11 hours ago [-]
I've been working on an integrated deterministic/agent system for a few months now. It basically runs an AI step to build a plan, which biases towards deterministic steps as much as possible but escalates back to AI when it needs to (for AI-only capabilities or deterministic failures). So effectively (when I perfect it; I'm about 90% there) it can bounce back and forth as needed, with deterministic steps launching AI steps and AI steps launching deterministic steps as needed.
Probably not explaining it very well but I think it's pretty effective at reducing token usage.
shripadt 10 hours ago [-]
[flagged]
peyton 8 hours ago [-]
I make codex do everything through a giant `justfile`. Simple, greppable, self-documenting, works great, and I don’t even need to read it.
sroussey 12 hours ago [-]
I’m working on a hybrid system of old-school task graphs and AI agents, letting them instantiate each other. I think others will do that eventually.
tonylucas 11 hours ago [-]
I'm working on something similar (won't link to it as I don't want people to think I'm spamming) but if you want to compare notes I'm happy to talk.
Our team at Agentforce recently open-sourced our solution to this and we've gotten very valuable feedback -- would love to hear from more of you about it: https://github.com/salesforce/agentscript
zapataband1 8 hours ago [-]
No you didn't
"What we're not open sourcing (yet) is the runtime. "
imtringued 1 hours ago [-]
I'm personally surprised by this too. Like, everyone is writing about how insanely productive AI is making them, but that productivity doesn't seem to have translated into any innovations beyond model quality.
Like, most of the stuff needed to make AI better is stuff that could have been written by hand in 2015, so why hasn't anyone used their agents to do so?
To be fair, there is probably a way to make it work the way you want. You could add an MCP for a task queue and let the model work each item in the task queue. The tasks could be added by a deterministic system i.e. your harness.
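A minimal version of that task-queue MCP server, using the `mcp` Python SDK (the tool names and in-memory queue here are illustrative):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("task-queue")
queue: list[str] = []   # filled by the deterministic harness
done: list[str] = []

@mcp.tool()
def next_task() -> str:
    """Pop the next pending task, or tell the model to stop."""
    return queue.pop(0) if queue else "QUEUE EMPTY - stop."

@mcp.tool()
def complete_task(task: str, result: str) -> str:
    """Record a finished task; the harness reads these afterwards."""
    done.append(f"{task}: {result}")
    return "recorded"

if __name__ == "__main__":
    queue.extend(f"req-{i:03}.md" for i in range(200))
    mcp.run()
```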
Joeri 11 hours ago [-]
You could have a skill that is the combination of a minimal markdown file and a set of orchestration scripts that do the deterministic work. The agent does not have to “run everything”, it just needs to know how to launch the right script.
pishpash 11 hours ago [-]
Can you not have it write your harness for you, or have it be the first step? You can push your own determinism where you need, surely.
svachalek 11 hours ago [-]
True. The prompt reads: Run the following Python: ```
rnxrx 14 hours ago [-]
I wonder if a part of the problem isn't just the misapplication of LLMs in the first place. As has been mentioned elsewhere, perhaps the agent's prompt should be to write code to accomplish as much of the task in as repeatable/verifiable/deterministic a way as possible. This would hopefully include validation of the agent's output as well. The overall goal would be to keep the LLM out of doing processing that could be more efficiently (and often correctly) handled programmatically.
chrismarlow9 14 hours ago [-]
100% agreed. use the non-deterministic thing that is right 90% of the time to generate a deterministic thing that is right 100% of the time. one of the key things I add to my prompts is:
- Please consult me when you encounter any ambiguous edge cases
Attaching the AI to production to directly do things with API calls is bad. For me the only use case where the app should do any AI stuff is with reading/categorizing/etc. Basically replacing the "R" in old CRUD apps. If you want to use that same new AI based "R" endpoint to auto fill forms for the "C", "U", and "D" based on a prompt that's cool, but it should never mutate anything for a customer before a human reviews it. Basically CRUD apps are still CRUD apps (and this will always be true), they just have the benefit of having a very intelligent "R" endpoint that can auto complete forms for customers (or your internal tooling/Jenkins pipelines/etc), or suggest (but never invoke) an action.
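Sketched out, that split might look like this (FastAPI/pydantic used for illustration; `call_llm` is a stand-in for your model client):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

def call_llm(prompt: str) -> str:
    # Stand-in for your model client; must return JSON matching the schema.
    raise NotImplementedError

class Ticket(BaseModel):
    title: str
    priority: str
    assignee: str

@app.get("/tickets/suggest")
def suggest_ticket(prompt: str) -> Ticket:
    # The intelligent "R": prefill a form for the human to review and edit.
    return Ticket.model_validate_json(
        call_llm(f"Draft a ticket as JSON (title/priority/assignee) for: {prompt}")
    )

@app.post("/tickets")
def create_ticket(ticket: Ticket) -> dict:
    # The mutation path stays plain validated CRUD; nothing is written
    # until a human actually submits the reviewed form.
    return {"status": "created", "ticket": ticket.model_dump()}
```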
TZubiri 10 hours ago [-]
> Please consult me when you encounter any ambiguous edge cases
Why not check the logprobs of the output and take action when the prob of the first and second most likely token is too similar (or below a certain threshold)?
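For what it's worth, the chat API does expose this (a sketch against the OpenAI client; the 1.0-nat margin threshold is an arbitrary choice):

```python
from openai import OpenAI

client = OpenAI()

def classify_with_confidence(text: str) -> tuple[str, bool]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Answer YES or NO: {text}"}],
        max_tokens=1,
        logprobs=True,
        top_logprobs=2,
    )
    top = resp.choices[0].logprobs.content[0].top_logprobs
    margin = top[0].logprob - top[1].logprob  # log-odds gap between top 2 tokens
    ambiguous = margin < 1.0  # below ~e:1 odds, escalate to a human
    return resp.choices[0].message.content, ambiguous
```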
jatora 10 hours ago [-]
because this is manual? are you an llm?
vishvananda 13 hours ago [-]
I think there is a flow in most organizations from:
llm -> prompt -> result
llm -> prompt + prompt encoded as skill -> result
llm -> prompt + deterministic code encoded as skill -> result
I do think prompting to generate code early can shortcut that path to deterministic code, but we're still essentially embedding deterministic code in a non-deterministic wrapper. There is a missing layer of determinism in many cases that actually make long-horizon tasks successful. We need deterministic code outside the non-deterministic boundary via an agentic loop or framework. This puts us in a place where the non-deterministic decision making is sandwiched in between layers of determinism:
deterministic agentic flows -> non-deterministic decision making -> deterministic tools
This has been a very powerful pattern in my experiments and it gets even stronger when the agents are building their own determinism via tools like auto-researcher.
VMG 14 hours ago [-]
The problem is that often the program runs into some edge case that requires interpretation, at which point one is tempted to let the LLM deal with the edge case, at which point one is tempted to let the LLM deal with the whole loop and let it do the tool calls
Fishkins 13 hours ago [-]
Agreed. I think the approach described here is promising. Most of the workflow is deterministic and includes safeguards, but an LLM is invoked in the one case where it's really useful.
This is exactly how I did my last project: automating the generation of an interface library between a server that controls hardware and the mobile app.
The hardware control team delivers a spec as a document and spreadsheet. The mobile team was using that to code the interface library and validating their code against the server. I converted the document to TSV, sent some parts to Claude, and had it write a parser for the TSV keeping all the nuances of the human-written spec. It took more than 150 iterations to get the parser to handle all edge cases and generate an intermediate output as JSON. Then Claude helped me write a code generator using some custom glue on top of Apollo to generate the code that is consumed by the mobile app.
This whole pipeline runs as part of Github actions and calls Claude only when our library validator fails. There is an md file which is sent to Claude on failure as part of the request to figure out what went wrong, propose a solution and create a PR. This is followed by a human review, rework and merge. Total credits consumed to get here < $350.
memjay 4 hours ago [-]
This has been our experience as well. Initially we had a list of tools that the agent could use to manipulate a data structure in certain ways. This approach was quite brittle.
Now we are using a small DSL (domain-specific language) and a single tool where the agent can input scripts written in the DSL. We are getting more dynamic use cases now, and wrong syntax can easily be caught by the parser and relayed to the agent.
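A toy version, to show the shape of it (the verbs and grammar are invented for the example):

```python
# One tool, one tiny DSL: parse errors go straight back to the agent
# instead of silently doing the wrong thing.
import re

COMMAND = re.compile(r"^(set|move|delete)\s+(\w+)(?:\s+(\w+))?$")

def run_script(script: str) -> str:
    ops = []
    for lineno, line in enumerate(script.strip().splitlines(), 1):
        m = COMMAND.match(line.strip())
        if not m:
            return f"syntax error on line {lineno}: {line!r}"  # relayed to the agent
        ops.append(m.groups())
    # Only execute once the whole script parses.
    return f"ok: executed {len(ops)} operations"
```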
khasan222 12 hours ago [-]
Completely agree! People tend to forget we are non-deterministic too! Yet we are able to write code fine, and fairly reliably, by using tools that help keep us honest.
I think most problems with AI tend to be around whether you can deterministically test the thing you are asking it to do.
How many of us would ever show our work without first going to check the thing we just built?
cluckindan 12 hours ago [-]
> can you deterministically test the thing you are asking it to do?
Of course: have it write tests first, and run them to check its work.
Works well for refactoring, but greenfield implementations still rely on a spec that is guaranteed to be incomplete, overcomplete and wrong in many ways.
khasan222 10 hours ago [-]
Well, if the spec is incomplete it sounds like you should lower the scope for the AI, and then go from there. I wouldn't be too keen to give a junior engineer free rein and expect awesomeness.
pishpash 11 hours ago [-]
You can't ask something to check its own work without external reward/penalty. It'll cheat.
khasan222 7 hours ago [-]
Weirdly, and I fully think this is just some cognitive bias I don't have the knowledge to name, the AI seems very happy to please me. Like when it gets something done in one shot, it seems very happy to do so.
groovetandon 12 hours ago [-]
This is so true; I have been working on a project around exactly this principle.
I think there is a fundamental incentive problem: code + LLM + harness is bound to be more efficient, but the labs want you to burn tokens, so they are not going to tell you to use the code, just to burn more tokens. They are asking us to forget about the token cost and reliability for now; the model will become better.
This means that most people just believe that their agent should just be able to do anything with the help of some Model fairy dust with prompts + skills.
People need to watch their agents fail in production to be able to come to the right conclusion unfortunately.
marcus_holmes 7 hours ago [-]
We have a rule that the LLM cannot perform any actions that result in actual money or stuff moving. Those can only be done by API calls that have lots of validation and checks on them, and adding or changing an API call is gated behind human review. The LLM is then free to make as many API calls as it likes, we're confident that it can't screw anything up too badly.
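The tool surface the LLM sees can be that blunt; a sketch (the limits and approval queue are invented):

```python
# The only "refund" tool the LLM gets: validate, queue for a human,
# never execute. The real money-moving API call happens after review.
MAX_REFUND_CENTS = 50_00
approval_queue: list[dict] = []

def request_refund(order_id: str, amount_cents: int) -> str:
    if not order_id.startswith("ord_"):
        return "rejected: unknown order id format"
    if not 0 < amount_cents <= MAX_REFUND_CENTS:
        return f"rejected: amount must be 1-{MAX_REFUND_CENTS} cents"
    approval_queue.append({"order": order_id, "amount": amount_cents})
    return "queued for human review"
```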
foolserrandboy 14 hours ago [-]
yup, the standard way of thinking about agents seems backwards and probably costly. Use LLMs to write scripts, then stick all your scripts in your own looping harness and call out for LLMs for those parts that are too hard to automate with some deterministic validation at the end.
nixpulvis 13 hours ago [-]
My agents often write themselves scripts. Isn't that effectively what you're asking for? Prompting for scripts can also be a useful time and accuracy tactic when you know the task will be a good fit for it.
falkensmaize 11 hours ago [-]
The problem is that code it spits out on the fly is untested and untrustworthy. Identify the parts of your workflow that could be accomplished with regular code, write and unit-test that code (with LLM help if you want), and use the LLM as the orchestrator only.
sisve 13 hours ago [-]
Yeah, the problem is that I don't think agents are good at reusing scripts and stitching them together. At least for me, they recreate too much similar code. I hope we will see platforms like windmill.dev find the optimal solution for this. I have not been able to test it enough. But having a platform that gives you some observability out of the box and protects secrets from the LLM is nice.
reddit_clone 11 hours ago [-]
I noticed that too. Unless you _ask_ for a script, they throw away the scripts they write.
They are particularly bad at complex multiline parsing. Writing all sorts of weird/crude python/awk scripts and getting confused in the process.
I wish they would use Perl6/Grammar or Haskell/Parsec or similar and write better parsing scripts.
user3939382 12 hours ago [-]
> write code to accomplish as much of the task in as repeatable/verifiable/deterministic a way as possible
Correct. The concept of having probabilistic output with deterministic acceptance “guardrails” is illogical. If the domain resists deterministic modeling such that you’re using an LLM, the guardrails don’t magically gain that capability.
bwestergard 15 hours ago [-]
I agree with the sentiment, but I think the conclusion should be altered. When you hit the limit of prompting, you need to move from using LLMs at run time to accomplish a task to using LLMs to write software to accomplish the task. The role of LLMs at run time will generally shrink to helping users choose compliant inputs to a software system that embodies hard business rules.
scrappyjoe 15 hours ago [-]
I’ve had a couple of weeks of downtime at work, so I decided to incorporate agents into my work processes - things like note taking, task tracking, document management.
Your comment EXACTLY mirrors my experience. Week 1 was ever expanding prompts, and degrading performance. Week 2 has been all about actually defining the objects precisely (notes, tasks, projects, people etc) and defining methods for performing well defined operations against these objects. The agent surface has, as you rightly point out, shrunk to a translation layer that converts natural language to commands and args that pass the input validator.
sowbug 14 hours ago [-]
A full-circle system prompt would be to "find every opportunity to put yourself out of your job by automating it away. When you are given a question that code can answer, answer the question by writing code and running it to obtain the result."
Such an LLM might have fared better with the strawberry test.
edgarvaldes 15 hours ago [-]
Some have expressed the opinion in this forum that the future of software lies in programs that are created and adapted at runtime, using genAI. I don't know how far we are from that.
aleksiy123 14 hours ago [-]
It’s already here; the question is just to what extent.
Are Google search results modifying your software at runtime?
Take agent chat, for example: the output text is a UI, and agents can generate charts and even constrained UI elements.
Isn’t that created and adapted at run time?
If you mean agents live-modifying your code, I think that’s pretty much here as well. They can read the logs and send PRs.
The only thing is how fast that loop will execute from days or hours to mins or seconds, and what validation gates it needs to pass.
My git repo is pretty much self modifying personal software at this point, that I interface through the ide chat window.
But I don’t think we will ever lose the intermediary deterministic language (code) between the llm and the execution engine.
It would be prohibitively expensive to run everything through models all the time.
But I am starting to think we need a more precise language than English when talking with LLMs. One that can do both precision and ambiguity, whichever you need.
jmaw 13 hours ago [-]
Some kind of "code", you could say
aleksiy123 11 hours ago [-]
Yes, but more declarative vs. imperative.
I say what; the LLM says how.
pishpash 11 hours ago [-]
Not that long ago the workflow was to turn code comments into code. Maybe leave some comments as is now.
pishpash 11 hours ago [-]
Sounds like assemblers bemoaning loss of control to C. The solution was inline assembly...
mjr00 15 hours ago [-]
> Some have expressed the opinion in this forum that the future of software lies in programs that are created and adapted at runtime, using genAI.
Good luck with that. Users will flood you with complaints if a button moves 5px to the left after a design update. A program that is generated at runtime, with not just a variable UI but also UX and workflows, would get you death threats.
hilariously 14 hours ago [-]
I think many software adjacent folks are super excited because they can now have the personalized toothbrush they keep asking people to make for them.
The problem is that outside of that, most people want boring, regular interfaces so they can get in, solve the problem, and get out. They don't want to "love" it or care if it's "sexy"; they want it to work and get out of the way.
LLMs transmogrifying your software at every request assumes people are software architects and creators who love the computer interface, and that just doesn't describe the bulk of the population.
Most people using computers use them to consume things or utilize access to things, not for their own sake, and they certainly don't think "what if I just had code to do x..." unless x is make them a lot of money.
munk-a 12 hours ago [-]
A program that is generated at runtime is fine (we have interpreted languages and often compile on demand) - the issue is with the non-deterministic nature of the output.
I think the core issue is that non-deterministic output is great for a chatbot experience where you want unpredictable randomness so it feels less like talking to the mirror - but when it comes to coding I think we're pretty fundamentally misaligned in sticking to that non-deterministic approach so firmly.
cassianoleal 13 hours ago [-]
So we're back to vim over ssh in production, only without a human with _some semblance_ of judgement in the loop?
QuercusMax 13 hours ago [-]
I've seen cases where models will get stuck in a particular mode of problem solving and need a nudge to tell them to move to a new mode. For example, instead of trying to massage a bunch of system service configs to handle hot-plug/unplug of an audio stream, what I really needed was to just write a couple dozen lines of Python to handle stuff.
I just had Claude write itself a couple shell scripts to handle a bunch of common cases (like running tests) in my workflow where it just couldn't figure it out efficiently. Now it just runs those tools and sets things up instead of spinning in circles for half an hour.
Every time it tries to ask me if it can run some one-off crazy shell or python one-liner to do something, I've started asking myself if I should have it write a tool I can auto-approve instead.
3uba 14 hours ago [-]
[dead]
jerf 15 hours ago [-]
This is why I frequently refer to "next generation AIs" that aren't just LLMs. LLMs are pretty cool and I expect that even if we see no further foundational advancement in AIs that we're going to continue to see them exploited in more interesting ways and optimized better. Even if the models froze as they are today, there's a lot more value to be squeezed out of them as we figure out how to do that.
However, there are some things that I think need a foundational next-generation improvement of some sort. The way that LLMs sort of smudge away "NEVER DO X" and can even after a lot of work end up seeing that as a bit of a "PLEASE DO X" seems fundamental to how they work. It can be easy to lose track of as we are still in the initial flush of figuring out what they can do (despite all we've already found), but LLMs are not everything we're looking for out of AI.
There should be some sort of architecture that can take a "NEVER DO X" and treat it as a human would. There should be some sort of architecture that instead of having a "context window" has memory hierarchies something like we do, where if two people have sufficiently extended conversations with what was initially the same AI, the resulting two AIs are different not just in their context windows but have actually become two individuals.
I of course have no more idea what this looks like than anyone else. But I don't see any reason to think LLMs are the last word in AI.
cheesecakegood 10 hours ago [-]
Actual memory, in my opinion. Right now memory is broadly speaking like a system of sticky notes the AI writes itself and checks every time, rather than an integrative system that allows learning and can trigger more flexibly.
As someone who went full circle prompt-enforcement > deterministic flow > prompt-enforcement, I disagree.
The reason why "DO NOT SKIP" fails is because your agent is responsible for too many things and there's things in context that are taking away the attention from this guidance.
But nobody said the agent that does enforcement must be the same agent that builds. While you can likely encode some smart decision-making logic in your deterministic control flow, you either make it too rigid to work well, or you make it so complex that at that point you might as well just use the agent; it will be cheaper to set up and maintain.
You essentially need 3 base agents:
- Supervisor that manages the loop and kicks right things into gear if things break down
- Orchestrator that delegates things to appropriate agents and enforces guardrails where appropriate
- Workers that execute units of work. These may take many shapes.
baxtr 2 hours ago [-]
I think the key question is: How can you be sure the supervisor/orchestrator agents are reliable? You are just pushing the complexity down into another layer.
ex-aws-dude 9 hours ago [-]
Exactly, just keep adding more agents
SrslyJosh 9 hours ago [-]
I can't tell if this is satire or not. Well done!
bandrami 2 hours ago [-]
It's going to be hilarious in a few years when people are still using LLMs but only via a controlled vocabulary and syntax that you have to learn. It's just like how everybody moved to NoSQL 15 years ago but immediately recreated schemata in their JSON.
JohnMakin 14 hours ago [-]
> Imagine a programming language where statements are suggestions and functions return “Success” while hallucinating. Reasoning becomes impossible; reliability collapses as complexity grows.
This is essentially declarative programming. Most traditional programming is imperative, which is what most developers are used to: I give the exact set of instructions and expect them to be obeyed as I write them. Agents are way more declarative than imperative: you give them a result, and they work on getting that result. Now the problem, of course, is that in something declarative like, say, SQL, the result is going to be pretty consistent and well-defined, but you're still trusting the underlying engine on how to go about it.
Thinking about agents declaratively has helped me a lot rather than to try to design these rube-goldberg "control" systems around them. Didn't get it right? Ok, I validated it's not correct, let's try again or approach it differently.
If you really need something imperative, then write something imperative! Or have the agent do so. This stuff reads like trying to use the wrong tool for the job.
Terr_ 8 hours ago [-]
> This is essentially declarative programming.
I think it's a step more abstract than that; we're doing... how about "narrative programming"? (Though we could debate whether "programming" is still an applicable word.)
Yes, it may look like declarative programming, but it's within an illusion: we aren't actually describing our goals "to" an AI that interprets them. Instead, there's a story-document where our human stand-in character has dialogue with a computer-character, and up in the real world we're hoping that the LLM will append more text in a way that makes a cohesive longer story with something useful that can be mined from it.
It's not just an academic distinction, if we know there's a story, that gives us a better model for understanding (and strategizing) the relationship between inputs and outputs. For example, it helps us understand risks like prompt-injection, and it provides guidance for the kinds of training data we do (or don't) want it trained on.
JohnMakin 8 hours ago [-]
I don't hate that distinction; I just think a lot of people are approaching this from an imperative framework that might not fit.
repelsteeltje 13 hours ago [-]
I was thinking of declarative, but PROLOG rather than SQL. So with actual control flow and reasoning capabilities.
And then you run into similar issues as the LLM does, like silent failures, loops, and contradictions, unless you're very careful.
The essence might be the same closed-world-assumption problem. In the LLM case this manifests as hallucination rather than admitting it does not know.
PaulStatezny 9 hours ago [-]
I agree. But you can speak imperatively to agents as well ("Here are specific steps; follow them") and they can still screw up. :) I think what you're looking for is determinism, not imperativism.
And to your point: instructing a (non-deterministic) LLM declaratively ("get me to this end state") compounds the likelihood of going off the rails.
JohnMakin 8 hours ago [-]
I don’t think I’m confusing the two, but it is an issue. See another comment I made in a sibling thread: terraform is a great example of something that is declarative and also non-deterministic. You can’t control upstream API/provider changes, even between two plans happening simultaneously. That’s a lot like what working with agents feels like to me.
miltonlost 13 hours ago [-]
SQL's declarativeness is also based on the mathematics of relational algebra, so it will return the same result every time. Will it return it in the same amount of time every single query? No, that depends on indexing and database size. But the query itself won't be altered in the same way an LLM would be.
JohnMakin 12 hours ago [-]
Engines that use SQL can vary drastically in how they handle strings, floating points, etc., where identical SQL queries on identical data absolutely can return different results, which is why I mentioned the engine underneath. LLMs being nondeterministic in addition to declarative is kind of tangential to the point I was trying to make.
It is the same in terraform - yes, the HCL spec defines things very precisely, but you're kind of at the mercy of how the provider and provider API decide how to handle what you wrote, which can be very messy and inconsistent even when nothing changed on your side at all. LLM/agent usage feels a lot like that to me, in the sense it's declarative and can be a bit lossy. As a result there are things I could technically do in terraform but would never, because I need imperativeness.
My main point being, I think people are trying to ram agents into a ton of cases where they might not necessarily need or even want to be used, and stuff like this gets written. Maybe not, but I see it day to day - for instance, I have a really hard time convincing coworkers that are complaining about the reliability of MCP responses with their agents, that they could simply take an API key, have the agent write a script that uses it, and strictly bound/define the type of response format they want, rather than let the agent or server just guess - for some reason there is some inclination to "let the agent decide how to do everything."
I think that's probably what this article is getting at, but I am saying: rather than trying to create these elaborate control flows with validation checks everywhere to rein in an unruly application making dumb decisions, why not just use the agent to write deterministic automation instead of using the agent as the automation?
isityettime 13 hours ago [-]
Afaict all harnesses are wrong in this respect, some of them deeply so.
Slash commands, for instance, are a misfeature. I should never have to wait for the chatbot finish a turn so that I can check on the status of my context window or how much money I've spent this session. Control should be orthogonal to the chat loop.
Even things that have nothing to do with controlling the text generator's input and output are entangled with chat actions for no good reason except "it's a chat thing, let's pretend we're operating an IRC bot".
There are a zillion LLM agents out there nowadays, but none of them really separate control from the agent loop from presentation well. (A few do at least have headless modes, which is cool.)
dnautics 13 hours ago [-]
> Slash commands, for instance, are a misfeature. I should never have to wait for the chatbot finish a turn so that I can check on the status of my context window or how much money I've spent this session. Control should be orthogonal to the chat loop.
I get what you're trying to say but in practice architecting what you propose is considerably more difficult. Why not build it and try to get hired by one of the bigcos?
gf000 39 minutes ago [-]
In what way would it be more complicated? This is pretty basic concurrent programming; we routinely have much, much more complex concurrent designs.
Hell, a telegram bot can handle that just fine.
isityettime 13 hours ago [-]
I don't think the basic architecture principles are novel. The big AI labs and other large tech companies already have engineers who can see this, without a doubt. But the AI labs clearly don't care if their LLM agents are just big balls of mud, and the big tech companies priorities mostly lie elsewhere, too.
They just want features. They don't really care about duplicated work, so half of them reinvent the TUI rendering wheel. Pluggability is something that might be actually hostile to their interests in lock-in. And the AI labs probably think "after a couple more scaling cycles, our models will be so good that our agents can just rewrite themselves from scratch"; until they hit a compute or power wall, it always looks rational to them to defer rearchitecting.
Another real possibility is that if you work on an agent with a really clean architecture and publish it in hopes of getting hired by some AI company, all of them think "that looks great, but we don't want to rearchitect right now". Your code winds up in the training set, and a year and a half from now, existing agents can "one-shot" rewrites along the lines of your design because they're "smarter".
As for me, I'm not that interested, personally. There are other things I want to build and I'm working on those.
the_duke 13 hours ago [-]
In codex CLI /status works just fine during a turn.
Other things don't though.
user34283 13 hours ago [-]
I use the Codex desktop app.
In the GUI I can see the context indicator and usage stats.
It also makes it easier to jump between conversations and see the updates.
Sometimes I use Claude Code or opencode in the terminal, and my experience is much poorer compared to the Codex desktop app.
plumbline 8 hours ago [-]
I've been thinking about this a lot actually. It can almost be related to the conversation about specialization. The more specialized a model is required to be, the less capable it seems to be at a foundational level, whereas if you just aim towards a liiitle bit of abstraction, you might get the best of both worlds.
Here's a pretty specific example of what I mean, but maybe food for thought:
At my new job, I was assigned to improve processes with AI.
My first thought was: well, agents seem nice, but I think AI workflows are a better bet. However, I didn't really understand AI or agents in depth, and felt like I was just "doing things the old way" and that removing flexibility from agents was a ridiculous idea.
After some research I got the impression that I was right. A well defined workflow and scope is just what's needed for AI. It's cheaper and more consistent. It probably even makes the whole thing run well with non-SOTA models.
59nadir 15 hours ago [-]
This was one of the key insights in Stripe's explanations about Minions[0], their autonomous agent system; in-between non-deterministic LLM work they had deterministic nodes that handled quality assurance and so on in order to not leave those types of things to the LLMs.
If you're trying to get reliability and determinism out of the LLM, you've already lost
tekne 15 hours ago [-]
Wait... why?
Making an unreliable, nondeterministic system give reliable results for a bounded task with well-understood parameters is... like half of engineering, no?
There's a huge difference between "generate this code here's a vague feature description" and "here's a list of criteria, assign this input to one of these buckets" -- the latter is obviously subject to prompt engineering, hallucination, etc -- but so is a human pipeline!
JCTheDenthog 14 hours ago [-]
>the latter is obviously subject to prompt engineering, hallucination, etc -- but so is a human pipeline!
...which is why we write deterministic code to take the human out of the pipeline. One of the early uses of computers was calculating firing tables for artillery, to replace teams of humans that were doing the calculations by hand (and usually with multiple humans performing each calculation to catch errors). If early computers had a 99% chance of hallucinating the wrong answer to an artillery firing table, the response from the governments and militaries that used them would not be to keep using computers to calculate them. It would be to go back to having humans do it with lots of manual verification steps and duplicated work to be sure of the results.
If you're trying to make LLMs (a vague simulacrum of humans) with their inherent and unsolvable[1] hallucination problems replace deterministic systems, people are going to eventually decide to return to the tried and true deterministic systems.
Because it's not possible. There is nothing you can say to the LLM that will guarantee that something happens. It's not how it works. It will maybe be taken into consideration if you're lucky.
But if you're trying to tell me that every time you list criteria you get them all perfectly matched, you're clearly gifted.
gf000 35 minutes ago [-]
I'm being deliberately pedantic, but depending on what kind of representation we use for the neural network (due to rounding) as well as the choice of inference (that is, given a distribution for next token, which one to choose), it can absolutely be reproducible and completely deterministic.
Though chaotic, which I believe is the better word here - a single letter change may result in widely different results.
We just choose to use more random inference rules, because they have better results.
aleksiy123 14 hours ago [-]
There’s a whole range between completely random and completely rule based deterministic.
Somewhere in between, I guess, are the varying levels of intelligence that are more or less likely to make the “right” decision for whatever you throw at them.
evantbyrne 12 hours ago [-]
I would hope that when engineers speak of LLM determinism they just mean it as shorthand for a success probability close to 1 under expected conditions.
sudosteph 12 hours ago [-]
I mean, with reliability there's a spectrum. If the risks that an unreliable outcome brings aren't all that bad, then sometimes it's worth it to chase "my agents made an acceptable PR 70% of the time, can I get it to 90?"
Determinism is a different matter. Scripts and hooks are really the main levers you can pull there, but yeah, a decent script and a cron job will handle certain things much better (and for a fraction of the cost).
pydry 14 hours ago [-]
This is something I think some people are fundamentally not capable of understanding.
andai 5 hours ago [-]
Yeah, you could also see this in 2023 with Auto-GPT. People were letting GPT "drive" when what they actually needed, in most cases, was like ten lines of Python (and maybe a few calls to a llm() function).
The alternative is running your ten lines of Python in the most expensive, slowest, least reliable way possible. (Sure is popular though)
For example, most people were using the agents for internet research. It would spin for hours, get distracted or forget what it was supposed to be doing.
Meanwhile `import duckduckgo` and `import llm` and you can write ten lines that do the same thing in 20 seconds, actually run deterministically, and cost 50x less.
The current models are much better -- good enough that the Auto-GPT is real now! -- but running poorly specified control flow in the most expensive way possible is still a bad idea.
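Roughly those ten lines, for the record (using the `duckduckgo_search` and `llm` packages; the model name is whatever you have configured):

```python
import llm
from duckduckgo_search import DDGS

def research(question: str) -> str:
    hits = DDGS().text(question, max_results=5)  # deterministic search step
    sources = "\n".join(f"- {h['title']}: {h['body']}" for h in hits)
    model = llm.get_model("gpt-4o-mini")
    # One bounded LLM call, grounded in the snippets we just fetched.
    return model.prompt(
        f"Answer the question '{question}' using only these snippets:\n{sources}"
    ).text()

print(research("What is the tallest dam in Europe?"))
```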
est 2 hours ago [-]
I have a question: does an LLM learn to follow these MANDATORY or DO NOT SKIP markers during pre-training, like how people write emphatic comment paragraphs in the Reddit corpus, or is it just some post-training alignment habit?
stingraycharles 2 hours ago [-]
Instruction following is a specific fine tuning / post training phase, yes.
That’s why you see “base” vs “instruct” models for example — base is just that, the basic language model that models language, but doesn’t follow instructions yet.
Especially the open weights models have lots of variants, eg tuned for math, tuned for code, tuned for deep thinking, etc.
But it’s definitely a post train thing, usually done by generating synthetic data using other models.
moconnor 11 hours ago [-]
“Flow” moves agents through a yaml flowchart of prompts and decisions. It’s working quite well for a couple of us at Tenstorrent; more to discover here though:
I find Flow really interesting, thanks for pointing it out.
Deterministic workflows using AI to help perform the steps that don't require human input have been an area of interest for me for some time. It's particularly interesting how you are using the AI to determine what a step has achieved and the action of the next step.
Combining it with workflow elements that do handle human steps, together with a notification/routing/task system, would make for a helpful system for so many.
rglover 13 hours ago [-]
> Babysitter: Keep a human in the loop to catch errors before they propagate.
This is the only way to guarantee AI usage doesn't burn you. Any automation beyond this is just theater, no matter how much that hurts to hear/undermines your business model.
A bird sings, a duck quacks. You don't expect the duck to start singing now, do you?
kelseyfrog 13 hours ago [-]
I'm not sure I agree. Like all stochastic processes, LLM errors can be quantified. That makes each use case a risk-reward tradeoff where users can decide if the tradeoff makes sense for them or not. There are scenarios where errors are acceptable because the risks are low or the rewards make up for them. This is a process-engineering problem where business and technology specifics matter.
rglover 12 hours ago [-]
I see where you're coming from, but this assumes good behavior and discipline which most people/teams struggle with.
If a business can get away with some margin of error being acceptable, more power to them. But if not (or doing so would cause additional problems; what I'd imagine to be true for a non-trivial number of orgs), it's wise to consider the nature of the tool a lot of people are suggesting is mandatory if you're dependent on consistent, predictable results.
kelseyfrog 12 hours ago [-]
That's fair. A heuristic that leaves some opportunity on the table due to org capability is a reasonable one to have.
alasano 7 hours ago [-]
I think babysitting LLMs is exactly the thing that burns you.
Presuming you meant burns you out though.
doubled112 5 hours ago [-]
No, "burns you" as in "play with fire and you'll get burned".
It will make a mistake and you will get burned, so you have to babysit it.
apalmer 15 hours ago [-]
Generally agree with this stance. Case in point: the breakthrough in AI coding was not that AI intelligence increased as much as that a lot of the core process execution moved out of the LLM prompt and into the harness.
nickstinemates 5 hours ago [-]
This is why we built swamp[1].
Swamp teaches your Agent to build and execute repeatable workflows, makes all the data they produce searchable, and enables your team to collaborate.
We also build swamp and swamp club using swamp. You can see that process in the lab[2]. This combines all of the creativity of the LLM for the parts that matter, while providing deterministic outcomes for the parts you need to be deterministic.
If you’re interested in such deterministic scaffolding/control flow, check out Probity.
I created it to address this exact issue. It is a vendor-neutral ESLint-style policy engine and currently supports Claude Code, Codex, and Copilot.
It uses the agents' hook payloads and session history to enforce the policies. It can be set up to block commits if a file has been modified since the checks were last run, disallow content or commands using string or regex matching, and enforce TDD without the need for any extra reporter setup, and it works with any language.
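Not Probity itself, but for anyone who hasn't poked at hooks, the mechanism it builds on is roughly this shape (a PreToolUse-style hook; Claude Code treats exit code 2 as "block, and feed stderr back to the agent"):

```python
#!/usr/bin/env python3
# Reads the pending tool call as JSON on stdin and blocks it by exit code.
import json
import re
import sys

event = json.load(sys.stdin)
command = event.get("tool_input", {}).get("command", "")

BANNED = [r"\brm\s+-rf\b", r"--no-verify\b", r"\bgit\s+push\s+--force\b"]

for pattern in BANNED:
    if re.search(pattern, command):
        print(f"blocked by policy: matches {pattern}", file=sys.stderr)
        sys.exit(2)  # deny; stderr is shown to the agent as guidance

sys.exit(0)  # allow the action
```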
I feel like people forget that they're still allowed to program. You're still allowed to create workflows tying together LLMs and agents if you want. Almost all the tools and technology that existed before LLMs are still available to be used.
socketcluster 9 hours ago [-]
That's why I built https://saasufy.com/ as an agent tool for building data-driven realtime apps.
I started working on it piece by piece about 14 years ago. It was originally targeted at junior developers, to provide them the necessary security and scalability guardrails whilst trying to maintain as much flexibility as possible. It's very flexible; most of Saasufy is itself built using Saasufy. Only the actual user service and orchestration is custom backend code.
Also, I designed it in a way that it would help the user fast-track their learning of important concepts like authentication, access control, schema validation.
It turns out that all of these things that junior devs need are exactly what LLMs need as well.
I tested it with Claude Code originally and got consistently great results. More recently, I tested with https://pi.dev with GPT 5.5 and it seemed to be on par.
sudosteph 13 hours ago [-]
This is a good discussion topic. A lot of people really seem to believe that if you word a prompt just so and throw a high-powered model at it, it will work consistently how you want. And maybe as models progress that might be the case. But right now, that's not how I've seen real life work out.
Even skills are not a catch-all, because besides the supply chain risk from using skills you pull from someone else, a lot of tasks require an assortment of skills.
I've accommodated this with my agent team (mostly sonnets fwiw) by developing what we call "operational reflexes". Basically common tasks that require multiple domains of expertise are given a lockfile defining which of the skills are most relevant (even which fragment of a skill) and how in-depth / verbose each element needs to be to accomplish the same task the same way, with minimal hallucinations or external sources.
A coordinator agent assigns the tasks and selects the relevant lockfile and sends it along or passes it along to another agent with a different specified lockfile geared towards reviewing.
It's a bit of work, but this workflow dramatically increased the quality of output for technical work I get from my agents, and I don't really need to write many prompts myself like this.
sbinnee 5 hours ago [-]
I have been telling my team that 1000 lines of instructions are doomed to fail no matter how great a model's instruction-following capability is. I have been reviewing hundreds of lines of changes on a daily basis for about a month. I couldn't help but start praying.
tim-projects 14 hours ago [-]
This is exactly the problem I've been working on, and I see others are too. When you implement quality-control gates, everything works better. It solves so many of the basic problems LLMs create: saying code is finished when it isn't, skipping tests, introducing code regressions, failing basic code validation, etc.
I am finding that the better the quality gates, the lower-quality LLM you can use for the same result (at a cost of time).
Nizoss 11 hours ago [-]
Exactly! I don’t babysit TDD anymore. I have another agent that does that for me, and honestly it sometimes catches things I would have missed if I was the one babysitting.
Hooks do wonders here. The payload contains a lot of information about the pending action the agent wants to make. Combine that with the most recent n events from the agent’s session history and you have a rich enough context to pass to another agent to validate the action through the SDK.
This way the validation uses the same subscription you’re logged in to, whether you’re using Claude Code, Codex, or Copilot. The validation agent responds in a JSON format that you can easily parse and return, allowing you to let the action through or block it with direction and guidance. I’m genuinely impressed by how well this works considering how simple it is.
You can find my approach here: https://github.com/nizos/probity
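Roughly, a minimal sketch of such a hook, assuming Claude Code's documented PreToolUse contract (event JSON on stdin; exit code 2 blocks the action and feeds stderr back to the agent) and a headless claude -p call as the validator. The prompt and JSON shape are illustrative only:

  #!/usr/bin/env python3
  # Sketch of a PreToolUse hook that defers to a validator agent.
  import json, subprocess, sys

  event = json.load(sys.stdin)  # payload describing the pending action
  summary = json.dumps({"tool": event.get("tool_name"),
                        "input": event.get("tool_input")}, indent=2)

  # Any headless agent call works as the validator; `claude -p` is one option.
  result = subprocess.run(
      ["claude", "-p",
       'You are a validation agent. Reply ONLY with JSON {"allow": bool, "reason": str} '
       "for this pending action:\n" + summary],
      capture_output=True, text=True)

  try:
      verdict = json.loads(result.stdout)
  except json.JSONDecodeError:
      verdict = {"allow": False, "reason": "validator returned non-JSON"}

  if not verdict.get("allow", False):
      print(verdict.get("reason", "blocked"), file=sys.stderr)
      sys.exit(2)  # block the action; stderr goes back to the agent as guidance
  sys.exit(0)      # allow the action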
Build CLIs your agents call, that scaffold what you want, and lint so it actually achieves your intended design.
Markdown files are a good reference but they are a weak enforcement tool and go stale easily.
Avoid burying yourself in more skills docs you’re not even writing yourself and probably never even read. Focus that toward deterministic tooling. (Not that skills or prompts are bad, I agree a meta skill that tells an agent what subagents and what order to run is useful)
zapataband1 8 hours ago [-]
lol so write an actual deterministic program? we're close to full circle
noisy_boy 7 hours ago [-]
Yes, but with the "judgement" to call them. If you put "review the results based on the conditions described here, and anything else suspicious you may spot, before calling the <next_deterministic_program>", it should be able to catch some cases you didn't think about in your standard checks. Of course it may miss those or have false positives, but that is the nature of the beast, as it is now.
alasano 7 hours ago [-]
I'm building a robust runtime for this.
It's externally orchestrated and managed, not by an agent running the loop.
The goal is to force LLMs to produce exactly what you want every time.
I will be open sourcing it soon. You can use whatever harness or tools you already use; you just delegate the actual implementation to the engine.
https://engine.build
If you're interested in driving coding agents with code, check out the OpenHands Software Agent SDK [1]
We need to define agents in code, and drive them through semi-deterministic workflows. Kick subtasks off to agents where appropriate, but do things like gather context and deal with agent output deterministically.
This is a massive boost in accuracy, cost efficiency, AND speed. Stop using tokens to do the deterministic parts of the task!
[1] https://github.com/OpenHands/software-agent-sdk
"conversation.send_message("Write 3 facts about the current project into FACTS.txt.")"
why tf would i ever need this
astra_omnia 8 hours ago [-]
I think this also points to what needs to exist after the control-flow layer. Once an agent executes a bounded workflow, teams still need a reviewable object showing what authority/scope it had, what artifacts it touched, what validation ran, what evidence was retained, and what limitations remain. Logs are useful, but they are not the same thing as an action receipt.
illwrks 13 hours ago [-]
I’ve been building a small ‘agent’ using copilot at work, partly a learning exercise as well as testing it in a small use case.
My personal opinion is that AI and agents are being misrepresented… The amount of setup, guidance and testing that’s required to create a smarter version of a form is insane.
At the moment my small test is:
Compressed instructions (to fit within the 8k limit)
9 different types of policies to guide the agent (json)
3 actual documents outlining domain knowledge (json)
8 Topics (hint harvesting, guide rails, and the pieces of information prepared as adaptive cards for the user)
3 Tools (to allow for connectors)
The whole thing is as robust as I can make it but it still feels like a house of cards and I expect some random hiccup will cause a failure.
dnh44 11 hours ago [-]
To be honest Copilot really stinks and is really far from the sharp edge of what is possible these days.
niyikiza 9 hours ago [-]
My analogy[1] has been that we need a valet key: capped speed, geofenced, short ttl, can't open trunk/glovebox, etc. That way we don't have to say pretty please to the valet and hope that they won't get ideas.
[1] https://niyikiza.com/posts/capability-delegation/
How do you have "aggressive error detection" when one of the most common and pernicious mistakes agents make are architectural? The behaviour is fine, but the code is overly defensive, hiding possible bugs and invariant violations, leading to ever more layers of complexity that ultimately end up diverging when nothing can be changed without breaking something.
astrobiased 15 hours ago [-]
It's the right direction, but control flow introduces limitations within a system that is quite adaptable to dynamic situations. The more control flow you try to do, the more buggy edge cases pop up if it's done poorly.
Still have yet to see a universal treatment that tackles this well.
TuringTest 13 hours ago [-]
I would just reverse the architecture of the whole system. Build a classic deterministic program, and use LLMs as heuristics adapting the system to the environment - the functions that you call on the 'if's and 'switch' statements to decide where the system should go.
I see this as the most robust way to build a predictable system that runs in a controlled way, taking advantage of probabilistic AIs while reducing the impact of their hallucinations.
LLMs simply can't be trusted to follow instructions in the general case, no matter how much you constrain them. The power of very large probabilistic models is that they basically solved the _frame problem_ of classic AI: logical reasoning didn't work for general tasks because you can't encode all common sense knowledge as axioms, and inference engines lost their way trying to solve large problems.
LLMs fix those handicaps, as they contain huge amounts of real world knowledge and they're capable of finding facts relevant to the problem at hand in an efficient way. Any autonomous system using them should exploit this benefit.
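As a sketch of what that inversion can look like, with ask_llm as a placeholder for whatever completion call you use and illustrative handlers, the program owns the branches and the LLM only answers narrow questions:

  # The deterministic program owns the control flow; the LLM is a heuristic
  # consulted at branch points.

  def ask_llm(prompt: str) -> str:
      raise NotImplementedError("call your LLM API here")

  def classify(ticket: str) -> str:
      # Narrow question, constrained answer; the *program* decides what happens.
      answer = ask_llm("Answer with exactly one word (bug/feature/question):\n" + ticket)
      answer = answer.strip().lower()
      return answer if answer in {"bug", "feature", "question"} else "question"  # safe default

  def handle(ticket: str) -> None:
      kind = classify(ticket)
      if kind == "bug":
          print("-> filing bug report")   # deterministic handlers own all side effects
      elif kind == "feature":
          print("-> adding to backlog")
      else:
          print("-> routing to support")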
rickysahu 4 hours ago [-]
We work on this issue in healthcare (genhealth.ai) where it's imperative to get every step correct. Not easy. A valuable solution sits at the intersection of browser, code, and LLMs. There are far more layers of browser interaction than just images and the DOM.
arian_ 14 hours ago [-]
Control flow tells the agent what it's allowed to do. It doesn't tell you what the agent actually did. Both matter. Everyone is building the permission layer. Almost nobody is building the verification layer.
allynjalford 5 hours ago [-]
I am...
zby 12 hours ago [-]
I concur - it does not make sense to do in LLM prompts what can be done in code. Code is cheaper, faster, deterministic, and we have lots of experience working with code.
Especially all bookkeeping logic should move into the symbolic layer: https://zby.github.io/commonplace/notes/scheduler-llm-separa...
"One thing that I have seen in the wild quite a bit is taking the agent pattern and sprinkling it into a broader more deterministic DAG." - https://github.com/humanlayer/12-factor-agents/blob/main/REA...
Agents are probabilistic systems. A common mechanism to get a reliable answer from systems that can have variable output is to run them several times (ideally in separate, isolated instances) and then have something vote on the best result or use the most common result. This happens in things like rockets and aviation where you have multiple systems giving an answer and an orchestrator picking the result.
I've tried doing something similar with AI by running a prompt several times and then have an agent pick the best response. It works fairly well but it burns a lot of tokens.
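The voting harness itself is only a few lines. A sketch, assuming a constrained answer space so runs can actually collide (ask_llm is a placeholder for your completion call):

  from collections import Counter

  def ask_llm(prompt: str) -> str:
      raise NotImplementedError("call your LLM API here")

  def vote(prompt: str, n: int = 5) -> str:
      # Run the same constrained prompt n times and take the most common answer.
      answers = [ask_llm(prompt).strip().lower() for _ in range(n)]
      best, count = Counter(answers).most_common(1)[0]
      if count <= n // 2:
          raise ValueError(f"no majority among {answers}; escalate to a human")
      return best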
Yokohiii 13 hours ago [-]
An LLM's "wrong" decision is either systemic or biased. They learn "common sense" from human input (i.e. shared datasets, reinforcement learning). If a decision is flat-out wrong for you, asking 10 LLMs is unlikely to help.
suprfnk 14 hours ago [-]
But then, if an agent picks the best response, how would you know that that is reliable?
onion2k 13 hours ago [-]
You could get the agents to output something structured and then use a deterministic test if you're worried about that.
xienze 14 hours ago [-]
Obviously you have multiple agents justify why they picked a certain response and then create another agent that picks the solution with the best justification.
kkyr 14 hours ago [-]
touché
briga 14 hours ago [-]
Sometimes it feels like agents are just reinventing microservices, except they are doing it in the most inefficient way possible. It is certainly a good way for the LLM companies to sell more tokens.
gardnr 14 hours ago [-]
This is straight outta 2023:
Agents aren't reliable; use workflows instead.
kmad 13 hours ago [-]
This is, at least in part, the promise of frameworks like DSPy and PydanticAI. They allow you to structure LLM calls within the broader control flow of the program, with typed inputs and outputs. That doesn’t fix non-determinism, hallucinations, etc., but it does allow you to decompose what it is you’re trying to accomplish and be very precise about when an LLM is called and why.
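For example, a PydanticAI-style sketch (API details from memory, so check the current docs; recent releases renamed result_type/.data to output_type/.output). The point is that the LLM call becomes a typed function inside ordinary control flow:

  from pydantic import BaseModel
  from pydantic_ai import Agent

  class Triage(BaseModel):
      severity: int          # e.g. 1 (cosmetic) .. 5 (data loss)
      component: str
      needs_human: bool

  triage_agent = Agent("openai:gpt-4o", output_type=Triage)

  def triage(report: str) -> Triage:
      result = triage_agent.run_sync(f"Triage this bug report:\n{report}")
      return result.output   # validated against the Triage schema, or it raises

  # Ordinary code decides what to do with the typed result:
  # if triage(text).needs_human: page_oncall()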
chandureddyvari 14 hours ago [-]
I had good success with hooks in claude code.
Personally I feel this problem was common with humans as well. We added tools like husky for git commits, so our peers would push code that was linted, type-checked, etc.
I feel hooks are an integral part of your code harness; that's the only deterministic way to control coding agents.
Nizoss 11 hours ago [-]
I fully agree. Also started using husky before expanding further and created my own hooks. I can’t imagine myself using agents today without them, it would require a lot of babysitting.
idivett 9 hours ago [-]
Isn't that what they call "Harness engineering"?
piyh 6 hours ago [-]
9 different frameworks being pushed in the comments of this thread. 2026 truly is the year of agents.
yangbiaogaoshou 4 hours ago [-]
which 9 frameworks?
hmaxdml 13 hours ago [-]
We've found that durable workflows are a much-needed primitive for agent control flow. They give the structure for deterministic replays, observability, and, of course, fault tolerance that operators need to make the agent loop reliable.
SrslyJosh 9 hours ago [-]
> "Agents need control flow, not more prompts"
Can't wait for ya'll to come full circle and invent programming from first principles.
colek42 12 hours ago [-]
We built https://aflock.ai/ (open source) to help with this. Constraining activity tends to work well
mohamedkoubaa 10 hours ago [-]
Eventually we'll all come to the inevitable conclusion that for a task to be fully automated there should be neither human nor genie in the loop.
arbirk 12 hours ago [-]
I always wonder with these posts:
- are they talking about coding (where I am the control flow)?
- or RPA agents (where it is obvious)?
Also: don't use LLMs for deterministic tasks.
solomonb 15 hours ago [-]
I agree and I think a really wonderful way to encode agentic control flow would be with Polynomial Functors.
https://arxiv.org/abs/2312.00990
> If you’ve ever resorted to MANDATORY or DO NOT SKIP, you’ve hit the ceiling of prompting.
using this is going to do the opposite of what you want
glasner 11 hours ago [-]
This is exactly why I’m building aiki to be a control layer for harness execution. I don’t think the model companies will ever give us the neutral layer we need.
jarboot 12 hours ago [-]
I think this is a good usecase for temporal + pydantic-ai
2001zhaozhao 11 hours ago [-]
If we need control flows, then designing these control flows ought to be the future of agent engineering
cesarvarela 12 hours ago [-]
This will remain a persistent problem without a definitive answer until models move from generative tools to actual AI.
mhotchen 11 hours ago [-]
HUMANS need control flow. It's a very effective strategy that has worked wonders in healthcare
aykutseker 14 hours ago [-]
all caps in a prompt is a code smell. when you're typing MANDATORY, you should be writing a wrapper, not refining the prose.
Nizoss 11 hours ago [-]
Exactly! I have said this a couple of times but it was taken literally as in no capital letters or strong language. Glad to see someone else who shares this perspective.
ModernMech 15 hours ago [-]
Slowly and surely we are replacing AI with programming languages.
dnautics 13 hours ago [-]
Yes. Humans are also unreliable and nondeterministic (though certainly more reliable). Accordingly, we have built software dev practices around this. I imagine it would be super useful, for example, to have a "TDD enforcer" (a sketch follows the phases):
Phase 1: only test files may be altered, exactly one new test failure must appear.
Phase 2: only code files may be altered. The phase is cleared when the test now succeeds and no other tests fail.
If you get stuck, bail and ask for guidance
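A sketch of the enforcement side, assuming git, pytest, and a tests/ directory convention; how the harness rejects or retries the agent's turn is left out:

  import re, subprocess

  def changed_files() -> list[str]:
      out = subprocess.run(["git", "diff", "--name-only"],
                           capture_output=True, text=True, check=True)
      return out.stdout.splitlines()

  def failed_count() -> int:
      # Parse pytest's summary line, e.g. "1 failed, 12 passed in 0.31s".
      out = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
      m = re.search(r"(\d+) failed", out.stdout)
      return int(m.group(1)) if m else 0

  def phase1_ok(baseline_failures: int) -> bool:
      # Only test files changed, and exactly one new failure appeared.
      only_tests = all(f.startswith("tests/") for f in changed_files())
      return only_tests and failed_count() == baseline_failures + 1

  def phase2_ok() -> bool:
      # Only code files changed, and the suite is green again.
      no_tests = all(not f.startswith("tests/") for f in changed_files())
      return no_tests and failed_count() == 0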
ManWith2Plans 13 hours ago [-]
I've been busy building and dogfooding open-artisan for my own development purposes. I've diverged quite a bit from main and am hoping to merge some of those changes back soon. It's basically an OpenCode plugin that forces opencode into a token-hungry state machine that tries to map the engineering process I follow, exposing only valid tools and states at every step of development. If you're interested in following along or trying it out, it's available here: https://github.com/yehudacohen/open-artisan/
Hopefully I'll merge in my large structural changes in the next couple of weeks. These changes will enhance the state machine meaningfully, as well as add support for the hermes agent.
ubj 10 hours ago [-]
I've said this before, but it's interesting to see momentum go back and forth between the flexibility and ease of everyday language, and the formal rigor of programming languages.
It feels like we are still discovering the optimal operating range on a spectrum between these two domains. Perhaps the optimal range will depend on the specific field in question.
throwthrowuknow 7 hours ago [-]
Isn’t this basically what Palantir does?
geon 13 hours ago [-]
How is this not obvious to everyone? It's like people forgot how to engineer.
_pdp_ 13 hours ago [-]
Or maybe, just maybe, LLMs do not run deterministically and that is OK?
In the real world almost nothing runs like that - only software, and even that has a lot of failures.
So perhaps rather than trying to make agents run deterministically, the goal is to assume some failure rate and build compensating controls around it.
zekenie 10 hours ago [-]
you know it really depends on what you're trying to accomplish and if it's possible to describe it with deterministic control flow
oinoom 14 hours ago [-]
this is just advocating for a harness, which has been the focus (along with evals) for at least the last three months by pretty much anyone working with agents professionally or seriously
afxuh 13 hours ago [-]
That's why agents complete a project within the first 3 prompts, and then maintaining and fine-tuning it takes ages until it hits "Session token expired".
eth415 15 hours ago [-]
agreed - this is what we’ve been trying to build at scale.
Ditto Ethan's point -- and hundreds of customers tell us it works very well. We'd value more feedback from this community, not just the Salesforce/Agentforce customer base!
try-working 12 hours ago [-]
that's why you need a recursive workflow that creates its own artifacts per step that can later be used for verification.
Nizoss 11 hours ago [-]
Sounds interesting, can you elaborate on your thinking? Got me curious.
try-working 5 hours ago [-]
how do you verify the work that was just done in the current stage? verify against the output artifacts from the previous stages. for example, if you have a requirement doc, then you can analyse the codebase for current state, and store as a doc. then generate the implementation plan based on the delta between requirements and current state. after implementation, create an implementation summary doc. to verify the implementation in the next stage, compare the implementation summary against the implementation plan, the previous codebase analysis and the original requirements doc, as well as codebase diffs.
so, every stage outputs a source of truth for that stage, which can be used by later stages for verification, alone or together with other artifacts. if you want to read more, here's the recursive-mode development workflow I built: https://recursive-mode.dev/introduction
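A sketch of that stage/artifact loop (paths and stage names are illustrative; ask_llm stands in for the model call, and artifacts/requirements.md is assumed to exist already):

  import pathlib

  def ask_llm(prompt: str) -> str:
      raise NotImplementedError("call your LLM API here")

  ART = pathlib.Path("artifacts")
  ART.mkdir(exist_ok=True)  # artifacts/requirements.md written beforehand

  def stage(name: str, prompt: str, *inputs: str) -> str:
      # Feed earlier artifacts in as context, write this stage's artifact out.
      context = "\n\n".join((ART / f"{i}.md").read_text() for i in inputs)
      output = ask_llm(prompt + "\n\n" + context)
      (ART / f"{name}.md").write_text(output)  # this stage's source of truth
      return output

  stage("analysis", "Describe the current state of the codebase.")
  stage("plan", "Plan work as the delta between requirements and current state.",
        "requirements", "analysis")
  stage("summary", "Summarize what was implemented.", "plan")
  stage("verify", "Compare the summary against plan, analysis and requirements; list gaps.",
        "summary", "plan", "analysis", "requirements")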
terminalbraid 12 hours ago [-]
My friend, you have invented management.
Nizoss 11 hours ago [-]
Not throwing shade at anyone here but the thought has definitely crossed my mind that we are recreating SAFe but for agents when looking at some of the orchestration setups out there. I think that it is better to not force the same hierarchical processes that worked for humans in large organizations onto agents and instead look at what they need to give better results and what their failure modes look like.
marvinified 8 hours ago [-]
Depends on the use case
ncrmro 7 hours ago [-]
deepwork.md is made for this.
cookiengineer 8 hours ago [-]
We have control flow. It's requirements specifications and test driven development. You just have to enforce it, so the agents cannot cheat their way around it.
I decided to build my agentic environment differently. Local only, sandboxed, enforced with Go specific requirement definitions that different agent roles cannot break as a contract.
That alone is far better than any hyped markdown-storage-sold-as-memory project I've seen in the last weeks.
Currently I am experimenting with skills tailored to other languages, because agentskills actually are kinda useless because they're not enforced nor can any of their metadata be used to predictably verify their behaviors.
My recommendation to others is: Treat LLM output as malware. Analyse its behavior, not its code. Never let LLMs work outside your sandbox. Force them to be unable to escape sandboxes. And that includes removing the Bash tool, for example, because that's not a reproducible sandbox.
Also, choose a language that comes with a strong unit testing methodology. I chose Go because it allows me to write unit tests for my tools, and even agents to agents communication down the line (with some limitations due to TestMain, but at least it's possible).
If you write your agent environment or harness in Typescript, you already failed before you started. Compiled code isn't typesafe because the compiler doesn't generate type checks in the resulting JS code.
Anyways, my two cents from the purpleteaming perspective that tries to make LLMs as deterministic as possible.
carterschonwald 8 hours ago [-]
I mean, of course. I've been working on this the past few months and I've got a bunch of tech towards this in flight, including some harness forks to layer my ideas in, e.g. my oh punkin pi test bed on my github.com/cartazio page. There are some shockingly obvious (once you see them) tricks that I think I can stack into a really nice harness product for just doing hard real work with these models more easily.
droolingretard 15 hours ago [-]
Are you the guy who used to write MapleStory hacks?
ltbarcly3 12 hours ago [-]
Don't listen to anyone who knows what should be done without proof. If someone 'knows' what agents 'need' then that knowledge is worth millions of dollars right now. If they haven't built it they are probably just talking shit.
yogthos 14 hours ago [-]
This was basically my realization as well. We are trying to get LLMs to write software the way humans do it, but they have a different set of strengths and weaknesses. Structuring tooling around what LLMs actually do well seems like an obvious thing to do. I wrote about this in some detail here:
My experimentation with Verblets also concluded plain functions are the most logical harness for LLMs.
encoderer 14 hours ago [-]
You can get a lot done with agentic programming without going "all in" on a gastown-like system, but I think there is a minimum viable setup:
1. an adversarial agent harness that uses one agent to create a plan and implement it, and another to review the plan and code-review each step.
2. an agentic validation suite -- a more flexible take on e2e testing.
3. some custom skills that explain how to use both of those flows.
With this in place you can formulate ideas in a chat session, produce planning artifacts, then use the adversarial system to implement the plans and the validation layer to get everything working e2e for human review.
There are a lot of tools you can use for these things but I chose to just build the tooling in the repo as I go.
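As a sketch of the adversarial piece in (1), with a headless claude -p standing in for whichever pair of CLIs you actually use, and plain code owning the loop:

  import subprocess

  def run_agent(prompt: str) -> str:
      out = subprocess.run(["claude", "-p", prompt],
                           capture_output=True, text=True, check=True)
      return out.stdout

  def implement_with_review(task: str, max_rounds: int = 3) -> str:
      plan = run_agent(f"Write a step-by-step implementation plan for: {task}")
      for _ in range(max_rounds):
          review = run_agent(
              "Review this plan. Reply APPROVED if sound, otherwise list problems:\n" + plan)
          if "APPROVED" in review:
              return run_agent("Implement this approved plan:\n" + plan)
          # Reviewer found problems: send the plan back for revision.
          plan = run_agent(f"Revise the plan to address this review:\n{review}\n\nPlan:\n{plan}")
      raise RuntimeError("plan never converged; escalate to a human")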
Schiendelman 14 hours ago [-]
Claude already creates multiple agents for some projects just to keep context windows smaller. I don't think it'll be long before they offer a testing agent along with their planning agent.
encoderer 14 hours ago [-]
I prefer having Codex author plans and implement, and Claude play reviewer. I do swap them from time to time, and I have a lot of respect for Claude 4.6 and 4.7, but for my domain I think Codex does a better job with the authoring.
Schiendelman 14 hours ago [-]
That's a cool idea! Plus I bet you can stay in lower tiers with both?
encoderer 14 hours ago [-]
You're definitely burning more tokens with the back/forth and multi-step approach but assuming you swap who does the authoring from time to time you can definitely get the max out of each plan. Review doesn't use as many tokens.
moron4hire 9 hours ago [-]
I've been building this at work. It's... shockingly not hard. People have been telling me, "get into agentic coding now or you'll get left behind" and the things they are saying need training and taste and expertise to figure out how to cajole the AI into doing a job are things that I can just write a program to do.
There's this guy at work who is kind of precious about Claude Code. When Hegseth banned Anthropic, this guy freaked out. He spent many pages ranting about how terrible Gemini and Codex are and basically nuked his project. He insisted only Claude could do his project.
Meanwhile, I managed to redo his work with GPT 4o in a weekend. No AI generated code anywhere, just being capable of writing a for-loop over a directory of files my own self. The AI part is only really necessary because folks can't be bothered to author documents with proper hierarchies.
People talk about "AI is going to eliminate boilerplate and accelerate development and we'll do new jobs that were too costly before". Yet this guy spent weeks coaxing Claude to do something that took me a few hours because "boilerplate" is really not that big of a deal. If this is the kind of job we're going to be able to do because the value-to-effort ratio was less than 1, it kind of indicates to me that there isn't a lot of value to gain at any level of effort. Yeah, it's not really worth your time to bend over and pick up a penny, but even if I had a magical penny snagging magnet, I'm still going to ignore the pennies because that's just how valueless pennies are.
If AI lets me never have to open a PowerPoint from a client to read the chart values from the piechart they screenshot and pasted into PowerPoint, that's wonderful. What more would I ever need? The rest of the work just isn't that hard. But if you think AI is going to replace people like me because it can do "boilerplate", the AI is not anywhere near as fast or cheap at getting to a reliable, consistent, repeatable process as a human for that.
empath75 13 hours ago [-]
I have heard this sort of thing a lot from people working with agents, and I just think it's fundamentally misguided as a way to think of them, and if you work with them this way, you are probably setting money on fire for no reason because the tasks you are able to perform this way _don't need agents to begin with_.
You might use an LLM api call here as a translation or summary step in a deterministic workflow, but they are not acting as agents, because they lack _agency_.
The value of using an agent harness is precisely that they are _not deterministic_. You provide agents a goal, tools and constraints and they do the task they were asked to perform as best as they can figure out how to do it. You may provide them deterministic workflows as tools they can call, but those workflows, outside of the agent harness itself, should not constrain what the agent does. You are paying a lot of money for agent reasoning, not to act as an expensive data transformation pipeline.
It may be the case that a lot of agentic workflows are more properly done with fully deterministic workflows, but the goal there should be to _remove the agents entirely_ and spend those tokens on non deterministic tasks that require agentic decision making.
I do think there are fundamental limits to what agents are capable of doing unsupervised and there does need to be a lot more human guidance, observability and control over what they are doing, but that's sort of the opposite of embedding them in deterministic workflows, that is more of team integration/communication problem to solve.
MagicMoonlight 6 hours ago [-]
This is slop generated right?
AIorNot 15 hours ago [-]
I mean we have Langgraph, BAML etc
taherchhabra 15 hours ago [-]
I wrote something recently on how agent development differs from traditional software development
Why pay extra money for Opus 4.7 when you could run Qwen 3.6 35b for free and get similar results?
It's one thing for a model to be very clearly instructed to add a REST endpoint to an existing Django app and add a button connected to it on the front end, vs "Design me a YouTube". The smaller models can pretty dependably do the first and fall flat on the second.
You prompt for what you want it to do, and it will write e.g. Python scripts as needed for the looping part, and for example use claude -p for the LLM call.
You can build this in 10 minutes.
I don’t use a cloud platform, so I can’t comment on that part. I‘d say just run it on your own hardware, it’s probably cheaper too.
Everyone misses this pattern with skills: you can just drop code alongside a SKILL.md to guarantee certain behaviors, but for some reason everyone's addicted to writing prompts. You don't even need to build a CLI. A simple skill.py with tasks does it. You can even have helpers that call `claude -p`!
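For example, a hypothetical layout: a report-gen/SKILL.md whose instructions just say "run skill.py", plus a script like the sketch below that owns the loop and shells back to the model only where judgment is needed:

  #!/usr/bin/env python3
  # report-gen/skill.py -- the loop and file handling are deterministic;
  # only the per-file summary is delegated back to the model.
  import pathlib, subprocess

  def summarize(text: str) -> str:
      out = subprocess.run(["claude", "-p", "Summarize in two sentences:\n" + text],
                           capture_output=True, text=True, check=True)
      return out.stdout.strip()

  for path in sorted(pathlib.Path("reports").glob("*.md")):  # "reports" is illustrative
      print(f"## {path.name}\n{summarize(path.read_text())}\n")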
I feel like I'm falling out of whatever is popular these days. I've been using prepaid tokens and custom harnesses for a long time now. It just seems to work. I can ignore most of the news. Copilot & friends are currently dead to me for the problems I've expressly targeted. For some codebases it's not even in the same room of performance anymore, despite using the exact same GPT5.4 base model.
Aren’t some benchmarks giving the model multiple shots at a problem and only keeping the successful result if it appears, ignoring the failure rate?
At the time I had access to only 4o, and there was no way to guarantee that the agent would follow the flowchart if I just mentioned it in its prompt. What I ended up doing was wrapping the agent in a loop that kept feeding it the next step in the flowchart. In a way, a custom harness for the agent.
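The loop itself can be trivial. A sketch, with ask_llm standing in for the completion call and an illustrative flowchart:

  # The harness feeds the model exactly one step at a time instead of
  # trusting it to follow the whole flowchart from its prompt.

  def ask_llm(prompt: str) -> str:
      raise NotImplementedError("call your LLM API here")

  FLOWCHART = [
      "Open the customer record named in the request.",
      "Check whether the complaint matches a known issue.",
      "Draft a response citing the matching policy.",
  ]

  def run(request: str) -> list[str]:
      transcript = []
      for step in FLOWCHART:                      # the loop owns sequencing
          reply = ask_llm(f"Request: {request}\nYour current step: {step}\n"
                          f"Do only this step and report the result.")
          transcript.append(reply)                # each step's output is captured
      return transcript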
Sorry, you thought a prompt was a suitable replacement for a testing suite?
Codex's short context and todo-list system combined somehow help here though. Because of the frequent compaction, the model is forced to recheck which todo list items are not done yet and which workflow skill it has to use. I used to leave it for multiple hours to do a big cleanup and it finished without obvious issues.
This is cool. Can you elaborate on it? Is it flaky? Does it take a long time?
You could still use an LLM to write and extend the tests, but running the tests would be deterministic and would use less resources.
AI is being pushed so much at work right now. For non-dev stuff even. The amount of things that people think are "awesome never seen this" is staggering.
Just because you haven't seen file format X converted to file format Y before, and now you asked the LLM to do it and it worked, doesn't mean you needed an LLM for it, nor that it's remarkable. The LLM knew how to do it because it learned from a bazillion online sources for deterministic converters that cost nothing (and are open source). But now you're paying, every single time, for a non-deterministic version of it and you find it cool. It's magic ...
But I guess they deserve it.
You'd be surprised how many people are comfortable attributing something they do not understand to magic.
More than anything, AI lets people who couldn't, and wouldn't bother to, learn to write simple code sidestep the ones who can, and build solutions to scratch their own itch, faster too.
Now human behavior kicks in, and they don't want to hand control back into the hands of the people who can code to solve problems.
Put this together and you have a good model for understanding the AI sales pitch: it's magic.
And like all magic, it's but a trick.
Looking at the Mythos benchmarks, it doesn't seem like the models are that close to being truly reliable for agentic tasks.
Is it a year away, or five? That's a big difference in deciding what to build today.
I tell it "write a program that goes over this bunch of files and do this".
Sometimes "do this" can be invoking another claude instance.
So far we've seen agents spawn subagents directly, but that still means leaving the final flow control to the non-deterministic orchestrator model, and so your case is a perfect example of where it would probably fail.
Probably not explaining it very well but I think it's pretty effective at reducing token usage.
https://linear.app/agents
"What we're not open sourcing (yet) is the runtime. "
Like, most of the stuff needed to make AI better is stuff that could have been written by hand in 2015, so why hasn't anyone used their agents to do so?
To be fair, there is probably a way to make it work the way you want. You could add an MCP for a task queue and let the model work each item in the task queue. The tasks could be added by a deterministic system i.e. your harness.
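A sketch of such a server, from memory of the MCP Python SDK's FastMCP helper (check the SDK docs for the current API). The harness fills the queue deterministically; the model can only pull and complete items:

  from mcp.server.fastmcp import FastMCP

  mcp = FastMCP("task-queue")
  QUEUE: list[str] = []      # populated by the deterministic harness
  DONE: list[str] = []

  @mcp.tool()
  def next_task() -> str:
      """Return the next pending task, or 'EMPTY' when done."""
      return QUEUE[0] if QUEUE else "EMPTY"

  @mcp.tool()
  def complete_task(result: str) -> str:
      """Mark the current task finished and record its result."""
      if not QUEUE:
          return "queue already empty"
      DONE.append(f"{QUEUE.pop(0)} -> {result}")
      return f"{len(QUEUE)} tasks remaining"

  if __name__ == "__main__":
      # Illustrative fill, echoing the 200-requirements-files scenario.
      QUEUE.extend(f"Test requirements file req-{i:03}.md" for i in range(1, 201))
      mcp.run()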
> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
- Please consult me when you encounter any ambiguous edge cases
Attaching the AI to production to directly do things with API calls is bad. For me the only use case where the app should do any AI stuff is with reading/categorizing/etc. Basically replacing the "R" in old CRUD apps. If you want to use that same new AI based "R" endpoint to auto fill forms for the "C", "U", and "D" based on a prompt that's cool, but it should never mutate anything for a customer before a human reviews it. Basically CRUD apps are still CRUD apps (and this will always be true), they just have the benefit of having a very intelligent "R" endpoint that can auto complete forms for customers (or your internal tooling/Jenkins pipelines/etc), or suggest (but never invoke) an action.
Why not check the logprobs of the output and take action when the probability of the first and second most likely token is too similar (or below a certain threshold)?
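A sketch of that check against the OpenAI chat completions API (field names from memory, so verify against current docs); it only makes sense where the answer is a single constrained token:

  import math
  from openai import OpenAI

  client = OpenAI()

  def confident_answer(prompt: str, min_gap: float = 0.3) -> str | None:
      resp = client.chat.completions.create(
          model="gpt-4o", messages=[{"role": "user", "content": prompt}],
          max_tokens=1, logprobs=True, top_logprobs=2)
      top2 = resp.choices[0].logprobs.content[0].top_logprobs
      p1, p2 = (math.exp(t.logprob) for t in top2)
      if p1 - p2 < min_gap:   # first and second choice too close: punt
          return None
      return top2[0].token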
llm -> prompt -> result
llm -> prompt + prompt encoded as skill -> result
llm -> prompt + deterministic code encoded as skill -> result
I do think prompting to generate code early can shortcut that path to deterministic code, but we're still essentially embedding deterministic code in a non-deterministic wrapper. There is a missing layer of determinism in many cases that actually make long-horizon tasks successful. We need deterministic code outside the non-deterministic boundary via an agentic loop or framework. This puts us in a place where the non-deterministic decision making is sandwiched in between layers of determinism:
deterministic agentic flows -> non-deterministic decision making -> deterministic tools
This has been a very powerful pattern in my experiments and it gets even stronger when the agents are building their own determinism via tools like auto-researcher.
https://lethain.com/agents-as-scaffolding/
The hardware control team delivers a spec as a document and spreadsheet. The mobile team was using that to code the interface library and validating their code against the server. I converted the document to TSV, sent some parts to Claude and have it write a parser for the TSV keeping all the nuances of human written spec. It took more than 150 iterations to get the parser to handle all edge cases and generate an intermediate output as JSON. Then Claude helped me write a code generator using some custom glue on top of Apollo to generate the code that is consumed by the mobile app.
This whole pipeline runs as part of Github actions and calls Claude only when our library validator fails. There is an md file which is sent to Claude on failure as part of the request to figure out what went wrong, propose a solution and create a PR. This is followed by a human review, rework and merge. Total credits consumed to get here < $350.
I think most problems with AI tend to be around whether you can deterministically test the thing you are asking it to do.
How many of us would never ever show work, without going to check the thing we just built first?
Of course: have it write tests first; and run them to check its work.
Works well for refactoring, but greenfield implementations still rely on a spec that is guaranteed to be incomplete, overcomplete and wrong in many ways.
https://www.decisional.com/blog/workflow-automation-should-b...
I think there is a fundamental incentive problem - code + llm + harness is bound to be more efficient but the labs want you to burn tokens so they are not going to tell you to use the code, just burn more tokens. They are asking us to forget about the token cost and reliability for now - model will become better.
This means that most people just believe that their agent should just be able to do anything with the help of some Model fairy dust with prompts + skills.
People need to watch their agents fail in production to be able to come to the right conclusion unfortunately.
They are particularly bad at complex multiline parsing, writing all sorts of weird/crude python/awk scripts and getting confused in the process.
I wish they would use Perl 6 grammars or Haskell/Parsec or similar and write better parsing scripts.
Correct. The concept of having probabilistic output with deterministic acceptance “guardrails” is illogical. If the domain resists deterministic modeling such that you’re using an LLM, the guardrails don’t magically gain that capability.
Your comment EXACTLY mirrors my experience. Week 1 was ever expanding prompts, and degrading performance. Week 2 has been all about actually defining the objects precisely (notes, tasks, projects, people etc) and defining methods for performing well defined operations against these objects. The agent surface has, as you rightly point out, shrunk to a translation layer that converts natural language to commands and args that pass the input validator.
Such an LLM might have fared better with the strawberry test.
Are google search results modifying your software at runtime?
Take our agent chat for example: the output text is a UI, and agents can generate charts and even constrained UI elements.
Isn’t that created and adapted at run time?
If you mean agents live-modifying your code, I think that's pretty much here as well. They can read the logs and send PRs.
The only thing is how fast that loop will execute from days or hours to mins or seconds, and what validation gates it needs to pass.
My git repo is pretty much self modifying personal software at this point, that I interface through the ide chat window.
But I don’t think we will ever lose the intermediary deterministic language (code) between the llm and the execution engine.
It would be prohibitively expensive to run everything through models all the time.
But I am starting to think we need a more precise language than English when talking with LLMs. That can do both precision and ambiguity when you need either.
I say the what, the LLM says the how.
Good luck with that. Users will flood you with complaints if a button moves 5px to the left after a design update. A program that is generated at runtime, with not just a variable UI but also UX and workflows, would get you death threats.
The problem is that outside of that, most people want boring and regular interfaces so they can get in, solve the problem, and get out. They don't want to "love" it or care if it's "sexy"; they want it to work and get out of the way.
LLMs transmogrifying your software at every request assumes people are software architects and creators who love the computer interface, and that just doesn't describe the bulk of the population.
Most people using computers use them to consume things or to access things, not for their own sake, and they certainly don't think "what if I just had code to do x..." unless x makes them a lot of money.
I think the core issue is that non-deterministic output is great for a chatbot experience where you want unpredictable randomness so it feels less like talking to the mirror - but when it comes to coding I think we're pretty fundamentally misaligned in sticking to that non-deterministic approach so firmly.
I just had Claude write itself a couple shell scripts to handle a bunch of common cases (like running tests) in my workflow where it just couldn't figure it out efficiently. Now it just runs those tools and sets things up instead of spinning in circles for half an hour.
Every time it tries to ask me if it can run some one-off crazy shell or python one-liner to do something, I've started asking myself if I should have it write a tool I can auto-approve instead.
However, there are some things that I think need a foundational next-generation improvement of some sort. The way that LLMs sort of smudge away "NEVER DO X" and can even after a lot of work end up seeing that as a bit of a "PLEASE DO X" seems fundamental to how they work. It can be easy to lose track of as we are still in the initial flush of figuring out what they can do (despite all we've already found), but LLMs are not everything we're looking for out of AI.
There should be some sort of architecture that can take a "NEVER DO X" and treat it as a human would. There should be some sort of architecture that instead of having a "context window" has memory hierarchies something like we do, where if two people have sufficiently extended conversations with what was initially the same AI, the resulting two AIs are different not just in their context windows but have actually become two individuals.
I of course have no more idea what this looks like than anyone else. But I don't see any reason to think LLMs are the last word in AI.
The reason why "DO NOT SKIP" fails is because your agent is responsible for too many things and there's things in context that are taking away the attention from this guidance.
But nobody said the agent that does enforcement must be the same agent that builds. While you can likely encode some smart decision making logic in your deterministic control flow, you either make it too rigid to work well, or you'll make it so complex that at that point, you might as well just use the agent, it will be cheaper to setup and maintain.
You essentially need 3 base agents (a sketch follows the list):
- Supervisor that manages the loop and kicks right things into gear if things break down
- Orchestrator that delegates things to appropriate agents and enforces guardrails where appropriate
- Workers that execute units of work. These may take many shapes.
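A sketch of that three-role split, with plain code as the supervisor's outer loop and ask_llm standing in for the model calls:

  def ask_llm(role_prompt: str, message: str) -> str:
      raise NotImplementedError("call your LLM API here")

  def worker(task: str) -> str:
      return ask_llm("You execute exactly one unit of work.", task)

  def orchestrator(goal: str, done: list[str]) -> str:
      return ask_llm("Break the goal into the single next task, or say FINISHED.",
                     f"Goal: {goal}\nCompleted so far: {done}")

  def supervise(goal: str, max_steps: int = 20) -> list[str]:
      done: list[str] = []
      for _ in range(max_steps):      # supervisor: hard cap, restart logic, etc.
          task = orchestrator(goal, done)
          if "FINISHED" in task:
              return done
          done.append(f"{task}: {worker(task)}")
      raise RuntimeError("step budget exhausted; supervisor escalates")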
This is essentially declarative programming. Most traditional programming is imperative, what most developers are used to - I give the exact set of instructions and expect them to be obeyed as I write them. Agents are way more declarative than imperative - you give them a result, they work on getting that result. Now the problem of course, is in something declarative like say, SQL, this result is going to be pretty consistent and well-defined, but you're still trusting the underlying engine on how to go about it.
Thinking about agents declaratively has helped me a lot rather than to try to design these rube-goldberg "control" systems around them. Didn't get it right? Ok, I validated it's not correct, let's try again or approach it differently.
If you really need something imperative, then write something imperative! Or have the agent do so. This stuff reads like trying to use the wrong tool for the job.
I think what we're doing is a step more abstract than that... How about "narrative programming"? (Though we could debate whether "programming" is still an applicable word.)
Yes, it may look like declarative programming, but it's within an illusion: we aren't actually describing our goals "to" an AI that interprets them. Instead, there's a story-document where our human stand-in character has a dialogue with a computer-character, and up in the real world we're hoping that the LLM will append more text in a way that makes a cohesive longer story with something useful that can be mined from it.
It's not just an academic distinction, if we know there's a story, that gives us a better model for understanding (and strategizing) the relationship between inputs and outputs. For example, it helps us understand risks like prompt-injection, and it provides guidance for the kinds of training data we do (or don't) want it trained on.
And then you run into similar issues as the LLM does, like silent failures, loops, and contradictions, unless you're very careful.
The essence might be the same closed-world-assumption problem. In the LLM case this manifests as hallucination rather than admitting it does not know.
And to your point: instructing a (non-deterministic) LLM declaratively ("get me to this end state") compounds the likelihood of going off the rails.
It is the same in terraform - yes, the HCL spec defines things very precisely, but you're kind of at the mercy of how the provider and provider API decide how to handle what you wrote, which can be very messy and inconsistent even when nothing changed on your side at all. LLM/agent usage feels a lot like that to me, in the sense it's declarative and can be a bit lossy. As a result there are things I could technically do in terraform but would never, because I need imperativeness.
My main point being, I think people are trying to ram agents into a ton of cases where they might not necessarily need or even want to be used, and stuff like this gets written. Maybe not, but I see it day to day - for instance, I have a really hard time convincing coworkers that are complaining about the reliability of MCP responses with their agents, that they could simply take an API key, have the agent write a script that uses it, and strictly bound/define the type of response format they want, rather than let the agent or server just guess - for some reason there is some inclination to "let the agent decide how to do everything."
I think that's probably what this article is getting at, but I am saying: rather than creating these elaborate control flows with validation checks everywhere to rein in an unruly application making dumb decisions, why not just use it to write deterministic automation instead of using the agent as the automation?
Slash commands, for instance, are a misfeature. I should never have to wait for the chatbot finish a turn so that I can check on the status of my context window or how much money I've spent this session. Control should be orthogonal to the chat loop.
Even things that have nothing to do with controlling the text generator's input and output are entangled with chat actions for no good reason except "it's a chat thing, let's pretend we're operating an IRC bot".
There are a zillion LLM agents out there nowadays, but none of them really separate control from the agent loop from presentation well. (A few do at least have headless modes, which is cool.)
I get what you're trying to say but in practice architecting what you propose is considerably more difficult. Why not build it and try to get hired by one of the bigcos?
Hell, a telegram bot can handle that just fine.
They just want features. They don't really care about duplicated work, so half of them reinvent the TUI rendering wheel. Pluggability is something that might be actually hostile to their interests in lock-in. And the AI labs probably think "after a couple more scaling cycles, our models will be so good that our agents can just rewrite themselves from scratch"; until they hit a compute or power wall, it always looks rational to them to defer rearchitecting.
Another real possibility is that if you work on an agent with a really clean architecture and publish it in hopes of getting hired by some AI company, all of them think "that looks great, but we don't want to rearchitect right now". Your code winds up in the training set, and a year and a half from now, existing agents can "one-shot" rewrites along the lines of your design because they're "smarter".
As for me, I'm not that interested, personally. There are other things I want to build and I'm working on those.
Other things don't though.
In the GUI I can see the context indicator and usage stats.
It also makes it easier to jump between conversations and see the updates.
Sometimes I use Claude Code or opencode in the terminal, and my experience is much poorer compared to the Codex desktop app.
Here's a pretty specific example of what I mean, but maybe food for thought:
Podcast (20 minute digest): https://pub-6333550e348d4a5abe6f40ae47d2925c.r2.dev/EP008.ht...
Paper: https://arxiv.org/abs/2605.00225
My first thought was: well, agents seem nice, but I think AI workflows are a better bet. However, I didn't really understand AI or agents in depth, and I felt like I was just "doing things the old way" and that removing flexibility from agents was a ridiculous idea.
After some research I got the impression that I was right. A well defined workflow and scope is just what's needed for AI. It's cheaper and more consistent. It probably even makes the whole thing run well with non-SOTA models.
0 - https://stripe.dev/blog/minions-stripes-one-shot-end-to-end-...
Making an unreliable, nondeterministic system give reliable results for a bounded task with well-understood parameters is... like half of engineering, no?
There's a huge difference between "generate this code here's a vague feature description" and "here's a list of criteria, assign this input to one of these buckets" -- the latter is obviously subject to prompt engineering, hallucination, etc -- but so can a human pipeline!
...which is why we write deterministic code to take the human out of the pipeline. One of the early uses of computers was calculating firing tables for artillery, to replace teams of humans that were doing the calculations by hand (and usually with multiple humans performing each calculation to catch errors). If early computers had a 99% chance of hallucinating the wrong answer to an artillery firing table, the response from the governments and militaries that used them would not be to keep using computers to calculate them. It would be to go back to having humans do it with lots of manual verification steps and duplicated work to be sure of the results.
If you're trying to make LLMs (a vague simulacrum of humans) with their inherent and unsolvable[1] hallucination problems replace deterministic systems, people are going to eventually decide to return to the tried and true deterministic systems.
1: https://arxiv.org/abs/2401.11817
But if you're trying to tell me that every time you list criteria you get them all perfectly matched, you're clearly gifted.
Though chaotic, which I believe is the better word here - a single-letter change may result in wildly different results.
We just choose to use more random inference rules, because they have better results.
Somewhere in between that I guess is the varying levels of intelligence more likely able to make the “right” decision for anything you throw at it.
Determinism is a different matter. Scripts and hooks are really the main levers you can pull there, but yeah - a a decent script and a cron job will handle certain things much better (and for a fraction of the cost)
The alternative is running your ten lines of Python in the most expensive, slowest, least reliable way possible. (Sure is popular though)
For example, most people were using the agents for internet research. It would spin for hours, get distracted or forget what it was supposed to be doing.
Meanwhile `import duckduckgo` and `import llm` and you can write ten lines that does the same thing in 20 seconds, actually runs deterministically, and costs 50x less.
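Roughly those ten lines, with package APIs from memory (duckduckgo_search's DDGS and Simon Willison's llm), so treat it as a sketch:

  from duckduckgo_search import DDGS
  import llm

  question = "What changed in the latest CPython release?"
  hits = DDGS().text(question, max_results=5)            # deterministic search step
  context = "\n".join(f"{h['title']}: {h['body']}" for h in hits)

  model = llm.get_model("gpt-4o-mini")
  # One bounded LLM call at the end, grounded in the fetched results.
  print(model.prompt(f"Using only these search results:\n{context}\n\n"
                     f"Answer: {question}").text())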
The current models are much better -- good enough that Auto-GPT is real now! -- but running poorly specified control flow in the most expensive way possible is still a bad idea.
That’s why you see “base” vs “instruct” models for example — base is just that, the basic language model that models language, but doesn’t follow instructions yet.
Especially the open weights models have lots of variants, eg tuned for math, tuned for code, tuned for deep thinking, etc.
But it’s definitely a post train thing, usually done by generating synthetic data using other models.
https://github.com/yieldthought/flow
Happily, 5.5 is good at writing and using it.
Deterministic workflows using AI to help perform those steps not requiring human input has been an area of interest for me for some time. Particularly interesting how you are using the AI to determine what a step has achieved and the action of the next step.
Combine it with workflow elements that do handle human steps, together with a notification/routing/task system, and it would make for a helpful system for so many.
This is the only way to guarantee AI usage doesn't burn you. Any automation beyond this is just theater, no matter how much that hurts to hear/undermines your business model.
A bird sings, a duck quacks. You don't expect the duck to start singing now, do you?
If a business can get away with some margin of error being acceptable, more power to them. But if not (or doing so would cause additional problems; what I'd imagine to be true for a non-trivial number of orgs), it's wise to consider the nature of the tool a lot of people are suggesting is mandatory if you're dependent on consistent, predictable results.
Presuming you meant burns you out though.
It will make a mistake and you will get burned, so you have to babysit it.
Swamp teaches your Agent to build and execute repeatable workflows, makes all the data they produce searchable, and enables your team to collaborate.
We also build swamp and swamp club using swamp. You can see that process in the lab[2]. This combines all of the creativity of the LLM for the parts that matter, while providing deterministic outcomes for the parts you need to be deterministic.
1: https://swamp.club
2: https://swamp.club/lab
I created it to address this exact issue. It is a vendor-neutral ESLint-style policy engine and currently supports Claude Code, Codex, and Copilot.
It uses the agents hooks payloads and session history to enforce the policies. Allowing it to be setup to block commits if a file has been modified since the checks were last run, disallow content or commands using string or regex matching, and enforce TDD without the need of any extra reporter setup and it works with any language.
Feedback welcome: https://github.com/nizos/probity
https://engine.build
We need to define agents in code, and drive them through semi-deterministic workflows. Kick subtasks off to agents where appropriate, but do things like gather context and deal with agent output deterministically.
This is a massive boost in accuracy, cost efficiency, AND speed. Stop using tokens to do the deterministic parts of the task!
[1] https://github.com/OpenHands/software-agent-sdk
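In sketch form (Python; `run_agent` stands in for a call through whatever SDK or harness you use, e.g. [1]):

    import json, pathlib

    def run_agent(prompt):
        return "..."  # stand-in: one scoped agent call via your SDK

    # The code owns the control flow; agents only get the fuzzy subtasks.
    results = {}
    for src in sorted(pathlib.Path("src").rglob("*.py")):
        context = src.read_text()                        # gather context in code
        results[str(src)] = run_agent(
            f"Review this file for bugs:\n{context}")    # agentic subtask
    pathlib.Path("review.json").write_text(json.dumps(results, indent=2))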
why tf would i ever need this
My personal opinion is that AI and agents are being misrepresented… The amount of setup, guidance, and testing that's required to create a smarter version of a form is insane.
At the moment my small test is:
- Compressed instructions (to fit within the 8k limit)
- 9 different types of policies to guide the agent (json)
- 3 actual documents outlining domain knowledge (json)
- 8 topics (hint harvesting, guide rails, and the pieces of information prepared as adaptive cards for the user)
- 3 tools (to allow for connectors)
The whole thing is as robust as I can make it but it still feels like a house of cards and I expect some random hiccup will cause a failure.
[1] https://niyikiza.com/posts/capability-delegation/
Both designs (Lightroom, game engines) have worked successfully.
There's probably nothing that prevents mixing both approaches in the same "app".
Still have yet to see a universal treatment that tackles this well.
I see this as the most robust way to build a predictable system that runs in a controlled way, taking advantage of probabilistic AIs while reducing the impact of their hallucinations.
LLMs simply can't be trusted to follow instructions in the general case, no matter how much you constrain them. The power of very large probabilistic models is that they basically solved the _frame problem_ of classic AI: logical reasoning didn't work for general tasks because you can't encode all common-sense knowledge as axioms, and inference engines lost their way trying to solve large problems.
LLMs fix those handicaps, as they contain huge amounts of real world knowledge and they're capable of finding facts relevant to the problem at hand in an efficient way. Any autonomous system using them should exploit this benefit.
In particular, all bookkeeping logic should move into the symbolic layer: https://zby.github.io/commonplace/notes/scheduler-llm-separa...
"One thing that I have seen in the wild quite a bit is taking the agent pattern and sprinkling it into a broader more deterministic DAG." - https://github.com/humanlayer/12-factor-agents/blob/main/REA...
I've tried doing something similar with AI by running a prompt several times and then have an agent pick the best response. It works fairly well but it burns a lot of tokens.
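That best-of-n trick is only a few lines, and the token cost is roughly n+1 calls per prompt (`generate` and `judge` below stand in for your model calls):

    def generate(prompt):
        return "..."              # one sampled completion

    def judge(prompt, candidates):
        return 0                  # index of the best candidate (a second model call)

    def best_of_n(prompt, n=5):
        # n generations plus one judging call: ~(n+1)x the tokens.
        candidates = [generate(prompt) for _ in range(n)]
        return candidates[judge(prompt, candidates)]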
Agents aren't reliable; use workflows instead.
I feel hooks are an integral part of your code harness; they're the only deterministic way to control coding agents.
Can't wait for y'all to come full circle and invent programming from first principles.
https://arxiv.org/abs/2312.00990
using this is going to do the opposite of what you want
Phase 1: only test files may be altered, exactly one new test failure must appear.
Phase 2: only code files may be altered. The phase is cleared when the test now succeeds and no other tests fail.
If you get stuck, bail and ask for guidance
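Those two invariants can be enforced from the harness side; a sketch (assuming pytest, with `run_agent` standing in for an agent call that edits files; the only-test-files/only-code-files restriction, e.g. via a git diff check, is omitted for brevity):

    import subprocess

    def run_agent(prompt):
        pass  # stand-in: agent edits files in the working tree

    def failing_tests():
        # pytest's -rf summary prints one "FAILED path::test" line per failure.
        out = subprocess.run(["pytest", "-q", "--tb=no", "-rf"],
                             capture_output=True, text=True).stdout
        return {line.split()[1] for line in out.splitlines()
                if line.startswith("FAILED")}

    before = failing_tests()
    run_agent("Phase 1: add exactly one failing test. Only touch test files.")
    after = failing_tests()
    new = after - before
    assert len(new) == 1, "Phase 1: expected exactly one new failure"

    run_agent("Phase 2: make that test pass. Only touch code files.")
    final = failing_tests()
    assert not (new & final), "Phase 2: the new test still fails"
    assert final <= before, "Phase 2: other tests broke"
    # If an assertion trips, bail and ask a human for guidance.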
https://github.com/yehudacohen/open-artisan/
Hopefully, I'll merge in my large structural changes in the next couple of weeks. These structural changes will enhance the state machine meaningfully, as well as add support for the hermes agent.
It feels like we are still discovering the optimal operating range on a spectrum between these two domains. Perhaps the optimal range will depend on the specific field in question.
In the real world almost nothing runs like that; only software does, and even software has a lot of failures.
So perhaps, rather than trying to make agents run deterministically, the goal is to assume some failure rate and build compensating controls around it.
https://github.com/salesforce/agentscript
So every stage outputs a source of truth for that stage, which later stages can use for verification, alone or together with other artifacts. If you want to read more, here's the recursive-mode development workflow I built: https://recursive-mode.dev/introduction
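A sketch of that artifact handoff between stages (stage names and fields are made up, and the artifacts are assumed to have been written by earlier stage runs):

    import json, pathlib

    ART = pathlib.Path("artifacts")

    def load(stage):
        return json.loads((ART / f"{stage}.json").read_text())

    # A later stage verifies itself against an earlier stage's source of truth.
    plan = load("plan")
    impl = load("implement")
    missing = set(plan["items"]) - set(impl["completed"])
    if missing:
        raise SystemExit(f"implementation drifted from the plan: {missing}")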
I decided to build my agentic environment differently: local only, sandboxed, and enforced with Go-specific requirement definitions that the different agent roles cannot break, as a contract.
That alone is far better than any hyped markdown-storage-sold-as-memory project I've seen in the last weeks.
Currently I am experimenting with skills tailored to other languages, because agentskills are actually kinda useless: they're not enforced, nor can any of their metadata be used to predictably verify their behaviors.
My recommendation to others is: treat LLM output as malware. Analyse its behavior, not its code. Never let LLMs work outside your sandbox, and make sure they cannot escape it. That includes removing the Bash tool, for example, because that's not a reproducible sandbox.
Also, choose a language that comes with a strong unit testing methodology. I chose Go because it allows me to write unit tests for my tools, and even agent-to-agent communication down the line (with some limitations due to TestMain, but at least it's possible).
If you write your agent environment or harness in TypeScript, you already failed before you started: the compiled code isn't type-safe at runtime, because the compiler doesn't generate type checks in the resulting JS.
Anyways, my two cents from the purpleteaming perspective that tries to make LLMs as deterministic as possible.
https://yogthos.net/posts/2026-02-25-ai-at-scale.html
1. an adversarial agent harness that uses one agent to create a plan and implement it, and another to review the plan and code-review each step.
2. an agentic validation suite -- a more flexible take on e2e testing.
3. some custom skills that explain how to use both of those flows.
With this in place you can formulate ideas in a chat session, produce planning artifacts, then use the adversarial system to implement the plans and the validation layer to get everything working e2e for human review.
There are a lot of tools you can use for these things but I chose to just build the tooling in the repo as I go.
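The adversarial loop in (1) is simple to express (a sketch; `implementer` and `reviewer` stand in for the two agent roles):

    def implementer(task, feedback):
        return "patch ..."       # stand-in: agent that plans and implements

    def reviewer(task, work):
        return True, ""          # stand-in: agent that reviews plan and code

    def adversarial(task, max_rounds=3):
        feedback = ""
        for _ in range(max_rounds):
            work = implementer(task, feedback)
            approved, feedback = reviewer(task, work)
            if approved:
                return work
        raise RuntimeError("no approved result; escalate to a human")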
There's this guy at work who is kind of precious about Claude Code. When Hegseth banned Anthropic, this guy freaked out. He spent many pages ranting about how terrible Gemini and Codex are and basically nuked his project. He insisted only Claude could do his project.
Meanwhile, I managed to redo his work with GPT 4o in a weekend. No AI generated code anywhere, just being capable of writing a for-loop over a directory of files my own self. The AI part is only really necessary because folks can't be bothered to author documents with proper hierarchies.
People talk about "AI is going to eliminate boilerplate and accelerate development and we'll do new jobs that were too costly before". Yet this guy spent weeks coaxing Claude to do something that took me a few hours because "boilerplate" is really not that big of a deal. If this is the kind of job we're going to be able to do because the value-to-effort ratio was less than 1, it kind of indicates to me that there isn't a lot of value to gain at any level of effort. Yeah, it's not really worth your time to bend over and pick up a penny, but even if I had a magical penny snagging magnet, I'm still going to ignore the pennies because that's just how valueless pennies are.
If AI lets me never have to open a PowerPoint from a client to read the chart values from the piechart they screenshot and pasted into PowerPoint, that's wonderful. What more would I ever need? The rest of the work just isn't that hard. But if you think AI is going to replace people like me because it can do "boilerplate", the AI is not anywhere near as fast or cheap at getting to a reliable, consistent, repeatable process as a human for that.
You might use an LLM api call here as a translation or summary step in a deterministic workflow, but they are not acting as agents, because they lack _agency_.
The value of using an agent harness is precisely that they are _not deterministic_. You provide agents a goal, tools and constraints and they do the task they were asked to perform as best as they can figure out how to do it. You may provide them deterministic workflows as tools they can call, but those workflows, outside of the agent harness itself, should not constrain what the agent does. You are paying a lot of money for agent reasoning, not to act as an expensive data transformation pipeline.
It may be the case that a lot of agentic workflows are more properly done with fully deterministic workflows, but the goal there should be to _remove the agents entirely_ and spend those tokens on non deterministic tasks that require agentic decision making.
I do think there are fundamental limits to what agents are capable of doing unsupervised and there does need to be a lot more human guidance, observability and control over what they are doing, but that's sort of the opposite of embedding them in deterministic workflows, that is more of team integration/communication problem to solve.
https://x.com/i/status/2051706304859881495