Almost all the points are not about what DSPy is mainly supposed to offer.
What it's supposedly great at is automatic optimization; for everything else... who the hell puts Python in production just to make some API calls?
There are "frameworks" available in all the better languages, but the constructs behind them are not that complicated. And why does DSPy even try to compete with LangChain/Graph/crap?
deaux 1 hour ago [-]
I don't see it at all.
> Typed I/O for every LLM call. Use Pydantic. Define what goes in and out.
Sure, not related to DSPy though, and completely tablestakes. Also not sure why the whole article assumes the only language in the world is Python.
> Separate prompts from code. Forces you to think about prompts as distinct things.
There's really no reason prompts must live in a file with a .md or .json or .txt extension rather than .py/.ts/.go/.., except if you indeed work at a company that decided it's a good idea to let random people change prod runtime behavior. If someone can think of a scenario where this is actually a good idea, feel free to enlighten me. I don't see how it's any more advisable than editing code in prod while it's running.
> Composable units. Every LLM call should be testable, mockable, chainable.
> Abstract model calls. Make swapping GPT-4 for Claude a one-line change.
And LiteLLM or `ai` (Vercel), the actually most used packages, aren't? You're comparing downloads with Langchain, probably the worst package to gain popularity of the last decade. It was just first to market, then after a short while most realized it's horrifically architected, and now it's just coasting on former name recognition while everyone who needs to get shit done uses something lighter like the above two.
> Eval infrastructure early. Day one. How will you know if a change helped?
Sure, to an extent. Outside of programming, most things where LLMs deliver actual value are very nondeterministic with no right answer. That's exactly what they offer. Plenty of which an LLM can't judge the quality of. Having basic evals is useful, but you can quickly run into their development taking more time than it's worth.
But above all.. the comments on this post immediately make clear that the biggest differentiator of DSPy is the prompt optimization. Yet this article doesn't mention that at all? Weird.
andyg_blog 1 hour ago [-]
>the whole article assumes the only language in the world is Python.
This was my take as well.
My company recently started using Dspy, but you know what? We had to stand up an entire new repo in Python for it, because the vast majority of our code is not Python.
sbpayne 1 hour ago [-]
I think this is an important point! I am actually a big fan of doing what works in the language(s) you're already using.
For example: I don't use Dspy at work! And I'm working in a primarily dotnet stack, so we definitely don't use Dspy... But still, I see the same patterns seeping through that I think are important to understand.
And then there's a question of "how do we implement these patterns idiomatically and ergonomically in our codebase/language?"
hedgehog 34 minutes ago [-]
In my experience, the behavioral variation between models and providers is large enough that the "one-line swap" idea is only true for the simplest cases. I agree the prompt lifecycle is the same as code, though. The compromise I'm at currently is to use text templates checked in with the rest of the code (Handlebars, but it doesn't really matter) and enforce some structure with a wrapper that takes as inputs the template name + context data + output schema + target model, and internally papers over the behavioral differences I'm ok with ignoring.
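A minimal stdlib sketch of that wrapper shape, with `string.Template` standing in for Handlebars and a stub in place of the real provider call (all names here are illustrative, not from any particular library):

```python
from dataclasses import dataclass
from string import Template

# Templates would normally be loaded from files checked in with the code.
TEMPLATES = {
    "extract_company": Template("Extract the company name from: $text"),
}

@dataclass
class LLMRequest:
    template_name: str   # which checked-in template to render
    context: dict        # data substituted into the template
    output_schema: dict  # expected shape of the parsed response
    target_model: str    # e.g. "openai/gpt-4o" or "anthropic/claude-3-5"

def render_prompt(req: LLMRequest) -> str:
    # The wrapper owns rendering, so call sites never build raw prompt strings.
    return TEMPLATES[req.template_name].substitute(req.context)

def call_llm(req: LLMRequest) -> dict:
    # A real implementation would dispatch on req.target_model, send `prompt`
    # to that provider, and validate the reply against req.output_schema;
    # this stub just returns a correctly-shaped response.
    prompt = render_prompt(req)
    del prompt  # unused in the stub
    return {key: "stub" for key in req.output_schema}

req = LLMRequest(
    template_name="extract_company",
    context={"text": "Acme Corp. announced earnings today."},
    output_schema={"company": "str"},
    target_model="openai/gpt-4o",
)
print(render_prompt(req))
print(call_llm(req))
```

The point of funneling everything through one call site is that the provider quirks you choose to ignore get papered over in exactly one place.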
I'm curious what other practitioners are doing.
dbreunig 10 minutes ago [-]
Model testing and swapping is one of the surprises people really appreciate DSPy for.
You're right: prompts are overfit to models. You can't just change the provider or target and know that you're giving it a fair shake. But if you have eval data and have been using a prompt optimizer with DSPy, you can try models with the one-line change followed by rerunning the prompt optimizer.
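The shape of that workflow, sketched in plain Python with stubbed providers (the real version would be DSPy modules plus a prompt optimizer; everything below is illustrative):

```python
# Stub provider calls; real code would use actual API clients here.
def call_gpt4(prompt: str) -> str:
    return "gpt-4 answer"

def call_claude(prompt: str) -> str:
    return "claude answer"

PROVIDERS = {"gpt-4": call_gpt4, "claude": call_claude}

MODEL = "claude"  # the "one-line change"

def run(prompt: str) -> str:
    return PROVIDERS[MODEL](prompt)

# The part that makes the swap trustworthy: eval data you can re-score
# (and re-run the prompt optimizer against) after every model change.
EVAL_SET = [("What is 2+2?", "4"), ("Capital of France?", "Paris")]

def score(model_name: str) -> float:
    call = PROVIDERS[model_name]
    return sum(call(q) == a for q, a in EVAL_SET) / len(EVAL_SET)
```

Without the `score` step, the one-line swap tells you nothing; with it, a model change becomes a measurable experiment.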
Dropbox just published a case study where they talk about this:
> At the same time, this experiment reinforced another benefit of the approach: iteration speed. Although gemma-3-12b was ultimately too weak for our highest-quality production judge paths, DSPy allowed us to reach that conclusion quickly and with measurable evidence. Instead of prolonged debate or manual trial and error, we could test the model directly against our evaluation framework and make a confident decision.
https://dropbox.tech/machine-learning/optimizing-dropbox-das...
I think all of these things are table-stakes; yet I see that they are implemented/supported poorly across many companies. All I'm saying is there are some patterns here that are important, and it makes sense to enter into building AI systems understanding them (whether or not you use Dspy) :)
persedes 38 minutes ago [-]
Dspy's advertising aside, imho it is a library only for optimizing an existing workflow/prompt and not for the use cases described there. Similar to how I would not write "production" code with sklearn :)
They themselves are turning into wrapper code for other libraries (e.g. the LLM abstraction which litellm handles for them).
Can also add:
Option 3: Use instructor + litellm (probably pydantic AI, but have not tried that yet)
Edit: As others pointed out, their optimizing algorithms are very good (GEPA is great and lets you easily visualize / track the changes it makes to the prompt)
prpl 18 minutes ago [-]
The sklearn comparison to me mirrors the insane amount of engineering that exists/existed to bring Jupyter notebooks to something more prod-worthy and reproducible. There's always going to be re-engineering of these things; you don't need to use the same tools for all use cases.
nkozyra 1 hour ago [-]
> f"Extract the company name from: {text}"
I think one thing that's lost in all of the LLM tooling is that it's LLM-or-nothing and people have lost knowledge of other ML approaches that actually work just fine, like entity recognition.
I understand it's easier to just throw every problem at an LLM but there are things where off-the-shelf ML/NLP products work just as well without the latency or expense.
rao-v 14 minutes ago [-]
Is there a non-transformer based entity extraction solution that's not brittle? My understanding is that the cutting edge in entity extraction (e.g. spaCy) is just small BERT models, which rock for certain things, but don't have the world knowledge to handle typos / misspellings etc.
sbpayne 1 hour ago [-]
Oh 100%! There are many problems (including this one!) that probably aren't best suited for an LLM. I was just trying to pick a really simple example that most people would follow.
giorgioz 31 minutes ago [-]
Loved the article because I hit exactly these stages, all the way up to the 5th!
Thank you for making me see the whole picture and journey!
I think a problem to DSPy is that they don't know the concept of THE WHOLE PRODUCT: https://en.wikipedia.org/wiki/Whole_product
Look at https://mastra.ai/ and https://www.copilotkit.ai/ to see how more inviting their pages look.
A company is not selling only the product itself but all the other things around the product = THE WHOLE PRODUCT
A similar concept in developer tools is that the docs are the product.
Also I'm a fullstack javascript engineer and I don't use Python.
Docs usually have a switch for the language at the top.
Stripe.com is famous for its docs and Developer Experience:
https://docs.stripe.com/search#examples
It's great to study other great products to get inspiration and copy the best traits that are relevant to your product as well.
sbpayne 25 minutes ago [-]
The "whole product" idea here makes a lot of sense to me. I think this is often a big barrier to adoption for sure!
stephantul 2 hours ago [-]
Mannnn, here I thought this was going to be an informative article! But it’s just a commercial for the author’s consulting business.
sbpayne 2 hours ago [-]
Oops! That's actually out of date from prior template I had. I don't actually consult at the moment :). Removing!
halb 1 hour ago [-]
The author is probably AI-generated, too. The contact section in the blog is just placeholder values. I think the age of informative articles is gone.
CharlieDigital 43 minutes ago [-]
I work with author; author is definitely not AI generated.
sbpayne 44 minutes ago [-]
This is definitely a mistake! What contact section are you referring to? The only references to contact I see in this post now are at the end where I linked to my X/LinkedIn profiles but those links look right to me?
memothon 2 hours ago [-]
I think the real problem with using DSPy is that many of the problems people are trying to solve with LLMs (agents, chat) don't have an obvious path to evaluate. You have to really think carefully on how to build up a training and evaluation dataset that you can throw to DSPy to get it to optimize.
This takes a ton of upfront work and careful thinking. As soon as you move the goalposts of what you're trying to achieve you also have to update the training and evaluation dataset to cover that new use case.
This can actually get in the way of moving fast. Often teams are not trying to optimize their prompts but even trying to figure out what the set of questions and right answers should be!
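A concrete picture of that upfront work at its smallest: a hand-labeled set plus a metric, which is exactly what you'd later hand to an optimizer (the predictor below is a stub standing in for the LLM call):

```python
# A hand-labeled eval set: each case pins down one expected behavior.
EVAL_SET = [
    {"input": "Refund order #123", "expected_intent": "refund"},
    {"input": "Where is my package?", "expected_intent": "tracking"},
    # Moving the goalposts (a new use case) means extending this set too:
    {"input": "Cancel my subscription", "expected_intent": "cancel"},
]

def predict_intent(text: str) -> str:
    # Stand-in for the real LLM call being evaluated.
    return "refund" if "refund" in text.lower() else "tracking"

def accuracy(dataset) -> float:
    hits = sum(
        predict_intent(case["input"]) == case["expected_intent"]
        for case in dataset
    )
    return hits / len(dataset)

print(accuracy(EVAL_SET))  # 2/3: the new "cancel" case exposes a gap
```

The third case is the comment's point in miniature: the moment the goalposts move, the dataset has to move with them or the metric silently stops measuring what you care about.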
sbpayne 1 hour ago [-]
Yeah, I think Dspy often does not really show its benefit until you have a good 'automated metric', which can be difficult to get to.
I think the unfortunate part is: the way it encourages you to structure your code is good for other reasons that might not be an 'acute' pain. And over time, it seems inevitable you'll end up building something that looks like it.
memothon 1 hour ago [-]
Yeah I agree with this. I will try to use it in earnest on my next project.
That metric is the key piece. I don't know the right way to build an automated metric for a lot of the systems I want to build that will stand the test of time.
sbpayne 39 minutes ago [-]
To be clear: I don't know that I would recommend using it, exactly. I would just make sure you understand the lessons so you see how it best makes sense to apply to your project :)
sethkim 38 minutes ago [-]
We build a product that's somewhat similar in spirit to DSPy, but people come to us for different reasons than the OP listed here.
1) It's slow: you first have to get acquainted with DSPy and then get hand-labeled data for prompt optimization. This can be a slow process, so it's important to just label cases that are ambiguous, not obvious.
2) They know that manual prompt engineering is brittle, and want a prompt that's optimized and robust against a model they're invoking, which DSPy offers. However, it's really the optimizer (ex. GEPA) doing the heavy-lifting.
3) They don't actually want a model or prompt at all. They want a task completed, reliably, and they want that task to not regress in performance. Ideally, the task keeps improving in production.
Curious if folks in this thread feel more of these pains than the ones in the article.
sbpayne 37 minutes ago [-]
I think in some sense, this is the real thing everyone wants. Everything else is kind of an implementation detail! Would be really curious to see what you're building!
sethkim 36 minutes ago [-]
Feel free to shoot me a note at seth@sutro.sh if you want to check it out!
TheTaytay 2 hours ago [-]
I tried it in the past, one time “in earnest.” But when I discovered that none of my actual optimized prompts were extractable, I got cold feet and went a different route. The idea of needing to fully commit to a framework scares me. The idea of having a computer optimize a prompt as a compilation step makes a lot of sense, but treating the underlying output prompt as an opaque blob doesn’t. Some of my use cases were JUST off of the beaten path that dspy was confusing, which didn’t help. And lastly, I felt like committing to dspy meant that I would be shutting the door on any other framework or tool or prompting approach down the road.
I think I might have just misunderstood how to use it.
sbpayne 2 hours ago [-]
I don't know that you misunderstood. This is one of my biggest gripes with Dspy as well. I think it takes the "prompt is a parameter" concept a bit too far.
I highly recommend checking out this community plugin from Maxime, it helps "bridge the gap": https://github.com/dspy-community/dspy-template-adapter
Good article, and I think the "evolution of every AI system" is spot on.
In my opinion, the reason people don't use DSPy is because DSPy aims to be a machine learning platform. And like the article says -- this feels different or hard to people who are not used to engineering with probabilistic outputs. But these days, many more people are programming with probability machines than ever before.
The absolute biggest time sink and 'here be dragons' of using LLMs is poke and hope prompt "engineering" without proper evaluation metrics.
> You don’t have to use DSPy. But you should build like someone who understands why it exists.
And this is the salient point, and I think it's very well stated. It's not about the framework per se, but about the methodology.
sbpayne 28 minutes ago [-]
yeah, this is the main point I wanted to get across! I rarely recommend that people use Dspy; but I think Dspy is often so polarizing that people "throw the baby out with the bathwater". They decide not to use Dspy, but also don't learn from the great ideas it has!
ndr 1 hour ago [-]
It's not as ergonomic as they made it to be.
The fact that you have to bundle input+output signatures, and that everything is dynamically typed (sometimes into the args), just makes it annoying to use in codebases that have type annotations everywhere.
Plus their out of the box agent loop has been a joke for the longest time, and writing your own is feasible, but it's night and day when trying to get something done with pydantic-ai.
Too bad because it has a lot of nice things, I wish it were more popular.
sbpayne 1 hour ago [-]
Yeah! I can agree with this. There's some improved ergonomics to get here
verdverm 1 hour ago [-]
Have you looked at ADK? How does it compare? Does it even fit in the same place as Dspy?
https://google.github.io/adk-docs/
Disclaimer, I use ADK, haven't really looked at Dspy (though I have prior heard of it). ADK certainly addresses all of the points you have in the post.
sbpayne 59 minutes ago [-]
I personally haven't looked super closely at ADK. But I would love if someone more knowledgeable could do a sort of comparison. I imagine there are a lot of similar/shared ideas!
verdverm 45 minutes ago [-]
There are dozens if not 100s of agent frameworks in use today, 1000s if you peruse /new. I'm curious what features will make for longevity. One thing about ADK is that it comes in four languages (Py, TS, Go, Java; so far), which means understanding can transfer over/between teams in larger orgs, and they can share the same backing services (like the db to persist sessions).
pjmlp 1 hour ago [-]
Never heard of it, that is already a reason.
sbpayne 1 hour ago [-]
hahaha this is true!
CraftingLinks 47 minutes ago [-]
I used dspy in production, then reverted the bloat as it literally gave me nothing of added value in practice but a lot of friction when i needed precise control over the context. Avoid!
lysecret 1 hour ago [-]
Main reason to me is that it's layers on layers on top of the base LLM calls with not so much to show for it. Also a lot of native features (like, for example, Gemini's native structured responses) aren't well supported.
panelcu 58 minutes ago [-]
https://www.tensorzero.com/docs has similar abstractions but doesn't require Python and doesn't require committing to the framework or a language. It's also pretty hard to onboard, but solves the same problems better and makes evaluating changes to models / prompts much easier to reason about.
sbpayne 56 minutes ago [-]
I saw this some time ago! I personally have a distaste for external DSLs, as I think they generally introduce complexity that isn't actually worthwhile, so I skipped over it. Also why I'm very "meh" on BAML.
ijk 1 hour ago [-]
This matches my experience with Dspy. I ended up removing it from our production codebase because, at the time, it didn't quite work as effectively as just using Pydantic and so forth.
The real killer feature is the prompt compilation; it's also the hardest to get to an effective place and I frequently found myself needing more control over the context than it would allow. This was a while ago, so things may have improved. But good evals are hard and the really fancy algorithms will burn a lot of tokens to optimize your prompts.
> Data extraction tasks are amongst the easiest to evaluate because there’s a known “right” answer.
Wrong. There can be a lot of subjectivity and pretending that some golden answer exists does more harm and narrows down the scope of what you can build.
My other main problem with data extraction tasks, and why I'm not satisfied with any of the existing eval tools, is that the schemas I write can change drastically as my understanding of the problem increases. And nothing really seems to handle that well; I mostly just resort to reading diffs of what happens when I change something and reading the input/output data very closely. Marimo is fantastic for anything visual like this btw.
> Abstract model calls. Make swapping GPT-4 for Claude a one-line change.
And in practice random limitations like structured output API schema limits between providers can make this non-trivial. God I hate the Gemini API.
sethkim 7 minutes ago [-]
This is extremely true. In fact, from what we see many/most of the problems to be solved with LLMs do not have ground-truth values; even hand-labeled data tends to be mostly subjective.
sbpayne 30 minutes ago [-]
This is very true! I could have been more careful/precise in how I worded this. I was really trying to just get across that it's in a sense easier than some tasks that can be much more open ended.
I'll think about how to word this better, thanks for the feedback!
rco8786 31 minutes ago [-]
I think they're just saying that data extraction tasks are easy to evaluate because for a given input text/file you can specify the exact structured output you expect from it.
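For example, a field-level exact-match score against the labeled answer (a sketch, not any particular eval library's API):

```python
def field_accuracy(expected: dict, predicted: dict) -> float:
    """Fraction of expected fields the prediction got exactly right."""
    if not expected:
        return 1.0
    hits = sum(predicted.get(key) == value for key, value in expected.items())
    return hits / len(expected)

# Labeled gold answer vs. a model's structured output:
gold = {"company": "Acme Corp", "year": 2023}
pred = {"company": "Acme Corp", "year": 2024}
print(field_accuracy(gold, pred))  # 0.5: one of two fields matched
```

This is what makes extraction comparatively easy to evaluate: the metric is mechanical once the gold labels exist, even if (as noted above) producing those labels can itself involve judgment calls.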
Lerc 52 minutes ago [-]
If [programming_language] is so great, why isn't anyone using it?
For many of the same reasons. A plethora of alternatives, personal preference, weird ideology, appropriateness for the task, inertia, not-invented-here.
The list goes on.
brokensegue 1 hour ago [-]
i've tried it a few times and it's never really helped as much as i expected. though i know they've released a couple times since I last tried it.
sbpayne 1 hour ago [-]
yeah, what I'm trying to get across here is that Dspy does not solve an immediate problem, which is why many feel this way and consequently why it doesn't have great adoption!
But on the other hand, I think people unintentionally end up re-implementing a lot of Dspy.
QuadmasterXLII 2 hours ago [-]
If you find yourself adding a database because that's less painful than regular deployments from your version control, something is hair-on-fire levels of wrong with your CI/CD setup.
sbpayne 2 hours ago [-]
I think this misunderstands the need for iteration! Maybe I could have written it more clearly :).
The reality is that you don't want to re-deploy for every prompt change, especially early on. You want to get a really tight feedback loop. If prompt change requires a re-deploy, that is usually too slow. You don't have to use a database to solve this, but it's pretty common to see in my experience.
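One lightweight version of that pattern with no database involved — a development-time override that falls back to the checked-in default (the env var name and file layout are illustrative):

```python
import os
from pathlib import Path

# Checked-in defaults: these ship with the code and are what production uses.
DEFAULT_PROMPTS = {
    "extract_company": "Extract the company name from: {text}",
}

def load_prompt(name: str) -> str:
    # In development, an override directory lets you edit prompts and re-run
    # without a deploy; in production the env var is simply left unset.
    override_dir = os.environ.get("PROMPT_OVERRIDE_DIR")
    if override_dir:
        candidate = Path(override_dir) / f"{name}.txt"
        if candidate.exists():
            return candidate.read_text().strip()
    return DEFAULT_PROMPTS[name]
```

You get the tight iteration loop without the identity-crisis versioning problem: git stays the source of truth, and the override is explicitly a dev-time escape hatch.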
ijk 1 hour ago [-]
I've been reaching for BAML when I really need prompt iteration at speed.
I think it solves some of this friction!
sbpayne 2 hours ago [-]
I consistently hear great things from Dspy users. At the same time, it feels like adoption is always low.
Stranger still: it seems like every company I have worked with ends up building a half-baked version of Dspy.
CuriouslyC 2 hours ago [-]
Two issues:
1. People don't want to switch frameworks, even though you can pull prompts generated by DSPy and use them elsewhere, it feels weird.
2. You need to do some up-front work to set up some of the optimizers which a lot of people are averse to.
jatins 1 hour ago [-]
Would have been nice if the post actually showed how Dspy does the things that were handrolled
sbpayne 1 hour ago [-]
This is great feedback! I'll work on an update tonight :)
I have never heard of this! I took a quick look. I think I'm definitely not in the right audience for a tool like this, as I am more comfortable just writing code. But I think putting a UI over things like this _forces_ the underlying system to be more declarative...
So in practice I imagine you get at a lot of the same ideas / benefits!
simopa 1 hour ago [-]
"Great engineers write bad AI code" made my day ;)
sbpayne 1 hour ago [-]
hahaha this has just been my entire last few years of experience :)
dzonga 1 hour ago [-]
at /u/sbpayne - very useful info and pricing page as well.
useful for upcoming consultants to learn how to price services too.
sbpayne 1 hour ago [-]
Highly recommend following @jxnl on X for consulting / positioning / pricing
LoganDark 1 hour ago [-]
This article seemingly misses any explanation of what DSPy even is or why it's supposedly so complicated and unfamiliar. Supposedly it solves the problems illustrated in the article, but it isn't explained how.
sbpayne 38 minutes ago [-]
Great feedback! I took for granted that people reading would be familiar with what Dspy is. I'll try to add this in tonight to introduce folks better. Thank you!
TZubiri 44 minutes ago [-]
> Stage 2: "Can we tweak the prompt without deploying?"
Are we playing philosophy here? If you move some part of the code from the repo into a database, then changing that database is still part of the deployment, but now you've just made your versioning have an identity crisis. Just put your prompts in your git repo and say no when someone requests that an anti-pattern be implemented.
sbpayne 42 minutes ago [-]
I think the core challenge here is that being able to (in "development") quickly change the prompt or other parameters and re-run the system to see how it changes is really valuable for making a tight iteration loop.
It's annoying/difficult in practice if this is strictly in code. I don't think a database is necessarily the way to go, but it's just a common pattern I see. And I really strongly believe this is more of a need for a "development time override" than the primary way to deploy to production, to be clear.
markab21 58 minutes ago [-]
I think the entire premise that prompting is the surface area for optimizing the application is fundamentally the wrong framing, in the same way that in 1998 better CPAN would save CGI. It's solving the wrong problems now, and it's the limitations in context and model intelligence that require a tool like Dspy.
The only thing I'd grab dspy for at this point is to automate the edges of the agentic pipeline that could be improved with RL patterns. But if that is true, you're really shortchanging yourself by handing your domain to DSPy. You should be building your own RL learning loops.
My experience: If you find yourself reaching for a tool like Dspy, you might be sitting on a scenario where reinforcement learning approaches would help even further up the stack than your prompts, and you're probably missing where the real optimization win is. (Think bigger)
sbpayne 57 minutes ago [-]
Yeah, I find it hard to recommend Dspy. At the same time, I can't escape the observation that many companies are re-implementing a lot of parts of it. So I think it's important to at least learn from what Dspy is :)
villgax 1 hour ago [-]
Nobody uses it except for maybe the weaviate developer advocates running those jupyter cells.
tinyhouse 2 hours ago [-]
A lot of these ideas, Dspy and RLM (from the same people IIRC), are more marketing than solutions to a real problem.
sbpayne 1 hour ago [-]
This is a surprising take to me! Would love to learn more about what you mean. I feel like the problems they solve seem so direct to me. For example: RLMs are an approach to long context problems. Not every problem is a good fit for RLMs for sure, but I can see some problems where I imagine it would work well!