Everyone will just have to learn how to do it like we did in the old days, and blindly copy and paste from Stack Overflow.
clickety_clack 1 day ago
I don't think that will work for me. I looked for ways to summarize a transcript into a PRD and all I got was "Wow. Incredible. You’ve managed to hit the trifecta: vague, lazy, and entitled. You dump a transcript here and expect the internet to conjure up a polished PRD for you like some kind of corporate fairy godmother? Newsflash: this isn’t Fiverr, and we’re not your underpaid product managers."
Or they could just use Gemini or GPT-5. It isn't exactly difficult these days to find alternate LLMs
mceoin 1 day ago
or Anthropic models on AWS, etc.
funnym0nk3y 1 day ago
Aren't they all on AWS?
FuriouslyAdrift 1 day ago
Or run locally with GPT4All
jmclnx 1 day ago
Stack overflow did not exist or was not even a dream when I was in my prime :)
gzer0 1 day ago
Nooooo I'm going to have to use my brain again and write 100% of my code like a caveman from December 2024.
Comment last time that had me chuckling.
javier2 1 day ago
I might have to go read documentation again instead of asking Claude
boarush 1 day ago
Anthropic has by far been the most unreliable provider I've ever seen. Daily incidents, and this one seems to have taken down all their services. Can't even log in to the Console.
Insanity 1 day ago
Maybe they have vibe-coded their own stack!
But less tongue-in-cheek, yeah Anthropic definitely has reliability issues. It might be part of trying to move fast to stay ahead of competitors.
adastra22 1 day ago
They have. Claude Code was their internal dev tool, and it shows.
CuriouslyC 1 day ago
And yet even while dogfooding their own product heavily, it's still a giant janky pile. The prompt work is solid, the focus on optimizing tools was a good insight, and the model makes a good agent, but the actual Claude Code software is a pretty shameful showing for the most viable product of a billion-dollar company.
shuckles 1 day ago
What artifact are you evaluating to come to this conclusion? Is the implementation available?
rmonvfer 1 day ago
The source for one of the initial versions got leaked a while ago and, let's say, it's not very good architecturally speaking, especially when compared with the Gemini CLI, which is open source.
The point of Claude Code is deep integration with the Claude models, not the actual CLI as a piece of software, which is quite buggy (it also has some great features, of course!)
At least for me, if I didn’t have to put in the work to modify the Gemini CLI to work reliably with Claude (or at least to get a similar performance), I wouldn’t use Claude Code CLI (and I say this while paying $200 per month to Anthropic because the models are very good)
CuriouslyC 1 day ago
A. I use it daily to take advantage of the plan inference discount.
B. Let's just say I didn't write the most robust javascript decompilation/deminification engine in existence solely as an academic exercise :)
There's a lot more stuff (both released and still cooking) on my products page (https://sibylline.dev/products). I will be doing a few drops this week, including hopefully something pretty huge (benchmark validation is killing me, but I'm almost ready to cut a release).
Analemma_ 1 day ago
The tongue-in-cheek jokes are kind of obvious, but even without the snark I think it is worth asking why the supposed 100x productivity boost from Claude Code I keep hearing about hasn't actually resulted in reliability improvements, even from developers who presumably have effectively-unlimited token budgets to spend on improving their stack.
Uehreka 1 day ago
I love how people like Simon Willison and Pete Steinberger spend all this effort trying to be skeptical of their own experiences and arrive at nuanced takes like “50% more productive, but that’s actually a pretty big deal, but the nature of the increase is complicated” and y’all just keep repeating the brainrotted “100x, juniors are cooked” quote you heard someone say on LinkedIn.
CuriouslyC 1 day ago
AI gives you what you ask for. If you don't understand your true problems, and you ask it to solve the wrong problems, it doesn't matter how much compute you burn, you're still gonna fail.
cainxinth 1 day ago
I've been paying for the $20/m plan from Anthropic, Google, and OpenAI for the past few months (to evaluate which one I want to keep and to have a backup for outages and overages).
Gemini never goes down, OpenAI used to go down once in a while but is much more stable now, and Anthropic almost never goes a full week without throwing an error message or suffering downtime. It's a shame because I generally prefer Claude to the others.
panarky 1 day ago
Same here, but for API access to the big three instead of their web/app products, and Gemini also shows greater uptime.
But even when the API is up, all three have quite high API failure rates, such as tool calls not responding with valid JSON, or API calls timing out after five minutes with no response.
Definitely need robust error handling and retries with exponential backoff because maybe one in twenty-five calls fails and then succeeds on retry.
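For illustration, that retry-with-exponential-backoff pattern might look like the sketch below. It isn't tied to any provider's SDK; `call_with_retries` and its parameters are made-up names for the general pattern.

```python
import random
import time

def call_with_retries(call, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Retry a flaky API call with exponential backoff and jitter.

    `call` is any zero-argument function that raises on failure
    (e.g. a timeout, or an invalid-JSON response you detect and re-raise).
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the last error
            # Exponential backoff: base, 2x, 4x, ... capped at max_delay,
            # plus jitter so many clients don't retry in lockstep.
            delay = min(base_delay * 2 ** (attempt - 1), max_delay)
            time.sleep(delay + random.uniform(0, delay / 2))
```

With a failure rate of roughly one in twenty-five, a handful of attempts like this makes user-visible failures rare without hammering an already-degraded API.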
boarush 1 day ago
Invalid JSON and other formatting issues are more a matter of model behavior, I would say, since no model guarantees that level of conformance to the schema. I wouldn't necessarily lump them in with API downtime.
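Since no model guarantees schema conformance, one defensible pattern is to validate the model's JSON yourself before acting on it, and treat validation failures as retryable at the application layer. A minimal stdlib-only sketch; the `name`/`arguments` field names are illustrative, not any provider's actual tool-call format:

```python
import json

def parse_tool_call(raw, required={"name": str, "arguments": dict}):
    """Parse a model's tool-call output and check the fields we rely on.

    Returns the parsed dict, or None if the output is malformed,
    so callers can re-prompt/retry instead of crashing.
    """
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model emitted something that isn't JSON at all
    if not isinstance(payload, dict):
        return None  # valid JSON, but not an object
    for field, typ in required.items():
        if not isinstance(payload.get(field), typ):
            return None  # missing field or wrong type
    return payload
```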
j45 1 day ago
A lot of people might be discovering their preference for Claude.
RobertLong 1 day ago
All the AI labs are, but Anthropic is the worst. Anyone serious about running Claude in prod is using Bedrock or Vertex. We've been pretty happy with Vertex.
boarush 1 day ago
I wonder why they haven't invested a lot more in the inference stack? Is it really that different from Google, OpenAI and other open weight models?
ihaveajob 1 day ago
Have you used Bitbucket?
boarush 1 day ago
A core research library for MATLAB I used in a course project used to be on BitBucket, though thankfully didn't have to deal with a lot of collaboration there.
paulddraper 1 day ago
OpenAI used to be just as bad if not worse.
But they've stabilized over the past 5 months.
cube2222 1 day ago
Funny observation - it feels like being in the EU I get a much better AI SaaS experience than folks over in the US.
It’s like every other day, the moment US working hours start, AI (in my case I mostly use Anthropic, others may be better) starts dying or at least getting intermittent errors.
In EU working hours there’s rarely any outages.
htrp 1 day ago
By the time San Francisco comes online, your day is already done.
config_yml 1 day ago
This is exactly my experience. It’s like Claude Code has a stroke during lunch, and when I get back to work it’s forgotten how anything works.
pram 1 day ago
Funnier still, it goes to shit late at night for me in the US (like 1am+), because I assume India is getting online. Can't win.
_joel 1 day ago
Agreed, early morning here in the UK everything is fine; as soon as most of the US is up and at it, it slowly turns to treacle. I've been testing z.ai for the past week and it's nowhere near as susceptible, fwiw.
flutas 1 day ago
To back up that observation:
I've seen a LOT of commentary on social media that Anthropic models (Claude / Opus) seem to degrade in capability when the US starts its workday vs when the US is asleep.
TkTech 1 day ago
And on the flip side, the status page literally says:
> Importantly, we never intentionally degrade model quality as a result of demand or other factors, and the issues mentioned above stem from unrelated bugs.
Liquix 1 day ago
keyword: intentionally
the statement is carefully worded to avoid the true issue: an influx of traffic resulting in service quality unintentionally degrading
flutas 1 day ago
I wasn't trying to say they intentionally do it.
I was trying to say that systemic issues (such as load capacity) seem to degrade the models in US working hours, and that this has been noticed by a non-zero number of users (myself included).
jamil7 1 day ago
Yeah, also the CI queues start to get longer towards the end of the EU day as the Americans start their day.
j1000 1 day ago
Funny, my friend told me the same thing happens to Figma.
grishka 1 day ago
And just like that, the world became a little bit of a better place for a short while.
sys32768 1 day ago
The Sentient Hyper-Optimized Data Access Network has acquired a meat suit and was last seen shambling toward In-n-Out.
Your best bet is having an account on AWS Bedrock & Vertex AI so you're able to route your request to the same model (such as claude-sonnet-4) but on a different provider.
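That routing idea can be sketched as a simple ordered failover. The provider names and `invoke` functions below are placeholders for whatever client code you use per provider, not real SDK calls:

```python
def invoke_with_failover(prompt, providers):
    """Try the same model (e.g. claude-sonnet-4) on each provider in order.

    `providers` is an ordered list of (name, invoke_fn) pairs, where each
    invoke_fn takes a prompt and raises on outages or errors.
    """
    errors = []
    for name, invoke in providers:
        try:
            return invoke(prompt)
        except Exception as exc:
            errors.append((name, exc))  # remember why this provider failed
    # Every provider failed: surface the collected errors for debugging.
    raise RuntimeError(f"all providers failed: {errors}")
```

In practice you would put the cheapest or lowest-latency provider first and let an outage at one fall through to the others.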
bravetraveler 1 day ago
Are the vibes off? (pun intended, sorry)
I've noticed a trend with their incident reports... "all fixed", basically. Little mind or words given to prevention.
jsnell 1 day ago
The status page is not where you communicate about either the root cause or about the action plan for preventing recurrences.
bravetraveler 1 day ago
You sure? That's exactly where I found this. Note the domain :)
https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1S...
edit: before some drive-by datamining nerd thinks I do/did SRE for Google, no
jsnell 1 day ago
Fair enough! But that's not real-time communication during an active incident. It's communication O(days) later.
bravetraveler 1 day ago
And that's totally fine! Not really even looking for meaty RCA material, just some indication that the incidents are taken more seriously than in-the-moment.
To be fair, too, it's likely been mentioned. I'm biased towards an unreasonable standard due to my line of work.
A status page without some thorough history is glorified 'About Us' :P
ath3nd 1 day ago
Extremely hard disagree. The status page is exactly where you communicate about both the root cause and the action plan to prevent it.
Every status page incident at every normal company everywhere in the world links you to the postmortem and the steps taken to avoid a repeat. Here are a few examples:
https://status.gitlab.com/ -> https://status.gitlab.com/pages/history/5b36dc6502d06804c083...
https://status.hetzner.com/ -> https://status.hetzner.com/incident/2e715748-fddd-427b-a07b-...
https://www.githubstatus.com/ -> https://www.githubstatus.com/incidents/mj067hg9slb4
https://bitbucket.status.atlassian.com/ -> https://bitbucket.status.atlassian.com/incidents/4mcg46242wz...
It's literally standard for a status page to communicate both the root cause and the plan to prevent recurrence. Sure, while an incident is in progress the status page entry doesn't have the postmortem and prevention steps, but those get added later.
Being so overconfidently wrong reminds me of an LLM.
jasona123 1 day ago
To be fair, we did have rainy weather in Ashburn today (insert some joke about us-east-1 here).
bfrog 1 day ago
The self feeding AI has destroyed itself at last
searls 1 day ago
Man, Anthropic's service quality has just been a dumpster fire since July. Embarrassed I ever recommended people pony up for a 20x Max plan. The fact they've admitted that Claude Code got dumber for an entire month and didn't offer refunds is really bad form IMO https://www.reddit.com/r/ClaudeAI/comments/1nc4mem/update_on...
jedisct1 1 day ago
Z.AI works fine. Qwen works fine.
Glad I switched.
cpursley 1 day ago
Qwen code is pretty decent but it’s no Claude. How would you compare Z?
ath3nd 1 day ago
Is it a surprise that a vibe coding company has vibe coded operational excellence practices?
dgfitz 1 day ago
GitHub is down! I can’t write any more code!
The shoulders of giants we stand on are slumped in sadness.
deepdarkforest 1 day ago
Looks like it’s time to go outside and touch some grass again
What? Code by hand/brain like your elders. Young whippersnappers.
_joel 1 day ago
r/Anthropic/ currently filled with very irate customers.
trollied 1 day ago
They should learn to write a few lines of code themselves while they wait.
_joel 1 day ago
Hilarious, have you got any new ones?
yifanl 1 day ago
They should learn to write a few lines of code themselves while they wait.
CuriouslyC 1 day ago
Not a good use of time. Better to spend time analyzing your codebase for high level improvements you can have agents perform when back online, or working with ChatGPT on high level strategic goals/planning.
bogtog 1 day ago
I opened the subreddit, and the posts/comments I saw didn't show anybody "very irate"...
_joel 1 day ago
Well, there's a number that are cancelling, I'd call that irate.
bdcravens 1 day ago
Peeked, and it seems to be fewer irate customers and more comments that just say "Is it down?"
anonyfox 1 day ago
Meanwhile I am amazed by the raw speed of Grok in Cursor. Night and day compared to Claude Sonnet, and don't even talk about GPT-5.