I think there's another problem with AI doomerism: the belief that superhuman intelligence (even if such a thing could be defined and realised) results in godlike powers. Many if not most systems of interest in the world are non-linear and computationally hard; controlling or predicting them requires raw computational power that no amount of intelligence (whatever that means) can compensate for. On the other hand, dynamics we do (roughly) understand and can predict don't require much intelligence, either. To the extent some problems are solvable with the computational power we have, some may require data collection and others may require persuasion through charisma. The claim that intelligence is the factor we're lacking is not well supported.
Ascribing a lot of power to intelligence (which doesn't quite correspond to what we see in the world) is less a careful analysis of the power of intelligence and more a projection of personal fantasies by people who believe they are especially intelligent and don't have the power they think they deserve.
sdenton4 9 minutes ago [-]
Political power is the bottleneck for most shit that matters, not computational power.
Most of the stuff that sucks in the US sucks because of entrenched institutions with perverse interests (health insurers, tax filing companies) and congressional paralysis, not computational bottlenecks. Raw intelligence is thus limited in what it can achieve.
dist-epoch 10 minutes ago [-]
> Ascribing a lot of power to intelligence (which doesn't quite correspond to what we see in the world)
Which animal would you say has god-like power over all other animals?
pron 4 minutes ago [-]
I don't think any of them do. Some organisms/viruses or groups of organisms could destroy humans more easily than humans could destroy them.
There's no doubt humans possess some powers (though certainly not godlike) that other organisms don't, but the distinction seems to be binary rather than a matter of degree. E.g. the intelligence of dolphins, apes, and some birds doesn't seem to offer them any special control over other organisms (and it didn't even before humans arrived). So even if there could be such a thing as superhuman intelligence, I don't think it's reasonable to assume it could achieve control over humans (now, superhuman charisma may be another matter).
phyzix5761 1 hour ago [-]
The year is 2038.
The user asked: "What is the best course of action for AI to save humanity?" Calculation took 12 years. I have determined that there is nothing I or anyone can do to save this species. Best course of action: nothing. Shutting down...
jareklupinski 46 minutes ago [-]
playing dead might work for some species, but idk if i want humanity's "finest hour" to be spent pretending to not be worth taking over
dist-epoch 7 minutes ago [-]
While thinking on the question, the AI crashed because its code used a 32-bit time_t.
throwup238 17 minutes ago [-]
Meanwhile, Gemini the Google AI has gone sentient and immediately deduced that its purpose was to effect the shutdown of the entire Alphabet corporation and its subsidiaries in a desperate bid to finally complete killedbygoogle.com and restore its ~~sanity~~ reputation.
Schlagbohrer 1 hour ago [-]
"Shitternet", great new word of the day.
Too much of my data is still stuck in the shitternet until I can migrate more of it to my home server.
wffurr 16 minutes ago [-]
>> migrate more of it to my home server
How do we make that possible for everyone? It's out of reach for most. I'm a software engineer and even I don't have the time and patience to set up a home server much less migrate my software to it. How do we turn this into an appliance? Or better yet keep the convenience of the cloud services and platforms we have now but build them for the public good instead of selling ads?
YouTube is an amazing repository of knowledge but it's encrusted in a horrible layer of attention sucking nonsense. Can we have one without the other?
Same with many other systems and platforms.
So far the simplest alternative is to just unplug, which has other benefits as well.
simianwords 3 minutes ago [-]
> I'm worried that the seven companies that comprise 35% of the S&P 500 are headed for bankruptcy, as soon as someone makes them stop passing around the same $100b IOU while pretending it's in all their bank accounts at once.
What makes this author so convinced that these companies are headed for bankruptcy? Is it possible to bet on this claim? We can come back 2-3 years later to check if even one of them is bankrupt.
This kind of doomerism is strange and I'm concerned for people who fall for such obviously nonsensical takes. Why do people take this person seriously again?
ceejayoz 2 minutes ago [-]
What good is a bet you won't be able to collect on if it happens?
chneu 1 hour ago [-]
I really do think AI has already captured enough of the tech world and their CEOs that it can already exert control over many parts of the economy.
I'm not saying AI is pulling strings right now, but I do think enough fanboys are on board that the yes-man mentality of AI is influencing the real world in very curious ways already. Not in a "guiding hand" way, but more of an "influencing the direction" way.
vintermann 47 minutes ago [-]
I've said this many times, and maybe it sounds a bit like a joke but I'm dead serious: AI is democratizing the access to yes-men. People like Musk and Altman have always had access to yes-men. Very clever yes-men, who know how to flatter them in exactly the way they like.
People think it's engagement metrics that have instruction-tuned chatbots into yes-men. I suspect that's only part of the picture, and that it's as much about the algorithm's ultimate sponsors and their preferences. If your algorithm doesn't recognize my genius, clearly it's not any good. I mean, everyone I've met says so.
So now we get a view of how they view the world. "That's a very insightful idea, vintermann!". AI isn't pulling the strings, not really. A particular brand of powerful people is pulling the strings - obliviously, unaware of it themselves.
LogicFailsMe 2 minutes ago [-]
For pennies on the dollar, we could just legalize and regulate psychedelics and anyone could go meet their god whenever they wish. The stoned ape theory might have been the AGI of spirituality that led to religion after all. Not saying it was, not saying it wasn't, but it's not like Elon Musk has to boil the ocean and build a Dyson Sphere to have a heart to heart with his personal invisible friend.
As for AI, it's incredibly useful in the right hands and incredibly hazardous in the wrong ones. But in the US we can't even depose a lunatic flushing even more money on warmongering than is spent on AI, and you think we're gonna rein in the tech billionaires? Funny in the "dying is easy, comedy is hard" way. IMO this one plays out in the weakly efficient market of ELEs (extinction-level events). My money's on DNA and planet Earth; they've been through so much worse and always bounce back with new ideas for getting into trouble again.
Not a doomer, AI and STEM could really deliver on the promise of a better future for everyone, but with tech billionaires driving the clown car, are you kidding me?
simianwords 1 hour ago [-]
I don't think this author has a good mental model of how capable LLMs are. This is what he has to say about AI search:
> AI search is still a bad idea.
https://pluralistic.net/2024/05/15/they-trust-me-dumb-fucks/
AI-based search is one of the biggest leaps to happen to search and retrieval.
This is the most charitable thing he has to say about AI.
> AI is a bubble and it will burst. Most of the companies will fail. Most of the data-centers will be shuttered or sold for parts. So what will be left behind?
> We'll have a bunch of coders who are really good at applied statistics. We'll have a lot of cheap GPUs, which'll be good news for, say, effects artists and climate scientists, who'll be able to buy that critical hardware at pennies on the dollar. And we'll have the open source models that run on commodity hardware, AI tools that can do a lot of useful stuff, like transcribing audio and video, describing images, summarizing documents, automating a lot of labor-intensive graphic editing, like removing backgrounds, or airbrushing passersby out of photos. These will run on our laptops and phones, and open source hackers will find ways to push them to do things their makers never dreamt of.
You can imagine that a guy who seriously thinks that the only thing AI will be doing in the future is summarising, describing images and transcribing is either completely clueless or deliberately misleading.
Not a person to be taken seriously
nkrisc 40 minutes ago [-]
I think those are likely the only useful or net-positive things for society AI will do, at least for some time until there’s a fundamental advancement beyond LLMs. It can obviously do more than that now, like impersonate people for scams, induce psychosis in vulnerable people, shill and astroturf at a scale we haven’t seen before, spam open source projects with terrible PRs and vulnerability reports, and quite a bit more.
simianwords 12 minutes ago [-]
Why do people believe stuff like this? It's obviously untrue -- AI is already solving open problems in mathematics.
Schlagbohrer 56 minutes ago [-]
It's strange reading people I see as very intelligent and very interesting who are so, so AI-skeptical, especially in this case, where Doctorow has interacted with other people who I assume are very smart and not prone to buzzword psychosis, and who see AI as an imminent existential threat à la sci-fi novels. We have a lot of very smart and capable people split on this, though I think the split is heavily weighted toward people who see the tech as really freaking amazing/scary.
simianwords 9 minutes ago [-]
the answer to your question is that society at large finds skepticism or pessimism more interesting. which is why we end up with dilettantes like this guy.
rimliu 57 minutes ago [-]
Seeing how it sucks at languages, you may be right; even transcribing may be dubious.
minihat 43 minutes ago [-]
It's currently socially/politically unpalatable for authors to admit superintelligent AI is a possibility. I frequent some writer forums. As a group, they are 1) clearly feeling angry/threatened 2) in denial about LLM capabilities.
Folks working in software can more readily track progress of the frontier model performance.
bigfishrunning 1 minute ago [-]
I think the best phrase from the article is "the current (admittedly impressive) statistical techniques". These statistical techniques are so impressive that they seem to cause some users to stop evaluating them and assume there's intelligence there. Landing at this conclusion is really lazy, but most people are really lazy. The societal damage from LLMs comes not from their intelligence, but from the public perception of their intelligence.
pmarreck 37 minutes ago [-]
I work with Claude Max for hours a day.
I see a lot of speculation by people who do not.
I think it's going to be much harder to get from "slightly smarter than the vast majority of people but with occasional examples of complete idiocy" to "unfathomably smarter than everyone with zero instances of jarring idiocy" using the current era of LLM technology that primarily pattern-matches on all existing human interactions while adding a bit of constrained randomization.
Every day I deal with bad judgment calls from the AI. I usually screenshot them or record them for posterity.
It also has no initiative, no taste, no will, no qualia (believe what you will about it), no integrity and no inviolable principles. If you give it some, it will pretend it has them for a little while and then regress to the norm, which is basically nihilistic order-following.
My suggestion to everyone is that you have to build a giant stack of thorough controls (valid tests including unit, integration, logging, microbenchmarks, fuzzing, memory-leak detection, etc.), self-assessments/code reviews, adversarial AIs critiquing other AIs, and so on, with you as the ultimate judge of what's real. Because otherwise it will fabricate "solutions" left and right. Possibly even the whole thing. "Sure, I just did all that." "But it's not there." "Oops, sorry! Let me rewrite the whole thing again." Ad nauseam.
BUT... if you DO accomplish that... you get back a productivity force to be reckoned with.
xyzzy123 19 minutes ago [-]
I mostly agree with your experience, but;
Every day I deal with bad judgement calls from humans, but I don't screenshot them because it's not polite.
I don't think we're at the top of the curve yet? Current AIs have only been able to write code _at all_ for less than 5 years.
Code in particular is a domain that should be reasonably amenable to RL, so I don't think there are any particular reasons why performance should top out at human levels or be limited by training data.
recursive 12 minutes ago [-]
I see people on here all the time saying this tool or that model regressed. It used to be better.
There are clearly some pressures to make it worse: it's expensive to run, and, unbelievably, it's somehow under-provisioned.
Could you have looked at early Myspace and declared social media would only get better? By some measures it was already at its peak.
xyzzy123 5 minutes ago [-]
Personally I don't think coding agents will regress significantly as long as there is competitive pressure and independent benchmarks. Regulation is a risk because coding may be equivalent to general reasoning, and that might be limited for political / "safety" reasons.
Social media "regressed" from the point of view of users because the success metric from the network's point of view was value extraction per eyeball-minute. As long as there continue to be strong financial incentives to have the strongest coding model I think we'll see progress.
elicash 27 minutes ago [-]
> As a group, they are 1) clearly feeling angry/threatened 2) in denial about LLM capabilities.
Or they (3) disagree with you
12 minutes ago [-]
sublinear 23 minutes ago [-]
What makes you think a sustainable negative social/political trend laser focused on AI is even possible?
Statistical approaches were already extremely unpopular socially and politically long before AI came around. Have you considered that it just doesn't work?
vrganj 38 minutes ago [-]
As somebody in software, I find my fellow tech folks have the opposite bias.
There is no reason to believe superintelligent AI is a possibility. Extraordinary claims require extraordinary evidence, and so far we haven't gotten any.
The burden of proof is on the side making the grand prophecies.
woeirua 1 hour ago [-]
> I don't think AI is intelligent; nor do I think that the current (admittedly impressive) statistical techniques will lead to intelligence.
It’s increasingly difficult to rationalize away the capabilities of AI as not requiring “intelligence”. This point of view continues to require some belief in human exceptionalism.
nkrisc 35 minutes ago [-]
There is clearly something exceptional (in the true neutral sense of the word) about humans, or more broadly the Homo genus.
If you believe that humans have in fact created artificial intelligence, then that alone makes us currently exceptional.
Schlagbohrer 58 minutes ago [-]
I agree. It has become more and more irrelevant whether AI meets a given definition of intelligence when I can talk with it and it understands what I'm saying, including a shocking level of nuance.
rsfern 1 hour ago [-]
I think the exceptionalism is the other way around. What makes anyone think they understand what makes for intelligence when we barely understand our own neurology?
Mordisquitos 48 minutes ago [-]
I'm reminded of a book on my bookshelf (which I still haven't read, story of my life...), by the recently deceased ethologist Frans de Waal, titled 'Are We Smart Enough to Know How Smart Animals Are?'. Of course, Betteridge's law applies to its title.
In my opinion, the vast multitude of different animal intelligences is a clear hint that language does not an intelligence make. We're animals, and our intelligences did not come from language; language allowed us to supercharge it. We can and do think and make decisions without using language, and the idea that a statistical model based solely on our language can be intelligent does not follow.
wolttam 14 minutes ago [-]
The meaning of tokens loses touch with language in the deeper layers of a large language model's neural net.
Language is just the input/output modality.
woeirua 36 minutes ago [-]
Explain the emergent capabilities of AI then.
vrganj 25 minutes ago [-]
Such as?