NHacker Next
The Singularity will occur on a Tuesday (campedersen.com)
stego-tech 47 minutes ago [-]
This is delightfully unhinged, spending an amazing amount of time describing their model and citing their methodologies before getting to the meat of the meal many of us have been braying about for years: whether the singularity actually happens or not matters less than whether enough people believe it will happen and act accordingly.

And, yep! A lot of people absolutely believe it will and are acting accordingly.

It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”) and pivoted to the social arguments instead (“here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”). Folks vibe with the latter, less with the former. Can’t convince someone of the former when they don’t even understand that the computer is the box attached to the monitor, not the monitor itself.

nine_k 5 minutes ago [-]
> *enough people believe it will happen and act accordingly*

Here comes my favorite notion of "epistemic takeover".

A crude form: make everybody believe that you have already won.

A refined form: make everybody believe that everybody else believes that you have already won. That is, even if one has doubts about your having won, they believe that everyone else submits to you as the winner, and must act accordingly.

jacquesm 44 minutes ago [-]
> “here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”

And there are plenty of people that take issue with that too.

Unfortunately they're not the ones paying the price. And... stock options.

stego-tech 38 minutes ago [-]
History paints a pretty clear picture of the tradeoff:

* Profits now and violence later

OR

* Little bit of taxes now and accelerate easier

Unfortunately we’ve developed such a myopic, “FYGM” society that it’s explicitly the former option for the time being.

jpadkins 10 minutes ago [-]
Do you have a historical example of "Little bit of taxes now and accelerate easier"? I can't think of any.
AndrewKemendo 26 minutes ago [-]
Every possible example of “progress” has had either an individual or a state power purpose behind it.

There is only one possible “egalitarian” forward-looking investment that paid off for everybody.

I think the only exception to this is vaccines… and you saw how all that worked during Covid.

Everything else, from the semiconductor to the vacuum cleaner, the automobile, airplanes, steam engines (I don’t care what it is, you pick something): it was developed in order to give a small group an advantage over all the other groups. It has always been this case and it will always be this case, because fundamentally, at the root nature of humanity, they do not care about the externalities, good or bad.

jacquesm 15 minutes ago [-]
COVID has cured me (hah!) of the notion that humanity will be able to pull together when faced with a common enemy. That means global warming or the next pandemic are going to happen and we will not be able to stop them from happening, because a solid percentage can't wait to jump off the ledge, and they'll push you off too.
jpadkins 4 minutes ago [-]
COVID also cured me, but we have different conclusions. I used to think that what happened in Nazi Germany was unthinkable or impossible in my country. Seeing the glee with which my fellow humans took to enforcing and cheering on inhumane policies (closing schools, beaches, public parks, etc.) or mask mandates (which became a very public signal of whether you were part of the in-group or not) made me realize how easy it was for the Nazis to control the German public. I lost a lot of faith in humanity; they aren't going to stand up for their fellow man if it means they can feel virtuous or superior by castigating people for not following authority.
AndrewKemendo 13 minutes ago [-]
Yeah buddy we agree
dakolli 5 minutes ago [-]
Just say it simply,

1. LLMs only serve to reduce the value of your labor to zero over time. They don't need to even be great tools, they just need to be perceived as "equally good" to engineers for C-Suite to lay everyone off, and rehire at 50-25% of previous wages, repeating this cycle over a decade.

2. LLMs will not allow you to join the billionaire class; that wouldn't make sense, as anyone could if that were the case. They erode the technical meritocracy these Tech CEOs worship on podcasts and YouTube (makes you wonder what they are lying about). Your original ideas and that startup you think is going to save you aren't going to be worth anything if someone with minimal skills can copy them.

3. People don't want to admit it, but heavy users of LLMs know they're losing something, and there's a deep-down feeling that it's not the right way to go about things. It's not dissimilar to the guilty dopaminergic crash one gets when taking shortcuts in life.

I used like 1.8bb Anthropic tokens last year; I won't be using it again, and I won't be participating in this experiment. I've likely lost years of my life in "potential learning" from the social media experiment, and I'm not doing that again. I want to study compilers this year, and I want to do it deeply. I won't be using LLMs.

mitthrowaway2 33 minutes ago [-]
> whether the singularity actually happens or not matters less than whether enough people believe it will happen and act accordingly.

I disagree. If the singularity doesn't happen, then what people do or don't believe matters a lot. If the singularity does happen, then it hardly matters what people do or don't believe.

afthonos 29 minutes ago [-]
I don’t think that’s quite right. I’d say instead that if the singularity does happen, there’s no telling which beliefs will have mattered.
Negitivefrags 30 minutes ago [-]
> If the singularity does happen, then it hardly matters what people do or don't believe.

Depends on how you feel about Roko's basilisk.

VonTum 18 seconds ago [-]
God, Roko's Basilisk is the most boring AI risk to catch the public consciousness. It's just Pascal's wager all over again, with the exact same rebuttal.
sigmoid10 29 minutes ago [-]
Depends on what a post singularity world looks like, with Roko's basilisk and everything.
cgannett 31 minutes ago [-]
if people believe its a threat and it is also real then what matters is timing
0x20cowboy 22 minutes ago [-]
"'If I wished,' O'Brien had said, 'I could float off this floor like a soap bubble.' Winston worked it out. 'If he thinks he floats off the floor, and if I simultaneously think I see him do it, then the thing happens'".
Forgeties79 32 minutes ago [-]
I just point to Covid lockdowns and how many people took up hobbies, how many just turned into recluses, how many broke the rules no matter the consequences real or imagined, etc. Humans need something to do. I don’t think it should be work all the time. But we need something to do or we just lose it.

It’s somewhat simplistic, but I find it gets the conversation rolling. Then I go “it’s great that we want to replace work, but what are we going to do instead, and how will we support ourselves?” It’s a real question!

generic92034 40 minutes ago [-]
> Folks vibe with the latter

I am not convinced, though, that it is still up to "the folks" whether we change course. Billionaires and their sycophants may not care for the bad consequences (or may even appreciate them - realistic or not).

stego-tech 35 minutes ago [-]
Oh, not only do they not care about the plebs and riff-raff now, but they’ve spent the past ten years building bunkers and compounds to try and save their own asses for when it happens.

It’s willful negligence on a societal scale. Any billionaire with a bunker is effectively saying they expect everyone to die and refuse to do anything to stop it.

dakolli 3 minutes ago [-]
It seems pretty obvious to me the ruling class is preparing for war to keep us occupied, just like in the 20s, they'll make young men and women so poor they'll beg to fight in a war.

It makes one wonder what they expect to come out the other side of such a late-stage/modern war, but I think what they care about is that there will be less of us.

bheadmaster 31 minutes ago [-]
> here’s how LLMs actually work

But how is that useful in any way?

For all we know, LLMs are black boxes. We really have no idea how the ability to have a conversation emerged from predicting the next token.

OkayPhysicist 23 minutes ago [-]
> We really have no idea how the ability to have a conversation emerged from predicting the next token.

Maybe you don't. To be clear, this is benefiting massively from hindsight (just as, if I didn't know how combustion engines worked, I probably wouldn't have dreamed one up), but the emergent conversational capabilities of LLMs are pretty obvious. In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question. A normal conversational reply is the most common thing to follow a conversation opener. While impressive, these things aren't magic.

MarkusQ 24 minutes ago [-]
> We really have no idea how the ability to have a conversation emerged from predicting the next token.

Uh yes, we do. It works in precisely the same way that you can walk from "here" to "there" by taking a step towards "there", and then repeating. The cognitive dissonance comes when we conflate this way of "having a conversation" with the way two people converse, assume that because they produce similar outputs they must be "doing the same thing", and then find it hard to see how LLMs could be doing that.

Sometimes things seem unbelievable simply because they aren't true.

AndrewKemendo 31 minutes ago [-]
The goal is to eliminate humans as the primary actors on the planet entirely

At least that’s my personal goal

If we get to the point where I can go through my life and never interact with another human again, and work with a bunch of machines and robots to do science and experiments and build things to explore our world and make my life easier and safer and healthier and more sustainable, I would be absolutely thrilled

As it stands today and in all the annals of history there does not exist a system that does what I just described.

Bell Labs existed for the purposes of Bell Telephone… until it wasn't needed by Bell anymore. Google's moonshots existed for the shareholders of Google… until they were not useful for capital. All the work done at Sandia and White Sands labs was done in order to promote the power of the United States globally.

Find me some egalitarian organization that can persist outside of the hands of some massive corporation or some government, one that can actually help people, and I might give somebody a chance. But that does not exist.

And no, Mondragon is not one of these.

NitpickLawyer 35 minutes ago [-]
> [...] prior to reforming society [...]

Well, good luck. You have "only" the entire history of humankind on the other side of your argument :)

stego-tech 34 minutes ago [-]
I never said it was an easy problem to solve, or one we’ve had success with before, but damnit, someone has to give a shit and try to do better.
AndrewKemendo 29 minutes ago [-]
Literally nobody’s trying because there is no solution

The fundamental unit of society …the human… is at its core fundamentally incapable of coordinating at the scale necessary to do this correctly

and so there is no solution because humans can’t plan or execute on a plan

sp527 25 minutes ago [-]
The likely outcome is that 99.99% of humanity lives a basic subsistence lifestyle ("UBI") and the elite and privileged few metaphorically (and somewhat literally) ascend to the heavens. Around half the planet already lives on <= $7/day. Prepare to join them.
accidentallfact 35 minutes ago [-]
Reality won't give a shit about what people believe.
vcanales 1 hour ago [-]
> The pole at ts8 isn't when machines become superintelligent. It's when humans lose the ability to make coherent collective decisions about machines. The actual capabilities are almost beside the point. The social fabric frays at the seams of attention and institutional response time, not at the frontier of model performance.

Damn, good read.

adastra22 39 minutes ago [-]
We are already long past that point…
shantara 22 minutes ago [-]
It doesn’t help when quite a few Big Tech companies are deliberately operating on the principle that they don’t have to follow the rules, just change faster than the bureaucratic system can respond.
ericmcer 29 minutes ago [-]
Great article, super fun.

> In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993. Over 55,000 explicitly cited AI. But HBR found that companies are cutting based on AI's potential, not its performance. The displacement is anticipatory.

You have to wonder if this was coming regardless of what technological or economic event triggered it. It is baffling to me that with computers, email, virtual meetings and increasingly sophisticated productivity tools, we have more middle management, administrative, bureaucratic type workers than ever before. Why do we need triple the administrative staff that was utilized in the 1960s across industries like education, healthcare, etc.? Ostensibly a network-connected computer can do things more efficiently than paper, phone calls and mail? It's like if we tripled the number of farmers after tractors and harvesters came out, and then they had endless meetings about the farm.

It feels like AI is just shining a light on something we all knew already, a shitload of people have meaningless busy work corporate jobs.

gojomo 52 minutes ago [-]
"It had been a slow Tuesday night. A few hundred new products had run their course on the markets. There had been a score of dramatic hits, three-minute and five-minute capsule dramas, and several of the six-minute long-play affairs. Night Street Nine—a solidly sordid offering—seemed to be in as the drama of the night unless there should be a late hit."

– 'SLOW TUESDAY NIGHT', a 2600 word sci-fi short story about life in an incredibly accelerated world, by R.A. Lafferty in 1965

https://www.baen.com/Chapters/9781618249203/9781618249203___...

atomic128 40 minutes ago [-]

    Once men turned their thinking over to machines
    in the hope that this would set them free.

    But that only permitted other men with machines
    to enslave them.

    ...

    Thou shalt not make a machine in the
    likeness of a human mind.

    -- Frank Herbert, Dune
You won't read, except the output of your LLM.

You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you?

You won't think or analyze or understand. The LLM will do that.

This is the end of your humanity. Ultimately, the end of our species.

Currently the Poison Fountain (an anti-AI weapon, see https://news.ycombinator.com/item?id=46926439) feeds 2 gigabytes of high-quality poison (free to generate, expensive to detect) into web crawlers each day. Our goal is a terabyte of poison per day by December 2026.

Join us, or better yet: deploy weapons of your own design.

debo_ 38 minutes ago [-]
If you read this through a synth, you too can record the intro vocal sample for the next Fear Factory album
gojomo 27 minutes ago [-]
Like partial courses of antibiotics, this will only relatively advantage those leading efforts best able to ignore this 'poison', accelerating what you aim to prevent.
accidentallfact 27 minutes ago [-]
A better approach is to make AI bullshit people on purpose.
octernion 29 minutes ago [-]
do... do the "poison" people actually think that will make a difference? that's hilarious.
PaulHoule 25 minutes ago [-]
The simple model of an "intelligence explosion" is the obscure equation

  dx    2
  -- = x
  dt
which has the solution

        1      
  x = -----
       C-t
and is interesting in relation to the classic exponential growth equation

  dx
  -- = x
  dt
because the rate of growth is proportional to x and represents the idea of an "intelligence explosion" AND a model of why small western towns became ghost towns, why it is hard to start a new social network, etc. (growth is fast as t->C, but for t<<C it is glacial). It's an obscure equation because it never gets a good discussion in the literature outside of an aside in one of Howard Odum's tomes on emergy.

Like the exponential growth equation it is unphysical as well as unecological because it doesn't describe the limits of the Petri dish, and if you start adding realistic terms to slow the growth it qualitatively isn't that different from the logistic growth equation

  dx
  --  = (1-x) x
  dt
thus it remains obscure. Hyperbolic growth hits the limits (electricity? intractable problems?) the same way exponential growth does.
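To make the contrast concrete, here is a minimal numerical sketch (function names and the choice x(0) = 1 are mine) using the closed-form solutions of the two equations above:

```python
import math

def exponential(t, x0=1.0):
    """Closed-form solution of dx/dt = x: x(t) = x0 * e^t."""
    return x0 * math.exp(t)

def hyperbolic(t, x0=1.0):
    """Closed-form solution of dx/dt = x^2: x(t) = 1/(C - t) with
    C = 1/x0, which diverges at the finite time t = C."""
    C = 1.0 / x0
    if t >= C:
        raise ValueError("past the blow-up time t = C")
    return 1.0 / (C - t)

# Early on the two curves are nearly indistinguishable:
print(exponential(0.1), hyperbolic(0.1))  # ~1.105 vs ~1.111

# But only the hyperbolic solution reaches infinity in finite time;
# at t = 0.99, e^t is still under 3 while 1/(C - t) has exploded:
print(exponential(0.99), hyperbolic(0.99))  # ~2.69 vs ~100
```

Adding a saturating term like the logistic's (1-x) factor tames both curves, which is the Petri-dish point.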
kpil 8 minutes ago [-]
"... HBR found that companies are cutting [jobs] based on AI's potential, not its performance."

I don't know who needs to hear this - a lot apparently - but the following three statements are not possible to validate but have unreasonably different effects on the stock market.

* We're cutting because of expected low revenue. (Negative)

* We're cutting to strengthen our strategic focus and control our operational costs. (Positive)

* We're cutting because of AI. (Double-plus positive)

The hype is real. Will we see drastically reduced operational costs in the coming years, or will it follow the same curve as we've seen in productivity since 1750?

root_axis 43 minutes ago [-]
If an LLM can figure out how to scale its way through quadratic growth, I'll start giving the singularity proposal more than a candid dismissal.
jgrahamc 41 minutes ago [-]
Phew, so we won't have to deal with the Year 2038 Unix timestamp roll over after all.
jacquesm 38 minutes ago [-]
I suspect that's the secret driver behind a lot of the push for the apocalypse.
octernion 40 minutes ago [-]
that was precisely my reaction as well. phew machines will deal with the timestamp issue and i can just sit on a beach while we singularityize or whatever.
jacquesm 37 minutes ago [-]
You won't be on the beach when you get turned into paperclips. The machines will come and harvest your ass.

Don't click here:

https://www.decisionproblem.com/paperclips/

33 minutes ago [-]
octernion 14 minutes ago [-]
having played that when it came out, my conclusion was that no, i will definitely be able to be on a beach; i am too meaty and fleshy to be good paperclip
danesparza 19 minutes ago [-]
"I'm aware this is unhinged. We're doing it anyway" is probably one of the greatest quotes I've heard in 2026.

I feel like I need to start more sprint stand-ups with this quote...

wayfwdmachine 13 minutes ago [-]
Everyone will define the Singularity in a different way. To me it's simply the point at which nothing makes sense anymore, and this is why my personal reflection is aligned with the piece: there is a social Singularity that is already happening. It won't help us when the real event horizon hits (if it ever does; it's fundamentally uninteresting anyway, because at that point all bets are off and even a slow take-off will make things really fucking weird really quickly).

The (social) Singularity is already happening in the form of a mass delusion that - especially in the abrahamic apocalyptical cultures - creates a fertile breeding ground for all sorts of insanity.

Like investing hundreds of billions of dollars in datacenters. The level of committed CAPEX of companies like Alphabet, Meta, Nvidia and TSMC is absurd. Social media is full of bots, deepfakes and psy-ops that are more or less targeted (exercise for the reader: write a bot that manages n accounts on your favorite social media site and use them to move the Overton window of a single individual of your choice; what would be the total cost of doing that? If your answer is less than $10 - bingo!).

We are in the future shockwave of the hypothetical Singularity already. The question is only how insane stuff will become before we either calm down - through a bubble collapse and subsequent recession, war or some other more or less problematic event - or hit the event horizon proper.

pixl97 51 minutes ago [-]
>That's a very different singularity than the one people argue about.

---

I wouldn't say it's that much different. This has always been a key point of the singularity

>Unpredictable Changes: Because this intelligence will far exceed human capacity, the resulting societal, technological, and perhaps biological changes are impossible for current humans to predict.

It was a key point that society would break, but the exact implementation details of that breakage were left up to the reader.

zh3 1 hour ago [-]
Fortuitously before the Unix date rollover in 2038. Nice.
ecto 1 hour ago [-]
I didn't even realize - I hope my consciousness is uploaded with 64 bit integers!
thebruce87m 57 minutes ago [-]
You’ll regret this statement in 292 billion years
layer8 51 minutes ago [-]
I think we’ll manage to migrate to bignums by then.
GolfPopper 36 minutes ago [-]
The poster won't, but the digital slaves made from his upload surely will.
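Both numbers check out, for what it's worth: the 2038 date falls out of a signed 32-bit count of seconds since the Unix epoch, and the 292-billion-year figure out of a signed 64-bit one. A quick sketch:

```python
from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# A signed 32-bit time_t tops out at 2**31 - 1 seconds after the epoch:
rollover32 = epoch + timedelta(seconds=2**31 - 1)
print(rollover32)  # 2038-01-19 03:14:07+00:00

# A signed 64-bit counter lasts roughly 292 billion years:
seconds_per_year = 365.25 * 24 * 3600
print(2**63 / seconds_per_year / 1e9)  # ~292 (billion years)
```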
svilen_dobrev 6 minutes ago [-]
> already exerting gravitational force on everything it touches.

So, "Falling of the night" ?

rcarmo 1 hours ago [-]
"I could never get the hang of Tuesdays"

- Arthur Dent, H2G2

49 minutes ago [-]
jama211 46 minutes ago [-]
Thursdays, unfortunately
qoez 52 minutes ago [-]
Great read but damn those are some questionable curve fittings on some very scattered data points
jacquesm 42 minutes ago [-]
Better than some of the science papers I've tried to parse.
aenis 46 minutes ago [-]
In other words, just another Tuesday.
jesse__ 49 minutes ago [-]
The meme at the top is absolute gold considering the point of the article. 10/10
wffurr 46 minutes ago [-]
Why does one of them have the state flag of Ohio? What AI-and-Ohio-related news did I miss?
adzm 35 minutes ago [-]
Note that the only landmass on Earth is actually Ohio as well. Turns out, it's all Ohio. And it always has been. https://knowyourmeme.com/memes/wait-its-all-ohio-always-has-...
baalimago 54 minutes ago [-]
Well... I can't argue with facts. Especially not when they're in graph form.
regnull 17 minutes ago [-]
Guys, yesterday I spent some time convincing an LLM from a leading provider that 2 cards plus 2 cards is 4 cards, which is one short of a flush. I think we are not too close to a singularity, as it stands.
jama211 44 minutes ago [-]
A fantastic read, even if it makes a lot of silly assumptions - this is ok because it’s self aware of it.

Who knows what the future will bring. If we can’t make the hardware we won’t make much progress, and who knows what’s going to happen to that market, just as an example.

Crazy times we live in.

dirkc 40 minutes ago [-]
The thing that stands out on that animated graph is that the generated code far outpaces the other metrics. In the current agent driven development hypepocalypse that seems about right - but I would expect it to lag rather than lead.

*edit* - seems in line with what the author is saying :)

> The data says: machines are improving at a constant rate. Humans are freaking out about it at an accelerating rate that accelerates its own acceleration.

ragchronos 51 minutes ago [-]
This is a very interesting read, but I wonder if anyone actually has any ideas on how to stop this from going south. If the trends described continue, the world will become a much worse place in a few years' time.
Krei-se 40 minutes ago [-]
https://cdn.statcdn.com/Infographic/images/normal/870.jpeg

you can easily see that at a doubling rate of every 2 years, by 2020 we already had over 5 Facebook accounts per human on Earth.

GolfPopper 26 minutes ago [-]
Frank Herbert and Samuel Butler.
arscan 43 minutes ago [-]

  Don't worry about the future
  Or worry, but know that worrying
  Is as effective as trying to solve an algebra equation by chewing Bubble gum
  The real troubles in your life
  Are apt to be things that never crossed your worried mind
  The kind that blindsides you at 4 p.m. on some idle Tuesday

    - Everybody's free (to wear sunscreen)
         Baz Luhrmann
         (or maybe Mary Schmich)
athrowaway3z 37 minutes ago [-]
> Tuesday, July 18, 2034

4 years early for the Y2K38 bug.

Is it coincidence or Roko's Basilisk who has intervened to start the curve early?

miguel_martin 43 minutes ago [-]
"Everyone in San Francisco is talking about the singularity" - I'm in SF and not talking about it ;)
neilellis 40 minutes ago [-]
But you're not Everyone - they are a fictional hacker collective from a TV show.
lostmsu 42 minutes ago [-]
Your comment just self-defeated.
bluejellybean 39 minutes ago [-]
Yet, here you are ;)
jacquesm 36 minutes ago [-]
Another one down.
sempron64 38 minutes ago [-]
A hyperbolic curve doesn't have an underlying meaning modeling a process beyond being a curve which goes vertical at a chosen point. It's a bad curve to fit to a process. Exponentials make sense to model a compounding or self-improving process.
H8crilA 37 minutes ago [-]
But this is a phase change process.

Also, the temptation to shitpost in this thread ...

sempron64 20 minutes ago [-]
I read TFA. They found a best fit to a hyperbola. Great. One more data point will break the fit. Because it's not modeling a process, it's assigning an arbitrary zero point. Bad model.
banannaise 35 minutes ago [-]
You have not read far enough.
43 minutes ago [-]
dakolli 15 minutes ago [-]
Are people in San Francisco that stupid that they're having open-clawd meetups and talking about the Singularity non stop? Has San Francisco become just a cliche larp?
skrebbel 44 minutes ago [-]
Wait is that photo of earth the legendary Globus Polski? (https://www.ceneo.pl/59475374)
jmugan 1 hour ago [-]
Love the title. Yeah, agents need to experiment in the real world to build knowledge beyond what humans have acquired. That will slow the bastards down.
ecto 1 hour ago [-]
Perhaps they will revel in the friends they made along the way.
Krei-se 39 minutes ago [-]
If only we had a battle tested against reality self learning system.
jonplackett 28 minutes ago [-]
This assumes humanity can make it to 2034 without destroying itself some other way…
hinkley 1 hour ago [-]
Once MRR becomes a priority over investment rounds, the tokens/$ will notch down and flatten substantially.
cesarvarela 30 minutes ago [-]
Thanks, added to calendar.
jrmg 1 hour ago [-]
This is gold.

Meta-spoiler (you may not want to read this before the article): You really need to read beyond the first third or so to get what it’s really ‘about’. It’s not about an AI singularity, not really. And it’s both serious and satirical at the same time - like all the best satire is.

mesozoicpilgrim 33 minutes ago [-]
I'm trying to figure out if the LLM writing style is a feature or a bug
moffkalast 47 minutes ago [-]
> I am aware this is unhinged. We're doing it anyway.

If one is looking for a quote that describes today's tech industry perfectly, that would be it.

Also using the MMLU as a metric in 2026 is truly unhinged.

banannaise 58 minutes ago [-]
Yes, the mathematical assumptions are a bit suspect. Keep reading. It will make sense later.
MarkusQ 30 minutes ago [-]
Prior work with the same vibe: https://xkcd.com/1007/
aenis 47 minutes ago [-]
Damn. I had plans.
bpodgursky 31 minutes ago [-]
2034? That's the longest timeline prediction I've seen for a while. I guess I should file my taxes this year after all.
PantaloonFlames 41 minutes ago [-]
This is what I come here for. Terrific.
darepublic 45 minutes ago [-]
> Real data. Real model. Real date!

Arrested Development?

neilellis 39 minutes ago [-]
End of the World? Must be Tuesday.
vagrantstreet 36 minutes ago [-]
Was expecting some mention of Universal Approximation Theorem

I really don't care much if this is semi-satire as someone else pointed out, the idea that AI will ever get "sentient" or explode into a singularity has to die out pretty please. Just make some nice Titanfall style robots or something, a pure tool with one purpose. No more parasocial sycophantic nonsense please

hipster_robot 49 minutes ago [-]
why is everything broken?

> the top post on hn right now: The Singularity will occur on a Tuesday

oh

markgall 1 hour ago [-]
> Polynomial growth (t^n) never reaches infinity at finite time. You could wait until heat death and t^47 would still be finite. Polynomials are for people who think AGI is "decades away."

> Exponential growth reaches infinity at t=∞. Technically a singularity, but an infinitely patient one. Moore's Law was exponential. We are no longer on Moore's Law.

Huh? I don't get it. e^t would also still be finite at heat death.

ecto 1 hour ago [-]
exponential = mañana
OutOfHere 53 minutes ago [-]
I am not convinced that memoryless large models are sufficient for AGI. I think some intrinsic neural memory allowing effective lifelong learning is required. This requires a lot more hardware and energy than for throwaway predictions.
skulk 1 hour ago [-]
> Hyperbolic growth is what happens when the thing that's growing accelerates its own growth.

Eh? No, that's literally the definition of exponential growth. d/dx e^x = e^x

ecto 1 hour ago [-]
Thanks. I dropped out of college
boca_honey 23 minutes ago [-]
Friendly reminder:

Scaling LLMs will not lead to AGI.

AndrewKemendo 45 minutes ago [-]
Y’all are hilarious

The singularity is not something that’s going to be disputable

it’s going to be like a meteor slamming into society and nobody’s gonna have any concept of what to do - even though we’ve had literal decades and centuries of possible preparation

I’ve completely abandoned the idea that there is a world where humans and ASI exist peacefully

Everybody needs to be preparing for the world where it’s:

human plus machine

versus

human groups by themselves

across all possible categories of competition and collaboration

Nobody is going to do anything about it and if you are one of the people complaining about vibecoding you’re already out of the race

Oh and by the way it’s not gonna be with LLMs it’s coming to you from RL + robotics

tempaccountabcd 41 minutes ago [-]
[dead]
cubefox 30 minutes ago [-]
A similar idea occurred to the Austrian-American cyberneticist Heinz von Foerster in a 1960 paper, titled:

  Doomsday: Friday, 13 November, A.D. 2026
There is an excellent blog post about it by Scott Alexander:

"1960: The Year The Singularity Was Cancelled" https://slatestarcodex.com/2019/04/22/1960-the-year-the-sing...

tempaccountabcd 43 minutes ago [-]
[dead]
braden-lk 52 minutes ago [-]
lols and unhinged predictions aside, why are there communities excited about a singularity? Doesn't it imply the extinction of humanity?
unbalancedevh 31 minutes ago [-]
It depends on how you define humanity. The singularity implies that the current model isn't appropriate anymore, but it doesn't suggest how.
bwestergard 45 minutes ago [-]
inanutshellus 45 minutes ago [-]
We avoid catastrophe by thinking about new developments and how they can go wrong (and right).

Catastrophizing can be unhealthy and unproductive, but for those among us who can affect the future of our societies (locally or higher), the results of that catastrophizing help guide legislation and "Overton window" morality.

... I'm reminded of the tales of various sci-fi authors who have been commissioned to write on the effects of hypothetical technologies on society and mankind (e.g. space elevators, Mars exploration)...

That said, when the general public worries about hypotheticals they can do nothing about, there's nothing but downsides. So. There's a balance.

jacquesm 41 minutes ago [-]
Yes, but if we don't do it 'they' will. Onwards!