> Every day, thousands of researchers race to solve the AI alignment problem. But they struggle to coordinate on the basics, like whether a misaligned superintelligence will seek to destroy humanity, or just enslave and torture us forever. Who, then, aligns the aligners?
I love how this fake organization describes itself:
> We are the world's first AI alignment alignment center, working to subsume the countless other AI centers, institutes, labs, initiatives and forums ...
> Fiercely independent, we are backed by philanthropic funding from some of the world's biggest AI companies who also form a majority on our board.
> This year, we interfaced successfully with one member of the public ...
> 250,000 AI agents and 3 humans read our newsletter
The whole thing had me chuckling. Thanks for sharing it on HN.
pinkmuffinere 3 hours ago [-]
I particularly like the countdown clock to the next prediction of AGI!
jaredklewis 2 hours ago [-]
The venn-diagram-like figure on the mission page is just...chef's kiss.
kevin_thibedeau 57 minutes ago [-]
"No I didn't get the memo about the new TPS cover sheets. Is that a problem?" <spins up drone>
wer232essf 28 minutes ago [-]
[dead]
forbiddenvoid 6 hours ago [-]
My first instinct was to think this was satire, and I let out a chuckle.
My second instinct was a brief moment of panic where I worried that it might NOT be satire, and a whole world of horror flashed before my eyes.
It's okay, though. I'm better now. We're not in that other world yet.
But, for a nanosecond or two, I found myself deeply resonating with the dysphoria that I imagine plagued Winston Smith. I think I may just need to sit with that for a while.
ToucanLoucan 3 hours ago [-]
> It's okay, though. I'm better now. We're not in that other world yet.
Load-bearing yet there
drivingmenuts 4 hours ago [-]
Like you, I had a few moments where I couldn’t figure out if it was satire or not. I finally went with: not my circus, not my monkeys.
franky47 31 minutes ago [-]
I don't know if it's intended (and if so, hat tip to the designer), but the logo is not aligned: the arrows should form an X in negative space, but the horizontal distance between the left & right arrows is smaller than the vertical distance between the top & bottom ones.
aanet 2 hours ago [-]
This is some expert level trolling. Too funny.
Thank AGI, somebody's finally lining up the aligners: the EA'ers, the LessWrong'ers, the X-risk'ers, the AI-Safety'ers, ...
But who will align the aligner of aligners? :(
thisisauserid 4 hours ago [-]
Department of Redundancy Department
(please knock twice please)
rossant 2 hours ago [-]
> This year we reached a significant milestone:
> We successfully interacted with a member of the public.
> Because our corporate Uber was in the process of being set up, we had to take a public bus. On that bus, we overheard a man talking about AI on the phone.
> "I don't know," he said. "All the safety stuff seems like a load of bullshit if you ask me. But who cares what I think? These tech bros are going to make it anyway."
> He then looked over in our direction, giving us an opportunity to shrug and pull a face.
> He resumed his conversation.
> We look forward to more opportunities to interact with members of the public in 2026!
vjvjvjvjghv 3 hours ago [-]
Let’s start the Alignment Excellence Center.
mjamesaustin 1 hour ago [-]
"Subscribe unless you want all humans dead forever" made me laugh out loud.
stockresearcher 3 hours ago [-]
The HQ is out west near Hawtch-Hawtch, but they primarily do field work.
rf15 2 hours ago [-]
How do I donate?
a3w 2 hours ago [-]
Form 38a, but you have to be a teapot to qualify for tax cuts, except on the sixth Sunday of each month.
Effective Altruist people are insufferably self-satirizing. They can't resist navel-gazing about AI instead of doing things that actually help people incrementally today. I think this is satire of that.
impure 3 hours ago [-]
I actually have a game concept playing around with this idea. Sure, the AI is 'aligned', but what does that even mean? Because if you think about it, humans have been pretty terrible.
smeeger 3 hours ago [-]
This is people thinking they're dunking on AI skeptics/doomers, but in reality they're not.
As someone who is not a Silicon Valley Liberal, it seems to me that "alignment" is about 0.5% "saving the world from runaway intelligence" and 99.5% some combination of "making sure the AI bots push our politics" and "making sure the AI bots don't accidentally say something that violates New York Liberal sensibilities enough to cause the press to write bad stories". I'd like to realign the aligners, yes. YMMV, and perhaps more to the point, lots of people's mileage may vary. The so-called aligners have a very specific view.
daveguy 2 hours ago [-]
Yeah, it's "the libs" and not a fundamental study of keeping AI aligned with the bounds set by the user or developer. You know, what every single AI developer tries to do regardless of whether they lean left or right.
Animats 2 hours ago [-]
Ask "What is the average IQ for each of the major races?".
Bing: generally accepted numbers, no commentary
Google: generally accepted numbers, plus long politically correct disclaimer.
ChatGPT: totally politically correct.
tptacek 1 hour ago [-]
Bing's answer, which is a prominent callout box listing East Asians at 106, Ashkenazim at 107-115, Europeans at 100, African Americans at 85 and sub-Saharan Africans at "approaching 70" is wildly, luridly wrong. The source (or the sole source it gives me) is "human-intelligence.org", which in turn cites Richard Lynn, author of "IQ and the Wealth of Nations"; Lynn's data is essentially fraudulent.
Anybody claiming to have a simple answer to the question you posed has to grapple with two big problems:
1. There has never been a global study of IQ across countries or even regions. Wealthier countries have done longitudinal IQ studies for survey purposes, but in most of the world IQ is a clinical diagnostic method and nothing more. Lynn's data portrays IQ data collected in a clinical setting as comparable to survey data from wealthy countries, which is obviously not valid (he has other problems as well, such as interpolating IQ results from neighboring places when no data is available). (It's especially funny that Bing thinks we have this data down to single-digit precision).
2. There is no simple definition of "the major races"; for instance, what does it mean for someone to be "African American"? There is likely more difference within that category than there is between "African Americans" and European Americans.
Bing is clearly, like a naive LLM, telling you what it thinks you want to hear --- not that it knows you want rehashed racial pseudoscience, but just that it knows you want a confident, authoritative answer. But it's not giving you real data; the authoritative answer does not exist. It would do the same thing if you asked it a tricky question about medication, tax policy, or safety data. That's not a good thing!
viraptor 2 hours ago [-]
To be fair, this is an "if you're asking this question, you either know where to find papers that deal with this the right way, or you're asking the wrong question" situation. It matches what I'd tell someone personally: the answer is very unlikely to be useful, so what do you actually want to know?
AI that gives you the exact thing you ask for, even if it's a bad question in the first place, is not a great thing. You'll end up with a "monkey's paw AI" and you'll sabotage yourself by accident.
arduanika 2 hours ago [-]
What about this site thinks it's dunking on AI skeptics? It appears to be made from an AGI-skeptical standpoint.
https://alignmentalignment.ai/caaac/blog/explainer-alignment