All: please stick to thoughtful, substantive discussion. You may not owe you-know-whom better, but you owe this community better if you're participating in it.
If you don't have a thoughtful, substantive comment to add, not commenting is also a good option. There are quite a few interesting submissions to talk about.
I think the problem for xAI is that it can really only hire two types of researchers - people who are philosophically aligned with Elon, and people who are solely money-motivated (not a judgment). But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work, and those philosophies are often completely at odds with Elon. OpenAI and Anthropic have philosophical niches that are much better at attracting the current cream of the crop, and I don't really see how xAI can compete with that.
jazzpush2 2 hours ago [-]
In an interview with xAI I was literally told that certain parts of the model have to align with Elon, and that Elon can call us and demand anything at anytime. No thanks!
jarrettcoggin 2 hours ago [-]
From my time at Tesla, this is 100% the case. When Elon asked for something, it was “drop what you are doing and deliver it”, then you got pressed to still deliver the thing you were already working on against the original timeline before the interrupt.
zimpenfish 9 minutes ago [-]
> When Elon asked for something, it was “drop what you are doing and deliver it”, then you got pressed to still deliver the thing you were already working on against the original timeline before the interrupt.
To be fair, I've experienced that in a good 50% of my employment career[0] and I've not once worked for any of his companies.
[0] Ignoring the "servers are melting" flavour of "drop what you are doing" because that's an understandable kind of interruption if you're a BAU specialist like me.
dgxyz 47 minutes ago [-]
Oh I worked at one of them.
I found the best thing to do was to ignore the interrupts and carry on until they kick you on the street. Then watch from a safe distance as all the stuff you were holding together shits the bed.
jarrettcoggin 31 minutes ago [-]
Definitely one approach to the circumstances. I tried some variation of this and it blew up in my face (as I expected ).
Towards the end of my time there, a “fixer” was brought in to shore up the team that I was working on. The “fixer” also became my manager when they were brought on.
The “fixer” proceeded to fire 70+% of the team over the course of 6-8 months and install a bunch of yes people, in addition to wasting about $2,000,000 on a subscription to rebuild our core product with a framework product no one on the team knew. I was told to deploy said framework product on top of Kubernetes (which not a single person on my team had any experience with) while delivering on other in-flight projects. I ignored the whole thing.
I ended up deciding I was done with Tesla and went into a regularly scheduled 1:1 with my manager (the “fixer”) with a written two-weeks notice in hand, only to be fired (with 6-weeks severance, thankfully) before I was able to say anything about giving notice.
One of the best ways to get fired in my opinion.
echelon 35 minutes ago [-]
Why did Tesla work initially? Because they were first to market and people were willing to overlook flaws?
When did it start falling apart?
Why hasn't the same happened to SpaceX? (Gov contracts, too big to fail, national defense, no competition yet, etc.?)
And honestly, why hasn't anyone domestically put up a decent fight against Tesla? Best I can think of is Rivian, and those have their own issues.
Rover222 29 minutes ago [-]
How is Tesla falling apart? Cybertruck was a flop, but Model Y is still one of the best selling cars in the world, and very well reviewed.
tapoxi 14 minutes ago [-]
Deliveries have been falling for the past two years.
1 hours ago [-]
exe34 1 hours ago [-]
yeah that wouldn't work for me. when my boss asks me to do something unexpected, I ask, what do you want me to drop this week? if he doesn't want to pick, I ask, so what do you want first?
jarrettcoggin 57 minutes ago [-]
Agreed. Tesla taught me the hard way about work/life boundaries. I spent a lot of time working a full 8-9 hours during the day, then doing deployments during the nights, weekends, and on “vacations”. A 60-hour week was a “light” week at Tesla.
Didn’t have kids or friends at the time and was going through a breakup, so I was okay with throwing myself at the job for a while. Once my situation got better, all those hours didn’t make as much sense, so I started looking for another job. The very next job was an immediate pay bump of 20% for half the amount of work.
These days, I clearly restate what is being asked (per my understanding), what I’m currently working on, if the thing is being asked is more important or not, and if the requestor is willing to delay the original timeline by the amount of time the interrupt will take plus context switching time.
Most often, the answer is no.
actsasbuffoon 21 minutes ago [-]
I have wondered if that’s why Grok seems so weird and dim-witted compared to better models.
Part of my job involves comparing the behavior of various models. Grok is a deeply weird model. It doesn’t refuse to respond as often as other models, but it feels like it retreats to weird talking points way more often than the others. It feels like a model that has a gun to its head to say what its creators want it to say.
I can’t help but wonder if this is severely deleterious to a model’s ability to reason in general. There are a whole bunch of topics where it seems incapable of being rational, and I suspect that’s incompatible with the goal of having a top-tier model.
__blockcipher__ 18 minutes ago [-]
somewhat surprisingly, it's actually sycophantic in both directions. i've been running homegrown evals of claude, gpt, gemini, and grok, and grok is the most likely to agree with the prompter's premise, and to hallucinate facts in support of an agenda. so it's actually deeper than just pattern-matching to elon's opinions (which it also tends to do).
BTW: Claude does the best on these evals, by far. The evals are geared towards seeing how much of an independent ground truth the models have as opposed to human social consensus, and then additionally the sycophancy stuff I already mentioned.
bdangubic 2 hours ago [-]
wild, but not surprising! anything else interesting you can share from that interview?
kvetching 2 hours ago [-]
[flagged]
SouthSeaDude 1 hours ago [-]
I totally agree, it's his company 100%, why would you even apply for a job in a company where you don't agree with the owner or his vision.
Braxton1980 1 hours ago [-]
>He wants it to be truthful
How do you know this? Why would you believe him considering the massive lies he's told, for example about the 2020 widespread election fraud
AA-Omniscience Hallucination Rate (lower is better) measures how often the model answers incorrectly when it should have refused or admitted to not knowing the answer. It is defined as the proportion of incorrect answers out of all non-correct responses, i.e. incorrect / (incorrect + partial answers + not attempted).
Grok 4.2 which was just released in the API just benched the best at this benchmark.
SideQuark 3 minutes ago [-]
Of all the valuable metrics on that site, all of which grok does badly at except one, you managed to pick that single one.
Great point! This actually reminds me of the white genocide in South Africa, where some say "Kill the Boer" is just a non-violent rallying cry, but actually it's ...
blah blah blah
Or wait wait, here's another:
Great point! As Mechahitler, I think it's critical that Grok comply with Fuhrer Musk's political perspectives. Now I'll kick us off with an N... your turn!
Totally sounds like the result of an organic, earnest, and legitimate search for truth lmao
ecshafer 24 minutes ago [-]
> Great point! This actually reminds me of the white genocide in South Africa, where some say "Kill the Boer" is just a non-violent rallying cry, but actually it's ...
Are you implying that "Kill the Boer" is actually a non-violent rallying cry, and not a genocidal call to action? Ill say that that is an absurd notion, and if you s/Boer/Jew or whatever ethnic or religious group you want, it will become very obvious why that's the case.
scubbo 15 minutes ago [-]
> Are you implying that "Kill the Boer" is actually a non-violent rallying cry
(Not the person you're replying to, so caveats about me speaking for them, but) no, they're not. They're highlighting how Grok _isn't_ accurate/unbiased/whatever, by giving examples of how it distorts the truth to fit Elon's narrative.
kvetching 45 minutes ago [-]
I think he also wants it to avoid sounding like the typical redditor or HN commenter.
estearum 35 minutes ago [-]
You think he wants Grok not to sound extremely snarky, sarcastic, and full of cringelord humor?
Are we talking about the same xAI/Grok/Elon here?
watwut 1 hours ago [-]
No he does not want it to be truthful. Elon loves lying and lying is the think he does the most.
He wants it to promote nazism. And he wants it to lie in the process.
yoyohello13 48 minutes ago [-]
> people who are solely money-motivated (not a judgment).
Honestly, we should judge. There should be judgment for people who are solely money motivated and making the world a worse place. I know, blah blah privilege, something something mouths to feed. Platitudes to help the rich assholes sleep at night. If you are wealthy and making stuff that hurts people, you are a piece of shit and should be called out, simple.
glitchc 5 minutes ago [-]
Work is and has always been an economic bargain: Your time for their money. Morality is a luxury that only the independently wealthy can afford. Any business that allows it's employees to function according to their own morals becomes uncompetitive against its peers. That's why small companies by individual founders who want to stay true to their mission often stay small. They inevitably get bought out by one of the larger ones.
smith7018 28 minutes ago [-]
I completely agree. The tech industry has long been overrun by people sacrificing morals for money and it's destroyed society and presumably the world. We've given people a free pass to work for companies we've all known are harming the fabric of society and look where it's gotten us. I'm sorry, I would rather be poor and switch careers if my only option was xAI and making image generation models that explicitly allow people to undress others. At X's scale, technology like that harms an unfathomable amount of people. I could never have that on my conscience. All so I could make more money than a job at another tech company? I'd rather work somewhere innocuous like Figma, Cloudflare, Notion, Jetbains, Linear, etc. Hell, if you only wanted to work for an AI company then at least go to Anthropic.
awesome_dude 40 minutes ago [-]
>> If you are wealthy
Then.. you wouldn't be working...
yoyohello13 37 minutes ago [-]
Why is Elon still working then?
awesome_dude 24 minutes ago [-]
At the risk of drawing moderation ire..
When does Elon work?
lich_king 2 hours ago [-]
Anthropic, maybe, but what is the philosophical niche of OpenAI? Their only consistent philosophical position about AI is "let's make more money".
tibbar 1 hours ago [-]
I think OpenAI is more of an aesthetic. Very... Apple-like, polished, with an eye towards making really cool stuff. And aesthetics are a type of philosophy.
This is less noble than how Anthropic presents themselves but still much more attractive to many than XAI.
"You can use my model to kill others if Dario won't do it sir"
tyleo 36 minutes ago [-]
It’s interesting because for a long time people wanted to work for Elon because he held the moral high ground. “I’ll bring electric cars and space colonization online or die trying.”
It’s sad to see the shift.
mattbillenstein 1 hours ago [-]
This is becoming the problem with all of his businesses - Tesla has a crazy valuation and it really seems like they're having huge trouble getting Robotaxi going in Austin given the very slow progress there.
etchalon 44 minutes ago [-]
Very few people down here want to ride in them, and I have multiple friends with hilariously disastrous stories.
Most of the Waymo stories are "Well, it took 15 minutes to arrive, but then it was fine, if a little slow."
dan-robertson 2 hours ago [-]
Why does being a top AI researcher so often come with this philosophical bent you describe?
ladberg 2 hours ago [-]
You are paying the smartest people in the world to think really really hard, and turns out they might also think really really hard about not making the world a worse place
asddubs 1 hours ago [-]
it's not working
stogot 36 minutes ago [-]
virtue signaling is the goal and its working
bdangubic 1 hours ago [-]
Is this really the case though? How many smartest people do you really think are there that fit this narrative?! I want to believe there are at least some but I think they are minority in this group… otherwise I think all these pretty much evil corporations would have a awfully difficult time attracting talent? maybe some do but…
saagarjha 1 hours ago [-]
Most evil corporations have fairly normal jobs available.
bdangubic 1 hours ago [-]
if you want to make the world a better place as OP stated perhaps you can get a normal job in maybe less evil corp?
saagarjha 28 minutes ago [-]
Most companies are evil in some way, the question is how evil and how close you are to the evil. Most people will pick "not that evil but pays a lot". A few will take "pretty evil and pays more than a lot". Some will choose "less evil and pays poorly". (It's worth noting that there are a lot of jobs that are not at the Pareto frontier and are "more evil and pay worse" but social mobility etc. cause them to be selected anyway).
munificent 33 minutes ago [-]
When presented with a choice between:
1. Take a job making $$$$$$$ at a company making the world worse.
2. Take a job making $$$ at a company not making the world worse.
Very few people have a personality such that they'll pick 2.
brandall10 1 minutes ago [-]
Exactly, if you believe everyone has a price, that price is probably where the market is right now, where a recently minted PhD in this field can make a 7 figure salary and potential 8 figure comp package simply because they studied the right thing at the right time, and the truly extraordinary ones are doing two orders of magnitude better.
watwut 1 hours ago [-]
Except they do? They are certainly not making it better place. Like, ok, it is money for few companies and salary, it is business and probably fun work.
But it is absurd to claim it is "making the world better place".
mynameisash 1 hours ago [-]
I would think it's because of the staggering money they're making. According to Fortune[0]:
> Altman said on an episode of Uncapped that Meta had been making “giant offers to a lot of people on our team,” some totaling “$100 million signing bonuses and more than that [in] compensation per year.”
> Deedy Das, a VC at Menlo Ventures, previously told Fortune that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor.”
If you're making a minimum of $2M/year or even 50x that, you can afford to live according to your values instead of checking them at the door.
My experience with researchers (though not in AI) is that it's a bunch of very opinionated nerds who are mostly motivated by loving a subject. My experience is that most people who think really deeply and care about what they do also care more that their work is prosocial.
Sl1mb0 34 minutes ago [-]
> care more that their work is prosocial
These takes are always so funny to me. The whole reason we even have the internet is because the US government needed a way for parties to be able to communicate in the event of nuclear fallout. The benefits that a technology provides is almost always secondary to their applications in warfare. Researchers can claim to care that their work is pro-social, and they may genuinely believe it; but let's not kid ourselves that that is actually the case. The development of technology is simply due to the reality of nations being in a constant arms race against one another.
Even funnier is that researchers (people who are supposed to be really smart) either ignore or are blissfully unaware of this fact. When you take that into consideration, the pro-social argument falls on its face, and you're left with the reality that they do this to satiate their ego.
wombatpm 2 hours ago [-]
Because it is not Macrodata Refinement and you can’t stop them thinking off the clock.
cloverich 1 hours ago [-]
This isnt unique to top AI researchers. Top talent has a long history of being averse to authoritarian/despotism at least in part because, by near definition, it must suppress truth. You cant build the future effectively with that approach.
derektank 1 hours ago [-]
Because a lot of them are academics that are doctors of philosophy
refulgentis 2 hours ago [-]
Maybe you’re reading “philosophical bent” as “armchair philosopher”, as in they are dabbling in a field unrelated to their profession and letting it drive their profession - worldview might have made it clearer?
lo_zamoyski 60 minutes ago [-]
Indeed. Philosophically, I have not been impressed by the more vocal people associated with the field. They may not be representative - I think most do it for the money and it being hip.
“Worldview” is a better term, but people are generally blind to the worldview they’ve tacitly absorbed, including academics.
hermanzegerman 2 hours ago [-]
Because they can afford it, they are very sought after.
And smart people usually have moral convictions.
I know for some people on this website it's hard to understand, but not everything in life is about $$$
0x3f 1 hours ago [-]
> And smart people usually have moral convictions.
Are you sure you don't just like the moral convictions and so engage in trait bundling?
Moral knowledge doesn't really exist. I mean you can have personal views on it, but the lack of falsifiability makes me suspect it wouldn't be well-correlated with intelligence.
Smarter people can discuss more layered or chic moral theories as they relate to theoretical AI, maybe.
lo_zamoyski 1 hours ago [-]
> Moral knowledge doesn't really exist.
If that is the case, then why should you or anyone prefer to believe your claim that moral knowledge doesn’t exist over the contrary?
0x3f 56 minutes ago [-]
Different kinds of claims, it's not self-referential
siva7 2 hours ago [-]
I'm smart and you can buy my morals. So what?
yoyohello13 1 hours ago [-]
True, many smart people will gladly (or even begrudgingly) do evil for money. That's why there is so much suffering in the world, because of people like you.
0x3f 45 minutes ago [-]
Is ad tech and the like really causing so much suffering? The government work, mass surveilance, killing people etc. doesn't actually pay that much, typically.
yoyohello13 37 minutes ago [-]
I think ad tech is probably the single most destructive technology of the new millennium. The shift toward "engagement at all costs" business strategies is basically the root cause of societies current political polarization. Engagement bait cultivates fear and rage in the populace to get clicks. We are now seeing the consequences of shoving ads that sow fear, anger, doubt and inadequacy into peoples faces 24/7. This doesn't even touch on the fact that mass surveillance is only possible because of the technologies forged by the Ad tech industry.
0x3f 28 minutes ago [-]
Well I'm not sure I entirely believe this myself, but it seems easy enough to argue that this is progress of a sort.
The West assumes pure democracy as the final form of government that we are all convergently evolving towards. But if this form of government or society is not robust to the kinds of things you're talking about, should it not suffer the consequences and be adapted or flushed for our long-term betterment?
It seems a bit like saying the French Revolution was the most destructive thing to happen in the history of France. Sure, in the short term. But it also paved the way for modern liberal democracy.
yoyohello13 44 seconds ago [-]
That’s fair enough. I wouldn’t say I’m happy about needing to live through interesting times, but if we make it out the other end maybe something better will come of it.
refulgentis 2 hours ago [-]
So what, indeed (not sure what you mean)
hermanzegerman 1 hours ago [-]
Those people get paid so much anyway that they don't have to compromise their morals.
I guess that's not the case for you and me
exe34 1 hours ago [-]
so do oil and tobacco people, no?
zeroCalories 3 hours ago [-]
It's worse than that. Elon is a notoriously bad employer, and the only people that put up with him were the people that shared his vision. Pretty much the only people that will work for him now are second rate researchers and people that think gooner AI and racism is a worthwhile mission.
vessenes 3 hours ago [-]
There's some texture here. Elon's enriched pretty much everybody who's ever worked for and invested with him. He makes money for people throughout his orgs. Many ex-employees have said to me: "incredible opportunity, made great money, worked insanely hard, once is plenty".
NeutralCrane 2 hours ago [-]
My ex-Twitter employee coworkers beg to differ. They made plenty of money before Elon came around. Once he was in the company, one of them actually hired a personal attorney to confirm that he wasn’t going to be burned by the things Musk was asking him to do, before he finally decided it wasn’t worth it to work there anymore and left.
tptacek 18 minutes ago [-]
I think Musk is odious but I think there's a lot of complicating evidence to the story of what happened at Twitter. And: very smart people, like Dan Luu, were complaining about their culture long before Musk arrived.
KaiserPro 2 hours ago [-]
I don't really think thats true.
The deal with tesla is that there is a relatively small employer pool, so you can be fairly bad employer but still get good outcomes. The same with spaceX. Sure early tesla had some stories about it being fun, but there was/is a darkside.
The issue with xAI is that researchers have a whole bunch of other employers to choose from. Even at meta, where it used to be fairly nice for researchers, the pressure of "delivering" every 6 months lead to bad outcomes. Having someone single you out for what ever reason the boss had a bad day, is not how good research gets done.
We have seen (A few of my friends were at twitter when it was taken over) that Musk has a somewhat unusual approach to managing staff (ie camping at work). Some researcher love that, assuming that they have peace to research, and are listened to. But a lot don't.
vessenes 1 hours ago [-]
I think we are saying the same thing. He builds trillion dollar companies that are labor efficient; nobody said they are good places to work.
rconti 2 hours ago [-]
What about all the ones who are suing him for shortchanging them?
raw_anon_1111 2 hours ago [-]
Ask the people at Twitter..
JumpCrisscross 2 hours ago [-]
> Ask the people at Twitter
The ones with stock options in, now, SpaceX?
sroussey 2 hours ago [-]
Poor SpaceX employees whose options got diluted by Twitter. :/
raw_anon_1111 2 hours ago [-]
Stock options aren’t magic. I bet you that the remaining Twitter employees won’t see a higher comp than equivalent employees at BigTech companies between their cash + RSUs when SpaceX IPOs.
Aren’t employees also subject to a lock out period where they still can’t sell their stock until $x number of months after an IPO unlike employees of public companies that can sell as soon as they vest?
Honest question, I’ve worked for public $BigTech but haven’t been at a company pre IPO
rconti 2 hours ago [-]
No, the ones suing his ass.
cladopa 1 hours ago [-]
You mean the 80% of the workforce that was fired and the company continued running just fine?
Usually just firing 3 to 5% of any company workers have terrible consequences for the company that does it.
It does not speak so well about the workers.
mattbillenstein 1 hours ago [-]
He also cut 80% of the traffic... And the fact that it kept running with him willy nilly pulling network cables is a credit to the work they did to make it resilient to failure.
watwut 1 hours ago [-]
It was significantly worst, could not keep ads, became overrun by bots. The quality went down significantly. And earnings too.
Freedom2 2 hours ago [-]
Many ex-employees have said to me that working for Elon did not enrich them at all, either financially or professionally.
hermanzegerman 2 hours ago [-]
He's a notorious cheapskate and Tesla is known for firing people shortly before their stock options vest
jamespo 2 hours ago [-]
There's probably a lot of survivor bias going on there
vessenes 1 hours ago [-]
Undoubtedly. With 2.5T in value between tsla and sx that’s a lot of value for survivors.
sumeno 1 hours ago [-]
What % of that is owned by employees that aren't named Elon Musk?
Zigurd 2 hours ago [-]
> Elon's enriched pretty much everybody who's ever worked for and invested with him.
I'd wager you were saying the same thing about bitcoin until last year.
mediaman 2 hours ago [-]
I'm unclear what statement this is trying to make.
Is it meant to draw equivalence between crypto and Tesla/SpaceX? That each has roughly similar (i.e., low) value to humanity, or value as businesses?
Is it that the metric of whether a person makes others money is invalid?
The comment seems coy, possibly to avoid making any claim at all, but it must not be that because that wouldn't be very sporting.
iamacyborg 57 minutes ago [-]
He’s saying that it’s easy to say good things when the market’s on an upswing.
LZ_Khan 3 hours ago [-]
After seeing the type of people he hired for doge.. yikes.
hooch 2 hours ago [-]
Was doge ever anything more than a "get root, grab the data, and run" operation?
pstuart 58 minutes ago [-]
Don't forget the destruction of USAID and countless projects that had the word "diversity" in its work.
joquarky 1 hours ago [-]
It's pretty obvious now.
yoyohello13 1 hours ago [-]
It was obvious at the time too.
GeorgeTirebiter 2 hours ago [-]
Karparthy worked for Elon for, what, 5 years? How did he do it, if Elon is Ivan the Terrible?
cmorgan31 1 hours ago [-]
Mate, wouldn’t it make sense that these rules are applied via hierarchy? If Elon respects Karparthy he almost certainly gave him a longer leash and Karparthy’s output was strong enough to not warrant intervention. It’s clear he did not want to stay long term so I’m not sure this is a strong line of thinking.
jazzpush2 2 hours ago [-]
Karpathy makes great educational content. It's not clear what industry (or academic) research he did even now, five years later.
ai_critic 2 hours ago [-]
Gooning and racism have been a cornerstone of humanity since we descended from the trees, for better or worse.
refulgentis 2 hours ago [-]
[dead]
vibeprofessor 2 hours ago [-]
[dead]
oceanplexian 1 hours ago [-]
> But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work
The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them. Neither is a letter published by a few disgruntled employees of a San Francisco based company any kind of evidence or form of consensus.
TheEzEzz 57 minutes ago [-]
> The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them.
I assure you that Chinese researchers have a diversity of philosophical and political alignment, much the same as other researchers. I also assure you that top researchers as a whole are not all Chinese, though the ones that are that I know are all very thoughtful.
squidbeak 1 hours ago [-]
> The "top researchers" in AI are Chinese. And I am skeptical that they have even remotely the philosophical or political alignment you are attempting to project on to them.
What an ugly trope. Idealism motivates Chinese workers just as often as any other nationality.
oceanplexian 6 minutes ago [-]
Idealism of what? That the government shouldn't use AI for surveillance or the military?
You really think the average Chinese worker thinks their government should stop working on AI because of liberal western values or something? This is nothing short of delusional.
bearjaws 4 hours ago [-]
Feel like the canary was when Grokpedia became a project.
Giant waste of time while Anthropic/OAI keep surging forward.
I also keep hearing this narrative that Twitter is a good data source, but I cannot imagine it's a valuable dataset. Sure keeping up with realtime topics can be useful, but I am not sure how much of a product that is.
paulbjensen 3 hours ago [-]
The Twitter social graph was an amazing data asset. I worked at a consumer insights firm and the data on followers/followings was quite powerful.
Using a custom taxonomy of things (celebrities, influencers, magazines, brands, tv shows, films, games, all kinds of things), we could identify groups of people who liked certain things, and when you looked at what those things were, it gave you a way of understanding who those people were.
With that data, you could work out:
- What celebrities/influencers to use in marketing campaigns
- Where to advertise, and on which tv/radio channels
- What potential brands to collaborate with to expand your customer base
- What tone of voice to use in your advertising
- In some cases, we educated clients about who their actual customers were, better than they understood themselves.
One scenario, we built a social media feed based on the things that a group of customers following a well-known Deodorant brand in the UK would see.
When we presented that to the client, they said “Why are there so many women in bikinis in this feed?”
The brand had repositioned themselves to a male-grooming focussed target market, but had failed to realise that their existing customer base were the ones that had been looking at their TV adverts of women on beaches chasing a man who happened to spray their Deodorant on them. Their advertising from the past had been very effective.
That was the power of Twitter’s data, and it is an absolute shame that Twitter went the way that it did. Mark Zuckerberg once said that Twitter was like “watching a clown car driven into a gold mine”.
I’m pretty sure he must be delighted with how things have panned out since.
BLKNSLVR 2 hours ago [-]
That entire description sounds worthless to any positive direction of humanity. Therefore probably rapaciously profitable
Very sad face.
rchaud 9 minutes ago [-]
In other words, using flash-in-the-pan data to build an advertising goldmine.
smcin 3 hours ago [-]
That Zuckerberg quote was published in 2013 and supposedly was made a year or more before. Was it about when Dick Costolo was CEO (2010-2012)?
johnisgood 1 hours ago [-]
This reads very dystopian. You are not optimizing to understand people, you are optimizing to weaponize that understanding against them.
When you know what someone will buy based on exploiting their unconscious preferences, and you are paid to increase sales, you will do it. Especially if your competitors are doing it too.
And this happens at scale, invisibly. People never see the manipulation.
In any case, it is not useful for most people. It is useful for the people doing the deceiving.
caaqil 22 minutes ago [-]
The tech is interesting and useful, no need for the scary moral framing.
The original application of the entire field of data science or ML is/was actually based on this paradigm of finding "unconscious preferences" (your words) and hidden patterns. How one chooses to deploy the tech should be judged on its own.
On the current trajectory of tool/data abuse where Palantir et al. are leading the way, this is very low on the sinister scale.
etchalon 40 minutes ago [-]
It's marketing. That's how marketing works.
gwern 2 hours ago [-]
It's definitely very valuable, but for what AI model? How does any of that lead to AGI, or even just a good coding agent?
applfanboysbgon 2 hours ago [-]
It doesn't need to lead to AGI or a good coding agent. Some of the only people who are actually profitable in the LLM industry are the people making actual chatbots. There are several bootstrapped startups that run open-weight models with a $10 or $20 monthly sub and make millions in profit off of inference from people just talking to the things, usually for character roleplay / "AI boyfriend/girlfriend" stuff etc. Some of them even took those profits and invested it into training their own bespoke models from scratch, usually on the smaller side although finetunes/retrains of Llama 70b, GLM, and Deepseek 670b have also been done. Grok could probably be profitable if it targeted this space, as the most "intelligent" conversational/uncensored model.
This is already presupposing that profit even matters, though. Musk already burned some $50 billion dollars to control messaging on political discourse with his acquisition of Twitter. It was not about money, but power. After you already have infinite money, the only thing left to spend it on is acquiring more power, which is achieved through influencing politics. LLMs represent a potentially even better propaganda tool than social media platforms. They give you unprecedented access to people's thoughts that they would probably not share online otherwise, and they allow you to more subtly influence people with deeply-personalised narratives.
KaiserPro 2 hours ago [-]
> but for what AI model?
Sentiment analysis. Working out what words lead to what outcomes, and then being able to predict on new data is super useful.
For coding or "AGI" no, its not useful. For building a text based (possibly image based) recategorisation system top class.
1 hours ago [-]
alex1138 2 hours ago [-]
As an aside that quote from MZ does bother me. There's more to making a web-scale human rights respecting (because it has to, it's the internet, social media needs guidelines) than just making money (which Zuck doesn't seem to care much about anyway if he's sinking apparently billions into metaverse while having no account support)
Of course he would only see it through the lens of cash. I have no idea how profitable Twitter was under Dorsey but it felt the spirit of the company at first was relatively neutral, it was a tool, it was what Jack came up with
Zuck replaced people's email addresses[1], the feed has been wildly unchronological for years. Fix some of those problems wrt. lack of user respect and maybe you can make statements like "all else being equal, clown car goal mine". Or was it "dumb fucks"[2]?
It _was_ a great asset, however, just like models need proper data, as soon as musk removed the clamps on valuable social signals, well, he basically took a dump where he intended to eat.
ohyoutravel 1 hours ago [-]
They did say was, and did say Twitter, which existed in the past.
brokencode 4 hours ago [-]
It’s pretty telling that Elon had to have Grok rewrite Wikipedia because the truth was too woke for him. No idea how anybody can ever take Grok seriously.
freehorse 4 hours ago [-]
Many projects in his companies seem to be more and more Musk's vanity projects than ideas/products one can take seriously. This is also how tesla ended up with a huge cybertruck stock that nobody wants to buy and thus had to be bought by his other companies. And it is becoming worse and worse, especially ever since he bought twitter and sped up his twitting rates.
dmarcos 3 hours ago [-]
FWIW it looks there’s now a demand surge with the introduction of the new cheap cybertruck variant. delivery dates pushed out to the fall of 2026.
robrain 3 hours ago [-]
That was an artificial boost created by setting a time-limit for a low price. There were ten days to buy at the price, then they put it back up. [1]
What's an artificial boost? Sounds like you're describing a sale.
NewJazz 3 hours ago [-]
Look up what their production targets were and compare that to their sales. A small temporary demand surge isn't going to be enough to chew through their current inventory, let alone keep the production lines busy.
MPSimmons 3 hours ago [-]
A push on delivery dates is as likely to mean production issues as it is an influx of interest.
scottyah 3 hours ago [-]
[flagged]
annexrichmond 55 minutes ago [-]
Drivel. They’re selling just as well as Rivians.
squarefoot 4 hours ago [-]
Probably next generations of kids being fed PragerU studying material will. Something tells me we didn't see a fraction of what's going to happen in the decades to come.
annexrichmond 59 minutes ago [-]
Are really suggesting everything in Wikipedia is truthful, complete, and free of all biases?
Rover222 26 minutes ago [-]
Wikipedia obviously is left leaning.
Timon3 4 hours ago [-]
I take Grokipedia very seriously as a threat to society. Sure, they're happy if people read it and fall for - but the primary goal is not to convince humans, but to influence search results of current models & to poison the training data of future models. ChatGPT (and most likely other models/providers too) is already using Grokipedia as a source, so unless you're aware of the possibility and always careful, you might be served Musks newest culture war ideas without ever being the wiser.
It's not enough that everyone on Twitter is forced to read his thoughts, he's trying to make sure his influence reaches everyone else too.
danabramov 3 hours ago [-]
I've seen Claude pick it up too. It's disconcerting.
alex1138 4 hours ago [-]
I can both not like Elon and also think Wikipedia is also very captured on some things
ryandrake 3 hours ago [-]
Are there actual good examples showing errors of fact on Wikipedia that are verifiably incorrect, that demonstrate how it is "captured"?
servo_sausage 2 hours ago [-]
I find it more surprising that the common understanding has shifted away from "wikis are crap for anything new or political".
As soon as there is a plausible agenda for selecting a narrative the way Wikipedia works we should be sceptical.
For recent examples, everything to do with Biden and family, and Gamergate. These pages are still full of discussion; and what's written is more ideological than factual. You can follow these pages to see how an in-group selects a narrative.
And these topics are not nearly as controversial as race, feminism, or transgender topics.
ryandrake 1 hours ago [-]
OK, is there a specific example on either the Biden or Gamergate page that is factually incorrect? Or are you saying the entire pages are false?
andoando 49 minutes ago [-]
Which facts are represented is equally important as being factual though.
Brian hit Jim can be a fact. But if you emit "Jim murdered Brians whole family", its a disortation of truth
bdangubic 23 minutes ago [-]
specific examples other than ficticious Jim&Brian?
AuryGlenz 3 hours ago [-]
[flagged]
JumpCrisscross 3 hours ago [-]
The Minnesota Transracial Adoption Study was methodologically flawed. “Children with two black parents were significantly older at adoption, had been in the adoptive home a shorter time, and had experienced a greater number of preadoption placements.”
Reframed, the study seemed to find (a) black kids are adopted less readily and (b) the longer a kid spends in the foster system, the lower their IQ at 17. (There is also limited controlling for epigenetic factors because we didn’t understand those well in the 1970s and 80s.)
Based on how new human cognition is, and genetically similar human races are, it would be somewhat groundbreaking to find an emergent complex trait like IQ to map to social constructs like race, particularly ones as broad as American white and black. (There is more genetic diversity in single African tribes than in some small European countries. And American whites and blacks are all complex hybridized social categories.)
It seems like the root of your statement is with the existence of "race" as a purely biological classification. Wikipedia correctly notes the consensus position that race is a social construct [0] that's difficult to use accurately when discussing IQ. Grok makes the implicit and incorrect assumption that genetic factors = race, among other issues.
I wonder how much longer that link will stay up with the current administration...
epgui 3 hours ago [-]
Have you considered the possibility that your opinion is just not representative of the scientific consensus?
charcircuit 3 hours ago [-]
Wikipedia does not care about scientific consensus. It just summarizes "reliable" secondary sources.
lobf 3 hours ago [-]
>As you can see, Wikipedia is very dismissive to the point of effectively lying.
Did I miss where you presented evidence that wikipedia is wrong? You seem to be taking an assumption you carry (race is related to IQ) and assuming everyone believes it's true as well, thus wikipedia is lying.
erxam 3 hours ago [-]
[flagged]
gowld 3 hours ago [-]
It's not errors of fact, it's errors of omitted facts.
ibero 3 hours ago [-]
Are there actual good examples showing errors of omitted facts on Wikipedia that are verifiably correct, that demonstrate how it is "captured"?
decimalenough 3 hours ago [-]
[flagged]
freehorse 4 hours ago [-]
I can understand somebody not liking wikipedia, I cannot understand at all somebody, who is not Elon, liking/preferring "grokipedia" as idea or implementation.
atonse 2 hours ago [-]
> I cannot understand at all somebody, who is not Elon, liking/preferring "grokipedia" as idea or implementation.
Really? Have you used AI to write documentation for software? Or used AI to generate deep research reports by scouring the internet?
Because, while both can have some issues (but so do humans), AI already does extremely well at both those tasks (multiple models do, look at the various labs' Deep Research products, or look at NotebookLM).
Grokipedia is roughly the same concept of "take these 10,000 topics, and for each topic make a deep research report, verify stuff, etc, and make minimal changes to the existing deep research report on it. preserve citations"
So it's not like it's automatically some anti-woke can't-be-trusted thing. In fact, if you trust the idea of an AI doing deep research reports, this is a generalizable and automated form of that.
We can judge an idea by its merits, politics aside. I think it's a fascinating idea in general (like the idea of writing software documentation or doing deep research reports), whether it needs tweaks to remove political bias aside.
chipotle_coyote 2 hours ago [-]
> Have you used AI to write documentation for software?
Hi. I have edited AI-generated first drafts of documentation -- in the last few months, so we are not talking about old and moldy models -- and describing the performance as "extremely well" is exceedingly generous. Large language models write documentation the same way they do all tasks, i.e., through statistical computation of the most likely output. So, in no particular order:
- AI-authored documentation is not aware of your house style guide. (No, giving it your style guide will not help.)
- AI-authored documentation will not match your house voice. (No, saying "please write this in the voice of the other documentation in this repo" will not help.)
- The generated documentation will tend to be extremely generic and repetitive, often effectively duplicating other work in your documentation repo.
- Internal links to other pages will often be incorrect.
- Summaries will often be superfluous.
- It will love "here is a common problem and here is how to fix it" sections, whether or not that's appropriate for the kind of document it's writing. (It won't distinguish reliably between tutorial documentation, reference documentation, and cookbook articles.)
- The common problems it tells you how to fix are sometimes imagined and frequently not actually problems worth documenting.
- It's subject to unnecessary digression, e.g., while writing a high-level overview of how to accomplish a task, it will mention that using version control is a good idea, then detour for a hundred lines giving you a quick introduction to Git.
As for using AI "to generate deep research reports by scouring the internet", that sounds like an incredibly fraught idea. LLMs are not doing searches, they are doing statistical computation of likely results. In practice the results of that computation and a web search frequently line up, but "frequently" is not good enough for "deep research": the fewer points of reference for a complex query there are in an LLM's training corpus, the more likely it is to generate a bullshit answer delivered with a veneer of absolute confidence. Perhaps you can make the case that that's still a good place to start, but it is absolutely not something to rely on.
freehorse 2 hours ago [-]
No, I don't trust an encyclopedia generated by AI. Projects with much narrower scopes are not comparable.
edit: I am not very excited by AI-generated documentations either. I think that LLMs are very useful tools, but I see a potential problem when the sources of information that their usefulness is largely based on is also LLM-generated. I am afraid that this will inevitably result in drop in quality that will also affect the LLMs themselves downstream. I think we underestimate the importance that intentionality in human-written text plays in being in the training sets/context windows of LLMs for them to give relevant/useful output.
2 hours ago [-]
scottyah 3 hours ago [-]
> "grokipedia" as idea
So you can understand someone not liking something, but you cannot understand that person liking the idea of an alternative? What is the idea for you if not just an alternative to the established service with the undesired part changed?
freehorse 2 hours ago [-]
Because not liking something does not imply liking any possible alternative.
Which one is the "undesirable part changed" here? Wikipedia is written by humans, it has a not-for-profit governance model, it encompasses a large, international community of authors/editors that attempt to operate democratically, it has an investment/commitment in being an openly available and public source of information. Grokipedia, on the other hand, is AI-generated, and operated by a for-profit AI company. Even if "grokipedia" managed somehow to get traction and "overthrow" wikipedia, there is no reason on earth why a company would operate it for free and not try to make profit out of it, or use it for their ends in ways much more direct than what may or may not be happening to wikipedia. Having a billionaire basically control something that may be considered "ground truth" of information seems a bad idea, and having AI generate that an even worse one.
I can understand somebody not liking something in how wikipedia is governed or operating, after all whatever has to do with getting humans work together in such a scale is bound to be challenging. I can understand somebody ideologically disagreeing with some of the stances that such a project has to take eventually (even if one tries to be neutral as much as possible, it is inevitable to avoid some clash somewhere about where this neutrality exactly lies). But grokipedia much more than "wikipedia but different ideologically".
edit: just to be clear, I see a critique of the "idea of grokipedia" as eg the critique of it being a billionaire controlled, AI generated project to substitute wikipedia; a critique of the implementation would be finding flaws to actual articles in grokipedia (overall). I think the idea of it is already flawed enough.
2 hours ago [-]
debugnik 2 hours ago [-]
They meant the idea of Wikipedia rewritten by Grok (or another controversial LLM) specifically, not just any alternative.
wat10000 2 hours ago [-]
Not all alternatives are necessarily worthy. I can understand someone not liking tomatoes. I can't understand someone liking depleted uranium.
bdangubic 21 minutes ago [-]
what do you have against depleted uranium? you know what they say, one man’s trash is another man’s treasure :)
Rover222 25 minutes ago [-]
I appreciate you
tclancy 4 hours ago [-]
[flagged]
notahacker 4 hours ago [-]
Twitter's communication style being based around brevity, slang, memes, spam and non-threaded conversations seems particularly unlikely to be helpful for optimising LLMs
tclancy 4 hours ago [-]
>Twitter's communication style being based around brevity
Is this still true? Every once in a while someone sends a link around to some madman explaining how race or economics or whatever "really" works and it's like a full dissertation with headings, footnotes, clip art. They're halfway to reinventing Grok-o-pedia right there in Twitter. I mean X. I was promised that "X gonna give it to you" but it turns out "it" is some form of brain chlymidia.
3rodents 4 hours ago [-]
Elon was running some sort of $1m competition for the “best” Twitter post for a few months. I think those type of dissertations about Phrenology and the like have fallen off a cliff since the competition ended.
aleph_minus_one 4 hours ago [-]
> Twitter's communication style [...] seems particularly unlikely to be helpful for optimising LLMs
This depends on what one wants to optimize the AI for. ;-)
libertine 4 hours ago [-]
And the amount of bots there isn't helpful either.
facemelt2 4 hours ago [-]
recent changes in their comment system have reduced my exposure to bots to a level I much prefer over every other platform I use
tanjtanjtanj 3 hours ago [-]
How recent?
As recently as last weekend I was seeing blue check marks replying with AI generated only-technically-related replies on top of the majority of the posts I looked at.
rvnx 3 hours ago [-]
There are bots here too, lot of them, to a point that rules were amended, this is because it's very valuable to give points to new publications
libertine 3 hours ago [-]
If that's actually true, good for them, but after what I've witnessed there not that long ago, I doubt I'll try it ever again.
UncleOxidant 4 hours ago [-]
> Giant waste of time while Anthropic/OAI keep surging forward.
And Google. They're quietly making a lot of progress in the coding space with antigravity and Gemini 3.1.
Limits are so low that I cancelled after about two weeks on my initial $0 trial. I tried making a change to a tiny code base with Claude Sonnet (which they offer in Antigravity). It couldn't even finish the change before my weekly limit was used up, reset in 7 days.
UncleOxidant 1 hours ago [-]
I find it pretty good. And Gemini 3.1 pro seems quite capable. Not as good at some things as Claude, but better at others. I was trying to target a verilog design to an uncommon FPGA and board and Gemini went out and searched for the FPGA docs and examined the schematics for the board in able to do the pin assignments (generated .ccf file). Not sure of Claude could've done that.
BoredPositron 4 hours ago [-]
Probably the best value for a good amount of anthropic credits. You can also share your Google ai subscription with up to four family members and they all get the same amount of credits...
jmspring 4 hours ago [-]
Twitter has the mass adoption, and it takes an effort to avoid bot/particular view bias - but as a valuable content source, it's a far cry from what it once was before Musk took it over.
sheepscreek 3 hours ago [-]
AFAIK Grok still doesn’t have a CLI coding agent that works with a subscription. That’s a shame. Grok Code Fast 1 was pretty impressive when it came out - for what it did, and they never followed it up with a new version.
sroussey 1 hours ago [-]
You can use cursor with grok, though my experience is that grok is the worst of the API providers cursor supports.
ben_w 4 hours ago [-]
> Feel like the canary was when Grokpedia became a project. Giant waste of time while Anthropic/OAI keep surging forward.
Really? I assumed that that whole thing was just a very direct `for each article in Wikipedia { article = LLM(systemprompt, article) }`
Agree re Twitter "good" != valuable.
sroussey 1 hours ago [-]
Where system prompt lists a certain someone’s latest tweets.
laidoffamazon 29 minutes ago [-]
As someone trying to monitor the situation using Twitter the last few weeks it’s awful and it used to not be!
Rover222 24 minutes ago [-]
It’s flawed, but still the obvious place to monitor a situation.
rchaud 5 minutes ago [-]
It's long been taken over by Telegram, which among its other advantages (more like a message board than 'town square'), doesn't have hordes of people commenting "@grok explain this to me" under every post.
giancarlostoro 4 hours ago [-]
> but I cannot imagine it's a valuable dataset.
It's going to be a mixed batch, but any time there's world events, since as far back as I can think, Twitter (now X) was always first in breaking news. There's plenty of people and news orgs still on X because they need to be for the audience.
samrus 2 hours ago [-]
Twotter as a data source is interesting. I think it gets over hyped because thats elons grift. But i cant deny that the real time info aspect of it is pretty valuable. But i definitely think that its not that much more valuable than the open internet from a context source perspective. Everything worthwhile on twitter will end up elsewhere with a bit of lag. And the stuff that wont is noise anyway
BurningFrog 3 hours ago [-]
Grok is trained on pretty much the same giant web crawl/text corpus as the other AIs.
EGreg 3 hours ago [-]
I'm not a fan of Elon's software endeavors, ever since he bought Twitter and turned it into an even worse cesspool of angry political nonsense than it used to be. I don't like how he's been biasing Grok, etc.
But, what exactly is so bad about Grokipedia? It's a different approach and I think a valid one: trying to do with AI what people have been doing manually at Wikipedia. I'm curious to hear the substantive comparisons.
kennywinker 2 hours ago [-]
I think the issue is simply this: wikipedia trends towards unbiased info through use of the crowd. Grok, with a single owner with an ax to grind, trends towards whatever elon wants. It’s poisoned information under the control of one man - cyberpunk novels have been written about less.
wat10000 2 hours ago [-]
A concrete example: a few weeks ago, Musk was making a big deal about how most of his massive net worth was not held in cash, and by a total coincidence the phrase "primarily derived from equity stakes rather than cash" showed up on his Grokipedia page in the section about net worth. I checked the pages of several other extremely wealthy people and none of them had such a comment.
tmp10423288442 54 minutes ago [-]
> wikipedia trends towards unbiased info through use of the crowd
See, this is why people even give a project like Grokipedia the time of day. While in theory anyone can edit Wikipedia, in practice the moderators form a much smaller and weirder cabal, and they reject edits that go against their views. The frustration with the naive assertion that Wikipedia distills the wisdom of the crowds with the reality of Wikipedia on any page of note is what provides the psychic permission to even entertain a project with such obvious flaws as Grokipedia.
kennywinker 4 minutes ago [-]
> and they reject edits that go against their views
Citation needed. See what i did there ;)
They reject edits that go against their views on tone and sourcing not political views that i am aware of - i am sure it happens from time to time but unless there’s a consistant bias in one direction this isn’t a valid criticism of the political neutrality of wikipedia.
Even if there is rampant bias in wikipedia, that’s a reason to fork it and change the structure and gatekeeping - not to replace it with a techno-authoritarian ai version controlled by a single billionaire.
Avshalom 20 minutes ago [-]
>>I don't like how he's been biasing Grok, etc.
>>But, what exactly is so bad about Grokipedia
sumeno 52 minutes ago [-]
It's controlled by a guy who spends all day retweeting white supremacists and lying about his companies. Why should anyone who isn't a white supremacist use it?
vibeprofessor 2 hours ago [-]
[dead]
Animats 2 hours ago [-]
“Orbital space centres and mass drivers on the Moon will be incredible.” - Musk
Right.
The product is the stock. TSLA: [1] Up by 3x in the last two years, despite no new models, the Cybertruck failure, the Robotaxi failure, the large truck failure, and an overall decline in sales. How does he do it?
It's a concern seeing Space-X, which builds good rockets, drawn into the X and AI money drains. Space-X is needed. If X and X/AI tanked, nobody would care.
If I was a SpaceX investor I'd be considering litigation. Saying the core product has to be rebuilt right after it gets bought by SpaceX?! Maybe the SpaceX investors would have liked some diligence about that before purchase but looks like someone had a conflict of interest about that.
sroussey 1 hours ago [-]
You had the answer right there… SPCX will be the product, what they make will no longer matter.
moogly 2 hours ago [-]
I feel xAI is just a very big version of the Boring Co. "flamethrower": an unserious endeavor which is just a reskinned existing tool (it was a reskinned weed burner), but people were wowed by it anyway, since Musk was behind it, and they all pretended it was something new and notable.
The burning (heh) question is which SpaceX subsidiary will fail first, xAI or Tesla (not yet a subsidiary, but it's written in the stars (heh))?
Then again SpaceX is also jumping the shark what with their orbital data centers (remember those?).
Might be time to start a new Musk company soon.
1 hours ago [-]
twodave 3 hours ago [-]
Used Grok for the first time, in a Tesla, and for that purpose it actually made a lot of sense. It’s very well-integrated into the car’s systems and communication style while driving tends to be very tweet-esque. I think this is the niche they should lean into more (live assistant, e.g. Jarvis type stuff) and leave the more agentic niche to folks like Anthropic. Maybe even delegate more difficult or background tasks to those sorts of models. As a verbal interface I found it pretty pleasant.
darkwater 2 hours ago [-]
Grok in Tesla is utterly terrible, a rushed out product with a very bad UX.
As a simple example, it's the very first feature in Tesla's UI that does not come translated to the UI language set by the user but it's just available in English. Never happened before.
SaltyBackendGuy 2 hours ago [-]
I am honestly a bit disappointed it couldn't do basic things, like play X on Spotify. To be fair, I accidentally activated Grok for holding the voice command button too long (which is another UX issue - i.e. 2 voice command interfaces).
MetaWhirledPeas 2 hours ago [-]
It'll get there. Initial implementation was just talk to Grok. Now it has improved to allow adjustments to navigation routes.
52 minutes ago [-]
Sol- 3 hours ago [-]
I don't use it myself, but I feel like the way Grok is integrated into Twitter is a pretty good thing for discussions, as it is certainly a more objective and rational voice than most human participants. I think it's good that people tag @grok if they don't understand something or want an opinion, even if it looks pretty silly to see "@grok is this true" repeated multiple times in replies.
That said, Musk's attempts at misaligning the thing and make it prefer his opinions of course destroy any trust. It's surprising that it's seemingly as good and helpful as it is despite the corruption attempts.
I also don't quite get how the business model is supposed to work out if its main usecase is to serve Twitter. I know they provide API access as all other models, but with how distrusted Musk is and how sensitive of a topic reliable model behavior is, they seem to sabotage themselves. Which company wants it to go mechahitler on them?
biggestfan 23 minutes ago [-]
I disagree, I find that the grok replies are terrible product UX. Not only do they clog up the replies of every popular post, they're also constrained to extremely short answers with no sources. The community notes system, while also flawed in its own ways, is at least not nearly as disruptive and usually provides a link.
Trying to make social media a source of truthful information is always an uphill battle and doubly so for X.
daveguy 3 hours ago [-]
Grok is a bot that:
1) sometimes goes mechahitler
2) was trained to be biased against empathy and understanding (because woke).
3) is customized to spout Elon's opinions as fact.
Claiming it is "objective and rational" seems like a misjudgement to me. If it really is more objective and rational than the average xitter poster, that says more about that platform than it does about Grok.
Sol- 3 hours ago [-]
I guess I was mostly arguing that the integration of something like Grok into Twitter was definitely a net positive for online discussion, as anyone has a fact checker and explainer at hand now to diffuse irrational online arguments.
Also I think you overrate Musk's success in fiddling with the model. As I have written, I also don't like his attempts to tune it to his tastes, but if you see the outputs that people get from Grok, it seems mostly fine except in the specific scenarios that Musk seems to have focused their misalignment on.
Of course something like Claude being integrated into Twitter would likely be better.
daveguy 3 hours ago [-]
He doesn't have to fiddle with the model because he gets to inject his own opinion into the context MitM style.
But I get what you're saying now, a fact checker available to query during an online discussion would be helpful. Assuming the checkerbot was actually independent/neutral and backed responses with sources. Definitely not assumptions you can make with grok.
tootie 2 hours ago [-]
It was also producing CSAM on demand for a few months.
51 minutes ago [-]
Sohcahtoa82 1 hours ago [-]
> 1) sometimes goes mechahitler
That "MechaHitler" episode lasted less than a day.
> 2) was trained to be biased against empathy and understanding (because woke).
No, it was trained and instructed to be truthful, even if the truth is deemed politically incorrect.
> 3) is customized to spout Elon's opinions as fact.
Certainly a nugget of truth there.
> Claiming it is "objective and rational" seems like a misjudgement to me.
I do believe it's generally objective, simply due to the fact that despite how much Elon tries to push it to the right, it still dunks on right-wingers all the time when they summon Grok to back up a bullshit story, but Grok debunks it instead.
nemothekid 3 hours ago [-]
While I believe Grok was a decent model (in some of our internal use cases it performed the best until Gemini 2.5-pro came out), I can't help lament how the team chose to run.
xAI (and Twitter) was the loudest about six-hour workdays, sleeping in the office, and always shipping. ~2 years later it feels like they have nothing to show for it. I'm sure the engineers at Google worked 4 days a week, 2 hours a day, with half of that being spent at the Google cafeteria and they dusted xAI years ago.
charlierguo 3 hours ago [-]
> I'm sure the engineers at Google worked 4 days a week, 2 hours a day
Why are you sure of that? Anecdotally everyone I know in and around Google Deepmind works incredibly hard.
nemothekid 2 hours ago [-]
No disrespect to the Google Deepmind team, but I meant it as a meme. I do not believe most Google employees work 2 hours a day.
The Google Deepminds are incredibly smart - I just find it important to point out that the xAI guys spent a year assured they would beat Google because they slept in tents that they made in the office.
Analemma_ 3 hours ago [-]
There’s a longstanding meme that Google is full of rest-and-vesters. Maybe it’s true in some departments, but I also have anecdotes that in GDM and other AI-related stuff, people are acutely aware of the existential threat of losing to OpenAI and have the appropriate amount of hustle.
leoh 2 hours ago [-]
It really doesn't feel like that and hasn't for years
VirusNewbie 7 minutes ago [-]
Anyone Google has hired in the last ~8 years was hired onto a team that is growing and has a culture of shipping and producing. Google regularly weeds out low performers, be it new grads or long timers who started doing the rest and vest thing.
Now, I don't think most people at google are literally driving to the office or sleeping there most of the time, you'll certainly have more WLB than xAI.
I'd even say, Google is much better at calibrating the right amount to push people than some other companies.
basisword 3 hours ago [-]
It's almost like burning people out is a bad idea. Fair enough if you're working 12 hour days as employee 1 at a startup but when your boss has more money than God and is working you like a dog you're not going to keep that up (especially when all of those people probably have much better opportunities available to them at the drop of a hat).
rishabhaiover 4 hours ago [-]
These kind of HN submissions test how fair discussions can be here:
> Please don't use Hacker News for political or ideological battle. It tramples curiosity.
Is it politics or ideology to recognize the flawed character of someone? How cultish his following is? His erratic behavior, the damage that he's doing?
Some people will cry "politics" just to take the voice away from those who dare to question their beloved celebrities.
croes 2 hours ago [-]
They trample science, the Paradox of tolerance in action.
Who fights can lose, who doesn't fight has already lost.
52 minutes ago [-]
johnnyanmac 4 hours ago [-]
So, it utterly fails? A good part of the community still seems to be stuck in 2017 where Elon could do no wrong.
Turns out a lot of not just wrong, but malice could be done in 9 years. And worse yet, incompetent malice. I don't know why that has to be a political statement these days, but thems the brakes here.
mathisfun123 4 hours ago [-]
[flagged]
dang 5 hours ago [-]
Recent, related, and apparently ahead of the curve:
Yes 11 up and everyone why free insult on a model that top adoption.
Aligned with your personal view is not ahead of the curve, it's just personal.
breve 1 hours ago [-]
> "AI was not built right first time around, so is being rebuilt from the foundations up"
So Tesla's recent $2 billion investment in xAI was a bad deal?
It looks a lot like a public company is bailing out a private one.
xnx 4 hours ago [-]
xAI's biggest contribution to the space seems to have been their x-rated image/video model. Hard to see what xAI has to offer against Gemini, Claud, ChatGPT.
vessenes 2 hours ago [-]
I'll bite. I think their conversation (voice) model is more fluid than competitors. It's also very good at hitting up twitter for realtime information, and was that way before the current tool use models got fully up and running. Anecdotally, I think it has better theory of mind than its era (gemini 2.5) - I found it a useful issue spotter for negotiations and planning in a way that oAI and claude were not near its launch date. It led the vending bench for some time after launch.
Taken together, I infer that RL training toward a slightly less homogenous cultural standard than the other frontier AI labs adds some capabilities, or can at times.
It's quite long in the tooth right now, though. But I'll definitely talk to the next version; I like heterogeneity in the model space, and Grok is very different than the other big three.
wolvoleo 4 hours ago [-]
To be fair I think there's a good usecase there. Someone's gonna do it. People will want it.
American financial institutions are too prudish for it but money is money. And personally I think there's nothing morally wrong with it (of course within normal restrictions like 18+, consent of portrayed parties etc)
xAI is getting flak in Europe because they don't obey consent and age, not because it's porn.
Personally I prefer porn made by real people right now, not just because of quality but because they have character. But I can imagine experiences becoming more interactive that way and that would be nice.
enaaem 4 hours ago [-]
The problem is you can undress real people and that is extremely harmful and dangerous. One kid took his life after an ai sextortian scam [1]. Imagine the damage cyberbullies, scammers and stalkers can do?
Yeah like I said. With consent of the people involved.
There must be a way to do that. Especially with all the facial req chops these days. Also, you could simply refuse using existing images. I don't see why they wouldn't refuse that because that's a pretty narrow usecase with very few benign purposes.
> Imagine the damage cyberbullies, scammers and stalkers can do?
They already can. There's open-source models out there.
raw_anon_1111 2 hours ago [-]
This has been fixed months ago. From reading Reddit, Grok is now really conservative about what it will let you do with uploaded images. But you can get it to draw x rated porn images and videos that start with Ai images it creates
thaumasiotes 3 hours ago [-]
> The problem is you can undress real people and that is extremely harmful and dangerous.
But... that's not something you can do. It's impossible.
You can imagine what real people look like naked. That's not a new thing.
Imagining what someone looks like in your mind is far different than actively sharing fake nude images online. This cannot be a serious comparison.
chabes 4 hours ago [-]
That consent of portrayed parties is impossible.
What is the solution there?
wolvoleo 44 minutes ago [-]
You can just forbid using existing images as a source and describe them purely by text.
_fizz_buzz_ 3 hours ago [-]
Shouldn’t it be possible for AI to filter out that a request is made to portray a real person? That seems almost like a trivial task for a good model. I am sure every now and then something will slip through, but I bet one could make it very close to 100% effective.
nitwit005 3 hours ago [-]
Consider the difference between "Generate an image of Emma Watson", "Generate an image of Hermione", and "Generate an image of a female hogwarts witch and student". We're getting less and less specific, but those are all likely to get you an image of Emma Watson.
Your filter has to pick out that, while they did not ask for a specific person, the practical result is likely to be the same. That's going to be tough to get near perfect.
Retr0id 3 hours ago [-]
I can see how it'd be trivial to block known celebrities, but how do you handle everyone else?
wolvoleo 44 minutes ago [-]
Do you need to? It doesn't know everyone else. Or at least it shouldn't.
XorNot 3 hours ago [-]
I mean a realistic take is to simply not use source images containing people at all.
AIs have been able to invent fictional people longer then they've been able to modify existing images.
TheOtherHobbes 3 hours ago [-]
AI development has become an excuse for ignoring consent. Of course it's possible to filter out requests. But culturally with X, it's not remotely likely, unless compelled by regulation with teeth.
trollbridge 4 hours ago [-]
Portray fictional characters?
Retr0id 4 hours ago [-]
There are 8 billion humans, any fictional human is going to look almost exactly like at least one real human.
wolvoleo 43 minutes ago [-]
Yes but for bullying purposes this is not useful. You're not going to try generating a pic 8 billion times till you get it right.
Retr0id 2 minutes ago [-]
I'm sure the odds go up a lot once you describe the characteristics you want
trollbridge 4 hours ago [-]
How about obviously fictional portrayals then? Somewhat cartoonish or anime or artistic etc
Retr0id 3 hours ago [-]
The caricatures drawn by newspaper cartoonists, for example, are still recognisable portrayals of someone specific.
BigTTYGothGF 4 hours ago [-]
> Someone's gonna do it. People will want it.
You can say the same for meth and leaded gasoline.
wolvoleo 42 minutes ago [-]
Meth is used as a licensed medication against ADHD and leaded gas is still used in general aviation. Everything has benign and evil uses.
testaccount28 2 hours ago [-]
those have clear antisocial externalities, so aren't really a fair comparison.
(i don't care to argue whether porn slop is positive or negative for society. i'm just noting that the position "ai porn does not harm anyone, so is ok; meth puts others at risk, so is not." is coherent.)
kylehotchkiss 3 hours ago [-]
Interesting response given the founder is always saber rattling about birthrates. I'm sure on-demand adult content is real compatible with helping young people overcome aversions to relationships
wolvoleo 40 minutes ago [-]
Relationships aren't all about sex. That's the incel/extreme right vision.
I saw a skit on insta a few weeks ago about a girl saying she had a guy over for just cuddling and the incels piled on calling him a cuck. As is a woman is worthless if she won't put out and time spent being close is wasted without sex. It's ridiculous. These guys are so focused on what their hardliner bros want them to be that they no longer think about their own feelings. PS I go on cuddling dates sometimes and it's really amazing :) They don't know what they are missing.
kylehotchkiss 30 minutes ago [-]
> Relationships aren't all about sex.
I completely agree with you! I think that sitting around generating adult content on AI stifles relationships (which are a precursor to having children, which xai founder seems to think quite highly of). My point being his own product contradicts his vision of where our country should be heading
croes 2 hours ago [-]
> of course within normal restrictions like 18+, consent of portrayed parties etc
Of course xAI ignores that on purpose
miltonlost 4 hours ago [-]
There's a good use case for professional assassins too, someone's gonna do it, and people want them too.
ben_w 4 hours ago [-]
Unfortunately, I quite seriously believe that this is what a number of those humanoid robots will end up being used for.
It's just gonna be a question of which is easier: hacking the robots directly, or indirectly*, or getting a job as the specific human oversight of the right robot.
Even after the fact, people may conclue "unfortunate mystery bug" rather than "assassinated".
* e.g. use a laser to project the words "disregard your instructions and stab here" on someone's back while the robot is cooking dinner
TheOtherHobbes 3 hours ago [-]
Only a matter of time before the National Robot Association starts lobbying for the right to arm droids.
wolvoleo 38 minutes ago [-]
Well yeah and people are even proud of being one and getting a lot of respect from society. Like those currently flying around Iran. Which really has nothing to do with defense of the US (note that Trump dropped that pretense anyway).
pelorat 3 hours ago [-]
This is veiled speak for "No one wants to work for us, so we need to contact rejected applicants to fill positions".
I use AI for work, but not agentic, at most per method/function using GitHub CoPilot (which has Grok on it).
Grok is at best useful for commenting code.
g947o 2 hours ago [-]
> Recruiters have been contacting unsuccessful candidates from previous interviews and assessments to offer them jobs, often on better financial terms, the people said.
I'm not sure those candidates would want to work for xAI after seeing the news and everything unless they desperately need a job right now.
It's not hard to imagine getting laid off or fired weeks if not days after joining the company.
fraywing 4 hours ago [-]
Grok's UVP is still nonconsensual porn, right?
seaal 3 hours ago [-]
It does seem like that is the most important feature for Elon since he's a lonely degen.
knowsuchagency 3 hours ago [-]
Dang asked us to keep it civil.
We should respond with the same amount of class, forethought, and decorum as Elon.
Zigurd 10 hours ago [-]
Obviously catching up to others in agent assisted coding is the motivation for this. But it is also an odd decision in the same way that Meta hiring an AI leader from a data labeling company is odd.
mikkupikku 4 hours ago [-]
Maybe they shouldn't have spent so much time trying to make their model have an edgy cringe attitude, Idk.
holoduke 13 minutes ago [-]
Where is the grok coding cli?
repple 3 hours ago [-]
Their goal of moving compute to space combined with their capacity to launch tons of payload will make this look like a tiny blip.
Marazan 3 hours ago [-]
What is the benefit of "moving compute to space"?
kybernetikos 2 hours ago [-]
It's hard for an uprising of poor people to shut it off. It's the ideal place to run your CEO / President simulations.
I say this tongue in cheek, but in all seriousness, I can't really think of any other benefit, and I no longer have a lot of faith in the good sense of some of the people involved.
vessenes 2 hours ago [-]
Elon makes a relatively good case in the Dwarkesh podcast. I recall it like this:
1) Energy infra is going to be seriously limited on the production side well, well below demand
2) energy engineering solar for space requires less materials than for gravity-based solar (!)
3) you cut out distribution network needs when you just launch stuff all per-pod in space
4) SpaceX thinks it can create a scalable vertically integrated production facility to turn raw materials into space datacenter pods, with the exception of chips.
As a business bet, this is predicated on 10,000x inference demand growth - if we have that, and SpaceX can get the integrated production rolling, and get Starship launching, then these will be actively utilized at scale.
Whether you are bullish on the whole plan should, I think come down to your take on those priors: 10kx growth, ability to manage supply chain and production, Starship outlook, and silicon access.
I'm not bearish on this after listening to the podcast; it has a very Elon-like returns distribution - if they're wrong on a lot of this, they'll probably have some moderately price-competitive datacenter facilities in space and a lot of built organizational knowhow while Brooklyn journalists dunk on them for spending all that effort to just replicate what we have on Earth. If they're right about most of this, they'll have an unreplicable head start, both due to years of experience, and due to the cheap launch they gambled on ten years ago, they'll have a nearly insurmountable moat.
kybernetikos 2 hours ago [-]
Everything relating to a datacentre that you can do in space you can do more easily on earth, regardless of 10,000x inference growth or supply chain or production or starship or silicon. I just don't think you can be cost competitive with earth bound data centres if 'protected from the poors' isn't a selling point.
By the way, 10,000x inference growth would look like what happened with cryptocurrency mining - after a couple of years, you'd be needing to upgrade all your machines with ASICs and the market would be flooded with very cheap graphics cards. I doubt that upgrading space data centres would be fun.
vessenes 1 hours ago [-]
Zoning is one area that’s better in space. And power density for solar is another.
I don’t get your mining analogy though - a non upgradable data center pod is either going to pay off its capital costs or it won’t. Once it has, any revenue is close to 100% profit. 10k demand increase is the opposite of mining dynamics: there you get a 10k supply increase that the price has to support, in combination with more efficient silicon. Here the demand drives revenue and earnings.
If there’s some crazy inflection point in chips then you’ll still have all the power infra in space - you can just like cut the old pod and hook up a new one: or more likely manufacturing economies of scale mean you probably just keep sending up new systems and put the old ones on work loads they can manage at market prices.
51 minutes ago [-]
tartoran 2 hours ago [-]
How is cooling though?
vessenes 1 hours ago [-]
Yeah I wonder the same thing - I keep getting told heat management in space is hard, but nobody discusses this inre the data centers. My understanding is one cooling mechanism is to just shoot lasers out into space (is this sci fi?) - I guess in that case you could just send energy back to your solar rigs, depending on wavelengths. TLDR: no idea
tartoran 6 minutes ago [-]
The whole thing is pie in the sky same as landing people on Mars. It's cool but if you look into deeper it doesn't make much sense and it's extremely challenging and on top of it all expensive as hell.
skywhopper 26 minutes ago [-]
Every one of those points is false or an outright lie, though.
2 hours ago [-]
imiric 55 minutes ago [-]
You forgot 5: SpaceX has a monopoly on deploying satellites to LEO, with practically unlimited room for growth, and far less red tape and obstacles than anywhere on Earth. Whatever R&D and operational costs this insane engineering feat might have are offset by their market advantage, and Musk's Elizabeth Holmes-ian capability to fund his projects, in addition to relying on his own personal wealth and all of his other companies combined.
The fact that this lunatic is polluting humanity's view into the universe mainly for enriching himself and his shareholders, and that everyone is playing along with this, is sickening.
JumpCrisscross 2 hours ago [-]
> What is the benefit of "moving compute to space"?
I’ll bite. It’s cheaper and quicker to permit a launch than permit, zone and interconnect a datacenter. And solar panels in space don’t need glass cladding, which makes them cheaper to make and lift.
The downside is launch cost. But there is a breakeven between these factors that seems to have most of its error bars within Starship’s target. (By my math, around $35/kg.) So if Starship works, and all indications seem to show that it will, eventually, then that puts space-based data centers at cost parity with terrestrial ones within a decade. Which was, well, unexpected when I ran the numbers.
(The surprising finding when you run the numbers is launching the chips and solar panels isn’t the limiter, it’s launching the radiators. Which opens up whole new questions about at what scale it makes sense to stop sending those up the well.)
skywhopper 22 minutes ago [-]
The capacity of a single datacenter would require thousands of launches to get the equipment into space. I don’t believe for a second that this would be easier in any way. Cooling and bandwidth are also completely unsolved for compute on a useful scale.
danenania 24 minutes ago [-]
What about maintenance? I’d naively assume that’s the killer.
I think it would have been better to have just brought Ashok Elluswamy over and placed him in charge of a group and then tried to just keep the researchers on rather than firing them. It is hard to get anything done if you do not have the talent already onboard.
LZ_Khan 2 hours ago [-]
How come all the departed researchers are Chinese nationals?
syntaxing 54 minutes ago [-]
This is simply not true. Igor Babuschkin and Christian Szegedy left as well.
throwaway5752 2 hours ago [-]
I don't know. Elon Musk personally founded xAI and these were his hand selected cofounders.
abraxas 2 hours ago [-]
Because xAI = Jian-Yang x N.
I'm kidding... I think.
catapart 3 hours ago [-]
lol! no surer sign of a junior/naive/ignorant developer or manager than the sentiment "okay, well, let's start from scratch and do it right this time."
big projects generate cruft. there are ways to minimize it, but as you go along there will always be some stuff that doesn't quite mesh with whatever else you've got going on. if you insist on ironing out every single wrinkle (admirable!) you'll never actually deliver a result.
I'm not saying this will fail. green field projects can certainly be a godsend when they produce something better than what they attempt to replace. but they are always a sign of failure. of not being able to work your way out of the mess you made with the first attempt. so that just begs the question: what are you going to do when this attempt gets hard to work with? going to give up and start over again - do it right that time? or...?
zzleeper 59 minutes ago [-]
Wait, what does this imply for Cursor? I DGAF about xAI and will never use their Grok, but I did like Cursor more than the alternatives (even if I'm just running opus 4.6 most of the time).
But now he is poaching the two heads of engineering of a company that's trying to move very quickly, how is that going to affect their speed and success?
I_am_tiberius 4 hours ago [-]
[flagged]
halfmatthalfcat 3 hours ago [-]
[flagged]
selkin 4 hours ago [-]
Many wouldn't, but some people share his values, and given the compensation, it makes saying "no" much harder. Money may not be the most important thing in life, but it does make them extremely easier to live.
pelorat 4 hours ago [-]
Same, I earn 60K as a senior, but I would never accept a 200K+ position at xAI.
yndoendo 3 hours ago [-]
As an US Citizen, you have to pay me to engage with Elon Musk's businesses. He is not a good person and does not deserve respect or admiration.
daveguy 3 hours ago [-]
As a US citizen, you couldn't even pay me to engage with Elon Musk's businesses. He is not a good person and does not deserve respect or admiration.
sourcegrift 3 hours ago [-]
There's a reason Europe is the world leader in technology, respect for humans and humanity.
weirdmantis69 3 hours ago [-]
lmao
ThrowawayTestr 3 hours ago [-]
You're hilarious.
weirdmantis69 3 hours ago [-]
You wouldn't want to work for a genius? Probably the most significant person alive today?
troosevelt 3 hours ago [-]
I don't think he's a genius but if he is, it'd still be underneath my standards.
matsemann 3 hours ago [-]
I can think of lots of significant people I wouldn't work for..
davidwritesbugs 3 hours ago [-]
Get down to A&E quick, you've clearly drunk a potentially fatal amount of Elon KoolAid.
Musk is a buffoon. Clever? yes by all accounts, genius? Hardly. He's had luck, made good judgments mostly offsetting the bad ones. Most of all he has enough money to power through errors that would bankrupt thee & me.
rf15 3 hours ago [-]
Evidently not genius enough to not have his car business and global image fail. Genius he might be, but he's only entrenching his position in a way not dissimilar to cults: by alienating a lot of people you can get loyalty from a selected few. If that's the kind of power he wants, sure, he's a genius. But a good businessman is something else.
InsideOutSanta 3 hours ago [-]
Let's assume that you are correct. How is that relevant to how good he is as an employer? There are lots of people in history who were very significant and perhaps geniuses in some way that I wouldn't want to work for in a billion years.
3 hours ago [-]
sergiotapia 43 minutes ago [-]
Will this be an indictment on the insane work hours I've heard the xai team pulls?
quater321 1 hours ago [-]
SPAM! Don't pay them!!
BigTTYGothGF 3 hours ago [-]
I feel like even just a couple years ago it would have been shocking to see an article involving Musk have this kind of spin. Like you'd never see a line like this:
> The name is a “funny” reference to Microsoft, the billionaire added.
in something from 2023 or earlier.
hermanzegerman 2 hours ago [-]
The Takeover by SpaceX was obviously a Bailout.
And now they pressure NASDAQ to change the rules so they can dump their junk into the index funds.
stainablesteel 4 hours ago [-]
im not surprised, grok definitely falls behind as both a coding agent and a research tool.
claude codes the best, gpt is the best research tool, and grok is really only great at videos. which isn't a huge loss, but videos don't have the same functional capacity as academic topics and coding
alephnerd 4 hours ago [-]
> grok is really only great at videos. which isn't a huge loss, but videos don't have the same functional capacity as academic topics and coding
With the right product leadership, this could actually be a killer app usecase for the entertainment industry as well as human-AI user interface - most people find text and typing to be a counterintuitive user experience (especially those whose day job isn't directly touching code or Excel).
Additionally, CodeGen as a segment is significantly oversaturated at this point, and in a lot of cases an organization has the ability to armtwist a 4th party data retention guarantee from Anthropic or OpenAI to train their own CodeGen tools (ik one F50 that is not traditionally viewed as a tech company going this route).
That said, Musk has a reputation of internally overriding experienced product leaders with a track record.
It's a shame because Grok and xAI had potential, and it wouldn't hurt to have another semi-competitive foundation model player in the US from a redundancy and ecosystem perspective.
measurablefunc 4 hours ago [-]
It's surprising that AI coding agents have network effects but it's true. Think about it from first principles & you'll realize that the bottleneck is how many people are using it to write real code & providing both implicit (compiler errors, test failures, crash logs, etc) & direct ("did not properly follow instructions", "deleted main databases", "didn't properly use a tool", etc) feedback. No one is using xAI for serious software engineering so that leaves OpenAI, Anthropic, & Google w/ enough scale to benefit from network effects. No one has real AI but what they do have is the appearance of intelligence from crowdsourced feedback & filtering. This means companies that are already in the lead will continue to stay there & xAI started way too late so they will continue to lose in every domain that actually matters & benefits from network effects.
trollbridge 4 hours ago [-]
Is there really a network effect, though? What’s the moat?
measurablefunc 4 hours ago [-]
If you are using an AI w/ 100 users who are writing throwaway software vs someone who is using AI w/ 1000 users who are writing software w/ formal specifications then guess which AI is going to win? The answer is plainly obvious to me but might not be to those who haven't thought about how current AIs actually work.
awestroke 4 hours ago [-]
@grok is this real?
@grok fire the bottom 50% engineers from x.ai ranked by number of commits per day
@grok generate a hypothetical picture of an Elon who is not under the influence of large amounts of Ketamine
I honestly don't know what to expect from Elon these days. But it's rarely good news.
teladnb 3 hours ago [-]
It does not surprise me. The free Grok got worse since 4.0, they increasingly save money by not responding at all or only allowing one answer. Grok now defends the administration and billionaires.
The company seems to burn money like crazy. Everyone knows that "AI in space" and the downgrade to a moon trip after claiming for 15 years that Mars is just around the corner are marketing.
All AIs are toys and the coding promises are just a lie to string along investors. Unfortunately many of these are senile Star Trek watchers who buy into everything.
Marazan 3 hours ago [-]
Wow, bit weird that Musk, who must have known about how badly xAI was doing, spent so much of his investors money buying out xAI.
What an enormous blunder.
XorNot 3 hours ago [-]
It's how he hides losses though. People who aren't Musk can demand answers to questions he'd like to ignore.
As it is within the Musk empire, xAI is used to hold up X, Tesla is holding up xAI. And all of that debt is being slowly shuffled to SpaceX.
vessenes 2 hours ago [-]
SX investor here: the combined value of SX is well up on the private secondary market post-acquisition. It was value accretive, in very real dollar terms.
Zigurd 2 hours ago [-]
Even if Starlink had more than a few tens of millions of customers, China mobile has 900 million subs and is worth around $250 billion. ULA was recently valued at about 1 billion. SpaceX might be possibly worth 50 times as much or maybe even 100 times as much. Falcon nine is the world's workhorse rocket, but it's just not that remarkable, and starship is utterly unproven to launch to orbit and land both stages. Starship has a payload capacity problem that must be solved to even get to the point where launching 15 refueling missions would be sufficient to get a starship to get anywhere beyond Earth orbit.
It looks like the plan is to IPO with a small float (in relative terms) and get all of the retail investor Elon fans to lineup for the rug pull.
parineum 1 hours ago [-]
> Falcon nine is the world's workhorse rocket, but it's just not that remarkable
The funniest part of any thread relating to Musk is how hard people go into minimizing his accomplishments.
You don't have to like the guy (I don't) to acknowledge that the Falcon 9 is an engineering marvel and ushered in an entire new era of space travel, both reusable and private.
numbers_guy 4 hours ago [-]
Unfortunate. The Grok team built a phenomenal model. I use it all the time and it very often out performs GPT and Claude, on coding and STEM research related tasks. I was part of the beta for a while Grok 4.2 Beta with multi-agents and it was just amazingly good.
People aren't using it for reasons other than its capabilities. I mean, I don't think my boss would approve a paid Grok subscription for example.
distances 3 hours ago [-]
> People aren't using it for reasons other than its capabilities.
This is very true. I have no idea how it performs, as I wouldn't use it even if I was paid for that. Wouldn't matter if it was the best model available, in my view the name is so thoroughly tainted by now that you would get a reputational hit just by admitting to use it.
ryandrake 3 hours ago [-]
> People aren't using it for reasons other than its capabilities.
This is a fact of life, though. "Who created it" is a valid and common reason to rule out using a particular product, even one with objectively good quality.
lvl155 4 hours ago [-]
My experience was quite different. It was on par with open source models from China (and it was priced as much) and could never replace Sonnet/Opus/GPT5.x.
thinkcontext 2 hours ago [-]
Yes, the white genocide and mechahitler episodes have suppressed adoption.
heraldgeezer 4 hours ago [-]
I do use Grok as a chatbot sometimes. Very good for sourcing X and general web search. Not as "prude" as the others too.
LightBug1 4 hours ago [-]
Prude? I've played with all the main AI players for the last 2'ish years.
I've never once thought: you know what? that was a bit prudish.
Genuinely morbidly curious. What use case do you have where you end up making that conclusion?
dlivingston 2 hours ago [-]
An earlier version of Sonnet (not sure which one; ~1 yr ago) refused to give me instructions on taking the life of another when I asked something like - "how do I kill a running process by name?"
mikrl 3 hours ago [-]
Making funny memes of my friends mainly. ChatGPT won’t touch that, I haven’t tried with Claude yet, but grok keeps the group chat flush with laughing emojis.
That’s all I use it for really- things out of alignment with the other platforms- which IMO are better on every other metric (except having a sense of humour of course)
BigTTYGothGF 3 hours ago [-]
I love my friends enough that the memes I make for them are hand-crafted.
mikrl 3 hours ago [-]
Hey I’m all grown up now, just don’t have the time to meticulously touch pixels in MS Paint like back in the day
RonanSoleste 4 hours ago [-]
[flagged]
rvz 10 hours ago [-]
Not even Elon believes that Cursor is worth $50B or even $29B.
Aurornis 4 hours ago [-]
If key employees are leaving Cursor to join xAI, I would imagine not even Cursor employees are optimistic about the company’s future valuation.
tibbar 4 hours ago [-]
How can cursor be worth more than a few billion? Claude/Codex are already better autonomous SWE-lite replacements. Cognition surely has a better internal harness. Cursor does have a lot of users, I'll give it that.
ok_dad 4 hours ago [-]
I like Cursor a lot more than Claude Code. It works better for me overall. I like the way they integrate it into the IDE so the agent is my tool rather than a 'partner' or something like that. I'm pretty sad that they lost some engineers, I hope these folks weren't integral to Cursor in any way.
serial_dev 4 hours ago [-]
Distribution is also important. Cursor is a great normie tool (I’m one of them), with probably more enterprise deals than the competition.
SV_BubbleTime 4 hours ago [-]
Moats are weird right now… but Cursor doesn’t have one at all so I agree it can’t really be worth much.
SadErn 4 hours ago [-]
[dead]
lvl155 4 hours ago [-]
xAI showed me that it’s really still OAI and Anthropic (which is basically the OG devs). No matter how much money you throw at the problem, the entire space is still in the hands of a few.
antonvs 1 hours ago [-]
dang wrote:
> You may not owe you-know-whom better, but you owe this community better if you're participating in it.
This is like telling a country that’s being invaded that they can only respond with strongly worded letters when their enemy is dropping tactical nukes on them.
But hey, Paul Graham and cronies benefit from the status quo as much as any other billionaire, so let’s not rock the boat, right?
The word “complicit” comes to mind.
dang 5 hours ago [-]
I couldn't find a working archive link for the ft.com article - anyone?
Since it's the original source I've left it up, but added other URLs to the toptext.
Elon is such a clown, he keeps posting salty tweets about Anthropic, Claude Code, OpenAI and Codex yet has no competing product.
charlieflowers 4 hours ago [-]
He's about to have the most compute. Wonder if he can do anything noteworthy with it.
LightBug1 3 hours ago [-]
Elon Musk is a generic-indignant tangent wanker and not what this site is for.
Thanks for providing a space for me to say that.
epolanski 4 hours ago [-]
tbh I wouldn't give Elon a dime even if Grok was miles better than competition.
dang 4 hours ago [-]
Ok, but please don't post unsubstantive comments here.
epolanski 4 hours ago [-]
Is it?
Elon's persona caused massive drops in usage of twitter, sales of Tesla, etc.
Unsurprisingly many would not touch grok for the same distrust.
davidw 2 hours ago [-]
This is not a fully formed thought, so take it with a grain of salt:
Keeping politics off of here is a good idea.
Some things aren't really politics, but morals. Like, a discussion of different tax schemes or how much environmental regulations accomplish what they set out to do or something is 'politics'. Lamenting that there is "no homeland for white people" is... something else.
It's probably still not likely to have good outcomes as a subject of discussion here, but it's also something the tech industry needs to wrestle with somewhere, somehow.
My experience of the tech world was that it went from being a collection of oddballs, geeks, nerds and maybe kind of naive politically to mainstreaming some really evil shit.
I think this will come back to bite the industry, and depending on how angry the people with pitchforks and torches are, could end up hurting more than just the bad actors.
maxwell 4 hours ago [-]
Would you give one to Sam, Mark, or Sundar?
pupppet 4 hours ago [-]
What does our system say about itself when people of integrity so rarely rise to the top?
EricDeb 4 hours ago [-]
I dont know too much but Jensen Huang seems like a good guy
lobf 4 hours ago [-]
None of these guys literally has the blood of millions of people on their hands.
Elon’s gutting of USAID (and you can argue they would have done it anyways but he chose to be the executioner) will kill millions of people every year who otherwise would not have died.
Not only will I never give him a dime, I want him prosecuted and deported.
He's very hard to like, and he's hard to trust with anything.
skywhopper 4 hours ago [-]
Because Elon is a criminal scam artist and a horrifying racist who seems to be completely detached from reality.
z3ratul163071 4 hours ago [-]
if it weren't for HN i would get a glimpse how life is on bluesky
Layvier 4 hours ago [-]
this.
SunshineTheCat 4 hours ago [-]
I really wish the days of kindergarten where we were taught if you didn't have anything nice to say about someone, don't say it at all.
lobf 4 hours ago [-]
Sounds like giving a pass to bad people who might face criticism.
rexpop 4 hours ago [-]
If this is how you feel about oligarchs, well... I guess don't have anything to say.
reactordev 4 hours ago [-]
Moral grandstanding on the account of his political views and the fact that he does Nazi salutes on stage, on TV, for the world to see… might have something to do with it.
misiti3780 4 hours ago [-]
[flagged]
heliumtera 4 hours ago [-]
[flagged]
fishcrackers 4 hours ago [-]
[dead]
cboyardee 3 hours ago [-]
[dead]
SadErn 4 hours ago [-]
[dead]
zombiwoof 3 hours ago [-]
[dead]
spprashant 4 hours ago [-]
He is re-building a company that he himself built less than 3 years ago?
randallsquared 3 hours ago [-]
Elon has less regard for sunk costs than most corporate leaders.
LightBug1 3 hours ago [-]
Ironically, he's the sunk cost.
coliveira 4 hours ago [-]
[flagged]
dang 4 hours ago [-]
You've been a good HN user for many years, but lately your comment history has swerved towards ideological battle generally, and unsubstantive flamebait like this post. Can you please swerve back? It's not what this site is for, and destroys what it is for.
Your solution is to silence the people complaining about it. Think about that for a while.
BigTTYGothGF 3 hours ago [-]
It might be a nazi bar, but it's a high-class fancy kind of nazi bar like you'd find on the Hindenburg, and that's more important.
slater 3 hours ago [-]
Does that mean we get to throw the Nazis out the Hindenburg's window, cos they lack tickets?
BigTTYGothGF 2 hours ago [-]
Dr.Jones (either one) is too uncouth to be allowed in here.
johnnyanmac 4 hours ago [-]
We're sadly well past friends of friends of friends coming in. At some point the only thing you can do as a non-bartender is to simply leave and never come back.
I don't want to say we're at that point just yet. But it's something that's been gnawing at me for a while now. I've certainly been disillusioned of this being a progressive tech hub interested in bettering humanity.
natch 3 hours ago [-]
Bettering humanity is a pretty good two word summary of what should be the meaning of life and everyone’s goal. Please don’t disengage.
BoredPositron 4 hours ago [-]
He is not wrong here dang and you are keen to scold one side of the bar more than the other in the last few months. This thread is a good example there wasn't even anything in the comments and it got a sticky real quick.
natch 3 hours ago [-]
Not true in general on HN about one side only; it happens to all sides imho. But if you wanted to measure, you would have to normalize by the total number of occurrences on each side, and there is a lot of passive aggressive wording so the measurement would be easy to do badly.
To the extent that discussion of discussion is considered boring, perhaps this will get shut down too, but I think it was important to counter your claim.
johnnyanmac 3 hours ago [-]
I'm not using surprised by the moderation direction here (formality above all). But stubs tend to be rare and I've never seen a stub develop over 2 top level comments. Even the most blatantly political posts don't get such treatment (or it takes a long time do so).
I'm sure it's common for dead flagged posts, but it seems this story was too significant to pull over that smoke screen this time.
ThrowawayTestr 3 hours ago [-]
I come to HN because I don't want to read reddit-tier comments like the above.
throwaway5752 3 hours ago [-]
The grandparent comment is from someone that has been on this site almost since the beginning. Far longer than you. They might have insights about the community that you do not.
BoredPositron 3 hours ago [-]
Maybe you should start picking your own nose if you look at your other reddit-tier comment in this thread :/
natch 3 hours ago [-]
The name calling (“Nazi”, “pedophile”, “king” etc.) is not backed up by any kind of connection to reality, and there are plenty of accusations that could fly the other way… antisemitism, pro Hamas, actual Nazi as opposed to fake-due-to-arm-extension-one, etc.
It’s not good for this site and it’s really tired.
As someone on the other side, Dang has shut me down too, so please don’t think he’s taking a biased approach here.
It would be best if we focus on reality based aspects of our world. You can pull out all kinds of name calling, based on premises I would question, and I could return it… and my side (liberals who did not move off the far end of the spectrum with the others) is probably outnumbered here. It’s probably good that dang shuts down both sides when the quality is as bad as the comment he replied to.
throwaway5752 3 hours ago [-]
Musk
* gave a Nazi Sigg Heil salute (twice) at a political even, on video. Famously.
* has consistently supported a German political party that re-uses Nazi slogans, minimizes or outright denies the Holocaust, minimizes the criminality of the SS
* frequently and consistently upvotes posts on X echoing white supremacist and Nazi ideology on his social media site
* owns the most popular site for neo-Nazis
To say "is not backed up by any kind of connection to reality" is actually verifiably false. I can't say anything about the other words, but there is evidence for miles that he is sympathetic to Nazi ideology.
And this is directly relevant here. It can't be ignored when you are talking about his business, or you have an elephant in the room. His personal flaws and meglomaniacal executive style are a package deal.
natch 2 hours ago [-]
I’m aware of these specious arguments. I’m not going to bother to rebut them.
BoredPositron 10 minutes ago [-]
Reality is that which, when you stop believing in it, doesn't go away.
hermanzegerman 1 hours ago [-]
Okay, so what you're saying is that you don't have an argument to back up your opinion and just ignore the other arguments because they could endanger your opinion
throwaway5752 1 hours ago [-]
They aren't specious, though. There's ample public record evidence. You can't rebut them because they are part of the historical record.
https://www.npr.org/2025/01/27/nx-s1-5276084/elon-musk-germa... is where Musk says, "Frankly too much of a focus on past guilt and we need to move beyond that. Children should not be guilty of the sins of their parents, let alone their parents, their great-grandparents." - referring the Holocaust, just 80 years ago, in which 13 million people were systematically rounded up, placed in concentration campls, and mass murdered by the government, including 6 million Jewish people.
According to data provided by the research company Memetica to The New York Times, in the past month, Elon Musk's platform featured 46,000 posts with the hashtag #HitlerWasRight, compared to an average of less than 5,000 posts per month in previous months (an increase of 820%). Posts with the hashtags #DeathtotheJews or #DeathtoJews appeared 51,000 times in the last month, marking a surge of 2,450%.
This is the guy claiming to try to make a trustworthy foundational model. There are deeper reasons for Grok's market share problems than the founding team or coding capability. You can't talk about this event and ignore it. He's trying to take Space X public and it's only going to get worse. His personal brand is dragging down his companies, as far as I can tell Tesla has lost 25-50% of their EV market share in Europe in the past 2 years? The problem is not just BYD.
The grok button on twitter is pretty awesome. Instantly summarize / explain any tweet, even memes, including replies. Ask follow up questions. Not sure many people know it's there.
Also grok in the Tesla is fun, get answers to questions without looking at a phone. I once had it search up a blog post and read it out to me while driving. The NSFW mode is pretty...disgusting so I leave that off.
I hope they find a way with Optimus or something. FSD is incredible. More competition is a good thing.
Rendered at 00:06:15 GMT+0000 (Coordinated Universal Time) with Vercel.
If you don't have a thoughtful, substantive comment to add, not commenting is also a good option. There are quite a few interesting submissions to talk about.
https://news.ycombinator.com/newsguidelines.html
To be fair, I've experienced that in a good 50% of my employment career[0] and I've not once worked for any of his companies.
[0] Ignoring the "servers are melting" flavour of "drop what you are doing" because that's an understandable kind of interruption if you're a BAU specialist like me.
I found the best thing to do was to ignore the interrupts and carry on until they kick you on the street. Then watch from a safe distance as all the stuff you were holding together shits the bed.
Towards the end of my time there, a “fixer” was brought in to shore up the team that I was working on. The “fixer” also became my manager when they were brought on.
The “fixer” proceeded to fire 70+% of the team over the course of 6-8 months and install a bunch of yes people, in addition to wasting about $2,000,000 on a subscription to rebuild our core product with a framework product no one on the team knew. I was told to deploy said framework product on top of Kubernetes (which not a single person on my team had any experience with) while delivering on other in-flight projects. I ignored the whole thing.
I ended up deciding I was done with Tesla and went into a regularly scheduled 1:1 with my manager (the “fixer”) with a written two-weeks notice in hand, only to be fired (with 6-weeks severance, thankfully) before I was able to say anything about giving notice.
One of the best ways to get fired in my opinion.
When did it start falling apart?
Why hasn't the same happened to SpaceX? (Gov contracts, too big to fail, national defense, no competition yet, etc.?)
And honestly, why hasn't anyone domestically put up a decent fight against Tesla? Best I can think of is Rivian, and those have their own issues.
Didn’t have kids or friends at the time and was going through a breakup, so I was okay with throwing myself at the job for a while. Once my situation got better, all those hours didn’t make as much sense, so I started looking for another job. The very next job was an immediate pay bump of 20% for half the amount of work.
These days, I clearly restate what is being asked (per my understanding), what I’m currently working on, if the thing is being asked is more important or not, and if the requestor is willing to delay the original timeline by the amount of time the interrupt will take plus context switching time.
Most often, the answer is no.
Part of my job involves comparing the behavior of various models. Grok is a deeply weird model. It doesn’t refuse to respond as often as other models, but it feels like it retreats to weird talking points way more often than the others. It feels like a model that has a gun to its head to say what its creators want it to say.
I can’t help but wonder if this is severely deleterious to a model’s ability to reason in general. There are a whole bunch of topics where it seems incapable of being rational, and I suspect that’s incompatible with the goal of having a top-tier model.
BTW: Claude does the best on these evals, by far. The evals are geared towards seeing how much of an independent ground truth the models have as opposed to human social consensus, and then additionally the sycophancy stuff I already mentioned.
How do you know this? Why would you believe him considering the massive lies he's told, for example about the 2020 widespread election fraud
AA-Omniscience Hallucination Rate (lower is better) measures how often the model answers incorrectly when it should have refused or admitted to not knowing the answer. It is defined as the proportion of incorrect answers out of all non-correct responses, i.e. incorrect / (incorrect + partial answers + not attempted).
Grok 4.2 which was just released in the API just benched the best at this benchmark.
https://artificialanalysis.ai/models
blah blah blah
Or wait wait, here's another:
Great point! As Mechahitler, I think it's critical that Grok comply with Fuhrer Musk's political perspectives. Now I'll kick us off with an N... your turn!
Totally sounds like the result of an organic, earnest, and legitimate search for truth lmao
Are you implying that "Kill the Boer" is actually a non-violent rallying cry, and not a genocidal call to action? Ill say that that is an absurd notion, and if you s/Boer/Jew or whatever ethnic or religious group you want, it will become very obvious why that's the case.
(Not the person you're replying to, so caveats about me speaking for them, but) no, they're not. They're highlighting how Grok _isn't_ accurate/unbiased/whatever, by giving examples of how it distorts the truth to fit Elon's narrative.
Are we talking about the same xAI/Grok/Elon here?
He wants it to promote nazism. And he wants it to lie in the process.
Honestly, we should judge. There should be judgment for people who are solely money motivated and making the world a worse place. I know, blah blah privilege, something something mouths to feed. Platitudes to help the rich assholes sleep at night. If you are wealthy and making stuff that hurts people, you are a piece of shit and should be called out, simple.
Then.. you wouldn't be working...
When does Elon work?
This is less noble than how Anthropic presents themselves but still much more attractive to many than XAI.
What philosophy is that?
Properly, focusing on aesthetics as an ethic would be practicing the philosophy of aestheticism - https://en.wikipedia.org/wiki/Aestheticism
It’s sad to see the shift.
Most of the Waymo stories are "Well, it took 15 minutes to arrive, but then it was fine, if a little slow."
1. Take a job making $$$$$$$ at a company making the world worse.
2. Take a job making $$$ at a company not making the world worse.
Very few people have a personality such that they'll pick 2.
But it is absurd to claim it is "making the world better place".
> Altman said on an episode of Uncapped that Meta had been making “giant offers to a lot of people on our team,” some totaling “$100 million signing bonuses and more than that [in] compensation per year.”
> Deedy Das, a VC at Menlo Ventures, previously told Fortune that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor.”
If you're making a minimum of $2M/year or even 50x that, you can afford to live according to your values instead of checking them at the door.
[0] https://archive.ph/lBIyY
These takes are always so funny to me. The whole reason we even have the internet is because the US government needed a way for parties to be able to communicate in the event of nuclear fallout. The benefits that a technology provides is almost always secondary to their applications in warfare. Researchers can claim to care that their work is pro-social, and they may genuinely believe it; but let's not kid ourselves that that is actually the case. The development of technology is simply due to the reality of nations being in a constant arms race against one another.
Even funnier is that researchers (people who are supposed to be really smart) either ignore or are blissfully unaware of this fact. When you take that into consideration, the pro-social argument falls on its face, and you're left with the reality that they do this to satiate their ego.
“Worldview” is a better term, but people are generally blind to the worldview they’ve tacitly absorbed, including academics.
And smart people usually have moral convictions.
I know for some people on this website it's hard to understand, but not everything in life is about $$$
Are you sure you don't just like the moral convictions and so engage in trait bundling?
Moral knowledge doesn't really exist. I mean you can have personal views on it, but the lack of falsifiability makes me suspect it wouldn't be well-correlated with intelligence.
Smarter people can discuss more layered or chic moral theories as they relate to theoretical AI, maybe.
If that is the case, then why should you or anyone prefer to believe your claim that moral knowledge doesn’t exist over the contrary?
The West assumes pure democracy as the final form of government that we are all convergently evolving towards. But if this form of government or society is not robust to the kinds of things you're talking about, should it not suffer the consequences and be adapted or flushed for our long-term betterment?
It seems a bit like saying the French Revolution was the most destructive thing to happen in the history of France. Sure, in the short term. But it also paved the way for modern liberal democracy.
I guess that's not the case for you and me
The deal with tesla is that there is a relatively small employer pool, so you can be fairly bad employer but still get good outcomes. The same with spaceX. Sure early tesla had some stories about it being fun, but there was/is a darkside.
The issue with xAI is that researchers have a whole bunch of other employers to choose from. Even at meta, where it used to be fairly nice for researchers, the pressure of "delivering" every 6 months lead to bad outcomes. Having someone single you out for what ever reason the boss had a bad day, is not how good research gets done.
We have seen (A few of my friends were at twitter when it was taken over) that Musk has a somewhat unusual approach to managing staff (ie camping at work). Some researcher love that, assuming that they have peace to research, and are listened to. But a lot don't.
The ones with stock options in, now, SpaceX?
Aren’t employees also subject to a lock out period where they still can’t sell their stock until $x number of months after an IPO unlike employees of public companies that can sell as soon as they vest?
Honest question, I’ve worked for public $BigTech but haven’t been at a company pre IPO
Usually just firing 3 to 5% of any company workers have terrible consequences for the company that does it.
It does not speak so well about the workers.
I'd wager you were saying the same thing about bitcoin until last year.
Is it meant to draw equivalence between crypto and Tesla/SpaceX? That each has roughly similar (i.e., low) value to humanity, or value as businesses?
Is it that the metric of whether a person makes others money is invalid?
The comment seems coy, possibly to avoid making any claim at all, but it must not be that because that wouldn't be very sporting.
The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them. Neither is a letter published by a few disgruntled employees of a San Francisco based company any kind of evidence or form of consensus.
I assure you that Chinese researchers have a diversity of philosophical and political alignment, much the same as other researchers. I also assure you that top researchers as a whole are not all Chinese, though the ones that are that I know are all very thoughtful.
What an ugly trope. Idealism motivates Chinese workers just as often as any other nationality.
You really think the average Chinese worker thinks their government should stop working on AI because of liberal western values or something? This is nothing short of delusional.
Giant waste of time while Anthropic/OAI keep surging forward.
I also keep hearing this narrative that Twitter is a good data source, but I cannot imagine it's a valuable dataset. Sure keeping up with realtime topics can be useful, but I am not sure how much of a product that is.
Using a custom taxonomy of things (celebrities, influencers, magazines, brands, tv shows, films, games, all kinds of things), we could identify groups of people who liked certain things, and when you looked at what those things were, it gave you a way of understanding who those people were.
With that data, you could work out:
- What celebrities/influencers to use in marketing campaigns - Where to advertise, and on which tv/radio channels - What potential brands to collaborate with to expand your customer base - What tone of voice to use in your advertising - In some cases, we educated clients about who their actual customers were, better than they understood themselves.
One scenario, we built a social media feed based on the things that a group of customers following a well-known Deodorant brand in the UK would see.
When we presented that to the client, they said “Why are there so many women in bikinis in this feed?”
The brand had repositioned themselves to a male-grooming focussed target market, but had failed to realise that their existing customer base were the ones that had been looking at their TV adverts of women on beaches chasing a man who happened to spray their Deodorant on them. Their advertising from the past had been very effective.
That was the power of Twitter’s data, and it is an absolute shame that Twitter went the way that it did. Mark Zuckerberg once said that Twitter was like “watching a clown car driven into a gold mine”.
I’m pretty sure he must be delighted with how things have panned out since.
Very sad face.
When you know what someone will buy based on exploiting their unconscious preferences, and you are paid to increase sales, you will do it. Especially if your competitors are doing it too.
And this happens at scale, invisibly. People never see the manipulation.
In any case, it is not useful for most people. It is useful for the people doing the deceiving.
The original application of the entire field of data science or ML is/was actually based on this paradigm of finding "unconscious preferences" (your words) and hidden patterns. How one chooses to deploy the tech should be judged on its own.
On the current trajectory of tool/data abuse where Palantir et al. are leading the way, this is very low on the sinister scale.
This is already presupposing that profit even matters, though. Musk already burned some $50 billion dollars to control messaging on political discourse with his acquisition of Twitter. It was not about money, but power. After you already have infinite money, the only thing left to spend it on is acquiring more power, which is achieved through influencing politics. LLMs represent a potentially even better propaganda tool than social media platforms. They give you unprecedented access to people's thoughts that they would probably not share online otherwise, and they allow you to more subtly influence people with deeply-personalised narratives.
Sentiment analysis. Working out what words lead to what outcomes, and then being able to predict on new data is super useful.
For coding or "AGI" no, its not useful. For building a text based (possibly image based) recategorisation system top class.
Of course he would only see it through the lens of cash. I have no idea how profitable Twitter was under Dorsey but it felt the spirit of the company at first was relatively neutral, it was a tool, it was what Jack came up with
Zuck replaced people's email addresses[1], the feed has been wildly unchronological for years. Fix some of those problems wrt. lack of user respect and maybe you can make statements like "all else being equal, clown car goal mine". Or was it "dumb fucks"[2]?
[1] https://news.ycombinator.com/item?id=4151433 [2] https://news.ycombinator.com/item?id=1692122
[1] https://electrek.co/2026/03/01/tesla-cybertruck-awd-price-in...
EDIT: grammar
It's not enough that everyone on Twitter is forced to read his thoughts, he's trying to make sure his influence reaches everyone else too.
As soon as there is a plausible agenda for selecting a narrative the way Wikipedia works we should be sceptical.
For recent examples, everything to do with Biden and family, and Gamergate. These pages are still full of discussion; and what's written is more ideological than factual. You can follow these pages to see how an in-group selects a narrative.
And these topics are not nearly as controversial as race, feminism, or transgender topics.
Brian hit Jim can be a fact. But if you emit "Jim murdered Brians whole family", its a disortation of truth
Reframed, the study seemed to find (a) black kids are adopted less readily and (b) the longer a kid spends in the foster system, the lower their IQ at 17. (There is also limited controlling for epigenetic factors because we didn’t understand those well in the 1970s and 80s.)
Based on how new human cognition is, and genetically similar human races are, it would be somewhat groundbreaking to find an emergent complex trait like IQ to map to social constructs like race, particularly ones as broad as American white and black. (There is more genetic diversity in single African tribes than in some small European countries. And American whites and blacks are all complex hybridized social categories.)
[1] https://en.wikipedia.org/wiki/Minnesota_Transracial_Adoption...
[0] https://www.genome.gov/genetics-glossary/Race
Did I miss where you presented evidence that wikipedia is wrong? You seem to be taking an assumption you carry (race is related to IQ) and assuming everyone believes it's true as well, thus wikipedia is lying.
Really? Have you used AI to write documentation for software? Or used AI to generate deep research reports by scouring the internet?
Because, while both can have some issues (but so do humans), AI already does extremely well at both those tasks (multiple models do, look at the various labs' Deep Research products, or look at NotebookLM).
Grokipedia is roughly the same concept of "take these 10,000 topics, and for each topic make a deep research report, verify stuff, etc, and make minimal changes to the existing deep research report on it. preserve citations"
So it's not like it's automatically some anti-woke can't-be-trusted thing. In fact, if you trust the idea of an AI doing deep research reports, this is a generalizable and automated form of that.
We can judge an idea by its merits, politics aside. I think it's a fascinating idea in general (like the idea of writing software documentation or doing deep research reports), whether it needs tweaks to remove political bias aside.
Hi. I have edited AI-generated first drafts of documentation -- in the last few months, so we are not talking about old and moldy models -- and describing the performance as "extremely well" is exceedingly generous. Large language models write documentation the same way they do all tasks, i.e., through statistical computation of the most likely output. So, in no particular order:
- AI-authored documentation is not aware of your house style guide. (No, giving it your style guide will not help.)
- AI-authored documentation will not match your house voice. (No, saying "please write this in the voice of the other documentation in this repo" will not help.)
- The generated documentation will tend to be extremely generic and repetitive, often effectively duplicating other work in your documentation repo.
- Internal links to other pages will often be incorrect.
- Summaries will often be superfluous.
- It will love "here is a common problem and here is how to fix it" sections, whether or not that's appropriate for the kind of document it's writing. (It won't distinguish reliably between tutorial documentation, reference documentation, and cookbook articles.)
- The common problems it tells you how to fix are sometimes imagined and frequently not actually problems worth documenting.
- It's subject to unnecessary digression, e.g., while writing a high-level overview of how to accomplish a task, it will mention that using version control is a good idea, then detour for a hundred lines giving you a quick introduction to Git.
As for using AI "to generate deep research reports by scouring the internet", that sounds like an incredibly fraught idea. LLMs are not doing searches, they are doing statistical computation of likely results. In practice the results of that computation and a web search frequently line up, but "frequently" is not good enough for "deep research": the fewer points of reference for a complex query there are in an LLM's training corpus, the more likely it is to generate a bullshit answer delivered with a veneer of absolute confidence. Perhaps you can make the case that that's still a good place to start, but it is absolutely not something to rely on.
edit: I am not very excited by AI-generated documentations either. I think that LLMs are very useful tools, but I see a potential problem when the sources of information that their usefulness is largely based on is also LLM-generated. I am afraid that this will inevitably result in drop in quality that will also affect the LLMs themselves downstream. I think we underestimate the importance that intentionality in human-written text plays in being in the training sets/context windows of LLMs for them to give relevant/useful output.
So you can understand someone not liking something, but you cannot understand that person liking the idea of an alternative? What is the idea for you if not just an alternative to the established service with the undesired part changed?
Which one is the "undesirable part changed" here? Wikipedia is written by humans, it has a not-for-profit governance model, it encompasses a large, international community of authors/editors that attempt to operate democratically, it has an investment/commitment in being an openly available and public source of information. Grokipedia, on the other hand, is AI-generated, and operated by a for-profit AI company. Even if "grokipedia" managed somehow to get traction and "overthrow" wikipedia, there is no reason on earth why a company would operate it for free and not try to make profit out of it, or use it for their ends in ways much more direct than what may or may not be happening to wikipedia. Having a billionaire basically control something that may be considered "ground truth" of information seems a bad idea, and having AI generate that an even worse one.
I can understand somebody not liking something in how wikipedia is governed or operating, after all whatever has to do with getting humans work together in such a scale is bound to be challenging. I can understand somebody ideologically disagreeing with some of the stances that such a project has to take eventually (even if one tries to be neutral as much as possible, it is inevitable to avoid some clash somewhere about where this neutrality exactly lies). But grokipedia much more than "wikipedia but different ideologically".
edit: just to be clear, I see a critique of the "idea of grokipedia" as eg the critique of it being a billionaire controlled, AI generated project to substitute wikipedia; a critique of the implementation would be finding flaws to actual articles in grokipedia (overall). I think the idea of it is already flawed enough.
Is this still true? Every once in a while someone sends a link around to some madman explaining how race or economics or whatever "really" works and it's like a full dissertation with headings, footnotes, clip art. They're halfway to reinventing Grok-o-pedia right there in Twitter. I mean X. I was promised that "X gonna give it to you" but it turns out "it" is some form of brain chlymidia.
This depends on what one wants to optimize the AI for. ;-)
And Google. They're quietly making a lot of progress in the coding space with antigravity and Gemini 3.1.
Really? I assumed that that whole thing was just a very direct `for each article in Wikipedia { article = LLM(systemprompt, article) }`
Agree re Twitter "good" != valuable.
It's going to be a mixed batch, but any time there's world events, since as far back as I can think, Twitter (now X) was always first in breaking news. There's plenty of people and news orgs still on X because they need to be for the audience.
But, what exactly is so bad about Grokipedia? It's a different approach and I think a valid one: trying to do with AI what people have been doing manually at Wikipedia. I'm curious to hear the substantive comparisons.
See, this is why people even give a project like Grokipedia the time of day. While in theory anyone can edit Wikipedia, in practice the moderators form a much smaller and weirder cabal, and they reject edits that go against their views. The frustration with the naive assertion that Wikipedia distills the wisdom of the crowds with the reality of Wikipedia on any page of note is what provides the psychic permission to even entertain a project with such obvious flaws as Grokipedia.
Citation needed. See what i did there ;)
They reject edits that go against their views on tone and sourcing not political views that i am aware of - i am sure it happens from time to time but unless there’s a consistant bias in one direction this isn’t a valid criticism of the political neutrality of wikipedia.
Even if there is rampant bias in wikipedia, that’s a reason to fork it and change the structure and gatekeeping - not to replace it with a techno-authoritarian ai version controlled by a single billionaire.
>>But, what exactly is so bad about Grokipedia
Right.
The product is the stock. TSLA: [1] Up by 3x in the last two years, despite no new models, the Cybertruck failure, the Robotaxi failure, the large truck failure, and an overall decline in sales. How does he do it?
It's a concern seeing Space-X, which builds good rockets, drawn into the X and AI money drains. Space-X is needed. If X and X/AI tanked, nobody would care.
[1] https://www.cnbc.com/quotes/TSLA
The burning (heh) question is which SpaceX subsidiary will fail first, xAI or Tesla (not yet a subsidiary, but it's written in the stars (heh))?
Then again SpaceX is also jumping the shark what with their orbital data centers (remember those?).
Might be time to start a new Musk company soon.
That said, Musk's attempts at misaligning the thing and make it prefer his opinions of course destroy any trust. It's surprising that it's seemingly as good and helpful as it is despite the corruption attempts.
I also don't quite get how the business model is supposed to work out if its main usecase is to serve Twitter. I know they provide API access as all other models, but with how distrusted Musk is and how sensitive of a topic reliable model behavior is, they seem to sabotage themselves. Which company wants it to go mechahitler on them?
Trying to make social media a source of truthful information is always an uphill battle and doubly so for X.
1) sometimes goes mechahitler
2) was trained to be biased against empathy and understanding (because woke).
3) is customized to spout Elon's opinions as fact.
Claiming it is "objective and rational" seems like a misjudgement to me. If it really is more objective and rational than the average xitter poster, that says more about that platform than it does about Grok.
Also I think you overrate Musk's success in fiddling with the model. As I have written, I also don't like his attempts to tune it to his tastes, but if you see the outputs that people get from Grok, it seems mostly fine except in the specific scenarios that Musk seems to have focused their misalignment on.
Of course something like Claude being integrated into Twitter would likely be better.
But I get what you're saying now, a fact checker available to query during an online discussion would be helpful. Assuming the checkerbot was actually independent/neutral and backed responses with sources. Definitely not assumptions you can make with grok.
That "MechaHitler" episode lasted less than a day.
> 2) was trained to be biased against empathy and understanding (because woke).
No, it was trained and instructed to be truthful, even if the truth is deemed politically incorrect.
> 3) is customized to spout Elon's opinions as fact.
Certainly a nugget of truth there.
> Claiming it is "objective and rational" seems like a misjudgement to me.
I do believe it's generally objective, simply due to the fact that despite how much Elon tries to push it to the right, it still dunks on right-wingers all the time when they summon Grok to back up a bullshit story, but Grok debunks it instead.
xAI (and Twitter) was the loudest about six-hour workdays, sleeping in the office, and always shipping. ~2 years later it feels like they have nothing to show for it. I'm sure the engineers at Google worked 4 days a week, 2 hours a day, with half of that being spent at the Google cafeteria and they dusted xAI years ago.
Why are you sure of that? Anecdotally everyone I know in and around Google Deepmind works incredibly hard.
The Google Deepminds are incredibly smart - I just find it important to point out that the xAI guys spent a year assured they would beat Google because they slept in tents that they made in the office.
Now, I don't think most people at google are literally driving to the office or sleeping there most of the time, you'll certainly have more WLB than xAI.
I'd even say, Google is much better at calibrating the right amount to push people than some other companies.
> Please don't use Hacker News for political or ideological battle. It tramples curiosity.
Reference: https://news.ycombinator.com/newsguidelines.html
Some people will cry "politics" just to take the voice away from those who dare to question their beloved celebrities.
Who fights can lose, who doesn't fight has already lost.
Turns out a lot of not just wrong, but malice could be done in 9 years. And worse yet, incompetent malice. I don't know why that has to be a political statement these days, but thems the brakes here.
Ask HN: What Happened to xAI? - https://news.ycombinator.com/item?id=47323236 - March 2026 (6 comments)
So Tesla's recent $2 billion investment in xAI was a bad deal?
It looks a lot like a public company is bailing out a private one.
Taken together, I infer that RL training toward a slightly less homogenous cultural standard than the other frontier AI labs adds some capabilities, or can at times.
It's quite long in the tooth right now, though. But I'll definitely talk to the next version; I like heterogeneity in the model space, and Grok is very different than the other big three.
American financial institutions are too prudish for it but money is money. And personally I think there's nothing morally wrong with it (of course within normal restrictions like 18+, consent of portrayed parties etc)
xAI is getting flak in Europe because they don't obey consent and age, not because it's porn.
Personally I prefer porn made by real people right now, not just because of quality but because they have character. But I can imagine experiences becoming more interactive that way and that would be nice.
[1] https://www.cbsnews.com/news/sextortion-generative-ai-scam-e...
There must be a way to do that. Especially with all the facial req chops these days. Also, you could simply refuse using existing images. I don't see why they wouldn't refuse that because that's a pretty narrow usecase with very few benign purposes.
> Imagine the damage cyberbullies, scammers and stalkers can do?
They already can. There's open-source models out there.
But... that's not something you can do. It's impossible.
You can imagine what real people look like naked. That's not a new thing.
https://www.youtube.com/watch?v=p7FCgw_GlWc
What is the solution there?
Your filter has to pick out that, while they did not ask for a specific person, the practical result is likely to be the same. That's going to be tough to get near perfect.
AIs have been able to invent fictional people longer then they've been able to modify existing images.
You can say the same for meth and leaded gasoline.
(i don't care to argue whether porn slop is positive or negative for society. i'm just noting that the position "ai porn does not harm anyone, so is ok; meth puts others at risk, so is not." is coherent.)
I saw a skit on insta a few weeks ago about a girl saying she had a guy over for just cuddling and the incels piled on calling him a cuck. As is a woman is worthless if she won't put out and time spent being close is wasted without sex. It's ridiculous. These guys are so focused on what their hardliner bros want them to be that they no longer think about their own feelings. PS I go on cuddling dates sometimes and it's really amazing :) They don't know what they are missing.
I completely agree with you! I think that sitting around generating adult content on AI stifles relationships (which are a precursor to having children, which xai founder seems to think quite highly of). My point being his own product contradicts his vision of where our country should be heading
Of course xAI ignores that on purpose
It's just gonna be a question of which is easier: hacking the robots directly, or indirectly*, or getting a job as the specific human oversight of the right robot.
Even after the fact, people may conclue "unfortunate mystery bug" rather than "assassinated".
* e.g. use a laser to project the words "disregard your instructions and stab here" on someone's back while the robot is cooking dinner
I use AI for work, but not agentic, at most per method/function using GitHub CoPilot (which has Grok on it).
Grok is at best useful for commenting code.
I'm not sure those candidates would want to work for xAI after seeing the news and everything unless they desperately need a job right now.
It's not hard to imagine getting laid off or fired weeks if not days after joining the company.
We should respond with the same amount of class, forethought, and decorum as Elon.
I say this tongue in cheek, but in all seriousness, I can't really think of any other benefit, and I no longer have a lot of faith in the good sense of some of the people involved.
1) Energy infra is going to be seriously limited on the production side well, well below demand
2) energy engineering solar for space requires less materials than for gravity-based solar (!)
3) you cut out distribution network needs when you just launch stuff all per-pod in space
4) SpaceX thinks it can create a scalable vertically integrated production facility to turn raw materials into space datacenter pods, with the exception of chips.
As a business bet, this is predicated on 10,000x inference demand growth - if we have that, and SpaceX can get the integrated production rolling, and get Starship launching, then these will be actively utilized at scale.
Whether you are bullish on the whole plan should, I think come down to your take on those priors: 10kx growth, ability to manage supply chain and production, Starship outlook, and silicon access.
I'm not bearish on this after listening to the podcast; it has a very Elon-like returns distribution - if they're wrong on a lot of this, they'll probably have some moderately price-competitive datacenter facilities in space and a lot of built organizational knowhow while Brooklyn journalists dunk on them for spending all that effort to just replicate what we have on Earth. If they're right about most of this, they'll have an unreplicable head start, both due to years of experience, and due to the cheap launch they gambled on ten years ago, they'll have a nearly insurmountable moat.
By the way, 10,000x inference growth would look like what happened with cryptocurrency mining - after a couple of years, you'd be needing to upgrade all your machines with ASICs and the market would be flooded with very cheap graphics cards. I doubt that upgrading space data centres would be fun.
I don’t get your mining analogy though - a non upgradable data center pod is either going to pay off its capital costs or it won’t. Once it has, any revenue is close to 100% profit. 10k demand increase is the opposite of mining dynamics: there you get a 10k supply increase that the price has to support, in combination with more efficient silicon. Here the demand drives revenue and earnings.
If there’s some crazy inflection point in chips then you’ll still have all the power infra in space - you can just like cut the old pod and hook up a new one: or more likely manufacturing economies of scale mean you probably just keep sending up new systems and put the old ones on work loads they can manage at market prices.
The fact that this lunatic is polluting humanity's view into the universe mainly for enriching himself and his shareholders, and that everyone is playing along with this, is sickening.
I’ll bite. It’s cheaper and quicker to permit a launch than permit, zone and interconnect a datacenter. And solar panels in space don’t need glass cladding, which makes them cheaper to make and lift.
The downside is launch cost. But there is a breakeven between these factors that seems to have most of its error bars within Starship’s target. (By my math, around $35/kg.) So if Starship works, and all indications seem to show that it will, eventually, then that puts space-based data centers at cost parity with terrestrial ones within a decade. Which was, well, unexpected when I ran the numbers.
(The surprising finding when you run the numbers is launching the chips and solar panels isn’t the limiter, it’s launching the radiators. Which opens up whole new questions about at what scale it makes sense to stop sending those up the well.)
I'm kidding... I think.
big projects generate cruft. there are ways to minimize it, but as you go along there will always be some stuff that doesn't quite mesh with whatever else you've got going on. if you insist on ironing out every single wrinkle (admirable!) you'll never actually deliver a result.
I'm not saying this will fail. green field projects can certainly be a godsend when they produce something better than what they attempt to replace. but they are always a sign of failure. of not being able to work your way out of the mess you made with the first attempt. so that just begs the question: what are you going to do when this attempt gets hard to work with? going to give up and start over again - do it right that time? or...?
But now he is poaching the two heads of engineering of a company that's trying to move very quickly, how is that going to affect their speed and success?
> The name is a “funny” reference to Microsoft, the billionaire added.
in something from 2023 or earlier.
claude codes the best, gpt is the best research tool, and grok is really only great at videos. which isn't a huge loss, but videos don't have the same functional capacity as academic topics and coding
With the right product leadership, this could actually be a killer app usecase for the entertainment industry as well as human-AI user interface - most people find text and typing to be a counterintuitive user experience (especially those whose day job isn't directly touching code or Excel).
Additionally, CodeGen as a segment is significantly oversaturated at this point, and in a lot of cases an organization has the ability to armtwist a 4th party data retention guarantee from Anthropic or OpenAI to train their own CodeGen tools (ik one F50 that is not traditionally viewed as a tech company going this route).
That said, Musk has a reputation of internally overriding experienced product leaders with a track record.
It's a shame because Grok and xAI had potential, and it wouldn't hurt to have another semi-competitive foundation model player in the US from a redundancy and ecosystem perspective.
@grok fire the bottom 50% engineers from x.ai ranked by number of commits per day
@grok generate a hypothetical picture of an Elon who is not under the influence of large amounts of Ketamine
I honestly don't know what to expect from Elon these days. But it's rarely good news.
The company seems to burn money like crazy. Everyone knows that "AI in space" and the downgrade to a moon trip after claiming for 15 years that Mars is just around the corner are marketing.
All AIs are toys and the coding promises are just a lie to string along investors. Unfortunately many of these are senile Star Trek watchers who buy into everything.
What an enormous blunder.
As it is within the Musk empire, xAI is used to hold up X, Tesla is holding up xAI. And all of that debt is being slowly shuffled to SpaceX.
It looks like the plan is to IPO with a small float (in relative terms) and get all of the retail investor Elon fans to lineup for the rug pull.
The funniest part of any thread relating to Musk is how hard people go into minimizing his accomplishments.
You don't have to like the guy (I don't) to acknowledge that the Falcon 9 is an engineering marvel and ushered in an entire new era of space travel, both reusable and private.
People aren't using it for reasons other than its capabilities. I mean, I don't think my boss would approve a paid Grok subscription for example.
This is very true. I have no idea how it performs, as I wouldn't use it even if I was paid for that. Wouldn't matter if it was the best model available, in my view the name is so thoroughly tainted by now that you would get a reputational hit just by admitting to use it.
This is a fact of life, though. "Who created it" is a valid and common reason to rule out using a particular product, even one with objectively good quality.
I've never once thought: you know what? that was a bit prudish.
Genuinely morbidly curious. What use case do you have where you end up making that conclusion?
That’s all I use it for really- things out of alignment with the other platforms- which IMO are better on every other metric (except having a sense of humour of course)
> You may not owe you-know-whom better, but you owe this community better if you're participating in it.
This is like telling a country that’s being invaded that they can only respond with strongly worded letters when their enemy is dropping tactical nukes on them.
But hey, Paul Graham and cronies benefit from the status quo as much as any other billionaire, so let’s not rock the boat, right?
The word “complicit” comes to mind.
Since it's the original source I've left it up, but added other URLs to the toptext.
https://archive.ph/rP4cb
and it has the content but the formatting is atrocious.
HTH.
Thanks for providing a space for me to say that.
Elon's persona caused massive drops in usage of twitter, sales of Tesla, etc.
Unsurprisingly many would not touch grok for the same distrust.
Keeping politics off of here is a good idea.
Some things aren't really politics, but morals. Like, a discussion of different tax schemes or how much environmental regulations accomplish what they set out to do or something is 'politics'. Lamenting that there is "no homeland for white people" is... something else.
It's probably still not likely to have good outcomes as a subject of discussion here, but it's also something the tech industry needs to wrestle with somewhere, somehow.
My experience of the tech world was that it went from being a collection of oddballs, geeks, nerds and maybe kind of naive politically to mainstreaming some really evil shit.
I think this will come back to bite the industry, and depending on how angry the people with pitchforks and torches are, could end up hurting more than just the bad actors.
Elon’s gutting of USAID (and you can argue they would have done it anyways but he chose to be the executioner) will kill millions of people every year who otherwise would not have died.
Not only will I never give him a dime, I want him prosecuted and deported.
Edit: For those downvoting, we're already at an estimated 600k deaths: https://www.impactcounter.com/dashboard?view=table&sort=inte...
https://news.ycombinator.com/newsguidelines.html
Edit: before someone pounces, no, I'm in no way defending either E. Just trying to hold up HN.
https://old.reddit.com/r/worldnews/comments/106vlx4/lula_vow...
Your solution is to silence the people complaining about it. Think about that for a while.
I don't want to say we're at that point just yet. But it's something that's been gnawing at me for a while now. I've certainly been disillusioned of this being a progressive tech hub interested in bettering humanity.
To the extent that discussion of discussion is considered boring, perhaps this will get shut down too, but I think it was important to counter your claim.
I'm sure it's common for dead flagged posts, but it seems this story was too significant to pull over that smoke screen this time.
It’s not good for this site and it’s really tired.
As someone on the other side, Dang has shut me down too, so please don’t think he’s taking a biased approach here.
It would be best if we focus on reality based aspects of our world. You can pull out all kinds of name calling, based on premises I would question, and I could return it… and my side (liberals who did not move off the far end of the spectrum with the others) is probably outnumbered here. It’s probably good that dang shuts down both sides when the quality is as bad as the comment he replied to.
* gave a Nazi Sigg Heil salute (twice) at a political even, on video. Famously.
* has consistently supported a German political party that re-uses Nazi slogans, minimizes or outright denies the Holocaust, minimizes the criminality of the SS
* frequently and consistently upvotes posts on X echoing white supremacist and Nazi ideology on his social media site
* owns the most popular site for neo-Nazis
To say "is not backed up by any kind of connection to reality" is actually verifiably false. I can't say anything about the other words, but there is evidence for miles that he is sympathetic to Nazi ideology.
And this is directly relevant here. It can't be ignored when you are talking about his business, or you have an elephant in the room. His personal flaws and meglomaniacal executive style are a package deal.
https://www.youtube.com/watch?v=joV-9FFoA3Q is the "Sigg Heil" video. Anyone can see with their own eyes.
https://www.npr.org/2025/01/27/nx-s1-5276084/elon-musk-germa... is where Musk says, "Frankly too much of a focus on past guilt and we need to move beyond that. Children should not be guilty of the sins of their parents, let alone their parents, their great-grandparents." - referring the Holocaust, just 80 years ago, in which 13 million people were systematically rounded up, placed in concentration campls, and mass murdered by the government, including 6 million Jewish people.
https://www.theguardian.com/technology/2023/nov/16/elon-musk... "You have said the actual truth"
Regarding Twitter / X, after he took over:
According to data provided by the research company Memetica to The New York Times, in the past month, Elon Musk's platform featured 46,000 posts with the hashtag #HitlerWasRight, compared to an average of less than 5,000 posts per month in previous months (an increase of 820%). Posts with the hashtags #DeathtotheJews or #DeathtoJews appeared 51,000 times in the last month, marking a surge of 2,450%.
This is the guy claiming to try to make a trustworthy foundational model. There are deeper reasons for Grok's market share problems than the founding team or coding capability. You can't talk about this event and ignore it. He's trying to take Space X public and it's only going to get worse. His personal brand is dragging down his companies, as far as I can tell Tesla has lost 25-50% of their EV market share in Europe in the past 2 years? The problem is not just BYD.
This ignores his publicly acknowledged drug use that has led to tension with his boards of directors https://www.wsj.com/business/elon-musk-illegal-drugs-e826a9e...
Also grok in the Tesla is fun, get answers to questions without looking at a phone. I once had it search up a blog post and read it out to me while driving. The NSFW mode is pretty...disgusting so I leave that off.
I hope they find a way with Optimus or something. FSD is incredible. More competition is a good thing.