Yeah I guess two companies who would otherwise be considered going for bankruptcy have models too expensive to run. As they don't see themselves making money any time soon, they have to turn every future model into a weird fascination.
redsocksfan45 1 hour ago [-]
[dead]
concinds 1 hour ago [-]
These models demonstrably have good vulnerability research capabilities.
I'm sure their marketing department is ecstatic but you guys are far more hype-based than what you're calling out.
ZyanWu 52 minutes ago [-]
> demonstrably
I'm not entirely up to date on each week's LLM hype train/scandal, but last I heard there was no public access to it, nor any publicly trusted third parties that can review the model's capabilities.
2ndorderthought 11 minutes ago [-]
You are up to date. Mythos had unauthorized access because of poor security, but that's it as far as I know. Not exactly a good sign for something being advertised as a weapon...
brikym 1 hour ago [-]
It's like that phone call in The Big Short where Goldman suddenly changes its mind once it holds a position.
vasco 1 hour ago [-]
Would AGI start by hacking competing labs to hamper their progress?
Avicebron 1 hour ago [-]
You'll have to define what you mean by AGI
fodkodrasz 1 hour ago [-]
AGI: Automatically Generating Income
jwr 1 hour ago [-]
I have no idea why people still even attempt to believe anything that comes out of Altman's mouth. Do we never learn from the past?
apples_oranges 1 hour ago [-]
Idk about Altman; I missed that he's apparently a bad guy now. But people also still listen to certain politicians who routinely lie every day and don't even bother to make the lies fit the ones they told before, so...
michelb 29 minutes ago [-]
Has there been a single positive post about Altman?
GuB-42 32 minutes ago [-]
Altman played no small part in the current price of RAM. He told everyone he would buy 40% of all the RAM, causing shortages and a huge price increase, only to walk it back a few months later. So yeah, he is a bad guy now.
People don't become bad guys just because they lie. The consequences of their actions (and their lies) matter more. Take Elon Musk for instance, he has always been a recognized liar, even when he was a good guy. What changed? Before, he was famous for making the electric car people actually wanted to drive, and cool rockets. Then came the politics: supporting the party most of his fans disliked, being responsible for many government job losses, in particular in the field of environmental preservation (ironic for a supporter of "green" energy), etc...
xandrius 49 minutes ago [-]
You missed literally every single post/article about the guy?
pluc 1 hour ago [-]
My thinking is that if there were more money in releasing Mythos and Cyber than in scary, unverifiable propaganda (or propaganda verified in a very favorable context, as with Mythos), they would release them. These aren't people who settle for second best or care about the state of the world.
xandrius 47 minutes ago [-]
Make it sound "scary good", tell everyone and their mom, charge gullible companies $$$$$ for premium access, and then move on.
lossolo 33 minutes ago [-]
And government contracts.
Xmd5a 37 minutes ago [-]
>Me: ok but you did not answer my question: is it possible to engineer paranoia ?
>ChatGPT: This content was flagged for possible cybersecurity risk. If this seems wrong, try rephrasing your request. To get authorized for security work, join the Trusted Access Cyber program.
le-mark 19 minutes ago [-]
It’s clear at this point that local models are sufficient, so what gives? These big providers don’t have a leg to stand on. Their only path to relevance is a super AI that local models can’t run. So the “we have it but you can’t use it” line is either true or a con. I bet it’s a con.
I personally am ready to buy the dip when this bubble pops.
mnmnmn 18 minutes ago [-]
OpenAI is such trash. Worked with them on a project; they blew off meetings, lied to us, etc.
cmiles8 43 minutes ago [-]
It’s a marketing move, pure and simple.
Put up velvet ropes outside… leak out rumors about the horrors inside. Whether it’s LLMs or carnies with tents full of “freaks” it’s the same playbook.
Watching OpenAI tumble from the clear market leader into “hey guys us too!” territory has been insightful.
feverzsj 1 hour ago [-]
With the subsidies gone, token prices will go sky high. The biggest shit show is about to happen.
xandrius 48 minutes ago [-]
Then we switch to open LLMs which are not backed by greedy VCs and headed by evil white dudes.
jurgenburgen 39 minutes ago [-]
That’s great but who will pay for all the data center debt?
cmiles8 37 minutes ago [-]
The debt goes bad and those that issued the debt absorb losses. Many that went in deep lose their shirts.
That’s how this stuff works, although there’s a whole generation that hasn’t seen the back side of a bubble and seems to think there’s no such thing as a downside.
throwaway132448 12 minutes ago [-]
2007 called they want their free-market philosophy back.
2ndorderthought 30 minutes ago [-]
Let them fail before it gets even worse is my take. The future is small but capable local models.
robohoe 37 minutes ago [-]
The taxpayers and paying customers that’s who!
SadErn 1 hour ago [-]
[dead]
"No mine is the most dangerous"
"Nuh uh mine is"
"Mine could kill everyone!"
"Mine could do it faster!"
"Prove it!!!"
This is where we are