Bruce Schneier: AI and the scaling of betrayal (2023) (schneier.com)
gnabgib 2 hours ago [-]
(2023) Discussion at the time (203 points, 91 comments) https://news.ycombinator.com/item?id=38516965

Title should be: AI and Trust

insuranceguru 1 hour ago [-]
Thanks for the link. I missed that original discussion. It's fascinating to read the 2023 takes now that we are actually living through the scaling phase he predicted. The concept of AI betrayal feels even more relevant today than it did then.
observationist 32 minutes ago [-]
It's crazy that the marketplace seems to be an ongoing experiment in maximizing the number of times a company can defect, minimizing consumer anger, and exploiting assumptions of trust and good faith as frequently as possible without causing the consumer to defect completely. And it appears they've optimized that; we put up with shrinkflation, industrial waste repurposed as filler, processed ingredients derived from industrial wastes, high quality products debased and degraded until all that remains is a memory of a flavor and the general shape, color and texture. Big AG factory farming, pharma, healthcare products, all the rest - you think you can trust that a thing is the thing it's always been and we all assume it is, but nope.

Scratch any surface and the gilt flakes off - almost nothing can be trusted anymore - the last 30-40 years consolidated a whole lot of number-go-up, profit at any cost, ruthless exploitation. Nearly every market, business, and product in the US has been converted into some pitiful, profit optimal caricature of what quality should look like.

AI is just the latest on a long, long list of things that you shouldn't trust, by default, unless you have explicit control and do it yourself. Everywhere else, everything that matters will be useful to you iff there's no cost or leverage lost to the provider.

ares623 13 minutes ago [-]
The "meta" has been solved and everyone's just min-maxing now. The few who aren't min-maxing are considered a waste.

AI, crypto, etc. feels like potentially new meta opportunities and it is eerie how similar the mania is whenever a new major patch for a game is released. Everyone immediately starts exploring how to exploit and min-max the new niche. Everyone wants to be the first to "discover" a viable meta.

poszlem 15 minutes ago [-]
This to me is the most important point in the whole text:

"We already have a system for this: fiduciaries. There are areas in society where trustworthiness is of paramount importance, even more than usual. Doctors, lawyers, accountants…these are all trusted agents. They need extraordinary access to our information and ourselves to do their jobs, and so they have additional legal responsibilities to act in our best interests. They have fiduciary responsibility to their clients.

We need the same sort of thing for our data. The idea of a data fiduciary is not new. But it’s even more vital in a world of generative AI assistants."

I hadn't thought about it like that, but I think it's a great way to legislate.

ChrisMarshallNY 7 minutes ago [-]
We've needed that in software (not just AI) for a long time.

Not a popular take, especially within the HN crowd.

That said, it needs to be scaled. As he indicated, only certain professions need fiduciaries.

Anyone who remembers working in an ISO9001 environment can understand how incredibly bad it can get.

thefz 11 minutes ago [-]
> Surveillance is the business model of the Internet. Manipulation is the other business model of the Internet.