NHacker Next
Intel Demos Chip to Compute with Encrypted Data (spectrum.ieee.org)
freedomben 2 hours ago [-]
Perhaps it's a cynical way to look at it, but in the days of the war on general-purpose computing and locked-down devices, I have to consider news like this in terms of how it could be used against users and device owners. I don't know enough to provide useful analysis, so I won't try; instead I'll pose questions to the much smarter people who might have some interesting thoughts to share.

There are two non-exclusive paths I'm thinking about at the moment:

1. DRM: Might this enable a next level of DRM?

2. Hardware attestation: Might this enable a deeper level of hardware attestation?

Frieren 2 hours ago [-]
> how it could be used against the users

We are not their clients anymore; we are just another product to sell. So they do not design chips for us, but for the benefit of other corporations.

3. Unskippable ads with data gathering at the CPU level.

dimitrios1 55 minutes ago [-]
I distinctly remember, in one of my more senior university classes, designing logic gates, chaining together ANDs, NANDs, ORs, NORs, and XORs, and then working our way up to numerical processors, ALUs, and eventually latches, RAM, and CPUs. The capstone was creating an assembly language to control it all.

I remember thinking how fun it was! I could see unfolding before me endless ways to configure, reconfigure, optimize, etc.

I know there are a few open-source chip efforts, but I wonder if now is the time to pull the community together and organize more intentionally around them. Maybe open-source chipsets won't be as fast as their corporate counterparts, but I think we are definitely at an inflection point in society where we need this to maintain freedom.

If anyone is working in that area, I am very interested. I am very green, but I still have the old textbooks I could dust off (I just don't have the ole college-provided Mentor Graphics -- or I guess Siemens now -- design tool anymore).

officeplant 8 minutes ago [-]
Sounds like you might want to go play with RISC-V, either in hardware or emulation.
egorfine 2 hours ago [-]
> how it could be used against the users and device owners

Same here.

Can't wait to KYC myself in order to use a CPU.

youknownothing 2 hours ago [-]
I don't think it's applicable to DRM, because you eventually need the decrypted content: DRM is typically used for books, music, video, etc., and you can't enjoy an encrypted video.

I think eGovernment is the main use case: not super high traffic (we're not voting every day), but very high privacy expectations.

freedomben 1 hour ago [-]
Yes, it must be decrypted eventually, but I've read about systems (I think HDMI does this) where the keys are stored in the end device (like the TV or monitor) where the user can't access them. Given that we already have that, I think I agree that this news doesn't change anything, but I wonder if there are clever uses I haven't thought of.
NegativeLatency 53 minutes ago [-]
Rent out your spare compute, like seti@home or folding@home, but as something someone could repackage and sell as a service.
evolve2k 26 minutes ago [-]
My thought is half cynical. As LLM crawlers seek to mop up absolutely everything, companies themselves start to worry more about keeping their own data secret. Maybe that is a reason for shifts like this, as encrypted and other privacy-preserving products become more in demand across the board.
gruez 2 hours ago [-]
See: https://news.ycombinator.com/item?id=47323743

It's not related to DRM or trusted computing.

inetknght 2 hours ago [-]
Not yet.
gruez 2 hours ago [-]
What does that even mean?

A: "Intel/AMD is adding instructions to accelerate AES"

B: "Might this enable a next level of DRM? Might this enable a deeper level of hardware attestation?"

A: "wtf are you talking about? It's just instructions to make certain types of computations faster, it has nothing to do with DRM or hardware attestation."

B: "Not yet."

I'm sure it probably helps DRM or hardware attestation to some extent, but not any more than, say, a 3nm process node helps them by making everything faster.

zvqcMMV6Zcr 2 hours ago [-]
> Heracles, which sped up FHE computing tasks as much as 5,000-fold compared to a top-of-the-line Intel server CPU.

That is a nice speed-up compared to generic hardware, but everyone probably wants to know how much slower it is than performing the same operations on plaintext data. I am sure a 50% penalty is acceptable; 95% probably is not.

corysama 2 hours ago [-]
There are applications that are currently doing this without hardware support and accepting much worse than 95% performance loss to do so.

This hardware won’t make the technique attractive for ALL computation. But, it could dramatically increase the range of applications.

bobbiechen 52 minutes ago [-]
Agreed. When I was working on TEEs/confidential computing, just about everyone agreed that FHE was conceptually attractive (trust the math instead of trusting a hardware vendor) but the overhead of FHE was so insanely high. Think 1000x slowdowns turning your hour-long batch job into something that takes over a month to run instead.
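As a quick sanity check on that figure (a 1,000x slowdown applied to a 1-hour batch job, as described above):

```python
# A 1,000x FHE slowdown turns a 1-hour batch job into well over a
# month of wall-clock time.
slowdown = 1_000
job_hours = 1
encrypted_hours = job_hours * slowdown
encrypted_days = encrypted_hours / 24
print(f"{encrypted_hours} hours ~= {encrypted_days:.1f} days")  # ~41.7 days
```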
patchnull 1 hours ago [-]
Current FHE on general CPUs is typically 10,000x to 100,000x slower than plaintext, depending on the scheme and operation. So even with a 5,000x ASIC speedup you are still looking at roughly 2-20x overhead vs unencrypted compute.

That rules out anything latency-sensitive, but for batch workloads like aggregating encrypted medical records or running simple ML inference on private data it starts to become practical. The real unlock is not raw speed parity but getting FHE fast enough that you can justify the privacy tradeoff for specific regulated workloads.
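Naively dividing the quoted figures gives the residual overhead directly (a back-of-the-envelope sketch, assuming the 5,000x ASIC speedup applies uniformly across operations):

```python
# Residual FHE overhead after a 5,000x hardware speedup, given a
# 10,000x-100,000x software baseline vs. plaintext.
cpu_fhe_slowdown = (10_000, 100_000)   # low/high estimates on a general CPU
asic_speedup = 5_000                   # claimed ASIC speedup over a server CPU

low = cpu_fhe_slowdown[0] / asic_speedup   # 2x
high = cpu_fhe_slowdown[1] / asic_speedup  # 20x
print(f"remaining overhead vs. plaintext: {low:.0f}x to {high:.0f}x")
```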

Foobar8568 23 minutes ago [-]
Now we know why Intel more or less abandoned SEAL and rejected GPU requests.
mmaunder 2 hours ago [-]
Someone explain how you'd create a vector embedding from homomorphically encrypted data without decrypting it. It seems like a catch-22: you don't get to know the semantic meaning, but you need the semantic meaning to position it in high-dimensional space. I guess the point I'm making is that, sure, you can sell compute for FHE, but you quickly run up against a hard limit on any value-added SaaS you can provide the customer. This feels like a solution being shoehorned in because cloud providers really, really, really want customers to use their data centers, when in truth the best solution would be a secure facility for the customer, so that applications can actually understand the data they're working with.
bob1029 26 minutes ago [-]
Most of modern machine learning is effectively linear algebra. We can achieve semantic search over encrypted vectors if the encryption relies on similar principles.
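As a plaintext sketch of what that means: semantic search reduces to dot products between a query embedding and document embeddings, which is exactly the add/multiply arithmetic that FHE schemes such as CKKS support (the vectors below are made-up illustration data):

```python
# Plaintext sketch of the dot-product ranking at the heart of semantic
# search. An FHE scheme supporting additions and multiplications
# (e.g. CKKS) can evaluate the same arithmetic on encrypted vectors.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

query = [0.1, 0.9, 0.2]            # hypothetical query embedding
docs = {                           # hypothetical document embeddings
    "doc_a": [0.0, 1.0, 0.0],
    "doc_b": [1.0, 0.0, 0.0],
}
scores = {name: dot(query, vec) for name, vec in docs.items()}
print(max(scores, key=scores.get))  # doc_a (score 0.9 vs. 0.1)
```

Under FHE the server would compute the encrypted scores without ever holding the key; the client then decrypts and picks the maximum, since comparisons are the expensive part to do homomorphically.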
Chance-Device 2 hours ago [-]
FHE is the future of AI. I predict local models with encrypted weights will become the norm. Both privacy preserving (insofar as anything on our devices can be) and locked down to prevent misuse. It may not be pretty but I think this is where we will end up.
boramalper 1 hours ago [-]
If you're interested in "private AI", see Confer [0] by Moxie Marlinspike, the founder of Signal private messaging app. They go into more detail in their blog. [1]

[0] https://confer.to/

[1] https://confer.to/blog/2025/12/confessions-to-a-data-lake/

Foobar8568 21 minutes ago [-]
FHE is impractical by all means. Either it's trivially broken and insecure, or the space requirements go beyond anything usable.

There is basically no business demand aside from sellers and scholars.

gigatexal 17 minutes ago [-]
If they can get this shrunk down and made efficient enough, I could see Apple moving back to Intel for this in a future scenario, given their stance on encryption and privacy being a pillar of their image.
JanoMartinez 2 hours ago [-]
One thing I'm curious about is whether this could change how cloud providers handle sensitive workloads.

If computation can happen directly on encrypted data, does that reduce the need for trusted environments like SGX/TEE, or does it mostly complement them?

esseph 2 hours ago [-]
Everything about this in my head screams "bad idea".

If you need to trust the encryption and trust the hardware itself, it may not be suitable for your environment/threat model.

gruez 2 hours ago [-]
>If you need to trust the encryption and trust the hardware itself, it may not be suitable for your environment/ threat model.

Are we reading the same article? It's talking about homomorphic encryption, i.e. doing mathematical operations on already-encrypted data without being aware of its cleartext contents. It's not related to SGX or other trusted-computing technologies.

u1hcw9nx 2 hours ago [-]
In FHE, the hardware running the computation doesn't know the secrets. That's the point.

First you encrypt the data. Then you send it to the hardware to compute, get the result back, and decrypt it.
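That round trip can be illustrated with a toy example. This is not FHE (textbook RSA is only multiplicatively homomorphic, and these tiny parameters are completely insecure), but it shows the encrypt, compute-on-ciphertext, decrypt workflow, with the "server" never seeing plaintext:

```python
# Toy illustration of computing on ciphertexts, using textbook RSA's
# multiplicative homomorphism: Enc(a) * Enc(b) mod n == Enc(a * b).
# NOT FHE and NOT secure -- tiny primes, no padding -- just a demo
# that the party doing the multiplication never holds the plaintexts.
p, q = 61, 53             # insecure toy primes
n, e, d = p * q, 17, 413  # d = e^-1 mod lcm(p-1, q-1) = 780

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

a, b = 7, 6
# Client encrypts; "server" multiplies ciphertexts without the key.
c_product = (enc(a) * enc(b)) % n
print(dec(c_product))  # client decrypts: 42 == 7 * 6
```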

Foobar8568 17 minutes ago [-]
But you leak all kinds of information, and retrieval either leaks even more data, or you end up transferring god knows how much data, or your encryption is trivially broken, or you spend days/months/years decrypting.
cwmma 2 hours ago [-]
In theory you only need to trust the hardware to be correct: since it doesn't have the decryption key, the worst it can do is give you a wrong answer. In theory.
esseph 36 minutes ago [-]
But can you trust the hardware encryption to not be backdoored, by design?

That's my point, this sounds like a way to create a backdoor for at-rest data.

cassonmars 16 minutes ago [-]
You can if the manufacturer has a track record that refutes the notion, and especially if they have verifiable hardware matching publicly disclosed circuit designs. But this is Intel; given their track record, I wouldn't trust it even if the schematics were public. Intel ME not being disableable by consumers, while being entirely omitted for certain classes of government buyers, tells me everything I need to know.