I work on homomorphic encryption (HE or FHE for “fully” homomorphic encryption) and I have written a lot about it on this blog (see the relevant tag). This article is a collection of short answers to questions I see on various threads and news aggregators discussing FHE.
Facts
If a service uses FHE and can respond to encrypted queries, can’t the service see your query?
No. Not without cracking the cryptography. This is the entire point of FHE.
How is it possible to operate on encrypted data without seeing it?
See my FHE overview for details, but it boils down to designing a cryptographic scheme for which adding or multiplying two ciphertexts gives a ciphertext of the sum or product of the underlying cleartext values. Usually "add" and "multiply" on the encrypted values are more complicated than literal addition and multiplication, but many cryptographic schemes (like RSA) get one of the two homomorphic operations for free because it is compatible with the math the cryptography is built on (RSA is multiplicatively homomorphic because multiplication commutes with exponentiation).
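The RSA case is easy to see concretely. Here is a toy demonstration (with tiny, completely insecure parameters, chosen for illustration only) that multiplying two RSA ciphertexts yields a ciphertext of the product of the plaintexts:

```python
# Textbook RSA with toy parameters -- NOT secure, illustration only.
p, q = 61, 53
n = p * q                  # modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 6
c = (enc(a) * enc(b)) % n      # operate on ciphertexts only
assert dec(c) == (a * b) % n   # decrypts to 42, the product of the plaintexts
```

This works because `enc(a) * enc(b) = a^e * b^e = (a*b)^e (mod n)`, which is exactly the encryption of `a*b`. Fully homomorphic schemes need the same property for *both* addition and multiplication, which is much harder to arrange.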
Once you have that, you can build arithmetic circuits, and use polynomial approximations to implement nonlinear things like comparison operators.
The caveat is that all of this comes with significant computational overhead.
Sorting is not an addition or multiplication operation, how can that be done in FHE?
It depends on the particular FHE scheme.
In some schemes, comparisons can be implemented by writing `a < b` as `0.5 * (sign(a - b) + 1)`, and approximating the sign function by a polynomial that uses only additions and multiplications. You have to be careful about the quality of that approximation: with a high enough degree you get something usable, but a larger degree also has a big impact on performance.
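To make this concrete, here is a cleartext sketch of the idea (my own illustration, not tied to any particular library). The polynomial `f(x) = (3x - x^3)/2` pushes any value in `(-1, 1)` toward `±1`, and iterating it is a standard way to build a sign approximation out of nothing but additions and multiplications:

```python
# Cleartext sketch of a polynomial sign approximation, FHE-friendly
# because it uses only + and *. Parameters (iters, scale) are illustrative.
def sign_poly(x, iters=15):
    for _ in range(iters):
        x = 1.5 * x - 0.5 * x ** 3   # f(x) = (3x - x^3)/2, pushes x toward +/-1
    return x

def less_than(a, b, scale=100.0):
    # Inputs must be normalized so (b - a) / scale lies in (-1, 1).
    return 0.5 * (sign_poly((b - a) / scale) + 1)

assert abs(less_than(3, 10) - 1.0) < 1e-3   # 3 < 10, so result is close to 1
assert abs(less_than(10, 3)) < 1e-3          # 10 < 3 is false, so close to 0
```

Note the degree/performance trade-off mentioned above shows up here as the number of iterations: each extra iteration sharpens the approximation near zero but adds another ciphertext multiplication.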
Other schemes are designed to be able to evaluate lookup tables of small-sized inputs (e.g., lookup tables of 4-bit integers), and so you can decompose comparisons into a tree of lookup tables.
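As a cleartext sketch of the lookup-table approach (an assumed decomposition of my own, not the exact protocol of any specific scheme), an 8-bit comparison can be split into tables over 4-bit limbs:

```python
# Cleartext illustration of decomposing an 8-bit "<" into 4-bit lookup tables,
# the style of evaluation used by LUT-based schemes (e.g., TFHE-like schemes).
LT = {(x, y): int(x < y) for x in range(16) for y in range(16)}   # 4-bit less-than table
EQ = {(x, y): int(x == y) for x in range(16) for y in range(16)}  # 4-bit equality table

def less_than_8bit(a, b):
    a_hi, a_lo = a >> 4, a & 0xF
    b_hi, b_lo = b >> 4, b & 0xF
    # a < b  iff  (a_hi < b_hi)  or  (a_hi == b_hi and a_lo < b_lo)
    return LT[a_hi, b_hi] | (EQ[a_hi, b_hi] & LT[a_lo, b_lo])

assert less_than_8bit(37, 200) == 1
assert less_than_8bit(200, 37) == 0
```

In a real scheme, each table lookup is evaluated homomorphically on encrypted limbs, and the AND/OR combination is itself another small lookup table.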
How slow is FHE?
The numbers are always evolving, but the state of the art in 2025 is that you can do things like facial recognition on images in a few seconds on a single powerful GPU. In my opinion you still need a team of experts to actually achieve this kind of performance, so a lot of people are keeping their fast solutions to themselves and trying to commercialize them.
Is FHE secure against quantum computers?
Yes. It uses the same underlying hardness assumptions as most of the NIST-standardized PQC methods, and a very similar technique to Kyber, albeit with different parameters.
What applications are actually good for FHE?
See FHE in production. There are more in the works that I have heard of in private discussions but cannot write about yet because there is no public information to verify it.
Does FHE require encrypting everything? (Even a server-owned database?)
No. At the simplest level, you can choose what to encrypt. For example, you can encrypt a client query, but not the client id or timestamp. Or, if a client query is a list of data items, you can encrypt each item separately. This is a protocol design question: the more data you expose, the more options you have for performance.
You can also keep all public data unencrypted.1 Encrypted data can interact with unencrypted data, but the results of that interaction become encrypted, and the interaction must not branch on secret data. So, for example, you can query a plaintext database for a secret identifier, but the naive way to do this requires a linear scan of the database and a multiplexer-style select operation to choose the right row. After processing each row, you get what appears to be a new ciphertext that masks whether you selected that row (as well as whether anything changed from the last row).
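The naive linear scan described above can be sketched in the clear (names and data are mine, and in real FHE the selector bit and the arithmetic would all happen under encryption; this only shows the branch-free data flow):

```python
# Cleartext sketch of a multiplexer-style private lookup over a plaintext
# database. Every row is touched; no control flow depends on the secret id.
database = {101: "alice", 102: "bob", 103: "carol"}  # server-side, plaintext

def private_lookup(secret_id):
    result = 0  # in real FHE this would start as an encryption of 0
    for row_id, value in database.items():
        selector = int(row_id == secret_id)      # encrypted 0-or-1 in real FHE
        # Encode the row as an integer so "select" is just a multiplication.
        encoded = int.from_bytes(value.encode(), "big")
        result += selector * encoded             # accumulate, never branch
    return result.to_bytes((result.bit_length() + 7) // 8, "big").decode()

assert private_lookup(102) == "bob"
```

The cost is linear in the database size, which is why the PIR techniques mentioned below exist: they get the same privacy with sublinear communication and better server cost.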
In fact, many HE techniques rely on being able to do more with plaintext data, and so plaintext-ciphertext operations are much more efficient than ciphertext-ciphertext operations. For example, this paper shows how batch ciphertext-plaintext matrix multiplication (in FHE) can be made more efficient by rearranging the plaintext matrices into special data layouts.
There are also many non-FHE techniques that are specifically tailored to database lookups with private queries. See private information retrieval (PIR) and private set intersection (PSI), which I would argue have had a lot more commercial success than FHE so far, and they are used in production at scale at companies like Google today. Some of these techniques actually use HE as a building block. See for example SimplePIR.
Would training an LLM to work on encrypted data also make that LLM good at breaking that encryption?
No. You don’t train the LLM to work on encrypted data, you directly convert the trained model’s internal operations into corresponding operations that work on encrypted data.
Any retraining of LLMs is done on the same unencrypted data, and the purpose of retraining LLMs for FHE is to use operations (activations, quantization, etc.) that have better performance when converting to FHE.
Opinions
What is FHE’s killer app?
I don’t think we’ve found it yet.
Why not just do the computation locally?
If the server holds a lot of data that is required for the computation but cannot be sent to the device. Or if you want to protect the IP of the server-side algorithm and/or data, and are worried about it being leaked if you ship it to the device.
However, local computation is the best privacy-enhancing option.
Is there a market for people who want services that are private, but cost so much more because of FHE?
My take is that FHE (or some other privacy-enhancing technology) will have demand in the following situations:
- When not using it is illegal.
- To protect against insider risk.
- When it can be used to provide some new service that otherwise does not exist, due to legality or just an ick factor that stops people from doing it today.
In those cases the additional costs matter less because there is no alternative, and so people are not anchored to an expected price. I also don't think FHE will be that expensive in the long run, or even as expensive as people think today; getting acceptable performance is mainly a matter of having enough FHE expertise in the development process.
FHE already has demand in other, murkier cases like making blockchain transactions private, but mainly I think that sort of demand is coming from the desire to do illegal things without getting caught, and most people interested in the "market" for FHE are probably not interested in crime.
Why would a company give up being able to analyze user data?
I think most companies would not want to give this up, so they just don’t protect privacy in a strong way.
Isn't just building trust better than the overhead of FHE?
Yes. But trust is also not enough to prevent honest mistakes that cause leaks, which FHE protects against.
Can FHE do LLMs?
Yes, but I don’t think the performance will be there to make it practical until maybe 2030 or 2035, and it’s not clear to me if they will ever outperform what can be done equivalently on device.
That said, LLMs are a great target for researchers to demonstrate the power of particular optimizations and crypto improvements. It’s a hot research topic in 2025.
Why not SGX, TEE, CVMs, or other hardware-based solutions?
I am not an expert in those technologies. Many people I talk to are concerned that TEEs just aren’t secure enough in practice. Side channel attacks are wild. It seems like getting things right with trusted hardware is quite hard.
I think it mainly comes down to risk tolerance. I trust the hardness of math problems more than the skill of hardware designers.
Doesn’t the extra structure of FHE make it easier to break?
Yes and no. All modern cryptography is based on problems with some mathematical structure. It's true that enough structure does weaken security, and many cryptosystems have been broken by exploiting the underlying math structure. I imagine the reason it took so long to find a working FHE scheme (from the 1970s to 2009) was that every time someone added enough structure to make FHE work, it made the scheme too easy to crack. But I don't think there's any evidence that the specific kind of structure that enables FHE (being able to add and multiply) necessarily makes a cryptosystem weaker.
I suppose the only solid answer here is that attacks haven’t been discovered yet despite many smart people trying. This is the same problem with all cryptographic hardness assumptions, so what sets FHE apart is that the underlying problems are newer.
Also see Estimating the Security of Ring Learning With Errors.
I will call unencrypted data "plaintext data", though there is a nontrivial difference between cleartext data and plaintext data in most HE schemes. ↩︎
Want to respond? Send me an email, post a webmention, or find me elsewhere on the internet.