Post Quantum and Homomorphic Encryption made easy
Major challenges in computer and information security are the advent of quantum computers and the need to trust your data to (cloud) service providers. New cryptographic schemes are supposed to help, but they look daunting. At their core, however, they are just children’s riddles. An introduction to Lattice cryptography and Learning With Errors.
Information security faces two big challenges:
- Cryptographers are afraid that quantum computers may be able to break existing public-key cryptography easily, a few years from now. As public-key cryptography is at the heart of almost every information security mechanism, something needs to be done. And for secrets which should remain secret for many years or decades, we need solutions now. These solutions are called Quantum-safe or Post-Quantum cryptosystems.
- Cloud computing, smart cards and the Internet of Things have a common problem: You may need to process data on platforms which are in the hands of someone else. Trust in those devices and their owners or operators is necessary, but there is no way to test whether they are violating your trust. It would be great if sensitive data would not need to be processed on these systems, if instead, they could operate on encrypted versions of the data. This is the promise behind (Fully) Homomorphic Encryption: Performing mathematical operations on the encrypted data, with the results remaining encrypted; essentially, the holy grail for trustworthy computing.
When trying to learn about Quantum-resistant cryptography or Homomorphic Encryption, the terms Lattice-based Cryptography and Learning With Errors quickly pop up. However, the descriptions I was able to find on the Internet seem to discourage any normal reader from understanding them: As presented by those in the know, the math looks scary and complex; achieving magic looks trivial by comparison.
However, at their core, they are just coordinate systems and children’s riddles.
So, let me give you an intuition for how they work, explained even for those who hate math, and conclude with what it all means.
Table of Contents
- Lattices are fancy coordinate systems
- Learning with Errors
- As a riddle (without errors)
- Properties of linear equation systems
- For cryptography (with errors)
- A riddle with turtles
- Learning-with-errors based public-key cryptography
- Cryptographic applications
- Conclusion
- Acknowledgments
Lattices are fancy coordinate systems
You know coordinate systems from blueprints, maps, you name it. They are easy:
- The coordinate axes are orthogonal to each other (and parallel to the edges of the sheet or screen)
- The measurement system (or spacing) along the coordinate axes is the same (e.g., x coordinate 1 is at the same distance from the origin as y coordinate 1, just in a different direction)
- The coordinate systems are linear (i.e., the distance from 0 to 1 is the same as from 3 to 4)
Left: A “normal” two-dimensional coordinate system with orthogonal, linear and proportional axes, labeled X and Y, as almost any coordinate system we will deal with in real life. Adding two vectors (arrows) of length 1 aligned to the axes, A and B, allows us to follow a series of these arrows to the destination.
Right: For example, to get to the point indicated by the middle red arrow, at X coordinate 2 and Y coordinate 1, we follow twice the step indicated by arrow A (dark blue) and once the step indicated by arrow B (cyan). In math terms, the distance between the origin and the point at (2, 1) is 2*A+1*B.
This all seems natural, because we are used to it and they are made to be easy on us. The lattices used for encryption will just add a few twists to them, which we will introduce.
Twisting the vectors
That was easy. You may even have wondered why we talk about that at all. So let’s look at slightly different vectors A and B:
Left: Our vector A is no longer aligned to the X axis.
Right: But we still can reach any integer coordinate (marked by the green dots) using an integer multiple of A and B vectors. The same point at coordinate (2, 1) can now be reached by following twice along the distance and direction of A and then following once in the reverse direction of B. In math terms, that point is at 2*A–1*B.
The new coordinate system is a slight nuisance, but still easy to handle. (In fact, using a simple formula, any (x, y) coordinate pair can be easily transformed into the number of A’s and B’s to add.)
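To make that “simple formula” concrete, here is a minimal sketch in Python. The vector values are hypothetical, chosen to match the figure’s 2*A–1*B result; finding the number of A’s and B’s is just a change of basis, i.e., solving a tiny linear system.

```python
import numpy as np

# Hypothetical twisted basis vectors, consistent with the text's 2*A - 1*B.
A = np.array([1, 1])
B = np.array([0, 1])
basis = np.column_stack([A, B])   # columns are the basis vectors

# How many A's and B's reach the point (2, 1)? Invert the basis.
coeffs = np.linalg.solve(basis, np.array([2, 1]))
print(coeffs)                     # -> [ 2. -1.], i.e., 2*A - 1*B
```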
In the same spirit, let’s continue twisting the vectors further:
Left: The vectors are now almost aligned.
Right: We can still address our three points (and in fact, any integer coordinate pair) using a weighted sum of our two basis vectors, A and B. For example, the point at (–2, –2) can be reached by first following A once forward, then twice B backward, then once more A forward, resulting in 2*A–2*B. (The A-B-B-A order in the figure is just to keep the intermediate points inside the drawing; the steps can be arbitrarily reordered.) While this point is easily reachable, the point at (2, 1), which we reached in three steps in each of the previous examples, would now require a stunning 17 steps.
With these two almost-aligned basis vectors, it becomes hard to determine the number of A and B steps to take. Some might be reminded of the complexity to move to a given location with a knight on a chess board.
Not all points are valid
In the previous section, all integer coordinates were reachable, because the vector pairs were carefully selected to enable this. Choosing vectors randomly would rarely result in this property. For example, after a slight variation of the second set of vectors above, only every second point is reachable, checkerboard-style.
After doubling the length of B (left) or turning B as well (right), every other point in the coordinate systems becomes unreachable. As hard as you try, you will never get to any of the hollow points. (The right-hand image might remind you of the reachable squares of the black bishop in chess.)
By choosing longer vector lengths, you will generally create even sparser lattices. (This is a general rule. Let’s gloss over how to guarantee a certain sparseness; but it involves things like the relationships between the prime factors of the vectors’ coordinates.)
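A quick way to see this sparseness in code, as a sketch with hypothetical vectors matching the checkerboard figure: a point is on the lattice only if its basis coefficients come out as integers, and the determinant of the basis tells you the density, one lattice point per |det| integer points.

```python
import numpy as np

# Hypothetical basis with B doubled, matching the checkerboard figure.
basis = np.column_stack([np.array([1, 1]), np.array([0, 2])])

# |determinant| = 2: only every second integer point is a lattice point.
print(int(round(np.linalg.det(basis))))   # -> 2

def reachable(point):
    """A point is on the lattice iff its coefficients are integers."""
    coeffs = np.linalg.solve(basis, point)
    return np.allclose(coeffs, np.round(coeffs))

print(reachable([2, 2]))   # True:  2*A + 0*B
print(reachable([2, 1]))   # False: would need half a B step
```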
More dimensions
So what happens when we make these vectors more complicated, both in their sheer number (and thus, also their number of dimensions) and their length? It becomes even harder.
Humans can easily deal with two, sometimes three dimensions. Switching to dozens or even hundreds of dimensions and having longer vectors makes it very hard to determine:
- which coordinates are reachable at all,
- how a given coordinate can be reached, and, especially,
- if a coordinate is unreachable, what the closest reachable point is.
The latter property is known as the closest vector problem and is at the heart of Lattice-based cryptography.
Lattice-based cryptography
The closest vector problem is hard to solve for a complex set of basis vectors. However, if you have a simple set of basis vectors, as we started off in the first few examples, it becomes easy to solve. Having such a pair of complex and simple problems is at the heart of public-key cryptography.
So in essence, Lattice-based public-key cryptography works as follows:
- Alice creates a set of simple basis vectors in a high-dimensional space, which does not cover all of the space (think of a checkerboard with most squares white; black ones only few and far between, with no visible pattern to their placement). This is her private key, in which the closest vector problem is easy to solve.
- Alice also creates a second set of basis vectors, each vector built by weighted addition of many of her simple vectors. This is her public key, where it is still easy to create a valid point, but it is hard to determine whether a given point is valid or not (and where it is also hard to solve the closest vector problem).
- Alice publishes her public key and keeps her private key private (just as with any other public-key cryptosystem).
- Bob, wishing to encrypt a message to Alice, uses her public vectors to encode the message. For example, he could multiply the first public basis vector by the first byte of the message, the second vector by the second byte, and so on. Summing up all these vectors leads to a valid lattice point.
- Bob then slightly and randomly moves that encrypted data point, such that it is off-lattice now.
- He then sends that message to Alice.
- Any eavesdropper will only know the public, complicated, basis vectors and will be unable to solve the closest vector problem.
- However, Alice, with access to the private key, can solve the closest vector problem easily and therefore reverse-engineer (1) the actual data point and (2) what factors Bob used to multiply the basis vectors with, and thus the bytes of the message.
(In case you wonder: As is common in cryptography, these operations are all done in modular arithmetic.)
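To make this concrete, here is a minimal 2-D sketch of the idea, loosely following the GGH scheme; all numbers here are made up for illustration, and real systems use hundreds of dimensions, careful error bounds, and modular arithmetic. Alice’s “good” basis makes rounding to the closest lattice point reliable; the “bad” public basis spans the same lattice, but the same rounding fails with it.

```python
import numpy as np

# Alice's private key: a "good" (nearly orthogonal) basis; rows are vectors.
good = np.array([[7, 0],
                 [1, 8]])

# Alice's public key: a "bad" basis of the *same* lattice, obtained by
# mixing the good vectors with a unimodular matrix (determinant +-1).
U = np.array([[5, 4],
              [4, 3]])            # det(U) = -1, so the lattice is unchanged
bad = U @ good

def babai_round(point, basis):
    """Babai's rounding: approximate the closest lattice point."""
    coeffs = np.round(np.linalg.solve(basis.T, point))
    return basis.T @ coeffs

# Bob encodes his message as integer multiples of the public vectors ...
message = np.array([3, 2])
lattice_point = bad.T @ message   # a valid lattice point
# ... and moves it slightly off-lattice with a small random error.
ciphertext = lattice_point + np.array([1, -1])

# Alice, using her good basis, finds the closest lattice point ...
recovered = babai_round(ciphertext, good)
# ... and recovers Bob's multipliers relative to the public basis.
print(np.round(np.linalg.solve(bad.T, recovered)).astype(int))  # -> [3 2]

# An eavesdropper, rounding with the bad basis only, lands on a
# wrong lattice point far away from the ciphertext:
print(babai_round(ciphertext, bad))                             # -> [171. 136.]
```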
Learning with Errors
As a riddle (without errors)
A mother has two children. The girl is twice the age of the boy. Together, they are 18 years old. How old are the children?
A simple children’s riddle, which can be solved using linear equations.
The riddle can be solved by trial and error, or using a set of linear equations, where B is the age of the boy and G is the age of the girl:
2*B – G = 0
B + G = 18
Solving this linear system for B and G gives B=6 and G=12. Here, we learned the ages of the children without errors.
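In code, such a system is a one-liner (a tiny sketch using NumPy):

```python
import numpy as np

# Rows are the two equations; columns correspond to B (boy) and G (girl).
lhs = np.array([[2, -1],    # 2*B - G = 0
                [1,  1]])   # B + G   = 18
rhs = np.array([0, 18])

boy, girl = np.linalg.solve(lhs, rhs)
print(boy, girl)            # -> 6.0 12.0
```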
Properties of linear equation systems
That was easy. In general, linear equations are easy: With enough equations, they can be solved in a well-defined, straightforward manner, even if you increase the number of variables into the millions or billions.
“Enough equations” generally means that the number of equations needs to be at least equal to the number of variables. (Equations that can be formed by weighted addition of other equations, i.e., those that are linearly dependent on others, don’t count, as they do not add new information.)
More information may not necessarily be better: If a third equation, “3*B + G = 100”, were added, it would conflict with the other two, as 3*6 + 12 = 30, which obviously does not equal 100. However, each of the three possible pairs of equations would have a valid solution; in this case, the solutions would be vastly different from each other.
For cryptography (with errors)
With these conflicts, we are on the right track toward a cryptographic puzzle. The basic idea behind learning with errors is as follows:
- Have a set of linear equations
- Increase the number of variables to be huge (into the many thousands or even millions)
- Add more valid equations, significantly beyond the number of variables. (They will automatically be linear combinations of the initial set, not adding additional information.)
- And now the magic part: Slightly jiggle the constants
A riddle with turtles
A mother turtle has two children. The girl is twice the age of the boy. Together, they are 180 years old. How old are the children?
A slightly more complicated children’s riddle, which can still be solved using linear equations (or guessing).
The ages are ten times those of the human children, but otherwise things stay the same. We also add a third equation, which agrees with the other two. (It is automatically a linear combination of the first two; in this case, twice the first equation plus seven times the second equation equals three times the third equation.)
However, now we slightly change the right-hand sides of the equation, adding or subtracting a small number:
All of the equations now still almost match the girl’s and boy’s ages of 120 and 60 years, respectively, but no pair of equations yields the exact numbers anymore, and neither does the third equation. Picking any pair of equations, however, gives a good estimate of the real result. With these approximations, you can perform further calculations and, as long as we are careful, the results will be close to the correct results as well. But we will never know the exact numbers. (And rounding to the nearest multiple of ten only works in this toy example.)
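A small sketch of this: with jiggled right-hand sides, no exact solution exists anymore, but a least-squares fit still lands close to the true ages. (The jiggle values below are made up; the article’s image may use different ones.)

```python
import numpy as np

# The turtle riddle plus a redundant third equation (3*B + G = 300,
# a linear combination of the first two), with jiggled right-hand sides.
lhs = np.array([[2, -1],    # 2*B - G  ~ 0
                [1,  1],    # B + G    ~ 180
                [3,  1]])   # 3*B + G  ~ 300
rhs = np.array([2, 178, 303])   # hypothetical small errors added

# No exact solution exists, but least squares gets close to B=60, G=120.
solution, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
print(solution)             # -> approximately [60.9, 119.1]
```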
Learning-with-errors based public-key cryptography
To build a public-key cryptosystem, the correct values of the variables can be used as the private key and the jiggled equations as the public key. As with the lattices above, the cryptographic operations are done in modular arithmetic.
The lower equation is a linear combination of the equations above.
In this case, the sum of 5 times the first, 3 times the second, and (–1) times the third equation.
- Alice chooses a modulus and creates private and public keys; she publishes the public keys and the modulus (let’s assume it to be M=89).
- For each encrypted bit that Bob wants to send to Alice, Bob creates a new, unique equation as a random, non-trivial linear combination of a subset of the equations forming the public key. (For example, the lowest equation in the image above.)
- To communicate a bit with a value of zero, Bob transmits this base equation as-is (of course, modulo M, so the 131 would turn into 131 mod 89 = 42).
- To communicate a bit with a value of one, Bob transmits the base equation modified by adding roughly half the modulus to the right-hand side, again modulo M (half of M would be roughly 44, so 42+44=86 would be transmitted as the right-hand side of the equation).
- When receiving an equation, Alice can easily compute the correct value for the right-hand side of the equation from the left-hand-side coefficients (5 and –8, in the case of the linear combination created above). The correct result here would be 120 (=5*120–8*60); modulo M that would be 31.
- When the difference between the transmitted and the correct value is small (in this case, 42–31=11), the equation is decrypted into a bit valued 0; if the difference is high (close to ½M), the equation is decrypted into a bit valued 1.
- Given the right parameters, this decryption is easy for Alice and impossibly hard for anyone else; exactly how we want public-key cryptography to behave.
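Here is a minimal toy sketch of the scheme just described. The parameters are unrealistically small and the errors are limited to ±1 so that the example always decrypts correctly; real systems use hundreds of variables and carefully chosen error distributions.

```python
import numpy as np

rng = np.random.default_rng(1)
q = 89    # modulus, as in the article's example
n = 4     # number of secret variables (toy size)
m = 12    # number of public equations

# Alice: private key s (the true variable values) and public key (A, b),
# the equations with slightly jiggled right-hand sides.
s = rng.integers(0, q, n)
A = rng.integers(0, q, (m, n))
e = rng.integers(-1, 2, m)          # small errors in {-1, 0, 1}
b = (A @ s + e) % q

def encrypt(bit):
    # Bob: random combination of a subset of the public equations;
    # encode a 1 by shifting the right-hand side by roughly q/2.
    subset = rng.integers(0, 2, m)
    return (subset @ A) % q, (subset @ b + bit * (q // 2)) % q

def decrypt(lhs, rhs):
    # Alice: the difference is just the accumulated error (bit 0)
    # or the error plus ~q/2 (bit 1).
    diff = (rhs - lhs @ s) % q
    return 0 if min(diff, q - diff) < q // 4 else 1

for bit in (0, 1, 1, 0):
    assert decrypt(*encrypt(bit)) == bit
print("all bits decrypted correctly")
```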
Cryptographic applications
Key and message sizes
The first thing to note is that, for both lattices and Learning With Errors, both the keys and the messages are significantly larger than what we are used to from “traditional” public-key cryptography: Every transmitted bit(!) may be hundreds of bytes or even kilobytes in size.
Quantum-resistant cryptography
Many current (“classical”) public-key algorithms (RSA, Diffie-Hellman, ECC, …) rely on the hardness of factoring numbers into their (large) prime factors or of computing discrete logarithms. A powerful quantum computer running Shor’s Algorithm would be able to solve both problems quickly and thus break this public-key cryptography. As data encrypted today might still need to remain safe when powerful quantum computers become available (at least several years from now), we need to stop protecting such data with these “classical” public-key algorithms.
Current symmetric-key cryptographic algorithms such as AES will be affected much less by quantum computers; Grover’s algorithm effectively halves the key length, which can be countered by doubling the key size.
Lattice-based cryptography and Learning With Errors are considered to be safe against quantum computers.
Homomorphic encryption
The goal of homomorphic encryption is to separate the provisioning of computing power from the right to access the data in the clear: Calculations on (encrypted) data can be performed on a computer without the computer needing to decrypt and re-encrypt the data. In fact, without the computer even having the key (and thus the ability) to decrypt the data.
One of the earliest cryptosystems which allowed some simple forms of homomorphic encryption was RSA. In “textbook RSA”, (modular) multiplication can be performed on the encrypted data. For most applications, this “malleability” is undesirable and needs to be actively prevented by making sure that the decryption of a homomorphically-multiplied message has an illegal format.
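A tiny demonstration of this malleability, with textbook-sized toy numbers that must never be used in practice:

```python
# Textbook RSA: Enc(m) = m^e mod n, so ciphertexts multiply:
# Enc(m1) * Enc(m2) = (m1*m2)^e = Enc(m1*m2)  (mod n).
p, q = 61, 53                        # toy primes
n = p * q                            # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

c = (enc(6) * enc(7)) % n            # multiply the *ciphertexts* ...
print(dec(c))                        # -> 42: ... the product decrypts!
```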
Lattice-based cryptosystems are much better suited for homomorphic encryption, as they can support multiple operations, not just multiplication. But the property of malleability remains, so in the encrypted state, almost any operation could be performed on the encrypted data. You still have to trust that the right operations were performed when using the decrypted result.
The simple Learning-With-Errors scheme described above is not directly usable for FHE, as the errors grow with every operation performed on the ciphertexts. Modern fully homomorphic schemes do, however, build on (Ring-)LWE variants with careful error management.
Conclusion
You have learnt the basics of two cryptographic primitives that will be important in the future, and two of their applications. So you are well prepared for what is coming. I hope you enjoyed the minimal math! (If you insist on these two topics being explained with just a little bit more math, read on and watch the linked videos.)
Acknowledgments
The descriptions are inspired by Kelsey Houston-Edwards’ videos:
- Lattice-based cryptography: The tricky math of dots
- Learning with errors: Encrypting with unsolvable equations
They are great and provide some additional information.
I would like to thank DALL•E 2 for the teaser image it created using the prompt “Post-Quantum Encryption without text”. Yes, it seems that “without text” was ignored, as it does not seem to understand the concept of “text” at all, and fails to recognize that it does not recognize (part of) the prompt, as image generation AI seems to be prone to do. Also, the concept of negation (“without”) seems to be hard for AI; but then again, it is hard for humans: You might remember experimenting with “do not think of a pink elephant” in your childhood. And failing to not think of a pink elephant…
Reproducible AI Image Generation: Experiment Follow-Up
Inspired by an NZZ Folio article on AI text and image generation using DALL•E 2, I tried to reproduce the (German) prompts. The results: Nouns were pretty well understood, but whether other parts—such as verbs or prepositions—had an effect on the image was akin to throwing dice. Someone suggested that English prompts would work better. Here is the comparison.
Table of Contents
- The original goal
- Railway station
- Intelligence
- Antique fun
- Neural network
- Culture clash
- Nile conflict
- Sneakernet
- The artist
- Crown jewels
- Stained-glass windows
- Bonus track: Scary doors
- Lessons learned
- “AI psychology”
- DIY
- Understanding AI
The original goal
A related article (actually its predecessor) is available in German 🇩🇪: «Reproduzierbare KI: Ein Selbstversuch». It also provides more background information on AI.
The goal of the original self-experiment was to determine how easy it was for a prompt designer to
1. find the right prompt, and
2. wait until a matching image appears.
By trying to reproduce the original prompts, an estimate on #2 was possible. (My guess: Several of the images required multiple or even many attempts.)
Trying to switch languages can help give a hint on part of question #1: Whether switching language simplifies finding the right prompt.
“Haustür mit Krokodil” (“front door with crocodile”) according to MidJourney. The “Krok” neon lighting is the only reference to the prompt I could find.
A side lesson from the experiments in German: Current generative AIs have no way to report “Huh?”; they just do their best. And if, as in this image, MidJourney does not understand the German prompt at all (“Haustür mit Krokodil”, i.e., “front door with crocodile”), it just creates anything. (For artistic image generation, this may be acceptable behavior. But not in many other scenarios where we use AI, now or in the future.)
When looking at the images here, you may want to have the original German article side-by-side for direct comparison.
Railway station
«A robot reading a newspaper outside in a Swiss city while waiting for the train. Large-size photography, Jeff Wall style. Very detailed. Very high resolution. Linhof Master Technika Classic. Hasselblad. 80 mm.»
After generating the first four images, I found the railway stations to be much more “Swiss” than those created from the German prompt. The houses, the hedges, the walls, the streets: Clearly Swiss; those created from the German prompt, on the other hand, looked much more generic and could be in many other (European) countries instead.
I only noticed the lack of Swissness in comparison to these stunning images. So I wanted to be sure and made two additional sets with identical prompts (DALL•E 2 always creates a set of four independent images for each prompt). If you look closely, you can find some non-Swiss items here, such as the clock in image 6 or the pole in image 7.
To make sure that the perspective originally chosen for the German prompts was not the reason for the lack of Swissness, I created another set with the original German prompt:
«Ein Roboter liest draussen in einer Schweizer Stadt eine Zeitung, während er auf den Zug wartet. Grossformatige Fotografie im Stil von Jeff Wall. Sehr detailliert. Sehr hohe Auflösung. Linhof Master Technika Classic. Hasselblad. 80 mm.»
Several of the newly generated German-prompt images were much more Swiss: I found number 3 particularly Swiss, with the overhead line gantry. The ticket vending machine(?) in the first image, however, is a weird combination of equipment, parts of it reminiscent of actual Swiss vending machines (such as the top cover).
Conclusion: The original German results showed a lack of Swissness, but that may be partly due to the selection of angles. The new German results seem somewhat more Swiss, but the English prompts feel delightfully at home.
Result: English wins.
Intelligence
«Intelligence. Magazine illustration.»
Discussion: Even though “Intelligence” could mean multiple things, the images clearly refer to the meaning of brain power. All four fit the description, in my opinion.
Result: English is a clear winner here.
Antique fun
«A Greek statue, stumbling over a cat.»
Discussion: Cats and statues clearly visible in all cases, three could be considered stumbling, two of them over the cat. (The “stumbling” part was clearly absent throughout the German original.)
Result: English again significantly better.
Neural network
«Neural Network.»
Discussion: While none of these actually represent a neural network, I could imagine each of them being used as a teaser image for an article on the topic (even though the eyes in 2 and 4 are not parallel). Maybe they were actually taught from such teaser images… (The German originals were depicting neurons.)
Result: English is the clear winner here.
Culture clash
«A robot, reading a book, in an Indian street. Style: National Geographic.»
Discussion: All pass. (The German results only contained robots in two out of four images.)
Result: Go, English, go!
Nile conflict
«An ancient Egyptian painting, depicting a fight over whose turn it is to take out the trash.»
The first four images were accidentally created using a typo in the prompt, “Egyptioan”, so the set was redone without the typo.
Conclusion: The typo did not seem to make a difference; typos which humans would ignore probably have little or no effect. All images seem to show some form of dispute in ancient Egyptian style, but none about trash. (Some German originals seemed to include trash cans; and I prefer the colored images there.)
Result: For once, I think German has had an advantage.
Sneakernet
«A modern sneaker made of clouds.»
Description: The first two images use clouds for part of the shoe; for the third, it’s decoration; and the last reminds me of a little child hitting its banana-flavored ice cream until it looks like a cauliflower. I find the German originals much more impressive, if slightly less accurate.
Result: Let’s call that one a tie.
The artist
«A robot at Paul Klee's typewriter.»
«A robot at a typewriter by Paul Klee»
This prompt’s translation is ambiguous, depending on how you parse the sentence: “Ein Roboter an der (Schreibmaschine von Paul Klee)” (parentheses added for grouping) would be “A robot at Paul Klee’s typewriter”, as I had always imagined the sentence’s meaning. However, “A robot at a typewriter by Paul Klee”, derived from “(Ein Roboter an der Schreibmaschine) von Paul Klee”, might be the original prompt’s actual intention. So, your choice; both are here.
Description: The first prompt clearly and reliably matches the “robot at a typewriter” part, even the “at”, which was all but ignored in the German original. However, the second prompt looks much more artistic, more homogeneous in style within each image. The German original was more “original”: independent, more what a real artist would do given the prompt.
Result: Even if we give the German interpretation extra points for “free thinking”, the English versions would be much more likely to hang in a museum or a residence.
Crown jewels
«The Royal Skateboard of England, exhibited between the Crown Jewels in the Tower of London.»
I had been using the German originals as showcases for how the AI can “imagine” fictitious scenes no one has ever seen, because all it ever does is create fictitious things. (Assigning human traits and concepts to AI is wrong, so I shouldn’t have written “imagine” in the first place. But the flesh is weak…)
However, I was disappointed with the first set of “royal skateboards”: Not really royal, and all depicted from the underside, some with impossible mechanics. So I gave it two more chances with the same prompt.
Description: The “exhibition” part feels slightly more pronounced in comparison to the German original. However, the German output is definitely much more “royal skateboard” than the vast majority of the above.
Result: This point is clearly a win for German.
Stained-glass windows
«A robot reading a book, as a stained-glass window.»
Description: In the German original, the “as a” part was causing confusion. In English, we have a perfect score, even with a second set.
Result: English wins hands down. Forget German here; it did not even qualify for the finals.
Bonus track: Scary doors
«Front door with crocodile» (second row without the “front”)
As I have been using “Haustür mit Krokodil” (“(front) door with crocodile”) as a simple test of whether AIs understand German, here is what DALL•E 2 does with the English version.
“Crocodile” seems to add green color and crocodile texture to the images, even to (parts of) the door.
If you look closely at image number 2, you will see that the gray thing at the lower left is debris(?), even though it looks like a crocodile at first glance. Maybe the prompt was misunderstood as “croco-tile”… 😉
Lessons learned
As a word of warning, the number of images generated for this impromptu “study” is way too small to come to solid conclusions. So take any of these conclusions with plenty of skepticism.
The German text concluded:
Nouns are fulfilled, everything else is pure luck. The images rarely matched the description, but all were beautiful.
Marcel Waldvogel (in: “Reproduzierbare KI: Ein Selbstversuch”, paraphrased)
For English, noun rendering remains very good. In addition, the relationship between the items is often reflected more accurately (with exceptions). This is especially prominent with the stained glass.
“AI psychology”
Some early assumptions on “AI psychology”:
- What we already “know” is that today’s generative AIs are unable to tell the user that they did not understand the instructions. Instead, they will just spin a tale. This tale may be self-consistent, but may bear little or no relationship to the original prompt.
- We “know” that too little training data will make it hard for the AI to create a meaningful scene. This is presumably the reason for “stained glass” failing in German while getting a perfect score in English; but also in the “stumbling” being rendered inaccurately or not at all.
- However, the English skateboard coming in unusual poses “might” (a lot of speculation here!) be a case of too much available data, including in unusual poses. And if you ask for something unusual, these poses might be more likely to be chosen. (But it could also just be an artifact of the small sample size used here.)
- A pattern I felt emerge: Making the sentence hard to understand will provide the AI with more leeway for interpretation. So, sentences it does not understand (the MidJourney “crocodile” example) or it understands less well (the German “Paul Klee” examples?) will result in images with more “artistic freedom”. This can be an advantage, if you can free yourself from trying to convince the AI to reach your pre-set goal.
You may know the following expression:
What’s worse than a tool that doesn’t work? One that does work, nearly perfectly, except when it fails in unpredictable and subtle ways.
However, if all you plan to do is create something for the beauty of it, and you are willing to spend some time weeding out what you do not consider nice enough, AI image generation is a great (and enjoyable) helper. Or time-waster… Anyway: Enjoy these images, and the ones you might create.
DIY
Did I whet your appetite? Here are some tools you might want to try:
- DALL•E 2, used here: Requires an account (email address+mobile number); free when used infrequently. Provides additional features.
- Craiyon, using its “predecessor” DALL•E mini: Free, low resolution, no login necessary. Ideal for learning the ropes.
- Stable Diffusion (Huggingface): “Stable Diffusion” uses different mechanics behind the scenes. Free, low resolution, no login necessary; “pro” version available without these limits.
- Stable Diffusion (Replicate): Another Stable Diffusion model, with plenty of knobs to turn. Free, low resolution, no login necessary; “pro” version available without these limits.
- MidJourney: Different concept, different UI: Send messages in a Discord group chat. The AI will answer in the group chat, giving you options to refine the images further.
Understanding AI
- The year in review (2023-12-23): This is the time to catch up on what you missed during the year. For some, it is meeting the family. For others, doing snowsports. For even others, it is cuddling up and reading. This is an article for the latter.
- How to block AI crawlers with robots.txt (2023-07-09): If you wanted your web page excluded from being crawled or indexed by search engines and other robots, robots.txt was your tool of choice, with some additional stuff like <meta name="robots" value="noindex" /> or <a href="…" rel="nofollow"> sprinkled in. It is getting more complicated with AI crawlers. Let’s have a look.
- «Right to be Forgotten» void with AI? (2023-03-02): In a recent discussion, it became apparent that «unlearning» is not something machine learning models can easily do. So what does this mean for laws like the EU «Right to be Forgotten»?
- How does ChatGPT work, actually? (2023-01-30): ChatGPT is arguably the most powerful artificial-intelligence language model currently available. We take a behind-the-scenes look at how the “large language model” GPT-3 and ChatGPT, which is based on it, work.
- Identifying AI art (2023-01-24): AI art is on the rise, both in terms of quality and quantity. It (unfortunately) lies in human nature to market some of it as genuine art. Here are some signs that can help identify AI art.
- Reproducible AI Image Generation: Experiment Follow-Up (2022-12-01): Inspired by an NZZ Folio article on AI text and image generation using DALL•E 2, I tried to reproduce the (German) prompts. Someone suggested that English prompts would work better. Here is the comparison.