In case you do not know how GenAI works, here is a very abridged description:

First you train your model on some inputs. This uses some very fancy linear algebra, but can be seen as mostly being a regression of sorts, i.e. a lower-dimensional approximation of the input data.

Once training is completed, you have your model predict the next token of your output. It will do so by creating a list of possible tokens, together with a rank of how good a fit the model considers the specific token to be. You then randomly select from that list of tokens, with a bias toward higher-ranked tokens. How much bias your random choice has depends on the "temperature" parameter, with a higher temperature corresponding to a less biased, i.e. more random, selection.
Now obviously, this process consumes a lot of randomness, and the randomness does not need to be cryptographically secure, so you usually use a statistical random number generator like the Mersenne Twister at this step.
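To make that sampling step concrete, here is a minimal sketch in Python. The vocabulary and scores are made up for illustration; real models apply a softmax over tens of thousands of tokens, but the temperature mechanics and the use of a statistical generator like the Mersenne Twister are the same idea:

```python
import numpy as np

# Statistical, non-cryptographic RNG: NumPy's legacy RandomState is MT19937,
# i.e. the Mersenne Twister mentioned above.
rng = np.random.RandomState(seed=1234)

def sample_next_token(logits, temperature=1.0):
    """Pick a token index, biased toward higher-scoring tokens.

    Lower temperature sharpens the distribution (more deterministic),
    higher temperature flattens it (more random).
    """
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()                          # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()   # softmax
    return rng.choice(len(probs), p=probs)

# Hypothetical toy vocabulary and model scores, purely for illustration.
vocab = ["the", "cat", "sat", "flew", "quantum"]
logits = [4.0, 2.5, 2.0, 0.5, -1.0]

print(vocab[sample_next_token(logits, temperature=0.7)])  # almost always "the"
print(vocab[sample_next_token(logits, temperature=2.0)])  # noticeably more varied
```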
So when they write "using a Gen AI model to produce 'true' random numbers", what they're actually doing is using a cryptographically insecure random number generator and applying a bias to the random numbers generated, making it even less secure. It's amazing that someone can trick anyone into investing in that shit.
"Let's generate low quality random numbers about as fast as a grandma knitting socks using terra-watts of power in billion dollar data centers." - said no one ev... Oh wait.
There's also the noise introduced by the GPU scheduler doing the matrix multiplies in a different order, which produces different results because float is not associative.

Surely they meant that... Right?...

But also probably that isn't true random either.
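A tiny Python illustration of that non-associativity (on a GPU the effect comes from the reduction order changing between runs, but the arithmetic is the same):

```python
a, b, c = 1e16, -1e16, 1.0

print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0 -- the 1.0 is lost when it is added to -1e16 first
```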
@ask that noise would be considered true random noise, but I don't know how many bits of entropy it has. While float isn't associative, it's like "mostly" associative, so depending on the conditioning of the matrix, it should be fairly low.
In any case, if you wanted to use that noise for cryptographic purposes, you'd first have to debias it by running it through a DRBG, and at that point you could just harvest it directly from the GPU for higher quality and performance.
Or query your stupid hardware RNG that literally every modern CPU has built-in.
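A toy sketch of what the "debias it first, then run it through a DRBG" step above could look like, assuming you had harvested some heavily biased noise bytes. The names and the SHA-256/HMAC construction here are illustrative stand-ins, not a vetted DRBG design:

```python
import hashlib
import hmac

def condition(raw_noise: bytes) -> bytes:
    """Compress biased/correlated input bytes into a 32-byte seed.

    Toy illustration only: a real design would use a standardized
    conditioner and DRBG (e.g. the NIST SP 800-90 constructions) and
    would need an entropy estimate for the input source.
    """
    return hashlib.sha256(raw_noise).digest()

# Hypothetical stand-in for noise harvested from nondeterministic GPU sums.
raw_gpu_noise = bytes([1, 0, 1, 1, 0, 1, 1, 1] * 16)  # heavily biased bits

seed = condition(raw_gpu_noise)
# Expand the seed deterministically, DRBG-style, using HMAC as a simple PRF.
block_0 = hmac.new(seed, b"block-0", hashlib.sha256).digest()
print(block_0.hex())
```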
@ask My bad if I have missed part of this thread - what you say is 100% accurate except nobody is using genAI random numbers for cryptographic or security purposes? Who is saying that? What?