snsabot: Logged on 2019-09-04 00:38:55 feedbot: http://bvt-trace.net/2019/09/linux-kernel-genesis-and-early-entropy-users/ << bvt's backtrace -- Linux kernel: genesis and early entropy users

feedbot: http://bvt-trace.net/2019/09/linux-kernel-genesis-and-early-entropy-users/ << bvt's backtrace -- Linux kernel: genesis and early entropy users

mircea_popescu: not like current kernel starts up with a 4kb entropy pool

mircea_popescu: srsly now, IF there is such a thing as a program that needs crypto-grade entropy at boot time, it's a piece of shit.

mircea_popescu: hence my comment re O-entropy

asciilifeform: and those by all rights oughta get marsaglia or similar penny 'entropy'. crypto-battlefield proggies have no biz making an appearance at boot time or in absence of init'd FG .
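The "marsaglia ... penny 'entropy'" referred to above is presumably one of Marsaglia's xorshift generators; a minimal sketch of xorshift32 (function names here are the editor's, purely illustrative):

```python
def xorshift32(state):
    """One step of Marsaglia's xorshift32 PRNG: three shift-xor passes
    on a 32-bit word. Cheap 'penny entropy' of the kind suggested for
    boot-time consumers; statistically decent, cryptographically worthless."""
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF

def penny_stream(seed, n):
    """Return n 32-bit words from a nonzero seed."""
    s = seed
    out = []
    for _ in range(n):
        s = xorshift32(s)
        out.append(s)
    return out
```

Full period is 2^32 - 1 over the nonzero 32-bit words, so a misconfigured boot-time consumer gets plausible-looking bits at near-zero cost.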

asciilifeform: strikes me as nuttery that e.g. nic driver, might not only want entropy, but ~in boot~

bvt: the O ring needs to be initialized somehow, zero-filling it may be bad, and keeping the existing infrastructure for just boot-time entropy collection is not an option; should i look for something simple that would work for initialization?

mircea_popescu: while alf's "no need for entropy during boot" is not correct, nevertheless "no need for I-entropy" stands, can just use the O register until you can indeed http://logs.nosuchlabs.com/log/trilema/2019-08-23#1930512

asciilifeform: btw it remains unclear to asciilifeform , why entropy would be wanted ~during boot~ at all

snsabot: Logged on 2019-08-22 13:25:30 mircea_popescu: imo correct design is 16kb to cpu-cache-sized inner ring buffer, wherein fg material is simply written into a loop, plain ; and from where high quality entropy is read blockingly. whenever the writing head threatens to overwrite the reading head, the overwritten bits are instead fed into outer ring

a111: Logged on 2019-08-22 16:43 mircea_popescu: imo correct design is 16kb to cpu-cache-sized inner ring buffer, wherein fg material is simply written into a loop, plain ; and from where high quality entropy is read blockingly. whenever the writing head threatens to overwrite the reading head, the overwritten bits are instead fed into outer ring

mircea_popescu: in exchange, they get non blocking "entropy" reads.

asciilifeform: same 'thermodynamic' problem , no matter how the output of FG is massaged, tho. if proggies ask for moar FG bits than FG has actually produced, they gotta block. ( linus fraudulently gave folx shitropy instead of entropy, so 'not block', but imho that's not even worth to discuss )

mircea_popescu: such that it can't either deplete the machine entropy by reading mb/s nor can it figure out the internals by reading straight fg bytes

mircea_popescu: no, it's not the same length, for one thing. 2nd buffer should be mb or larger ; and it gives the effect that there's an always-full entropy buffer

mircea_popescu: imo correct design is 16kb to cpu-cache-sized inner ring buffer, wherein fg material is simply written into a loop, plain ; and from where high quality entropy is read blockingly. whenever the writing head threatens to overwrite the reading head, the overwritten bits are instead fed into outer ring ☟︎
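The two-ring design described above can be sketched as follows (Python; class and method names are hypothetical, and "folding" into the outer ring is shown as a plain xor; a sketch of the described data flow, not an actual implementation):

```python
class TwoRingPool:
    """Sketch of the two-ring design: FG bytes loop through a small inner
    ring, read blockingly as high-grade entropy; bytes about to be
    overwritten before being read are folded (xor) into a large outer
    ring, which serves non-blocking reads."""

    def __init__(self, inner_size=16 * 1024, outer_size=1024 * 1024):
        self.inner = bytearray(inner_size)
        self.outer = bytearray(outer_size)
        self.w = 0          # inner writing head
        self.r = 0          # inner reading head
        self.avail = 0      # unread bytes in inner ring
        self.ow = 0         # outer writing head
        self.orr = 0        # outer reading head

    def write_fg_byte(self, b):
        if self.avail == len(self.inner):
            # writing head threatens to overwrite the reading head:
            # fold the doomed byte into the outer ring instead of losing it
            self.outer[self.ow] ^= self.inner[self.r]
            self.ow = (self.ow + 1) % len(self.outer)
            self.r = (self.r + 1) % len(self.inner)
            self.avail -= 1
        self.inner[self.w] = b
        self.w = (self.w + 1) % len(self.inner)
        self.avail += 1

    def read_blocking(self):
        """High-quality read; caller blocks (here: error) if nothing unread."""
        if self.avail == 0:
            raise BlockingIOError("inner ring empty: wait for FG")
        b = self.inner[self.r]
        self.r = (self.r + 1) % len(self.inner)
        self.avail -= 1
        return b

    def read_nonblocking(self):
        """Outer-ring 'entropy': always returns, never depletes the inner ring."""
        b = self.outer[self.orr]
        self.orr = (self.orr + 1) % len(self.outer)
        return b
```

Note how the non-blocking reader can neither deplete the machine entropy (it never touches the inner ring) nor see straight FG bytes (only xor-accumulated leftovers), matching the two requirements stated in the log.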

snsabot: Logged on 2018-10-12 12:56:05 mircea_popescu: in the 1980s engineers / cstronicists' defense, it was not yet understood how important entropy is to individuality and human existence.

mircea_popescu: considering the keys we use are 4kb, it seems reasonable we should keep entropy pools of no less than 16kb ?

mircea_popescu: incidentally, the fucking notion of a byte-counted entropy pool is fucking ridiculous.

bvt: http://logs.nosuchlabs.com/log/trilema/2019-08-08#1926500 << i think i get your point; though tbh, from my reading of linux it's not clear that urandom uses a separate entropy pool; as i understood so far, urandom uses the same pool as random, just ignores all 'entropy' measures (i still did not quite load that part in head, so this is not final info).

asciilifeform: given that asciilifeform does not buffer fg ( exercise for reader: why not? ) these waits do not happen in parallel with the computation, and by all rights ought to expect that a live-fg test (not yet performed) can be expected to take longer by the above interval, vs. the bottled-entropy test pictured in yest. thrd.

asciilifeform: ftr almost all of that 1MB of bottled entropy, actually gets eaten in both runs (it is used not only to get candidates , recall, but to generate m-r witness for ea. shot of m-r )

a111: Logged on 2019-05-17 22:33 asciilifeform: http://btcbase.org/log/2019-05-17#1914441 << this is good q imho. the only reason i can think of for 'throw dice on flat table', is to avoid 'came to rest sharp edge up', which can introduce 'must either throw again, or pick between numbers by hand ' etc

asciilifeform: http://btcbase.org/log/2019-05-17#1914441 << this is good q imho. the only reason i can think of for 'throw dice on flat table', is to avoid 'came to rest sharp edge up', which can introduce 'must either throw again, or pick between numbers by hand ' etc ☝︎☟︎

stjohn_piano_2: well, when (roughly) can ~all the 70s, 80s stuff be expected to be dead, purely from entropy?

a111: Logged on 2019-03-28 15:21 mircea_popescu: i'm willing to bet "entropy is improved" 50% of the time.

mircea_popescu: asciilifeform no, the question of what % of the "entropy" leaked was entropic. obviously from chtulhu's pov your message's just as delicious entropy as any other.

asciilifeform: per shannon, ~all~ methods other than otp (i.e. where key is shorter than payload) 'leak entropy'. the q is just how much, and how to even quantify.

mircea_popescu: this is the fundamental cost here : EITHER have asymmetric keys, or ELSE leak entropy.

mircea_popescu: i think basically the point here is to summarize what was found. and that's specifically that a) there's no meaningful discussion of "better" or "worse" ciphers worth having when by "cipher" one understands "mixing in 0 entropy".

mircea_popescu: fg, makes entropy.

asciilifeform: ideal algo imho would carry at least 5 bit of entropy for erry bit of payload, and in such a way that all bits are 0/1 with exactly 0.5 prob.; and such that flipping one bit of ciphertext flips at least 1/2 of the output bits.
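The last criterion above (flipping one bit flips at least half the output bits) is the avalanche property, and it can be measured empirically. A sketch using SHA-256 as a stand-in mixer (a stand-in chosen only because it is in the stdlib, not an endorsement; function names are the editor's):

```python
import hashlib

def hamming(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def avalanche_fraction(msg: bytes) -> float:
    """Flip each input bit in turn; return the mean fraction of the
    256 SHA-256 output bits that flip. A good mixer scores ~0.5."""
    base = hashlib.sha256(msg).digest()
    total = 0
    nbits = len(msg) * 8
    for i in range(nbits):
        flipped = bytearray(msg)
        flipped[i // 8] ^= 1 << (i % 8)  # flip bit i of the input
        total += hamming(base, hashlib.sha256(bytes(flipped)).digest())
    return total / (nbits * 256)
```

Any candidate "ideal algo" in the sense above would have to score close to 0.5 on this test; scoring well on it is of course necessary, not sufficient.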

a111: Logged on 2018-10-29 05:07 asciilifeform: relatedly, asciilifeform tried to bake a proof that the lamehash keyinflater function of serpent is one-to-one ( i.e. actually carries 256bit of the key register's entropy into the 528 bytes of whiteolade ) and not only didnt , but realized that afaik no such proof exists for any 'troo' hash also ( incl keccak.. )

mircea_popescu: anyway, back to it : "blockcipher takes 10 bits of P and no more ; spits out 16 bits of E exactly" a) needs entropy and b) probably reduces to rsa-with-oaep.

mircea_popescu: http://btcbase.org/log/2018-10-31#1867988 << can be ; but no, server will sell entropy in this package3. ☝︎

mircea_popescu: i suppose that could be the backup alternative then : if we end up ditching serpent, we use a rsa packet to move ~1.4kb of entropy for initializing the mt, and then use mt generated pads for a cipher.
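The fallback described above (move ~1.4kb of entropy to seed an MT, then use MT-generated pads) can be sketched in Python, whose `random.Random` happens to be MT19937. This illustrates the data flow only; the function name is hypothetical and MT output is not cryptographically secure on its own:

```python
import random

def mt_pad_cipher(seed_entropy: bytes, payload: bytes) -> bytes:
    """Toy sketch of the fallback: seed a Mersenne Twister from
    transported entropy (in the log, moved inside an rsa packet),
    then xor the payload against MT-generated pad bytes.
    Since xor is symmetric, the same call decrypts."""
    mt = random.Random(int.from_bytes(seed_entropy, "big"))
    pad = bytes(mt.getrandbits(8) for _ in range(len(payload)))
    return bytes(p ^ k for p, k in zip(payload, pad))
```

Both sides derive the identical pad from the shared seed, so encryption and decryption are the same operation.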

mircea_popescu: the problem is irreducible, either you mix entropy in or you don't.

a111: Logged on 2018-10-29 19:39 asciilifeform: pretty handy proof , however, that the xor liquishit on the right hand side of those serpent eqs, doesn't conserve entropy !

asciilifeform: pretty handy proof , however, that the xor liquishit on the right hand side of those serpent eqs, doesn't conserve entropy ! ☟︎

a111: Logged on 2018-10-29 16:06 asciilifeform: nao, is it a controversial statement that xors with an item that's already been rolled in, can only ~subtract~ entropy, never add ?

a111: Logged on 2018-10-29 15:53 mircea_popescu: it is entropy* conserving, where entropy* is a special "entropy-colored-for-meaning", but this isn't useful.

asciilifeform: nao, is it a controversial statement that xors with an item that's already been rolled in, can only ~subtract~ entropy, never add ? ☟︎

mircea_popescu: it is entropy* conserving, where entropy* is a special "entropy-colored-for-meaning", but this isn't useful. ☟︎

mircea_popescu: consider the sets P {1,2,3,4} and E {1,2,3,4,5}. now, the function taking all numbers <4 to themselves and 4 to either 4 or 5 with 50-50 probability IS in fact reversible (because E5 and E4 are directly P4). is however not in fact entropy conserving.
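The example can be checked numerically: with P uniform, H(P) = 2 bits, while the 50-50 split of P4 across E4 and E5 gives H(E) = 2.25 bits, i.e. reversible yet not entropy conserving (the coin flip, made with probability 1/4, contributes the extra 0.25 bit):

```python
from math import log2

def shannon(dist):
    """Shannon entropy in bits of a probability distribution
    given as {outcome: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# P uniform on {1,2,3,4}
H_P = shannon({x: 0.25 for x in (1, 2, 3, 4)})
# E: 1,2,3 map to themselves; 4 goes to 4 or 5 with probability 1/2 each
H_E = shannon({1: 0.25, 2: 0.25, 3: 0.25, 4: 0.125, 5: 0.125})
# reversible (E4 and E5 both decode to P4), yet H_E = 2.25 != H_P = 2.0
```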

mircea_popescu: is however not in fact entropy conserving

asciilifeform: now we factor out the ... xor 16#9e3779b9# xor Unsigned_32(I), it's an injective operation (neither adds nor subtracts entropy) ;
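The injectivity claim above can be checked exhaustively at a shrunk word size: xoring a fixed-width word with any constant (and rotating it) is a bijection, so the step neither adds nor subtracts entropy. A toy analogue (0xB9 and 8-bit width stand in for 16#9e3779b9# and 32 bits; names are the editor's):

```python
def rotl(x, n, bits=8):
    """Left-rotate an 8-bit word (small enough for an exhaustive check)."""
    return ((x << n) | (x >> (bits - n))) & ((1 << bits) - 1)

def keystep(w, const=0xB9, i=3):
    """Shrunk analogue of the serpent key-schedule step discussed above:
    xor with a golden-ratio-style constant and the round index, then rotate."""
    return rotl(w ^ const ^ i, 3)

# exhaustive: 256 inputs hit 256 distinct outputs, i.e. a bijection
image = {keystep(w) for w in range(256)}
assert len(image) == 256
```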

mircea_popescu: asciilifeform this isn't much of an argument, let alone "proof". + and * also conserve entropy, yet y=x/2 - x/2 +4 does not.
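The counterexample is worth spelling out: each of +, -, / is individually entropy conserving (as a map with the other operand fixed), yet their composition here is constant, collapsing any input distribution to zero bits:

```python
def f(x):
    """mircea_popescu's counterexample: built only from individually
    'entropy-conserving' ops, yet constant as a composition."""
    return x / 2 - x / 2 + 4

# the image of 1024 distinct inputs collapses to a single point
image = {f(x) for x in range(1024)}
```

Hence per-operation conservation proves nothing about the whole pipeline; the serpent argument needs the composed map to be injective, not merely its parts.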

asciilifeform: ( in serpent inflator, the only ops are xor, rotate, and sboxation, all 3 conserve entropy )

asciilifeform: relatedly, asciilifeform tried to bake a proof that the lamehash keyinflater function of serpent is one-to-one ( i.e. actually carries 256bit of the key register's entropy into the 528 bytes of whiteolade ) and not only didnt , but realized that afaik no such proof exists for any 'troo' hash also ( incl keccak.. ) ☟︎

a111: Logged on 2018-10-23 14:19 asciilifeform: i'ma describe , for the l0gz : ideal cpu for crypto would be something quite like the schoolbook mips.v -- no cache, no branch prediction, no pipeline, no dram controller (run off sram strictly), a set of large regs for multiply-shift , and dedicated pipe to FG (i.e. have single-instruction that fills a register with entropy )

a111: Logged on 2016-12-24 01:46 asciilifeform: mircea_popescu: all schemes where the transform is of 'payload itself' and 0 entropy, suffer from immediate 'penguin problem', https://blog.filippo.io/content/images/2015/11/Tux_ecb.jpg .

asciilifeform: 1 of the wins from it would be that user could immediately verify that baud rate etc are set correctly, instead of relying on the convenient happenstance that a FG misconfigged serial line will produce low-entropy rubbish (with stuck bits)

asciilifeform: i'ma describe , for the l0gz : ideal cpu for crypto would be something quite like the schoolbook mips.v -- no cache, no branch prediction, no pipeline, no dram controller (run off sram strictly), a set of large regs for multiply-shift , and dedicated pipe to FG (i.e. have single-instruction that fills a register with entropy ) ☟︎

mircea_popescu: but yes, access to entropy is by now the one underpinning of being a citizen.

mircea_popescu: in the 1980s engineers / cstronicists' defense, it was not yet understood how important entropy is to individuality and human existence.

asciilifeform: mircea_popescu: recall also how the thing came to be ( was idjit hack, around the fact that 'sshd wants entropy at boot time, before rng init' or somesuch )

mircea_popescu: asciilifeform the reason i even called it ideological patch is because the pretense that shitropy-eaters have anything to do with entropy, or us, must be shot in the head.

a111: Logged on 2018-10-12 12:39 asciilifeform: http://btcbase.org/log/2018-10-12#1860778 << there are not so many legitimate uses for /dev/urandom. however the idea that it can be fully reproduced in userland without kernel knob is afaik a mistake -- the thing gives you real entropy if available, and elsewise prngolade; importantly, as a ~nonblocking~ operation. idea is that it ~always~ returns in constant time.

mircea_popescu: http://btcbase.org/log/2018-10-12#1860803 << any program which CAN use urand DOES NOT need nor should have real entropy. ☝︎

a111: Logged on 2018-10-12 15:01 ave1: btw, turns out I was wrong on; http://btcbase.org/log/2018-10-12#1860768. I can run the entropy source tests in parallel without problem (just takes n times longer, so scales as expected)

ave1: btw, turns out I was wrong on; http://btcbase.org/log/2018-10-12#1860768. I can run the entropy source tests in parallel without problem (just takes n times longer, so scales as expected) ☝︎☟︎

asciilifeform: reading a legit /dev/random ( or FG, or any other device ) is a ~blocking~ op, potentially returns in a year if you're entropy-poor , or even rich but 9000 processes want it

a111: Logged on 2018-10-12 09:00 mircea_popescu: there's absolutely no excuse for having "urandom" as a kernel signal. applications that both a) care about entropy debit over time and b) can get away with substituting shit for entropy should simply manage their entropy/shitropy interface in a dedicated thread. let it read from /dev/random, add however many bits of 11110000 they want whenever they want to and vomit the resulting cesspool as the app that spawned them demands.

asciilifeform: http://btcbase.org/log/2018-10-12#1860778 << there are not so many legitimate uses for /dev/urandom. however the idea that it can be fully reproduced in userland without kernel knob is afaik a mistake -- the thing gives you real entropy if available, and elsewise prngolade; importantly, as a ~nonblocking~ operation. idea is that it ~always~ returns in constant time. ☝︎☟︎

mircea_popescu: there's absolutely no excuse for having "urandom" as a kernel signal. applications that both a) care about entropy debit over time and b) can get away with substituting shit for entropy should simply manage their entropy/shitropy interface in a dedicated thread. let it read from /dev/random, add however many bits of 11110000 they want whenever they want to and vomit the resulting cesspool as the app that spawned them demands. ☟︎

mircea_popescu: asciilifeform this is an amusing symmetry, republican machine that halts if it can't touch entropy like imperial machine that halts if it can't phone nsa hq.

mircea_popescu: neither "add some entropy" nor "inline asm" strike me as bad solutions.

mircea_popescu: padding wouldn't cost in principle, except if crypto-produced; then entropy costs.

asciilifeform: the salt, goes in the cipherola payload, so gets switched out in erry round. ( given as you have a trng, this is not an exorbitant entropy expense )

asciilifeform: if somebody absolutely positively MUST buffer his FG, because, idk, he's generating a icbm launch key and wants to xorlemma 8 weeks of entropy into 4096b, oughta do it ~in his proggy~ imho.

asciilifeform: really oughta be improved, to add entropy from arse clenching also!

asciilifeform: i'd really rather not, tho, they are extremely labour-intensive to test, on acct of the very modest speed of entropy generation.

asciilifeform: mircea_popescu: prolly for the same reason as http://www.loper-os.org/bad-at-entropy/manmach.html

asciilifeform: illustration, so to speak, of the connection b/w 'physical' entropy and the rng one

mircea_popescu: trinque, that does at least half the job -- will get some actual entropy in there, even if it doesn't prevent the dilution with cvasi-random crap

zx2c4: which can take entropy from trngs bla bla

mircea_popescu: asciilifeform my thoughts exactly. the swarm of idiots use non-qntra "press", get reality winnings, non-nsa entropy, get etc.

BingoBoingo: The lulz are all in that first sentence: "A significant number of past and current cryptocurrency products contain a JavaScript class named SecureRandom(), containing both entropy collection and a PRNG."

ckang: and entropy of someone find it

mircea_popescu: admitting the merkle-damgard construction (what ripemd is built out of, see http://homes.esat.kuleuven.be/~bosselae/ripemd160.html ) does not have a backdoor, and that sha256 doesn't have a backdoor, you are looking at something like 256 bits of entropy involved.

ben_vulpes: called our logs "insane" and "good source of entropy" if you can imagine the cheek

deedbot: Entropy_ voiced for 30 minutes.

mircea_popescu: !!up Entropy_

asciilifeform: 'if there ain't any entropy, there wont be any fucking output, take it or leave'

a111: Logged on 2018-03-08 16:33 mircea_popescu: ave1 is your dancing around the entropy problem with files etc driven by the fact you don't have a fg, incidentally ?

mircea_popescu: evidently "entropy" requires, conceptually, a socket. a file is the opposite of this, and the opposition is rendered by the word "specifically". a socket and a file differ in that files have lengths, sockets have widths.

mircea_popescu: ave1 is your dancing around the entropy problem with files etc driven by the fact you don't have a fg, incidentally ? ☟︎

a111: Logged on 2018-03-08 14:20 ave1: I want to use a single "entropy" file.

diana_coman: for completeness: there is in fact a performance penalty for opening/closing the entropy source repeatedly, so from this point of view yes, you'd want it open and reused; that being said, it's not a massive penalty and atm I can live with it

diana_coman: to answer the question directly: you can certainly do it with reuse; the reason I avoided it there is because otherwise the caller needs to handle/be aware of the entropy source per se; which did not really belong in the caller

diana_coman: ave1 there is open_entropy_source which simply opens it and returns the handle; then you can use it for as long as you want, with get_random_octets_from (rather than get_random_octets)

ave1: diana_coman, I'm reading through the eucrypt / RSA code and see that the 'get_random_prime' function will open and close the random number generator itself. I would like to open the entropy source once and reuse it, but maybe there is good reason to do it like this and I should not attempt to do it differently?

mircea_popescu: ideally you want to kill the "csprng" altogether and simply feed the entropy pool.

a111: Logged on 2018-02-09 20:16 pete_dushenski: ben_vulpes: fed fg to /dev/random then crossed my fingers and closed my eyes in hoping that gpg sourced entropy from there

pete_dushenski: ben_vulpes: fed fg to /dev/random then crossed my fingers and closed my eyes in hoping that gpg sourced entropy from there ☟︎

asciilifeform: found) only consumes roughly as many random bits as the size of the output primes, but we can show that its output distribution, even if it can be shown to have high entropy if the prime r-tuple conjecture holds, is also provably quite far from uniform... It is likely that most algorithms that proceed deterministically beyond an initial random choice, including those of Joye, Paillier and Vaudenay... ...or Maurer... exhibit similar d