500+ entries in 0.257s
ossabot: Logged on 2020-01-12 12:47:24 dorion: mircea_popescu in light of never moving off linux 2.x series, it seems to me bvt ought to port his rng work from 4.9.95 it currently sits on to 2.x.
dorion: mircea_popescu in light of never moving off linux 2.x series, it seems to me bvt ought to port his rng work from 4.9.95 it currently sits on to 2.x.
mircea_popescu: there's an aspect under which it's currently agnostic, namely that articles come in an entropy-locked timeflow, whereas it expects to see them as random lists.
bvt: but how much cpu would we want to dedicate to good hashing? if e.g. keccak is in place, what could happen (but only experiments will show) is that the bottleneck would be in the CPU -> if the system runs low on entropy somehow and self-hashes (which goes through the HG now, should it go through HF? (would also make sense from the Size/2 decision rule)), we'd have an easy DoS vector against the
mircea_popescu: not like current kernel starts up with a 4kb entropy pool
mircea_popescu: srsly now, IF there is such a thing as a program that needs crypto-grade entropy at boot time, it's a piece of shit.
bvt: the O ring needs to be initialized somehow, zero-filling it may be bad, and keeping the existing infrastructure for just boot-time entropy collection is not an option; should i look for something simple that would work for initialization?
snsabot: Logged on 2019-08-22 13:25:30 mircea_popescu: imo correct design is 16kb to cpu-cache-sized inner ring buffer, wherein fg material is simply written into a loop, plain ; and from where high quality entropy is read blockingly. whenever the writing head threatens to overwrite the reading head, the overwritten bits are instead fed into outer ring
a111: Logged on 2019-08-22 16:43 mircea_popescu: imo correct design is 16kb to cpu-cache-sized inner ring buffer, wherein fg material is simply written into a loop, plain ; and from where high quality entropy is read blockingly. whenever the writing head threatens to overwrite the reading head, the overwritten bits are instead fed into outer ring
mircea_popescu: such that it can't either deplete the machine entropy by reading mb/s nor can it figure out the internals by reading straight fg bytes
mircea_popescu: no, it's not the same length, for one thing. 2nd buffer should be mb or larger ; and it gives the effect that there's an always-full entropy buffer
mircea_popescu: imo correct design is 16kb to cpu-cache-sized inner ring buffer, wherein fg material is simply written into a loop, plain ; and from where high quality entropy is read blockingly. whenever the writing head threatens to overwrite the reading head, the overwritten bits are instead fed into outer ring
snsabot: Logged on 2018-10-12 12:56:05 mircea_popescu: in the 1980s engineers / cstronicists' defense, it was not yet understood how important entropy is to individuality and human existence.
mircea_popescu: considering the keys we use are 4kb, it seems reasonable we should keep entropy pools of no less than 16kb ?
mircea_popescu: incidentally, the fucking notion of a byte-counted entropy pool is fucking ridiculous.
bvt: http://logs.nosuchlabs.com/log/trilema/2019-08-08#1926500 << i think i get your point; though tbh, from my reading of linux it's not clear that urandom uses a separate entropy pool, as i understood so far urandom uses the same pool as random, just ignores all 'entropy' measures (i still did not quite load that part in head, so this is not final info).
stjohn_piano_2: well, when (roughly) can ~all the 70s, 80s stuff be expected to be dead, purely from entropy?
a111: Logged on 2019-03-28 15:21 mircea_popescu: i'm willing to bet "entropy is improved" 50% of the time.
mircea_popescu: asciilifeform no, the question of what % of the "entropy" leaked was entropic. obviously from cthulhu's pov your message's just as delicious entropy as any other.
mircea_popescu: this is the fundamental cost here : EITHER have asymmetric keys, or ELSE leak entropy.
mircea_popescu: i think basically the point here is to summarize what was found. and that's specifically that a) there's no meaningful discussion of "better" or "worse" ciphers worth having when by "cipher" one understands "mixing in 0 entropy".
a111: Logged on 2018-10-29 05:07 asciilifeform: relatedly, asciilifeform tried to bake a proof that the lamehash keyinflater function of serpent is one-to-one ( i.e. actually carries 256bit of the key register's entropy into the 528 bytes of whiteolade ) and not only didnt , but realized that afaik no such proof exists for any 'troo' hash also ( incl keccak.. )
mircea_popescu: anyway, back to it : "blockcipher takes 10 bits of P and no more ; spits out 16 bits of E exactly" a) needs entropy and b) probably reduces to rsa-with-oaep.
mircea_popescu: i suppose that could be the backup alternative then : if we end up ditching serpent, we use a rsa packet to move ~1.4kb of entropy for initializing the mt, and then use mt generated pads for a cipher.
mircea_popescu: the problem is irreducible, either you mix entropy in or you don't.
a111: Logged on 2018-10-29 19:39 asciilifeform: pretty handy proof , however, that the xor liquishit on the right hand side of those serpent eqs, doesn't conserve entropy !
a111: Logged on 2018-10-29 16:06 asciilifeform: nao, is it a controversial statement that xors with an item that's already been rolled in, can only ~subtract~ entropy, never add ?
a111: Logged on 2018-10-29 15:53 mircea_popescu: it is entropy* conserving, where entropy* is a special "entropy-colored-for-meaning", but this isn't useful.
mircea_popescu: it is entropy* conserving, where entropy* is a special "entropy-colored-for-meaning", but this isn't useful.
mircea_popescu: consider the sets P {1,2,3,4} and E {1,2,3,4,5}. now, the function taking all numbers <4 to themselves and 4 to either 4 or 5 with 50-50 probability IS in fact reversible (because E5 and E4 are directly P4). is however not in fact entropy conserving.
mircea_popescu: asciilifeform this isn't much of an argument, let alone "proof". + and * also conserve entropy, yet y=x/2 - x/2 +4 does not.
a111: Logged on 2018-10-23 14:19 asciilifeform: i'ma describe , for the l0gz : ideal cpu for crypto would be something quite like the schoolbook mips.v -- no cache, no branch prediction, no pipeline, no dram controller (run off sram strictly), a set of large regs for multiply-shift , and dedicated pipe to FG (i.e. have single-instruction that fills a register with entropy )
mircea_popescu: but yes, access to entropy is by now the one underpinning of being a citizen.
mircea_popescu: in the 1980s engineers / cstronicists' defense, it was not yet understood how important entropy is to individuality and human existence.
mircea_popescu: asciilifeform the reason i even called it ideological patch is because the pretense that shitropy-eaters have anything to do with entropy, or us, must be shot in the head.
a111: Logged on 2018-10-12 12:39 asciilifeform: http://btcbase.org/log/2018-10-12#1860778 << there are not so many legitimate uses for /dev/urandom. however the idea that it can be fully reproduced in userland without kernel knob is afaik a mistake -- the thing gives you real entropy if available, and elsewise prngolade; importantly, as a ~nonblocking~ operation. idea is that it ~always~ returns in constant time.
a111: Logged on 2018-10-12 09:00 mircea_popescu: there's absolutely no excuse for having "urandom" as a kernel signal. applications that both a) care about entropy debit over time and b) can get away with substituting shit for entropy should simply manage their entropy/shitropy interface in a dedicated thread. let it read from /dev/random, add however many bits of 11110000 they want whenever they want to and vomit the resulting cesspool as the app that spawned them demands.
mircea_popescu: there's absolutely no excuse for having "urandom" as a kernel signal. applications that both a) care about entropy debit over time and b) can get away with substituting shit for entropy should simply manage their entropy/shitropy interface in a dedicated thread. let it read from /dev/random, add however many bits of 11110000 they want whenever they want to and vomit the resulting cesspool as the app that spawned them demands.
mircea_popescu: asciilifeform this is an amusing symmetry, republican machine that halts if it can't touch entropy like imperial machine that halts if it can't phone nsa hq.
mircea_popescu: neither "add some entropy" nor "inline asm" strike me as bad solutions.
mircea_popescu: padding wouldn't cost in principle, except if crypto produced then entropy costs.
mircea_popescu: trinque, that does at least half the job -- will get some actual entropy in there, even if it doesn't prevent the dilution with quasi-random crap
zx2c4: which can take entropy from trngs bla bla
mircea_popescu: asciilifeform my thoughts exactly. the swarm of idiots use non-qntra "press", get reality winnings, non-nsa entropy, get etc.
BingoBoingo: The lulz are all in that first sentence: "A significant number of past and current cryptocurrency products contain a JavaScript class named SecureRandom(), containing both entropy collection and a PRNG."
ckang: and entropy of someone find it
ben_vulpes: called our logs "insane" and "good source of entropy" if you can imagine the cheek
deedbot: Entropy_ voiced for 30 minutes.
a111: Logged on 2018-03-08 16:33 mircea_popescu: ave1 is your dancing around the entropy problem with files etc driven by the fact you don't have a fg, incidentally ?
mircea_popescu: evidently "entropy" requires, conceptually, a socket. a file is the opposite of this, and the opposition is rendered by the word "specifically". a socket and a file differ in that files have lengths, sockets have widths.
mircea_popescu: ave1 is your dancing around the entropy problem with files etc driven by the fact you don't have a fg, incidentally ?
a111: Logged on 2018-03-08 14:20 ave1: I want to use a single "entropy" file.
diana_coman: for completeness: there is in fact a performance penalty for opening/closing the entropy source repeatedly, so from this point of view yes, you'd want it open and reused; that being said, it's not a massive penalty and atm I can live with it
diana_coman: to answer the question directly: you can certainly do it with reuse; the reason I avoided it there is because otherwise the caller needs to handle/be aware of the entropy source per se; which did not really belong in the caller
ave1: I want to use a single "entropy" file.
diana_coman: ave1 there is open_entropy_source which simply opens it and returns the handle; then you can use it for as long as you want, with get_random_octets_from (rather than get_random_octets)