jurov: asciilifeform: so is there any database in existence that allows it? without occasional garbage?
asciilifeform: it is precisely chewing gum, 1000 tonnes of it
a111: Logged on 2016-12-30 17:29 mircea_popescu: exactly how the statements {"do not allow anyone else to write here until i say" ; "let anyone read anything at any time"} amount to an "unsolved problem in cs" ? and wtf cs is this we speak of, sounds more like chewinggum-science.
asciilifeform: ditch it, and ditch randos and their shitblocks, and 0--current sync takes 6 or so hrs.
phf: davout: you don't get consistent, uninterrupted, sequential chain of blocks. the actual distribution pattern is a mess, that "orphanage" was bandaiding
mircea_popescu: otherwise, the bottleneck is the shitsoup outside.
ben_vulpes: davout: my node is for example, busy sometimes serving blocks to other people
mircea_popescu: davout block verification is the bottleneck in the dump-eat block process.
mircea_popescu: filtering a chain out of the soup outside like BingoBoingo is not without merit.
davout: i'm still curious what would make this kind of setup where i script "prb dumpblock | hex2bin | trb eatblock" much faster than syncing from network if the bottleneck is indeed the block verification?
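The "prb dumpblock | hex2bin | trb eatblock" pipeline davout describes could be scripted roughly as below. This is a sketch under assumptions: `prb-cli` / `trb-cli` are illustrative binary names, `getblockhash`/`getblock` are the standard prb RPCs, and the exact delivery mechanism of trb's `eatblock` patch depends on the local patch set.

```python
import binascii
import subprocess

def hex_to_bin(hexstr: str) -> bytes:
    """The hex2bin step of the pipeline: decode a hex block dump."""
    return binascii.unhexlify(hexstr.strip())

def feed_blocks(start: int, end: int) -> None:
    """Hypothetical driver: pull blocks out of a synced prb node and
    push them into trb, which still verifies each block on ingestion.
    The CLI names and eatblock invocation are illustrative, not verified."""
    for height in range(start, end + 1):
        blockhash = subprocess.check_output(
            ["prb-cli", "getblockhash", str(height)], text=True).strip()
        hexblock = subprocess.check_output(
            ["prb-cli", "getblock", blockhash, "0"], text=True)
        raw = hex_to_bin(hexblock)
        subprocess.run(["trb-cli", "eatblock"], input=raw, check=True)
```

Note that this only removes the network-fetch latency; trb still verifies every block, which is why the thread keeps asking whether verification, not fetching, is the real bottleneck.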
asciilifeform: which, in turn, ate mircea_popescu 's vintage block set
asciilifeform: all of my nodes, fwiw, descend from the eatblock experiment
mircea_popescu: davout the suspicion is that relevant data may be missing from the thing, but we really dunno.
mircea_popescu: well that's the next best thing.
davout: asciilifeform: it does have a command that shits hex at me given a block hash
mircea_popescu: davout no he's saying it's not in the sql spec! which, considering how specwork goes, he might be even right about some version.
a111: Logged on 2016-12-30 17:40 davout: maybe i'm too lazy to script this and can live with waiting a month to sync!
ben_vulpes: http://btcbase.org/log/2016-12-30#1593697 << scripting is too much work, just manually dump every block and then manually load it into trb once the previous eat completes. nice meditative activity ☝︎
jurov: phf no mircea imagines he can order c machine "read me this without any locking, but it must be in some class or!"
davout: phf: you're saying postgresql doesn't have a "read uncommitted" transaction isolation level like innodb?
mircea_popescu: phf you'll have to link me to this.
phf: well, "you either expect" is because ~sql~ as a db language is specified to have acid. there are databases that support dirty reads/writes they are just not "sql"
mircea_popescu: YOU JUST SAID THIS!
mircea_popescu: oh i see. it's the c machine. ok then.
jurov: mircea_popescu: this is the problem with c machine, that everything is pointer, and without preemptive locking, you can't distinguish whether your pointer points to merely stale data or to garbage
mircea_popescu: and i'm supposed to care about the fact that they don't know how to write a db that doesn't spit out passwd ?
mircea_popescu: and THIS is what i mean re "problems in the field". whoopee, idiots who can't code still want to be "at the forefront of computing" so they made a modern db that doesn't work.
davout: jurov: i think it's more like nobody gives a shit if static wwwtron is out of sync with DB
mircea_popescu: jurov no ; but i am fine with wwwtron occasionally reading a field that has meanwhile been updated, and giving old data, of unspecified age but less than x time. ☟︎
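One lock-free shape of the consistency model mircea_popescu describes here (serialized writes, unlocked reads that may be stale but never garbage) can be sketched in a few lines of Python. This is a toy illustration of the idea, not any actual database's internals:

```python
import threading

class SnapshotCell:
    """Writes are serialized by a lock; reads take no lock at all.
    A reader may see a slightly stale snapshot, but because the
    reference is swapped atomically it never sees a torn value."""
    def __init__(self, value):
        self._snapshot = value          # replaced wholesale, never mutated
        self._write_lock = threading.Lock()

    def write(self, value):
        with self._write_lock:          # "do not allow anyone else to write here"
            self._snapshot = value      # atomic reference swap

    def read(self):
        return self._snapshot           # "let anyone read anything at any time"
```

The point of the sketch: "writes consistent, a class of reads allowed to be stale" is a coherent consistency model, exactly the LEVELS argument made a few lines down.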
mircea_popescu: and here's exactly the problem of superficiality : "you either expect consistency or there's no point in discussing". there's LEVELS. maybe i expect all my writes to be consistent and don't care by A CLASS of reads being consistent. this is a consistency model that's consistent.
mircea_popescu: meanwhile inconsistencies within the actual db are a different matter.
mircea_popescu: you are confusing two consistencies. the problem here discussed is dirty read by www ; its consistency with the actual db is not seriously contemplated.
jurov: mircea_popescu: in this case asciilifeform categorically claimed he decided to have consistency, or are you deciding otherwise?
mircea_popescu: one cuts and the other picks. if you cut db field into "acid" i pick you out of existence.
phf: but that's the slowest option, so you have strategies for increase of speed that involve strategic placement of locks
mircea_popescu: phf here's the problem : moder(field) consists of take field, redefine it in a practically useless but superficially persuasive way, then bad_words() to whoever dares ask if your "field" solves any important questions in the field. because of course it doesn't, MIT is the premier institution in science(*) and technology(*) in the werld. ☟︎
davout: (not saying that locking should be mandatory ofc)
phf: you have your basic database requirements: atomicity, consistency, isolation and durability. these are axiomatic, you either expect them to hold or there's no point in further elaboration. at least SQL from its conception guaranteed the four requirements. "dirty read" violates consistency. your table might be halfway through an update, you do a "dirty read", which is necessarily faster than the update, and you have half the results with
mircea_popescu: whether i would or i wouldn't IS NOT THE DB'S DECISION, jurov .
davout: and re locking, how's a RDBMS to provide ACID guarantees without locking?
jurov: mircea_popescu: you would trade speed for occasionally getting garbage when you call read()? ☟︎
davout: but then, how can it vastly improve sync time to feed blocks from same machine instead of letting trb suck them from the network?
mircea_popescu: say it again, mebbe it sticks this time.
phf: that's not even close to what i'm saying though.
mircea_popescu: and i'm supposed to be so cowed by the risk of being called mysql-something that i'm not going to say anything or i dunno
mircea_popescu: davout this is entirely my argument : they've moved the problem and call this "modern db"
phf: mircea_popescu: that's not quite what i'm saying.
mircea_popescu: because, again, a semaphore exists because the user does not know what the user is doing.
davout: mircea_popescu: seems to me like it would reduce to 'moving the problem'
mircea_popescu: phf i am saying that if you imagine the user can be relied on to "know where the locks are and read around them" then you are therefore necessarily saying "locks are useless - user can always know what he wanted locked and simply not write there hurr"
davout: actually fetching the block data?
davout: what's the syncing bottleneck on trb's side?
davout: looks pretty trivial tbh, will probably end up doing it
phf: mircea_popescu: i'm not quite grokking what the bad write is. are you saying that instead of intermingling writes and reads, you should batch them, and not write while you're reading?
davout: maybe i'm too lazy to script this and can live with waiting a month to sync! ☟︎
davout: so yeah, prb does have a way to dump a block to hex from a block hash, and a way to get a block hash from a height, looks like this could work
davout: hahah god, check this out http://wotpaste.cascadianhacker.com/pastes/8CwCf/?raw=true
davout: ah yeah, i'm currently syncing off ben_vulpes, i was wondering if dumping blocks from prb and then eating them with trb would work
jurov: i was talking about network syncing
jurov: davout: the on-disk format (not blocks, but index) changed much earlier, iirc at 0.9 or so
mircea_popescu: phf think for a second : the whole FUCKING POINT of a semaphore, of any kind, is that user can't know what the other item involved is doing. if they could know, they wouldn't "avoid the locks", they'd avoid the bad write outright.
davout: phf: iirc mysql's innodb lets you choose your isolation level per transaction
phf: mysql solution is to creatively relax acid and hope things will "just work", which is the flip side of "mysql crashes all the time"
a111: Logged on 2016-12-30 16:14 asciilifeform: the fastest sync method, supposing one has access to a synced node, but also supposing that it won't do to simply copy the blocks (and it won't, you want to verify) is an eater-shitter system
davout: http://btcbase.org/log/2016-12-30#1593472 <<< trb -> trb only possibru, or could prb -> trb also work? ☝︎
mircea_popescu: exactly how the statements {"do not allow anyone else to write here until i say" ; "let anyone read anything at any time"} amount to an "unsolved problem in cs" ? and wtf cs is this we speak of, sounds more like chewinggum-science. ☟︎
phf: mircea_popescu: yes, ~having to deal with locks~ happens past the limit of db designer's competence
mircea_popescu: nevermind "mysql world" and "security" claptrap. the point of fact is you want me to cut off my hand so my helmet will fit.
mircea_popescu: does either of you see how this is the db writer outsourcing his incompetence on the user ?
phf: mircea_popescu: sop outside of mysql world. dirty read is considered a liability, so whole point of db systems design is to ensure that you don't hit locks when you shouldn't.
jurov: So, you have to live with locks and know them
jurov: Fully lockless dirty read is likely a security hazard (due to race conditions, you may end up reading memory you ought not to)
mircea_popescu: this makes sense to someone ?
phf: jurov: well, it's not clear where "disregard all locks" comes from in the original request. if the actual operations are as asciilifeform describes, i.e. sporadic inserts, and sporadic selects, then there will be no locks. my point is that there's no "disregard all locks" in postgresql, you solve it by knowing what lock you're hitting, and then designing your query to sidestep the lock
mircea_popescu: it really should be up to operator wtf, if i want to read dirty let me read dirty what sort of decision is this for designer to make. ☟︎
jurov: this is decided by transaction isolation level trinque posted. by default, table gets locked only if you explicitly "select for update"
phf: typically you handle it by not making your query lock the entire table, using a where clause of some sort. like if you're inserting things in batches, you can use a batch counter, and you query against max last known batch counter or less (or a variation of)
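phf's batch-counter pattern can be shown concretely: readers query only batches known to be fully written, sidestepping the batch an insert may currently be filling. A toy sqlite sketch (table and column names are invented for illustration):

```python
import sqlite3

# Toy schema: fingerprints inserted in numbered batches.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE fp (val TEXT, batch INTEGER)")
db.executemany("INSERT INTO fp VALUES (?, ?)",
               [("aa", 1), ("bb", 1), ("cc", 2)])  # batch 2 still being filled
db.commit()

def read_complete_batches(conn):
    """Read everything strictly below the newest (possibly half-written)
    batch -- the where clause is what sidesteps the writer's lock."""
    cur = conn.execute(
        "SELECT val FROM fp WHERE batch < (SELECT MAX(batch) FROM fp)")
    return [row[0] for row in cur]
```

The reader never touches rows from the in-flight batch, so it needs no dirty-read mode at all; staleness is bounded by one batch.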
asciilifeform: trinque: that looks potentially useful, i'ma look at it in detail when i come back from meat.
asciilifeform: (doesn't prove that it is absent)
asciilifeform: yeah when i put on my shit diving suit and went down into the docs, i found none.
phf: pretty sure not on postgresql, they are strict about their acid
asciilifeform: if someone knows , from memory, the relevant knob: please write in.
asciilifeform: for all of the cruel things i have said about postgres : it crashed 0 times.
mircea_popescu: you don't in general want the frontend to be able to expire your cache, let the backend do it whenever it feels like it.
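The cache discipline described here (a per-URL snapshot whose expiry only the backend's clock decides, never the requester) can be sketched minimally. The half-hour TTL matches what asciilifeform reports; class and parameter names are illustrative:

```python
import time

TTL_SECONDS = 30 * 60   # half hr per url, per the log; illustrative value

class SnapshotCache:
    """Serves possibly-stale page snapshots. Frontend requests can never
    force a refresh; only the backend's age check expires an entry."""
    def __init__(self, render, ttl=TTL_SECONDS, clock=time.monotonic):
        self._render = render          # backend function building the page
        self._ttl = ttl
        self._clock = clock
        self._store = {}               # url -> (built_at, page)

    def get(self, url):
        now = self._clock()
        hit = self._store.get(url)
        if hit is not None and now - hit[0] < self._ttl:
            return hit[1]              # stale but bounded; tough for viewer
        page = self._render(url)       # backend refresh "whenever it feels like it"
        self._store[url] = (now, page)
        return page
```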
asciilifeform: mysql is a shitsandwich, and i will not touch it (it fucking CRASHES)
asciilifeform: mircea_popescu: it is currently a cached image, i implemented it. the cached snapshots however last for a limited time (iirc i have it set to half hr per url)
asciilifeform: phf, mircea_popescu , et al : one thing that would immediately make a very palpable difference in speed is if there were a permanent way to order postgres to perform all reads immediately, disregarding all locks.
mircea_popescu: asciilifeform as to "how to make www respond", you use the method we were discussing last time, whereby www is a cached image and if out of date tough for viewer ; as to nursery "do we have this ? how about this?" you really want the db to do that for you, it's ~the only thing it's good for.
asciilifeform: if there is some other way of doing it, i'm all ears
asciilifeform: mircea_popescu: point of 'nursery' was to do the 'do we have this fp? how about this? ...' a few thou. at a time, is all.
asciilifeform: (and yes, it is the obviously correct way to process thous. of keyz, no question)
asciilifeform: how to make the www piece respond at all while this runs ?
mircea_popescu: so : if loading the whole batches of keys through the user-wwwform process is what 99% of the machine time goes to, then yes, put the batches into a single, sorted query, set workmem to 256mb or 2gb or w/e it is you actually need to cover your query (yes this can be calculated, but can also be guessed from a few tries) and then run bernstein after every such query, on the db not on "nursery" (which yes, it's a ter
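The "run bernstein" step refers to Bernstein-style batch GCD over the collected RSA moduli. A naive O(n²) pairwise stand-in is shown below for illustration; the actual algorithm achieves quasi-linear time via product and remainder trees, which this sketch does not attempt:

```python
from math import gcd

def shared_factor_hits(moduli):
    """Naive pairwise stand-in for Bernstein's batch-GCD: report every
    modulus sharing a nontrivial factor with another modulus in the
    batch. Any hit means both keys are factorable, hence broken."""
    hits = {}
    for i, a in enumerate(moduli):
        for b in moduli[i + 1:]:
            g = gcd(a, b)
            if g > 1:                  # common factor found
                hits.setdefault(a, set()).add(g)
                hits.setdefault(b, set()).add(g)
    return hits
```

Running this (or the tree version) once per sorted batch query, inside the db host rather than the "nursery", is the flow being proposed.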
deedbot: http://phuctor.nosuchlabs.com/gpgkey/9AC623C503B5F6FF091E7B5819FAD4EE293D03B779770C1959FD3C159D6653FB << Recent Phuctorings. - Phuctored: 1036...2769 divides RSA Moduli belonging to '77.37.28.20 (ssh-rsa key from 77.37.28.20 (13-14 June 2016 extraction) for Phuctor import. Ask asciilifeform or framedragger on Freenode, or email fd at mkj dot lt) <ssh...lt>; ' (Unknown DE HE)
deedbot: http://phuctor.nosuchlabs.com/gpgkey/2398E0817D454688D06524E1B99CCE125A5E4D5E4DB5FBEFBE1BBE65BDA99AB4 << Recent Phuctorings. - Phuctored: 1730...1787 divides RSA Moduli belonging to '150.187.4.208 (ssh-rsa key from 150.187.4.208 (13-14 June 2016 extraction) for Phuctor import. Ask asciilifeform or framedragger on Freenode, or email fd at mkj dot lt) <ssh...lt>; ' (Unknown VE A)
asciilifeform: mircea_popescu might be social-integrated-genius but often recommends algo that adds up to escherian skyscraper. and so results in headache thread.
mircea_popescu: if this is the path you must walk to go from solipsist-alf to socially-integrated-alf i can see it, but hurry it up already it's irritating.
mircea_popescu: you are getting to where it is in principle not worth anyone's time to talk with you, because your response is random nonsequitur.
a111: Logged on 2016-12-30 16:48 asciilifeform: well 1 db, 2 sets of key/fp/factor tables.
mircea_popescu: http://btcbase.org/log/2016-12-30#1593602 << no. this is nonsense, and not what was at any point either suggested or discussed. ☝︎