bvt: diana_coman: i got it; here is a preview of how it will look now: http://paste.deedbot.org/?id=iEDJ
bvt: diana_coman: http://bvt-trace.net/vpatches/vtools_ascii_fix.vpatch http://bvt-trace.net/vpatches/vtools_ascii_fix.vpatch.bvt.sig -- preliminary version, as "flow" command behavior is not useful with such vtree
bvt: diana_coman: fixed link; ty for your test set. i have the fix (totally my bad), which i can upload today in a few hours (as a vpatch) if you still have a timeslot dedicated to v.sh tomorrow; or, if you prefer it with a writeup, i will publish it by thursday.
bvt: good, so i guess i can release a vpatch this weekend
bvt: btw, "vtree" command name is also subject to discussion, and it still shows the "leafs" with '(*)' mark.
bvt: i extended the examples to also show vpatches selected for presses (http://paste.deedbot.org/?id=8Hrq): would this make "leafs" not needed in your view?
bvt: diana_coman: i have a question about leafs command: can you explain how you use it? i gave it some thought, and honestly i fail to see how it is useful: after adding the manifest which linearizes the vpatches, "leafs" reports only one leaf, without showing the split vtree branches before it.
bvt: http://logs.ossasepia.com/log/trilema/2020-03-03#1958789 << http://paste.deedbot.org/?id=0ooc
bvt: diana_coman: http://bvt-trace.net/2020/02/vsh-parts-25-and-3-one-binary-ada-solver-and-ada-vfilter-implementation/#comment-153
bvt: mp_en_viaje: http://ossasepia.com/2020/02/29/a-basic-requirement-for-the-literate-introducing-of-new-tools/#comment-7669
bvt: ty. if you gzip the work directory and publish it somewhere i'll look into it as well. i also took a stab at indented flow output (http://bvt-trace.net/2020/02/vsh-parts-25-and-3-one-binary-ada-solver-and-ada-vfilter-implementation/#comment-150)
bvt: anyone with debug.log snapshots of a wedge: did the period before the wedge involve a large number of continuous block requests?
bvt: well, the only thing i feel bad about in this situation is that for some time i was getting wedged in ~30 min, did not check the logs, assumed DB corruption, and rolled back to a month-old chain snapshot without keeping the current one
bvt: http://bvt-trace.net/2020/02/a-tiny-and-incomplete-trb-wedgetrace/#comment-139 - did some more debugging today, but cannot get a wedge when i do need it
bvt: ty, i will upload it in ~30 min
bvt: BingoBoingo: ok if i upload the data to any server? it's 3GB compressed.
bvt: mod6 or anyone else interested in this wedged bitcoind condition: do you want to get the coredump and the binary?
bvt: mod6: ty, i will try to figure out what is going on further
bvt: diana_coman: answered your comment yesterday, uploaded the regrind of p.1 and p.2 yesterday as well.
bvt: diana_coman: ty for spotting this. i will regrind vpatches p.1 and p.2; i wanted to make the vpatch p.1 name the same in the manifest and the file system, but did the wrong thing there, just editing the line from the previous vpatch in vpatch p.2.
bvt: mircea_popescu: since this point was raised in #ossasepia, a ping: i did provide the answer (as best as i could) to http://bvt-trace.net/2020/01/re-pbrt/#comment-110
bvt: hanbot_abroad: http://trilema.com/2018/the-lesbian-in-winter/ ?
bvt: dorion: the new v impl is written in a posix shell + awk + ada; the core algos in ada are perhaps too simplistic, but they work nicely so far; the eta is end of next week
bvt: dorion: typically i just go to the gym; after vacations i added swimming to the mix (used to do a lot of that in school), and i very much like the 'noble tiredness' feeling it gives.
bvt: kernel
bvt: dorion: slowly getting back into shape after vacations, so far some workslots got sacrificed for more sports. other than that, on the vacation i started to write a v implementation that would not have the performance issue; while it's 90% done, need to invest some more time to finish it. after that, will port the fg-rng to 2.6.32
bvt: hello. i have also published and replied to all comments
bvt: dorion: in this case, picking 2.6 will not be a problem, of course
bvt: porting the rng to a 2.6 kernel should indeed not be too hard; the only thing that may need adaptation is the kfifo api (iirc its api changed at some point, which may break the code)
bvt: mircea_popescu: moving to a 2.6 kernel will be an interesting experience wrt newer hardware. 2.6.32 was one of the better-patched ones, could try to use that. otoh, the machine i am writing from requires 2.6.38 minimum for full hardware support.
bvt: in this case, i can restate it as 'depends on how current fragments will get composed into one tree'
bvt: at the individual v-trees, sure, but on the yggdrasil level?
bvt: mircea_popescu: re second q, imho cleaning up the components will be driven by how the OS gets [re]structured around V, too early to say right now
bvt: mircea_popescu: ty; fixed.
bvt: mp_en_viaje: http://trilema.com/2014/peri-metaphyseos-in-english-this-time/ http://trilema.com/2014/the-two-all-essences/ ?
bvt: ty
bvt: dorion_road: comment published and answered; i can do a test run of gales after returning (20 dec)
bvt: hello, i'll be traveling 14-20 dec and 5-10 jan; will be in low availability mode
bvt: re efi, i only had one machine with it; used the in-kernel efi stub and efibootmgr (a configuration tool, not a loader), did not use grub there
bvt: dorion_road: thursday is the new deadline; the reason for the bug was nothing cool; it was a hole in my vpatch generation process for kernel which i will have to review and change.
bvt: dorion_road: i have discovered some WTF??? sort of bug in the kernel that manifests only outside of the VM. i will continue debugging it tomorrow, but this means that the writeup will appear only after it is fixed; sorry for this delay.
bvt: mircea_popescu: even bigger problem for kernel http://bvt-trace.net/2019/10/fg-fed-linux-rng/?b=Pressing&e=though#select
bvt: dorion_road: the kernel rng vpatch will be finished on this weekend (i have all the components in the benchmarking blogpost, just need to clean things up).
bvt: mircea_popescu: published & answered
bvt: spyked: ftr, this is just a brain dump, i'm trying to evolve my own understanding of the problem
bvt: BingoBoingo: published
bvt: re tmsr os - i am curious what work plan trinque will come up with, esp wrt static linking.
bvt: ty, i guess after the article it will become possible to decide what to use for each of the hashes; after that one more patch - user-settable key for hashing, and rng work will be done
bvt: http://logs.nosuchlabs.com/log/trilema/2019-11-12#1950799 << yes, sure. the article with measurements will definitely come tomorrow -- most of it is written, measurements - finished, bugs - fixed.
bvt: diana_coman: ok, this should be it http://bvt-trace.net/tree-vtools/
bvt: also, update on the measurements: the rest of the perf overhead was still coming from the tty driver, and i had to resort to yet another fix for reading. however, with keccak (i turned the already-present sha3 into keccak) i am currently stuck with a hang that manifests during early bootup, so i will do the full writeup when i eliminate the bug; most likely, keccak would be feasible for both fast and good hashing.
bvt: diana_coman: i never signed all of the vtools vpatches. the only patches with my signatures are those of http://bvt-trace.net/2018/10/vpatch-replacing-mktemp3-take-two/ http://bvt-trace.net/2019/07/vdiff-vpatch-blockwise-keccaking/ http://bvt-trace.net/2019/08/vpatch-support-for-files-in-vtree-root/; the eta for a full tree is this wednesday.
bvt: not that it takes a lot of time, i guess i just never properly noted it down. will do after the measurements
bvt: diana_coman: setting up a mirror with everything is still in TODO for me.
bvt: i didn't think any special activity was urgently required from me, and the blogpost with my comments on the matter will come a bit later; i can see the point re the weak sample
bvt: re the closing of s.nsa and the related conflict, i will do a post on this (imo irc would be a bad medium for this purpose, at least for me). i can raise the priority of this post if people consider it a pressing matter.
bvt: http://logs.ossasepia.com/log/trilema/2019-11-04#1949422 << well, still have to do measurements (end of this week if everything goes right), then it would become possible to decide if everything works.
bvt: http://logs.nosuchlabs.com/log/trilema/2019-10-24#1948109 << it's mine, installed in my ancestral home (and not kiev, no). it switched from -connect to independent mode only yesterday, so i'd let it work for at least a month before advertising it as a stable node.
bvt: http://logs.nosuchlabs.com/log/trilema/2019-10-22#1947746 << meanwhile figured out how to read from tty correctly, the updated number is 1.3%
bvt: http://logs.ossasepia.com/log/trilema/2019-10-22#1947565 << ty for offer, unfortunately something like this would be possible for me only in feb.
bvt: http://logs.ossasepia.com/log/trilema/2019-10-22#1947578 << yeah, i will definitely test, including keccak (can pick c impl. for tests)
bvt: http://logs.ossasepia.com/log/trilema/2019-10-22#1947572 << allowing arbitrary hash functions would create more bloat -- either i'd have to use some generic crypto abstractions, or hack up the build system and unconditionally enable all the accepted crypto algos at build time to use them directly. it's only chacha and sha1 that are located among e.g. memcpy in lib/ and available unconditionally; the rest require more involvement.
bvt: http://logs.ossasepia.com/log/trilema/2019-10-22#1947556 << well, as long as selection procedure is rational, can always explain why have chosen this one and explicitly say that this is not an endorsement.
bvt: http://logs.ossasepia.com/log/trilema/2019-10-22#1947554 << and ty, mircea_popescu, diana_coman!
bvt: http://logs.ossasepia.com/log/trilema/2019-10-22#1947552 << this is a nice idea, i will implement it
bvt: ftr diana_coman's keccak is 1.7 MB/s on my machine @ 100% CPU utilization.
bvt: but how much cpu percentage would we want to dedicate to good hashing? if e.g. keccak is in place, what could happen (but only experiments will show) is that the bottleneck would be in the CPU -> if the system somehow runs low on entropy and self-hashes (which goes through the HG now; should it go through HF? that would also make sense from the Size/2 decision rule), we'd have an easy DoS vector against the system.
bvt: some thoughts: currently the feeder app takes 3% CPU @ ~2.4 kB/s when feeding data into O through HG, because the bottleneck is in FG reading, and lots of the overhead seems to be coming from the retarded tty interface, which forces reading of individual bytes.
bvt: http://logs.ossasepia.com/log/trilema/2019-10-22#1947551 << well yes, sha1 is making FG bottleneck much worse.
bvt: http://logs.ossasepia.com/log/trilema/2019-10-22#1947549 << i certainly liked serpent most, because it went through at least some form of republican investigation, but decided not to hurry too much putting it everywhere.
bvt: http://logs.ossasepia.com/log/trilema/2019-10-14#1945443 << please proceed with the installation
bvt: http://logs.ossasepia.com/log/trilema/2019-10-13#1945073 << the only problem i ever encountered is project gutenberg blocking germany, but no problems with archive.is
bvt: hi. i intend to finish the kernel rng work end of next week - then will do a dissection-writeup on what i did + vpatch. i also have to setup mpwp somewhere - this may take a bit of additional time.
bvt: asciilifeform: you can get 2u for yourself - you'd still run logger somewhere, no?
bvt: well yes, 180 > 140; also freenode declares clients dead after ~250 secs of silence, so 180 should work
bvt: asciilifeform: on the non-logging bot.py i used for testing my vpatch, i got pings from freenode only every ~140 seconds -- not 30-45 definitely
bvt: asciilifeform, diana_coman: i have reuploaded the reground vpatch with version in bot.py set correctly
bvt: yw
bvt: diana_coman, asciilifeform: post with vpatch updated
bvt: will regrind once again later today
bvt: spyked: that was for the pre-writeup vpatch release, the one published with a writeup was reground to the head of that time
bvt: it will need regrind now though
bvt: diana_coman: this should have been fixed here: http://bvt-trace.net/2019/09/logotron-active-disconnect-vpatch/ , please give it a spin; iirc it did not get into sept_fixes vpatch
bvt: germans are taught nowadays that they were all victims of evil nazis, and imprison anyone who was close enough to KZs back in the day, but at least older generation would still tell stories about their fathers when drunk (i have only second-hand recollections though)
bvt: ah so, this is clear from the letter (and iirc you pointed it out in some of the trilema articles)
bvt: http://logs.nosuchlabs.com/log/trilema/2019-09-29#1939005 << i can see the point of 'only man can be caring' in that letter, but tbh the 'cunts/prostate' one totally eludes me.
bvt: yes, this is what i was talking about, ty.
bvt: mircea_popescu: i.e., in this context my rating remains +2, hence no self-voice in #trilema?
bvt: hi. for me, the meatworld events mentioned in http://bvt-trace.net/2019/08/fg-fed-linux-rng-work-schedule/ are over, i am continuing active fg-kernel work
bvt: lobbes: please test the following vpatch: http://bvt-trace.net/vpatches/active_disconnect.kv.vpatch http://bvt-trace.net/vpatches/active_disconnect.kv.vpatch.bvt.sig ; i'll have time for a writeup only on the weekend. this vpatch is only lightly tested so far.
bvt: i don't think that a zombie socket can interfere with the current code -- the local side of the socket gets an unused random port automatically. this would be an issue only if bind(2) were called on the socket before connect
bvt: asciilifeform: today i had a quick look through stevens; apparently the default timeout for SO_KEEPALIVE is two hours (digging through the linux source confirms this), so i don't think this option is of any use as is. TCP_KEEPIDLE sets the keepalive timeout per-socket, but then there is the question of to what extent to rely on all of these socket options.
bvt: http://logs.nosuchlabs.com/log/trilema/2019-09-10#1935426 << ty, works great
bvt: trinque: ^ (self-voice somehow got through yesterday on the 2nd attempt)
bvt: !!invoice BingoBoingo 0.123 auction #1058 : 1209 WFF sent
bvt: http://logs.nosuchlabs.com/log/trilema/2019-09-09#1935294 << i'll have a look into this problem. this month i'm a bit tight on time for republican work, and already have work in schedule, so i can't promise any deadline, unfortunately.