
a111: Logged on 2016-12-30 05:40 phf: so if you were to produce a patch with a/old-veh.lisp and b/veh.lisp. existing vtrons will happily press it, though it's a total clusterfuck
Framedragger: mircea_popescu: consider a scenario in which you knew how much data you could lose ("up to 100 last rows"), and you could check if you lost any (last row id == last-id-processed.txt ? false : true). that being said, this way things become more wibbly-wobbly, so probably fuck that. :(
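A minimal sketch of the check described in that scenario, not anything from the actual phuctor codebase; the database name, table, and id column below are hypothetical placeholders:

    # compare the highest row id in the db against the last id recorded on disk
    last_seen=$(cat last-id-processed.txt)
    last_db=$(psql -tA -d phuctor -c "SELECT max(id) FROM keys;")
    [ "$last_db" = "$last_seen" ] || echo "possible loss: db at $last_db, file at $last_seen"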
mircea_popescu: Framedragger data loss is catastrophic to a degree that can't be described, as far as phuctor goes. if you have to also check, your workload goes up 3x at least.
Framedragger: "Note that open_sync writing is buggy on some platforms (such as Linux), and you should (as always) do plenty of tests under a heavy write load to make sure that you haven't made your system less stable with this change. Reliable Writes contains more information on this topic." oh god. more inserts/sec but zero data loss => probably can't help you much. documentation doesn't encourage me :/
mircea_popescu: asciilifeform hey, i only said they exist, i didn't say their brains work.
a111: Logged on 2016-12-30 05:22 phf: i think it treats one of the names as canonical
Framedragger: but this way you would constrain any losses to particular known amounts.
Framedragger: here's what i'm thinking: disable synchronous_commit, but set 'checkpoints' so that results are flushed to db every $n inserts/updates. i can see however how you may barf from such an idea, "it's either reliable, or isn't".
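A minimal postgresql.conf sketch of that idea. Note there is no postgres knob for flushing every $n inserts; the closest bound on the at-risk window is time-based, via the WAL writer:

    synchronous_commit = off     # commits return before the WAL is flushed to disk
    wal_writer_delay = 200ms     # WAL writer flushes at least this often, bounding potential loss to a few multiples of this interval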
Framedragger: "lose weeks of work" is insane :( i'm sorry
to hear
that. *this* would not expose you
to
that scenario. but one would have
to pin down still-possible data loss scenarios, if any.
Framedragger: << need
to understand just what does
that imply..
Framedragger: "For situations where a small amount of data loss is acceptable in return for a large boost in how many updates you can do
to
the database per second, consider switching synchronous commit off.
This is particularly useful in
the situation where you do not have a battery-backed write cache on your disk controller, because you could potentially get
thousands of commits per second instead of just a few hundred."
Framedragger: asciilifeform: docs advise heavily on enabling write cache (but (sanely) insist on battery backup in
that case) for your 'loads of inserts per sec' use case..
Framedragger: asciilifeform: any JOINs in
those multiple queries for each 'insert'? (if yes,
this param should help.)
Framedragger: (do note, 'work_mem' is per user / per request. so may be easier
to DoS.
thought i should mention
this for completeness)
☟︎ Framedragger: (some of
those settings don't require db restart (but may require
to 'flush' params), some of
them do, best
to restart db after all changes are made.)
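For instance, parameters like work_mem or synchronous_commit take effect on a config reload, while shared_buffers only changes on a full restart; a sketch, with the data directory path as a placeholder:

    psql -c "SELECT pg_reload_conf();"   # re-read postgresql.conf without restarting
    psql -c "SHOW work_mem;"             # confirm the new value took effect
    # shared_buffers and similar still need: pg_ctl restart -D /path/to/data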
Framedragger: ah hm. tbh i'd still change work_mem because it's ridoinculously low by default, but i hear ya.
Framedragger: work_mem (used for in-memory sorts) is 4 MB default. 4 MB. (9.5 anyway). set it to 50 MB as per advisory at least.
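work_mem can also be raised per session or per role rather than globally, which narrows the DoS surface mentioned above; the 50 MB figure is just the one from the advisory, and the role name is hypothetical:

    SET work_mem = '50MB';                            -- current session only
    ALTER ROLE phuctor_worker SET work_mem = '50MB';  -- hypothetical role, applies at its next login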
a111: Logged on 2016-11-19 18:52 asciilifeform: Framedragger: db being hammered 24/7 with 'do we have this hash' 'do we have this fp' 'add this and this' 1000/sec is the bottle.
Framedragger: ahh right, i assume those include in-memory sorts
Framedragger: mircea_popescu: it's dark here in the northern hemisphere, god it's depressing :( mornin'..
Framedragger: asciilifeform: i take it you are certain that main bottleneck and 'hogger' is the numerous inserts?
a111: Logged on 2016-07-18 18:08 asciilifeform: i know of no file system that would not choke.
Framedragger: to be 100% certain, i'd have to check. i see your concerns.
Framedragger: right. i just thought about checkpoint_completion_target (set to say 0.9) which may help with inserts, but ultimately you're right, physical reality
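The corresponding postgresql.conf lines would look roughly like this; 0.9 spreads checkpoint writes over most of the interval so the flush spike doesn't collide with the insert stream, and the longer timeout is an assumption, not from the log:

    checkpoint_completion_target = 0.9
    checkpoint_timeout = 10min     # assumed value; default is 5min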
Framedragger: busy for a bit, i don't want to cite you sth without thinking about it
Framedragger: i'm thinking, more memory could help with certain things that db is busy with, incl insertion, even. i'm not sure.
Framedragger: aha right. i'm doing sth else but i could later ping you with a sample postgres file which you could try out (would need db restart)
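What the memory side of such a sample file might contain; the numbers are purely illustrative, for a box with several GB to spare, not taken from phuctor's actual config:

    shared_buffers = 2GB            # commonly around a quarter of the RAM given to postgres; requires restart
    effective_cache_size = 6GB      # planner hint: expected OS cache plus shared_buffers
    maintenance_work_mem = 256MB    # helps index builds and vacuum, not per-query sorts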
Framedragger: it's something that can be very easily changed and tested without sweat or breaking things.
Framedragger: asciilifeform: do you have an idea how much memory you could allow postgres to eat up? i know you have that other super hardcore thing eating lots of memory on the side
a111: Logged on 2016-12-30 12:53 Framedragger: (i'm thinking about things like size of shared buffers etc)
Framedragger: (i'm thinking about things like size of shared buffers etc) ☟︎
Framedragger: would still be interested to take a look, wouldn't hurt.
a111: Logged on 2016-11-21 12:48 Framedragger: asciilifeform: since i'm fiddling around with postgres for work anyway, i'm curious, if you find a moment, could you maybe send me the postgresql.conf file on phuctor's machine? i'd take a look (it's very possible you know much more re. what's needed there, but i'm just curious about a coupla parameters, doesn't hurt to check)
a111: Logged on 2016-12-30 01:20 asciilifeform: yeah but one that doesn't motherfucking grind to a halt when read 1000/sec omfg
davout: was there a discussion of the use case where one wishes to create, and sign transactions from an arbitrary set of unspent inputs? ☟︎
BingoBoingo: need to remind self blockchain only gets longer
BingoBoingo: davout: 30 days to February 6th 2015 block 34236 in latest sync
davout: out of curiosity, how long did it take trb node operators to fully sync? ☟︎
davout: "plox to use xcode, where everything works differently, because reasons"
davout: OSX, totally the platform sane people develop on "valgrind: This formula either does not compile or function as expected on macOS" hurrrr
ben_vulpes: i believe that mod6 has a solid one as well, pete_dushenski's has been blackholed of late
a111: Logged on 2016-12-29 23:08 asciilifeform: also i see some 'connect() failed after select(): Connection refused' which iirc is bleeding edge prb kicking trb out
ben_vulpes: not that it cannot be done that way, but it is faster other ways.
ben_vulpes: and waste time negotiating connections
ben_vulpes: otherwise trb may decide to ask utter randos for blocks
ben_vulpes: davout: you will sync far more quickly if you -connect to a single, high reliability, high bandwidth node during sync
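Roughly, using the node davout brings up below; -connect is the stock bitcoind flag that trb inherits, and the exact invocation will differ per setup:

    ./bitcoind -connect=62.210.206.141 -daemon    # fetch blocks only from this one peer during initial sync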
davout 's trb node is now up and apparently syncing at 62.210.206.141
ben_vulpes: amusing innit that the father of the since-aborted 'blockchain spam' meme is now spamming irc
ben_vulpes: mod6: how does that help me filter luke-jr's spam and not noobs?
ben_vulpes: i see a new face at the back of the hall, i'm going to give them the opportunity to at least say hello and introduce themselves.
ben_vulpes: i will not spend the time to figure out how to mute everyone but noobs i've never seen before
mod6: fwiw you can turn off those messages in your client too.
ben_vulpes: for those who *still* miss the point, joining and parting is opening and closing the squeaky doors on a hall where 5 people are arguing and 500 muffling laughter and groans
ben_vulpes: a join and an up, which is predicated on the obvious
davout: luke-jr: it's not like you *have* to idle in the chan, logs are public and if you have something to say, it's a /join away
ben_vulpes: wear a shirt with last week's sweat stains on it, it's not like you're that important
ben_vulpes: nah, park your boat on the lawn, who cares
ben_vulpes: it's like omfg even aws is barely 10us/mo for a vps you don't have to trust
luke-jr: not that IRC is all that important
ben_vulpes: what part of bumfuckistan do you live in that prevents colo access?
luke-jr: want to donate $30k so I can get a better ISP? :p
ben_vulpes: you want to matter in crypto but "nah bro, it's just my isp" at me?
ben_vulpes: top 10 in disconnects over the year? no problem?
luke-jr: I saw the link. didn't see a problem.
ben_vulpes: luke-jr: i know you're awake and reading this because you pm'd me. don't pretend otherwise, it's downright foolish.
ben_vulpes: one can patch an empty directory with an arbitrary patch and extract the filenames patch wanted to hit
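A sketch of that trick with GNU patch; the .vpatch filename is a placeholder, and it relies on the 'patching file' lines patch prints when the patch applies cleanly:

    mkdir empty && cd empty
    patch -p1 < ../foo.vpatch | grep '^patching file ' | sed 's/^patching file //'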
ben_vulpes: phf: when it works, it outputs the list of patched files
phf: ben_vulpes: you mean like ~parse~ the output of patch?
ben_vulpes: !up luke-jr well did you read the link or what
ben_vulpes: wait phf hang on no i don't think i'm going to do the largest common, i think i'm just going to use the output of patch to figure out what was actually patched
phf: i noticed that btcbase supports filenames with spaces in them: if you start a filename with " it will read until a closing ". i have no idea where i got this from, because gnu diff/patch don't support spaces in names.
ben_vulpes: ^^ mircea_popescu asciilifeform mod6 trinque and any other vtronicists pls to opine
ben_vulpes: phf: when i crack my v again in the morrow, i'm going to implement hash-checking against longest common directory tree
a111: Logged on 2016-06-20 04:23 phf: which is handy if you're using something else to produce the patch, or if you need to use a non-trivial diff command. for example i sometimes need to exclude files from diffing, so a command might look like diff -x foo -x bar -x qux -ruN a b | grep -v '^Binary files ' | vdiff > foo.vpatch