phf: jurov: well, it's not clear where "disregard all locks" comes from in the original request. if the actual operations are as asciilifeform describes, i.e. sporadic inserts, and sporadic selects, then there will be no locks. my point is that there's no "disregard all locks" in postgresql, you solve it by knowing what lock you're hitting, and then designing your query to sidestep the lock
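[ed.: "knowing what lock you're hitting" can be done from a live session; a minimal sketch, assuming psql access to the box in question. pg_locks and pg_stat_activity are standard postgresql system views:]

```sql
-- show backends currently waiting on a lock, and what mode they want
SELECT a.pid, a.query, l.locktype, l.mode
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;
```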
mircea_popescu: it really should be up to operator wtf, if i want to read dirty let me read dirty what sort of decision is this for designer to make.
mircea_popescu: mysql doesn't lock reads on write locks ; i expect any rdbms should be capable via config.
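[ed.: for what it's worth, postgresql's MVCC already behaves the way mircea_popescu asks: plain SELECTs never wait on row write locks, and READ UNCOMMITTED is accepted syntax but silently treated as READ COMMITTED, so there is no "dirtier" mode to unlock. table name below is hypothetical:]

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
-- in postgresql this behaves exactly like READ COMMITTED:
-- the reader sees the last committed row versions and does not block on writers
SELECT count(*) FROM ssh_keys;  -- hypothetical table
COMMIT;
```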
mircea_popescu: if this is the path you must walk to go from solipsist-alf to socially-integrated-alf i can see it, but hurry it up already it's irritating.
phf: asciilifeform: right, i was going to get to that :}
phf: i'm not even arguing with you, i'm saying that the ~full extent~ of what "move it to psql" is going to do is ~eliminate cross-boundary issue~ that is all. so it'll shave some significant overhead, but it's not a silver bullet.
phf: what i'm saying is that a significant fraction of "1000s of queries AND ..." is the cross-boundary. you compile queries on c side, you send them to psql, it then parses, prepares results, serializes, sends it to c side, c side has to now parse all over again
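[ed.: one of the costs phf lists, the server-side parse/plan on every call, can be shaved with a prepared statement, parsed once and executed many times. table and column names below are hypothetical; note this does nothing for the serialization and c-side re-parsing legs, which is exactly the "not a silver bullet" point:]

```sql
-- parsed and planned once:
PREPARE find_key (text) AS
  SELECT key_data FROM ssh_keys WHERE fingerprint = $1;

-- each call skips parse/plan, only binds the parameter:
EXECUTE find_key('aa:bb:cc:dd');
DEALLOCATE find_key;
```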
phf: asciilifeform: did you understand what i said?
phf: asciilifeform: i'm just trying to establish the dataflow here, for my own curiosity
trinque: but I am not arguing for something here; you'd know what you want
mircea_popescu: though i am unaware anyone ever implemented this ; because, of course, i am unaware anyone used the guy's algo for any other purpose than gawking.
trinque: I am, sadly, quite good at SQL if you want the thing translated
mircea_popescu: incidentally asciilifeform since we're now doing open source db optimization shared_buffers is probably a larger concern. what is it ? defaults to 128mb but i'd readily see it 1-4gb in your case.
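[ed.: the knob in question, as a postgresql.conf fragment; the value is the in-channel suggestion, not a universal recommendation:]

```
# postgresql.conf -- illustrative value, per the 1-4gb suggested above
shared_buffers = 2GB    # default is 128MB; this is postgres's own page cache,
                        # separate from the kernel's readcache
```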
a111: Logged on 2016-12-30 14:32 Framedragger: (do note, 'work_mem' is per user / per request. so may be easier to DoS. thought i should mention this for completeness)
a111: Logged on 2016-12-30 14:30 asciilifeform: i'ma try it soon.
a111: Logged on 2016-12-30 14:20 asciilifeform: i even spoke with career dbists, answer was 'your application is monstrous abuse and you need a cluster'
jurov: i was just paraphrasing, don't remember the exact word
mod6: when I get a free moment, i'll throw the latest eulora on there. can be my mining box. :]
mod6: i had obsd on it like for nearly all of '16... but wasn't doing anything with it. so i threw linux on there.
mod6: oh, hey, actually. so I've got a box.
mod6: nice! i haven't done any sledding yet. gotta do that one of these times.
diana_coman: fwiw I can confirm that current code compiles perfectly fine on gcc 4.4 in any case
mircea_popescu: sure. i'm not saying it must be standardized. just, there.
mircea_popescu: asciilifeform hey, i'm not sure i want it to work on heathens what.
mircea_popescu: incidentally, i would say deedbot also counts as tmsr keyserver now.
mircea_popescu: asciilifeform ah, i didn't realise you were happy with linux readcache.
Framedragger: i suspect then that the inserts/sec slowness is due to postgres currently making really damn sure that *all* layers of cache are forced. this "full forcing of cache for every row" is what makes things slower; but it's also the only really-super-reliable approach for the case at hand (remote box).
mircea_popescu: well not exactly like that, but i guess that may work for a heuristic early on.
mircea_popescu: because it is not computer-possible to have what you describe without what i describe.
mircea_popescu: Framedragger see here's what graybeard means : i see that statement, and I KNOW there's a footnote somewhere you don't know about / bother to mention which says "except when abendstar in conjunction with fuckyoustar when it's 105th to 1095th column".
mircea_popescu: asciilifeform hey, i only said they exist, i didn't say their brains work.
a111: Logged on 2016-12-30 05:22 phf: i think it treats one of the names as canonical
Framedragger: here's what i'm thinking: disable synchronous_commit , but set 'checkpoints' so that results are flushed to db every $n inserts/updates. i can see however how you may barf from such an idea, "it's either reliable, or isn't".
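[ed.: a sketch of Framedragger's suggestion; the parameter names are real postgresql.conf settings, the values illustrative. with synchronous_commit off, a commit returns before its WAL record is fsynced, so a crash can lose the last few hundred milliseconds of acknowledged transactions, but cannot corrupt the database:]

```
# postgresql.conf -- sketch, values illustrative
synchronous_commit = off    # ack commits before WAL fsync; crash loses at most
                            # roughly the last wal_writer_delay worth of commits
wal_writer_delay = 200ms    # how often the wal writer flushes regardless
```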
Framedragger: "lose weeks of work" is insane :( i'm sorry to hear that. *this* would not expose you to that scenario. but one would have to pin down still-possible data loss scenarios, if any.
Framedragger: (do note, 'work_mem' is per user / per request. so may be easier to DoS. thought i should mention this for completeness)
Framedragger: ah hm. tbh i'd still change work_mem because it's ridoinculously low by default, but i hear ya.
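[ed.: the knob under discussion, with Framedragger's DoS caveat as a comment; value illustrative:]

```
# postgresql.conf
work_mem = 64MB     # default 4MB; allocated per sort/hash operation per backend,
                    # so many concurrent queries multiply it -- the DoS angle above
```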
Framedragger: ahh right, i assume those include in-memory sorts
Framedragger: asciilifeform: i take it you are certain that main bottleneck and 'hogger' is the numerous inserts?
a111: Logged on 2016-07-18 18:08 asciilifeform: i know of no file system that would not choke.
Framedragger: to be 100% certain, i'd have to check. i see your concerns.
Framedragger: right. i just thought about checkpoint_completion_target (set to say 0.9) which may help with inserts, but ultimately you're right, physical reality
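[ed.: the parameter Framedragger names, for reference; 0.9 is his in-channel figure:]

```
# postgresql.conf
checkpoint_completion_target = 0.9   # spread checkpoint writes over 90% of the
                                     # checkpoint interval instead of bursting them
```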
Framedragger: busy for a bit, i don't want to cite you sth without thinking about it