
a111: Logged on 2019-05-28 12:07 diana_coman: my proposal was to have the Requester ask "what are a,b,c" and move those three objects into a "pending" queue; when another request for them arrives, that's fine; when requester wakes up, it checks and prunes any that meanwhile are there so it doesn't ask again for stuff it meanwhile got
mp_en_viaje: (fucking obviously they got the ttl wrong, who the fuck heard of ttl as a SERVER-side setting. how is the server to know how often my pictures of jodie foster's injured snatch need refreshing ?!)
diana_coman has just re-read the cs manual so feels rather ..funny
diana_coman: for the same money the c code can have the defaults and that is it
mp_en_viaje: it's remarkable, reading through all this, how much like a web browser this client actually is. TTL next ?
a111: Logged on 2019-05-28 11:51 diana_coman: client asks for data from EuCache; EuCache replies with either true (data found + whatever values that data has) or false (not found + default values)
a111: Logged on 2019-05-28 11:47 diana_coman: mp_en_viaje: I suspect it's again one of those things where there is no disagreement at the core but we are not yet fully in sync re various bits and pieces;
BingoBoingo: nocredit: Other than running a Bitcoin node, what else do you spend your time on?
BingoBoingo: nocredit: How about you make your way to #pizarro
diana_coman: nocredit: register a key with deedbot as otherwise there can't possibly be a "next time/later", it's always first time...
BingoBoingo: nocredit: Maybe register a GPG key. Otherwise hard to tell whoever returns is you
nocredit: ok perfect. Will update here when I complete the sync
diana_coman: it seems to me you can't afford to not use pizarro for bitcoin really
nocredit: but yes, i've learnt the lesson. Only bare metal at my home
BingoBoingo: But you have to shut the daemon down cleanly for blockindex.dat to be portable
diana_coman: nocredit: you know that saying that one's too poor to use cheap stuff?
nocredit: with some spare time to dump the hdd
nocredit: because vultr vps has sent me a takedown notice
BingoBoingo: <nocredit> and last: if i tar gz everything synced as far as now on the VPS and dump it at my premise, i'll be able to restart at block height 300k in the future? << As long as you cleanly shut down the daemon before dumping
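BingoBoingo's point is that the datadir only restores to the same block height if the daemon was stopped cleanly before archiving. A minimal round-trip sketch of that snapshot/restore, using a dummy datadir; the paths, the `blkindex.dat` filename, and the `bitcoind stop` step mentioned in the comments are illustrative assumptions, not taken from an actual TRB install:

```python
# Sketch: snapshot a (dummy) TRB datadir and restore it elsewhere.
# In real use one would first stop the daemon cleanly
# (e.g. `bitcoind stop`, then wait for the process to exit),
# so the block index on disk is consistent before archiving.
import os
import shutil
import tarfile
import tempfile

demo = tempfile.mkdtemp()
datadir = os.path.join(demo, ".bitcoin")
os.makedirs(datadir)

# stand-in for the index file the daemon writes on clean shutdown
with open(os.path.join(datadir, "blkindex.dat"), "w") as f:
    f.write("height=300000")

# archive the whole datadir, as one would before moving hosts
snap = os.path.join(demo, "snapshot.tar.gz")
with tarfile.open(snap, "w:gz") as tar:
    tar.add(datadir, arcname=".bitcoin")

# unpack on the "new" machine and confirm the index survived
shutil.rmtree(datadir)
with tarfile.open(snap) as tar:
    tar.extractall(demo)
print(open(os.path.join(datadir, "blkindex.dat")).read())
```

If the daemon is killed instead of stopped, the archived index can be mid-write and the restored node has to reverify, which is exactly the cost being avoided.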
BingoBoingo: The TRB 3-6 week sync (CPU and disk bound) is a strictly linear, no exceptions to verification affair
nocredit: and last: if i tar gz everything synced as far as now on the VPS and dump it at my premise, i'll be able to restart at block height 300k in the future?
diana_coman: nocredit: for that matter if running own trb is too big a pain/expense, I suppose you might be better served by getting in the wot and using deedbot's wallet for that matter.
BingoBoingo: nocredit: Once synced it is very tenacious with staying synced. Most of the core sync speedup is they stopped verifying many blocks
BingoBoingo: The Gavin or some other shitgnome early on tried to push a "mandatory" segwitting, but that proposal died quickly and they all now pretend that never happened.
nocredit: correct, I appreciate TRB as it removes the bloat. But 3 weeks to sync is really a pain
BingoBoingo: The bigger concern with core is the bloat. Shit like the "payment protocol"
diana_coman: nocredit: since you have nothing to do with segwit, you are immune to attacks on segwit, not as much protected as entirely immune by definition, no?
BingoBoingo: 1 addresses are pay to public key hash. 3 addresses are pay to weird addresses.
BingoBoingo: nocredit: Segwit and all the other core weird happens on 3 addresses
BingoBoingo: It takes time, but I can also put a blockchain on a drive.
nocredit: another question: if i run core without using segwit features (so sticking with the 1 starting addresses) am i actually protected from an eventual attack on segwit? I know that here is not core support, but is there a way to tell core to dump the segwit part? ☟︎
BingoBoingo: The vacant rk can indeed be rigged with the SATA snake to add a 1tb drive.
nocredit: thanks, yes physical colo is too much, i hope that ssh tunnel is stable enough for 3 weeks of sync
diana_coman: nocredit: re provider I warmly recommend talking to Pizarro and in particular to BingoBoingo in #pizarro.
nocredit: the small vps would only handle the network traffic
nocredit: my problem is that i don't have a static ip at my premises, so at home it's a pain with the myip parameter. I was trying with a pico vps to bypass this by setting up a private vpn, but as of now i'm stuck
diana_coman: for paying customers who may want to run trb, what?
diana_coman: asciilifeform: doesn't pizarro offer anything though?
nocredit: is there a recommended vps provider to use?
nocredit: second, my vps provider (vultr) is complaining with me that i put too much wear and tear on their ssd
nocredit: first of all i'd like to ask what is a sane time for sync from zero to 100% ☟︎
trinque: they had an outage earlier today in singapore, and for some reason this resulted in the bot being permastuck
trinque: having serious problems with the DC I'm using for deedbot, sorry folks. I'm going to try to get the migration to pizarro completed this weekend.
diana_coman: you can feed it blocks manually if you have them & are in a hurry but otherwise I don't yet fully grasp your problem as such: is it stalled or is it just that you don't expect it to take longer than 1 week or what?
nocredit: 80% of the debug.log is about discarded blocks
diana_coman: it does take time to sync fully if you start from 0, yes;
diana_coman: nocredit: it's unlikely that it's too slow since plenty of people are running same and sync no problem; it is true that it's not on vps usually but at any rate, it may be all sorts of other stuff: is it blackholed?
nocredit: hi, thanks for the voice. Basically trb (with aggressive patch) simply is too slow to sync, and i'm using a VULTR vps with 6 cores and 16GB of ram. By too slow i mean that after 1 week it is just at block height 300k
diana_coman: nocredit says he needs support with trb so let's hear
a111: Logged on 2019-05-28 08:49 mp_en_viaje: practically, yes it's undesirable, but the overwhelming consideration is that this undesirableness can not be managed for the client by the server, because the server suffers from a serious knowledge problem wrt it.
diana_coman: http://btcbase.org/log/2019-05-28#1915708 - even on re-re-read I can't follow this: where does it seem as if I'm saying in the least that server should solve this at all for client? (no, it can't, of course); my approach is to solve this in a single point in client aka Requester rather than have it spread throughout client at every point where some part finds out it wants some data. ☝︎
diana_coman: the timeout is the only magic value, yes, but that is literally last resort, aka guaranteed after that time, it WILL send another request; it WILL however send one sooner if the previous one is answered, why would it wait longer than it has to
diana_coman: and re Y magic intervals, that is not really there either, that's the whole point of data-received notifications, to NOT rely on magic Y interval
diana_coman: it's true that atm at least it's more independent from game play rather than "when not busy"
a111: Logged on 2019-05-28 09:12 mp_en_viaje: http://btcbase.org/log/2019-05-27#1915700 << if your argument is actually "the client should asynchronously ask for all data it uses defaults for AT SOME OTHER TIME than in the middle of heavy gameplay, such that all the complex gfx of everyone's armors, mounts and flying dildoes are downloaded piecemeal and while sitting around, rather than en masse whenever teleporting to a large market town" you have a solid point. but it's
diana_coman: and given the two-class system those are effectively priorities: at every ask-opportunity, the Requester will choose first object request and only second file request (those really are the ONLY two types of questions the client may ask the server)
diana_coman: might add also, since it's perhaps not obvious: there is no exact "repeat request" as such because anyway, how could that be (counter of messages at the very least is different!) but more importantly, every time Requester asks the server for something, it simply asks about as many pending things as it can, there is no "oh, I asked about those and not yet here so let's ask exactly those again"
diana_coman: for that matter I suppose it can even just have one queue, they are all "pending" and simply prune it every time it wakes up;
diana_coman: my proposal was to have the Requester ask "what are a,b,c" and move those three objects into a "pending" queue; when another request for them arrives, that's fine; when requester wakes up, it checks and prunes any that meanwhile are there so it doesn't ask again for stuff it meanwhile got ☟︎☟︎
diana_coman: and now re waste traffic: at t1 there are requests for obj a, b, c; at t2 Requester wakes up and asks the server "what are a, b, c", drops those as "done" and goes back to sleep; at t3 there is another request for a,b,c so Requester puts them back in its queue; at t4 a,b,c arrive; at t5 Requester wakes up and ...asks the server again "what are a,b,c?" because well, they are there in the queue, right? ☟︎
diana_coman: the idea here being that well, if the caller still wants that stuff and it's not there, they will just request it again anyway so it gets again into the queue and at some point it will make it into a message
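The pending-queue with prune-on-wake that diana_coman proposes above can be sketched as follows. The class and method names, and the plain set standing in for the local cache, are illustrative assumptions, not the actual client code:

```python
class Requester:
    """Single point through which the client asks for data;
    a sketch of the prune-on-wake proposal (names illustrative)."""

    def __init__(self, cache):
        self.cache = cache    # stand-in for the local data store
        self.pending = []     # requested but not yet received

    def request(self, obj_id):
        # callers ask the Requester, never the server directly;
        # a repeated request for something already pending is fine
        if obj_id not in self.pending:
            self.pending.append(obj_id)

    def on_wake(self):
        # prune whatever meanwhile arrived, so the t5 problem
        # (re-asking for already-received a,b,c) cannot happen
        self.pending = [o for o in self.pending if o not in self.cache]
        # everything still pending goes into the next "what are ...?" ask
        return list(self.pending)

r = Requester(cache=set())
for obj in ("a", "b", "c"):
    r.request(obj)
print(r.on_wake())          # ['a', 'b', 'c']  -> sent to server
r.cache.update({"a", "b"})  # answers for a and b arrive meanwhile
r.request("c")              # duplicate request while pending: fine
print(r.on_wake())          # ['c'] only; a and b were pruned
```

Entries stay in the queue until the data is actually there, so nothing is dropped as "done" merely because it was asked about; the only remaining magic value is the last-resort timeout after which a still-unanswered entry is simply asked about again.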
diana_coman: now there is the apparently disputed bit: in the simplest implementation, requester can now consider that its job is done and therefore go to sleep until next time when it might send a message
diana_coman: whenever it decides it CAN actually send a message to the server to ask for something, it packs together as many of those pending requests as it can in one message (protocol allows a request for several files/obj in same message) and it sends it on its way
diana_coman: so the Requester accepts all and any requests and keeps them neatly in a queue
diana_coman: the Requester is the one who knows where data comes from, what one needs to do to obtain it, what sort of constraints there are (e.g. don't spam server with 1001 requests per second) and even what has to be obtained in order to be able to make a request at all (e.g. a set of Serpent keys!)
diana_coman: anytime it wants something fresh, it will place a request with the local Requester (hence, NOT directly with the server, for all it cares the data comes from Fuckgoats really)
diana_coman: and I say "fresh" because it's not even necessarily a case that it doesn't have it but maybe (e.g. for position) it considers it obsolete hence deletes it and wants it new
diana_coman: on top of the above, the client further has this choice: it can decide it wants to ask for some fresh stuff basically, be it file or anything else
diana_coman: cache will have some default value for anything (because defaults are by type/role so not a problem to have them upfront) and it provides those or better, simply marking them as what they are but never saying "huh, no such thing"
diana_coman: so up to here I think it's clear that yes, client can therefore play happily forever after totally offline
diana_coman: this can/is to be done by any bit and part of the client that is looking for some data of any sort, be it art, position, whatever
diana_coman: client asks for data from EuCache; EuCache replies with either true (data found + whatever values that data has) or false (not found + default values) ☟︎
diana_coman: let me expand a bit on the concrete solution I'm talking about:
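The found-flag-plus-values contract described above, with per-type defaults so the cache never answers "no such thing", could be sketched like this; the default table, the kinds, and the method names are invented for illustration:

```python
class EuCache:
    """Local cache that always answers: stored values when present,
    otherwise the per-type default, flagged as not-found."""

    # defaults are per type/role, so they can all exist upfront
    DEFAULTS = {"texture": "placeholder.png", "position": (0, 0, 0)}

    def __init__(self):
        self.store = {}  # (kind, key) -> value

    def put(self, kind, key, value):
        self.store[(kind, key)] = value

    def get(self, kind, key):
        # True + real data, or False + the default; never an error
        if (kind, key) in self.store:
            return True, self.store[(kind, key)]
        return False, self.DEFAULTS[kind]

cache = EuCache()
print(cache.get("texture", "armor_042"))  # (False, 'placeholder.png')
cache.put("texture", "armor_042", "armor_042.dds")
print(cache.get("texture", "armor_042"))  # (True, 'armor_042.dds')
```

The False flag is what lets a caller decide to place a refresh request with the Requester while still rendering something in the meantime.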
diana_coman: hence my "doesn't fit" - in this specific data-that-matters sense, not re eye candy
diana_coman: one thing that I see there is that you seem to consider that this "request" is ONLY for art stuff; the way I see it, it's not just for that but a generic mechanism for any sort of thing requested, be it art or contents or position or whatever
diana_coman: mp_en_viaje: I suspect it's again one of those things where there is no disagreement at the core but we are not yet fully in sync re various bits and pieces; ☟︎
a111: Logged on 2019-05-27 22:58 diana_coman: basically I don't actually think that "needed 100 times" SHOULD translate into "send 100 requests to the server"; something is either needed or not; it might be more needed than something else, sure but that's a relative (and changing) ordering of requests at most, not a traffic-generator essentially.
mp_en_viaje: the elegant solution for this would be for the client to keep a list, "items the server mentioned, we asked for but never received usable answer therefore using a default", and either go through it whenever convenient (eg, when player goes afk) or else even expose a button for player to do it himself. ☟︎
mp_en_viaje: a case for "retry X magic number of times at Y magic intervals which I, the designer, knew ahead of time, for everyone and for all time".