mircea_popescu: meanwhile, some incorrectly config'd installs (ancient ?) eat <a> tags
mircea_popescu: i dunno that it's a bug, just don't leave < floating around.
mircea_popescu: wouldn't that read "motherfucker, i have no fucking idea what the fuck i wrote in here, was i drunk or what, it's illegible!"
mircea_popescu: ahh, feedbot conveniently pointed me to diana_coman's comment. this thing is so fucking useful, making me look like a cyborg
mircea_popescu: illogical, granted, but to me the expectation's obvious
mircea_popescu: but personally, i'm waiting for billymg to emerge, out of his current work. if nothing's clear by then, we can hack. but before, no real benefit, all downsides.
mircea_popescu: and yes, we'll prolly have to hack something together to find this out
mircea_popescu: in any case -- the part where categories are useless, tags are where it's at, and they must be recalc'd on every article publish is clear. the part where HOW to calculate them in the first place, that's unclear.
mircea_popescu: i suppose nobody wrote enough since the dawn of the digital age for this need to appear and be conceptualized
mircea_popescu: basically, each article's tags' lease on life is "until another article is published".
mircea_popescu: well, there's no system that currently does this, recalculates the whole tag cloud and each article's tags on each new article published
mircea_popescu: diana_coman, i mean, that by your writing article 19, the tags of article 2 change
mircea_popescu: that's why nobody has a working system : 1. any meaningful interpretation of "categories" reduces to "tags", so even though implementations give "the choice" it is a dud choice ; and 2. any meaningful implementation of tags requires they change with the blog, whereas every implementation presumes to enter them at the time of publishing (which coincidentally but harmfully overlaps with the "don't alter history" imperative)
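The point made here -- that whether a word still works as a tag depends on the whole corpus, so publishing article 19 can change the tags of article 2 -- can be shown with a minimal sketch. The corpus, the threshold, and all names below are invented for illustration only:

```python
from collections import Counter

def tags(article, corpus, n=3, dead_frac=0.5):
    # a word is a useful tag only while it stays rare corpus-wide;
    # here anything appearing in more than dead_frac of all articles is "dead"
    df = Counter(w for art in corpus for w in set(art))
    live = [w for w in article if df[w] <= dead_frac * len(corpus)]
    return [w for w, _ in Counter(live).most_common(n)]

corpus = [["boat", "sea", "salt"], ["horse", "grass"]]
print(tags(corpus[0], corpus))            # all three words of article 1 still live
corpus.append(["sea", "salt", "storm"])   # publishing a new article...
print(tags(corpus[0], corpus))            # ...kills 'sea' and 'salt' as tags of article 1
```

Nothing about article 1 itself changed between the two calls; only the corpus did, which is exactly why tags entered once at publish time go stale.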
mircea_popescu: search words are "i know the searcher but not the material, here's some clues" whereas tags are "i know the material but not the searcher, here's some clues"
mircea_popescu: however you turn this matter around, yes categories make no sense, if you wanted super-titles you'd do chapters. and tags ONLY make sense as the converse to search terms, "here's the words you might wish to search trilema for"
mircea_popescu: eg the trilema article i quoted above : i had fully forgotten about. not in the sense that i don't recognize it when i see it, i do, but in the sense that when i penned http://trilema.com/2019/black-or-white-the-day-of-saturday/ which needed it, i did not recall to put it in. i've meanwhile corrected this and added the link, but i am certain there's THOUSANDS of such "actually mp, the item you'd link here is this" "oh shit you
mircea_popescu: this augments the ai with human mind, but then again also limits it -- you won't find what you didn't put in.
mircea_popescu: another approach is to just generate the list of most common words on trilema, pick the best ones, and tag with them all the articles that contain them
mircea_popescu: on the other hand, something complex, involving linkage and actual attempts at "ai" sense-making... well
mircea_popescu: on one hand, something simple like "tag each article with the 12 most frequently occurring live words over 3 characters long ; keep a central list of 'dead' words that occur in more than x% of articles, re-tag all articles tagged with one of the words there"
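The simple scheme quoted in that line writes down almost directly. A rough cut, keeping the 12-tag / over-3-characters / x% numbers from the quote and inventing everything else (the function name, the word regex, the 30% default for x):

```python
import re
from collections import Counter

def retag_all(articles, n_tags=12, dead_pct=0.30):
    # split each article into lowercase words over 3 characters long
    words = [re.findall(r"[a-z]{4,}", a.lower()) for a in articles]
    # central "dead" list: words occurring in more than dead_pct of all articles
    df = Counter(w for ws in words for w in set(ws))
    dead = {w for w, c in df.items() if c > dead_pct * len(articles)}
    # tag each article with its n_tags most frequent live words
    return [[w for w, _ in Counter(ws).most_common() if w not in dead][:n_tags]
            for ws in words]
```

Because the dead list is a function of the whole corpus, re-running retag_all on every publish is exactly the "tags change with the blog" recalculation under discussion.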
mircea_popescu: i confess among the papers on my desk there's various approaches at word-distance and otherwise auto-tagging
mircea_popescu: btw, don't you find the titles-only style for archives / categories better ?
mircea_popescu: diana_coman, srlsy ? the core of the argument was that google lists a supposed number of results, in the bns, but it never disgorges any significant count
mircea_popescu: where you were ranting about how it sucks, doesn't even give 5 of its claimed 5bn results, what reason could anyone have to believe the count
mircea_popescu: make yourself a spyked-genesises-stolen-crap sig, use that.
mircea_popescu: nto v infrastructure, and to use non-main keys for this.
mircea_popescu: this understanding is current as of cca 2016. meanwhile we agreed that because a) it is preferrable to work with republican rather than imperial items and to prevent more imperial seepage than needed ; and because b) there's no limit to signature count as per long standing observations and discussions (with a very early asciilifeform cca 2013 maybe) then therefore the correct approach is to sign things early, to get them i
mircea_popescu: somebody decides to spend some time towards reviewing. what do they do next ?
mircea_popescu: why first ? think you about it, how is the review supposed to work ?
mircea_popescu: the fundamental problem with ideas is that they're not patches.
mircea_popescu: yes, but the idea is that "let whoever signed the genesis evaluate your patches, rather than do it for them through the venue of keeping them phf'd"
mircea_popescu: spyked, i'm starting to suspect, incidentally, that no cheekiness is involved, he simply never saw either instance, does something like two hours/week keeping track of things, and if that week has 25 hours' worth of logs and developments, well, gets 8%. hey trinque, are you current with the logs ? how descriptive is that model ?
mircea_popescu: i don't intend to negrate him, as things stand, so you're more than welcome to explore the wonderful world detailed in the further paragraphs of that comment.
mircea_popescu: "Needless to say, I am unamused ; and, to answer the original inquiry in firmer terms containing no ifs or buts : no, I personally have no further interest in hearing what phf may have to say on any topic. The time for "ok then, I will get my logger to spec by X date and hope to have my blog up by Y date" came and went, sometime yesterday.
mircea_popescu: including an answer for trinque, as to, where's his patch. aite.
mircea_popescu: now help me out here. is the answer "late aug/early sept i will publish my sept workplan which'll include a date at which i intend to publish the answer to that q" ?
mircea_popescu: so is the idea feedbot gets abandoned a la lobbes' orig bot ?
mircea_popescu: yes, that wasn't in discussion. but the current plan takes you to week 35, after which comes week 36 and a new plan ?
mircea_popescu: not necessarily. but you're running eg feedbot, you make improvements to it, nobody gets to see.
mircea_popescu: spyked, you know, it occurs to me your workplan is fundamentally weak in that it includes no "will genesis material / publish patches". am i guessing right in that the next edition, seeing how week 35 is just around the corner, will include prior plan performance review and that ?
mircea_popescu: so long as we don't exceed it by mass, it'll be the correct approach.
mircea_popescu: it's this device that transforms inca (circular motion) into republic (linear motion) by the principle of only permitting rotary motion in one direction, thereby using the inca mass against itself.
mircea_popescu: nothing prohibits proggy reading 2mb ring buffer at gb/s speeds. it'll get... well, a lotta hashmaterial.
mircea_popescu: though this comes at cost of complexity. imo the only correct approach is to have this set ~at kernel compile~, and there it stays. if you declare I 16kb and O 2mb, then your stretch factor is 128
mircea_popescu: anyway, exactly what stretch factor to use is a bit of an open question. may be worth it to permit the thing to self-adjust, based on O read volume.
mircea_popescu: if indeed a stretch of say 8:1 is preferred, HF takes 1 byte makes 8, then the HG will work on 8-byte buffers, take 8 bytes make 8 bytes.
mircea_popescu: now, the magic numbers are only here by need of example.
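The 8:1 stretch can be sketched mechanically. The discussion leaves HF unspecified, so a hash stands in for it below; the function name, the use of sha256, and the 16-byte pool size are all illustrative assumptions, not the actual design:

```python
import hashlib

STRETCH = 8  # fixed once, as if set at kernel compile: 1 input byte -> 8 output bytes

def hf(entropy: bytes) -> bytes:
    """Hypothetical HF: deterministically expand entropy by STRETCH.
    hashlib.sha256 merely stands in for whatever the real stretching function is."""
    out = bytearray()
    for i, b in enumerate(entropy):
        # derive STRETCH output bytes from each input byte plus its position
        out += hashlib.sha256(bytes([b]) + i.to_bytes(4, "big")).digest()[:STRETCH]
    return bytes(out)

pool = bytes(range(16))                    # a 16-byte input pool, for the example
ring = hf(pool)
assert len(ring) == len(pool) * STRETCH    # 16 bytes in, 128 bytes out
```

Fixing STRETCH as a constant rather than a tunable mirrors the "set at kernel compile, and there it stays" position above; a self-adjusting variant would make STRETCH a function of observed O read volume instead.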