
mircea_popescu: Framedragger bingo, "00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00"
mircea_popescu: here's a heads-up : head has been going for ~4 minutes. i expect it to crash without output.
mircea_popescu: and the mystery starts to unravel : the first 52mb of this file consist of ... one line of ^@
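A quick way to measure how far that NUL prefix actually runs, assuming GNU cmp and that the file really does open with a run of zero bytes (cmp stops at the first byte where the dump differs from an endless stream of zeros):

    cmp /dev/zero tril_posts.sql
    # reports "differ: byte N, line 1" -- N is where the real dump text starts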
mircea_popescu: -- Dump completed on 2017-04-06 15:10:30 << at least i know for a fact i got an integral mysqldump. sadly it's 40.9 gb.
mircea_popescu: tail -n+150000 tril_posts.sql | head -n5 > hurr.txt produces empty file ; meanwhile wc -l claims 111119289
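For pulling a few lines out of the middle without scanning the remaining tens of gb, sed with an explicit quit is an alternative probe (GNU sed assumed; line numbers as in the tail attempt above):

    sed -n '150000,150004p; 150005q' tril_posts.sql
    # prints lines 150000-150004 and exits as soon as it has passed them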
mircea_popescu: i can see the theoretical "benchmarks" online too. the problem is they're bs, "test file generated by seq 100mn". hurr. how about generate a file with lines of random length between 1kb and 1mb, then benchmark it.
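A rough sketch of such a generator in bash; the line count is arbitrary and the lengths are approximate (base64 inflates the raw bytes by 4/3, so lines land between roughly 1.3kb and 1.4mb):

    # generate 1000 lines of random printable data, each of random length
    for i in $(seq 1 1000); do
        len=$(( 1024 + (RANDOM * 32768 + RANDOM) % (1048576 - 1024) ))
        head -c "$len" /dev/urandom | base64 -w 0   # one long line, no wrapping
        echo
    done > bench.txt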
mircea_popescu: apparently if you send it usr1 sig it will spit stats to stderr
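Assuming "it" here is GNU dd, which does print its I/O statistics to stderr on SIGUSR1 and then keeps copying, a minimal illustration (filenames and sizes arbitrary):

    dd if=tril_posts.sql of=chunk.sql bs=1M count=4096 &
    pkill -USR1 -x dd    # dd reports records in/out and throughput so far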
mircea_popescu: 111119289 tril_posts.sql << that took ~5 minutes of wc time.
mircea_popescu: and in other lulz : apparently tail -n+x | head -n10 takes 4 times less than tail -nx+1 | head -n10
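Assuming the second form is plain "tail -n N", the two do rather different work (stream forward from line N versus locate the last N lines of a 40gb file); timing them side by side is straightforward:

    time tail -n +150000 tril_posts.sql | head -n 10 > /dev/null   # stream forward from line 150000
    time tail -n 150000  tril_posts.sql | head -n 10 > /dev/null   # find the last 150000 lines first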
mircea_popescu: anyway, im dumping teh table as a sql and will be wanting to hack it up. but i have no tools with which to identify the CUT HERE spot
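One way to locate candidate CUT HERE spots without loading the file into anything: grep for the dump's INSERT statements and print byte offsets (GNU grep; -a treats the NUL-ridden file as text, -b gives the byte offset of each match). The exact statement text is a guess at what mysqldump wrote:

    grep -abo 'INSERT INTO `tril_posts`' tril_posts.sql | head -n 20
    # each output line: byte_offset:matched_text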
mircea_popescu: copied multiple 100s of gb over the past hours, none failed anything.
mircea_popescu: head must take THE SAME TIME whether reading a 1kb or a 1pb file.
mircea_popescu: explain this to me. however long it may be, IT STILL STARTS IN THE SAME PLACE.
mircea_popescu: i guess i'm naive to imagine standard tools can do the job of fixing the mess standard tools created huh
mircea_popescu: any ideas on probing and cutting large files ? dd freezes.
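dd with an explicit block size, skip and count only touches the requested window, which makes it usable as a probe; cutting from a known byte offset can then be done with tail -c. OFFSET below is hypothetical, whatever the probing turned up:

    dd if=tril_posts.sql bs=1M skip=52 count=1 2>/dev/null | strings | head   # peek just past the 52mb NUL block
    tail -c +"$((OFFSET + 1))" tril_posts.sql > trimmed.sql                   # keep everything after the first OFFSET bytes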
mircea_popescu: trinque i certainly don't see anything wrong with teaching the stupid how to be even stupider until the time they're actually ready to hands and knees.
mircea_popescu: i thought it's obvious ubi is for the incompetent sluts that are "going to college" and the assorted ghetto scum "tryna make it in dis game"
mircea_popescu has recovered his db, is now fighting with monstrous conspiracy that swears on all voices "no dude it's legit, 40gb!"
mircea_popescu: but i get tons of idiots trying to you know, spam trilema claiming to be googlebot.
mircea_popescu: asciilifeform possibly those were just fake user agents ?
mircea_popescu: you (all of you, really) are very well linked since all the trilema mentions and etc. we've built like a coven
mircea_popescu: google will definitely slurp 5mn pages in the next week-ish
mircea_popescu: in fairness it's entirely my fault, i got bored with sql import and interrupted it. didn't work so well seeing how it was a write op.
mircea_popescu: this is also the first time a mysql db crashed in my personal experience.
mircea_popescu: my first time using any of these shits. prior to this morning i had nfi what /var/lib/mysql/ is or how you repair crashed mysql db or anything of the sort.
mircea_popescu: a superficial cat | strings | grep yielded a bunch of old stuff so it doesn't appear data loss is guaranteed.
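For that kind of probe, giving strings a minimum length and capping the matches cuts down the junk considerably; the search string here is only an example:

    strings -n 32 tril_posts.MYD | grep -m 20 -i 'trilema' | head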
mircea_popescu: i still have nfi what elephantiasis hit that poor file that it ended up 40g, but anyway, im moving the whole shebang on a tb partition and will see wtf myisamchk does.
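A minimal sketch of that myisamchk step, assuming the table really is MyISAM (the .MYD/.MYI pair suggests so) and that mysqld is stopped while it runs; -r is the fast recover, -o the slower safe one:

    myisamchk -r -v /var/lib/mysql/trilema_blog/tril_posts.MYI
    myisamchk -o -v /var/lib/mysql/trilema_blog/tril_posts.MYI   # fallback if -r leaves it broken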
mircea_popescu: Framedragger for some reason root gets 50G and everything mysql has is stored in there.
mircea_popescu: anyway, turns out the mysql files are stored on a tiny partition for some incomprehensible reason. fancy that wonder.
mircea_popescu: 39491524 /var/lib/mysql/trilema_blog/tril_posts.MYD << reported by du | 40439263232 Apr 6 11:12 tril_posts.MYD << reported by ls -l
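Whether that 40g is real allocated data or mostly a sparse hole can be read off by asking du for both numbers (GNU du assumed for --apparent-size):

    du -h --apparent-size /var/lib/mysql/trilema_blog/tril_posts.MYD   # byte length, i.e. what ls -l shows
    du -h                 /var/lib/mysql/trilema_blog/tril_posts.MYD   # blocks actually allocated on disk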
mircea_popescu: last i saw MySQL Disk Space 596.29 MB, so evidently mysql "repair table" corrupted the table to shit.
mircea_popescu: meh. in which we find trilema's posts data is 40g somehow.
mircea_popescu: 0 indication of what this is. will it take another 6 weeks ? "maybe". how do i tell ? what do i do ?