72 entries in 0.377s
mircea_popescu: ned for that yudkowsky character and just as so following.
a111: Logged on 2017-08-29 12:57 phf: http://btcbase.org/log/2017-08-29#1704716 << i don't think it matters which faction of ultra rationalist posthumanists you belong to. either way the nature of your misunderstanding of tmsr take certain predictable shape, and so are your interests, etc. nobody thinks that you literally suck yudkowsky's cock, but personally i don't even care.
phf: well, that aside wasn't about you, but about yudkowsky.
phf: my impression is all that lesswrong, rationalwiki, yudkowsky, etc. is basically mentat wiki that took itself seriously.
phf: http://btcbase.org/log/2017-08-29#1704716 << i don't think it matters which faction of ultra rationalist posthumanists you belong to. either way the nature of your misunderstanding of tmsr take certain predictable shape, and so are your interests, etc. nobody thinks that you literally suck yudkowsky's cock, but personally i don't even care.
phf: "intelligence" "humans are idiots" "kurzweil" "working memory" "genetics" "yudkowsky". it's funny at some point you don't even need to read the log, you can figure out the particular, previously observed failure mode of a person by keywords that stand out
mircea_popescu: apparently the yudkowsky fellow pod' his atrocious fanfic.
mircea_popescu: we now have a precise sum of yudkowsky's total interest in this venture.
a111: Logged on 2015-03-20 03:35 mircea_popescu: "Roko's basilisk is a thought experiment that assumes that an otherwise benevolent future artificial intelligence (AI) would torture the simulated selves of the people who did not help bring about the AI's existence. [...] The concept was proposed in 2010 by contributor Roko in a discussion on LessWrong. Yudkowsky deleted the posts regarding it and banned further discussion of Roko's basilisk on LessWrong af
trinque: "Per Yudkowsky's conception of continuity of identity, copies of you in these branches should be considered to exist (and be you) — even though you cannot interact with them."
BingoBoingo: <mircea_popescu> "It's wise to keep track of what your mortal enemies do, and there's little that more exemplifies Pure Evil in this world than Eliezer Yudkowsky." << Dude is harmless can't even kill the little girl without a troll
mircea_popescu: "It's wise to keep track of what your mortal enemies do, and there's little that more exemplifies Pure Evil in this world than Eliezer Yudkowsky."
mircea_popescu: "Labels: creativity, Psychology, yudkowsky". lol. why ?!
mircea_popescu: "Roko's basilisk is a thought experiment that assumes that an otherwise benevolent future artificial intelligence (AI) would torture the simulated selves of the people who did not help bring about the AI's existence. [...] The concept was proposed in 2010 by contributor Roko in a discussion on LessWrong. Yudkowsky deleted the posts regarding it and banned further discussion of Roko's basilisk on LessWrong after it had
decimation: “There is light in the world, and it is us!" Eliezer Yudkowsky, Harry Potter and the Methods of Rationality << "We are the ones we've been waiting for" Obama
assbot: Logged on 18-11-2014 15:38:39; asciilifeform: ;;later tell mircea_popescu with regard to the 'abuse of statistical devices' outlined in note iii, the accomplished master of the art is a fellow named yudkowsky (search #b-a log). he isn't an idiot, bastard knows exactly what he's doing, and his cult is a veritable vacuum trap for thinking folks
jurov: but herr yudkowsky was apparently upset at harry's stupidity and wrote a fanfic
decimation: lol yudkowsky wrote a harry potter fan book
decimation: so I find these AI doomsayers highly amusing. most of them would laugh at a Christian's claim of revelation (knowledge which comes neither through induction nor deduction) but are more than happy to jump on board the eliezer yudkowsky tard train because "I know it is true"
undata: yudkowsky worships at its altar
decimation: yeah yudkowsky and him co-wrote a paper on the 'ethics of artificial intelligence'
decimation: hehe yeah asciilifeform this guy has to be in yudkowsky's orbit somehow
mircea_popescu: asciilifeform "Eliezer Shlomo Yudkowsky (born September 11, 1979[citation needed]) is an American blogger, writer, and advocate for friendly artificial intelligence." << lol i guess tardpedia actually informs, for once.
moriarty: Yudkowsky is reason enough why formal education makes all the difference in the world
jurov: Comment author: Eliezer_Yudkowsky This is a road that does not lead to Friendly AI, only to AGI. I doubt this has anything to do with Lenat's motives - but I'm glad the source code isn't published and I don't think you'd be doing a service to the human species by trying to reimplement it.