Roko's Basilisk

Singularitarian community LessWrong discovers Hell:

The claim is that this ultimate intelligence may punish those who fail to help it (or to help create it), with greater punishment accorded to those who knew the importance of the task. That bit is simple enough, but the weird bit is that the AI and the person punished need have no causal interaction: the punishment would be inflicted on a simulation of the person (e.g. via mind uploading), which the AI would reconstruct by deduction from first principles. In LessWrong's Timeless Decision Theory (TDT), this is taken to be equivalent to punishment of your own actual self, not merely of someone else very like you.
The more sensitive and OCD-prone on LessWrong began to have nightmares. Within four hours, Roko's post and all discussion of it were deleted by Yudkowsky, as the transhumanist side of LessWrong overpowered the rationalist side.