atx crypto club

riot

#riot

Anon Ymous

Thu Jan 6 07:08:27 2022
(*0bc15a9d*):: +public!

*** Defense against discourse – LessWrong
*** So, some writer named Cathy O’Neil wrote about futurists’ opinions about AI risk. This piece focused on futurists as social groups with different incentives, and didn’t really engage with the content of their arguments. Instead, she points out considerations like this:
*** First up: the people who believe in the singularity and are not worried about it. […] These futurists are ready and willing to install hardware in their brains because, as they are mostly young or middle-age white men, they have never been oppressed.
*** She doesn’t engage with the content of their arguments about the future. I used to find this sort of thing inexplicable and annoying. Now I just find it sad but reasonable. O’Neil is operating under the assumption that the denotative content of the futurists’ arguments is not relevant, except insofar as it affects the enactive content of their speech. In other words, their ideology is part of a process of coalition formation, and taking it seriously is for suckers.
*** AI AND AD HOMINEM
*** Scott Alexander of Slate Star Codex recently complained about O’Neil’s writing:
*** It purports to explain what we should think about the future, but never makes a real argument for it. It starts by suggesting there are two important axes on which futurists can differ: optimism vs. pessimism, and belief in a singularity. So you can end up with utopian singularitarians, dystopian singularitarians, utopian incrementalists, and dystopian incrementalists. We know the first three groups are wrong, because many of their members are “young or middle-age white men” who “have never been oppressed”. On the other hand, the last group contains “majority women, gay men, and people of color”. Therefore, the last group is right, there will be no singularity, and the future will be bad. […] The author never even begins to give any argument about why the future will be good or bad, or why a singularity might or might not happen. I’m not sure she even realizes this is an option, or the sort
*** lesswrong.com
