I finally got around to starting Charles Stross's Accelerando this week. It's something of a Rosetta Stone for AI accelerationists and the general TESCREAL crowd, and a compelling read even if the words I just said mean nothing to you. It is also, like most of the e/acc canon, deeply unsettling to read and realize how many people are trying to build the Torment Nexus depicted therein. In their defense, Accelerando makes that Torment Nexus sound exceptionally cool; there is very little consideration of the actual debates around posthumanism and AI personhood. Our hero, Manfred Macx, is smarter and better than everyone around him, in large part by virtue of his complete willingness to digitally augment his own consciousness. I'm only through the first part of the book, so I'll withhold judgment for now, but thus far it seems to take as a prior that AI will achieve consciousness, that this is a necessarily good thing, and that those of us who have reservations about committing to the new flesh will be hopelessly left in the dust.
I think, over the past three years, I have recommended Peter Watts' Blindsight more than anything else. It's an easier sell than trying to get people to read Searle or Quine. Blindsight does a better job of explaining intelligence vs. consciousness than any book I've ever read, and is even more densely packed with ideas than Accelerando is, if such a thing is possible. I bring it up because the two schools of thought about AI and the singularity seem to map pretty nicely onto the two books. Roughly, if you think [gestures vaguely at everything] is good, you're probably an Accelerando person. If not, you're probably a Blindsight person.
Ironically, the theses of both books are pretty similar. Consciousness as a special property of humans doesn't really matter from an evolutionary view - to Stross, it's an afterthought, while for Watts, it's an actively maladaptive trait. The difference shows more in what they do with this idea. For Stross, the move past base-human consciousness is fundamentally an exciting one. Posthumanism will make us harder, better, faster, stronger - as AI takes over the physical world, we'll become digital gods, uploading our consciousnesses and accelerating our own mental processes to the point that we're unrecognizable. A collaborative future with intelligences we've built ourselves.
Watts, on the other hand, seems more concerned with what we lose once we can see consciousness for what it is. Once we recognize that consciousness might make us easy prey for beings that are intelligent but not conscious, it's hard to see a path forward for the race. Watts sees only death for our species, slow or fast, once we start to push up against the limits of our tragically self-aware minds. We're the only species with a death drive, and our final end will be exercising that drive on a planetary scale.
Neither of them is necessarily wrong. They might even be making the same point. Consciousness - the sense that there is something it is like to be a thing - would seem to have very little bearing on intelligence/sapience, if we assume that AI is not conscious. AI exhibits behavior consistent with reasoning, regardless of whether or not it's a Chinese Room. And since, as far as we can tell, it can't get sad or angry or self-sabotaging, it might well be superior to humans in a homo economicus sense in the very near term.
If your primary concern is either value maximization or finding some way to transcend the limits of the human species, this is a pretty exciting thing! This is the appeal of the Strossian viewpoint. Reading Accelerando has certainly made me feel like I should be doing more to avoid becoming substrate - if I can become a higher-agency person, build my own agent swarm, surf the crest of the bleeding edge, maybe I'll, I don't know, matter? Maybe I'll no longer feel buffeted by the tides of history?
Unfortunately, I am burdened by both a sense of self-preservation for my self as it is now and a serious concern for people as they currently exist. Accelerationism requires a pretty cavalier attitude towards both: either justifying the suffering of many in the near term for some anticipated utility in the long term, or handwaving it away with the assumption that once we reach the singularity we'll be able to solve every problem. It is kind of wild to me that so many people pushing the gas have a p(doom) in the double digits - if I wouldn't take those odds on a hundred dollar bet then I certainly wouldn't take them on a gamble for the survival of the species.
Stross mentions in passing the economic collapses, pain, and madness that come with society going through the transition, but these are mostly fun flavor for his cool cyberpunk world, not real concerns for ubermenschen like Manfred Macx. Pain is something that happens to other people, the type of subhuman monster that doesn't know the difference between C and C++.
Meanwhile, Watts makes it very clear to the reader exactly how hellish his posthuman world is. The closest thing to a chance for survival is a digital heaven people upload themselves to in order to escape their collapsing world: this is not framed as a triumph, but rather a form of surrender.
I'll come back to this once I've finished Accelerando. Maybe the latter half of the book grapples more seriously with the implications of posthumanity. And to be clear - it's a good book! It just has a certain glibness about human suffering that seems to be a common trait among books beloved by tech people, which gives me a great deal of pause.
For now, I'm just wondering. Are the worlds Stross and Watts created all that different? Or is it just a matter of where they focus their lens?