AI - Raised stakes
I have recently read the two articles Cognitive Orthoses: Toward Human-Centered AI by Kenneth M. Ford, Patrick J. Hayes, Clark Glymour, James Allen and The AI Revolution: The Road to Superintelligence by Tim Urban as a study assignment. In this blog post, I will note down my thoughts and questions about these articles.
Both articles generally try to answer the question: “Will AI help humanity as a whole? Or be its doom?” The first article takes the stance that AI, like so many other technological advancements, will enhance the lives of humans around the globe. Not as a replacement for humans, but as an assisting tool, hence the use of Orthoses in the title. This is very close to the ideas Garry Kasparov argues in his talks, like the one at WeAreDevelopers in Berlin this year, which I happened to attend. Kasparov, being one of the few people on the planet to be very directly “replaced” by an AI, has a lot of interesting thoughts on this matter. Just like the article, he sees AI as amplified intelligence rather than artificial intelligence.
I found the article’s comparison between the development of artificial flight and the current doubts about AI quite fascinating: from the initial belief that it was impossible, through people’s fears about its safety, to the aim of replicating results seen in nature without constraining itself by adhering to biology. This made me wonder: Might this apply to other ideas we consider “impossible” today?
And with that, we smoothly arrive at the second article. In it, the author starts by imagining the shock a person from 1750 might feel if they were transported to the current day and shown how much has changed, how many “impossible” things had become reality. Giant buildings, cars, airplanes, modern media, smartphones: all things that people of 1750 would have thought impossible, if they could even imagine them. The article defines the amount of shock needed to kill the poor, involuntary time traveller as 1 DPU, a Die Progress Unit. (Which, by the way, is now my new favourite unit.)
While this is a funny thought experiment, I started wondering if we might be underestimating people from the past and our own human adaptability. It is undeniable that there have been huge technological achievements, and that they have arrived faster and faster throughout our history. But has human life really changed in a way that would confuse a time traveller to such a degree that they would just keel over and die? Assuming you would answer some of the traveller’s questions (which in itself assumes that you found some way to communicate), how would this play out after the initial confusion? “Where do I get food?”, they might ask. “Go into that building, get something to eat, give the person at the end this paper until they are satisfied.” - “Sleep?” - “You can stay at my place, go through this series of doors, lie down wherever.” - “Children?” - “Didn’t change.” - “Death?” - “Haven’t fixed that yet.” They might not understand how or why something works, but I’d argue they’d be perfectly fine.
All of this leads to the main point of the article: how the development of AI would impact our lives, from the current Artificial Narrow Intelligences, like the Orthoses mentioned in the first article that handle specific problems, via Artificial General Intelligences that would be on the same level as humans, through to an Artificial Superintelligence that performs better than any human brain (or even all of them together). This development would be driven by a self-improving AI, which would follow the same path as our own technological advancement, getting ever better and faster at improving itself, until it eventually outgrows us to such extreme extents that it would be like a god to us. One that could make humanity as a whole extinct, or make us immortal.
The first part of the article left a very alarmist taste in my mouth. It seemed like the author was exactly the kind of panic-driven person that the first article argues against. But to their credit, the second part pulled me back in.
I am not yet in a position to accurately judge which, if any, of the postulated futures are likely to happen. I hope that by the end of this semester, I will be able to do exactly that. As such, I will not spend much more time on these thoughts, but I will say that many of the ideas in this article seem very far-fetched. I also dislike a few of the assertions made in the article, like “Humans get over things, not computers”. By the very idea that AI would surpass us to such extremes, it is impossible for us to say what an AI might be capable of. But, as mentioned, I might be completely wrong about this.
But what I am pretty certain about is our adaptability. Unless we develop a full-on Skynet-like murder-AI that directly wants to eradicate us, I reckon we will be just fine.
“Food?” - “Just say whatever out loud.” - “Sleep?” - “Wherever, but no need to.” - “Death?” - “Not a thing anymore.” - “Cool.”