The four poles of understanding AI, from optimism to doom

Sometimes I think there are two big divides in the world of artificial intelligence. One, of course, is over whether the researchers building advanced AI systems, for everything from medicine to science, will end up bringing about a catastrophe.

But the other, possibly more important, question is whether artificial intelligence is a big deal at all, or just another ultimately trivial piece of technology that we've somehow developed a societal obsession with. So we have some improved chatbots, the skeptical view goes. That won't end the world, but it won't make it much better either.

One comparison I sometimes see is with cryptocurrency. A few years ago, many people in the tech world were convinced that decentralized currencies would fundamentally change the world we live in. That largely hasn't happened, because it turns out that many of the things people care about, such as fraud prevention and ease of use, actually depend on the very centralization that crypto was supposed to break down.

When Silicon Valley says its latest obsession is the biggest deal in the history of the world, the right response is generally healthy skepticism. The obsession of the moment could end up being the basis for some cool new businesses, it could contribute to changes in how we work and live, and it will almost certainly make some people very wealthy. But most new technologies don't have anywhere near the transformative impact on the world that their proponents claim.

I don't think AI will be the next cryptocurrency. Technologies like ChatGPT, built on large language models, have seen adoption far faster than cryptocurrencies ever did, and they stand to replace and change far more jobs. The progress in this area over the last five years has been shocking. Nevertheless, I want to do justice to the skeptical perspective here: most of the time, when we're told something is a tremendously big deal, it really isn't.

Four quadrants of thinking about AI

With that in mind, the range of attitudes toward artificial intelligence can be divided into four broad categories.

There are people who believe that extremely powerful AI is on the horizon and will transform our world. Some of them are convinced that this transformation will be a very, very good thing.

“Every child will have an AI teacher who is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful,” Marc Andreessen wrote in a recent blog post.

Every scientist will have an AI assistant/collaborator/partner that will greatly expand the scope of their scientific research and achievement. Every artist, every engineer, every businessperson, every doctor, every nurse will have the same in their world. …

AI is quite possibly the most important – and best – thing our civilization has ever created, certainly on par with electricity and microchips, and probably beyond them. …

Far from being a risk we should fear, the development and spread of AI is a moral obligation that we have to ourselves, to our children, and to our future.

We should be living in a much better world with AI, and now we can be.

Let's call that the "It's going to be big and it's going to be good" corner of the spectrum. Compare that to, say, AI Impacts' Katja Grace, whose recent survey found that half of machine learning researchers say there's a substantial chance AI will lead to human extinction. "Advances in AI could lead to the creation of superhumanly intelligent artificial 'people' with goals contrary to the interests of humanity – and the ability to pursue them autonomously," she recently wrote in Time.

(Somewhere in the middle you might place AI pioneer Yoshua Bengio, who has argued, "Unless breakthroughs are made in AI alignment research … we don't have strong guarantees of safety. What remains unknown is the severity of the harm that might result from a misalignment (and it depends on the specifics of the misalignment).")

Then there's the "AI isn't going to change our world much – all that superintelligence stuff is nonsense – but it's still going to be bad" quadrant. "It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a 'prosperous' or 'potentially catastrophic' future," several AI ethics researchers wrote in response to the Future of Life Institute's recent open letter, which called for a pause on training extremely powerful systems. These superintelligence skeptics argued that focusing on the most extreme, existential outcomes of AI distracts us from the worker exploitation and bias that today's technology already makes possible.

Finally, there's the "AI won't change our world much – all that superintelligence stuff is nonsense – but it will be good" quadrant, which includes a lot of the people working on AI tools for programmers. Many of the people I speak to in this corner think that concerns about superintelligence, and concerns about bias and worker exploitation alike, are overblown. AI will be like most other technologies: fine if we put it to good use, which most of the time we will.

Talking past each other

In conversations about AI, it often feels like we're talking past one another, and I think the four-quadrant picture above makes it clearer why. The people who believe that AI could be an earth-shatteringly big deal have a lot to talk about with each other.

If AI really will be a great force for good, augmenting human strengths and vastly improving every aspect of our way of life, then delaying it to address safety concerns carries a real cost: millions of people who could have benefited from its advances will instead suffer and die unnecessarily. Those who believe that AI development poses major, world-altering risks must demonstrate to the optimists that those risks are serious enough to justify the truly enormous cost of slowing down such a powerful technology. If AI is a world-changing big deal, then the high-level societal discussion we want to have is about how best to get safely to the point where it changes the world for the better.

But many people aren't convinced that AI will be a big deal at all, and they find the debate over whether to speed it up or slow it down confusing. From their point of view, there is nothing world-changing on the horizon, and the real question is simply whether to aggressively regulate current AI systems (if they're mostly bad and our main aim is to limit their use) or leave current AI systems alone (if they're mostly good and we mainly want to encourage their use).

And they are baffled when others respond with plans aimed at safely managing superintelligent systems. Andreessen's claims about the enormous potential of AI do not address their concerns, and neither does Grace's argument that we should steer clear of an AI arms race that could kill us all.

In my opinion, for the societal conversation about AI to go well, everyone could use a little more uncertainty. With AI progressing as quickly as it is, it's genuinely hard to be sure what is, and isn't, possible. We are deeply confused about why our current techniques have worked as well as they have, and about how long we will keep seeing improvements. What breakthroughs are on the horizon remains anyone's guess. Andreessen's glorious utopia seems like a real possibility to me. So does total catastrophe. And so does a relatively mundane decade that passes without any major new breakthroughs.

And all of us might find that, as we acknowledge a little more how confusing and uncertain the terrain we're treading in AI really is, we end up talking at cross purposes a little less.
