Kevin Drum, the liberal Mother Jones blogger, had a smart essay in 2013 about artificial intelligence and its impact on the economy. One of his points, made with this graphic, is that exponential increases in computing power mean that progress in artificial intelligence could easily go from seemingly lagging behind expectations to abruptly matching human capacities in a matter of months.
This model of artificial intelligence’s progress was called “the second half of the chessboard” by Erik Brynjolfsson and Andrew McAfee in their book Race Against the Machine, after the old story in which a king promises to place one grain of rice on the first square of a chessboard, two grains on the second, four on the third and so on, until he bankrupts the kingdom by the time he gets to the last square, which should have nine quintillion (2^63) grains of rice on it. Exponential growth is a doozy once it gets going.
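The chessboard arithmetic is easy to check directly; here is a quick sketch of the doubling:

```python
# One grain on square 1, doubling on each of the 64 squares.
grains_on_square = [2**i for i in range(64)]  # square 1 holds 2**0 = 1 grain

last_square = grains_on_square[-1]  # 2**63, roughly nine quintillion
total = sum(grains_on_square)       # 2**64 - 1 grains across the whole board

print(f"last square: {last_square:,}")  # 9,223,372,036,854,775,808
print(f"whole board: {total:,}")        # 18,446,744,073,709,551,615
```

The first half of the board tops out at a few billion grains, which a kingdom could plausibly pay; the second half is where the numbers become absurd, which is the point of the metaphor.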
The thing is, this seems to be happening. A few years ago, voice recognition and automated dictation suddenly went from poorly functioning to almost perfect, for example, and every day brings another announcement of algorithmic prowess at one task or another, matching or exceeding human capabilities in domain after domain.
One question almost everybody has about AI is which jobs it will affect first and last: what are we naturally better at than computers, and where do they have an immediate edge? Something like being a waiter at a high-end restaurant is probably a good candidate for a “human-leaning” skillset, since it involves both social interaction and moving around a complex and changing three-dimensional environment. Same with toddler day care, perhaps. But my guess is that the last thing AI will replace isn’t any particular activity or skill. It’s who can get blamed for things.
Blame will remain a valuable commodity even if AI utterly surpasses humans at everything else. Things will still go wrong, and people will still be upset with how things go. This is particularly true if many of the ways that AI works are ones we’d rather not know about, for example by violating people’s privacy or relying on information or generalizations that companies or governments would rather not endorse publicly but find useful algorithmically.
If this is right, there’ll be an odd period (before Skynet really gets going, that is) where most of our highest-status people are spending all their time apologizing for screw-ups. So maybe these last few weeks are a preview of things to come.