Those of you who’ve seen the movie “The Imitation Game” are at least vaguely familiar with Alan Turing – the British mathematician, cryptanalyst and all-around SUPER genius who was instrumental in engineering a “bombe” that was eventually capable of cracking Nazi communication ciphers (generated by their “Enigma machine”) during WWII. (Benedict Cumberbatch gives an outstanding performance as Turing in the movie, by the way…I don’t know how historically accurate the movie is with respect to Turing’s supposed Aspergian traits but, regardless, it is worth watching.)
Those of you who are a bit more on the “geeky” side (like me) might also know that Turing is considered by many to be the “father” of modern computing and Artificial Intelligence (AI). It’s that latter bit of Turing genius that I want to focus on in this post, i.e., AI.
Sliding still further UP on the “geek scale” is a notion that Turing put forth in a paper called “Computing Machinery and Intelligence” (1950), where he posits that the true test of AI would be its indistinguishability from humans in performing a given task. The task most commonly associated with this “Turing test” is a natural language conversation involving an AI, a competing human, and a human “judge” who cannot see either the AI or the competing human. If the judge cannot effectively tell the difference between the competing human and the AI, then the AI has in fact “passed” the Turing test. (By the way, you get UBER-GEEK bonus points if you knew that CAPTCHAs are a form of “reverse Turing test!”)
It’s worthwhile to quote Turing’s actual words here: “Are there imaginable digital computers which would do well in the imitation game?” This “imitation game” is, ironically, where the movie derived its name, though the movie really has nothing to do with AI.
“What’s any of this geekery (I’ll have to see if I can hashtag that on Twitter) have to do with me?” you ask.
Well, it has everything to do with you if you believe any/some/all of the predictions – made by the many scientists/technologists currently working to make AI a reality – about what will happen to humanity once this threshold has been met. Let’s skip the Armageddon-like scenarios that the movies have predicted for many, many years with the advent of AI (“The Terminator”, “The Matrix” and – more recently – “Ex Machina” all come readily to mind but there are a TON more) and focus on some of the more likely (and practical) realities that will come along with AI:
- AI will likely have systemic effects on human employment that will probably make MANY of those humans unhappy…but, then again, this has always been true of ALL past technological revolutions going back to cavemen who lashed stones to sticks instead of just using one or the other to hunt, gather, etc. What may be different with this particular technological revolution, however, is the speed with which this technology is evolving (think Moore’s law here, though there are a great many other technical advances beyond just the doubling of transistors on an integrated circuit in play here). In the past, technological change has been a bit “slower” to say the least.
- Ironically, AI may actually make life better for those at the opposite ends of the wealth spectrum and possibly widen the “wealth gap” even further than it has already been widened in recent decades. The logic here is that “low-skilled” tasks will still be hard for AI/robots to handle because of the rather non-repetitive nature of those tasks (think house cleaners, gardeners, etc.) and thus these types of jobs would be “safe” from “robo-intrusion”; the same would be true of “highly-skilled” tasks which require advanced education/knowledge and some amount of creative problem-solving skills in order to be successful (think politicians, highly-specialized medical staff, rocket scientists, etc.). Even some current billionaires fear that the consequence of this increasing wealth gap will be real class warfare complete with violence, government overthrow and the like.
- The purveyors of AI will likely target repetitive jobs/tasks where inputs and outputs are largely known and/or invariable, thus squeezing out many who currently do these jobs and consider themselves to be part of the “middle class.” Relatively simplistic present-day AI examples here include Intuit’s TurboTax, Apple’s Siri and Google’s self-driving car. (If you’re interested in the many pros and cons of AI, then take a look at some of the links that I will provide at the bottom of this post.)
So are the robots coming for our jobs? The short answer is “yes”…the extended answer is that it depends on how long we expect specific jobs to remain viable in the future. Put another way: ALL JOBS have a “shelf life.”
Don’t believe me? Answer me this: How many buggy whip makers do you know??? (Apparently, in the late 1800s – just 120-ish years ago – there were about 13,000 businesses in the buggy/wagon industry!) Remember Kodak? (Yeah, alright…technically, they’re still around but really who cares?) How about Palm, or Blockbuster or Hostess (“Twinkies,” baby…TWINKIES!!!)? I think we are all going to have to get used to the idea (like it or not) that a great many jobs now and in the future will have shrinking shelf lives. (Unlike Twinkies, ironically.)
Is this shrinking shelf life of jobs a good thing or a bad thing? I think it’s too early to say, though that hasn’t stopped a boat-load of brainy people from weighing in on the topic, including Stephen Hawking, Elon Musk and Marc Andreessen, to name just a few. That said, call me an optimist, but I just don’t think that humanity is quite “down for the count” yet as some believe. I’m not saying that jobs won’t be lost over the next 50 to 100 years or that there won’t be other painful results from the advent of AI…I’m just saying that I don’t think that SIX MILLION years of evolution (FIFTEEN MILLION if you count the introduction of the “Great apes” on Earth) go down the tubes in the span of a couple of hundred years because we are monkeying around with machines that can “think”. Call me crazy.
Bringing this post back around to the “Turing test” for AI that I began it with…I propose that we replace this test altogether. How will we REALLY know that our species is in danger of “death by AI?” When robots start marketing/advertising to OTHER robots! Yeah, I’m pretty sure that’s the point that we’ll know that we can stick a fork in ourselves ‘cuz we’re done. We should take some comfort, though, in the knowledge that Twinkies (and likely cockroaches, though, we can’t take credit for those) should outlast BOTH humanity and AI. In your face, robots!!!
(Further reading/listening/viewing: Humans Need Not Apply by Jerry Kaplan; The Second Machine Age by Erik Brynjolfsson and Andrew McAfee; At the Edge of Uncertainty by Michael Brooks; “Why AI could destroy more jobs than it creates, and how to save them” by Nick Heath; “Now, Even Artificial Intelligence Gurus Fret That AI Will Steal Our Jobs” by Robert Hof; “Are droids taking our jobs?” TED talk by Andrew McAfee; “Timeline of human evolution” via Wikipedia.)
[Originally published on LinkedIn.]