The Cognitive Apocalypse: Could AI Destroy What It Means to Be Human?

You are sitting in a dimly lit restaurant. A piano plays softly under the hushed conversations of other patrons. Your mind races with a mix of anticipation and anxiety as you await the arrival of your date. But it’s not just any date — it’s your first date with this person, and a blind date at that. “How do I look?” “How will they look?” “I hope we have things in common.” “Does my breath stink?”

Finally they arrive and you feel at ease — they seem to be just your type and perfectly attractive to you. You connect on many levels and talk for hours until the restaurant closes for the evening. But there is a catch. The dating app you used has a highly developed artificial intelligence system that can predict the “freshness” of your pending relationship in real time — how long will you two remain compatible with each other?

With that, it has set an “expiration date” on your budding relationship. When your date excuses themselves to use the restroom, you quickly glance at your smartphone to check.

“10 MINUTES REMAINING”

What? How can this be? Things are going splendidly! What could possibly happen to make you both lose interest in each other in the next 10 minutes?

Your date returns to the table and you decide to split a cab home. In the cab, they admit that they, too, checked the expiration date while in the restroom. Though neither of you understands why you aren’t destined for each other, you comply with the dating app and part ways. After all, its AI system boasts a (supposedly) dizzying accuracy, making correct relationship matches 99% of the time. It must know you better than you know yourself — right? Your next date with your next match will be even better. It just has to be…?

Understanding Decision Making

The scene described above is based on a recent episode of the Channel 4/Netflix sci-fi series “Black Mirror.” The show’s raison d’être is to portray how reliance on modern technology can impact humanity in (often) negative ways. As interest in AI grows and its development accelerates, it’s prudent that we take a moment to understand the ramifications of incorporating intelligent systems into our daily lives.

My professional career in market research was always focused on understanding the decisions consumers make in their daily lives. The most common way of addressing this was direct questioning: gathering the who, what, where, when, why, and how of decision making. Over time, I began to question these methodologies and the data they produced, suspecting significant bias, from self-selection to acquiescence. I turned my attention to “hard” data instead: transaction data, sensor data, log files, app data, and so on. Why ask someone a question and get a skewed response when you can simply observe the behaviors captured in the data they produce daily, hourly, or even by the minute?

As predictive analytics has become of greater interest to businesses, my interests and skill set have developed in step. Analyzing and detecting patterns in historic data yields valuable insights in its own right, but that same data can also be modeled to give probabilistic estimates of future behavior. Sure, you can ask a customer (among the few who even take the time to complete your survey) “how likely are you to purchase a refrigerator in the next 6 months” and get a “Very Likely” response that is neither useful nor actionable. Or you can model the characteristics and purchase patterns in your customers’ previous transactions to estimate that “Jane Doe has a 47% probability of purchasing a refrigerator in the next 6 months” — all without asking Jane a thing, and with a near-100% “response rate” across every customer in your CRM database.
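
To make that concrete, here is a minimal sketch of such a propensity model, framed as a logistic regression over invented customer features. The feature names, the simulated data, and “Jane Doe” herself are all placeholders for illustration, not an actual pipeline:

```python
# A minimal, illustrative propensity model. Nothing here is a real
# pipeline; the features, coefficients, and data are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical engineered features per customer
days_since_last = rng.integers(1, 2000, n)   # recency of last purchase
appliance_buys = rng.poisson(0.5, n)         # appliance purchases, last 5 yrs
avg_basket = rng.gamma(2.0, 40.0, n)         # average basket size (USD)
X = np.column_stack([days_since_last, appliance_buys, avg_basket])

# Simulate the label from the features so the model has signal to learn:
# 1 = bought a refrigerator within 6 months
logit = -1.5 + 0.8 * appliance_buys - 0.001 * days_since_last + 0.005 * avg_basket
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score any customer already on file -- no survey, near-100% "response rate"
jane = np.array([[120, 1, 85.0]])  # one hypothetical customer record
print(f"P(refrigerator in 6 months) = {model.predict_proba(jane)[0, 1]:.0%}")
```

The point is the shape of the output: a probability for every customer in the database, rather than a vague “Very Likely” from the handful who answered a survey.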

We are now slowly entering the world of prescriptive analytics, where optimal decisions can be made for us, often in real time, from quantitative and qualitative inputs within a given domain. This is the realm of big data, machine learning, and, more broadly, AI. AI carries the expectation of making our lives much easier by removing most of the procedural, monotonous decisions we make every day — all without the “nuisance” of a subjective, emotional “computer” (i.e. our human brains).
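
One way to picture the jump from predictive to prescriptive: a predicted probability only becomes a decision once costs and payoffs are attached. A toy expected-value rule, with every dollar figure invented for illustration:

```python
# Toy prescriptive step: a predicted probability becomes a decision
# by comparing expected values. All dollar figures are invented.
def recommend_offer(p_purchase: float,
                    margin: float = 150.0,   # profit if the offer converts
                    cost: float = 5.0) -> str:  # cost of making the offer
    """Recommend the action with the higher expected value."""
    expected_profit = p_purchase * margin - cost
    return "send promotion" if expected_profit > 0 else "hold"

print(recommend_offer(0.47))  # Jane Doe at 47%  -> "send promotion"
print(recommend_offer(0.02))  # low propensity   -> "hold"
```

Swap in a richer objective and constraints and the same logic scales from marketing offers to routing a car through an intersection.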

The autonomous vehicle is a shining example. The Occupational Safety and Health Administration (OSHA) reports that drivers make approximately 200 decisions per mile — any one of which could lead to an accident, injury, or fatality. And that says nothing of fatigued, distracted, impaired, or aggressive driving, all of which compound the difficulty of decision making. AI does not suffer from these drawbacks. The situation is analyzed, the options are weighed, and the best decision is made every time. Or is it?

Turn On, Tune In, Drop Out

People often think of the “Robot” or “AI Apocalypse” in terms of machines gaining sentience, coming to life, self-replicating, and enslaving humankind. Or less grimly, just taking away all our jobs and giving us no means of creating value for financial stability. Yet I propose that we aren’t considering another type of AI end-of-days: the cognitive apocalypse.

A discussion I’ve had with friends and colleagues several times focuses on which jobs will be replaced by AI systems and which ones never will be. My position is that very few jobs, if any, will be completely safe from automation on a long enough timeline. One opposing viewpoint often cites medicine as a profession forever safe from these technological advances, but even here I tend to disagree: eventually AI will outperform human caregivers in accurate diagnosis, treatment design, and more. In fact, machine learning algorithms are already making strides in this direction today.

So to me it isn’t really a question of whether AI will proliferate throughout our lives, but how long it will take for us to accept it as a better alternative to the human minds it’s effectively replacing.

This is really the crux of the matter. Long-term acceptance leads to heedlessness. On a massive scale, that heedlessness can inspire the herd mentality humankind has repeatedly shown it falls victim to. We begin to accept the decisions of our AI assistants willingly and without question, because we accept that we are error-prone, emotional beings who make error-prone, emotional decisions… and besides, all our friends and family are putting their faith in AI, so why shouldn’t we?

It shouldn’t be forgotten, though, that an AI system’s base architecture (i.e. the infrastructure set in place for future learning and intelligence development — think of designing a neural network in Python/Keras) is still designed by humans, and those human choices can bake in errors in machine reasoning that persist long past initialization. AI is not without its flaws, and it may still fall short of the human mind, as I wrote in a previous post discussing DIY research tools and the Turing Test.
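
That parenthetical is worth making concrete. Below is a minimal Keras sketch (all layer sizes and shapes are arbitrary placeholders) showing how much of a network’s “base architecture” is fixed by human judgment before a single example is ever learned:

```python
# A minimal Keras sketch. Every structural choice below -- input width,
# number of layers, layer sizes, activations -- is a human decision made
# *before* any training happens. All sizes are arbitrary placeholders.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(32,)),                # human choice: feature width
    layers.Dense(64, activation="relu"),     # human choice: depth and width
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # human choice: output framing
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()  # the "base architecture" the system is stuck with
```

A biased or mistaken choice at this stage shapes everything the system later learns, no matter how much data flows through it.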

Fear of a Blank Planet

What does a planet look like where humans have put their full trust in AI systems to make their decisions for them? Is it something akin to the oblivious humans of Disney-Pixar’s WALL-E?

Where do we draw the line in incorporating AI? Dating apps like the one in our Black Mirror episode? Refrigerators that tell us when (and what) to eat? Alarm clocks that tell us the optimal time for copulation with our spouses to achieve fertilization — or, a step further, the same devices distributed by government agencies for population control?

So maybe the robot apocalypse that science fiction has illustrated for us won’t be a matter of man versus machine enmeshed in warfare, but rather the slow devolution of the human brain as we rely on it less and less, its abilities superseded by the AI we designed centuries in the past with our still-cognizant minds.

Sound far-fetched? Not on a long enough timeline…