It’s a topic that has been explored countless times in the annals of science fiction…human dominance subsumed by ever more intelligent machines. Mankind, in an attempt to create the ultimate life of ease where most tasks are carried out by untiring automatons, inadvertently creates its own ironic extinction. The signs are everywhere…Google’s Self-Driving Car, fully-automated factories, and of course human-like robots (replicants?) designed for a variety of uses. These machines, cyborgs, or whatever you decide to classify them as have been developing at a rapid pace within a timescale that is minute compared to the entire history of computing. Some may (hopefully misguidedly) suggest that we are headed toward a Technological Singularity, in which we’ve created artificial intelligence (AI) so powerful that it is capable not only of improving its own faculties but of replicating itself in an endless series of iterations, each more intelligent than the last. Essentially, the prediction boils down to:
“Dr. Frankenstein, we’ve created a monster.”
Admittedly, it is a topic that interests me and one that I’ve spent many hours learning about. No, not learning how to build a robot (although that does sound like an exceptional idea for keeping my home clean and cooking) but rather understanding and applying some of the techniques that make AI so powerful and sophisticated. At a rudimentary level, this falls into the machine learning/predictive modeling space. Working within the research and data analysis industry, I find this topic very applicable; especially with the advent and proliferation of new “self-service” MR tools aimed at reducing, if not fully eliminating, the need for research professionals. Many of these new tools incorporate algorithmic (or at least automated) approaches to answering traditional MR questions. Purveyors of these tools include Survey Monkey, Google Customer Surveys, Microsoft Pulse, Qualtrics, Zappistore, and Qlik, to name a few. This creates an environment that is attractive to client-side researchers looking to in-source some of the projects traditionally given to agencies. After all, the axiom “time is money” still holds true today. Who has time to run a research study that takes months to answer today’s pressing questions? Who needs professional researchers if the tools will answer all our questions in an easy point-and-click/drag-and-drop manner? Not to mention, look at those beautiful visualizations! To say nothing of the potential cost savings…
I’m not here to bash these tools by any means. I’ve used a few of them myself and can attest that they do, indeed, streamline a lot of the heavy lifting inherent in research studies. However, the ease of use and oftentimes “flashy” UX can create a false sense of understanding, yielding insights that may not be totally reflective of reality. We need to remember that many of these tools are targeted at marketing professionals, not necessarily research professionals; these individuals may not be as inclined to “look under the hood” and may simply accept output at face value. The next-generation MR professional could be a Siri-, Alexa-, or Cortana-like AI distributed in a future version of one of these tools! Woeful is the obsolete research professional!
Not so fast.
(Before moving forward, I think it is best to tell the reader that my argument tends to wax philosophical and push the envelope on hyperbole. Nevertheless, I think it presents foundational logic and a great analogy for why these tools, or any system intent on replacing humans with AI, will never fully replicate a human research practitioner. Also, please let me know if I’ve misrepresented any of these concepts. I’m far from the intellectual level of these giants!)
The Limitations of AI
Alan Turing is most widely known for his successful efforts to break Nazi Germany’s ENIGMA code during World War II. He is also considered the “father of theoretical computer science and artificial intelligence.” Turing devised a “test” for assessing the “intelligence” of an AI system, now referred to as the Turing Test. The idea is simple: an AI system can be considered “intelligent” if a human, engaging the AI system in regular conversation, cannot tell whether they are conversing with a fellow human or a computer (granted, the respondents are hidden from view, so the judge cannot simply look at who is answering). One would infer that, with the regular advances in technology, this surely cannot be too far off in the future. The concept has entered the mainstream and even pushes the question of whether AI will someday fulfill human constructs like love.
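Turing’s setup can be sketched in a few lines of code. This is only a toy illustration of the protocol; the `judge`, `human`, and `machine` callables below are hypothetical stand-ins for the real conversational participants.

```python
import random

def imitation_game(judge, human, machine, rounds=20):
    """Toy version of Turing's imitation game.

    Each round, a hidden respondent (human or machine) answers a question,
    and the judge guesses which it was. The machine "passes" when the
    judge's accuracy is no better than chance (about 0.5).
    """
    correct = 0
    question = "What did you have for breakfast?"
    for _ in range(rounds):
        # The judge never sees who is answering, only the text of the reply.
        respondent, label = random.choice([(human, "human"), (machine, "machine")])
        if judge(question, respondent(question)) == label:
            correct += 1
    return correct / rounds
```

A judge who spots a telltale phrase identifies the machine every time; a machine whose answers are indistinguishable from a human’s drives the judge’s score back toward coin-flip territory.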
So this presents the question: can a sufficiently complex system (which we will assume is a collection of algorithms) be created to simulate a human mind? Is such an amalgamation of algorithms even a simulation anymore, or should it be considered a mind itself, perhaps even imbued with consciousness? Proponents of what is called “strong AI” would say that yes, indeed, a human mind is nothing more than an unimaginably complex set of algorithms and we are simply biological computers. Yet other circles would deny that human thought/consciousness can ever be mapped to an algorithmic set. I, for one, tend to agree with the latter.
Here is where we get slightly technical/philosophical/etc…
Kurt Gödel, renowned mathematician, logician, and contemporary of Albert Einstein, presented two incompleteness theorems that are somewhat difficult to grasp at first, but once you get the gist, they begin to make sense. In a nutshell, the first theorem can be summarized by a statement like:
“This statement is unprovable.”
From a logical deduction perspective, there is no way to show this sentence to be either “false” or “true” within the system: a paradox for logical reasoning. If the system could prove the sentence “true,” then a provable statement would be asserting its own unprovability, a contradiction. Nor can it prove the sentence “false” without a similar contradiction, since it would then be disproving a statement that, by that very fact, turns out to be true. Yet we can see that there is “truth” to this statement even though there is no formal way of proving it within the system! We, as humans, are capable of observing the system from the outside in order to arrive at the “truthiness” of the statement. This extends to any formal system powerful enough to express arithmetic, which means the algorithms of our “sufficiently complex system” will always be incomplete in some sense.
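The bind Gödel identified can be mimicked in a toy form. Below, a “formal system” is just the set of statements it can prove, and the string `"G"` stands in for the self-referential sentence; the names and encoding are purely illustrative and are nothing like Gödel’s actual arithmetization.

```python
def classify(provable):
    """Toy Gödel dichotomy: G asserts "G is not in `provable`".

    Whatever set of provable statements you hand the system, it is
    forced into one of two bad outcomes with respect to G.
    """
    g = "G"
    g_is_provable = g in provable
    g_is_true = g not in provable  # G asserts exactly its own unprovability
    if g_is_provable and not g_is_true:
        return "inconsistent"  # the system proves a false statement
    if g_is_true and not g_is_provable:
        return "incomplete"    # a true statement the system cannot prove
```

Because the two conditions are exact complements, one of them always fires: no choice of `provable` escapes both outcomes.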
So let’s extend this idea into the future of MR DIY tools. Say it is the year 2266 (bonus points if you get why I chose that year) and the Skynet Corporation has just developed and released the most sophisticated DIY MR system to date. The system promises to give absolutely definitive answers to any research question you present, almost instantaneously, by utilizing its powerful AI. The AI is composed of a vast array of algorithms for computing solutions to any MR-related business problem. In this sense, the system can be considered complete: it is an all-knowing, all-powerful AI (with a soothing voice that converses with you in natural language). Turing Test…passed?
Yet what happens when it is fed a paradoxical statement like the one above (granted, one with an MR tinge to it)? How can a system, even one such as ours with a complete set of algorithms for performing any necessary calculation, answer it? The resolution can only come from outside the AI’s system of algorithms; it requires either yet another set of algorithms, or a decidedly human mind!
Another way to look at this is by referencing Georg Cantor, himself a renowned mathematician and theorist. Cantor showed, via his famous “diagonal argument,” that no list of algorithms can ever be entirely complete. Represent each algorithm as an infinite binary vector (i.e., “0000…”, “0001…”, “0010…” and so on, with the ellipsis symbolizing that the digits continue forever). Even an infinite list of such vectors will always be missing some vector/algorithm! How does this make sense? How can an infinite list of infinitely long binary vectors not contain every possible vector? Here is how: take the diagonal of the list (the first digit of the first vector, the second digit of the second vector, and so on) and switch each value to its opposite. You are left with a new vector/algorithm of 1s and 0s, one that is not part of the infinite list! You can see this by realizing that the new vector differs from the k-th vector in the list in at least its k-th digit. Mind-boggling!
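The diagonal construction is easy to make concrete. Here is a minimal sketch over a finite prefix of such a list (Cantor’s vectors are infinite, so this only illustrates the flipping trick, not the full argument):

```python
def diagonal_complement(rows):
    """Flip the k-th bit of the k-th vector: the result differs from
    every vector in the list in at least one position."""
    return [1 - rows[k][k] for k in range(len(rows))]

# A small list of binary vectors (each row is one "algorithm").
rows = [
    [0, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
]
new_vector = diagonal_complement(rows)  # [1, 1, 0, 0]: not in the list
```

No matter which vectors you start with, the complemented diagonal disagrees with row `k` at position `k`, so it can never equal any row already in the list.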
So this brings us back to our original question: can a DIY system ever fully replace a human research professional? We could just as easily use the word “insight” above to describe how a human mind is capable of “seeing” from outside the system. Last time I checked, “insights” were what made the MR industry spin on its axis (if we could only get MR firms to actually help clients apply research findings, then we’d really be set)! Now granted, I significantly pushed the bounds of absurdity in making this argument, and plenty of DIY tools today do what they are supposed to do very well. There isn’t a need (yet) for a system of such monumental power. My point is more to the effect that:
The over-reliance on technology can lead us to make incorrect assumptions in our research (or anything else that may rely on a “sufficiently complex system”).
After all, we are still having trouble with technology as “simple” as Auto-Correct…at least the results are sometimes pretty bunny…