[In the “Executive Perspectives: Machine Learning/AI” series, Expected X founder and Principal Consultant, John Sukup, interviews industry leaders on how they believe the future will be impacted by these technologies]
J: First, let’s start with a brief background on yourself and how you ended up where you are today.
S: Years ago, when I started my Ph.D. in Machine Learning, I was considered a “nerd” because I was one of the few who really understood what the heck Machine Learning was. I was working with aluminum plants that had gigantic mills, each worth about $1,000,000. What we did was collect data from the mills’ sensors. These were sound sensors that could record an audio signal from the mill, which we then transformed using a Fast Fourier Transform. Based on the frequencies, we could recognize anomalies and estimate mill maintenance, but I didn’t have advanced algorithms or enough data; we didn’t have TensorFlow or the computing power to utilize advanced algorithms anyway.
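The pipeline described above (audio signal, then a Fast Fourier Transform, then a frequency-based anomaly check) can be sketched in a few lines. This is an illustrative reconstruction, not the plant’s actual system; the sample rate, baseline frequency, and tolerance are hypothetical values chosen for the example.

```python
import numpy as np

def dominant_frequencies(signal, sample_rate, top_k=3):
    """Return the top_k strongest frequency components of an audio signal."""
    spectrum = np.abs(np.fft.rfft(signal))            # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    top = np.argsort(spectrum)[-top_k:][::-1]         # indices of strongest bins
    return freqs[top]

def looks_anomalous(signal, sample_rate, baseline_hz, tolerance_hz=5.0):
    """Flag the signal if its strongest frequency drifts from the healthy baseline."""
    strongest = dominant_frequencies(signal, sample_rate, top_k=1)[0]
    return abs(strongest - baseline_hz) > tolerance_hz

# Simulated example: a healthy mill hums at 50 Hz; a worn bearing drifts to 80 Hz.
rate = 8000
t = np.arange(rate) / rate
healthy = np.sin(2 * np.pi * 50 * t)
worn = np.sin(2 * np.pi * 80 * t)
print(looks_anomalous(healthy, rate, baseline_hz=50.0))  # False
print(looks_anomalous(worn, rate, baseline_hz=50.0))     # True
```

A rule-based threshold like this is exactly the kind of pre-deep-learning approach the era allowed: no large models, just signal processing and a hand-tuned tolerance.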
I moved into the telecom space, and I always had this dream of self-optimizing networks even though no one understood what that meant at the time. I joined a startup company around 2007 and we created a platform to manage the audio/video network switches of large companies. If you watch sports like the NHL, NFL, or NBA, that’s the software we built to get the signal to you from our Tier 1 customers such as AT&T. So, all along that path, you end up using Machine Learning.
Most so-called Machine Learning applications are just digital decision trees that generate probabilities. When people talk about an “AI product” I say, “Yeah right, so magicians are jumping out of it?” I’m very critical of what people say about Machine Learning and AI, but I love the technology. In 2017, when I started my own company, I decided to move into Blockchain because I saw a bigger opportunity. So, that’s where we are: we are a startup focusing on Blockchain and, to some extent, ML/DL. We have generated some revenue and are seeing great potential with it. It’s what I saw years ago with Machine Learning: the same experience, almost déjà vu.
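The quip above can be made concrete: stripped of the branding, many “AI products” amount to a decision tree whose leaves emit probabilities learned from historical data. The sketch below is a toy, with invented feature names, thresholds, and probabilities.

```python
# A hand-rolled sketch: a two-level decision tree whose leaves hold
# probabilities. In a real product these splits and leaf values would be
# learned from past data; here they are hypothetical.

def churn_probability(monthly_usage_hours: float, support_tickets: int) -> float:
    """Walk a two-level decision tree; each leaf stores P(churn)."""
    if support_tickets > 3:
        return 0.85 if monthly_usage_hours < 10 else 0.40
    return 0.30 if monthly_usage_hours < 10 else 0.05

print(churn_probability(6, 5))   # 0.85
print(churn_probability(45, 0))  # 0.05
```

Nothing magical happens at inference time: the “AI” output is a lookup of a stored probability at the end of a few if-statements.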
J: What do you see as being the potential negative (or positive) impact of the hype right now around AI?
S: I was at a conference last year and met the CMO of [a large telecom provider], who is well respected in the industry. They started talking about “AI this” and “AI that.” I would say they were trying to demystify AI and Machine Learning in general, but what they were calling “AI/Machine Learning” was something it is not. They were giving the impression that they had an “AI product,” and I was like: “What does it even mean to have an ‘AI product?’” Say you have trained a model for a specific use case, let’s say a Natural Language Processing model. That’s a good example: it is working right now at Google, and they took it to the next level; Google has been doing it for a long time. Or Amazon, which, with trend analysis, gives you suggestions based on the data they collect. Those are specific use cases where you can actually leverage the technology, right? But if you don’t have the training data and you’re just using the buzzwords “AI” and “Machine Learning” to raise money, then that’s the negative side of what is happening. No one will trust you anymore in the Bay Area if you use the words “Machine Learning” or “Deep Learning” because they’re all sold on an idea that doesn’t exist.
J: From your perspective, where do you see most organizations in terms of their adoption of AI and Machine Learning? Is it kind of a “Keeping Up with the Joneses” situation where everyone is just saying they have these capabilities? What does the environment look like?
S: Orange was doing speech recognition in English and then tried to do the same thing for Orange in France. It took them six months to adapt it because French is so different from English. Even though we’re talking about Natural Language Processing, the speech recognition challenge was starting to make people realize, “this is not easy!”
The challenge with telecom is that you have regulatory restrictions. You can’t simply take your data and give it to someone else. So, I think that was the point when telecoms switched and started saying: “OK, wait a minute. We can’t give our data, our most valuable data, to someone else, and someone else, and so on.” So, they started using internal teams. The other challenge with telecom is that innovation and software are not in their DNA, and that has really slowed down adoption.
But I think what happened, especially in telecom, is the adoption of Machine Learning for simple use cases with core business impact. In customer service, for example, they are using Machine Learning to coordinate responses. Then they expanded into, say, accounts receivable, scanning incoming invoices for automatic categorization and information extraction. But in terms of the bigger scheme, like self-optimizing networks, it’s likely too difficult. Having the right data, and knowing what to do with it — that’s where they get stuck.
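As a rough illustration of the invoice use case, automatic categorization and information extraction can start as simply as keyword overlap plus a regular expression. A production system would learn these signals from labeled invoices; the category names, keywords, and patterns below are invented for the example.

```python
import re

# Hypothetical category keywords; real systems learn such weights from
# labeled invoices instead of hard-coding them.
CATEGORY_KEYWORDS = {
    "utilities": {"electricity", "water", "gas", "kwh"},
    "software": {"license", "subscription", "saas"},
    "logistics": {"freight", "shipping", "pallet"},
}

def categorize(invoice_text: str) -> str:
    """Pick the category whose keywords overlap the invoice text the most."""
    words = set(re.findall(r"[a-z]+", invoice_text.lower()))
    scores = {cat: len(words & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    return max(scores, key=scores.get)

def extract_total(invoice_text: str):
    """Pull the first amount following the word 'total' (a naive pattern)."""
    m = re.search(r"total[:\s]*\$?([\d,]+\.\d{2})", invoice_text, re.IGNORECASE)
    return float(m.group(1).replace(",", "")) if m else None

sample = "Invoice 1042 - Annual SaaS subscription license. Total: $1,200.00"
print(categorize(sample))     # software
print(extract_total(sample))  # 1200.0
```

The point of the toy is the gap it exposes: rules like these work on clean, structured text, which is exactly why the unstructured-data cases discussed next are the hard ones.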
J: Would you say that is one of, or maybe the biggest, areas of potential you are seeing with these technologies, specifically within telecom?
S: Within telecom, you have organizations coming in and selling robotic process automation as something to do with AI and Machine Learning. Really, they are just selling systems for scraping data from one system and shoveling it into another. But that doesn’t deal with what I see as the biggest challenge: unstructured data. That’s where I see the biggest potential. We talk about 95% of today’s data being unstructured. That’s where the use cases lie. We can’t really handle unstructured data with a traditional, conventional approach.
J: Thinking specifically about your organization [Ziotis, Inc.], what scares you the most about the adoption of AI/Machine Learning within your organization? It doesn’t have to be technical. It could be human resources-oriented: getting people on board and changing their workflow. Is there any specific thing that comes to mind?
S: Two things come to mind. First, unleashing the power of Machine Learning as a tool to attack, at the largest scale, the security of the foundation of the Internet. A Google executive was saying it’s not about figuring out whether or not we’re getting attacked; it’s how fast you can react to attacks. So, security in the context of IoT is one of my biggest concerns, and nowadays I think it’s very easy to find the most vulnerable link in the chain and attack it. It doesn’t have to be an attack driven by Machine Learning; it can be any attack. But I think with Machine Learning the attacks are getting faster and more convenient. So right now, if you talk about attacks, machines are attacking machines, or algorithms are attacking algorithms. That is a scenario that can easily get out of hand.
Number two is the ethical issues related to Machine Learning. If there’s bias, it’s in the data, not the algorithm. The algorithm learns from data that, in some cases, puts minority groups at a disadvantage.
J: It seems that the fear regarding biased Machine Learning models with adverse outcomes shouldn’t really be centered on the models themselves. It’s the data that’s being used to train these models where the problem really lies. Where did that (historical) data come from? Humans! So, biased Machine Learning models are just a reflection of human biases from the past. If anything, it’s not really “shame on the algorithm” but “shame on us.”
S: You hit the nail on the head. This is where my concern is: as we advance and improve our algorithms, the very nature, the foundation, of what we’re doing hasn’t changed. You must have data to train models, and those doing the data engineering and structuring have a lot of influence on the algorithm. When you start training, what the algorithm ends up biased on is based on what you give it, right? So, those who are in charge and have the power and the money — they can influence the algorithm.
J: Right. You read about how AI and Machine Learning are going to replace a lot of jobs, and that thought is common among many people. But you also read about the tangential and new industries that will arise from it – data governance and algorithm auditing being two of those. It’ll be interesting to see how that plays out, and whether those become industries in themselves or just an added service for other organizations.
J: Great information, Shahin! Thanks for your insight into how telecom is dealing with AI and Machine Learning.