Our fifth challenge is semantics.
There are a lot of words associated with data science, analytics, algorithms, and AI.
We tried dumping several articles on these topics into a Wordle generator; you see the result here. Two things stand out from the graphic: our discussions are increasingly data-driven, and we are all trying to make sense of data.
It would be one thing if we all agreed on the meaning of the most commonly used words in the word cloud, but sadly… we don’t.
To understand how difficult the semantics problem is, consider two gentlemen. Until recently, they were both with Google. One of them is Google’s CEO, Sundar Pichai. At Davos this year, he compared AI to fire or electricity. He said it is probably the most important thing humanity has ever worked on.
The other is John Giannandrea, or “JG,” who was the search and artificial intelligence chief at Google. He recently moved to Apple. JG is not sure we should even be using the term “artificial intelligence.” At a TechCrunch conference last year, he favored “machine intelligence,” saying we are making “machines slightly more intelligent — or slightly less dumb.” A Financial Times story described him as “bent on demystification” and quoted him saying there is “just a huge amount of unwarranted hype around AI right now,” much of it “borderline irresponsible.” JG is not alone. Many pioneers of earlier waves of AI agree with him.
That’s a pretty fundamental disagreement between two important figures, and they merely represent the broader disagreement within the analytics community. We can’t apply analytic methods consistently when we lack consistent terminology.
Consistent definitions remain an unsolved problem, one that prevents us from communicating cogently and from applying analytic methods consistently.
To further illustrate the problem of semantics, let’s go back more than 100 years, to June 1914, a few weeks before World War I began, when two aviators, Lawrence Sperry and Emil Cachin, performed an amazing feat.
Sperry invented the first workable aircraft autopilot. To demonstrate that it could fly the airplane without human intervention, Cachin walked out on the wing of their Curtiss biplane, throwing it off balance, while Sperry held his hands in the air to show he had no control over the ailerons.
The crowd below was on its feet, shouting, as Sperry got up and walked out on the other wing. It seemed to be a miracle, much like today’s claims about AI.
Today, we often use the definition that AI is a machine doing something which would otherwise require the skills of a human. But we don’t think of things like 100-year-old autopilots as AI. Why not?
Don’t say it’s because there was no computer. There was a computer – it was just analog. In fact, part of it was pneumatic – it used air for computing.
If we took a great deal of aircraft data and created a convolutional neural network to perform this function, everyone would call that AI. But it makes people angry to suggest Sperry’s brilliant collection of wires and tubes is AI. The question is, “why?”
This is just one of more than a dozen semantic challenges in analytics, data science, machine learning and AI. Many of you probably have your own list.
Until we can agree on some terms, it seems to me that data science can’t be called a mature science. We should not be surprised: it took calculus about 100 years to settle into a solid method, with definitions and notation most people agreed on. This is our fifth unsolved analytics problem, and we don’t have 100 years to fix it.