Artificial Intelligence Experiments – Episode #3: Evil AI
(Singularity Won’t Occur in 2017 – And That’s a Very Good Thing)
This is episode 3 in our 4-part series on the state of Artificial Intelligence, in which we look at the potential for evil AI.
The first episodes reached conclusions much like those of a recent Wired article, “If AI is so smart, why can’t it solve bigger problems?” (https://www.wired.com/2016/12/artificial-intelligence-artificial-intelligent/).
General classifiers underperformed three-year-old children in our testing. For our IIoT purposes, other methods are better right now.
The Lone Star testing gives us some insight about how worried we should be that an evil AI will soon control our lives, or that a benevolent AI can somehow block fake news.
Another, more serious story from TechCrunch makes the same point: https://techcrunch.com/2016/12/05/deepmind-ceo-mustafa-suleyman-says-general-ai-is-still-a-long-way-off/
But the question remains: can we avoid evil as AI develops? Sadly, we don’t think so. And that seems to be the consensus of some smart people, going back at least as far as Babbage. A rough paraphrase of Babbage’s line of thinking goes like this:
- To be “good” we need to know truth
- To know truth we need to be able to reliably ascertain universal principles, or natural laws
- Any effort to determine these with the “Calculating Engine” is more likely to fail due to exceptions than it is to succeed.
- Therefore, we can’t calculate a great many truths, and the Calculating Engine can’t be “good”
(See chapter 13 of Babbage’s 9th Bridgewater Treatise, 1838)
Babbage imagined a pure set of data for his Calculating Engine, and didn’t seem to anticipate the sheer volume of noise from internet trolls who post phony and abusive items on chat boards, comments sections, and news outlets. Trolls include state-sponsored propaganda outlets; credible estimates are that more than 20 nations sponsor them. And we face corporate trolls, hacktivists, and others. It is reasonable to estimate that at least 100,000 humans spend significant time putting out slanted posts and stories, or just plain lies. There may be over a million people doing this.
In some places these folks have a quota of 100 posts a day. Organized cabals are generating lies, hence our selection of artwork for this posting, by J.J. at the English-language Wikipedia.
In a given week, we expect there will be over 10 million internet statements that are not well connected to truth. The number might be as high as 100 million a day. You can think of this as “Bad Big Data” or “Big Bad Data”.
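The estimates above hang together as simple arithmetic. A back-of-envelope sketch, using only the figures from the text (100,000 to 1,000,000 posters, quotas around 100 posts a day):

```python
# Back-of-envelope check of the "Bad Big Data" volume.
# All inputs are the rough estimates quoted in the text, not measurements.
trolls_low, trolls_high = 100_000, 1_000_000  # estimated organized posters
posts_per_day = 100                           # quota reported in some operations

daily_low = trolls_low * posts_per_day    # conservative daily volume
daily_high = trolls_high * posts_per_day  # upper-end daily volume

print(f"{daily_low:,} to {daily_high:,} slanted posts per day")
# → 10,000,000 to 100,000,000 slanted posts per day
```

Even the conservative end yields tens of millions of slanted posts per week, consistent with the range quoted above.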
So, is that a problem?
We think so. We did an experiment with an AI engine we admire (and won’t name), asking it to learn about Fracking. This is a serious engineering and economics topic in organizations like the Society of Petroleum Engineers. But, it is a political issue to a great many others.
Our AI “brain” on Fracking was soon hijacked by green activists. Many of these folks deeply believe what they say; they are not trolls who blather fake news and propaganda (at least not on purpose). But our brain soon began to believe things which simply defy physics. An AI “brain” on vaccinations likewise gravitated to pseudo-science, detached from fact.
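The mechanism is easy to see in miniature. This is not the engine we tested, and the claim labels are made up; it is a toy sketch of why any learner that weights claims by how often it sees them can be hijacked by sheer posting volume:

```python
from collections import Counter

def majority_belief(posts):
    """Toy learner: 'believe' whichever claim appears most often in the feed."""
    return Counter(posts).most_common(1)[0][0]

# A handful of posts reflecting the engineering consensus...
stream = ["claim_a_consistent_with_physics"] * 5
# ...swamped by a high-volume wave of contrary posts from a vocal group.
stream += ["claim_b_defies_physics"] * 50

print(majority_belief(stream))  # → claim_b_defies_physics
```

No malice is required on the learner’s side: frequency stands in for truth, so whoever posts the most wins.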
Microsoft had the same problem with its Twitter AI, “Tay”: https://twitter.com/tayandyou?lang=en
Tay was supposed to be a chatty, engaging voice to show the promise of AI and chatbots. Sadly, Tay learned bad habits on Twitter and exhibited both bad behavior and bad language. A good explanation of the problem is here: https://en.wikipedia.org/wiki/Tay_(bot). Our AI testing included bots, and we found problems similar to (though not as extreme as) Tay’s pathologies.
Tay was led astray by bad people. There are lots of bad people feeding bad big data.
So, the bad news is that we will need to work hard to avoid evil AI as we approach “general AI.” The good news is that general AI is a long way off.
Babbage would not have been surprised, it seems.