A Big Data backlash is emerging.
This mood threatens to drag some forms of Artificial Intelligence and Machine Learning down with it.
You may have heard of GDPR.
The “R” is “regulation” but it might as well be “rights.” The General Data Protection Regulation (see http://www.eugdpr.org/ ) will give EU citizens extensive rights over their personal data. You might also have seen GDPR promoted by IT services firms hoping to sell upgraded security to Fortune 1000 companies.
You might think GDPR is about data protection. You’d be partly right.
GDPR is part of a larger movement. Across a wide spectrum of political views, age, and technical sophistication, unease is growing about big data and its use. In the United States, the Association for Computing Machinery recently announced seven principles on “Algorithmic Transparency” (see http://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf).
ACM is one of the most respected voices for computer science. Their public policy statement boils down to “we have a right to know how your algorithms work.” Or, “we have a right to know what you are doing with the big data, and how that works.”
This is much more than just protecting data from hackers; this is being able to explain what you do with data you collect. And, an overlooked aspect of GDPR gives EU citizens the right to know how data is being processed, as well as the right to know what data has been collected about them.
In February, the Pew Research Center for Internet, Science and Tech released a lengthy study report, Code-Dependent: Pros and Cons of the Algorithm Age, found here: http://pewrsr.ch/2kslvuK. Many of the Pew concerns are echoed by a group of authors who published in Scientific American (see https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/).
We wanted to see how the public perceived these issues and commissioned a survey.
Our survey asked for opinions about computer generated ratings, rules and scores. We also asked when respondents felt they had “rights” related to their data.
Most of our respondents said they had a “right to know” how algorithms worked when it came to issues like credit ratings, safety, health care, insurance risk ratings, school admissions, and investments.
A majority also said they had “rights because someone collected data about you” in a wide range of conditions: nearly every case where business and government collect data, including taxes, social media, utility smart meters, and smart cars.
We doubt there is the political will for the US to follow the EU’s example any time soon. The current mood in Washington is to reduce regulation, rather than increase it.
But this does not mean the Silicon Valley version of economic libertarianism is safe. It seems more likely we will see the US version of data and algorithm regulation play out in courts. We asked our respondents about jury service after a self-driving car’s accident. Our respondents were more than twice as likely to find the car company at fault if “the car’s algorithms were created by a self-taught computer…” than if the car were managed by hard-coded rules.
It’s not just grumpy old Luddites who have these concerns.
Younger respondents on the imaginary jury were less likely to support AI, even though the scenario told them it outperformed humans in testing.
The ACM principles, and research like Pew’s (and ours), suggest there is significant business risk associated with some applications of deep neural nets, whose decisions no one seems able to explain.
Magical thinking about AI seems unlikely to prevail in US courts or with EU regulators. The AI community seems to want to discuss when a general AI will emerge. A better question might be how to keep out of court.
There are powerful and proper uses for many forms of machine learning and AI. But there is a growing social, legal, and regulatory backlash, and it needs to be taken seriously.