In Case of Trouble, Change Definitions

Are words like “transparency” losing their meaning?

Logistics organizations often have instructions, “in case of trouble.” Some are very entertaining. A trucking company hauled kosher beef from the Midwest to New Jersey. Every bill of lading said, “In case of trouble, call a Rabbi.” Humanitarian organizations send convoys into lawless disaster areas with sobering risks; their instructions, “in case of trouble” are not funny at all.

Lone Star knows a lot about logistics and transportation. It’s one of the markets we serve. Those customers have a solid sense of reality. Perhaps that comes from the tangibility of a 150-ton railroad car. It is certainly related to the binary nature of on-time delivery. You are on time, or late. Period.

Tangibility is foreign to many analytics companies. In our three-year benchmarking quest for analytics best practices, we found a very disturbing trend: "In case of trouble, change definitions."

Some of what we saw in the benchmarking was garden variety big organization politics. If a group can obtain a monopoly right to provide forecasts, bad things happen. Absolute power corrupts absolutely.

An executive leading an analytics team (not one of our customers) told us how he'd simply made up answers to a critical policy question. After all, his unit was the only one allowed to do this, and he was the smartest person there. Facing a challenging deadline, he made up answers instead of doing any actual computation.

He’d changed the definition of analysis; in case of trouble, change the definitions.

The list of all the semantic contortions we’ve seen would be long and tedious. But two examples make the point.

  • First, what are “analytics?”
  • Second, what is “transparent?”

“Analytics” used to mean analysis had been done. Webster’s dictionary defines it as “the method of logical analysis.” So, you’d think no analysis equals no analytics.

But that's not a given today. A dashboard of lines, pie charts, and candlesticks passes for "analytics" even when no math occurs. Visualization used to be done on the back end of a large data problem, or after a large simulation was run. Today, there is a real risk customers will simply see beautiful visualizations of error.

When “analytics” simply means lumping and displaying raw data, bad things happen. One Lone Star client operating a large fleet of vehicles could not make sense of reporting across the organization. It turned out their most important performance measure was defined differently at each location.
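The danger is concrete. As a hedged sketch (the delivery records, grace window, and site policies below are invented for illustration, not the client's actual definitions), the same raw data can yield very different "on-time" rates depending on how each location defines the measure:

```python
from datetime import datetime, timedelta

# Hypothetical delivery records: (scheduled, actual) arrival timestamps
deliveries = [
    (datetime(2018, 9, 1, 8, 0), datetime(2018, 9, 1, 8, 20)),
    (datetime(2018, 9, 2, 8, 0), datetime(2018, 9, 2, 7, 55)),
    (datetime(2018, 9, 3, 8, 0), datetime(2018, 9, 3, 9, 30)),
    (datetime(2018, 9, 4, 8, 0), datetime(2018, 9, 4, 8, 0)),
]

def on_time_strict(records):
    """Site A's definition: on time means actual arrival <= scheduled."""
    return sum(actual <= sched for sched, actual in records) / len(records)

def on_time_grace(records, grace=timedelta(minutes=30)):
    """Site B's definition: on time means within a 30-minute grace window."""
    return sum(actual <= sched + grace for sched, actual in records) / len(records)

print(on_time_strict(deliveries))  # 0.5  -- half the fleet is "late"
print(on_time_grace(deliveries))   # 0.75 -- most of the fleet is "on time"
```

Both numbers are computed correctly from identical data; the dashboard showing them side by side would still be meaningless until the definitions are reconciled.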

Pesky definitions! The visualizations were gorgeous, but they weren't "analysis." The new definition of "analytics" makes it tempting to avoid building the hard work of data definitions, attribution, and provenance into visualization software. That work is a lot of trouble.

So, in case of trouble, change the definition of "analytics."

“Transparency” used to mean we could check each other. Remember when your teacher said, “show your work” and even if you got the right answer, you’d lose points without showing how you did it?

In science, “transparency” also meant some other researcher could attempt to reproduce your results.

But “transparency” is a pesky subject for many AI methods. As complex neural networks grew, they became unexplainable. Why does Google think a cat is guacamole? Since no one can explain how the NN works, that’s an awkward question.

A while back, we attempted to reproduce results in image recognition. Supposedly an image classifier was very good at equines (horses, donkeys, zebras). We thought zebras were the most recognizable, but classifier performance was poor: 100% of the zebras we tested were simply tagged "animal." You can test our results and see how things are working today. Just submit your own zebra pictures. Maybe you'll get a different result. Our method is transparent and reproducible.
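That reproducible protocol can be sketched in a few lines of Python. The `classify` stub below is hypothetical; it stands in for whatever third-party classifier is under test, so anyone repeating the experiment swaps in a real API call and their own image directory:

```python
import os

def classify(image_path):
    # Stand-in for a real image-classification call (hypothetical).
    # A real test would send the image to the classifier under study
    # and return the labels it produces.
    return ["animal"]

def tally_labels(image_dir, target="zebra"):
    """Reproducible protocol: classify every image in a directory
    and count how many receive the target label."""
    paths = [os.path.join(image_dir, f) for f in sorted(os.listdir(image_dir))]
    hits = sum(target in classify(p) for p in paths)
    return hits, len(paths)
```

With the stub above, `tally_labels("zebras/")` reports zero "zebra" hits, mirroring the result we saw; anyone can rerun the same tally against a live classifier and compare.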

Perhaps the classifier training set was mostly horses. But that would be a guess, since the training data was apparently never published, and of course no one knows what's going on inside the NN.

More recently, several firms have contributed to the decline of transparency. IBM has done this by promoting a new idea of what "transparency" means as part of its trusted AI initiative. BBN seems to be doing the same by changing the meaning of "explainable."

What BBN seems to be doing is an advance (though a small one): it points out some of what a neural net finds, showing "what data matters most." BBN and IBM are focused on the data, not the analytics themselves. And to be sure, much of what IBM calls a Supplier's Declaration of Conformity (SDoC) for AI services is laudable. The SDoC checklist requires some thought about unintended consequences, and it lays the groundwork for data provenance.

However, neither does much to make AI transparent. Connecting the SDoC to "transparency" avoids the central objection to unexplainable AI. And read carefully, it seems to push liability from IBM onto customers and suppliers.

That raises a question: who really wrote all this? It reads more like something from IBM's legal department. Big Blue will testify in some future court that customers promised their training data was free from error and bias. Poor IBM was just as much a victim as little Timmy, who was denied medical care.

And, when you dig in, you find IBM is not claiming they will abolish bias. Rather they hope to “perpetuate as little inequity as possible.” Those semantics must have a full work week of IBM legal staff behind them. Here’s a great video and story by ZDNet on that.

IBM's effort to put rigor behind training data sets is good. But making customers fill out a checklist on the data they provide isn't what your 6th-grade math teacher expected. Data is the input to the problem, not how you did your homework.

And, shifting liability to clients while sounding high minded is probably not what IBM customers expected, either.

The fact is, NN AI is not transparent. So, in case of trouble, have your attorney change the definition of transparency.

About Lone Star Analysis

Lone Star Analysis enables customers to make insightful decisions faster than their competitors.  We are a predictive guide bridging the gap between data and action.  Prescient insights support confident decisions for customers in Oil & Gas, Transportation & Logistics, Industrial Products & Services, Aerospace & Defense, and the Public Sector.

Lone Star delivers fast time to value supporting customers' planning and ongoing management needs. Utilizing our TruNavigator® software platform, Lone Star brings proven modeling tools and analysis that improve customers' top line, by winning more business, and improve the bottom line, by quickly enabling operational efficiency, cost reduction, and performance improvement. Our trusted AnalyticsOS℠ software solutions support our customers' real-time predictive analytics needs when continuous operational performance optimization, cost minimization, safety improvement, and risk reduction are important.

Headquartered in Dallas, Texas, Lone Star is found on the web at http://www.Lone-Star.com.
