Analytics Lessons from My Father’s 1 Iron
by Steve Roemerman, CEO Lone Star Analysis
A few days ago, Lone Star held a summer social event for our team at Top Golf. It was entertaining to see the range of skills from impressive to none.
I brought my father’s 1 Iron. That was entertaining too.
I’m not sure why Dad thought he needed it. My brother suggested it was a symbol of his endless optimism, and I’ve kept it as a reminder of that. Dad’s swing mechanics were often painful to watch. He had some old injuries; fluid motion wasn’t an option. Yet he bought a custom-fitted, graphite-shafted 1 Iron. This is the club Lee Trevino famously quipped “even God can’t hit.” Optimism indeed.
Most of our team had never seen one, much less hit one. So, it was fun to share the experience.
It proved to be a lesson that applies to Artificial Intelligence and Machine Learning. Here is what I noticed: the best golfers hit it pretty well on their first swing, but after that, things often went horribly wrong.
It seemed they paid close attention the first time, and if they were good enough, they contacted the tiny “sweet spot” on the club face. You see, the 1 Iron has a very small and unforgiving area which must contact the golf ball.
Unless the golfer is VERY consistent, they will frequently miss that VERY small contact spot.
If you are still reading this, you are thinking “get to the part about Machine Learning and Artificial Intelligence!”
The point is tiny sweet spots are also the nature of many ML and AI methods.
Careful application, in the hands of a skilled user can yield impressive results. But, stray from that sweet spot, and bad things are prone to happen.
This is what we saw in our multi-year international benchmarking effort. None of the AI and ML practitioners we benchmarked were paying close attention. They were using methods that were not valid for their problems, and most had not bothered to check.
Some seemed to be relying on the Universal Approximation Theorem (UAT). The UAT proves a neural net can approximate any continuous function to arbitrary accuracy, but only under certain conditions: the target function must be continuous, the guarantee holds only on a compact (bounded) domain, and the theorem says nothing about how many neurons you need or how to train them. But they weren’t checking to ensure they met those conditions. Oops… missed the sweet spot.
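The compact-domain condition is easy to demonstrate. Here is a minimal sketch (my own illustration, not data from the benchmarking study): a one-hidden-layer tanh network, trained with plain NumPy gradient descent to fit f(x) = x² on [-2, 2], does reasonably inside that interval but fails badly when asked about x = 5, well outside the region the UAT guarantee covers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: f(x) = x^2, trained only on the compact interval [-2, 2].
x_train = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)
y_train = x_train ** 2

# One hidden tanh layer -- the classic UAT setting.
H = 32
W1 = rng.normal(0.0, 1.0, (1, H))
b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1))
b2 = np.zeros(1)

lr = 0.01
for _ in range(5000):
    h = np.tanh(x_train @ W1 + b1)       # hidden activations, (200, H)
    pred = h @ W2 + b2                   # network output, (200, 1)
    err = pred - y_train
    # Backpropagation for mean-squared error
    grad_pred = 2.0 * err / len(x_train)
    gW2 = h.T @ grad_pred
    gb2 = grad_pred.sum(0)
    grad_h = (grad_pred @ W2.T) * (1.0 - h ** 2)
    gW1 = x_train.T @ grad_h
    gb1 = grad_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def predict(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Inside the training interval vs. far outside it
in_domain_err = float(np.abs(predict(x_train) - y_train).mean())
out_err = abs(float(predict(np.array([[5.0]]))) - 25.0)

print(f"mean error on [-2, 2]: {in_domain_err:.3f}")
print(f"error at x = 5:        {out_err:.3f}")
```

Because tanh units saturate, the network’s output flattens out beyond the training interval while x² keeps growing, so the extrapolation error dwarfs the in-domain error. Nothing about the fit on [-2, 2] warned us; only checking the conditions does.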
Others seemed to be using Python ML code they’d found on GitHub, which is fine, provided its limitations are understood. The most common failure we saw was the assumption of normality. The typical user didn’t know they were using methods that are only valid for Gaussian (normally distributed) data. Their code didn’t come with a warning sticker. Oops… missed the sweet spot.
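A quick sanity check would have caught most of these cases. The sketch below (synthetic data, my own illustration) compares a genuinely Gaussian sample with a lognormal one, like many real-world cost or duration datasets, and shows how a Gaussian-only rule, the familiar mean ± 3-sigma outlier test, misbehaves on the skewed sample.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two synthetic samples: one genuinely Gaussian, one lognormal
# (heavy-tailed and skewed, like many real cost or duration datasets).
gaussian_sample = rng.normal(loc=10.0, scale=2.0, size=2000)
lognormal_sample = rng.lognormal(mean=2.0, sigma=0.8, size=2000)

def skewness(x):
    """Sample skewness: near 0 for symmetric data, large for skewed data."""
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean())

skew_gauss = skewness(gaussian_sample)
skew_logn = skewness(lognormal_sample)
print(f"skewness, Gaussian sample:  {skew_gauss:+.2f}")
print(f"skewness, lognormal sample: {skew_logn:+.2f}")

# A mean +/- 3-sigma outlier rule is calibrated only for Gaussian data.
# On the skewed sample it flags points on the high side only, and far
# more of them than the Gaussian error rate would predict.
def three_sigma_flags(x):
    z = (x - x.mean()) / x.std()
    return int((z > 3).sum()), int((z < -3).sum())

print("3-sigma flags (high, low), Gaussian: ", three_sigma_flags(gaussian_sample))
print("3-sigma flags (high, low), lognormal:", three_sigma_flags(lognormal_sample))
```

One line of diagnostics, a skewness check before applying a normality-based method, is the warning sticker the downloaded code lacked.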
What seemed to be happening was the same thing we saw at Top Golf. A simple problem had been successfully solved. It might have been contrived by a professor or a software vendor, or it might have been a real business problem. Either way, after that early success, vigilance was relaxed.
Are there some great AI/ML practitioners? Yes, but we didn’t find those in our benchmarking project. Some of the participants were SUPPOSED to be great. None made it past our initial screening. Others claimed impressive results, but the results seemed to be unrepeatable with new data sets.
The quest to find best practice in other algorithm/analytics genres was more successful. The organizations with consistent best practice did not let down their guard. And they all had the advantage of using methods more transparent than nearly all AI.
Lone Star uses ML and AI, of course. But we use them along with other methods, each having their own sweet spot.