Chasing Qualitative Signal In Quantitative Big Data Noise


Joey Votto, who plays for the Cincinnati Reds, is one of the best hitters in MLB. Lately he has received a lot of criticism for not swinging at strikes when there are runners on base. FiveThirtyEight decided to analyze this criticism with the help of data. They found the criticism to be true: his swings at pitches in the strike zone, especially fastballs, have significantly declined. But they also agree that Votto is still a great player. This is how I see many Big Data stories go: you can explain "what," but you can't explain "why." In this story, no one (that I know of) actually went and asked Votto, "hey, why are you not swinging at all those fastballs in the strike zone?"

This is not just about sports. I see this every day in my work in enterprise software, helping customers with their Big Data scenarios such as optimizing promotion forecasts in retail, predicting customer churn in telco, or managing risk exposure in banks.

What I find is that adding more data creates more noise in these quantitative analyses rather than getting you closer to a signal. On top of this noise, people expect a perfect model to optimize and predict. Quantitative analysis alone doesn't find the needle in the haystack, but it does help identify which part of the haystack the needle could be hiding in.
"In many walks of life, expressions of uncertainty are mistaken for admissions of weakness." - Nate Silver
I subscribe to and strongly advocate Nate Silver's philosophy of thinking of "predictions" as a series of scenarios with probabilities attached rather than as a deterministic model. If you are looking for a precise binary prediction, you're most likely not going to get one. Fixating on a model and perfecting it makes you focus on overfitting it to past data. In other words, you spend too much time on signal or knowledge that already exists instead of using it as a starting point (a Bayesian prior) and staying open to running as many experiments as you can to refine your models as you go. The context that turns your (quantitative) information into knowledge (signal) is your qualitative aptitude and attitude toward that analysis. If you are willing to ask a lot of "why"s once your model tells you "what," you are more likely to get closer to the signal you're chasing.
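To make that Bayesian point concrete, here is a minimal sketch in Python of treating a prediction as scenarios with probabilities attached and refining them as evidence arrives; the scenario names and numbers are made up for illustration, not from any real model:

```python
# Prior beliefs over three churn scenarios for a telco customer segment
# (illustrative numbers only).
prior = {"low_churn": 0.5, "moderate_churn": 0.3, "high_churn": 0.2}

# Likelihood of the observed evidence (say, a spike in support calls)
# under each scenario; in practice these would come from historical data.
likelihood = {"low_churn": 0.1, "moderate_churn": 0.4, "high_churn": 0.8}

def bayes_update(prior, likelihood):
    """Return the posterior distribution over scenarios via Bayes' rule."""
    unnormalized = {s: prior[s] * likelihood[s] for s in prior}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

posterior = bayes_update(prior, likelihood)
for scenario, prob in posterior.items():
    print(f"{scenario}: {prob:.2f}")
# Each experiment's posterior becomes the prior for the next experiment,
# which is the "refine your models as you go" loop described above.
```

The output is still a set of weighted scenarios, not a binary answer, which is exactly the kind of expressed uncertainty Silver argues for.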

Not every quantitative analysis has to be followed by a qualitative exercise to look for a signal. Validating an existing hypothesis is one of the biggest Big Data weapons developers use, since SaaS has made it relatively easy for developers not only to instrument their applications to gather and analyze all kinds of usage data but also to trigger changes that influence users' behavior. Facebook's recent psychology experiment to test whether emotions are contagious has attracted a lot of criticism. Setting aside the ethical and legal issues (the accusation that Facebook manipulated 689,003 users' emotions for science), this quantitative analysis is a validation of an existing phenomenon in a different world. Priming is a well-understood and proven concept in psychology, but we didn't have a published test proving the same in a large online social network. The objective here was not to chase a specific signal but to validate a hypothesis, a "what," for which the "why" has been well understood in a different domain.
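As a rough illustration of that instrument-and-trigger loop (this is not Facebook's actual system; the function names, experiment name, and events are all hypothetical), variant assignment and exposure logging might look like this:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Hash the user and experiment IDs into a stable bucket so each
    user always sees the same variant of a given experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def log_event(user_id: str, event: str, **fields):
    """Stand-in for an analytics pipeline; here we just print."""
    print({"user": user_id, "event": event, **fields})

user_id = "user-42"
variant = assign_variant(user_id, "positive_feed_ranking")
log_event(user_id, "experiment_exposure", variant=variant)
if variant == "treatment":
    # Trigger the behavioral change under test.
    log_event(user_id, "feed_served", ranking="positive_weighted")
else:
    log_event(user_id, "feed_served", ranking="default")
```

Comparing downstream behavior between the two buckets is what turns the instrumentation into a hypothesis test.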

About the photo: The Laplace transform is one of my favorite pieces of mathematics since it recasts complex problems (differential equations, whose solutions are exponentials) into a simpler form that is relatively easy to solve. It helps reframe problems in your endeavor to get to the signal.
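For readers who haven't met it, here is the transform's standard definition and a small worked example (my addition, not from the photo) of how it turns a differential equation into algebra:

```latex
% Definition of the Laplace transform
\mathcal{L}\{f(t)\} = F(s) = \int_{0}^{\infty} e^{-st} f(t)\, dt

% Example: the differential equation y'(t) + a\,y(t) = 0 with y(0) = 1.
% Using \mathcal{L}\{y'(t)\} = s Y(s) - y(0), it becomes algebraic:
%   s Y(s) - 1 + a Y(s) = 0 \implies Y(s) = \frac{1}{s + a}
% Inverting the transform recovers the exponential solution y(t) = e^{-at}.
```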
