Monday, August 4, 2014

Book Review: Naked Statistics ... Aka How Not to Kill People with Statistics

Charles Wheelan's Naked Statistics is an insightful book laced with college-style humor, as indicated by the soft-porn book cover. Read this book if you want to understand the concepts behind statistics without having to mine a textbook. The book is a quick read at only 250 pages, much of it skimmable. It is especially valuable for digital analytics professionals and marketing executives who want to understand more about data science predictions, which are essentially statistically based "guesstimates".

Readability: 4 out of 5 stars.
The text is mostly at the level of Time magazine, which is to say about 7th grade. Lots of stories, straightforward sentences, very few footnotes or reference notes per chapter, but a nice section at the end of each chapter dives into more mathematical detail, should you desire it. There are a few chapters where the book slogs through excess detail and overused examples.

Impact: 4 out of 5 stars.
If you read this book, it will change the way you hear and interpret statistics in the world-at-large as well as your specific industry. You will also feel smarter (what a bonus!). The potential impact is medium-high.

Speed read pattern: You really have to read it from start to finish because each chapter builds on the last. However, once you get the point, I recommend skimming over the examples which cuts the reading time down. Skip the conclusion. It does not summarize the book but instead provides feel-good examples about making the world a better place with statistics. Do not skip the statistical tools review. This is the clearest tool comparison I've seen.

The book has 13 chapters. My summary notes are included below: 

1 - What's the Point - This chapter introduces statistics as a way to summarize or simplify the data around us. Example: a stock market index. He compares data to a crime scene where we might want to "capture everything," and statistics to the detective work that arrives at a meaningful answer.

2 - Descriptive Stats - Statistical vocabulary is introduced here (mean, median, standard deviation, quartiles, relative and absolute numbers). If you always wanted to know what those funny symbols meant, this chapter will tell you. 

3 - Deceptive Description - Examples of how statistics go awry are included here. There is a specific call-out on scorecards: when scorecard success carries financial incentives, be on the lookout for manipulation.

4 - Correlation - The correlation coefficient is easily understood in this chapter (-1 to 0 to 1). Zero is no correlation. -1 is a perfect negative correlation (e.g. what you ate and the movement of the stock market). +1 is a perfect positive correlation (e.g. what you ate and how much you weigh).   
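The coefficient is simple enough to compute yourself. Here is a minimal sketch using only the Python standard library; the height/weight numbers are invented for illustration, not taken from the book:

```python
# A minimal sketch of the Pearson correlation coefficient.
import statistics

def correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mean_x, mean_y = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

heights = [60, 62, 65, 68, 71, 74]          # inches (made-up data)
weights = [115, 120, 135, 155, 170, 190]    # pounds (made-up data)
print(round(correlation(heights, weights), 2))  # close to +1
```

Flip the sign of every weight and the result flips to nearly -1, which is all a "perfect negative correlation" means.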

5 - Basic Probability - The infamous coin flipping begins here, though the password examples are compelling. This chapter also contains a good financial reference on how to calculate the payout (or expected value) of a change. If you do any web testing, this is very helpful for communicating impact. The law of large numbers (a.k.a. "why casinos always win") is also introduced here.
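Expected value is just probability-weighted payout. A quick sketch, with hypothetical numbers of my own (not the book's), shows how you might frame a proposed web test for an executive:

```python
# Expected value = sum of (probability * payout) over all outcomes.
def expected_value(outcomes):
    """outcomes: list of (probability, payout) pairs; probabilities sum to 1."""
    return sum(p * payout for p, payout in outcomes)

# Say a proposed checkout change has a 30% chance of lifting revenue by
# $50,000, a 50% chance of no effect, and a 20% chance of costing $20,000.
ev = expected_value([(0.30, 50_000), (0.50, 0), (0.20, -20_000)])
print(round(ev, 2))  # 11000.0 -- worth running, on average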

5 1/2 - The Monty Hall Problem - BEST CHAPTER IN THE BOOK. Highly entertaining game show illustration about whether it's better to keep your original guess or switch when conditions change. 
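If you doubt the chapter's conclusion, a short simulation settles it. This sketch (trial count and seed are arbitrary) shows that switching doors wins about twice as often as staying:

```python
import random

def monty_hall(trials=100_000, switch=True, seed=42):
    """Simulate the Monty Hall game; return the fraction of wins."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        guess = rng.randrange(3)
        # Host opens a door that is neither your guess nor the car.
        opened = next(d for d in range(3) if d != guess and d != car)
        if switch:
            guess = next(d for d in range(3) if d != guess and d != opened)
        wins += (guess == car)
    return wins / trials

print(round(monty_hall(switch=True), 2))   # ≈ 0.67
print(round(monty_hall(switch=False), 2))  # ≈ 0.33
```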

6 - Problems with Probability - This crosses over with Nate Silver's Signal and Noise book a bit. The discussion covers the financial crisis and the use of "Value at Risk." It's a good reminder to lift your head up and see the broader picture. For web analysts, this means visiting the website once in a while. :) Independent and dependent variables as well as clusters are covered. A fascinating discussion about reversion to the mean ends the chapter. This theory illustrates why companies featured on the cover of Business Week or teams featured on Sports Illustrated routinely tank afterward.
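Reversion to the mean is easy to simulate. In this sketch (my own toy model, not the book's) every "company" has identical skill and scores are skill plus luck; the top performers of period one, the ones who would make the magazine cover, fall back toward average in period two:

```python
import random

rng = random.Random(0)
# 100 "companies": identical true skill, observed score = skill + luck.
skill = 50
period1 = [skill + rng.gauss(0, 10) for _ in range(100)]
period2 = [skill + rng.gauss(0, 10) for _ in range(100)]

# The "magazine cover" group: top 10 performers in period 1.
top = sorted(range(100), key=lambda i: period1[i], reverse=True)[:10]
avg1 = sum(period1[i] for i in top) / 10
avg2 = sum(period2[i] for i in top) / 10
print(round(avg1, 1), round(avg2, 1))  # period-2 average falls back toward 50
```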

7 - The Importance of Data - Here we learn how sampling 1000 people can project actual patterns of 1 million. This chapter also lays out five types of sampling bias that could affect the results, but the end result is a firmer belief in the statistical soundness of sampling.  

8 - The Central Limit Theorem - At this point the learning curve gets steeper. There are more equations used to explain concepts that rely on previous concepts. This is where standard error and its interpretation are covered. The larger the sample, the more closely it will approximate a normal distribution. This allows us to make inferences about the data in general (e.g. which candidate will win an election).
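You can watch the theorem work with a few lines of Python. Even when the underlying population is badly skewed (here, an exponential distribution; my choice for illustration), the means of repeated samples pile up in a tight, roughly normal cluster around the true mean:

```python
import random
import statistics

rng = random.Random(1)
# A skewed, decidedly non-normal population (exponential, mean ~1.0).
population = [rng.expovariate(1.0) for _ in range(100_000)]

# Draw many samples of 50 and record each sample's mean.
sample_means = [
    statistics.mean(rng.sample(population, 50)) for _ in range(2_000)
]

print(round(statistics.mean(sample_means), 2))   # clusters near 1.0
print(round(statistics.stdev(sample_means), 2))  # standard error ≈ 1/sqrt(50)
```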

9 - Inference - An unlikely pattern in the data is just that, until supported with more evidence. If you have ever been suspected of cheating by a professor, you will want to read this chapter. The importance of forming and rejecting a hypothesis is covered. There is no standard for statistical significance. It is a flexible target often found between .05 (95% confidence) and .01 (99% confidence). This is a good chapter to read if people ever ask you if they can trust your numbers. 
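To make the .05 / .01 thresholds concrete, here is a sketch of a p-value for the simplest possible case, a possibly-biased coin (my example, not the book's). The p-value is the probability of seeing a result at least this extreme if the null hypothesis (a fair coin) were true:

```python
from math import comb

def p_value(heads, flips):
    """Two-sided p-value for 'is this coin fair?' under the null of p = 0.5."""
    expected = flips / 2
    def prob(k):
        return comb(flips, k) / 2 ** flips
    # Probability of outcomes at least as extreme as the one observed.
    return sum(prob(k) for k in range(flips + 1)
               if abs(k - expected) >= abs(heads - expected))

print(round(p_value(60, 100), 3))  # ≈ 0.057: not quite below the .05 bar
print(round(p_value(65, 100), 3))  # ≈ 0.004: below even the .01 bar
```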

10 - Polling - Polls are a twist on the previous theorem because we are now looking for a percentage, not the mean and deviations from it. The importance of response rate to any poll is covered. 

11 - Regression Analysis - Regression analysis is the best tool for finding meaning in complex data sets. It's basically the line you see in a scatter plot chart that shows the association between the X and Y axes. This is the SECOND BEST CHAPTER IN THE BOOK and a MUST for digital marketers. The chapter is a bit mathematical, so I recommend reading the earlier chapters first. Also be sure to catch the t-distribution discussion in the appendix.
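That scatter-plot line is simple least squares, and it fits in a dozen lines of standard-library Python. The ad-spend numbers below are hypothetical, purely for illustration:

```python
# Minimal sketch of simple linear regression (ordinary least squares).
def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line y = slope*x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical: ad spend ($k) vs. weekly site visits (thousands).
spend = [1, 2, 3, 4, 5]
visits = [2.1, 3.9, 6.2, 7.8, 10.1]
slope, intercept = fit_line(spend, visits)
print(round(slope, 2), round(intercept, 2))  # → 1.99 0.05
```

Read the slope as the association: each extra $1k of spend goes with roughly 2,000 more visits, in this made-up data.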

12 - Common Regression Mistakes - Finding a connection in the data does not mean the analysis is free of risk. An example is the mass prescription of estrogen to older women in the 1990s, where a later clinical trial discovered significant health risks. The New York Times Magazine estimated tens of thousands of women died prematurely. Regression analysis is a powerful - and potentially dangerous - statistical tool. This chapter outlines seven common mistakes, from non-linear relationships to too many variables.

13 - Program Evaluation - This chapter covers how a good test is constructed to determine cause and effect. It leaves you with a very good appreciation of how HARD it is to cleanly isolate these two. Some elegant examples are covered including how natural events can make good experimental models.  

Conclusion - Five examples of how statistics is making the world a better place. 

Appendix - Statistical software - Excel, SAS, R, and SPSS are reviewed. 

Friday, August 1, 2014

How to Make it Rain Money in Digital Marketing

A few weeks ago I reached out to fellow XChange attendee and friend, Bob Page, because I wanted to understand more about the modern data architectures driving high speed analytics. Bob is the VP of Partner Product Management at Hortonworks, a company that enables modern data architecture via Apache Hadoop.
I like technology so I often make it a habit with my clients to add a few slides showing how the data architecture would need to change as companies blend multiple data sources and eventually do more real time analytics.
You have never seen marketers do a faster "head scratch and tilt" than when the database slide comes up. IT and marketing rarely understand each other. And yet, a recent IBM study points out that where CMOs and CIOs work together, the company is 76% more likely to outperform in terms of revenue and profitability. SEVENTY-SIX PERCENT! Is this not a great reason to make some new friends with a box of doughnuts?
As digital marketers we must seek ways to work with IT, even if it is just to keep them informed. I am not saying it is easy. I am saying that high quality digital marketing eventually includes the entire company and all it can bring to bear to delight the customer.
Here are two big reasons why you might care about IT today:

1. Your ability to get insights could eventually stall

Traditional databases are slow. You probably know this, so here's an example to illustrate why they are slow. Let's say you go to a party and decide to look for a specific person. You do not know this person's name so you introduce yourself and meet every single person in the room until you find the person you are looking for. You might speed this process up a bit if you know their hair color or another attribute. But you still have to plow through the structure of each personal meeting to find the person.
This is how a relational database works. It reads each table all the way through to find the data you need. You can speed it up by organizing these tables by attributes. For example, one table might hold physical attributes such as hair, gender, and height. Traditional databases require the data to be structured.
And this is why skills such as SQL are in such high demand. They query these traditional databases and pull out the structured data. The same applies to data visualization systems like Tableau. Tableau needs structure to work its magic as anyone who has accidentally processed 2 million rows can tell you.
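The party analogy maps directly onto code. In this toy sketch (the guest data is invented), a full table scan "meets" every row until it finds a match, while an index, like a guest list sorted by name, jumps straight to the answer:

```python
# Toy illustration: full table scan vs. an index lookup.
guests = [{"name": f"guest{i}", "hair": "brown"} for i in range(100_000)]
guests.append({"name": "Ada", "hair": "black"})

def table_scan(name):
    """Meet every person in the room until you find the one you want."""
    for row in guests:
        if row["name"] == name:
            return row
    return None

# An index is like a guest list keyed by name: one lookup, no scanning.
index = {row["name"]: row for row in guests}

print(table_scan("Ada") is index["Ada"])  # True: same row, very different work
```

The scan touches 100,001 rows; the index touches one. That difference is essentially what SQL query optimization and database indexing are about.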
Your IT department may have a lot of traditional databases but it won't always be that way. Who will you turn to when the high speed systems like Hadoop come to town? You may land an experimental Hadoop server under your desk in the marketing department, but don't expect it to stay there. It belongs in IT.

2. Your ability to add fresh data could stall

Older databases cannot handle the volume. Information carried by customer records and ERP systems is paltry compared to modern data types.
Let's say traditional data volume is represented by one Twinkie a day placed on your desk. Modern data in the form of pictures, videos, and clickstream (web) data is akin to thousands of Twinkies filling up your cubicle every minute. Not only is it coming in high volumes, but it contains a lot more unorganized data.
Dealing with the rules and regulations of data hygiene has long been the domain of IT. As new data sources are added (these help you gain a comprehensive view of the customer) they need to be landed, processed and made available for use. IT folks are well versed in this.

How to bridge the gap

Digital data is largely unstructured. The modern route is to use Hadoop, which is cheap and scalable with free open-source software, to land a virtual "data lake." To do this you are going to need substantial technical help. Consider bridging the gap between Marketing and IT by designating a technically inclined marketing person to attend a few IT meetings, one-time or quarterly, and start to understand what IT cares about in your organization. The more you understand about the world of IT, the faster you will converge on that elite group of companies that are 76% more likely to outperform their competition.

Monday, July 21, 2014

Two Ways to Create Customer Bliss with Choice

Last week I ventured out to what I thought was a digital marketing networking event. In the usual manner, I checked in, picked up my drink ticket and then went to put on a name badge. And that is where I was greeted with an unexpected degree of complexity:

Yes, there were actually 10 choices of name tag color. Further, each color represented at least 3 more choices which were not necessarily related, such as engineering and graphic design (the orange category). After picking light blue for consulting, I began to mix with the crowd. Now, I like to think I have a good memory, but I made at least 3 trips back to the sign-in table to remind myself what each color meant. So red is marketing and yellow is sports... what if I want to talk to a sports marketer?

And this got me thinking about choice. There has been some good research nicely summarized in two TED lectures about this topic, one from Barry Schwartz and the other from Sheena Iyengar. Basically, too many choices make us unhappy and can even inspire no action at all (aka "analysis paralysis"). We become less satisfied with the choice we made because it's easy to think we missed an opportunity and selected the less-than-ideal option. Further, too many choices raise our expectations, which leaves little opportunity to be pleasantly surprised.

How can we use this in digital marketing analytics? I can think of two ways but you are welcome to contribute more in the comments section below.

1. Group then cull products

Researchers found that when a large volume of choices is available, such as magazines in a rack, grouping them into as few as 4 categories increased sales. This aligns with the way we memorize data by "chunking". It makes us happy to know we have found the right category and can now focus on the limited remaining choices. There is no research on the ideal number of choices, but if we take a page from memorization techniques, I would guess it is no more than 6. So, if you had 4 primary groups with 6 secondary choices, this would be ok. But 10 groups of 3 to 5 options would not be ideal, as we saw with the name tags.

So if you have products or services to sell, consider how many choices your audience can dive into from the top. Are the areas clearly chunked? Here is an example of a navigation failure. The services menu starts off well by chunking the choices into 3 groups, but then 9 subgroups cause too much confusion about which choice to select.

Services broken into 3 groups and then 9 subgroups is less than ideal
This is where I would run an experiment to regroup the categories, perhaps even merging or eliminating some altogether. To be clear, what I am suggesting is an A/B test which has a control group, not a wholesale change. The research says (all other things being equal) sales should increase as well as customer satisfaction. Measure satisfaction with a quick survey. Now that's an interesting test!

2. Ramp engagement

People walk away when it is too difficult to make a choice. The solution, Iyengar found, is to gradually ramp the complexity. For digital marketing analytics, I like to think of this in terms of ramping the relationship, also called engagement. As we strive to know more about our customers, ask for small engagements first. Then build up to larger asks. This key concept has long been understood in negotiation.

Applying this to website engagement, consider what you want people to do first. For example, if I want customers to download a whitepaper, then putting a large, complex form in front of them would increase the complexity and cause them to walk away. Gradually asking for more information each time is a better choice. The same goes for introductory videos or tools.

What about the back end of the sales cycle? We've gradually engaged and nurtured a customer all the way through the sales cycle and now... nothing. Would this not be the richest time to ask for higher engagement, perhaps in the form of a product review on the site or even a tweet or share? Some companies are afraid they won't be able to control this information and bad reviews might circulate. However, if a person is unhappy with your product or service already, wouldn't you rather immediately address the problem than discover it randomly and perhaps too late? After all, you are engaged now.


Customer bliss comes from the reduction of friction, and often that friction appears as choice. Too many choices lead to paralysis. Digital marketing analytics can solve for this by culling and testing products and product groupings. Analytics also reminds us to ramp engagement as we ramp customer relationships.

Although I was planning to spend several hours in pleasant conversation at this Portland networking event, I actually left after 20 minutes. I met a financial planner, two bank tellers and an engineer. Did the overwhelming choice of name tags contribute to my dissatisfaction? Sure, a bit. It was difficult to know who to engage and how to engage. Maybe next time I will just enjoy a Web Analytics Wednesday.

Tuesday, July 15, 2014

Book Review: The Signal and the Noise
by Nate Silver

Nate Silver's The Signal and the Noise is a forecasting book with broad appeal. Read this book if you want to understand more about decision-making, statistics and predictive analytics without having to mine a textbook. The book is richly researched, well organized and packed with engaging examples. It is especially valuable for digital analytics professionals and marketing executives who may be facing pressure to provide more predictions.

Readability: 4 out of 5 stars.
The text is at the level of the Financial Times which is to say about 11th grade. Lots of compound complex sentences, footnotes and about 50-100 reference notes per chapter.

Impact: 5 out of 5 stars.
If you read this book, it will change the way you look at major world events and think about prediction. You will also feel smarter (isn't that great?). The potential impact is high.

Speed read pattern: To get the main idea without digesting the full book, I recommend hitting the conclusion first, then the introduction, then chapters 2, 4, 8 and 10 in that order. A word of warning though, you may find yourself drawn into the book and end up starting at the beginning anyway. I did.

The book is organized into four main sections. Each grouping below contains chapter summaries with a few insights I found useful. There are many more insights in the book.

Failures of Prediction. Chapters 1-3

Examples of how noise was mistaken for signal.

1. Financial crisis. We focus on signals that tell us how we would LIKE things to be, not how they really are. This creates major blind spots in our models. When the system fails, these blind spots finally come to light (Moneyball, the financial meltdown). There is a very good chart about accuracy vs. precision at the end of this chapter. Accurate and precise equals a good forecast.

2. Politics. Hedgehog vs. Fox thinking. Hedgehogs are fixed, overly confident, weak forecasters (e.g. political pundits). Political TV pundits make terrible predictions, no better than random guesses. Their goal is to entertain. They are not penalized for being wrong. Foxes, on the other hand, continuously adapt their theories and are cautious, modest, better forecasters. Foxes qualify and equivocate a lot, which makes for less dramatic TV by people who are more likely to be correct.

3. Moneyball. Statistics have not replaced talent scouts altogether. Prediction is always an art and a science.

Dynamic Systems of Prediction. Chapters 4-7

How dynamic systems make forecasting even more difficult.
The Analyst's Prayer

4. Weather. Prediction has improved due to highly sophisticated, large-scale supercomputers. However, humans still improve the accuracy of precipitation models by 25% over computers alone and temperature forecasts by 10%. Weather is an exponential system which can see a huge impact when initial factors are off by small amounts. This explains why it makes sense to think of outcomes as a range (95% likely or 50% likely).
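The sensitivity Silver describes is easy to demonstrate with the logistic map, a classic toy model of chaos (my illustration, not from the book). Two runs that start one part in a million apart track each other briefly, then diverge completely:

```python
# Sensitivity to initial conditions: the logistic map at r = 4 is chaotic.
def trajectory(x0, steps=30, r=4.0):
    """Iterate x -> r * x * (1 - x) and return the whole path."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.400000)
b = trajectory(0.400001)  # initial condition off by one part in a million

print(abs(a[5] - b[5]) < 1e-3)   # True: still tracking after 5 steps
print(max(abs(x - y) for x, y in zip(a[20:], b[20:])) > 0.1)  # True: long gone
```

This is why small measurement errors wreck a weather forecast, and why a probability range is more honest than a point prediction.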

5. Earthquakes. We have almost no ability to predict earthquakes. But we know that some regions are more earthquake prone. The random noise scientists have historically used to predict earthquakes is an example of how to overfit a model (to fit the noise rather than the underlying structure).
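Overfitting can be shown in miniature (my toy example, not Silver's). Below, the true "signal" is just a constant and everything else is noise. A model that memorizes the training data scores perfectly on it, yet loses to a dumb mean-only model on fresh data, because it learned the noise:

```python
import random

rng = random.Random(2)
# The underlying structure is constant (10); the rest is noise.
train = [(i, 10 + rng.gauss(0, 2)) for i in range(100)]
test = [(i, 10 + rng.gauss(0, 2)) for i in range(100)]

memorized = dict(train)                               # overfit: fits noise exactly
mean_model = sum(y for _, y in train) / len(train)    # simple: fits the structure

def mse(predict, data):
    """Mean squared error of a prediction function on (x, y) pairs."""
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

print(mse(lambda x: memorized[x], train))  # 0.0 -- perfect on training data
print(mse(lambda x: memorized[x], test) >
      mse(lambda x: mean_model, test))     # True -- worse on new data
```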

6. Economics. The exponential growth of things to measure will not yield more signal, but more noise. The danger in big data is losing sight of this underlying data story.

7. Disease. Self-fulfilling predictions can be caused by the sheer act of releasing the prediction. For example, when news about H1N1 flu is broadcast, more people go to doctors and more H1N1 is identified. Self-cancelling predictions can also occur. Navigation systems show where the least traffic is but simultaneously invalidate the route by sending all traffic there en masse.

Prediction Solutions. Chapters 8-10

How to use Bayes Theorem to think probabilistically.

8. Gambling. Bayes' Theorem is a powerful tool which leads to vast predictive insights. It allows us to use probability ("the waypoint between ignorance and knowledge," Silver says) to get closer and closer to the truth as we gather more evidence. Again, predictions are MORE prone to failure in the era of big data because there are exponentially more hypotheses to test, and yet the number of meaningful relationships does not increase.
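A Bayesian update fits in a few lines. This sketch uses an invented digital-marketing example (the conversion numbers are mine, not Silver's): start with a prior, observe one piece of evidence, and get a revised probability:

```python
# One step of Bayes' Theorem: posterior = P(H) * P(E|H) / P(E).
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Prior belief a visitor will convert: 5%. They just viewed the pricing page,
# which (say) 60% of converters do but only 10% of non-converters do.
posterior = bayes_update(0.05, 0.60, 0.10)
print(round(posterior, 2))  # → 0.24
```

Each new piece of evidence feeds the posterior back in as the next prior, which is exactly the "closer and closer to the truth" loop the chapter describes.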

9. Chess. Simplified models or heuristics (e.g. always run away from danger) are used in chess. These necessarily produce biases and blind spots. Observe - hypothesize - predict - test helps us converge toward the truth. Beware absolute truths which are untestable. Computers are great calculators but they still have trouble coming up with creative ideas to test.

10. Poker. There is a "water level" in some fields where getting the first 80% right is easy and the remaining 20% is hard. Poker was such a field at one time. Overconfidence is rampant here. We must accept the fallibility of our judgments if we want to come to more accurate predictions.

Hardest to Predict Problems. Chapters 11-13

How to make the world a little safer.

11. Stock market. Consistency produces superior results, but most data ranges are too short to show this. It is nearly impossible to beat the market. The test is that no model is able to beat it predictably over time.

12. Climate change. Very few scientists doubt greenhouse gases cause global warming. Temperature data is quite noisy which makes scientists uncertain about the details. Estimating uncertainty is essential. The further you move away from consensus, the stronger the evidence must be.

13. Terrorist attacks. We failed to predict both Pearl Harbor and September 11th as a result of "unknown unknowns." Logarithmic scales can help us overcome these blind spots.

Buy the book at Amazon


Humans like simplicity and we despise uncertainty. This makes it easy for us to jump in and look for quick answers or predictions.

For digital marketers this means:

1. Testing is the rule not the exception
2. Be prepared to have your hypotheses proven wrong, a lot. The noise is growing exponentially.
3. When asked to predict the future, put it in Bayesian terms. "There is a 1 in 10 chance this test will succeed."

Nate Silver's book encourages all of us to slow down, consider the imperfections and look for hypotheses to test, which eventually bring us closer to the truth.

Tuesday, July 8, 2014

Pandora's Blind Spot: Two Simple Ways to Make Your Digital Marketing Data Smarter

Last week in a flash of inspiration, I decided to create a special station on Pandora. The basic premise would simply be women singers rocking out about powerful women's topics. I searched the existing channels for "women" and "female" and words related to "power" and came up with nothing. That was rather sad, but fortunately, on Pandora I can build my own channel.

So I started with a series of "seeds" to kick off the kind of artists I had in mind. Seeds represent the core of the station. Pandora takes the profile of these songs and finds similar artists using the Music Genome Project. Notice my seeds are all female artists. And I'm not exactly up-to-date on music, which is why I use Pandora to introduce me to new artists.

I needed to train my new station for two concepts.

Concept one: singer must be female.

This should be easy to train. Male or female are pretty clear concepts.

Concept two: singer's topic must be empowering. No whining.

This would be the more difficult concept to train because "empowering" is not clearly defined. I would have to cull this content over time. 

It was a decent start and now music is playing, great!

Original Seed song list

And then I noticed a series of male rap stars coming through. Now, I've curated another channel called Bad Girlfriend, which I use for working out, that contains a bunch of these artists. So it's not completely unusual that these male artists might bleed through. Every time a male singer came up, I selected Thumbs Down or moved the song to another channel, which rejects it from this station, eventually "teaching" Pandora what the station should be.

Thumbs Down list on new Pandora Station

I actually gave the thumbs down to so many songs while training the station that I received an error message from Pandora saying I'd exceeded my thumbs down limit for the day. Who knew? This told me the concept of "female singer" does not exist in Pandora's data. Any system that could tell the difference between male and female lead singers would have caught the pattern and it wouldn't have to be IBM's Watson to do it. Which brings me to this fundamental tenet about data:

Data is only as smart as the information it carries. 

If you have ever tried to combine data streams from, for example, an agency's detailed paid search spreadsheet with basic web visit and page data, then this is for you. Data streams are like straws. They only carry what you put in the glass. If you want to extract value from your analytics, then think about adding these two essentials to any data stream: 

Essential #1: What is the purpose or goal?

Why did you send out this content in the first place? For digital marketing data, I recommend using a discrete set of 5-7 labels such as attract, engage, build loyalty, or however you visualize your customer stages. In Pandora's case, which is product data, I might tap the 7 universal emotions to capture why someone chose to create a station.

Essential #2: Who is the target?

Who is the audience? This is especially nice for digital marketing data when you have multiple audiences such as business units or multiple customer types such as partners and prospects to track. In the Pandora product example, my target is female singers.

Adding more intelligence to the data can happen gradually over time so there's no need to think of every circumstance and drive your technical team crazy. Start with these two fundamentals and I guarantee it will immediately boost your digital marketing analytics results. 

And just for kicks, you can sample the Fierce Powerful (women) Pandora station here.