Monday, July 21, 2014

Two Ways to Create Customer Bliss with Choice

Last week I ventured out to what I thought was a digital marketing networking event. In the usual manner, I checked in, picked up my drink ticket and then went to put on a name badge. And that is where I was greeted with an unexpected degree of complexity:



Yes, there were actually 10 choices of name tag color. Further, each color represented at least 3 more choices which were not necessarily related, such as engineering and graphic design (the orange category). After picking light blue for consulting, I began to mix with the crowd. Now, I like to think I have a good memory, but I made at least 3 trips back to the sign-in table to remind myself what each color meant. So red is marketing and yellow is sports... what if I want to talk to a sports marketer?

And this got me thinking about choice. There has been some good research nicely summarized in two TED lectures about this topic, one from Barry Schwartz and the other from Sheena Iyengar. Basically, too many choices make us unhappy and can even inspire no action at all (aka "analysis paralysis"). We become less satisfied with the choice we made because it's easy to think we missed an opportunity and selected the less-than-ideal option. Further, too many choices raise our expectations, which leaves little opportunity to be pleasantly surprised.

How can we use this in digital marketing analytics? I can think of two ways but you are welcome to contribute more in the comments section below.

1. Group then cull products

Researchers found that when a large volume of choices is available, such as magazines in a rack, grouping them into as few as 4 categories increased sales. This aligns with the way we memorize data by "chunking". It makes us happy to know we have found the right category and can now focus on the limited remaining choices. There is no research on the ideal number of choices, but if we take a page from memorization techniques, I would guess it is no more than 6. So 4 primary groups with 6 secondary choices each would be fine, but 10 groups of 3 to 5 options would not be ideal, as we saw with the name tags.

So if you have products or services to sell, consider how many choices your audience can dive into from the top. Are the areas clearly chunked? Here is an example of a navigation failure: the services menu starts off well by chunking the choices into 3 groups, but the 9 subgroups that follow cause too much confusion about which choice to select.

Services broken into 3 groups and then 9 subgroups is less than ideal
This is where I would run an experiment to regroup the categories, perhaps even merging or eliminating some altogether. To be clear, what I am suggesting is an A/B test with a control group, not a wholesale change. The research says that, all other things being equal, sales should increase and so should customer satisfaction. Measure satisfaction with a quick survey. Now that's an interesting test!
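A minimal sketch of how such an A/B test might be read out, assuming we count conversions for the control (the current navigation) and the variant (the regrouped menu). The numbers here are entirely hypothetical; a two-proportion z-test is just one common way to check whether the lift is real.

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate different from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error of the difference
    return (p_b - p_a) / se

# Hypothetical results: control = old navigation, variant = regrouped menu
z = two_proportion_z(conv_a=120, n_a=4000, conv_b=165, n_b=4000)
print(round(z, 2))  # → 2.71; |z| > 1.96 is roughly significant at the 95% level
```

With made-up numbers like these, the regrouped menu would clear the usual significance bar, but the survey-based satisfaction score deserves the same treatment before declaring victory.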

2. Ramp engagement

People walk away when it is too difficult to make a choice. The solution, Iyengar found, is to gradually ramp the complexity. For digital marketing analytics, I like to think of this in terms of ramping the relationship, also called engagement. As we strive to know more about our customers, ask for small engagements first, then build up to larger asks. This key concept has long been understood in negotiation.

Applying this to website engagement, consider what you want people to do first. For example, if I want customers to download a whitepaper, then putting a large, complex form in front of them would increase the complexity and cause them to walk away. Gradually asking for more information each time is a better choice. The same goes for introductory videos or tools.


What about the back end of the sales cycle? We've gradually engaged and nurtured a customer all the way through the sales cycle and now... nothing. Would this not be the richest time to ask for higher engagement, perhaps in the form of a product review on the site or even a tweet or share? Some companies are afraid they won't be able to control this information and bad reviews might circulate. However, if a person is unhappy with your product or service already, wouldn't you rather immediately address the problem than discover it randomly and perhaps too late? After all, you are engaged now.

Summary

Customer bliss comes from the reduction of friction, and often that friction appears as choice. Too many choices lead to paralysis. Digital marketing analytics can solve for this by culling and testing products and product groupings. Analytics also reminds us to ramp engagement as we ramp customer relationships.

Although I was planning to spend several hours in pleasant conversation at this Portland networking event, I actually left after 20 minutes. I met a financial planner, two bank tellers and an engineer. Did the overwhelming choice of name tags contribute to my dissatisfaction? Sure, a bit. It was difficult to know who to engage and how to engage. Maybe next time I will just enjoy a Web Analytics Wednesday.




Tuesday, July 15, 2014

Book Review: The Signal and the Noise
by Nate Silver

Nate Silver's The Signal and the Noise is a forecasting book with broad appeal. Read this book if you want to understand more about decision-making, statistics and predictive analytics without having to mine a textbook. The book is richly researched, well organized and packed with engaging examples. It is especially valuable for digital analytics professionals and marketing executives who may be facing pressure to provide more predictions.


Readability: 4 out of 5 stars.
The text is at the level of the Financial Times, which is to say about 11th grade: lots of compound-complex sentences, footnotes and about 50-100 reference notes per chapter.

Impact: 5 out of 5 stars.
If you read this book, it will change the way you look at major world events and think about prediction. You will also feel smarter (isn't that great?). The potential impact is high.

Speed read pattern: To get the main idea without digesting the full book, I recommend hitting the conclusion first, then the introduction, then chapters 2, 4, 8 and 10 in that order. A word of warning though, you may find yourself drawn into the book and end up starting at the beginning anyway. I did.

The book is organized into four main sections. For each section below, I've summarized the chapters along with a few insights I found useful. There are many more insights in the book.

Failures of Prediction. Chapters 1-3

Examples of how noise was mistaken for signal.

1. Financial crisis. We focus on signals that tell us how we would LIKE things to be, not how they really are. This creates major blind spots in our models. When the system fails, these blind spots finally come to light (Moneyball, the financial meltdown). There is a very good chart about accuracy vs. precision at the end of this chapter: accurate plus precise equals a good forecast.

2. Politics. Hedgehog vs. fox thinking. Hedgehogs are fixed, overly confident, weak forecasters (e.g. political pundits). Political TV pundits make terrible predictions, no better than random guesses; their goal is to entertain, and they are not penalized for being wrong. Foxes, on the other hand, continuously adapt their theories and are cautious, modest and better forecasters. Foxes qualify and equivocate a lot, which makes for less dramatic TV by people who are more likely to be correct.

3. Moneyball. Statistics have not replaced talent scouts altogether. Prediction is always an art and a science.

Dynamic Systems of Prediction. Chapters 4-7

How dynamic systems make forecasting even more difficult.
The Analyst's Prayer


4. Weather. Prediction has improved due to highly sophisticated, large-scale supercomputers. However, humans still improve the accuracy of precipitation models by 25% over computers alone and temperature forecasts by 10%. Weather is a dynamic, nonlinear system in which small errors in the initial conditions compound into huge differences in the outcome. This explains why it makes sense to express forecasts as a range of probabilities (95% likely or 50% likely).

5. Earthquakes. We have almost no ability to predict individual earthquakes, but we know that some regions are more earthquake prone. The random noise scientists have historically used to predict earthquakes is an example of overfitting a model (fitting the noise rather than the underlying structure).

6. Economics. The exponential growth of things to measure will not yield more signal, only more noise. The danger in big data is losing sight of this underlying data story.


7. Disease. Self-fulfilling predictions can be caused by the sheer act of releasing the prediction. For example, when news about H1N1 flu is broadcast, more people go to doctors and more H1N1 is identified. Self-cancelling predictions can also occur. Navigation systems show where the least traffic is but simultaneously invalidate the route by sending all traffic there en masse.

Prediction Solutions. Chapters 8-10

How to use Bayes Theorem to think probabilistically.

8. Gambling. Bayes' Theorem is a powerful tool which leads to vast predictive insights. It allows us to use probability ("the waypoint between ignorance and knowledge," Silver says) to get closer and closer to the truth as we gather more evidence. Again, predictions are MORE prone to failure in the era of big data because there are exponentially more hypotheses to test and yet the number of meaningful relationships does not increase.

9. Chess. Simplified models or heuristics (e.g. always run away from danger) are used in chess. These necessarily produce biases and blind spots. Observe - hypothesize - predict - test helps us converge toward the truth. Beware absolute truths which are untestable. Computers are great calculators but they still have trouble coming up with creative ideas to test.

10. Poker. There is a "water level" in some fields where getting the first 80% right is easy and the remaining 20% is hard. Poker was such a field at one time. Overconfidence is rampant here. We must accept the fallibility of our judgments if we want to come to more accurate predictions.

Hardest to Predict Problems. Chapters 11-13

How to make the world a little safer.

11. Stock market. Consistency produces superior results, but most data ranges are too small to show this. It is nearly impossible to beat the market; the test is that no model is able to beat it predictably over time.

12. Climate change. Very few scientists doubt greenhouse gases cause global warming. Temperature data is quite noisy which makes scientists uncertain about the details. Estimating uncertainty is essential. The further you move away from consensus, the stronger the evidence must be.

13. Terrorist attacks. We failed to predict both Pearl Harbor and September 11th as a result of "unknown unknowns." Logarithmic scales can help us overcome these blind spots.



Buy the book at Amazon

Summary

Humans like simplicity and we despise uncertainty. This makes it easy for us to jump in and look for quick answers or predictions.

For digital marketers this means:

1. Testing is the rule not the exception
2. Be prepared to have your hypotheses proven wrong, a lot. The noise is growing exponentially.
3. When asked to predict the future, put it in Bayesian terms. "There is a 1 in 10 chance this test will succeed."

Nate Silver's book encourages all of us to slow down, consider the imperfections and look for hypotheses to test which eventually bring us closer to the truth.


Tuesday, July 8, 2014

Pandora's Blind Spot: Two Simple Ways to Make Your Digital Marketing Data Smarter

Last week, in a flash of inspiration, I decided to create a special station on Pandora. The basic premise would simply be women singers rocking out about powerful women's topics. I searched the existing channels for "women" and "female" and words related to "power" and came up with nothing. That was rather sad, but fortunately, on Pandora I can build my own channel.

So I started with a series of "seeds" to kick off the kind of artists I had in mind. Seeds represent the core of the station: Pandora takes the profile of these songs and finds similar artists using the Music Genome Project. Notice my seeds are all female artists. I'm not exactly up-to-date on music, which is why I use Pandora to introduce me to new artists.

I needed to train my new station for two concepts.

Concept one: singer must be female.

This should be easy to train. Male or female are pretty clear concepts.

Concept two: singer's topic must be empowering. No whining.

This would be the more difficult concept to train because "empowering" is not clearly defined. I would have to cull this content over time. 

It was a decent start and now music is playing, great!

Original Seed song list

And then I noticed a series of male rap stars coming through. Now, I've curated another channel called Bad Girlfriend, which I use for working out, and it contains a bunch of these artists. So it's not completely unusual that these male artists might bleed through. Every time a male singer came up, I selected Thumbs Down or moved the song to another channel, which rejects it from this station, eventually "teaching" Pandora what the station should be.

Thumbs Down list on new Pandora Station

I actually gave the thumbs down to so many songs while training the station that I received an error message from Pandora saying I'd exceeded my thumbs down limit for the day. Who knew? This told me the concept of "female singer" does not exist in Pandora's data. Any system that could tell the difference between male and female lead singers would have caught the pattern, and it wouldn't take IBM's Watson to do it. Which brings me to this fundamental tenet about data:

Data is only as smart as the information it carries. 

If you have ever tried to combine data streams from, for example, an agency's detailed paid search spreadsheet with basic web visit and page data, then this is for you. Data streams are like straws. They only carry what you put in the glass. If you want to extract value from your analytics, then think about adding these two essentials to any data stream: 

Essential #1: What is the purpose or goal?

Why did you send out this content in the first place? For digital marketing data, I recommend using a discrete set of 5-7 labels such as attract, engage and build loyalty, or however you visualize your customer stages. In Pandora's case, which is product data, I might tap the 7 universal emotions to capture why someone chose to create a station.

Essential #2: Who is the target?

Who is the audience? This is especially nice for digital marketing data when you have multiple audiences such as business units or multiple customer types such as partners and prospects to track. In the Pandora product example, my target is female singers.
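The two essentials above can be carried in any data stream simply by attaching them as fields at the point of collection. A minimal sketch with hypothetical campaign records (the label set and field names are illustrative, not a standard):

```python
# A discrete, closed set of purpose labels keeps reporting consistent.
PURPOSES = {"attract", "engage", "build_loyalty", "convert", "retain"}

def tag_record(record, purpose, target):
    """Attach the goal and audience labels so downstream analytics can group on them."""
    if purpose not in PURPOSES:
        raise ValueError(f"unknown purpose: {purpose}")
    return {**record, "purpose": purpose, "target": target}

hit = {"campaign": "spring_newsletter", "clicks": 412}
tagged = tag_record(hit, purpose="engage", target="existing_donors")
print(tagged["purpose"], tagged["target"])
```

Because the labels ride along with every record, reports can immediately answer "engagement content aimed at existing donors" instead of dumping raw click counts.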


Adding more intelligence to the data can happen gradually over time so there's no need to think of every circumstance and drive your technical team crazy. Start with these two fundamentals and I guarantee it will immediately boost your digital marketing analytics results. 

And just for kicks, you can sample the Fierce Powerful (women) Pandora station here.


Tuesday, July 1, 2014

Why your KPIs are meaningless (and what to do about it)


This post originally ran in Online Behavior.com in January 2014. 

It’s that “most wonderful time of the year” when annual reports are generated explaining what the digital team did and the value it produced. Yet if, like many companies, you report value as page view volume, visit volume, social likes or tweets, then your boss is most likely to say “So what?” And if they don’t, they should. For example, there’s no way to tell whether an increase in time spent on site reflects confusion or engagement. But these are topics for another post. If you are lucky, he or she will grudgingly renew your budget for another year on the simple idea that digital marketing is not going away any time soon.
But there is so much more you could be doing with essentially the same data.

Welcome to the miracle of audience segmentation. Yes, there are actually groups of people with similar goals who come to your sites and mobile apps and interact with you in the social sphere. But when your digital marketing metrics are reported as “most popular blog post” or “most liked page” or even “most popular video,” it sounds like valuable information about what people are doing, but it’s really not.
What matters is the discrete audience groups behind those KPIs and to find them you have to separate the signal from the noise.

A good example comes from the non-profit space. This is the busiest time of the year for organizations that count on donors to meet their missions (and if you have not yet picked a charity to donate to, then use this link to find a good one).

Support your favorite charity

As with many companies, in the non-profit space there are news releases, social postings and tweets, site updates, and numerous ways to “engage” or encourage the desired action, in this case donation. Basic KPI reporting for charities would likely include dollars donated, top pages, and maybe even list marketing campaign or newsletter click-throughs.

Yet this completely misses the point because nonprofit data is filled with noise. Nonprofits are commonly inundated with job seekers, so the audience must be grouped before the metrics have any meaning.
Grouping the audience means creating mutually exclusive definitions which capture 95% or more of your existing non-bounce visits. So, we might start with a 3 month time period to get a reasonable sample of diverse audience visit behavior. Then we might create a handful of segments that group behavior.
Anyone who landed on the jobs page or consumed it within the first 2 pages of their visit goes in the “Job Seekers” bucket. Anyone who immediately goes to the event space section becomes “Event Planners” and those who shop the holiday “give in-honor-of section” are the “Donors”.

Once the initial buckets are defined – and mutually exclude each other by definition – then look at what’s left to see what was missed. Should an existing bucket have an expanded definition? Or perhaps should there be a new audience type? When you are finished there should be about 5-7 groups.
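The bucket definitions above can be sketched as ordered rules: the first matching rule wins, which guarantees the buckets are mutually exclusive, and everything left over lands in an "Unclassified" bucket to review. Page names here are hypothetical stand-ins for real URL paths.

```python
def classify_visit(pages):
    """Assign a visit (ordered list of pages viewed) to exactly one audience bucket."""
    if "jobs" in pages[:2]:             # jobs page within the first 2 pages of the visit
        return "Job Seekers"
    if pages and pages[0] == "events":  # went straight to the event space section
        return "Event Planners"
    if "give-in-honor" in pages:        # shopped the holiday giving section
        return "Donors"
    return "Unclassified"               # review these to expand or add definitions

visits = [["home", "jobs"], ["events", "rates"], ["home", "news", "give-in-honor"]]
print([classify_visit(v) for v in visits])  # → ['Job Seekers', 'Event Planners', 'Donors']
```

Running three months of visits through rules like these, then inspecting the "Unclassified" share, is exactly the refine-and-repeat loop described above: keep adjusting until the named buckets capture 95% or more of non-bounce visits.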
Now take those same KPIs and run them through audience types and your report sounds more human.

- Job seekers really like the photo contest and content about volunteering, but our Donors prefer news content specifically about Syria.
- Our paid search campaign on these terms is generating less donation interest – even over multiple visits – than these other terms.
- While we are seeing more products purchased this year overall, most purchases are coming from existing donors. Only 5% of our newly recruited donor targets are converting, many of whom we acquired in paid search, but 75% simply bounced off the site.

Digital Storytelling

The best part of grouping your audience is that you've now become a storyteller, not a KPI data generator, and everyone loves a good story.  

If you would like to learn more about data storytelling, check out the Digital Analytics Association Seattle upcoming 2014 speaker series. If you’re looking for help executing a measurement framework, give me a call.   

Wednesday, December 4, 2013

Modern Digital Analytics Tools

Here's the original copy of an article I wrote for online-behavior.com. You can see the published version here if you like. It's based on my presentation at the Seattle Digital Analytics Association which I have to say was a whole lot of fun. 

A new generation of digital analysis tools is arriving, a generation where speed is king and customer data is united. If you have been buried with reports and analysis from the typical tools, then take a break and check out these six modern analytics tools.

Heap Analytics
“Capture everything” is the theme of Heap Analytics. Similar to tag management, a small snippet of code is added to the site. Then a flood of data begins to roll into the system. This data comes in raw, in the form of an event feed: every click, tap, swipe and page is captured. Using the interface, these actions are labeled “Facebook Like” or “Twitter Follow”. Super Events can then be created, which are combinations of events. In our example, “Social Actions” would be a super event containing “Facebook Like” and “Twitter Follow”.
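To make the labeling idea concrete, here is a sketch of the concept in Python. This is not Heap's actual API or data format, just an illustration of how raw events can be mapped to labels and labels rolled up into super events after the fact.

```python
# Illustrative only -- not Heap's API. Raw events get labels; labels get grouped.
EVENT_LABELS = {
    ("click", "#fb-like"):   "Facebook Like",
    ("click", "#tw-follow"): "Twitter Follow",
}
SUPER_EVENTS = {"Social Actions": {"Facebook Like", "Twitter Follow"}}

def super_events_for(raw_event):
    """Map a raw (action, selector) event to every super event it belongs to."""
    label = EVENT_LABELS.get(raw_event)
    return [name for name, members in SUPER_EVENTS.items() if label in members]

print(super_events_for(("click", "#fb-like")))  # → ['Social Actions']
```

The appeal of the capture-everything model is visible even in this toy version: because the raw events are stored untouched, the label tables can be edited at any time and the history reinterprets itself.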

If data was not captured “right”, simply re-label it. New pages can be shipped anytime without waiting for an analytics QA. Once the events are defined, then the real magic of Heap takes over.

Unlike other analytic tools which group data fundamentally by page view, Heap organizes by visitor first. If there is no Visitor ID specified, it uses its own Visitor ID. Then when segments are defined such as “People who partially complete a form” the individuals who make up the segment can be listed. Further, the event stream related to each customer is attached to the ID so it’s possible to easily see the paths each person followed as they fell into your defined segment.  

Heap has a sliding pricing scale based on unique visitors. 500,000 unique visitors runs approximately $2000 a month. http://heapanalytics.com/

Lytics.io
Lytics.io is part of a new generation of tools designed to connect all your customer data and then turn around and power other marketing tools with sharply refined segments. Lytics can start with a small pool of data such as email addresses or web logs, then using a proprietary matching technology, it adds color to the customer record using external data sources from Rapleaf, Facebook and keyword search data. The end effect is the creation of a “gold customer record” using an ever-sharpening picture of your customer.

Using a series of rules, very fine segments can be created that use triggers to send emails, post messages or otherwise reach out to the customer. The customer record includes active internet times, active locations, active devices, subjects of interest, demographic and psychographic data.  

Lytics pricing starts around $5000/mo. for the initial model. http://lytics.io/


Nectar
Nectar Online Media specializes in “hyperpersonalization” driven by the unification of social media streams. Nectar also unites data to form a holistic view of the customer, however the goal is execution of speedy triggers designed to nudge customers into action at exactly the right time. For example, Ryan browses pages on your site showing the various models of cameras you sell. He spends a lot of time on one specific model. Then he leaves but doesn’t buy it. Using data from Ryan’s Facebook stream, you see he has an event coming up… he’s going on vacation to Belize. So you post a coupon for $100 off said camera in Ryan’s Facebook newsfeed. Then you use remarketing techniques to advertise the camera and accessories to him as he browses around the web. This is hyperpersonalization. 

Nectar pricing was not available. http://nectarom.com/

Infogr.am
Right or wrong, beautiful visual presentation of data can control how much it is read and consumed. But few graphic designers have analysis knowledge and few analysts have design skills. Enter Infogr.am. Infogr.am is an online system with ready-made infographic templates as well as customizable graphs, word clouds and icons. You can also upload and embed your own graphics and video.

The system is very user-friendly. If you have data ready to go then in about 5-10 minutes you can produce your infographic. Most of that time is exploring the various customization options. Infogr.am is a static system so once the graphic is posted, you cannot drill and dig into the numbers. It’s also not designed to take in huge volumes of data. But for meaningful summaries designed to get attention, it’s exactly right.

Infogr.am is free but if you desire more templates, there is a pro version for $12/month. http://infogr.am

Insight Rocket
Automated multichannel storytelling is the way to think about Insight Rocket. This tool combines the reporting strength and flexibility of Tableau with social commentary. This is particularly useful in larger companies where there may be multiple teams, brand and commerce for example, that define the numbers in different ways. Ideally, everyone should be using the same metrics but this is not usually the case. The teams need a way to connect and agree internally before data is surfaced as “truth” to management.

Insight Rocket also has a strong data integration team to help break down data silos in the first place.  So once data is combined and the Tableau reports are running, the analysts are free to dig into the data. Once an insight is found, it’s published just like a story and can be fed into email or an intranet system. Questions can be asked and answered right inline with the story.

Insight Rocket pricing ranges from $2000 - $10,000 per month depending on data sources and required support. http://www.insightrocket.com

Beyond Core
Beyond Core represents a new frontier in data analytics. It combines true data science methods with machine learning and automated video. The tool offers a collection of features for beginning analysts as well as seriously advanced ones. For beginning or non-analysts, the guidance feature is nothing short of amazing. After data is loaded into the system, the machine takes over, charting and plotting the most interesting data *and then* providing an automated, intelligent, animated voice-over that describes the most salient points in 2 minutes. Since approximately 95% of the population cannot easily interpret analysis, this is an extremely useful tool. Ask any analyst who has explained what the data means multiple times to the same stakeholders how nice it would be to send this out automatically.

Beyond Core also opens its platform for advanced analysis operations. Since the system is Hadoop-based, it can process millions of data rows in hours, not weeks, without sampling. As the data is loaded, the analyst selects which column the system should optimize for. Then the number crunching begins to show what factors are really driving the numbers. If an obvious answer comes back (e.g. the number of uninsured patients is driving up hospital costs), simply customize the analysis by throwing out that group and run it again. The second time, the deeper correlations show up, an operation which usually takes weeks to re-run. Further, because hundreds of variables can be included, any initial variable-selection bias is factored out.
   
Beyond Core is a self-service tool and pricing begins at $500 per month. http://beyondcore.com/

The big data revolution is driving a lot of change in digital analysis speed and tools. Senior management does not know it yet, which places analysts in the wonderful position of looking like heroes.