Talk of “conversion rate optimization” is everywhere. From ecommerce to landing pages, social media to email, webinars to the current presidential marketing campaigns… everybody’s hunting for those most elusive of digital big-game beasts: more clicks, more money, more growth.
The problem is most CRO guides come off sounding something like this …
Step 1: Change the color of your call-to-action button during checkout.
Step 2: Run an A/B test.
Step 3: Get a major lift in clicks, strike it rich, and retire to St. Barts.
Easy, right?
Well, not so much.
Not only is that an overly simplistic view of CRO, but -- as Kaiser Fung recently told Harvard Business Review -- it misrepresents the kind of results that ultimately lead to more clicks:
“I’d estimate that 80 to 90 percent of the A/B tests I’ve overseen would count as ‘failures’ to executives.
“By failure they mean a result that’s ‘not statistically significant’ -- A wasn’t meaningfully different from B to justify a new business tactic, so the company stays with the status quo.... When the vast majority of tests ‘fail,’ it is natural to wonder if testing is a waste of time and resources.”
Misconceptions about CRO abound, even when it comes to something as basic as defining success and failure.
To help you separate CRO myth from CRO fact, I’ve put together this list of the seven things everyone gets wrong about conversion-rate optimization and what to do to get yours right.
1. You Don’t Focus on the Right Process
Optimization is about adopting a holistic process for growth driven by the least sexy word in business: science.
While it might sound strange, this means starting your CRO endeavors at the macro level with corporate culture. For example, with optimization there’s no such thing as a failed test.
In large companies, getting optimization off the ground can be a laborious undertaking. Testing takes investment, and when that investment returns a negative result, it’s all too easy to chalk the whole thing up as a “failure” and start pointing fingers.
Don’t.
The entire point of good science is to find out what’s not true and what doesn’t work. That’s profoundly counterintuitive to us as humans. We’re accustomed to thinking in terms of right versus wrong, good versus bad, winner versus loser.
Without first educating your company culture and championing the process internally -- with a high degree of empathy and patience for those who don’t “get it” -- what turns out to be a “losing” test can not only hurt your reputation but easily derail your entire CRO approach.
Few sources are better for getting your optimization process right than ConversionXL’s Conversion Optimization Guide. In the first chapter they lay down these three inalienable truths:
- Your opinion doesn’t matter.
- You don’t know what will work.
- There are no magic templates for higher conversions.
Those first two truths are especially relevant to this mistake. The wrong process is built on their antitheses: ego and assumptions. The first is easy enough to see. The second -- assumptions -- is far sneakier.
Adopting a process of optimization means being willing to set aside both and instead embrace whatever road the data points to.
And it means being wide-eyed about that from the very start.
2. You Don’t Start with a Big-Picture Hypothesis
The tactical bedrock of online optimization is testing. Sort of.
Initially, both A/B testing and its more advanced counterpart, multivariate testing, were designed as a way to determine an audience’s preferences. Those preferences, however, were never meant to be relegated to the shallow, on-page elements many tests now revolve around: big button versus small button or short headline versus long headline.
Rather, these on-page elements were only a representation of deeper underlying thoughts and views about how people behave. As Michael Aagaard, Senior Conversion Optimizer at Unbounce, puts it:
Conversion rate optimization really isn’t about optimizing web pages – it’s about optimizing decisions – and the page itself is a means to an end but not an end in itself.
In other words, you should start with a big-picture hypothesis and only then proceed to nitty-gritty tests. Thankfully, it doesn’t need to be complicated. Just follow this simple formula:
I think that changing [Element A] to [Element B] will produce [Qualitative Result] and therefore [Quantitative Result].
What does a big-picture hypothesis look like in action?
Take this insanely simple test I reviewed in A/B Testing: The Staggering Success of Presidential Optimization and How to Do It Yourself as an example:
The test is between impersonal language in the headline (“Receive a Free Bumper Sticker”) and personal language (“I’ll Send You a Bumper Sticker”). But your hypothesis needs to go beyond that:
I think that changing the headline from (A) impersonal language to (B) personal language will produce a far more direct and engaging user experience (qualitative), and therefore increase form completion (quantitative).
Do not neglect the qualitative.
Another of the most valuable big-picture hypotheses you should test is about choice; most notably, what level of choice -- i.e., the number of options -- encourages visitors to become customers. This is especially important for ecommerce CRO where product offerings are often massive.
For instance, Sticker Mule’s original homepage was built on the assumption that less is more. Because of that, the company included only its four highest-selling products:
To test this assumption, the CRO team built an alternative homepage with the following hypothesis: “We think that changing the number of products on our homepage from (A) four to (B) eighteen will immediately showcase our level of customization to new visitors (qualitative), and therefore increase new purchases (quantitative)”:
The result?
Adding all of its products to the homepage hurt revenue by nearly 48%. As Tyler Vawser, VP of Content at Sticker Mule, told me, the old version won all of their tests and remains their current homepage.
That might sound like a failed test, but it’s not.
Sticker Mule validated the original assumption by testing it against an alternative hypothesis. The team learned a lesson far beyond mere numbers: more choices lead to inaction. And this is the big win the company was able to apply throughout its buying funnels.
3. You Don’t Have the Right Goal
For optimization, having the right goal doesn’t mean limiting the number of on-site choices.
Instead, it means zeroing in on the single biggest growth metric for your own site: page by page. I stressed this point to near exhaustion in my own massive post: Landing Page Optimization: Find Heaven By Saving Your Visitors From Hell:
“The starting point for your landing page is about you. And it centers around a single, all-consuming question ...
What is your goal … the one, smallest, easiest thing you want your reader to do?
Here are a few examples of single goals:
- Join my email list.
- Follow me on Twitter.
- Preview my app.
- Like my Facebook page.
- Schedule a demo.
- Watch my video.
- Give me their phone number.
- Get a quote.
- Access my report.
- Arrange a consultation.
- Redeem a coupon.
- Sign up for a webinar.
- Download my ebook.
But here’s the thing: while that list of possible goals is great for landing pages -- as well as other points in your funnel -- with ecommerce only one thing really matters: sales. Singularity is a must.
By way of illustration, I recently ran an A/B test for a weight-loss program. While my alternative page contained more than a few alterations in structure, flow, and copy, the major change I implemented was to move the first call-to-action button from the middle of the page to right below the fold.
The page itself was incredibly long, so my hypothesis was that some visitors would be ready to convert from the jump and giving them that option would encourage them to take action.
And take action they did.
Moving the CTA up increased the number of click-throughs -- basically “Add to Carts” -- by 42%. I was overjoyed. Except … that move also decreased total sales by 9%.
I share that cautionary story to drive home this truth: small wins that feel good in the moment but don’t add up to real sales are worthless.
Even so, this test wasn’t a failure. It taught us visitors didn’t trust the page enough to make a purchase of this sort so early on. We needed to back off, overcome objections, and build desire before letting them “add to cart.”
4. You Don’t Go Big
Conversion-rate optimization all too often gets lost in the minutiae.
Small changes like button copy, headline alternatives, and images may result in big wins, but normally they don’t. Big wins almost always come from big changes.
In large organizations, two or more internal teams -- when presented with the same data -- will often champion wholly different approaches to the same exact problem: different images, different headlines, different layouts. What ensues is a series of compromises from both sides, and what actually goes to production gets smaller and smaller.
Concept testing, however, runs two big concepts against each other. Once there is a clear winner, you dive into iterative tests on the small scale.
Consider the massive redesign of Optimizely’s homepage earlier this year. Here’s a quick visual comparison of the changes the company initially made:
Image via Optimizely
But the team didn’t stop there. Optimizely also created more than 26 personalized versions of that same homepage, which Cara Harshman walked through in her Unbounce CTA Conference presentation The Homepage is Dead.
And this time, the results were staggering:
Image via Optimizely
The lesson?
It’s easy to get stuck in tiny tests that recycle the same old small tweaks. Breakthroughs come when you use CRO to go big.
5. You Don’t Wait for Statistical Significance
True story: combining dark chocolate with a low-carb diet can help you lose 10% more weight.
Well … that’s technically true. But don’t run off to indulge your sweet tooth just yet.
While that finding was the result of a scientific study reported on by many prominent media outlets -- Huffington Post, The Daily Mail, and Cosmopolitan to name a few -- those reports neglected to ask a few key questions, the answers to which would easily negate the validity of that amazing result.
- What did the low-carb diet consist of?
- How much dark chocolate was consumed?
- How many volunteers were in the study and from where did they come?
Why does this matter?
Because presenting data in relative terms devoid of the whole story can trick you into believing something is working when it’s really not. And beyond the percentages and perfectly crafted wording, there might not be much to your own findings after all.
Epidemiologist Dr. Ben Goldacre’s TED Talk “Battling Bad Science” offers a 15-minute crash course in some of the obvious and not-so-obvious ways data can be misrepresented:
Building off of disturbing facts like, “Around half of all of the trial data on antidepressants has been withheld,” Dr. Goldacre’s conclusion drives home what he calls “the single biggest ethical problem facing medicine today”:
We cannot make decisions in the absence of all of the information.
The stakes for conversion-rate optimization may not be quite so dire, but the only way to know whether a change is truly working is to make sure your results reach statistical significance before you draw conclusions.
So what’s enough time and enough participants?
Unfortunately, there’s no one right answer. It depends significantly (pun intended) on your audience, what you’re testing, and what other events or activities might skew your results, for example, a major holiday or pop-culture event.
A good rule of thumb is to run the test for as long as you need to. An even better rule of thumb is not to tie your tests to time, but instead to participants.
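How many participants is “enough”? As a rough, back-of-the-envelope guide, here’s a minimal Python sketch of the standard normal-approximation sample-size formula for comparing two conversion rates. The function name and the example numbers are my own illustration, not pulled from any particular testing tool:

```python
import math

def sample_size_per_variant(baseline_rate, min_relative_lift,
                            z_alpha=1.96, z_beta=0.84):
    """Rough visitors needed per variant for a two-proportion A/B test.

    baseline_rate: current conversion rate (e.g., 0.05 for 5%)
    min_relative_lift: smallest lift worth detecting (e.g., 0.20 for +20%)
    Defaults assume a two-sided 95% confidence level and 80% power.
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Detecting a 20% relative lift on a 5% baseline:
print(sample_size_per_variant(0.05, 0.20))  # roughly 8,000+ visitors per variant
```

Notice how fast the requirement grows: the smaller the lift you want to detect, the more visitors each variant needs before the result means anything.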
With an average confidence rate of 95%, 1 in every 20 tests will return a false positive. Even more troubling, “a study conducted by popular split testing software VWO found that 6 out of 7 A/B tests did not provide a statistically significant improvement.”
Why? Because as Tommy Walker explains: “Very likely, this is also because they found that the time invested to create a test, including research, was usually less than 5 hours.”
At the risk of sounding like your high-school math teacher: reaching statistical significance is non-negotiable, no matter how exciting initial results might look.
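And if you want to sanity-check a result yourself, the math behind the verdict isn’t exotic. Below is a small, self-contained sketch of a two-sided two-proportion z-test, one common way (though not the only way) testing tools judge significance. The function and its example numbers are hypothetical:

```python
import math

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test.

    conv_a / n_a: conversions and visitors for the control.
    conv_b / n_b: conversions and visitors for the variant.
    Uses the normal approximation, fine at CRO-scale sample sizes.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# A tempting-looking early result: 5.0% vs. 6.0% on only 500 visitors each.
print(ab_test_p_value(25, 500, 30, 500))  # ~0.49 -- nowhere near p < 0.05
```

That “20% lift” would need far more traffic before you could trust it.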
6. You Don’t Test Mobile Separately
Let’s say you run an online clothing store and have the following hypothesis: your customers are fashion-savvy but with short attention spans. Therefore, you need to present your latest trendy clothes prominently on the homepage and related product pages as well as give visitors the option to buy instantly.
So you conduct an A/B test.
Sadly, the results fall flat: only a slight change in overall clicks and a negative difference in purchases.
What the big numbers didn’t show is this: desktop users come to your site directly and spend an average of 27-43 minutes browsing and planning outfits before they make a purchase. Mobile users, on the other hand, come to your site either as returning visitors or via Facebook ads for specific products, and really like being able to view the latest items on the go as well as make quick purchases.
These behaviors mean you may have had a significant positive change in one group -- namely, mobile users -- but when you combined both groups that change didn’t just disappear, it reversed.
Mathematicians call this Simpson’s paradox, or the Yule-Simpson effect:
A paradox in probability and statistics, in which a trend appears in different groups of data but disappears or reverses when these groups are combined.
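A toy illustration makes the reversal easy to see. The desktop/mobile numbers below are made up in the spirit of the clothing-store example above: the variant beats the control on each device, yet loses in aggregate because it happened to receive far more low-converting desktop traffic:

```python
# Hypothetical (visitors, conversions) per device for each variation.
data = {
    "control": {"desktop": (200, 4),  "mobile": (800, 80)},
    "variant": {"desktop": (800, 20), "mobile": (200, 22)},
}

for name, segments in data.items():
    total_visitors = sum(v for v, _ in segments.values())
    total_conversions = sum(c for _, c in segments.values())
    per_device = ", ".join(
        f"{device} {c / v:.1%}" for device, (v, c) in segments.items()
    )
    print(f"{name}: {per_device}, combined {total_conversions / total_visitors:.1%}")

# control: desktop 2.0%, mobile 10.0%, combined 8.4%
# variant: desktop 2.5%, mobile 11.0%, combined 4.2%
```

The variant is the better page for every visitor segment, but the blended number says the opposite.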
The most famous real-world example of Simpson’s paradox came from a 1973 study of gender bias in graduate admissions at the University of California, Berkeley. When looked at as aggregate groups, the numbers were startling: the average admittance rate of male candidates was 44%, and the average admittance rate of female candidates was 35%.
Clearly, your chances of getting into the University of California as a male were significantly higher than your chances as a female.
Except that they weren’t.
When separated by department, the numbers show a significant preference for females in one department, and inconclusive differences in all the others:
Even more interesting, research papers in the aftermath of the study “concluded that women tended to apply to competitive departments with low rates of admission even among qualified applicants (such as in the English Department), whereas men tended to apply to less-competitive departments with high rates of admission among the qualified applicants (such as in engineering and chemistry).”
The university actually had a bias against men, despite what the big numbers seemed to imply.
Why does all this matter for CRO?
Mobile and desktop are very different platforms, and users -- even the same users -- have different goals on each. If you’re taking a blanket approach to testing and analysis -- if all you ever look at are the aggregate numbers -- you are almost guaranteed to be missing key CRO wins.
Testing mobile isn’t just about tracking results on mobile separately after you’ve implemented a test across the board. It’s about running different tests for your mobile platform altogether.
Lastly, having a responsive site that fits on a mobile screen is not the same as truly optimizing for the mobile experience. As Talia Wolf of Conversioner and Banana Splash explains:
At its core, responsive design makes the desktop experience look good on mobile, but it doesn’t address the specific needs of mobile visitors.
7. You Don’t Test for Yourself
Finally, the cardinal sin of conversion rate optimization.
Best practices are good starting points, but they can never replace your own tests. In Why You Should Question Conversion Rate Optimization Best Practices, Neil Patel makes this point abundantly clear:
“Reading about best practices, tests that won, and A/B testing success stories is great. Testing stuff on your site is great. However, blindly following best practices or believing that you’ll experience the same results can do more harm than good.”
Two examples will help bring this sin into the light.
First, I’ve already stressed how vital “less is more” is to CRO on at least two fronts: (1) fewer choices equal more clicks, and (2) fewer goals equal more results. It’d be understandable if you were to run off and start applying this principle to your own site immediately.
Once again, don’t.
Nowhere is “less is more” more heralded as gospel than when it comes to clicks. According to the three-click rule, users will abandon your site or application if it takes more than three clicks for them to find what they’re seeking.
The so-called rule has been disproven time and time again:
Image via Unbounce
In fact, raising the bar on clicks can significantly improve not only your total number of conversions but the quality of the people converting as well, especially if you sell a high-priced product.
Investor Carrot -- who builds and maintains real estate investor websites -- released data on their own two-step opt-in process in which they (1) broke their standard lead-generation form into two separate forms and (2) made the second form contain a whopping 12 fields.
Image via Investor Carrot
Conventional wisdom would tell you both of those moves should have hurt conversions. Instead, “the 2-Step Opt-In Process improved conversions by 46% … with dramatically improved lead quality.”
Likewise, Leadpages has built their entire lead-generation approach around “placing a barrier in front of the form [to] actually get more people to complete the form”:
“Once someone clicks, they can easily fall under the sway of behavioral inertia: the principle that once you start down a certain pathway, you’re likely to continue. One ‘yes’ leads to another, until visitors have completed the process you’ve set up.”
Second, another golden rule of CRO is people will only give to get: “No one will sign up for your email list without you offering something of value in exchange.”
This “best practice” explains the rise of all things lead magnet: ebooks, courses, whitepapers, industry reports, and so on. But do you really need a lead magnet to improve conversions?
No, at least not always.
Rebel Growth podcast host Borja Obeso recently shared with me this incredibly counterintuitive test:
“In some of the industries we target with Creativiu (cake decorating and quilting), it actually works better to mention the overall benefits people will get from being a part of our community. We tested countless ‘bribe to subscribe’ offers, mostly offering people entire courses for free.
“Then we made this offer:
“Instead of offering something specific, we simply highlighted ‘exclusive content and tools on how to improve your creative decorating.’ And here are the tests we ran:
“To my surprise, the ‘Plain’ test absolutely destroyed all the others.”
The only way to know for sure is to try it out for yourself.
Conversion Rate Optimization: St. Barts Will Have To Wait...
So maybe it’s not as easy as 1, 2, 3.
But it’s not dark magic either, if you know what you’re doing and get your foundations laid down:
- Process
- Hypothesis
- Goal
- Big
- Significance
- Separation
- Testing
As long as you remember the reason you’re engaging in CRO to begin with -- to provide a better experience for your users so they want to become customers -- then you’ll be on the right path.