This is a guest post by Edwin Choi, VP of Marketing at Mobovida, a customer-driven, vertically integrated mobile accessory brand delivering fashion forward products direct to consumer. Check out our recent podcast interview with Edwin.
At CellularOutfitter.com, a leading online retail site owned by Mobovida, we have created something of a religion around conversion rates. It was best exemplified by the giant (slightly altered) Wu-Tang decal we had in our old office:
C.R.R.E.A.M. = Conversion Rates Rule Everything Around Me!
Sadly, the decal didn’t survive the move to our new office. It’s hard to fault us for being dedicated to this metric: it’s a key reason why we have grown like a weed during my five-year tenure.
With higher conversion rates, we were able to dramatically lower our cost per acquisition for new customers and pour the savings into capturing more market share on our top-performing marketing channels. We’ve also built a considerable moat as competitors struggle to keep up with the rising costs of paid digital marketing (I’m looking at you, Google AdWords).
A huge proportion of our time over the last three years was dedicated to running hundreds of A/B tests through our constantly evolving conversion rate optimization process. A consistent, ever-present motivator for the blood, sweat, and tears involved is the wealth of learnings we were able to extract from our test data.
With a heavy combination of quantitative and qualitative data, we’re able to get into the minds of our customers and build marketing messages that resonate with our core personas. The wealth of information driven by Adobe Test & Target, Visual Website Optimizer and Optimizely tests helped us to intimately understand what makes our customers tick.
CellularOutfitter.com is built to handle large volumes of traffic and daily orders, so our testing methodology was perfect for scaling the business rapidly. But what if each order could be worth more to the business? One of our core metrics is the all-important “revenue per visit,” which combines conversion rate and average order value into a single telling key performance indicator.
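The relationship between these metrics can be sketched in a few lines of Python. The traffic and revenue figures below are invented purely for illustration:

```python
# Revenue per visit (RPV) combines the two metrics the post names:
# RPV = conversion rate x average order value = revenue / visits.

def revenue_per_visit(visits: int, orders: int, revenue: float) -> float:
    """RPV = (orders / visits) * (revenue / orders) = revenue / visits."""
    conversion_rate = orders / visits
    average_order_value = revenue / orders
    return conversion_rate * average_order_value

# 10,000 visits producing 300 orders and $4,500 in revenue:
print(round(revenue_per_visit(10_000, 300, 4_500.0), 2))  # 0.45
```

Because the two factors multiply into one number, a test that raises AOV while holding conversion rate flat lifts RPV directly, which is exactly the lever the rest of this post pursues.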
We decided to extrapolate our learnings about the customer with a new goal in mind: raising the average order value.
We knew that our customers loved a great deal and often wanted to buy more on the site, but they didn’t always know which deals or products were available. We could support this hypothesis with data from previous tests as well as from effective event-based, triggered marketing campaigns.
A light bulb went off in our heads: what is one of the most effective ways that retailers increase average order size in physical retail? It’s the “impulse buy” section at a checkout aisle:
This is the same reason why Fry’s Electronics, a large consumer electronics chain, sells candy in its checkout aisles. Customers waiting in line to complete their order are confronted with low-priced, high-margin items that they didn’t come in intending to buy. Close rates are high because the customer is already in line to make a purchase, and the comparative cost of adding items to the current order is relatively small.
Also, it’s super easy! All you have to do is reach over for a bag of M&Ms and toss it onto the conveyor belt, and the retailer just gained $2 in average order value. It takes the customer only a few seconds, and there are so few decision criteria that almost no friction is added to the process.
We hypothesized that we could mimic this same experience in our online shopping cart and increase the average order value of our carts.
Our first test offered a massive discount if the customer added a certain number of items to their cart. It was mainly powered by two widgets:
We launched the test and crossed our fingers.
The test sharply decreased conversion rates, and the net loss from those lost orders hurt one of the most important parts of our site. To confirm this, we double-checked the number of carts created before, during, and after the test to make sure the result wasn’t influenced by pre-cart site factors. It was flat:
We get losing results from our tests all the time, so we were excited to see what kind of learnings we could dig up from this one. We started with the key losing metric: cart conversion rate. It had plummeted 6.2%:
At the very least, we proved that we could sharply change consumer behavior! This means that the presentation was effective enough to alter the path of our customers.
Let’s dig in further!
We segmented the customers who reached this page into three groups and looked at their average order values:
Average order values spiked sharply for the users who powered their way through to promo redemption, but that group was simply too small to overcome the drastic drop in conversion rates. We cross-verified this with coupon-use event fires in Google Analytics, Adobe Analytics, and our internal reporting system, and everything checked out.
We had huge learnings from our first test, so we decided to refine the test and launch it again. This time, we had additional data and hypotheses. We knew we could significantly influence consumer behavior, but we were turning off too many people with the minimum order redemption requirement. We decided to build custom promotional tiers based on data:
As you can see from the above chart (KPIs blurred for privacy), we mined our cart data from our in-house business intelligence database and broke users out into cohorts based on the average number of items in their carts. We then asked ourselves this question:
Can we have a personalized widget that will get more people to “just buy one more item”?
This is akin to getting more people in the checkout line to buy a pack of gum. We also heavily reduced the friction involved in adding one more item to the cart. We had these hypotheses:
Based on our cohort data, we decided to target certain average order value “breakpoints” where the discounted AOV would still raise our sitewide AOV. The hypothesis was that we could upgrade cart AOV for our high-volume cart cohorts if we tiered out our coupon code structure.
We relaunched the test with a new widget that counted how many items were in a customer’s cart and offered a special discount upon adding one more item. The discount was mathematically derived to give the customer a great deal while raising our average cart values.
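The widget’s logic can be sketched roughly as below. The breakpoints and discount rates are hypothetical placeholders; the real tiers were derived from Mobovida’s cart-cohort data and are not disclosed in the post.

```python
# Sketch of the tiered "one more item" promotion logic.
# (minimum items currently in cart, discount unlocked by adding one more)
TIERS = [(1, 0.10), (3, 0.15), (5, 0.20)]  # hypothetical values

def next_item_offer(items_in_cart: int):
    """Return the discount a customer unlocks by adding one more item,
    or None if the cart is too small to qualify for any tier."""
    offer = None
    for min_items, discount in TIERS:
        if items_in_cart >= min_items:
            offer = discount  # keep the highest tier the cart qualifies for
    return offer

# A 4-item cart qualifies for the 3-item tier: add one more, get 15% off.
print(next_item_offer(4))  # 0.15
print(next_item_offer(0))  # None: below every breakpoint
```

The key design point is personalization: the offer shown depends on what is already in the cart, so every cohort is nudged toward a breakpoint that still raises sitewide AOV after the discount.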
The results were astonishing for average order value: it increased by over 15% and stayed there over time! Conversion rates stayed flat, which meant a massive double-digit net gain.
As with any test, we had to cross-verify the numbers against as many data sources as possible to be sure this was, indeed, the case. We built a custom dashboard that feeds data in real time from our business intelligence server to monitor “Big Carts.” A “Big Cart” is a shopping cart whose order value is at least two standard deviations above our typical rolling average:
After the test launched, our number of “Big Carts” increased by over 20%! The widget coupon codes also became some of the most heavily used coupon codes on our entire site, and they drive some of the highest conversion rates and revenue-per-visitor metrics, as verified by Adobe Analytics:
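The “Big Cart” definition above is easy to express directly. In this minimal sketch, a plain list of recent cart values stands in for the real rolling window, and the numbers are invented for illustration:

```python
# "Big Cart" check: flag a cart whose value is at least two standard
# deviations above the rolling average of recent cart values.
from statistics import mean, stdev

def is_big_cart(cart_value: float, recent_cart_values: list) -> bool:
    """Return True when cart_value >= rolling mean + 2 * std deviation."""
    threshold = mean(recent_cart_values) + 2 * stdev(recent_cart_values)
    return cart_value >= threshold

recent = [20.0, 25.0, 22.0, 30.0, 24.0, 26.0, 21.0, 28.0]  # illustrative
print(is_big_cart(60.0, recent))  # True
print(is_big_cart(30.0, recent))  # False
```

In a production dashboard the window would update continuously as orders arrive, but the threshold calculation itself stays this simple.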
This single test has added millions of dollars in revenue to our site per year.
For every test that wins big, we try to harvest the learnings, using our improved understanding of the customer to drive more wins down the road. The learnings from this test powered similar winning tests on our mobile sites as well as countless hyper-effective campaigns (for example, it was converted into a very effective e-mail campaign).
For us, this test also highlights the importance of testing for learnings instead of wins. When we lost heavily during the first test, our first reaction was not “Aw shucks, we lost. Let’s go for that win!”
It was “Let’s see what we can learn from this” – and the second home run test would not have happened without the first test.
Read more on this topic (and much more!) from Edwin Choi on his blog Marketing Muses.
What is your experience with AOV? Have you run any tests or learned anything new about it? Share in the comments below so we can all learn!