Welcome to the recap of Session #15 of the Million Dollar Case Study. This was an inspirational insight into the world of Amazon Split Testing, and how we can really optimize a private label product listing for increased profits. Joining Gen on the webinar was Andrew Browne from Splitly – the smartest tool for Amazon optimization, using artificial intelligence and automation to build a fine-tuned, high-converting product listing.
We already covered finding the right keywords in Session #11 and Scott Voelker covered Amazon search engine optimization in the last Session. So how can we continue to make improvements from here?
The answer is with Split Testing, or A/B testing. Let's find out what that looks like for Amazon sellers…
As always, here's your full webinar replay:
Here's the slides from the webinar:
If you have been in the digital marketing world previously, you may already know the answer. For some Amazon sellers, Split Testing is an entirely new concept. Andrew starts by explaining Split Testing in simple terms:
Split Testing is where you have two different versions of something, and test which one is better, with a data-driven and controlled experiment.
In ecommerce this often means testing two different versions of a product page, to see which results in more conversions. It really is this simple on the surface level. We want to find out how changes to a product page can have an effect on the number of sales.
The complexity behind Split Testing is ensuring that this is tested in a reliable and controlled manner, and this is where software like Splitly comes in, to automate these scientific experiments. However, it is possible to run tests manually, if you have the time and inclination. We're going to cover both of these options in more detail in this recap.
Andrew covers three core reasons why Split Testing is one of the smartest ways to get ahead as a seller today:
At this point you may be wondering if you can just change your price, or your lead image, for a few weeks, and then compare it to the previous version.
Andrew explains very carefully why doing this will not produce the most accurate results. To put it simply, Amazon has a lot of moving parts. These include market conditions, seasonality, demand, competition and more. All of these things are constantly changing, so if you simply change your lead image for two weeks and see an improvement in conversions, you can't guarantee that the improvement came from the image change.
For this reason, the best way to make solid business decisions when it comes to listing optimization is to automate with split testing, and test two versions of your listing concurrently. It is possible to run concurrent tests manually, but it requires a very hands-on approach.
Andrew explains that split testing works quite differently on Amazon compared to your own website, mostly because you have less control on Amazon than on your own ecommerce store.
This is because Amazon reports traffic data in 24-hour periods, and sales velocity affects search rank on a 30-day rolling average.
This means two things:
In the live webinar, Andrew walks through how you could change your pricing each day at midnight. This is quite laborious as you would need to set reminders to do it on time each day, and then pull your sales data from Seller Central and keep a record of that during the course of your split test experiment.
So at the end of your test, every even day will be variant B and every odd day will be variant A. At this point, you will see that one of your variants is doing better than the other. But how do you then determine that this wasn't just luck? Enter statistical significance.
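The manual bookkeeping described above can be sketched in a few lines of Python. This is a hypothetical illustration (the dates and sales figures are made up, not from the webinar): each day's sessions and orders are pulled from Seller Central by hand, and odd calendar days are assigned to variant A, even days to variant B.

```python
from datetime import date

# Hypothetical daily records copied from Seller Central during the test:
# (date, sessions, units ordered). Figures are invented for illustration.
daily_data = [
    (date(2017, 6, 1), 120, 9),
    (date(2017, 6, 2), 110, 11),
    (date(2017, 6, 3), 130, 10),
    (date(2017, 6, 4), 125, 14),
]

def aggregate_by_variant(records):
    """Odd calendar days -> variant A, even days -> variant B."""
    totals = {"A": {"sessions": 0, "orders": 0},
              "B": {"sessions": 0, "orders": 0}}
    for day, sessions, orders in records:
        variant = "A" if day.day % 2 == 1 else "B"
        totals[variant]["sessions"] += sessions
        totals[variant]["orders"] += orders
    return totals

totals = aggregate_by_variant(daily_data)
for name, t in totals.items():
    rate = t["orders"] / t["sessions"]
    print(f"Variant {name}: {t['orders']}/{t['sessions']} = {rate:.1%} conversion")
```

At the end of the test you have total sessions and orders per variant, which is exactly what a significance calculator needs as input.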
This is a mathematical function that helps to determine whether your result was luck or not. This essentially helps you to determine whether or not your split test results are reliable.
The easiest way to figure this out is to use an online statistical significance calculator. Here's the example Andrew shows using the Splitly calculator:
You simply have to take the following steps:
In the example above, the result is only 62%. This means there is not a high enough statistical significance to move forward with your “winning” variant in this case.
If your result is statistically significant, i.e. over 90%, then you should move forward with the winning variant.
However, if your result is not statistically significant, i.e. under 90%, then you can continue with your test to gather more data points. Or, you can conclude that there is not much difference between the two variants and try to test something else, or tweak your test.
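If you'd rather compute this yourself than use an online calculator, the standard approach is a two-proportion z-test. Note this is a generic statistical sketch, not Splitly's actual formula (which isn't disclosed in the webinar); the session and order counts below are invented for illustration.

```python
import math

def significance(orders_a, sessions_a, orders_b, sessions_b):
    """Return the confidence (as a %) that the difference in conversion
    rate between two variants is not just chance, via a two-sided
    two-proportion z-test."""
    p_a = orders_a / sessions_a
    p_b = orders_b / sessions_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p = (orders_a + orders_b) / (sessions_a + sessions_b)
    se = math.sqrt(p * (1 - p) * (1 / sessions_a + 1 / sessions_b))
    if se == 0:
        return 0.0
    z = abs(p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return (1 - p_value) * 100

# Example: variant A converted 19/250 sessions, variant B 25/235
print(f"{significance(19, 250, 25, 235):.0f}% significant")
```

With numbers like these the result lands well under the 90% threshold, so, as the article says, you would keep the test running to gather more data rather than declare a winner.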
Andrew states that there is no rule of thumb when it comes to starting your split tests. It's an ongoing process of continuous improvement, so you can start testing right away.
If you have launched a new product and have low sessions and sales, then you may need to run longer tests to find statistical significance.
The best products to start testing are your best sellers!
Additionally, the best places to start testing your Amazon listing are your price and your image. These are the most effective tests that Splitly users have been running since the tool was launched.
However, you can also test the following elements of an Amazon product listing:
Andrew shared a really useful process which will help you to plan and successfully carry out an ongoing optimization funnel:
Gen and Andrew have a really interesting conversation in the webinar about the pitfalls of concentrating too heavily on conversion rates, rather than profits.
It's not always the case that a lower conversion rate means worse performance. For example, if you raise your price in a pricing test, you might see a drop in conversion rate, but you might be making more profit per day, due to the price increase. There's a really detailed explanation of Amazon conversion rate in this Splitly article.
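The arithmetic behind this point is worth seeing with concrete numbers. The prices, conversion rates and unit cost below are hypothetical, not from the webinar; they just show how the higher-priced variant can win on profit despite converting worse.

```python
# Illustrative figures only: a pricing test where the more expensive
# variant converts less often but earns more per day.
sessions_per_day = 100
cost_per_unit = 8.00  # assumed landed cost plus fees

variants = {
    "A ($19.99)": {"price": 19.99, "conversion_rate": 0.12},
    "B ($24.99)": {"price": 24.99, "conversion_rate": 0.10},
}

for name, v in variants.items():
    units = sessions_per_day * v["conversion_rate"]
    profit = units * (v["price"] - cost_per_unit)
    print(f"Variant {name}: {units:.0f} sales/day, ${profit:.2f} profit/day")
```

Here variant B converts two points worse yet clears more profit per day, which is why a pricing test should be judged on profit, not conversion rate alone. (The counterweight, as the next point explains, is that fewer sales per day also means lower sales velocity.)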
Another important thing to remember is that Amazon rewards you for a higher sales velocity. So this is an extra thing to consider when you are looking at your split test results, as a higher sales velocity may have a positive compound effect over time.
One really easy way to test your keywords and their effectiveness is to test your product title. Andrew recommends you test your title sooner rather than later in your product's lifecycle.
At the moment, there is a lot of talk about shorter Amazon titles being more effective. But as reiterated throughout Andrew's presentation, the only true way to find out what would work best for your product is to test this assumption, in a controlled A/B test.
You can test your product title length as well as changing some of your keywords.
Tip: Don't forget that you can change the keywords in your listing, or in the search terms field in the back end of Seller Central, at any time. You do not need to run a split test to see the effect this has on your rank, as you can track your search rank directly.
There is no right or wrong answer when it comes to product listing optimization. What it comes down to is creating hypotheses and continuously testing to get data-driven results.
Once you have got past the initial stage of getting your listing out there, it's never too soon to start running split tests. If you don't have the time to do the manual work of changing your listings every day at midnight and crunching the numbers to find statistical significance, you can use Splitly to automate your Amazon listing optimization.
Splitly has three main features:
Don't forget, if you want to try Splitly today, use the code ‘Million' for 25% off your first month!
Next week we have an exclusive “Ask Me Anything” Session with Greg Mercer. He is going to review any burning questions you have relating to any of the previous Million Dollar Case Study sessions. So if you've hit roadblocks, are experiencing challenges or you just want to join in the dialogue, make sure you are registered!