Serhii Tolstoi

25 important questions about your AB test


Most product teams don't extract all the useful learning from AB tests, and usually this happens because of a shallow approach to experimentation. Below is a list of questions that will help you think about your experiments at a deeper level.


Hypothesis:

  1. What is the hypothesis? Is it a valid hypothesis? Is this the most valuable hypothesis?

  2. What prior work/data/research influenced the hypothesis? Why was this hypothesis generated?

  3. Does it resonate with and contribute to the overall product goals?

Methodology:

  1. What do you hope to learn?

  2. What will you do if your experiment fails or succeeds? Do you have a sense of your next steps?

  3. Will this data be informative for developing new hypotheses or further experiments?

  4. How will the data be collected in the new version of the product? Will the logic of events stay the same?

  5. Did you calculate the required sample size? How long will this test need to run? (See the sketch after this list.)
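
To make question 5 concrete, here is a minimal sketch of a sample-size calculation for a two-proportion AB test. The baseline rate, minimum detectable effect, significance level, power, and daily traffic figure are made-up example numbers, not recommendations.

```python
# Minimal sketch: sample size per variant for a two-sided z-test
# comparing two conversion rates (illustrative numbers only).
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_variant(p_baseline, mde_abs, alpha=0.05, power=0.8):
    """Approximate users needed per variant to detect an absolute lift
    of `mde_abs` over a baseline conversion rate `p_baseline`."""
    p1 = p_baseline
    p2 = p_baseline + mde_abs
    p_bar = (p1 + p2) / 2
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the significance level
    z_beta = norm.ppf(power)            # critical value for the desired power
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

n = sample_size_per_variant(p_baseline=0.10, mde_abs=0.01)  # detect 10% -> 11%
daily_users_per_variant = 2000                              # hypothetical traffic
print(f"{n} users per variant, roughly {ceil(n / daily_users_per_variant)} days to run")
```

The point of the sketch is the order of magnitude: small lifts on low baseline rates require tens of thousands of users per variant, which is often the real constraint on how long a test must run.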

Design:

  1. How effectively does the design reflect the hypothesis?

  2. How does the design you've made help you gather the right kind of data that will provide evidence in favor of or against your hypothesis?

  3. What is the minimum number of test variations the designer needs to create in order to get the learning you are looking for?

  4. How can the new variation influence user behavior? Can it influence other products?

Success metrics:

  1. Have you defined the single most important metric? Will it really measure the validity of the hypothesis?

  2. Could you improve this metric while making the product worse? If so, it is a bad metric.

  3. Do you have good secondary metrics that support a deeper analysis?

  4. Have you defined guardrail metrics that you don't want to degrade in any way?

Results and analysis:

  1. Was the hypothesis supported or refuted? Why or why not? (See the significance-check sketch after this list.)

  2. What did the team learn and what can be applied to other work that is going on?

  3. Did the results support any other larger trends that you might have seen before?

  4. How do these results compare to prior work?

  5. What are the next steps?
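
As a companion to question 1, here is a minimal sketch of how a result might be checked for statistical significance with a two-proportion z-test from statsmodels. The conversion and exposure counts are invented example data; a low p-value only says the difference is unlikely to be noise, it does not by itself explain why the change worked.

```python
# Minimal sketch: two-proportion z-test on invented AB test results.
from statsmodels.stats.proportion import proportions_ztest

conversions = [1510, 1405]   # hypothetical conversions in variant B and control A
exposures = [14800, 14750]   # hypothetical users exposed to each variant
z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# Compare p_value against the alpha chosen before the test started;
# the "why" behind the result still comes from the questions above.
```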

Concerns:

  1. Did we see anomalies or surprises?

  2. Did we test the right things?

  3. Could or should we have tested other things at the same time?
