How to drive the experimentation cycle

The big picture: Taking your experiments from brainstorm to measurable outcomes

Experiments of all kinds share a basic lifecycle: idea generation and prioritization, planning and roadmapping, live experimentation runtime, and wrap-up. Experiments aren’t always strictly A/B tests – growth teams are wise to do qualitative research alongside quantitative testing – but this article focuses on taking a product experiment from backlog idea to shareable result, using Panobi.

💡 Idea stage: The backlog

Capture as many ideas as you can about what to experiment on. Click on + Add project to add an idea to the backlog. Remember to include a hypothesis about what you’re testing, such as “Adding an invite step to the onboarding flow will drive up the rate of activation.” It’s common for teams to run multiple tests using the same hypothesis.

Be sure to add tags to your projects for things like product areas. That way, you’ll be able to find all the experiments you’ve run on the `landing page` or `iOS app` with a single click. You can do this on each project or by adding tags in Settings. You can also add custom fields to your projects, which will allow members with editing privileges to add data to that field on any project.

Once you have some ideas fleshed out, you can prioritize them using the RICE method in Panobi or by adding a priority score from 1-100.
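If you want to sanity-check scores outside the tool, the standard RICE formula is (Reach × Impact × Confidence) ÷ Effort. Here’s a minimal Python sketch – the field names and example values are our own illustration, not Panobi’s data model:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    reach: float       # e.g. users affected per quarter
    impact: float      # e.g. 0.25 = minimal, 1 = medium, 3 = massive
    confidence: float  # 0.0 to 1.0
    effort: float      # e.g. person-months

    @property
    def rice(self) -> float:
        # RICE = (Reach * Impact * Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort

# Illustrative backlog, printed highest priority first
backlog = [
    Idea("Invite step in onboarding", reach=8000, impact=2, confidence=0.8, effort=1.5),
    Idea("Landing page redesign", reach=20000, impact=1, confidence=0.5, effort=3),
]
for idea in sorted(backlog, key=lambda i: i.rice, reverse=True):
    print(f"{idea.name}: RICE = {idea.rice:,.0f}")
```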

📅 Planning stage: The roadmap

Keeping up with experiment status changes across various systems is manual and time-consuming work. It can be challenging for members of the growth team to keep track of what’s live in the product or what’s up next — and it’s often next to impossible for anyone outside of the growth team.

There are two ways Panobi helps you save time managing your experiments: scheduling projects, or integrating with your company’s feature flag system.

1️⃣ Scheduling a project

You can add start and end dates to any project in Panobi. Once a project has a start date, the project card will advance to the next stage according to the dates you’ve chosen: from Backlog to Upcoming, then from Upcoming to Live, and finally from Live to Completed.

You can always update the dates as needed to align with what’s really happened.
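Under the hood, the stage follows from today’s date relative to the project’s start and end dates. Here’s a small Python sketch of that rule – our own illustration of the behavior described above, not Panobi’s actual code:

```python
from datetime import date

def project_stage(start: date | None, end: date | None, today: date | None = None) -> str:
    """Derive a project's stage from its scheduled dates."""
    today = today or date.today()
    if start is None:
        return "Backlog"      # no start date yet
    if today < start:
        return "Upcoming"     # scheduled but not started
    if end is not None and today > end:
        return "Completed"    # past the end date
    return "Live"             # between start and end

print(project_stage(date(2024, 6, 1), date(2024, 6, 30), today=date(2024, 6, 15)))  # Live
```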

2️⃣ Automating projects with feature flag integrations

For product projects, it’s best practice to integrate with your feature flagging system (such as LaunchDarkly, Statsig, Split.io, or an internal system) so that your colleagues always know what’s live in the product. This can be especially useful for teams using their own homegrown feature flag system, which often lacks a user interface and can be harder for non-engineers to monitor.
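As an aside, a homegrown flag system often amounts to something like the following sketch – entirely hypothetical code, shown only to illustrate why the “UI” for such a system is usually the codebase itself:

```python
import hashlib

# Hypothetical homegrown feature flags: the only "dashboard" is this dict,
# visible only to engineers who can read the code.
EXPERIMENT_FLAGS = {
    "onboarding-invite-step": {"enabled": True, "rollout_pct": 50},
}

def is_in_experiment(flag_key: str, user_id: str) -> bool:
    flag = EXPERIMENT_FLAGS.get(flag_key)
    if not flag or not flag["enabled"]:
        return False
    # Deterministic bucketing: hashing flag + user keeps assignment stable per experiment
    bucket = int(hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_pct"]

print(is_in_experiment("onboarding-invite-step", user_id="user-42"))
```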

Whether your team is using our SDK or one of our feature flag integrations, you’ll be able to seamlessly share status updates by connecting Panobi to your Slack workspace.

📎 (If your company uses a different experimentation framework than those listed here, we’d love to integrate with it. Take our quick integration requests survey to let us know which system you’re using.)

⏳ Live runtime: Add context

Sometimes you just have to wait for a test to reach statistical significance – best practice is to wait for a 95% confidence level before drawing conclusions (a quick way to sanity-check this is sketched after the checklist below). While you wait for your experiment results, it’s a good time to make sure all the records are up to date for your team (your future self will thank you).

Check for the following:

  • Have you added screenshots or images of each treatment you’re testing? Later on, you’ll want to look back and easily recall what you’ve tried.

  • Have you added all collaborators? Give credit where credit’s due.

  • Is there a link to the metrics you’re testing? Make sure you’re sharing all the context.
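If you’d like that quick significance check while you wait, here’s a minimal sketch of a two-proportion z-test using statsmodels. The counts are purely illustrative – substitute your experiment’s numbers, and treat your experimentation platform’s analysis as the source of truth:

```python
from statsmodels.stats.proportion import proportions_ztest

# Illustrative numbers: conversions and sample sizes for control vs. treatment
conversions = [132, 168]   # control, treatment
samples = [2000, 2000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=samples)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# At a 95% confidence level, conclude only when p < 0.05
if p_value < 0.05:
    print("Statistically significant at 95% confidence")
else:
    print("Not yet significant – keep the test running")
```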

🎁 Final stage: Wrap up

Congratulations… or, sorry! Strictly speaking, you’ve either rejected the null hypothesis or failed to reject it – either way, you now have an answer to your hypothesis. Once a project is completed, it’s time to wrap it up and summarize what you’ve learned. Write a short summary of takeaways for the highlight, and then share the specific results and the next steps you’re considering in the longer text block. Wrapping up is an important part of closing the experiment lifecycle: it lets your colleagues know that you’re no longer waiting on any analysis, and it ensures that the relevant context lives in the tool.
