Previous Experiments is your organization's optimization knowledge base: a comprehensive archive of every test you've run, every result you've achieved, and every learning you've gained. Rather than treating completed experiments as finished business, Previous Experiments helps you extract ongoing value from historical tests, avoid repeating mistakes, and build on past successes to accelerate future optimization.
Previous Experiments provides a searchable, filterable library of all completed tests:
Complete History - Every experiment ever run on your account
Detailed Results - Full performance data, statistical analysis, and creative assets
Contextual Information - When experiments ran, market conditions, app version, etc.
Outcome Classification - Winners, losers, and inconclusive results clearly marked
Find relevant historical experiments quickly:
Date Range Filtering - Focus on experiments from specific time periods
App and Platform - Filter by specific apps or iOS/Android
Locale Selection - View experiments from particular markets
Experiment Type - Filter by icon tests, screenshot tests, copy variations, etc.
Outcome Filtering - Show only winners, only losers, or inconclusive tests
Hypothesis Linking - Find all experiments related to specific hypotheses
Keyword Search - Search experiment descriptions, creative content, and notes
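To make the filter model concrete, here is a minimal sketch of how an experiment record and these filters could be represented in code. The field names and the `filter_experiments` helper are illustrative assumptions, not PressPlay's actual schema or API.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical in-memory record; field names are illustrative only.
@dataclass
class Experiment:
    name: str
    app: str
    platform: str   # "ios" or "android"
    locale: str     # e.g. "en-US"
    exp_type: str   # "icon", "screenshot", "copy", ...
    outcome: str    # "winner", "loser", "inconclusive"
    start: date
    end: date

def filter_experiments(archive, *, platform=None, locale=None,
                       exp_type=None, outcome=None, since=None):
    """Apply the same kinds of filters the library UI offers."""
    results = archive
    if platform:
        results = [e for e in results if e.platform == platform]
    if locale:
        results = [e for e in results if e.locale == locale]
    if exp_type:
        results = [e for e in results if e.exp_type == exp_type]
    if outcome:
        results = [e for e in results if e.outcome == outcome]
    if since:
        results = [e for e in results if e.start >= since]
    return results
```

Combining keyword filters this way mirrors how the library narrows results as you stack criteria.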
Each archived experiment includes complete information:
Experiment Overview - Name, description, hypothesis, dates
Creative Assets - All variations tested, including control
Performance Metrics - Impression counts, install rates, statistical significance
Statistical Analysis - Confidence levels, sample sizes, effect sizes
Assignment Details - Which Custom Store Listings and locales were used
Deployment Status - Whether winning variation was published
Team Notes - Comments, discussions, and observations from your team
Easily compare variation performance:
Side-by-Side Views - Compare control and variations visually
Metric Comparison - See exact performance differences
Statistical Significance - Understand confidence in results
Segment Performance - How different user groups responded
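The statistical significance of an install-rate difference between control and a variation can be illustrated with a standard two-proportion z-test. This is a generic sketch, not PressPlay's actual method; experiment platforms often use sequential or Bayesian analysis instead.

```python
import math

def z_test_installs(installs_a, impressions_a, installs_b, impressions_b):
    """Two-proportion z-test comparing install rates of control (a) and
    variation (b). Returns the z statistic and a two-sided p-value."""
    p_a = installs_a / impressions_a
    p_b = installs_b / impressions_b
    # Pooled rate under the null hypothesis that both rates are equal
    p_pool = (installs_a + installs_b) / (impressions_a + impressions_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / impressions_a + 1 / impressions_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, a variation that lifts installs from 5% to 6% on 10,000 impressions each yields a clearly significant result.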
Analyze historical experiments to discover patterns:
Winning Strategies - What types of changes consistently improve performance?
Failed Approaches - Which hypotheses have been disproven multiple times?
Locale Differences - How do optimization strategies vary by market?
Seasonal Trends - Do certain approaches work better at specific times of year?
App Evolution - How have successful strategies changed as your app matured?
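A simple way to surface winning strategies is to compute a win rate per experiment type across the archive. A minimal sketch, assuming each record is a (type, outcome) pair:

```python
from collections import defaultdict

def win_rate_by_type(experiments):
    """Compute the share of winners per experiment type.
    experiments: iterable of (exp_type, outcome) pairs (names illustrative)."""
    counts = defaultdict(lambda: {"wins": 0, "total": 0})
    for exp_type, outcome in experiments:
        counts[exp_type]["total"] += 1
        if outcome == "winner":
            counts[exp_type]["wins"] += 1
    return {t: c["wins"] / c["total"] for t, c in counts.items()}
```

The same grouping logic extends to locale or season by swapping the grouping key.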
Compare experiments to understand what drives different outcomes:
Winner vs. Loser Analysis - What distinguished successful from unsuccessful tests?
Magnitude Comparison - Why did some winners deliver larger improvements than others?
Time to Significance - Which types of experiments reach conclusions fastest?
Market Comparison - How do the same tests perform in different locales?
Learn from experiments that didn't produce improvements:
Hypothesis Analysis - Why did you expect these changes to work?
Actual Results - What happened instead?
Alternative Approaches - How might you test the underlying hypothesis differently?
Similar Experiments - Find related tests to understand patterns
Use Previous Experiments to avoid testing failed ideas multiple times:
Pre-Approval Review - Check if similar experiments have been tried before
Hypothesis Checking - Verify new hypotheses haven't been invalidated previously
Creative Similarity - Identify variations similar to past losers
Team Knowledge Transfer - New team members can learn from past mistakes
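A lightweight similarity check can flag new hypotheses that resemble past losers before they reach approval. The sketch below uses plain keyword overlap (Jaccard similarity); the function names and threshold are illustrative assumptions, and a real system might use richer text matching.

```python
def jaccard(a: str, b: str) -> float:
    """Keyword overlap between two hypothesis descriptions (0..1)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def similar_past_failures(new_hypothesis, past, threshold=0.4):
    """Return past losing hypotheses similar to a new idea.
    past: iterable of (hypothesis_text, outcome) pairs."""
    return [(h, jaccard(new_hypothesis, h))
            for h, outcome in past
            if outcome == "loser" and jaccard(new_hypothesis, h) >= threshold]
```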
Apply successful strategies to new contexts:
Cross-Locale Application - Test winning strategies from one market in others
Creative Adaptation - Adapt winning creative approaches to new features
Strategy Scaling - Apply successful tactics across your app portfolio
Seasonal Replication - Reuse approaches that worked in previous similar periods
Use winning experiments as foundations for further testing:
Refinement Tests - Optimize winning variations further
Element Isolation - Test individual components of winning combinations
Bold Iterations - Make larger changes based on validated direction
Combination Testing - Combine elements from multiple winners
Generate reports from your experiment archive:
Quarterly Summaries - Overview of all experiments in a period
Year-over-Year Comparison - How has your optimization program evolved?
Win Rate Analysis - Trends in experiment success over time
Cumulative Impact - Total business impact from all winning experiments
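One subtlety in cumulative impact reporting: sequential conversion-rate wins compound multiplicatively, not additively. A small sketch:

```python
def cumulative_lift(winning_lifts):
    """Compound a sequence of conversion-rate improvements.
    E.g. wins of +5%, +3%, and +8% compound to about +16.8%,
    not the +16% a simple sum would suggest."""
    total = 1.0
    for lift in winning_lifts:
        total *= 1 + lift
    return total - 1
```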
Use historical data to demonstrate program value:
Success Stories - Highlight biggest wins and their business impact
Learning Progression - Show how optimization sophistication has increased
ROI Documentation - Quantify return on optimization investment
Strategic Insights - Present learnings that inform broader business strategy
Compare multiple historical experiments simultaneously:
Performance Ranking - Sort experiments by improvement magnitude
Category Comparison - Compare icon tests vs. screenshot tests vs. copy variations
Temporal Comparison - How have results changed over time?
Market Comparison - Performance patterns across different locales
Visualize historical experiment data:
Performance Trends - Line charts showing optimization progress over time
Win Rate Visualization - Track experiment success rate trends
Impact Distribution - Understand typical improvement ranges
Category Breakdown - Pie charts showing experiment type distribution
Organize experiments for easier future reference:
Custom Tags - Create your own classification system
Strategic Themes - Tag experiments by strategic initiative
Feature Focus - Organize by which app features were highlighted
Creative Approach - Tag by visual style, messaging strategy, etc.
Add institutional knowledge to experiment records:
Post-Experiment Notes - Document learnings after results are in
Context Recording - Note market conditions, competitive actions, or other factors that might have influenced results
Follow-Up Suggestions - Ideas for future tests based on these results
Stakeholder Feedback - Record reactions from product, marketing, or leadership
Use historical data when creating new hypotheses:
Similar Experiment Search - Find related past tests when developing new hypotheses
Success Pattern Application - Base new hypotheses on validated patterns
Failure Avoidance - Ensure new ideas don't repeat disproven theories
During experiment review, access relevant historical data:
Similar Past Tests - Automatically surface related experiments
Performance Predictions - Estimate likelihood of success based on similar past tests
Risk Assessment - Understand if proposed experiment has failed before
Establish routine review of historical experiments:
Monthly Review - Look back at experiments completed in the previous month
Quarterly Analysis - Deeper dive into patterns across recent experiments
Annual Learning Review - Comprehensive analysis of year's testing program
Add notes and insights immediately after experiments complete:
Capture Reactions - Record team's immediate observations
Note Surprises - Document unexpected results for future investigation
Record Context - Capture relevant market conditions before you forget
Make historical experiment insights accessible across your organization:
Weekly Highlights - Share interesting completed experiments with the team
Learning Presentations - Present key findings to broader organization
Onboarding Resource - New team members review significant past experiments
Link experiment results to broader business metrics:
Revenue Impact - Calculate business value of improvements
User Acquisition - Understand impact on acquisition costs and volume
Strategic Alignment - Show how experiments support business objectives
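A rough projection of the revenue impact of a conversion-rate win can be computed from a few inputs you supply. All parameter values here are assumptions you would replace with your own numbers; the formula is a simple estimate, not a guarantee.

```python
def annual_revenue_impact(baseline_cvr, lift, monthly_impressions,
                          revenue_per_install):
    """Annualized value of a relative conversion-rate lift.
    baseline_cvr: store listing conversion rate before the win (e.g. 0.04)
    lift: relative improvement from the winning variation (e.g. 0.10 = +10%)
    """
    extra_installs_per_month = monthly_impressions * baseline_cvr * lift
    return extra_installs_per_month * revenue_per_install * 12
```

For instance, a +10% lift on a 4% baseline with 1M monthly impressions and $2.50 revenue per install projects to $120,000 per year.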
Automatically group similar experiments to identify patterns:
Creative Similarity - Find experiments with similar visual approaches
Hypothesis Clustering - Group tests of related theories
Performance Clustering - Identify experiments with similar outcomes
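Hypothesis clustering can be approximated with a greedy single pass over keyword overlap. This is only a sketch of the idea; production systems might use embeddings or dedicated clustering algorithms instead.

```python
def cluster_hypotheses(hypotheses, threshold=0.5):
    """Greedily group hypothesis texts whose keyword overlap with a
    cluster's first member meets the threshold."""
    def overlap(a, b):
        sa, sb = set(a.lower().split()), set(b.lower().split())
        return len(sa & sb) / len(sa | sb)
    clusters = []
    for h in hypotheses:
        for cluster in clusters:
            if overlap(h, cluster[0]) >= threshold:
                cluster.append(h)
                break
        else:
            clusters.append([h])  # no similar cluster found: start a new one
    return clusters
```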
Use historical data to predict future experiment performance:
Success Likelihood - Estimate probability new experiment will produce improvements
Impact Prediction - Forecast likely magnitude of improvement
Time Estimation - Predict how long experiment will take to reach significance
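A basic success-likelihood estimate takes the win rate of similar past tests, smoothed so that tiny samples don't produce overconfident probabilities. A sketch using Laplace-style smoothing:

```python
def success_likelihood(similar_wins, similar_total, prior_wins=1, prior_total=2):
    """Estimate the chance a new experiment wins, based on similar past tests.
    The prior terms pull small samples toward 50% (e.g. 0 wins out of 1
    past test yields 1/3, not an absolute 0%)."""
    return (similar_wins + prior_wins) / (similar_total + prior_total)
```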
Use historical data outside PressPlay:
Data Export - Download experiment data for custom analysis
BI Tool Integration - Connect to Tableau, Looker, or other analytics platforms
API Access - Programmatic access to experiment history
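Exported data can then be analyzed with standard tooling. The sketch below parses a hypothetical CSV export with Python's standard library; the column names are illustrative assumptions, so check your actual export for the real schema.

```python
import csv
import io

# Hypothetical export content; real exports will have more columns.
EXPORT = """name,type,outcome,lift
Bold icon,icon,winner,0.06
Gameplay shots,screenshot,loser,-0.02
Shorter title,copy,winner,0.03
"""

rows = list(csv.DictReader(io.StringIO(EXPORT)))
wins = sum(1 for r in rows if r["outcome"] == "winner")
win_rate = wins / len(rows)
```

The same rows can feed a BI tool or a pandas DataFrame for deeper custom analysis.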
Previous Experiments transforms your testing history from static records into dynamic, actionable intelligence. By systematically reviewing past tests, identifying patterns, and applying learnings to future experiments, you build organizational knowledge that compounds over time. Each experiment becomes not just a one-time test but a permanent contribution to your optimization expertise. Teams that leverage Previous Experiments effectively run smarter tests, achieve higher win rates, and continuously accelerate their optimization velocity, turning app store optimization from a tactical activity into a strategic competitive advantage built on accumulated learning and validated insights.