When you have multiple experiment ideas competing for attention, effective prioritization becomes critical. Running the right tests in the right order maximizes your optimization velocity and impact. This guide provides frameworks for managing your testing queue strategically.
You can only run one experiment at a time per app (testing multiple elements simultaneously makes it impossible to attribute results). With typical experiments taking 2-4 weeks, prioritization directly determines:
Optimization velocity: How quickly you improve conversion rates
Resource efficiency: Designer and marketer time focused on high-impact work
Learning rate: Speed at which you understand your audience
Revenue impact: Faster improvements to high-value metrics
Poor prioritization means spending months testing low-impact elements while missing major opportunities.
Use the ICE framework to score experiment ideas objectively:
Impact: How much will this experiment improve your key metric if successful?
10 points: Could improve install rate by 20%+ (e.g., complete icon redesign, hero screenshot overhaul)
7-9 points: Could improve install rate by 10-20% (e.g., messaging changes, visual style shifts)
4-6 points: Could improve install rate by 5-10% (e.g., color adjustments, text refinements)
1-3 points: Likely to improve install rate by 0-5% (e.g., minor tweaks, incremental changes)
Confidence: How certain are you that this change will produce positive results?
10 points: Backed by strong data, user research, or proven best practices
7-9 points: Based on competitive analysis or moderate evidence
4-6 points: Reasonable hypothesis but limited supporting evidence
1-3 points: Speculative idea or shot-in-the-dark attempt
Ease: How quickly and easily can you execute this experiment?
10 points: Asset already exists or requires only minor tweaks (under an hour of work)
7-9 points: Requires 2-4 hours of design work
4-6 points: Requires 1-2 days of design and iteration
1-3 points: Requires extensive design, illustration, video production (3+ days)
ICE Score = (Impact + Confidence + Ease) / 3
Prioritize experiments with the highest ICE scores. This balances potential impact with execution feasibility.
Here's how you might score several experiment ideas:
| Experiment Idea | Impact | Confidence | Ease | ICE Score | Priority |
|---|---|---|---|---|---|
| Test benefit-focused vs feature-focused hero screenshot | 8 | 7 | 6 | 7.0 | High |
| Change icon background color (blue vs purple) | 5 | 4 | 9 | 6.0 | Medium |
| Add video preview to store listing | 7 | 8 | 2 | 5.7 | Medium |
| Test complete brand redesign | 9 | 5 | 1 | 5.0 | Lower |
| Adjust screenshot text size | 3 | 6 | 10 | 6.3 | Medium |
Based on ICE scores, you'd test the benefit-focused screenshot first, despite the brand redesign having higher impact—the confidence and ease factors make it a better starting point.
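If you track more than a handful of ideas, a short script keeps the scoring and ranking consistent. Here is a minimal sketch in Python; the `ExperimentIdea` class is hypothetical and the scores simply mirror the example table above:

```python
from dataclasses import dataclass

@dataclass
class ExperimentIdea:
    name: str
    impact: int       # 1-10: potential lift on the key metric
    confidence: int   # 1-10: strength of supporting evidence
    ease: int         # 1-10: how quickly assets can be produced

    @property
    def ice_score(self) -> float:
        # ICE Score = (Impact + Confidence + Ease) / 3
        return round((self.impact + self.confidence + self.ease) / 3, 1)

ideas = [
    ExperimentIdea("Benefit-focused vs feature-focused hero screenshot", 8, 7, 6),
    ExperimentIdea("Icon background color (blue vs purple)", 5, 4, 9),
    ExperimentIdea("Add video preview to store listing", 7, 8, 2),
    ExperimentIdea("Complete brand redesign", 9, 5, 1),
    ExperimentIdea("Adjust screenshot text size", 3, 6, 10),
]

# Highest ICE score first: this is the order of your testing queue.
for idea in sorted(ideas, key=lambda i: i.ice_score, reverse=True):
    print(f"{idea.ice_score:>4}  {idea.name}")
```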
Different app store elements fall into priority tiers based on their conversion impact:
Tier 1 (highest impact):
App icon: Affects both search impressions and conversion rate
First screenshot: Primary conversion driver for users who click through
Short description: Appears in search results and at top of listing
Tier 2 (moderate impact):
Screenshots 2-3: Many users scroll to these
Feature graphic: Prominent in certain placements
App title optimization: Affects search visibility and messaging
Video preview: High impact when watched, but viewing rate varies
Tier 3 (lower impact):
Screenshots 4-5: Fewer users reach these
Long description: Only highly engaged users read
Screenshots beyond #5: Very few users scroll this far
Additional promotional assets: Limited visibility
If you've never tested your icon or first screenshot, start there regardless of other ICE scores.
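One way to combine the tier logic with ICE scoring is to give higher-tier elements a small bonus and push never-tested Tier 1 elements to the front of the queue. The sketch below is illustrative only: the tier assignments follow the list above, while the bonus values (0.5 per tier, +2.0 for untested Tier 1) are arbitrary choices rather than a standard.

```python
# Tier of each store element (1 = highest baseline priority).
# Tier assignments follow the list above; the bonus values are illustrative.
ELEMENT_TIERS = {
    "app_icon": 1,
    "first_screenshot": 1,
    "short_description": 1,
    "screenshots_2_3": 2,
    "feature_graphic": 2,
    "app_title": 2,
    "video_preview": 2,
    "screenshots_4_5": 3,
    "long_description": 3,
}

def adjusted_priority(ice_score: float, element: str, never_tested: bool) -> float:
    """Give higher tiers a small bonus; never-tested Tier 1 elements jump the queue."""
    tier = ELEMENT_TIERS.get(element, 3)
    score = ice_score + 0.5 * (3 - tier)   # +1.0 for Tier 1, +0.5 for Tier 2
    if never_tested and tier == 1:
        score += 2.0                        # never-tested Tier 1 goes first
    return score

print(adjusted_priority(6.0, "app_icon", never_tested=True))          # 9.0
print(adjusted_priority(7.0, "screenshots_4_5", never_tested=False))  # 7.0
```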
Your traffic level should influence prioritization:
High traffic:
Strategy: Can afford to test incrementally, running more experiments
Approach: Work through the tiers systematically
Risk tolerance: Can test bolder ideas more frequently
Moderate traffic:
Strategy: Focus on highest-impact elements only
Approach: Stick to Tier 1-2 elements initially
Risk tolerance: Balance safe and bold tests (roughly a 70-30 split)
Low traffic:
Strategy: Test only dramatic differences on critical elements
Approach: Icon and hero screenshot only until optimized
Risk tolerance: Test bold variations, since subtle changes won't reach significance
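The low-traffic advice follows from ordinary A/B sample-size math: the smaller the lift you hope to detect, the more visitors each variant needs before the result can reach significance. Below is a rough illustration using a standard two-proportion z-test approximation; the 30% baseline install rate and the lift values are made up, and the function name is hypothetical (requires scipy).

```python
from math import sqrt
from scipy.stats import norm

def visitors_per_variant(baseline_cvr: float, relative_lift: float,
                         alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per arm for a two-proportion z-test."""
    p1 = baseline_cvr
    p2 = baseline_cvr * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar)) +
                 z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return int(numerator / (p2 - p1) ** 2)

# With a 30% baseline install rate:
print(visitors_per_variant(0.30, 0.05))   # ~15,000 per variant to detect a 5% lift
print(visitors_per_variant(0.30, 0.20))   # ~1,000 per variant to detect a 20% lift
```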
Time your tests strategically around seasons and events:
Before peak season:
Priority: Complete high-impact tests before the holiday traffic surge
Focus: Tier 1 elements that affect conversion most
Goal: Have an optimized listing ready for high-value holiday traffic
During peak season:
Priority: Pause experimentation or run only low-risk tests
Focus: Maintain a stable, proven listing
Goal: Maximize conversions with your best-performing assets
After peak season:
Priority: Good time for bold experiments and learning
Focus: Test riskier ideas when traffic is more stable
Goal: Generate insights for the rest of the year
Education apps: Optimize before back-to-school (July-August)
Fitness apps: Optimize before New Year (November-December)
Tax apps: Optimize before tax season (January-March)
Travel apps: Optimize before summer (April-May)
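If it helps planning, these category windows can be encoded as a small lookup that flags when the pre-peak optimization push should happen. This is only an illustrative sketch; the month ranges simply restate the examples above.

```python
import datetime
from typing import Optional

# Pre-peak optimization windows by category (month ranges, inclusive),
# mirroring the examples above.
OPTIMIZATION_WINDOWS = {
    "education": (7, 8),    # before back-to-school
    "fitness": (11, 12),    # before New Year
    "tax": (1, 3),          # before/during tax season ramp-up
    "travel": (4, 5),       # before summer
}

def in_optimization_window(category: str, today: Optional[datetime.date] = None) -> bool:
    """True if the current month falls inside the category's pre-peak testing window."""
    today = today or datetime.date.today()
    start, end = OPTIMIZATION_WINDOWS[category]
    return start <= today.month <= end

print(in_optimization_window("fitness", datetime.date(2024, 11, 15)))  # True
```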
Create a 3-6 month testing roadmap:
Phase 1:
Test the app icon if it has never been tested or was last tested over a year ago
Test the first screenshot
Goal: Optimize the two highest-impact elements
Phase 2:
Test screenshots 2-3
Test the feature graphic or video preview
Goal: Optimize the full visual story users see when exploring your listing
Phase 3:
Iterate on winning concepts from earlier tests
Test remaining assets based on ICE scores
Goal: Compound improvements and apply learnings broadly
Maintain an organized backlog of experiment ideas:
Next Up: 2-3 experiments ready to launch (assets prepared, ICE scores calculated)
In Design: 3-5 experiments being prepared (design in progress)
Backlog: 10-15 prioritized ideas (ranked by ICE score)
Someday/Maybe: Additional ideas not yet prioritized
Review and re-prioritize monthly as you gain insights from completed experiments.
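A lightweight way to keep this pipeline honest is to store each idea with its stage and ICE score, then re-rank the backlog during the monthly review. Here is a sketch with made-up ideas; the stage names follow the list above.

```python
from collections import defaultdict

STAGES = ["Next Up", "In Design", "Backlog", "Someday/Maybe"]

# Each entry: (idea name, stage, ICE score or None if not yet scored).
pipeline = [
    ("Benefit-focused hero screenshot", "Next Up", 7.0),
    ("Icon background color test", "In Design", 6.0),
    ("Adjust screenshot text size", "Backlog", 6.3),
    ("Add video preview", "Backlog", 5.7),
    ("Complete brand redesign", "Someday/Maybe", None),
]

def monthly_review(pipeline):
    """Group ideas by stage and re-rank the backlog by ICE score."""
    by_stage = defaultdict(list)
    for name, stage, ice in pipeline:
        by_stage[stage].append((name, ice))
    by_stage["Backlog"].sort(key=lambda item: item[1] or 0, reverse=True)
    for stage in STAGES:
        names = [name for name, _ in by_stage[stage]]
        print(f"{stage}: {names}")

monthly_review(pipeline)
```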
Certain events should trigger immediate re-prioritization:
Major app update: Test new features or changes in store listing first
Competitor changes: If a competitor makes major listing changes, consider responsive tests
Traffic shifts: Sudden traffic increases or decreases may change what's testable
Business priorities change: New company goals may shift optimization focus
Platform updates: Google Play changes may require listing adjustments
Unexpected results: Surprising test outcomes may suggest new priority areas
If you manage multiple apps, prioritize across your portfolio:
Revenue contribution: Apps generating more revenue get more testing attention
Growth stage: New apps may need more optimization than mature apps
Traffic level: Apps with sufficient traffic for reliable testing get priority
Strategic importance: Key apps for company strategy get more resources
A sample allocation for a three-app portfolio:
Flagship app (60% of revenue): 50% of testing resources, continuous experimentation
Growing app (25% of revenue): 30% of testing resources, regular testing
Mature apps (15% of revenue): 20% of testing resources, periodic refresh tests
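If you want a mechanical starting point for the split, you can blend revenue share with a strategic-importance weight and normalize to 100%. The weights below are arbitrary illustrations; they happen to land near the sample allocation above, but the final split should still reflect judgment.

```python
def allocate_testing_resources(apps, revenue_weight=0.7):
    """Blend revenue share and strategic importance (both 0-1) into a resource split (%)."""
    raw = {
        name: revenue_weight * a["revenue_share"]
              + (1 - revenue_weight) * a["strategic_importance"]
        for name, a in apps.items()
    }
    total = sum(raw.values())
    return {name: round(100 * value / total, 1) for name, value in raw.items()}

portfolio = {
    "Flagship app": {"revenue_share": 0.60, "strategic_importance": 0.9},
    "Growing app": {"revenue_share": 0.25, "strategic_importance": 0.8},
    "Mature apps": {"revenue_share": 0.15, "strategic_importance": 0.4},
}

print(allocate_testing_resources(portfolio))
# {'Flagship app': 51.9, 'Growing app': 31.2, 'Mature apps': 16.9}
```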
Effective prioritization requires team alignment:
In a monthly prioritization review:
Review completed experiments: What did we learn?
Assess backlog: Are priorities still correct?
Score new ideas: Apply the ICE framework to new proposals
Align on next tests: Confirm the next 2-3 experiments
Resource check: Ensure design bandwidth matches plans
With stakeholders:
Share your testing roadmap and the rationale for priorities
Explain why you're not testing certain ideas (low ICE scores)
Report on the impact of the optimization program, using priorities as the framework
Adjust priorities based on business feedback while maintaining a data-driven approach
✓ All experiment ideas scored using ICE framework
✓ Backlog organized into Next Up, In Design, Backlog, Someday/Maybe
✓ Next 2-3 experiments identified and assets in progress
✓ Tier 1 elements tested before moving to lower tiers
✓ Seasonal calendar reviewed for upcoming key periods
✓ Traffic levels considered in prioritization decisions
✓ Team aligned on priorities and rationale
✓ Monthly prioritization review scheduled
✓ Portfolio-level priorities set for multi-app teams
Smart prioritization ensures you're always working on the experiments that will drive the most value for your app, maximizing the return on your optimization efforts.