The Review Dashboard is your central hub for managing AI-generated experiments before they go live. This human-in-the-loop workflow ensures that every experiment aligns with your brand guidelines, marketing strategy, and quality standards before deployment.
Navigate to the Review Dashboard from the main navigation menu. You'll land on the review queue, which displays all experiments pending your approval. This interface serves as a gatekeeper between AI-generated ideas and your production app store listings.
The review queue displays experiments in a card-based layout, showing key information at a glance (a rough data-shape sketch follows this list):
Experiment Title - A descriptive name for the test
Hypothesis - The hypothesis group (testing theory) this experiment belongs to
Generation Date - When the AI created this experiment
Asset Type - What's being tested (icon, screenshots, short description, etc.)
Preview Thumbnail - Visual preview of the proposed changes
Status Indicator - Shows whether the experiment is pending review, requires attention, or has conflicts
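If it helps to picture the data behind each card, the fields above might be modeled roughly as follows. The type and field names are assumptions for illustration, not the product's actual schema:

```typescript
// Hypothetical shape of the data shown on a review-queue card.
// All names are illustrative; they are not the product's documented schema.
type AssetType = "icon" | "screenshots" | "short_description" | "full_description";

type QueueStatus = "pending_review" | "requires_attention" | "has_conflicts";

interface ReviewQueueCard {
  title: string;                 // Experiment Title
  hypothesis: string;            // Hypothesis group this experiment belongs to
  generatedAt: Date;             // Generation Date
  assetType: AssetType;          // What's being tested
  previewThumbnailUrl: string;   // Visual preview of the proposed changes
  status: QueueStatus;           // Pending review, requires attention, or has conflicts
}
```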
The Review Dashboard provides several ways to organize and filter pending experiments (a sketch of how these options can combine follows the lists below):
Sort your queue by:
Generation Date - Newest or oldest first
Asset Type - Group similar experiments together
Hypothesis - View all experiments related to a testing theory
Priority Score - AI-assigned ranking based on potential impact
Narrow your view with filters:
Asset Type Filter - Show only icons, screenshots, or text experiments
Hypothesis Filter - Display experiments from specific testing theories
App Filter - Focus on a single app (useful for agencies managing multiple clients)
Locale Filter - View experiments for specific languages or regions
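The sort and filter options above can be thought of as a single combined query. Everything below is illustrative; none of the parameter names or values come from the product itself:

```typescript
// Illustrative only: one way the sort and filter controls could combine.
interface QueueQuery {
  sortBy: "generationDate" | "assetType" | "hypothesis" | "priorityScore";
  sortDirection: "asc" | "desc";     // e.g. newest first = "desc" on generationDate
  assetType?: "icon" | "screenshots" | "short_description" | "full_description";
  hypothesisId?: string;             // Hypothesis Filter
  appId?: string;                    // App Filter (one client app at a time)
  locale?: string;                   // Locale Filter, e.g. "de-DE"
}

// Example: highest-priority icon experiments for one app's German listing.
const germanIconQueue: QueueQuery = {
  sortBy: "priorityScore",
  sortDirection: "desc",
  assetType: "icon",
  appId: "com.example.myapp",
  locale: "de-DE",
};
```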
Each experiment card in the review queue provides detailed information without requiring you to open it. The card interface includes:
For visual assets like icons and screenshots, you'll see a side-by-side comparison of the control (current) and variant (proposed) versions. This immediate visual reference helps you make quick decisions about quality and brand alignment.
Key information displayed on each card includes:
Hypothesis Tag - Clickable label showing which testing theory this experiment belongs to
AI Confidence Score - How confident the AI is that this experiment will improve performance
Estimated Impact - Predicted effect on conversion rate or other key metrics
Asset Details - Specific information about what's being changed
From the card view, you can perform several actions without opening the full detail view (illustrated in the sketch after this list):
Quick Approve - Immediately approve experiments that clearly meet your standards
Quick Reject - Dismiss experiments that obviously don't fit
Add to Hypothesis - Assign to an existing hypothesis group or create a new one
View Details - Open the full experiment view for thorough review
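Conceptually, each quick action is a small per-experiment operation. The following assumes a hypothetical ReviewClient with invented method names; it is not a documented API:

```typescript
// Hypothetical client for the card-level quick actions.
interface ReviewClient {
  approve(experimentId: string): Promise<void>;                       // Quick Approve
  reject(experimentId: string, reason?: string): Promise<void>;       // Quick Reject
  assignHypothesis(experimentId: string, hypothesisId: string): Promise<void>;
}

// Example: fast-track an obvious decision straight from the card.
async function quickReview(
  client: ReviewClient,
  experimentId: string,
  meetsStandards: boolean,
): Promise<void> {
  if (meetsStandards) {
    await client.approve(experimentId);
  } else {
    await client.reject(experimentId, "Off-brand: does not meet visual guidelines");
  }
}
```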
When reviewing multiple similar experiments, batch operations save significant time.
Enable multi-select mode to choose several experiments at once. This is particularly useful when:
Approving multiple experiments from the same hypothesis
Rejecting a batch of low-quality generations
Assigning several experiments to the same store listing
Moving experiments to a hypothesis group together
With experiments selected, you can take any of the following actions (sketched after the list):
Bulk Approve - Approve all selected experiments
Bulk Reject - Reject all selected experiments
Assign Hypothesis - Add all to the same hypothesis group
Set Store Listing - Assign all to the same custom store listing (CSL) and locale
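The bulk actions behave like the quick actions applied to a list of experiment IDs. The BulkActions interface below is a hypothetical stand-in, not the product's real interface:

```typescript
// Sketch of the bulk actions applied to a multi-select state.
interface BulkActions {
  bulkApprove(ids: string[]): Promise<void>;
  bulkReject(ids: string[]): Promise<void>;
  assignHypothesis(ids: string[], hypothesisId: string): Promise<void>;
  setStoreListing(ids: string[], storeListingId: string, locale: string): Promise<void>;
}

// Example: approve every selected experiment from one hypothesis in a single step.
async function approveSelection(actions: BulkActions, selectedIds: Set<string>): Promise<void> {
  await actions.bulkApprove([...selectedIds]);
}
```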
The Review Dashboard highlights experiments that require immediate attention (see the sketch following this list):
High Priority Badge - AI has identified this as a high-impact opportunity
Time-Sensitive Label - Experiments related to seasonal events or current trends
Conflict Warning - Indicates experiments that may conflict with others in your backlog
Incomplete Tag - Experiments missing required information before approval
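These markers can be read as optional flags attached to a card. The flag names below are invented purely for illustration:

```typescript
// One way to think about the attention markers: optional flags on a queue card.
type AttentionFlag =
  | "high_priority"      // AI-identified high-impact opportunity
  | "time_sensitive"     // Tied to a seasonal event or current trend
  | "conflict_warning"   // May conflict with another experiment in your backlog
  | "incomplete";        // Missing required information before approval

interface FlaggedExperiment {
  experimentId: string;
  flags: AttentionFlag[];   // Empty when nothing needs immediate attention
}
```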
The dashboard displays clear status indicators showing where each experiment is in the review workflow (modeled as a simple status machine in the sketch after this list):
New - Just generated, not yet reviewed
In Review - You or a team member is currently evaluating this
Needs Changes - Flagged for modification before approval
Ready to Approve - Meets all requirements, awaiting final approval
Approved - Moving to the backlog (visible briefly before it leaves the review queue)
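Taken together, the statuses form a small workflow. Assuming transitions follow the order described above, it could be modeled like this (status names are illustrative):

```typescript
// Minimal sketch of the review workflow as a status machine.
type ReviewStatus = "new" | "in_review" | "needs_changes" | "ready_to_approve" | "approved";

const allowedTransitions: Record<ReviewStatus, ReviewStatus[]> = {
  new: ["in_review"],                               // picked up by a reviewer
  in_review: ["needs_changes", "ready_to_approve"],
  needs_changes: ["in_review"],                     // re-reviewed after modification
  ready_to_approve: ["approved"],                   // final sign-off
  approved: [],                                     // leaves the queue for the backlog
};

function canTransition(from: ReviewStatus, to: ReviewStatus): boolean {
  return allowedTransitions[from].includes(to);
}
```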
For teams with multiple reviewers, the dashboard includes collaboration features:
Assign specific experiments to team members for review. This prevents duplicate effort and clarifies responsibility.
Leave notes on experiments for other team members. This is helpful when:
Requesting a second opinion on borderline cases
Explaining why you rejected an experiment
Suggesting modifications before approval
Documenting brand guideline considerations
Track who reviewed what and when. The activity log shows all actions taken on experiments, creating an audit trail for your approval process.
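Notes and the activity log amount to timestamped records attached to an experiment. The field names below are invented to illustrate the idea:

```typescript
// Rough sketch of a review note and an audit-trail entry, one record per action.
interface ReviewNote {
  experimentId: string;
  author: string;
  message: string;      // e.g. "Needs a second opinion: icon contrast vs. brand palette"
  createdAt: Date;
}

interface ActivityLogEntry {
  experimentId: string;
  actor: string;                                               // who took the action
  action: "assigned" | "commented" | "approved" | "rejected";  // what they did
  timestamp: Date;                                             // when it happened
}
```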
When your review queue is empty, the dashboard displays helpful next steps:
Generate New Experiments - Link to the generation interface
Review Backlog - View approved experiments awaiting deployment
Check Running Tests - Monitor active experiments
View Results - Analyze completed tests
To maintain an efficient review workflow:
Review Regularly - Check the dashboard daily to prevent queue buildup
Use Filters Strategically - Focus on one asset type or hypothesis at a time for consistent decision-making
Leverage Quick Actions - Use quick approve/reject for obvious decisions to speed through the queue
Set Aside Deep Review Time - Schedule dedicated time for thorough review of complex experiments
Establish Team Guidelines - Create clear criteria for what gets approved vs. rejected
Monitor Priority Items - Address high-priority and time-sensitive experiments first
The Review Dashboard is designed to make the approval process efficient while maintaining quality control. By mastering this interface, you'll quickly identify winning experiments while filtering out ideas that don't align with your brand and strategy.