Rejecting experiments is an essential part of maintaining a high-quality testing program. Not every AI-generated experiment will be worth testing, and knowing when and why to reject helps you focus resources on the most promising opportunities. The rejection process also teaches the AI system about your preferences, gradually improving the quality of future generations.
Rejecting experiments serves several important purposes in your optimization workflow:
Rejecting low-quality experiments ensures that only professional-grade assets make it to your store listings. Even though experiments are tests, they represent your app to real users, so maintaining quality standards is crucial for protecting your brand reputation.
Every experiment you run consumes resources—traffic allocation, monitoring time, and analysis effort. Rejecting experiments that are unlikely to provide valuable insights frees up these resources for more promising tests.
By rejecting experiments that don't align with your testing strategy or hypotheses, you maintain focus on the insights that matter most to your app's growth.
Your rejection decisions provide feedback that helps PressPlay's AI learn your preferences, brand guidelines, and quality standards. Over time, the AI generates experiments more aligned with what you actually approve, reducing review time.
Understanding when to reject versus when to modify or approve is key to an efficient review workflow:
Reject experiments with quality issues:
Visual Defects - Pixelation, artifacts, poor rendering, or technical problems that can't be easily fixed
Below Professional Standards - Amateurish design that doesn't meet your quality bar
Resolution Problems - Images that don't meet store requirements for size and quality
Text Issues - Spelling errors, grammar mistakes, or awkward phrasing in text experiments
Reject experiments that clash with your brand:
Violate Brand Guidelines - Use wrong colors, fonts, or visual style
Wrong Tone - Voice and messaging don't match your brand personality
Off-Brand Imagery - Visual concepts that don't fit your brand identity
Inconsistent Messaging - Claims or positioning that contradict your marketing strategy
Reject experiments that are strategically misaligned:
Test the Wrong Thing - Don't actually test the intended hypothesis
Lack Differentiation - Too similar to control or other variants to generate learnings
Wrong Priority - Test something unimportant when you should focus elsewhere
Timing Issues - Reference seasonal events or trends that have passed
Reject experiments that pose compliance risks:
Violate Store Policies - Contain content prohibited by Google Play Store guidelines
Make False Claims - Promise features your app doesn't deliver
Use Prohibited Content - Include violence, mature content, or other restricted materials
Copyright Issues - Incorporate elements you don't have rights to use
Reject experiments that aren't feasible to run:
Can't Be Implemented - Require features or changes your app doesn't support
Too Resource-Intensive - Would require major modifications to make work
Technical Conflicts - Incompatible with your current infrastructure or store listing configuration
Some experiments fall into a gray area between rejection and approval. Consider modifying rather than rejecting when:
Core concept is strong but execution needs refinement
Quick fixes would address the issues (5 minutes or less)
Minor adjustments would bring it into brand alignment
Text tweaks would perfect the messaging
Lean toward rejecting when:
Major redesign would be needed to make it work
Fundamental concept is flawed regardless of execution
Your instinct says it won't perform well
Time investment to fix it isn't worth the potential learning
PressPlay provides several ways to reject experiments depending on your workflow:
For obvious rejections:
From the Review Dashboard, locate the experiment card
Click the "Quick Reject" button (X icon)
Optionally provide a brief reason
Confirm rejection
The experiment is removed from your queue
Quick reject is ideal when you can immediately see the experiment doesn't meet standards.
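If you prefer to process rejections in a script rather than in the dashboard, the same action can be expressed as an API call. The sketch below is a minimal illustration only: the base URL, endpoint, and field names are assumptions, not a documented PressPlay API.

```python
import requests

API_BASE = "https://api.pressplay.example/v1"  # hypothetical base URL
API_KEY = "your-api-key"                        # hypothetical credential

def quick_reject(experiment_id: str, reason: str | None = None) -> None:
    """Reject a single experiment, optionally attaching a brief reason."""
    payload = {"reason": reason} if reason else {}
    response = requests.post(
        f"{API_BASE}/experiments/{experiment_id}/reject",  # assumed route
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors instead of failing silently

# Example: an icon variant is too pixelated to keep in the queue
quick_reject("exp_123", reason="Poor Visual Quality")
```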
For experiments requiring closer examination before deciding:
Click "View Details" on the experiment card
Review the full experiment details and assets
Consider whether modification might be worthwhile
Click "Reject Experiment" button
Select rejection reason from dropdown
Optionally add detailed notes
Confirm rejection
For multiple experiments with similar issues:
Enable multi-select mode in the review queue
Select all experiments you want to reject
Click "Bulk Reject" in the actions toolbar
Select a common rejection reason
Add notes if applicable
Confirm the bulk rejection
All selected experiments are removed from the queue
Bulk rejection is efficient when the AI has generated multiple experiments with the same problem.
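Bulk rejection can be scripted in the same spirit. Again, this is only a sketch under the same assumptions as the earlier example; the bulk-reject route and payload shape are hypothetical.

```python
import requests

API_BASE = "https://api.pressplay.example/v1"  # hypothetical base URL
API_KEY = "your-api-key"                        # hypothetical credential

def bulk_reject(experiment_ids: list[str], reason: str, notes: str = "") -> None:
    """Reject several experiments that share the same problem in one call."""
    response = requests.post(
        f"{API_BASE}/experiments/bulk-reject",  # assumed route
        json={"experiment_ids": experiment_ids, "reason": reason, "notes": notes},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()

# Example: three icon variants all use an off-brand neon palette
bulk_reject(
    ["exp_201", "exp_202", "exp_203"],
    reason="Off-Brand Visual Style",
    notes="Neon palette conflicts with our muted brand colors",
)
```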
When rejecting experiments, selecting a specific reason helps the AI learn from your decisions:
Poor Visual Quality - Technical problems, pixelation, artifacts
Doesn't Meet Professional Standards - Amateurish or low-quality design
Technical Specifications Not Met - Resolution, aspect ratio, or format issues
Off-Brand Visual Style - Doesn't match brand guidelines
Wrong Tone or Voice - Text doesn't match brand personality
Color Palette Issues - Uses colors outside brand guidelines
Inconsistent with Marketing - Contradicts positioning or messaging
Doesn't Test Hypothesis - Doesn't actually test what it's supposed to
Insufficient Differentiation - Too similar to existing variants
Wrong Priority - Tests something unimportant
Outdated or Irrelevant - References trends or seasonal events that have passed
Store Policy Violation - Violates Google Play guidelines
False or Misleading Claims - Promises features app doesn't have
Prohibited Content - Contains restricted materials
Copyright or Legal Issues - Uses content without proper rights
Not Implementable - Can't be deployed with current infrastructure
Too Resource Intensive - Requires disproportionate effort
Technical Incompatibility - Conflicts with app features or listings
Doesn't Match My Vision - Subjective rejection based on overall fit
Prefer Different Direction - Want to explore alternative approaches
Other - Custom reason with required explanation
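If your team also tracks review decisions outside PressPlay (in a spreadsheet, script, or internal dashboard), keeping the reason taxonomy as structured data helps everyone tag rejections consistently. The dictionary below simply restates the categories above; it is an illustrative convention, not something PressPlay requires.

```python
# Rejection reasons grouped by category, mirroring the list above.
REJECTION_REASONS = {
    "quality": [
        "Poor Visual Quality",
        "Doesn't Meet Professional Standards",
        "Technical Specifications Not Met",
    ],
    "brand": [
        "Off-Brand Visual Style",
        "Wrong Tone or Voice",
        "Color Palette Issues",
        "Inconsistent with Marketing",
    ],
    "strategic": [
        "Doesn't Test Hypothesis",
        "Insufficient Differentiation",
        "Wrong Priority",
        "Outdated or Irrelevant",
    ],
    "compliance": [
        "Store Policy Violation",
        "False or Misleading Claims",
        "Prohibited Content",
        "Copyright or Legal Issues",
    ],
    "feasibility": [
        "Not Implementable",
        "Too Resource Intensive",
        "Technical Incompatibility",
    ],
    "other": [
        "Doesn't Match My Vision",
        "Prefer Different Direction",
        "Other",
    ],
}
```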
Beyond selecting a category, providing specific feedback helps the AI improve. Feedback like the following gives it something concrete to act on:
"Icon colors are too bright and don't match our brand's muted palette"
"Text is too casual—we need professional, enterprise-focused tone"
"Screenshot set shows features we deprecated last month"
"Too similar to variant from Experiment #47—need more differentiation"
"Good concept but visual quality is too low for our standards"
"Don't like it" (too vague)
"Bad" (no actionable information)
"Wrong" (doesn't specify what's wrong)
Specific feedback enables the AI to understand patterns in your preferences and generate better-aligned experiments in the future.
When you reject an experiment:
Removed from Queue - Experiment disappears from your review dashboard
Archived - Moved to rejected experiments archive for reference
Status Updated - Marked as "Rejected" with timestamp and reason
AI Feedback Loop - Your rejection and reason are logged for AI learning
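Conceptually, the result is a record like the one below: the experiment leaves the active queue, lands in the archive, and carries the reason and timestamp that feed the AI's learning loop. The field names are illustrative, not PressPlay's actual schema.

```python
from datetime import datetime, timezone

# Illustrative shape of an archived rejection record (field names are assumptions).
rejection_record = {
    "experiment_id": "exp_123",
    "status": "Rejected",
    "rejected_at": datetime.now(timezone.utc).isoformat(),
    "reason": "Off-Brand Visual Style",
    "notes": "Icon colors are too bright for our muted palette",
    "asset_type": "icon",
    "archived": True,         # kept for reference, not deleted
    "feedback_logged": True,  # available to the AI feedback loop
}
```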
Rejected experiments aren't deleted—they're archived:
Navigate to "Rejected Experiments" in the Review section
View all rejected experiments with reasons and timestamps
Filter by rejection reason, date, or asset type
Restore experiments to review queue if you change your mind
If you later decide a rejected experiment is worth testing:
Find the experiment in the Rejected Experiments archive
Click "Restore to Review Queue"
The experiment returns to your pending review list
You can then approve or reject again
This is useful if your strategy changes or you rejected something by mistake.
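In script form, restoring would be a single call, sketched below under the same hypothetical API assumptions as the rejection examples above.

```python
import requests

API_BASE = "https://api.pressplay.example/v1"  # hypothetical base URL
API_KEY = "your-api-key"                        # hypothetical credential

def restore_to_review_queue(experiment_id: str) -> None:
    """Move a previously rejected experiment back into the pending review list."""
    response = requests.post(
        f"{API_BASE}/experiments/{experiment_id}/restore",  # assumed route
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()

restore_to_review_queue("exp_123")  # e.g. after a strategy change
```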
Your rejection decisions continuously improve PressPlay's AI generation:
The AI analyzes rejection patterns to understand:
Quality Standards - What level of visual and textual quality you require
Brand Preferences - Your specific brand guidelines and style preferences
Strategic Priorities - What types of tests you find valuable
Common Pitfalls - Mistakes to avoid in future generations
Over time, the AI adapts to generate:
Higher Quality Outputs - Fewer technical and quality issues
Better Brand Alignment - Closer match to your visual and tonal guidelines
More Strategic Experiments - Tests that better align with your priorities
Improved Approval Rate - Higher percentage of generated experiments meet your standards
Monitor how your rejection feedback improves AI performance:
Rejection Rate Trend - Should decrease over time as AI learns
Reason Distribution - Common rejection reasons should become less frequent
Approval Time - Should decrease as more experiments meet standards immediately
Modification Rate - Should decrease as AI generates closer-to-perfect experiments
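A quick way to check the first of these metrics is to export your review history and compute the rejection rate per month. This is a minimal sketch assuming you can export each review decision as a (date, decision) pair; the export format is an assumption.

```python
from collections import defaultdict

# Assumed export format: one (ISO date, decision) pair per reviewed experiment.
reviews = [
    ("2024-01-15", "rejected"),
    ("2024-01-20", "approved"),
    ("2024-02-03", "rejected"),
    ("2024-02-10", "approved"),
    ("2024-02-21", "approved"),
]

monthly = defaultdict(lambda: {"rejected": 0, "total": 0})
for date, decision in reviews:
    month = date[:7]  # "YYYY-MM"
    monthly[month]["total"] += 1
    monthly[month]["rejected"] += decision == "rejected"

for month in sorted(monthly):
    counts = monthly[month]
    rate = counts["rejected"] / counts["total"]
    print(f"{month}: {rate:.0%} rejection rate")  # should trend downward as the AI learns
```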
To maximize the value of the rejection process:
Apply Same Standards - Don't reject something one day and approve similar experiments the next
Document Criteria - Write down your approval/rejection criteria for consistency
Team Alignment - Ensure all reviewers apply similar standards
Periodic Calibration - Review rejected experiments together to ensure consistency
Be Specific - Explain exactly what was wrong
Be Constructive - Frame feedback in terms of what would make it approvable
Reference Guidelines - Cite specific brand guidelines or policies when relevant
Use Examples - Reference approved experiments that better meet your standards
Remember the Purpose - These are experiments, not final assets—they don't need to be perfect
Embrace Learning - Sometimes testing an imperfect experiment yields valuable insights
Consider Modification - If you're rejecting 70%+, you might be too strict—consider modifying borderline cases
Balance Speed and Quality - Don't let perfect be the enemy of good enough to test
Trust Your Gut - If it's obviously wrong, reject quickly without over-analysis
Set Time Limits - Don't spend more than 1-2 minutes deciding on rejections
Use Batch Rejection - Group obviously similar problems and reject together
Clear the Queue - Don't let rejected experiments linger in your queue
Regularly review your rejection data to gain insights:
PressPlay provides analytics on your rejection patterns:
Overall Rejection Rate - Percentage of generated experiments you reject
Rejection by Category - Most common rejection reasons
Rejection by Asset Type - Whether icons, screenshots, or text have higher rejection rates
Rejection by Hypothesis - Which hypotheses generate more rejected experiments
Trends Over Time - Whether rejection rate is improving as AI learns
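If you want to slice this data yourself, a breakdown by reason or asset type takes only a few lines once the rejection history is exported. The sketch below assumes a simple exported table and uses pandas purely as an example tool; it is not a PressPlay requirement.

```python
import pandas as pd

# Assumed export of rejection records (columns are illustrative).
rejections = pd.DataFrame(
    {
        "asset_type": ["icon", "icon", "screenshot", "text", "text"],
        "reason": [
            "Off-Brand Visual Style",
            "Poor Visual Quality",
            "Doesn't Test Hypothesis",
            "Wrong Tone or Voice",
            "Wrong Tone or Voice",
        ],
    }
)

# Most common rejection reasons overall.
print(rejections["reason"].value_counts())

# Rejections broken down by asset type and reason.
print(rejections.groupby(["asset_type", "reason"]).size())
```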
Use rejection analytics to:
Refine Generation Prompts - If specific rejection reasons are common, adjust how you prompt the AI
Update Brand Guidelines - Make guidelines clearer in areas where AI frequently misses the mark
Adjust Hypotheses - If a hypothesis consistently generates rejected experiments, refine or abandon it
Calibrate Standards - If rejection rate is very high or very low, reconsider your approval criteria
Sometimes patterns in your rejections indicate larger opportunities:
If you're rejecting many experiments for the same quality issue, consider:
Adjusting AI generation settings
Providing more detailed brand guidelines
Using reference examples to train the AI
If you're rejecting experiments because they don't test what you want to test:
Refine your hypothesis definitions
Provide more specific generation prompts
Adjust your testing strategy
If you're rejecting experiments because the AI can't produce what you need:
Provide feedback to PressPlay about missing capabilities
Consider hybrid approaches (AI + manual creation)
Adjust your expectations for what AI can generate
Rejection is not a negative action—it's a critical quality control mechanism that ensures you're only testing experiments with real potential to improve your app's performance. By thoughtfully rejecting experiments that don't meet your standards and providing clear feedback, you simultaneously maintain a high-quality testing program and train the AI to better serve your needs in the future.