Each asset type in PressPlay can have its own default settings that automatically apply to new experiments. Configuring these settings saves time and ensures consistency across your testing program. This guide explains how to set up and customize settings for Icons (IC), Feature Graphics (FG), and Short Descriptions (SD).
Different asset types require different testing approaches:
Duration: Icon tests need more time than description tests
Traffic allocation: Visual assets might use different splits than text-based assets
Success metrics: Each asset type impacts different conversion funnel stages
Review requirements: Different stakeholders may review different assets
Setting defaults by asset type ensures each experiment starts with appropriate parameters for what's being tested.
Accessing Experiment Settings
Navigate to your app's dashboard
Click the "Settings" tab in the main navigation
Select "Experiment Settings" from the settings menu
Choose the asset type you want to configure
Icon (IC) Settings
Icons require the longest testing periods because they need a large impression volume to reach statistical significance:
Default Duration: 14-21 days
Traffic Split: 50/50 (control vs. variant)
Minimum Sample Size: 10,000 impressions per variant
Auto-deploy Winners: Disabled (manual review recommended)
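If it helps to picture these defaults as data, here is a minimal sketch of what an icon configuration might look like. The interface and field names are illustrative assumptions, not PressPlay's actual settings schema.

```ts
// Hypothetical shape for per-asset-type defaults. The interface and field
// names are illustrative assumptions, not PressPlay's actual settings schema.
interface ExperimentDefaults {
  assetType: "IC" | "FG" | "SD";
  durationDays: number;           // default test duration
  trafficSplit: [number, number]; // [control %, variant %]
  minSamplePerVariant: number;    // impressions or listing views
  autoDeployWinners: boolean;
}

const iconDefaults: ExperimentDefaults = {
  assetType: "IC",
  durationDays: 14,               // 14-21 days recommended for icons
  trafficSplit: [50, 50],
  minSamplePerVariant: 10_000,    // impressions per variant
  autoDeployWinners: false,       // manual review recommended
};
```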
Test Duration
Set how many days icon experiments run by default. Longer durations increase confidence in results but delay testing cadence.
7 days: Only for high-traffic apps (100k+ daily impressions)
14 days: Standard for most apps
21+ days: For apps with lower traffic or seasonal considerations
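As a rough illustration of this tiering, the helper below maps daily icon impressions to a suggested duration. The 100k+ figure comes from the guidance above; the lower cutoff separating "standard" from "lower traffic" is not specified in this guide, so the value used is an assumption.

```ts
// Illustrative only: map daily icon impressions to a suggested default
// duration. The 100k threshold comes from the guidance above; the 10k
// cutoff separating "standard" from "lower traffic" is an assumed value.
function suggestedIconDurationDays(dailyImpressions: number): number {
  if (dailyImpressions >= 100_000) return 7;  // high-traffic apps
  if (dailyImpressions >= 10_000) return 14;  // standard for most apps
  return 21;                                  // lower traffic or seasonal
}

suggestedIconDurationDays(250_000); // -> 7
```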
Traffic Allocation
Determines what percentage of users see each variant:
50/50 split: Standard for A/B tests (recommended)
80/20 split: Conservative approach, limits exposure to new variant
Multi-variant: Test 3+ variants simultaneously (not recommended for icons)
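To make the split percentages concrete, here is a conceptual sketch of deterministic bucketing: each user hashes into a bucket from 0 to 99, and buckets below the variant percentage see the variant. It illustrates what an 80/20 split means for individual users; it is not PressPlay's internal assignment logic.

```ts
// Conceptual sketch of applying a traffic split: each user hashes into a
// bucket 0-99, and buckets below the variant percentage see the variant.
// This is not PressPlay's internal assignment logic, just an illustration
// of what an 80/20 split means for individual users.
function bucketFor(userId: string): number {
  let h = 0;
  for (const ch of userId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple deterministic hash
  }
  return h % 100;
}

function variantFor(userId: string, variantPercent: number): "control" | "variant" {
  return bucketFor(userId) < variantPercent ? "variant" : "control";
}

variantFor("user-123", 20); // 80/20 split: ~20% of users see the new creative
```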
Significance Threshold
The confidence level required before declaring a winner:
90%: More winners but higher false positive risk
95%: Industry standard (recommended)
99%: Very conservative, fewer winners
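For intuition about what the threshold controls, the sketch below runs a simplified two-sided two-proportion z-test on install rates and compares the p-value to the chosen confidence level. PressPlay's actual statistical model is not documented here, and real systems often add corrections (for example, for repeated peeking), so treat this purely as an illustration.

```ts
// Simplified two-sided two-proportion z-test: is the difference in install
// rate between control and variant significant at the chosen confidence?
function normalCdf(z: number): number {
  // Phi(z) via the Abramowitz & Stegun 7.1.26 erf approximation (~1.5e-7 error).
  const x = Math.abs(z) / Math.SQRT2;
  const t = 1 / (1 + 0.3275911 * x);
  const erf =
    1 -
    ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t -
      0.284496736) * t + 0.254829592) * t * Math.exp(-x * x);
  return z >= 0 ? 0.5 * (1 + erf) : 0.5 * (1 - erf);
}

function isSignificant(
  controlInstalls: number, controlViews: number,
  variantInstalls: number, variantViews: number,
  confidence: number, // e.g. 0.95
): boolean {
  const p1 = controlInstalls / controlViews;
  const p2 = variantInstalls / variantViews;
  const pooled = (controlInstalls + variantInstalls) / (controlViews + variantViews);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / controlViews + 1 / variantViews));
  const z = (p2 - p1) / se;
  const pValue = 2 * (1 - normalCdf(Math.abs(z)));
  return pValue < 1 - confidence; // e.g. p < 0.05 for a 95% threshold
}

isSignificant(500, 10_000, 580, 10_000, 0.95); // -> true (5.0% vs 5.8%)
```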
Approval Requirements
Who must approve icon experiments before deployment:
Designer review required
Marketing approval needed
Automatic approval for AI-generated variants
Brand team sign-off mandatory
Feature Graphic (FG) Settings
Feature graphics typically show results faster than icons because they're viewed by users who have already clicked through to the store listing:
Default Duration: 10-14 days
Traffic Split: 50/50
Minimum Sample Size: 5,000 listing views per variant
Auto-deploy Winners: Optional (depends on governance)
Test Duration
Feature graphics can typically run shorter than icon tests:
7 days: Apps with 5k+ daily listing views
10 days: Standard for most apps
14+ days: For lower-traffic apps or seasonal testing
Success Metrics
What metrics determine the winning variant:
Install rate: Installs divided by listing views (primary metric)
Install count: Total installs (can favor the higher-traffic variant)
Engagement rate: Scroll depth and time on listing
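A quick worked example, with made-up numbers, of why install rate rather than raw install count is the safer primary metric when traffic is split unevenly:

```ts
// Why install rate, not raw install count, should pick the winner when the
// traffic split is uneven. All numbers below are made up for illustration.
const control = { listingViews: 8_000, installs: 400 }; // received 80% of traffic
const variant = { listingViews: 2_000, installs: 120 }; // received 20% of traffic

const controlRate = control.installs / control.listingViews; // 0.050 (5.0%)
const variantRate = variant.installs / variant.listingViews; // 0.060 (6.0%)

// Control "wins" on install count (400 vs 120) simply because it saw more
// traffic, but the variant converts better -- install rate picks correctly.
console.log(variantRate > controlRate); // true
```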
Deployment Options
What happens when an experiment completes:
Manual review: All experiments require approval before rollout
Auto-deploy winners: Winning variants deploy automatically
Notify only: Results are reported but no automatic action
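A small sketch of how these three policies differ once an experiment finishes; the policy names and the handler below are illustrative, not real PressPlay API surface.

```ts
// Sketch of how the three completion policies differ. Policy names mirror
// this guide; the handler and its behavior strings are illustrative only.
type DeploymentPolicy = "manual-review" | "auto-deploy" | "notify-only";

function onExperimentComplete(policy: DeploymentPolicy, winnerFound: boolean): string {
  switch (policy) {
    case "manual-review":
      return "Queue results for approval before any rollout";
    case "auto-deploy":
      return winnerFound ? "Deploy the winning variant automatically" : "No winner: keep control";
    case "notify-only":
      return "Report results; take no automatic action";
  }
}

onExperimentComplete("auto-deploy", true);
```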
Short Description (SD) Settings
Short descriptions show results the fastest since they're text-based and appear in search results:
Default Duration: 7-10 days
Traffic Split: 50/50
Minimum Sample Size: 5,000 impressions per variant
Auto-deploy Winners: Enabled more often than for other asset types
Test Duration
Short descriptions typically need the shortest test periods:
5 days: High-traffic apps testing minor variations
7 days: Standard for most apps
10+ days: Captures both weekday and weekend traffic patterns
Keyword Optimization
Special considerations for ASO testing:
Track keyword rankings: Monitor position changes during test
Measure search impressions: Include search visibility metrics
Click-through rate: Primary metric for search result performance
Localization Settings
How translations are handled:
Test in source locale first: Validate messaging before translating
Auto-translate winners: Automatically create experiments for other locales
Professional translation required: Flag winners for human translation
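As an illustration of how a winning source-locale description could fan out to other locales under these options, here is a hypothetical helper; the function name, locale codes, and action strings are all assumptions.

```ts
// Hypothetical fan-out of a winning source-locale description to other
// locales. The function name, locale codes, and action strings are assumptions.
type LocalizationPolicy = "auto-translate" | "professional-translation";

function queueLocaleFollowUps(
  winnerText: string,
  locales: string[],
  policy: LocalizationPolicy,
): { locale: string; action: string }[] {
  return locales.map((locale) => ({
    locale,
    action:
      policy === "auto-translate"
        ? `Create a follow-up experiment using a machine translation of "${winnerText}"`
        : `Flag "${winnerText}" for human translation before testing`,
  }));
}

queueLocaleFollowUps("Track workouts in one tap", ["de-DE", "ja-JP"], "auto-translate");
```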
Common Settings
Some settings apply to all asset types:
Priorities
Default priority: What priority new experiments get (1-10 scale)
Auto-increment: Automatically adjust priorities to prevent conflicts
Priority-based queuing: Run high-priority experiments first
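A minimal sketch of priority-based queuing, assuming larger numbers mean higher priority; whether 1 or 10 is "highest" in PressPlay is not stated in this guide, so that direction is an assumption.

```ts
// Minimal sketch of priority-based queuing: the highest-priority pending
// experiment runs first. Treating larger numbers as higher priority is an
// assumption; this guide only says priorities use a 1-10 scale.
interface PendingExperiment {
  id: string;
  priority: number; // 1-10
}

function nextToRun(queue: PendingExperiment[]): PendingExperiment | undefined {
  return [...queue].sort((a, b) => b.priority - a.priority)[0];
}

nextToRun([
  { id: "ic-new-logo", priority: 8 },
  { id: "sd-keyword-test", priority: 3 },
]); // -> the icon experiment runs first
```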
Notifications
Experiment started: Notify when deployment begins
Results available: Alert when experiments complete
Significant changes: Notify if results shift dramatically
Errors and issues: Alert for deployment or sync problems
Integrations
Slack notifications: Post updates to specific channels
Webhook endpoints: Send events to external systems
Calendar sync: Add experiment schedules to team calendars
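If you consume webhook events, a receiver might look roughly like the sketch below. The event names and payload fields shown are assumptions for illustration only; consult PressPlay's webhook documentation for the real schema.

```ts
// Minimal webhook receiver. The event names and payload fields are assumed
// for illustration; check PressPlay's webhook documentation for the real schema.
import * as http from "node:http";

interface ExperimentEvent {
  event: "experiment.started" | "experiment.completed" | "experiment.error"; // assumed names
  experimentId: string;
  assetType: "IC" | "FG" | "SD";
}

http
  .createServer((req, res) => {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      const evt = JSON.parse(body) as ExperimentEvent;
      console.log(`${evt.event}: ${evt.assetType} experiment ${evt.experimentId}`);
      res.writeHead(204).end(); // acknowledge receipt
    });
  })
  .listen(8080);
```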
Changing Default Settings
Navigate to Settings → Experiment Settings
Select the asset type to modify
Update any configurable parameters
Click "Save Settings" to apply changes
New experiments will use updated defaults
Settings can be configured at different levels:
App-level: Different settings for each app (default)
Publisher-level: Shared settings across all apps (optional)
Per-experiment override: Individual experiments can override defaults
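One way to picture how these levels interact is a layered merge where the more specific scope wins. This is an illustrative sketch of the precedence described above, not PressPlay internals.

```ts
// Illustrative precedence: per-experiment overrides beat app-level defaults,
// which beat publisher-level defaults. Not PressPlay internals.
type Settings = Partial<{
  durationDays: number;
  trafficSplit: [number, number];
  autoDeployWinners: boolean;
}>;

function resolveSettings(
  publisherDefaults: Settings,
  appDefaults: Settings,
  experimentOverrides: Settings,
): Settings {
  // Later spreads win, so the most specific scope takes precedence.
  return { ...publisherDefaults, ...appDefaults, ...experimentOverrides };
}

resolveSettings(
  { durationDays: 14, autoDeployWinners: false }, // publisher-level
  { durationDays: 21 },                           // app-level
  { trafficSplit: [80, 20] },                     // per-experiment override
); // -> { durationDays: 21, autoDeployWinners: false, trafficSplit: [80, 20] }
```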
Best Practices
Set conservative defaults: Err on the side of longer tests and higher confidence
Adjust based on traffic: High-traffic apps can use shorter durations
Review periodically: Update settings as you learn what works for your app
Document your rationale: Note why you chose specific settings
Test the tests: Run a few experiments to validate your settings work well
Recommended Starting Configurations
High-traffic apps:
IC duration: 7 days
FG duration: 7 days
SD duration: 5 days
Auto-deploy: Enabled, with manual review as an option
Standard-traffic apps:
IC duration: 14 days
FG duration: 10 days
SD duration: 7 days
Auto-deploy: Disabled
Lower-traffic apps:
IC duration: 21 days
FG duration: 14 days
SD duration: 10 days
Auto-deploy: Disabled
Troubleshooting
Issue: Experiments not reaching significance
Solution: Increase test duration or lower significance threshold
Issue: Too many experiments timing out
Solution: Check if sample size requirements are too high for your traffic
Issue: Inconsistent results across experiments
Solution: Ensure sufficient test duration to account for traffic variations
Related Articles
Understanding Asset Types - Learn about all 7 asset types (IC, FG, SD, LD, SS, PV, MULTI_ASSET)
Creating Manual Experiments - Override settings in individual experiments
Analyzing Experiment Results - Understand how settings affect outcomes
Publisher Settings Overview - Configure publisher-wide defaults