We put together a quick checklist of Do’s and Don’ts in mobile game analytics, based on industry best practices and some of the practices Keewano’s AI Agents use daily.
✅ Do’s – Clearly Defined Test Objectives
- Establish success criteria in advance.
- Ensure actionable results.
- Formulate clear hypotheses before testing.
❌ Don’ts – Unplanned Experiments
- Starting tests without clear goals.
- Ending up with inconclusive outcomes.
- Creating ambiguous results.
✅ Do’s – Ensure Statistical Significance
- Run tests for adequate duration.
- Adhere to predetermined thresholds.
- Clearly define minimum sample sizes (see the sample-size sketch below).
❌ Don’ts – Short, Insignificant Tests
- Ending tests prematurely.
- Misinterpreting minor results.
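To make the sample-size Do concrete, here is a minimal sketch assuming a conversion-style metric and Python with statsmodels; the baseline rate and target lift are illustrative placeholders, not recommendations.

```python
from math import ceil

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.05  # assumed control conversion rate
target = 0.055   # smallest lift worth detecting (5% -> 5.5%)

# Convert the two proportions into a standardized effect size (Cohen's h),
# then solve for the per-group sample size at alpha=0.05 and 80% power.
effect = abs(proportion_effectsize(baseline, target))
n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"Minimum users per variant: {ceil(n_per_group)}")
```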
✅ Do’s – Maintain Test Consistency
- Set parameters clearly beforehand (see the config sketch below).
- Avoid altering test conditions.
- Maintain stability to ensure accuracy.
❌ Don’ts – Changing Test Conditions Mid-Test
- Altering parameters during tests.
- Introducing bias and unreliable data.
- Confusing test interpretation.
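One lightweight way to keep conditions fixed is to freeze the test configuration in code before launch; a minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestConfig:
    """Parameters locked in before launch; frozen so they can't drift mid-test."""
    name: str
    traffic_split: float       # share of users routed to the variant
    min_users_per_group: int   # e.g., taken from a power calculation
    duration_days: int

config = TestConfig("pricing_test", 0.5, 14_000, 21)
# config.traffic_split = 0.6  # raises FrozenInstanceError: conditions stay fixed
```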
✅ Do’s – Comparable, Equally Sized Segments
- Clearly define comparable user groups.
- Ensure balanced segment sizes.
- Validate segment similarity before testing (see the SRM check below).
❌ Don’ts – Small or Mismatched Segments
- Comparing disproportionate groups.
- Producing misleading data.
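A standard pre-analysis check here is a sample ratio mismatch (SRM) test: compare the observed group sizes against the intended split with a chi-square test. A minimal sketch with illustrative counts:

```python
from scipy.stats import chisquare

observed = [50_423, 49_102]        # users actually landing in A and B
total = sum(observed)
expected = [total / 2, total / 2]  # the intended 50/50 split

stat, p_value = chisquare(observed, f_exp=expected)
if p_value < 0.001:
    print(f"Sample ratio mismatch (p={p_value:.2e}) - do not trust this test.")
else:
    print(f"Split looks healthy (p={p_value:.3f}).")
```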
✅ Do’s – Distinct Test User Groups
- Clearly separate users per test.
- Avoid simultaneous test overlaps (see the bucketing sketch below).
- Ensure clean, distinct results.
❌ Don’ts – Overlapping Test Users
- Allowing users exposure to multiple tests.
- Causing confusion in data interpretation.
- Introducing test interference.
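One common way to keep concurrent tests on disjoint user pools is deterministic hash bucketing, with each slot dedicated to a single test. A minimal sketch; the test names and slot count are hypothetical:

```python
import hashlib
from typing import Optional

NUM_SLOTS = 4
TEST_BY_SLOT = {0: "onboarding_test", 1: "pricing_test", 2: "difficulty_test"}
# Slot 3 is deliberately unassigned, acting as an untouched holdout pool.

def assigned_test(user_id: str) -> Optional[str]:
    """Deterministically map a user to at most one concurrent test."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    slot = int(digest, 16) % NUM_SLOTS
    return TEST_BY_SLOT.get(slot)  # None means the user is in no test

print(assigned_test("player-12345"))
```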
✅ Do’s – Use Representative Sample Sizes
- Ensure samples are large enough for statistical significance.
- Clearly define and validate segment similarity.
- Optimize test sizes for meaningful insights (see the MDE sketch below).
❌ Don’ts – Small-Segment Testing
- Using insufficient sample sizes.
- Comparing overly niche user groups.
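The inverse of the earlier sample-size calculation is also useful: given the segment you actually have, compute the minimum detectable effect (MDE). A minimal sketch, again assuming statsmodels; the group size is illustrative:

```python
from statsmodels.stats.power import NormalIndPower

n_per_group = 2_000  # users actually available in each variant

min_effect = NormalIndPower().solve_power(
    nobs1=n_per_group, alpha=0.05, power=0.8, alternative="two-sided"
)
# Cohen's h of ~0.2 is conventionally a "small" effect; if the detectable
# effect is far larger than any lift you realistically expect, the segment
# is too small for the test to be meaningful.
print(f"Smallest detectable effect (Cohen's h): {min_effect:.3f}")
```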
✅ Do’s – Test with a Clear Hypothesis
- Define a specific question upfront.
- Ensure actionable outcomes.
- Align tests with strategic goals.
❌ Don’ts – Testing Without a Clear Hypothesis
- Conducting ambiguous experiments.
- Generating unclear results.
✅ Do’s – Define Custom KPIs
- Tailor metrics specifically to your genre (example below).
- Validate KPIs against real player data.
❌ Don’ts – Use Generic Metrics
- Applying broad KPIs to all game types.
- Ignoring unique genre factors.
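As an illustration of a genre-specific KPI, here is a sketch for a hypothetical match-3 game, where the share of wins that relied on a booster can say more than a generic metric like session length; the event fields are invented:

```python
import pandas as pd

# Hypothetical win events for a match-3 game
wins = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u3"],
    "used_booster": [True, False, True, True],
})

# "Booster dependency": the share of wins that needed a booster - a balance
# and monetization signal a one-size-fits-all KPI would miss.
booster_dependency = wins["used_booster"].mean()
print(f"Booster dependency: {booster_dependency:.0%}")
```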
✅ Do’s – Single-Variable A/B Tests
- Clearly attribute test outcomes.
- Ensure simple, actionable results (see the z-test sketch below).
❌ Don’ts – Complex A/B Tests
- Testing multiple variables simultaneously.
- Producing unclear results due to mixed variables.
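With one variable and one conversion metric, evaluation stays simple, e.g. a two-proportion z-test. A minimal sketch with illustrative counts:

```python
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 469]    # converting users in control, variant
exposed = [10_000, 10_000]  # users who saw each version

z_stat, p_value = proportions_ztest(conversions, exposed)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```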
✅ Do’s – Prioritize High-Impact Tests
- Focus tests on key metrics (e.g., ARPU, retention).
- Ensure significant potential returns.
- Regularly reassess test priorities (see the scoring sketch below).
❌ Don’ts – Random Test Prioritization
- Wasting effort on minor aesthetics.
- Neglecting impactful experiments.
- Misallocating valuable resources.
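One common prioritization heuristic (not the only one) is ICE scoring: rank candidates by impact x confidence x ease. A minimal sketch with an invented backlog:

```python
# Hypothetical test backlog, scored 1-10 on each dimension
candidates = [
    {"name": "first-session tutorial rework", "impact": 8, "confidence": 6, "ease": 4},
    {"name": "menu button color",             "impact": 2, "confidence": 8, "ease": 9},
    {"name": "starter-pack price point",      "impact": 9, "confidence": 5, "ease": 7},
]

def ice(test: dict) -> int:
    return test["impact"] * test["confidence"] * test["ease"]

# Highest ICE score first: cosmetic tweaks naturally sink to the bottom.
for test in sorted(candidates, key=ice, reverse=True):
    print(f"{ice(test):>4}  {test['name']}")
```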
✅ Do’s – Personalize Monetization
- Segment players effectively (see the segmentation sketch below).
- Tailor strategies to player behaviors.
- Regularly refine segmentation criteria.
❌ Don’ts – Uniform Monetization
- Using one model for all players.
- Ignoring varied player spending habits.
- Missing revenue optimization opportunities.
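A simple starting point for monetization segmentation is bucketing players by recent spend; the thresholds below are illustrative, not industry standards:

```python
import pandas as pd

players = pd.DataFrame({
    "user_id": ["u1", "u2", "u3", "u4", "u5"],
    "spend_30d": [0.0, 0.99, 4.99, 24.99, 299.0],  # 30-day spend in USD
})

def spend_segment(spend: float) -> str:
    """Assumed cutoffs; tune them against your own spend distribution."""
    if spend == 0:
        return "non-payer"
    if spend < 5:
        return "minnow"
    if spend < 50:
        return "dolphin"
    return "whale"

players["segment"] = players["spend_30d"].apply(spend_segment)
print(players)
```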
✅ Do’s – Blend Metrics and Feedback
- Regularly collect player opinions.
- Dig deeply into qualitative insights.
- Combine insights for comprehensive analysis.
❌ Don’ts – Ignoring Qualitative Data
- Focusing only on quantitative metrics.
- Missing deeper player insights.
✅ Do’s – Regular Data Validation
- Cross-check and cleanse data (see the validation sketch below).
- Ensure accurate interpretation.
- Periodically audit data sources.
❌ Don’ts – Blind Data Trust
- Failing to verify data accuracy.
- Acting on misleading information.
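A minimal sketch of routine sanity checks over an event export, assuming pandas; the file name and column names are hypothetical:

```python
import pandas as pd

events = pd.read_csv("events.csv")  # hypothetical raw event export

timestamps = pd.to_datetime(events["timestamp"], utc=True)
checks = {
    "duplicate_events": events.duplicated(["user_id", "event_name", "timestamp"]).sum(),
    "null_user_ids": events["user_id"].isna().sum(),
    "negative_revenue": (events["revenue"] < 0).sum(),
    "future_timestamps": (timestamps > pd.Timestamp.now(tz="UTC")).sum(),
}
for name, failures in checks.items():
    status = "OK" if failures == 0 else f"{failures} suspect rows"
    print(f"{name}: {status}")
```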
✅ Do’s – Provide Contextual Analysis
- Clearly document analysis conditions.
- Explicitly explain data outcomes.
❌ Don’ts – Contextless Results
- Sharing data without context.
- Allowing misinterpretation.
✅ Do’s – Analyze Failures & Successes
- Study unsuccessful competitors closely.
- Identify clear pitfalls to avoid.
- Regularly reassess competitor landscape.
❌ Don’ts – Ignoring Failed Competitors
- Only observing successful games.
- Missing valuable failure lessons.
- Repeating avoidable mistakes.
✅ Do’s – Combine Data and Playtesting
- Balance gameplay using player feedback.
- Leverage data-driven insights.
- Iterate continuously for refinement.
❌ Don’ts – Single Balancing Method Reliance
- Trusting only one approach (metrics or intuition).
- Risking ineffective balance outcomes.
✅ Do’s – Iterative Difficulty Adjustments
- Regularly refine difficulty curves.
- Adapt gameplay to player progression.
- Continuously monitor player skill metrics (see the tuning sketch below).
❌ Don’ts – Set-and-Forget Difficulty
- Assuming static difficulty suffices.
- Overlooking evolving player skills.
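One data-driven trigger for difficulty iteration is flagging levels whose clear rates drift outside a target band; the band and level data below are illustrative:

```python
TARGET_CLEAR_RATE = (0.45, 0.75)  # assumed acceptable first-attempt band

level_clear_rates = {"level_12": 0.82, "level_13": 0.31, "level_14": 0.58}

low, high = TARGET_CLEAR_RATE
for level, rate in level_clear_rates.items():
    if rate < low:
        print(f"{level}: {rate:.0%} clear rate - candidate for easing")
    elif rate > high:
        print(f"{level}: {rate:.0%} clear rate - candidate for hardening")
    else:
        print(f"{level}: {rate:.0%} clear rate - within target band")
```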
✅ Do’s – Clear, Visual Data Presentations
- Utilize dashboards and clear charts.
- Simplify complex insights.
❌ Don’ts – Overloading Stakeholders with Raw Data
- Presenting overwhelming information.
- Causing stakeholder confusion.
✅ Do’s – Regular Stakeholder Updates
- Provide consistent insights.
- Maintain transparency and trust.
- Set predictable communication schedules.
❌ Don’ts – Infrequent Stakeholder Communication
- Providing rare updates.
- Communicating issues only reactively.
- Losing stakeholder alignment.
✅ Do’s – Stick to Predefined Thresholds
- Define clear metric thresholds beforehand.
- Allow tests to fully mature before analysis.
- Clearly differentiate noise from valid outcomes (see the guard sketch below).
❌ Don’ts – Premature Result Interpretation
- Drawing conclusions from minimal data.
- Mislabeling noise as significant.
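The simplest protection against premature interpretation is a hard gate: no reading results until the pre-registered sample size is reached. A minimal sketch; the numbers are illustrative:

```python
# Thresholds fixed before launch (e.g., from a power calculation)
PLANNED_USERS_PER_GROUP = 14_000
PLANNED_ALPHA = 0.05  # applied unchanged once the test matures

def safe_to_interpret(n_control: int, n_variant: int) -> bool:
    """Only read results once both groups hit the pre-registered size."""
    return min(n_control, n_variant) >= PLANNED_USERS_PER_GROUP

if not safe_to_interpret(n_control=9_812, n_variant=9_790):
    print("Test still maturing - interim p-values are noise, keep waiting.")
```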
Struggling to fix your game metrics?
Keewano’s AI Analytics Agent does the heavy lifting: it spots the metrics that matter, forms hypotheses, and gives you answers instantly.