Do’s and Don’ts in Game Analysis


We put together a quick checklist of Do’s and Don’ts in mobile game analytics, based on industry best practices and some of the practices Keewano’s AI Agents use daily.

✅ Do’s – Clearly Defined Test Objectives 

  1. Establish success criteria in advance. 
  2. Ensure actionable results. 
  3. Formulate clear hypotheses before testing. 

❌ Don’ts – Unplanned Experiments

  1. Starting tests without clear goals. 
  2. Achieving inconclusive outcomes. 
  3. Creating ambiguous results. 

✅ Do’s – Ensure Statistical Significance

  1. Run tests for adequate duration. 
  2. Adhere to predetermined thresholds. 
  3. Clearly define minimum sample sizes. 

❌ Don’ts – Short, Insignificant Tests

  1. Ending tests prematurely. 
  2. Misinterpreting minor results.
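
A minimum sample size can be fixed before a test starts rather than decided after the fact. The sketch below is a minimal, standard-library Python example that estimates the per-group size needed for a two-proportion test; the 5% baseline conversion rate and 1-point lift are illustrative assumptions, not recommendations.

```python
from statistics import NormalDist

def min_sample_size(p_baseline, lift, alpha=0.05, power=0.8):
    """Per-group sample size for a two-proportion test.

    p_baseline: control conversion rate; lift: minimum detectable
    absolute difference; alpha/power are the conventional 5% / 80%.
    """
    p_variant = p_baseline + lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_power = NormalDist().inv_cdf(power)          # desired statistical power
    variance = p_baseline * (1 - p_baseline) + p_variant * (1 - p_variant)
    return int((z_alpha + z_power) ** 2 * variance / lift ** 2) + 1

# Illustrative numbers: detect a 1-point lift on a 5% baseline.
print(min_sample_size(0.05, 0.01))  # roughly 8,000+ users per group
```

Running the numbers before launch also tells you how long the test must run: divide the required sample by your daily eligible users and you get the minimum duration.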

✅ Do’s – Maintain Test Consistency

  1. Set parameters clearly beforehand. 
  2. Avoid altering test conditions. 
  3. Maintain stable conditions to ensure accuracy. 

❌ Don’ts – Changing Test Conditions Mid-Test

  1. Altering parameters during tests. 
  2. Introducing bias and unreliable data. 
  3. Confusing test interpretation. 

✅ Do’s – Comparably Sized Segments

  1. Clearly define comparable user groups.
  2. Ensure balanced segment sizes. 
  3. Validate segment similarity before testing. 

❌ Don’ts – Small or Mismatched Segments 

  1. Comparing disproportionate groups. 
  2. Producing misleading data. 
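
One way to catch mismatched segments is a sample-ratio-mismatch (SRM) check: if the observed split deviates from the planned split by more than chance allows, the assignment mechanism is probably broken and the test results should not be trusted. The sketch below is a minimal, standard-library Python example assuming a planned 50/50 split; the counts and the alpha are illustrative.

```python
from statistics import NormalDist

def sample_ratio_mismatch(n_control, n_variant, expected=0.5, alpha=0.001):
    """Flag a sample-ratio mismatch between two segments.

    Uses the normal approximation to the binomial to test whether the
    observed control share differs from the `expected` share.
    Returns True when the segments are suspiciously mismatched.
    """
    n = n_control + n_variant
    observed = n_control / n
    se = (expected * (1 - expected) / n) ** 0.5  # standard error of the share
    z = (observed - expected) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return p < alpha

print(sample_ratio_mismatch(5000, 5050))  # → False (normal fluctuation)
print(sample_ratio_mismatch(5000, 5700))  # → True  (split is broken)
```

A strict alpha is deliberate here: an SRM flag means the data pipeline or bucketing is faulty, so false alarms should be rare.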

✅ Do’s – Distinct Test User Groups

  1. Clearly separate users per test. 
  2. Avoid simultaneous test overlaps. 
  3. Ensure clean, distinct results. 

❌ Don’ts – Overlapping Test Users

  1. Allowing users to join multiple tests at once. 
  2. Causing confusion in data interpretation. 
  3. Introducing test interference. 
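
Overlap can be prevented mechanically rather than by convention. A common pattern, sketched below with hypothetical test names and bucket ranges, hashes each user into a fixed number of buckets and gives each concurrent test a disjoint range, so no user can land in two tests at once.

```python
import hashlib

def bucket(user_id: str, salt: str, buckets: int = 100) -> int:
    """Deterministically map a user to one of `buckets` buckets."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

# Hypothetical layout: each concurrent test claims a disjoint bucket
# range, so the groups can never overlap.
TESTS = {
    "new_tutorial": range(0, 30),   # buckets 0-29
    "pricing_test": range(30, 60),  # buckets 30-59
    # buckets 60-99 stay untouched as a holdout
}

def active_test(user_id: str):
    """Return the single test this user belongs to, or None (holdout)."""
    b = bucket(user_id, salt="exclusive-layer-1")
    for test_name, bucket_range in TESTS.items():
        if b in bucket_range:
            return test_name
    return None
```

Because the hash is deterministic, a user always resolves to the same bucket, so group membership is stable across sessions without storing any assignment state.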

✅ Do’s – Use Representative Sample Sizes

  1. Ensure segments are statistically significant. 
  2. Clearly define and validate segment similarity. 
  3. Optimize test sizes for meaningful insights. 

❌ Don’ts – Small-Segment Testing

  1. Using insufficient sample sizes. 
  2. Comparing overly niche user groups. 

✅ Do’s – Test with a Clear Hypothesis

  1. Define a specific question upfront.
  2. Ensure actionable outcomes.
  3. Align tests with strategic goals. 

❌ Don’ts – Testing Without a Clear Hypothesis

  1. Conducting ambiguous experiments.
  2. Generating unclear results.

✅ Do’s – Define Custom KPIs

  1. Tailor metrics specifically to your genre. 
  2. Validate KPIs against real player data. 

❌ Don’ts – Use Generic Metrics

  1. Applying broad KPIs to all types of games. 
  2. Ignoring unique genre factors.

✅ Do’s – Single-Variable A/B Tests

  1. Clearly attribute test outcomes. 
  2. Ensure simple, actionable results. 

❌ Don’ts – Complex A/B Tests 

  1. Testing multiple variables simultaneously. 
  2. Producing unclear results due to mixed variables.
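
When only one variable changes between variants, the readout is a single clean comparison. The sketch below, using illustrative conversion counts and only the Python standard library, computes a two-sided p-value for the difference between two variants with a pooled two-proportion z-test.

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates.

    conv_*: converted users per variant; n_*: total users per variant.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical result: one changed variable (e.g. a price point),
# everything else identical between the groups.
p = two_proportion_z_test(conv_a=420, n_a=8000, conv_b=500, n_b=8000)
print(f"p = {p:.4f}")
```

Because only one variable differed, a significant p-value here attributes the lift directly to that change; with several simultaneous changes, the same number would tell you nothing about which one mattered.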

✅ Do’s – Prioritize High-Impact Tests

  1. Focus tests on key metrics (e.g., ARPU, retention). 
  2. Ensure significant potential returns. 
  3. Regularly reassess test priorities. 

❌ Don’ts – Random Test Prioritization 

  1. Wasting effort on minor aesthetics. 
  2. Neglecting impactful experiments. 
  3. Misallocating valuable resources. 

✅ Do’s – Personalize Monetization

  1. Segment players effectively. 
  2. Tailor strategies to player behaviors. 
  3. Regularly refine segmentation criteria. 

❌ Don’ts – Uniform Monetization 

  1. Using one model for all players. 
  2. Ignoring varied player spending habits. 
  3. Missing revenue optimization opportunities.

✅ Do’s – Blend Metrics and Feedback

  1. Regularly collect player opinions. 
  2. Leverage qualitative insights deeply. 
  3. Combine insights for comprehensive analysis. 

❌ Don’ts – Ignoring Qualitative Data 

  1. Focusing only on quantitative metrics.
  2. Missing deeper player insights.

✅ Do’s – Regular Data Validation

  1. Cross-check and cleanse data. 
  2. Ensure accurate interpretation. 
  3. Periodically audit data sources. 

❌ Don’ts – Blind Data Trust

  1. Not verifying data accuracy.
  2. Acting on misleading information.

✅ Do’s – Provide Contextual Analysis

  1. Clearly document analysis conditions.
  2. Explicitly explain data outcomes. 

❌ Don’ts – Contextless Results

  1. Sharing data without context.
  2. Allowing misinterpretation. 

✅ Do’s – Analyze Failures & Successes

  1. Study unsuccessful competitors closely. 
  2. Identify clear pitfalls to avoid. 
  3. Regularly reassess competitor landscape.

❌ Don’ts – Ignoring Failed Competitors

  1. Only observing successful games.
  2. Missing valuable failure lessons. 
  3. Repeating avoidable mistakes. 

✅ Do’s – Combine Data and Playtesting

  1. Balance gameplay using player feedback. 
  2. Leverage data-driven insights. 
  3. Iterate continuously for refinement. 

❌ Don’ts – Single Balancing Method Reliance 

  1. Trusting only one approach (metrics or intuition). 
  2. Risking ineffective balance outcomes. 

✅ Do’s – Iterative Difficulty Adjustments

  1. Regularly refine difficulty curves. 
  2. Adapt gameplay to player progression. 
  3. Continuously monitor player skill metrics. 

❌ Don’ts – Set-and-Forget Difficulty

  1. Assuming static difficulty suffices. 
  2. Overlooking evolving player skills. 

✅ Do’s – Clear, Visual Data Presentations

  1. Utilize dashboards and clear charts. 
  2. Simplify complex insights. 

❌ Don’ts – Overloading Stakeholders with Raw Data

  1. Presenting overwhelming information. 
  2. Causing stakeholder confusion.

✅ Do’s – Regular Stakeholder Updates

  1. Provide consistent insights. 
  2. Maintain transparency and trust. 
  3. Set predictable communication schedules.

❌ Don’ts – Infrequent Stakeholder Communication

  1. Providing rare updates. 
  2. Reactively communicating issues.
  3. Losing stakeholder alignment. 

✅ Do’s – Stick to Predefined Thresholds

  1. Define clear metrics thresholds beforehand. 
  2. Allow tests to fully mature before analysis. 
  3. Clearly differentiate noise from valid outcomes. 

❌ Don’ts – Premature Result Interpretation

  1. Drawing conclusions from minimal data. 
  2. Mislabeling noise as significant.
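
Both rules can be enforced in code: no verdict before the predefined minimum sample is reached, and no declared winner above the predefined alpha. The sketch below is a minimal Python guard; the `min_n` and `alpha` values are illustrative placeholders that should be fixed before the test starts, not adjusted afterwards.

```python
def verdict(p_value, n_per_group, min_n=8000, alpha=0.05):
    """Apply predefined thresholds before declaring any result.

    min_n and alpha are illustrative values agreed before launch.
    """
    if n_per_group < min_n:
        return "keep running"        # test has not matured yet
    if p_value < alpha:
        return "significant"
    return "no detectable effect"    # difference is likely noise

print(verdict(p_value=0.03, n_per_group=2500))  # → keep running
print(verdict(p_value=0.03, n_per_group=9000))  # → significant
```

Note the first call: the p-value looks significant, but the sample is too small, so the guard refuses to call a winner; this is exactly the premature-interpretation trap the checklist warns about.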

Struggling to fix your game metrics?

Keewano’s AI Analytics Agent does the heavy lifting: it spots the metrics that matter, forms hypotheses, and gives you answers instantly.


No configurations. No distractions. Just answers.