User acquisition metrics for mobile apps are the quantitative measures that evaluate how effectively a growth program attracts, activates, retains, and monetizes new users. They span the full acquisition lifecycle, from the cost of the initial install through activation, retention, revenue generation, and long-term value.
CPI is the foundation of mobile UA measurement. Every media buyer knows their CPI by channel, by geo, by creative. It’s the metric campaigns are built on, and for good reason: CPI is fast, comparable, and directly actionable. For gaming advertisers running install-optimized campaigns, CPI is the primary lever for managing scale and efficiency in real time.
But CPI measures one event at one moment. The install. What happens after it (activation, retention, revenue) requires additional metrics. The most effective UA programs don’t replace CPI. They layer quality and value metrics on top of it, creating a measurement stack that connects what you paid at the top of the funnel to what you got at the bottom.
The gap between CPI and downstream outcomes has widened in recent years. Privacy regulations have reduced targeting precision, which means more spend can reach less qualified users even at attractive CPIs. Channel saturation has increased competition across major platforms. The industry-wide retention reality (77% of users gone in three days, 90% by day 30) confirms that the install event alone doesn’t tell you enough about whether a campaign is producing real value.
This article presents a complete framework for user acquisition metrics in mobile apps, organized as a measurement stack that starts with CPI and builds upward through quality, value, and efficiency layers.
The Metric Stack: From Install to Value
User acquisition metrics exist in layers. Each builds on the one below it. CPI is the foundation. The metrics above it add progressively more information about whether the users you’re acquiring are worth the cost.
The strongest UA teams work the full stack, using CPI for campaign-level optimization while using higher-layer metrics for budget allocation and channel strategy.
Layer 1: Cost metrics (what you paid)
CPI (Cost Per Install)
Formula: Total ad spend ÷ number of installs
What it tells you: How much you paid per download from a specific channel or campaign.
Why it matters: CPI is the workhorse metric of mobile UA. It’s the fastest feedback signal a media buyer has. It enables real-time creative comparison (creative A vs creative B), channel benchmarking, and campaign pacing. For gaming advertisers running at scale, CPI is the primary metric for managing daily spend efficiency.
How smart teams use it: CPI becomes most powerful when paired with LTV models. A media buyer who knows their CPI by source and their predicted LTV by source can calculate whether a campaign is profitable before waiting 30 or 90 days for the revenue data to mature. CPI against LTV is, in practice, a proxy for ROAS, and many experienced buyers use it exactly this way.
Where it needs support: CPI alone can’t distinguish between a $3 install that churns in 48 hours and a $3 install that retains through D90 and generates $35 in revenue. Two campaigns with identical CPIs can produce radically different business outcomes. That’s not a limitation of CPI as a metric; it’s a reason to pair it with quality and value metrics that fill in the picture.
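To make the CPI-against-LTV pairing concrete, here is a minimal sketch in Python. The source names, spend, install counts, and predicted LTVs are illustrative placeholders, not benchmarks; the predicted LTV would come from your own model.

```python
# Minimal sketch: pair CPI with a predicted LTV to estimate ROAS per source.
# All figures and source names are illustrative placeholders.

campaigns = {
    # source: (ad spend, installs, predicted D30 LTV per user from your own model)
    "source_a": (12_000.0, 4_000, 4.50),
    "source_b": (9_000.0, 2_250, 3.20),
}

for source, (spend, installs, predicted_ltv) in campaigns.items():
    cpi = spend / installs                # what you paid per install
    predicted_roas = predicted_ltv / cpi  # LTV against CPI as a ROAS proxy
    print(f"{source}: CPI ${cpi:.2f}, predicted D30 ROAS {predicted_roas:.2f}")
```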
CPA (Cost Per Action)
Formula: Total ad spend ÷ number of users completing a defined post-install action
What it tells you: How much you paid for a user who did something meaningful: registered, made a purchase, started a subscription, funded an account, or completed a tutorial.
When it’s the right model: CPA is valuable when the advertiser’s primary goal is driving specific post-install behaviors and the campaign can be optimized toward those events. It’s common in fintech (cost per funded account), subscriptions (cost per trial start), and e-commerce (cost per first purchase).
When CPI is the better fit: For gaming advertisers, especially those running at high volume across multiple geos, CPI is often the more practical campaign model. The install is the event the ad platform can optimize toward most efficiently, and the quality assessment happens downstream through retention and LTV analysis. Many gaming campaigns use CPI as the acquisition model and measure quality through ROAS and cohort analysis rather than paying on a CPA basis.
The key point: CPI and CPA are different campaign models, not a quality ladder where one is better than the other. The right model depends on the app’s monetization strategy, the volume required, and where in the funnel the advertiser can most efficiently optimize.
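A short sketch of how the two models compare on cost per action. The spend, funded-account, and completion figures are invented for the example; a CPI campaign still has an effective CPA once you know its downstream completion rate.

```python
# Sketch: explicit CPA vs the effective CPA implied by a CPI campaign.
# All figures are illustrative.

cpa_spend, funded_accounts = 15_000.0, 300
print(f"CPA campaign: ${cpa_spend / funded_accounts:.2f} per funded account")

cpi_spend, installs, completion_rate = 15_000.0, 6_000, 0.04
effective_cpa = cpi_spend / (installs * completion_rate)
print(f"CPI campaign: ${cpi_spend / installs:.2f} CPI -> "
      f"${effective_cpa:.2f} effective cost per action")
```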
CAC (Customer Acquisition Cost)
Formula: Total acquisition costs (media + creative + tools + salaries) ÷ number of new paying users
What it tells you: The fully loaded cost to acquire a user who generates revenue.
Why it matters: CAC captures costs that CPI and CPA ignore, including creative production, attribution tools, and team overhead. It gives you the true denominator for your LTV-to-CAC ratio.
Common mistake: Calculating blended CAC across all channels. Blended CAC is useful for board reporting but masks channel-level performance. Calculate CAC by source to identify which channels produce paying users efficiently and which produce volume that never monetizes.
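A minimal sketch of channel-level CAC next to blended CAC, assuming you can attribute fully loaded costs and paying users to each channel. The channel names and figures are illustrative.

```python
# Sketch: channel-level CAC vs blended CAC. Costs and user counts are illustrative.

channels = {
    # channel: (fully loaded cost: media + creative + tools + salaries, new paying users)
    "paid_social": (50_000.0, 1_000),
    "value_exchange": (20_000.0, 800),
}

for name, (cost, payers) in channels.items():
    print(f"{name}: CAC ${cost / payers:.2f}")

# Blended CAC: fine for board reporting, but it hides the per-channel spread above.
total_cost = sum(cost for cost, _ in channels.values())
total_payers = sum(payers for _, payers in channels.values())
print(f"blended: CAC ${total_cost / total_payers:.2f}")
```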
Layer 2: Quality metrics (what you got)
Activation Rate
Formula: Users completing first key action ÷ total installs × 100
What it tells you: What percentage of installers found enough value to take a meaningful first step.
Why it matters: Activation is the earliest measurable quality signal. If a channel produces high install volume but low activation, the users either didn’t understand the product or weren’t motivated to explore it. Both are acquisition quality signals worth investigating.
Benchmark: Varies widely by category. Fintech apps (account creation): 40-60%. Gaming (tutorial completion): 50-70%. Subscription apps (trial start): 20-40%. If your activation rate is below your category benchmark, investigate onboarding friction and acquisition source quality simultaneously.
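As a rough sketch, checking activation by source against a category benchmark looks like this. The benchmark value and per-source counts are illustrative assumptions.

```python
# Sketch: activation rate by source against a category benchmark.
# The benchmark and per-source counts are illustrative.

BENCHMARK = 0.50  # e.g., gaming tutorial completion, per the ranges above

sources = {
    # source: (installs, users completing the first key action)
    "channel_a": (8_000, 4_400),
    "channel_b": (5_000, 1_900),
}

for name, (installs, activated) in sources.items():
    rate = activated / installs
    flag = "ok" if rate >= BENCHMARK else "investigate onboarding and source quality"
    print(f"{name}: activation {rate:.0%} -> {flag}")
```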
D1 / D7 / D30 / D90 Retention
Formula: Users active on Day X ÷ users installed on Day 0 × 100
What it tells you: What percentage of users return at each milestone.
Why it’s critical: Retention is the clearest signal that users are finding ongoing value. D1 retention indicates whether the first session delivered on the promise. D7 indicates habit formation. D30 is the industry standard for evaluating cohort quality. D90 identifies core users.
Benchmark averages: D1: 25-35% (category dependent). D7: 10-20%. D30: 5-12%. D90: 3-7%. iOS retention runs approximately 2-3 percentage points higher than Android across all windows.
How to use it: Build retention curves by acquisition source. If Channel A retains 8% at D30 and Channel B retains 18% at D30, those channels are producing fundamentally different user populations. Retention by source is the single most actionable metric for understanding what your CPI actually bought.
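A minimal sketch of retention by source from per-user activity data. The data shape (each user as a source plus a set of active days since install) is an assumption for the example, not a prescribed schema.

```python
# Sketch: DX retention by acquisition source from per-user activity data.
# The data shape is an assumption; in practice this comes from your analytics store.

users = [
    # (source, days-since-install on which the user was active; day 0 = install day)
    ("channel_a", {0, 1}),
    ("channel_a", {0}),
    ("channel_a", {0, 1, 7, 30}),
    ("channel_b", {0, 1, 7, 30}),
    ("channel_b", {0, 1, 7}),
]

def retention(source: str, day: int) -> float:
    cohort = [active for src, active in users if src == source]
    return 100.0 * sum(1 for active in cohort if day in active) / len(cohort)

for src in ("channel_a", "channel_b"):
    curve = {f"D{d}": round(retention(src, d), 1) for d in (1, 7, 30)}
    print(src, curve)  # different curves at the same CPI = different user populations
```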
Post-Install Event Completion Rate
Formula: Users completing event X ÷ total installs × 100
What it tells you: What percentage of users reach specific engagement milestones after install, such as registration, first purchase, level completion, subscription start, or first deposit.
Why it matters: Post-install events are the bridge between engagement and monetization. A high event completion rate indicates that users understand the product, find it valuable, and are progressing toward revenue-generating behavior.
How to use it: Track event completion by acquisition source. A channel that delivers 10,000 installs with 2% purchase completion contributes 200 buyers. A channel that delivers 4,000 installs with 8% purchase completion contributes 320. The smaller channel produces 60% more revenue-generating users.
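The worked comparison above, expressed in code (the channel names are placeholders; the figures are the ones from the text):

```python
# Installs x completion rate = revenue-generating users, per the example above.

channels = {"channel_a": (10_000, 0.02), "channel_b": (4_000, 0.08)}

buyers = {name: round(installs * rate) for name, (installs, rate) in channels.items()}
print(buyers)  # {'channel_a': 200, 'channel_b': 320} -> 60% more buyers from the smaller channel
```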
Layer 3: Value metrics (what it’s worth)
Cohort LTV (Lifetime Value)
Formula: Total revenue from a user cohort ÷ number of users in the cohort
What it tells you: How much revenue a group of users acquired at the same time generates over their lifetime.
Why cohort matters: Aggregate LTV across all users is misleading because it blends high-quality organic users with lower-quality paid cohorts. Cohort LTV, segmented by acquisition date and source, shows the actual revenue trajectory of each channel’s users.
How to use it: Plot LTV curves by acquisition source over D7, D30, D60, D90, and D180 windows. Channels where the curve continues to rise past D30 are producing users with compounding value. Channels where the curve flattens at D7 are producing users who generate early revenue but don’t grow. This distinction determines where to scale budget.
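A minimal sketch of a cumulative cohort LTV curve for one source. The cohort size and revenue events are illustrative; in practice each source gets its own curve.

```python
# Sketch: cumulative cohort LTV per window. Cohort size and revenue are illustrative.

cohort_size = 1_000
# (days since install, revenue booked on that day across the whole cohort)
revenue_events = [(3, 800.0), (12, 600.0), (45, 900.0), (120, 500.0)]

for window in (7, 30, 60, 90, 180):
    ltv = sum(amount for day, amount in revenue_events if day <= window) / cohort_size
    print(f"D{window} cohort LTV: ${ltv:.2f}")

# A curve that keeps rising past D30 signals compounding value; one that
# flattens at D7 signals early revenue that doesn't grow.
```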
ROAS (Return on Ad Spend)
Formula: Revenue attributed to a campaign ÷ ad spend on that campaign
What it tells you: How much revenue each dollar of ad spend generates.
Why it matters: ROAS is the most direct measure of campaign profitability. A ROAS of 1.0 means breakeven. Above 1.0 is profitable. Below 1.0 is a loss. For experienced media buyers, ROAS is the metric that ties everything together. It’s where CPI, retention, and LTV converge into a single profitability signal.
How CPI connects to ROAS: Strong media buyers don’t treat CPI and ROAS as separate conversations. They build LTV prediction models and measure CPI against those models to estimate ROAS in real time. A $4 CPI on a campaign where the predicted D30 LTV is $12 implies a 3.0 ROAS, and that’s a scale signal, even if the CPI itself is higher than another campaign’s. CPI is the input. ROAS is the output. The best teams connect them.
Time windows: D7 ROAS is common for subscription apps. D30 ROAS for gaming and e-commerce. D90 ROAS for apps with longer monetization curves. The right window depends on your payback period.
Post-ATT reality: ROAS measurement on iOS is complicated by limited deterministic attribution. Teams combine SKAdNetwork conversion values with modeled data from their MMP. The measurement isn’t perfect, but directionally accurate ROAS by channel is achievable and essential.
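A short sketch of realized ROAS at standard windows. The spend and attributed revenue figures are illustrative; the window you act on depends on your payback period, per the above.

```python
# Sketch: realized ROAS at standard windows. Figures are illustrative.

spend = 10_000.0
attributed_revenue = {7: 4_000.0, 30: 11_500.0, 90: 16_000.0}  # cumulative by window

for window, revenue in sorted(attributed_revenue.items()):
    roas = revenue / spend
    status = "profitable" if roas > 1.0 else "below breakeven"
    print(f"D{window} ROAS: {roas:.2f} ({status})")
```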
LTV-to-CAC Ratio
Formula: Lifetime value ÷ customer acquisition cost
What it tells you: Whether your acquisition program creates or destroys value.
Benchmarks: 3:1 is the minimum for sustainable growth. Top-performing apps achieve 4:1 or higher. Below 1:1 means you lose money on every user acquired. Between 1:1 and 3:1 suggests the program is viable but not yet efficient.
Why it’s the north star: LTV-to-CAC ratio is the single number that summarizes whether your acquisition program is working. Every other metric feeds into it. CPI, activation, retention, event completion, and LTV are all inputs. LTV-to-CAC is the output.
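A minimal sketch applying the benchmark bands above. The input figures are illustrative.

```python
# Sketch: LTV-to-CAC against the benchmark bands above. Inputs are illustrative.

def ltv_to_cac(ltv: float, cac: float) -> str:
    ratio = ltv / cac
    if ratio >= 3.0:
        verdict = "sustainable (3:1+ benchmark)"
    elif ratio >= 1.0:
        verdict = "viable but not yet efficient"
    else:
        verdict = "losing money on every user acquired"
    return f"{ratio:.1f}:1 -> {verdict}"

print(ltv_to_cac(36.0, 9.0))   # 4.0:1 -> sustainable (3:1+ benchmark)
print(ltv_to_cac(14.0, 10.0))  # 1.4:1 -> viable but not yet efficient
```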
Layer 4: Efficiency metrics (what it really costs)
Cost Per Retained User
Formula: Total acquisition spend ÷ users still active at D30 (or D90)
What it tells you: What you actually paid for a user who stayed.
Why it adds a dimension CPI doesn’t cover: Cost per retained user bridges the gap between what you paid (CPI) and what you got (retention). Two channels can have identical CPIs but produce retained users at wildly different costs. A $4 CPI channel with 5% D30 retention produces retained users at $80 each. A $4 CPI channel with 15% D30 retention produces retained users at $27 each. Same CPI. Very different value.
How to implement: Add this column to every channel performance report. Calculate it weekly. Use it alongside CPI and ROAS to evaluate channels. It’s not a replacement for CPI; it’s the layer above it that helps explain why two campaigns with similar CPIs can produce such different ROAS.
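The worked example above, in code (the figures are the ones from the text):

```python
# Identical CPIs, very different costs per retained user, per the example above.

for cpi, d30_retention in ((4.0, 0.05), (4.0, 0.15)):
    cost_per_retained = cpi / d30_retention
    print(f"CPI ${cpi:.2f} at {d30_retention:.0%} D30 -> "
          f"${cost_per_retained:.2f} per retained user")
# -> $80.00 vs $26.67 per retained user at the same $4 CPI
```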
Incrementality
Measurement approach: Holdout groups (suppress ads to a random subset and compare install rates), geo-based lift tests (run campaigns in some markets, hold others as controls), or synthetic control methods.
What it tells you: Whether a channel’s conversions are genuinely net new or would have occurred without the campaign.
Why it matters: Attribution tells you which channel touched the user last. Incrementality tells you whether the channel actually caused the acquisition. A channel may claim 10,000 installs through last-touch attribution while only driving 3,000 truly incremental ones. Without incrementality testing, your cost metrics are inflated by users who weren’t actually acquired by the channel claiming credit.
When to test: Run incrementality tests on your highest-spend channels first. If Meta represents 60% of your budget, a 5% holdout test can reveal whether you’re paying for users who would have found you organically.
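A minimal sketch of the holdout arithmetic, with illustrative counts (a real test would also need a significance check and a properly randomized holdout):

```python
# Sketch: incremental lift from a simple holdout test. All counts are illustrative.

exposed_users, exposed_installs = 1_000_000, 12_000
holdout_users, holdout_installs = 50_000, 450  # ads suppressed for this random subset

exposed_rate = exposed_installs / exposed_users   # install rate with ads: 1.2%
baseline_rate = holdout_installs / holdout_users  # install rate without ads: 0.9%
incremental = (exposed_rate - baseline_rate) * exposed_users

print(f"lift: {exposed_rate / baseline_rate - 1:.0%}")    # 33%
print(f"truly incremental installs: ~{incremental:,.0f}")  # ~3,000
```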
Building a Measurement Stack That Works
Tracking all of these user acquisition metrics for mobile apps simultaneously creates noise. The framework becomes actionable when you organize it into a decision stack.
Daily operating metrics: CPI by source and creative. Spend pacing. CTR and conversion rates. These are the metrics your media buyers live in and use for real-time campaign management. CPI is the right metric at this level — it’s fast, actionable, and directly tied to the levers a buyer controls.
Weekly quality checks: D7 retention by source. Activation rate by source. Post-install event completion by source. Cost per retained user by source. These metrics tell you whether your daily CPI optimization is producing users worth keeping. If CPI is trending down but retention is also trending down, the efficiency gain is illusory.
Monthly strategic metrics: D30 retention and cohort LTV by source. ROAS by channel at D30. LTV-to-CAC ratio by channel. These are the metrics that inform channel strategy, budget allocation across the portfolio, and decisions about scaling or cutting channels.
Quarterly deep dives: D60 and D90 cohort LTV by source. ROAS at D90. Incrementality test results. These metrics validate whether your monthly decisions held up over time and whether your LTV predictions were accurate.
The operating principle: CPI drives daily campaign decisions. Quality and value metrics drive weekly and monthly budget allocation. The two levels work together — CPI manages the pace, quality metrics steer the direction.
What Changes When You Measure the Full Stack
When user acquisition metrics for mobile apps are organized as a full stack rather than a single number, three things change.
Budget allocation becomes more precise. When you can see CPI, retention, and ROAS by source in one view, the allocation decisions become clearer. A channel delivering $3 CPI with strong D30 ROAS gets scaled. A channel delivering $2 CPI with weak ROAS gets investigated. The CPI difference isn’t the signal — the ROAS difference is. But CPI helps you understand the input cost that’s producing that ROAS.
The definition of “scale” expands. Volume-first teams define scale as install count. Full-stack teams define scale as the number of profitable users acquired per period. This reframing unlocks channels that produce fewer installs but stronger ROAS. Value Exchange Media, for example, may deliver fewer total installs than paid social but produce a higher ROAS, because the opt-in mechanic filters for intent before the install. On a profitability-adjusted basis, it’s often the more scalable channel.
Conversations with finance and leadership improve. When you report ROAS and LTV-to-CAC ratios alongside CPI and install counts, the narrative shifts from “we need more budget to hit our install target” to “every dollar we spend returns $X in user lifetime value, and here’s which channels produce the strongest return.” That’s a conversation a CFO wants to have.
Why Measurement and Acquisition Model Are Connected
The metrics you track are shaped by the acquisition model you use.
In a CPI campaign model, the install is the primary optimization event. Quality assessment happens downstream through retention analysis, LTV modeling, and ROAS calculation. The media buyer manages CPI and creative performance daily. The analytics team evaluates cohort quality weekly and monthly. The two functions work together to connect cost to value.
In an outcome-based model like Value Exchange Media, the post-install action is the primary event. The user doesn’t just install. They complete a registration, a tutorial, a purchase, or another verified action. The pricing is tied to that outcome. This means the cost metric and the quality metric are the same number: you know what you paid, and you know the user did something meaningful, because you only paid when they did.
Both models can produce high quality users. The difference is in the measurement path. CPI-based models require the advertiser to build the quality layer themselves, connecting CPI to retention to LTV to ROAS through their own analytics. Outcome-based models build quality into the cost structure, which simplifies the measurement but may offer different scale dynamics depending on the vertical and campaign objectives.
The strongest UA programs use both. CPI-optimized campaigns for scale and creative testing. Outcome-based channels for incremental, quality-verified users. The measurement stack described in this article is what connects the two into a unified view of acquisition performance.
AdAction powers the Value Exchange Media infrastructure for enterprise advertisers, publishers, and platforms. Qualume™ connects brands with high-intent users across global publisher apps, delivering verified outcomes with full attribution transparency.
Glossary
User Acquisition Metrics: The quantitative measures used to evaluate how effectively a mobile app’s growth program attracts, activates, retains, and monetizes new users. Includes cost metrics, quality metrics, value metrics, and efficiency metrics.
CPI (Cost Per Install): Ad spend divided by installs. The foundational UA metric for campaign-level optimization. Most powerful when paired with LTV models to estimate ROAS.
CPA (Cost Per Action): Ad spend divided by users completing a defined post-install action. Valuable when campaigns are optimized toward specific downstream events. Common in fintech, subscriptions, and e-commerce.
CAC (Customer Acquisition Cost): Fully loaded acquisition costs (media + creative + tools + salaries) divided by new paying users. The true denominator for LTV-to-CAC calculation.
Activation Rate: Percentage of installers who complete a first meaningful in-app action. The earliest measurable quality signal.
D1 / D7 / D30 / D90 Retention: Percentage of users still active at each milestone after install. The primary indicator of user quality and the strongest predictor of LTV.
Post-Install Event Completion Rate: Percentage of users reaching specific engagement milestones (registration, purchase, subscription, tutorial completion). Bridges engagement and monetization.
Cohort LTV (Lifetime Value): Total revenue from a user cohort divided by cohort size. Must be segmented by acquisition source and date to be actionable.
ROAS (Return on Ad Spend): Revenue attributed to a campaign divided by that campaign’s ad spend. The most direct profitability measure. Where CPI, retention, and LTV converge into a single signal.
LTV-to-CAC Ratio: Lifetime value divided by customer acquisition cost. The north star metric. Minimum 3:1 for sustainable growth; top performers achieve 4:1+.
Cost Per Retained User: Acquisition spend divided by users still active at D30 or D90. Bridges the gap between CPI (what you paid) and retention (what you got). Helps explain why campaigns with similar CPIs produce different ROAS.
Incrementality: Whether a channel’s conversions are net new or would have occurred without the campaign. Measured through holdout groups, geo-based lift tests, or synthetic controls.
Blended CAC: Total acquisition costs across all channels divided by all new users. Useful for board reporting but masks channel-level performance differences.
CPE (Cost Per Engagement): Cost when a user completes a defined engagement milestone. Common in Value Exchange Media campaigns.
Value Exchange Media: An acquisition model where users opt in to engage with brands in exchange for rewards. Can operate on CPI or outcome-based pricing, with the opt-in mechanic providing an intent filter regardless of cost model.
MMP (Mobile Measurement Partner): Third-party attribution platform (AppsFlyer, Adjust, Branch) that tracks which channels and campaigns drive installs and post-install events.
ATT (App Tracking Transparency): Apple’s iOS privacy framework requiring opt-in before cross-app tracking. Reduced deterministic attribution on iOS, requiring teams to combine SKAdNetwork data with modeled approaches.
SKAdNetwork (SKAN): Apple’s privacy-preserving attribution framework for iOS. Provides aggregated, delayed conversion data to advertisers without exposing user-level information.
ARPDAU (Average Revenue Per Daily Active User): Revenue per active user per day. Used to compare monetization efficiency across acquisition sources and user segments.