How we score every app and channel, and how we turn 8M+ live snapshots into a revenue forecast. No black box, no hand-waving.
Last updated 2026-05-13.

Every public metric on bumetric comes from one of four sources, all clearly labelled in the snapshot history of each app:
| Source | What we read | How often |
|---|---|---|
| iTunes Lookup API | iOS app metadata, ratings, icon, screenshots, current price | On-view, then weekly |
| Google Play Store | Android app metadata, ratings, install band, developer info | On-view, then weekly |
| YouTube Data API | Channel subscriber count, view count, video counts, category | On-view, then weekly |
| App Store Connect / Stripe Connect | Real revenue numbers, supplied by app owners who claim their listing | Pulled the moment an owner verifies |
We do not scrape App Store Connect (it would violate Apple's terms). We do not buy mobile attribution data. The "estimate" you see on any page is fully derived from public signals plus our calibration set.
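As a concrete illustration of the first row of the table, here is a minimal sketch of reading the public iTunes Lookup API. The field extraction uses Apple's documented response keys (`trackName`, `averageUserRating`, `userRatingCount`, `price`, `artworkUrl100`); the `parse_snapshot` helper and its output shape are our own illustration, not bumetric's actual code.

```python
from urllib.parse import urlencode

ITUNES_LOOKUP = "https://itunes.apple.com/lookup"

def lookup_url(app_id: str, country: str = "us") -> str:
    """Build the iTunes Lookup API URL for a single app ID."""
    return f"{ITUNES_LOOKUP}?{urlencode({'id': app_id, 'country': country})}"

def parse_snapshot(payload: dict) -> dict:
    """Pull the fields listed in the sources table out of a Lookup response."""
    app = payload["results"][0]
    return {
        "name": app.get("trackName"),
        "rating": app.get("averageUserRating"),
        "ratings_count": app.get("userRatingCount"),
        "price": app.get("price"),
        "icon": app.get("artworkUrl100"),
    }
```

Fetching `lookup_url(...)` with any HTTP client returns JSON whose `results` array feeds straight into `parse_snapshot`.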
The BU Score on every app page is a single 0–100 number summarising overall health. It is composed of three weighted components:
- **rating × 10** — an app rated 4.7 stars earns 47 points here.
- **log10(ratings_count) × 5** — rewards apps with a large rated audience but discounts the marginal gain past ~1M ratings (so a 10M-rating app does not outscore a 1M-rating one by 10×).

The final score is clipped to 100. The tier label ("Transcendent", "Excellent", "Strong", "Solid", "Emerging", "Niche") is a lookup based on score band.
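The two published components can be sketched directly. Note the page does not publish the score cut-offs for each tier label, so the bands in `TIERS` below are hypothetical placeholders, not bumetric's actual thresholds.

```python
from math import log10

# Hypothetical cut-offs for illustration only; the real bands are not published.
TIERS = [
    (90, "Transcendent"), (75, "Excellent"), (60, "Strong"),
    (45, "Solid"), (30, "Emerging"), (0, "Niche"),
]

def bu_score(rating: float, ratings_count: int) -> float:
    """Combine the two documented components and clip at 100."""
    score = rating * 10
    if ratings_count > 0:
        score += log10(ratings_count) * 5
    return min(score, 100.0)

def tier(score: float) -> str:
    """Look up the tier label for a score band."""
    for floor, label in TIERS:
        if score >= floor:
            return label
    return "Niche"
```

For example, a 4.7-star app with 1M ratings scores 47 + 30 = 77 points from these two components.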
Each app page shows a forecast monthly revenue. It is produced in two steps:
The forecast does not use an LLM. Gemini powers the ASO rewrites and growth recommendations elsewhere on the site, but the revenue figure is a deterministic calculation.
Without ground-truth data, any app revenue estimate is just an opinion. We solve that with the verified anchor set:
This is the closest thing to a defensible moat on this site. Anyone can copy a scraper. Nobody can copy a curated verified-MRR set without doing the slow work themselves.
We label every forecast with a confidence band derived from how many anchors are available in the same niche:
| Anchors in category | Typical accuracy | Confidence |
|---|---|---|
| 10 or more | ±15% | High |
| 3 to 9 | ±25% | Medium |
| 1 to 2 | ±40% | Low |
| 0 | category median fallback | Indicative only |
An app with the "Low" confidence label is still useful for ranking and trend, but the dollar figure should be read with healthy skepticism.
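The confidence table maps directly to a lookup on anchor count. A minimal sketch, assuming the function name and return shape (the thresholds come straight from the table above):

```python
def confidence(anchor_count: int) -> tuple[str, str]:
    """Map the number of same-niche anchors to a confidence band and typical accuracy."""
    if anchor_count >= 10:
        return ("High", "±15%")
    if anchor_count >= 3:
        return ("Medium", "±25%")
    if anchor_count >= 1:
        return ("Low", "±40%")
    return ("Indicative only", "category median fallback")
```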
Every app page is live. When you open /p/{app_id} for an app we haven't scanned recently, our worker fetches new metadata in the background and saves a fresh snapshot before you finish scrolling. The "Last update" timestamp on the page is the moment we actually wrote that snapshot.
Cron jobs additionally re-scan high-traffic apps weekly to keep the snapshot history smooth.
Things our forecast deliberately ignores:
If a number on our site is wrong for an app you own, you have two options:
The reputation of this site depends on our numbers being closer to ground truth than competing scrapers. We take corrections seriously.