How to Build a Secure Monthly App Store Review Aggregator on Firebase

A practical guide for public-sector and regulated organizations that need a private, auditable way to aggregate app store reviews and share them with leadership and product teams, all without tracking users, linking reviews to customer accounts, or running AI in production.


What This System Does

The system is a monthly App Store Review & Feedback Aggregator. It runs automatically once per month on the 5th (America/New_York), pulls reviews from the Apple App Store (via the App Store Connect API) and the Google Play Store (via the Android Publisher API), normalizes them into a single schema, and stores the data securely. It then generates a monthly summary report and hosts a dashboard where approved users can view metrics, explore reviews, and download reports. Optionally, it can email a link to the report to a fixed distribution list. The site is private, authenticated, and not indexed by search engines. AI is not part of the runtime; it is only a development accelerator, not a feature.

Explicit scope: The system does not track users, join reviews to customer accounts, perform sentiment targeting, or use AI in production. It aggregates, stores, and reports; nothing more. That bounded scope makes it easier to describe to security and compliance teams and to audit.


Architecture: How It’s Set Up

Hosting. The dashboard UI is served by Firebase Hosting. Only authenticated users with the right role can access meaningful content.

Data. Firestore holds normalized reviews, monthly report metadata, user roles (via custom claims), and the email distribution list. Cloud Storage holds generated artifacts: CSV and PDF reports. Raw review data stays in your project; nothing is sent to third-party analytics.
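The guide does not prescribe an exact document shape, but a minimal sketch of the normalized review schema helps make the single-schema idea concrete. The field names below are illustrative, and the raw Play review shape is a simplified approximation of what the Android Publisher API returns:

```typescript
// Illustrative normalized review document (field names are not prescribed
// by this guide; pick names that fit your conventions).
interface NormalizedReview {
  id: string;              // store-scoped review ID
  store: "ios" | "android";
  appId: string;
  rating: number;          // 1-5 stars
  title: string | null;    // Apple reviews have titles; Play reviews do not
  body: string;
  locale: string;          // e.g. "en-US"
  submittedAt: string;     // ISO 8601, normalized to UTC
}

// Example: mapping a simplified raw Play review into the shared shape.
function normalizePlayReview(
  raw: {
    reviewId: string;
    comments: {
      userComment: {
        text: string;
        starRating: number;
        reviewerLanguage: string;
        lastModified: { seconds: string };
      };
    }[];
  },
  appId: string,
): NormalizedReview {
  const c = raw.comments[0].userComment;
  return {
    id: raw.reviewId,
    store: "android",
    appId,
    rating: c.starRating,
    title: null,
    body: c.text,
    locale: c.reviewerLanguage,
    submittedAt: new Date(Number(c.lastModified.seconds) * 1000).toISOString(),
  };
}
```

A matching normalizer for App Store Connect responses would produce the same shape, which is what lets the dashboard and reports treat both stores uniformly.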

Workloads. Cloud Functions (2nd gen) handle scheduled ingestion, report generation, and optional email sending. Cloud Scheduler triggers the monthly job on the 5th of every month in America/New_York. The job pulls reviews for the previous calendar month, so January 5th processes December’s reviews. Using 2nd gen Functions gives you better control over timeouts and scaling, which matters when calling external APIs like App Store Connect and the Android Publisher API.


Authentication and Authorization

Authentication. Users sign in with Firebase Authentication using Email/Password. Work email addresses are used; there is no Google Workspace dependency and no public signup.

User provisioning. Admins create users directly. Users receive a password-set or reset email. Passwords are never stored by the app; Firebase Auth handles them. There is no self-registration.

Roles. Two roles: admin and viewer. Roles are enforced via Firebase custom claims, not client-side logic. The backend and security rules rely on these claims, so role changes are consistent and auditable.
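On the backend, the role check reduces to reading the claim off the verified ID token. The sketch below assumes a custom claim named `role` (the claim name is a choice, not a requirement); the Admin SDK call that sets it is shown as a comment:

```typescript
// An admin provisioning flow would set the claim server-side, e.g.:
//   await admin.auth().setCustomUserClaims(uid, { role: "viewer" });
// Clients can never set this themselves.

type Role = "admin" | "viewer";

// Check a decoded (already verified) ID token against a required role.
// Admins are a superset of viewers.
function hasRole(decodedToken: { role?: string }, required: Role): boolean {
  if (required === "viewer") {
    return decodedToken.role === "viewer" || decodedToken.role === "admin";
  }
  return decodedToken.role === "admin";
}
```

Because the claim travels inside the signed token, the same check works identically in Cloud Functions and (via `request.auth.token`) in security rules.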


Secrets and Key Management

Sensitive keys do not live in Firestore, Hosting config, or client code. They are stored in Google Secret Manager:

  • Apple App Store Connect: The .p8 private key used to sign JWTs for the App Store Connect API.
  • Google Play: Service account JSON (only if you use the Android Publisher API).
  • Email: API key or credentials for the email provider, if you send report links.

JWT handling. Apple JWTs are generated on demand inside the scheduled Cloud Function. They are short-lived, created at runtime when calling Apple's API, and never persisted anywhere.
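A minimal sketch of that on-demand generation, using only Node's built-in `crypto` module (the helper name is ours; the header and claim fields follow Apple's documented token format, including the `appstoreconnect-v1` audience and the 20-minute expiry ceiling):

```typescript
import { createPrivateKey, generateKeyPairSync, sign } from "node:crypto";
// generateKeyPairSync is imported only so local sanity checks can mint a
// throwaway P-256 key; production code loads the .p8 from Secret Manager.

// Build a short-lived App Store Connect JWT at call time. Nothing is
// cached or written to disk.
function makeAppleJwt(p8Key: string, keyId: string, issuerId: string): string {
  const b64url = (buf: Buffer) => buf.toString("base64url");
  const now = Math.floor(Date.now() / 1000);
  const header = b64url(
    Buffer.from(JSON.stringify({ alg: "ES256", kid: keyId, typ: "JWT" })),
  );
  const payload = b64url(
    Buffer.from(
      JSON.stringify({
        iss: issuerId,
        iat: now,
        exp: now + 10 * 60, // well under Apple's 20-minute maximum
        aud: "appstoreconnect-v1",
      }),
    ),
  );
  const signingInput = `${header}.${payload}`;
  const signature = sign("sha256", Buffer.from(signingInput), {
    key: createPrivateKey(p8Key),
    dsaEncoding: "ieee-p1363", // JOSE ES256 signature encoding, not DER
  });
  return `${signingInput}.${b64url(signature)}`;
}
```

The `ieee-p1363` signature encoding matters: DER-encoded ECDSA signatures (Node's default) are rejected by JOSE-compliant verifiers.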


Security Posture: A Checklist

  • Firestore rules: Default deny. Read access only for authenticated users with an approved role. No client writes to review data; only backend functions write.
  • Cloud Storage rules: Only approved users can download reports; no public access.
  • App Check: Enabled so only your legitimate clients can call your backend.
  • Audit logging: In place for user creation/disable, monthly job execution, and report generation.
  • Non-indexing: robots.txt disallows all; pages use noindex, nofollow (meta or headers). All meaningful content sits behind auth.
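The first two checklist items translate into a small Firestore rules file. Collection names (`reviews`, `reports`) and the `role` claim below are illustrative; the structure is what matters: rules default to deny, reads require an approved role, and client writes are refused everywhere (the backend writes through the Admin SDK, which bypasses security rules):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    function hasApprovedRole() {
      return request.auth != null
        && request.auth.token.role in ['admin', 'viewer'];
    }

    match /reviews/{reviewId} {
      allow read: if hasApprovedRole();
      allow write: if false;   // backend-only via Admin SDK
    }

    match /reports/{reportId} {
      allow read: if hasApprovedRole();
      allow write: if false;
    }
    // Anything not matched above is denied by default.
  }
}
```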

Together, this gives you a clear security posture and an audit trail for access and job runs. For regulated organizations, being able to answer “who saw what, and when did the job run?” is essential.


How Users Interact With the Dashboard

Monthly report view. Users select a month and see high-level metrics: total reviews, average rating, 1-star review count, and changes versus the prior month. They can download CSV or PDF for that month.

Review explorer. Users filter by app, store (iOS or Android), rating, locale, and keyword to drill into specific feedback. No PII is exposed; the focus is on review text and metadata.
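Store, rating, and locale map naturally onto Firestore queries, but keyword search over review text typically happens client-side on the already-fetched page of results (Firestore has no substring queries). A sketch of that refinement step, with field names mirroring the normalized schema above and an interface name of our choosing:

```typescript
interface ReviewFilter {
  store?: "ios" | "android";
  minRating?: number;
  maxRating?: number;
  locale?: string;
  keyword?: string; // case-insensitive substring match on the review body
}

// Refine a page of already-fetched normalized reviews. Undefined filter
// fields are treated as "match everything".
function filterReviews<
  T extends { store: string; rating: number; locale: string; body: string },
>(reviews: T[], f: ReviewFilter): T[] {
  return reviews.filter(
    (r) =>
      (f.store === undefined || r.store === f.store) &&
      (f.minRating === undefined || r.rating >= f.minRating) &&
      (f.maxRating === undefined || r.rating <= f.maxRating) &&
      (f.locale === undefined || r.locale === f.locale) &&
      (f.keyword === undefined ||
        r.body.toLowerCase().includes(f.keyword.toLowerCase())),
  );
}
```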

Admin settings. Admins manage users (create, disable) and the email distribution list. They can view last job run status to confirm the monthly pipeline completed.


Email Behavior

On the 5th, the system can email a link to the monthly report. The link requires login; it does not expose the report to the open internet. Attachments are optional but discouraged for security and consistency. The distribution list is admin-managed in Firestore, so only designated recipients receive the link.


How Teams Use It in Practice

  • Monthly leadership review. Leadership gets a single dashboard and report: one place to see volume, ratings, and 1-star trends without logging into Apple or Google.
  • Release regression detection. After a release, product and engineering can compare the current month’s metrics and 1-star count to the previous month to spot regressions.
  • Support triage. Support teams use the review explorer to filter by rating or keyword and prioritize issues; this is manual triage, not automated sentiment targeting.
  • Tracking sentiment after major releases. Teams can observe how ratings and review tone change after a big launch—again, by reviewing the data in the dashboard, not by automated sentiment in production.

These workflows stay human-in-the-loop: the system surfaces the data; people make the decisions.


Why This Approach Works for Regulated Environments

This design keeps operational risk low: no runtime AI, no user tracking, no joining reviews to customer identities. Access is role-based and authenticated; secrets live in Secret Manager; and audit logging covers user and job activity. The schedule is predictable (5th of the month, previous month’s data), and the scope is bounded—aggregation, storage, reporting, and secure access only.

If you’re building internal tools in regulated environments, this pattern is reusable: same principles of least privilege, secret management, and auditability apply to other internal dashboards and report pipelines.