
The Problem: Scaling Broke Our Data Infrastructure
When our company secured funding and scaled, our BI infrastructure collapsed under pressure.
I was the only analyst at the time, and for six months, I was stuck in a reactive loop, running manual SQL queries all day, unable to build anything sustainable. Every day felt like a firefight:
- Postgres couldn’t handle the load—queries took minutes to hours.
- Looker was unreliable—dashboards were slow and full of inconsistencies.
- Investor reporting skyrocketed—funding meant endless ad-hoc data requests.
- Too many conflicting KPIs—every team had a different definition of "active users."
This was unsustainable. We needed to pause, rebuild, and scale properly.
📌 Case Study Overview
What This Case Study Covers
This case study details how our team migrated our BI infrastructure from Postgres to BigQuery, rebuilt Looker from scratch, and created a scalable, self-serve analytics system—all while managing an overwhelming influx of data requests.
Who Was Involved?
- 1 BI Engineer – Led the BigQuery migration.
- 3 Analysts (including me) – Rebuilt Looker, designed Explores & dashboards.
- 1 Data PM – Aligned stakeholders & drove BI roadmap.
Key Outcomes Achieved
- 10x faster queries by migrating from Postgres to BigQuery.
- Looker adoption jumped to 40%, enabling teams to self-serve insights.
- Replaced 60+ redundant dashboards with 10 core dashboards as a single source of truth.
- Migrated 30+ BI models & 30+ Explores, reducing ad-hoc SQL requests.
- Standardized KPIs—no more conflicting definitions of "active users" or "retention."
⏸️ Step 1: Hitting Pause & Stakeholder Alignment (Month 1)
We couldn’t fix everything while reacting to daily chaos, so we hit pause to rebuild our BI strategy from the ground up.
What We Did:
- Paused all non-critical data requests to break the reactive loop.
- Hired 1 BI Engineer & 2 Analysts to scale capacity.
- Ran stakeholder interviews to identify core data needs (what was actually critical).
- Created a BI Roadmap, prioritizing:
  - Migrating only core data models first (users, feature engagement, daily activity, product areas).
  - Rebuilding Looker from scratch, with no more patchwork.
  - Establishing clear governance to prevent future breakdowns.
Impact:
- Immediate drop in ad-hoc SQL requests.
- A clear roadmap for migration, Looker rebuild, and self-serve analytics.
🏗️ Step 2: Migrating Core Data Models to BigQuery (Month 2)
The biggest bottleneck? Postgres was collapsing under scale.
What We Did:
- Focused on 4 core data models first:
  - Users (identity, metadata).
  - Feature Engagement (what users interact with).
  - Daily Activity (user sessions, time spent).
  - Core Product Areas (feature adoption).
- Built out 5-7 core Explores in Looker.
- Set up interim dashboards for stakeholders (a top-line KPI dashboard, core feature dashboards, etc.).
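To make this concrete, here is a hypothetical sketch of what one of these core models could look like as a date-partitioned BigQuery table. The table and column names are illustrative, not our actual schema:

```sql
-- Hypothetical sketch of the "Daily Activity" core model in BigQuery.
-- All table and column names are illustrative.
CREATE OR REPLACE TABLE analytics.daily_activity
PARTITION BY activity_date
AS
SELECT
  DATE(event_timestamp) AS activity_date,      -- partition key
  user_id,
  COUNT(DISTINCT session_id) AS sessions,      -- user sessions
  SUM(duration_seconds) / 60 AS minutes_spent  -- time spent
FROM raw_events.sessions
GROUP BY activity_date, user_id;
```

Date partitioning is a big part of the speedup: dashboard queries filtered to recent dates scan only the relevant partitions instead of the whole table.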
Impact:
- 10x faster queries in Looker—no more timeouts.
- Immediate trust in dashboards—stakeholders saw reliable numbers for the first time.
🔄 Step 3: Rebuilding Looker from Scratch (Month 3-5)
Once our core models were in BigQuery, we scrapped the old Looker setup and started fresh.
What We Did:
- Rebuilt all Explores & dashboards from scratch—no more pointing Looker to Postgres.
- Reduced the Looker field count, removing duplicate and unused fields.
- Migrated BI models based on urgency—prioritizing stakeholder needs.
- Analysts & BI Engineers split the migration—each person owned specific models.
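As a rough illustration of the trimmed-down approach, a rebuilt view over a migrated BigQuery model might look something like this hypothetical LookML sketch; the view, table, and field names are illustrative, not our actual project:

```lookml
# Hypothetical view: only curated, documented fields survive the rebuild.
view: daily_activity {
  sql_table_name: analytics.daily_activity ;;

  dimension: user_id {
    type: string
    sql: ${TABLE}.user_id ;;
  }

  dimension_group: activity {
    type: time
    timeframes: [date, week, month]
    datatype: date
    sql: ${TABLE}.activity_date ;;
  }

  measure: total_sessions {
    type: sum
    sql: ${TABLE}.sessions ;;
  }

  measure: active_users {
    type: count_distinct
    sql: ${TABLE}.user_id ;;
  }
}
```

Defining measures like `active_users` once in LookML, rather than in each dashboard, is what keeps the metric consistent everywhere it appears.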
Impact:
- Looker adoption hit 30% within six weeks of the rebuild; teams could finally trust dashboards.
- Eliminated 30+ redundant dashboards—no more personal stakeholder versions.
📚 Step 4: Training & Looker Enablement (Month 6)
Even with a clean Looker setup, adoption required training & documentation.
What We Did:
- Trained teams in stages:
  - Explores training (finding & using data).
  - General Looker training (mandatory before advanced training).
  - Dashboards & advanced setup (calculated measures, filtering, etc.).
- Created extensive documentation on:
  - Where to find what they need.
  - Best practices for Looker usage.
  - How to self-serve instead of requesting data.
Impact:
- Stakeholders became self-sufficient—ad-hoc requests dropped.
- PMs & Growth teams could use Looker effectively without BI intervention.
📊 Final Results: BI That Actually Scales
| Metric | Before | After |
| --- | --- | --- |
| Query performance | Minutes to hours on Postgres | ~10x faster on BigQuery |
| Dashboards | 60+ redundant versions | 10 core dashboards (single source of truth) |
| Looker adoption | Low; dashboards distrusted | 40% adoption, self-serve insights |
| KPI definitions | Conflicting across teams | Standardized ("active users," "retention") |
| Ad-hoc SQL requests | A daily firefight | Sharply reduced via self-serve |
🔑 The Real Fix: People, Process & Technology
Process: Clean Looker = Clean Decisions
- Established data contracts to prevent KPI drift.
- PMs had to define KPIs upfront before any new dashboard was built.
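A data contract can be as simple as one canonical SQL definition that every dashboard references instead of redefining. Here is a hypothetical sketch for the "active users" KPI; all names are illustrative:

```sql
-- Hypothetical single source of truth for the "active users" KPI.
-- Dashboards reference this view; no team redefines the metric locally.
CREATE OR REPLACE VIEW analytics.kpi_daily_active_users AS
SELECT
  DATE(event_timestamp) AS activity_date,
  COUNT(DISTINCT user_id) AS daily_active_users
FROM raw_events.sessions
WHERE session_id IS NOT NULL  -- "active" = at least one real session that day
GROUP BY activity_date;
```

Once this view exists, the debate over what "active" means happens once, in code review, instead of repeatedly across dashboards.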
People: Empowering Teams, Not Just Analysts
- No more random SQL requests—teams had to self-serve.
- Looker training & documentation reduced dependency on BI teams.
Technology: Looker, Done Right
- Fewer dashboards, better-designed LookML models.
- BigQuery for speed & scalability—Postgres was killing us.
🏁 Conclusion
This project wasn’t just about fixing a broken BI system. It was about creating a sustainable, scalable data culture. By migrating to BigQuery, rebuilding Looker from scratch, and implementing governance and training, we shifted from reactive firefighting to proactive decision-making.
This transformation allowed:
- Analysts to focus on high-impact work instead of ad-hoc SQL requests.
- PMs and leadership to trust their data and self-serve insights.
- The company to scale without constantly rebuilding its BI infrastructure.
This wasn’t just a technology fix. It was a fundamental shift in how the organization used data.