
Migrating to Fabric: a 3-day plan for Power BI teams

#fabric #migration #strategy #power-bi

Teams often stare at Microsoft Fabric and get paralyzed by the size of it. "It's a whole new platform," they say. "We need a 6-month migration strategy."

You honestly don't.

Fabric is designed to be adopted incrementally. You can (and should) get your first end-to-end solution running in days, not months. The best way to learn is by shipping something real.

Here's the plan for taking a slice of your existing Power BI analytics and moving it properly into Fabric.

Day 1: foundation and ingestion

Your goal for day 1 is simple: get raw data into OneLake. Don't worry about the report yet. Focus on the plumbing.

Morning: workspace and capacity setup

  1. Create a new workspace: Don't reuse an old one. Call it [Project Name] - Dev.
  2. Assign capacity: You need at least an F2 (or trial) capacity assigned to this workspace. Without Fabric capacity, it's just a Power BI workspace.
  3. Create a lakehouse: Name it lh_raw. This is your landing zone.
  4. Create a second lakehouse: Name it lh_gold. This is where your clean, modelled data will live.

Why two lakehouses? It separates "messy data we just ingested" from "clean data ready for reporting." It's a simplified medallion architecture (bronze/silver -> gold).
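To make that concrete, here's roughly how the two lakehouses show up as OneLake paths that notebooks and shortcuts can address. This is only a sketch: the workspace and table names are placeholders, so copy the real ABFS URL from each lakehouse's Properties pane in the portal.

```python
# Rough shape of the OneLake paths for the two lakehouses (names are placeholders).
# Copy the exact ABFS URL from each lakehouse's Properties pane in the Fabric portal.

WORKSPACE = "ProjectName-Dev"  # workspace name (or its GUID) as shown in the portal

RAW_TABLES = f"abfss://{WORKSPACE}@onelake.dfs.fabric.microsoft.com/lh_raw.Lakehouse/Tables"
GOLD_TABLES = f"abfss://{WORKSPACE}@onelake.dfs.fabric.microsoft.com/lh_gold.Lakehouse/Tables"

# lh_raw  -> whatever you ingest, e.g. f"{RAW_TABLES}/sales"
# lh_gold -> clean star-schema tables, e.g. f"{GOLD_TABLES}/fact_sales", f"{GOLD_TABLES}/dim_customer"
```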

Afternoon: ingestion with Dataflows Gen2

Pick one data source. Just one. Ideally something you already know, like a SQL database or a set of SharePoint files.

  1. Build a Dataflow Gen2: Connect to your source.
  2. Do minimal transformation: Just enough to get the data types right. Don't go crazy with business logic yet.
  3. Set destination: Point it to your lh_raw lakehouse.
  4. Publish and run: Watch the green checkmarks.

If you're stuck on dataflows, check out my Dataflows Gen2 guide.

Day 1 checkpoint: You have data sitting in Delta tables in your lh_raw lakehouse. You can query it with SQL.
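If you want to prove that to yourself from a notebook rather than the SQL endpoint, a quick sanity check might look like the sketch below. It assumes lh_raw is attached as the notebook's default lakehouse, and sales is a placeholder for whatever table your dataflow landed.

```python
# Quick sanity check in a Fabric notebook (lh_raw attached as the default lakehouse).
# "sales" is a placeholder for the table your dataflow landed.

df = spark.read.table("sales")   # Delta table written by the Dataflow Gen2
print(df.count())                # does the row count look sane?
df.printSchema()                 # did the data types survive?

# Same check in SQL, if that's more your speed
display(spark.sql("SELECT * FROM sales LIMIT 10"))
```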

Day 2: transformation and modeling

Now we turn raw data into a proper semantic model. This is where the "Fabric magic" happens: specifically, Direct Lake mode.

Morning: dataflows or notebooks

You need to move data from lh_raw to lh_gold. You have two choices:

  • Option A (low code): Use another Dataflow Gen2 to read from lh_raw, apply business logic (renaming, merging, calculating), and write to lh_gold.
  • Option B (code): Use a Spark notebook. Read from lh_raw, do your transformations in PySpark or SQL, and write to lh_gold.

If you're comfortable with Python or SQL, use notebooks: they're faster and cheaper at scale. If you're strictly a Power BI developer, dataflows are fine for starting out.
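For reference, the notebook route can be as small as the sketch below. It assumes both lakehouses live in the same workspace (so lh_raw is addressable by name), lh_gold is the notebook's default lakehouse, and the table and column names are placeholders for your own data.

```python
# Option B sketch: raw -> gold in a Fabric notebook.
# Assumes lh_gold is the default lakehouse and lh_raw sits in the same workspace;
# table and column names below are placeholders.
from pyspark.sql import functions as F

# 1. Read the raw Delta table landed on day 1
raw = spark.read.table("lh_raw.sales")

# 2. Business logic: rename, fix types, derive columns, dedupe
fact_sales = (
    raw
    .withColumnRenamed("cust_id", "customer_id")
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("net_amount", F.col("quantity") * F.col("unit_price"))
    .dropDuplicates(["order_id"])
)

# 3. Write the clean table into lh_gold (the default lakehouse)
(
    fact_sales.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("fact_sales")
)
```

Dimension tables follow the same pattern. Keep the notebook boring and idempotent (overwrite or merge) so re-runs are safe.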

See my post on Spark optimization if you go the notebook route.

Afternoon: the SQL endpoint

Open your lh_gold lakehouse. Switch to the SQL analytics endpoint view.

  1. Define relationships: You can actually define primary/foreign keys here. They aren't enforced, but they help the semantic model.
  2. Write a test query: Make sure your data looks right. SELECT TOP 100 * FROM my_table.

Day 2 checkpoint: You have clean, star-schema-friendly tables in lh_gold.

Day 3: serving and consumption

This is the payoff. We're going to build a report that connects directly to OneLake without importing data.

Morning: the Direct Lake semantic model

  1. From your lh_gold lakehouse, click "New semantic model".
  2. Select the tables you want (Fact and Dimensions).
  3. Define relationships: Drag and drop just like in Power BI Desktop.
  4. Write DAX: Yes, you still write DAX. Create your Sum(Sales) and YTD measures here.

This model is in Direct Lake mode. It's not DirectQuery (slow), and it's not Import (stale). It reads the Delta files directly from OneLake. It is blazingly fast.

Afternoon: the report

  1. Open Power BI Desktop.
  2. Connect to the OneLake data hub.
  3. Select your new semantic model.
  4. Build the report.

Notice something? It feels exactly like building a normal Power BI report. But there is no refresh schedule to manage for the semantic model. When the lakehouse updates, the report updates.

Evening: deployment

Publish your report to the workspace. Share it with a few friendly users.

Day 3 checkpoint: A live Power BI report running on Fabric data, with no import refresh schedule.

What's next?

You just built a modern data platform solution in 3 days.

  • Day 4: Add a pipeline to orchestrate the refresh of the dataflows/notebooks.
  • Day 5: Add row-level security (RLS) to the semantic model.
  • Day 6: Start your second subject area.

Don't overthink the migration. Pick a slice, move it, learn, and repeat.

Written by Yari Bouwman, Data Engineer and Solution Designer specializing in scalable data platforms and modern cloud solutions.
