The real problem: compute costs do not sleep
Fabric’s pricing model is capacity based. That means compute is your biggest lever for cost control.
A typical enterprise pattern looks like this:
- heavy data engineering pipelines run overnight
- transformation workloads finish by morning
- business users only need read access during the day
Yet many teams keep the engineering capacity running 24/7. So the obvious question is: if we pause a capacity, does access to the lakehouse disappear too?
Interestingly, the answer is not always yes.
I recently watched a demo shared by Heidi Hasting, credited to fellow MVP Martin Catherall, that highlighted something many Fabric architects overlook. It is not a hidden feature. It is simply how the platform is designed to work.
Here is the practical takeaway.
If you create a lakehouse in one workspace backed by one capacity, and then expose that data to another workspace using Fabric shortcuts, the compute used to query that data comes from the workspace doing the querying. Not from the workspace where the data was originally processed. That distinction matters.
It means you can run heavy engineering workloads overnight, finish processing, and then pause that engineering capacity to save cost. As long as another workspace with an active capacity references the lakehouse through shortcuts, users can still query the data. The original workspace may look unavailable, but the data itself is not locked away.
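Pausing the engineering capacity is itself just an Azure Resource Manager call: the `Microsoft.Fabric/capacities` resource exposes `suspend` and `resume` actions. The sketch below only builds the request URL; the subscription, resource group, and capacity names are placeholders, the API version is my assumption, and in practice you would POST to this URL with a valid Azure AD bearer token (which is not shown here):

```python
# Sketch: build the ARM "suspend"/"resume" call for a Fabric capacity.
# All resource names are placeholders; authentication is assumed and not shown.

ARM_BASE = "https://management.azure.com"
API_VERSION = "2023-11-01"  # assumption: a GA API version for Microsoft.Fabric

def capacity_action_url(subscription_id: str, resource_group: str,
                        capacity_name: str, action: str) -> str:
    """Build the management URL for a capacity action ('suspend' or 'resume')."""
    if action not in ("suspend", "resume"):
        raise ValueError(f"unsupported action: {action}")
    return (
        f"{ARM_BASE}/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Fabric/capacities/{capacity_name}"
        f"/{action}?api-version={API_VERSION}"
    )

# Example: the overnight pipelines have finished, so pause the engineering capacity.
url = capacity_action_url("my-sub-id", "my-rg", "engineeringcap", "suspend")
print(url)
```

The same URL with `resume` brings the capacity back before the next processing window.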
What actually happens under the hood?
Fabric separates storage from compute orchestration more than many people realise.
If you:
- build your lakehouse in workspace A (capacity A)
- create a shortcut to that lakehouse from workspace B (capacity B)
- pause capacity A
then:
- workspace A cannot query the lakehouse anymore
- but workspace B can still read the data
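For reference, the shortcut in step two can also be created programmatically through the Fabric REST API, which exposes a create-shortcut endpoint on the consuming lakehouse. The sketch below only assembles the request; all IDs are placeholders, and the exact payload shape (here, an internal `oneLake` target) should be checked against the current API documentation:

```python
import json

# Sketch of the Fabric REST API request that creates a OneLake shortcut in
# workspace B's lakehouse pointing at a table in workspace A's lakehouse.
# IDs are placeholders; the payload shape is my reading of the public API.

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_shortcut_request(consumer_workspace_id: str, consumer_lakehouse_id: str,
                           source_workspace_id: str, source_lakehouse_id: str,
                           table_path: str, shortcut_name: str):
    """Return (url, body) for a POST that creates the cross-workspace shortcut."""
    url = (f"{FABRIC_API}/workspaces/{consumer_workspace_id}"
           f"/items/{consumer_lakehouse_id}/shortcuts")
    body = {
        "path": "Tables",        # where the shortcut appears in lakehouse B
        "name": shortcut_name,
        "target": {
            "oneLake": {         # internal OneLake target: lakehouse A
                "workspaceId": source_workspace_id,
                "itemId": source_lakehouse_id,
                "path": table_path,  # e.g. "Tables/sales"
            }
        },
    }
    return url, body

url, body = build_shortcut_request("ws-b-id", "lakehouse-b-id",
                                   "ws-a-id", "lakehouse-a-id",
                                   "Tables/sales", "sales")
print(url)
print(json.dumps(body, indent=2))
```

Note that nothing in the request mentions capacity A: the shortcut binds to the storage, not to the compute that produced it.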
But why, and how?
Because the compute used to query the data comes from the workspace where the query originates, not from the workspace where the data was created. This is subtle, but powerful. The shortcut is not just a pointer: it effectively allows another capacity to execute compute over the same underlying storage.
From an architectural lens, this reinforces something Fabric quietly promotes: storage persistence does not depend on continuous compute. Compute is just the engine that wakes up when a workspace needs it.
In real environments, this opens up a few useful patterns.
For example, consider the two scenarios below.
Scenario 1: Many enterprises separate engineering from consumption anyway. Pipelines run in one workspace, while reporting lives in another. With shortcuts in place, you can deliberately size the engineering capacity for burst workloads and then pause it when it is idle. Meanwhile, a smaller reporting capacity continues serving analysts and Power BI models without disruption.
Scenario 2: Scheduled compute windows. If your ingestion and transformation run only a few hours a day, there is little reason to keep that expensive capacity alive the rest of the time. Instead of treating Fabric compute as always-on infrastructure, you can start thinking of it as something closer to batch compute in modern cloud architectures.
Of course, this is not a free lunch. The workload does not vanish; it shifts. If the reporting workspace is undersized, queries may slow down because that is now where compute happens. Governance also becomes more important. When shortcuts span workspaces, lineage, ownership, and lifecycle decisions must be explicit. Otherwise you end up with invisible dependencies that surprise teams later.
Still, the broader lesson here is not just about saving money. It is about understanding how Fabric wants architects to think. Workspaces are logical boundaries, capacities are compute pools, and shortcuts are connectors. Once you internalise that model, you stop designing Fabric like a single monolithic system and start treating it as a modular data platform.
And in my experience, that mindset shift is what usually separates a well optimised Fabric environment from one that quietly burns budget every month.
Demo:
Finally, I tried to reproduce the demo and see how this works in practice. Below are a few screenshots, with a description of the action under each image.

Creating a new shortcut

Compute paused.

The lakehouse in the source workspace no longer loads, as expected.

But the shortcut table is still active and can be queried from the other workspace.
Note: As you may know, my main blog site is currently down due to account issues with no clear resolution timeline. Most of my latest posts went down with it, and I am working to recover them. For now, I will be posting here on this older blog until things are sorted out.