We connect to your Fabric tenant and scaffold a complete Bronze/Silver/Gold lakehouse architecture — workspaces, lakehouses, PySpark notebooks, pipelines, Spark configs, and a Direct Lake semantic model. AI builds the scaffolding. A Dual Microsoft MVP customizes it for your data.
A production-ready lakehouse architecture, deployed and documented, in a single day.
Separate Bronze/Silver/Gold workspaces with proper RBAC. Engineers write to Bronze/Silver, analysts read from Gold. Governance built in from day one.
Bronze: write-heavy with auto-compaction. Silver: balanced with V-Order. Gold: read-optimized with Optimize Write at a 1 GB bin size. Each layer tuned for its job.
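The per-layer tuning above can be expressed as a small config map. The property names below (optimize write, auto-compact, V-Order) appear in Delta Lake and Microsoft Fabric documentation, but exact keys and units vary by runtime version — treat this as an illustrative sketch, not a drop-in config, and verify each setting against your Fabric runtime first.

```python
# Illustrative per-layer Spark settings for a medallion lakehouse.
# Property names and the binSize unit (assumed bytes here) should be
# verified against your Fabric runtime's documentation before use.
LAYER_CONFIGS = {
    "bronze": {  # write-heavy: compact small files automatically, skip write-time sorting
        "spark.databricks.delta.autoCompact.enabled": "true",
        "spark.databricks.delta.optimizeWrite.enabled": "false",
        "spark.sql.parquet.vorder.enabled": "false",
    },
    "silver": {  # balanced: V-Order for downstream readers, optimized writes
        "spark.databricks.delta.autoCompact.enabled": "true",
        "spark.databricks.delta.optimizeWrite.enabled": "true",
        "spark.sql.parquet.vorder.enabled": "true",
    },
    "gold": {  # read-optimized: Optimize Write with a ~1 GB bin size, V-Order on
        "spark.databricks.delta.optimizeWrite.enabled": "true",
        "spark.databricks.delta.optimizeWrite.binSize": str(1024 * 1024 * 1024),
        "spark.sql.parquet.vorder.enabled": "true",
    },
}

def apply_layer_config(spark, layer: str) -> None:
    """Apply one layer's settings to a live SparkSession inside a notebook."""
    for key, value in LAYER_CONFIGS[layer].items():
        spark.conf.set(key, value)
```

In a Fabric notebook you would call `apply_layer_config(spark, "gold")` at the top of the Gold aggregation notebook; the same settings can alternatively be pinned at the workspace or environment level so every notebook in that layer inherits them.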
Production-ready ingestion (Bronze), transformation (Silver), and aggregation (Gold) notebooks. Metadata columns, dedup logic, schema enforcement, validation gates.
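The Silver-layer steps listed above (metadata columns, dedup, schema enforcement) follow a simple pattern. The actual notebooks use PySpark DataFrames; the plain-Python sketch below just illustrates the logic, and the column names `_ingested_at`, `_source`, `updated_at`, and the business key `order_id` are hypothetical placeholders, not the delivered schema.

```python
from datetime import datetime, timezone

def to_silver(rows, business_key="order_id", source="files"):
    """Sketch of Silver-layer logic: stamp metadata columns, gate rows
    missing the business key (schema enforcement), and deduplicate by
    keeping the latest record per key."""
    stamp = datetime.now(timezone.utc).isoformat()
    latest = {}
    for row in rows:
        if business_key not in row:
            # Schema-enforcement gate: in practice, route to a quarantine table
            continue
        enriched = {**row, "_ingested_at": stamp, "_source": source}
        key = enriched[business_key]
        prev = latest.get(key)
        # Dedup rule: keep the most recently updated version of each key
        if prev is None or enriched.get("updated_at", "") >= prev.get("updated_at", ""):
            latest[key] = enriched
    return list(latest.values())
```

In PySpark the same pattern becomes `withColumn` calls for the metadata, a schema check or quarantine filter, and a window-function dedup over the business key.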
Bronze → Silver → Gold sequential pipeline with date parameters, retry logic, and scheduling. Ready for daily runs.
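In a Fabric pipeline, retry count and interval are configured on each activity rather than in code; the hedged Python sketch below (with a hypothetical `activity` callable) only illustrates the retry-with-date-parameter semantics the pipeline implements.

```python
import time

def run_with_retry(activity, run_date: str, retries: int = 3, wait_seconds: float = 30.0):
    """Illustrative retry semantics for one pipeline stage: invoke the
    activity with the date parameter, retrying on failure with a fixed
    wait between attempts. In Fabric this is an activity setting, not code."""
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            return activity(run_date)
        except Exception as exc:
            last_error = exc
            if attempt < retries:
                time.sleep(wait_seconds)
    raise RuntimeError(f"activity failed after {retries} attempts") from last_error
```

The sequential Bronze → Silver → Gold flow then amounts to calling each stage in order with the same `run_date`, so a failed daily run can be replayed for a specific date.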
Gold lakehouse connected to Power BI via Direct Lake — no import, no duplication. Measures, dimensions, and a starter report.
Full markdown docs: architecture diagram, layer configs, workspace mapping, Spark settings, RBAC guide, and next-steps playbook.
4-hour hands-on session with a Dual Microsoft MVP. We customize notebooks for your data, configure the pipeline, and validate end-to-end.
Two follow-up calls in the first 30 days. Pipeline tuning, Spark optimization, and data quality questions answered.
30-min call to understand your data sources, volume, schema complexity, and analytics goals. We scope the medallion design.
We design the workspace layout, naming conventions, layer configs, and notebook logic. You approve before we touch your tenant.
Using the FabricDataEngineer agent and medallion architecture skills, we create the workspaces, lakehouses, notebooks, and pipeline in your tenant. Takes 1–2 hours.
4-hour session: we customize notebooks for your data, run the pipeline end-to-end, validate results, and walk through the documentation.
A layered data architecture from raw ingestion to analytics-ready output.
DATA SOURCES → BRONZE (Raw) → SILVER (Cleaned) → GOLD (Analytics) → POWER BI

| Sources | Bronze (Raw) | Silver (Cleaned) | Gold (Analytics) | Power BI |
|---|---|---|---|---|
| Files | Append-only | Dedup, validate | Aggregate, curate | Direct Lake |
| APIs | Schema-on-read | Schema enforce | V-Order, ZORDER | Semantic model |
| Databases | Partition: date | Partition: biz date | Partition: month | Reports |
| | DIY | With PowerMates |
|---|---|---|
| Time to production architecture | 4–8 weeks | 1 day |
| Notebook development | Trial-and-error | Production-ready templates |
| Spark optimization | Learn by Googling | Pre-tuned per layer |
| RBAC setup | Often skipped | Built in from start |
| Documentation | Usually none | Full architecture docs |
| Expert guidance | Stack Overflow | MVP working session |
We can scaffold into your existing workspace or create new ones alongside. The notebooks and pipeline logic work either way.
That's what the 4-hour working session is for. We'll adapt the templates to your actual tables, schema, and business logic.
Yes. You need at least an F2 SKU (or Power BI Premium P1) to run Spark notebooks and lakehouses. We can help you figure out the right size during the discovery call.
Bundle them. We'll scan your existing tenant first ($2,500–$12,500 depending on tier), then use those findings to inform the medallion architecture design. Common combo: AI-Assisted Scan ($12,500) + Architecture Accelerator ($15,000) = $25,000.
Stop building from scratch. Get a production-ready medallion architecture deployed by a Microsoft MVP.