Findings (7)
Compatibility
Fabric SQL Analytics Endpoint does not support T-SQL stored procedures. All 52 procs must be converted to Spark notebooks or Dataflow Gen2.
→ Categorize procs by complexity: simple (SELECT-based) → views, medium (INSERT/UPDATE) → Dataflow Gen2, complex (cursors/temp tables) → Spark notebooks.
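The triage above can be sketched as a keyword heuristic; the patterns and target labels below are assumptions to be validated against the actual proc inventory.

```python
# Sketch: triage T-SQL stored procedure bodies by conversion complexity.
# Keyword heuristics are assumptions -- validate against the real 52-proc inventory.
import re

def categorize_proc(body: str) -> str:
    """Return a suggested Fabric migration target for a stored procedure body."""
    sql = body.upper()
    # Cursors, temp tables, and dynamic SQL need full procedural rewrites in Spark.
    if re.search(r"\bCURSOR\b|#\w+|\bEXEC(UTE)?\s*\(", sql):
        return "spark-notebook"
    # DML-only procs map onto Dataflow Gen2 transformations.
    if re.search(r"\b(INSERT|UPDATE|DELETE|MERGE)\b", sql):
        return "dataflow-gen2"
    # Pure SELECT logic can become a view on the SQL analytics endpoint.
    return "view"
```

Running this over scripted proc definitions gives a first-cut bucketing that a reviewer can then correct by hand.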
Compatibility
14 user-defined functions depend on CLR assemblies, which are not supported in any Fabric endpoint.
→ Rewrite as Python UDFs in Spark notebooks or inline the logic in Dataflow Gen2 expressions.
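As an illustration of the rewrite pattern, here is a hypothetical CLR scalar UDF (a phone-number normalizer, invented for this example) expressed as plain Python, with the Spark registration shown as comments.

```python
# Sketch: a hypothetical CLR scalar UDF (dbo.CleanPhone) rewritten in Python.
# The original CLR logic is an assumption for illustration only.
import re
from typing import Optional

def clean_phone(raw: Optional[str]) -> Optional[str]:
    """Strip formatting and keep the last 10 digits, mirroring the assumed CLR UDF."""
    if raw is None:
        return None
    digits = re.sub(r"\D", "", raw)
    return digits[-10:] if len(digits) >= 10 else digits

# In a Fabric Spark notebook, register it for SQL use:
# from pyspark.sql.functions import udf
# from pyspark.sql.types import StringType
# spark.udf.register("clean_phone", udf(clean_phone, StringType()))
```

Keeping the logic as a plain Python function (and only wrapping it in `udf` at registration time) makes it unit-testable outside Spark.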
Performance
47 tables use HASH distribution in Synapse. Fabric Lakehouse tables use Delta format without explicit distribution control.
→ For hot query tables, use Z-ORDER on the former distribution key columns. Monitor query performance post-migration.
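A minimal sketch of the Z-ORDER step, driving Delta Lake's `OPTIMIZE ... ZORDER BY` from the former distribution-key mapping; the table and column names are placeholders.

```python
# Sketch: generate Delta OPTIMIZE ... ZORDER BY statements from the former
# Synapse HASH distribution keys. Table/column names below are placeholders.
FORMER_HASH_KEYS = {
    "fact_sales": ["customer_key"],
    "fact_inventory": ["product_key", "warehouse_key"],
}

def zorder_statements(mapping: dict) -> list:
    """Build one OPTIMIZE statement per hot table, ready for spark.sql()."""
    return [
        f"OPTIMIZE {table} ZORDER BY ({', '.join(cols)})"
        for table, cols in mapping.items()
    ]

# In a notebook: for stmt in zorder_statements(FORMER_HASH_KEYS): spark.sql(stmt)
```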
Compatibility
23 external tables using PolyBase to read from ADLS Gen2 need conversion to OneLake shortcuts or OPENROWSET.
→ Replace PolyBase external tables with OneLake shortcuts for same-tenant data. Use Dataflow Gen2 for cross-tenant sources.
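For the shortcut conversion, each external table's ADLS Gen2 location can be mapped onto a shortcut-creation payload. The payload shape below follows my understanding of the Fabric shortcuts REST API and should be verified against the current documentation; account, container, and path values are placeholders.

```python
# Sketch: build a OneLake shortcut creation payload from a PolyBase external
# table's ADLS Gen2 location. Field names are an assumption -- verify against
# the current Fabric shortcuts REST API docs before use.
def shortcut_payload(name, account, container, subpath, connection_id):
    return {
        "path": "Tables",  # create the shortcut under the Lakehouse Tables area
        "name": name,
        "target": {
            "adlsGen2": {
                "location": f"https://{account}.dfs.core.windows.net/{container}",
                "subpath": subpath,
                "connectionId": connection_id,
            }
        },
    }
```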
Security
Column-level security on 8 tables and dynamic data masking on 15 columns must be reimplemented using Fabric security features.
→ Implement OneLake data access roles for table-level security. Use RLS in semantic models for row-level. Column masking via Purview.
Compatibility
12 materialized views used for query acceleration in Synapse have no direct equivalent in Fabric.
→ Convert to Delta tables with scheduled refresh via notebook or pipeline. Use Fabric Direct Lake mode for similar performance.
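The refresh-on-schedule pattern can be as small as one notebook function; the table name and query below are placeholders.

```python
# Sketch: replace a Synapse materialized view with a Delta table that a
# scheduled notebook recomputes. Names and query are placeholders.
def refresh_materialized(spark, table: str, query: str) -> None:
    """Recompute the table in one statement; Delta makes the swap transactional."""
    spark.sql(f"CREATE OR REPLACE TABLE {table} USING DELTA AS {query}")

# Invoked from a Fabric pipeline on a schedule, e.g.:
# refresh_materialized(spark, "gold.mv_daily_sales",
#     "SELECT sale_date, SUM(amount) AS amt FROM fact_sales GROUP BY sale_date")
```

`CREATE OR REPLACE` keeps readers on the previous snapshot until the new one commits, which approximates the materialized view's consistency behavior.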
Migration Planning
Moving 1.24 TB in a single pass exceeds the 4-hour maintenance window, creating a risk of data inconsistency during cutover.
→ Use phased migration: Wave 1 (dimension tables, <100GB) → Wave 2 (fact tables, incremental) → Wave 3 (real-time cutover).
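The wave sizing can be sanity-checked against the window with simple arithmetic; the throughput figure and per-wave sizes below are placeholder assumptions that a pilot copy should replace.

```python
# Sketch: check whether each migration wave fits the 4-hour window under an
# assumed sustained copy throughput. All figures are placeholders.
THROUGHPUT_MBPS = 50  # MB/s -- assumption; measure with a pilot copy
WINDOW_HOURS = 4

def transfer_hours(size_gb: float, throughput_mbps: float = THROUGHPUT_MBPS) -> float:
    """Estimated wall-clock hours to copy size_gb at the given throughput."""
    return (size_gb * 1024) / throughput_mbps / 3600

# Placeholder wave sizes in GB; the three waves must each fit the window,
# while the full 1240 GB single-pass copy does not.
waves = {"wave1_dims": 95, "wave2_facts_initial": 640, "wave3_cutover_delta": 40}
for name, gb in waves.items():
    hrs = transfer_hours(gb)
    print(f"{name}: {hrs:.2f} h {'OK' if hrs <= WINDOW_HOURS else 'EXCEEDS WINDOW'}")
```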
Recommendations
1. Categorize and convert 52 stored procedures: simple → views, medium → Dataflow Gen2, complex → Spark notebooks
2. Replace 23 PolyBase external tables with OneLake shortcuts
3. Plan a 3-wave migration for the 1.24 TB dataset to stay within the maintenance window
4. Rewrite 14 CLR-dependent UDFs as Python functions
5. Implement OneLake data access roles to replace column-level security
6. Convert 12 materialized views to scheduled Delta tables
7. Apply Z-ORDER optimization on former HASH distribution key columns