🗺️ Migration Playbook
Assessment Report
Client...
March 29, 2026 11:18 AM
gaston@thepowermates.com
Overall Health Score: 55 (Grade: F)
Severity breakdown: 2 CRITICAL · 3 HIGH · 2 MEDIUM
Source platform: Azure Synapse Dedicated SQL Pool
Target platform: Microsoft Fabric SQL Analytics Endpoint
Total objects: 342
Tables: 187
Views: 89
Stored procedures: 52
Functions: 14
Estimated data size: 1,240 GB

Findings (7)

52 stored procedures require rewrite
CRITICAL · Compatibility

The Fabric SQL Analytics Endpoint does not support T-SQL stored procedures, so all 52 procedures must be converted to Spark notebooks or Dataflow Gen2.

→ Categorize procedures by complexity: simple (SELECT-based) → views, medium (INSERT/UPDATE) → Dataflow Gen2, complex (cursors/temp tables) → Spark notebooks.
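A minimal triage sketch, assuming each procedure has been scripted out to its own .sql file in a hypothetical exported_procs/ folder; the keyword heuristics are a starting point for the categorization, not a substitute for manual review:

```python
import re
from pathlib import Path

# Bucket exported procedure definitions (one .sql file per procedure)
# into the three conversion targets named above.
COMPLEX = re.compile(r"\bDECLARE\s+\w+\s+CURSOR\b|#\w+", re.IGNORECASE)
MEDIUM = re.compile(r"\b(INSERT|UPDATE|DELETE|MERGE)\b", re.IGNORECASE)

def classify(sql: str) -> str:
    if COMPLEX.search(sql):
        return "spark_notebook"   # cursors or temp tables
    if MEDIUM.search(sql):
        return "dataflow_gen2"    # DML without procedural constructs
    return "view"                 # SELECT-only logic

buckets = {"view": [], "dataflow_gen2": [], "spark_notebook": []}
for path in Path("exported_procs").glob("*.sql"):  # assumed export folder
    buckets[classify(path.read_text())].append(path.stem)

for target, names in buckets.items():
    print(f"{target}: {len(names)} procedures")
```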
14 UDFs with CLR dependencies
CRITICAL · Compatibility

14 user-defined functions depend on CLR assemblies, which no Fabric endpoint supports.

→ Rewrite them as Python UDFs in Spark notebooks or inline the logic in Dataflow Gen2 expressions.
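A sketch of the target shape, using a hypothetical CLR string-normalization function as the stand-in; the table and column names (products, sku) are illustrative:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()

# Hypothetical replacement for a CLR string-normalization function;
# the assembly's logic must be re-expressed in pure Python.
@udf(returnType=StringType())
def normalize_sku(raw):
    return raw.strip().upper().replace("-", "") if raw else None

# "products" and "sku" are illustrative table/column names.
df = spark.table("products").withColumn("sku_norm", normalize_sku("sku"))
df.show(5)
```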
Distribution keys not transferable
HIGH · Performance

47 tables use HASH distribution in Synapse. Fabric Lakehouse tables use the Delta format, which offers no explicit distribution control.

→ For hot query tables, apply Z-ORDER on the former distribution key columns and monitor query performance post-migration.
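For example, assuming a fact table formerly HASH-distributed on customer_id (both names illustrative), the Delta OPTIMIZE command can be issued from a notebook so file clustering approximates the old distribution benefit:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact the table and cluster files on the former HASH distribution
# key so Delta file skipping serves the same hot query patterns.
spark.sql("OPTIMIZE fact_sales ZORDER BY (customer_id)")
```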
PolyBase external tables require a new approach
HIGH · Compatibility

23 external tables use PolyBase to read from ADLS Gen2 and need conversion to OneLake shortcuts or OPENROWSET.

→ Replace PolyBase external tables with OneLake shortcuts for same-tenant data; use Dataflow Gen2 for cross-tenant sources.
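Once a shortcut is in place, the former external table becomes an ordinary file read from the Lakehouse. A sketch assuming a hypothetical shortcut named landing_zone under Files/:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# A OneLake shortcut surfaces the ADLS Gen2 location as a folder in the
# Lakehouse; "Files/landing_zone" is an assumed shortcut path.
ext_sales = spark.read.parquet("Files/landing_zone/sales")
ext_sales.createOrReplaceTempView("ext_sales")  # query like the old external table
spark.sql("SELECT COUNT(*) FROM ext_sales").show()
```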
Security model migration needed
HIGH · Security

Column-level security on 8 tables and dynamic data masking on 15 columns must be reimplemented with Fabric security features.

→ Implement OneLake data access roles for table-level security, row-level security in semantic models, and column masking via Purview.
Materialized views not supported
MEDIUM · Compatibility

12 materialized views used for query acceleration in Synapse have no direct equivalent in Fabric.

→ Convert them to Delta tables refreshed on a schedule via notebook or pipeline; use Fabric Direct Lake mode for comparable performance.
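A minimal refresh-notebook sketch; the aggregation query and table names stand in for one of the 12 materialized view definitions, and the notebook would be scheduled through a Fabric pipeline:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Recompute the former materialized view's query and overwrite a Delta
# table that downstream queries read instead of the view.
agg = spark.sql("""
    SELECT region, order_date, SUM(amount) AS total_amount
    FROM fact_sales
    GROUP BY region, order_date
""")
agg.write.format("delta").mode("overwrite").saveAsTable("mv_sales_by_region")
```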
1.24 TB data transfer requires a phased approach
MEDIUM · Migration Planning

Moving 1.24 TB in a single migration window would exceed the 4-hour maintenance window and risks data inconsistency during cutover.

→ Use a phased migration: Wave 1 (dimension tables, <100 GB) → Wave 2 (fact tables, incremental) → Wave 3 (real-time cutover).
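A Wave 2 sketch of the incremental fact-table copy, assuming a modified_at watermark column and JDBC access back to the Synapse pool; the URL, credentials, and table names are placeholders:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Pull only rows newer than the highest watermark already loaded, so
# fact tables move incrementally between maintenance windows.
last_wm = (spark.table("fact_orders").agg(F.max("modified_at")).first()[0]
           or "1900-01-01")  # fall back for the first run

incremental = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://<synapse-server>.sql.azuresynapse.net;databaseName=dw")
    .option("query", f"SELECT * FROM dbo.fact_orders WHERE modified_at > '{last_wm}'")
    .option("user", "<user>").option("password", "<password>")
    .load()
)
incremental.write.format("delta").mode("append").saveAsTable("fact_orders")
```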

Recommendations

1. Categorize and convert 52 stored procedures: simple → views, medium → Dataflow Gen2, complex → Spark notebooks
2. Replace 23 PolyBase external tables with OneLake shortcuts
3. Plan a 3-wave migration for the 1.24 TB dataset to stay within maintenance windows
4. Rewrite 14 CLR-dependent UDFs as Python functions
5. Implement OneLake data access roles to replace column-level security
6. Convert 12 materialized views to scheduled Delta table refreshes
7. Apply Z-ORDER optimization on former HASH distribution key columns