Microsoft's DP-700 certification exam validates your ability to implement data engineering solutions with Microsoft Fabric and certifies you as a Fabric Data Engineer. 🧑‍🎓

The exam covers essential skills across the key Fabric components such as warehouses, lakehouses, data pipelines, dataflows, and notebooks. The skills measured in the DP-700 exam are grouped into three areas:
- Implement and manage an analytics solution (30–35%)
- Ingest and transform data (30–35%)
- Monitor and optimize an analytics solution (30–35%)
This page provides a comprehensive overview of the skills measured in the DP-700 exam. We provide step-by-step tutorials for each exam topic to help you prepare for the certification and master the skills needed to become a certified Fabric Data Engineer. 🚀

Microsoft Fabric Book
We are delighted to publish a hands-on guide to implementing end-to-end data projects in Microsoft Fabric. The book walks you through the key components and functionality of Microsoft Fabric and invites you to actively follow the steps yourself. Numerous visual elements make the learning even clearer, and the explanations are illustrated by a fictional story about a futuristic data factory that symbolizes Microsoft Fabric.
View on Amazon

Implement and manage an analytics solution (30–35%)
Configure Microsoft Fabric workspace settings
- Configure Spark workspace settings
- Configure domain workspace settings
- Configure OneLake workspace settings
- Configure data workflow workspace settings
Implement lifecycle management in Fabric
- Configure version control
- Implement database projects
- Create and configure deployment pipelines
Configure security and governance
- Implement workspace-level access controls
- Implement item-level access controls
- Implement row-level, column-level, object-level, and folder/file-level access controls
- Implement dynamic data masking
- Apply sensitivity labels to items
- Endorse items
- Implement and use workspace logging
Orchestrate processes
- Choose between a pipeline and a notebook
- Design and implement schedules and event-based triggers
- Implement orchestration patterns with notebooks and pipelines, including parameters and dynamic expressions
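A common orchestration pattern is a parent notebook that calls child notebooks and passes parameters to them. Below is a minimal sketch, assuming the notebookutils helper that ships with the Fabric Spark runtime; the notebook names, parameters, and exit values are hypothetical.

```python
# Minimal orchestration sketch for a parent Fabric notebook.
# Assumes notebookutils is available in the Fabric Spark runtime;
# child notebook names, parameters, and exit values are hypothetical.
load_date = "2025-01-31"  # in a pipeline, this would arrive as a parameter

# Run a child notebook, passing parameters as a dictionary.
result = notebookutils.notebook.run(
    "nb_ingest_sales",        # hypothetical child notebook
    600,                      # timeout in seconds
    {"load_date": load_date},
)

# Branch on the child notebook's exit value before continuing.
if result == "ok":
    notebookutils.notebook.run("nb_transform_sales", 600, {"load_date": load_date})
else:
    notebookutils.notebook.exit(f"ingestion failed for {load_date}")
```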
Ingest and transform data (30–35%)
Design and implement loading patterns
- Design and implement full and incremental data loads (see the incremental-load sketch after this list)
- Prepare data for loading into a dimensional model
- Design and implement a loading pattern for streaming data
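As an illustration of an incremental load, the sketch below reads only the rows changed since a watermark and upserts them into a lakehouse Delta table. It assumes a Fabric notebook with a pre-defined spark session and the delta-spark API; the table, column, and watermark values are hypothetical.

```python
from pyspark.sql import functions as F
from delta.tables import DeltaTable

# Read only the rows changed since the last successful load (the watermark
# would normally come from a control table; hard-coded here for brevity).
last_watermark = "2025-01-31 00:00:00"
changes = (
    spark.read.table("staging_sales")                      # hypothetical staging table
         .filter(F.col("modified_at") > F.lit(last_watermark))
)

# Upsert (merge) the changed rows into the target lakehouse Delta table.
target = DeltaTable.forName(spark, "fact_sales")           # hypothetical target table
(
    target.alias("t")
          .merge(changes.alias("s"), "t.sale_id = s.sale_id")
          .whenMatchedUpdateAll()
          .whenNotMatchedInsertAll()
          .execute()
)
```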
Ingest and transform batch data
- Choose an appropriate data store
- Choose between dataflows, notebooks, KQL, and T-SQL for data transformation
- Create and manage shortcuts to data
- Implement mirroring
- Ingest data by using pipelines
- Transform data by using PySpark, SQL, and KQL
- Denormalize data
- Group and aggregate data
- Handle duplicate, missing, and late-arriving data
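To illustrate the last few items (duplicate handling, missing values, grouping and aggregation), here is a minimal PySpark sketch as it could run in a Fabric notebook; the spark session is assumed to be pre-defined and all table and column names are hypothetical.

```python
from pyspark.sql import functions as F
from pyspark.sql.window import Window

orders = spark.read.table("bronze_orders")                 # hypothetical source table

# Duplicate handling: keep only the most recent version of each order.
latest = Window.partitionBy("order_id").orderBy(F.col("modified_at").desc())
deduped = (
    orders.withColumn("rn", F.row_number().over(latest))
          .filter("rn = 1")
          .drop("rn")
)

# Missing data: replace null quantities and amounts before aggregating.
filled = deduped.fillna({"quantity": 0, "amount": 0.0})

# Group and aggregate into a daily summary table.
daily = (
    filled.groupBy("order_date", "customer_id")
          .agg(F.sum("amount").alias("total_amount"),
               F.count("order_id").alias("order_count"))
)
daily.write.mode("overwrite").saveAsTable("silver_daily_orders")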
Ingest and transform streaming data
- Choose an appropriate streaming engine
- Choose between native storage, followed storage, or shortcuts in Real-Time Intelligence
- Process data by using eventstreams
- Process data by using Spark structured streaming (see the sketch after this list)
- Process data by using KQL
- Create windowing functions
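As a sketch of Spark structured streaming with a windowing function, the example below aggregates events into tumbling 5-minute windows and uses a watermark to handle late arrivals; the source folder, schema, and sink table are hypothetical.

```python
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read a stream of JSON files landing in the lakehouse (hypothetical folder).
events = (
    spark.readStream
         .schema(schema)
         .json("Files/landing/telemetry")
)

# Tumbling 5-minute window; events arriving more than 10 minutes late are dropped.
windowed = (
    events.withWatermark("event_time", "10 minutes")
          .groupBy(F.window("event_time", "5 minutes"), "device_id")
          .agg(F.avg("reading").alias("avg_reading"))
)

# Write the aggregates to a Delta table; the checkpoint enables recovery after failures.
query = (
    windowed.writeStream
            .outputMode("append")
            .option("checkpointLocation", "Files/checkpoints/telemetry_5min")
            .toTable("silver_telemetry_5min")
)
```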
Monitor and optimize an analytics solution (30–35%)
Monitor Fabric items
- Monitor data ingestion
- Monitor data transformation
- Monitor semantic model refresh
- Configure alerts
Identify and resolve errors
- Identify and resolve pipeline errors
- Identify and resolve dataflow errors
- Identify and resolve notebook errors
- Identify and resolve eventhouse errors
- Identify and resolve eventstream errors
- Identify and resolve T-SQL errors
Optimize performance
- Optimize a lakehouse table (see the maintenance sketch after this list)
- Optimize a pipeline
- Optimize a data warehouse
- Optimize eventstreams and eventhouses
- Optimize Spark performance
- Optimize query performance
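As an example of lakehouse table optimization, the sketch below runs routine Delta maintenance from a Fabric notebook; the table name is hypothetical and the retention period should match your own history requirements.

```python
# Routine Delta maintenance on a lakehouse table (hypothetical table name).
# Compact small files into larger ones; Fabric can additionally apply V-Order
# when it is enabled for the session or table.
spark.sql("OPTIMIZE silver_daily_orders")

# Remove data files no longer referenced by the table, keeping 7 days of history.
spark.sql("VACUUM silver_daily_orders RETAIN 168 HOURS")
```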
Resources
