DP-700: Implementing Data Engineering Solutions Using Microsoft Fabric Certification Video Training Course
The complete solution to prepare for your exam: the DP-700: Implementing Data Engineering Solutions Using Microsoft Fabric certification video training course. The course contains a complete set of videos that will give you a thorough understanding of the key concepts. Top-notch prep including Microsoft DP-700 exam dumps, a study guide, and practice test questions and answers.
DP-700: Implementing Data Engineering Solutions Using Microsoft Fabric Certification Video Training Course Exam Curriculum
Introduction
-
1:28
1. Introduction
-
2:32
2. Curriculum
A look around Fabric
-
5:12
1. Signing into Microsoft Fabric
-
13:49
2. Why do I need a work email address? And how can I get one if I don't have one?
-
7:02
3. Creating a Fabric capacity and configuring Fabric-enabled workspace settings
-
6:56
4. Identify requirements for a Fabric solution and manage Fabric capacity
-
8:55
5. A quick tour of Fabric
Using Dataflow Gen2
-
8:22
1. Ingest data by using a dataflow
-
7:08
2. Add a destination to a dataflow
-
4:52
3. Saving as a template and scheduling a dataflow
-
3:22
4. Implement Fast Copy when using dataflows
-
6:21
5. Monitor data transformation, identify and resolve errors using dataflows
-
7:04
6. Optimize a dataflow
Transforming data using Dataflow Gen2
-
7:06
1. The first part of the Home menu, including converting column data types
-
8:30
2. Removing rows/columns, and filtering and sorting data
-
5:52
3. Grouping and aggregating data, and duplicating and referencing queries
-
6:34
4. Denormalize data by joining data together using Merge Queries
-
5:50
5. Unioning data using Append Queries
-
8:51
6. Identify and resolve duplicate data, missing data (null values)
-
6:55
7. Transforming data and adding additional columns
-
9:21
8. Practice Activity Number 1 - The Solution
Transforming data by using Power Query (M)
-
8:42
1. Introducing the M language
-
9:14
2. M Number functions
-
6:51
3. M Text functions
-
7:15
4. M Date, Time and Duration functions
-
6:29
5. M Group functions and removing rows
-
9:31
6. M Table functions
Using pipelines
-
7:05
1. Ingest data by using a data pipeline, and adding other activities
-
9:29
2. Copy data by using a data pipeline
-
4:27
3. Schedule data pipelines and monitor data pipeline runs
-
6:41
4. Identifying and resolving pipeline errors, and optimizing a pipeline
-
1:57
5. Exploring sample data (including copy data assistant) + data pipeline templates
-
9:03
6. Practice Activity Number 2 - The Solution
Loading and saving data using notebooks
-
5:28
1. Ingesting data into a lakehouse using a local upload
-
3:03
2. Choose an appropriate method for copying to a Lakehouse or Warehouse
-
9:00
3. Ingesting data using a notebook, and copying to a table
-
8:20
4. Saving data to a file or Lakehouse table
-
7:16
5. Loading data from a table in PySpark and SQL, and manipulating the results
-
3:31
6. Practice Activity Number 3 - The Solution
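To give a feel for what this section covers, here is a minimal PySpark sketch of ingesting a file into a lakehouse and saving it as a table. The file path and table name are illustrative assumptions, not taken from the course.

```python
# A minimal sketch, assuming a CSV file uploaded to the lakehouse Files area
# and a hypothetical table name "sales".
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Ingest: read the uploaded CSV into a dataframe.
df = spark.read.option("header", "true").csv("Files/sales.csv")

# Save: write the dataframe as a managed Delta table in the lakehouse.
df.write.mode("overwrite").format("delta").saveAsTable("sales")

# Load the table back and manipulate the results with SQL.
spark.sql("SELECT * FROM sales LIMIT 10").show()
```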
Manipulating dataframes - choosing columns and rows
-
5:53
1. Reducing the number of columns shown
-
7:14
2. Filtering data with where, limit and tail
-
3:07
3. Enriching data by adding new columns
-
7:41
4. Using Functions
-
7:28
5. More advanced filtering
-
8:37
6. Practice Activity Number 4 using PySpark - The Solution
-
3:28
7. Practice Activity Number 5 using SQL - The Solution
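As a rough illustration of the column and row operations these lessons walk through, here is a hedged PySpark sketch; the table and column names are assumptions.

```python
# A minimal sketch, assuming a hypothetical "sales" table with
# OrderID, Region and Amount columns.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("sales")

# Reduce the number of columns shown.
df = df.select("OrderID", "Region", "Amount")

# Filter rows with where(), then cap the output with limit().
df = df.where(F.col("Amount") > 100).limit(50)

# Enrich the data by adding a new, derived column.
df = df.withColumn("AmountWithTax", F.col("Amount") * 1.2)

df.show()
last_rows = df.tail(5)  # tail() returns the final rows as a list of Row objects
```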
Converting data types, aggregating and sorting dataframes
-
6:08
1. Converting data types
-
3:58
2. Importing data using an explicit data structure
-
6:42
3. Formatting dates as strings
-
4:37
4. Aggregating and re-filtering data
-
5:52
5. Sorting the results
-
4:49
6. Using all 6 SQL Clauses
-
6:35
7. Practice Activity Number 6 using PySpark - The Solution
-
5:55
8. Practice Activity Number 7 using SQL - The Solution
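The pipeline of conversions, aggregation and sorting in this section could look roughly like the following PySpark sketch (table and column names are assumptions). The same query in SQL would use all six clauses: SELECT, FROM, WHERE, GROUP BY, HAVING and ORDER BY.

```python
# A minimal sketch, assuming a hypothetical "sales" table with
# Amount and OrderDate columns.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("sales")

result = (
    df.withColumn("Amount", F.col("Amount").cast("double"))              # convert data type
      .withColumn("OrderDay", F.date_format("OrderDate", "yyyy-MM-dd"))  # format date as string
      .groupBy("OrderDay")
      .agg(F.sum("Amount").alias("TotalAmount"))                         # aggregate
      .where(F.col("TotalAmount") > 1000)                                # re-filter (HAVING-style)
      .orderBy(F.col("TotalAmount").desc())                              # sort the results
)
result.show()
```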
Transforming data in a lakehouse
-
8:12
1. Merging data
-
5:11
2. Identifying and resolving duplicate data
-
6:31
3. Joining data using an Inner join
-
6:44
4. Joining data using other joins
-
7:33
5. Identifying missing data or null values
-
3:00
6. Practice Activity Number 8 using PySpark - The Solution
-
7:34
7. Practice Activity Number 9 using PySpark - The Solution
-
8:38
8. Practice Activity Number 10 using SQL - The Solution
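A hedged PySpark sketch of the lakehouse transformations above (duplicates, joins and nulls); table and column names are assumptions.

```python
# A minimal sketch, assuming hypothetical "orders" and "customers" tables
# sharing a CustomerID column.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
orders = spark.table("orders")
customers = spark.table("customers")

# Identify and resolve duplicate data.
orders = orders.dropDuplicates(["OrderID"])

# An inner join keeps only matching rows; a left join also keeps orders
# with no matching customer (their customer columns become null).
joined = orders.join(customers, on="CustomerID", how="left")

# Resolve the missing data (null values) with a default.
joined = joined.fillna({"Region": "Unknown"})
joined.show()
```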
Improving notebook performance and automating notebooks
-
2:50
1. Schedule notebooks
-
8:12
2. Process data by using Spark structured streaming in a notebook
-
4:00
3. Testing the processing of streaming data in a notebook
-
9:09
4. Process data by using a Spark Job Definition
-
4:11
5. Choosing between a pipeline, a dataflow and a notebook
-
7:19
6. Implement parameters with notebooks and pipelines
-
6:14
7. Implement dynamic expressions with notebooks and pipelines
-
8:03
8. Practice Activity Number 11 - The Solution
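For the structured-streaming lessons, here is a minimal runnable sketch using Spark's built-in rate source as a stand-in for a real event feed; the table and checkpoint names are assumptions.

```python
# A minimal sketch: the synthetic "rate" source emits one row per second.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

stream = spark.readStream.format("rate").option("rowsPerSecond", 1).load()

# Write the stream continuously to a Delta table, tracking progress in a
# checkpoint folder so the query can restart where it left off.
query = (
    stream.writeStream
          .format("delta")
          .option("checkpointLocation", "Files/checkpoints/rate_demo")
          .toTable("rate_demo")
)
query.awaitTermination(30)  # let the demo run for 30 seconds
query.stop()
```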
Creating objects
-
5:55
1. Create and manage shortcuts
-
3:49
2. Identify and resolve Shortcut errors
-
3:02
3. Configure OneLake workspace settings
-
2:23
4. Creating a Microsoft Azure SQL Database as a source
-
8:25
5. Implement file partitioning for analytics workloads using a pipeline
-
3:10
6. Implement file partitioning for analytics workloads - data is in a lakehouse
-
7:34
7. Implement mirroring of external databases
-
3:41
8. Practice Activity Number 12 - The Solution
Optimize performance in notebooks
-
3:11
1. Identify and resolve data loading performance bottlenecks in notebooks
-
3:54
2. Implement performance improvements in notebooks, inc. V-Order
-
2:42
3. Identify and resolve issues with Delta table file: optimized writes
-
5:10
4. Optimize Spark performance
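The sort of session settings and table maintenance these lessons cover is sketched below; the table name is an assumption, and the exact configuration keys can vary between Fabric runtime versions.

```python
# A minimal sketch, assuming a hypothetical Delta table named "sales".
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Enable V-Order writes (a Fabric-specific Parquet optimization) and
# Delta optimized writes for the current session.
spark.conf.set("spark.sql.parquet.vorder.enabled", "true")
spark.conf.set("spark.databricks.delta.optimizeWrite.enabled", "true")

# Compact small files in the table, then clean up old file versions.
spark.sql("OPTIMIZE sales")
spark.sql("VACUUM sales")
```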
Transform data in a data warehouse
-
5:52
1. Creating tables in a data warehouse
-
6:58
2. Inserting data into tables and transforming data in a Data Warehouse
-
3:26
3. Choose between dataflows, notebooks, and T-SQL for data transformation
-
7:06
4. Slowly changing dimensions - Theory
-
4:56
5. Implement Type 0 slowly changing dimensions - Practical Example
-
7:29
6. Implement Type 1 and Type 2 slowly changing dimensions - Practical Example
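The warehouse lessons implement slowly changing dimensions in T-SQL; as a rough analogue, here is a Type 1 merge sketched with PySpark and Delta in a lakehouse (table and column names are assumptions). A Type 2 version would instead close off the old row, for example by setting an EndDate, and insert a new row so that history is preserved.

```python
# A minimal SCD Type 1 sketch: overwrite changed attributes in place,
# keeping no history. Table and column names are hypothetical.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

dim = DeltaTable.forName(spark, "dim_customer")
updates = spark.table("staging_customer")

(dim.alias("d")
    .merge(updates.alias("s"), "d.CustomerID = s.CustomerID")
    .whenMatchedUpdate(set={"City": "s.City"})   # Type 1: update in place
    .whenNotMatchedInsertAll()                   # brand-new customers
    .execute())
```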
Creating incremental data loads
-
5:00
1. Design an incremental data load from a Data Warehouse using a pipeline
-
9:46
2. Implement an incremental data load from a Data Warehouse using a pipeline
-
4:50
3. Test an incremental data load from a Data Warehouse using a pipeline
-
9:02
4. Implementing an incremental data load using Dataflow Gen2
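In the lessons this pattern is built with a pipeline or Dataflow Gen2; the underlying watermark idea, sketched in PySpark with hypothetical table and column names, looks roughly like this:

```python
# A minimal incremental-load sketch. "etl_watermark", "source_sales",
# "sales", "LoadedUpTo" and "ModifiedDate" are all assumed names.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# 1. Read the high-water mark left by the previous load.
last_load = spark.table("etl_watermark").agg(F.max("LoadedUpTo")).first()[0]

# 2. Pull only the rows that changed since then.
new_rows = spark.table("source_sales").where(F.col("ModifiedDate") > last_load)

# 3. Append them and advance the watermark to the newest change loaded.
new_rows.write.mode("append").saveAsTable("sales")
new_max = new_rows.agg(F.max("ModifiedDate")).first()[0]
if new_max is not None:
    spark.sql(f"UPDATE etl_watermark SET LoadedUpTo = '{new_max}'")
```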
Manage and optimize a data warehouse
-
6:53
1. Creating a Premium Per User (PPU) workspace and Azure DevOps repos
-
7:56
2. Implement version control for a workspace
-
7:06
3. Implement database projects, including source control
-
6:32
4. Implement dynamic data masking in a Data Warehouse - Video 1
-
6:14
5. Implement dynamic data masking in a Data Warehouse - Video 2
-
7:10
6. Optimize a data warehouse
-
5:59
7. Practice Activity Number 13 - The Solution
Creating an eventhouse
-
5:39
1. Creating an eventhouse, exploring the environment, and getting data
-
7:52
2. Creating sample KQL and SQL queries, and exploring the query environment
Selecting, filtering and aggregating data using KQL
-
7:12
1. Selecting data using KQL
-
4:29
2. Further selecting columns and ordering data using KQL
-
5:23
3. Limiting the number of rows
-
9:04
4. Practice Activity Number 14 - The Solution
-
4:57
5. Creating a string literal
-
8:00
6. Filtering for the entirety of a string
-
7:13
7. Filtering for part of a string
-
8:05
8. Aggregating data
-
7:01
9. Practice Activity Number 15 - The Solution
KQL Functions
-
8:51
1. Empty strings, concatenating and trimming strings
-
8:25
2. Manipulating strings
-
1:16
3. Other string functions
-
7:27
4. Practice Activity Number 16 - The Solution
-
6:37
5. Number Data Types
-
4:05
6. Other Math Functions
-
5:12
7. datetime and timespan Data Types
-
8:10
8. datetime and timespan Functions
-
6:04
9. Practice Activity Number 17 - The Solution
Transforming data using KQL
-
4:07
1. Merging data
-
10:33
2. Joining data
-
5:18
3. Practice Activity Number 18 - The Solution
-
6:13
4. Identify and resolve duplicate data, missing data, or null values
-
3:48
5. The iif/iff and case conditional functions
-
5:39
6. The OneLake data and real-time hubs + implementing OneLake integration
-
4:57
7. Practice Activity Number 19 - The Solution
Ingest and transform streaming data - eventstreams
-
3:02
1. Choose an appropriate streaming engine
-
8:39
2. Processing data by using an eventstream
-
6:59
3. The Manage fields transform event in an eventstream
-
7:02
4. The Group by transform event, including creating windowing functions
-
7:24
5. Completing our eventstream
Ingest and transform streaming data - other objects
-
7:50
1. Revising KQL Syntax
-
8:00
2. Creating a Fabric activator to run based on an event-based trigger
-
9:49
3. Ingest data by using continuous ingestion from OneLake - Part 2
-
2:50
4. Design and implement an event-based trigger based on Azure Blob storage
-
5:03
5. Optimizing eventstreams and eventhouses
-
8:47
6. Native storage, mirrored storage, or shortcuts in Real-Time Intelligence
-
3:49
7. Choose between accelerated shortcuts and non-accelerated shortcuts
Workspace settings and Monitoring
-
8:51
1. Spark workspace settings: starter and custom pools, and environments
-
6:06
2. Other Spark workspace settings
-
10:23
3. Configure domain workspace settings
-
2:11
4. Configure data workflow workspace settings
-
4:04
5. Recommend settings in the Fabric admin portal
-
5:03
6. Implement workspace and item-level access controls for Fabric items
-
3:55
7. Installing the Microsoft Fabric Capacity Metrics app
-
7:13
8. Using the Microsoft Fabric Capacity Metrics app - Manage Fabric capacity
-
3:26
9. Monitor semantic model refresh
-
5:27
10. Implement workspace logging
-
6:06
11. Workspace logging dashboards
-
6:55
12. Querying Workspace logs in KQL
Configuring security and governance, and deployment pipelines
-
7:04
1. Apply sensitivity labels to items
-
4:57
2. Endorse items
-
8:54
3. Row-level security in a Data Warehouse
-
6:40
4. Column-level security in a Data Warehouse
-
8:38
5. Object-level security in a Data Warehouse
-
5:15
6. Folder-/File-level access controls in a Lakehouse
-
10:35
7. Creating a deployment pipeline
-
6:43
8. Configuring a deployment pipeline
Congratulations on completing the course
-
1:13
1. What's Next?
-
0:44
2. Congratulations on completing the course
About DP-700: Implementing Data Engineering Solutions Using Microsoft Fabric Certification Video Training Course
The DP-700: Implementing Data Engineering Solutions Using Microsoft Fabric certification video training course by Prepaway, along with practice test questions and answers, a study guide, and exam dumps, provides the ultimate training package to help you pass.
Prepaway's DP-700: Implementing Data Engineering Solutions Using Microsoft Fabric video training course for passing certification exams is the only solution you need.
Pass the Microsoft DP-700 Exam on Your First Attempt, Guaranteed!
Get 100% Latest Exam Questions, Accurate & Verified Answers As Seen in the Actual Exam!
30 Days Free Updates, Instant Download!
DP-700 Premium Bundle
- Premium File: 139 Questions & Answers. Last update: Apr 29, 2026
- Training Course: 160 Video Lectures
| Free DP-700 Exam Questions & Microsoft DP-700 Dumps | | |
|---|---|---|
| Microsoft.testking.dp-700.v2026-02-23.by.heidi.7q.ete | Views: 0, Downloads: 816 | Size: 520.85 KB |