
Microsoft DP-300 Exam Dumps & Practice Test Questions

Question 1

You are a cloud administrator overseeing the migration of an on-premises Microsoft SQL Server 2019 database named DB1 to SQLMI1, an Azure SQL Managed Instance deployed in VNET1. Connectivity between the on-premises network and Azure is established using ExpressRoute. You are preparing for the migration using Azure Database Migration Service (DMS).

Which configuration must be applied to VNET1 to allow the DMS to communicate with both the source and target servers?

A. Configure service endpoints
B. Set up virtual network peering
C. Deploy an Azure Firewall
D. Modify network security group (NSG) rules

Correct Answer: B

Explanation:
To migrate the database using Azure Database Migration Service (DMS), the service must be able to reach both the source (the on-premises SQL Server) and the target (SQLMI1 in VNET1). The configuration that enables this communication is virtual network peering. Peering VNET1 with the virtual network used by DMS and the ExpressRoute gateway allows traffic to flow between the on-premises network (over ExpressRoute) and the Azure environment, so the migration can run end to end.

A. Configure service endpoints - Service endpoints extend a virtual network's identity to specific Azure PaaS services (such as Azure Storage and Azure SQL Database) over the Azure backbone. They do not provide connectivity between networks, so they cannot give DMS a path to both the on-premises source and the managed instance.

B. Set up virtual network peering - This option is correct because virtual network peering allows VNET1 (where the SQL Managed Instance resides) to communicate securely with other virtual networks in Azure, including the network where DMS runs and, through the ExpressRoute gateway, the on-premises network. This peering ensures that DMS can migrate the database from the on-premises server to SQLMI1.

C. Deploy an Azure Firewall - An Azure Firewall could help with filtering traffic, but it is not the primary solution for enabling communication between the source and target servers in the context of a database migration. While it can provide network-level security, it doesn't directly facilitate the communication required for DMS migration.

D. Modify network security group (NSG) rules - Network Security Group (NSG) rules control access to resources based on IP address, port, and protocol. While configuring NSG rules might be necessary for securing communication, they alone do not ensure the overall network communication required between the on-premises environment and the Azure SQL Managed Instance. NSGs can limit or permit traffic but cannot enable inter-network connectivity like virtual network peering can.

Question 2

Your organization uses SQL Server with FILESTREAM and FileTables to manage unstructured data like images and documents. You plan to migrate this environment to Azure while maintaining compatibility with these features. 

Which Azure solution should you choose to fully support FILESTREAM and FileTables?

A. Azure SQL Database
B. SQL Server hosted on an Azure Virtual Machine
C. Azure SQL Managed Instance
D. Azure Database for MySQL

Correct Answer: B

Explanation:
SQL Server's FILESTREAM and FileTables are features used to store unstructured data such as images and documents within a SQL Server database. These features require access to the file system, which is not fully supported by all Azure database offerings. In this case, the correct solution is to migrate to SQL Server hosted on an Azure Virtual Machine. This solution ensures that the FILESTREAM and FileTables features are fully supported because it provides the ability to configure the SQL Server instance just like an on-premises installation, allowing access to the file system for unstructured data storage.
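
As an illustration of why file-system access matters, the following T-SQL sketch shows the kind of FILESTREAM and FileTable setup that only works where you control the instance and its disks, as you do with SQL Server on an Azure VM. The database name, file paths, and table name are illustrative, and the Windows-level FILESTREAM setting must also be enabled on the instance (for example through SQL Server Configuration Manager).

```sql
-- Enable FILESTREAM access for T-SQL and Win32 streaming (illustrative; the
-- FILESTREAM feature must also be enabled at the Windows/service level first).
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;
GO

-- Create a database with a FILESTREAM filegroup on the VM's local disk (paths are examples).
CREATE DATABASE DocStore
ON PRIMARY
    (NAME = DocStore_data, FILENAME = 'F:\SQLData\DocStore.mdf'),
FILEGROUP DocStoreFS CONTAINS FILESTREAM
    (NAME = DocStore_fs, FILENAME = 'F:\SQLData\DocStoreFS')
LOG ON
    (NAME = DocStore_log, FILENAME = 'F:\SQLData\DocStore.ldf')
WITH FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'DocStore');
GO

-- A FileTable exposes a Windows share; files copied into it appear as rows automatically.
USE DocStore;
GO
CREATE TABLE dbo.Documents AS FILETABLE
    WITH (FILETABLE_DIRECTORY = N'Documents');
GO
```

None of these statements are available in Azure SQL Database or Azure SQL Managed Instance, which is why a VM-hosted instance is required.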

A. Azure SQL Database - Azure SQL Database is a fully managed, scalable relational database service. However, it doesn't support FILESTREAM or FileTables. These features require direct access to the file system, which Azure SQL Database does not provide, so this option would not meet the requirements.

B. SQL Server hosted on an Azure Virtual Machine - This option is correct because SQL Server hosted on an Azure VM allows you to install and configure SQL Server exactly as you would on-premises. You can fully utilize FILESTREAM and FileTables because you have complete control over the SQL Server environment and its access to the underlying file system.

C. Azure SQL Managed Instance - While Azure SQL Managed Instance offers greater compatibility with on-premises SQL Server, it also does not support FILESTREAM or FileTables. Managed Instances offer most features of SQL Server but still do not provide direct file system access for unstructured data storage.

D. Azure Database for MySQL - This option is not applicable because Azure Database for MySQL does not support the SQL Server-specific features such as FILESTREAM or FileTables. It is a database service for MySQL, which is a different relational database system entirely. Therefore, it cannot support the features you're looking for in this scenario.

Question 3

You need to migrate a large on-premises SQL Server database to Azure SQL Database, with the goal of minimizing application downtime during the migration process. Which method should you use to best meet this requirement?

A. Configure Transaction Log Shipping
B. Implement Always On availability groups
C. Set up transactional replication
D. Export and import using a BACPAC file

Answer: C

Explanation:

When migrating a large on-premises SQL Server database to Azure SQL Database, minimizing application downtime is a critical factor. The best method depends on the complexity and real-time requirements of the migration.

Option C, set up transactional replication, is the most appropriate method for this scenario. Transactional replication allows for near-real-time data replication from the on-premises SQL Server to Azure SQL Database. By setting up replication, you can keep both systems synchronized while minimizing downtime. Once the replication process is set up, you can cut over to Azure SQL Database with minimal disruption. Transactional replication is designed to handle high-volume transactions with minimal latency, making it ideal for reducing downtime during migration.
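
The outline below is a minimal, hedged T-SQL sketch of that approach, run on the on-premises publisher. The publication, article, and server names are illustrative, a distributor must already be configured, and in practice you would also configure the push subscription agent (sp_addpushsubscription_agent) with credentials for the Azure SQL Database target.

```sql
-- 1. Enable the source database for transactional publishing (names are illustrative).
EXEC sp_replicationdboption
    @dbname = N'SalesDB', @optname = N'publish', @value = N'true';

-- 2. Create a continuous transactional publication.
EXEC sp_addpublication
    @publication = N'SalesPub',
    @repl_freq   = N'continuous',
    @status      = N'active';

-- 3. Add each table to be migrated as an article.
EXEC sp_addarticle
    @publication   = N'SalesPub',
    @article       = N'Orders',
    @source_owner  = N'dbo',
    @source_object = N'Orders';

-- 4. Add the Azure SQL Database as a push subscriber; once it is in sync, cut over.
EXEC sp_addsubscription
    @publication       = N'SalesPub',
    @subscriber        = N'targetserver.database.windows.net',
    @destination_db    = N'SalesDB',
    @subscription_type = N'Push';
```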

Option A, configure transaction log shipping, involves backing up transaction logs on the on-premises SQL Server and restoring them on a secondary server. Azure SQL Database does not support restoring transaction log (or any native) backups, so log shipping cannot be used with this target at all. It is a disaster recovery technique for SQL Server instances rather than a way to minimize downtime when migrating to Azure SQL Database.

Option B, implement Always On availability groups, is a high-availability feature for SQL Server instances, and Azure SQL Database cannot participate in an availability group as a replica. Availability groups require SQL Server instances, typically running in a Windows Server Failover Cluster, so this option cannot be used for a migration whose target is Azure SQL Database and does not address the goal of minimizing downtime.

Option D, export and import using a BACPAC file, is a method of exporting the database schema and data into a BACPAC file, which can then be imported into Azure SQL Database. This approach is simple but involves longer downtime since the data needs to be exported and imported in bulk, without a live connection between the source and destination databases during the process.

Therefore, the best approach for minimizing downtime is C, set up transactional replication.

Question 4

Your team manages an Azure SQL database that contains a table named Table1 with 20 columns defined as CHAR(400). The actual data stored in each column never exceeds 150 characters. You plan to apply page-level compression to improve storage efficiency.

Which change should you make to allow for effective compression?

A. Define the columns as sparse
B. Change the data type to NVARCHAR(MAX)
C. Change the data type to VARCHAR(MAX)
D. Change the data type to VARCHAR(200)

Answer: D

Explanation:

Page-level compression in Azure SQL Database works by reducing the amount of space required to store data. The more efficiently data is stored, the more effective compression will be.

In the case of CHAR(400) columns, the fixed-length nature of the CHAR data type can result in inefficient use of space, especially when the actual data stored in the column is much smaller than the defined size. This is because CHAR allocates the full 400 characters regardless of the actual data, leading to wasted space.

Option D, change the data type to VARCHAR(200), is the best choice. The VARCHAR data type is variable-length, meaning it will only use the space needed to store the actual data, rather than padding the column to its full length. By reducing the column size to VARCHAR(200) (based on the maximum expected data size of 150 characters), you enable better compression, as the database will only use as much space as is required for the actual data in each row, resulting in more efficient page-level compression.
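
A minimal T-SQL sketch of that change is shown below, using Col1 to stand in for each of the 20 columns; column names and nullability should match the real table definition. Converting CHAR to VARCHAR preserves the trailing padding, so the values are trimmed before the column is narrowed, and the table is then rebuilt with page compression.

```sql
-- Widen to VARCHAR first so no truncation can occur, then trim the CHAR padding.
ALTER TABLE dbo.Table1 ALTER COLUMN Col1 VARCHAR(400) NOT NULL;
UPDATE dbo.Table1 SET Col1 = RTRIM(Col1);

-- Narrow the column to the size the data actually needs.
ALTER TABLE dbo.Table1 ALTER COLUMN Col1 VARCHAR(200) NOT NULL;

-- Rebuild the table with page compression enabled.
ALTER TABLE dbo.Table1 REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Optional: estimate the savings before committing to the rebuild.
EXEC sp_estimate_data_compression_savings
    @schema_name      = N'dbo',
    @object_name      = N'Table1',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = N'PAGE';
```

Repeating the ALTER COLUMN steps for the remaining columns before the rebuild lets the whole row benefit from compression.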

Option A, define the columns as sparse, is typically used to save space for sparse columns that contain a lot of NULL values. However, sparse columns are not ideal for this scenario where the data size is consistent but smaller than the defined size. Sparse columns are more useful in scenarios where the data is often missing or NULL.

Option B, change the data type to NVARCHAR(MAX), is not ideal in this case because NVARCHAR consumes more space (2 bytes per character) compared to VARCHAR, and you are not dealing with multilingual data that requires Unicode. Using NVARCHAR(MAX) would actually increase the storage usage and would not provide an optimal compression solution.

Option C, change the data type to VARCHAR(MAX), is variable-length like VARCHAR(200), so it would eliminate the padding caused by CHAR(400). However, MAX columns allow values up to 2 GB and can be stored off-row as large-object (LOB) data, and row and page compression apply only to in-row data. Since the data never exceeds 150 characters, a bounded VARCHAR(200) keeps every value in-row and compresses more effectively, making it the better choice.

Thus, the most effective change to improve storage efficiency with page-level compression is D, change the data type to VARCHAR(200).

Question 5

You manage an on-premises SQL Server instance named SQL1, hosting five critical databases. Your organization plans to migrate them to Azure SQL Managed Instance with the following goals: minimal downtime, zero data loss, and a smooth transition. 

Which approach should you use to meet these goals?

A. Configure Always On availability groups
B. Use backup and restore
C. Set up log shipping
D. Use the Database Migration Assistant (DMA)

Correct Answer: A

Explanation:
When migrating critical databases to Azure SQL Managed Instance, the key objectives are minimal downtime, zero data loss, and a smooth transition. The most effective solution to meet these goals is to configure Always On availability groups. Always On availability groups provide high availability and disaster recovery for SQL Server instances by replicating databases across multiple servers or regions. This allows you to set up an availability group between the on-premises SQL Server (SQL1) and the Azure SQL Managed Instance, providing near-zero downtime during the migration process.

A. Configure Always On availability groups - Availability group technology enables continuous synchronization of databases between the on-premises SQL Server instance and Azure SQL Managed Instance. In practice this is delivered through a distributed availability group (the mechanism behind the Managed Instance link feature), which keeps the databases synchronized while transactions continue on the source, with the final cutover to the managed instance happening once synchronization is complete. This minimizes downtime and avoids data loss.

B. Use backup and restore - While the backup and restore method is commonly used for database migration, it may lead to some downtime and could involve a risk of data loss, especially in environments with high transaction volumes. Restoring a backup to Azure SQL Managed Instance might also require additional steps like setting up replication to ensure zero data loss, making it less ideal for meeting all of the specified migration goals.

C. Set up log shipping - Log shipping is a good strategy for disaster recovery, as it involves taking transaction log backups on the primary database and shipping them to a secondary server. While this could help minimize downtime during migration, it typically requires more manual effort to configure and may introduce more complexity compared to Always On availability groups, which offer a more seamless and high-availability-based migration process.

D. Use the Database Migration Assistant (DMA) - The Database Migration Assistant (DMA) is a tool for assessing compatibility and migrating schema and data to Azure. It copies the data in an offline fashion, so any changes made on the source while the transfer runs are not captured and the application must be kept idle during the migration. It therefore does not meet the requirements of minimal downtime and zero data loss.

Question 6

You manage 10 Windows Server 2019 virtual machines in Azure, each running SQL Server 2019. You want to centrally manage all instances using a single user account, following Azure security best practices. 

What is the best first step to enable centralized SQL Server authentication?

A. Assign a user-assigned managed identity to each VM
B. Deploy Azure Active Directory Domain Services (Azure AD DS) and join the VMs to the domain
C. Assign a system-assigned managed identity to each VM
D. Join each VM directly to the Azure AD tenant

Correct Answer: B

Explanation:
To manage all your Windows Server 2019 virtual machines running SQL Server 2019 in Azure centrally, following Azure security best practices, the most effective approach is to deploy Azure Active Directory Domain Services (Azure AD DS) and join the virtual machines to the domain. By doing this, you enable centralized authentication, which simplifies the management of user accounts, roles, and access control, all while adhering to Azure security standards.

A. Assign a user-assigned managed identity to each VM - User-assigned managed identities are typically used to authenticate Azure resources to other Azure services. While they are useful for managing Azure resource access, they are not designed for centralized management of SQL Server instances or for handling user authentication across multiple virtual machines. This would not address the requirement for centralized SQL Server authentication.

B. Deploy Azure Active Directory Domain Services (Azure AD DS) and join the VMs to the domain - This is the correct approach because Azure Active Directory Domain Services (Azure AD DS) provides domain join capabilities, group policy, and Kerberos/NTLM authentication, which are essential for centralized management of SQL Server authentication. Once the VMs are joined to the domain, you can use centralized authentication to manage all your SQL Server instances via a single user account and align with best practices for Azure security.

C. Assign a system-assigned managed identity to each VM - System-assigned managed identities are similar to user-assigned identities but are automatically tied to the lifecycle of the VM. While they help manage Azure service access, they are not directly related to the management of user authentication for SQL Server instances, making this an inadequate solution for the given scenario.

D. Join each VM directly to the Azure AD tenant - Joining VMs to the Azure AD tenant is more relevant for scenarios where you are managing resources with Azure AD directly, but it doesn't provide full domain join functionality. Azure AD Domain Services (Azure AD DS) is the recommended solution for traditional domain services like centralized authentication for applications (like SQL Server), so this step alone will not be sufficient for your goal.

Question 7

You receive an error message indicating that your Azure SQL Elastic Pool has hit its maximum storage capacity of 76,800 MB. You need to resolve the issue efficiently while minimizing manual work. Which three actions can help alleviate the storage issue?

A. Increase the pool’s maximum storage capacity
B. Remove unnecessary data from a database
C. Move a database out of the elastic pool
D. Enable manual data compression
E. Shrink the size of individual databases

Answer: A, B, C

Explanation:

When an Azure SQL Elastic Pool reaches its maximum storage capacity, it's important to address the issue efficiently while minimizing manual intervention.

Option A, increase the pool’s maximum storage capacity, is a straightforward approach to resolve the issue. Increasing the pool's maximum storage capacity allows more data to be stored without hitting the limit. However, this could incur additional costs, depending on the new capacity.

Option B, remove unnecessary data from a database, is another effective way to alleviate the storage issue. By deleting or archiving unnecessary data, the overall storage usage in the elastic pool decreases, freeing up space for other databases.

Option C, move a database out of the elastic pool, is a valid action to take when storage capacity is limited. By moving one or more databases to another elastic pool or a dedicated SQL database, the storage usage in the current pool is reduced. This allows you to manage storage more effectively and potentially distribute the load.

Option D, enable manual data compression, can reduce the space used inside individual databases, but it must be applied table by table and index by index, and the savings appear only after the rebuilds complete. Because the goal is to resolve the capacity error quickly while minimizing manual work, compression is not one of the best answers here.

Option E, shrink the size of individual databases, could be a potential solution, but it is generally not recommended for production environments. Shrinking databases can cause fragmentation and performance issues, and it might not provide a significant reduction in space if the data hasn't been deleted or archived. Additionally, the shrink operation is often a manual process and may not be an efficient long-term solution.

Therefore, the best actions to resolve the storage issue are A, B, and C—increasing the pool’s storage capacity, removing unnecessary data, and moving databases to another pool or instance.
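
Before deciding which data to remove or which database to move, it helps to see how much space each database actually consumes. The following hedged T-SQL sketch uses standard catalog views and DMVs; the pool name is illustrative, and the figures may differ slightly from the allocated-space numbers shown in the Azure portal.

```sql
-- Run inside each pooled database: allocated vs. used data-file space, in MB.
SELECT
    SUM(size) * 8.0 / 1024 AS allocated_space_mb,
    SUM(CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint)) * 8.0 / 1024 AS used_space_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';

-- Run in the master database of the logical server: recent storage usage for the pool.
SELECT TOP (10)
    start_time,
    elastic_pool_name,
    elastic_pool_storage_limit_mb,
    avg_storage_percent
FROM sys.elastic_pool_resource_stats
WHERE elastic_pool_name = N'Pool1'   -- illustrative pool name
ORDER BY start_time DESC;
```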

Question 8

You are migrating a large SQL Server database to Azure SQL Managed Instance. Due to business requirements, you must ensure that SQL Agent jobs and cross-database queries continue to work after the migration. Which Azure SQL deployment option should you choose?

A. Azure SQL Database - Hyperscale
B. Azure SQL Database - Serverless
C. Azure SQL Managed Instance
D. SQL Server on Azure Virtual Machine

Answer: C

Explanation:

When migrating a SQL Server database to Azure, it's important to choose the correct deployment option to ensure that specific SQL Server features, such as SQL Agent jobs and cross-database queries, continue to function seamlessly after the migration.

Option C, Azure SQL Managed Instance, is the best choice for this scenario. Azure SQL Managed Instance is designed to provide compatibility with SQL Server, supporting features like SQL Agent jobs, cross-database queries, and other SQL Server-specific functionalities. Managed Instance is ideal for migrating SQL Server workloads with minimal changes to the existing applications, making it the most appropriate choice when these features are critical.

Option A, Azure SQL Database - Hyperscale, is a scalable option for Azure SQL Database, but it does not support SQL Agent jobs or cross-database queries. Although Hyperscale offers high scalability and storage flexibility, it is better suited for scenarios where these features are not required.

Option B, Azure SQL Database - Serverless, is a cost-effective option for intermittent or burst workloads, but it does not support SQL Agent jobs or cross-database queries. It is designed for scenarios where database usage patterns are unpredictable and where scaling needs to occur dynamically.

Option D, SQL Server on Azure Virtual Machine, runs a full SQL Server instance on infrastructure that you manage yourself (IaaS). It does support SQL Agent jobs and cross-database queries, but you remain responsible for the operating system, patching, backups, and high availability, so it involves considerably more manual management than Azure SQL Managed Instance, which provides the same feature compatibility as a managed platform service.

Thus, the correct choice is C, Azure SQL Managed Instance, as it offers the necessary features to meet the business requirements, including SQL Agent jobs and cross-database queries.

Question 9

Your company has multiple SQL Server databases hosted in Azure SQL Database. A recent audit requires you to implement transparent data encryption (TDE) with customer-managed keys instead of Microsoft-managed keys. 

What is the first step you must perform?

A. Enable Always Encrypted with secure enclaves
B. Import your key into Azure Key Vault
C. Configure TDE using SQL Server Configuration Manager
D. Create a database master key in the SQL database

Correct Answer: B

Explanation:
To implement Transparent Data Encryption (TDE) with customer-managed keys in Azure SQL Database, the first step is to import your key into Azure Key Vault. This is because Azure Key Vault is where you store and manage encryption keys, and you need a customer-managed key (CMK) to control the encryption of the database. After the key is imported into Key Vault, you can then link it to the Azure SQL Database, allowing you to use your own key rather than the default Microsoft-managed key for TDE.

A. Enable Always Encrypted with secure enclaves - Always Encrypted with secure enclaves is a feature used to protect sensitive data within SQL Server by allowing encryption and decryption to occur on the client side, outside of the database engine. While it offers strong data protection, it is not directly related to TDE or customer-managed keys for TDE encryption.

B. Import your key into Azure Key Vault - This is the correct first step. You must import the encryption key into Azure Key Vault, which is where the customer-managed key will reside. After this, you can configure TDE on the Azure SQL Database to use the key stored in Key Vault.

C. Configure TDE using SQL Server Configuration Manager - SQL Server Configuration Manager is a tool used to configure SQL Server settings on on-premises SQL instances. It is not applicable for Azure SQL Database, as the configuration for TDE and key management in Azure is done through the Azure portal and Azure Key Vault.

D. Create a database master key in the SQL database - While creating a database master key in SQL Server is important for certain encryption operations (such as using Always Encrypted or Transparent Data Encryption in on-premises SQL Server), this is not necessary for Azure SQL Database when using TDE with customer-managed keys. The Azure Key Vault handles the key management in this scenario.
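
Once the key is in Key Vault and has been set as the server's TDE protector (a step performed through the Azure portal, PowerShell, or the Azure CLI rather than T-SQL), you can verify from inside the database that encryption is active and that a customer-managed asymmetric key is the encryptor. This is a hedged sketch; column availability can vary by service version.

```sql
-- encryption_state 3 = encrypted. With a customer-managed key from Azure Key Vault,
-- encryptor_type reports ASYMMETRIC KEY instead of the service-managed certificate.
SELECT
    DB_NAME(database_id) AS database_name,
    encryption_state,
    encryptor_type,
    key_algorithm,
    key_length
FROM sys.dm_database_encryption_keys;
```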

Question 10

You are preparing to migrate several on-premises databases to Azure using Azure Database Migration Service (DMS). To minimize downtime, you choose the online migration option. Which of the following must be true for this migration to succeed?

A. The target database must be empty before starting the migration
B. The source database must support Change Data Capture (CDC)
C. The source and target databases must have identical schema
D. The source SQL Server must allow outbound internet access

Correct Answer: D

Explanation:
When performing an online migration using Azure Database Migration Service (DMS), the migration is designed to reduce downtime during the transfer of data from the source database to the target database. The key requirement for this migration type is that the source SQL Server must allow outbound internet access. This is because DMS requires internet access to perform real-time replication and synchronization of changes that occur in the source database after the initial data load.

A. The target database must be empty before starting the migration - Starting with an empty target is the recommended practice, but it is not the condition that determines whether an online migration can succeed. DMS performs the initial data load and then continuously synchronizes changes from the source, so preparing the target is part of the normal migration workflow rather than the key prerequisite this question is asking about.

B. The source database must support Change Data Capture (CDC) - Change Data Capture (CDC) is useful for tracking changes to data, but it is not a strict requirement for the online migration. Azure DMS uses its own method to track and migrate data changes in real-time. CDC can be helpful in some cases, but it is not essential for the migration process when using DMS online migration.

C. The source and target databases must have identical schema - While it's good practice to ensure that the schema of the source and target databases are compatible, they do not have to be identical for the migration to proceed. DMS can handle some level of schema differences, and in some cases, schema mapping can be applied to accommodate small differences between the source and target databases.

D. The source SQL Server must allow outbound internet access - This is the correct answer because DMS requires internet access to communicate with Azure services for the online migration process. The source SQL Server must have outbound internet access to allow DMS to facilitate the continuous replication of changes and ensure minimal downtime. Without internet access, the migration cannot be performed online, and real-time data synchronization will not occur.