Microsoft AZ-500 Exam Dumps & Practice Test Questions
Question No 1:
You work as a cloud security administrator managing identity and access through Azure Active Directory (Azure AD). You've detected suspicious behavior where certain users appear to be repeatedly attempting to access resources they’re not authorized to use. To investigate, you want to write a Kusto Query Language (KQL) query using Azure Log Analytics that:
Retrieves recent sign-in attempts
Filters for failed login events
Counts failed sign-ins per user
Returns only users with more than five failed attempts
Which elements should your query include to accurately identify those users?
A. EventID and CountIf()
B. ActivityID and CountIf()
C. EventID and Count()
D. ActivityID and Count()
Correct Answer: A
Explanation:
To effectively detect repeated failed sign-in attempts using Azure Log Analytics and Kusto Query Language (KQL), it's crucial to use the appropriate parameters and aggregation functions. Here's why EventID and CountIf() are the most appropriate choices:
The EventID parameter allows you to isolate specific types of Azure AD sign-in events, including failures. While Azure sign-in logs contain detailed metadata, using EventID ensures you're pinpointing entries related specifically to sign-in actions.
The CountIf() function is key here because it allows conditional counting. In this context, you want to count only those sign-ins that failed. A typical query might look like this:
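(Illustrative sketch only; it assumes Azure AD sign-in data lands in the SigninLogs table, where a ResultType of "0" indicates success. Table and column names may differ in your workspace.)

SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType != "0"  // filter out successful sign-ins
| summarize FailedAttempts = countif(ResultType != "0") by UserPrincipalName  // conditional count of failures per user
| where FailedAttempts > 5  // keep only users with more than five failed attempts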
This query first filters out successful sign-ins (where ResultType equals 0), then counts only failed sign-ins per user, and finally returns only users with more than five failed attempts.
By contrast, Count() would total all events regardless of condition, making it less suitable. ActivityID is primarily used to trace activities across services and is not optimal for identifying failed sign-ins in isolation.
Thus, combining EventID to filter for relevant events and CountIf() to conditionally count failures provides the most efficient and accurate way to achieve your goal, making A the correct answer.
Question No 2:
Your software development team is using Azure DevOps to manage the end-to-end development lifecycle. To maintain consistency and code integrity, you’ve set up branch policies across your repositories. These policies are vital for ensuring structured collaboration and high-quality code submissions in modern DevOps environments.
Which of the following statements correctly represent what branch policies in Azure DevOps are designed to do?
(Select all that apply.)
A. They enforce how and when changes are introduced to the codebase, supporting your team's change control standards.
B. They control which users can access or make updates to specific branches.
C. They ensure that code meets defined quality criteria before being merged.
D. They prevent any modifications to a branch by locking it completely.
Correct Answers: A and C
Explanation:
Branch policies in Azure DevOps serve as a powerful mechanism to maintain discipline and quality within the software development process. Their core purpose is to establish rules that must be followed before code changes are accepted into specific branches—particularly the main or develop branches.
One key function of branch policies is enforcing change management (A). For example, a policy might require all changes to go through a pull request (PR) process, with mandatory code reviews and associated work items. This introduces accountability and provides a clear audit trail for every change merged into critical branches.
Another essential feature is the emphasis on code quality (C). Branch policies can be configured to require successful build validations, passing test results, or minimum code coverage thresholds. This helps prevent broken code or regressions from being introduced into shared branches.
However, there are misconceptions about what branch policies do not control. Access control (B)—such as who can read or write to a repository or branch—is handled through Azure DevOps security settings, not via branch policies. Similarly, branch policies do not lock branches (D) or make them read-only. They only enforce conditions under which updates are allowed. If a developer meets the required conditions, their changes can be merged.
In short, Azure DevOps branch policies support governance by automating code review, quality checks, and process compliance. They don’t restrict access or freeze development but instead ensure that only thoroughly vetted code is integrated into shared branches—making A and C the accurate answers.
Question No 3:
You’ve recently been assigned the task of implementing advanced threat detection and setting up custom alerting within Azure Security Center for a new Azure subscription. As part of the setup, an Azure Storage account has already been provisioned.
To successfully configure custom alert rules that monitor specific security events across your Azure environment, what additional step must you take?
A. Remove Azure Active Directory (Azure AD) Identity Protection from the environment
B. Create a Data Loss Prevention (DLP) policy
C. Create an Azure Log Analytics workspace
D. Upgrade Security Center to the appropriate pricing tier
Correct Answer: C
Explanation:
To create and manage custom alert rules in Azure Security Center, the essential requirement is the configuration of an Azure Log Analytics workspace. This workspace serves as the data collection and analysis engine behind Azure's monitoring and alerting functionalities. It is the backbone that allows telemetry data from various Azure services—including virtual machines, storage accounts, and network resources—to be ingested, queried, and acted upon.
By connecting Azure Security Center to a Log Analytics workspace, you enable the environment to collect real-time security data and leverage the Kusto Query Language (KQL) to analyze that data. Custom alert rules are then defined based on KQL queries, which detect specific conditions or anomalies. Once these conditions are met, actions such as sending alerts, triggering automation, or integrating with third-party tools can be initiated.
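As a rough illustration (not a required configuration), a custom alert rule could be built on a KQL query like the one below. It assumes the workspace collects the SecurityEvent table; Windows event ID 4625 records a failed logon.

SecurityEvent
| where EventID == 4625  // failed Windows logon attempts
| summarize FailedLogons = count() by Computer, bin(TimeGenerated, 5m)
| where FailedLogons > 10  // flag a machine seeing a burst of failures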
Without this workspace, Azure Security Center lacks the infrastructure to ingest or query telemetry data, making it impossible to set up custom alerts.
Reviewing the other options:
Option A (removing Azure AD Identity Protection) is unrelated. Identity Protection is a separate security feature and does not influence Security Center's alerting mechanisms.
Option B (creating a DLP policy) pertains to Microsoft Purview or Microsoft 365, and is designed to prevent sensitive data leaks, not to create alerts within Azure Security Center.
Option D (upgrading the pricing tier) is necessary for accessing certain advanced features in Defender for Cloud, but custom alerting itself depends specifically on having a Log Analytics workspace, regardless of tier.
In summary, to meet the goal of implementing custom alerting capabilities, creating and linking a Log Analytics workspace is the critical configuration step. It provides the data collection, query, and alerting foundation that Azure Security Center relies on to effectively monitor and respond to security-related events.
Question No 4:
Your organization has connected 100 Windows servers (running Windows Server 2012 R2 and 2016) to Azure Log Analytics to monitor security-related performance data. You’ve now been tasked with setting up alert rules in Azure Monitor based on this collected data.
These alert rules must meet the following criteria:
Support the use of dimensions for more specific filtering
Trigger alerts with minimal delay after issues are detected
Send one notification when the alert is fired and another when it's resolved
Given these technical requirements, which type of signal should you choose when configuring the alerts?
A. Activity log
B. Application log
C. Metric
D. Audit log
Correct Answer: C
Explanation:
In this scenario, the best option for configuring efficient and responsive alerts in Azure Monitor is to use Metric signals. Metric alerts are designed to work with numerical data collected from various sources, including performance counters, and they provide key benefits that meet all the specified requirements.
Firstly, metric alerts support dimensions, which allow for more granular filtering of data. For example, you can isolate alerts based on server name, counter type, or region, enabling precision alerting tailored to your infrastructure layout.
Secondly, metric-based alerts are near real-time, meaning they can be triggered within minutes of a condition being met. This minimal alert latency is essential for rapidly addressing performance or security issues, especially in large-scale environments.
Third, metric alerts support stateful alerting, allowing the system to send one notification when an alert is triggered and another when the condition returns to normal. This prevents alert fatigue and ensures a clear, concise communication trail regarding system health.
By contrast, Activity log alerts cover only Azure control-plane operations, such as resource creation or deletion, making them unsuitable for performance monitoring. Application logs are intended for app-level diagnostics, and Audit logs relate to identity and access monitoring, primarily within Azure AD.
Given the focus on security-related performance counters and the need for low-latency, dimension-capable, lifecycle-aware alerts, Metric is clearly the most appropriate signal type in this context.
Question No 5:
Your company manages an Azure subscription that includes nearly 100 virtual machines, all with Azure Diagnostics turned on. Recently, one of the VMs was unexpectedly deleted around 15 days ago, and you're now tasked with finding out which user performed the deletion. You have access to Azure Monitor to assist with this investigation.
To determine the identity of the user who deleted the virtual machine, which Azure Monitor feature should you use?
A. Application Log
B. Metrics
C. Activity Log
D. Logs (Log Analytics)
Correct Answer: C
Explanation:
When it comes to identifying who made administrative changes—such as the deletion of a virtual machine—Azure Activity Log is the go-to tool. This log is part of Azure Monitor and is specifically designed to track control-plane operations, meaning any actions taken to manage or configure Azure resources at the subscription level.
The Activity Log provides a full record of resource-level events like VM deletions, resource group changes, or policy modifications. Each log entry includes critical details such as the timestamp, the resource impacted, the operation performed, and most importantly, the identity (user, service principal, or application) that initiated the action. Since the deletion took place 15 days ago, and Activity Log data is retained for 90 days by default, it’s still accessible and can be queried directly within Azure Monitor or exported to other storage if longer retention is configured.
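If the Activity Log has also been routed to a Log Analytics workspace, a query along these lines can surface the deletion and the responsible identity (illustrative; operation name strings may vary slightly by API version):

AzureActivity
| where OperationNameValue =~ "Microsoft.Compute/virtualMachines/delete"
| project TimeGenerated, Caller, ResourceGroup, ActivityStatusValue  // Caller identifies who deleted the VM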
Now, let’s examine the incorrect options:
Application Log (A): These are used to track events from applications running on the VM, not changes to the VM itself.
Metrics (B): These provide numerical data like CPU utilization or memory usage but don’t include user activity or management operations.
Logs (D): While Log Analytics (part of Azure Monitor Logs) is excellent for deep telemetry and event analysis, it does not automatically contain control-plane events unless Activity Logs are explicitly forwarded to it.
In summary, for tracing who deleted an Azure resource like a virtual machine, Activity Log is the most accurate and efficient tool, as it directly logs such operations along with the responsible identity.
Question No 6:
Your company operates a broad Azure environment with over 100 virtual machines, all configured with Azure Diagnostics. You've been asked to investigate potential security incidents—such as login failures, account lockouts, or unauthorized access—on a Windows Server 2016 VM.
With access to Azure Monitor, which feature should you use to query detailed security-related logs from the virtual machine?
A. Application Log
B. Metrics
C. Activity Log
D. Logs (Log Analytics)
Correct Answer: D
Explanation:
To examine security events on an Azure virtual machine, especially for things like failed logins, access violations, or account lockouts, the most appropriate tool within Azure Monitor is Logs, also known as Log Analytics.
Log Analytics allows you to run powerful, detailed queries using Kusto Query Language (KQL) against data collected from virtual machines and other Azure resources. When Azure Diagnostics is enabled on a VM, it can forward Windows Event Logs, including the Security, System, and Application logs, into a Log Analytics workspace. These logs can then be queried and filtered to reveal specific incidents.
For example, a basic query to view security events might look like:
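(Illustrative; depending on how the events are collected, Windows Security log entries may land in the SecurityEvent table or the generic Event table.)

SecurityEvent
| where TimeGenerated > ago(1d)  // security events from the last day
| sort by TimeGenerated desc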
This query returns all security-related entries, which can include successful and failed login attempts, privilege escalations, or policy violations—providing a comprehensive view of what happened on the VM.
Let’s contrast this with the other options:
Application Log (A): These logs are focused on application-level issues, not system-level security events.
Metrics (B): Metrics track performance (CPU, disk I/O, memory, etc.) and don’t include security or event log data.
Activity Log (C): This shows management activities for Azure resources (like VM creation or deletion), but does not capture events inside the operating system of a VM.
In conclusion, Log Analytics is the ideal solution for inspecting and analyzing detailed security events within an Azure VM. It allows security teams to trace anomalies, investigate breaches, and ensure compliance—all through customizable and powerful queries.
Question No 7:
You’re a cloud administrator managing Azure resources for an enterprise. As part of enhancing your organization's security posture, your team is using Microsoft Defender for Cloud (formerly Azure Security Center). One of your responsibilities is to apply and manage operating system-level security configurations across virtual machines using this tool.
However, during setup, you notice that some features, including the ability to modify OS-level security settings, are not available by default. You realize that enabling these capabilities requires switching to a different pricing tier.
Which pricing tier should you activate to manage and apply OS-level security configurations through Microsoft Defender for Cloud?
A. Advanced
B. Premium
C. Standard
D. Free
Correct Answer: C
Explanation:
Microsoft Defender for Cloud (formerly Azure Security Center) offers multiple service tiers that provide varying levels of security management and threat protection. The two core pricing options are the Free tier and the Standard tier.
The Free tier includes basic capabilities such as security policy enforcement and ongoing assessments of your environment’s security status. However, it lacks many of the more powerful features that are critical for managing operating system configurations or enforcing real-time security controls.
To access advanced features such as Just-in-Time VM access, adaptive application controls, network security hardening, and most importantly, the ability to change OS-level security configurations directly from the portal, you must upgrade to the Standard tier.
The Standard tier is the paid version that integrates seamlessly with Microsoft Defender services. It allows for automated application of security recommendations, including changes to system settings that align with compliance standards like CIS benchmarks. It also provides richer threat intelligence, advanced analytics, and integration with services like Microsoft Sentinel and Defender for Endpoint.
Incorrect options:
Advanced and Premium are not recognized pricing tiers in Defender for Cloud. They are distractors.
Free is too limited for your needs—it does not permit direct configuration of OS-level security settings.
In summary, to effectively modify operating system configurations and access comprehensive threat protection capabilities within Azure Security Center, enabling the Standard pricing tier is essential.
Question No 8:
Your company has registered an internal application within Azure Active Directory (Azure AD), and this app must retrieve secrets from Azure Key Vault on behalf of the authenticated users. You’ve been told to configure delegated permissions for the app and have also been asked to ensure that admin consent is granted for those permissions.
Will this setup allow the application to access Key Vault secrets on behalf of the users?
A. Yes
B. No
Correct Answer: A
Explanation:
Azure AD supports two main types of permissions for applications: delegated permissions and application permissions, and choosing the correct type depends on how the application interacts with Azure resources.
In this case, the application is designed to access Azure Key Vault on behalf of users, meaning the app will perform operations under the context of the signed-in user. This is a textbook scenario for delegated permissions, where the app uses the user's identity and has access to resources that the user is authorized to use.
Once delegated permissions are configured, an administrator must grant consent—especially for sensitive or high-privilege operations. Granting admin consent ensures that the application can receive OAuth 2.0 tokens with the necessary scopes, allowing it to access resources like Key Vault using the user's privileges.
However, there's an additional step to ensure access: Key Vault access policies or role-based access control (RBAC) must also be properly configured. The users (or app) must be granted the right permissions within Key Vault itself, such as "get" permissions for secrets. Without these internal permissions, even correctly configured Azure AD permissions won’t be sufficient.
To summarize:
Delegated permissions allow an app to act in the context of the user.
Admin consent enables the permissions to be active across the tenant.
Key Vault access policies or roles must explicitly allow the action.
Therefore, by combining delegated permissions, admin consent, and proper Key Vault access setup, the application will be capable of accessing secrets on behalf of its users—making the answer a clear Yes.
Question No 9:
Your organization wants to secure access to Azure resources by enforcing multi-factor authentication (MFA) when users attempt to sign in from outside the corporate network. The goal is to minimize unnecessary prompts while still protecting against unauthorized access.
You have decided to use Azure AD Conditional Access to meet this requirement.
Which Conditional Access condition should you configure to apply MFA only when users sign in from untrusted locations?
A. Sign-in Risk
B. User Group
C. Named Locations
D. Client Apps
Correct Answer: C
Explanation:
Azure AD Conditional Access allows administrators to enforce policies based on specific conditions to protect user sign-ins and resource access.
To require Multi-Factor Authentication (MFA) only when users log in from untrusted or unknown locations, you need to define “Named Locations” (Option C). This condition allows you to tag trusted IP ranges (like your corporate office) and then build policies that apply only when users connect from outside these trusted ranges.
By contrast:
A. Sign-in Risk targets users based on the risk level detected during authentication, such as unusual sign-in behavior. While useful, this is not tied directly to geographic or IP-based location.
B. User Group targets specific groups for Conditional Access policies but doesn’t factor in where the user is signing in from.
D. Client Apps controls access based on the application type (e.g., browser, mobile app) and is not related to the user's geographic or IP location.
Using Named Locations effectively allows you to implement a policy like:
“Require MFA when users are outside the corporate network.”
This enhances security by prompting MFA only when necessary, reducing user friction while ensuring that sign-ins from potentially risky environments are challenged.
Question No 10:
Your company uses Azure Key Vault to manage secrets, certificates, and encryption keys. As a security administrator, you need to ensure that all key operations (such as encryption, decryption, or key creation) are fully logged and auditable for compliance purposes.
Which Azure feature should you enable to collect a detailed audit trail of all activities performed in Azure Key Vault?
A. Azure Monitor Metrics
B. Azure Activity Log
C. Diagnostic Settings with Log Analytics
D. Azure Security Center
Correct Answer: C
Explanation:
To collect detailed logs of data-plane activities (such as key encryption/decryption, secret retrieval, or key creation) in Azure Key Vault, the correct method is to enable Diagnostic Settings and send the logs to Log Analytics (Option C).
Here’s why this is the right choice:
Azure Key Vault activity is divided into control-plane and data-plane operations. Control-plane operations (e.g., Key Vault creation or policy changes) are logged in Azure Activity Log, but data-plane operations (e.g., getting a secret or using a key) are not.
By enabling Diagnostic Settings, you can route Key Vault logs to Log Analytics, Event Hubs, or Storage Accounts. This ensures visibility into who accessed what, when, and how.
Once the data is in Log Analytics, you can run Kusto Query Language (KQL) queries to audit access and detect suspicious behavior.
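For instance (illustrative; Key Vault diagnostics are commonly routed to the AzureDiagnostics table, and column names can vary with the log schema), a query to see who retrieved secrets might resemble:

AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| where OperationName == "SecretGet"  // secret retrieval operations
| project TimeGenerated, OperationName, CallerIPAddress, ResultSignature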
Let’s examine the incorrect options:
A. Azure Monitor Metrics only provides numerical metrics, such as request rates or latency. It does not give details on who accessed what.
B. Azure Activity Log only covers control-plane actions, such as the creation or deletion of the Key Vault itself—not secret or key access.
D. Azure Security Center offers security recommendations and threat detection, but it does not give the granular logs needed for full audit trails.
In conclusion, to ensure complete visibility into sensitive operations in Azure Key Vault, enabling Diagnostic Settings and sending logs to Log Analytics is the best approach for audit and compliance scenarios.