
Oracle 1z0-084 Exam Dumps & Practice Test Questions

Question 1

In Oracle’s adaptive query execution, which optimizer component decides whether to use a nested loop join or a hash join?

A. Feedback from execution statistics
B. SQL Plan Directives
C. Statistics Collector
D. Automatic Reoptimization
E. Dynamic Statistics

Correct Answer: C

Explanation:
Oracle’s adaptive query execution is a powerful capability introduced in Oracle Database 12c and enhanced in subsequent releases. It allows the database to adjust execution plans on the fly, especially when the optimizer’s initial cardinality estimates prove inaccurate. One of the central challenges in query optimization is choosing the most efficient join method, most notably whether to use a nested loop join or a hash join.

This decision is not always accurate at compile time, especially when data distributions are skewed or when statistics are stale or missing. That’s where adaptive features come into play.

Among the options given, the Statistics Collector (C) is the component responsible for gathering runtime statistics and helping the optimizer adjust the execution plan during execution. This includes dynamically deciding between nested loop joins and hash joins based on the actual row counts and join selectivity observed during execution.

Let’s evaluate the other choices:

  • A. Feedback from execution statistics: This relates to post-execution analysis (used in Automatic Reoptimization) rather than during execution. It’s not the component making real-time decisions.

  • B. SQL Plan Directives: These are used to influence future optimization decisions by flagging queries or subplans where misestimates occurred. They are not involved in live decision-making during query execution.

  • D. Automatic Reoptimization: This uses feedback from prior executions to generate a better plan on subsequent runs. It doesn’t help with making real-time adjustments during the current execution.

  • E. Dynamic Statistics: These are used by the optimizer at parse time to augment or replace missing table and column statistics. While helpful in estimating cardinality, they do not directly control real-time join method selection.

In contrast, the Statistics Collector is embedded in the execution plan itself as a row source that buffers and counts rows at run time. If the observed row count crosses the plan’s inflection point, execution switches between the nested loop and hash join branches of the adaptive plan. This is the hallmark of Oracle’s adaptive join method selection.

Thus, C is the correct answer.
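
As a practical illustration, the following is a minimal sketch (assuming Oracle 12c or later and a cursor still present in the shared pool) of how the adaptive plan, including its STATISTICS COLLECTOR row source and the join branch that was not chosen, can be displayed with DBMS_XPLAN. The &sql_id substitution variable is a placeholder for the statement being investigated.

    -- Sketch: display the full adaptive plan for a cached cursor. The ADAPTIVE
    -- format keyword shows the STATISTICS COLLECTOR row source and marks the
    -- join branch discarded at run time as inactive.
    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
                   sql_id          => '&sql_id',   -- placeholder SQL_ID
                   cursor_child_no => NULL,
                   format          => 'TYPICAL +ADAPTIVE'));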

Question 2

In which phase of the application lifecycle is reactive management most commonly applied?

A. Design and Development
B. System Upgrade or Migration
C. Application Testing
D. Production Environment
E. Initial Deployment

Correct Answer: D

Explanation:
In the application lifecycle, reactive management refers to actions taken in response to issues that are already occurring, such as performance degradation, faults, or user-reported problems. This contrasts with proactive management, where efforts are made to anticipate and prevent issues before they happen.

The phase in which reactive management is most commonly applied is the Production Environment (D). This is because the live application is in active use, and issues must be resolved promptly to avoid service disruptions, maintain user satisfaction, and meet SLAs (Service Level Agreements).

Let’s break down the logic further:

  • During the Design and Development phase (A), most efforts are proactive. Architects and developers focus on building robust systems and implementing performance best practices. There's no operational load here to react to.

  • In the System Upgrade or Migration phase (B), changes are generally planned and controlled. While issues can arise during or after upgrades, it’s not the primary domain for reactive operations — that's part of transition or validation, not core runtime operations.

  • Application Testing (C) is a pre-production phase meant to simulate workloads and catch issues proactively before go-live. Reactive management isn't typical here since it involves controlled scenarios.

  • During Initial Deployment (E), the application is transitioning to production, but again the focus is often on monitoring and validation rather than reacting to long-term operational issues.

In contrast, the Production Environment is where the application is subject to real users, unpredictable load, security threats, and operational anomalies. Here, teams use reactive management tools like incident response systems, monitoring dashboards, logs, and alerting tools to quickly address errors, performance bottlenecks, and outages.

Therefore, reactive management is inherently associated with the ongoing operations and real-time issue resolution that happen in the Production Environment, making D the correct answer.

Question 3

When should you consider Oracle database performance tuning complete?

A. After the tuning budget is fully utilized
B. When the top 10 wait events show no concurrency waits
C. Once buffer and library cache hit ratios exceed 95%
D. When I/O accounts for less than 10% of database time
E. When predefined tuning objectives are achieved

Correct Answer: E

Explanation:
In Oracle database performance tuning, success is not measured by generic performance ratios or arbitrary thresholds, but by meeting specific, predefined tuning objectives. These objectives typically align with business or application requirements, such as reducing query response time, improving throughput, or meeting service-level agreements (SLAs).

Let’s break down why E is the most appropriate choice and analyze the others for clarity.

  • E (When predefined tuning objectives are achieved): This is the correct answer because tuning is an iterative, goal-driven process. Before any tuning begins, DBAs and stakeholders should define clear objectives—for example, “Reduce report query execution time from 12 minutes to under 2 minutes” or “Ensure nightly batch jobs complete before 6 AM.” Once these goals are met, regardless of whether all internal metrics are perfect, the tuning task is considered complete. Tuning beyond this point often yields diminishing returns or even introduces new risks.

Now let’s evaluate the incorrect options:

  • A (After the tuning budget is fully utilized): Relying on budget exhaustion as a stopping criterion is not a technical or performance-driven measure. While cost constraints can limit efforts, they don’t determine whether the database is tuned adequately. Tuning should stop only when objectives are met, not merely when money runs out.

  • B (When the top 10 wait events show no concurrency waits): While reducing concurrency waits (like latches or locks) is a good goal, the absence of concurrency waits doesn’t imply the system is tuned. Other types of waits—like I/O, CPU, or network—might still severely impact performance. Moreover, some concurrency waits may be acceptable under specific loads.

  • C (Once buffer and library cache hit ratios exceed 95%): These metrics, although once popular, are not reliable indicators of tuning success in modern Oracle environments. High cache hit ratios can sometimes mask deeper performance problems, such as inefficient SQL or excessive parsing. Also, workloads vary, and focusing on ratios can lead to counterproductive tuning efforts.

  • D (When I/O accounts for less than 10% of database time): This is another misleading performance metric. Some workloads, especially OLTP systems, may naturally have low I/O, while others (e.g., data warehousing) may inherently involve significant I/O. The percentage of database time spent on I/O is context-dependent, and reducing it blindly might not yield meaningful performance gains.

In summary, tuning should always be driven by predefined, measurable, and business-aligned objectives. Hitting arbitrary internal metrics or waiting for zero wait events is not sufficient. Therefore, the correct answer is E.

Question 4

After enforcing a SQL Plan Baseline and leaving a SQL Profile enabled, a new execution plan is observed. Which two statements describe what’s happening? (Choose two.)

A. Both methods use hints, so they produce identical plans
B. The SQL Profile controls the chosen execution plan
C. The SQL Plan Baseline determines the execution plan
D. The baseline must be accepted to affect the plan
E. Oracle raises an error due to conflicting methods
F. Multiple child cursors are created due to the conflict

Correct Answers: C, F

Explanation:
Oracle's SQL tuning features—SQL Plan Baselines and SQL Profiles—serve different but complementary purposes. A SQL Plan Baseline ensures that only known, verified execution plans are used unless a new one is explicitly accepted. A SQL Profile, on the other hand, adjusts optimizer behavior through internal hints, helping the optimizer choose better plans.

Here’s what happens when both a SQL Plan Baseline is enforced and a SQL Profile remains active:

  • C (The SQL Plan Baseline determines the execution plan): This is correct. If a SQL Plan Baseline is enforced, Oracle will only use plans that are part of the baseline—and marked as “accepted.” Even if a SQL Profile is enabled, Oracle will not use a different plan unless it is part of the baseline or explicitly accepted into it. Thus, the SQL Plan Baseline takes precedence in execution plan selection.

  • F (Multiple child cursors are created due to the conflict): Also correct. When Oracle observes differences in optimizer environments (e.g., SQL Profile active versus inactive, or different hints), it creates multiple child cursors under the same parent SQL statement. This allows the database to manage different plan environments in parallel. So, even though a plan may be restricted by the baseline, the optimizer still tries to reconcile it with hints from the SQL Profile, resulting in cursor divergence.

Let’s explore the incorrect options:

  • A (Both methods use hints, so they produce identical plans): This is incorrect. While SQL Profiles apply internal hints, SQL Plan Baselines do not override optimizer behavior—they restrict the plan to approved ones. The two tools may result in different plans, particularly if the hints suggested by the Profile are not compatible with the baseline.

  • B (The SQL Profile controls the chosen execution plan): This is incorrect because, in the presence of a SQL Plan Baseline, the baseline takes precedence. The optimizer cannot use plans introduced solely by a SQL Profile unless those plans are added and accepted into the baseline.

  • D (The baseline must be accepted to affect the plan): This is partially true, but irrelevant here because the question already states the SQL Plan Baseline is enforced. If it were not accepted, it wouldn’t be enforced. Hence, this is not one of the best two correct answers.

  • E (Oracle raises an error due to conflicting methods): This is incorrect. Oracle does not throw an error when both a SQL Profile and a SQL Plan Baseline are present. It gracefully handles the situation by creating additional child cursors as needed and using the plan consistent with the baseline.

In summary, when both SQL Plan Baselines and SQL Profiles are active, the baseline dictates the usable plan, and Oracle may generate multiple child cursors due to differing optimizer environments. The correct answers are C and F.
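
To verify which mechanism actually drove the plan for a given statement, the cursor cache can be checked directly. The following is a hedged sketch (the &sql_id placeholder is illustrative); V$SQL exposes SQL_PROFILE and SQL_PLAN_BASELINE columns naming the profile and baseline used by each child cursor.

    -- Sketch: list the child cursors for one SQL_ID and show, per child, which
    -- SQL Profile and which SQL Plan Baseline (if any) were used.
    SELECT child_number,
           plan_hash_value,
           sql_profile,         -- name of the SQL Profile applied, if any
           sql_plan_baseline,   -- name of the accepted baseline plan used, if any
           executions
    FROM   v$sql
    WHERE  sql_id = '&sql_id'   -- placeholder SQL_ID
    ORDER  BY child_number;

When the baseline is honored, SQL_PLAN_BASELINE is populated for the executing child even though SQL_PROFILE may also show the profile name, and the presence of several child numbers reflects the differing optimizer environments described above.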

Question 5

You encounter an ORA-04036 error indicating excessive PGA memory usage that occurred over four hours ago. Which two views help identify the responsible process and SQL? (Choose two.)

A. DBA_HIST_ACTIVE_SESS_HISTORY
B. DBA_HIST_SQLSTAT
C. DBA_HIST_SQLTEXT
D. DBA_HIST_PGASTAT
E. DBA_HIST_PROCESS_MEM_SUMMARY

Correct Answers: A, D

Explanation:
The ORA-04036 error typically indicates that there was excessive use of Program Global Area (PGA) memory by a process. To identify the responsible process and SQL, we need to review historical performance data related to both PGA usage and active sessions during the time when the issue occurred. Let’s evaluate the relevant views:

  • A. DBA_HIST_ACTIVE_SESS_HISTORY: This view contains historical session activity data, including session-level statistics like memory usage. It is a key view to analyze which sessions were active during the time the error occurred and how much PGA each session consumed. This view will provide critical information on active sessions and their resource consumption at the time of the error.

  • D. DBA_HIST_PGASTAT: This view contains historical PGA statistics and can be used to track the total PGA usage for different periods. It allows us to identify peaks in PGA memory usage, which can be correlated with the ORA-04036 error. It will help pinpoint when the excessive memory usage occurred.

The other views are less relevant for this specific scenario:

  • B. DBA_HIST_SQLSTAT: This view tracks SQL statistics over time, but it is more focused on execution statistics (like CPU time, elapsed time) for SQL statements rather than memory usage or identifying specific sessions responsible for excessive memory consumption.

  • C. DBA_HIST_SQLTEXT: This view contains the text of SQL queries that were executed. While it’s useful for tracking which queries were executed, it doesn't provide information on memory usage or session activity that could help identify the cause of the ORA-04036 error.

  • E. DBA_HIST_PROCESS_MEM_SUMMARY: This view provides memory consumption statistics for database processes, but it’s typically more geared towards overall process memory usage rather than specifically tracking PGA memory usage at the session level.

Thus, the most relevant views are A and D, as they provide session activity history and PGA statistics, which will help pinpoint the cause of the excessive memory usage that triggered the ORA-04036 error.
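
As a hedged sketch of how these two views are typically combined (the statistic name, the time window, and the column list are illustrative; PGA_ALLOCATED is the per-sample PGA figure recorded in the ASH history):

    -- Sketch 1: find when instance-wide PGA allocation peaked.
    SELECT snap_id, value AS total_pga_bytes
    FROM   dba_hist_pgastat
    WHERE  name = 'total PGA allocated'
    ORDER  BY snap_id;

    -- Sketch 2: within the suspect window, rank the sampled sessions by PGA
    -- usage to find the responsible session and SQL_ID.
    SELECT sample_time, session_id, sql_id, pga_allocated
    FROM   dba_hist_active_sess_history
    WHERE  sample_time BETWEEN TIMESTAMP '2024-05-01 19:00:00'
                           AND TIMESTAMP '2024-05-01 19:30:00'   -- illustrative window
    ORDER  BY pga_allocated DESC;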

Question 6

A performance slowdown occurs nightly between 23:15 and 23:30, but your hourly AWR snapshots and ADDM show no issues. Which tool should you use for deeper analysis?

A. SQL Performance Analyzer
B. AWR Period Comparison Report
C. SQL Tuning Advisor
D. Active Session History (ASH) Report

Correct Answer: D

Explanation:
When a performance issue occurs at a specific time of day (like the nightly slowdown between 23:15 and 23:30), and standard AWR snapshots and ADDM don’t show any significant problems, it suggests that the performance degradation may be transient or short-lived, which means it might be missed in the hourly snapshots.

The best tool for deeper analysis in this scenario is the Active Session History (ASH) Report (D). ASH samples every active session once per second, recording the SQL being executed, wait events, and resource consumption, and an ASH report can be generated for any arbitrary time window. This makes it ideal for troubleshooting short-lived slowdowns that are averaged away in hourly snapshots. Because ASH provides granular, session-level data over short periods (such as a 15-minute window), it is well suited to pinpointing issues during a specific time frame like the one in this scenario.

Let’s review the other options:

  • A. SQL Performance Analyzer: This tool is used for evaluating the performance impact of SQL changes before deploying them in production. It’s not suitable for real-time or historical performance diagnostics during a specific time frame.

  • B. AWR Period Comparison Report: While an AWR report provides detailed historical performance data, it’s based on hourly snapshots, and in this case, it has already been reviewed with no issues detected. The Period Comparison Report might miss transient issues that happen in a smaller time window (like 15 minutes).

  • C. SQL Tuning Advisor: The SQL Tuning Advisor is used to optimize specific SQL queries based on execution statistics, but it doesn’t focus on the broader system-level performance analysis or help track periodic performance slowdowns that could be due to resource contention or other system-wide factors.

Thus, D. ASH Report is the most suitable tool for investigating performance issues that are intermittent and short-lived, especially when they occur over a specific time window, as in this scenario.
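
As a hedged sketch, an ASH report covering exactly the 23:15 to 23:30 window can be produced with the ashrpt.sql script shipped with the database, or through the DBMS_WORKLOAD_REPOSITORY API; the date literals and the &dbid placeholder below are illustrative.

    -- Option 1: interactive script that prompts for the report type and window:
    --   @?/rdbms/admin/ashrpt.sql
    --
    -- Option 2: call the API directly for the 15-minute window.
    SELECT output
    FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.ASH_REPORT_TEXT(
                   l_dbid     => &dbid,   -- placeholder: DBID from V$DATABASE
                   l_inst_num => 1,
                   l_btime    => TO_DATE('2024-05-01 23:15', 'YYYY-MM-DD HH24:MI'),
                   l_etime    => TO_DATE('2024-05-01 23:30', 'YYYY-MM-DD HH24:MI')));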

Question 7

Which Oracle feature enables the database to adaptively switch between different join methods during query execution based on real-time statistics?

A. Dynamic Statistics
B. Statistics Feedback
C. SQL Plan Baseline
D. Automatic Reoptimization
E. Statistics Collector

Correct Answer: D

Explanation:
Oracle’s Automatic Reoptimization is the feature that allows the optimizer to adapt execution plans at runtime based on real-time feedback from initial executions. It is particularly powerful in cases where the optimizer’s original estimates are significantly incorrect, such as with complex joins or data skew, which can lead to suboptimal plans.

Here’s how it works:
When a SQL statement is executed and the actual statistics (such as row counts) collected during that execution differ significantly from the optimizer’s estimates, Oracle marks the cursor as reoptimizable. In subsequent executions, the optimizer uses feedback from the earlier execution to create a new, more accurate plan.

There are two modes:

  • Adaptive plans: The optimizer switches join methods during the execution of a query (for example, from a nested loop to a hash join) based on real-time cardinality feedback. This happens within the same query execution.

  • Reoptimization: Happens on subsequent runs based on the previous execution’s feedback.

Let’s evaluate the other options:

  • A (Dynamic Statistics): This helps the optimizer during parsing when existing statistics are inadequate or missing, especially for complex predicates or joins. However, it does not adjust execution plans during execution—only before the execution begins.

  • B (Statistics Feedback): This was an earlier version of reoptimization introduced in Oracle 11g as Cardinality Feedback. It evolved into Statistics Feedback, which is now a component of Automatic Reoptimization. Alone, it doesn’t dynamically switch join methods during execution—it helps future executions.

  • C (SQL Plan Baseline): This is used to stabilize execution plans and avoid performance regressions by ensuring that only verified plans are used. It does not dynamically adjust plans during execution.

  • E (Statistics Collector): Within an adaptive plan, the statistics collector is a row source that buffers and counts rows at run time so that the better join branch can be chosen. It is an internal mechanism used by adaptive plans rather than the named feature that governs runtime plan adaptation as a whole.

In conclusion, the only feature that actively and dynamically adjusts join methods during execution based on real-time statistics is Automatic Reoptimization, particularly when adaptive execution plans are involved. Therefore, the correct answer is D.
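
A hedged sketch of how this surfaces in the cursor cache (the &sql_id placeholder is illustrative): after the first execution, V$SQL flags the cursor if statistics feedback will trigger a reoptimization and reports whether an adaptive plan has been resolved.

    -- Sketch: check reoptimization and adaptive-plan status for a statement.
    SELECT child_number,
           is_reoptimizable,           -- 'Y' = feedback will produce a new plan on the next run
           is_resolved_adaptive_plan   -- 'Y' = the adaptive plan's final join method is decided
    FROM   v$sql
    WHERE  sql_id = '&sql_id';         -- placeholder SQL_ID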

Question 8

In which phase of the application lifecycle is reactive management most commonly applied?

A. Design and Development
B. Testing
C. Production
D. Deployment
E. Upgrade or Migration

Correct Answer: C

Explanation:
Reactive management refers to addressing issues after they have occurred, rather than proactively preventing them. In the context of Oracle database systems and application lifecycles, reactive management is most commonly associated with the Production phase.

Here’s why:
During the Production phase, the application is live, and real users are interacting with it. Despite the best efforts during design, development, and testing, unexpected issues—such as performance bottlenecks, SQL plan regressions, data anomalies, or hardware failures—may arise. These are often only observable under real-world loads or due to data growth, which can differ significantly from test environments.

In response to such issues, database administrators and performance engineers engage in:

  • Query tuning

  • Plan management

  • Resource reallocation

  • Patching

  • Runtime diagnostics using AWR, ASH, and SQL Monitor

These actions are reactive because they are driven by monitoring alerts, user complaints, or observed system degradation.
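
As one hedged example of the runtime diagnostics listed above (it assumes the Tuning Pack is licensed and uses a placeholder SQL_ID), pulling a SQL Monitor report is a common first reactive step when users report a slow statement in production:

    -- Sketch: generate a text SQL Monitor report for the statement in question.
    SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
             sql_id       => '&sql_id',   -- placeholder SQL_ID
             type         => 'TEXT',
             report_level => 'ALL') AS report
    FROM   dual;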

Now, let's analyze why the other options are less appropriate for reactive management:

  • A (Design and Development): This phase is proactive by nature, focused on building and optimizing code and architecture to prevent problems down the line. It involves modeling, indexing strategies, and proper schema design—not reactive fixes.

  • B (Testing): Testing is about detecting issues before production, ideally catching problems early. Though some reactive fixes may occur in response to test failures, the bulk of reactive management (especially performance- or load-related) still belongs to Production.

  • D (Deployment): During deployment, the focus is on environment setup and validation. If problems arise, they are often resolved quickly before users are impacted. Deployment can have some reactive elements, but it's not the primary reactive management phase.

  • E (Upgrade or Migration): While upgrades often require reactive patches or adjustments post-migration, the routine nature of reactive management (monitoring, patching, fixing live issues) is most prevalent during ongoing production operations.

In summary, Production is the phase where reactive management is most heavily applied, as this is where real-time issues occur that must be resolved quickly to maintain system health and user satisfaction. The correct answer is C.

Question 9

When is it appropriate to consider Oracle database tuning as complete?

A. When the allocated budget for performance tuning has been exhausted
B. When all concurrency waits are eliminated from the Top 10
C. When the buffer cache and library cache hit ratio exceeds 95%
D. When I/O is less than 10% of the DB time
E. When the tuning goal has been achieved

Correct Answer: E

Explanation:
Oracle database tuning is a structured and objective-driven process that aims to improve performance in a measurable and meaningful way. The correct stopping point for any tuning effort is when the predefined tuning objectives have been achieved, not when specific statistics or resource usage metrics reach arbitrary thresholds.

Let’s explore this in more detail.

The key to effective tuning lies in identifying a goal—this could be improving a specific SQL query’s performance, reducing response time for a business-critical application, or ensuring that a report completes within a defined window. Once that goal is met, further tuning may result in minimal benefit while introducing risk or unnecessary complexity.

Now let’s assess why the other options are not sufficient reasons to declare tuning complete:

  • A (When the allocated budget for performance tuning has been exhausted): While budgetary constraints can limit how much tuning work is feasible, they should not define success. Running out of budget doesn’t guarantee that performance targets have been met, making this an unreliable benchmark.

  • B (When all concurrency waits are eliminated from the Top 10): This metric can be misleading. Some level of concurrency waits is expected and acceptable in high-throughput systems. Eliminating all concurrency waits is neither practical nor necessarily beneficial if your system is already meeting its goals.

  • C (When the buffer cache and library cache hit ratio exceeds 95%): This is a classic tuning myth. High cache hit ratios do not automatically translate into good performance. In fact, some systems may have excellent hit ratios but still suffer from poorly written SQL or inefficient execution plans. Focusing too much on these ratios can divert attention from real performance bottlenecks.

  • D (When I/O is less than 10% of the DB time): This could suggest efficient memory usage, but it's not a reliable indicator of tuning completion. A system might be CPU-bound with very low I/O yet still perform poorly due to inefficient query plans or excessive parsing.

Ultimately, tuning is a goal-driven activity, not a statistics-chasing exercise. Metrics and reports are tools to help meet the defined performance objective, but they are not the final goal themselves. Therefore, tuning should stop when the system behavior aligns with business or user-defined performance goals, which makes E the correct answer.

Question 10

Which two views can be used to identify the session and SQL statement responsible for exceeding the PGA_AGGREGATE_LIMIT? (Choose two.)

A. DBA_HIST_ACTIVE_SESS_HISTORY
B. DBA_HIST_SQLSTAT
C. DBA_HIST_SQLTEXT
D. DBA_HIST_PGASTAT
E. DBA_HIST_PROCESS_MEM_SUMMARY

Correct Answers: A, E

Explanation:
Oracle introduced the PGA_AGGREGATE_LIMIT parameter to place an upper bound on the total amount of Program Global Area (PGA) memory that can be used by all processes in an Oracle instance. If this limit is exceeded, Oracle may terminate sessions consuming excessive memory. To identify which session or SQL is responsible, you need access to diagnostic views that track historical memory usage and session activity.

Let’s explore the two correct answers:

  • A (DBA_HIST_ACTIVE_SESS_HISTORY): This view contains a sampled history of active sessions, including their SQL IDs, session IDs, and various resource usage metrics. It can be queried to identify sessions that were active around the time the PGA_AGGREGATE_LIMIT was exceeded, and it includes wait class, memory usage, and session state, which are crucial in correlating memory usage with SQL execution.

  • E (DBA_HIST_PROCESS_MEM_SUMMARY): This view provides historical memory usage statistics per process, including both PGA and UGA allocations. It’s particularly useful for identifying memory-heavy background or user processes. This is where you’d look to find which process was responsible for consuming excessive PGA memory.

Now, let’s examine the incorrect options:

  • B (DBA_HIST_SQLSTAT): While this view provides historical statistics per SQL statement, it focuses on performance metrics like CPU time, buffer gets, and executions. It does not directly track memory usage, particularly PGA memory.

  • C (DBA_HIST_SQLTEXT): This view simply stores the SQL text for statements referenced in other AWR views. It does not contain any execution or memory-related metrics.

  • D (DBA_HIST_PGASTAT): This view contains aggregate statistics about PGA usage across the entire system. While it shows trends and peak memory usage, it does not break down PGA usage by session, making it inadequate for pinpointing which user or SQL caused the problem.

In summary, when investigating PGA_AGGREGATE_LIMIT violations, you need detailed session-level and process-level memory usage data. Therefore, the most effective and relevant views are DBA_HIST_ACTIVE_SESS_HISTORY and DBA_HIST_PROCESS_MEM_SUMMARY, making A and E the correct answers.
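
A hedged sketch of how the process-level view is typically queried (column names are as documented for the AWR view; the row limit is illustrative), after which the heavy snapshots can be cross-referenced with DBA_HIST_ACTIVE_SESS_HISTORY to recover the offending session and SQL_ID:

    -- Sketch: find the snapshots and memory categories with the largest
    -- per-process PGA allocations.
    SELECT snap_id,
           category,         -- e.g. 'SQL', 'PL/SQL', 'Freeable', 'Other'
           num_processes,
           ROUND(allocated_max / 1024 / 1024) AS max_alloc_mb
    FROM   dba_hist_process_mem_summary
    ORDER  BY allocated_max DESC
    FETCH FIRST 10 ROWS ONLY;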