
Snowflake SnowPro Advanced Architect Exam Dumps & Practice Test Questions

Question 1

In a deployment that spans multiple regions, which two Snowflake components need to be replicated to ensure full failover capability for critical business workloads? (Choose 2.)

A. Warehouses
B. Databases
C. Resource monitors
D. User roles
E. External stages

Correct Answers: B, D

Explanation:
In a multi-region Snowflake deployment, especially for business-critical workloads, achieving full failover readiness requires more than just maintaining data availability. It involves replicating key objects that ensure both data integrity and access continuity across regions. The two most crucial Snowflake objects that must be replicated to meet this objective are databases and user roles.

Let’s examine why these two are essential:

B. Databases are the core units of data in Snowflake. For failover readiness, it's vital that the actual data—including tables, schemas, and the information they contain—be available in the secondary region. Snowflake supports database replication, allowing organizations to create a read-only or failover-ready copy of a database in another region. This ensures that, in the event of a regional outage, the database can be quickly promoted to primary, preserving business continuity.

D. User roles control access to Snowflake objects through role-based access control (RBAC). If user roles are not replicated to the failover region, users may be unable to access resources even if the data is available. Snowflake replicates roles through account object replication (replication and failover groups), ensuring consistent user permissions and policies across regions. This is especially important in large organizations where security compliance and access governance are strictly regulated.
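
For illustration, the following minimal sketch replicates a database together with account roles using a failover group. All object, organization, and account names (sales_db, myorg, prod_account, dr_account) are placeholders, and failover support assumes Business Critical edition or higher:

    -- On the primary (source) account:
    CREATE FAILOVER GROUP sales_fg
      OBJECT_TYPES = DATABASES, ROLES          -- replicate data plus RBAC roles
      ALLOWED_DATABASES = sales_db
      ALLOWED_ACCOUNTS = myorg.dr_account
      REPLICATION_SCHEDULE = '10 MINUTE';

    -- On the secondary (target) account:
    CREATE FAILOVER GROUP sales_fg
      AS REPLICA OF myorg.prod_account.sales_fg;

    -- During a regional outage, promote the secondary to primary:
    ALTER FAILOVER GROUP sales_fg PRIMARY;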

Why the other options are less critical for full failover:

A. Warehouses are compute resources. They are recreated easily in a different region and are not a requirement for data replication or access policies. While important for running queries, they don't need to be replicated in advance.

C. Resource monitors are used to control and monitor credit usage. While useful for governance, they are not essential for failover functionality and can be reconfigured in the secondary region.

E. External stages are references to external data (e.g., S3, Azure Blob). These are generally globally accessible, and do not need to be replicated within Snowflake, as long as the underlying external storage is available.

In summary, for full failover readiness, it's not just about replicating data but also ensuring that access control mechanisms like roles are preserved. This makes databases and user roles the two most essential objects for replication in a multi-region Snowflake deployment.

Question 2

Which two Snowflake features allow architects to reduce data duplication while maintaining consistent data access for multiple departments? (Choose 2.)

A. Secure Data Share
B. Zero-Copy Cloning
C. Snowpipe Auto-Ingest
D. Materialized Views
E. Streams & Tasks

Correct Answers: A, B

Explanation:
Snowflake offers several features that help organizations reduce unnecessary data copies and ensure that multiple business units can access consistent, up-to-date data. The two most effective features for this purpose are Secure Data Share and Zero-Copy Cloning.

A. Secure Data Share enables data to be shared across Snowflake accounts without physically copying the data. Instead of duplicating datasets for each business unit or external partner, Secure Data Sharing allows read-only access to live data in the provider’s account. This eliminates redundancy and ensures all parties access the same source of truth. It's particularly valuable in complex organizations where different departments need access to centralized data while maintaining data governance.
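
As a minimal sketch (database, schema, table, and account identifiers are placeholders, and the exact account identifier format depends on your deployment), a provider publishes live data and a consumer mounts it without any copy:

    -- In the provider account:
    CREATE SHARE finance_share;
    GRANT USAGE ON DATABASE analytics_db TO SHARE finance_share;
    GRANT USAGE ON SCHEMA analytics_db.reporting TO SHARE finance_share;
    GRANT SELECT ON TABLE analytics_db.reporting.revenue TO SHARE finance_share;
    ALTER SHARE finance_share ADD ACCOUNTS = consumer_account;

    -- In the consumer account, the share is mounted as a read-only database:
    CREATE DATABASE finance_shared FROM SHARE provider_account.finance_share;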

B. Zero-Copy Cloning allows users to create clones of databases, schemas, or tables without duplicating the underlying data. Instead, Snowflake uses metadata pointers, enabling business units to have independent, fully-functional copies of datasets instantly. Clones can be modified without affecting the source, and only the micro-partitions that change after cloning consume additional storage, making clones ideal for testing, development, or sandbox environments. This significantly reduces storage costs and prevents the proliferation of inconsistent data versions.
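
A minimal sketch of cloning, assuming illustrative object names; the clone is created from metadata only, and storage grows only as the clone diverges from its source:

    -- Give a department an instant, metadata-only sandbox copy:
    CREATE SCHEMA analytics_db.marketing_sandbox
      CLONE analytics_db.reporting;

    -- The clone can be changed without touching the source; only modified
    -- micro-partitions consume additional storage:
    UPDATE analytics_db.marketing_sandbox.revenue
      SET amount = amount * 1.1
      WHERE region = 'EMEA';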

The following options, while useful, don’t directly address the challenge of avoiding data copies for multi-department access:

C. Snowpipe Auto-Ingest is focused on automating data ingestion into Snowflake. While it helps keep data up-to-date, it doesn’t reduce the need for copies across departments.

D. Materialized Views precompute and store results from queries for performance gains. They do store data separately, so while helpful for performance, they do not eliminate copies—in fact, they can add redundancy.

E. Streams & Tasks enable change data capture (CDC) and orchestration of workflows. While important for building pipelines, they do not inherently reduce data duplication or ensure consistent cross-departmental access.

In conclusion, Secure Data Share and Zero-Copy Cloning provide the architecture needed for efficient, scalable, and consistent access to data without making unnecessary copies. They are integral tools for modern data sharing and workload management within and across organizational units.

Question 3

Which two Snowflake warehouse configuration techniques are recommended to reduce query latency during unexpected or highly variable usage patterns, while still managing cost efficiently? (Choose 2.)

A. Enable multi-cluster AUTO-SUSPEND and AUTO-RESUME
B. Use SNOWPARK-OPTIMIZED warehouses
C. Configure multi-cluster warehouses in AUTO scaling mode
D. Set Max Clusters = 1 and Min Clusters = 1
E. Enable Query Acceleration Service

Correct Answers: A, C

Explanation:
Snowflake provides a variety of warehouse configuration strategies designed to balance performance and cost-efficiency, especially when dealing with unpredictable, spiky workloads. Two key features that directly address query latency and cost management in these scenarios are multi-cluster AUTO scaling and the use of AUTO-SUSPEND/AUTO-RESUME.

A. Enable multi-cluster AUTO-SUSPEND and AUTO-RESUME is a best practice for controlling costs in Snowflake. AUTO-SUSPEND allows warehouses to shut down automatically when they are not in use, and AUTO-RESUME enables them to restart instantly when a new query is submitted. This means compute resources are only consumed when necessary, significantly lowering costs during idle periods. When combined with multi-cluster features, this setup is ideal for workloads that are unpredictable or vary widely in intensity over time.

C. Configure multi-cluster warehouses in AUTO scaling mode is critical for handling concurrent workloads efficiently. With multi-cluster warehouses, Snowflake can automatically start up additional clusters in response to increased demand and scale them down during quiet periods. This helps maintain low query latency, even when multiple users or applications are sending queries simultaneously. AUTO scaling prevents performance bottlenecks by allocating resources dynamically, which is especially important during sudden usage spikes.
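
A minimal sketch combining both settings on one warehouse (the name, size, and limits are illustrative, not prescriptive):

    CREATE WAREHOUSE reporting_wh
      WAREHOUSE_SIZE    = 'MEDIUM'
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 4            -- min < max enables auto-scale mode
      SCALING_POLICY    = 'STANDARD'   -- add clusters as soon as queuing is detected
      AUTO_SUSPEND      = 60           -- suspend after 60 seconds of inactivity
      AUTO_RESUME       = TRUE;        -- resume transparently on the next query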

The other options, while valid in certain contexts, are not the most suitable for the specific goal of minimizing latency under spiky usage while controlling cost:

B. SNOWPARK-OPTIMIZED warehouses are tailored for memory-intensive data science and ML workloads using Snowpark, not for handling query concurrency or latency in general-purpose analytics.

D. Set Max Clusters = 1 and Min Clusters = 1 effectively disables the multi-cluster feature. This static configuration restricts scalability and does not help with spiky or unpredictable workloads, as it only allows one cluster to serve all concurrent queries, potentially increasing latency.

E. Enable Query Acceleration Service is primarily designed for query performance enhancements using serverless compute, but it incurs additional cost and is typically used in specific performance tuning scenarios, not as a primary strategy for managing concurrency or spiky workloads.

In summary, using multi-cluster warehouses in AUTO scaling mode ensures that Snowflake can dynamically adjust compute resources, while AUTO-SUSPEND and AUTO-RESUME features allow you to control costs when those resources are not needed. Together, they provide a balanced approach to optimizing both performance and budget for unpredictable query patterns.

Question 4

Which two Snowflake security features operate after a valid query is submitted by an authenticated user, ensuring the query results are restricted by masking or filtering rules? (Choose 2.)

A. Row Access Policies
B. Network Policies
C. Dynamic Data Masking Policies
D. MFA (Multi-factor Authentication)
E. OAuth integration

Correct Answers: A, C

Explanation:
In Snowflake, data security is enforced at multiple layers, but two features specifically govern what data is visible in the result set after a valid query is run by an authorized user. These are Row Access Policies and Dynamic Data Masking Policies.

A. Row Access Policies are Snowflake’s mechanism for implementing fine-grained access control at the row level. Once a query is authorized and submitted, the row access policy is evaluated against the context of the user (such as their role or session attributes). This determines which rows of the dataset the user is permitted to see. For example, an HR analyst may only be allowed to see employees in their own department. The row access policy ensures that even though the query syntax is valid and the user has general access to the table, the returned data is filtered according to policy conditions.
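
A minimal sketch, assuming an illustrative hr.employees table and a simple role-to-department mapping (real deployments typically join against a mapping table):

    CREATE ROW ACCESS POLICY hr.dept_filter
      AS (department STRING) RETURNS BOOLEAN ->
        CURRENT_ROLE() = 'HR_ADMIN'         -- admins see every row
        OR department = CURRENT_ROLE();     -- placeholder mapping logic

    -- Attach the policy; it is evaluated for every query against the table:
    ALTER TABLE hr.employees
      ADD ROW ACCESS POLICY hr.dept_filter ON (department);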

C. Dynamic Data Masking Policies are used to obscure specific data elements (such as credit card numbers or social security numbers) from unauthorized users at query runtime. These policies are applied to columns and dynamically alter the output based on who is querying the data. For example, a user might see full data if they have the appropriate role, but another user might only see masked values like 'XXXX-XXXX-XXXX-1234'. This type of policy helps enforce data privacy regulations and is executed after query authorization, right before the results are returned.
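
A minimal sketch, assuming an illustrative pii.customers table and a PII_READER role that is allowed to see unmasked values:

    CREATE MASKING POLICY pii.ssn_mask AS (val STRING) RETURNS STRING ->
      CASE
        WHEN CURRENT_ROLE() IN ('PII_READER') THEN val   -- authorized: full value
        ELSE 'XXX-XX-' || RIGHT(val, 4)                  -- everyone else: masked
      END;

    -- Bind the policy to the sensitive column; it is applied at query runtime:
    ALTER TABLE pii.customers
      MODIFY COLUMN ssn SET MASKING POLICY pii.ssn_mask;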

Let’s review why the other choices are incorrect in this context:

B. Network Policies are pre-query controls. They limit access to Snowflake accounts based on IP addresses or networks but do not influence the content of query results.

D. MFA (Multi-factor Authentication) is an authentication-stage control. It ensures that a user is who they claim to be before a session is established, but it has no effect on query result filtering or data masking once the session is active.

E. OAuth integration is a method for authenticating users via third-party identity providers. Like MFA and network policies, it governs access at the authentication stage, not the query result level.

In conclusion, Row Access Policies and Dynamic Data Masking work together to enforce data visibility rules after a user submits a valid query. These features ensure that even authenticated users only receive the data they are authorized to see, providing robust post-authentication data security within Snowflake.

Question 5

Which two Snowflake features help reduce both latency and administrative burden when building a high-throughput, continuous data ingestion pipeline from object storage? (Choose 2.)

A. Snowpipe with auto-ingest event notifications
B. COPY INTO with ON_ERROR = SKIP_FILE
C. File Metadata Validation Service
D. Kafka Connector with Snowpipe Streaming API
E. Internal Named Stage with MANAGED ACCESS

Correct Answers: A, D

Explanation:
When designing high-throughput and continuous data loading workflows from object storage (like Amazon S3 or Azure Blob Storage), it is important to use tools and features that not only enable real-time or near real-time ingestion, but also minimize manual maintenance and reduce operational latency. Snowflake provides two key solutions that align with these requirements: Snowpipe with auto-ingest and the Kafka Connector with Snowpipe Streaming API.

A. Snowpipe with auto-ingest event notifications is one of the most efficient mechanisms for continuous data ingestion from object storage. With auto-ingest enabled, Snowflake listens for file arrival events via cloud-native notification services (such as AWS SNS/SQS or Azure Event Grid). As soon as a new file lands in the stage, Snowpipe automatically triggers ingestion—without requiring manual intervention. This drastically reduces latency, because ingestion happens as soon as data becomes available. Additionally, it removes the need for batch scheduling or polling, which lowers administrative overhead. Snowpipe also manages load history, retry logic, and scaling, making it a powerful tool for automated data pipelines.
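
A minimal sketch of an auto-ingest pipe (stage, table, and file format are placeholders); the cloud-side event notification, such as S3 events to the pipe's SQS queue, is wired up separately in the cloud provider console:

    CREATE PIPE raw.events_pipe
      AUTO_INGEST = TRUE
      AS
      COPY INTO raw.events
      FROM @raw.s3_events_stage
      FILE_FORMAT = (TYPE = 'JSON');

    -- The notification channel (e.g., SQS ARN) to configure on the bucket is
    -- shown in the notification_channel column:
    SHOW PIPES LIKE 'events_pipe' IN SCHEMA raw;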

D. Kafka Connector with Snowpipe Streaming API enables low-latency ingestion of streaming data directly into Snowflake tables. This is especially useful for enterprises collecting data from applications or sensors in real time. The Snowpipe Streaming API allows events from Kafka topics to be written directly into Snowflake, bypassing staging and minimizing delays. This approach is optimized for high-throughput, event-driven pipelines, and allows ingestion to scale with demand. It also supports schema evolution and minimizes management complexity.

Why the other options are less suitable:

B. COPY INTO with ON_ERROR = SKIP_FILE is a manual or batch-driven data loading process. While ON_ERROR = SKIP_FILE provides some error tolerance, this approach is not designed for continuous, low-latency ingestion, and it requires scheduled or manual execution, increasing operational load.

C. File Metadata Validation Service is not a standalone Snowflake feature. While Snowpipe performs some internal file validation (e.g., tracking what has been loaded), there is no formal "File Metadata Validation Service" as a configurable tool. This option is misleading or imprecise.

E. Internal Named Stage with MANAGED ACCESS defines a storage location with controlled access, which is useful for organizing file uploads and managing data security. However, it doesn’t by itself reduce latency or handle ingestion. It complements a data loading strategy but is not an ingestion tool.

In conclusion, for high-throughput and continuous data loading from object storage, Snowpipe with auto-ingest and the Kafka Connector using the Snowpipe Streaming API are the most effective strategies. They automate ingestion, scale well with demand, and reduce both latency and administrative overhead.

Question 6

In a large enterprise with multiple accounts, which two architectural approaches help simplify cross-account data governance while enforcing least-privilege access to sensitive production data? (Choose 2.)

A. Centralized Account-level RBAC hierarchy and inheritance
B. Secure View wrapper objects shared via outbound share
C. Reader Accounts for downstream analytics vendors
D. Custom Snowflake Roles delegated through role hierarchy instead of SYSADMIN
E. Full database replication to satellite accounts for read-only workloads

Correct Answers: B, D

Explanation:
In large organizations, especially those managing multiple Snowflake accounts, data governance becomes a critical concern. The ideal design pattern should balance centralized control with decentralized access, all while maintaining least-privilege principles—granting users the minimum access they need to perform their job. Two approaches that effectively achieve this are secure view sharing via outbound shares and delegating access through custom roles instead of SYSADMIN.

B. Secure View wrapper objects shared via outbound share allow organizations to mask sensitive data and control what is exposed across accounts. Rather than sharing raw tables, a secure view allows data to be filtered, masked, or aggregated based on governance policies. These secure views are then published via Snowflake Secure Data Sharing, which enables real-time, zero-copy access by consumer accounts. This architecture is powerful because it simplifies governance while still enabling data access across business units or partners. All logic is defined and enforced in the provider account, centralizing policy control.
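
A minimal sketch of the wrapper pattern, assuming illustrative object names; the outbound share itself is created and granted to consumer accounts in the same way as a standard Secure Data Share:

    -- Expose only governed columns and rows through a secure view:
    CREATE SECURE VIEW prod_db.shared.customer_summary AS
      SELECT customer_id, region, lifetime_value
      FROM prod_db.core.customers
      WHERE is_test_account = FALSE;

    -- Publish the wrapper (not the raw table) via the outbound share:
    GRANT SELECT ON VIEW prod_db.shared.customer_summary TO SHARE governed_share;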

D. Custom Snowflake Roles delegated through role hierarchy instead of SYSADMIN supports least-privilege access by scoping access based on business function rather than giving broad or admin-level roles to users. Instead of assigning all permissions through the SYSADMIN role, organizations can create bespoke roles (e.g., READ_ONLY_FINANCE or DATA_ENGINEER_MKTG) and control their privilege inheritance through a tightly managed hierarchy. This limits access exposure and makes audit trails more meaningful. It’s a cornerstone of enterprise-scale RBAC.
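
A minimal sketch of a scoped functional role (names are illustrative) that is rolled up to a governance role rather than to SYSADMIN:

    CREATE ROLE read_only_finance;
    GRANT USAGE  ON DATABASE finance_db                TO ROLE read_only_finance;
    GRANT USAGE  ON ALL SCHEMAS IN DATABASE finance_db TO ROLE read_only_finance;
    GRANT SELECT ON ALL TABLES  IN DATABASE finance_db TO ROLE read_only_finance;

    -- Delegate through the hierarchy instead of granting to SYSADMIN:
    GRANT ROLE read_only_finance TO ROLE finance_data_steward;
    GRANT ROLE read_only_finance TO USER analyst_jdoe;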

Let’s explore why the other options are less appropriate:

A. Centralized Account-level RBAC hierarchy and inheritance sounds appealing but is not technically feasible across multiple Snowflake accounts. RBAC hierarchies are enforced within a single account. Therefore, this approach doesn't solve cross-account governance.

C. Reader Accounts for downstream analytics vendors are useful for sharing with external parties, not internal business units. Additionally, Reader Accounts are limited in functionality, including restrictions on roles and features like UDFs. They also don't offer flexibility in access control or scaling.

E. Full database replication to satellite accounts for read-only workloads offers high availability and localized access, but it increases administrative overhead, data duplication, and storage costs. Furthermore, it complicates governance by decentralizing data control, potentially violating the principle of least privilege if replicated data isn’t masked or scoped appropriately.

In summary, to maintain governance clarity and uphold least-privilege access, organizations should use secure views shared via outbound shares and delegate access through well-defined custom roles, avoiding over-reliance on admin roles like SYSADMIN. These patterns ensure centralized policy enforcement while enabling controlled, auditable data access across accounts.

Question 7

Which two Snowflake features help minimize storage consumption and reduce refresh time when creating dashboards that aggregate data from very large fact tables? (Choose 2.)

A. Result Set Caching
B. Clustering Keys
C. Materialized Views
D. Search Optimization Service
E. Automatic Micro-partition Pruning

Correct Answers: C, E

Explanation:
Building high-performance dashboards on top of very large fact tables is a common requirement in data warehousing. To meet the demands of fast, frequent refreshes while minimizing resource consumption, Snowflake provides specific features that support efficient aggregation and optimized data retrieval. Two of the most effective features in this scenario are Materialized Views and Automatic Micro-partition Pruning.

C. Materialized Views are designed specifically to store pre-aggregated or precomputed results from frequently run queries. When a materialized view is created, Snowflake stores the result set and incrementally updates it as the base table changes. This significantly reduces the refresh time for dashboards since the data doesn't need to be recomputed from scratch with each query. Instead, dashboards can pull directly from the materialized view, ensuring low-latency performance. In addition, because only changes to the underlying data are computed, this process is also storage-efficient over time.
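
A minimal sketch, assuming an illustrative fact_sales table (materialized views require Enterprise edition or higher); the date filter in the dashboard query also benefits from micro-partition pruning, discussed next:

    CREATE MATERIALIZED VIEW analytics.daily_sales_mv AS
      SELECT sale_date, region, SUM(amount) AS total_amount, COUNT(*) AS order_count
      FROM analytics.fact_sales
      GROUP BY sale_date, region;

    -- Dashboard query reads the pre-aggregated view instead of raw detail rows:
    SELECT region, SUM(total_amount) AS total_30d
    FROM analytics.daily_sales_mv
    WHERE sale_date >= DATEADD(day, -30, CURRENT_DATE())
    GROUP BY region;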

E. Automatic Micro-partition Pruning allows Snowflake to skip scanning irrelevant data blocks (micro-partitions) based on filter conditions in a query. Snowflake automatically maintains metadata for each micro-partition, such as min/max values, which allows it to quickly prune unnecessary partitions during query execution. This feature dramatically improves query performance without requiring user intervention or additional storage. It's particularly effective when dashboards filter large fact tables by date, category, or other indexed attributes.

Let’s examine why the other options are not optimal for this specific use case:

A. Result Set Caching stores the results of queries for up to 24 hours and returns them instantly when an identical query is re-submitted (by any user with the required privileges) and the underlying data has not changed. While this improves response time for some repeat queries, it doesn't help with frequently refreshed dashboards, as data changes invalidate the cache.

B. Clustering Keys can improve performance by organizing data based on specific columns, but they don’t directly reduce storage usage. In fact, clustering large tables may increase costs because Snowflake needs to maintain clustering metadata and may trigger background re-clustering. Clustering is beneficial for range-based filtering, but it’s not as impactful on aggregation-heavy queries like those used in dashboards.

D. Search Optimization Service improves performance for point lookup queries, particularly in semi-structured data or unstructured text fields. It is not designed for aggregate analytics or reducing refresh times on dashboards, and it can increase storage due to the additional indexing structures.

In summary, Materialized Views allow dashboards to access precomputed aggregates, which significantly improves refresh time and reduces compute usage, while Automatic Micro-partition Pruning minimizes the data that needs to be scanned, speeding up queries without requiring extra storage. Together, they are the most effective features for optimizing dashboard performance on large fact tables.

Question 8

Which two Snowflake platform features should be implemented to achieve a Recovery Point Objective (RPO) of zero and a Recovery Time Objective (RTO) of under 5 minutes for critical analytics workloads? (Choose 2.)

A. Failover/Failback databases between paired accounts
B. Time Travel extended to 90 days
C. Cross-cloud database replication with automatic lag monitoring
D. Continuous Data Protection snapshots stored in object storage
E. Active-active multi-cluster warehouses across regions

Correct Answers: A, C

Explanation:
In the context of disaster recovery and high-availability design, two key metrics must be satisfied for mission-critical analytics:

  • RPO (Recovery Point Objective) = 0, meaning no data loss is acceptable.

  • RTO (Recovery Time Objective) < 5 minutes, meaning the system must recover and be fully functional in under 5 minutes.

To meet these stringent goals, Snowflake provides enterprise-grade features that enable instant recovery and seamless failover. The two features best suited to accomplish this are Failover/Failback databases and Cross-cloud replication with lag monitoring.

A. Failover/Failback databases between paired accounts is Snowflake’s built-in mechanism to support disaster recovery across Snowflake accounts, typically in different regions. Using database replication, Snowflake maintains a live secondary copy of your database, and with failover/failback, you can promote the secondary to primary almost instantly in the event of an outage. This ensures that data access continues with minimal disruption. Because the system is already synchronized, the RTO can be kept under 5 minutes, and if replication is frequent (or continuous), RPO is zero.

C. Cross-cloud database replication with automatic lag monitoring extends the above capabilities to support multi-region, multi-cloud environments. It ensures that data remains synchronized across cloud providers (e.g., AWS to Azure). Snowflake offers automatic monitoring of replication lag, alerting administrators to latency issues before they become critical. When combined with failover capabilities, this allows organizations to architect for business continuity, even in the face of cloud-specific outages, and helps maintain RPO = 0.
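
For illustration, a minimal sketch of the database-level replication path (a failover group, as sketched under Question 1, is the group-based alternative); account and database names are placeholders, and failover requires Business Critical edition or higher:

    -- On the primary account:
    ALTER DATABASE analytics_db ENABLE REPLICATION TO ACCOUNTS myorg.dr_account;
    ALTER DATABASE analytics_db ENABLE FAILOVER    TO ACCOUNTS myorg.dr_account;

    -- On the secondary account, create the replica and keep it refreshed:
    CREATE DATABASE analytics_db AS REPLICA OF myorg.prod_account.analytics_db;
    ALTER DATABASE analytics_db REFRESH;

    -- Check replication state and last refresh (lag) for primaries/secondaries:
    SHOW REPLICATION DATABASES;

    -- During an outage, promote the secondary; RTO largely depends on this step:
    ALTER DATABASE analytics_db PRIMARY;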

Why the other options fall short:

B. Time Travel extended to 90 days allows users to access historical data for up to 90 days, useful for auditing or recovering from user errors. However, Time Travel does not support instant failover or automated recovery and cannot meet an RTO of under 5 minutes. It also doesn’t replicate data across accounts.

D. Continuous Data Protection snapshots stored in object storage is not a Snowflake feature. While Snowflake has internal recovery mechanisms and Time Travel, it does not export snapshots to object storage as part of CDP. This option is not relevant in Snowflake’s architecture.

E. Active-active multi-cluster warehouses across regions describes scaling compute clusters for concurrency, but a multi-cluster warehouse operates within a single account and region. Warehouses are stateless compute resources, and their configuration alone does not protect data or meet RPO = 0; they support performance scaling, not data recovery.

In conclusion, to satisfy the stringent recovery objectives of RPO = 0 and RTO < 5 minutes, Snowflake architects must design with cross-account failover capabilities and cross-cloud replication with automated lag tracking. These features enable true enterprise-grade business continuity.

Question 9

Which two scenarios trigger Snowflake to automatically re-cluster the micro-partitions of a table that has a clustering key defined? (Choose 2.)

A. A large COPY INTO operation inserts > 20 percent new data
B. An explicit ALTER TABLE RECLUSTER command is issued
C. The clustering depth metric exceeds Snowflake’s internal threshold
D. Table storage size surpasses 1 TB uncompressed
E. A warehouse query hints RECLUSTER = TRUE

Correct Answers: A, C

Explanation:
In Snowflake, when a clustering key is defined on a table, Snowflake uses automatic reclustering to maintain efficient data organization. Reclustering is important because it ensures that micro-partitions remain well-aligned with the clustering key, which in turn improves query performance by enabling partition pruning. Snowflake determines when to automatically recluster based on certain internal heuristics and thresholds.

A. A large COPY INTO operation inserts > 20 percent new data is one such trigger. When a significant volume of new data (typically more than 20% of the existing table’s size) is inserted into a clustered table via a bulk operation like COPY INTO, it can cause the distribution of data across micro-partitions to become suboptimal. This disrupts the existing clustering structure, which may prompt Snowflake to schedule automatic reclustering in the background to realign the data with the defined clustering key.

C. The clustering depth metric exceeds Snowflake’s internal threshold is another major trigger for automatic reclustering. Snowflake tracks a metric called clustering depth, which measures how well data is organized with respect to the clustering key. If this depth becomes too high—indicating that micro-partitions contain data from too wide a range of key values—Snowflake will automatically schedule reclustering to restore optimal clustering. This ensures efficient partition pruning during queries and maintains performance standards.
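
A minimal sketch (table and key are illustrative) of defining a clustering key and inspecting the clustering depth metric that Automatic Clustering monitors:

    ALTER TABLE analytics.fact_sales CLUSTER BY (sale_date);

    -- Average depth of overlapping micro-partitions for the clustering key:
    SELECT SYSTEM$CLUSTERING_DEPTH('analytics.fact_sales');

    -- Fuller detail, including a partition-depth histogram:
    SELECT SYSTEM$CLUSTERING_INFORMATION('analytics.fact_sales');

    -- Automatic Clustering can be paused or resumed per table, but not forced:
    ALTER TABLE analytics.fact_sales SUSPEND RECLUSTER;
    ALTER TABLE analytics.fact_sales RESUME RECLUSTER;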

Let’s clarify why the other options are incorrect:

B. An explicit ALTER TABLE RECLUSTER command is issued is not valid because manual reclustering has been deprecated; with Automatic Clustering, reclustering is performed entirely by a Snowflake-managed background service. Users cannot directly force reclustering with SQL commands, although they can suspend or resume it per table, as shown in the sketch above.

D. Table storage size surpasses 1 TB uncompressed might increase the need for clustering due to data volume, but size alone does not trigger reclustering. It is the clustering quality (as measured by clustering depth) and data change activity that are the deciding factors.

E. A warehouse query hints RECLUSTER = TRUE is not a supported query hint in Snowflake. Users cannot trigger reclustering through query hints; reclustering is governed by Snowflake’s internal processes and system-managed thresholds.

In summary, Snowflake maintains clustering automatically based on data change events and clustering performance metrics. The two primary triggers are large data insertions that disrupt the clustering structure and the clustering depth metric crossing a system-defined threshold, making A and C the correct answers.

Question 10

When setting up a VPC that uses AWS PrivateLink to connect securely to Snowflake, which two configuration tasks are required to ensure that data plane traffic remains private and secure? (Choose 2.)

A. Create an AWS PrivateLink interface endpoint to Snowflake’s account-specific URL
B. Assign a PrivateLink-only network policy to the Snowflake account
C. Configure DNS forwarding so the Snowflake account host resolves to the interface endpoint
D. Disable OCSP certificate stapling on the client side
E. Set INBOUND rule in the security group to allow 0.0.0.0/0 HTTPS

Correct Answers: A, C

Explanation:
When connecting a VPC to Snowflake over AWS PrivateLink, the goal is to ensure that all data plane traffic—queries, file uploads, and result sets—remains within the AWS backbone, avoiding the public internet. To accomplish this, Snowflake provides account-specific PrivateLink endpoints and requires specific network and DNS configurations.

A. Create an AWS PrivateLink interface endpoint to Snowflake’s account-specific URL is a required step. Each Snowflake account enabled for PrivateLink is assigned a unique AWS endpoint hostname (e.g., <account>.<region>.privatelink.snowflakecomputing.com). To access Snowflake privately, an interface VPC endpoint (powered by AWS PrivateLink) must be created for this hostname. This endpoint serves as the private ingress point for traffic from your VPC to Snowflake.

C. Configure DNS forwarding so the Snowflake account host resolves to the interface endpoint is also essential. By default, domain names like *.snowflakecomputing.com resolve to public IP addresses. You must configure DNS forwarding or DNS override rules (via Route 53 or a custom DNS server) so that requests to Snowflake’s PrivateLink hostname resolve to the IP address of the interface endpoint. This ensures all traffic routes through PrivateLink, maintaining security and avoiding public internet exposure.
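
On the Snowflake side, the account-specific values needed for both steps can be retrieved with a system function; the interface endpoint and the Route 53 / DNS records themselves are then created on the AWS side. A minimal sketch:

    USE ROLE ACCOUNTADMIN;

    -- Returns a JSON document with the account's PrivateLink URLs (including the
    -- OCSP URL) and the endpoint identifiers to target when creating the VPC
    -- interface endpoint and the private DNS records:
    SELECT SYSTEM$GET_PRIVATELINK_CONFIG();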

Let’s address why the other options are not necessary or incorrect:

B. Assign a PrivateLink-only network policy to the Snowflake account is optional, not required. While you can enforce that all access must come through PrivateLink by applying such a network policy, this is a governance choice, not a configuration requirement for connectivity.

D. Disable OCSP certificate stapling on the client side is not recommended. Snowflake clients use OCSP (Online Certificate Status Protocol) to validate the service's TLS certificates, and disabling these checks weakens security. With PrivateLink, OCSP traffic can also be routed through the private endpoint (the OCSP URL is part of the PrivateLink configuration), so there is no need to disable it.

E. Set INBOUND rule in the security group to allow 0.0.0.0/0 HTTPS is dangerous and unnecessary. PrivateLink is initiated by the client VPC to Snowflake, not the other way around. Therefore, inbound rules for wide-open traffic are not required, and doing so could compromise VPC security.

In conclusion, to securely connect to Snowflake over AWS PrivateLink, architects must create a VPC interface endpoint targeting the Snowflake PrivateLink hostname and configure DNS resolution so that the endpoint is correctly used. These two steps ensure that all data plane traffic stays securely within the AWS infrastructure, meeting enterprise security and compliance standards.