What built-in Snowflake features make use of the change tracking metadata for a table? (Choose two.)
A. The MERGE command
B. The UPSERT command
C. The CHANGES clause
D. A STREAM object
E. The CHANGE_DATA_CAPTURE command
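For reference, a minimal sketch of the two features that read a table's change tracking metadata (table and stream names are hypothetical):

-- Change tracking must be enabled on the table (creating a stream does this implicitly).
ALTER TABLE sales SET CHANGE_TRACKING = TRUE;

-- A STREAM object records DML changes made to the table.
CREATE OR REPLACE STREAM sales_stream ON TABLE sales;

-- The CHANGES clause queries the change tracking metadata over a time interval.
SELECT *
FROM sales
  CHANGES (INFORMATION => DEFAULT)
  AT (TIMESTAMP => DATEADD(hour, -1, CURRENT_TIMESTAMP()));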
Which organization-related tasks can be performed by the ORGADMIN role? (Choose three.)
A. Changing the name of the organization
B. Creating an account
C. Viewing a list of organization accounts
D. Changing the name of an account
E. Deleting an account
F. Enabling the replication of a database
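For context, a sketch of typical ORGADMIN operations; the account name, credentials, email, and organization name are placeholders:

USE ROLE ORGADMIN;

-- View a list of the accounts in the organization.
SHOW ORGANIZATION ACCOUNTS;

-- Create a new account in the organization.
CREATE ACCOUNT marketing_account
  ADMIN_NAME = admin_user
  ADMIN_PASSWORD = 'ChangeMe123!'
  EMAIL = 'admin@example.com'
  EDITION = ENTERPRISE;

-- Enable database replication for an account in the organization.
SELECT SYSTEM$GLOBAL_ACCOUNT_SET_PARAMETER('myorg.marketing_account', 'ENABLE_ACCOUNT_DATABASE_REPLICATION', 'true');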
A retail company has over 3000 stores, all using the same Point of Sale (POS) system. The company wants to deliver near real-time sales results to category managers. The stores operate in a variety of time zones, and the number of transactions per minute varies widely, with some stores having higher sales volumes than others.
Sales results are provided in a uniform fashion using data-engineered fields that are calculated in a complex data pipeline. Calculations include exceptions, aggregations, and scoring using external functions interfaced to scoring algorithms. The source data for the aggregations has over 100M rows.
Every minute, the POS sends all sales transaction files to a cloud storage location, using a naming convention that includes store numbers and timestamps to identify the set of transactions contained in the files. The files are typically less than 10MB in size.
How can the near real-time results be provided to the category managers? (Choose two.)
A. All files should be concatenated before ingestion into Snowflake to avoid micro-ingestion.
B. A Snowpipe should be created and configured with AUTO_INGEST = TRUE. A stream should be created to process the INSERTs into a single target table, using the stream metadata to identify the store number and timestamps.
C. A STREAM should be created to accumulate the near real-time data, and a TASK should be created that runs at a frequency that matches the real-time analytics needs.
D. An external scheduler should examine the contents of the cloud storage location and issue SnowSQL commands to process the data at a frequency that matches the real-time analytics needs.
E. The COPY INTO command, with a task scheduled to run every second, should be used to achieve the near real-time requirement.
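A sketch combining the Snowpipe, stream, and task approach; the object names, stage, and one-minute schedule are assumptions:

-- Auto-ingest pipe loads each new file as its notification arrives.
CREATE PIPE pos_pipe AUTO_INGEST = TRUE AS
  COPY INTO raw_sales
  FROM @pos_stage
  FILE_FORMAT = (TYPE = 'CSV');

-- Stream accumulates the newly inserted rows.
CREATE STREAM raw_sales_stream ON TABLE raw_sales;

-- Task runs the pipeline only when the stream has data.
CREATE TASK process_sales
  WAREHOUSE = transform_wh
  SCHEDULE = '1 minute'
  WHEN SYSTEM$STREAM_HAS_DATA('RAW_SALES_STREAM')
AS
  INSERT INTO sales_results SELECT * FROM raw_sales_stream;

ALTER TASK process_sales RESUME;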
A company needs to share its product catalog data with one of its partners. The product catalog data is stored in two database tables: PRODUCT_CATEGORY and PRODUCT_DETAILS. Both tables can be joined by the PRODUCT_ID column. Data access should be governed, and only the partner should have access to the records.
The partner is not a Snowflake customer. The partner uses Amazon S3 for cloud storage.
Which design will be the MOST cost-effective and secure, while using the required Snowflake features?
A. Use Secure Data Sharing with an S3 bucket as a destination.
B. Publish the PRODUCT_CATEGORY and PRODUCT_DETAILS data sets on the Snowflake Marketplace.
C. Create a database user for the partner and give them access to the required data sets.
D. Create a reader account for the partner and share the data sets as secure views.
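A sketch of the reader-account approach with secure views; the database, schema, view, column, and account names are hypothetical:

CREATE SECURE VIEW catalog_db.public.product_catalog_v AS
  SELECT c.product_id, c.category_name, d.product_name
  FROM catalog_db.public.product_category c
  JOIN catalog_db.public.product_details d ON c.product_id = d.product_id;

CREATE SHARE partner_share;
GRANT USAGE ON DATABASE catalog_db TO SHARE partner_share;
GRANT USAGE ON SCHEMA catalog_db.public TO SHARE partner_share;
GRANT SELECT ON VIEW catalog_db.public.product_catalog_v TO SHARE partner_share;

CREATE MANAGED ACCOUNT partner_reader
  ADMIN_NAME = partner_admin, ADMIN_PASSWORD = 'ChangeMe123!', TYPE = READER;
-- Once the reader account is provisioned, add its locator to the share:
-- ALTER SHARE partner_share ADD ACCOUNTS = <reader_account_locator>;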
A company has a Snowflake environment running in AWS us-west-2 (Oregon). The company needs to share data privately with a customer who is running their Snowflake environment in Azure East US 2 (Virginia).
What is the recommended sequence of operations that must be followed to meet this requirement?
A.
1. Create a share and add the database privileges to the share.
2. Create a new listing on the Snowflake Marketplace.
3. Alter the listing and add the share.
4. Instruct the customer to subscribe to the listing on the Snowflake Marketplace.
B.
1. Ask the customer to create a new Snowflake account in Azure East US 2 (Virginia).
2. Create a share and add the database privileges to the share.
3. Alter the share and add the customer's Snowflake account to the share.
C.
1. Create a new Snowflake account in Azure East US 2 (Virginia).
2. Set up replication between AWS us-west-2 (Oregon) and Azure East US 2 (Virginia) for the database objects to be shared.
3. Create a share and add the database privileges to the share.
4. Alter the share and add the customer's Snowflake account to the share.
D.
1. Create a reader account in Azure East US 2 (Virginia).
2. Create a share and add the database privileges to the share.
3. Add the reader account to the share.
4. Share the reader account's URL and credentials with the customer.
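A sketch of the replication-then-share sequence, run from the provider's side; the account and object names are hypothetical, and it assumes replication has already been enabled for both of the provider's accounts:

-- In the provider's new Azure East US 2 (Virginia) account:
CREATE DATABASE sales_db AS REPLICA OF myorg.aws_oregon_account.sales_db;
ALTER DATABASE sales_db REFRESH;

CREATE SHARE sales_share;
GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share;
ALTER SHARE sales_share ADD ACCOUNTS = customer_azure_account;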
Company A has recently acquired company B. The Snowflake deployment for company B is located in the Azure West Europe region.
As part of the integration process, an Architect has been asked to consolidate company B's sales data into company A's Snowflake account which is located in the AWS us-east-1 region.
How can this requirement be met?
A. Replicate the sales data from company B's Snowflake account into company A's Snowflake account using cross-region data replication within Snowflake. Configure a direct share from company B's account to company A's account.
B. Export the sales data from company B's Snowflake account as CSV files, and transfer the files to company A's Snowflake account. Import the data using Snowflake's data loading capabilities.
C. Migrate company B's Snowflake deployment to the same region as company A's Snowflake deployment, ensuring data locality. Then perform a direct database-to-database merge of the sales data.
D. Build a custom data pipeline using Azure Data Factory or a similar tool to extract the sales data from company B's Snowflake account. Transform the data, then load it into company A's Snowflake account.
A Snowflake Architect created a new data share and would like to verify that only specific records in secure views are visible within the data share by the consumers.
What is the recommended way to validate data accessibility by the consumers?
A. Create reader accounts as shown below and impersonate the consumers by logging in with their credentials:
   create managed account reader_acct1 admin_name = user1, admin_password = 'Sdfed43da!44', type = reader;
B. Create a row access policy as shown below and assign it to the data share:
   create or replace row access policy rap_acct as (acct_id varchar) returns boolean ->
     case when 'acct1_role' = current_role() then true else false end;
C. Set the session parameter called SIMULATED_DATA_SHARING_CONSUMER as shown below in order to impersonate the consumer accounts:
   alter session set simulated_data_sharing_consumer = 'Consumer Acct1';
D. Alter the share settings as shown below, in order to impersonate a specific consumer account:
   alter share sales_share set accounts = 'Consumer1' share_restrictions = true;
A company is using Snowflake in Azure in the Netherlands. The company's analyst team also has data in JSON format, stored in an Amazon S3 bucket in the AWS Singapore region, that the team wants to analyze.
The Architect has been given the following requirements:
Provide access to frequently changing data
Keep egress costs to a minimum
Maintain low latency
How can these requirements be met with the LEAST amount of operational overhead?
A. Use a materialized view on top of an external table against the S3 bucket in AWS Singapore.
B. Use an external table against the S3 bucket in AWS Singapore and copy the data into transient tables.
C. Copy the data between providers from S3 to Azure Blob storage to collocate, then use Snowpipe for data ingestion.
D. Use AWS Transfer Family to replicate data between the S3 bucket in AWS Singapore and an Azure Netherlands Blob storage, then use an external table against the Blob storage.
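A sketch of a materialized view over an external table; the stage, storage integration, bucket path, and column expression are assumptions about the JSON layout:

CREATE STAGE sg_stage
  STORAGE_INTEGRATION = s3_sg_int
  URL = 's3://analyst-json-bucket/events/';

CREATE EXTERNAL TABLE events_ext (
  event_ts TIMESTAMP_NTZ AS (VALUE:event_ts::TIMESTAMP_NTZ)
)
LOCATION = @sg_stage
AUTO_REFRESH = TRUE
FILE_FORMAT = (TYPE = JSON);

-- The materialized view keeps frequently queried results local to the Snowflake region.
CREATE MATERIALIZED VIEW events_mv AS
  SELECT event_ts, VALUE AS payload FROM events_ext;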
Based on the Snowflake object hierarchy, what securable objects belong directly to a Snowflake account? (Choose three.)
A. Database
B. Schema
C. Table
D. Stage
E. Role
F. Warehouse
What is a characteristic of Role-Based Access Control (RBAC) as used in Snowflake?
A. Privileges can be granted at the database level and can be inherited by all underlying objects.
B. A user can use a "super-user" access along with SECURITYADMIN to bypass authorization checks and access all databases, schemas, and underlying objects.
C. A user can create managed access schemas to support future grants and ensure only schema owners can grant privileges to other roles.
D. A user can create managed access schemas to support current and future grants and ensure only object owners can grant privileges to other roles.
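For reference, a managed access schema is created as below (database and schema names are hypothetical); in such a schema, object owners lose the ability to make grant decisions, and only the schema owner or a role with the MANAGE GRANTS privilege can grant privileges on its objects:

CREATE SCHEMA analytics.reporting WITH MANAGED ACCESS;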
Assuming all Snowflake accounts are using an Enterprise edition or higher, in which development and testing scenarios would copying of data be required, and zero-copy cloning not be suitable? (Choose two.)
A. Developers create their own datasets to work against transformed versions of the live data.
B. Production and development run in different databases in the same account, and Developers need to see production-like data but with specific columns masked.
C. Data is in a production Snowflake account that needs to be provided to Developers in a separate development/testing Snowflake account in the same cloud region.
D. Developers create their own copies of a standard test database previously created for them in the development account, for their initial development and unit testing.
E. The release process requires pre-production testing of changes with data of production scale and complexity. For security reasons, pre-production also runs in the production account.
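As a point of reference, zero-copy cloning is a metadata-only operation and works only within a single account (the names below are hypothetical); moving data to a different account requires replication or data sharing instead:

CREATE DATABASE dev_db CLONE prod_db;   -- no data is physically copied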
A new user user_01 is created within Snowflake. The following two commands are executed:
Command 1 --> show grants to user user_01;
Command 2 --> show grants on user user_01;
What inferences can be made about these commands?
A. Command 1 defines which user owns user_01.
   Command 2 defines all the grants which have been given to user_01.
B. Command 1 defines all the grants which are given to user_01.
   Command 2 defines which user owns user_01.
C. Command 1 defines which role owns user_01.
   Command 2 defines all the grants which have been given to user_01.
D. Command 1 defines all the grants which are given to user_01.
   Command 2 defines which role owns user_01.
A Data Engineer is designing a near real-time ingestion pipeline for a retail company to ingest event logs into Snowflake to derive insights. A Snowflake Architect is asked to define security best practices to configure access control privileges for the data load for auto-ingest to Snowpipe.
What are the MINIMUM object privileges required for the Snowpipe user to execute Snowpipe?
A. OWNERSHIP on the named pipe, USAGE on the named stage, target database, and schema, and INSERT and SELECT on the target table
B. OWNERSHIP on the named pipe, USAGE and READ on the named stage, USAGE on the target database and schema, and INSERT and SELECT on the target table
C. CREATE on the named pipe, USAGE and READ on the named stage, USAGE on the target database and schema, and INSERT and SELECT on the target table
D. USAGE on the named pipe, named stage, target database, and schema, and INSERT and SELECT on the target table
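A sketch of the kinds of grants involved for a dedicated Snowpipe role; the role, database, schema, stage, table, and pipe names are hypothetical:

GRANT USAGE ON DATABASE raw_db TO ROLE snowpipe_role;
GRANT USAGE ON SCHEMA raw_db.landing TO ROLE snowpipe_role;
GRANT USAGE ON STAGE raw_db.landing.pos_stage TO ROLE snowpipe_role;   -- external stage; use READ for an internal stage
GRANT INSERT, SELECT ON TABLE raw_db.landing.raw_sales TO ROLE snowpipe_role;
GRANT OWNERSHIP ON PIPE raw_db.landing.pos_pipe TO ROLE snowpipe_role;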
The IT Security team has identified that there is an ongoing credential stuffing attack on many of their organization's systems.
What is the BEST way to find recent and ongoing login attempts to Snowflake?
A. Call the LOGIN_HISTORY Information Schema table function.
B. Query the LOGIN_HISTORY view in the ACCOUNT_USAGE schema in the SNOWFLAKE database.
C. View the History tab in the Snowflake UI and set up a filter for SQL text that contains the text "LOGIN".
D. View the Users section in the Account tab in the Snowflake UI and review the last login column.
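For reference, both login-history sources can be queried as below; the ACCOUNT_USAGE view retains up to a year of history but has some latency, while the Information Schema table function covers the last seven days with minimal latency:

SELECT event_timestamp, user_name, client_ip, is_success, error_message
FROM snowflake.account_usage.login_history
WHERE event_timestamp > DATEADD(hour, -24, CURRENT_TIMESTAMP())
ORDER BY event_timestamp DESC;

SELECT *
FROM TABLE(information_schema.login_history(
  time_range_start => DATEADD(hour, -1, CURRENT_TIMESTAMP())));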
An Architect has a VPN_ACCESS_LOGS table in the SECURITY_LOGS schema containing timestamps of the connection and disconnection, username of the user, and summary statistics.
What should the Architect do to enable the Snowflake search optimization service on this table?
A. Assume a role with OWNERSHIP on future tables and ADD SEARCH OPTIMIZATION on the SECURITY_LOGS schema.
B. Assume a role with ALL PRIVILEGES including ADD SEARCH OPTIMIZATION in the SECURITY_LOGS schema.
C. Assume a role with OWNERSHIP on VPN_ACCESS_LOGS and ADD SEARCH OPTIMIZATION in the SECURITY_LOGS schema.
D. Assume a role with ALL PRIVILEGES on VPN_ACCESS_LOGS and ADD SEARCH OPTIMIZATION in the SECURITY_LOGS schema.
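For reference, the command itself, issued by a role holding the required privileges; the USERNAME column in the scoped form is an assumption based on the question description:

ALTER TABLE security_logs.vpn_access_logs ADD SEARCH OPTIMIZATION;
ALTER TABLE security_logs.vpn_access_logs ADD SEARCH OPTIMIZATION ON EQUALITY(username);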
A table contains five columns and has millions of records. The cardinality distribution of the columns is shown below:
Columns C4 and C5 are mostly used in the GROUP BY and ORDER BY clauses of SELECT queries, whereas columns C1, C2, and C3 are heavily used in the filter and join conditions of SELECT queries.
The Architect must design a clustering key for this table to improve the query performance.
Based on Snowflake recommendations, how should the clustering key columns be ordered while defining the multi-column clustering key?
A. C5, C4, C2
B. C3, C4, C5
C. C1, C3, C2
D. C2, C1, C3
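For reference, a multi-column clustering key is defined as below (table and column names are hypothetical); Snowflake recommends ordering the key columns from lowest to highest cardinality:

ALTER TABLE sales_events CLUSTER BY (region, order_date, customer_id);   -- ordered from lowest to highest cardinality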
Which security, governance, and data protection features require, at a MINIMUM, the Business Critical edition of Snowflake? (Choose two.)
A. Extended Time Travel (up to 90 days)
B. Customer-managed encryption keys through Tri-Secret Secure
C. Periodic rekeying of encrypted data
D. AWS, Azure, or Google Cloud private connectivity to Snowflake
E. Federated authentication and SSO
A company wants to deploy its Snowflake accounts inside its corporate network with no visibility on the internet. The company is using a VPN infrastructure and Virtual Desktop Infrastructure (VDI) for its Snowflake users. The company also wants to re-use the login credentials set up for the VDI to eliminate redundancy when managing logins.
What Snowflake functionality should be used to meet these requirements? (Choose two.)
A. Set up replication to allow users to connect from outside the company VPN.
B. Provision a unique company Tri-Secret Secure key.
C. Use private connectivity from a cloud provider.
D. Set up SSO for federated authentication.
E. Use a proxy Snowflake account outside the VPN, enabling client redirect for user logins.
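A sketch of the federated authentication (SSO) piece; the integration name and identity provider values are placeholders:

CREATE SECURITY INTEGRATION vdi_sso
  TYPE = SAML2
  ENABLED = TRUE
  SAML2_ISSUER = 'https://idp.example.com'
  SAML2_SSO_URL = 'https://idp.example.com/sso/saml'
  SAML2_PROVIDER = 'CUSTOM'
  SAML2_X509_CERT = '<base64-encoded certificate>';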
How do Snowflake databases that are created from shares differ from standard databases that are not created from shares? (Choose three.)
A. Shared databases are read-only.
B. Shared databases must be refreshed in order for new data to be visible.
C. Shared databases cannot be cloned.
D. Shared databases are not supported by Time Travel.
E. Shared databases will have the PUBLIC or INFORMATION_SCHEMA schemas without explicitly granting these schemas to the share.
F. Shared databases can also be created as transient databases.
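For reference, a consumer creates a database from a share as below; the database, provider account, and share names are hypothetical:

CREATE DATABASE partner_data FROM SHARE provider_account.sales_share;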
What integration object should be used to place restrictions on where data may be exported?
A. Stage integration
B. Security integration
C. Storage integration
D. API integration
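A sketch of a storage integration that limits the external locations available for loading and unloading; the integration name, role ARN, and bucket paths are placeholders:

CREATE STORAGE INTEGRATION s3_export_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake_access'
  ENABLED = TRUE
  STORAGE_ALLOWED_LOCATIONS = ('s3://approved-bucket/exports/')
  STORAGE_BLOCKED_LOCATIONS = ('s3://approved-bucket/exports/restricted/');

-- Related account parameter that blocks ad hoc unloads directly to external URLs:
ALTER ACCOUNT SET PREVENT_UNLOAD_TO_INLINE_URL = TRUE;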
The following DDL command was used to create a task based on a stream:
Assuming MY_WH is configured with AUTO_SUSPEND = 60 and is used exclusively for this task, which statement is true?
A. The warehouse MY_WH will be made active every five minutes to check the stream.
B. The warehouse MY_WH will only be active when there are results in the stream.
C. The warehouse MY_WH will never suspend.
D. The warehouse MY_WH will automatically resize to accommodate the size of the stream.
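The DDL itself is not reproduced here; a representative task-on-stream definition, with hypothetical names and a five-minute schedule, might look like the following:

CREATE TASK my_task
  WAREHOUSE = MY_WH
  SCHEDULE = '5 minute'
  WHEN SYSTEM$STREAM_HAS_DATA('MY_STREAM')
AS
  INSERT INTO target_table SELECT * FROM my_stream;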
When using the Snowflake Connector for Kafka, what data formats are supported for the messages? (Choose two.)