Snowflake Exam DEA-C01 Topic 4 Question 46 Discussion

Actual exam question for Snowflake's DEA-C01 exam
Question #: 46
Topic #: 4

A CSV file of around 1 TB in size is generated daily on an on-premise server. A corresponding table, internal stage, and file format have already been created in Snowflake to facilitate the data loading process.

How can the process of bringing the CSV file into Snowflake be automated using the LEAST amount of operational overhead?
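For reference, the pre-existing Snowflake objects the question refers to might look something like the following sketch (all object names and column definitions are hypothetical; the question only states that a table, an internal stage, and a file format already exist):

-- Hypothetical setup: file format, internal stage, and target table
CREATE OR REPLACE FILE FORMAT daily_csv_format
  TYPE = CSV
  SKIP_HEADER = 1
  FIELD_OPTIONALLY_ENCLOSED_BY = '"';

CREATE OR REPLACE STAGE daily_csv_stage
  FILE_FORMAT = (FORMAT_NAME = 'daily_csv_format');

CREATE OR REPLACE TABLE daily_csv_target (
  -- columns matching the daily CSV layout (placeholder definitions)
  id NUMBER,
  event_ts TIMESTAMP_NTZ,
  payload VARCHAR
);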

Suggested Answer: C

This option is the best way to automate the process of bringing the CSV file into Snowflake with the least amount of operational overhead. SnowSQL is a command-line tool that can be used to execute SQL statements and scripts on Snowflake. By scheduling a SQL file that executes a PUT command, the CSV file can be pushed from the on-premise server to the internal stage in Snowflake. Then, by creating a pipe that runs a COPY INTO statement that references the internal stage, Snowpipe can automatically load the file from the internal stage into the table when it detects a new file in the stage. This way, there is no need to manually start or monitor a virtual warehouse or task.
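As a minimal sketch of this approach (file paths, the connection name, and object names such as daily_csv_stage are assumptions, not given in the question):

-- put_daily_csv.sql: scheduled on the on-premise server and run with SnowSQL,
-- e.g. snowsql -c my_connection -f put_daily_csv.sql
-- PUT uploads the local file into the existing internal stage, compressing it
-- and splitting the upload into parallel chunks.
PUT file:///data/exports/daily_extract.csv @daily_csv_stage
  AUTO_COMPRESS = TRUE
  PARALLEL = 8;

-- One-time setup in Snowflake: a pipe whose COPY INTO statement reads from the
-- internal stage into the target table using the existing file format, so
-- Snowpipe loads new files as they arrive in the stage.
CREATE OR REPLACE PIPE daily_csv_pipe AS
  COPY INTO daily_csv_target
  FROM @daily_csv_stage
  FILE_FORMAT = (FORMAT_NAME = 'daily_csv_format');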


Contribute your Thoughts:

Erick
1 month ago
Haha, did someone say 1 TB CSV file? That's a whole lot of data! I hope they have a good internet connection on that on-premise server.
upvoted 0 times
...
Lindsey
1 month ago
Hmm, I'm not so sure. What if the file is too big for Snowpipe to handle? Maybe option D using Snowpark Python would be better.
upvoted 0 times
Ngoc
2 days ago
I agree, Snowpipe is designed for automatic ingestion of large files. It would be the least amount of operational overhead.
upvoted 0 times
...
Harrison
3 days ago
Option C sounds like a good choice. Snowpipe can handle large files efficiently.
upvoted 0 times
...
...
Glennis
2 months ago
I agree, C is the best option. Automating the process with Snowpipe is the way to go. No need to manually run tasks or scripts.
upvoted 0 times
Shala
2 days ago
A) Create a task in Snowflake that executes once a day and runs a COPY INTO statement that references the internal stage. The internal stage will read the files directly from the on-premise server and copy the newest file from the on-premise server into the Snowflake table.
upvoted 0 times
...
Buddy
12 days ago
I agree, C is the best option. Automating the process with Snowpipe is the way to go. No need to manually run tasks or scripts.
upvoted 0 times
...
Weldon
1 month ago
C) On the on-premise server, schedule a SQL file to run using SnowSQL that executes a PUT to push a specific file to the internal stage. Create a pipe that runs a COPY INTO statement that references the internal stage. Snowpipe auto-ingest will automatically load the file when the new file lands in the internal stage.
upvoted 0 times
...
...
Vashti
2 months ago
I see your points, but I personally prefer option D. Using a Python script to directly load the data into the table without the need for an internal stage sounds like a more flexible approach.
upvoted 0 times
...
Terrilyn
2 months ago
I disagree, I believe option C is more efficient. Using Snowpipe auto-ingest to automatically load the file seems like a time-saving solution.
upvoted 0 times
...
Tequila
2 months ago
The correct answer is C. Snowpipe will automatically load the file from the internal stage when the new file lands, which is the least amount of operational overhead.
upvoted 0 times
Edison
12 days ago
Yes, that's correct. Snowpipe simplifies the process by automatically ingesting the new file into the table.
upvoted 0 times
...
Teri
13 days ago
Oh, I see. So Snowpipe will handle the loading process without much manual intervention.
upvoted 0 times
...
Leonida
14 days ago
Actually, I believe the answer is C. Using Snowpipe to automatically load the file from the internal stage has the least operational overhead.
upvoted 0 times
...
Sophia
19 days ago
I think the answer is A. It involves creating a task in Snowflake that runs a copy into statement once a day.
upvoted 0 times
...
Polly
20 days ago
Let's go with Snowpipe then, it seems like the most straightforward solution.
upvoted 0 times
...
Chaya
27 days ago
I agree, Snowpipe will save us a lot of operational overhead.
upvoted 0 times
...
Latrice
1 month ago
Snowpipe sounds like the most efficient option for loading the CSV file into Snowflake.
upvoted 0 times
...
Roselle
1 month ago
I think the best way to automate the process is by using Snowpipe.
upvoted 0 times
...
...
Leatha
2 months ago
I think option A is the best choice. It seems like the most straightforward way to automate the process with minimal overhead.
upvoted 0 times
...
