
Google Exam Associate Data Practitioner Topic 2 Question 11 Discussion

Actual exam question for Google's Associate Data Practitioner exam
Question #: 11
Topic #: 2

You have an existing weekly Storage Transfer Service transfer job from Amazon S3 to a Nearline Cloud Storage bucket in Google Cloud. Each week, the job moves a large number of relatively small files. As the number of files to be transferred each week has grown over time, you are at risk of no longer completing the transfer in the allocated time frame. You need to decrease the total transfer time by replacing the process. Your solution should minimize costs where possible. What should you do?

A) Create a transfer job using the Google Cloud CLI, and specify the Standard storage class with the --custom-storage-class flag.
B) Create parallel transfer jobs using include and exclude prefixes.
C) Create a batch Dataflow job that is scheduled weekly to migrate the data from Amazon S3 to Cloud Storage.
D) Create an agent-based transfer job that utilizes multiple transfer agents on Compute Engine instances.

Suggested Answer: B

Comprehensive and detailed explanation:

Why B is correct: Creating parallel transfer jobs by using include and exclude prefixes lets you split the object namespace into smaller, disjoint chunks and transfer them in parallel.

This can significantly increase throughput and reduce the overall transfer time.
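As a rough illustration of the parallel-prefix approach, the sketch below uses the google-cloud-storage-transfer Python client to create one weekly job per prefix group. The project ID, bucket names, prefixes, and schedule are illustrative assumptions rather than values from the question, and S3 credential or role configuration is omitted.

```python
# Sketch only: split one weekly S3 -> Nearline transfer into parallel jobs,
# each restricted to its own prefix group. Names and schedule are placeholders.
from google.cloud import storage_transfer


def create_prefix_job(project_id: str, include_prefixes: list[str]) -> None:
    """Create one weekly transfer job limited to the given S3 prefixes."""
    client = storage_transfer.StorageTransferServiceClient()

    transfer_job = {
        "project_id": project_id,
        "description": f"Weekly S3 transfer for prefixes {include_prefixes}",
        "status": storage_transfer.TransferJob.Status.ENABLED,
        "schedule": {
            "schedule_start_date": {"year": 2024, "month": 1, "day": 1},
            "repeat_interval": {"seconds": 7 * 24 * 3600},  # run weekly
        },
        "transfer_spec": {
            # S3 credentials / role configuration omitted for brevity.
            "aws_s3_data_source": {"bucket_name": "example-source-bucket"},
            "gcs_data_sink": {"bucket_name": "example-nearline-bucket"},
            # Each job only copies objects under its own prefixes.
            "object_conditions": {"include_prefixes": include_prefixes},
        },
    }
    client.create_transfer_job({"transfer_job": transfer_job})


# Disjoint prefix groups let several jobs run in parallel, e.g.:
# create_prefix_job("my-project", ["data/a/", "data/b/"])
# create_prefix_job("my-project", ["data/c/", "data/d/"])
```

Because the jobs cover disjoint prefixes, they can run concurrently without duplicating work, which is what shortens the weekly transfer window.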

Why other options are incorrect: A: Changing the storage class to Standard affects storage pricing, not transfer speed.

C: Dataflow is a complex solution for a simple file transfer task.

D: Agent-based transfers are suited to large files or network-constrained environments, not to a large number of small files, and the required Compute Engine agent instances add cost.


Contribute your Thoughts:

Becky
2 days ago
Just throw more servers at it, that'll fix it! Or, you know, use option D and let the machines do the work.
upvoted 0 times
...
Viki
5 days ago
I'm leaning towards C. Dataflow seems like the right tool to handle this kind of large-scale data migration in a scalable way.
upvoted 0 times
...
Jolene
25 days ago
I'm leaning towards option C with a batch Dataflow job for a more automated solution.
upvoted 0 times
...
Rosann
1 month ago
Option D seems intriguing, but I'm not sure if the added cost of Compute Engine instances is worth it. Might be overkill for this use case.
upvoted 0 times
Gerald
9 days ago
C) Create a batch Dataflow job that is scheduled weekly to migrate the data from Amazon S3 to Cloud Storage.
upvoted 0 times
...
Tess
14 days ago
B) Create parallel transfer jobs using include and exclude prefixes.
upvoted 0 times
...
Merilyn
15 days ago
A) Create a transfer job using the Google Cloud CLI, and specify the Standard storage class with the --custom-storage-class flag.
upvoted 0 times
...
...
Melodie
1 month ago
I disagree, I believe option D with multiple transfer agents would be more efficient.
upvoted 0 times
...
Karima
2 months ago
I think option B is the way to go. Using parallel transfer jobs with prefixes sounds like the most efficient way to get this done.
upvoted 0 times
Judy
10 days ago
D) Create an agent-based transfer job that utilizes multiple transfer agents on Compute Engine instances.
upvoted 0 times
...
Carin
13 days ago
I agree, that seems like the best way to speed up the transfer process.
upvoted 0 times
...
Sheridan
17 days ago
B) Create parallel transfer jobs using include and exclude prefixes.
upvoted 0 times
...
...
Maybelle
2 months ago
I think option B sounds like a good idea to speed up the transfer.
upvoted 0 times
...
