Welcome to Pass4Success

Amazon Exam DAS-C01 Topic 2 Question 90 Discussion

Actual exam question for Amazon's DAS-C01 exam
Question #: 90
Topic #: 2

A company collects data from parking garages. Analysts have requested the ability to run reports in near real time about the number of vehicles in each garage.

The company wants to build an ingestion pipeline that loads the data into an Amazon Redshift cluster. The solution must alert operations personnel when the number of vehicles in a particular garage exceeds a specific threshold. The alerting query will use garage threshold values as a static reference. The threshold values are stored in Amazon S3.

What is the MOST operationally efficient solution that meets these requirements?
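The alerting requirement described above can be sketched in outline: the threshold values would be read from Amazon S3 as static reference data and compared against the near-real-time vehicle counts. A minimal sketch in Python, assuming the S3 object has already been loaded into a dict (all garage names and values here are hypothetical):

```python
# Sketch of the alerting check the question describes. In a real pipeline
# the thresholds would be fetched from an object in Amazon S3 (e.g. with
# boto3) and the counts queried from the Redshift cluster; here both are
# plain dicts so the logic stays self-contained.

def garages_over_threshold(counts, thresholds):
    """Return garages whose current vehicle count exceeds its static threshold."""
    return sorted(
        garage
        for garage, count in counts.items()
        if count > thresholds.get(garage, float("inf"))
    )

# Hypothetical reference data as it might look after loading from S3.
thresholds = {"garage-a": 100, "garage-b": 250}
counts = {"garage-a": 120, "garage-b": 200, "garage-c": 50}

print(garages_over_threshold(counts, thresholds))  # ['garage-a']
```

Garages with no entry in the reference data are never flagged, which matches the idea of the thresholds acting as a static allow-list of monitored garages.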

Suggested Answer: B

Contribute your Thoughts:

Galen
12 months ago
I agree with User1; option D seems like a strategic approach to improving the COPY process by applying sharding.
upvoted 0 times

Gracia
12 months ago
I disagree. I believe option B would be more effective, because splitting the files to match the number of slices in the Redshift cluster would optimize the COPY process.
upvoted 0 times

Tammara
1 year ago
I think option D would be the best solution for accelerating the COPY process.
upvoted 0 times

Peggy
1 year ago
That's true. Sharding based on DISTKEY columns could be worth considering.
upvoted 0 times

Melissa
1 year ago
But what about option D? Applying sharding could also improve the COPY process.
upvoted 0 times

Anastacia
1 year ago
I agree. Splitting the files to match the number of slices in the Redshift cluster makes sense.
upvoted 0 times

Peggy
1 year ago
I think option B would be the best solution.
upvoted 0 times

Lashanda
1 year ago
So, yeah, option B seems like the most practical solution for accelerating the COPY process.
upvoted 0 times

Lenora
1 year ago
Ultimately, that would lead to faster data loading into the Redshift cluster.
upvoted 0 times

Margery
1 year ago
And having the right number of files could improve parallelism during the COPY operation.
upvoted 0 times

Melita
1 year ago
It would ensure that the workload is evenly distributed across the cluster.
upvoted 0 times

Gerri
1 year ago
That could definitely help optimize the COPY process and make it more efficient.
upvoted 0 times

Candra
1 year ago
I agree, splitting the files based on the number of slices in the Redshift cluster makes sense.
upvoted 0 times
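The file-splitting idea discussed in the thread, matching the number of input files to the number of slices so that COPY can load one file per slice in parallel, can be sketched as follows. This is a local illustration only; the slice count, file names, bucket, and table are hypothetical stand-ins:

```python
# Split one large delimited file into N roughly equal parts so that a
# Redshift COPY pointed at a common S3 prefix can load one part per
# slice in parallel. N = 4 stands in for a hypothetical 4-slice cluster.
from pathlib import Path

def split_for_slices(path, n_slices, out_dir):
    """Write n_slices part files and return their paths."""
    lines = Path(path).read_text().splitlines(keepends=True)
    out_dir = Path(out_dir)
    out_dir.mkdir(exist_ok=True)
    chunk = -(-len(lines) // n_slices)  # ceiling division
    parts = []
    for i in range(n_slices):
        part = out_dir / f"{Path(path).stem}_part_{i:02d}"
        part.write_text("".join(lines[i * chunk:(i + 1) * chunk]))
        parts.append(part)
    return parts

# Example: 100 rows split for a 4-slice cluster -> 4 files of 25 rows each.
Path("vehicles.csv").write_text("".join(f"row{i}\n" for i in range(100)))
parts = split_for_slices("vehicles.csv", 4, "out")
print([p.name for p in parts])

# After uploading the parts to an S3 prefix, a single COPY against that
# prefix loads them in parallel (bucket and role are placeholders):
#   COPY vehicle_counts FROM 's3://<bucket>/garage-data/vehicles_part_'
#   IAM_ROLE '<role-arn>' FORMAT AS CSV;
```

Keeping the part count equal to (or a small multiple of) the cluster's slice count is the property the commenters are pointing at: each slice gets roughly the same amount of work, so no slice sits idle during the load.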

