Welcome to Pass4Success


Amazon Exam DAS-C01 Topic 2 Question 83 Discussion

Actual exam question for Amazon's DAS-C01 exam
Question #: 83
Topic #: 2
[All DAS-C01 Questions]

An education provider's learning management system (LMS) is hosted in a 100 TB data lake that is built on Amazon S3. The provider's LMS supports hundreds of schools. The provider wants to build an advanced analytics reporting platform using Amazon Redshift to handle complex queries with optimal performance. System users will query the most recent 4 months of data 95% of the time while 5% of the queries will leverage data from the previous 12 months.

Which solution meets these requirements in the MOST cost-effective way?

Suggested Answer: A
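For the hot/cold access profile in the question (95% of queries hit the last 4 months, 5% reach back 12 months), a common cost-effective pattern is to keep only recent data in Redshift and query the older S3 data in place through Redshift Spectrum. A minimal sketch, where every schema, table, column, and role name is illustrative rather than taken from the question:

```sql
-- All names below are hypothetical; adjust to the actual data lake layout.
-- External schema pointing at the S3 data lake via the Glue Data Catalog.
CREATE EXTERNAL SCHEMA lms_spectrum
FROM DATA CATALOG
DATABASE 'lms_lake'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole';

-- Hot data (most recent 4 months) lives in a local Redshift table,
-- sorted on the timestamp the frequent queries filter on.
CREATE TABLE lms_events_recent (
    school_id   BIGINT,
    student_id  BIGINT,
    event_time  TIMESTAMP,
    event_type  VARCHAR(64)
)
DISTKEY (school_id)
SORTKEY (event_time);

-- The 5% of queries that need the full 12 months can UNION the local
-- (hot) table with the Spectrum (cold) data without loading it.
SELECT school_id, COUNT(*) AS events
FROM (
    SELECT school_id FROM lms_events_recent
    UNION ALL
    SELECT school_id
    FROM lms_spectrum.lms_events_archive
    WHERE event_time < DATEADD(month, -4, GETDATE())
) t
GROUP BY school_id;
```

This keeps Redshift storage proportional to the hot 4 months while the remaining history stays in cheaper S3 storage and is only scanned (and billed) when the rarer 12-month queries run.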

Contribute your Thoughts:

Pearlene
1 year ago
Haha, I hear you. Spark can be a bit intimidating, but it's worth it in the long run. Although, I have to say, option D also seems like a decent choice. Stored procedures in Redshift can be pretty efficient, and we wouldn't have to worry about the complexity of Spark. Decisions, decisions...
upvoted 0 times
...
Jin
1 year ago
B) Use AWS Glue to create an Apache Spark job that joins the fact table with the dimensions. Load the data into a new table
upvoted 0 times
...
Afton
1 year ago
A) Use QuickSight to modify the current dataset to use SPICE
upvoted 0 times
...
...
Douglass
1 year ago
Ooh, I like that idea! Plus, with Spark, we can leverage its powerful processing capabilities to handle those billions of records. I'm definitely leaning towards option B as well. Though, I have to admit, the thought of dealing with Spark makes my head spin a little. Maybe I should have studied more during the Spark training session.
upvoted 0 times
...
Yuonne
1 year ago
I'm not sure about that. Materialized views can be great, but they require manual maintenance and refreshes. What if the data changes frequently? I think option B, using AWS Glue to create a Spark job, might be a better approach. That way, the data is automatically updated and we don't have to worry about manual maintenance.
upvoted 0 times
...
Cristy
1 year ago
Hmm, this is a tricky one. We need to find a way to speed up the response time without too much implementation effort. I'm leaning towards option C - creating a materialized view in Amazon Redshift. That way, the data is already pre-joined and ready for QuickSight to use, which should improve the performance.
upvoted 0 times
...
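The pre-joined materialized view discussed in the thread can be sketched in Redshift SQL. All table and column names here are illustrative, not from the question; note also that Redshift supports auto refresh on materialized views, so maintenance need not be fully manual:

```sql
-- Illustrative fact/dimension names; not part of the original question.
CREATE MATERIALIZED VIEW reporting_mv
AUTO REFRESH YES
AS
SELECT f.order_id,
       f.order_date,
       f.amount,
       d.school_name,
       d.region
FROM fact_orders f
JOIN dim_schools d ON d.school_id = f.school_id;

-- On-demand refresh, if auto refresh is not enabled:
REFRESH MATERIALIZED VIEW reporting_mv;
```

Because the join is precomputed, a BI tool such as QuickSight can query `reporting_mv` directly instead of re-joining the fact and dimension tables on every dashboard load.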

