Welcome to Pass4Success


Amazon Exam DOP-C01 Topic 2 Question 85 Discussion

Actual exam question for Amazon's DOP-C01 exam
Question #: 85
Topic #: 2

A company is migrating an application to AWS that runs on a single Amazon EC2 instance. Because of licensing limitations, the application does not support horizontal scaling. The application will be using Amazon Aurora for its database. How can the DevOps Engineer architect automated healing to automatically recover from EC2 and Aurora failures, in addition to recovering across Availability Zones (AZs), in the MOST cost-effective manner?

Suggested Answer: C
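The option text is not reproduced on this page, but the pattern usually cited as the correct answer for this question is an EC2 Auto Scaling group with minimum, maximum, and desired capacity all set to 1 spanning subnets in two Availability Zones (so a failed instance is replaced, including in another AZ, without paying for a second licensed instance), plus an Aurora cluster with a replica in a second AZ for automatic database failover. A minimal CloudFormation sketch of that pattern follows; all resource names, subnet IDs, and the launch template ID are placeholders, not values from the question:

```yaml
# Sketch only: subnet IDs, launch template ID, and credentials are placeholders.
Resources:
  AppAsg:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: '1'          # exactly one instance at all times,
      MaxSize: '1'          # satisfying the licensing limitation
      DesiredCapacity: '1'
      VPCZoneIdentifier:    # subnets in two different AZs for cross-AZ recovery
        - subnet-aaaa1111
        - subnet-bbbb2222
      LaunchTemplate:
        LaunchTemplateId: lt-0123456789abcdef0   # placeholder
        Version: '1'

  AuroraCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora-mysql
      MasterUsername: admin                      # placeholder credentials
      MasterUserPassword: use-secrets-manager    # placeholder

  AuroraWriter:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: aurora-mysql
      DBClusterIdentifier: !Ref AuroraCluster
      DBInstanceClass: db.r6g.large

  AuroraReplica:                                 # second instance in another AZ
    Type: AWS::RDS::DBInstance                   # enables automatic failover
    Properties:
      Engine: aurora-mysql
      DBClusterIdentifier: !Ref AuroraCluster
      DBInstanceClass: db.r6g.large
```

With min = max = 1, Auto Scaling never runs two licensed copies at once; it only replaces a failed instance, which is what makes this the cost-effective choice over a warm standby.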

Contribute your Thoughts:

Shawnda
1 year ago
Ah, the age-old debate: DynamoDB vs. Redis. You both make good points. I think it really comes down to the specific data access patterns and requirements. If the company needs to prioritize fast reads over writes, then yeah, the Redis option could be a better fit. But either way, the global distribution is key, so those are the two best options in my opinion.
upvoted 0 times
Aretha
1 year ago
Hmm, I'm not so sure about DynamoDB. While it's a great choice for global data distribution, I'm wondering if the write performance might be a bottleneck with that many users. Maybe something more optimized for reads, like Redis, could work better? Option D with ElastiCache Redis replication groups might be worth considering.
upvoted 0 times
Reita
1 year ago
Yeah, I agree that DynamoDB global tables seem like the way to go here. The automatic replication across Regions should meet the need for low latency and data availability. Plus, DynamoDB is designed to handle massive amounts of traffic, so the 20-30 million users won't be an issue.
upvoted 0 times
Alysa
1 year ago
Exactly, DynamoDB is built for high scalability and performance, making it a strong choice for this use case.
upvoted 0 times
Stephen
1 year ago
And with DynamoDB's ability to handle large amounts of traffic, the user base should not be a problem.
upvoted 0 times
Lashonda
1 year ago
Absolutely, automatic replication across Regions will definitely help with minimizing latency.
upvoted 0 times
Ricki
1 year ago
That's what I was thinking too. DynamoDB global tables seem like the best fit for this scenario.
upvoted 0 times
Kristel
1 year ago
C) Implement Amazon DynamoDB global tables in each of the six Regions.
upvoted 0 times
Cheryl
1 year ago
This is a tricky question. The key requirements here are low latency and global data availability. I'm thinking that option C, Amazon DynamoDB global tables, might be the best solution. DynamoDB can handle the high user volume and the global tables feature will ensure the data is accessible across all six Regions.
upvoted 0 times
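Since several commenters converge on DynamoDB global tables for multi-Region, low-latency access, here is a minimal CloudFormation sketch of what that looks like. The table name, key schema, and Region list are illustrative assumptions (the thread mentions six Regions; only three are shown, one `Replicas` entry per Region):

```yaml
# Sketch only: table name, attributes, and Regions are placeholders.
Resources:
  SessionGlobalTable:
    Type: AWS::DynamoDB::GlobalTable
    Properties:
      TableName: user-sessions
      BillingMode: PAY_PER_REQUEST      # scales with traffic, no capacity planning
      AttributeDefinitions:
        - AttributeName: userId
          AttributeType: S
      KeySchema:
        - AttributeName: userId
          KeyType: HASH
      StreamSpecification:              # streams are required for replication
        StreamViewType: NEW_AND_OLD_IMAGES
      Replicas:                         # one entry per Region; add the rest as needed
        - Region: us-east-1
        - Region: eu-west-1
        - Region: ap-southeast-1
```

Each replica is a fully writable copy, and DynamoDB replicates changes across Regions automatically, which is what addresses the latency and availability points raised above.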

