Amazon Exam SCS-C02 Topic 6 Question 27 Discussion

Actual exam question for Amazon's SCS-C02 exam
Question #: 27
Topic #: 6

A company's data scientists want to create artificial intelligence and machine learning (AI/ML) training models by using Amazon SageMaker. The training models will use large datasets in an Amazon S3 bucket. The datasets contain sensitive information.

On average, the data scientists need 30 days to train models. The S3 bucket has been secured appropriately. The company's data retention policy states that all data that is older than 45 days must be removed from the S3 bucket.

Which action should a security engineer take to enforce this data retention policy?

Suggested Answer: A

An S3 Lifecycle configuration with an expiration action is the most direct way to enforce the retention policy: S3 deletes objects automatically 45 days after they are created, with no custom code to build or maintain. Because training takes about 30 days on average, a 45-day expiration removes the datasets on schedule without interrupting training runs. Invoking an AWS Lambda function for every PutObject operation adds cost and operational overhead, and S3 Intelligent-Tiering only transitions objects between storage classes rather than deleting them.
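
A minimal sketch of such a lifecycle rule using boto3; the bucket name and rule ID below are placeholders, not values from the question:

import boto3

s3 = boto3.client("s3")

# Expire (permanently delete) every object 45 days after its creation date.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-training-datasets",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "delete-after-45-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply the rule to all objects
                "Expiration": {"Days": 45},
            }
        ]
    },
)

Because S3 evaluates the rule itself, there is no Lambda function, event notification, or schedule to maintain.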


Contribute your Thoughts:

Katy
16 days ago
Option E: Hire a team of highly trained squirrels to scurry around the S3 bucket and manually delete the old files. They'd probably do it faster than any AWS service!
upvoted 0 times
Juliann
7 days ago
B) Create an AWS Lambda function to check the last-modified date of the S3 objects and delete objects that are older than 45 days. Create an S3 event notification to invoke the Lambda function for each PutObject operation.
upvoted 0 times
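
A rough sketch of the Lambda-based approach quoted above, assuming a hypothetical bucket name passed in through a BUCKET_NAME environment variable (not part of the question):

import os
from datetime import datetime, timezone, timedelta

import boto3

s3 = boto3.client("s3")
CUTOFF = timedelta(days=45)

def handler(event, context):
    """Delete every object whose last-modified date is more than 45 days old."""
    bucket = os.environ["BUCKET_NAME"]  # hypothetical configuration value
    now = datetime.now(timezone.utc)
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            if now - obj["LastModified"] > CUTOFF:
                s3.delete_object(Bucket=bucket, Key=obj["Key"])

Wiring this to an S3 event notification for every PutObject call means the whole bucket is re-listed on each upload, which is the cost and scaling concern raised later in the thread.
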
Albina
25 days ago
Option D is interesting, but I'm not sure if S3 Intelligent-Tiering is the right choice here. We need to make sure the data is completely deleted, not just transitioned to a different storage class.
upvoted 0 times
Chery
6 days ago
Yeah, we need to ensure the data is completely removed from the S3 bucket, so option A is the way to go.
upvoted 0 times
Altha
7 days ago
I agree, that seems like the most straightforward solution to enforce the data retention policy.
upvoted 0 times
Kimbery
10 days ago
I think option A is the best choice. It will automatically delete objects after 45 days.
upvoted 0 times
Darrin
1 month ago
Haha, Option C is like hiring a personal assistant to clean up your room every month. Seems a bit overkill for this use case.
upvoted 0 times
Franchesca
14 days ago
Option A seems like the most straightforward solution.
upvoted 0 times
Lai
1 month ago
I'm not sure if Option B is the best approach. Invoking a Lambda function for every PutObject operation could get expensive and might not scale well.
upvoted 0 times
Hermila
14 days ago
Yeah, I agree. It's a more straightforward approach compared to constantly invoking a Lambda function for every PutObject operation.
upvoted 0 times
Cristy
18 days ago
I think Option A is the best choice. Setting up an S3 Lifecycle rule to delete objects after 45 days seems like a simple and effective solution.
upvoted 0 times
Christa
1 month ago
I see both sides, but I think option C is a good compromise. It automates the process while also being scheduled monthly.
upvoted 0 times
Na
1 month ago
I disagree, I believe option B is more efficient. Using a Lambda function to automatically delete old objects seems like a better solution.
upvoted 0 times
Aretha
1 month ago
Option A seems like the simplest and most straightforward solution. Why complicate things with Lambda functions and EventBridge when we can just set a lifecycle rule?
upvoted 0 times
Donte
7 days ago
Using Lambda functions and EventBridge might be overcomplicating things in this scenario.
upvoted 0 times
Nan
14 days ago
I agree, setting a lifecycle rule on the S3 bucket to delete objects after 45 days is the most efficient way to enforce the data retention policy.
upvoted 0 times
Bette
1 month ago
Option A seems like the simplest and most straightforward solution.
upvoted 0 times
Yuette
2 months ago
I think option A is the best choice. It's simple and directly enforces the data retention policy.
upvoted 0 times
