Welcome to Pass4Success

Google Exam Professional Cloud Database Engineer Topic 10 Question 14 Discussion

Actual exam question for Google's Professional Cloud Database Engineer exam
Question #: 14
Topic #: 10

You are choosing a database backend for a new application. The application will ingest data points from IoT sensors. You need to ensure that the application can scale up to millions of requests per second with sub-10ms latency and store up to 100 TB of history. What should you do?

Suggested Answer: C
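Most commenters below converge on Bigtable (the option lettering varies between posts, so don't lean on the letter alone). As a rough sizing sketch for the "add nodes for throughput" idea — assuming Google's published planning figure of roughly 10,000 simple 1 KB writes per second per SSD node, which varies by workload — the arithmetic is just ceiling division:

```python
# Back-of-the-envelope Bigtable node count for the stated workload.
# ASSUMPTION: ~10,000 simple 1 KB writes/sec per SSD node, Google's
# rough planning figure; real throughput depends on the workload.
WRITES_PER_SEC_PER_NODE = 10_000

def min_nodes(target_writes_per_sec: int) -> int:
    """Smallest node count that covers the target write rate."""
    return -(-target_writes_per_sec // WRITES_PER_SEC_PER_NODE)  # ceiling div

# "Millions of requests per second" -> hundreds of nodes:
print(min_nodes(1_000_000))   # -> 100
print(min_nodes(3_000_000))   # -> 300
```

The point is that Bigtable throughput scales roughly linearly with node count, so "add nodes as necessary" is a workable answer to the millions-of-requests requirement.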

Contribute your Thoughts:

Gracia
1 month ago
I'm just picturing the poor database admin trying to keep up with adding Bigtable nodes like a hamster on a wheel. 'Wheee, another node! Wheee, another node!'
upvoted 0 times
Evelynn
2 days ago
D) Bigtable seems like the best option for handling the required throughput.
upvoted 0 times
Lavonne
3 days ago
C) Memorystore for Memcached sounds like a good choice for adding nodes as needed.
upvoted 0 times
Tijuana
8 days ago
B) I think Firestore would be a better option for automatic scaling.
upvoted 0 times
Kenneth
11 days ago
A) Use Cloud SQL with read replicas for throughput.
upvoted 0 times
Melodie
1 month ago
Cloud SQL with read replicas? Really? That's like trying to use a bicycle to haul a freight train. This application needs serious big-data firepower, not your grandpa's SQL database.
upvoted 0 times
Hoa
4 days ago
D) Use Bigtable, and add nodes as necessary to achieve the required throughput.
upvoted 0 times
Sherron
15 days ago
A) Use Bigtable, and add nodes as necessary to achieve the required throughput.
upvoted 0 times
Cassie
1 month ago
Ooh, Memorystore for Memcached? That's an interesting idea! But I'm not sure if it can keep up with the insane throughput and data volume this application needs. Definitely not a good fit in my opinion.
upvoted 0 times
Aileen
2 months ago
I'm not so sure about Bigtable. What about Firestore? It's serverless, so you don't have to worry about scaling it up yourself. And it can probably handle the data volume and throughput requirements.
upvoted 0 times
Lezlie
6 days ago
D) Use Bigtable, and add nodes as necessary to achieve the required throughput.
upvoted 0 times
Martina
11 days ago
B) Use Firestore, and rely on automatic serverless scaling.
upvoted 0 times
Taryn
19 days ago
A) Use Cloud SQL with read replicas for throughput.
upvoted 0 times
Kristofer
2 months ago
Hmm, I think option D is the way to go. Bigtable can handle massive amounts of data and scale up to millions of requests per second. Plus, it's designed for time-series data like IoT sensor data.
upvoted 0 times
Ashlyn
2 months ago
Yeah, it's definitely built for handling large amounts of data and high throughput.
upvoted 0 times
Leota
2 months ago
I agree, Bigtable seems like the best choice for this scenario.
upvoted 0 times
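Seconding the time-series point above: the usual Bigtable pattern for IoT data puts the sensor ID first in the row key, so writes spread across tablets instead of hotspotting on the newest timestamp. A minimal sketch (sensor IDs and the key format are invented for illustration):

```python
from datetime import datetime, timezone

def row_key(sensor_id: str, ts: datetime) -> str:
    # Sensor ID first: one sensor's readings stay contiguous (cheap
    # range scans), while concurrent sensors land on different tablets,
    # avoiding the hotspot a timestamp-first key would create.
    return f"{sensor_id}#{ts.strftime('%Y%m%d%H%M%S')}"

print(row_key("sensor-042", datetime(2024, 7, 4, 12, 0, tzinfo=timezone.utc)))
# -> sensor-042#20240704120000
```

With keys like this, "give me sensor-042's last hour" is a single contiguous row-range scan, which is exactly the access pattern Bigtable is built for.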
Helene
2 months ago
I'm leaning towards option A, Cloud SQL with read replicas, as it provides good throughput and reliability for our needs.
upvoted 0 times
Juliana
2 months ago
I disagree, I believe option B, Firestore, would be better as it offers automatic scaling and is serverless.
upvoted 0 times
Rolande
3 months ago
I think we should go with option D, Bigtable, because it can handle massive amounts of data and scale easily.
upvoted 0 times
