Salesforce Exam Heroku Architect Topic 1 Question 20 Discussion

Actual exam question for Salesforce's Heroku Architect exam
Question #: 20
Topic #: 1

Universal Containers (UC) uses Apache Kafka on Heroku to stream shipment inventory data in real time throughout the world. A Kafka topic is used to send messages with updates on the shipping container GPS coordinates as they are in transit. UC is using a Heroku Kafka basic-0 plan. The topic was provisioned with 8 partitions, 1 week of retention, and no compaction. The keys for the events are being assigned by Heroku Kafka, which means they will be randomly distributed across the partitions.

UC has a single-dyno consumer application that persists the data to their Enterprise Data Warehouse (EDW). Recently, they've been noticing data loss in the EDW.

What should an Architect with Kafka experience recommend?

Suggested Answer: D
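
Most of the discussion below converges on the consumer-side approach in option C: record message receipts so that at-least-once redelivery does not produce duplicate EDW rows, and run one consumer process per partition. A minimal sketch of that idea, assuming a Python consumer built with kafka-python and Heroku Redis; the topic name, environment variables, and the persist_to_edw helper are placeholders, and Heroku Kafka's SSL configuration is omitted:

import json
import os

import redis
from kafka import KafkaConsumer

# Placeholder connection; on Heroku the REDIS_URL config var is set by the Redis add-on.
r = redis.Redis.from_url(os.environ["REDIS_URL"])


def persist_to_edw(record):
    # Placeholder for the actual write into the Enterprise Data Warehouse.
    print("persisted", record)


consumer = KafkaConsumer(
    "shipment-gps-updates",                   # placeholder topic name
    bootstrap_servers=os.environ["KAFKA_URL"],
    group_id="edw-loader",                     # all consumer dynos share one group
    enable_auto_commit=False,                  # commit only after a successful write
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for msg in consumer:
    # Kafka delivers at-least-once, so the same record can arrive more than once.
    # SET ... NX in Redis makes the EDW write idempotent: persist only on first sight.
    dedupe_key = f"processed:{msg.topic}:{msg.partition}:{msg.offset}"
    # Keep the dedupe key for roughly the 1-week retention window.
    if r.set(dedupe_key, 1, nx=True, ex=7 * 24 * 3600):
        persist_to_edw(msg.value)
    consumer.commit()                          # advance the committed offset after handling

On the dyno side, scaling the consumer process type to eight (for example, heroku ps:scale consumer=8) gives one process per partition, so every partition is drained continuously and records reach the EDW before the one-week retention window expires.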

Contribute your Thoughts:

Sang
2 months ago
I like how C tackles the problem from multiple angles - the Redis store and the scaled-up consumers. That's a more robust solution.
upvoted 0 times
Lore
2 days ago
Agreed, C seems like the most comprehensive solution to prevent data loss.
upvoted 0 times
...
Rossana
4 days ago
I think C is the best option here. It covers all the bases.
upvoted 0 times
...
Junita
6 days ago
B) Upgrade to a larger Apache Kafka for Heroku plan, which has greater data capacity.
upvoted 0 times
...
Barabara
19 days ago
A) Enable compaction on the topic, which will drop older messages with the same key.
upvoted 0 times
...
Nan
1 month ago
C) Use Heroku Redis to store message receipt information to account for 'at-least-once' delivery, which will guarantee that messages are never processed more than once. Scale up the consumer dynos to match the number of partitions so that there is one process for each partition.
upvoted 0 times
...
...
Zoila
2 months ago
Compaction might help with older messages, but it won't address the data loss issue. C is the best option to guarantee message delivery.
upvoted 0 times
Stefanie
10 days ago
C
upvoted 0 times
...
Argelia
13 days ago
A
upvoted 0 times
...
...
Howard
2 months ago
Option B seems like overkill. Upgrading the Kafka plan is not necessary if the issue is with the consumer application.
upvoted 0 times
Kenneth
6 days ago
Option B seems like overkill. Upgrading the Kafka plan is not necessary if the issue is with the consumer application.
upvoted 0 times
...
Leonida
11 days ago
C) Use Heroku Redis to store message receipt information to account for 'at-least-once' delivery, which will guarantee that messages are never processed more than once. Scale up the consumer dynos to match the number of partitions so that there is one process for each partition.
upvoted 0 times
...
Lashawn
13 days ago
A) Enable compaction on the topic, which will drop older messages with the same key.
upvoted 0 times
...
Leoma
14 days ago
Option B seems like overkill. Upgrading the Kafka plan is not necessary if the issue is with the consumer application.
upvoted 0 times
...
Lashanda
22 days ago
C) Use Heroku Redis to store message receipt information to account for 'at-least-once' delivery, which will guarantee that messages are never processed more than once. Scale up the consumer dynos to match the number of partitions so that there is one process for each partition.
upvoted 0 times
...
Whitley
1 month ago
A) Enable compaction on the topic, which will drop older messages with the same key.
upvoted 0 times
...
...
Amina
2 months ago
I think the correct answer is C. Using Heroku Redis to store message receipt information and scaling up the consumer dynos will help ensure at-least-once delivery and prevent data loss.
upvoted 0 times
...
Tamie
2 months ago
I believe upgrading to a larger Apache Kafka plan might also solve the issue.
upvoted 0 times
...
Norah
2 months ago
I agree with Cassi. Compaction will help prevent data loss in the EDW.
upvoted 0 times
...
Cassi
2 months ago
I think we should enable compaction on the topic to drop older messages.
upvoted 0 times
...
Sharika
2 months ago
I believe upgrading to a larger Apache Kafka plan might also solve the issue.
upvoted 0 times
...
Rodrigo
2 months ago
I agree with Ellsworth. Compaction will help prevent data loss in the EDW.
upvoted 0 times
...
Ellsworth
3 months ago
I think we should enable compaction on the topic to drop older messages.
upvoted 0 times
...
