
Confluent Exam CCDAK Topic 4 Question 74 Discussion

Actual exam question for Confluent's CCDAK exam
Question #: 74
Topic #: 4

Compaction is enabled for a topic in Kafka by setting log.cleanup.policy=compact. What is true about log compaction?

Suggested Answer: C

Kafka's new bidirectional client compatibility introduced in 0.10.2 allows this. Read more here: https://www.confluent.io/blog/upgrading-apache-kafka-clients-just-got-easier/
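The compaction semantics the options above disagree about can be sketched in a few lines of Python. This is a simplified model of the behavior, not Kafka's actual log cleaner: after cleanup, only the latest record per key survives, and surviving records keep their original offsets.

```python
def compact(log):
    """Model Kafka log compaction: log is a list of (offset, key, value) tuples."""
    latest = {}
    for offset, key, value in log:
        latest[key] = (offset, key, value)  # later records overwrite earlier ones
    # Surviving records keep their original offsets and relative order.
    return sorted(latest.values())

log = [(0, "k1", "a"), (1, "k2", "b"), (2, "k1", "c")]
print(compact(log))  # [(1, 'k2', 'b'), (2, 'k1', 'c')]
```

Note that the record at offset 0 is deleted, but the surviving records still carry offsets 1 and 2: compaction removes superseded records, it never renumbers the ones that remain.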


Contribute your Thoughts:

Cristina
2 months ago
Haha, I bet the exam writers are trying to trick us with these options. Kafka is all about distributed logging, so 'compaction' must be some sort of magic that makes it all work, right?
upvoted 0 times
Leanna
8 days ago
Compaction changes the offset of messages
upvoted 0 times
...
Jackie
23 days ago
D) After cleanup, only one message per key is retained with the latest value
upvoted 0 times
...
Rolande
1 month ago
A) After cleanup, only one message per key is retained with the first value
upvoted 0 times
...
...
Jeannetta
2 months ago
This is a tricky one. I know compaction changes the offsets, so I'm not sure if option A or D is the right choice. I'll have to think about this one more.
upvoted 0 times
...
Carmelina
2 months ago
I'm a bit confused. Isn't log compaction supposed to compress the messages as well? Option B seems like it could be the answer.
upvoted 0 times
Orville
1 month ago
Gilma is right. Log compaction actually removes duplicate keys and retains the latest value.
upvoted 0 times
...
Gilma
2 months ago
Option B is not correct. Log compaction does not compress messages.
upvoted 0 times
...
...
Lorrie
2 months ago
Hmm, I was thinking option C was the right answer. Doesn't Kafka de-duplicate messages based on the key hash during log compaction?
upvoted 0 times
...
Xochitl
2 months ago
I'm not sure about this. I think the answer might be A) After cleanup, only one message per key is retained with the first value. It could be more efficient to keep the initial value for each key.
upvoted 0 times
...
Torie
2 months ago
I think option D is the correct answer. Log compaction retains only the latest value for each unique key.
upvoted 0 times
Gracia
23 days ago
Compaction changes the offset of messages
upvoted 0 times
...
Jacqueline
25 days ago
D) After cleanup, only one message per key is retained with the latest value
upvoted 0 times
...
Virgie
2 months ago
A) After cleanup, only one message per key is retained with the first value
upvoted 0 times
...
...
Mireya
2 months ago
I agree with Lewis. It's important to retain the latest value for each key after compaction to ensure the most up-to-date information is available in the topic.
upvoted 0 times
...
Lewis
2 months ago
I think the answer is D) After cleanup, only one message per key is retained with the latest value. It makes sense to keep the most recent value for each key.
upvoted 0 times
...
