Confluent Exam CCDAK Topic 4 Question 63 Discussion

Actual exam question for Confluent's CCDAK exam
Question #: 63
Topic #: 4

You have a Kafka cluster in which every topic has a replication factor of 3. An intern at your company stopped a broker and accidentally deleted all of that broker's data on disk. What will happen if the broker is restarted?
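
(For context, a topic with replication factor 3 can be created programmatically. The sketch below uses Kafka's Java AdminClient; the bootstrap address localhost:9092 and the topic name "orders" are illustrative assumptions, not details from the question.)

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed bootstrap address; point this at your own cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 6 partitions, each replicated to 3 brokers, matching the scenario above.
            NewTopic topic = new NewTopic("orders", 6, (short) 3);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}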

Suggested Answer: C

When the broker is restarted with an empty log directory but the same broker.id, it rejoins the cluster and re-replicates all of its partition data from the current leaders. It stays out of the in-sync replica set, and is not eligible to become leader, until it has fully caught up, so with a replication factor of 3 the remaining replicas preserve the data and nothing is lost.
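
One way to observe this recovery is to watch each partition's replica and in-sync replica (ISR) lists while the wiped broker catches up. Below is a minimal sketch using the Java AdminClient from kafka-clients 3.1+; the bootstrap address localhost:9092 and the topic name "orders" are assumptions for illustration.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

public class IsrCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed bootstrap address; replace with your own brokers.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // "orders" is a hypothetical topic created with replication factor 3.
            Map<String, TopicDescription> topics = admin
                    .describeTopics(Collections.singletonList("orders"))
                    .allTopicNames()
                    .get();

            for (TopicPartitionInfo p : topics.get("orders").partitions()) {
                // While the wiped broker is re-replicating, it appears in replicas()
                // but not in isr(); it rejoins the ISR only once fully caught up.
                System.out.printf("partition %d leader=%s replicas=%s isr=%s%n",
                        p.partition(), p.leader(), p.replicas(), p.isr());
            }
        }
    }
}

While the broker is still re-replicating, it shows up in the replicas list but not in the ISR; once it is back in the ISR for all of its partitions, it is again eligible to lead them.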


Contribute your Thoughts:

Margurite
1 month ago
Oh boy, I can just picture the intern panicking and frantically hitting the restart button. 'Please don't crash, please don't crash!' Gotta keep that Kafka cluster healthy, folks!
upvoted 0 times
Tawanna
13 days ago
B) The broker will start, and won't be online until all the data it needs to have is replicated from other leaders
upvoted 0 times
...
Sean
15 days ago
A) The broker will start, and other topics will also be deleted as the broker data on the disk got deleted
upvoted 0 times
...
...
Billy
1 month ago
Hmm, I'm going with B. As long as the replication factor is 3, the other brokers should have the data covered. Though I bet the intern's getting an earful for that little incident.
upvoted 0 times
...
Paulina
1 month ago
C, the broker will crash. I mean, come on, deleting all the data? That's a surefire way to make a broker go down in flames. Literally.
upvoted 0 times
...
Kimbery
1 month ago
D! The broker will start without any data, and if it becomes a leader, we'll have data loss. Ouch, that's a tough situation. Gotta be extra careful with those brokers.
upvoted 0 times
Nina
12 days ago
Oh no, that sounds risky. Hopefully, the replication process happens smoothly and we don't lose any important data.
upvoted 0 times
...
Cyndy
18 days ago
B) The broker will start, and won't be online until all the data it needs to have is replicated from other leaders
upvoted 0 times
...
Tamar
1 month ago
D! The broker will start without any data, and if it becomes a leader, we'll have data loss. Ouch, that's a tough situation. Gotta be extra careful with those brokers.
upvoted 0 times
...
...
Samira
2 months ago
I think the correct answer is B. The broker will start, but it won't be online until all the data it needs is replicated from the other leaders. Losing a broker is never good, but at least the data is safe.
upvoted 0 times
Clare
14 days ago
I agree. It's good practice to use a replication factor greater than 1 in Kafka clusters to prevent data loss in case of failures.
upvoted 0 times
...
Elden
24 days ago
Yes, that's correct. It's important to have a replication factor greater than 1 to ensure data availability and fault tolerance.
upvoted 0 times
...
Ludivina
1 month ago
I think the correct answer is B. The broker will start, but it won't be online until all the data it needs is replicated from the other leaders. Losing a broker is never good, but at least the data is safe.
upvoted 0 times
...
...
Mable
2 months ago
I'm not sure, but I think the answer might be D. If the broker becomes leader without any data, there could be data loss.
upvoted 0 times
...
Ronny
2 months ago
I agree with Arletta. If the data needs to be replicated, it makes sense that the broker won't be online right away.
upvoted 0 times
...
Arletta
3 months ago
I think the answer is B. The broker won't be online until all the data is replicated from other leaders.
upvoted 0 times
...
