Welcome to Pass4Success


Cloudera Exam CCA175 Topic 4 Question 57 Discussion

Actual exam question for Cloudera's CCA175 exam
Question #: 57
Topic #: 4
[All CCA175 Questions]

Problem Scenario 92: You have been given a Spark Scala application bundled in a jar named hadoopexam.jar.

Your application class name is com.hadoopexam.MyTask

You want the submitted application to launch its driver on one of the cluster nodes.

Please complete the following command to submit the application.

spark-submit XXX --master yarn \

YYY $SPARK_HOME/lib/hadoopexam.jar 10

Suggested Answer: B
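
For reference, here is a sketch of the command with both blanks filled in as the thread below concludes. It assumes `$SPARK_HOME` is set in the environment and that the application jar sits at `$SPARK_HOME/lib/hadoopexam.jar`, as the question states; the snippet only assembles and prints the command rather than submitting to a real cluster.

```shell
# Sketch of the completed spark-submit invocation (assumes $SPARK_HOME is set
# and the jar is at $SPARK_HOME/lib/hadoopexam.jar, per the question).
#   XXX -> --class com.hadoopexam.MyTask   (main class of the application)
#   YYY -> --deploy-mode cluster           (run the driver on a cluster node)
CMD="spark-submit --class com.hadoopexam.MyTask --master yarn --deploy-mode cluster \$SPARK_HOME/lib/hadoopexam.jar 10"
echo "$CMD"
```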

Contribute your Thoughts:

Cecilia
4 months ago
Got it, so the command should be spark-submit --class com.hadoopexam.MyTask --master yarn --deploy-mode cluster $SPARK_HOME/lib/hadoopexam.jar 10.
upvoted 0 times
...
Eura
4 months ago
And we should specify the jar file path as $SPARK_HOME/lib/hadoopexam.jar.
upvoted 0 times
...
Harris
4 months ago
Yes, that's necessary to launch the driver on one of the cluster nodes.
upvoted 0 times
...
Cecilia
4 months ago
Should we also include --deploy-mode cluster in the command?
upvoted 0 times
...
Eura
4 months ago
Yes, that makes sense. It specifies the main class of the application.
upvoted 0 times
...
Harris
5 months ago
I think I should use --class com.hadoopexam.MyTask in the spark-submit command.
upvoted 0 times
...
Belen
5 months ago
I think B) makes more sense because we need to specify deploy mode
upvoted 0 times
...
Cammy
5 months ago
I agree with Svetlana, A) seems like the right choice
upvoted 0 times
...
Curtis
5 months ago
No, I believe the correct answer is B)
upvoted 0 times
...
Svetlana
6 months ago
I think the answer is A)
upvoted 0 times
...
Twila
6 months ago
Hmm, I'm still a bit unsure. Maybe we should double-check the Spark documentation to be sure?
upvoted 0 times
...
Gail
6 months ago
Good point. I think the 'deploy-mode cluster' option is only necessary if you want to explicitly specify that the driver should run on a cluster node, rather than on your local machine.
upvoted 0 times
...
Ngoc
6 months ago
But wait, doesn't the 'spark-submit' command automatically launch the driver on a cluster node by default? Do we really need the 'deploy-mode cluster' part?
upvoted 0 times
...
Scarlet
6 months ago
Yeah, I agree. The 'deploy-mode cluster' option will ensure the driver runs on a cluster node, which is what the question is asking for.
upvoted 0 times
...
Fatima
7 months ago
Okay, let's think this through. The question says we want to launch the driver on one of the cluster nodes, so the 'deploy-mode cluster' option seems like the right choice.
upvoted 0 times
...
Cornell
7 months ago
Hmm, this question seems straightforward enough, but I'm a bit unsure about the 'deploy-mode' part. Does that mean the driver will run on a cluster node or on my local machine?
upvoted 0 times
Hannah
5 months ago
YYY: --deploy-mode cluster
upvoted 0 times
...
Ena
5 months ago
XXX: --class com.hadoopexam.MyTask
upvoted 0 times
...
...
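
To summarize the deploy-mode question raised in the thread: with --deploy-mode client the driver runs on the machine that invokes spark-submit, while --deploy-mode cluster launches it on one of the cluster nodes, which is what this scenario asks for. The default for YARN is client mode, so the flag is needed here. A small illustrative sketch (the function name is hypothetical, used only to show the mapping):

```shell
# driver_location is a hypothetical helper illustrating where the Spark driver
# runs for each --deploy-mode value, per the discussion above:
#   client  -> the machine invoking spark-submit
#   cluster -> one of the cluster nodes
driver_location() {
  case "$1" in
    client)  echo "local machine" ;;
    cluster) echo "cluster node" ;;
  esac
}
driver_location client
driver_location cluster
```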
