
Nutanix Exam NCP-MCI Topic 1 Question 24 Discussion

Actual exam question for Nutanix's NCP-MCI exam
Question #: 24
Topic #: 1

An administrator is implementing a VDI solution. The workload will be a series of persistent desktops in a dedicated storage container within a four-node cluster. Storage optimizations should be set on the dedicated storage container to give optimal performance, including during a node failure event.

Which storage optimizations should the administrator set to meet the requirements?

Suggested Answer: B

A) This statement is incorrect because there is no static threshold set to trigger a critical alert at 6000 MB. The graph shows a peak that goes above 6000 MB, but the alert configuration below does not specify a static threshold at this value.

B) This is the correct statement. The configuration under 'Behavioral Anomaly' is set to alert every time there is an anomaly, with a critical level alert set to trigger when the I/O working set size is between 0 MB and 4000 MB. The graph illustrates that the anomalies (highlighted in pink) occur when the working set size exceeds the normal range (blue band). Therefore, any anomaly detected above 4000 MB would trigger a critical alert.

C) This statement is incorrect because there is no indication that a warning alert is configured to trigger after 3 anomalies. The exhibit does not show any configuration that specifies an alert based on the number of anomalies.

D) This statement is incorrect because there is no indication that a warning alert will be triggered when the I/O working set size exceeds the blue band. The alert settings are configured to ignore anomalies at or below 4000 MB and to trigger a critical alert for anomalies above this threshold.

The settings displayed in the exhibit are part of Nutanix Prism, the infrastructure management platform, which can define thresholds for performance metrics and trigger alerts when those thresholds are crossed. This behavior is outlined in the Prism alert-configuration documentation.
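The alert logic described above can be sketched as a simple classification check. This is an illustrative helper only, not Nutanix code; the function name `classify_anomaly` and the 4000 MB threshold are taken from the explanation's description of the exhibit:

```python
# Threshold taken from the exhibit description: anomalies at or below
# 4000 MB are ignored, anomalies above it raise a critical alert.
CRITICAL_THRESHOLD_MB = 4000


def classify_anomaly(working_set_mb: float, is_anomaly: bool) -> str:
    """Classify one I/O working set size sample.

    Returns "none" for normal samples (inside the blue band),
    "ignored" for anomalies at or below the threshold, and
    "critical" for anomalies above it.
    """
    if not is_anomaly:
        return "none"
    if working_set_mb > CRITICAL_THRESHOLD_MB:
        return "critical"
    return "ignored"
```

Under this model, the pink-highlighted peak above 6000 MB in the graph would classify as "critical", while an anomaly at 3000 MB would be ignored, which is why option B is correct and option D is not.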


Contribute your Thoughts:

Francoise
22 days ago
Ah yes, the age-old question: how many storage optimizations can you fit into a single VDI solution? The answer is always 'more is better', right?
upvoted 0 times
Gaston
23 days ago
Hold on, did someone say 'dedicated storage container'? That sounds like a fancy way of saying 'cloud storage for dummies'.
upvoted 0 times
Lauran
24 days ago
Wait, is this a trick question? Compression and deduplication are the obvious choices. Who needs erasure coding for a VDI solution?
upvoted 0 times
My
27 days ago
I'd go with option B. Deduplication and erasure coding should provide the best balance of storage efficiency and fault tolerance.
upvoted 0 times
Wilda
28 days ago
Compression, deduplication, and erasure coding? That's a lot of optimization! I'm not sure if all of that is necessary for a VDI workload.
upvoted 0 times
Helga
1 month ago
Hmm, I think compression and deduplication would be the best option here. It'll help optimize storage without sacrificing too much performance, even during a node failure.
upvoted 0 times
Pansy
4 days ago
Yes, thin provisioning can definitely help with storage efficiency.
upvoted 0 times
Marsha
8 days ago
I think it's the best choice for ensuring efficiency and resilience in case of a node failure.
upvoted 0 times
Cecily
14 days ago
Thin provisioning could also be helpful to efficiently allocate storage space.
upvoted 0 times
Christa
14 days ago
Yeah, that combination should provide the optimal performance we need.
upvoted 0 times
Paris
15 days ago
I agree, compression and deduplication are key for optimizing storage in this scenario.
upvoted 0 times
Gilma
17 days ago
I agree, compression and deduplication should do the trick.
upvoted 0 times
Devon
2 months ago
Hmm, you might be right. Compression alone may not be enough for a four-node cluster setup.
upvoted 0 times
Brent
2 months ago
I disagree, I believe Deduplication and Erasure Coding would be more beneficial for optimal performance.
upvoted 0 times
Devon
2 months ago
I think the administrator should set Compression only for storage optimizations.
upvoted 0 times
