
Microsoft Exam AZ-204 Topic 16 Question 80 Discussion

Actual exam question for Microsoft's AZ-204 exam
Question #: 80
Topic #: 16

You are developing a solution that will use a multi-partitioned Azure Cosmos DB database. You plan to use the latest Azure Cosmos DB SDK for development.

The solution must meet the following requirements:

Send insert and update operations to an Azure Blob storage account.

Process changes to all partitions immediately.

Allow parallelization of change processing.

You need to process the Azure Cosmos DB operations.

What are two possible ways to achieve this goal? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

Suggested Answer: A
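For context on why the requirements point at the change feed processor, here is a toy sketch in plain Python (not the Azure SDK; all names are illustrative) of the idea behind it: each physical partition gets a lease, leases are balanced across processor instances, and each instance handles its partitions independently, which is what makes parallel, per-partition change processing possible.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy model of change-feed parallelism (illustrative only, not the Azure SDK):
# each physical partition gets a lease, and leases are balanced across
# processor instances so partitions can be handled in parallel.
partitions = {
    "p0": [{"op": "insert", "id": 1}],
    "p1": [{"op": "update", "id": 2}],
    "p2": [{"op": "insert", "id": 3}],
    "p3": [{"op": "update", "id": 4}],
}
workers = ["instance-a", "instance-b"]

# Round-robin lease assignment, standing in for the SDK's load balancing.
leases = {p: workers[i % len(workers)] for i, p in enumerate(partitions)}

def process_partition(partition):
    # In the real solution this handler would write each insert/update
    # to Blob storage; here we just tag each change as processed.
    return [dict(change, handled_by=leases[partition])
            for change in partitions[partition]]

# Each lease owner works independently -> per-partition parallelism.
with ThreadPoolExecutor(max_workers=len(workers)) as pool:
    results = list(pool.map(process_partition, partitions))

processed = [change for batch in results for change in batch]
print(len(processed))  # → 4, one handled change per partition
```

The same lease mechanism underlies both correct answers: the Azure Functions Cosmos DB trigger uses a lease container internally, and the change feed processor in the SDK exposes it directly.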

Contribute your Thoughts:

Lizbeth
1 month ago
Ah, the joys of Cosmos DB. I bet the developers at Microsoft spent months debating whether to call it 'change feed' or 'feed change'. Either way, it sounds like a delicious breakfast option.
upvoted 0 times
Lezlie
18 days ago
A) Create an Azure App Service API and implement the change feed estimator of the SDK. Scale the API by using multiple Azure App Service instances.
upvoted 0 times
Annelle
1 month ago
Wait, we have to choose two options? I thought this was a single-select question. *scratches head* Well, I guess I'll go with options C and D then. Double the points, double the fun!
upvoted 0 times
Golda
1 month ago
Option B with Azure Kubernetes Service sounds interesting, but I'm not sure if it's overkill for this use case. Seems like a lot of overhead just to process Cosmos DB changes.
upvoted 0 times
Shoshana
14 days ago
Option B might be overkill for this scenario. Maybe consider a simpler solution like option A.
upvoted 0 times
Shawn
17 days ago
B) Create a background job in an Azure Kubernetes Service and implement the change feed feature of the SDK.
upvoted 0 times
Annabelle
19 days ago
A) Create an Azure App Service API and implement the change feed estimator of the SDK. Scale the API by using multiple Azure App Service instances.
upvoted 0 times
Mabel
2 months ago
I like the idea of using Azure Functions in option D. The ability to parallelize the change feed processing across multiple functions is really appealing.
upvoted 0 times
Merlyn
15 days ago
I agree, Azure Functions in option D seem to be the best way to achieve the goal. It allows for parallel processing of the change feed.
upvoted 0 times
Lavelle
17 days ago
Option D sounds like a good choice. Using Azure Functions to parallelize the change feed processing is efficient.
upvoted 0 times
Erasmo
1 month ago
I agree, Option D seems like the most efficient way to handle the Azure Cosmos DB operations with parallelization.
upvoted 0 times
Kristeen
1 month ago
Option D sounds like a great choice. It allows for parallel processing of the change feed using multiple functions.
upvoted 0 times
Denny
2 months ago
Option C seems like the simplest and most straightforward way to achieve the requirements. Azure Functions with a Cosmos DB trigger will handle the change processing automatically.
upvoted 0 times
Leanna
11 days ago
Azure Functions with a Cosmos DB trigger definitely seems like the way to go for processing the operations efficiently.
upvoted 0 times
Dong
15 days ago
Creating an Azure Function with a trigger for Cosmos DB sounds efficient and easy to implement.
upvoted 0 times
Florinda
1 month ago
I agree, using Azure Functions with a Cosmos DB trigger simplifies the process and automates the change processing.
upvoted 0 times
Gracia
2 months ago
I'm not sure, I think option D could also work well with parallelizing the processing.
upvoted 0 times
Jaime
2 months ago
I agree with Yasuko. Option A seems like the best way to achieve the goal.
upvoted 0 times
Yasuko
2 months ago
I think option A is a good choice because it allows scaling with multiple instances.
upvoted 0 times
