You have a Fabric tenant that contains a new semantic model in OneLake.
You use a Fabric notebook to read the data into a Spark DataFrame.
You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.
Solution: You use the following PySpark expression:
df.show()
Does this meet the goal?
No. The df.show() method does not meet the goal. It displays the first rows of the DataFrame; it does not compute statistical aggregates. To calculate the min, max, mean, and standard deviation for all string and numeric columns, use df.summary() instead. Reference: the show() and summary() functions are documented in the PySpark API documentation.