Computer Science, asked by divanshijindal1311, 1 month ago

Spark is used to take advantage of (a) parallel execution (b) sequential execution

Answers

Answered by sharwansharma830

Answer:

Spark is used to take advantage of (a) parallel execution. Spark is great for scaling up data science tasks and workloads: as long as you are using Spark DataFrames and libraries that operate on those data structures, you can scale to massive data sets that are distributed across a cluster. However, in some scenarios the libraries you need may not work with Spark DataFrames, and other approaches are needed to achieve parallelization with Spark, as sketched in the example below.
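As an illustration, here is a minimal PySpark sketch (the application name, row count, and partition count are just placeholders) showing the standard case: a DataFrame transformation and aggregation that Spark splits into tasks and runs in parallel, one task per partition.

```python
from pyspark.sql import SparkSession

# Hypothetical local session using all available cores ("local[*]"),
# so tasks run in parallel on the local machine instead of a cluster.
spark = (
    SparkSession.builder
    .master("local[*]")
    .appName("parallel-demo")  # illustrative name
    .getOrCreate()
)

# Build a DataFrame of 1,000,000 rows split into 8 partitions.
df = spark.range(0, 1_000_000, numPartitions=8)

# The transformation and aggregation below are executed in parallel,
# one task per partition, then the partial results are combined.
result = (
    df.selectExpr("id * 2 AS doubled")
      .groupBy()
      .sum("doubled")
      .collect()
)
print(result)

spark.stop()
```

Because the data frame is partitioned, each of the 8 partitions is processed by its own task, which is the parallel execution the question refers to; a sequential program would process the rows one after another on a single core.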

Answered by knowledgebased123

Explanation:

Mark me as Brainliest answer
