Spark is used to take advantage of (a) parallel execution (b) sequential execution?
Answer: (a) parallel execution
Spark is well suited to scaling up data science tasks and workloads. As long as you are using Spark DataFrames and libraries that operate on these data structures, you can scale to massive data sets distributed across a cluster. However, in some scenarios the libraries you need may not work with Spark DataFrames, and other approaches are required to achieve parallelization with Spark; a minimal DataFrame example is sketched below.
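For instance, here is a minimal PySpark sketch of parallel execution (assuming pyspark is installed locally; the app name and partition count are illustrative choices, not requirements):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# local[*] runs one executor thread per available CPU core, so even a
# single machine executes partition tasks in parallel.
spark = (SparkSession.builder
         .master("local[*]")
         .appName("parallel-demo")
         .getOrCreate())

# Split one million rows into 8 partitions; Spark schedules one task
# per partition and runs those tasks concurrently.
df = spark.range(0, 1_000_000, numPartitions=8)

# Transformations are applied to each partition independently ...
squares = df.withColumn("square", F.col("id") * F.col("id"))

# ... and an action triggers the distributed (parallel) computation.
print(squares.count())                 # 1000000
print(squares.rdd.getNumPartitions())  # 8

spark.stop()
```

When a needed library cannot operate on Spark DataFrames, common workarounds include Pandas UDFs (which run pandas code on each partition in parallel) or driving several independent Spark jobs from a thread pool.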