
[Spark]What's the difference between spark.sql.shuffle.partitions and spark.default.parallelism?


From the answer here:

spark.sql.shuffle.partitions configures the number of partitions that are used when shuffling data for joins or aggregations.

spark.default.parallelism is the default number of partitions in RDDs returned by transformations like join, reduceByKey, and parallelize when the user does not set it explicitly. Note that spark.default.parallelism only seems to apply to raw RDDs and is ignored when working with DataFrames.
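To make that distinction concrete, here is a minimal Scala sketch (the object and value names, the local master, and the numbers 8 and 50 are illustrative assumptions, not anything from the answer above). It sets both properties, then checks the partition count of an RDD aggregation versus a DataFrame aggregation; on Spark 2.x, without adaptive query execution coalescing the shuffle, the printed counts should follow the two settings.

import org.apache.spark.sql.SparkSession

object PartitionSettingsDemo {
  def main(args: Array[String]): Unit = {
    // Hypothetical local setup; the property names are the real Spark ones,
    // the values are only for illustration.
    val spark = SparkSession.builder()
      .master("local[4]")
      .appName("partition-settings-demo")
      .config("spark.default.parallelism", "8")      // used by RDD transformations
      .config("spark.sql.shuffle.partitions", "50")  // used by DataFrame/SQL shuffles
      .getOrCreate()

    val sc = spark.sparkContext

    // RDD path: reduceByKey without an explicit partition count falls back to
    // spark.default.parallelism, so this should print 8.
    val rdd = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
    println(rdd.reduceByKey(_ + _).getNumPartitions)

    // DataFrame path: a groupBy aggregation shuffles with
    // spark.sql.shuffle.partitions, so this should print 50.
    import spark.implicits._
    val df = Seq(("a", 1), ("b", 2), ("a", 3)).toDF("key", "value")
    println(df.groupBy("key").count().rdd.getNumPartitions)

    spark.stop()
  }
}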

If the task you are performing is not a join or aggregation and you are working with DataFrames, then setting these will have no effect. You could, however, set the number of partitions yourself by calling df.repartition(numOfPartitions) in your code (don't forget to assign the result to a new val, since DataFrames are immutable), as sketched below.
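A small follow-up sketch of that repartition call, reusing the spark session and the df DataFrame from the example above (the count 200 and the val names are arbitrary placeholders). Note that repartition itself triggers a full shuffle of the data, so it is not free.

import org.apache.spark.sql.functions.col

// repartition returns a new DataFrame; the original df is unchanged.
val repartitioned = df.repartition(200)
println(repartitioned.rdd.getNumPartitions)   // should print 200

// Repartitioning by a column also hash-distributes rows with the same key
// into the same partition, which can help a subsequent join or aggregation.
val byKey = df.repartition(200, col("key"))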


Original: https://www.cnblogs.com/szss/p/9875914.html
