【Question Title】: Values of Spark executor, driver, executor cores, executor memory
【Posted】: 2017-10-07 19:23:15
【Question】:

I have some questions about the values of Spark executor, driver, executor cores, and executor memory.

  1. If no applications are running on the cluster and you submit a job, what are the default values for Spark executors, executor cores, and executor memory?
  2. How would you calculate the number of Spark executors, executor cores, and executor memory needed for the job you want to submit?

【Discussion】:

    Tags: apache-spark


    【Solution 1】:

    Avishek's answer covers the default values. I will explain how to calculate the optimal values. As an example:

    Example: 6 nodes, each with 16 cores and 64 GB RAM

    Each executor is a JVM instance, so multiple executors can run on a single node.

    Let's start by choosing the number of cores per executor:

    Number of cores = number of concurrent tasks an executor can run
    
    One might think that higher concurrency means better performance. However, experiments have shown that Spark jobs perform well when the number of cores per executor = 5.
    
    If the number of cores > 5, it leads to poor performance.
    
    Note that 1 core and 1 GB per node are needed for the OS and Hadoop daemons.
    

    Now, calculate the number of executors:

    As discussed above, 15 cores are available on each node, and we are planning for 5 cores per executor.
    
    Thus, number of executors per node = 15/5 = 3
    Total number of executors = 3*6 = 18
    
    Out of all executors, 1 executor is needed by YARN for the Application Master (AM).
    Thus, final executor count = 18-1 = 17 executors.
    

    Memory per executor:

    Executors per node = 3
    RAM available per node = 63 GB (as 1 GB is needed for the OS and Hadoop daemons)
    Memory per executor = 63/3 = 21 GB
    
    Spark requires some memory overhead, which is max(384 MB, 7% of executor memory).
    Thus, 7% of 21 = 1.47
    As 1.47 GB > 384 MB, subtract 1.47 from 21.
    Hence, 21 - 1.47 ≈ 19 GB
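
    The arithmetic above can be collected into a small helper. This is a minimal sketch in Python: the 5-cores-per-executor heuristic, the 1 core / 1 GB reserved per node, the executor given up to the YARN AM, and the max(384 MB, 7%) overhead rule are all taken from the answer; the function and parameter names are my own.

```python
# Sketch of the sizing heuristic worked through above (names are illustrative).
def size_cluster(nodes, cores_per_node, ram_gb_per_node, cores_per_executor=5):
    usable_cores = cores_per_node - 1    # 1 core reserved for OS/Hadoop daemons
    usable_ram_gb = ram_gb_per_node - 1  # 1 GB reserved for OS/Hadoop daemons

    executors_per_node = usable_cores // cores_per_executor
    # 1 executor is given up for the YARN Application Master.
    total_executors = executors_per_node * nodes - 1

    raw_mem_gb = usable_ram_gb // executors_per_node
    # Spark memory overhead: max(384 MB, 7% of executor memory).
    overhead_gb = max(0.384, 0.07 * raw_mem_gb)
    executor_mem_gb = int(raw_mem_gb - overhead_gb)
    return total_executors, cores_per_executor, executor_mem_gb

print(size_cluster(nodes=6, cores_per_node=16, ram_gb_per_node=64))
# → (17, 5, 19)
```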
    

    Final numbers:

    Executors - 17, Cores - 5, Executor Memory - 19 GB
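
    As a usage sketch, these numbers map directly onto spark-submit flags. The flag names (--num-executors, --executor-cores, --executor-memory) are real spark-submit options; my_job.py is only a placeholder application.

```python
# Build the spark-submit invocation for the numbers derived above.
# "my_job.py" is a placeholder, not a file from the question.
num_executors, executor_cores, executor_mem_gb = 17, 5, 19
cmd = (
    f"spark-submit --num-executors {num_executors} "
    f"--executor-cores {executor_cores} "
    f"--executor-memory {executor_mem_gb}G my_job.py"
)
print(cmd)
# → spark-submit --num-executors 17 --executor-cores 5 --executor-memory 19G my_job.py
```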
    

    Note:

    1. Sometimes one may want to allocate less memory than 19 GB per executor. As memory per executor decreases, the number of executors increases and the number of cores per executor decreases. As discussed above, 5 cores per executor is the best value, but reducing it will still give good results. Just don't go beyond 5.
    
    2. Memory per executor should be less than 40 GB, otherwise there will be considerable GC overhead.
    

    【Discussion】:

      【Solution 2】:

      What are the default values for Spark executors, executor cores, and executor memory if no applications are running on the cluster and you submit a job?

      The default values are stored in spark-defaults.conf on the cluster where Spark is installed, so you can verify them there; usually they are left at the stock defaults.

      To check the default values, please refer to this document.
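
      As a rough illustration of the spark-defaults.conf format and how its key/value lines can be read, here is a minimal sketch. The sample values below are illustrative, not your cluster's actual defaults, and the parser is a simplification of what Spark itself does.

```python
# Parse key/value pairs from spark-defaults.conf-style text.
# The sample content is illustrative only.
sample = """\
# Comment lines and blank lines are ignored.
spark.executor.memory   1g
spark.executor.cores    1
spark.driver.memory     1g
"""

def parse_defaults(text):
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(" ")  # key, then whitespace-separated value
        conf[key] = value.strip()
    return conf

print(parse_defaults(sample)["spark.executor.memory"])
# → 1g
```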

      How would you calculate the values of Spark executors, executor cores, and executor memory needed for the job you want to submit?

      It depends on the following factors:

      1. What type of job it is, i.e., whether it is shuffle-intensive or map-only. If it involves shuffles, you may need more memory.

      2. Data size: the larger the data, the larger the memory footprint.

      3. Cluster constraints: how much memory can you afford?

      Based on these factors, you need to start with some numbers, then look at the Spark UI, understand the bottlenecks, and increase or decrease the memory footprint accordingly.

      Keeping executor memory above 40 GB may reduce efficiency, as JVM GC becomes slower. Also, too many cores may slow the process down.

      【Discussion】:
