[Posted]: 2013-06-14 20:30:17
[Question]:
When stopping the whole cluster in Spark (0.7.0) with
$SPARK_HOME/bin/stop-all.sh
not all workers are stopped correctly. More specifically, if I then want to restart the cluster with
$SPARK_HOME/bin/start-all.sh
I get:
host1: starting spark.deploy.worker.Worker, logging to [...]
host3: starting spark.deploy.worker.Worker, logging to [...]
host2: starting spark.deploy.worker.Worker, logging to [...]
host5: starting spark.deploy.worker.Worker, logging to [...]
host4: spark.deploy.worker.Worker running as process 8104. Stop it first.
host7: spark.deploy.worker.Worker running as process 32452. Stop it first.
host6: starting spark.deploy.worker.Worker, logging to [...]
On host4 and host7 there is indeed still a StandaloneExecutorBackend running:
$ jps
27703 Worker
27763 StandaloneExecutorBackend
28601 Jps
Unfortunately, simply repeating
$SPARK_HOME/bin/stop-all.sh
does not stop the workers either. Spark only tells me that the workers are about to be stopped:
host2: no spark.deploy.worker.Worker to stop
host7: stopping spark.deploy.worker.Worker
host1: no spark.deploy.worker.Worker to stop
host4: stopping spark.deploy.worker.Worker
host6: no spark.deploy.worker.Worker to stop
host5: no spark.deploy.worker.Worker to stop
host3: no spark.deploy.worker.Worker to stop
no spark.deploy.master.Master to stop
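(One hedged guess at why the script reports "no ... to stop": Spark's stop scripts go through spark-daemon.sh, which records each daemon's PID in a file under SPARK_PID_DIR, by default /tmp. If those pid files are deleted, for example by a tmp cleaner, the script thinks nothing is running even though the JVMs are still alive. A possible mitigation, assuming your Spark version reads SPARK_PID_DIR, would be moving the pid files to a persistent directory; the path below is only an example:)

```shell
# conf/spark-env.sh -- keep daemon pid files out of /tmp so stop-all.sh can find them
# (SPARK_PID_DIR is read by spark-daemon.sh; /var/run/spark is an example path)
export SPARK_PID_DIR=/var/run/spark
```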
However,
$ jps
27703 Worker
27763 StandaloneExecutorBackend
28601 Jps
says otherwise.
Does anyone have an idea how to make stop-all.sh work properly?
Thanks.
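(As an interim workaround while waiting for an answer, one can kill the leftover daemons by hand on each affected host. The sketch below is a hypothetical helper, not part of Spark: it parses jps-style "PID Name" lines and prints the PIDs of the two process names seen above, so they can be passed to kill. It assumes jps from the JDK is on the PATH.)

```shell
# stale_pids: read `jps`-style output on stdin, print PIDs of leftover
# Spark daemons (process names taken from the jps listing in the question)
stale_pids() {
  awk '$2 == "Worker" || $2 == "StandaloneExecutorBackend" { print $1 }'
}

# Usage (run manually on each affected host, e.g. host4 and host7):
#   jps | stale_pids | xargs -r kill
```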
[Discussion]:
Tags: scala mapreduce apache-spark