2015-07-31

Spark fails to start after installation

Running ./sbin/start-all.sh after installing Spark produced the following errors:
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master.hadoop.out
failed to launch org.apache.spark.deploy.master.Master:
Failed to find Spark assembly in /usr/local/spark/assembly/target/scala-2.10.
You need to build Spark before running this program.
full log in /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master.hadoop.out
slave2.hadoop: chown: changing ownership of `/usr/local/spark/sbin/../logs': Operation not permitted
slave4.hadoop: chown: changing ownership of `/usr/local/spark/sbin/../logs': Operation not permitted
slave2.hadoop: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave2.hadoop.out
slave2.hadoop: /usr/local/spark/sbin/spark-daemon.sh: line 148: /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave2.hadoop.out: Permission denied
slave3.hadoop: chown: changing ownership of `/usr/local/spark/sbin/../logs': Operation not permitted
slave1.hadoop: chown: changing ownership of `/usr/local/spark/sbin/../logs': Operation not permitted
slave4.hadoop: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave4.hadoop.out
slave4.hadoop: /usr/local/spark/sbin/spark-daemon.sh: line 148: /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave4.hadoop.out: Permission denied
slave3.hadoop: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave3.hadoop.out
slave1.hadoop: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave1.hadoop.out
slave3.hadoop: /usr/local/spark/sbin/spark-daemon.sh: line 148: /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave3.hadoop.out: Permission denied
slave1.hadoop: /usr/local/spark/sbin/spark-daemon.sh: line 148: /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave1.hadoop.out: Permission denied
slave2.hadoop: failed to launch org.apache.spark.deploy.worker.Worker:
slave2.hadoop: tail: cannot open `/usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave2.hadoop.out' for reading: No such file or directory
slave2.hadoop: full log in /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave2.hadoop.out
slave4.hadoop: failed to launch org.apache.spark.deploy.worker.Worker:
slave4.hadoop: tail: cannot open `/usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave4.hadoop.out' for reading: No such file or directory
slave3.hadoop: failed to launch org.apache.spark.deploy.worker.Worker:
slave4.hadoop: full log in /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave4.hadoop.out
slave1.hadoop: failed to launch org.apache.spark.deploy.worker.Worker:
slave1.hadoop: tail: cannot open `/usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave1.hadoop.out' for reading: No such file or directory
slave3.hadoop: tail: cannot open `/usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave3.hadoop.out' for reading: No such file or directory
slave3.hadoop: full log in /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave3.hadoop.out
slave1.hadoop: full log in /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave1.hadoop.out
 
 
The chown errors show that the logs directory was not writable by the hadoop user, so I changed its ownership to hadoop and tried again.
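A minimal sketch of that ownership fix, assuming it is run as root on the master and on every slaveN.hadoop node, and that the daemons are started as the hadoop user (both assumptions inferred from the log above):

# run as root on each node
mkdir -p /usr/local/spark/logs                  # make sure the logs directory exists
chown -R hadoop:hadoop /usr/local/spark/logs    # let the hadoop user write the master/worker logs

With the permissions fixed, the next start attempt showed: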
$ ./sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master.hadoop.out
failed to launch org.apache.spark.deploy.master.Master:
Failed to find Spark assembly in /usr/local/spark/assembly/target/scala-2.10.
You need to build Spark before running this program.
full log in /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master.hadoop.out
slave1.hadoop: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave1.hadoop.out
slave3.hadoop: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave3.hadoop.out
slave4.hadoop: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave4.hadoop.out
slave2.hadoop: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave2.hadoop.out
slave1.hadoop: failed to launch org.apache.spark.deploy.worker.Worker:
slave1.hadoop:   JAVA_HOME is not set
slave1.hadoop: full log in /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave1.hadoop.out
slave3.hadoop: failed to launch org.apache.spark.deploy.worker.Worker:
slave4.hadoop: failed to launch org.apache.spark.deploy.worker.Worker:
slave2.hadoop: failed to launch org.apache.spark.deploy.worker.Worker:
slave3.hadoop:   JAVA_HOME is not set
slave3.hadoop: full log in /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave3.hadoop.out
slave4.hadoop:   JAVA_HOME is not set
slave2.hadoop:   JAVA_HOME is not set
slave2.hadoop: full log in /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave2.hadoop.out
slave4.hadoop: full log in /usr/local/spark/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave4.hadoop.out
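The workers now fail because JAVA_HOME is not set. A common fix is to export it in conf/spark-env.sh on every node; a minimal sketch, where the JDK path is only an assumption and must match the actual installation:

# /usr/local/spark/conf/spark-env.sh  (copy conf/spark-env.sh.template if the file does not exist yet)
export JAVA_HOME=/usr/java/jdk1.7.0_79    # hypothetical JDK path; point it at the real JDK on your nodes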

The master's error ("Failed to find Spark assembly ... You need to build Spark") is because I had downloaded the Spark source package: Spark has to be built first by running sbt/sbt assembly in the Spark directory, which compiles Spark and downloads a number of jars.

sbt/sbt assembly    # this command has to be run from the Spark home directory
When the command runs, it downloads the plugins and jar packages it needs; the output looks like this:


[Screenshot 1: sbt/sbt assembly downloading plugins and jars]

SBT is short for Simple Build Tool. If you have used Maven, you can roughly think of SBT as the Maven of the Scala world; each has its own strengths and weaknesses, but the job they do is essentially the same.

The sbt assembly command above, as far as I can tell, relies on the sbt-assembly plugin, whose purpose is:

  • to package the current project's binaries together with all of its third-party dependencies into a single jar (a so-called one-jar, or fat jar), which is very convenient for applications that are run directly (a minimal example follows below).
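Spark's own source tree already ships a build definition that does this, so nothing has to be added there; purely for illustration, enabling sbt-assembly in a standalone project of your own looks roughly like this (the project name and plugin version are assumptions):

# hypothetical standalone project, not part of the Spark tree
mkdir -p myapp/project
cat > myapp/project/plugins.sbt <<'EOF'
// enable the sbt-assembly plugin; pick a version that matches your sbt release
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.0")
EOF
cd myapp && sbt assembly    # builds one self-contained "fat" jar under target/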

The result of building Spark with this command is:

[Screenshot 2: build output after running sbt/sbt assembly]

However, after building for a long time it kept reporting that some jars could not be downloaded, so in the end I gave up and downloaded a prebuilt Spark package from the official site instead: spark-1.4.0-bin-hadoop2.6.tgz, about 239 MB.
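A minimal sketch of switching to the prebuilt package (the download URL below follows the usual Apache archive layout and is an assumption worth verifying; the paths match the install location used above):

wget https://archive.apache.org/dist/spark/spark-1.4.0/spark-1.4.0-bin-hadoop2.6.tgz
mv /usr/local/spark /usr/local/spark-src                      # set the failed source build aside
tar -xzf spark-1.4.0-bin-hadoop2.6.tgz -C /usr/local
mv /usr/local/spark-1.4.0-bin-hadoop2.6 /usr/local/spark      # keep the same /usr/local/spark path
chown -R hadoop:hadoop /usr/local/spark                       # keep the ownership fix from earlier

Since the prebuilt distribution already contains the assembly jar, the "You need to build Spark before running this program" error does not come up again.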
Author: saunix, a Linux system ops engineer at a large Internet company, serving full time as the firefighter.
