What log output is generated when submitting a job to a cluster on the Spark platform

Today let's talk about what log output is generated when a job is submitted to a cluster on the Spark platform. Many people may not be familiar with this, so the following walkthrough is summarized here in the hope that you can get something out of it.


Created by Wang, Jerry, last modified on Aug 16, 2015
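The worker launch below assumes a standalone master is already listening at spark://NKGV50849583FV1:7077 (e.g. started with sbin/start-master.sh). A minimal conf/spark-env.sh sketch for that setup, reusing the hostname from the log; the variable names follow the Spark 1.x standalone-mode docs, and this fragment is an assumption, not taken from the original machine:

```shell
# conf/spark-env.sh fragment (hypothetical) -- standalone master settings;
# after sourcing this, sbin/start-master.sh brings up the master the
# worker below connects to
export SPARK_MASTER_IP=NKGV50849583FV1   # hostname the master binds to
export SPARK_MASTER_PORT=7077            # port in the spark:// master URL
```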

NKGV50849583FV1:~/devExpert/spark-1.4.1/bin # ./spark-class org.apache.spark.deploy.worker.Worker spark://NKGV50849583FV1:7077
added by Jerry: loading load-spark-env.sh !!!1
added by Jerry:…
/root/devExpert/spark-1.4.1/conf
added by Jerry, number of Jars: 1
added by Jerry, launch_classpath: /root/devExpert/spark-1.4.1/assembly/target/scala-2.10/spark-assembly-1.4.1-hadoop2.4.0.jar
added by Jerry,RUNNER:/usr/jdk1.7.0_79/bin/java
added by Jerry, printf argument list: org.apache.spark.deploy.worker.Worker spark://NKGV50849583FV1:7077
added by Jerry, I am in if-else branch: /usr/jdk1.7.0_79/bin/java -cp /root/devExpert/spark-1.4.1/conf/:/root/devExpert/spark-1.4.1/assembly/target/scala-2.10/spark-assembly-1.4.1-hadoop2.4.0.jar:/root/devExpert/spark-1.4.1/lib_managed/jars/datanucleus-rdbms-3.2.9.jar:/root/devExpert/spark-1.4.1/lib_managed/jars/datanucleus-core-3.2.10.jar:/root/devExpert/spark-1.4.1/lib_managed/jars/datanucleus-api-jdo-3.2.6.jar -Xms512m -Xmx512m -XX:MaxPermSize=256m org.apache.spark.deploy.worker.Worker spark://NKGV50849583FV1:7077
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/08/16 12:55:28 INFO Worker: Registered signal handlers for [TERM, HUP, INT]

15/08/16 12:55:28 WARN Utils: Your hostname, NKGV50849583FV1 resolves to a loopback address: 127.0.0.1; using 10.128.184.131 instead (on interface eth0)
15/08/16 12:55:28 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/08/16 12:55:29 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/08/16 12:55:29 INFO SecurityManager: Changing view acls to: root
15/08/16 12:55:29 INFO SecurityManager: Changing modify acls to: root
15/08/16 12:55:29 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/08/16 12:55:30 INFO Slf4jLogger: Slf4jLogger started
15/08/16 12:55:30 INFO Remoting: Starting remoting
15/08/16 12:55:30 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkWorker@10.128.184.131:42568]
15/08/16 12:55:30 INFO Utils: Successfully started service 'sparkWorker' on port 42568.

15/08/16 12:55:30 INFO Worker: Starting Spark worker 10.128.184.131:42568 with 8 cores, 30.4 GB RAM

15/08/16 12:55:30 INFO Worker: Running Spark version 1.4.1
15/08/16 12:55:30 INFO Worker: Spark home: /root/devExpert/spark-1.4.1
15/08/16 12:55:30 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
15/08/16 12:55:30 INFO WorkerWebUI: Started WorkerWebUI at http://10.128.184.131:8081

15/08/16 12:55:30 INFO Worker: Connecting to master akka.tcp://sparkMaster@NKGV50849583FV1:7077/user/Master...
15/08/16 12:55:30 INFO Worker: Successfully registered with master spark://NKGV50849583FV1:7077
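The two WARN lines near the top of this output (hostname resolving to the loopback 127.0.0.1, falling back to 10.128.184.131 on eth0) can be silenced by pinning the bind address, as the log itself suggests. A hypothetical conf/spark-env.sh fragment, reusing the interface address Spark auto-detected here:

```shell
# conf/spark-env.sh fragment (hypothetical) -- bind the worker to the
# eth0 address explicitly instead of letting Spark guess past 127.0.0.1
export SPARK_LOCAL_IP=10.128.184.131
```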

If I quit the worker session, the master drops the worker once its heartbeats stop.


I can also observe this in the master's log:

15/08/16 12:55:30 INFO Master: Registering worker 10.128.184.131:42568 with 8 cores, 30.4 GB RAM
15/08/16 13:00:19 WARN Master: Removing worker-20150816125530-10.128.184.131-42568 because we got no heartbeat in 60 seconds
15/08/16 13:00:19 INFO Master: Removing worker worker-20150816125530-10.128.184.131-42568 on 10.128.184.131:42568
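The 60-second window in the WARN line above is the master-side spark.worker.timeout setting. A hypothetical conf/spark-env.sh fragment on the master host that raises it (a sketch of the standard configuration mechanism, not taken from the original setup):

```shell
# conf/spark-env.sh fragment on the master host (hypothetical) --
# give workers 120s instead of the default 60s before the master
# removes them for missed heartbeats
export SPARK_MASTER_OPTS="-Dspark.worker.timeout=120"
```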

Having read the above, do you now have a better understanding of the log output generated when submitting a job to a cluster on the Spark platform? Thanks for reading.

