Kafka Configuration and Deployment Based on CDH (detailed, verified working)


1. Download

http://archive.cloudera.com/kafka/parcels/2.2.0/

wget http://archive.cloudera.com/kafka/parcels/2.2.0/KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel
wget http://archive.cloudera.com/kafka/parcels/2.2.0/KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel.sha1

2. Verify the checksum

[hadoop@hadoop003 softwares]$ sha1sum KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel
359509e028ae91a2a082adfad5f64596b63ea750  KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel
[hadoop@hadoop003 softwares]$ cat KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel.sha1
359509e028ae91a2a082adfad5f64596b63ea750

The two checksums match, so the file was not corrupted during download and can be used.
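The manual comparison above can also be scripted. A minimal sketch (the `verify_parcel` helper is hypothetical, not part of the parcel tooling):

```shell
verify_parcel() {
    # $1: path to a parcel file; expects the published checksum in "$1.sha1".
    # Returns 0 when the computed SHA-1 matches the published one.
    local computed expected
    computed=$(sha1sum "$1" | awk '{print $1}')
    expected=$(awk '{print $1}' "$1.sha1")
    [ "$computed" = "$expected" ]
}

# Example:
# verify_parcel KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel && echo "checksum OK"
```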

3. Extract and create a symlink

[hadoop@hadoop003 softwares]$ tar -zxf  KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel -C ~/app
[hadoop@hadoop003 app]$ ln -s /home/hadoop/app/KAFKA-2.2.0-1.2.2.0.p0.68/ /home/hadoop/app/kafka

4. Key directories

[hadoop@hadoop003 kafka]$ pwd
/home/hadoop/app/kafka
[hadoop@hadoop003 kafka]$ ll
total 20
drwxr-xr-x 2 hadoop hadoop 4096 Jul 7  2017 bin
drwxr-xr-x 5 hadoop hadoop 4096 Jul 7  2017 etc
drwxr-xr-x 3 hadoop hadoop 4096 Jul 7  2017 lib
drwxr-xr-x 2 hadoop hadoop 4096 Jul 7  2017 meta
###     Kafka configuration directory; this is where we edit the config files
[hadoop@hadoop003 kafka]$ ll etc/kafka/conf.dist/
total 48
-rw-r--r-- 1 hadoop hadoop  906 Jul 7  2017 connect-console-sink.properties
-rw-r--r-- 1 hadoop hadoop  909 Jul 7  2017 connect-console-source.properties
-rw-r--r-- 1 hadoop hadoop 2760 Jul 7  2017 connect-distributed.properties
-rw-r--r-- 1 hadoop hadoop  883 Jul 7  2017 connect-file-sink.properties
-rw-r--r-- 1 hadoop hadoop  881 Jul 7  2017 connect-file-source.properties
-rw-r--r-- 1 hadoop hadoop 1074 Jul 7  2017 connect-log4j.properties
-rw-r--r-- 1 hadoop hadoop 2061 Jul 7  2017 connect-standalone.properties
-rw-r--r-- 1 hadoop hadoop 4369 Jul 7  2017 log4j.properties
-rw-r--r-- 1 hadoop hadoop 5679 Jun  1 01:24 server.properties
-rw-r--r-- 1 hadoop hadoop 1032 Jul 7  2017 tools-log4j.properties

###     Kafka program directory
[hadoop@hadoop003 kafka]$ ll lib/kafka/
total 112
drwxr-xr-x 2 hadoop hadoop  4096 Jul 7  2017 bin
drwxr-xr-x 2 hadoop hadoop  4096 Jul 7  2017 cloudera
lrwxrwxrwx 1 hadoop hadoop    43 Jun  1 02:11 config -> /etc/kafka/conf  # note: displayed in red (broken link)
-rw-rw-r-- 1 hadoop hadoop 48428 Jun  1 02:17 KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel
drwxr-xr-x 2 hadoop hadoop 12288 Jul 7  2017 libs
-rwxr-xr-x 1 hadoop hadoop 28824 Jul 7  2017 LICENSE
drwxrwxr-x 2 hadoop hadoop  4096 Jun  1 01:39 logs
-rwxr-xr-x 1 hadoop hadoop   336 Jul 7  2017 NOTICE
drwxr-xr-x 2 hadoop hadoop  4096 Jul 7  2017 site-docs
### By default the config symlink points to the gateway configuration at /etc/kafka/conf, i.e. the Cloudera Manager client configuration. Since we are not using CM, /etc/kafka/conf was never generated, so the link is broken and ls shows it blinking red.
###     The bin directory holds Kafka's scripts, e.g. the server start/stop scripts and the console consumer/producer launch scripts.

5. Edit the configuration file

# Step 1:
[hadoop@hadoop003 kafka]$ cd etc/kafka/conf.dist

# Step 2:

vim server.properties

# Step 3: (mainly edit the following parameters)

broker.id=0  # unique identifier of this broker

log.dirs=/home/hadoop/app/kafka/logs  # where message data is stored

log.retention.hours=168  # data retention period (168 hours = 7 days)

zookeeper.connect=hadoop001:2181,hadoop002:2181,hadoop003:2181/kafka
# the ZooKeeper ensemble where Kafka stores its metadata (with a /kafka chroot)
delete.topic.enable=true  # allow deleting created topics
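The edits above can also be applied non-interactively. A sketch assuming the file layout used in this article (the `set_prop` helper is hypothetical):

```shell
set_prop() {
    # $1: properties file, $2: key, $3: value.
    # Replace an existing "key=..." line, or append the key if absent.
    if grep -q "^$2=" "$1"; then
        sed -i "s|^$2=.*|$2=$3|" "$1"
    else
        printf '%s=%s\n' "$2" "$3" >> "$1"
    fi
}

conf=/home/hadoop/app/kafka/etc/kafka/conf.dist/server.properties
if [ -f "$conf" ]; then
    cp "$conf" "$conf.bak"   # keep a backup before editing
    set_prop "$conf" broker.id 0
    set_prop "$conf" log.dirs /home/hadoop/app/kafka/logs
    set_prop "$conf" log.retention.hours 168
    set_prop "$conf" zookeeper.connect hadoop001:2181,hadoop002:2181,hadoop003:2181/kafka
    set_prop "$conf" delete.topic.enable true
fi
```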

6. Start Kafka

[hadoop@hadoop003 kafka]$ lib/kafka/bin/kafka-server-start.sh /home/hadoop/app/kafka/etc/kafka/conf.dist/server.properties 
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/app/KAFKA-2.2.0-1.2.2.0.p0.68/lib/kafka/libs/slf4j-log4j12-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/KAFKA-2.2.0-1.2.2.0.p0.68/lib/kafka/libs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
log4j:ERROR Could not read configuration file from URL [file:lib/kafka/bin/../config/log4j.properties].
java.io.FileNotFoundException: lib/kafka/bin/../config/log4j.properties (No such file or directory)
    at java.io.FileInputStream.open0(Native Method)
    at java.io.FileInputStream.open(FileInputStream.java:195)
    at java.io.FileInputStream.&lt;init&gt;(FileInputStream.java:138)
    at java.io.FileInputStream.&lt;init&gt;(FileInputStream.java:93)
    at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
    at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
    at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
    at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
    at org.apache.log4j.LogManager.&lt;clinit&gt;(LogManager.java:127)
    at org.slf4j.impl.Log4jLoggerFactory.&lt;init&gt;(Log4jLoggerFactory.java:66)
    at org.slf4j.impl.StaticLoggerBinder.&lt;init&gt;(StaticLoggerBinder.java:72)
    at org.slf4j.impl.StaticLoggerBinder.&lt;clinit&gt;(StaticLoggerBinder.java:45)
    at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
    at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
    at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
    at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
    at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
    at org.apache.kafka.common.utils.Utils.&lt;clinit&gt;(Utils.java:59)
    at kafka.Kafka$.getPropsFromArgs(Kafka.scala:41)
    at com.cloudera.kafka.wrap.Kafka$.main(Kafka.scala:72)
    at com.cloudera.kafka.wrap.Kafka.main(Kafka.scala)
log4j:ERROR Ignoring configuration file [file:lib/kafka/bin/../config/log4j.properties].
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger (kafka.server.KafkaConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Startup hit a bug: the log4j configuration file could not be found:

java.io.FileNotFoundException: lib/kafka/bin/../config/log4j.properties

Since the symlink `config -> /etc/kafka/conf` points to a target that does not exist, repoint it to etc/kafka/conf.dist/:

[hadoop@hadoop003 kafka]$ rm lib/kafka/config
[hadoop@hadoop003 kafka]$ ln -s  /home/hadoop/app/kafka/etc/kafka/conf.dist/ /home/hadoop/app/kafka/lib/kafka/config
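The repair above can be made idempotent by checking for a dangling link first. A sketch (the `fix_broken_link` helper is hypothetical):

```shell
fix_broken_link() {
    # $1: symlink path, $2: new target.
    # Repoint the link only when it exists but its target does not.
    if [ -L "$1" ] && [ ! -e "$1" ]; then
        ln -sfn "$2" "$1"
    fi
}

fix_broken_link /home/hadoop/app/kafka/lib/kafka/config \
                /home/hadoop/app/kafka/etc/kafka/conf.dist
```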


Restart the broker (this assumes `lib/kafka/bin` has been added to PATH and that the `server-logs` directory exists):

[hadoop@hadoop003 kafka]$ nohup kafka-server-start.sh /home/hadoop/app/kafka/etc/kafka/conf.dist/server.properties > /home/hadoop/app/kafka/server-logs/kafka-server.log 2>&1 &

This time there are no errors.
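Since the restart runs under nohup, one way to confirm the broker actually came up is to grep the redirected log for Kafka's standard startup line. A sketch (the `check_started` helper and the log path are assumptions based on the command above):

```shell
check_started() {
    # $1: broker log file. Returns 0 once Kafka has logged its startup line.
    grep -q "started (kafka.server.KafkaServer)" "$1"
}

# Example:
# check_started /home/hadoop/app/kafka/server-logs/kafka-server.log \
#     && echo "broker is up"
```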


