I am trying to stream a Twitter feed into HDFS and then query it with Hive. But the first part, streaming the data and loading it into HDFS, does not work and throws a NullPointerException.
This is what I have tried.
1. Downloaded apache-flume-1.4.0-bin.tar and extracted it. Copied everything to /usr/lib/flume/ and changed the owner of the flume directory in /usr/lib/ to my user. When I run the ls command in /usr/lib/flume/, it shows:
bin CHANGELOG conf DEVNOTES docs lib LICENSE logs NOTICE README RELEASE-NOTES tools
2. Moved to the conf/ directory. I copied the file flume-env.sh.template to flume-env.sh and edited JAVA_HOME to my Java path, /usr/lib/jvm/java-7-oracle.
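For reference, the edit in flume-env.sh amounts to a single line (a sketch; the JDK path is the one from this step):

```shell
# flume-env.sh -- point Flume at the JDK (path taken from step 2)
export JAVA_HOME=/usr/lib/jvm/java-7-oracle
```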
3. Next, in the same conf directory, I created a file named flume.conf and added the following content:
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS

TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey =
TwitterAgent.sources.Twitter.consumerSecret =
TwitterAgent.sources.Twitter.accessToken =
TwitterAgent.sources.Twitter.accessTokenSecret =
TwitterAgent.sources.Twitter.keywords = hadoop, big data, analytics, bigdata, couldera, data science, data scientist, business intelligence, mapreduce, datawarehouse, data ware housing, mahout, hbase, nosql, newsql, businessintelligence, cloudcomputing

TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://localhost:8020/user/flume/tweets/%Y/%m/%d/%H/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 600

TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 100
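One thing worth checking in a config like this: the agent name passed to flume-ng with -n must match the property prefix on every line, or the agent starts up with nothing to do and exits. A minimal sketch of that check (the three sample lines mirror the component declarations above):

```shell
# Count component declarations carrying the expected agent-name prefix.
# The three sample lines mirror the start of the flume.conf above.
printf '%s\n' \
  'TwitterAgent.sources = Twitter' \
  'TwitterAgent.channels = MemChannel' \
  'TwitterAgent.sinks = HDFS' |
grep -c '^TwitterAgent\.'
# prints 3
```

Running the same grep against the real flume.conf, with the name passed to -n, is a quick consistency check.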
I created an application on Twitter, generated the tokens, and added all the keys to the file above. I added the API key as the consumer key.
I downloaded the flume-sources jar from the Cloudera files mentioned here.
4. I added flume-sources-1.0-SNAPSHOT.jar to /usr/lib/flume/lib.
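Since the failure happens at startup, one sanity check is whether the class named in flume.conf is actually inside this jar. A sketch, with the jar path assumed from this step and the class name taken from the config:

```shell
# Convert the fully qualified class name from flume.conf into the entry
# name it would have inside the jar, then grep "jar tf" output for it.
class='com.cloudera.flume.source.TwitterSource'
entry="$(printf '%s' "$class" | tr '.' '/').class"
echo "$entry"   # com/cloudera/flume/source/TwitterSource.class
# jar tf /usr/lib/flume/lib/flume-sources-1.0-SNAPSHOT.jar | grep -F "$entry"
```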
5. Started Hadoop and ran the following:
hadoop fs -mkdir /user/flume/tweets
hadoop fs -chown -R flume:flume /user/flume
hadoop fs -chmod -R 770 /user/flume
6. I ran the following command in /usr/lib/flume:
/usr/lib/flume/conf$ bin/flume-ng agent -n TwitterAgent -c conf -f conf/flume-conf
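For what it's worth, the prompt above shows the conf/ directory while the command uses paths relative to the install root, and the file created in step 3 was flume.conf rather than flume-conf. A sketch of the same launch with an absolute config path, so the working directory no longer matters (the console-logging flag is added for visibility; all paths are assumed from the earlier steps):

```shell
# Launch the agent with an absolute path to the config file from step 3;
# -Dflume.root.logger surfaces the stack trace on the console.
cmd='bin/flume-ng agent -n TwitterAgent -c conf -f /usr/lib/flume/conf/flume.conf -Dflume.root.logger=INFO,console'
echo "$cmd"
```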
It lists the JARs it is including on the classpath and then exits.
When I check HDFS, there are no files:

hadoop fs -ls /user/flume/tweets

shows nothing at all.
In Hadoop, the core-site.xml file has the following configuration:
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:8020</value>
  <final>true</final>
</property>
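One relationship worth noting between this file and flume.conf: the HDFS sink path has to live under the filesystem named by fs.default.name. Here both use hdfs://localhost:8020, so they agree; a sketch of that comparison:

```shell
# Compare the NameNode URI from core-site.xml with the prefix of the
# HDFS sink path from flume.conf (both values copied from this question).
fs='hdfs://localhost:8020'
sink='hdfs://localhost:8020/user/flume/tweets/%Y/%m/%d/%H/'
case "$sink" in
  "$fs"/*) echo 'sink path is under fs.default.name' ;;
  *)       echo 'sink path does NOT match fs.default.name' ;;
esac
```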
Thanks.