Combining HBase and HDFS results in a makeDirOnFileSystem exception

 拉扯作乱_991 posted on 2023-01-10 12:41

Introduction

Attempting to use HBase together with HDFS produces the following:

2014-06-09 00:15:14,777 WARN org.apache.hadoop.hbase.HBaseFileSystem: Create Directory, retries exhausted
2014-06-09 00:15:14,780 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.io.IOException: Exception in makeDirOnFileSystem
        at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFileSystem.java:136)
        at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:428)
        at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:148)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:133)
        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:572)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:432)
        at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/":vagrant:supergroup:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4891)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4873)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4847)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3192)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3156)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3137)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:669)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1752)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1748)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1746)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
        at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2153)
        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2122)
        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:545)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1915)
        at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFileSystem.java:129)
        ... 6 more

The configuration and system setup are as follows:

[vagrant@localhost hadoop-hdfs]$ hadoop fs -ls hdfs://localhost/
Found 1 items
-rw-r--r--   3 vagrant supergroup 1010827264 2014-06-08 19:01 hdfs://localhost/ubuntu-14.04-desktop-amd64.iso
[vagrant@localhost hadoop-hdfs]$

/etc/hadoop/conf/core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>

/etc/hbase/conf/hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:8020/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>

/etc/hadoop/conf/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/var/lib/hadoop-hdfs/cache</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/tmp/hellodatanode</value>
  </property>
</configuration>

NameNode directory permissions:

[vagrant@localhost hadoop-hdfs]$ ls -ltr /var/lib/hadoop-hdfs/cache
total 8
-rwxrwxrwx. 1 hbase hdfs   15 Jun  8 23:43 in_use.lock
drwxrwxrwx. 2 hbase hdfs 4096 Jun  8 23:43 current
[vagrant@localhost hadoop-hdfs]$
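
Note that these are local filesystem permissions on the NameNode's storage directory; the check that fails in the stack trace is against the HDFS namespace itself. The HDFS root's ownership and mode can be inspected directly (a quick check, assuming a Hadoop release whose ls supports the -d flag, which lists the directory entry itself rather than its contents):

hadoop fs -ls -d hdfs://localhost:8020/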

HMaster starts up if the fs.defaultFS property in core-site.xml is commented out.
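
That is, with the property wrapped in an XML comment, along these lines:

<configuration>
  <!--
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
  -->
</configuration>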

The NameNode is listening:

[vagrant@localhost hadoop-hdfs]$ netstat -nato | grep 50070
tcp        0      0 0.0.0.0:50070               0.0.0.0:*                   LISTEN      off (0.00/0/0)
tcp        0      0 33.33.33.33:50070           33.33.33.1:57493            ESTABLISHED off (0.00/0/0)

and is reachable by navigating to http://33.33.33.33:50070/dfshealth.jsp.
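
The failing call can also be reproduced outside of HBase by issuing the same mkdir by hand (a diagnostic sketch, assuming sudo access to the hbase account):

# Expected to fail with the same AccessControlException while / is
# owned by vagrant with mode drwxr-xr-x.
sudo -u hbase hadoop fs -mkdir hdfs://localhost:8020/hbase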

How can the makeDirOnFileSystem exception be resolved so that HBase can connect to HDFS?

1 Answer
  • All you need to know is in this line of the stack trace:

    Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/":vagrant:supergroup:drwxr-xr-x

    The user hbase has no permission to write to the HDFS root (/), because it is owned by vagrant and its mode (drwxr-xr-x) allows only the owner to write to it.

    Modify the permissions with hadoop fs -chmod.
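
    A minimal sketch of that route (assuming the hdfs account is the HDFS superuser, i.e. the user the NameNode runs as; opening / this wide is only acceptable on a throwaway VM):

    # Grant write on the HDFS root to all users.
    sudo -u hdfs hadoop fs -chmod 777 /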

    Edit:

    Alternatively, you can create the /hbase directory yourself and make the hbase user its owner. That way you don't have to allow hbase to write to the root directory.
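
    For example (again assuming hdfs is the superuser account):

    # Create the HBase root directory and hand ownership to the hbase user.
    sudo -u hdfs hadoop fs -mkdir hdfs://localhost:8020/hbase
    sudo -u hdfs hadoop fs -chown hbase hdfs://localhost:8020/hbase

    This is the tidier fix: hbase gets exactly the directory it needs while the root stays locked down.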

    Answered 2023-01-10 12:43