
Phoenix Error (3): java.io.IOException: Broken pipe


Solution

1. Add an environment variable in Cloudera Manager (CM):

CM → Spark → Gateway → Advanced

export HADOOP_CONF_DIR=/etc/hbase/conf:/etc/hadoop/conf:/etc/hive/conf
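To confirm the variable actually reaches the Spark driver and executors, a quick runtime check like the one below can help; this diagnostic sketch is an illustration added here, not part of the original fix:

// Diagnostic sketch: verify that HADOOP_CONF_DIR is set for this JVM and that
// hbase-site.xml is visible on the classpath.
public class ConfCheck {
    public static void main(String[] args) {
        // Should print the value exported via the CM gateway configuration,
        // e.g. /etc/hbase/conf:/etc/hadoop/conf:/etc/hive/conf
        System.out.println("HADOOP_CONF_DIR = " + System.getenv("HADOOP_CONF_DIR"));

        // HBaseConfiguration.create() loads hbase-site.xml from the classpath;
        // a null here means the HBase client falls back to its defaults
        // (localhost ZooKeeper), which typically ends in connection errors.
        java.net.URL hbaseSite = ConfCheck.class.getClassLoader().getResource("hbase-site.xml");
        System.out.println("hbase-site.xml on classpath: " + hbaseSite);
    }
}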


2. When creating the HBase connection in code, use HBaseConfiguration.create():

Configuration configuration = HBaseConfiguration.create();
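In context the call usually looks something like the sketch below; only the HBaseConfiguration.create() line comes from the original fix, while the surrounding connection code and the table name "my_table" are illustrative assumptions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

public class HBaseClientExample {
    public static void main(String[] args) throws Exception {
        // HBaseConfiguration.create() reads hbase-site.xml (and hbase-default.xml)
        // from the classpath, so the ZooKeeper quorum and other cluster settings
        // come from /etc/hbase/conf instead of hard-coded defaults.
        Configuration configuration = HBaseConfiguration.create();

        // "my_table" is a hypothetical table name used only for illustration.
        try (Connection connection = ConnectionFactory.createConnection(configuration);
             Table table = connection.getTable(TableName.valueOf("my_table"))) {
            System.out.println("Connected; table: " + table.getName());
        }
    }
}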

The error message is as follows:

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:

Mon Jan 29 16:00:03 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68171: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=redhat209.life.com,60020,1517191627572, seqNum=0

at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:286)

at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:231)

at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)

at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)

at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)

at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)

at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)

at org.apache.hadoop.hbase.client.ClientScanner.&lt;init&gt;(ClientScanner.java:162)

at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:862)

at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)

at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)

at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:421)

at org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2333)

... 21 more

Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68171: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=redhat209.life.com,60020,1517191627572, seqNum=0

at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:169)

at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)

... 3 more

Caused by: java.io.IOException: Broken pipe

at sun.nio.ch.FileDispatcherImpl.write0(Native Method)

at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)

at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)

at sun.nio.ch.IOUtil.write(IOUtil.java:65)

at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)

at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)

at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)

at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)

at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)

at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)

at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)

at java.io.DataOutputStream.flush(DataOutputStream.java:123)

at org.apache.hadoop.hbase.ipc.IPCUtil.write(IPCUtil.java:278)

at org.apache.hadoop.hbase.ipc.IPCUtil.write(IPCUtil.java:266)

at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:921)

at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:874)

at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1243)

at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)

at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)

at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:400)

at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:204)

at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:65)

at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)

at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:381)

at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:355)

at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)

... 4 more

Driver stacktrace:

at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1457)

at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1445)

at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1444)

at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)

at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)

at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1444)

at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)

at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)

at scala.Option.foreach(Option.scala:236)

at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)

at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1668)

at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1627)

at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1616)

at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)

at org.apache.spark.SparkContext.runJob(SparkContext.scala:1862)

at org.apache.spark.SparkContext.runJob(SparkContext.scala:1875)

at org.apache.spark.SparkContext.runJob(SparkContext.scala:1888)

at org.apache.spark.SparkContext.runJob(SparkContext.scala:1959)

at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:920)

at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:918)

at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)

at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)

at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)

at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:918)

at scala.com.chinalife.StreamingToHbase$$anonfun$main$2.apply(StreamingToHbase.scala:99)

at scala.com.chinalife.StreamingToHbase$$anonfun$main$2.apply(StreamingToHbase.scala:98)

at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)

at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)

at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50)

at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)

at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)

at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426)

at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49)

at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)

at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)

at scala.util.Try$.apply(Try.scala:161)

at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)

at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224)

at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)

at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)

at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)

at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

at java.lang.Thread.run(Thread.java:745)

Caused by: java.sql.SQLException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:

Mon Jan 29 16:00:03 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68171: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=redhat209.life.com,60020,1517191627572, seqNum=0


