
Usage and code examples of the org.apache.hadoop.yarn.api.records.YarnClusterMetrics.getNumNodeManagers() method

This article collects some Java code examples of the org.apache.hadoop.yarn.api.records.YarnClusterMetrics.getNumNodeManagers() method and shows how YarnClusterMetrics.getNumNodeManagers() is used in practice. The examples are taken from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of the YarnClusterMetrics.getNumNodeManagers() method:

Package path: org.apache.hadoop.yarn.api.records.YarnClusterMetrics
Class name: YarnClusterMetrics
Method name: getNumNodeManagers

About YarnClusterMetrics.getNumNodeManagers

Gets the number of NodeManagers in the cluster.
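Before the collected examples, here is a minimal, self-contained sketch showing how getNumNodeManagers() is typically reached through a YarnClient. This is an illustrative assumption, not code from any of the projects below; the class name ClusterSizeProbe and the printed message are hypothetical.

import org.apache.hadoop.yarn.api.records.YarnClusterMetrics;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ClusterSizeProbe {
  public static void main(String[] args) throws Exception {
    // Build a client against the ResourceManager configured in yarn-site.xml.
    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(new YarnConfiguration());
    yarnClient.start();
    try {
      // getYarnClusterMetrics() asks the ResourceManager for cluster-wide counters;
      // getNumNodeManagers() returns the number of registered NodeManagers.
      YarnClusterMetrics metrics = yarnClient.getYarnClusterMetrics();
      System.out.println("NodeManagers in the cluster: " + metrics.getNumNodeManagers());
    } finally {
      yarnClient.stop();
    }
  }
}

Because getYarnClusterMetrics() performs an RPC to the ResourceManager, it can throw YarnException or IOException, which is why most of the examples below wrap the call in a try/catch.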

Code examples

Example source: apache/flink

ps.append("NodeManagers in the ClusterClient " + metrics.getNumNodeManagers());
List<NodeReport> nodes = yarnClient.getNodeReports(NodeState.RUNNING);
final String format = "|%-16s |%-16s %n";

Example source: alibaba/jstorm

+ ", numNodeManagers=" + clusterMetrics.getNumNodeManagers());

Example source: Qihoo360/XLearning

yarnClient.init(conf);
yarnClient.start();
LOG.info("Requesting a new application from cluster with " + yarnClient.getYarnClusterMetrics().getNumNodeManagers() + " NodeManagers");
newAPP = yarnClient.createApplication();

Example source: apache/metron

+ ", numNodeManagers=" + clusterMetrics.getNumNodeManagers());

Example source: com.github.jiayuhan-it/hadoop-mapreduce-client-jobclient

public ClusterMetrics getClusterMetrics() throws IOException,
    InterruptedException {
  try {
    YarnClusterMetrics metrics = client.getYarnClusterMetrics();
    // Map the YARN cluster metrics onto the legacy MapReduce ClusterMetrics,
    // deriving slot and tracker counts from the NodeManager count.
    ClusterMetrics oldMetrics =
        new ClusterMetrics(1, 1, 1, 1, 1, 1,
            metrics.getNumNodeManagers() * 10,
            metrics.getNumNodeManagers() * 2, 1,
            metrics.getNumNodeManagers(), 0, 0);
    return oldMetrics;
  } catch (YarnException e) {
    throw new IOException(e);
  }
}

Example source: org.apache.tez/tez-mapreduce

public ClusterMetrics getClusterMetrics() throws IOException,
    InterruptedException {
  YarnClusterMetrics metrics;
  try {
    metrics = client.getYarnClusterMetrics();
  } catch (YarnException e) {
    throw new IOException(e);
  }
  ClusterMetrics oldMetrics = new ClusterMetrics(1, 1, 1, 1, 1, 1,
      metrics.getNumNodeManagers() * 10, metrics.getNumNodeManagers() * 2, 1,
      metrics.getNumNodeManagers(), 0, 0);
  return oldMetrics;
}

Example source: org.apache.hadoop/hadoop-mapreduce-client-jobclient

public ClusterMetrics getClusterMetrics() throws IOException,
    InterruptedException {
  try {
    YarnClusterMetrics metrics = client.getYarnClusterMetrics();
    ClusterMetrics oldMetrics =
        new ClusterMetrics(1, 1, 1, 1, 1, 1,
            metrics.getNumNodeManagers() * 10,
            metrics.getNumNodeManagers() * 2, 1,
            metrics.getNumNodeManagers(), 0, 0);
    return oldMetrics;
  } catch (YarnException e) {
    throw new IOException(e);
  }
}

Example source: org.apache.flink/flink-yarn_2.11

ps.append("NodeManagers in the ClusterClient " + metrics.getNumNodeManagers());
List<NodeReport> nodes = yarnClient.getNodeReports(NodeState.RUNNING);
final String format = "|%-16s |%-16s %n";

Example source: org.apache.flink/flink-yarn

ps.append("NodeManagers in the ClusterClient " + metrics.getNumNodeManagers());
List<NodeReport> nodes = yarnClient.getNodeReports(NodeState.RUNNING);
final String format = "|%-16s |%-16s %n";

Example source: org.apache.hadoop/hadoop-yarn-client

protected NodesInformation getNodesInfo() {
  NodesInformation nodeInfo = new NodesInformation();
  YarnClusterMetrics yarnClusterMetrics;
  try {
    yarnClusterMetrics = client.getYarnClusterMetrics();
  } catch (IOException ie) {
    LOG.error("Unable to fetch cluster metrics", ie);
    return nodeInfo;
  } catch (YarnException ye) {
    LOG.error("Unable to fetch cluster metrics", ye);
    return nodeInfo;
  }
  nodeInfo.decommissionedNodes =
      yarnClusterMetrics.getNumDecommissionedNodeManagers();
  nodeInfo.totalNodes = yarnClusterMetrics.getNumNodeManagers();
  nodeInfo.runningNodes = yarnClusterMetrics.getNumActiveNodeManagers();
  nodeInfo.lostNodes = yarnClusterMetrics.getNumLostNodeManagers();
  nodeInfo.unhealthyNodes = yarnClusterMetrics.getNumUnhealthyNodeManagers();
  nodeInfo.rebootedNodes = yarnClusterMetrics.getNumRebootedNodeManagers();
  return nodeInfo;
}

Example source: hopshadoop/hopsworks

getNumNodeManagers());
List<NodeReport> nodes = yarnClient.getNodeReports(NodeState.RUNNING);
final String format = "|%-16s |%-16s %n";

Example source: io.hops/hadoop-yarn-client

protected NodesInformation getNodesInfo() {
  NodesInformation nodeInfo = new NodesInformation();
  YarnClusterMetrics yarnClusterMetrics;
  try {
    yarnClusterMetrics = client.getYarnClusterMetrics();
  } catch (IOException ie) {
    LOG.error("Unable to fetch cluster metrics", ie);
    return nodeInfo;
  } catch (YarnException ye) {
    LOG.error("Unable to fetch cluster metrics", ye);
    return nodeInfo;
  }
  nodeInfo.decommissionedNodes =
      yarnClusterMetrics.getNumDecommissionedNodeManagers();
  nodeInfo.totalNodes = yarnClusterMetrics.getNumNodeManagers();
  nodeInfo.runningNodes = yarnClusterMetrics.getNumActiveNodeManagers();
  nodeInfo.lostNodes = yarnClusterMetrics.getNumLostNodeManagers();
  nodeInfo.unhealthyNodes = yarnClusterMetrics.getNumUnhealthyNodeManagers();
  nodeInfo.rebootedNodes = yarnClusterMetrics.getNumRebootedNodeManagers();
  return nodeInfo;
}

Example source: org.apache.hadoop/hadoop-yarn-applications-distributedshell

+ ", numNodeManagers=" + clusterMetrics.getNumNodeManagers());

Example source: apache/hama

YarnClusterMetrics clusterMetrics = yarnClient.getYarnClusterMetrics();
LOG.info("Got Cluster metric info from ASM"
    + ", numNodeManagers=" + clusterMetrics.getNumNodeManagers());

Example source: org.apache.apex/apex-engine

LOG.info("Got Cluster metric info from ASM, numNodeManagers={}", clusterMetrics.getNumNodeManagers());

Example source: org.apache.hadoop/hadoop-yarn-server-tests

/**
 * Wait for all the NodeManagers to connect to the ResourceManager.
 *
 * @param timeout Time to wait (sleeps in 10 ms intervals) in milliseconds.
 * @return true if all NodeManagers connect to the (Active)
 *         ResourceManager, false otherwise.
 * @throws YarnException if there is no active RM
 * @throws InterruptedException if any thread has interrupted
 *         the current thread
 */
public boolean waitForNodeManagersToConnect(long timeout)
    throws YarnException, InterruptedException {
  GetClusterMetricsRequest req = GetClusterMetricsRequest.newInstance();
  for (int i = 0; i < timeout / 10; i++) {
    ResourceManager rm = getResourceManager();
    if (rm == null) {
      throw new YarnException("Can not find the active RM.");
    }
    else if (nodeManagers.length == rm.getClientRMService()
        .getClusterMetrics(req).getClusterMetrics().getNumNodeManagers()) {
      LOG.info("All Node Managers connected in MiniYARNCluster");
      return true;
    }
    Thread.sleep(10);
  }
  LOG.info("Node Managers did not connect within 5000ms");
  return false;
}
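As a point of reference, a test might use this helper roughly as sketched below; the cluster name, node counts, and the 5-second timeout are illustrative assumptions rather than code from the project above.

import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.MiniYARNCluster;

public class MiniClusterWaitExample {
  public static void main(String[] args) throws Exception {
    // Spin up an in-process YARN cluster with two NodeManagers
    // (one local dir and one log dir per NodeManager).
    MiniYARNCluster cluster = new MiniYARNCluster("metrics-test", 2, 1, 1);
    cluster.init(new YarnConfiguration());
    cluster.start();
    try {
      // Block until both NodeManagers have registered with the
      // ResourceManager, giving up after 5 seconds.
      boolean allConnected = cluster.waitForNodeManagersToConnect(5000);
      System.out.println("All NodeManagers connected: " + allConnected);
    } finally {
      cluster.stop();
    }
  }
}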

Example source: linkedin/dynamometer

LOG.info("Got Cluster metric info from ASM, numNodeManagers=" + clusterMetrics.getNumNodeManagers());

Example source: ch.cern.hadoop/hadoop-yarn-server-tests

/**
 * Wait for all the NodeManagers to connect to the ResourceManager.
 *
 * @param timeout Time to wait (sleeps in 10 ms intervals) in milliseconds.
 * @return true if all NodeManagers connect to the (Active)
 *         ResourceManager, false otherwise.
 * @throws YarnException
 * @throws InterruptedException
 */
public boolean waitForNodeManagersToConnect(long timeout)
    throws YarnException, InterruptedException {
  GetClusterMetricsRequest req = GetClusterMetricsRequest.newInstance();
  for (int i = 0; i < timeout / 10; i++) {
    ResourceManager rm = getResourceManager();
    if (rm == null) {
      throw new YarnException("Can not find the active RM.");
    }
    else if (nodeManagers.length == rm.getClientRMService()
        .getClusterMetrics(req).getClusterMetrics().getNumNodeManagers()) {
      LOG.info("All Node Managers connected in MiniYARNCluster");
      return true;
    }
    Thread.sleep(10);
  }
  return false;
}

Example source: org.apache.hadoop/hadoop-yarn-server-router

public static GetClusterMetricsResponse merge(
    Collection<GetClusterMetricsResponse> responses) {
  // Sum the per-cluster metrics into a single aggregated YarnClusterMetrics.
  YarnClusterMetrics tmp = YarnClusterMetrics.newInstance(0);
  for (GetClusterMetricsResponse response : responses) {
    YarnClusterMetrics metrics = response.getClusterMetrics();
    tmp.setNumNodeManagers(
        tmp.getNumNodeManagers() + metrics.getNumNodeManagers());
    tmp.setNumActiveNodeManagers(
        tmp.getNumActiveNodeManagers() + metrics.getNumActiveNodeManagers());
    tmp.setNumDecommissionedNodeManagers(
        tmp.getNumDecommissionedNodeManagers() + metrics
            .getNumDecommissionedNodeManagers());
    tmp.setNumLostNodeManagers(
        tmp.getNumLostNodeManagers() + metrics.getNumLostNodeManagers());
    tmp.setNumRebootedNodeManagers(tmp.getNumRebootedNodeManagers() + metrics
        .getNumRebootedNodeManagers());
    tmp.setNumUnhealthyNodeManagers(
        tmp.getNumUnhealthyNodeManagers() + metrics
            .getNumUnhealthyNodeManagers());
  }
  return GetClusterMetricsResponse.newInstance(tmp);
}
