Author: 幻想6666_321 | Source: Internet | 2023-06-05 21:20
Prometheus itself is simple to deploy, but monitoring Hadoop nodes is more involved. Below is my full deployment process.
Installing with Docker is convenient. My Ubuntu version is 18.04. Steps 1-4 below run on the monitoring machine; step 5 runs on the machines where Hadoop is deployed.
1. Pull the images
docker pull prom/node-exporter
docker pull prom/prometheus
docker pull grafana/grafana
docker pull prom/blackbox-exporter
2. Create directories and configuration files
Create the Prometheus configuration file:
sudo mkdir /opt/prometheus
cd /opt/prometheus/
vim prometheus.yml
Create an empty grafana directory to store Grafana's data:
sudo mkdir /opt/grafana
Set permissions:
sudo chmod -R 777 /opt/grafana
Contents of prometheus.yml:
global:
  scrape_interval: 60s
  evaluation_interval: 60s
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
        labels:
          instance: prometheus
  - job_name: linux
    static_configs:
      - targets: ['192.168.10.82:9100']
        labels:
          instance: localhost
3. Create and start the Docker containers
sudo docker run -d -p 9090:9090 -v /opt/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
sudo docker run -d -p 3000:3000 --name=grafana -v /opt/grafana:/var/lib/grafana grafana/grafana
sudo docker run -d --net="host" -v "/proc:/host/proc:ro" -v "/sys:/host/sys:ro" -v "/:/rootfs:ro" prom/node-exporter
sudo docker run -d -p 9115:9115 -v /opt/blackbox/blackbox.yml:/etc/blackbox_exporter/config.yml --name blackbox_exporter prom/blackbox-exporter
Note: with --net="host" a -p mapping is ignored, so node_exporter listens on host port 9100 directly. The blackbox mount needs a container-side path; the image's default config path is /etc/blackbox_exporter/config.yml.
Check the running containers: sudo docker ps
4. Access the web UIs
http://localhost:3000/
http://localhost:9090/targets
http://localhost:9090/graph
5. Monitoring the Hadoop cluster nodes with Prometheus
jmx_exporter collects metrics from each big-data component, node_exporter collects host-level metrics from the servers, Prometheus stores the data, and Grafana displays it.
5.1 Create the following directories and files on each monitored machine
sudo mkdir -p /opt/prometheus/monitoring
sudo mkdir -p /opt/prometheus/monitoring/zookeeper
sudo mkdir -p /opt/prometheus/monitoring/hadoop
Under /opt/prometheus/monitoring create one file per component, each with a .yaml suffix:
namenode.yaml
datanode.yaml
resourcemanager.yaml
nodemanager.yaml
journalnode.yaml
zkfc.yaml
httpfs.yaml
proxyserver.yaml
historyserver.yaml
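Since the per-component files are identical except for the JMX port (see section 5.4), they can be generated instead of written by hand. A minimal sketch in Python; the component-to-port mapping below is illustrative (any unused, non-colliding ports work), only namenode:1234 and datanode:1244 come from this post:

```python
import os
import tempfile

# Illustrative component -> JMX port mapping. Only namenode (1234) and
# datanode (1244) match the ports used later in this post; the rest are
# arbitrary unused ports.
COMPONENT_PORTS = {
    "namenode": 1234,
    "datanode": 1244,
    "resourcemanager": 1254,
    "nodemanager": 1264,
    "journalnode": 1274,
    "zkfc": 1284,
    "httpfs": 1294,
    "proxyserver": 1304,
    "historyserver": 1314,
}

# Same structure as the namenode.yaml / datanode.yaml shown in section 5.4.
TEMPLATE = """startDelaySeconds: 0
hostPort: localhost:{port}
ssl: false
lowercaseOutputName: false
lowercaseOutputLabelNames: false
"""

def write_configs(out_dir):
    """Write one <component>.yaml per entry and return the file names."""
    os.makedirs(out_dir, exist_ok=True)
    for component, port in COMPONENT_PORTS.items():
        path = os.path.join(out_dir, component + ".yaml")
        with open(path, "w") as f:
            f.write(TEMPLATE.format(port=port))
    return sorted(os.listdir(out_dir))

# Demo against a temporary directory; in the real setup pass
# "/opt/prometheus/monitoring" instead.
demo_dir = tempfile.mkdtemp()
files = write_configs(demo_dir)
```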
5.2 Download jmx_prometheus_javaagent-0.15.0.jar
The JMX approach loads a Java agent into each Hadoop process; the agent opens an HTTP port and exposes the component's metrics.
https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.15.0/jmx_prometheus_javaagent-0.15.0.jar
mv jmx_prometheus_javaagent-0.15.0.jar /opt/prometheus/monitoring
5.3 Hadoop configuration
vim hadoop/etc/hadoop/hadoop-env.sh
export HADOOP_NAMENODE_JMX_OPTS="-Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=1234 -javaagent:/opt/prometheus/monitoring/jmx_prometheus_javaagent-0.15.0.jar=9211:/opt/prometheus/monitoring/namenode.yaml"
export HADOOP_DATANODE_JMX_OPTS="-Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=1244 -javaagent:/opt/prometheus/monitoring/jmx_prometheus_javaagent-0.15.0.jar=9212:/opt/prometheus/monitoring/datanode.yaml"
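The -javaagent value has the form <jar>=<port>:<config.yaml>: the agent serves metrics over HTTP on <port> (9211/9212 here, the ports Prometheus will scrape) using the rules in <config.yaml>. A small sketch showing how that argument decomposes:

```python
def parse_javaagent(opt):
    """Split a -javaagent option into (jar, http_port, config_path)."""
    jar, _, arg = opt.removeprefix("-javaagent:").partition("=")
    port, _, config = arg.partition(":")
    return jar, int(port), config

# The datanode agent option from the hadoop-env.sh line above.
jar, port, config = parse_javaagent(
    "-javaagent:/opt/prometheus/monitoring/jmx_prometheus_javaagent-0.15.0.jar"
    "=9212:/opt/prometheus/monitoring/datanode.yaml"
)
```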
Copy this to every Hadoop node.
Each component needs its own .yaml file and differs only in its port (the port can be chosen freely; it is not used for HTTP access). Just make sure no two components share a port.
5.4 Exporter configuration files for the monitored components
5.4.1 namenode.yaml:
For configuration reference see https://codechina.csdn.net/mirrors/prometheus/jmx_exporter?utm_source=csdn_github_accelerator
startDelaySeconds: 0
hostPort: localhost:1234 # host IP (usually localhost); 1234 is the JMX port set above
#jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:1234/jmxrmi
ssl: false
lowercaseOutputName: false
lowercaseOutputLabelNames: false
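Once the agent is running, http://<node>:9211/metrics returns Prometheus text exposition format. A sketch of what that output looks like and how a scraper reads it; the sample metric names are illustrative, since real names depend on the Hadoop version and the rules in namenode.yaml:

```python
# Illustrative sample of jmx_exporter output (Prometheus text format).
SAMPLE = """\
# HELP hadoop_namenode_capacity_total Total raw capacity of DataNodes
# TYPE hadoop_namenode_capacity_total gauge
hadoop_namenode_capacity_total 1.099511627776e+12
hadoop_namenode_files_total 12345.0
"""

def parse_metrics(text):
    """Parse simple name -> value pairs, skipping comment lines.

    Labels are not handled; this only shows the shape of the format.
    """
    metrics = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        name, value = line.rsplit(" ", 1)
        metrics[name] = float(value)
    return metrics

metrics = parse_metrics(SAMPLE)
```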
5.4.2 datanode.yaml:
startDelaySeconds: 0
hostPort: 127.0.0.1:1244 # host IP (usually localhost); 1244 is the JMX port (any unused port)
#jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:1244/jmxrmi
ssl: false
lowercaseOutputName: false
lowercaseOutputLabelNames: false
5.5 Add the following to the Hadoop launch script /hadoop/hadoop-3.2.2/bin/hdfs:
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_DATANODE_JMX_OPTS" # add the datanode JMX options
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_NAMENODE_JMX_OPTS" # add the namenode JMX options
6. Modify the Prometheus configuration
- job_name: 'hdfs-namenode'
  static_configs:
    - targets: ['192.168.10.28:9211','192.168.10.238:9211']
- job_name: 'hdfs-datanode'
  static_configs:
    - targets: ['192.168.10.28:9212','192.168.10.238:9212','192.168.10.239:9212']
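With more nodes, the job and target lists can be rendered rather than typed by hand. A minimal sketch, assuming the host list and exporter ports (9211 for namenodes, 9212 for datanodes) used in this post:

```python
def scrape_configs(jobs):
    """Render Prometheus scrape_configs YAML from {job: (port, hosts)}."""
    lines = ["scrape_configs:"]
    for job, (port, hosts) in jobs.items():
        targets = ",".join("'%s:%d'" % (host, port) for host in hosts)
        lines.append("  - job_name: '%s'" % job)
        lines.append("    static_configs:")
        lines.append("      - targets: [%s]" % targets)
    return "\n".join(lines)

yaml_text = scrape_configs({
    "hdfs-namenode": (9211, ["192.168.10.28", "192.168.10.238"]),
    "hdfs-datanode": (9212, ["192.168.10.28", "192.168.10.238", "192.168.10.239"]),
})
```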
Restart Hadoop, then restart Prometheus.