
Monitoring a multi-node Hadoop cluster with Ganglia

Stack Overflow user
Asked on 2015-02-23 12:00:09
Answers: 2 · Views: 1.6K · Followers: 0 · Votes: 1

I want to monitor a multi-node Hadoop cluster (Hadoop version 0.20.2) with Ganglia. My Hadoop is working fine, and I installed Ganglia after reading these blog posts:

http://hakunamapdata.com/ganglia-configuration-for-a-small-hadoop-cluster-and-some-troubleshooting/

http://hokamblogs.blogspot.in/2013/06/ganglia-overview-and-installation-on.html

I have also studied Monitoring with Ganglia.pdf (Appendix B, Ganglia and Hadoop/HBase).

I have modified only the following lines in **hadoop-metrics.properties** (the same on all Hadoop nodes):



# Configuration of the "dfs" context for ganglia
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
dfs.period=10
dfs.servers=192.168.1.182:8649

# Configuration of the "mapred" context for ganglia
mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext
mapred.period=10
mapred.servers=192.168.1.182:8649

# Configuration of the "jvm" context for ganglia
jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
jvm.period=10
jvm.servers=192.168.1.182:8649
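Because Java `.properties` files give no feedback on malformed values, a single stray character in a `*.servers` entry (for example a doubled `:8649` port) silently breaks metric delivery. The following is an illustrative sketch (not part of Hadoop) of a quick sanity check for such entries:

```python
# Hypothetical sanity check for "*.servers" values in
# hadoop-metrics.properties: each value should be a comma-separated
# list of host:port pairs, so a stray extra ":8649" is caught early.

def check_servers(value):
    """Return a list of (host, port) pairs, or raise ValueError."""
    pairs = []
    for entry in value.split(","):
        parts = entry.strip().split(":")
        if len(parts) != 2 or not parts[1].isdigit():
            raise ValueError("malformed host:port entry: %r" % entry)
        pairs.append((parts[0], int(parts[1])))
    return pairs

print(check_servers("192.168.1.182:8649"))   # [('192.168.1.182', 8649)]
# check_servers("192.168.1.182:8649:8649")   # raises ValueError
```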


**gmetad.conf** (only on the Hadoop master node)


data_source "Hadoop-slaves" 5 192.168.1.182:8649
RRAs "RRA:AVERAGE:0.5:1:302400"   # because I want to analyse one week of data
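The RRA sizing is worth sanity-checking: with a consolidation factor of 1, `RRA:AVERAGE:0.5:1:302400` keeps 302400 rows at the base RRD step, and the retention window is simply rows × step. Assuming a 15-second RRD step (commonly gmetad's default), one week only needs 40320 rows; 302400 rows would correspond to a 2-second step:

```python
# Retention of an RRA with steps=1 is rows * rrd_step seconds.
ONE_WEEK = 7 * 24 * 3600          # 604800 seconds

def rows_needed(seconds, step):
    """Rows required to retain `seconds` of data at `step` s per row."""
    return seconds // step

print(rows_needed(ONE_WEEK, 15))  # 40320 rows at an assumed 15 s step
print(302400 * 2)                 # 302400 rows span a week only at a 2 s step
```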



**gmond.conf** (on all the Hadoop slave nodes and the Hadoop master)

globals {
  daemonize = yes
  setuid = yes
  user = ganglia
  debug_level = 0
  max_udp_msg_len = 1472
  mute = no
  deaf = no
  allow_extra_data = yes
  host_dmax = 0 /*secs */
  cleanup_threshold = 300 /*secs */
  gexec = no
  send_metadata_interval = 0
}

cluster {
  name = "Hadoop-slaves"
  owner = "Sandeep Priyank"
  latlong = "unspecified"
  url = "unspecified"
}

/* The host section describes attributes of the host, like the location */
host {
  location = "CASL"
}

/* Feel free to specify as many udp_send_channels as you like.  Gmond
   used to only support having a single channel */
udp_send_channel {
  host = 192.168.1.182
  port = 8649
  ttl = 1
}
/* You can specify as many udp_recv_channels as you like as well. */
udp_recv_channel {
  port = 8649

}

/* You can specify as many tcp_accept_channels as you like to share
   an xml description of the state of the cluster */
tcp_accept_channel {
  port = 8649
 }
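A quick way to see whether the Hadoop metric groups are actually reaching gmond is to read the XML dump that the `tcp_accept_channel` above serves on port 8649 and list the metric names per host. A minimal sketch (the host/IP is the one from this setup; adjust as needed):

```python
import socket
import xml.etree.ElementTree as ET

def fetch_gmond_xml(host="192.168.1.182", port=8649, timeout=5):
    """Read the cluster-state XML that gmond serves on its
    tcp_accept_channel (gmond writes the dump, then closes)."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        chunks = []
        while True:
            data = s.recv(65536)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", "replace")

def metric_names(xml_text):
    """Map each host name to the set of metric names it reports."""
    root = ET.fromstring(xml_text)
    return {host.get("NAME"): {m.get("NAME") for m in host.findall("METRIC")}
            for host in root.iter("HOST")}

# If only system metrics (load_one, mem_free, ...) appear and no
# dfs/jvm/mapred names, the Hadoop side is not emitting to gmond:
# print(metric_names(fetch_gmond_xml()))
```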

Right now Ganglia shows only the system metrics (memory, disk, etc.) for all the nodes, but it does not show the Hadoop metrics (jvm, mapred, and so on) in the web interface. How can I fix this?


2 Answers

Stack Overflow user

Accepted answer

Posted on 2015-03-14 12:44:46

Thanks, everyone. If you are using an older version of Hadoop, then copy the following files (taken from a newer version of Hadoop):

  1. GangliaContext31.java
  2. GangliaContext.java

into the path hadoop/src/core/org/apache/hadoop/metrics/ganglia of your Hadoop source tree.

Then compile Hadoop with ant (setting the proper proxy while compiling). If you get errors such as a missing function definition, copy that function definition (from the newer version) into the appropriate java file and compile Hadoop again. It worked for me.

Votes: 0

Stack Overflow user

Posted on 2015-02-23 19:31:31

I do have Hadoop working with Ganglia, and yes, I see a lot of Hadoop metrics on Ganglia (containers, map tasks, vmem). In fact, Hadoop reports more than 100 metrics to Ganglia.

The hokamblogs post was enough for me.

I edited **hadoop-metrics2.properties** on the master node with the following content:

namenode.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
namenode.sink.ganglia.period=10
namenode.sink.ganglia.servers=gmetad_hostname_or_ip:8649

resourcemanager.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
resourcemanager.sink.ganglia.period=10
resourcemanager.sink.ganglia.servers=gmetad_hostname_or_ip:8649

I also edited the same file on the slaves:
datanode.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
datanode.sink.ganglia.period=10
datanode.sink.ganglia.servers=gmetad_hostname_or_ip:8649

nodemanager.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
nodemanager.sink.ganglia.period=10
nodemanager.sink.ganglia.servers=gmetad_hostname_or_ip:8649
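Since the four stanzas above differ only in the daemon prefix, one way to keep the master and slave copies of the file consistent is to generate them. This is a hypothetical helper, not part of Hadoop; `gmetad_hostname_or_ip:8649` is the same placeholder address used above:

```python
# Illustrative generator for hadoop-metrics2.properties stanzas, so the
# master (namenode/resourcemanager) and slave (datanode/nodemanager)
# copies stay consistent. GMETAD is a placeholder address.
GMETAD = "gmetad_hostname_or_ip:8649"
SINK = "org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31"

def metrics2_stanzas(daemons, servers=GMETAD, period=10):
    lines = []
    for d in daemons:
        lines += ["%s.sink.ganglia.class=%s" % (d, SINK),
                  "%s.sink.ganglia.period=%d" % (d, period),
                  "%s.sink.ganglia.servers=%s" % (d, servers),
                  ""]
    return "\n".join(lines)

print(metrics2_stanzas(["namenode", "resourcemanager"]))  # master copy
print(metrics2_stanzas(["datanode", "nodemanager"]))      # slave copy
```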

Just remember to restart Hadoop and Ganglia after changing the files.

I hope this helps.

Votes: 0
Original content provided by Stack Overflow.
Original question: https://stackoverflow.com/questions/28673330