Nginx Log Monitoring with Filebeat + ElasticSearch + Grafana

Installing Filebeat

Official download page: https://www.elastic.co/downloads/beats/filebeat

# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.1.0-linux-x86_64.tar.gz
# tar -zxf filebeat-7.1.0-linux-x86_64.tar.gz -C /usr/local/
# mv /usr/local/filebeat-7.1.0-linux-x86_64 /usr/local/filebeat
# cd /usr/local/filebeat/ 
# vim filebeat.yml

Configuring Filebeat

filebeat.yml

Reference: https://github.com/elastic/beats/issues/11866
Reference: https://iminto.github.io/post/filebeat修改index的一个坑/

#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Disabled here; the nginx module enabled below supplies the inputs instead
  enabled: false
  paths:
    - /usr/local/nginx/logs/access.log
  #scan_frequency: 10s

#============================= Filebeat modules ===============================
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

#==================== Elasticsearch template setting ==========================
# In 7.x, index lifecycle management (ILM) must be disabled, otherwise setting a custom ES index name fails
setup.ilm.enabled: false
# The template must also be configured, or the custom ES index name cannot be applied
setup.template.enabled: true
setup.template.name: "nginx-log"
setup.template.pattern: "nginx-log-*"
setup.template.overwrite: true

setup.template.settings:
  index.number_of_shards: 1
  index.number_of_replicas: 1

#============================== Kibana =====================================
setup.kibana:
  host: "192.168.165.239:5601"

#================================ Outputs =====================================
output.elasticsearch:
  hosts: ["192.168.16.20:9200"]  # ["192.168.16.21:9200", "192.168.16.22:9200", "192.168.16.23:9200"]
  index: "nginx-log-%{+yyyy.MM.dd}"

#================================ Processors =====================================
processors:
  #- add_host_metadata: ~
  #- add_cloud_metadata: ~
  - drop_fields:
        fields: ["beat.name", "beat.version", "host.architecture", "host.name", "beat.hostname", "log.file.path"]
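
With ILM disabled, the `index: "nginx-log-%{+yyyy.MM.dd}"` option produces one index per day. A minimal Python sketch of how that pattern expands into index names (the dates are illustrative only):

```python
from datetime import date, timedelta

def daily_index(d, prefix="nginx-log"):
    """Mimic Filebeat's %{+yyyy.MM.dd} daily index-name expansion."""
    return f"{prefix}-{d.strftime('%Y.%m.%d')}"

today = date(2019, 5, 20)  # example date
names = [daily_index(today + timedelta(days=i)) for i in range(3)]
print(names)  # ['nginx-log-2019.05.20', 'nginx-log-2019.05.21', 'nginx-log-2019.05.22']
```

Daily indices keep each day's logs separate, which makes retention (deleting old indices) straightforward even without ILM.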

The custom index name fails because index lifecycle management (ILM) is enabled by default, and while it is on, the index name is forced to filebeat-*. Disabling it with setup.ilm.enabled: false solves this. To keep ILM enabled and still use a custom index name, adjust Filebeat's ILM settings instead:

Template configuration: https://www.elastic.co/guide/en/beats/filebeat/current/ilm.html

setup.ilm.enabled: auto
setup.ilm.rollover_alias: "filebeat"
setup.ilm.pattern: "{now/d}-000001"
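
Under this scheme, Elasticsearch date math expands the pattern into the initial write index, and each rollover increments the numeric suffix. A rough Python sketch of the resulting names (assuming the default yyyy.MM.dd date format; the dates are illustrative):

```python
from datetime import date

def rollover_index(alias, d, generation):
    """Approximate the '{now/d}-000001' ILM rollover index naming."""
    return f"{alias}-{d.strftime('%Y.%m.%d')}-{generation:06d}"

print(rollover_index("filebeat", date(2019, 5, 20), 1))  # filebeat-2019.05.20-000001
```

Queries go through the `filebeat` alias, so readers never need to know the current generation number.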

Enabling the nginx Module

# cp modules.d/nginx.yml.disabled modules.d/nginx.yml
# vim modules.d/nginx.yml
# Module: nginx
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.1/filebeat-module-nginx.html

- module: nginx
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/usr/local/nginx/logs/access.log"]

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: true

  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/usr/local/nginx/logs/error.log"]

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: true

Configuring Nginx

http {
    log_format   main   '$remote_addr -'
                        ' $remote_user'
                        ' [$time_local]'
                        ' "$request"'
                        ' $status'
                        ' $body_bytes_sent'
                        ' "$http_referer"'
                        ' "$http_user_agent"'
                        ' "$http_x_forwarded_for"'
                        ' $upstream_response_time'
                        ' $upstream_addr';

    access_log  logs/access.log  main;
}
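
The ingest pipeline's grok pattern below must match this log_format field-for-field. A quick way to sanity-check the format is a Python regex mirroring it (the sample line and regex are illustrative, not part of the setup):

```python
import re

# Regex mirroring the nginx 'main' log_format above, one named group per variable.
LOG_RE = re.compile(
    r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) \[(?P<time_local>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<body_bytes_sent>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)" '
    r'"(?P<http_x_forwarded_for>[^"]*)" (?P<upstream_response_time>\S+) '
    r'(?P<upstream_addr>\S+)'
)

# Hypothetical access-log line in the 'main' format.
line = ('192.168.1.10 - - [25/Mar/2023:10:00:00 +0800] "GET /index.html HTTP/1.1" '
        '200 612 "-" "curl/7.64.1" "-" 0.003 10.0.0.5:8080')
m = LOG_RE.match(line)
print(m.group("status"), m.group("upstream_response_time"))  # 200 0.003
```

Note that `$upstream_response_time` and `$upstream_addr` are appended after the default combined format; the two GREEDYDATA captures at the end of the grok pattern below exist precisely for them.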

Configuring the Ingest Pipeline

# cp module/nginx/access/ingest/default.json module/nginx/access/ingest/default.json.bak
# vim module/nginx/access/ingest/default.json
{
  "description": "Pipeline for parsing Nginx access logs. Requires the geoip and user_agent plugins.",
  "processors": [{
    "grok": {
      "field": "message",
      "patterns":[
        "\"?%{IP_LIST:nginx.access.remote_ip_list} - %{DATA:nginx.access.user_name} \\[%{HTTPDATE:nginx.access.time}\\] \"%{GREEDYDATA:nginx.access.info}\" %{NUMBER:nginx.access.response_code} %{NUMBER:nginx.access.body_sent.bytes} \"%{DATA:nginx.access.referrer}\" \"%{DATA:nginx.access.agent}\" \"%{GREEDYDATA:nginx.access.xforwardedfor}\" %{GREEDYDATA:nginx.access.upstream_response_time} %{GREEDYDATA:nginx.access.upstream_addr}"
        ],
      "pattern_definitions": {
        "IP_LIST": "%{IP}(\"?,?\\s*%{IP})*"
      },
      "ignore_missing": true
    }
  }, {
    "grok": {
      "field": "nginx.access.info",
      "patterns": [
          "%{WORD:nginx.access.method} %{DATA:nginx.access.url} HTTP/%{NUMBER:nginx.access.http_version}",
          ""
      ],
      "ignore_missing": true
    }
  }, {
    "remove": {
      "field": "nginx.access.info"
    }
  }, {
    "split": {
      "field": "nginx.access.remote_ip_list",
      "separator": "\"?,?\\s+"
    }
  }, {
    "script": {
      "lang": "painless",
      "inline": "boolean isPrivate(def ip) { try { StringTokenizer tok = new StringTokenizer(ip, '.'); int firstByte = Integer.parseInt(tok.nextToken());       int secondByte = Integer.parseInt(tok.nextToken());       if (firstByte == 10) {         return true;       }       if (firstByte == 192 && secondByte == 168) {         return true;       }       if (firstByte == 172 && secondByte >= 16 && secondByte <= 31) {         return true;       }       if (firstByte == 127) {         return true;       }       return false;     } catch (Exception e) {       return false;     }   }   def found = false;   for (def item : ctx.nginx.access.remote_ip_list) {     if (!isPrivate(item)) {       ctx.nginx.access.remote_ip = item;       found = true;       break;     }   }   if (!found) {     ctx.nginx.access.remote_ip = ctx.nginx.access.remote_ip_list[0];   }"
      }
  }, {
    "remove":{
      "field": "message"
    }
  }, {
    "rename": {
      "field": "@timestamp",
      "target_field": "read_timestamp"
    }
  }, {
    "date": {
      "field": "nginx.access.time",
      "target_field": "@timestamp",
      "formats": ["dd/MMM/YYYY:H:m:s Z"]
    }
  },{
    "remove": {
      "field": "nginx.access.time"
    }
  }, {
    "user_agent": {
      "field": "nginx.access.agent",
      "target_field": "nginx.access.user_agent"
    }
  }, {
    "rename": {
      "field": "nginx.access.agent",
      "target_field": "nginx.access.user_agent.original"
    }
  }, {
    "geoip": {
      "field": "nginx.access.remote_ip",
      "target_field": "nginx.access.geoip"
    }
  }, {
    "script": {
      "lang": "painless",
      "inline": "String tmp=ctx.nginx.access.upstream_response_time; if (tmp=='-'){ctx.nginx.access.upstream_response_time=-1.0}else{ctx.nginx.access.upstream_response_time=Float.parseFloat(tmp)}"
      }
  }],
  "on_failure" : [{
    "set" : {
      "field" : "error.message",
      "value" : "{{ _ingest.on_failure_message }}"
    }
  }]
}
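
The first script processor above selects the client address from the X-Forwarded-For chain: the first public IP, falling back to the first entry. Its Painless logic, re-expressed in Python for readability (a sketch for understanding, not part of the pipeline):

```python
def is_private(ip):
    """Mirror the Painless check for RFC 1918 and loopback addresses."""
    try:
        parts = [int(p) for p in ip.split(".")]
        first, second = parts[0], parts[1]
    except (ValueError, IndexError):
        return False
    return (first == 10
            or (first == 192 and second == 168)
            or (first == 172 and 16 <= second <= 31)
            or first == 127)

def pick_remote_ip(ip_list):
    """First public IP in the list; else fall back to the first entry."""
    for ip in ip_list:
        if not is_private(ip):
            return ip
    return ip_list[0]

print(pick_remote_ip(["10.0.0.1", "203.0.113.7"]))  # 203.0.113.7
```

The chosen address is what the geoip processor later resolves, so picking a public IP (rather than the nearest proxy) is what makes the Grafana geo panels meaningful.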

Starting Filebeat

nohup ./filebeat -e -c filebeat.yml >&/dev/null &

Configuring Grafana

Data Source

![Grafana data source: es-nginx logs](http://www.yezhou.me/AppBlog/images/运维/Grafana Data Sources - es-nginx日志.png)

Dashboard

![Grafana dashboard: NGINX traffic statistics](http://www.yezhou.me/AppBlog/images/运维/Grafana Dashboard - NGINX 访问量统计.png)

Copyright notice:
Author: Joe.Ye
Link: https://www.appblog.cn/index.php/2023/03/25/implement-nginx-log-monitoring-with-filebeat-elasticsearch-grafana/
Source: APP全栈技术分享 (APP Full-Stack Tech Sharing)
The copyright belongs to the author. Please do not repost without permission.
