ELK Deployment Notes

Kafka

First install a JDK, then install Kafka and create a topic named logstash.

Official download: http://kafka.apache.org/downloads

# bin/kafka-topics.sh --create --zookeeper 192.168.165.243:2181 --replication-factor 1 --partitions 1 --topic logstash
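
To confirm the topic exists, you can list or describe it (same ZooKeeper address as above; note that newer Kafka releases use --bootstrap-server instead of --zookeeper):

# bin/kafka-topics.sh --list --zookeeper 192.168.165.243:2181
# bin/kafka-topics.sh --describe --zookeeper 192.168.165.243:2181 --topic logstash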

Elasticsearch

Official download: https://www.elastic.co/downloads/elasticsearch

# tar -zxf elasticsearch-7.1.0-linux-x86_64.tar.gz -C /data/server/
# mv /data/server/elasticsearch-7.1.0 /data/server/elasticsearch
# cd /data/server/elasticsearch/
# vim config/elasticsearch.yml
network.host: 192.168.165.239  # bind address and HTTP port; without this the node cannot be reached from a browser
http.port: 9200

#cluster.name: es_cluster
node.name: node-1  # cluster name (cluster.name) and this node's name within the cluster
node.attr.rack: r1

path.data: /data/server/elasticsearch/data  # where ES stores its data and logs
path.logs: /data/logs/elasticsearch

#cluster.initial_master_nodes: ["node-1", "node-2"]
cluster.initial_master_nodes: ["node-1"]

Note: Elasticsearch refuses to run as the root superuser, so create a dedicated es account:

# Create the es account
adduser es
# Set its password
passwd es

# Give the es user ownership of the elasticsearch directory
chown -R es /data/server/elasticsearch

Start in the foreground (as the es user; Elasticsearch will not start as root):

# ./bin/elasticsearch

Start in the background:

# ./bin/elasticsearch -d
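
Once it is up, a quick health check from the shell (address taken from the elasticsearch.yml above):

# curl 'http://192.168.165.239:9200/_cluster/health?pretty'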

Open http://192.168.16.20:9200/ in a browser; a response like the following confirms the node is up:

{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "_na_",
  "version" : {
    "number" : "7.1.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "606a173",
    "build_date" : "2019-05-16T00:43:15.323135Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

If startup fails with errors like the following:

bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [3] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[3]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

(1) max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]

The per-process limit on open files is too small. Check the current hard and soft limits with:

ulimit -Hn
ulimit -Sn

Edit /etc/security/limits.conf and add the lines below; the change takes effect after the user logs out and back in.

*               soft    nofile          65536
*               hard    nofile          65536
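
To raise the limit in the current shell without re-logging in (temporary, this session only, run as root):

# ulimit -n 65536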

(2) max number of threads [3818] for user [es] is too low, increase to at least [4096]

Same kind of problem: the per-user thread limit is too low. Add the following to /etc/security/limits.conf:

*               soft    nproc           4096
*               hard    nproc           4096

Check the current values with:

ulimit -Hu
ulimit -Su

(3) max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Edit /etc/sysctl.conf, append vm.max_map_count=262144 at the end, then reload:

vi /etc/sysctl.conf
sysctl -p

Running sysctl -p applies the change. The remaining bootstrap error about discovery settings is already covered by the cluster.initial_master_nodes line set in elasticsearch.yml above.
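
A quick check that the kernel picked up the new value:

# sysctl vm.max_map_count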

Logstash

Official download: https://www.elastic.co/downloads/logstash

# tar -zxf logstash-7.1.0.tar.gz -C /data/server/
# mv /data/server/logstash-7.1.0 /data/server/logstash
# cd /data/server/logstash/
# mkdir config_file
# vim config_file/log.conf

Start in the foreground:

# bin/logstash -f config_file/log.conf

Start in the background:

# nohup bin/logstash -f config_file/log.conf >/dev/null &
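
Pipeline files can also be syntax-checked before starting (a Logstash 7.x flag):

# bin/logstash -f config_file/log.conf --config.test_and_exit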

log.conf for collecting application log files and shipping them to Kafka:

input {
  file {
    path => ["/home/dubbo/applogs/*.log"]
    type => "appblog"
    start_position => "beginning"
    #sincedb_path => "/dev/null"
    #ignore_older => 0
    codec => multiline {
      pattern => "^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d{3}"
      negate => true
      what => "previous"
    }
  }
}

output {
  kafka {
    topic_id => "logstash"
    bootstrap_servers => "192.168.16.20:9092"  # Kafka broker address
    batch_size => 5
    codec => json
  }
}
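
To verify that events are actually landing in Kafka, the console consumer that ships with Kafka can be used (broker address from the output block above):

# bin/kafka-console-consumer.sh --bootstrap-server 192.168.16.20:9092 --topic logstash --from-beginning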

log.conf for consuming from Kafka and writing to Elasticsearch (TIME_STAMP_A, APP_NAME, LOG_LVL, TRACE_ID and SPAN_ID are custom grok patterns that must exist under the ./patterns directory):

input {
  kafka {
    bootstrap_servers => "192.168.16.20:9092"
    topics => ["logstash"]
    group_id => "logstash"
    consumer_threads => 5
    decorate_events => true
    codec => json
    type => "appblog"
    #auto_offset_reset => "smallest"
    #reset_beginning => true
  }
}

filter {
  if [type] == "appblog" {
    if [message] =~ "^\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}.\d{3}\s+\[[a-zA-Z0-9._-]+\]\s+\[[a-zA-Z0-9._-]+\][\s\S]*$" {
      grok {
        patterns_dir => "./patterns"
        add_field => { "logmatch" => "99999" }
        match => { "message" => "%{TIME_STAMP_A:logtime}\s+\[%{APP_NAME:appname}\]\s+\[%{LOG_LVL:loglvl}\]\s+\[%{TRACE_ID:traceid}\]\s+\[%{SPAN_ID:spanid}\]" }
      }
      date {
        match => ["logtime", "yyyy-MM-dd HH:mm:ss.SSS"]
        target => "messagetime"
        #locale => "en"
        #timezone => "+00:00"
        #remove_field => ["logtime"]
      }
    }
  }
}

output {
  elasticsearch {
    hosts => ["192.168.16.20:9200"]
    #hosts => ["192.168.16.20:9200","192.168.16.22:9200"]
    index => "%{type}"
  }
}
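
Once this pipeline is running, a quick sanity check that the appblog index is being written (the index name comes from the type field above):

# curl 'http://192.168.16.20:9200/_cat/indices?v'
# curl 'http://192.168.16.20:9200/appblog/_search?size=1&pretty'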

Kibana

Official download: https://www.elastic.co/downloads/kibana

# tar -zxf kibana-7.1.0-linux-x86_64.tar.gz -C /data/server/
# mv /data/server/kibana-7.1.0-linux-x86_64 /data/server/kibana
# cd /data/server/kibana/
# vim config/kibana.yml
server.port: 5601
server.host: "192.168.16.25"
elasticsearch.hosts: ["http://192.168.16.20:9200"]
xpack.reporting.encryptionKey: "yezhou"
xpack.security.encryptionKey: "78C87E5FC3656BE577BB41A80F45F537"

Start in the foreground:

# ./bin/kibana

Start in the background:

# nohup ./bin/kibana >/dev/null &

Open 192.168.16.20:5601 in a browser to access the search UI.
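
Kibana also exposes a status endpoint that can be checked from the shell (host taken from the kibana.yml above):

# curl http://192.168.16.25:5601/api/status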

Check the Kibana process:

# ps -ef | grep node
