ELK + Filebeat 7.5.0 Installation and Deployment

OS: CentOS 7.6
Server: Alibaba Cloud ECS
Software versions: elasticsearch-7.5.0 + kibana-7.5.0 + logstash-7.5.0 + filebeat-7.5.0 + jdk-11.0.5
Deployment requirement: Elasticsearch with one master and two slave nodes

System configuration:

Add the following setting

#vim /etc/sysctl.conf

vm.max_map_count=262144
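The step above can be applied and verified without a reboot, roughly as follows (a sketch; run as root on the target host):

```shell
# Append the setting if it is not already present, then reload.
grep -q '^vm.max_map_count' /etc/sysctl.conf || \
    echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -p                     # reload settings from /etc/sysctl.conf
sysctl -n vm.max_map_count    # should print 262144
```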

Create a regular user and password; all ELK services will run as this user.

#useradd elk
#passwd elk

Upload the binary packages to the target directory

#rz
elasticsearch-7.5.0-linux-x86_64.tar.gz
jdk-11.0.5_linux-x64_bin.rpm
kibana-7.5.0-linux-x86_64.tar.gz
logstash-7.5.0.tar.gz

Install the JDK; Elasticsearch 7.x requires JDK 11

#rpm -ivh jdk-11.0.5_linux-x64_bin.rpm

Extract the Elasticsearch package

#tar zxvf elasticsearch-7.5.0-linux-x86_64.tar.gz

Rename the directory

#mv elasticsearch-7.5.0 elasticsearch-master

Skip this step for a single-node deployment

#cp -rf elasticsearch-master elasticsearch-slave01
#cp -rf elasticsearch-master elasticsearch-slave02
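Elasticsearch refuses to start as root, so after copying the directories it helps to hand them over to the elk user created earlier (directory names from the steps above; run as root):

```shell
# Give the elk user ownership of all three node directories.
chown -R elk:elk elasticsearch-master elasticsearch-slave01 elasticsearch-slave02
```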

Edit the ES configuration file

#vim config/elasticsearch.yml

Example configuration files:
Master node:

cluster.name: es_xxxx (set your own cluster name)
node.name: master
node.master: true
node.data: true
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.seed_hosts: ["172.17.0.0:9300", "172.17.0.0:9301","172.17.0.0:9302"] (use your own IP addresses)
cluster.initial_master_nodes: ["master"]

Slave node slave01:

cluster.name: es_xxxx
node.name: slave01
node.master: false
node.data: true
network.host: 0.0.0.0
http.port: 9201
transport.tcp.port: 9301
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.seed_hosts: ["172.17.0.0:9300", "172.17.0.0:9301","172.17.0.0:9302"] (use your own IP addresses)
cluster.initial_master_nodes: ["master"]

Slave node slave02:

cluster.name: es_xxxx
node.name: slave02
node.master: false
node.data: true
network.host: 0.0.0.0
http.port: 9202
transport.tcp.port: 9302
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.seed_hosts: ["172.17.0.0:9300", "172.17.0.0:9301","172.17.0.0:9302"] (use your own IP addresses)
cluster.initial_master_nodes: ["master"]

Start the ES service

./bin/elasticsearch -d (the -d flag runs it in the background; without -d the startup log prints to the console; run this in each node directory)
Startup is healthy once ports 9200 and 9300 are listening; a single-node deployment has only the one node's ports listening!!
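One way to confirm the nodes are up from the shell (a sketch; assumes the three nodes run on this host):

```shell
# Check the listening ports: 9200-9202 (HTTP) and 9300-9302 (transport).
ss -lntp | grep -E ':(920[0-2]|930[0-2])'
# Query the HTTP endpoint of the master node.
curl -s http://127.0.0.1:9200/
```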

Elasticsearch-head download (Elasticsearch-head is optional)

https://github.com/mobz/elasticsearch-head
https://nodejs.org/dist/v12.13.1/node-v12.13.1-linux-x64.tar.xz (download whichever version you want)

Install Node

tar axvf node-v12.13.1-linux-x64.tar.xz (extract the binary package)
mv node-v12.13.1-linux-x64 /usr/local/node-v12.13.1 (move it to a directory of your choice)

Configure environment variables

vim /etc/profile (add the following lines at the end of the file)

export NODE_HOME=/usr/local/node-v12.13.1
export PATH=$NODE_HOME/bin:$PATH

source /etc/profile (run this command to apply the changes)

Verify the installation

node -v (output v12.13.1 means the installation succeeded)
npm -v  (output 6.12.1 means the installation succeeded)

Install Elasticsearch-head

cd /data/work/elasticsearch-head/ (enter the elasticsearch-head directory)
npm install (install the dependencies; search the web for any errors that come up)
nohup npm run start & (run this inside elasticsearch-head; this form starts it in the background)
cat nohup.out (the startup log file, written in the directory the command was started from)

Enable ES cross-origin access:

http.cors.enabled: true 
http.cors.allow-origin: "*"
(without these settings elasticsearch-head cannot connect to the ES service)
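The CORS settings can be spot-checked from the command line before wiring up elasticsearch-head (a sketch; the origin value is arbitrary and the host is assumed local):

```shell
# The response headers should include Access-Control-Allow-Origin: *
curl -s -I -H 'Origin: http://example.com' http://127.0.0.1:9200/ | grep -i 'access-control'
```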

Check the ES cluster status:
http://x.x.x.x:9200/_cluster/health?pretty

{
  "cluster_name" : "es_xxxx",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

http://x.x.x.x:9200/_cat/nodes?v

ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
x.x.x.x           10          39   0    0.02    0.19     0.20 dilm      *      master
x.x.x.x           33          39   0    0.02    0.19     0.20 dil       -      slave01
x.x.x.x           32          39   0    0.02    0.19     0.20 dil       -      slave02

http://x.x.x.x:9200/

{
  "name" : "master",
  "cluster_name" : "es_xxxx",
  "cluster_uuid" : "wENHFlshSXmEnk_q2lS4kw",
  "version" : {
    "number" : "7.5.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "e9ccaed468e2fac2275a3761849cbee64b39519f",
    "build_date" : "2019-11-26T01:06:52.518245Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Install Kibana

tar zxvf kibana-7.5.0-linux-x86_64.tar.gz (upload the binary package and extract it)
mv kibana-7.5.0-linux-x86_64 ../work/kibana (move the package to the target directory and rename it)
vim kibana/config/kibana.yml (edit the Kibana configuration file)

Example configuration:

[elk@ZdZ]$ cat /data/work/kibana/config/kibana.yml | egrep -v "#|^$"
server.port: 5601 (listening port)
server.host: "0.0.0.0" (listening address; can be set to a specific internal or external IP, 0.0.0.0 allows all)
i18n.locale: "zh-CN" (Chinese UI; the translation quality is mediocre!)
Uncomment the options above.

Start Kibana

/data/work/kibana/bin/kibana (foreground start; watch the startup log for errors)
nohup /data/work/kibana/bin/kibana & (background start; the log goes to nohup.out)

Verify the installation

netstat -lntup (check that port 5601 is listening)
x.x.x.x:5601 (open this address in a browser to check that the Kibana web UI loads)
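Kibana also exposes a status API that can be polled instead of opening a browser (a sketch; assumes Kibana runs locally):

```shell
# Returns a JSON document describing the overall state of the Kibana server.
curl -s http://127.0.0.1:5601/api/status
```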

Install and deploy Logstash

tar zxvf logstash-7.5.0.tar.gz (upload the binary package and extract it)
mv logstash-7.5.0 logstash (move the directory to the target path and rename it)

Example configuration:

[elk@ZdZ]$ cat  /data/work/logstash/config/logstash.yml |egrep -v "#|^$"
config.support_escapes: true

Create the logstash.conf pipeline file (this file must be created manually):

[elk@ZdZ logstash]$ cat  config/logstash.conf
input {
  beats {
    host => '172.17.4.180'
    port => 5044
  }
}

filter {
  if [type] == "access" {
    grok {
      match => {
        "message" => '(?<clientip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - - \[(?<requesttime>[^ ]+ \+[0-9]+)\] "(?<requesttype>[A-Z]+) (?<requesturl>[^ ]+) HTTP/\d.\d" (?<status>[0-9]+) (?<bodysize>[0-9]+) "[^"]+" "(?<ua>[^"]+)"'
      }
      remove_field => ["message","@version","path","input"]
    }
    date {
      match => ["requesttime", "dd/MMM/yyyy:HH:mm:ss Z"]
      target => "@timestamp"
    }
  }
  else if [type] == "laravel" {
    grok {
      match => {
        "message" => '(?<time>\[\d{4}-\d{1,2}-\d{1,2}\s+(20|21|22|23|[0-1]\d):[0-5]\d:[0-5]\d\]\s)(?<msg>[A-Za-z]+(.*)([\s\S]*))'
      }
    }
    mutate {
    }
  }
  else if [type] == "querysql" {
    grok {
      # match => {
      #   "message" => '(?<time>\[\d{4}-\d{1,2}-\d{1,2}\s+(20|21|22|23|[0-1]\d):[0-5]\d:[0-5]\d\]\s)(?<msg>[A-Za-z]+(.*)([\s\S]*))'
      # }
    }
    mutate {
    }
  }
}
output {
  if [type] == "access" {
    elasticsearch {
      hosts => ["http://172.17.4.180:9200"]
      index => "access-%{+YYYY.MM.dd}"
    }
  }
  else if [type] == "laravel" {
    elasticsearch {
      hosts => ["http://172.17.4.180:9200"]
      index => "laravel-%{+YYYY.MM.dd}"
    }
  }
  else if [type] == "querysql" {
    elasticsearch {
      hosts => ["http://172.17.4.180:9200"]
      index => "querysql-%{+YYYY.MM.dd}"
    }
  }
}
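Before relying on the access grok pattern above, the expected nginx log layout can be sanity-checked with standard tools (a sketch; the sample line and its values are made up):

```shell
# A sample nginx combined-format line in the shape the grok pattern expects.
line='192.168.1.10 - - [10/Dec/2019:12:00:00 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0"'
# clientip is the first space-separated field.
ip=$(echo "$line" | awk '{print $1}')
# status is the first token after the closing quote of the request string.
status=$(echo "$line" | awk -F'" ' '{print $2}' | awk '{print $1}')
echo "clientip=$ip status=$status"
```

If the fields come out wrong here, the grok pattern will also fail to parse the real logs.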

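With the pipeline file in place, the configuration can be validated and Logstash started (a sketch; paths from the steps above, -t only tests the config and exits):

```shell
# Validate the pipeline syntax, then start Logstash in the background.
/data/work/logstash/bin/logstash -f /data/work/logstash/config/logstash.conf -t
nohup /data/work/logstash/bin/logstash -f /data/work/logstash/config/logstash.conf &
```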
Install Filebeat

tar zxvf filebeat-7.5.0-linux-x86_64.tar.gz (upload the package and extract it)
mv filebeat-7.5.0-linux-x86_64 elk/filebeat (move it to the target path and rename it)
vim /data/work/elk/filebeat/filebeat.yml (edit the configuration file)

Example configuration:

[elk@nginx]$ cat /data/work/elk/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  tail_files: true
  backoff: "1s"
  paths:
      - /data/work/nginx/logs/access_xdzx_443.log (log file path)
  fields:
    type: access (a custom type per log source so logstash can route each one to its own index; see the logstash config above)
  fields_under_root: true
- type: log
  tail_files: true
  backoff: "1s"
  multiline.pattern: '\[\d{4}-\d{1,2}-\d{1,2}\s+(20|21|22|23|[0-1]\d):[0-5]\d:[0-5]\d\]\s' (merges multi-line log entries into one event; regex supported)
  multiline.negate: true (multiline setting, see above)
  multiline.match: after (multiline setting, see above)
  paths:
      - /develop/XdoService/storage/logs/laravel-*.log (log path; * matches every log of this type, newest by default)
  fields:
    type: laravel
  fields_under_root: true

- type: log
  tail_files: true
  backoff: "1s"
  paths:
      - /develop/XdoService/storage/logs/*_querySql.log (same as above, glob match)
  fields:
    type: querysql
  fields_under_root: true

output:
  logstash:
    hosts: ["172.17.4.180:5044"] (send events to logstash)
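The multiline.pattern above can be sanity-checked against a sample Laravel log line with grep (a sketch; the sample line is made up and the pattern is rewritten in POSIX ERE form):

```shell
# A typical Laravel log entry starts with a bracketed timestamp.
ts='[2019-12-10 13:05:22] production.ERROR: something failed'
# ERE equivalent of the multiline.pattern in filebeat.yml; prints 1 on a match.
echo "$ts" | grep -cE '^\[[0-9]{4}-[0-9]{1,2}-[0-9]{1,2} +(20|21|22|23|[0-1][0-9]):[0-5][0-9]:[0-5][0-9]\] '
```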

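Finally, the Filebeat configuration can be tested and the shipper started (a sketch; paths from the steps above):

```shell
cd /data/work/elk/filebeat
./filebeat test config -c filebeat.yml    # validate filebeat.yml
./filebeat test output -c filebeat.yml    # check connectivity to logstash
nohup ./filebeat -e -c filebeat.yml &     # run in the background
```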