
Building a Log System with ELK + Kafka + Filebeat

Servers

IP               Service
192.168.113.107  ES1 + Kibana
192.168.113.108  ES2
192.168.113.109  ES3
192.168.113.101  Kafka + ZooKeeper 1
192.168.113.102  Kafka + ZooKeeper 2
192.168.113.103  Kafka + ZooKeeper 3
192.168.113.100  Logstash
192.168.113.99   Filebeat

This is a test environment; the firewall and SELinux are disabled.

I. ES Cluster

  1. Install ES

    rpm -ivh jdk-8u181-linux-x64.rpm      
    rpm -ivh elasticsearch-6.7.1.rpm      
    systemctl daemon-reload               
    systemctl enable elasticsearch.service
  2. Edit the configuration file

    cp /etc/elasticsearch/elasticsearch.yml  /etc/elasticsearch/elasticsearch.yml.bak
    
    vim /etc/elasticsearch/elasticsearch.yml
    cluster.name: myes
    node.name: master
    node.master: true
    node.data: false
    network.host: 0.0.0.0
    path.data: /var/lib/elasticsearch
    path.logs: /var/log/elasticsearch
    http.port: 9200
    discovery.zen.ping.unicast.hosts: ["192.168.113.107", "192.168.113.108","192.168.113.109"]
    
    scp /etc/elasticsearch/elasticsearch.yml 192.168.113.108:/etc/elasticsearch/
    scp /etc/elasticsearch/elasticsearch.yml 192.168.113.109:/etc/elasticsearch/
  3. Configure the data nodes

    #on the other two nodes, change the following settings
    node.name: node-1  #use node-2 on the other data node
    node.master: false
    node.data: true
    
    systemctl stop firewalld.service
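
Note: the steps above only enable the Elasticsearch service, so start it on all three nodes before checking the cluster health:

    systemctl start elasticsearch.service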
  4. Check the cluster status

    curl '192.168.113.107:9200/_cluster/health?pretty'  #view cluster info
    
    {
           "cluster_name" : "myes",      #集群名
           "status" : "green",           #健康状态,green代表没问题
           "timed_out" : false,          #是否超时
           "number_of_nodes" : 3,        #集群节点数
           "number_of_data_nodes" : 2,   #集群data节点数
           "active_primary_shards" : 0,
           "active_shards" : 0,
           "relocating_shards" : 0,
           "initializing_shards" : 0,
           "unassigned_shards" : 0,
           "delayed_unassigned_shards" : 0,
           "number_of_pending_tasks" : 0,
           "number_of_in_flight_fetch" : 0,
           "task_max_waiting_in_queue_millis" : 0,
           "active_shards_percent_as_number" : 100.0
    }
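
To confirm that all three nodes have joined and to see which one was elected master, the _cat/nodes API is a handy complement to the health check:

    curl '192.168.113.107:9200/_cat/nodes?v'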
  5. Tuning (not applied in this test environment)

    vim /etc/sysctl.conf
    fs.file-max=655360   #system-wide maximum number of open file descriptors
    vm.max_map_count=262144  #limits the number of VMAs (virtual memory areas) a process may own; the default is 65530, and 262144 or higher is recommended
    
    vim /etc/security/limits.conf
    
    #per-process limits: max open file descriptors (nofile), max user processes (nproc), and max locked memory address space (memlock)
    * soft nproc 20480
    * hard nproc 20480
    * soft nofile 65536
    * hard nofile 65536
    * soft memlock unlimited
    * hard memlock unlimited
    
    vim /etc/security/limits.d/20-nproc.conf
    #change "* soft nproc 4096" to "* soft nproc 20480"
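
The sysctl changes do not take effect until they are reloaded (the limits.conf changes apply to new login sessions):

    sysctl -p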

II. Kafka + ZooKeeper Cluster

ZooKeeper
  1. Install

    rpm -ivh jdk-8u181-linux-x64.rpm
    tar  -xvf zookeeper-3.4.14.tar.gz -C /usr/local/
    mv /usr/local/zookeeper-3.4.14/ /usr/local/zookeeper
    cp /usr/local/zookeeper/conf/zoo_sample.cfg  /usr/local/zookeeper/conf/zoo.cfg
    mkdir /usr/local/zookeeper/data
    echo 1 >> /usr/local/zookeeper/data/myid     #write 1, 2, and 3 respectively on the three machines
  2. Configure

    vim /usr/local/zookeeper/conf/zoo.cfg 
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/usr/local/zookeeper/data
    clientPort=2181
    server.1=192.168.113.101:2888:3888
    server.2=192.168.113.102:2888:3888
    server.3=192.168.113.103:2888:3888
  3. Start

    /usr/local/zookeeper/bin/zkServer.sh start
    jps #check the process
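
After repeating the install and start steps on all three nodes, each one should report its role (one leader, two followers):

    /usr/local/zookeeper/bin/zkServer.sh status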
Kafka
  1. Install

    tar -xvf kafka_2.11-2.1.1.tgz  -C /usr/local/
    mv /usr/local/kafka_2.11-2.1.1/ /usr/local/kafka
    mkdir /usr/local/kafka/logs
  2. Configure

    #the key settings to change
    vim /usr/local/kafka/config/server.properties
    broker.id=1                                         #change per node (1, 2, 3)
    listeners=PLAINTEXT://192.168.113.101:9092          #Kafka listen address and port
    log.dirs=/usr/local/kafka/logs
    num.partitions=6                                    #default number of partitions for newly created topics
    log.retention.hours=60                              #how long Kafka retains messages
    log.segment.bytes=1073741824                        #size of each segment data file within a partition
    zookeeper.connect=192.168.113.101:2181,192.168.113.102:2181,192.168.113.103:2181   #ZooKeeper addresses
    auto.create.topics.enable=true                      #whether topics are created automatically
    delete.topic.enable=true                            #Kafka can delete topics, but by default the data is not physically removed; set this to true so that deleting a topic also deletes its data files
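
Each broker needs its own copy of this file. A minimal way to distribute it, assuming the same install path on all three nodes, is to copy it over and then adjust broker.id and listeners on each node:

    scp /usr/local/kafka/config/server.properties 192.168.113.102:/usr/local/kafka/config/
    scp /usr/local/kafka/config/server.properties 192.168.113.103:/usr/local/kafka/config/
    #on .102 set broker.id=2 and listeners=PLAINTEXT://192.168.113.102:9092; on .103 use 3 and .103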
  3. Start

    cd /usr/local/kafka/
    nohup bin/kafka-server-start.sh  config/server.properties &
    jps #check the process
  4. Common commands

    cd /usr/local/kafka/bin/
    #list current topics
    ./kafka-topics.sh --list --zookeeper 192.168.113.101:2181,192.168.113.102:2181,192.168.113.103:2181
    #consume messages
    ./kafka-console-consumer.sh  --bootstrap-server  192.168.113.101:9092,192.168.113.102:9092,192.168.113.103:9092  --from-beginning  --topic osmessages
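
With auto.create.topics.enable=true, the osmessages topic will be created as soon as Filebeat publishes to it, but it can also be created and tested by hand; a sketch using the standard console tools shipped with Kafka 2.1:

    #create the topic explicitly (optional)
    ./kafka-topics.sh --create --zookeeper 192.168.113.101:2181 --replication-factor 2 --partitions 6 --topic osmessages
    #send a test message from the console
    ./kafka-console-producer.sh --broker-list 192.168.113.101:9092 --topic osmessages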

III. Kibana, Logstash, and Filebeat Configuration

Kibana
  1. Install

    rpm -ivh kibana-6.7.1-x86_64.rpm 
    touch /var/log/kibana.log; chmod 777 /var/log/kibana.log
  2. Configure

    vim /etc/kibana/kibana.yml
    i18n.locale: "zh-CN"
    server.port: 5601
    server.host: "0.0.0.0"
    elasticsearch.url: "http://192.168.113.107:9200"
    kibana.index: ".kibana"          
    logging.dest: /var/log/kibana.log
  3. Start

    systemctl start kibana.service 
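
Once started, Kibana should answer on port 5601; a quick check before opening http://192.168.113.107:5601 in a browser:

    curl -I http://192.168.113.107:5601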

Logstash

  1. Install

    rpm -ivh jdk-8u181-linux-x64.rpm 
    rpm -ivh logstash-6.7.1.rpm
  2. Configure

    vim /etc/logstash/conf.d/kafka_os_into_es.conf
    input {
        kafka {
            bootstrap_servers => "192.168.113.101:9092,192.168.113.102:9092,192.168.113.103:9092"
            topics => ["osmessages"]
            codec => "json"
        }
    }

    output {
        elasticsearch {
            hosts => ["192.168.113.107:9200","192.168.113.108:9200","192.168.113.109:9200"]
            index => "osmessageslog-%{+YYYY-MM-dd}"
        }
    }
  3. Start

    cd /usr/share/logstash/bin
    ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/kafka_os_into_es.conf  #foreground test run
    nohup ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/kafka_os_into_es.conf &
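
Logstash can also validate the pipeline file without starting it, which is useful before the background run (--config.test_and_exit is a standard 6.x flag):

    ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/kafka_os_into_es.conf --config.test_and_exit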
Filebeat
  1. Install

      rpm -ivh filebeat-6.7.1-x86_64.rpm
  2. Configure

    mv /etc/filebeat/filebeat.yml  /etc/filebeat/filebeat.yml.bak
    vim /etc/filebeat/filebeat.yml

    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/messages
        - /var/log/secure
      fields:
        log_topic: osmessages
    name: "192.168.113.99"
    output.kafka:
      enabled: true
      hosts: ["192.168.113.101:9092", "192.168.113.102:9092", "192.168.113.103:9092"]
      version: "0.10"
      topic: '%{[fields][log_topic]}'
      partition.round_robin:
        reachable_only: true
      worker: 2
      required_acks: 1
      compression: gzip
      max_message_bytes: 10000000
    logging.level: info
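
Filebeat can check both the config syntax and connectivity to the Kafka brokers before the first start (standard filebeat subcommands):

    filebeat test config -c /etc/filebeat/filebeat.yml
    filebeat test output -c /etc/filebeat/filebeat.yml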
  3. Start

      systemctl start filebeat.service 

IV. Finally

Once all the services are running, just configure the index pattern in Kibana.
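
To verify the whole pipeline, check that the daily index written by Logstash has appeared in ES, then create a matching index pattern (osmessageslog-*) under Kibana's Management page:

    curl '192.168.113.107:9200/_cat/indices?v'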


Source: https://blog.51cto.com/13950323/2502774
