
Getting Started with ELK: Environment Setup and Spring Boot Integration Testing


    Contents: basic preparation

     Elastic Stack 5.1.2 cluster log system deployment and practice

    Building an ELK log analysis system

    ELK overview

    Elasticsearch is an open-source distributed search engine. Its features include: distributed architecture, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, support for multiple data sources, automatic search load balancing, and more.

    Logstash is a fully open-source tool that collects and filters your logs and stores them for later use (for example, searching).

    Kibana is also an open-source, free tool. It provides a web UI for the logs analyzed by Logstash and Elasticsearch, and helps you aggregate, analyze, and search important log data.

    Environment: CentOS 7.3

    1. Modify the relevant system settings
    2. Install elasticsearch

    3. Install kibana

    4. Install logstash

    5. Install the X-Pack plugin

    6. Log in to the web UI and verify

    I. ELK Stack overview

    ELK

    ELK is a log collection, indexing, and search suite made up of three components:

    • ElasticSearch
    • Logstash
    • Kibana

    ElasticSearch indexes the logs and provides the query interface, Logstash collects the logs, and Kibana provides the visualization.

    With ELK we no longer need to grep logs on every production machine, and we can query and visualize whatever log information we want. Kibana displays a great deal of information very intuitively, and ELK can also serve as a monitoring system. Here is what the result looks like:

    [Screenshot: Kibana dashboard]

    ELK downloads

    Download page: https://www.elastic.co/downloads/

    Download the installation packages for Elasticsearch, Logstash, and Kibana.

    I. Installing ElasticSearch

    Directory: /usr/local/elk/es

    1. Get the ElasticSearch package

    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.1.2.tar.gz
    

    2. Extract and start it

    tar xf elasticsearch-6.1.2.tar.gz
    
    sh elasticsearch-6.1.2/bin/elasticsearch
    

    The following error is reported:

    [2018-01-24T13:59:16,633][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
    org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-6.1.2.jar:6.1.2]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-6.1.2.jar:6.1.2]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.1.2.jar:6.1.2]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.1.2.jar:6.1.2]
        at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.1.2.jar:6.1.2]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.1.2.jar:6.1.2]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.1.2.jar:6.1.2]
    Caused by: java.lang.RuntimeException: can not run elasticsearch as root
    

    该难点是因为运转es不能够应用root顾客,由此要新建顾客es

    useradd es
    passwd es
    
    修改文件所属为es
    
    chown -R es:es /usr/local/es
    

    修改elasticsearch.yml

    network.host: 192.168.15.38
    http.port: 9200
    

    Restart it; the following problems are reported:

    [1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
    
    # Fix:
    vim /etc/security/limits.conf
    
    # append the following at the end
    es hard nofile 65536
    es soft nofile 65536
    
    
    [2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
    
    # Fix:
    # switch to the root user
    vi /etc/sysctl.conf 
    # add
    vm.max_map_count=655360
    # then run:
    sysctl -p
    
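    To confirm these settings are actually in effect, you can check them from a shell. This quick verification is not part of the original walkthrough; it assumes the es user created above.

    # kernel parameter written via /etc/sysctl.conf
    sysctl vm.max_map_count
    # open-file limit as seen by the es user (takes effect on a new login)
    su - es -c 'ulimit -n'
    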

    Run ./elasticsearch again:

    [2018-01-24T15:36:35,412][INFO ][o.e.n.Node               ] [] initializing ...
    [2018-01-24T15:36:35,508][INFO ][o.e.e.NodeEnvironment    ] [KMyyO-3] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [46.8gb], net total_space [49.9gb], types [rootfs]
    [2018-01-24T15:36:35,509][INFO ][o.e.e.NodeEnvironment    ] [KMyyO-3] heap size [990.7mb], compressed ordinary object pointers [true]
    [2018-01-24T15:36:35,510][INFO ][o.e.n.Node               ] node name [KMyyO-3] derived from node ID [KMyyO-3KRPy_Q3Eb0mYDaw]; set [node.name] to override
    [2018-01-24T15:36:35,511][INFO ][o.e.n.Node               ] version[6.1.2], pid[3404], build[5b1fea5/2018-01-10T02:35:59.208Z], OS[Linux/3.10.0-514.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_144/25.144-b01]
    [2018-01-24T15:36:35,511][INFO ][o.e.n.Node               ] JVM arguments [-Xms1g, -Xmx1g, -XX: UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX: UseCMSInitiatingOccupancyOnly, -XX: AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -XX: HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/local/es/elasticsearch-6.1.2, -Des.path.conf=/usr/local/es/elasticsearch-6.1.2/config]
    [2018-01-24T15:36:36,449][INFO ][o.e.p.PluginsService     ] [KMyyO-3] loaded module [aggs-matrix-stats]
    [2018-01-24T15:36:36,450][INFO ][o.e.p.PluginsService     ] [KMyyO-3] loaded module [analysis-common]
    [2018-01-24T15:36:36,450][INFO ][o.e.p.PluginsService     ] [KMyyO-3] loaded module [ingest-common]
    [2018-01-24T15:36:36,450][INFO ][o.e.p.PluginsService     ] [KMyyO-3] loaded module [lang-expression]
    [2018-01-24T15:36:36,450][INFO ][o.e.p.PluginsService     ] [KMyyO-3] loaded module [lang-mustache]
    [2018-01-24T15:36:36,450][INFO ][o.e.p.PluginsService     ] [KMyyO-3] loaded module [lang-painless]
    [2018-01-24T15:36:36,450][INFO ][o.e.p.PluginsService     ] [KMyyO-3] loaded module [mapper-extras]
    [2018-01-24T15:36:36,450][INFO ][o.e.p.PluginsService     ] [KMyyO-3] loaded module [parent-join]
    [2018-01-24T15:36:36,450][INFO ][o.e.p.PluginsService     ] [KMyyO-3] loaded module [percolator]
    [2018-01-24T15:36:36,451][INFO ][o.e.p.PluginsService     ] [KMyyO-3] loaded module [reindex]
    [2018-01-24T15:36:36,451][INFO ][o.e.p.PluginsService     ] [KMyyO-3] loaded module [repository-url]
    [2018-01-24T15:36:36,451][INFO ][o.e.p.PluginsService     ] [KMyyO-3] loaded module [transport-netty4]
    [2018-01-24T15:36:36,451][INFO ][o.e.p.PluginsService     ] [KMyyO-3] loaded module [tribe]
    [2018-01-24T15:36:36,451][INFO ][o.e.p.PluginsService     ] [KMyyO-3] no plugins loaded
    [2018-01-24T15:36:37,956][INFO ][o.e.d.DiscoveryModule    ] [KMyyO-3] using discovery type [zen]
    [2018-01-24T15:36:38,643][INFO ][o.e.n.Node               ] initialized
    [2018-01-24T15:36:38,643][INFO ][o.e.n.Node               ] [KMyyO-3] starting ...
    [2018-01-24T15:36:38,880][INFO ][o.e.t.TransportService   ] [KMyyO-3] publish_address {192.168.15.38:9300}, bound_addresses {192.168.15.38:9300}
    [2018-01-24T15:36:38,890][INFO ][o.e.b.BootstrapChecks    ] [KMyyO-3] bound or publishing to a non-loopback address, enforcing bootstrap checks
    [2018-01-24T15:36:41,955][INFO ][o.e.c.s.MasterService    ] [KMyyO-3] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {KMyyO-3}{KMyyO-3KRPy_Q3Eb0mYDaw}{RY8JlkNjT3iTPoO_VT1isw}{192.168.15.38}{192.168.15.38:9300}
    [2018-01-24T15:36:41,961][INFO ][o.e.c.s.ClusterApplierService] [KMyyO-3] new_master {KMyyO-3}{KMyyO-3KRPy_Q3Eb0mYDaw}{RY8JlkNjT3iTPoO_VT1isw}{192.168.15.38}{192.168.15.38:9300}, reason: apply cluster state (from master [master {KMyyO-3}{KMyyO-3KRPy_Q3Eb0mYDaw}{RY8JlkNjT3iTPoO_VT1isw}{192.168.15.38}{192.168.15.38:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
    [2018-01-24T15:36:41,990][INFO ][o.e.h.n.Netty4HttpServerTransport] [KMyyO-3] publish_address {192.168.15.38:9200}, bound_addresses {192.168.15.38:9200}
    [2018-01-24T15:36:41,990][INFO ][o.e.n.Node               ] [KMyyO-3] started
    [2018-01-24T15:36:41,997][INFO ][o.e.g.GatewayService     ] [KMyyO-3] recovered [0] indices into cluster_state
    

    Startup succeeded.

    Open the configured address (http://192.168.15.38:9200) in a browser:

    {
      "name" : "KMyyO-3",
      "cluster_name" : "elasticsearch",
      "cluster_uuid" : "Z2ReGjxgTx28uA3wHT-gZg",
      "version" : {
        "number" : "6.1.2",
        "build_hash" : "5b1fea5",
        "build_date" : "2018-01-10T02:35:59.208Z",
        "build_snapshot" : false,
        "lucene_version" : "7.1.0",
        "minimum_wire_compatibility_version" : "5.6.0",
        "minimum_index_compatibility_version" : "5.0.0"
      },
      "tagline" : "You Know, for Search"
    }
    
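    The same check can be done from the command line; a minimal sketch, assuming the host and port configured above (192.168.15.38:9200):

    # same JSON banner as in the browser
    curl http://192.168.15.38:9200
    # one-line cluster health summary
    curl http://192.168.15.38:9200/_cat/health?v
    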

    Next, install the ElasticSearch head plugin. Download the package:

    wget https://nodejs.org/dist/v8.9.1/node-v8.9.1-linux-x64.tar.xz
    

    Extract it:

    tar -xJf node-v8.9.1-linux-x64.tar.xz
    

    Configure the environment variables:

    vi /etc/profile
    

    Add the following (note: NODE_HOME should point to the directory where Node.js was extracted):

    export JAVA_BIN=$JAVA_HOME/bin
    export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$NODE_HOME/bin:$PATH
    
    source /etc/profile
    

    The change takes effect immediately.

    Check the versions:

    [root@localhost node]# node -v
    v8.9.1
    [root@localhost node]# npm -v
    5.5.1
    

    2. Install git

    yum install git -y
    

    Check the git version:

    [root@localhost node]# git --version
    git version 1.8.3.1
    

    To uninstall git (if ever needed):

    yum remove git
    

    3. Get the head plugin via git

    git clone git://github.com/mobz/elasticsearch-head.git
    

    Go into the head root directory and switch to the root user to install:

    [root@localhost elasticsearch-head]# npm install
    

    Start the head plugin

    [es@localhost elasticsearch-head]$ npm run start
    

    4. Edit the config/elasticsearch.yml file and add the following at the end:

    http.cors.enabled: true
    http.cors.allow-origin: "*"
    
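    A quick way to confirm the CORS settings took effect (a sketch, assuming ES was restarted after the change and is reachable at 192.168.15.38:9200): send a request with an Origin header and look for Access-Control-Allow-Origin in the response headers.

    curl -i -H "Origin: http://192.168.15.38:9100" http://192.168.15.38:9200/
    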

    5. Start the ES service and the head plugin. Under the es user:

    [es@localhost elasticsearch-head]$ sh ../../elasticsearch-6.1.2/bin/elasticsearch -d
    
    [es@localhost elasticsearch-head]$ npm run start
    

    Open the head UI in a browser to verify. [Screenshot: elasticsearch-head console]

    What the ELK name means

    ELK is ElasticSearch + LogStash + Kibana. These three are the core suite, but not the whole story.

    • Elasticsearch is an open-source distributed search engine. Its features include: distributed architecture, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, support for multiple data sources, automatic search load balancing, and more.

    • Logstash is a fully open-source tool that collects and filters your logs and stores them for later use (for example, searching).

    • Kibana is also an open-source, free tool. It provides a web UI for the logs analyzed by Logstash and Elasticsearch, and helps you aggregate, analyze, and search important log data.

    ELK Stack is the combination of the three open-source projects Elasticsearch, Logstash, and Kibana; for real-time data retrieval and analysis they are usually used together.

    Machine requirements

    Three machines; the sizing depends on the expected log volume.

    ES cluster: three machines

    Logstash: one machine

    Kibana: one machine

    One of the machines does not store ES data; that machine runs ES, Logstash, and Kibana together.

    Prerequisites

    Install a JDK 8 environment.

    Basic kernel parameters

    /etc/sysctl.conf

    # append the following parameter

    vm.max_map_count=655360

    # run the following command to make the configuration take effect:

    /sbin/sysctl -p

    Resource limits

    /etc/security/limits.conf

    # modify

    * soft nofile 65536

    * hard nofile 131072

    * soft nproc 65536

    * hard nproc 131072

    Limits for the elk user

    /etc/security/limits.d/20-nproc.conf

    # append (note: elk is the user created below)

    elk soft nproc 65536

    Create the ELK directories 

    mkdir /usr/local/elk       # elk installation location

    mkdir /usr/local/elk/es    # folder for es logs and data

    Create the elasticsearch user and set permissions

    groupadd elk    # create the elk group

    useradd elk -g elk -p <password>

    Change the owner and group of the elasticsearch folder and everything inside it to elk:elk:

    chown -R elk:elk /usr/local/elk          #  "/usr/local/elk" is where elk is installed and where logs and data are kept

    II. Installing Logstash

    Directory: /usr/local/elk/logstash. Get the package:

    wget https://artifacts.elastic.co/downloads/logstash/logstash-6.1.2.tar.gz
    
    tar zxvf logstash-6.1.2.tar.gz
    

    After extracting, go into the config directory and create a configuration file named log_to_es.conf.

    Add the following content:

    The input block opens a TCP listener on port 4560 to receive log messages.

    The output block sets the ElasticSearch address; received messages are written into ES.

    input {
      tcp {
        host => "192.168.15.38"
        port => 4560
        mode => "server"
        tags => ["tags"]
        codec => json_lines
      }
    }
    output {
     stdout{codec =>rubydebug}
      elasticsearch {
       hosts => "192.168.15.38"
      }
    }
    

    Start Logstash:

    ./logstash -f ../config/log_to_es.conf
    
    [root@localhost bin]# ./logstash -f ../config/log_to_es.conf 
    Sending Logstash's logs to /usr/local/logstash/logstash-6.1.2/logs which is now configured via log4j2.properties
    [2018-01-30T15:54:55,100][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/local/logstash/logstash-6.1.2/modules/fb_apache/configuration"}
    [2018-01-30T15:54:55,118][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/local/logstash/logstash-6.1.2/modules/netflow/configuration"}
    [2018-01-30T15:54:55,724][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
    [2018-01-30T15:54:56,433][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.1.2"}
    [2018-01-30T15:54:56,856][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
    [2018-01-30T15:55:02,031][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.15.38:9200/]}}
    [2018-01-30T15:55:02,044][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.15.38:9200/, :path=>"/"}
    [2018-01-30T15:55:02,263][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://192.168.15.38:9200/"}
    [2018-01-30T15:55:02,342][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>nil}
    [2018-01-30T15:55:02,346][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
    [2018-01-30T15:55:02,365][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
    [2018-01-30T15:55:02,384][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
    [2018-01-30T15:55:02,436][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//192.168.15.38"]}
    [2018-01-30T15:55:02,458][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x2731383b run>"}
    [2018-01-30T15:55:02,554][INFO ][logstash.inputs.tcp      ] Starting tcp input listener {:address=>"192.168.15.38:4560", :ssl_enable=>"false"}
    [2018-01-30T15:55:02,765][INFO ][logstash.pipeline        ] Pipeline started {"pipeline.id"=>"main"}
    [2018-01-30T15:55:02,882][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
    

    When you see the output above, the Logstash setup is complete.
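    Since the input uses the json_lines codec on TCP 4560, you can smoke-test the pipeline before wiring up Spring Boot by pushing a JSON line at it by hand. A minimal sketch, assuming nc (netcat) is available:

    echo '{"message":"hello from nc","level":"INFO"}' | nc 192.168.15.38 4560
    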

    System environment:

    CentOS Linux release 7.3.1611 (Core) 

    Reference:

    Architecture

    The system architecture is shown below.

    [Architecture diagram: shipper → redis → indexer → Elasticsearch → Kibana]

    • A Shipper is installed on every client machine whose logs need collecting, i.e. logstash runs on each client machine
    • Redis acts as the relay
    • The Indexer is installed on the server
    • Kibana provides the visualization

    Installing Elasticsearch

    Extract elasticsearch

    cd /usr/local/elk

    tar -zxvf elasticsearch-6.1.1.tar.gz

    mv elasticsearch-6.1.1 elasticsearch

    Edit the configuration file

    vim elasticsearch/config/elasticsearch.yml

    # the cluster name; with auto-discovery enabled, ES discovers cluster members by this name

    cluster.name: es_dev

    # data directory

    path.data: /usr/local/elk/es/data

    # log directory

    path.logs: /usr/local/elk/es/logs

    # node name

    node.name: es-node1

    # change the ES listen address so that other machines can reach it

    network.host: 0.0.0.0

    # default port

    http.port: 9200

    Go into elasticsearch's bin directory and start it with ./bin/elasticsearch -d.

    Check the process with:

    ps -ef|grep elasticsearch

    Verify that it responds with:

    curl -X GET http://localhost:9200

    III. Setting up Kibana

    Directory: /usr/local/elk/kibana

    wget https://artifacts.elastic.co/downloads/kibana/kibana-6.1.2-linux-x86_64.tar.gz
    

    Extract it:

    tar zxvf kibana-6.1.2-linux-x86_64.tar.gz
    

    Run the command:

    [root@localhost bin]# ./kibana
      log   [07:03:29.712] [info][status][plugin:kibana@6.1.2] Status changed from uninitialized to green - Ready
      log   [07:03:29.775] [info][status][plugin:elasticsearch@6.1.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
      log   [07:03:29.817] [info][status][plugin:console@6.1.2] Status changed from uninitialized to green - Ready
      log   [07:03:29.840] [info][status][plugin:elasticsearch@6.1.2] Status changed from yellow to green - Ready
      log   [07:03:29.856] [info][status][plugin:metrics@6.1.2] Status changed from uninitialized to green - Ready
      log   [07:03:30.088] [info][status][plugin:timelion@6.1.2] Status changed from uninitialized to green - Ready
      log   [07:03:30.094] [info][listening] Server running at http://192.168.15.38:5601
    

    Startup succeeded.

    Open the address shown in the startup log (http://192.168.15.38:5601) in a browser.

    [Screenshot: Kibana home page]

    Basic environment preparation:

    Disable the firewall: systemctl stop firewalld

    Set SELinux to disabled: setenforce 0

    JDK version: jdk_1.8

    This deployment uses three nodes: node1 (ElasticSearch + LogStash + Kibana + x-pack),

                    node2 (ElasticSearch + x-pack)

                    node3 (ElasticSearch + x-pack)

    The installation packages were downloaded in advance; if you need them, get them from the official download page.

    $ll /apps/tools/
    total 564892
    -rw-r--r-- 1 root root  29049540 Feb 27 15:19 elasticsearch-6.2.2.tar.gz
    -rw-r--r-- 1 root root  12382174 Jun  1 10:49 filebeat-6.2.2-linux-x86_64.tar.gz
    -rw-r--r-- 1 root root  83415765 Feb 27 15:50 kibana-6.2.2-linux-x86_64.tar.gz
    -rw-r--r-- 1 root root 139464029 Feb 27 16:13 logstash-6.2.2.tar.gz
    -rw-r--r-- 1 root root 314129017 Jun  1 10:48 x-pack-6.2.2.zip
    

    II. Main Elastic Stack components

    Software versions

    ElasticSearch:5.0.2

    Logstash:5.1.1

    Kibana:5.0.2

    Installing Logstash

    Download and extract

    cd to the installation directory: cd /usr/local/elk

    tar -zxvf  logstash-6.1.1.tar.gz

    Create a logstash.conf file in logstash's config directory

    Content (note: the elasticsearch block points at the ES instance):

    input {
       stdin { }
    }

    output {
      elasticsearch {
        hosts => "192.168.102.139:9200"
        index => "logstash-test"
      }
      stdout {
        codec => rubydebug {}
      }
    }

    Start logstash

    Go into the logstash installation directory

    Run: ./bin/logstash -f config/logstash.conf

    To run in the background: nohup ./bin/logstash -f config/logstash.conf &

    IV. Spring Boot integration

    1. Add the dependency

            <dependency>
                <groupId>net.logstash.logback</groupId>
                <artifactId>logstash-logback-encoder</artifactId>
                <version>4.11</version>
            </dependency>
    

    2.增添安排文件:logback.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
        <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
            <destination>192.168.15.38:4560</destination>
            <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
        </appender>
        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder charset="UTF-8"> <!-- the encoder can specify a charset, which matters for Chinese output -->
                <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n
                </pattern>
            </encoder>
        </appender>
    
        <root level="INFO">
            <appender-ref ref="LOGSTASH" />
            <appender-ref ref="STDOUT" />
        </root>
    
    </configuration>
    

    3. application.yml configuration

    logging:
      config: classpath:logback.xml
    

    4. Integration test

        @Autowired
        private StudentService studentService;
    
        @RequestMapping("/findAll")
        public List<Student> findAll() {
    
            logger.info("log info ...");
            logger.error("log error ...");
    
            logger.debug("log debug ...");
            return studentService.findAll();
        }
    
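    With the application running, hit the endpoint a few times so that log events are produced; a minimal example, assuming the application listens on Spring Boot's default port 8080:

    curl http://localhost:8080/findAll
    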

    Check the Logstash output:

    {
        "logger_name" => "com.spark.Application",
        "level_value" => 20000,
        "thread_name" => "main",
              "level" => "INFO",
               "host" => "10.10.30.98",
           "@version" => 1,
            "message" => "Starting Application on DESKTOP-DBPFNEL with PID 6980 (E:\workplace\es-demo\target\classes started by admin in E:\workplace\es-demo)",
               "port" => 63561,
               "tags" => [
            [0] "tags"
        ],
         "@timestamp" => 2018-01-30T07:04:27.523Z
    }
    {
        "logger_name" => "com.spark.Application",
        "level_value" => 20000,
        "thread_name" => "main",
              "level" => "INFO",
               "host" => "10.10.30.98",
           "@version" => 1,
            "message" => "No active profile set, falling back to default profiles: default",
               "port" => 63561,
               "tags" => [
            [0] "tags"
        ],
         "@timestamp" => 2018-01-30T07:04:27.525Z
    }
    {
        "logger_name" => "org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext",
        "level_value" => 20000,
        "thread_name" => "main",
              "level" => "INFO",
               "host" => "10.10.30.98",
           "@version" => 1,
            "message" => "Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@7e3181aa: startup date [Tue Jan 30 15:04:27 CST 2018]; root of context hierarchy",
               "port" => 63561,
               "tags" => [
            [0] "tags"
        ],
         "@timestamp" => 2018-01-30T07:04:27.604Z
    }
    {
        "logger_name" => "org.hibernate.validator.internal.util.Version",
        "level_value" => 20000,
        "thread_name" => "background-preinit",
              "level" => "INFO",
               "host" => "10.10.30.98",
           "@version" => 1,
            "message" => "HV000001: Hibernate Validator 5.3.6.Final",
               "port" => 63561,
               "tags" => [
            [0] "tags"
        ],
         "@timestamp" => 2018-01-30T07:04:27.677Z
    }
    

    When you see output like the above, our ELK environment is working. View it in the browser:

    [Screenshot: the log entries in Kibana]

    Spring Boot test project source code:

    I. Modify the relevant system settings

    1. Edit /etc/security/limits.conf and add the following:

    es hard nofile 65536
    es soft nofile 65536              # maximum number of open file handles
    es soft memlock unlimited           # no limit on locked memory
    es hard memlock unlimited
    
    2. Edit /etc/sysctl.conf and add the following:

      vm.max_map_count=262144            # the maximum number of memory map areas a process may have

    Elasticsearch: near-real-time indexing

    1. Java

    Installing kibana

    Extract the package

    tar -zxvf kibana-6.1.1-linux-x86_64.tar.gz

    布置文件

    vmi config/kibana.yml

    布署内容:

    server.port:5601

    server.host:192.168.102.139  #kibana服务器地址

    elasticsearch.url: ""  #elasticsearch服务器地址

    Start kibana

     ./bin/kibana

    To run it in the background 

    nohup ./bin/kibana &

    II. Installing elasticsearch

    elasticsearch is installed on all three nodes this time; the configuration is exactly the same on each.

    1. Extract the package

    tar xf elasticsearch-6.2.2.tar.gz
    

    2. Edit the configuration file elasticsearch.yml

    cluster.name: ctelk                                             # cluster name; must be identical on every node
    node.name: node-1                                               # node name
    bootstrap.memory_lock: true                                     # lock the heap in memory (prevent swapping)
    network.host: IP                                                # the IP this node serves on, usually the local IP
    http.port: 9200                                                 # service port
    discovery.zen.ping.unicast.hosts: ["IP", "IP", "IP"]            # discovery: the hosts in the cluster
    discovery.zen.minimum_master_nodes: 2                           # minimum number of master-eligible nodes; the official recommendation is N/2+1
    gateway.recover_after_nodes: 3                                  # do not start recovery with fewer than three nodes
    

    3. Edit the jvm.options configuration

    -Xms8g                                  # initial (minimum) heap size
    -Xmx8g                                  # maximum heap size
    

      4. ES must be run as a non-root user, so create a regular user here to manage es

    groupadd es
    useradd -g es es
    chown -R es.es elasticsearch-6.2.2/
    bin/elasticsearch -d
    
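    Once all three nodes have been started this way, you can check from any of them that they found each other; a quick check, with IP standing for one of the node addresses used above:

    curl http://IP:9200/_cat/nodes?v         # one line per node, the elected master marked with *
    curl http://IP:9200/_cat/health?v        # cluster status should be green
    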

    Logstash: collects the data; configured with a Ruby DSL

    1.1 Java version requirement

    Java 1.8 is required; 1.8 is the minimum version.

    Redis integration

    Using redis as the input

    Edit logstash's logstash.conf and change the input to redis.

    Configuration content:

    input {
            redis {
                    data_type => "list"
                    type => "redis-input"
                    key => "logstash:redis"
                    host => "192.168.102.140"
                    port => 6379
                    threads => 5
                    codec => "json"
            }
    }

    output {
            elasticsearch {
                    hosts => "192.168.102.139:9200"
                    index => "logstash-test"
            }
            stdout {
                    codec => rubydebug {}
            }
    }

    Restart logstash; if the logs show it registering with redis, it is working. You can check logstash's own log files.
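    To exercise the redis leg of the pipeline by hand, you can push a JSON document onto the configured list and watch it come out of the rubydebug output; a sketch, assuming redis-cli is installed and redis is reachable as configured above:

    redis-cli -h 192.168.102.140 -p 6379 LPUSH logstash:redis '{"message":"hello from redis"}'
    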

    III. Installing kibana

    kibana can be deployed on any one of the nodes; only one instance is needed.

    1. Extract the package

    tar xf kibana-6.2.2-linux-x86_64.tar.gz
    

    2. Edit the configuration file kibana.yml

    server.port: 5601                                                       # Kibana port
    server.host: "IP"                                                       # Kibana IP
    elasticsearch.url: "http://esIP:port"                                   # the es IP address and port
    

    3. Start the program

    ./bin/kibana -l /apps/product/kibana-6.2.2-linux-x86_64/logs/kibana.log &                   # create a logs directory yourself to hold the log
    

    Kibana: displays the data, runs queries and aggregations, produces reports

    1.2 Upgrading the Java version on CentOS

    Check the java version

    java -version
    

    If the version does not meet the requirement, upgrade Java.

    IV. Installing logstash

    logstash can be deployed on any one of the nodes; only one instance is needed.

    1. Extract the package

    tar xf logstash-6.2.2.tar.gz
    

    2. Start the program

    ./bin/logstash -f /apps/product/logstash-6.2.2/config/logstash.conf &
    

    Kafka message queue, used as a buffer for log ingestion

    1.2.1 Download the Java 1.8 JDK

    Go to http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

    and download the file ending in tar.gz

    V. Installing the X-Pack plugin

    The packages used here have all been downloaded already, so only an offline install is needed.

    1. Install x-pack into es, kibana, and logstash

    Install x-pack into es; it asks for confirmation along the way, just answer y.

    ./bin/elasticsearch-plugin install file:///apps/product/x-pack-6.2.2.zip                        # es安装插件
    -> Downloading file:///apps/product/x-pack-6.2.2.zip
    [=================================================] 100%   
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @ WARNING: plugin requires additional permissions @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    * java.io.FilePermission \.pipe* read,write
    * java.lang.RuntimePermission accessClassInPackage.com.sun.activation.registries
    * java.lang.RuntimePermission getClassLoader
    * java.lang.RuntimePermission setContextClassLoader
    * java.lang.RuntimePermission setFactory
    * java.net.SocketPermission * connect,accept,resolve
    * java.security.SecurityPermission createPolicy.JavaPolicy
    * java.security.SecurityPermission getPolicy
    * java.security.SecurityPermission putProviderProperty.BC
    * java.security.SecurityPermission setPolicy
    * java.util.PropertyPermission * read,write
    See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
    for descriptions of what these permissions allow and the associated risks.
    
    Continue with installation? [y/N]y
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @ WARNING: plugin forks a native controller @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    This plugin launches a native controller that is not subject to the Java
    security manager nor to system call filters.
    
    Continue with installation? [y/N]y
    Elasticsearch keystore is required by plugin [x-pack-security], creating...
    -> Installed x-pack with: x-pack-core,x-pack-deprecation,x-pack-graph,x-pack-logstash,x-pack-ml,x-pack-monitoring,x-pack-security,x-pack-upgrade,x-pack-watcher
    

     The version downloaded here is the unpatched one and needs a patched x-pack-core jar; that patching was done by a colleague, so at this point simply swap in the patched jar.

    [root@dev161 product]# find ./ -name x-pack-core-6.2.2.jar 
    ./elasticsearch-6.2.2/plugins/x-pack/x-pack-core/x-pack-core-6.2.2.jar                    # replace this with the patched jar below
    ./x-pack-core-6.2.2.jar
    

    Configure es to allow automatic index creation by adding this to elasticsearch.yml:

    action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*,*
    

     Install x-pack into kibana

    ./bin/kibana-plugin install file:///apps/product/x-pack-6.2.2.zip 
    Attempting to transfer from file:///apps/product/x-pack-6.2.2.zip
    Transferring 314129017 bytes....................
    Transfer complete
    Retrieving metadata from plugin archive
    Extracting plugin archive
    Extraction complete
    Optimizing and caching browser bundles...
    Plugin installation complete
    

    Install x-pack into logstash

    ./bin/logstash-plugin install file:///apps/product/x-pack-6.2.2.zip 
    Installing file: /apps/product/x-pack-6.2.2.zip
    Install successful
    

     2. Set the passwords. For the first initialization use setup-passwords interactive; to change them later use setup-passwords auto.

    ./bin/x-pack/setup-passwords interactive                                                                # initialize the passwords
    Initiating the setup of passwords for reserved users elastic,kibana,logstash_system.
    You will be prompted to enter passwords as the process progresses.
    Please confirm that you would like to continue [y/N]y
    
    
    Enter password for [elastic]:                                                                           # set the es password
    Reenter password for [elastic]: 
    Enter password for [kibana]:                                                                            # set the kibana password
    Reenter password for [kibana]: 
    Enter password for [logstash_system]:                                                                   # set the logstash password
    Reenter password for [logstash_system]: 
    Changed password for user [kibana]
    Changed password for user [logstash_system]
    Changed password for user [elastic]
    
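    Once the passwords are set, anonymous requests are rejected, so add credentials when checking the cluster; a minimal check, using the elastic user set above:

    curl -u elastic http://IP:9200/_cat/health?v
    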

     3. Configure TLS/SSL for communication inside the cluster

    Generate the CA file: ./bin/x-pack/certutil ca

    ./bin/x-pack/certutil ca
    This tool assists you in the generation of X.509 certificates and certificate
    signing requests for use with SSL/TLS in the Elastic stack.
    
    The 'ca' mode generates a new 'certificate authority'
    This will create a new X.509 certificate and private key that can be used
    to sign certificate when running in 'cert' mode.
    
    Use the 'ca-dn' option if you wish to configure the 'distinguished name'
    of the certificate authority
    
    By default the 'ca' mode produces a single PKCS#12 output file which holds:
        * The CA certificate
        * The CA's private key
    
    If you elect to generate PEM format certificates (the -pem option), then the output will
    be a zip file containing individual files for the CA certificate and private key
    
    Please enter the desired output file [elastic-stack-ca.p12]: es-oldwang-ca.p12                                                         # output file name
    Enter password for es-oldwang-ca.p12 :                                                            # file password (123456)
    

      Use the CA file to generate the node certificate: ./bin/x-pack/certutil cert --ca es-oldwang-ca.p12 

    ./certutil cert --ca es-oldwang-ca.p12 
    This tool assists you in the generation of X.509 certificates and certificate
    signing requests for use with SSL/TLS in the Elastic stack.
    
    The 'cert' mode generates X.509 certificate and private keys.
        * By default, this generates a single certificate and key for use
           on a single instance.
        * The '-multiple' option will prompt you to enter details for multiple
           instances and will generate a certificate and key for each one
        * The '-in' option allows for the certificate generation to be automated by describing
           the details of each instance in a YAML file
    
        * An instance is any piece of the Elastic Stack that requires a SSL certificate.
          Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
          may all require a certificate and private key.
        * The minimum required value for each instance is a name. This can simply be the
          hostname, which will be used as the Common Name of the certificate. A full
          distinguished name may also be used.
        * A filename value may be required for each instance. This is necessary when the
          name would result in an invalid file or directory name. The name provided here
          is used as the directory name (within the zip) and the prefix for the key and
          certificate files. The filename is required if you are prompted and the name
          is not displayed in the prompt.
        * IP addresses and DNS names are optional. Multiple values can be specified as a
          comma separated string. If no IP addresses or DNS names are provided, you may
          disable hostname verification in your SSL configuration.
    
        * All certificates generated by this tool will be signed by a certificate authority (CA).
        * The tool can automatically generate a new CA for you, or you can provide your own with the
             -ca or -ca-cert command line options.
    
    By default the 'cert' mode produces a single PKCS#12 output file which holds:
        * The instance certificate
        * The private key for the instance certificate
        * The CA certificate
    
    If you elect to generate PEM format certificates (the -pem option), then the output will
    be a zip file containing individual files for the instance certificate, the key and the CA certificate
    
    If you elect to generate multiple instances certificates, the output will be a zip file
    containing all the generated certificates
    
    Enter password for CA (es-oldwang-ca.p12) :                                                          # enter the es-oldwang-ca.p12 file password
    Please enter the desired output file [elastic-certificates.p12]: es-oldwang.p12                      # output file name
    Enter password for es-oldwang.p12 :                                                                  # password for this file
    
    Certificates written to /apps/product/elasticsearch-6.2.2/bin/x-pack/es-oldwang.p12
    
    This file should be properly secured as it contains the private key for 
    your instance.
    
    This file is a self contained file and can be copied and used 'as is'
    For each Elastic product that you wish to configure, you should copy
    this '.p12' file to the relevant configuration directory
    and then follow the SSL configuration instructions in the product guide.
    
    For client applications, you may only need to copy the CA certificate and
    configure the client to trust this certificate.
    

    Move the two generated files into the config directory, under a newly created ssl directory:

    ll ssl/
    total 8
    -rw------- 1 es es 2524 Jun  4 18:53 es-oldwang-ca.p12
    -rw------- 1 es es 3440 Jun  4 18:55 es-oldwang.p12
    

    Edit elasticsearch.yml on each node and append the following four lines at the end of the file:

    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /apps/product/elasticsearch-6.2.2/config/ssl/es-oldwang.p12
    xpack.security.transport.ssl.truststore.path: /apps/product/elasticsearch-6.2.2/config/ssl/es-oldwang.p12
    

    Import the SSL certificate passwords:

    ./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
    Enter value for xpack.security.transport.ssl.keystore.secure_password: 
    ./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
    Enter value for xpack.security.transport.ssl.truststore.secure_password: 
    

    4. Import the license file

    For this exercise the license file was already uploaded to the server, placed in the es root directory, and named license.json.

    Edit elasticsearch.yml on each node, add the following at the end of the file, and restart the cluster:

    xpack.security.enabled: false
    

    Import the license file; it asks for the elastic user's password and reports success when done.

    curl -XPUT -u elastic 'http://10.20.88.161:9200/_xpack/license' -H "Content-Type: application/json" -d @license.json
    Enter host password for user 'elastic':
    {"acknowledged":true,"license_status":"valid"}
    
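    To confirm the new license was applied, you can read it back; a sketch, using the same host and the elastic user:

    curl -u elastic 'http://10.20.88.161:9200/_xpack/license'
    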

    After the import completes, comment the line out again in elasticsearch.yml and restart the cluster once more:

    # xpack.security.enabled: false
    

    III. Elastic Stack workflow

    1.2.2 Upload to the server

    Use the rz command to upload the downloaded JDK 8 to /usr/lib/jvm on the server.

    Extract it with:
    

    VI. Checking via the web UI

    Log in to the cluster from a browser:

    [Screenshot: elasticsearch-head cluster overview]

    Edit the kibana configuration file kibana.yml and set the login user and password:

    elasticsearch.username: "elastic"                      # es user
    elasticsearch.password: "elastic"                      # the es password changed earlier
    

    Log in to kibana from a browser:

    [Screenshots: Kibana login page and monitoring view]

     From the Kibana side you can also see that after the licence was replaced the expiry date is now 2050.

    [Screenshots: license details in Kibana]

       Bonus:

    http://IP:9200/_cluster/health?pretty                     # cluster health check
    http://IP:9200/_cat/health
    http://10.20.88.161:9200/_cat/health?v
    


    1.2.3 Add it to the alternatives list
    alternatives --install /usr/bin/java  java  /usr/lib/jvm/jdk1.8.0_111/bin/java 400
    

    A brief explanation:

    1.2.4 Switch the java version
    alternatives --config java
    

    Select the number corresponding to Java 8.

    1) Logstash is deployed on the log machines to monitor and collect the logs, and then sends the collected logs to the broker.

    1.2.5 Check the java version

    2) The Indexer gathers these logs together and sends them to Elasticsearch for storage.

    1.2.6 Uninstall the JDK that ships with the system

    If it is not uninstalled, elasticsearch may fail to start.

    Details: http://linux.it.net.cn/CentOS/server/set/2014/1006/6242.html

    3) Finally, Kibana displays the data you need and supports custom searches.

    2. Installing Elasticsearch

    Remember not to install it as root.

    Installation guide: https://www.elastic.co/guide/en/elasticsearch/reference/5.0/zip-targz.html

    Install from the tar.gz file.

    When starting elasticsearch you may hit several kinds of errors (WARN):

    1. No permission to write the log file. Fix: chmod the permissions as root.
    2. CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER: can be ignored, it does not affect startup.
    3. max file descriptors: as root edit /etc/security/limits.conf (vim) and change the soft nofile and hard nofile values to 65536, save, and log in again.
    4. max virtual memory: as root run sysctl -w vm.max_map_count=262144
    java.lang.UnsupportedOperationException: seccomp unavailable: requires kernel 3.5  with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
            at org.elasticsearch.bootstrap.Seccomp.linuxImpl(Seccomp.java:349) ~[elasticsearch-5.0.2.jar:5.0.2]
            at org.elasticsearch.bootstrap.Seccomp.init(Seccomp.java:630) ~[elasticsearch-5.0.2.jar:5.0.2]
            at org.elasticsearch.bootstrap.JNANatives.trySeccomp(JNANatives.java:215) [elasticsearch-5.0.2.jar:5.0.2]
            at org.elasticsearch.bootstrap.Natives.trySeccomp(Natives.java:99) [elasticsearch-5.0.2.jar:5.0.2]
            at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:104) [elasticsearch-5.0.2.jar:5.0.2]
            at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:158) [elasticsearch-5.0.2.jar:5.0.2]
            at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:291) [elasticsearch-5.0.2.jar:5.0.2]
            at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) [elasticsearch-5.0.2.jar:5.0.2]
            at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) [elasticsearch-5.0.2.jar:5.0.2]
            at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) [elasticsearch-5.0.2.jar:5.0.2]
            at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:96) [elasticsearch-5.0.2.jar:5.0.2]
            at org.elasticsearch.cli.Command.main(Command.java:62) [elasticsearch-5.0.2.jar:5.0.2]
            at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89) [elasticsearch-5.0.2.jar:5.0.2]
            at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82) [elasticsearch-5.0.2.jar:5.0.2] 
    
              ****
              ERROR: bootstrap checks failed
    max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]
    max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
    

    After fixing the issues above, log out, log back in as the regular user, and restart elasticsearch.

    Visit http://ip:port/ in a browser;
    the ip and port are the ones configured in the ElasticSearch configuration file.

    The result looks like this:

    [Screenshot: Elasticsearch JSON banner]

    IV. Environment preparation

    ElasticSearch cluster configuration

    Reference: https://my.oschina.net/shyloveliyi/blog/653751

    You can run the cluster as several nodes on the same machine or across different machines; the latter is what this test uses.

    The actual configuration is as follows:

    cluster.name: es-cluster
    node.name: node0
    path.data: /tmp/elasticsearch/data 
    path.logs:  /tmp/elasticsearch/logs
    network.host: ***
    http.port: 9200
    discovery.zen.ping.unicast.hosts: ["***"]
    

    The settings mean the following:

    • cluster.name: the cluster name; use the same name on every node of the cluster
    • node.name: the node name; give each node in the cluster a different one
    • path.data: the data directory; in production point it at a disk with plenty of capacity
    • path.logs: the log directory
    • network.host: the local IP
    • http.port: 9200 by default
    • discovery.zen.ping.unicast.hosts: the IP addresses of the other machines in the cluster

    After configuring, restart ES; configure the other machine the same way and restart it, and ES discovers it automatically.

    Visit http://ip:port/_cat/health?v to check the cluster status.

    [Screenshot: _cat/health output]

    node.total=2 means the cluster has two nodes.

    Once the cluster is set up, data is shared across it. Even if one of the machines goes down, all the data can still be reached through the other one.

    System: CentOS 7.2

    Check the cluster master status

    http://ip:port/_cat/master?pretty

    JDK: 1.8.0_111

    Deleting an ElasticSearch index

    Use the command

    curl -XDELETE 'http://ip:port/logstash-2016.12.12?pretty'
    

    where logstash-2016.12.12 is the index name

    filebeat: 5.1.2

    List all ElasticSearch indices

    http://ip:port/_cat/indices

    logstash: 5.1.2

    Running ES in the background

    Usually we don't want the ES process to stop when the terminal is closed, so start ES in the background:

    ./elasticsearch -d 
    

    elasticsearch: 5.1.2  (note: ELK Stack 5.1+ requires JDK 1.8 or later)

    Installing X-Pack

    Link: https://www.elastic.co/downloads/x-pack

    Just follow the official guide; if the download is slow, download it locally first and then upload it to the server.

    Command for installing from a local file:

    ./elasticsearch-plugin install file:///search/odin/xpackfilename
    

    Note: after x-pack is installed, accessing es requires authentication. You can turn that off in the configuration by adding the following to elasticsearch.yml:

    # x-pack
    xpack.security.enabled: false
    xpack.monitoring.enabled: true
    xpack.graph.enabled: false
    xpack.watcher.enabled: false 
    

    Then restart es.

    kibana: 5.1.2

    3. Installing logstash

    logstash requires Java 1.8 or later.

    Install it by downloading the tar.gz file and uploading it to the server.

    X-Pack:5.1

    3.1 Configuration file

    Create an etc folder alongside logstash's config directory to hold configuration files, and create a configuration file es-test.conf with the following content:

    input {
      file {
        path => "/home/contentdev/elk/test.log"
      }
    }
    
    output {
      elasticsearch {
        hosts => ["ip:port"]
        index => "logstash-%{type}-%{+YYYY.MM.dd}"
      }
    }
    

    The full reference is at https://www.elastic.co/products/logstash

    Roughly:

    • input: read content from the given file
    • output: store it in the given es; index is the index name, chosen by you

    kafka: 2.11-0.10.1.0

    3.2 Test that the configuration file syntax is valid

    Run:

    ./logstash -t -f ../etc/es-test.conf 
    

    During the test you may hit the following error:

    Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME. 
    

    Print the value of JAVA_HOME:

    echo $JAVA_HOME
    

    Check whether it is correct. If not, switch to root, edit /etc/profile, and set JAVA_HOME at the end:

    export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_111
    export PATH=$PATH:$JAVA_HOME/bin 
    

    Then run

    source /etc/profile
    

    Log back in as the regular user and run the test command again.

     

    3.3 Start logstash for real

    Start it in the background:

    nohup  ./logstash  -f ../etc/redis-test.conf  --config.reload.automatic &
    

    Test server preparation:

    3.4 Write test data

    Write some test data into /home/contentdev/elk/test.log, for example as sketched below.
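    A minimal way to generate test data, assuming the path configured in es-test.conf above:

    echo "hello elk $(date)" >> /home/contentdev/elk/test.log
    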

    Host: node01   IP: 192.168.2.14   Role: master and data node, kafka/logstash

    3.5 Access elasticsearch

    Query elasticsearch to verify the data was stored:

    http://ip:port/_search?pretty

    Host: node02   IP: 192.168.2.15   Role: master and data node, kibana

    3.6 Parsing logs with grok

    For examples of regular expressions with logstash's grok plugin, see: http://blog.csdn.net/liukuan73/article/details/52318243

    Host: node03   IP: 192.168.2.17   Role: master and data node, elasticsearch-head plugin

    3.7 Server-side logstash configuration

    The server-side logstash may need to read logs from several redis keys and store them into different ES indices; use type in the configuration to tell them apart.

    input
    {
    redis { type => "A-nginx-log" host => "***" port => **  password => "***" data_type => "list" key => "A_nginx_log"}
    redis { type => "B-nginx-log" host => "***" port => ***  password => "***" data_type => "list" key => "B_nginx_log" }
    }
    output
    {
      elasticsearch{
        hosts => ["ip:port"]
        index => "%{type}-%{+YYYY.MM.dd.HH}"
    }
    }
    

    The final index name will include the type. It is best for ES index names to include a date, which makes them easier to organize and to delete by date.

    The hosts entry for elasticsearch is best pointed at the machine that does not store data.

    Host: test  IP: 192.168.2.70   Role: client

    3.8 Install xpack

    The install command is the same as for es; after installing, add one line at the end of logstash.yml:

    xpack.monitoring.elasticsearch.url: "http://**:9200"
    

    Note: it is recommended to give it more than 2 GB of memory.

    4. Installing kibana

    Detailed installation steps: https://www.elastic.co/guide/en/kibana/current/targz.html

    After installing, edit the kibana.yml file under config; setting the following items is enough:

    • server.port: 5601, just uncomment it
    • server.host: set to the local IP; after startup Kibana is reachable via ip + port
    • elasticsearch.url: the domain (or IP) + port of es
    • kibana.index: just uncomment it


    Test server setup:

    4.1 Start kibana

    cd to the bin directory and run:

    nohup ./kibana &
    

    Configure hosts (/etc/hosts)

    4.2 Open the web page

    http://ip:port/

    If indices already exist in es, kibana shows them automatically.

    192.168.2.14  node01

    4.3 Install xpack

    Same as installing xpack for elasticsearch.

    After installation you will see new panels such as Monitoring on the left.

    192.168.2.15  node02

    5. Redis

    Redis can play two roles in an ELK system: message subscription and message relay. Redis exists to relieve the bottleneck of writing logs into ES.

    • Message subscription: look it up yourself
    • Message relay: in this mode the shipper writes log entries into redis, and the indexer reads them from redis and stores them into ES

    This setup uses the second mode. Note that the indexer removes each entry from redis after reading it, so data does not pile up in redis.

    192.168.2.17  node03

    6. Monitoring and alerting for the ELK system

    Problems the ELK system may run into:

    • A process dies

      • If the ES process dies on the machine the logstash indexer writes to, data can no longer be stored in ES, and it is unclear whether the data in redis can still be consumed normally

        • Fix:

          a script periodically checks whether the ES process is alive and sends an alert email if it has died

      • If the logstash shipper process dies, that client machine's logs are no longer collected

        • Fix:

          a script periodically checks whether the process is alive

      • If the logstash indexer process dies, the data buffered in redis piles up and can eventually overwhelm redis

        • Fix:
          1. a script periodically checks whether the process is alive and sends an alert email if it has died
          2. alert on redis usage; when the system runs normally redis usage stays around a fairly stable value, and an abnormal situation makes it surge
    • A machine goes down

      • check redis usage
    • The disk holding the ES data runs out of space

      • monitor disk usage with a script and alert on it

    In short, these failures can be detected by the following means (a minimal check script is sketched after this list):

    • a script checking that the processes are alive
    • whether redis usage is normal
    • a monitoring script reporting disk usage
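    A minimal sketch of such a check script, assuming mail is set up on the host and treating the 90% disk threshold, the data path, and the alert address as placeholders:

    #!/bin/bash
    # alert if the elasticsearch process is gone
    if ! pgrep -f org.elasticsearch.bootstrap.Elasticsearch >/dev/null; then
        echo "elasticsearch process is down on $(hostname)" | mail -s "ELK alert" ops@example.com
    fi
    # alert if the ES data disk is more than 90% full
    USED=$(df -P /usr/local/elk/es | awk 'NR==2 {gsub("%","",$5); print $5}')
    if [ "$USED" -gt 90 ]; then
        echo "ES data disk ${USED}% full on $(hostname)" | mail -s "ELK alert" ops@example.com
    fi
    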

    Disable the firewall & SELinux

    7. Deleting ES data on a schedule

    If ES data is never deleted, ES stores more and more of it, and once the disk fills up no new data can be written. Use a scheduled script to delete expired data.

    #!/bin/bash
    #es-index-clear
    #only keep the last 15 days of log indices
    LAST_DATA=`date -d "-15 days" +"%Y.%m.%d"`
    #delete the indices for that date
    curl -XDELETE 'http://ip:port/*-'${LAST_DATA}'*'
    

    Adjust the number of days kept to your own situation; the ip and port here should again be the machine that does not store data. The script only needs to run on a schedule on one machine of the ES cluster.

    Add a cron job with crontab -e:

    0 1 * * * /search/odin/elasticsearch/scripts/es-index-clear.sh
    

    This clears the old indices at 1 a.m. every day.

    Configure the yum repository:

    8. Miscellaneous

    • Data stored in ES has a field named @timestamp. This timestamp is 8 hours off from Beijing time; no adjustment is needed, because Kibana adds the 8 hours automatically when displaying it.
    #yum -y install epel-release
    

    Time synchronization:

    #rpm -qa |grep chrony
    

    Configure the time sync sources: # vi /etc/chrony.conf

    # Please consider joining the pool (http://www.pool.ntp.org/join.html).
    server 0.rhel.pool.ntp.org iburst
    server 1.rhel.pool.ntp.org iburst
    server  10.100.2.5              iburst
    

    Restart the time sync service: # systemctl restart chronyd.service

    Install and configure the JDK on node01 and node02:

    #yum install java-1.8.0-openjdk  java-1.8.0-openjdk-devel  #install openjdk
    

     

    1) The standard way to configure the environment variables:

    vim  /etc/profile
    # paste the following three lines into /etc/profile:
    export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.121-0.b13.el7_3.x86_64
    export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    export PATH=$PATH:$JAVA_HOME/bin
    

        2) After saving and closing, run: source  /etc/profile  # so the settings take effect immediately.

    [root@~]# echo $JAVA_HOME
    [root@ ~]# echo $CLASSPATH
    [root@ ~]# echo $PATH
    

    Test whether the installation and configuration succeeded:

    # java  -version
    openjdk version "1.8.0_121"
    OpenJDK Runtime Environment (build 1.8.0_121-b13)
    OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)
    

        3) Download the components to /home/soft

        #wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.2.zip
        #wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.2-linux-x86_64.tar.gz
        #wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.2.zip
        #wget http://apache.mirrors.lucidnetworks.net/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz
    

    V. Installing and configuring elasticsearch on node01

    1. Create the elk user and group

    [root@node01 soft]groupadd elk
    [root@node01 soft]useradd -g elk elk
    

    2. Extract elasticsearch into /usr/local/

    [root@node01 soft]#unzip elasticsearch-5.1.2.zip -d /usr/local/
    

    3. Create data/db and data/logs to hold the data files and log files

    [root@node01 soft]# mkdir -pv /data/{db,logs}
    

    4. Give the elk user and group ownership of /data/db, /data/logs, and the /usr/local/elasticsearch-5.1.2 folder

    [root@node01 soft]chown elk:elk /usr/local/elasticsearch-5.1.2 -R
    [root@node01 soft]chown elk:elk /data/{db,logs} -R
    

    5. Edit /usr/local/elasticsearch-5.1.2/config/elasticsearch.yml and change the following parameters:

    [root@node01 config]# vim elasticsearch.yml
    cluster.name: ELKstack-5
    node.name: node01
    path.data: /data/db
    path.logs: /data/logs
    bootstrap.memory_lock: true
    network.host: 0.0.0.0
    http.port: 9200
    discovery.zen.ping.unicast.hosts: ["192.168.2.14","192.168.2.15","192.168.2.17"]
    discovery.zen.minimum_master_nodes: 2
    xpack.security.enabled: false # disable es authentication, to match kibana; otherwise installing x-pack later requires username/password verification
    

    Notes:

    cluster.name: ELKstack-5  # the cluster name (any name you like)

    node.name: node01  # this node's name

    network.host: 0.0.0.0  # listen address; 0.0.0.0 means any machine can reach it

    http.port: 9200  # can be left at the default

    http.cors.enabled: true   # lets the head plugin access es

    http.cors.allow-origin: "*"

    discovery.zen.ping.unicast.hosts: the initial list of master-eligible nodes in the cluster, used to discover nodes that newly join the cluster

    discovery.zen.minimum_master_nodes: how many (candidate) nodes are needed to elect a master; usually set to N/2+1, where N is the number of nodes in the cluster

        xpack.security.enabled: false # disable es authentication, to match kibana; if authentication is enabled, access requires a username and password

     

    6. For the elk runtime environment the following parameters need changing (a reboot is recommended afterwards)

    1) [root@node01 config]# vim /etc/security/limits.conf  # adjust the limits and allow the elk user to use mlockall

    # allow user 'elk mlockall
    elk soft memlock unlimited
    elk hard memlock unlimited
    *  soft nofile 65536
    *  hard nofile 131072
    *  soft nproc 2048
    *  hard nproc 4096
    

    2) [root@node01 config]# vim /etc/security/limits.d/20-nproc.conf  # adjust the maximum number of processes (soft limit)

    # change the following line:
    * soft nproc 4096
    # to
    * soft nproc 2048
    

    3) [root@node01 config]# vim /etc/sysctl.conf   # limit on the number of VMAs (virtual memory areas) a process may own

    Add the following:

    vm.max_map_count=655360
    
    [root@node01 config]# sysctl -p # reload the parameters so the change takes effect
    

     

    4) Change the JVM heap allocation, because elasticsearch 5.x allocates 2g by default

    [root@node01 elasticsearch-5.1.2]# vim config/jvm.options  
    -Xms2g  
    -Xmx2g
    

    Change it to

    [root@node01 elasticsearch-5.1.2]# vim config/jvm.options  
    -Xms512m  
    -Xmx512m
    

    Otherwise you will get the following error:

    OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x000000008a660000, 1973026816, 0) failed; error='Cannot allocate memory' (errno=12)
    #
    # There is insufficient memory for the Java Runtime Environment to continue.
    # Native memory allocation (mmap) failed to map 1973026816 bytes for committing reserved memory.
    # An error report file with more information is saved as:
    # /usr/local/elasticsearch-5.1.2/hs_err_pid11986.log
    

     

    5)运营elasticsearch服务,注:elasticsearch默许不一致敬root顾客运行服务,切换至普通客户运营

    [root@node01 elasticsearch-5.1.2]#su - elk
    [elk@node01 elasticsearch-5.1.2]$cd /usr/local/elasticsearch-5.1.2
    [elk@node01 elasticsearch-5.1.2]$nohup ./bin/elasticsearch &
    [elk@node01 elasticsearch-5.1.2]$./elasticsearch -d  #start ElasticSearch in the background
    

    Note: to stop the service: ps -ef |grep elasticsearch, then kill PID

     

    6) After starting, check that the process is listening on ports 9200/9300, and access it from a browser

    [root@node01 ~]# ss -tlnp |grep '9200'
    LISTEN   0  128  :::9200        :::*         users:(("java",pid=2288,fd=113)
    [root@node01 ~]# curl http://192.168.2.14:9200
    {
      "name" : "node01",
      "cluster_name" : "ELKstack-5",
      "cluster_uuid" : "jZ53M8nuRgyAKqgQCDG4Rw",
      "version" : {
        "number" : "5.1.2",
        "build_hash" : "c8c4c16",
        "build_date" : "2017-01-11T20:18:39.146Z",
        "build_snapshot" : false,
        "lucene_version" : "6.3.0"
      },
      "tagline" : "You Know, for Search"
    }
    

     

    VI. Install elasticsearch on node02 and node03 the same way as on node01

    1. Install and configure elasticsearch on node02

    1) Edit /usr/local/elasticsearch-5.1.2/config/elasticsearch.yml and change the following parameters:

    [root@node02 config]# vim elasticsearch.yml
    cluster.name: ELKstack-5
    node.name: node02
    path.data: /data/db
    path.logs: /data/logs
    bootstrap.memory_lock: true
    network.host: 0.0.0.0
    http.port: 9200
    discovery.zen.ping.unicast.hosts: ["192.168.2.14","192.168.2.15","192.168.2.17"]
    discovery.zen.minimum_master_nodes: 2
    xpack.security.enabled: false # disable es authentication, to match kibana
    

        Note: the rest of the configuration is the same as node01

     

    2. Install and configure elasticsearch on node03

    1) Edit /usr/local/elasticsearch-5.1.2/config/elasticsearch.yml and change the following parameters:

    [root@node02 config]# vim elasticsearch.yml
    cluster.name: ELKstack-5
    node.name: node03
    path.data: /data/db
    path.logs: /data/logs
    bootstrap.memory_lock: true
    network.host: 0.0.0.0
    http.port: 9200
    discovery.zen.ping.unicast.hosts: ["192.168.2.14","192.168.2.15","192.168.2.17"]
    discovery.zen.minimum_master_nodes: 2
    xpack.security.enabled: false # disable es authentication, to match kibana
    

        Note: the rest of the configuration is the same as node01

     

    3. After the three nodes (node01, node02, node03) are started, check whether the cluster and its nodes are healthy

        Commonly used queries:

    Check cluster health: curl -XGET

    List cluster nodes: curl -XGET

    List indices: curl -XGET

    Create an index: curl -XPUT

    Query an index: curl -XGET

    Delete an index: curl -XDELETE

    [root@node01 ~]# curl -XGET http://localhost:9200/_cat/health?v
    epoch      timestamp cluster    status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
    1486384674 20:37:54  ELKstack-5 green           3         3      0   0    0    0        0             0                  -                100.0%
    [root@node01 ~]# curl -XGET http://localhost:9200/_cat/nodes?v
    ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
    192.168.2.17           22          94   3    0.58    0.55     0.27 mdi       *      node03
    192.168.2.15           22          93   0    0.59    0.60     0.29 mdi       -      node02
    192.168.2.14           22          93   1    0.85    0.77     0.37 mdi       -      node01
    

     

    VII. Install the head plugin on node3 (192.168.2.17) (elasticsearch 5.0 changed a great deal, so for now elasticsearch 5.0+ does not support installing head directly as a plugin)

    1. The code comes from github, so install git first, then set permissions (777) on the files and directories

    [root@node03 local]# yum install git
    [root@node03 local]# git clone git://github.com/mobz/elasticsearch-head.git
    [root@node03 local]# chmod 777 -R elasticsearch-head/*
    

     

    2. Download Node.js, extract it, and add it to the environment variables

    [root@node03 soft]# wget https://npm.taobao.org/mirrors/node/latest-v4.x/node-v4.6.1-linux-x64.tar.gz
    [root@node03 soft]# tar -xvf node-v4.6.1-linux-x64.tar.gz #extract into the current directory
    [root@node03 soft]#vim /etc/profile
    # add the following: export PATH=/home/soft/node-v4.6.1-linux-x64/bin:$PATH 
    [root@node03 soft]#source  /etc/profile #make the profile take effect
    

     

    3. In the /usr/local/elasticsearch-head/ directory, run npm install to install it with node.js

    [root@node03 elasticsearch-head]# npm install -g cnpm --registry=https://registry.npm.taobao.org
    [root@node03 elasticsearch-head]# npm install grunt --save-dev
    

     

    4. Edit /usr/local/elasticsearch-head/Gruntfile.js

    connect: {
        server: {
            options: {
                port: 9100,
                hostname: '0.0.0.0',
                base: '.',
                keepalive: true
            }
        }
    }
    

    Add the hostname property, set to * or '0.0.0.0'

     

    5. Edit /usr/local/elasticsearch-5.1.2/config/elasticsearch.yml, add the following, and restart the ES service

    # the following two lines allow cross-origin requests, mainly because the 5.1 head plugin is installed differently than before

    http.cors.enabled: true
    http.cors.allow-origin: "*"
    

     

    6. Edit /usr/local/elasticsearch/plugins/head/_site/Gruntfile.js

    connect: {
        server: {
            options: {
                port: 9100,
                hostname: '0.0.0.0',
                base: '.',
                keepalive: true
            }
        }
    }
    

    Add the hostname property, set to * or '0.0.0.0'

     

    7. Edit the connection address in /usr/local/elasticsearch-head/_site/app.js:

    Change head's connection address:

    this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://localhost:9200";
    

    Change localhost to the es server address, e.g.:

    this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.2.17:9200";
    

     

    8. Edit /usr/local/elasticsearch-5.1.2/config/elasticsearch.yml, add the following, and restart the ES service

    # the following two lines allow cross-origin requests, mainly because the 5.1 head plugin is installed differently than before

    http.cors.enabled: true
    http.cors.allow-origin: "*"
    

    9. Install the dependencies and start the service

        # run npm install to download the required packages:

        [root@node03 elasticsearch-head]#npm install
        [root@node03 elasticsearch-head]#./node_modules/grunt/bin/grunt server & #start the service in the background
    

    Test access:

        [Screenshot: elasticsearch-head showing the cluster]

     

    VIII. Install and configure kibana on node2 (192.168.2.15)

    1. Extract kibana into /usr/local/

    [root@node02 soft]# tar -xvf kibana-5.1.2-linux-x86_64.tar.gz -C /usr/local/
    

    2. Edit /usr/local/kibana-5.1.2-linux-x86_64/config/kibana.yml as follows, then start the kibana service

    [root@node02 soft]#vim /usr/local/kibana-5.1.2-linux-x86_64/config/kibana.yml  
    server.port: 5601
    server.host: "192.168.2.15"
    elasticsearch_url: "http://192.168.2.15:9200"
    xpack.security.enabled: false  # disable authentication, so that adding the x-pack component to kibana later does not require a username/password
    
    [root@node02 kibana-5.1.2-linux-x86_64]# bin/kibana > /var/log/kibana.log 2>&1 &  #start the service
    

    [Screenshot: the Kibana web UI at http://192.168.2.15:5601]

     

    9. Configure the client node test (192.168.2.70)

    1. Install and configure the JDK (same as on node01–node03; not repeated here)

    2. Copy Logstash to the client and extract it to the /usr/local/ directory

    [root@node02 config]# scp /home/soft/logstash-5.1.2.zip  root@192.168.2.70:/home/soft/
    [root@test soft]#unzip logstash-5.1.2.zip -d /usr/local/
    

     

    3. Write a Logstash service management script (adjust the paths to your own environment)

    [root@test logstash-5.1.2]# mkdir logs etc                 # create the logs and etc directories
    [root@test logstash-5.1.2]# vim /etc/init.d/logstash
    [root@test logstash-5.1.2]# chmod +x /etc/init.d/logstash  # make the script executable
    

    The script is as follows:

    #!/bin/bash
    #chkconfig: 2345 55 24
    #description: logstash service manager
    #auto: Maoqiu Guo
    FILE='/usr/local/logstash-5.1.2/etc/*.conf'           # logstash configuration file(s)
    LOGBIN='/usr/local/logstash-5.1.2/bin/logstash -f'    # command that starts logstash with a config file (the old "agent" subcommand was removed in 5.x)
    LOCK='/usr/local/logstash-5.1.2/locks'                # lock file used to track whether the service is running
    LOGFILE='/usr/local/logstash-5.1.2/logs/stdout.log'   # log file
    START() {
     if [ -f $LOCK ];then
     echo -e "Logstash is already \033[32mrunning\033[0m, do nothing."
     else
     echo -e "Start logstash service.\033[32mdone\033[m"
     nohup ${LOGBIN} ${FILE} >> ${LOGFILE} 2>&1 &
     touch $LOCK
     fi
    }
    STOP() {
     if [ ! -f $LOCK ];then
     echo -e "Logstash is already stopped, do nothing."
     else
     echo -e "Stop logstash service \033[32mdone\033[m"
     rm -rf $LOCK
     ps -ef | grep logstash | grep -v "grep" | awk '{print $2}' | xargs kill -s 9 >/dev/null
     fi
    }
    STATUS() {
     ps aux | grep logstash | grep -v "grep" >/dev/null
     RUNNING=$?                                           # remember the grep result before testing the lock file
     if [ -f $LOCK ] && [ $RUNNING -eq 0 ]; then
     echo -e "Logstash is: \033[32mrunning\033[0m..."
     else
     echo -e "Logstash is: \033[31mstopped\033[0m..."
     fi
    }
    TEST(){
     ${LOGBIN} ${FILE} -t                                 # -t (--config.test_and_exit) replaces the removed --configtest flag
    }
    case "$1" in
      start)
     START
     ;;
      stop)
     STOP
     ;;
      status)
     STATUS
     ;;
      restart)
     STOP
     sleep 2
     START
     ;;
      test)
     TEST
     ;;
      *)
     echo "Usage: /etc/init.d/logstash (test|start|stop|status|restart)"
     ;;
    esac
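    Because the script carries a chkconfig header, it can be registered as an ordinary SysV service (a sketch; on CentOS 7 systemd wraps init scripts automatically):

    [root@test ~]# chkconfig --add logstash
    [root@test ~]# chkconfig logstash on
    [root@test ~]# service logstash start
    [root@test ~]# service logstash status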
    

     

    4. Have Logstash write data to the ES cluster, and test it

    1) Create a Logstash configuration file, logstash.conf, in the /usr/local/logstash-5.1.2/etc/ directory

    [root@test etc]# cat logstash.conf
    input {              # read input from stdin
      stdin {}
    }
    output {             # send output to the ES cluster
      elasticsearch {
        hosts => ["192.168.2.14:9200","192.168.2.15:9200","192.168.2.17:9200"]   # IPs and ports of the ES hosts
      }
    }
    [root@test etc]# /usr/local/logstash-5.1.2/bin/logstash -f logstash.conf -t   # check whether the configuration is valid
    Sending Logstash's logs to /usr/local/logstash-5.1.2/logs which is now configured via log4j2.properties
    Configuration OK
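    You can then run the pipeline once and confirm that an index was created on the cluster (a minimal smoke test; with no index configured, the elasticsearch output falls back to the default logstash-YYYY.MM.dd index name):

    [root@test etc]# echo "hello elk" | /usr/local/logstash-5.1.2/bin/logstash -f logstash.conf
    [root@test etc]# curl -s 'http://192.168.2.14:9200/_cat/indices?v' | grep logstash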
    

     

    2) Test shipping data to ES: modify logstash.conf so that /var/log/messages is read and indexed, then start the Logstash service

    [root@test etc]# cat logstash.conf
    input {               # the input is now a file, the system log /var/log/messages
      file {
        path => "/var/log/messages"        # absolute path of the log file
        start_position => "beginning"      # read from the first line, i.e. the beginning of the file
      }
    }
    output {              # output to ES
      elasticsearch {
        hosts => ["192.168.2.14:9200","192.168.2.15:9200","192.168.2.17:9200"]
        index => "messages-%{+YYYY-MM}"    # indices will be created following this pattern
      }
    }
    [root@test etc]# /usr/local/logstash-5.1.2/bin/logstash -f logstash.conf    # or: /etc/init.d/logstash start
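    Once it has been running for a moment, the indexed documents should be countable on any ES node (a quick check):

    [root@test etc]# curl -s 'http://192.168.2.14:9200/messages-*/_count?pretty'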
    

     

    3) Verify that the head plugin and Kibana receive the data from the ES cluster and display it

    The head plugin shows the corresponding indices:

    [Screenshot: the indices shown in the head plugin]

    In Kibana, create a new visualization based on the corresponding index:

    [Screenshot: a Kibana visualization built on the index]

     

    10. Install and configure the Kafka message queue

    1. Kafka is a distributed, partitioned, replicated messaging system. A Kafka cluster has no notion of a "central master" node; all servers in the cluster are peers, so servers can be added and removed without any configuration changes, and likewise message producers and consumers can be restarted or taken on- and offline at will.

     

    2. Kafka concepts

    Workflow of the core Kafka components:

    [Figure: workflow of the core Kafka components]

    Consumer: pulls/consumes messages from a Broker

    Producer: sends/produces messages to a Broker

    Broker: a Kafka broker accepts requests from producers and consumers and persists messages to local disk. Each cluster elects one broker to act as the Controller, which is responsible for partition leader election, coordinating partition migration, and similar tasks.

    In a distributed deployment each of these components can have multiple instances, which supports failover. ZooKeeper is only involved with brokers and consumers. Brokers are designed to be stateless: consumption state is maintained by the consumer itself, via an offset. Clients and servers communicate over TCP.

     

    Messaging generally comes in two flavours: queuing and publish-subscribe. In the queuing model, consumers read from the server concurrently and each message is delivered to only one of them; in the publish-subscribe model, messages are broadcast to all consumers. More commonly, each topic has some number of consumer groups, and each group is a logical "subscriber"; for fault tolerance and stability each group consists of several consumers. This is effectively publish-subscribe where the subscriber is a group rather than a single consumer.

     

    3. How Kafka Topics and Partitions work

    [Figure: how Topics and Partitions work]

    Messages are delivered into partitions according to their topic. Messages within a partition are ordered, and a consumer fetches messages sequentially from a partition's message queue. The related terms are defined as follows:

    Topic: a logical concept used to group messages; a topic can be spread across multiple brokers

    Partition: the basis of horizontal scaling and parallelism in Kafka; every topic is split into at least one partition

    offset: the sequence number of a message within a partition; numbering is per partition and does not span partitions

    Purpose of partitions: partitioning serves two purposes in Kafka. First, it allows more messages to be handled than a single server could hold: a topic with multiple partitions can handle an essentially unbounded amount of data. Second, a partition is the unit of parallel processing.

    offset management: the consumer controls the offset, so the broker hosting the partition remains stateless; the consumer can adjust the offset freely, which is very flexible

    Ordered consumption within a partition: each partition is a sequential, immutable message queue that can grow continuously. Every message in a partition is assigned a sequence number, the offset, which is unique within that partition.
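    Once the broker configured in the following steps is running, the per-partition offsets described above can be observed directly; a small sketch (the topic name partition-demo is only an example):

    [root@node01 kafka]# bin/kafka-topics.sh --create --zookeeper 192.168.2.14:2181 --replication-factor 1 --partitions 3 --topic partition-demo
    [root@node01 kafka]# bin/kafka-topics.sh --describe --zookeeper 192.168.2.14:2181 --topic partition-demo
    [root@node01 kafka]# bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 192.168.2.14:9092 --topic partition-demo --time -1   # latest offset per partition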

     

    4. Install and configure Kafka on node01 (192.168.2.14). (In this test ZooKeeper is not clustered; a ZooKeeper cluster is recommended for a real production environment.)

    注:安装配置JDK碰到这里差少之甚少,类同node01等节点

    1) Extract Kafka to /usr/local/ and create a symlink named kafka

    [root@node01 soft]# tar -xvf kafka_2.11-0.10.1.0.tgz -C /usr/local/
    [root@node01 soft]# cd /usr/local/
    [root@node01 local]# ln -sv kafka_2.11-0.10.1.0 kafka
    

        

        2) Create the ZooKeeper data directory and modify the zookeeper.properties configuration file under /usr/local/kafka/config/

    [root@node01 local]# mkdir /data/zookeeper
    [root@node01 local]# vim /usr/local/kafka/config/zookeeper.properties
    

    Change the configuration as follows:

    dataDir=/data/zookeeper
    # the port at which the clients will connect
    tickTime=2000 # heartbeat interval
    initLimit=20
    syncLimit=10
    

        Note: for a ZooKeeper cluster you must also add server.1, server.2, ... entries to the configuration file and create a myid file under /data/zookeeper whose content is a number identifying the host; without this file ZooKeeper will not start. A sketch of such a cluster configuration is shown after the example below.

        For example: [root@kafka1 ~]# echo 2 > /data/zookeeper/myid
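        A minimal sketch of what a clustered zookeeper.properties could look like, reusing this document's three node IPs (2888/3888 are the conventional peer and leader-election ports; adjust to your own environment):

        dataDir=/data/zookeeper
        clientPort=2181
        tickTime=2000
        initLimit=20
        syncLimit=10
        server.1=192.168.2.14:2888:3888
        server.2=192.168.2.15:2888:3888
        server.3=192.168.2.17:2888:3888
        # on each node, write that node's own id into /data/zookeeper/myid, e.g. echo 1 > /data/zookeeper/myid on server.1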

     

     

    3) Modify the server.properties file under /usr/local/kafka/config/

    broker.id=0                     # must be unique; use a number, and in a cluster each broker must differ, e.g. 2, 3, 4
    listeners=PLAINTEXT://:9092     # listening port
    advertised.listeners=PLAINTEXT://192.168.2.14:9092   # unique; use the server's own IP
    log.dir=/data/kafka-logs        # this directory does not need to be created in advance; it is created at startup
    zookeeper.connect=192.168.2.14:2181   # the ZooKeeper IP and port
    num.partitions=16               # set this fairly high; the number of partitions affects read/write throughput
    log.dirs=/data/kafka-logs       # the data directory should also be placed on a disk with plenty of space
    log.retention.hours=168         # retention time; expire old data as needed to avoid filling the disk
    

        

     

     

        4) Start the ZooKeeper and Kafka services (start ZooKeeper first, then Kafka; the same applies when ZooKeeper is clustered)

        [root@kafka1 ~]# /usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties &   # start zookeeper
    

        Note: /usr/local/kafka/bin/zookeeper-server-stop.sh   # stops the zookeeper service

    [root@node01 config]# ss -tlnp | grep '2181'   # zookeeper started successfully
    LISTEN     0      50          :::2181           :::*     users:(("java",pid=3118,fd=88))
    [root@node01 config]# nohup /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &
    
      # command to start kafka; alternatively:
      # /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties > /dev/null &
    Note: /usr/local/kafka/bin/kafka-server-stop.sh    # stops the kafka service
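    To double-check that the broker registered itself, list the broker ids stored in ZooKeeper (a quick check; broker.id=0 was configured above):

    [root@node01 kafka]# ss -tlnp | grep 9092
    [root@node01 kafka]# bin/zookeeper-shell.sh 192.168.2.14:2181 ls /brokers/ids   # should print [0]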
    

     

    5) Create a Kafka topic

    [root@node01 kafka]#bin/kafka-topics.sh --create --zookeeper 192.168.2.14:2181 --replication-factor 1 --partitions 1 --topic linuxtest
    

    # Note: the replication factor cannot exceed the number of brokers

    [root@node01 kafka]# bin/kafka-topics.sh --list --zookeeper 192.168.2.14:2181   # list the topics
    linuxtest
    [root@node01 kafka]# bin/kafka-topics.sh --describe --zookeeper 192.168.2.14:2181 --topic linuxtest
    Topic:linuxtest    PartitionCount:1    ReplicationFactor:1    Configs:
        Topic: linuxtest    Partition: 0    Leader: 0    Replicas: 0    Isr: 0
    

     

    6) Send messages, acting as a producer

    [root@node01 kafka]# bin/kafka-console-producer.sh --broker-list 192.168.2.14:9092 --topic linuxtest
    This is a message
    welcome to kafka
    

     

    7) Receive the messages, acting as a consumer

    [root@node01 kafka]# bin/kafka-console-consumer.sh --zookeeper 192.168.2.14:2181 --topic linuxtest --from-beginning
    This is a message
    welcome to kafka
    

     

    5. Modify logstash.conf on the client node test so that the output goes to Kafka, writing the data into Kafka, then restart the Logstash service

    [root@test etc]# cat logstash.conf
    input {             # the input is still the log file
      file {
        type => "message"
        path => "/var/log/messages"
        start_position => "beginning"
      }
    }
    output {
        #stdout { codec => rubydebug }   # standard output to the terminal, useful for checking whether events flow; note that an output section can have several destinations
        kafka {                          # output to kafka, i.e. this side acts as the producer
          bootstrap_servers => "192.168.2.14:9092"
          topic_id => "linux-messages"   # the topic name; the topic will be created automatically
          compression_type => "snappy"   # compression type
        }
    }
    [root@test etc]# /usr/local/logstash-5.1.2/bin/logstash -f logstash.conf > /dev/null &
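    Before wiring up the consuming side, it is worth confirming that events are actually arriving in Kafka; the console consumer used earlier works for this (run on node01):

    [root@node01 kafka]# bin/kafka-console-consumer.sh --zookeeper 192.168.2.14:2181 --topic linux-messages --from-beginning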
    

     

    6. Read the data back out of Kafka and ship it to the ES cluster. Install and configure Logstash on node01 (the installation steps are not repeated); note that the topic name here must match the one the producer writes to, linux-messages. Then start the service.

    [root@node01 etc]# more logstash.conf
    input {
        kafka {                                        # this side acts as the consumer
            bootstrap_servers => "192.168.2.14:9092"   # the Logstash 5.x kafka input reads from the broker directly instead of via ZooKeeper
            topics => ["linux-messages"]               # must match the topic written by the producer
            codec => plain
            consumer_threads => 5
            decorate_events => true
        }
    }
    output {
        elasticsearch {
            hosts => ["192.168.2.14:9200","192.168.2.15:9200","192.168.2.17:9200"]
            index => "linux-messages-%{+YYYY-MM}"      # indices will be created following this pattern
        }
    }
    [root@node01 etc]# /usr/local/logstash/bin/logstash -f logstash.conf > /dev/null &   # start the service
    

     

    7. Verify: write some test content on the test client

    [root@test etc]# echo "test-linux-messages to the ES cluster!!!" >> /var/log/messages
    # start logstash (if it is not already running) so that it picks up the new line in messages
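    The test line can then be searched for on the ES cluster (a simple end-to-end check):

    [root@node01 ~]# curl -s 'http://192.168.2.14:9200/linux-messages-*/_search?q=test-linux-messages&pretty'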
    

     

    11. Install and configure X-Pack (a paid X-Pack license is required; a free one-year license can be registered)

    1. X-Pack is an extension for Elasticsearch that bundles security, alerting, monitoring, graph, and reporting capabilities in one easy-to-install package. Although X-Pack is designed to work as a seamless whole, you can easily enable or disable individual features.

        Note: the following shows how the license is registered

    curl -XPUT -u elastic:password '' -d @license.json

    @license.json is the JSON file obtained when applying for the license; copy its entire contents and paste it here.

    If the response says the change must be acknowledged, set acknowledge to true and resend the request:

    curl -XPUT -u elastic:password '' -d @license.json

    Check the result of the installation:

    curl -XGET -u elastic:password ''

    Feature differences between versions:

    The X-Pack monitoring component lets you easily monitor Elasticsearch through Kibana. You can view cluster health and performance in real time and analyse historical cluster, index, and node metrics; in addition, you can monitor Kibana's own performance. When X-Pack is installed on the cluster, a monitoring agent runs on every node to collect index metrics from Elasticsearch, and with X-Pack installed in Kibana you can view the monitoring data through a set of dedicated dashboards.

    After X-Pack is installed there is a superuser named elastic with the default password changeme; it has control over all indices and data and can be used to create and modify other users. Users and roles can of course also be managed through the Kibana web interface.

    X-Pack also protects the data flowing between the ELK components: once X-Pack is installed, the users we create are used to authenticate the data transfer between the components.

    For example:

    1) Kibana <——> Elasticsearch

    Configure in the kibana.yml file:

     

    elasticsearch.username: "elastic"
    elasticsearch.password: "changeme"

     

    2) Logstash <——> Elasticsearch

    Define it in your own Logstash configuration file:

     

    input {
      stdin {}
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => ["192.168.2.14:9200","192.168.2.15:9200","192.168.2.17:9200"]
        user => "elastic"        # a matching user/password is required
        password => "changeme"
      }
      stdout {
        codec => rubydebug
      }
    }

    Note: without this configuration, data transfer between the ELK components will fail.
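    A simple way to see the security layer in action is to query ES once without and once with credentials (assuming the default elastic/changeme account is still in place):

    [root@node01 ~]# curl -I http://192.168.2.14:9200/                     # rejected with HTTP 401
    [root@node01 ~]# curl -u elastic:changeme http://192.168.2.14:9200/    # returns the cluster info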

    2. Install X-Pack under /usr/local/elasticsearch-5.1.2/ on every ES cluster node, and under /usr/local/kibana-5.1.2-linux-x86_64/ on the Kibana node (node02 in this example), then restart Kibana.

    [root@node01 elasticsearch-5.1.2]# bin/elasticsearch-plugin install  x-pack
    [root@node02 elasticsearch-5.1.2]# bin/elasticsearch-plugin install  x-pack
    [root@node03 elasticsearch-5.1.2]# bin/elasticsearch-plugin install  x-pack   
    [root@node02 kibana-5.1.2-linux-x86_64]# bin/kibana-plugin install x-pack
    [root@node02 kibana-5.1.2-linux-x86_64]# bin/kibana > /var/log/kibana.log 2>&1 &  # kill the old Kibana process first, then start the service again
    

        Note: to uninstall the x-pack component: # bin/elasticsearch-plugin remove x-pack


     

     

    ****************************************************************************************************

    布满难题总括:

    #Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000085330000, 2060255232, 0) failed; error='Cannot allocate memory' (errno=12)

    Elasticsearch 5.x allocates a 2 GB JVM heap by default; adjust the JVM heap allocation:

    # vim config/jvm.options  

    -Xms2g  

    -Xmx2g  

    change to:

    # vim config/jvm.options  

    -Xms512m  

    -Xmx512m
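    After restarting, the effective heap size can be read back from the node itself (a quick check; add -u elastic:changeme if X-Pack security is enabled):

    # curl -s 'http://localhost:9200/_cat/nodes?h=name,heap.max&v'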

    #max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]

    ulimit -SHn 65536

    vim /etc/security/limits.conf

    * soft nofile 65536

    * hard nofile 65536

    * soft nproc 65536

    * hard nproc 65536
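    After logging in again, you can confirm that the new limits apply to the account that starts Elasticsearch (shown for a user named elasticsearch; substitute your own):

    # su - elasticsearch -c 'ulimit -Hn'   # hard open-files limit, should now report 65536
    # su - elasticsearch -c 'ulimit -Hu'   # hard max-processes limit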


    #max number of threads [1024] for user [elasticsearch] is too low, increase to at least [2048]

    Modify /etc/security/limits.d/90-nproc.conf:

    *          soft    nproc     1024

    change to:

    *          soft    nproc     2048

    #max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

    Modify the /etc/sysctl.conf configuration file:

    cat /etc/sysctl.conf | grep vm.max_map_count

    vm.max_map_count=262144

    If the entry does not exist, add it:

    echo "vm.max_map_count=262144" >>/etc/sysctl.conf

     

    This article is from the "一万小时定律" (10,000-hour rule) blog; please retain this attribution when republishing.

下一篇:没有了