
Spring Boot ELK (Elasticsearch + Logstash + Kibana) Log Collection

🏷️ Spring Boot ELK

In Spring Boot, logs are sent to RabbitMQ via Logback; Logstash then consumes and parses them and ships them to Elasticsearch; finally the collected logs are viewed in Kibana.

The steps below were executed on Windows. On Linux the procedure is essentially the same; only the startup scripts differ.

RabbitMQ

Alternatively, once the Exchange is configured in Logstash, the Exchange, Queue, and Binding are created automatically when Logstash starts.

Here they are created manually in the RabbitMQ management UI.

  • Create the Exchange

    Name: log_logstash
    Type: topic
    Durability: Durable
    Auto delete: No
    Internal: No

  • Create the Queue

    Name: OCT_MID_Log
    Durability: Durable
    Auto delete: No

  • Create a Binding on the log_logstash Exchange

    To queue: OCT_MID_Log
    Routing key: service.#

Spring

Add the dependencies to pom.xml

xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-amqp</artifactId>
</dependency>

Configure Logback to send logs to RabbitMQ in logback-spring.xml.

Because springProperty is used here to read the Spring Boot configuration, application.yml / application.properties must be renamed to bootstrap.yml / bootstrap.properties. For the reason, see the article on the difference between application.yml and bootstrap.yml in Spring Cloud (SpringCloud 入门之常用的配置文件 application.yml 和 bootstrap.yml 区别)
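For reference, a bootstrap.yml sketch supplying the four properties read by the springProperty declarations below (the host and credentials match the placeholders used in the Logstash config later in this article; substitute your own):

```yaml
spring:
  rabbitmq:
    host: 192.168.0.20   # RabbitMQ broker address (placeholder)
    port: 5672
    username: username   # placeholder credentials
    password: password
```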

xml
<configuration>
    <springProperty scope="context" name="MQHost" source="spring.rabbitmq.host"/>
    <springProperty scope="context" name="MQPort" source="spring.rabbitmq.port"/>
    <springProperty scope="context" name="MQUserName" source="spring.rabbitmq.username"/>
    <springProperty scope="context" name="MQPassword" source="spring.rabbitmq.password"/>

    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <Pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</Pattern>
        </layout>
    </appender>

    <appender name="AMQP" class="org.springframework.amqp.rabbit.logback.AmqpAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>
                <![CDATA[%msg]]>
            </pattern>
        </layout>
        <host>${MQHost}</host>
        <port>${MQPort}</port>
        <username>${MQUserName}</username>
        <password>${MQPassword}</password>
        <applicationId>service.ribbon</applicationId>
        <routingKeyPattern>service.ribbon</routingKeyPattern>
        <declareExchange>true</declareExchange>
        <exchangeType>topic</exchangeType>
        <exchangeName>log_logstash</exchangeName>
        <generateId>true</generateId>
        <charset>UTF-8</charset>
        <durable>true</durable>
        <deliveryMode>PERSISTENT</deliveryMode>
    </appender>

    <root level="INFO">
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="AMQP" />
    </root>
</configuration>

Because logstash.conf parses incoming messages as JSON by default, the appender's layout is set to output only the message body: <pattern><![CDATA[%msg]]></pattern>. The application therefore logs JSON strings:

java
import com.fasterxml.jackson.databind.ObjectMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Inside TestController: serialize the payload to a JSON string so that
// Logstash's default json codec can parse it.
private static final Logger logger = LoggerFactory.getLogger(TestController.class);
private static final ObjectMapper mapper = new ObjectMapper();

logger.info(mapper.writeValueAsString(new LogModel("log message")));
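The LogModel class is not shown in the article; below is a minimal hypothetical stand-in. The JSON is hand-rolled here only to keep the sketch dependency-free — the article serializes with Jackson's ObjectMapper instead. The field names type/code/message mirror the Elasticsearch document shown later, and the payload is deliberately kept flat, since (per the notes at the end of this article) nested child properties were not accepted.

```java
// Hypothetical stand-in for the article's LogModel (not shown in the source).
class LogModel {
    private final String type;
    private final String code;
    private final String message;

    public LogModel(String message) {
        this("Info", "0", message); // assumed defaults for the one-arg form
    }

    public LogModel(String type, String code, String message) {
        this.type = type;
        this.code = code;
        this.message = message;
    }

    // Hand-rolled, single-level JSON; the real project would call
    // ObjectMapper.writeValueAsString(this) instead.
    public String toJson() {
        return String.format("{\"type\":\"%s\",\"code\":\"%s\",\"message\":\"%s\"}",
                type, code, message);
    }
}
```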

Elasticsearch

Latest: Download Elasticsearch
5.5.0: Elasticsearch 5.5.0

Note

The Elasticsearch version must match the Kibana version.

After downloading, simply run elasticsearch.bat in the bin directory to start with the default configuration

bash
elasticsearch.bat

The default address is http://localhost:9200/; opening it shows the following:

json
{
    "name" : "NLhpJUb",
    "cluster_name" : "elasticsearch",
    "cluster_uuid" : "HBiqypU0Qx-V4LAM-s0_2Q",
    "version" : {
        "number" : "6.4.3",
        "build_flavor" : "default",
        "build_type" : "zip",
        "build_hash" : "fe40335",
        "build_date" : "2018-10-30T23:17:19.084789Z",
        "build_snapshot" : false,
        "lucene_version" : "7.4.0",
        "minimum_wire_compatibility_version" : "5.6.0",
        "minimum_index_compatibility_version" : "5.0.0"
    },
    "tagline" : "You Know, for Search"
}

elasticsearch-head

  • github: elasticsearch-head

  • Elasticsearch must be configured to allow cross-origin requests

    Add the following to config/elasticsearch.yml

    properties
    http.cors.enabled: true     # enable CORS in Elasticsearch
    http.cors.allow-origin: "*" # allowed origins; * allows access from any IP
  • Start the Head plugin

    bash
    npm install
    npm run start
  • http://localhost:9100/

Logstash

Latest: Download Logstash
5.5.0: Logstash 5.5.0

Create your own configuration based on config/logstash-simple.conf: set the input to RabbitMQ and the output to Elasticsearch.
For the other RabbitMQ options, see Rabbitmq Input Configuration Options

groovy
input {
    rabbitmq {
        type => "oct-mid-ribbon"
        durable => true
        exchange => "log_logstash"
        exchange_type => "topic"
        key => "service.#"
        host => "192.168.0.20"
        port => 5672
        user => "username"
        password => "password"
        queue => "OCT_MID_Log"
        auto_delete => false
        tags => ["service"]
    }
}
output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
        #user => "elastic"
        #password => "changeme"
    }
}

Copy the configuration file to the bin directory and start Logstash

bash
logstash.bat -f logstash-mq.conf
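Before wiring in RabbitMQ, the Logstash pipeline itself can be smoke-tested with a minimal stdin/stdout configuration (a sketch, not from the article; the rubydebug codec pretty-prints each event to the console):

```conf
input {
    stdin { }
}
output {
    stdout { codec => rubydebug }
}
```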

Kibana

最新版:Download Kibana
5.5.0:Kibana 5.5.0

Configure the server name and the Elasticsearch address in config/kibana.yml

properties
# The Kibana server's name.  This is used for display purposes.
server.name: "oct-mid-service-kibana"
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://localhost:9200"

Start Kibana

bash
kibana.bat

Open http://localhost:5601/app/kibana

Set the index pattern to the default logstash-* and you are done.

Starting Elasticsearch 5.5.0 together with the latest Kibana (6.4.3) shows the following message:

It appears you're running the oss-only distribution of Elasticsearch.
To use the full set of free features in this distribution of Kibana, please update Elasticsearch to the default distribution.

The reason is that Kibana's version must not be newer than Elasticsearch's; that combination is unsupported. Within the same major version, Kibana's minor version may be lower than Elasticsearch's, but a warning is logged.

Kibana should be configured to run against an Elasticsearch node of the same version. This is the officially supported configuration.

Running different major version releases of Kibana and Elasticsearch (e.g. Kibana 5.x and Elasticsearch 2.x) is not supported, nor is running a minor version of Kibana that is newer than the version of Elasticsearch (e.g. Kibana 5.1 and Elasticsearch 5.0).

Running a minor version of Elasticsearch that is higher than Kibana will generally work in order to facilitate an upgrade process where Elasticsearch is upgraded first (e.g. Kibana 5.0 and Elasticsearch 5.1). In this configuration, a warning will be logged on Kibana server startup, so it’s only meant to be temporary until Kibana is upgraded to the same version as Elasticsearch.

Running different patch version releases of Kibana and Elasticsearch (e.g. Kibana 5.0.0 and Elasticsearch 5.0.1) is generally supported, though we encourage users to run the same versions of Kibana and Elasticsearch down to the patch version.

Elasticsearch Log

Sample records received in Elasticsearch:

json
{
    "_index": "oct_mid_log_mq",
    "_type": "all",
    "_id": "AWcFi8Mxc32BOEvQqfc9",
    "_score": 1,
    "_source": {
        "message": "2018-11-11 16:56:17.119 [http-nio-7072-exec-1] DEBUG c.o.middle.api.controller.user.OperatorController.? ? - debug to RabbitMQ",
        "@version": "1",
        "type": "all",
        "tags": [
            "_jsonparsefailure"
        ],
        "@timestamp": "2018-11-12T01:31:44.384Z"
    }
}
json
{
    "time": "2018-11-13 09:48:43.600",
    "thread": "http-nio-7072-exec-2",
    "level": "ERROR",
    "logger": "c.o.m.a.c.test.TestController",
    "message": {
        "type": "Error",
        "code": "5",
        "message": "error to RabbitMQ"
    }
}

The rabbitmq input codec in logstash.conf

It defaults to json, in which case the value of _source.message should be valid JSON. If a log line cannot be deserialized, the generated record's tags array contains a _jsonparsefailure entry. Also, the properties in _source.message may only be one level deep; nested child properties are not supported.

json
message:2018-11-12 15:19:48.811 [http-nio-7072-exec-1] TRACE c.o.middle.api.controller.test.TestController.? ? - trace to RabbitMQ tags:_jsonparsefailure @version:1 @timestamp:November 12th 2018, 15:19:48.831 type:oct-mid-user _id:Di3KBmcBwZN9MRVzbezv _type:doc _index:oct_mid_log_mq _score:
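If the application cannot guarantee JSON log lines, the _jsonparsefailure tag above can be avoided by setting the codec explicitly (a sketch, not from the article; only the codec line differs from the working config earlier — plain stores the raw line in the message field):

```conf
input {
    rabbitmq {
        # "json" is the default; "plain" indexes the raw line as-is
        codec => "plain"
        host => "192.168.0.20"
        queue => "OCT_MID_Log"
        durable => true
    }
}
```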

I had originally wanted to wrap %msg in a JSON envelope in the pattern; although the log line was printed, it was never delivered by Logstash into Elasticsearch. Presumably this is because nested child properties are not supported.

xml
<pattern><![CDATA[{ "time" : "%d{yyyy-MM-dd HH:mm:ss.SSS}", "thread" : "%thread", "level" : "%level", "logger" : "%logger{36}", "message" : %msg }%n]]></pattern>

References