Output logs from Spring Boot to RabbitMQ through Logback, collect and parse them with Logstash into Elasticsearch, and finally view the collected logs in Kibana.

The steps below were performed on Windows; on Linux, essentially only the files executed at startup differ.

## RabbitMQ

If the Exchange is configured in Logstash, the Exchange, Queue and Binding can also be created automatically at startup. Here they are created manually in the RabbitMQ management UI.

- Create the Exchange
  > Name: log_logstash
  > Type: topic
  > Durability: Durable
  > Auto delete: No
  > Internal: No
- Create the Queue
  > Name: OCT_MID_Log
  > Durability: Durable
  > Auto delete: No
- Create the Binding for the log_logstash Exchange
  > To queue: OCT_MID_Log
  > Routing key: service.#

## Spring

Add the dependencies in *pom.xml*:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-amqp</artifactId>
</dependency>
```

Configure log output to RabbitMQ in *logback-spring.xml*. Since `springProperty` is used here to read the Spring Boot configuration, *application.yml* or *application.properties* must be renamed to *bootstrap.yml* or *bootstrap.properties*. For the reason, see [SpringCloud入门之常用的配置文件 application.yml和 bootstrap.yml区别](https://www.cnblogs.com/BlogNetSpace/p/8469033.html).

```xml
<configuration>
    <springProperty scope="context" name="MQHost" source="spring.rabbitmq.host"/>
    <springProperty scope="context" name="MQPort" source="spring.rabbitmq.port"/>
    <springProperty scope="context" name="MQUserName" source="spring.rabbitmq.username"/>
    <springProperty scope="context" name="MQPassword" source="spring.rabbitmq.password"/>

    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <Pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</Pattern>
        </layout>
    </appender>

    <appender name="AMQP" class="org.springframework.amqp.rabbit.logback.AmqpAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern><![CDATA[%msg]]></pattern>
        </layout>
        <host>${MQHost}</host>
        <port>${MQPort}</port>
        <username>${MQUserName}</username>
        <password>${MQPassword}</password>
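        <!-- Rough summary of the AmqpAppender settings below:
             routingKeyPattern is a layout pattern producing each message's routing
             key (here the fixed value service.ribbon, matched by the service.#
             binding created above); applicationId sets the AMQP appId message
             property; declareExchange=true lets the appender declare the topic
             exchange log_logstash itself at startup; durable together with
             deliveryMode PERSISTENT keeps the exchange and messages across a
             broker restart; generateId attaches a unique id to each message. -->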
        <applicationId>service.ribbon</applicationId>
        <routingKeyPattern>service.ribbon</routingKeyPattern>
        <declareExchange>true</declareExchange>
        <exchangeType>topic</exchangeType>
        <exchangeName>log_logstash</exchangeName>
        <generateId>true</generateId>
        <charset>UTF-8</charset>
        <durable>true</durable>
        <deliveryMode>PERSISTENT</deliveryMode>
    </appender>

    <root level="INFO">
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="AMQP" />
    </root>
</configuration>
```

Because *logstash.conf* parses the incoming logs as JSON by default, the appender's layout is set to output only the message body: `<pattern><![CDATA[%msg]]></pattern>`.

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TestController {

    private static final Logger logger = LoggerFactory.getLogger(TestController.class);
    private final ObjectMapper mapper = new ObjectMapper();

    public void log() throws Exception {
        // LogModel is the application's own log payload class
        logger.info(mapper.writeValueAsString(new LogModel("log message")));
    }
}
```

## ElasticSearch

Latest version: [Download Elasticsearch](https://www.elastic.co/downloads/elasticsearch)

5.5.0: [Elasticsearch 5.5.0](https://www.elastic.co/downloads/past-releases/elasticsearch-5-5-0)

**Note**: the Elasticsearch version must match the Kibana version.

After downloading, simply run *elasticsearch.bat* in the *bin* directory:

```bash
elasticsearch.bat
```

The default address is [http://localhost:9200/](http://localhost:9200/); opening it shows the following:

```json
{
  "name" : "NLhpJUb",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "HBiqypU0Qx-V4LAM-s0_2Q",
  "version" : {
    "number" : "6.4.3",
    "build_flavor" : "default",
    "build_type" : "zip",
    "build_hash" : "fe40335",
    "build_date" : "2018-10-30T23:17:19.084789Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
```

## elasticsearch-head

- [github: elasticsearch-head](https://github.com/mobz/elasticsearch-head)
- Elasticsearch must be configured to allow cross-origin requests. Add these options to *config/elasticsearch.yml*:

```properties
http.cors.enabled: true        # enable CORS in Elasticsearch
http.cors.allow-origin: "*"    # allowed origins; * allows access from any IP
```

- Start the Head plugin:

```batch
npm install
npm run start
```

- [http://localhost:9100/](http://localhost:9100/)

## LogStash

Latest version: [Download Logstash](https://www.elastic.co/cn/downloads/logstash)

5.5.0: [Logstash 5.5.0](https://www.elastic.co/downloads/past-releases/logstash-5-5-0)

Create your own configuration based on *config/logstash-simple.conf*: configure the input as RabbitMQ and the output as Elasticsearch. The remaining RabbitMQ options are described in [Rabbitmq Input Configuration Options](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-rabbitmq.html#plugins-inputs-rabbitmq-options).

```less
input {
  rabbitmq {
    type => "oct-mid-ribbon"
    durable => true
    exchange => "log_logstash"
    exchange_type => "topic"
    key => "service.#"
    host => "192.168.0.20"
    port => 5672
    user => "username"
    password => "password"
    queue => "OCT_MID_Log"
    auto_delete => false
    tags => ["service"]
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
```

Copy the configuration file to the *bin* directory and start Logstash:

```batch
logstash.bat -f logstash-mq.conf
```

## Kibana

Latest version: [Download Kibana](https://www.elastic.co/cn/downloads/kibana)

5.5.0: [Kibana 5.5.0](https://www.elastic.co/downloads/past-releases/kibana-5-5-0)

Configure the server name and the Elasticsearch address in *config/kibana.yml*:

```properties
# The Kibana server's name. This is used for display purposes.
server.name: "oct-mid-service-kibana"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://localhost:9200"
```

Start Kibana:

```bash
kibana.bat
```

Open [http://localhost:5601/app/kibana](http://localhost:5601/app/kibana) and configure the index pattern with the default value `logstash-*`.

Running Elasticsearch 5.5.0 with the latest Kibana 6.4.3, the following message appears at startup:

> It appears you're running the oss-only distribution of Elasticsearch.
> To use the full set of free features in this distribution of Kibana, please update Elasticsearch to the default distribution.
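The version-compatibility rule from the Elastic documentation can be sketched as a small check. This is a minimal, hypothetical helper (`VersionCheck` and `compat` are illustrative names, not part of any Elastic API): same major version required, Kibana's minor version must not be newer than Elasticsearch's, and an Elasticsearch minor version ahead of Kibana works but triggers a warning.

```java
// Hypothetical helper encoding the Kibana/Elasticsearch compatibility rule.
public class VersionCheck {

    private static int[] parse(String version) {
        String[] parts = version.split("\\.");
        return new int[] { Integer.parseInt(parts[0]), Integer.parseInt(parts[1]) };
    }

    /** Returns "ok", "warn" (Elasticsearch minor ahead of Kibana) or "unsupported". */
    static String compat(String kibanaVersion, String esVersion) {
        int[] kibana = parse(kibanaVersion);
        int[] es = parse(esVersion);
        if (kibana[0] != es[0]) return "unsupported"; // different major versions
        if (kibana[1] > es[1])  return "unsupported"; // Kibana minor newer than Elasticsearch
        if (kibana[1] < es[1])  return "warn";        // Elasticsearch minor ahead: works, but warns
        return "ok";                                  // same major.minor; patch may differ
    }

    public static void main(String[] args) {
        System.out.println(compat("6.4.3", "5.5.0")); // unsupported (the combination above)
        System.out.println(compat("5.0.0", "5.1.0")); // warn
    }
}
```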
The reason is that the Kibana version must not be higher than the Elasticsearch version; such a combination is not supported. With the same major version, Kibana's minor version may be lower than Elasticsearch's, but a warning is logged.

> Kibana should be configured to run against an Elasticsearch node of the same version. This is the officially supported configuration.
>
> Running different major version releases of Kibana and Elasticsearch (e.g. Kibana 5.x and Elasticsearch 2.x) is not supported, nor is running a minor version of Kibana that is newer than the version of Elasticsearch (e.g. Kibana 5.1 and Elasticsearch 5.0).
>
> Running a minor version of Elasticsearch that is higher than Kibana will generally work in order to facilitate an upgrade process where Elasticsearch is upgraded first (e.g. Kibana 5.0 and Elasticsearch 5.1). In this configuration, a warning will be logged on Kibana server startup, so it's only meant to be temporary until Kibana is upgraded to the same version as Elasticsearch.
>
> Running different patch version releases of Kibana and Elasticsearch (e.g. Kibana 5.0.0 and Elasticsearch 5.0.1) is generally supported, though we encourage users to run the same versions of Kibana and Elasticsearch down to the patch version.

## ElasticSearch Log

```json
{
  "_index": "oct_mid_log_mq",
  "_type": "all",
  "_id": "AWcFi8Mxc32BOEvQqfc9",
  "_score": 1,
  "_source": {
    "message": "2018-11-11 16:56:17.119 [http-nio-7072-exec-1] DEBUG c.o.middle.api.controller.user.OperatorController.? ? - debug to RabbitMQ",
    "@version": "1",
    "type": "all",
    "tags": [
      "_jsonparsefailure"
    ],
    "@timestamp": "2018-11-12T01:31:44.384Z"
  }
}
```

```json
{
  "time": "2018-11-13 09:48:43.600",
  "thread": "http-nio-7072-exec-2",
  "level": "ERROR",
  "logger": "c.o.m.a.c.test.TestController",
  "message": {
    "type": "Error",
    "code": "5",
    "message": "error to RabbitMQ"
  }
}
```

**rabbitmq.input.codec** in *logstash.conf* defaults to `json`, so the value of `_source.message` should be in JSON format. If a log line cannot be deserialized, a *_jsonparsefailure* value appears in the `tags` attribute of the generated record. In addition, the attributes in `_source.message` may only be one level deep; nested child attributes are not supported.

```
message: 2018-11-12 15:19:48.811 [http-nio-7072-exec-1] TRACE c.o.middle.api.controller.test.TestController.? ? - trace to RabbitMQ
tags: _jsonparsefailure
@version: 1
@timestamp: November 12th 2018, 15:19:48.831
type: oct-mid-user
_id: Di3KBmcBwZN9MRVzbezv
_type: doc
_index: oct_mid_log_mq
_score:
```

I had wanted `%msg` to output a JSON-formatted string. Although the log line was printed, it was never delivered by Logstash to Elasticsearch, presumably because the nested child attributes are not supported.

```xml
<pattern><![CDATA[{
    "time": "%d{yyyy-MM-dd HH:mm:ss.SSS}",
    "thread": "%thread",
    "level": "%level",
    "logger": "%logger{36}",
    "message": %msg
}%n]]></pattern>
```

## References

- ELK
  - [Spring集成Rabbitmq收集Logback日志,利用进行Logstash数据整理存储到Elasticsearch中](https://blog.csdn.net/niugang0920/article/details/81502022)
  - [Logstash通过RabbitMQ收集Logback日志,保存到ElasticSearch](https://www.jianshu.com/p/fdfd7bac754b)
  - [Springboot+logback集成ELK处理日志实例](https://blog.csdn.net/yy756127197/article/details/78873310)
- Logback
  - [download logstash](https://www.elastic.co/cn/downloads/logstash)
  - [logback-spring.xml配置文件](https://blog.csdn.net/xu_san_duo/article/details/80364600)
  - [【系统学习SpringBoot】SpringBoot配置logging日志及输出日志](https://blog.csdn.net/small_mouse0/article/details/77840582)
  - [Log日志级别在SpringBoot中的配置](https://blog.csdn.net/Hello_World_QWP/article/details/80908839)
  - [Spring Boot with Logback + springProperty](https://stackoverflow.com/questions/43482050/spring-boot-with-logback-springproperty)
  - [Spring Cloud Sleuth](http://cloud.spring.io/spring-cloud-sleuth/single/spring-cloud-sleuth.html)
  - [Java日志框架-Spring中使用Logback(Spring/Spring MVC)](https://www.cnblogs.com/EasonJim/p/7810852.html)
  - [logback配置---Spring集成logback](https://blog.csdn.net/qq_35893120/article/details/77838315)
  - [Log Levels](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#boot-features-custom-log-levels)
  - [spring boot使用logback实现多环境日志配置](https://blog.csdn.net/vitech/article/details/53812137)
  - [Logback Chapter 1: Introduction](https://logback.qos.ch/manual/introduction.html)
  - [logback高级特性使用(一)](https://blog.csdn.net/chenjie2000/article/details/8881581)
  - [logback高级特性使用(二)](https://blog.csdn.net/chenjie2000/article/details/8892764)
  - [SpringCloud入门之常用的配置文件 application.yml和 bootstrap.yml区别](https://www.cnblogs.com/BlogNetSpace/p/8469033.html)
- RabbitMQ
  - [RabbitMQ Exchange类型详解](https://www.cnblogs.com/julyluo/p/6265775.html)
  - [RabbitMQ基础概念详细介绍](https://blog.csdn.net/whycold/article/details/41119807)
  - [rabbitmq之Message durability](https://blog.csdn.net/wubinbaoyi/article/details/78913499)
- Logstash
  - [https://www.elastic.co/cn/downloads/logstash](https://www.elastic.co/cn/downloads/logstash)
  - [Logstash安装和基本使用](https://blog.csdn.net/jy02268879/article/details/80616123)
  - [logstash篇之入门与运行机制](https://blog.csdn.net/sinat_35930259/article/details/81044846)
  - [Logstash入门](https://blog.csdn.net/wyyl1/article/details/80517291)
  - [Rabbitmq input plugin](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-rabbitmq.html)
  - [logstash-Windows下安装](https://blog.csdn.net/hxtxgfzs/article/details/78040077)
  - [Configuring Logstash](https://www.elastic.co/guide/en/logstash/current/configuration.html#configuration)
  - [Structure of a Config File](https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html)
  - [Input plugins](https://www.elastic.co/guide/en/logstash/current/input-plugins.html)
  - [Rabbitmq input plugin](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-rabbitmq.html#plugins-inputs-rabbitmq-ack)
  - [Codec plugins](https://www.elastic.co/guide/en/logstash/6.4/codec-plugins.html)
  - [logstash-input-rabbitmq](https://github.com/logstash-plugins/logstash-input-rabbitmq)
  - [Output plugins](https://www.elastic.co/guide/en/logstash/current/output-plugins.html)
  - [Elasticsearch output plugin](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html)
- Kibana
  - [https://www.elastic.co/downloads/kibana](https://www.elastic.co/downloads/kibana)
  - [Kibana(一张图片胜过千万行日志)](https://www.cnblogs.com/cjsblog/p/9476813.html)
  - [Configuring Kibana](https://www.elastic.co/guide/en/kibana/current/settings.html)
- ElasticSearch
  - [Common options](https://www.elastic.co/guide/en/elasticsearch/reference/6.4/common-options.html#date-math)
  - [elasticsearch-head](https://github.com/mobz/elasticsearch-head)
  - [Elasticsearch中Head插件的使用](https://www.cnblogs.com/aubin/p/8018081.html)
  - [Lucene查询语法详解](https://www.cnblogs.com/xing901022/p/4974977.html)

Copyright notice: this is an original article by the blogger 佳佳, released under the CC 4.0 BY-NC-SA license; please include the original link and this notice when republishing.

Original link: https://www.liujiajia.me/2018/11/14/springboot-elk