
ELK/EFK Overview

E (Elasticsearch) + L (Logstash) + K (Kibana)

E (Elasticsearch) + F (Filebeat) + K (Kibana)

Redis/MQ/Kafka can optionally be inserted as a buffer when data volume is large or extra resilience is needed

graph LR
B(Beats data collection)-.->G([redis/mq/kafka])-.->L[Logstash pipeline/processing]-->E[Elasticsearch storage/indexing]-->K[Kibana analysis/visualization]
B-->L

Logstash

Supported inputs: tcp, http, file, beats, kafka, rabbitmq, redis, log4j, elasticsearch, jdbc, websocket

Supported filters: grok, ruby, mutate, json

Supported outputs: elasticsearch, file, email, http, kafka, redis, mongodb, rabbitmq, syslog, tcp, websocket, zabbix, stdout, csv
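
As an illustration of how these plugins combine, a minimal pipeline sketch (port and pattern choices are illustrative; `%{COMBINEDAPACHELOG}` is one of grok's built-in patterns) might look like:

```conf
# Read events shipped by Beats, parse Apache/Nginx-style access-log lines
# with grok, and index the structured events into Elasticsearch.
input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
```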

Filebeat vs Fluentd

Filebeat is mainly used for data collection; it is lightweight and puts little load on the application server. Logstash can also collect data, but it consumes noticeably more server resources than Filebeat does

Fluentd is a more full-featured collector that supports many additional log input sources

Spring Boot logging framework packages

Architecture Options

Option 1: EFK (docker log mode)

Use Filebeat to collect Docker logs, monitoring the logs of all (or selected) services running on Docker, which also covers Spring Cloud services

Pros: zero intrusion into existing services; nothing needs to be changed

Cons: strongly tied to Docker; can only watch Docker containers

graph LR
B(Filebeat collects docker logs)-->E[Elasticsearch]-->K[Kibana]

Option 2: Logstash directly

Pros: simple and quick to set up

Cons: no buffer, so it may become a bottleneck

graph LR
B(Logstash reads local log files)-->E[Elasticsearch]-->K[Kibana]

Option 3: Logstash + redis + Logstash (not verified)

Reference: "Building an ELK real-time log platform and using it with Spring Boot and Nginx"

Pros:

  1. Reads log files directly, so no intrusion into the existing system
  2. Works for any service that writes log files, e.g. nginx, Spring Boot, etc.

Cons:

  1. Logstash (in the shipper role) must be installed on every server whose log files are read
  2. When deploying with Docker, Spring Boot containers must mount their log directories

graph LR
B(Logstash reads local log files/Shipper)--write-->G([redis])--read-->L[Logstash/Indexer role]-->E[Elasticsearch]-->K[Kibana]

Option 4: kafka + logstash (not verified)

Reference: "Spring Cloud integration with ELK for log collection in practice (elasticsearch, logstash, kibana)"

Pros:

  1. No extra services need to be installed on the application servers
  2. Works with Docker deployments, no extra directory mappings needed

Cons:

  1. Spring Boot applications must be modified
  2. Does not cover nginx, databases, or other non-JVM services

graph LR
B(springboot)--write-->G([kafka])--read-->L[Logstash]-->E[Elasticsearch]-->K[Kibana]

Option 5: EFK + logback-more-appenders

Reference: sndyuk/logback-more-appenders

Pros: integrates with the logback framework through a plain jar dependency; clean

Cons: only suitable for Spring Boot

graph LR
A(springboot with logback-more-appenders)-->B(fluentd)-->E[Elasticsearch]-->K[Kibana]

Steps:

  1. Install Fluentd with Docker; the Fluentd image has to be built yourself, e.g. exxk/fluent-elasticsearch:latest
    (reference: fluentd/container-deployment/docker-compose)

    # fluentd/Dockerfile
    FROM fluent/fluentd:v1.6-debian-1
    USER root
    RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-document", "--version", "3.5.2"]
    USER fluent
  2. Add the dependencies to the Spring Boot build (Gradle notation shown)

    // https://mvnrepository.com/artifact/com.sndyuk/logback-more-appenders
    compile group: 'com.sndyuk', name: 'logback-more-appenders', version: '1.4.2'
    compile group: 'org.fluentd', name: 'fluent-logger', version: '0.3.4'
  3. Add a fluentd appender to logback.xml in the Spring Boot project (see the logback configuration for details)

    <!-- EFK host -->
    <springProperty scope="context" name="fluentHost" source="logback.fluent.host" />
    <!-- EFK port -->
    <springProperty scope="context" name="fluentPort" source="logback.fluent.port" />
    <!-- FLUENT appender -->
    <appender name="FLUENT"
              class="ch.qos.logback.more.appenders.FluentLogbackAppender">
        <tag>${APP_NAME}</tag>
        <label>logback</label>
        <remoteHost>${fluentHost}</remoteHost>
        <port>${fluentPort}</port>
        <layout>
            <pattern>${LOG_FORMAT}</pattern>
        </layout>
    </appender>
  4. Configure the application properties dynamically

    spring.cloud.config.logback-profile=FLUENT
    logback.fluent.host=${LOGBACK_FLUENT_HOST:xxx.cn}
    logback.fluent.port=${LOGBACK_FLUENT_PORT:14021}
  5. Run the Spring Boot application to generate some log output

  6. Then configure the index in the Kibana UI (see the Elasticsearch configuration notes in the EFK section)

EFK/ELK Deployment

Reference: deviantony/docker-elk

docker-stack.yml:

version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0
    configs:
      - source: elastic_config
        target: /usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
      TZ: Asia/Shanghai
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.hostname == me]
  logstash:
    image: docker.elastic.co/logstash/logstash:7.9.0
    configs:
      - source: logstash_config
        target: /usr/share/logstash/config/logstash.yml
      - source: logstash_pipeline
        target: /usr/share/logstash/pipeline/logstash.conf
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
      TZ: Asia/Shanghai
    deploy:
      mode: replicated
      replicas: 0   # disabled; scale up when using a Logstash-based option
      placement:
        constraints: [node.hostname == me]
  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.0
    environment:
      # has no effect here, but does not seem to matter
      TZ: Asia/Shanghai
    ports:
      - "14020:5601"
    configs:
      - source: kibana_config
        target: /usr/share/kibana/config/kibana.yml
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.hostname == me]
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.9.0
    user: root
    command: filebeat -e -strict.perms=false
    environment:
      TZ: Asia/Shanghai
    configs:
      - source: filebeat_config
        target: /usr/share/filebeat/filebeat.yml
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      mode: replicated
      replicas: 0   # disabled; scale up when using the Filebeat-based option
      placement:
        constraints: [node.hostname == me]
  fluent:
    image: exxk/fluent-elasticsearch:latest
    environment:
      TZ: Asia/Shanghai
    ports:
      - "14021:24224"
      - "14021:24224/udp"
    configs:
      - source: fluent_config
        target: /fluentd/etc/fluent.conf
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.hostname == me]
configs:
  elastic_config:
    external: true
  logstash_config:
    external: true
  logstash_pipeline:
    external: true
  kibana_config:
    external: true
  filebeat_config:
    external: true
  fluent_config:
    external: true

The individual config contents are as follows:

elastic_config

---
## Default Elasticsearch configuration from Elasticsearch base image.
## https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/config/elasticsearch.yml
#
cluster.name: "docker-cluster"
network.host: 0.0.0.0

## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-xpack.html
#
xpack.license.self_generated.type: trial
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true

kibana_config

---
## Default Kibana configuration from Kibana base image.
## https://github.com/elastic/kibana/blob/master/src/dev/build/tasks/os_packages/docker_generator/templates/kibana_yml.template.js
#
server.name: kibana
server.host: 0.0.0.0
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true

## X-Pack security credentials
#
elasticsearch.username: elastic
elasticsearch.password: changeme

logstash_config

---
## Default Logstash configuration from Logstash base image.
## https://github.com/elastic/logstash/blob/master/docker/data/logstash/config/logstash-full.yml
#
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]

## X-Pack security credentials
#
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme

logstash_pipeline

input {
  tcp {
    port => 5000
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    user => "elastic"
    password => "changeme"
  }
}

filebeat_config

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
  - add_cloud_metadata: ~

output.elasticsearch:
  hosts: 'elasticsearch:9200'
  username: 'elastic'
  password: 'changeme'

fluent_config

# fluentd/conf/fluent.conf

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type copy

  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
    user elastic
    password changeme
  </store>

  <store>
    @type stdout
  </store>
</match>

Spring Boot configuration is simple and familiar; the usual auto-configuration entry point looks like this:

@EnableAutoConfiguration // enable auto-configuration
@ComponentScan
public class Application {}

This section focuses on disabling auto-configuration

How to disable specific auto-configurations

Scenarios

  1. A shared dependency pulls in a database driver; some projects do not need a database, yet because of the shared dependency they are still forced to configure a database connection
  2. Spring Cloud config-bus hot configuration reloading is an optional feature (some environments have no MQ), but you should not have to strip the dependency out on every build

Disabling

Via the startup class

Add the exclude parameter to the @EnableAutoConfiguration annotation on the startup class and list the auto-configuration classes to disable. This fits scenario 1, where the service is known to never need the configuration

@EnableAutoConfiguration(exclude = {MongoAutoConfiguration.class, MongoDataAutoConfiguration.class})
@EnableConfigurationProperties
public class App {}

Via the configuration file

Add the spring.autoconfigure.exclude property to application.properties. This is much more flexible: features can be switched on or off just by editing the config file

spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration,org.springframework.boot.actuate.autoconfigure.metrics.amqp.RabbitMetricsAutoConfiguration,org.springframework.boot.actuate.autoconfigure.amqp.RabbitHealthIndicatorAutoConfiguration,org.springframework.boot.actuate.autoconfigure.health.HealthEndpointAutoConfiguration,org.springframework.cloud.bus.BusAutoConfiguration

Common scenarios

Disabling itself is simple; the hard part is knowing which auto-configuration classes a feature uses. Below are the classes to disable for several common cases

# MongoDB, from dependency org.springframework.data:spring-data-mongodb
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration
# Redis, from dependency org.springframework.boot:spring-boot-starter-data-redis
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.data.redis.RedisAutoConfiguration
# RabbitMQ, from dependency org.springframework.boot:spring-boot-starter-amqp
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration
# Spring Cloud Bus, from dependency org.springframework.cloud:spring-cloud-starter-bus-amqp
spring.autoconfigure.exclude=org.springframework.cloud.bus.BusAutoConfiguration,org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration

Building an image host with MinIO

MinIO deployment, see the following Docker stack file

version: "3.5"
services:
  minio:
    image: minio/minio
    ports:
      - "14033:9000"
    volumes:
      - /home/dockerdata/v-minio:/data
    environment:
      MINIO_ACCESS_KEY: "username"
      MINIO_SECRET_KEY: "password"
    command: server /data
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == me]

Making a MinIO bucket permanently shareable

docker exec -it <container-id> bash
curl https://dl.minio.io/client/mc/release/linux-amd64/mc --output mc
./mc config host add minio http://ip:14033 username password
./mc policy set public minio/<bucket-name>

Once the policy is set, objects in that bucket can be accessed directly by concatenated URL, i.e. http://ip:14033/<bucket-name>/<object-name>

Image-host client tooling

Option 1: typora + python + minio (chosen)

Install

git clone https://github.com/minio/minio-py
cd minio-py
# a proxy may be needed for the download
sudo python setup.py install

Script
import os
import time
import uuid
import sys
import requests
from minio import Minio
from minio.error import ResponseError
import warnings

warnings.filterwarnings('ignore')
images = sys.argv[1:]
minioClient = Minio("ip:port",
                    access_key='username', secret_key='password', secure=False)
result = "Upload Success:\n"
date = time.strftime("%Y%m%d%H%M%S", time.localtime())

for image in images:
    file_type = os.path.splitext(image)[-1]
    new_file_name = date + file_type
    if image.endswith(".png") or image.endswith(".jpg") or image.endswith(".gif"):
        content_type = "image/" + file_type.replace(".", "")
    else:
        # skip files that are not images
        continue
    try:
        minioClient.fput_object(bucket_name='blog', object_name=new_file_name, file_path=image, content_type=content_type)
        if image.endswith("temp"):
            os.remove(image)
        result = result + "http://ip:port" + "/blog/" + new_file_name + "\n"
    except ResponseError as err:
        result = result + "error:" + str(err) + "\n"
print(result)

References

"Minio + Nginx private image hosting, blogging has never felt this good"

python-client-quickstart-guide

Option 2: upic + typora + minio

Reference: "Typora with uPic and a self-hosted minIO image host"

Notes

MinIO images will not preview unless:

  1. The bucket policy is configured (see the mc policy commands above)

  2. Uploaded images have content_type set to image/jpg (or the matching image type)

Dynamically listening to queues

Requirement

Listen to multiple queues, where queues may be added to or removed from the listener at runtime; this calls for SimpleMessageListenerContainer

Steps

  1. Add the Gradle dependencies

    implementation 'org.springframework.boot:spring-boot-starter-amqp'
    compile 'cn.hutool:hutool-all:5.3.8'
  2. Add to application.properties

    spring.rabbitmq.host=10.10.10.11
    spring.rabbitmq.port=14012
    spring.rabbitmq.username=test
    spring.rabbitmq.password=test
    spring.rabbitmq.virtual-host=/
    spring.rabbitmq.connection-timeout=5000
    spring.rabbitmq.countDownLatch=5
    spring.rabbitmq.webport=14013
    spring.rabbitmq.websocket-port=14014
  3. Create a listener class RbMQReceiverHandler.java

    /**
     * Listens for and receives messages
     */
    @Component
    public class RbMQReceiverHandler implements MessageListener {
        private final Logger log = LoggerFactory.getLogger(getClass());

        @Override
        public void onMessage(Message message) {
            log.info("==== received a message from queue " + message.getMessageProperties().getConsumerQueue() + " =====");
            log.info(message.getMessageProperties().toString());
            log.info(new String(message.getBody()));
        }
    }
  4. Create a RabbitMQConfig.java configuration class

    @Configuration
    @Import(cn.hutool.extra.spring.SpringUtil.class) // needed by huTool so getBean works
    public class RabbitMQConfig {

        @Autowired
        RbMQReceiverHandler rbMQReceiverHandler;

        @Bean
        public SimpleMessageListenerContainer messageListenerContainer(ConnectionFactory connectionFactory) {
            SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
            container.setConnectionFactory(connectionFactory);
            container.setQueueNames("test1_staff");
            container.setMessageListener(rbMQReceiverHandler);
            return container;
        }
    }
  5. Add an endpoint for dynamically adding queues

    @RestController
    @RequestMapping("/queue")
    public class RbController {

        @PostMapping
        public String addQueue(@RequestParam String queueName) {
            SimpleMessageListenerContainer container = SpringUtil.getBean(SimpleMessageListenerContainer.class); // fetch the container bean
            container.addQueueNames(queueName);
            return "add " + queueName + " ok";
        }

        @DeleteMapping
        public String delQueue(@RequestParam String queueName) {
            SimpleMessageListenerContainer container = SpringUtil.getBean(SimpleMessageListenerContainer.class);
            container.removeQueueNames(queueName);
            return "delete " + queueName + " ok";
        }
    }

  6. To test, POST to 127.0.0.1:8080/queue to add a queue. There is no test method for sending MQ messages, but you can push a message directly from the MQ management page

Listening to queues with multiple threads

By default the listener container consumes a queue with a single thread

Reference

"Work notes: multi-threaded rabbitmq listening (Spring Boot)"
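
A minimal sketch of enabling concurrency (bean name is illustrative; setConcurrentConsumers/setMaxConcurrentConsumers are standard Spring AMQP settings, and the queue name reuses the example above):

```java
// Sketch: run several consumer threads on one SimpleMessageListenerContainer.
@Bean
public SimpleMessageListenerContainer concurrentContainer(ConnectionFactory connectionFactory) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setQueueNames("test1_staff");
    container.setConcurrentConsumers(3);      // start with 3 consumer threads
    container.setMaxConcurrentConsumers(10);  // allow scaling up to 10 under load
    container.setMessageListener(rbMQReceiverHandler);
    return container;
}
```

Note that with multiple consumers, message ordering within a queue is no longer guaranteed.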

Spring Boot with jconsole

  1. Set a password for remote access
# find the java install directory
echo $PATH
# change into the java install directory
cd /usr/local/jvm/jdk1.8.0_77/jre/lib/management
# create a password file
cp jmxremote.password.template jmxremote.password
# make the file writable
chmod +w jmxremote.password
# uncomment the last two lines: monitorRole QED and controlRole R&D
vim jmxremote.password
# change the permissions to 400 or 600 to fix the startup error
# "Error: Password file read access must be restricted"
chmod 400 jmxremote.password
  2. Adjust the startup command and start the Spring Boot application

# option 1: via an environment variable
JAVA_OPTS='-Djava.rmi.server.hostname=<public ip of this server>
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8888
-Dcom.sun.management.jmxremote.rmi.port=8888
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.ssl=false'
## start command
java $JAVA_OPTS -jar springboot.jar
# option 2: pass the flags directly, without the environment variable
java -Djava.rmi.server.hostname=<public ip of this server>
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8888
-Dcom.sun.management.jmxremote.rmi.port=8888
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.ssl=false
-jar springboot.jar
  3. Start jconsole locally

    Run jconsole in a terminal to open the UI

    Choose remote connection and enter ip:port, e.g. 10.10.10.11:8888, then enter the username (monitorRole) and password (QED), or username controlRole with password R&D

References

"Remote JVM monitoring of a Spring Boot project with jconsole"

"Linux error: password file read access must be restricted: /sy/java/jdk1.6.0_26/jre/lib/management/jmxremote.password"

Config-editing scripts

Test file a.conf:

sex=boy
age=8
url=http://www.baidu.com
"systemUrl": "http://10.254.197.9:9304",

Common script commands

# show lines containing sex
cat a.conf | grep sex
# replace sex=boy with sex=girl; -i writes the change back to the file
sed -i "s/sex=boy/sex=girl/" a.conf
# replace the value of age; \S matches any run of non-whitespace characters
sed -i "s/age=\S*/age=9/" a.conf
# comment out the line starting with sex; & stands for the matched text
sed -i 's/^sex/;&/' a.conf
# uncomment it again
sed -i 's/^;\(sex\)/\1/' a.conf
# append a line ";this is age" after the age line
sed -i '/age/a\;this is age' a.conf
# insert a line ";this is age" before the age line
sed -i '/age/i\;this is age' a.conf
# delete every line matching ";this is age"
sed -i '/;this is age/d' a.conf
# change url; fails if the value contains spaces, since \S stops at whitespace
sed -i "s/url=\S*/url=http:\/\/www.baidu.com/" a.conf
# replace an ip
sed -i "s/10.254.197.9/127.0.0.1/" a.conf
# match the start of a line and replace the whole line; works for values containing spaces
# syntax: '/^prefix/c\replacement-line' (the backslash after c just separates the text and can be omitted)
sed -i '/^url/c\url = 2' a.conf

SpringBoot troubleshooting notes

request.getInputStream() returning null

Background

request.getInputStream(), request.getReader(), and request.getParameter("key")

Once any one of these has been called (and the body read successfully), further calls return nothing: reading advances the stream cursor, and it cannot be rewound

Symptoms

  1. Requests from postman work fine; the body is always readable
  2. Upgrading to Spring Boot 2.2.0 or later also makes the problem disappear

Analysis

  1. When request.getInputStream() is called in the controller, no data comes out because the stream is already marked as read. Inspecting req->request->inputStream->ib->state in a debugger: state=2 means the stream has been read (further reads return null), state=0 means unread
  2. postman works because it sets the Content-Type header; requests carrying that header do not enter the offending filter, so the body is still unread when the controller runs
  3. Spring Boot 2.2.0+ presumably fixed this behavior
  4. So where do older versions read the InputStream? A breakpoint shows the HiddenHttpMethodFilter is in fact entered, and it calls request.getParameter, consuming the body

Fix

Upgrade, or disable the HiddenHttpMethodFilter with the following configuration

@Configuration
public class ConfigurationData {
    @Bean
    public HttpPutFormContentFilter httpPutFormContentFilter() {
        return new HttpPutFormContentFilter();
    }

    @Bean
    public FilterRegistrationBean disableSpringBootHttpPutFormContentFilter(HttpPutFormContentFilter filter) {
        FilterRegistrationBean filterRegistrationBean = new FilterRegistrationBean();
        filterRegistrationBean.setFilter(filter);
        filterRegistrationBean.setEnabled(false);
        return filterRegistrationBean;
    }

    @Bean
    public HiddenHttpMethodFilter hiddenHttpMethodFilter() {
        return new HiddenHttpMethodFilter();
    }

    @Bean
    public FilterRegistrationBean disableSpringBootHiddenHttpMethodFilter(HiddenHttpMethodFilter filter) {
        FilterRegistrationBean filterRegistrationBean = new FilterRegistrationBean();
        filterRegistrationBean.setFilter(filter);
        filterRegistrationBean.setEnabled(false);
        return filterRegistrationBean;
    }
}
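
If the body genuinely needs to be readable more than once, a common alternative (a sketch; the filter and variable names are illustrative, while ContentCachingRequestWrapper is Spring's own class that records the body bytes as they are consumed) is to wrap the request:

```java
// Sketch: make the request body re-readable by caching it as it is consumed.
@Component
public class CachingRequestFilter extends OncePerRequestFilter {
    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        ContentCachingRequestWrapper wrapped = new ContentCachingRequestWrapper(request);
        chain.doFilter(wrapped, response);
        // After the chain has run, the already-consumed body is available again:
        byte[] body = wrapped.getContentAsByteArray();
    }
}
```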

Test source

  1. Test controller

    @RestController
    @RequestMapping(value = "/xmbankaccess")
    public class XmBankAccessControl {
        private final Logger logger = LoggerFactory.getLogger(getClass());

        @RequestMapping(value = "/facecompare")
        public void facecompare(HttpServletRequest req, HttpServletResponse rsp) throws IOException {
            logger.info("begin");
            byte[] reqByte = readReqData(req);
            String str = new String(reqByte);
            logger.info(str);
        }

        private byte[] readReqData(HttpServletRequest request) throws IOException {
            BufferedInputStream bis = null;
            byte[] reqBuff = null;
            try {
                bis = new BufferedInputStream(request.getInputStream());
                byte[] buff = new byte[1024];
                int len = 0;
                int count = 0;
                ByteArrayOutputStream baos = new ByteArrayOutputStream();
                while ((len = bis.read(buff, 0, buff.length)) != -1) {
                    baos.write(buff, 0, len);
                    count += len;
                }
                baos.close();
                reqBuff = new byte[count];
                System.arraycopy(baos.toByteArray(), 0, reqBuff, 0, count);
            } catch (IOException e) {
                logger.error("error reading request body:", e);
            } finally {
                if (bis != null) {
                    bis.close();
                    bis = null;
                }
            }
            return reqBuff;
        }
    }
  2. Test client

    public class AddFaceTest {

        private static final Logger log = LoggerFactory.getLogger("aaa");

        public static void main(String[] args) throws IOException, InterruptedException {
            sendMsgHttp("aaaa".getBytes());
        }

        public static byte[] sendMsgHttp(Object paramObj) {
            // log start of processing
            if (log.isInfoEnabled()) {
                log.info("HTTP exchange starting...");
            }
            // parameter initialization
            byte[] inData = null;
            byte[] outData = null;
            URL url = null;
            URLConnection conn = null;
            // read the input data
            if ((paramObj instanceof byte[])) {
                inData = (byte[]) paramObj;
            } else {
                log.error("data error: the input parameter must be byte[] or CompositeData");
                return null;
            }
            OutputStream os = null;
            BufferedInputStream is = null;
            try {
                // open the connection
                // url = new URL("http://127.0.0.1:8080/xmbankaccess/facecompare");
                url = new URL("http://127.0.0.1:9980/xmbankaccess/facecompare");
                conn = url.openConnection();
                conn.setConnectTimeout(6000);
                conn.setReadTimeout(6000);
                conn.setDoOutput(true);
                if (log.isInfoEnabled()) {
                    log.info("URL connection opened...");
                }
                // send the request data
                os = conn.getOutputStream();
                if (log.isDebugEnabled()) {
                    log.debug("request data sent to the servlet: " + new String(inData, "UTF-8"));
                }
                os.write(inData);
                os.flush();
                // read the response data
                is = new BufferedInputStream(conn.getInputStream());
                int availableSize = 0;
                byte[] buffer = new byte[8192];
                ByteArrayOutputStream baos = new ByteArrayOutputStream();
                while ((availableSize = is.read(buffer)) != -1) {
                    baos.write(buffer, 0, availableSize);
                }
                outData = baos.toByteArray();
                baos.close();
            } catch (Exception e) {
                log.error("communication error:", e);
            } finally {
                try {
                    if (null != os) {
                        os.close();
                    }
                    if (null != is) {
                        is.close();
                    }
                } catch (IOException e) {
                    log.error(e.getMessage());
                }
            }
            // if the response is empty, return a placeholder
            if (outData == null) {
                outData = "aa".getBytes();
            }
            return outData;
        }
    }

    Reference

    "Demystifying empty request.getInputStream() in SpringMVC"

Managing multiple git projects

git-repo multi-project management

Install

# Debian/Ubuntu
$ sudo apt-get install repo
# Gentoo
$ sudo emerge dev-vcs/repo
# other systems: download the script
curl https://storage.googleapis.com/git-repo-downloads/repo -o repo
# make it executable
chmod a+rx repo
# to run it globally, add it to your PATH as appropriate for your system;
# it can also live inside the project and be run from there
# mac install
mv repo /usr/local/bin

Usage

# edit the REPO_URL inside the repo script, otherwise access to the original host is required
# REPO_URL = 'https://gerrit.googlesource.com/git-repo'
REPO_URL = 'https://mirrors.ustc.edu.cn/aosp/git-repo'
# an environment variable works too, or pass it as a command-line argument
repo init --repo-url=https://gerrit-google.tuna.tsinghua.edu.cn/git-repo

repo init
./repo init -u your_project_git_url
./repo init -u git@github.com:xuanfong1/springLeaning.git
#---------------------- optional flags ----------------------------------------
#-b              branch of the manifest repository to use, default master
#-m              manifest file to use, default default.xml
#--depth=1       git clone depth; useful e.g. on Jenkins builds to speed up cloning
#--repo-url=URL  use a custom git-repo source, e.g. a fork with the bugs fixed as mentioned above
#--no-repo-verify  skip verifying the repo source; usually added together with a custom repo url
#------------------------------------------------------------------------------
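
The -u repository passed to repo init must contain a manifest (default.xml) listing the projects to manage. A minimal illustrative manifest (all names and URLs are hypothetical) looks like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical manifest: fetch two projects from one git host -->
<manifest>
  <remote name="origin" fetch="git@github.com:your-org" />
  <default remote="origin" revision="master" />
  <project name="service-a" path="service-a" />
  <project name="service-b" path="service-b" />
</manifest>
```

After repo init, running ./repo sync clones or updates every project listed in the manifest.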