Author: 於老三
Source: https://www.cnblogs.com/yuhuLin/p/7018858.html
Part 1: ELK Setup
Official site: https://www.elastic.co/cn/
Official guide (Chinese): https://www.elastic.co/guide/cn/elasticsearch/guide/current/index.html
Installation guide: https://www.elastic.co/guide/en/elasticsearch/reference/5.x/rpm.html
ELK is short for Elasticsearch, Logstash, and Kibana. These three are the core components, but they are not the whole stack.
Elasticsearch is a real-time full-text search and analytics engine that collects, analyzes, and stores data. It is a scalable, distributed system that exposes REST and Java APIs to provide efficient search, and it is built on top of the Apache Lucene search library.
Logstash is a tool for collecting, parsing, and filtering logs. It supports almost any type of log, including system logs, error logs, and custom application logs. It can receive logs from many sources, including syslog, message queues (such as RabbitMQ), and JMX, and it can ship data to many destinations, including email, WebSockets, and Elasticsearch.
Kibana is a web-based interface for searching, analyzing, and visualizing the log data stored in Elasticsearch indices. It uses Elasticsearch's REST API to retrieve data, and it lets users build custom dashboards over their own data as well as query and filter it in ad-hoc ways.
# Environment
# Installation
# Preparing the environment for elasticsearch
Create the directory for elasticsearch data and change its owner and group
# mkdir -p /data/es-data    (any directory you choose for storing data)
# chown -R elasticsearch:elasticsearch /data/es-data
Change the owner and group of the elasticsearch log directory
# chown -R elasticsearch:elasticsearch /var/log/elasticsearch/
Edit the elasticsearch configuration file
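A minimal sketch of the settings to enable in /etc/elasticsearch/elasticsearch.yml, assuming the data directory created above and the address used throughout this article (the cluster and node names are placeholders, pick your own):
cluster.name: my-elk                  # placeholder cluster name
node.name: elk-node1                  # placeholder node name
path.data: /data/es-data              # data directory created above
path.logs: /var/log/elasticsearch     # log directory whose ownership was changed above
network.host: 192.168.1.202           # address used in the rest of this article
http.port: 9200                       # default HTTP port, queried later on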
Start the service
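Assuming elasticsearch was installed from the RPM package (per the installation guide linked above) on a systemd-based system, it can be enabled and started like this:
# systemctl daemon-reload
# systemctl enable elasticsearch
# systemctl start elasticsearch
# systemctl status elasticsearch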
Notes
Request port 9200 in a browser to check whether the service is up
How to interact with elasticsearch
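A few basic interactions through the REST API with curl, against the node set up above (the index and document used here are throwaway examples):
# curl http://192.168.1.202:9200                           # node and version info, same as the browser check
# curl http://192.168.1.202:9200/_cluster/health?pretty    # cluster health
# curl -X PUT http://192.168.1.202:9200/test-index/test/1 -d '{"name":"elk test"}'    # index a sample document
# curl http://192.168.1.202:9200/test-index/_search?pretty                            # search it back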
Install plugins
# Using Logstash
Logstash is driven by configuration files
Official guide:
https://www.elastic.co/guide/en/logstash/current/configuration.html
Create a configuration file (elk.conf here)
# vim /etc/logstash/conf.d/elk.conf
Add the following content to the file
input { stdin { } }
output {
elasticsearch { hosts => ["192.168.1.202:9200"] }
stdout { codec => rubydebug }
}
Run logstash with the configuration file
# logstash -f ./elk.conf
Once it is running, type some input and check the result on standard output
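For example, typing a line such as "hello world" should print an event on stdout roughly like the following (the host name and timestamp are illustrative and will reflect your own machine):
{
    "@timestamp" => 2017-06-15T06:00:00.000Z,
      "@version" => "1",
          "host" => "elk-node1",
       "message" => "hello world"
}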
Logstash plugin types
1. Input plugins
Reference: https://www.elastic.co/guide/en/logstash/current/input-plugins.html
Using the file plugin
# vim /etc/logstash/conf.d/elk.conf
Add the following configuration
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
}
output {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "system-%{+YYYY.MM.dd}"
}
}
Run logstash with the elk.conf configuration file to collect and filter the logs
# logstash -f /etc/logstash/conf.d/elk.conf
Now add the security log as well and store each log type in its own index; keep editing the elk.conf file
# vim /etc/logstash/conf.d/elk.conf
Add the path of the secure log
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
file {
path => "/var/log/secure"
type => "secure"
start_position => "beginning"
}
}
output {
if [type] == "system" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-system-%{+YYYY.MM.dd}"
}
}
if [type] == "secure" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-secure-%{+YYYY.MM.dd}"
}
}
}
Run logstash with the elk.conf configuration file to collect and filter the logs
# logstash -f ./elk.conf
Once these settings all work, the next step is to install Kibana so the data can be displayed in a front end
Installing and using Kibana
Set up the Kibana environment
Official installation manual: https://www.elastic.co/guide/en/kibana/current/install.html
Download the kibana tar.gz package
# wget https://artifacts.elastic.co/downloads/kibana/kibana-5.4.0-linux-x86_64.tar.gz
Extract the kibana tarball
# tar -xzf kibana-5.4.0-linux-x86_64.tar.gz
Move the extracted kibana directory to /usr/local
# mv kibana-5.4.0-linux-x86_64 /usr/local
Create a symlink for kibana
# ln -s /usr/local/kibana-5.4.0-linux-x86_64/ /usr/local/kibana
Edit the kibana configuration file
# vim /usr/local/kibana/config/kibana.yml
Modify the configuration file by enabling the following settings
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.1.202:9200"
kibana.index: ".kibana"
Install screen so kibana can run in the background (optional; any other way of backgrounding it works too)
# yum -y install screen
# screen
# /usr/local/kibana/bin/kibana
netstat -antp |grep 5601
tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN 17007/node
Open a browser and configure the corresponding index pattern
http://IP:5601
Part 2: ELK in Practice
Now that indices can be created, let's ship the nginx, apache, messages, and secure logs to the front end (if nginx is already installed, just edit its config; otherwise install it first)
Edit the nginx configuration file and add the following inside the http block
log_format json '{"@timestamp":"$time_iso8601",'
'"@version":"1",'
'"client":"$remote_addr",'
'"url":"$uri",'
'"status":"$status",'
'"domian":"$host",'
'"host":"$server_addr",'
'"size":"$body_bytes_sent",'
'"responsetime":"$request_time",'
'"referer":"$http_referer",'
'"ua":"$http_user_agent"'
'}';
Change the access_log output format to the json format just defined
access_log logs/elk.access.log json;
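After changing the nginx config, validate and reload it (assuming the nginx binary is on the PATH; otherwise use the full path, e.g. /usr/local/nginx/sbin/nginx):
# nginx -t          # check the configuration syntax
# nginx -s reload   # reload so the new json log format takes effect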
Next, edit the apache configuration file
LogFormat "{ \\
"@timestamp": "%{%Y-%m-%dT%H:%M:%S%z}t", \\
"@version": "1", \\
"tags":["apache"], \\
"message": "%h %l %u %t \\\\"%r\\\\" %>s %b", \\
"clientip": "%a", \\
"duration": %D, \\
"status": %>s, \\
"request": "%U%q", \\
"urlpath": "%U", \\
"urlquery": "%q", \\
"bytes": %B, \\
"method": "%m", \\
"site": "%{Host}i", \\
"referer": "%{Referer}i", \\
"useragent": "%{User-agent}i" \\
}" ls_apache_json
Likewise, change the log output to the json format defined above
CustomLog logs/access_log ls_apache_json
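As with nginx, check the apache configuration and restart the service so the new format takes effect (the httpd service name is assumed here; it differs per distribution):
# apachectl configtest
# systemctl restart httpd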
Edit the logstash configuration file to collect these logs
# vim /etc/logstash/conf.d/full.conf
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
file {
path => "/var/log/secure"
type => "secure"
start_position => "beginning"
}
file {
path => "/var/log/httpd/access_log"
type => "http"
start_position => "beginning"
}
file {
path => "/usr/local/nginx/logs/elk.access.log"
type => "nginx"
start_position => "beginning"
}
}
output {
if [type] == "system" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-system-%{+YYYY.MM.dd}"
}
}
if [type] == "secure" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-secure-%{+YYYY.MM.dd}"
}
}
if [type] == "http" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-http-%{+YYYY.MM.dd}"
}
}
if [type] == "nginx" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-nginx-%{+YYYY.MM.dd}"
}
}
}
Run it and see the result
# logstash -f /etc/logstash/conf.d/full.conf
You can see that indices for all of these logs have been created. Next, go to Kibana and create the corresponding index patterns (the same way as above) and check how the data displays
Next up: displaying the MySQL slow query log
Because the MySQL slow query log has an unusual multi-line format, it needs regex matching, and the multiline codec is used to merge the lines of one entry into a single event (see the configuration below)
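For reference, one slow-log entry spans several lines, roughly like the sample below (the values are made up and the header lines vary slightly by MySQL version). With the multiline codec below, every line that does not start with "# User@Host:" is appended to the previous event:
# Time: 170615  6:00:01
# User@Host: root[root] @ localhost [192.168.1.100]
# Query_time: 3.000000  Lock_time: 0.000100 Rows_sent: 1  Rows_examined: 500000
use testdb;
SET timestamp=1497506401;
SELECT * FROM big_table WHERE col LIKE '%x%';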
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
file {
path => "/var/log/secure"
type => "secure"
start_position => "beginning"
}
file {
path => "/var/log/httpd/access_log"
type => "http"
start_position => "beginning"
}
file {
path => "/usr/local/nginx/logs/elk.access.log"
type => "nginx"
start_position => "beginning"
}
file {
path => "/var/log/mysql/mysql.slow.log"
type => "mysql"
start_position => "beginning"
codec => multiline {
pattern => "^# User@Host:"
negate => true
what => "previous"
}
}
}
filter {
grok {
match => { "message" => "SELECT SLEEP" }
add_tag => [ "sleep_drop" ]
tag_on_failure => []
}
if "sleep_drop" in [tags] {
drop {}
}
grok {
match => { "message" => "(?m)^# User@Host: %{USER:User}\\[[^\\]]+\\] @ (?:(?\\S*) )?\\[(?:%{IP:Client_IP})?\\]\\s.*# Query_time: %{NUMBER:Query_Time:float}\\s+Lock_time: %{NUMBER:Lock_Time:float}\\s+Rows_sent: %{NUMBER:Rows_Sent:int}\\s+Rows_examined: %{NUMBER:Rows_Examined:int}\\s*(?:use %{DATA:Database};\\s*)?SET timestamp=%{NUMBER:timestamp};\\s*(? (? \\w+)\\s+.*)\\n# Time:.*$" }
}
date {
match => [ "timestamp", "UNIX" ]
remove_field => [ "timestamp" ]
}
}
output {
if [type] == "system" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-system-%{+YYYY.MM.dd}"
}
}
if [type] == "secure" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-secure-%{+YYYY.MM.dd}"
}
}
if [type] == "http" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-http-%{+YYYY.MM.dd}"
}
}
if [type] == "nginx" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-nginx-%{+YYYY.MM.dd}"
}
}
if [type] == "mysql" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-mysql-slow-%{+YYYY.MM.dd}"
}
}
}
Check the result: each slow query now shows up as a single event; without this multiline/grok handling, every line of the slow log would become a separate event
Analyze and adapt this to whatever your actual log output requirements are
Part 3: The Final ELK Setup (adding Redis as a broker)
Install redis
# yum install -y redis
Edit the redis configuration file
# vim /etc/redis.conf
Change the following settings
daemonize yes
bind 192.168.1.202
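The logstash configurations later in this article connect to redis with password => 'test', so for that to work the password should be set here as well (this is an assumption; if you do not want authentication, drop the password option from the logstash configs instead):
requirepass test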
Start the redis service
# /etc/init.d/redis restart
Test whether redis started successfully
# redis-cli -h 192.168.1.202
Type info; if it responds without errors, redis is working
redis 192.168.1.202:6379> info
redis_version:2.4.10
....
Create the redis-out.conf configuration file, which stores data from standard input into redis
# vim /etc/logstash/conf.d/redis-out.conf
Add the following content
input {
stdin {}
}
output {
redis {
host => "192.168.1.202"
port => "6379"
password => 'test'
db => '1'
data_type => "list"
key => 'elk-test'
}
}
Run logstash with the redis-out.conf configuration file
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-out.conf
Once it is running, type something into logstash and check the result
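To confirm the events actually landed in redis, check from another shell with redis-cli (db 1 and the key elk-test match the output block above; drop -a if no requirepass is configured):
# redis-cli -h 192.168.1.202 -a test
SELECT 1                # the logstash output above writes to db 1
LLEN elk-test           # number of queued events
LRANGE elk-test 0 -1    # dump the queued events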
Create the redis-in.conf configuration file, which reads the data stored in redis and sends it to elasticsearch
# vim /etc/logstash/conf.d/redis-in.conf
Add the following content
input{
redis {
host => "192.168.1.202"
port => "6379"
password => 'test'
db => '1'
data_type => "list"
key => 'elk-test'
batch_count => 1    # how many entries to pop from the list in one go; the default is 125 (if redis holds fewer than 125 entries this errors out, so set it to 1 while testing)
}
}
output {
elasticsearch {
hosts => ['192.168.1.202:9200']
index => 'redis-test-%{+YYYY.MM.dd}'
}
}
Run logstash with the redis-in.conf configuration file
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-in.conf
Now modify the earlier configuration so that every monitored log source is first written to redis, and then shipped from redis into elasticsearch
Edit full.conf and change it to the following
input {
file {
path => "/var/log/httpd/access_log"
type => "http"
start_position => "beginning"
}
file {
path => "/usr/local/nginx/logs/elk.access.log"
type => "nginx"
start_position => "beginning"
}
file {
path => "/var/log/secure"
type => "secure"
start_position => "beginning"
}
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
}
output {
if [type] == "http" {
redis {
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_http'
}
}
if [type] == "nginx" {
redis {
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_nginx'
}
}
if [type] == "secure" {
redis {
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_secure'
}
}
if [type] == "system" {
redis {
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_system'
}
}
}
Run logstash with this shipper configuration (full.conf)
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/full.conf
Check in redis whether the data has been written (if the monitored log files are not producing new entries, nothing will show up in redis either)
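A quick way to check, again with redis-cli (db 6 and the nagios_* keys match the shipper configuration above; drop -a if no requirepass is configured):
# redis-cli -h 192.168.1.202 -a test
SELECT 6
KEYS nagios_*           # should list nagios_system, nagios_secure, ... once events arrive
LLEN nagios_system      # number of system-log events waiting in the queue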
Now read the data back out of redis and write it into elasticsearch (use another host for this part of the experiment)
Edit the configuration file
# vim /etc/logstash/conf.d/redis-out.conf
Add the following content
input {
redis {
type => "system"
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_system'
batch_count => 1
}
redis {
type => "http"
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_http'
batch_count => 1
}
redis {
type => "nginx"
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_nginx'
batch_count => 1
}
redis {
type => "secure"
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_secure'
batch_count => 1
}
}
output {
if [type] == "system" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-system-%{+YYYY.MM.dd}"
}
}
if [type] == "http" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-http-%{+YYYY.MM.dd}"
}
}
if [type] == "nginx" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-nginx-%{+YYYY.MM.dd}"
}
}
if [type] == "secure" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-secure-%{+YYYY.MM.dd}"
}
}
}
Note:
The input here reads from the client side
The output still goes to elasticsearch on 192.168.1.202; to store it on the current host instead, change hosts in the output to localhost, and if you also want to view it in kibana, deploy kibana on this host as well. The point of splitting things this way is loose coupling
In short: collect logs on the client, write them to redis on the server (or to a local redis), and point the output at the ES server when indexing
Run the command and check the result
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-out.conf
The result is the same as writing directly to the ES server; the difference is that the logs are first stored in redis and then read back out of redis into ES
Putting ELK into production
1. Log classification
System logs     rsyslog     logstash syslog plugin
Access logs     nginx       logstash codec json
Error logs      file        logstash multiline
Runtime logs    file        logstash codec json
Device logs     syslog      logstash syslog plugin
Debug logs      file        logstash json or multiline
2. Log standardization
Paths: fixed
Format: JSON wherever possible
3. Roll out in stages: start with system logs --> then error logs --> runtime logs --> access logs
Because ES keeps indices forever by default, old logs need to be deleted periodically; the command below deletes indices older than a given number of days
curl -X DELETE http://xx.xx.com:9200/logstash-*-`date +%Y-%m-%d -d "-$n days"`
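A minimal sketch of running this from cron, assuming you want to keep 30 days of the nagios-* indices used in this article (note that the date format in the index name, dots here, has to match how your indices are actually named):
# crontab -e
0 1 * * * /usr/bin/curl -s -X DELETE "http://192.168.1.202:9200/nagios-*-$(date -d '-30 days' +%Y.%m.%d)" >/dev/null 2>&1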