1. Logstash itself needs little configuration; the main work is writing the data-processing and parsing rules. See http://udn.yyuap.com/doc/logstash-best-practice-cn/get_start/daemon.html for details.
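As a concrete illustration, a minimal Logstash pipeline that receives events from filebeat could look like the sketch below. The port, grok pattern, and Elasticsearch host are placeholder assumptions for illustration, not values from the linked guide:

```conf
input {
  beats {
    port => 5044          # listen for events shipped by filebeat (placeholder port)
  }
}
filter {
  grok {
    # illustrative parsing rule; replace the pattern with one matching your log format
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]   # placeholder host
  }
}
```

Run it with `bin/logstash -f <config-file>`; the parsing rules in the `filter` block are where most of the per-application work goes.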
2. Detailed filebeat configuration. Newer versions ship two files, filebeat.yml and filebeat.full.yml; in actual production you only need to configure and run with filebeat.full.yml.
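Before walking through the reference options below, here is a minimal working filebeat configuration in the 1.x-style layout the reference uses. The log paths and Logstash host are placeholders, not values from the original post:

```yaml
filebeat:
  prospectors:
    -
      # log files to ship; adjust to your environment (placeholder path)
      paths:
        - /var/log/*.log
      input_type: log
output:
  logstash:
    # placeholder host; must match your Logstash beats input
    hosts: ["localhost:5044"]
```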
# Filebeat Configuration Example #
Output
#protocol: "https"
#username: "admin"
#password: "s3cr3t"
# Number of workers per Elasticsearch host.
#worker: 1
# Optional index name. The default is "filebeat" and generates
# [filebeat-]YYYY.MM.DD keys.
#index: "filebeat"
# A template is used to set the mapping in Elasticsearch
# By default template loading is disabled and no template is loaded.
# These settings can be adjusted to load your own template or overwrite existing ones
#template:
  # Template name. By default the template name is filebeat.
  #name: "filebeat"
  # Path to template file
  #path: "filebeat.template.json"
  # Overwrite existing template
  #overwrite: false
# Optional HTTP Path
#path: "/elasticsearch"
# Proxy server url
#proxy_url: http://proxy:3128
# The number of times a particular Elasticsearch index operation is attempted. If
# the indexing operation doesn’t succeed after this many retries, the events are
# dropped. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Elasticsearch bulk API index request.
# The default is 50.
#bulk_max_size: 50
# Configure http request timeout before failing a request to Elasticsearch.
#timeout: 90
# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, additional bulk index
# requests are made.
#flush_interval: 1
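Uncommenting the Elasticsearch output options above gives a minimal working sketch; the host is a placeholder assumption, the other values are the documented defaults:

```yaml
output:
  elasticsearch:
    hosts: ["localhost:9200"]   # placeholder host
    index: "filebeat"           # generates [filebeat-]YYYY.MM.DD keys
    max_retries: 3              # drop events after this many failed attempts
    bulk_max_size: 50           # events per bulk API request
    flush_interval: 1           # seconds to wait between bulk requests
```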
# Boolean that sets if the topology is kept in Elasticsearch. The default is
# false. This option makes sense only for Packetbeat.
#save_topology: false
# The time to live in seconds for the topology information that is stored in
# Elasticsearch. The default is 15 seconds.
#topology_expire: 15

# TLS configuration. By default is off.
#tls:
  # List of root certificates for HTTPS server verifications
  #certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for TLS client authentication
  #certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #certificate_key: "/etc/pki/client/cert.key"
  # Controls whether the client verifies server certificates and host name.
  # If insecure is set to true, all server host names and certificates will be
  # accepted. In this mode TLS based connections are susceptible to
  # man-in-the-middle attacks. Use only for testing.
  #insecure: true
  # Configure cipher suites to be used for TLS connections
  #cipher_suites: []
  # Configure curve types for ECDHE based cipher suites
  #curve_types: []
  # Configure minimum TLS version allowed for connection to logstash
  #min_version: 1.0
  # Configure maximum TLS version allowed for connection to logstash
  #max_version: 1.2

Logstash as output
#logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  # Number of workers per Logstash host.
  #worker: 1
  # The maximum number of events to bulk into a single batch window. The
  # default is 2048.
  #bulk_max_size: 2048
  # Set gzip compression level.
  #compression_level: 3
  # Optional load balance the events between the Logstash hosts
  #loadbalance: true
  # Optional index name. The default index name depends on each beat.
  # For Packetbeat, the default is set to packetbeat, for Topbeat
  # to topbeat and for Filebeat to filebeat.
  #index: filebeat
  # Optional TLS. By default is off.
#tls:
  # List of root certificates for HTTPS server verifications
  #certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for TLS client authentication
  #certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #certificate_key: "/etc/pki/client/cert.key"
  # Controls whether the client verifies server certificates and host name.
  # If insecure is set to true, all server host names and certificates will be
  # accepted. In this mode TLS based connections are susceptible to
  # man-in-the-middle attacks. Use only for testing.
  #insecure: true
  # Configure cipher suites to be used for TLS connections
  #cipher_suites: []
  # Configure curve types for ECDHE based cipher suites
  #curve_types: []

File as output
#file:
  # Path to the directory where to save the generated files. The option is mandatory.
#path: "/tmp/filebeat"
# Name of the generated files. The default is `filebeat` and it generates files:
# `filebeat`, `filebeat.1`, `filebeat.2`, etc.
#filename: filebeat

# Maximum size in kilobytes of each file. When this size is reached, the files are
# rotated. The default value is 10 MB.
#rotate_every_kb: 10000
# Maximum number of files under path. When this number of files is reached, the
# oldest file is deleted and the rest are shifted from last to first. The default
# is 7 files.
#number_of_files: 7
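Uncommented, the file output above becomes the following minimal sketch (the path is a placeholder, the rotation values are the documented defaults):

```yaml
output:
  file:
    path: "/tmp/filebeat"     # placeholder output directory (mandatory option)
    filename: filebeat        # generates filebeat, filebeat.1, filebeat.2, ...
    rotate_every_kb: 10000    # rotate once a file reaches ~10 MB
    number_of_files: 7        # keep at most 7 rotated files
```

The file output is mostly useful for debugging what filebeat would ship, without needing Elasticsearch or Logstash running.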
Console output
#console:
# Pretty print json event
#pretty: false
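Taken together, an uncommented Logstash output with TLS could look like the sketch below; the host, certificate path, and load-balancing choice are placeholder assumptions, not values from the original post:

```yaml
output:
  logstash:
    hosts: ["localhost:5044"]   # placeholder host
    worker: 1
    loadbalance: true           # spread events across the listed hosts
    index: filebeat
    tls:
      # placeholder CA path; enables certificate verification of the server
      certificate_authorities: ["/etc/pki/root/ca.pem"]
```

Leave `insecure: true` out of production configurations: it disables certificate and host-name verification entirely.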
Shipper
shipper:
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
# If this option is not defined, the hostname is used.
#name:
# The tags of the shipper are included in their own field with each
# transaction published. Tags make it easy to group servers by different
# logical properties.
#tags: ["service-X", "web-tier"]
# Uncomment the following if you want to ignore transactions created
# by the server on which the shipper is installed. This option is useful
# to remove duplicates if shippers are installed on multiple servers.
#ignore_outgoing: true
# How often (in seconds) shippers are publishing their IPs to the topology map.
# The default is 10 seconds.
#refresh_topology_freq: 10
# Expiration time (in seconds) of the IPs published by a shipper to the topology map.
# All the IPs will be deleted afterwards. Note, that the value must be higher than
# refresh_topology_freq. The default is 15 seconds.
#topology_expire: 15
# Internal queue size for single events in processing pipeline
#queue_size: 1000
# Configure local GeoIP database support.
# If no paths are configured, geoip is disabled.
#geoip:
  #paths:
    # - "/usr/share/GeoIP/GeoLiteCity.dat"
    # - "/usr/local/var/GeoIP/GeoLiteCity.dat"
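A minimal shipper section, uncommented, might look as follows; the name and tags are hypothetical examples:

```yaml
shipper:
  name: "web-01"                    # hypothetical shipper name; defaults to the hostname
  tags: ["service-X", "web-tier"]   # example tags for grouping servers
```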
Logging
# There are three options for the log output: syslog, file, stderr.
# On Windows, log output goes to the file output by default;
# on all other systems it goes to syslog by default.
# During development, enable logging at the debug or info level; in production,
# set the level to error.
# To enable file logging, the to_files option must be set to true.
logging:
# Send all logging output to syslog. On Windows default is false, otherwise
# default is true.
# Configure Beats logging. Logs can be written to syslog or to rotating log
# files. The default is syslog (view it with tail -f /var/log/messages).
#to_syslog: true

# Write all logging output to files. Beats automatically rotate files if the
# rotateeverybytes limit is reached.
# Send logs to rotating files instead of syslog.
#to_files: false

# To enable logging to files, the to_files option has to be set to true.
files:
# The directory the log files will be written to.
# Log file path.
#path: /var/log/mybeat
# The name of the files where the logs are written to.
# Log file name.
#name: mybeat
# Configure log file size limit. If the limit is reached, the log file will be
# automatically rotated.
# By default a new file is rotated in once the current file reaches 10 MB.
rotateeverybytes: 10485760 # = 10MB
# Number of rotated log files to keep. Oldest files will be deleted first.
# The default is 7. Valid values range from 2 to 1024.
#keepfiles: 7
# Enable debug output for selected components. To enable all selectors use [“*”]
# Other available selectors are beat, publish, service.
# Multiple selectors can be chained.
#selectors: [ ]
# Sets log level. The default log level is error.
# Available log levels are: critical, error, warning, info, debug
#level: error
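Putting the logging advice above into practice, a development-friendly logging section could look like this sketch (the path and name mirror the commented examples; the level choice is the post's recommendation for development):

```yaml
logging:
  to_files: true               # required for file logging
  files:
    path: /var/log/mybeat      # example path from the reference config
    name: mybeat
    rotateeverybytes: 10485760 # rotate at 10 MB
    keepfiles: 7               # keep 7 rotated files (valid range 2-1024)
  level: info                  # debug/info in development, error in production
```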
Reposted from: https://my.oschina.net/VILLE/blog/

