This GitHub document shows how to set up an X-Pack-secured ELK stack with SSL, using Nginx as a reverse proxy. The X-Pack-enabled ELK stack uses SSL encryption for communication between ELK components and with clients. It also provides role-based access control that you can manage in Kibana. Nginx redirects port 80 traffic to port 443 and proxies traffic from port 443 to the Kibana port (5601). Nginx requires an SSL certificate signed by a public CA to avoid browser privacy warnings. The ELK stack sets up its own private CA internally and signs certificates for the ELK components (Kibana, Logstash, and Elasticsearch).
This has been tested with CentOS Linux release 7.5.1804 (Core) and Ubuntu 20.04 LTS.
- "ELK" is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch. Kibana lets users visualize data with charts and graphs in Elasticsearch. The Elastic Stack is the next evolution of the ELK Stack.
- Official website: https://www.elastic.co/
Install Docker CE and Docker Compose if you do not already have them. Link to instructions
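One common way to install both on a fresh host is shown below (an illustrative sketch, not the only supported method; follow the official instructions for your distribution if in doubt):
# Install Docker CE via the official convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Install docker-compose (check the releases page for the current version)
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose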
You may need to increase max_map_count on the Docker host.
sudo sysctl -w vm.max_map_count=262144
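To make this setting persist across reboots, you can also write it to /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/):
# Persist vm.max_map_count across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p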
git clone https://github.com/fabric-testbed/fabric-docker-images.git
Create a CSR for Nginx and get a signed SSL certificate from a public CA.
openssl req -new -newkey rsa:2048 -nodes -keyout your.hostname.com.key -out your.hostname.com.csr
You can change the `docker-compose.yml` file to adjust the RAM required for each container (`es01`, `es02`, `es03`, and `logstash`) based on your host VM's available resources. By default, each container uses `4GB`, so you may need `16GB` of RAM in your host VM.
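For example, the Elasticsearch containers typically control their JVM heap through ES_JAVA_OPTS in the environment section; a hypothetical excerpt showing a reduced 1GB heap (the exact keys and defaults in this repository's docker-compose.yml may differ):
es01:
  environment:
    # Reduce the JVM heap if the host has limited RAM (default in this setup is 4GB)
    ES_JAVA_OPTS: "-Xms1g -Xmx1g"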
Running the create_certs docker-compose file automatically creates all certificates, signed by the stack's own private CA, for the ELK components.
cd fabric-docker-images/elk-ssl-xpack
docker-compose -f create_certs.yml run --rm create_certs
./setdatafolders.sh
docker-compose up -d es01 es02 es03
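Optionally, confirm that the three Elasticsearch containers are up and running before moving on:
# Show the state of the containers started by docker-compose
docker-compose ps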
Get into the `es01` container and execute the `./elasticsearch-setup-passwords` command. This creates randomly generated passwords for the pre-defined users (`apm_system`, `kibana_system`, `kibana`, `logstash_system`, `beats_system`, `remote_monitoring_user`, and `elastic`).
docker-compose exec es01 bash
cd bin
./elasticsearch-setup-passwords auto --batch --url https://es01:9200
exit
Keep the generated passwords in a safe place; they are needed to set up the ELK stack properly in the following steps.
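The setup tool prints one password per user, in roughly the following format (the values below are illustrative only):
Changed password for user kibana_system
PASSWORD kibana_system = aBcD3fGh1jKlMnOp
Changed password for user logstash_system
PASSWORD logstash_system = qRsT5uVwXyZ67890
...
Changed password for user elastic
PASSWORD elastic = 1aB2cD3eF4gH5iJ6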
Update `ELASTICSEARCH_PASSWORD` in the `environment` section of `kib01` in the `docker-compose.yml` file, using the password randomly generated for `kibana_system` in the previous step.
kib01:
  environment:
    ELASTICSEARCH_USERNAME: kibana_system
    ELASTICSEARCH_PASSWORD: randomly-generated-password
# Start kibana
docker-compose up -d kib01
# Check logs
docker-compose logs -f kib01
Change the locations of the public-CA-signed SSL certificate and key under the `volumes` section of `nginx` in the `docker-compose.yml` file.
nginx:
  volumes:
    - "/certs/your.hostname.com.cer:/etc/nginx/certs/your.hostname.com.cer"
    - "/certs/your.hostname.com.key:/etc/nginx/certs/your.hostname.com.key"
Change `your.hostname.com` in the Nginx configuration to the FQDN that matches your SSL certificate signed by a public CA.
server {
    listen 80 default_server;
    server_name your.hostname.com; # Change to your FQDN
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name your.hostname.com; # Change to your FQDN
    access_log /var/log/nginx/nginx.vhost.access.log;
    error_log /var/log/nginx/nginx.vhost.error.log;
    ssl_certificate "/etc/nginx/certs/your.hostname.com.cer"; # Change to your cert
    ssl_certificate_key "/etc/nginx/certs/your.hostname.com.key"; # Change to your key
}
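The full Nginx configuration in the repository also proxies requests from the 443 server block to Kibana; a minimal sketch of such a location block, assuming Kibana is reachable as kib01 on port 5601 (the actual file may differ):
location / {
    # Forward all requests to the Kibana container
    # (use http:// instead if Kibana is not serving TLS)
    proxy_pass https://kib01:5601;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}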
docker-compose up -d nginx
docker-compose logs -f nginx
Logstash requires a PKCS#8 format key to use SSL encryption on its pipeline. To convert the key, we use an Ubuntu Docker container with `openssl` installed and access to the shared certificate volume.
docker-compose build ubuntu
docker-compose run ubuntu
# Inside ubuntu docker container
cd /usr/share/elasticsearch/config/certificates/logstash
openssl pkcs8 -in logstash.key -topk8 -nocrypt -out logstash.pkcs8.key
chmod 664 logstash.pkcs8.key
exit
Change the value of `xpack.monitoring.elasticsearch.password` in the `logstash/config/logstash.yml` file to the `logstash_system` password that was automatically generated in the previous step.
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: 'logstash_system-password-here'
Change the `password` value in the `output` section of the `logstash/pipeline/01.beat.conf` file to the `elastic` password that was automatically generated in the previous step.
# Output
output {
  elasticsearch {
    hosts => ["https://es01:9200"]
    cacert => "/usr/share/elasticsearch/config/certificates/ca/ca.crt"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "elastic-password-here"
  }
}
docker-compose up -d logstash
docker-compose logs -f logstash
You will need the ELK stack's own CA certificate if you send data from remote clients using Beats (e.g., Filebeat, Metricbeat). The command below shows how to copy the certificate out of the `es01` container's shared volume.
docker cp es01:/usr/share/elasticsearch/config/certificates/ca/ca.crt ca.crt
This section shows how to set up Metricbeat on a remote client so that it uses SSL encryption to send data to the ELK stack.
Copy the ELK stack's own CA certificate file `ca.crt` to the remote client.
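For example, assuming the remote client is reachable over SSH and the certificate will live under /certs (an illustrative path; use whatever location your metricbeat.yml points to):
# Copy the CA certificate from the ELK host to the remote client
scp ca.crt user@remote-client:/tmp/ca.crt
# On the remote client, move it into place
sudo mkdir -p /certs
sudo mv /tmp/ca.crt /certs/ca.crt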
Metricbeat version 7.8.1 is used to match our ELK stack's version.
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.8.1-x86_64.rpm
sudo rpm -vi metricbeat-7.8.1-x86_64.rpm
sudo systemctl enable metricbeat
sudo vim /etc/hosts
# Add IP address of ELK stack and set host name as logstash
ip_address_of_elk logstash
Update the `password` of `beats_system` under `output.logstash` in `/etc/metricbeat/metricbeat.yml` on the remote client. Also update the file location in `ssl.certificate_authorities` to point to the ELK stack's own CA certificate file `ca.crt`.
# ------------------------ Logstash Output -------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["logstash:5044"]
  username: "beats_system"
  password: "beats_system-password-here"

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/certs/ca.crt"]
Start Metricbeat on the remote client.
sudo systemctl start metricbeat
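Optionally, verify that Metricbeat can reach Logstash and that the service is running (standard Metricbeat and systemd commands):
# Test the configured output (connects to logstash:5044 using metricbeat.yml settings)
sudo metricbeat test output
# Check the service status and follow its logs
sudo systemctl status metricbeat
sudo journalctl -u metricbeat -f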