Secure Your Free Wazuh SIEM

If you have been looking for a free SIEM tool to fulfill PCI-DSS requirements such as file integrity monitoring (FIM), centralized logging, and alerting on suspicious activity, then the OSSEC fork Wazuh is the tool for you. The Wazuh architecture is based on the ELK stack with an additional RESTful API, extra features, and great documentation. In this article, I will give a quick guide on getting started with a high-availability Wazuh setup across two environments.

Overview of the architecture:

Architecture Design Overview

The environments can be two data centers, two VLANs within your network, or even one of the many IaaS clouds out there.

I will not cover securing your network perimeter but will concentrate on securing communication between all the components in the diagram.

The Wazuh team has already taken care of encrypting the traffic between the agents, the managers, filebeat, logstash, kibana, and elasticsearch, but they have not documented the encryption between the nodes of an elasticsearch cluster running in distributed mode. The standard setup also exposes the elasticsearch interface to potentially harmful direct queries. One could restrict access based on source IP, but that would not solve the unencrypted-traffic problem. There is also no restriction on what users can do, since there is only one user role in the default setup. To address some of these gaps, I will be using the Community Edition of Search Guard, which adds encryption and simple role-based access control.

Installing Wazuh

The Wazuh project has done an excellent job in documenting the installation so I will skip this step. Please refer to Installation Guide for details on the installation and lots more. For a quick installation, I just took the latest OVA images, replicated them several times and disabled the services not being used.

The table below is my reference for IPs, hostnames and roles (configuring the IP addresses will not be covered in this guide); I will only cover one side of the environment:

IP          Host Name            Role
10.0.1.21   wazuhmg-node-01      Wazuh Manager (master)
10.0.1.22   wazuhmg-node-02      Wazuh Manager (client)
10.0.1.31   wazuhelk-node-01     ELK stack (node)
10.0.1.32   wazuhelk-node-02     ELK stack (node)
10.0.1.41   wazuhelk-client-01   ELK stack (client)

I also updated my .bash_profile to include the additional paths, which makes life a bit more convenient, by updating the PATH variable like so:

PATH=$PATH:$HOME/bin:/var/ossec/bin:/usr/share/elasticsearch/bin:\
/usr/share/kibana/bin
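To make this change survive new login sessions, you can append it to ~/.bash_profile (a small sketch; the paths match the OVA layout used in this guide, so adjust them if your layout differs):

```shell
# Persist the extended PATH across logins by appending to ~/.bash_profile.
# The quoted 'EOF' keeps $PATH and $HOME literal in the file.
cat >> ~/.bash_profile <<'EOF'
PATH=$PATH:$HOME/bin:/var/ossec/bin:/usr/share/elasticsearch/bin:/usr/share/kibana/bin
export PATH
EOF
```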

You will also need to update the /etc/hosts file or your central DNS service for all servers to be able to talk to each other using DNS Names.

cat >> /etc/hosts <<\EOF
10.0.1.21 wazuhmg-node-01 wazuhmg-node-01.osmsdemo.ilan
10.0.1.22 wazuhmg-node-02 wazuhmg-node-02.osmsdemo.ilan
10.0.1.31 wazuhelk-node-01 wazuhelk-node-01.osmsdemo.ilan
10.0.1.32 wazuhelk-node-02 wazuhelk-node-02.osmsdemo.ilan
10.0.1.41 wazuhelk-client-01 wazuhelk-client-01.osmsdemo.ilan
EOF
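A quick sanity check (a sketch, using the hostnames from the table above) confirms that every node resolves before moving on:

```shell
# Print the resolution status of each node; getent consults /etc/hosts and DNS.
for h in wazuhmg-node-01 wazuhmg-node-02 wazuhelk-node-01 wazuhelk-node-02 wazuhelk-client-01; do
  getent hosts "$h" > /dev/null && echo "$h resolves" || echo "$h FAILED to resolve"
done
```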

Wazuh Managers Configuration

The Wazuh manager in the distributed setup does not need all the services shipped on the OVA, so we will disable the ELK services and install the filebeat package, which will be used to ship our logs over to the ELK cluster.

Disable services and stop them:

systemctl disable elasticsearch.service logstash.service kibana.service
systemctl stop elasticsearch.service logstash.service kibana.service

Install the additional package filebeat:

yum -y install filebeat

Install the additional packages for the Wazuh cluster:

yum -y install python-setuptools python-cryptography

Wazuh Configuration

In order for the two managers to talk to each other in cluster mode, we need to generate a 32-character key and set the hostnames:

openssl rand -hex 16

Which gave me this for the setup: ca3fc8a415644308f8cb7f930eb27183
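If you want to double-check the key before pasting it into the config, the hex output of 16 random bytes should be exactly 32 characters (two hex digits per byte):

```shell
# Generate the cluster key and verify its length: 16 bytes -> 32 hex characters.
KEY=$(openssl rand -hex 16)
echo "key: $KEY (${#KEY} chars)"
```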

Setting the hostname on server 10.0.1.21:

hostnamectl set-hostname wazuhmg-node-01

Setting the hostname on server 10.0.1.22:

hostnamectl set-hostname wazuhmg-node-02

Now we will need to update the configuration files in order to get the Wazuh managers to talk to each other in cluster mode:

Edit the cluster section in /var/ossec/etc/ossec.conf with your favorite editor:

Configuration for 10.0.1.21:

<cluster>
  <name>wazuh</name>
  <node_name>wazuhmg-node-01</node_name>
  <node_type>master</node_type>
  <key>ca3fc8a415644308f8cb7f930eb27183</key>
  <interval>2m</interval>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>10.0.1.21</node>
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>

Configuration for 10.0.1.22:

<cluster>
  <name>wazuh</name>
  <node_name>wazuhmg-node-02</node_name>
  <node_type>client</node_type>
  <key>ca3fc8a415644308f8cb7f930eb27183</key>
  <interval>2m</interval>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>10.0.1.21</node>
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>

Restart the managers and check that everything is up and running:

/var/ossec/bin/ossec-control restart

Verify cluster daemons are running and are seeing each other:

ps aux | grep clusterd

You should see processes running like below:

wazuh-clusterd
wazuh-clusterd-internal -tmaster
wazuh-clusterd
wazuh-clusterd

With the cluster now running you will see two managers when you run the following command:

cluster_control -l

You should see some output like the following:


---------------------------------------------------------------
Name Address Type Version
---------------------------------------------------------------
wazuhmg-node-01 10.0.1.21 master 3.3.1
wazuhmg-node-02 10.0.1.22 client 3.3.1
---------------------------------------------------------------

Now that we have the master manager replicating all configs to the client manager we can start the setup of agent authentication. Remember, from now on you must only make configuration changes on the master manager for Wazuh. All agents will need to be registered with the master manager but will be able to communicate with both managers, once authenticated.

Let’s now enable the authentication daemon in order to facilitate client registration.

/var/ossec/bin/ossec-control enable auth

Set the password to use when registering new agents.

openssl rand -hex 16 > /var/ossec/etc/authd.pass

On the Wazuh agent, you will need to register each agent with the master manager and update its configuration files to talk with both managers.

/var/ossec/bin/agent-auth -m 10.0.1.21 -P "<password from /var/ossec/etc/authd.pass on the master>"

We also need to update the agent configuration file so it talks to both managers. Update the server section of the agent's /var/ossec/etc/ossec.conf as below (this is the syntax for agents of version 3.1.0 and above):


<server>
  <address>10.0.1.21</address>
  <port>1514</port>
  <protocol>udp</protocol>
</server>
<server>
  <address>10.0.1.22</address>
  <port>1514</port>
  <protocol>udp</protocol>
</server>

So now the agents talk to two managers; if one is not reachable, the agent will try to contact the other. All configuration changes on the master manager will be synced over to all client managers.

If your setup does not have enough agents to justify a distributed architecture, you can just enable the ELK stack again and you will have a fully functional Wazuh cluster.
Enable services and start them:

systemctl enable elasticsearch.service logstash.service kibana.service

systemctl start elasticsearch.service logstash.service kibana.service

For those who want a fully distributed architecture, we’re going to start looking at the setup of our ELK cluster.

Search Guard Configuration

In order for the ELK cluster to communicate securely within itself, we will install a helper plugin called Search Guard. Make sure the plugin version matches your version of elasticsearch; in my case it was 6.2.1.

elasticsearch-plugin install -b com.floragunn:search-guard-6:6.2.1-22.1

If you need to install it offline, you can download the plugin zip and install it from a local path:

elasticsearch-plugin install -b file:///path/to/search-guard-6-<version>.zip

If you need a GUI to manage the user accounts in your setup, there is a Search Guard kibana plugin that can be used to configure usernames, roles and groups. This plugin is not free, though: it will only work during the trial period unless you purchase the Enterprise license of Search Guard.

kibana-plugin install file:///path/to/search-guard-kibana-plugin-6.2.4-12.zip

Now that we have Search Guard installed, you will need to get your hands dirty in the world of SSL/TLS certificates. If you have your own PKI, you can use it to create and sign your certs; for this guide we will use the toolkit provided by the Search Guard team to generate the certs that every component needs to talk the same language.

git clone https://github.com/floragunncom/search-guard-ssl.git

You will need to make some changes to the following files in order to have your own custom CA:

search-guard-ssl/example-pki-scripts/etc/root-ca.conf
search-guard-ssl/example-pki-scripts/etc/signing-ca.conf

Change the following lines to suit your environment:

[ ca_dn ]
0.domainComponent = "com"
1.domainComponent = "example"
organizationName = "Example Com Inc."
organizationalUnitName = "Example Com Inc. Root CA"
commonName = "Example Com Inc. Root CA"

Let’s also change the default cert password ‘changeit’, since it is not suitable for a production environment.

cd search-guard-ssl/example-pki-scripts/
sed -i -- 's/changeit/newpassword/g' example.sh
sed -i -- 's/example.com, OU=SSL, O=Test, L=Test, C=DE/osmsdemo.ilan, OU=SSL, O=OSMS, L=OSMSDEMO, C=AT/g' *.sh
sed -i -- 's/OU=client, O=client, L=Test, C=DE/ OU=client, O=OSMS, L=OSMSDEMO, C=AT/g' *.sh
sed -i -- 's/example.com/osmsdemo.ilan/g' *.sh

Once this is done you can generate the keys needed for all components. The example.sh script generates most of them, so I added the following entries to the example script for the missing ones:

./gen_node_cert_openssl.sh "/CN=wazuhmg-node-01.osmsdemo.ilan/OU=SSL/O=OSMS/L=OSMSDEMO/C=AT" "wazuhmg-node-01.osmsdemo.ilan" "wazuhmg-node-01" newpassword capass
./gen_node_cert_openssl.sh "/CN=wazuhmg-node-02.osmsdemo.ilan/OU=SSL/O=OSMS/L=OSMSDEMO/C=AT" "wazuhmg-node-02.osmsdemo.ilan" "wazuhmg-node-02" newpassword capass
./gen_node_cert_openssl.sh "/CN=wazuhelk-node-01.osmsdemo.ilan/OU=SSL/O=OSMS/L=OSMSDEMO/C=AT" "wazuhelk-node-01.osmsdemo.ilan" "wazuhelk-node-01" newpassword capass
./gen_node_cert_openssl.sh "/CN=wazuhelk-node-02.osmsdemo.ilan/OU=SSL/O=OSMS/L=OSMSDEMO/C=AT" "wazuhelk-node-02.osmsdemo.ilan" "wazuhelk-node-02" newpassword capass
./gen_node_cert_openssl.sh "/CN=wazuhelk-client-01.osmsdemo.ilan/OU=SSL/O=OSMS/L=OSMSDEMO/C=AT" "wazuhelk-client-01.osmsdemo.ilan" "wazuhelk-client-01" newpassword capass
./gen_node_cert_openssl.sh "/CN=wazuhk-node-01.osmsdemo.ilan/OU=SSL/O=OSMS/L=OSMSDEMO/C=AT" "wazuhk-node-01.osmsdemo.ilan" "wazuhk-node-01" newpassword capass
./gen_node_cert_openssl.sh "/CN=wazuhl-node-01.osmsdemo.ilan/OU=SSL/O=OSMS/L=OSMSDEMO/C=AT" "wazuhl-node-01.osmsdemo.ilan" "wazuhl-node-01" newpassword capass

Running example.sh will generate all the certificates you need, and clean.sh will delete them in case you need to start over for any reason.

Now, to use the openssl admin key called “kirk”, we will need to convert it to PKCS#8 format.

We will need to copy the keys over to the appropriate location of the applications like elasticsearch, logstash and kibana:

mkdir /etc/{logstash,elasticsearch,kibana}/ssl
chmod -R 0750 /etc/{logstash,elasticsearch,kibana}/ssl
chgrp -R elasticsearch /etc/{logstash,elasticsearch,kibana}/ssl
cp example-pki-scripts/{wazuhelk,kirk}* /etc/elasticsearch/ssl
cp example-pki-scripts/wazuhl* /etc/logstash/ssl
cp example-pki-scripts/wazuhk* /etc/kibana/ssl
cp example-pki-scripts/ca/root-ca.pem /etc/elasticsearch/ssl
ln -s /etc/elasticsearch/ssl/root-ca.pem /etc/kibana/ssl/root-ca.pem
ln -s /etc/elasticsearch/ssl/root-ca.pem /etc/logstash/ssl/root-ca.pem
openssl pkcs8 -topk8 -inform PEM -outform PEM -in /etc/elasticsearch/ssl/kirk.key.pem -out /etc/elasticsearch/ssl/kirk.pkcs8.key.pem
openssl rsa -in /etc/kibana/ssl/wazuhk-node-01.key -out /etc/kibana/ssl/wazuhk-node-01.nopass.key
gpasswd -M logstash,kibana elasticsearch
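Before restarting anything, it is worth confirming that each copied key actually matches its certificate: an RSA cert/key pair match when their moduli agree. A sketch for node-01 (paths and the newpassword passphrase taken from the steps above):

```shell
# Compare the modulus of the certificate with that of the private key.
cert=/etc/elasticsearch/ssl/wazuhelk-node-01.crt.pem
key=/etc/elasticsearch/ssl/wazuhelk-node-01.key
if [ "$(openssl x509 -noout -modulus -in "$cert")" = \
     "$(openssl rsa -noout -modulus -in "$key" -passin pass:newpassword)" ]; then
  echo "cert and key match"
else
  echo "cert and key DO NOT match"
fi
```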

ELK Cluster Configuration

Setting the hostname on server 10.0.1.31:

hostnamectl set-hostname wazuhelk-node-01

Setting the hostname on server 10.0.1.32:

hostnamectl set-hostname wazuhelk-node-02

Let’s get started with the configuration of elasticsearch to communicate via an encrypted channel with each node.
The following config will be the same on both nodes or any additional nodes you may want to deploy later on.

Edit /etc/elasticsearch/elasticsearch.yml

Update the nodes descriptive name, in our case, we will be using the OS hostname:

node.name: ${HOSTNAME}

Next, we configure the interface/IP that elasticsearch will listen on; I have picked all interfaces plus the localhost:

network.host: [ "_site_", "_local_" ]

Let’s also configure the nodes to see each other.

discovery.zen.ping.unicast.hosts: ["wazuhelk-node-01", "wazuhelk-node-02", "wazuhmg-node-01"]
discovery.zen.minimum_master_nodes: 2
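The value 2 for minimum_master_nodes follows the usual quorum formula, floor(master_eligible_nodes / 2) + 1; with the three hosts listed above that gives 2, which is what protects the cluster against split-brain:

```shell
# Quorum for 3 master-eligible nodes: floor(3/2) + 1 = 2
nodes=3
echo $(( nodes / 2 + 1 ))   # prints 2
```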

In the next step the configuration will differ slightly on each node; the example below shows wazuhelk-node-01:

searchguard.enterprise_modules_enabled: false

### SSL Transport node-2-node OpenSSL #####
searchguard.ssl.transport.pemcert_filepath: ssl/wazuhelk-node-01.crt.pem
searchguard.ssl.transport.pemkey_filepath: ssl/wazuhelk-node-01.key
searchguard.ssl.transport.pemkey_password: newpassword
searchguard.ssl.transport.pemtrustedcas_filepath: ssl/root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.nodes_dn:
- 'CN=wazuhelk-node-*'
- 'CN=wazuhmg-node-*'

#### HTTP/REST layer OpenSSL #############

searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: ssl/wazuhelk-node-01.crt.pem
searchguard.ssl.http.pemkey_filepath: ssl/wazuhelk-node-01.key
searchguard.ssl.http.pemkey_password: newpassword
searchguard.ssl.http.pemtrustedcas_filepath: ssl/root-ca.pem
#searchguard.allow_unsafe_democertificates: true
searchguard.allow_default_init_sgindex: true
searchguard.authcz.admin_dn:
- CN=kirk,OU=client,O=OSMS,L=OSMSDEMO,C=AT

searchguard.restapi.roles_enabled: ["sg_all_access"]
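On wazuhelk-node-02 the same block is used; only the certificate and key paths change, pointing at that node's own files:

```yaml
searchguard.ssl.transport.pemcert_filepath: ssl/wazuhelk-node-02.crt.pem
searchguard.ssl.transport.pemkey_filepath: ssl/wazuhelk-node-02.key
searchguard.ssl.http.pemcert_filepath: ssl/wazuhelk-node-02.crt.pem
searchguard.ssl.http.pemkey_filepath: ssl/wazuhelk-node-02.key
```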

Now it’s time to restart elasticsearch and load the access control configuration from the config files.

The default location of the access control configuration files is:

/usr/share/elasticsearch/plugins/search-guard-6/sgconfig

I recommend keeping a copy in the elasticsearch config directory so it is preserved when you upgrade the plugin.

cp -r /usr/share/elasticsearch/plugins/search-guard-6/sgconfig/ /etc/elasticsearch/

Now we need to adjust the logstash and kibana roles so they can access and work on the wazuh indices. We are also going to change the default passwords.
The script hash.sh will help you generate the password hashes.

chmod 755 /usr/share/elasticsearch/plugins/search-guard-6/tools/*.sh
/usr/share/elasticsearch/plugins/search-guard-6/tools/hash.sh

Then update the roles file with some additional permissions.
Update the hashes in /etc/elasticsearch/sgconfig/sg_internal_users.yml to set your passwords.
Update the sg_kibana_server role in /etc/elasticsearch/sgconfig/sg_roles.yml:

'*wazuh*':
  '*':
    - CRUD
    - CREATE_INDEX
    - INDICES_ALL

Update the sg_logstash role in /etc/elasticsearch/sgconfig/sg_roles.yml:

'*wazuh*':
  '*':
    - CRUD
    - CREATE_INDEX

Now we can load this configuration into our elasticsearch cluster with the following command:

/usr/share/elasticsearch/plugins/search-guard-6/tools/sgadmin.sh -cd /etc/elasticsearch/sgconfig/ -icl -key /etc/elasticsearch/ssl/kirk.pkcs8.key.pem -keypass newpassword -cert /etc/elasticsearch/ssl/kirk.all.pem -cacert /etc/elasticsearch/ssl/root-ca.pem -nhnv

Finally, we are ready to get kibana and logstash to talk to our cluster securely.

Kibana Configuration

Now that we have configured elasticsearch to talk only with authorized clients, we must tell kibana where the keys are.
The kibana configuration file needs elasticsearch’s https URL and login details, and kibana itself must serve over an encrypted channel.
Update or add the following entries in /etc/kibana/kibana.yml:

elasticsearch.url: "https://wazuhelk-node-01.osmsdemo.ilan:9200"
elasticsearch.username: "kibanaserver"
elasticsearch.password: "newpassword"
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/ssl/wazuhk-node-01.crt.pem
server.ssl.key: /etc/kibana/ssl/wazuhk-node-01.nopass.key
elasticsearch.ssl.certificateAuthorities: /etc/kibana/ssl/root-ca.pem
elasticsearch.ssl.keyPassphrase: newpassword

Restart kibana so it talks securely to the elasticsearch node:

systemctl restart kibana

Logstash Configuration

The logstash configuration file also needs to be updated to enable writing to the elasticsearch index and to receive input from filebeat over encrypted channels.
Update the input and output sections of /etc/logstash/conf.d/01-wazuh.conf to look like this:

input:

input {
  beats {
    port => 5000
    codec => "json_lines"
    ssl => true
    ssl_certificate => "/etc/logstash/ssl/wazuhl-node-01.crt.pem"
    ssl_key => "/etc/logstash/ssl/wazuhl-node-01.key"
    ssl_key_passphrase => "newpassword"
  }
}

output:

output {
  elasticsearch {
    hosts => ["wazuhelk-node-01.osmsdemo.ilan:9200"]
    user => "logstash"
    password => "newpassword"
    index => "wazuh-alerts-3.x-%{+YYYY.MM.dd}"
    document_type => "wazuh"
    ssl => "true"
    ssl_certificate_verification => "true"
    cacert => "/etc/logstash/ssl/root-ca.pem"
  }
}

Filebeat configuration

Our logstash server is ready to listen for secure connections, so we can configure filebeat to send over these channels.
Edit /etc/filebeat/filebeat.yml and update the output section on both Wazuh managers; also ensure the SSL/TLS certificates you generated are in the right location on the manager.

output:
  logstash:
    # The Logstash hosts
    hosts: ["wazuhl-node-01.osmsdemo.ilan:5000", "wazuhl-node-02.osmsdemo.ilan:5000"]
    ssl:
      certificate_authorities: ["/etc/filebeat/ssl/wazuhl-node-01.crt.pem", "/etc/filebeat/ssl/wazuhl-node-02.crt.pem"]

Restart filebeat to ensure the new configuration is loaded:

systemctl restart filebeat

Final steps

Now that everything is set up, we need to check that it is all working as designed.
Using your favorite web browser, go to https://wazuhelk-node-01.osmsdemo.ilan:5601 and log in using the admin role:

Username: admin
Password: newpassword (remember, for production purposes this should definitely be changed to something secure)

In order to avoid split-brain in our elasticsearch cluster, it is recommended to add an additional master-eligible node which holds no data; I nominated wazuhmg-node-01. Update its /etc/elasticsearch/elasticsearch.yml with the config below. The important part here is the node.data: false setting:

node.name: ${HOSTNAME}
network.host: [ "_site_", "_local_" ]
discovery.zen.ping.unicast.hosts: [ "wazuhelk-node-01", "wazuhelk-node-02", "wazuhmg-node-01" ]
discovery.zen.minimum_master_nodes: 2
node.data: false
searchguard.enterprise_modules_enabled: false
### SSL Transport node-2-node OpenSSL #####
searchguard.ssl.transport.pemcert_filepath: ssl/wazuhmg-node-01.crt.pem
searchguard.ssl.transport.pemkey_filepath: ssl/wazuhmg-node-01.key
searchguard.ssl.transport.pemkey_password: newpassword
searchguard.ssl.transport.pemtrustedcas_filepath: ssl/root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.nodes_dn:
- 'CN=wazuhelk-node-*'
- 'CN=wazuhmg-node-*'

Now you are ready to replicate the whole configuration for the next data center or cloud environment. There are certainly lots of other topics I have not covered yet; one is setting up another elasticsearch client to talk to all environments, but that will be a topic for another day.

For further information on the software in this guide visit the following links:
Wazuh Documentation
Search Guard Documentation
ELK Documentation

 
