How to install Elasticsearch 2.4 and OpenResty on CentOS 7


In this tutorial we’ll explain how to install and configure a three-node Elasticsearch 2.4 cluster on CentOS 7. We’ll also install OpenResty, which will provide a layer of security for our Elasticsearch cluster, Elasticsearch-HQ for some nice metrics, the Kopf plugin and, of course, Kibana. Why ES 2.4? Well, it’s complicated, but let’s say we’ve chosen 2.4 precisely because this particular version is no longer supported by Elastic and some of you may be “forced” to use it for a while until you redesign your software to comply with a newer version in terms of code and standards. We strongly recommend downloading and using the latest Elasticsearch version available if possible, to benefit from the latest features, improved performance and security enhancements.

VMs Configuration for our Elasticsearch cluster

Let’s assume that our VMs are using this hardware and software configuration:
– 32 vCPUs
– 64G RAM
– 50G OS Disk
– 1T Data Disk
– 100G Logs Disk
– CentOS 7
– 10.10.10.0/24 Subnet

Each node will use one of the following IP addresses: the first node, called node01, will use 10.10.10.1; the second node, named node02, will use 10.10.10.2; and the third node, node03, will use 10.10.10.3.
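
Throughout this tutorial we’ll also refer to the nodes by hostname (node01, node02, node03 and, later on, node0X.mydomain.com), so make sure those names resolve on every node, either through your DNS or through /etc/hosts. A minimal sketch, assuming the IP addresses above and the mydomain.com domain used later in this tutorial:


$ cat >> /etc/hosts <<'EOF'
# Elasticsearch cluster nodes (adjust names and IPs to your own environment)
10.10.10.1   node01 node01.mydomain.com
10.10.10.2   node02 node02.mydomain.com
10.10.10.3   node03 node03.mydomain.com
EOF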

Basic preparation

We can now start with our very first step: checking whether the OS needs any software updates. We’ll assume that we already have root access, so we won’t prefix every command with the magic sudo.

Please bear in mind that we’ll repeat all the below steps on all three nodes.


$ yum update

Apply any updates shown on your console before continuing.

Now let’s begin the actual work. In this step we’ll create the folders needed by Elasticsearch: one for ES data, where the Elasticsearch data will be held, and another one for ES logs, obviously for the Elasticsearch logs.


$ mkdir -p /mnt/elasticsearch/{data,logs}

Adding new disks for Elasticsearch

As mentioned at the beginning of our tutorial, we’ll be using three disks: a 50G disk for the OS and two disks for Elasticsearch, one for data and one for logs. If you don’t already have the second pair of disks configured, we can do it now. By the way, this is not mandatory, so you can skip the disk configuration and jump straight to the software installation and configuration if you don’t have the same setup / requirement.


$ ls -la /dev/sd*

The console output should be similar to this:


brw-rw---- 1 root disk 8,  0 Mar 20 14:25 /dev/sda
brw-rw---- 1 root disk 8,  1 Mar 20 14:25 /dev/sda1
brw-rw---- 1 root disk 8,  2 Mar 20 14:25 /dev/sda2
brw-rw---- 1 root disk 8,  3 Mar 20 14:25 /dev/sda3
brw-rw---- 1 root disk 8, 16 Mar 20 14:25 /dev/sdb
brw-rw---- 1 root disk 8, 17 Mar 20 14:25 /dev/sdc

We can see that the disks /dev/sdb and /dev/sdc don’t contain any partitions yet, meaning they don’t have a number after their names like 1, 2, 3 etc. We’ll create the partitions by invoking the mighty fdisk command as shown below:


$ fdisk /dev/sdb

Let’s go through the fdisk utility steps quickly; below you have the full output of the process:


Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x2062f231.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-XXXXXXX, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-XXXXXX, default XXXXXXXX):
Using default value XXXXXXXX
Partition 1 of type Linux and of size XX GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

A quick view of the input keys that need to be used for fdisk in our specific case:
n, p, 1, [Enter], [Enter], t, 8e, w.

We’re now done with /dev/sdb disk and we need to repeat this process for /dev/sdc as well.

Formatting and mounting disks

Having both disks partitioned now, /dev/sdb1 and /dev/sdc1, we can continue with the next task: formatting these partitions as ext4 and mounting them.


$ mkfs.ext4 /dev/sdb1

The output of the above command should look like in the example below:


mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
XXXXXX inodes, XXXXXX blocks
XXXXXX blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=XXXXXX
XXXXXX block groups
XXXXXX blocks per group, XXXXXX fragments per group
XXXXXX inodes per group
Superblock backups stored on blocks:
	XXXXXX, XXXXXX, XXXXXX, XXXXXX, XXXXXX, XXXXXX, XXXXXX, XXXXXX, XXXXXX,
	XXXXXX

Allocating group tables: done
Writing inode tables: done
Creating journal (XXXXXX blocks): done
Writing superblocks and filesystem accounting information: done

Repeat this step for the second partition as well, /dev/sdc1.

Now that we have everything in place in terms of disks, we can mount our new partitions and start using them. We need to edit the fstab file and add two new lines, one for each new partition that we’ve just created.



$ vi /etc/fstab

Once we have the file open, we’ll add the two new lines we’ve mentioned at the end of the file, as shown below. By amending the fstab file we make sure that the partitions will be mounted automatically whenever the VM gets rebooted for various reasons like patching, scheduled maintenance etc.


...
/dev/sdb1 	/mnt/elasticsearch/data 	ext4 	defaults 	1 1
/dev/sdc1 	/mnt/elasticsearch/logs 	ext4 	defaults 	1 1

We can save the fstab file and try to mount the new partitions by using the mount command.


$ mount -a

If the fstab entries were added correctly and the mount process didn’t fail, we should now be able to get some nice details about the new mounts by using the df command.


$ df -TH

A successful console output should look like this:


Filesystem                  Type      Size  Used Avail Use% Mounted on
...
/dev/sdb1                   ext4        1T     0    1T   0% /mnt/elasticsearch/data
/dev/sdc1                   ext4      100G     0  100G   0% /mnt/elasticsearch/logs
...

Now that we have completed our hardware checklist (assigned IPs, disks, firewall etc.) on all three nodes, we can jump to the software stack installation step.

Software installation

We will be using yum to install (almost) all the necessary packages, once again on all three nodes. Note that we install epel-release first, in its own yum transaction, so that the EPEL repository is already enabled when we pull in the Python 3.6 packages that live there.


$ yum -y install epel-release yum-utils

$ yum -y install python36\
   python36-tools\
   gcc\
   pcre-devel\
   openssl-devel\
   httpd-tools\
   curl\
   git\
   java-1.8.0-openjdk

$ yum -y groupinstall development

Once we have these base packages in place we need to create two repo files, one for Elasticsearch and one for Kibana.

Elasticsearch and Kibana repos


$ cd /etc/yum.repos.d/
$ touch elasticsearch.repo kibana.repo

As we said before, we’ll be using Elasticsearch 2.4, so we need to make sure we get the right version by creating our own repository file.


$ vi elasticsearch.repo

Add the following lines to elasticsearch.repo.


[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=https://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

Now let’s edit the kibana.repo file.


$ vi kibana.repo

Add these lines for our Kibana source.


[kibana-4.6]
name=Kibana repository for 4.6.x packages
baseurl=https://packages.elastic.co/kibana/4.6/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

All good, now we should be able to actually install Elasticsearch 2.4.X and Kibana 4.6.

Elasticsearch and Kibana installation

Let’s try that by using the yum command once again, as shown below.


$ yum -y install elasticsearch kibana

Once the installation has been completed we can start to configure our Elasticsearch stack.

Elasticsearch and Kibana configuration


$ vi /etc/elasticsearch/elasticsearch.yml

The Elasticsearch configuration file, elasticsearch.yml, could look similar to this:


cluster.name: mycluster
node.name: node01
path.data: /mnt/elasticsearch/data
path.logs: /mnt/elasticsearch/logs
bootstrap.memory_lock: true
network.host: 10.10.10.1
http.port: 9200
http.compression: true
http.compression_level: 9
discovery.zen.ping.unicast.hosts: ["10.10.10.1", "10.10.10.2", "10.10.10.3"]
discovery.zen.minimum_master_nodes: 2
gateway.recover_after_nodes: 3
action.destructive_requires_name: true

Do the same on the other two nodes, replacing just node.name: node0X and network.host: 10.10.10.X with the correct values for each individual node. Note that discovery.zen.minimum_master_nodes is set to 2, the quorum for a three-node cluster (N/2 + 1); setting it to 3 would prevent the cluster from electing a master as soon as a single node goes down.
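
One more thing about bootstrap.memory_lock: true: Elasticsearch also needs to be allowed to lock memory, and a fixed heap size should be set, otherwise the setting may be silently ignored. A minimal sketch, assuming the stock /etc/sysconfig/elasticsearch shipped with the RPM, is to set these two variables (the 30g heap is only an example sizing for our 64G nodes; adjust it to your own environment):


# example sizing - roughly half of the 64G RAM, adjust as needed
ES_HEAP_SIZE=30g
MAX_LOCKED_MEMORY=unlimited

On a systemd based system like CentOS 7 the memory lock limit is also enforced through the unit file, so a small drop-in helps (override.conf is just a name we’re picking here):


$ mkdir -p /etc/systemd/system/elasticsearch.service.d
$ printf '[Service]\nLimitMEMLOCK=infinity\n' > /etc/systemd/system/elasticsearch.service.d/override.conf
$ systemctl daemon-reload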

Now we can configure Kibana.


$ vi /opt/kibana/config/kibana.yml

The short version of our kibana.yml configuration file will look like this:


server.port: 5601
server.host: "node01"
server.maxPayloadBytes: 1048576
elasticsearch.url: "http://node01:9200"
kibana.index: ".kibana"
elasticsearch.pingTimeout: 1500
elasticsearch.requestTimeout: 30000
elasticsearch.startupTimeout: 5000

The same configuration must be used on the other two nodes, but first we’ll need to change the two values that define our source: server.host: "node0X" and elasticsearch.url: "http://node0X:9200".
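
If you’d rather not edit the file by hand on each node, a quick sed one-liner can make the substitution for you; this is just a sketch for node02 (double-check the file before restarting Kibana):


$ sed -i 's/node01/node02/g' /opt/kibana/config/kibana.yml   # example for node02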

At this stage we can say that we’re done with the configuration step for Elasticsearch and Kibana, and we can move on to the next major step, where we enable these services and start using them.

Let’s enable the services so that they start automatically every time an Elasticsearch node gets rebooted; we’ll repeat these commands on all three nodes.


$ systemctl enable elasticsearch
$ systemctl enable kibana

And now let’s start the services as shown below.


$ systemctl start elasticsearch
$ systemctl start kibana

If everything went fine and no errors were shown on the console, we can move to the next step and check our entire Elasticsearch stack.

Test Elasticsearch and Kibana

Using only one CLI command, with the help of the curl utility as shown below, we can check our entire Elasticsearch stack:


$ curl http://node0[1-3]:9200

A successful response will be JSON formatted and will look similar to this:


[1/3]: http://node01:9200 --> 
--_curl_--http://node01:9200
{
  "name" : "node01",
  "cluster_name" : "mycluster",
  "cluster_uuid" : "LOAQ9uItSQitNPmrMMwORw",
  "version" : {
    "number" : "2.4.6",
    "build_hash" : "5376dca9f70f3abef96a77f4bb22720ace8240fd",
    "build_timestamp" : "2017-07-18T12:17:44Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.4"
  },
  "tagline" : "You Know, for Search"
}

[2/3]: http://node02:9200 --> 
--_curl_--http://node02:9200
{
  "name" : "node02",
  "cluster_name" : "mycluster",
  "cluster_uuid" : "LOAQ9uItSQitNPmrMMwORw",
  "version" : {
    "number" : "2.4.6",
    "build_hash" : "5376dca9f70f3abef96a77f4bb22720ace8240fd",
    "build_timestamp" : "2017-07-18T12:17:44Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.4"
  },
  "tagline" : "You Know, for Search"
}

[3/3]: http://node03:9200 --> 
--_curl_--http://node03:9200
{
  "name" : "node03",
  "cluster_name" : "mycluster",
  "cluster_uuid" : "LOAQ9uItSQitNPmrMMwORw",
  "version" : {
    "number" : "2.4.6",
    "build_hash" : "5376dca9f70f3abef96a77f4bb22720ace8240fd",
    "build_timestamp" : "2017-07-18T12:17:44Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.4"
  },
  "tagline" : "You Know, for Search"
}

All good so far; now let’s see if Kibana is up and running. We’ll simply check the header response by using the -I option and port 5601, which we configured earlier:


$ curl -I http://node0[1-3]:5601

If we’re getting back a 200 OK response like in the example below then everything is fine, we’ve managed to install, configure and start Elasticsearch and Kibana successfully.


[1/3]: http://node01:5601 --> 
--_curl_--http://node01:5601
HTTP/1.1 200 OK
kbn-name: kibana
kbn-version: 4.6.6
cache-control: no-cache
Date: Fri, 20 Apr 2018 11:51:47 GMT
Connection: keep-alive


[2/3]: http://node02:5601 --> 
--_curl_--http://node02:5601
HTTP/1.1 200 OK
kbn-name: kibana
kbn-version: 4.6.6
cache-control: no-cache
Date: Fri, 20 Apr 2018 11:51:47 GMT
Connection: keep-alive


[3/3]: http://node03:5601 --> 
--_curl_--http://node03:5601
HTTP/1.1 200 OK
kbn-name: kibana
kbn-version: 4.6.6
cache-control: no-cache
Date: Fri, 20 Apr 2018 11:51:47 GMT
Connection: keep-alive

We are now ready to move to our next step where we’ll install Elasticsearch-HQ in order to get some nice metrics out of our Elasticsearch stack.

Install Elasticsearch-HQ

We need to keep in mind that Elasticsearch-HQ is not really a plugin; it is basically a self-contained application that runs on top of Elasticsearch, providing monitoring and management for our Elasticsearch stack.


$ cd /opt/
$ git clone https://github.com/ElasticHQ/elasticsearch-HQ.git
$ cd elasticsearch-HQ/
$ /usr/bin/pip3.6 install -r requirements.txt

If the installation process went fine, we can start Elasticsearch-HQ using the Python 3.6 interpreter that we installed at the beginning of this short tutorial.


$ nohup /usr/bin/python3.6 /opt/elasticsearch-HQ/manage.py runserver &

Elasticsearch-HQ now runs in the background thanks to the trailing &, and nohup keeps it running after we log out of the shell.
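
A process started with nohup won’t survive a reboot, though. If you’d prefer Elasticsearch-HQ to be managed like the other services, a small systemd unit can wrap the exact same command; this is only a sketch using the paths from above, and elasticsearch-hq.service is simply a name we’re choosing here:


$ cat > /etc/systemd/system/elasticsearch-hq.service <<'EOF'
# sketch unit - paths taken from the manual steps above
[Unit]
Description=Elasticsearch-HQ monitoring and management console
After=network.target elasticsearch.service

[Service]
Type=simple
WorkingDirectory=/opt/elasticsearch-HQ
ExecStart=/usr/bin/python3.6 /opt/elasticsearch-HQ/manage.py runserver
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

$ systemctl daemon-reload
$ systemctl enable elasticsearch-hq
$ systemctl start elasticsearch-hq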

Test Elasticsearch-HQ

Now we can check Elasticsearch-HQ by opening a browser and accessing http://node01:5000, simple as that. You may change the node name within the URL if you want to access Elasticsearch-HQ from another node. In either case, Elasticsearch-HQ will give you a good overview of the entire cluster, not only of the node you’re accessing it from.
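
If you prefer to stay on the command line, the same quick header check we used for Kibana should work here too, assuming Elasticsearch-HQ is listening on its default port 5000 as above:


$ curl -I http://node0[1-3]:5000   # assumes Elasticsearch-HQ listens on port 5000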

Install Kopf plugin

What? Monitoring again? Well, the idea is that Kopf is a proper plugin, so we’ll also learn how to install a plugin using Elasticsearch’s own plugin tool. Kopf has its own role within our stack and, on top of that, it’s quite good, so let’s jump in and install it.


$ cd /usr/share/elasticsearch
$ bin/plugin install lmenezes/elasticsearch-kopf/2.1.1

Done, we’ve just installed an Elasticsearch plugin.

Test Kopf plugin

Just open a new browser window and access the URL http://node01:9200/_plugin/kopf to get a nice web GUI; you can do the same on the other two Elasticsearch nodes and you should get the same view.

Install OpenResty (NGiNX & LuaJIT)

So far we’ve learned how to install Elasticsearch, Kibana, Elasticsearch-HQ and Kopf but now it’s time to secure our environment. We’ll achieve this by using OpenResty which is basically a package that contains NGiNX core and LuaJIT scripting.


$ cd /opt/

Now let’s download, unpack and symlink OpenResty, please make sure that you’re downloading the latest version available from OpenResty’s website.


$ wget https://openresty.org/download/openresty-1.11.2.5.tar.gz
$ tar -zxf openresty-1.11.2.5.tar.gz
$ ln -s /opt/openresty-1.11.2.5/ openresty

Once the decompression step is completed we can configure OpenResty:


$ cd /opt/openresty
$ ./configure --prefix=/usr/local/openresty --with-luajit --with-http_auth_request_module

Now we need to build it with gmake:


$ gmake

If the above step doesn’t fail (and shouldn’t) then we can proceed with gmake install:


$ gmake install

The next step is to create the openresty service; let’s create the file openresty.service using vi:


$ vi /etc/systemd/system/openresty.service

Add the lines below to our service definition file:


[Unit]
Description=A dynamic web platform based on Nginx and LuaJIT.
After=network.target

[Service]
Type=forking
PIDFile=/run/openresty.pid
ExecStartPre=/usr/local/openresty/bin/openresty -t -q -g 'daemon on; master_process on;'
ExecStart=/usr/local/openresty/bin/openresty -g 'daemon on; master_process on;'
ExecReload=/usr/local/openresty/bin/openresty -g 'daemon on; master_process on;' -s reload
ExecStop=/bin/kill -s QUIT $MAINPID
TimeoutStopSec=5
KillMode=mixed

[Install]
WantedBy=multi-user.target

We’ve configure the service sequence now let’s configure OpenResty, we’ll dump all the default configs and we’ll add ours:


$ cat /dev/null > /usr/local/openresty/nginx/conf/nginx.conf
$ vi /usr/local/openresty/nginx/conf/nginx.conf

Add the lines below to our main nginx.conf file:


user nobody;
worker_processes auto;
pid /run/openresty.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    tcp_nopush      on;
    tcp_nodelay     on;

    keepalive_timeout  65;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    access_log /var/log/openresty/access.log;
    error_log /var/log/openresty/error.log;

    gzip  on;
    gzip_disable "msie6";

    include ../sites/*;
}

Please bear in mind that we don’t have any reference in place for TLSv3 as that could raise a security issue (Poodle).
Next step is about creating the log directory for all of our virtual hosts:


$ mkdir -p /var/log/openresty

All good. The service file has been created previously, so now let’s make sure the daemon knows about our new service by invoking daemon-reload:


$ systemctl daemon-reload

We can now enable the service so that it starts automatically when the nodes are rebooted, and then start it:


$ systemctl enable openresty
$ systemctl start openresty

We can simply check if the service is running or not by executing:


$ systemctl status openresty

If the OpenResty service is up and running, we may proceed with our next step: configuring our default vHost, which will respond on port 80. We need to create a folder called sites and a default.conf file, as shown in the example below:


$ mkdir -p /usr/local/openresty/nginx/sites
$ vi /usr/local/openresty/nginx/sites/default.conf

Our default.conf file will contain the following lines, which are pretty self-explanatory:


server {
    listen 80 default_server;

    root /usr/local/openresty/nginx/html/default;
    index index.html index.htm;

    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root /usr/local/openresty/nginx/html;
    }
}

Save the above file and let’s continue by creating the default folder and an index.html file that will display our node’s hostname:


$ mkdir -p /usr/local/openresty/nginx/html/default
$ touch /usr/local/openresty/nginx/html/default/index.html
$ echo $HOSTNAME > /usr/local/openresty/nginx/html/default/index.html
$ systemctl reload openresty

We can now test our default configuration by using the curl command once again:


$ curl -I localhost

We should get back a nice HTTP/1.1 200 OK header response if everything went fine.


HTTP/1.1 200 OK
Server: openresty/1.11.2.5
Date: Fri, 27 Apr 2018 15:26:32 GMT
Content-Type: text/html
Content-Length: 13
Last-Modified: Thu, 26 Apr 2018 15:01:42 GMT
Connection: keep-alive
ETag: "5ae1e9d6-d"
Accept-Ranges: bytes

We’ve made a lot of progress but next we’ll need to create three more .conf files, one for Elasticsearc, one for Kibana and another one for Elastic-HQ, let’s keep things simple and create those files like shown here:


$ cd /usr/local/openresty/nginx/sites/
$ touch es-es246.mydomain.com.conf hq-es246.mydomain.com.conf kb-es246.mydomain.com.conf

You may replace or eliminate .mydomain.com from the file names, but we like to keep it; it’s up to you. As you’ve figured out already, the es-es246 file will manage Elasticsearch only, hq-es246 will handle Elastic-HQ and kb-es246 will serve our Kibana service. Let’s now configure each individual .conf file that will serve our services.

Let’s start editing our es-es246.mydomain.com.conf configuration file:


$ vi es-es246.mydomain.com.conf

And insert these lines:


upstream es-es246 {
    server node01.mydomain.com:9200;
    server node02.mydomain.com:9200;
    server node03.mydomain.com:9200;
    keepalive 900;
}

server {

    listen 80;
    server_name es-es246.mydomain.com;
    server_tokens off;

    access_log /var/log/openresty/es-es246_access.log;
    error_log  /var/log/openresty/es-es246_error.log;

    location / {
        auth_basic "Protected Elasticsearch";
        auth_basic_user_file /usr/local/openresty/nginx/authorize/elasticsearch/users;
        access_by_lua_file   /usr/local/openresty/nginx/authorize/elasticsearch/authorize.lua;

        proxy_pass              http://es-es246;
        proxy_redirect          off;
        proxy_buffering         off;
        proxy_http_version      1.1;
        proxy_set_header        Connection "Keep-Alive";
        proxy_set_header        Proxy-Connection "Keep-Alive";
        proxy_set_header        Host $http_host;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_connect_timeout   150;
        proxy_send_timeout      900;
        proxy_read_timeout      900;
        proxy_buffers           16 64k;
        proxy_busy_buffers_size 64k;
        client_body_buffer_size 128k;
    }
}

Save the file and let’s jump to the next configuration file that’ll serve Elastic-HQ:


$ vi hq-es246.mydomain.com.conf

Next we’ll add the lines below which are pretty much similar to previous ones that serves Elasticsearch but on a different port where Elastic-HQ responds:


upstream hq-es246 {
    server node01.mydomain.com:5000;
    server node02.mydomain.com:5000;
    server node03.mydomain.com:5000;
    keepalive 900;
}

server {

    listen 80;
    server_name hq-es246.mydomain.com;
    server_tokens off;

    access_log /var/log/openresty/hq-es246_access.log;
    error_log  /var/log/openresty/hq-es246_error.log;

    location / {
        auth_basic "Protected Elastic-HQ";
        auth_basic_user_file /usr/local/openresty/nginx/authorize/elasticsearch/users;
        access_by_lua_file   /usr/local/openresty/nginx/authorize/elasticsearch/authorize.lua;

        proxy_pass              http://hq-es246;
        proxy_redirect          off;
        proxy_buffering         off;
        proxy_http_version      1.1;
        proxy_set_header        Connection "Keep-Alive";
        proxy_set_header        Proxy-Connection "Keep-Alive";
        proxy_set_header        Host $http_host;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_connect_timeout   150;
        proxy_send_timeout      900;
        proxy_read_timeout      900;
        proxy_buffers           16 64k;
        proxy_busy_buffers_size 64k;
        client_body_buffer_size 128k;
    }
}

Save this file as well and let’s edit our last configuration file that will serve our Kibana service:


$ vi kb-es246.mydomain.com.conf

And let’s insert the next lines to it which are similar to the other ones excepting the backend port:


upstream kb-es246 {
    server node01.mydomain.com:5601;
    server node02.mydomain.com:5601;
    server node03.mydomain.com:5601;
    keepalive 900;
}

server {

    listen 80;
    server_name kb-es246.mydomain.com;
    server_tokens off;

    access_log /var/log/openresty/kb-es246_access.log;
    error_log  /var/log/openresty/kb-es246_error.log;

    location / {
        auth_basic "Protected Kibana";
        auth_basic_user_file /usr/local/openresty/nginx/authorize/elasticsearch/users;
        access_by_lua_file   /usr/local/openresty/nginx/authorize/elasticsearch/authorize.lua;

        proxy_pass              http://kb-es246;
        proxy_redirect          off;
        proxy_buffering         off;
        proxy_http_version      1.1;
        proxy_set_header        Connection "Keep-Alive";
        proxy_set_header        Proxy-Connection "Keep-Alive";
        proxy_set_header        Host $http_host;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_connect_timeout   150;
        proxy_send_timeout      900;
        proxy_read_timeout      900;
        proxy_buffers           16 64k;
        proxy_busy_buffers_size 64k;
        client_body_buffer_size 128k;
    }
}

Nice. Save this file, as we’re done with the OpenResty configuration files.

We can now reload the openresty service and run a short test against each service:


$ systemctl reload openresty
$ curl -I http://es-es246.mydomain.com; curl -I http://hq-es246.mydomain.com; curl -I http://kb-es246.mydomain.com;

The output should look similar to this:


HTTP/1.1 401 Unauthorized
Date: Wed, 02 May 2018 10:14:52 GMT
Content-Type: text/html
Content-Length: 192
Connection: keep-alive
WWW-Authenticate: Basic realm="Protected Elasticsearch"
Strict-Transport-Security: max-age=157680000; includeSubDomains; preload
Set-Cookie: NSC_JOjvi5w5esmotsmd4txbk4c4afl1db3=ffffffff096f780345525d5f4f58455e445a4a423660;expires=Wed, 02-May-2018 10:29:52 GMT;path=/;httponly

HTTP/1.1 401 Unauthorized
Date: Wed, 02 May 2018 10:16:19 GMT
Content-Type: text/html
Content-Length: 192
Connection: keep-alive
WWW-Authenticate: Basic realm="Protected Elastic-HQ"
Strict-Transport-Security: max-age=157680000; includeSubDomains; preload
Set-Cookie: NSC_JOjvi5w5esmotsmd4txbk4c4afl1db3=ffffffff096f780045525d5f4f58455e445a4a423660;expires=Wed, 02-May-2018 10:31:19 GMT;path=/;httponly

HTTP/1.1 401 Unauthorized
Date: Wed, 02 May 2018 10:16:51 GMT
Content-Type: text/html
Content-Length: 192
Connection: keep-alive
WWW-Authenticate: Basic realm="Protected Kibana"
Strict-Transport-Security: max-age=157680000; includeSubDomains; preload
Set-Cookie: NSC_JOjvi5w5esmotsmd4txbk4c4afl1db3=ffffffff096f780245525d5f4f58455e445a4a423660;expires=Wed, 02-May-2018 10:31:51 GMT;path=/;httponly

If we’re getting a 401 Unauthorized header response then we’re good, we’ve managed to block the access to our Elasticsearch stack.

Are the default ports still accessible?
The short answer is yes. The original ports, 5000 (Elastic-HQ), 5601 (Kibana) and 9200 (Elasticsearch), are still reachable directly. They can be blocked via iptables (CentOS 6.x) or firewalld (CentOS 7.x); these ports should only be allowed within the 10.10.10.0/24 network, for backend communication between our nodes, as in the sketch below.
We can also block the access by putting a load balancer in front of our Elasticsearch stack and simply allowing only ports 80 and 443 (SSL) to pass through.
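
A minimal firewalld sketch of that idea, assuming firewalld is running with its default public zone and that 10.10.10.0/24 is the only network that should reach the backend ports directly (we also include 9300, the transport port the Elasticsearch nodes use to talk to each other):


# allow HTTP publicly, backend ports only from the 10.10.10.0/24 subnet
$ firewall-cmd --permanent --add-service=http
$ firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.10.10.0/24" port port="9200" protocol="tcp" accept'
$ firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.10.10.0/24" port port="9300" protocol="tcp" accept'
$ firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.10.10.0/24" port port="5601" protocol="tcp" accept'
$ firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.10.10.0/24" port port="5000" protocol="tcp" accept'
$ firewall-cmd --reload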

Can we use SSL for our connections?
The short answer is once again yes. We can configure this directly on each node, or use one or more (for high availability) load balancers in front of our Elasticsearch stack that enforce SSL (redirecting 80 to 443). We’ll skip this part for now, otherwise our simple and small tutorial would start to look like a proper book.

Let’s continue our tutorial by creating /authorize/elasticsearch/ folder structure within OpenResty’s base folder. Also, we’ll need to create two new files that will manage our user credentials and roles / permissions:


$ mkdir -p /usr/local/openresty/nginx/authorize/elasticsearch/
$ cd /usr/local/openresty/nginx/authorize/elasticsearch/
$ touch authorize.lua users

Open authorize.lua with vi and add the content below; this is the Lua script that maps each authenticated user to a group and decides which HTTP methods and URL patterns each group is allowed to use:


-- Users list:
local userGroups = {
	normaluser = "user",
	devuser = "dev",
	admin = "admin",
	testuser = "admin",
	logadmin = "adminlogs"
}

-- Groups / Roles list:
local restrictions = {
  user = {
   	["^/monitor*"]			= { "HEAD", "GET" }
  },

  dev = {
      	["^/monitor*"]			= { "HEAD", "GET", "PUT", "POST" },
	["^/log*"]			= { "HEAD", "GET", "PUT", "POST" }
  },

  admin = {
      	["^/*"]               		= { "HEAD", "GET", "POST", "PUT", "DELETE" },
	["^/app/*"]			= { "HEAD", "GET", "POST" }, 	-- Kibana
	["^/bundles/*"] 		= { "HEAD", "GET", "POST" }, 	-- Kibana
	["^/static/*"]			= { "HEAD", "GET" }, 		-- Elastic-HQ
	["^/api/*"] 			= { "HEAD", "GET", "POST" } 	-- Elastic-HQ
  },

  adminlogs = {
	["^/log*"]			= { "HEAD", "GET", "POST", "PUT", "DELETE" }
  }
}

-- Write 403 message function
function write403Message ()
  ngx.header.content_type = 'text/plain'
  ngx.status = 403
  ngx.say("403 Forbidden: You don\'t have access to this resource.")
  return ngx.exit(403)
end

-- get authenticated user as role
local user = ngx.var.remote_user	-- Get user
local role = userGroups[user]		-- Get group

-- exit 403 when no matching role has been found
if restrictions[role] == nil then
  return write403Message()
end

-- get URL
local uri = ngx.var.uri

-- get method
local method = ngx.req.get_method()

local allowed  = false

for path, methods in pairs(restrictions[role]) do
  -- path matched rules?
  local p = string.match(uri, path)

  -- method matched rules?
  local m = nil
  for _, _method in pairs(methods) do
    m = m and m or string.match(method, _method)
  end

  if p and m then
    allowed = true
    break
  end
end

if not allowed then
  return write403Message()
end

More granularity can be provided if needed for each specific group, as shown below:


$GROUP = { ["$URL_REGEX"] = { "$HTTP_METHOD", "$HTTP_METHOD" } }

["^/$"]                             = { "GET" },
["^/?[^/]*/?[^/]*/_search"]         = { "GET", "POST" },
["^/?[^/]*/?[^/]*/_msearch"]        = { "GET", "POST" },
["^/?[^/]*/?[^/]*/_validate/query"] = { "GET", "POST" },
["/_aliases"]                       = { "GET" },
["/_cluster.*"]                     = { "GET" },
["/_cat"]                           = { "GET" },
["/_plugin/kopf"]                   = { "GET", "POST" },
["/_stats"]                         = { "GET" },
["/_nodes"]                         = { "GET" },

Now that we’ve defined our basic groups (user, dev, admin and adminlogs) and mentioned a few users (normaluser, devuser, admin, testuser and logadmin) we need to actually create those credentials within our users file. We’ll use htpasswd utility for this like in the example below:


$ htpasswd -b /usr/local/openresty/nginx/authorize/elasticsearch/users testuser 1234password4321

We’ve managed now to create our first user called testuser using htpasswd utility, let’s check the content of our users file now by using cat command.


$ cat /usr/local/openresty/nginx/authorize/elasticsearch/users

Console output should look similar to this:


testuser:$apr1$JNfdFOs4$GGFPQcUURr20nf4gPRMZ7/

Bear in mind that the password is stored hashed, not in plain text. We can repeat this command to add the rest of the users on our list, as in the sketch below.
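
A short sketch for adding the remaining users from our list in one go; the passwords below are obviously placeholders that you’ll want to replace with your own:


# placeholder passwords - replace with your own
$ cd /usr/local/openresty/nginx/authorize/elasticsearch/
$ htpasswd -b users normaluser 'ChangeMe-user1'
$ htpasswd -b users devuser 'ChangeMe-dev1'
$ htpasswd -b users admin 'ChangeMe-admin1'
$ htpasswd -b users logadmin 'ChangeMe-logs1'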

Once we’ve added all our users and we’ve managed to properly define our groups in terms of permissions then we can take another short test. Let’s use the testuser this time as this has admin permissions in place:


$ curl http://testuser:1234password4321@es-es246.mydomain.com

A successful output will look like this one:


{
  "name" : "node03",
  "cluster_name" : "mycluster",
  "cluster_uuid" : "LOAQ9uItSQitNPmrMMwORw",
  "version" : {
    "number" : "2.4.6",
    "build_hash" : "5376dca9f70f3abef96a77f4bb22720ace8240fd",
    "build_timestamp" : "2017-07-18T12:17:44Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.4"
  },
  "tagline" : "You Know, for Search"
}

That’s it, we can now expand what we’ve learned by setting up an SSL as we’ve said previously, add new users and set up groups permissions.

