Thursday, March 2, 2017

Integrating Elasticsearch and Logstash with NetApp Harvest and Grafana : Part 2

Setting Up Your SYSLOG Server


Install Elasticsearch and Logstash


Since this blog series is about how to integrate ES and LS with NetApp and the NAbox,  I won't take the time to explain the installation of each.  Installation of both is fairly easy and there are many sites and blogs that explain it in detail.  I'll just add a few links and notes below that I found very helpful.

Elasticsearch Install Link                    Logstash Install Link

Some basic info about my instance:
OS: Ubuntu 14.04.3 LTS
ES version: 2.4.2
LS version: 2.4.1
Elasticsearch installed as a single node on the server
Clustered Data ONTAP (CDOT) 8.2.4 and 8.3.1

Elasticsearch and Logstash both require Java. Refer to the install links above to verify supported versions.

Configure Elasticsearch


Warning:  Elasticsearch has no built-in security and can be controlled by anyone who can access the HTTP API. It is not recommended to set the 'network.host' setting to 0.0.0.0. I will be using this setting below in an isolated test environment for instructional purposes only.  Take the time to read this link before you move anything into production.

There is only one file we need to configure to get ES up and running: the elasticsearch.yml file, located in the /etc/elasticsearch directory. ES also has a logging.yml that lets you configure logging parameters such as default directories and log retention times. This blog will only discuss the elasticsearch.yml settings.
Out of the box you will want to configure the following variables in the elasticsearch.yml. Use your favorite text editor (vi, nano, etc...) to uncomment and edit the following:

Cluster name
Node Name
Network Host
Enable CORS (for our Grafana connection)

 # ----------------------------- Cluster ------------------------------  
 #  
 # Use a descriptive name for your cluster:  
 #  
 cluster.name: yourcoolclustername  
 #  
 #-------------------------------- Node -------------------------------  
 #  
 # Use a descriptive name for the node:  
 #  
 node.name: yourcoolnodename  
 #  
 # Add custom attributes to the node:  
 #  
 # node.rack: r1  
 #  
 #------------------------------ Network ------------------------------  
 #  
 # Set the bind address to a specific IP (IPv4 or IPv6):  
 #  
 network.host: 0.0.0.0  
 #  
 # Set a custom port for HTTP:  
 #  
 # http.port: 9200  
 http.cors.enabled: true  
 http.cors.allow-origin: "*"  
The variables "http.cors.enabled: true" and 'http.cors.allow-origin: "*"' will enable direct access to Elasticsearch when adding it as a datasource in Grafana.

Once you are done, go ahead and start the Elasticsearch service and test:
 root@yourserver:/# curl 'http://localhost:9200'  
 {  
  "name" : "yourcoolnodename",  
  "cluster_name" : "yourcoolclustername",  
  "cluster_uuid" : "n6vcUe3FS_ChYrBkt9EVcA",  
  "version" : {  
   "number" : "2.4.2",  
   "build_hash" : "161c65a337d4b422ac0c805f284565cf2014bb84",  
   "build_timestamp" : "2016-11-17T11:51:03Z",  
   "build_snapshot" : false,  
   "lucene_version" : "5.5.2"  
  },  
  "tagline" : "You Know, for Search"  
 }  
If you see the above message then it's working. If not, refer to the Elasticsearch logs located in the /var/log/elasticsearch/ directory. Check elasticsearch.log and yourcoolclustername.log.

Configure Logstash


To get Logstash running you must create a Logstash configuration file in a JSON-like format and place it in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.

Create a new file called netapplogs.conf
 sudo nano /etc/logstash/conf.d/netapplogs.conf  

Logstash ships with a number of plugins out of the box. We are going to use the syslog input plugin to bind TCP and UDP port 514 and listen for NetApp syslogs. NetApp only forwards syslog messages on port 514. Insert the following input configuration (feel free to pull these from my gist site 😊 ):
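The original gist embed is not available here, so below is a minimal sketch of the input section just described: the stock syslog input plugin listening on TCP and UDP port 514. The `type` value is a hypothetical tag for these events.

```conf
input {
  # Built-in syslog input: binds both TCP and UDP on the given port
  syslog {
    port => 514
    type => "netapp"   # hypothetical tag, adjust to taste
  }
}
```
Note that binding a port below 1024 normally requires root privileges (see the comments at the end of this post for a workaround).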

Now let's add our filter section.  The most important thing about the filter section is our grok parser.  We need to create a grok parser that matches the typical format of a NetApp syslog.  Below is an example of a typical NetApp syslog message.  I've broken it down into sections.
Typical NetApp syslog message
If you are familiar with grok then this is a very simple message to parse.  If you are not familiar with grok, a great place to start learning is this blog I found http://logz.io/blog/logstash-grok/.  Here is what our grok filter should look like.
The image below shows how it breaks down. There is a lot going on in the image, so I apologize in advance. Just click the image to have a better view of it.
NetApp syslog message matched to a grok filter
You'll notice that I have taken the "nodename" portion and broken it down into two sections in my grok filter, "cluster" and "node". This is important and I will explain why in a later section. We now have a grok filter, so let's put it in the configuration file. Here is what our filter section should look like:
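The original gist with the filter section is likewise not embedded here, so the block below is a sketch reconstructed from the description above. The "cluster" and "node" field names come from the text; the "event_name" and "severity" fields and the exact message layout (e.g. `Mar  2 10:15:42 [cluster1-01:monitor.globalStatus.ok:info]: ...`) are assumptions, so adjust the pattern to match your actual messages.

```conf
filter {
  grok {
    # Split the nodename (e.g. "cluster1-01") into "cluster" and "node",
    # then capture the event name, severity, and the message body
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} \[%{DATA:cluster}-%{DATA:node}:%{DATA:event_name}:%{DATA:severity}\]: %{GREEDYDATA:syslog_message}" }
  }
}
```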
Our final section is the "output". This tells Logstash where to send everything once it has finished filtering it.
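The original output snippet is not embedded here either; a minimal output section pointing at the local Elasticsearch instance configured earlier might look like the following. The index name is a hypothetical choice.

```conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]              # the ES instance configured above
    index => "netapp-syslog-%{+YYYY.MM.dd}"  # hypothetical daily index name
  }
}
```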
The great thing about Logstash is that you can combine all those parts into one file or keep them separate, as long as they are all in the conf.d directory. Since we only created one file, just save it once you have everything in there. Here is a nice clean version of all sections in one file:
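The original one-file version is no longer embedded, so here is a combined sketch of the three sections described in this post. The "cluster" and "node" fields come from the text; the `type` tag, the "event_name" and "severity" fields, the message layout, and the index name are illustrative assumptions.

```conf
input {
  syslog {
    port => 514              # NetApp forwards syslog on 514 (TCP and UDP)
    type => "netapp"
  }
}

filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} \[%{DATA:cluster}-%{DATA:node}:%{DATA:event_name}:%{DATA:severity}\]: %{GREEDYDATA:syslog_message}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "netapp-syslog-%{+YYYY.MM.dd}"
  }
}
```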
Almost done! We just need to verify that the file will work, so we run this command:
 root@yourserver:/# service logstash configtest  
 Configuration OK  
 root@yourserver:/#  
If you see the above response, you can start Logstash (service logstash start). If you see an error, just fix it in your file and retry.

Hopefully everything on your server is working just fine. If you are having issues, please feel free to send me a message using the contact form below or by leaving a comment. In my next post I will show you how to forward the CDOT syslog messages to Logstash (coming very soon!).

2 comments:

  1. Hi James, It looks as though logstash can't listen on 514 unless it runs as root - how did you get it working in the example above?
    Many thanks, Paul.

  1. You have to give Java the capability of using lower network ports, e.g.:
      setcap cap_net_bind_service=+epi /usr/lib/jvm/java-1.8.0-openjdk-amd64/jre/bin/java
      setcap cap_net_bind_service=+epi /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java

      regards
