Tuesday, February 6, 2018

Integrating Elasticsearch and Logstash with NetApp Harvest and Grafana: Part 4

Setting up the Elasticsearch data source on Grafana

Grafana comes off the shelf with data sources already installed and ready to configure.  Thankfully, Elasticsearch is one of them and is very easy to add.

Since Part 3 I have upgraded my environment:
ES version 5.5.2
LS version 5.5.2

I am also using two different versions of Grafana.  My prod version is 4.4 and my testing version is now at 5.0.0-beta1.  I'll show you how to add a data source in both versions.

Grafana also provides detailed steps here.

Grafana before 5.x

Start by clicking the Grafana icon in the top header, then click on "Data Sources".  Click the "+ Add data source" button in the upper right-hand corner.  Select "Elasticsearch" in the "Type" drop-down.

You should be seeing this screen:


Grafana does a great job of making this very simple.  You can see some default settings in most sections.  Start by naming your new data source, then enter the URL of your Elasticsearch server, keeping the port set at 9200.

The next section is "Access".  You have two choices.  Proxy or Direct.  
Direct = url is used directly from browser.  
Proxy = Grafana backend will proxy the request
If you are unsure of which one to use then try both.

Depending on your environment you may need to adjust the "Http auth" section.

The next section is the key.  The "Index name" field tells Grafana which index to load.  The index is your data source, and each index you create must be loaded as a separate data source.  I have one index for all of my cDOT clusters and another index for the Elasticsearch server performance metrics.


In Part 2 of this series we used a pretty generic name for our index at the end of the conf file, in the "output" section, and we used a "Daily" pattern.  [logstash-]YYYY.MM.DD is the index name we enter here.  Be sure to choose the "Daily" pattern.
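As a refresher, the "output" section from Part 2 looked something like this (a sketch; your hosts value and index prefix may differ):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}

The %{+YYYY.MM.dd} date pattern is what produces one index per day (e.g. logstash-2017.08.18), which is why Grafana's "Daily" pattern matches it.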


Leave "@timestamp" in the next section then choose which version of ES you are using.
Click on "Save & Test" and you should see this banner.


If you do not, my suggestion would be to try several different "Http auth" configurations until you find the one that matches your environment.
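If you prefer to script this instead of clicking through the UI, the same data source can be created with Grafana's HTTP API.  A minimal sketch, assuming Grafana is at localhost:3000 with the default admin/admin credentials, ES is at localhost:9200, and the data source name is arbitrary:

curl -s -u admin:admin -H "Content-Type: application/json" \
  -X POST http://localhost:3000/api/datasources \
  -d '{
    "name": "elasticsearch-cdot",
    "type": "elasticsearch",
    "url": "http://localhost:9200",
    "access": "proxy",
    "database": "[logstash-]YYYY.MM.DD",
    "jsonData": {
      "timeField": "@timestamp",
      "esVersion": 5,
      "interval": "Daily"
    }
  }'

The "database" field holds the index pattern, and "jsonData" carries the ES-specific settings you just filled in on the form.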

Grafana 5.x and beyond

Grafana just released 5.0.0-beta1, and the biggest change is within the UI itself.  I like what I have seen so far, but I am still doing a bunch of testing with it.

Adding a data source in version 5 is pretty much the same, but they have added much more detail to help Grafana talk to your ES server, all of which is explained in better detail at Grafana.com.


Time to start building some dashboards!

Before I start I should mention that the ES and LS installations I covered in this series are pretty basic when it comes to their configurations.  We set up a YAML file for ES to configure our node (cluster) settings, then we set up a conf file for LS to tell it where and what to listen for, how to parse the messages, and where to stash them (the index).  Everything else was left at its default, such as "mapping".  Mapping is explained here.  I'll explain shortly why this is important for our install.
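If you want to see the mapping ES generated for one of your daily indices, you can ask for it directly (assuming ES is on localhost:9200; substitute one of your own index names):

curl -XGET "http://localhost:9200/logstash-2017.08.18/_mapping?pretty"

This dumps the field types ES inferred for every field Logstash has sent, which will matter in a moment.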

Start by creating a new dashboard and choose the graph panel.  Edit the panel and select your ES index in the "Panel Data Source" drop-down (I'm assuming you are using Grafana v4.x or higher).

You should be seeing something like this.


Congratulations!  You just built your first ES dashboard.  Grafana defaults to a "Count" metric on "@timestamp", showing you the number of hits your index has received.  That was easy!
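Under the hood, this default panel runs roughly the same kind of query you could issue yourself: a date_histogram aggregation on "@timestamp".  A hand-rolled sketch of the idea (the 1h interval is just an example):

curl -XGET "http://localhost:9200/logstash-*/_search?pretty" -H 'Content-Type: application/json' -d '
{
  "size": 0,
  "aggs": {
    "hits_over_time": {
      "date_histogram": { "field": "@timestamp", "interval": "1h" }
    }
  }
}'

Each bucket in the response is one bar on the graph: a time slot and the number of documents that landed in it.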

One other thing before we move on.

Run this command on your ES server, being sure to use the correct date:
curl -XGET "http://localhost:9200/logstash-2017.08.18/_search?size=10&pretty"

You should see some entries in your index that look like this.


Notice all the fields in the image like "nodename", "cluster", "tags", etc.
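For reference, an individual hit in that response looks roughly like this (every value below is made up for illustration):

{
  "_index": "logstash-2017.08.18",
  "_type": "logs",
  "_id": "AV3xkQzE9rFhJcPq1a2b",
  "_source": {
    "@timestamp": "2017-08-18T14:02:11.000Z",
    "syslog_timestamp": "Aug 18 14:02:11",
    "cluster": "cluster01",
    "nodename": "cluster01-01",
    "log-level": "info",
    "syslog_message": "monitor.globalStatus.ok: The system's global status is normal.",
    "tags": ["netapp"]
  }
}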

Let's build a graph to see how many "info" level messages we have received in the past 24 hours, using the "log-level" field.

Click on "Histogram" in the "Group by" section.  You'll notice you see a drop down.  Choose "Terms".  Now click on 'select field".  You should a list of all of your fields.


Choose "log-level".  You'll notice at the top left of the graph an error pops up.


Fielddata is disabled on text fields by default. 
Set fielddata=true on [log-level] in order to load fielddata in memory 
by uninverting the inverted index. Note that this can however use significant memory. 
Alternatively use a keyword field instead.

This goes back to the mapping configuration I talked about at the beginning of the section.  Feel free to research this area if you'd like.  You may or may not want to enable fielddata, but I'm keeping things simple, so we'll use the "keyword" work-around.
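For context, the default Logstash index template in ES 5.x maps string fields roughly like this, which is where that ".keyword" sub-field comes from (a sketch, not your exact mapping):

"log-level": {
  "type": "text",
  "fields": {
    "keyword": {
      "type": "keyword",
      "ignore_above": 256
    }
  }
}

The "text" version is analyzed for full-text search, while the "keyword" sub-field stores the exact value and is what aggregations like Terms can group on.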

Click on "log-level" again but this time type "log-level.keyword" (if you are using ES v2.x then type "log-level.raw" instead).  We've just set our Y-axis on the graph.  Now we need to set the X-axis.
Click the "x" button at the far right of the same row.


Now we will set the "Then by" row to the "Date Histogram" option and choose "@timestamp".
What do you see now?


You should see all the different log-levels you have received in the last 24 hours.  Now type "info" in the "Query" row.  You just created your first query using the graph panel; you are viewing the count of all "info" messages you have received.

Delete "info" from the Query section and type the name of a cluster in your environment.  Now you are looking at all the different log-level types that cluster has sent in the past 24 hours.


Let's build a Table panel.

Add a table panel and edit it.  Choose your ES data source again.  The table populates with timestamp and count columns.  Change the "count" metric to "Raw Document".  Now you are seeing the JSON format of each message from the index.

Click the "Options" section of the table panel.  You should see something like this.


In the Columns section choose syslog_timestamp, cluster, and syslog_message.


Let's add a cool feature that Grafana gives you.  Click on the gear icon at the top of the page and choose "Templating".  Add a new template and choose "Ad hoc filters" in the Type section.  Choose your ES data source again, then name your template anything you want.


Click "Add" then close you templating section.  Now you have a template you can use to filter your query.  Click the "+" and type "cluster.keyword" (or cluster.raw if using ES v2.x).  Click the "select tag value" and you should see a list of all your clusters.


You can now filter the query by cluster name!

This latest post should give you enough information to get started building your own dashboards.  If you are having issues or have any questions, please do not hesitate to drop me a message.
