Parsec


Adventures In High Availability Logging — Elasticsearch, Logstash, and Kibana (ELK) on Kubernetes

By Erik Nygren

At Parsec, we are a small team working on problems all over the stack. Between the four of us, we support low-latency PC gaming over the WAN on Mac, Windows, Linux, and Raspberry Pi (with an upcoming platform release to stay tuned for), going as close to the hardware as we can to get you the low-latency stream you deserve. From there, we support happy gamers from Australia to North America, all of whom are living the thin-client dream while our AWS-hosted, Kubernetes-backed infrastructure works its magic.

Early on, we knew that in order to solve issues as they arose, we would need a logging solution that was scalable but easy enough to manage that we could spend our time building all of the features you want, rather than getting bogged down tweaking our logging system ad nauseam.

Over the past few months, we've experimented and tweaked our solution and have arrived at a setup I think works really well. Because of that, I want to showcase some of the work we've done, highlight what we learned, and hopefully help someone else out there jumpstart their startup dreams with HAppiness [sic].

This post will be the first in a short series detailing various aspects of our stack and how our thinking has evolved as we’ve hit bumps in the road.

Our goal for our initial logging setup was twofold: bring all our logging into one system, and make it reliable without spending too much time on it. I had used Elasticsearch in the past and been really impressed, so I was guided by a desire to see if I could use it as a "one-stop shop" for both metrics and log tailing. We also knew that we didn't yet know what we might need our logs for, so we needed them stored safely away in case we'd need them down the road.

To that end, we settled on an HTTPS endpoint running Logstash that all our loggers can speak to via straightforward POST-ing. It, in turn, forwards all the messages to both Elasticsearch and S3. We then query and visualize the logs using Kibana, and restore them from S3 in the event of failure.

An overview of our logs
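
To make that "straightforward POST-ing" concrete, here is roughly what a logger does on the application side. This is a minimal sketch; the endpoint URL and field names are made up for illustration.

# A minimal sketch of a client shipping one structured log event to the
# Logstash HTTP endpoint. URL and fields are hypothetical.
import requests

requests.post(
    "https://logs.example.com",
    json={"level": "info", "service": "api", "message": "user connected", "user_id": "123"},
    timeout=2,
)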

Elasticsearch

In reality, this is more than one component and deserves a whole blog post of its own, but plenty of people more knowledgeable than I am have already written plenty about it. We decided to self-host our cluster on Kubernetes using containers built by Pires, which you can find and read more about here and here.

Because we didn't have to support a legacy logging system, and "Newer is Better" (Python 3, anyone?), we're using the latest version of the Elastic suite, i.e. 5.x as of this writing.

The most important lesson from ES is that it is brittle, especially when you run it on Kubernetes like we do. You can set it up to be as robust as you like, but in my estimation it isn't worth the effort. It's a very complicated piece of software that can break in a myriad of ways, so instead of worrying about constant uptime and recovery, you are better off putting a durable layer in front of ES and replaying logs into it if (or when) it goes down. When we first set up ES we thought we'd be able to manage it and didn't worry much about backups, which came back to bite us when it crashed and we had nowhere to look for our logs. We've had issues with everything from memory pressure and disk usage to hard-to-pinpoint index corruption, and have blown away our indices more than once. We learned the hard way how much it sucks to wish for vanished logs after the fact, so now we view ES the way it should be viewed: as an "in-memory" index that can easily go away, but can be just as easily restored from a durable "on-disk" backup. Do yourself a favor and set up your backups up front.
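
To give a flavor of what "replaying" means in practice, here is a rough sketch of the restore path, assuming the S3 output described later in this post has been archiving gzipped, newline-delimited JSON events. The host, bucket, prefix, and index names are placeholders, and you would want batching and error handling on top of this.

# Sketch: replay archived log events from S3 back into Elasticsearch.
import gzip
import json

import boto3
from elasticsearch import Elasticsearch, helpers

s3 = boto3.client("s3")
es = Elasticsearch(["http://elasticsearch:9200"])  # placeholder host


def replay(bucket="your-logs", prefix="logstash/", index="logstash-restored"):
    # Walk every archived object under the prefix and bulk-index its events.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            raw = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            lines = gzip.decompress(raw).decode("utf-8").splitlines()
            actions = (
                {"_index": index, "_type": "logs", "_source": json.loads(line)}
                for line in lines
                if line.strip()
            )
            helpers.bulk(es, actions)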

For running ES on Kubernetes in particular, here are the shortcuts that would have helped me most:

  • Use “_eth0_” [NB: that is a zero] as the network host, as described here.

  • Increase the disk size of the nodes you run ES on. The default 20G will run out fast, especially when kubernetes tries to recover your failing nodes and duplicates the index on disk.

  • If you're looking for better performance, increase the memory allocation of the "es-data" nodes first; that's where most of the work is done. We still run our masters at 256M and haven't had any issues.

  • Make sure you run the curator to clean up old indexes, keeping only as many as you'd reasonably query (and can fit on your cluster); a rough sketch of the idea follows this list.
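
To illustrate what the curator does for us, here is a minimal sketch of the same cleanup in Python, assuming the default daily logstash-YYYY.MM.DD index names; the host and the two-week retention window are placeholders. The real curator tool is the right way to run this in production.

# Sketch: delete daily logstash indices older than a retention window.
from datetime import datetime, timedelta

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://elasticsearch:9200"])  # placeholder host
cutoff = datetime.utcnow() - timedelta(days=14)    # keep two weeks of indices

for name in es.indices.get(index="logstash-*"):
    try:
        day = datetime.strptime(name, "logstash-%Y.%m.%d")
    except ValueError:
        continue  # not a daily index, leave it alone
    if day < cutoff:
        es.indices.delete(index=name)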

I can't yet say whether hosting your own Elasticsearch is the way to go, or whether AWS-hosted ES is better bang for your time/buck. Personally, I found wrangling with it to be a valuable lesson, but there has been wasted time and heartbreak along the way. If you truly want to jumpstart yourself, I'd counsel letting AWS handle it for you. We'll likely try moving over in the near future, so keep your eyes peeled for a future blog post when that happens.

And as a final note: be careful what you log! If any field you send to Elasticsearch has conflicting types, ES will reject the message and you'll be left scratching your head as to why you're not seeing any entries. I'd encourage you to send as much structured data as you can; ES seems to handle thousands of fields without much trouble, but don't switch your "user_id" from the string "123" to the integer 123 (a "number" in ES terms) unless you want trouble. I still haven't found a good way to force or cast types in ES itself (if you're reading this and have, talk to me), so nest your properties across your stack and keep them non-conflicting. Cleaning up conflicting indices is a pain.
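
For example, the simplest way we've found to stay out of trouble is to pick one type per field and cast it at the logging call site, and to nest fields under service-specific keys when two services genuinely disagree. A hypothetical sketch, using the same logger as the examples below:

# Pick one type per field and cast it where you log.
logger.info("user connected", extra={"user_id": str(user_id)})  # always a string
logger.info("user billed", extra={"user_id": str(user_id)})     # never the raw integer

# If two services really do need different types, nest them under distinct keys
# so the top-level field names never collide in the index.
logger.info("billing event", extra={"billing": {"user_id": user_id}})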

Kibana

We decided to stick with just one visualization solution because we didn’t want to tool up too quickly before knowing exactly what problems were worth solving. User-facing features rule the day at Parsec, and in the end we want just enough metrics to help us figure out what to work on.

Kibana looks great, and takes a lot of the pain out of interacting with ES’ clunky API. You can search through and visualize log statements at the taps of a few buttons. And make no mistake, search is what ES does, and it does it well. You can easily search for substring matches, use boolean logic or match on specific fields, and since everything is indexed it’s typically very fast. When you first drill down through tens of thousands of scattered log statements in sub-seconds to find only the half-dozen relevant messages, Arthur C. Clarke’s saying “any sufficiently advanced technology is indistinguishable from magic” comes to mind.

But, all isn’t well in the Kibana world, and it’s worth mentioning some of the drawbacks.

In terms of learning curve, Kibana is like a gentle grassy slope topped with a sheer brick wall. Most things are dead simple and intuitive, and the graphs it produces look nice and are decently performant. However, from time to time you'll be trying to do something fairly straightforward ("how do I filter out values below 10?"), and you'll find that the good-looking docs are disappointingly thin, and your Google queries are just as likely to turn up irrelevant docs from previous versions (I get the feeling version 4.x is still more popular). When you do find an answer, it often explains how to do something in Elasticsearch directly, which might as well be saying "can't be done". Learning the ins and outs of Elasticsearch is great if you want to get serious about search, but if you're like me and just want to fire-and-forget your logs, the time it takes to learn is a dealbreaker.

In comparison to something like Grafana, Kibana is a much clunkier visualization framework. It really is meant for log visualization, but our goal is to make it carry the full weight of a dedicated metrics infrastructure. There are straightforward things you would expect that aren't there, or that are complex to set up, and they mostly boil down to Kibana's log-centric paradigm. For example, scaling a value isn't a transformation on the graph; it has to be done by scripting on the fields of the log messages themselves, so ES creates a temporary field with your calculation on each log statement, even if you only want to use the scaled value in one graph.

But, all that being said, once you've gotten used to Kibana's quirks it is perfectly serviceable as a basis for real-time metrics; you just have to put some restrictions on how and what you log. Here's a run-through of the things I've found valuable but hard to discover:

Be Careful What You Log (Part II)

Kibana can graph anything you log, but you'll need to plan on the application side for what you want to visualize with the logs you send. My advice is to log more fields than you think you need and treat every log statement as a potential metric, but don't send whole objects indiscriminately. So, even if you're not counting events yet, logging something like this [standard Python logging in this example]:

logger.info("I did something", extra={"something_count": 1})
logger.info("I did a test", extra={"test_worked": 1 if worked else 0})

is a good idea, as it's easy to count, sum, and graph in Kibana down the line. However, logging something like:

try:
    # Logging the raw parsed response and the raw exception object ships
    # fields with unpredictable names and types.
    resp = json.loads(requests.post("http://some-api.com", json={"oh": "my!"}).text)
    logger.info("response", extra={"response": resp})
except Exception as e:
    logger.info("there was an error!", extra={"exception": e})

is asking for trouble. In this example, ES might reject your "response" message intermittently if the types or fields in the API's response change, and the exception object might be filled with fields you have no intention of ever looking at. In my experience it's better to explicitly log the fields you're interested in, such as line number and location, the response status code, and the specific values you care about, thereby keeping your index clean.
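
Sticking with the same hypothetical API call (and the same logger as above), the explicit version might look something like this; the field names are ours to choose, which is the point:

resp = requests.post("http://some-api.com", json={"oh": "my!"})
logger.info("api response", extra={
    "api_status": resp.status_code,  # always an integer
    "api_ok": 1 if resp.ok else 0,   # trivial to sum or average in Kibana
})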

Also worth being aware of: Kibana can't (as far as I know) graph two different search queries on the same graph. So, if you think two things might be related and you want them in one graph, it's better to log them together. Say you log them separately, like this:

logger.info("number of active users", extra={"num_active": num_active})
logger.info("total users", extra={"num_users": num_users})

With this, you won't be able to put active vs. total users on one graph without searching for the string "users", which is not a particularly unique search string. Instead, be more explicit about what your metrics are:

logger.debug("there are {} active users".format(num_active))
logger.info("active users metric", extra={"num_active": num_active, "num_users": num_users})

The assumption here is that the debug log is used to showcase something of interest but isn't relevant for any graphing, and only the info log is sent to ES. Now you can search for "active users metric" and graph the terms using filters (more on that below).

Backing Up Kibana

If you read nothing else about Kibana, remember the export button on the Saved Objects management screen. That button is your best friend (unless you've already got a stable, humming ES cluster, in which case, why are you even here?).

I've been meaning to write an auto-backup agent for this because I do it so often, but until that day, manual backups are the way to go. My advice: export often, so you never have to mourn the loss of your precious dashboards and visualizations.
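
If you want to script it in the meantime, Kibana 5.x keeps its saved objects (dashboards, visualizations, searches) in the .kibana index, so a periodic dump of that index gets you roughly the same thing as the export button. A rough sketch, with the host and file name as placeholders:

# Sketch: dump every Kibana saved object to a newline-delimited JSON file.
import json

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://elasticsearch:9200"])  # placeholder host

with open("kibana-backup.json", "w") as f:
    for doc in helpers.scan(es, index=".kibana"):
        f.write(json.dumps(doc) + "\n")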

And a fun fact: when you import, you'll notice that importing breaks if fields needed by your visualizations are missing from the index! It's an annoying bug, unless you follow my advice and work on durability and restoration, so stay tuned for that…

Scripted Fields

As I mentioned before, scripted fields are how you scale values for your dashboards. These fields aren't yet part of the Kibana export, they run with every query, and they seem to incur some load, so I wouldn't go overboard with them. If you find yourself transforming lots of fields, you should probably go back and change what you log instead.

As an example, for some of our graphs we like to transform a field that's stored in seconds into hours, and graph the hours. To achieve this, we scale into a new field like so:

https://gist.github.com/underscorenygren/9034b5c86572fed4516f3a3370cc5636

Elastic touts these "Painless" scripts in several parts of their documentation, and I'm sure they can be used to great effect. However, until the feature is a bit more robust, I'd suggest keeping it simple and using it in a few choice places.

Filters On Graphs

I find filters very useful, and was surprised that they aren't well described in the docs, so I figured I'd write a note on them here.

Filters are available as a graph split and allow you to put sub-queries into your graph so it shows only the results of those filters. In my experience, combining these with logging the "right stuff" is typically good enough for most niche-metric needs. A picture is worth more than a thousand words, so here is a somewhat arbitrary example that buckets events into "Long", "Short" and "Failed" based on the value of a piece of data in the message. Note that the third filter uses a more complex query with boolean "AND" logic, which can be very handy.

S3 For Durability

Speaking of magic, S3 is another piece of astounding software/infrastructure, and for our use case we couldn't hope for a better place to host our raw logs. It's so widely used that the last time it went down, the whole internet basically broke, and when that happens you typically have bigger problems than where your logs are.

Our strategy for durability is to ship our logs to S3 as soon as possible, then replay them in the future as needed. This also gives us a lot of flexibility going forward. For example, an idea I've been toying with is keeping long-term graphs in ES/Kibana by pulling and filtering stats into a dedicated "long term" ES index. Having a backup of all the logs is essential to supporting such ideas (or any future move to a proper metrics infrastructure), and keeping them remote allows us to play a bit fast and loose with our Kubernetes clusters, destroying containers aggressively without worrying about any state stored on them.

Logstash

Which brings us to Logstash, the piece that ties all of these components together. In short, we use Logstash because we have to: in my estimation it isn't the best at any one thing, be it log filtering, aggregation, or shipping, but it's ubiquitous and configurable enough that it's simply the best place to start.

We keep a centralized logging API that all of our loggers connect to. It runs `nginx` in front of Logstash's http input plugin, which in turn forwards to ES and S3.

The Input

As promised, this part is easy; when Logstash is simple to configure, it doesn't disappoint:

input {
  http {
    port => 8080
  }
}

You don’t have to use nginx in front of it, but I find that it takes a lot of worry out of the process, and if you want to support SSL it’s a must.

You can also set codec => json if you're sending JSON logs like we are, but if you send your messages with the application/json content type, the plugin figures it out for you.

Filters

The following all go into the filter section, i.e.

filter {
  ...
}

Dropping Events

if [headers][request_path] == "/liveness" {
  drop {}
}

Drop is handy for healthchecking, liveness probes, and the like. These calls still return "ok" as if they went through the whole pipeline, but are immediately dropped. And let me tell you, liveness probes on containers are a great idea and easy to set up. Just make sure you set the probe on the right container: initially, we had set the liveness probe on our nginx container, so when Logstash went down and the probe failed, Kubernetes kept restarting nginx to no effect, when it needed to restart Logstash.

Throttling

throttle {
  after_count => 2
  period => 10
  max_age => 20
  key => "%{user_id}"
  add_tag => "throttle_warn"
}
throttle {
  after_count => 3
  period => 10
  max_age => 20
  key => "%{user_id}"
  add_tag => "throttled"
}
if "throttled" in [tags] {
  drop {}
}
if "throttle_warn" in [tags] {
  mutate {
    replace => { "message" => "throttling events for user:%{user_id}" }
  }
}

This is a neat trick if you've got a lot of logs in certain situations. If you're not solving this on the sending end, you can instruct Logstash to drop them for you conditionally. The example above is a toy one, as it is very strict: it replaces the message with a warning after it sees two messages with the same "user_id" field within 10 seconds of each other, and drops all subsequent messages for that id until the period has ended.

Forcing Types

mutate {
  convert => {
    "user_id" => "integer"
  }
}

Rather than handling types in ES schemas, we make life easy for ourselves and force types on the writing end.

Adding Env Variables

mutate {
  add_field => { "[@metadata][docker_compose]" => "${DEBUG}" }
}

Logstash conditionals can't read your environment variables directly, so crafting your if/elses can be tricky. Fortunately there is an easy hack around it: add @metadata fields, which won't actually be shipped with the event, and then query them in your conditionals (see the output below).

Outputting

output {
  elasticsearch { hosts => ["${ES_HOST}:9200"] }
  if [@metadata][docker_compose] != "true" {
    s3 {
      bucket => "your-logs"
      prefix => "logstash/"
      codec => "json"
      encoding => "gzip"
      access_key_id => "${AWS_ACCESS_KEY_ID}"
      secret_access_key => "${AWS_SECRET_ACCESS_KEY}"
      region => "us-east-1"
    }
  }
}

Sending to ES is almost as easy as setting up an HTTP input! I shudder to think how hairy that JRuby code must be under the hood…

Worth noting here is the use of env variables to turn off the S3 output when we are developing locally (as you might surmise, we use docker-compose to supply the correct env vars and act as our “local” kubernetes).

Now, bringing it all together:

https://gist.github.com/underscorenygren/fc0fba4ef392ad598f1c9193dfed67b4

Conclusion

To conclude, our initial logging setup is fairly straightforward. Logstash and nginx run in a set of pods in Kubernetes, and all of our loggers attach to them and write their logs as JSON. From there, we ship the logs straight to Elasticsearch and S3 with little transformation, and use Kibana to visualize what's going on in real time.

In our next installment, I'll go into further detail on how we achieve High(er) Availability than what's outlined here: what we do when our Logstash HTTP endpoint (inevitably) goes down, and how we restore ES when it gets booted from our cluster. Stay tuned for some neat juggling of crashing containers, all without using any tooling that hasn't been outlined here!
