Handling The System Logs Headache

It’s a Friday evening, and you are yearning to finish your
work and go home to see the family, but you are stuck
combing through a bunch of logs from your system repository
to find out where the failure occurred, and why.

Sound familiar? For most of us, it is.

We wish we could automate this whole thing so that we
watch only for the stuff that we are interested in.
What a relief it would be!

There are a few tools to help us sort out the logs and
focus on the topics that we are looking for, making our
lives a bit easier.

What tools do you use for processing system logs?

The biggest revelation I’ve had in this area was when we started to use Splunk. All of a sudden, working with logs became a breeze.

Another great moment was when we started to move towards DevOps and we as a team started to own the monitoring and alerting system. So instead of the logs being something that testers and operations looked at, they became something that is managed in a systematic fashion by everyone concerned. :slight_smile:

Before that, I thankfully worked on Unix systems, so find and grep did the job.


Sadly you often have to write your own tools.

  • Home baked. This is not as easy as it sounds, although I’ve been on teams where a helpful dev has gone and written nice colorizing tools for logs (I’ve done this in Python myself once) to allow filtering based on log level or content. Once you have a tool that does just those basic things well, it will also solve the timestamping problems, and at that point you might be ready for an ML/AI project. For me, coloring the logs is the start. Unfortunately, the source and presentation of the logs change all of the time, and my main pain is that the log of interest often formats differently from the one the tool was written for.
  • Check out the colorama Python module
  • Check out Agent Ransack (Lite) https://www.mythicsoft.com/agentransack/download/
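To make the “home baked” idea concrete, here is a minimal sketch of such a colorizing and level-filtering tool in Python, using plain ANSI escape codes (the colorama module mentioned above provides the same escapes portably on Windows). The level keywords and the regex are assumptions about the log format — adjust them to match your own logs:

```python
import re
import sys

# ANSI color codes; colorama can supply these portably on Windows.
COLORS = {
    "ERROR": "\033[31m",    # red
    "WARNING": "\033[33m",  # yellow
    "INFO": "\033[32m",     # green
    "DEBUG": "\033[36m",    # cyan
}
RESET = "\033[0m"

# Assumed level keywords; real logs may use other names or positions.
LEVEL_RE = re.compile(r"\b(ERROR|WARNING|INFO|DEBUG)\b")

def colorize(line: str) -> str:
    """Wrap a log line in the color of the first level keyword found."""
    match = LEVEL_RE.search(line)
    if not match:
        return line
    return f"{COLORS[match.group(1)]}{line}{RESET}"

def filter_by_level(lines, levels):
    """Yield only (colorized) lines whose level is in the given set —
    the filtering by log level mentioned above."""
    for line in lines:
        match = LEVEL_RE.search(line)
        if match and match.group(1) in levels:
            yield colorize(line)

if __name__ == "__main__":
    # Usage sketch: pipe a log through, e.g.
    #   tail -f app.log | python colorize.py
    for raw in sys.stdin:
        sys.stdout.write(colorize(raw.rstrip("\n")) + "\n")
```

From here, filtering by content is one more regex away, which is exactly the “does just those basic things well” starting point.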

A happy starting point for the journey!


Thank you! As part of the DevOps monitoring and alerting system, what tools do you use currently?

I feel you when you say that the format changes often!
Would it help if the search were done with a short
keyword rather than relying on a (probably long) format string?

Coloring definitely is valuable!



A second for structured logging tools like Splunk, Kibana (and the ELK stack in general), Datadog, etc., especially if the dev time is taken to wire up trace IDs. Add in Prometheus or other metrics tooling, and you’re getting into observability vs. monitoring territory, which opens up a lot of doors.
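The wiring differs per stack, but as a sketch of what “wiring up trace IDs” can look like, here is a stdlib-only Python example that emits one JSON object per log line — so tools like Splunk or Kibana can index fields instead of parsing free text — and attaches a per-request trace ID. The handler function and field names are illustrative, not any particular framework’s API:

```python
import json
import logging
import uuid

class JsonFormatter(logging.Formatter):
    """Format each record as one JSON object per line, so log
    indexers can query fields like level and trace_id directly."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            # The trace_id ties together every line of one request.
            "trace_id": getattr(record, "trace_id", None),
        })

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def handle_request():
    """Hypothetical request handler: one trace id per request,
    attached to every log call via the `extra` mechanism."""
    trace_id = str(uuid.uuid4())
    logger.info("request started", extra={"trace_id": trace_id})
    logger.info("request finished", extra={"trace_id": trace_id})
```

Once every line of a request shares a trace ID, a single search in the indexer reconstructs the whole request’s story.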

That being said, we’ve got some legacy software with super verbose logging that doesn’t work well with Kibana, and at that point, parsing the logs using command line tools like grep and cut is often more efficient.


Our toolchain looks something like this: Graphite/Grafana for plotting metrics over time, Datadog for alerts, and Kibana for logs.


If you use a Windows platform, you might consider BareTailPro:

You can use regular expressions to search for specific lines in growing files.

For UNIX, a combination of tail and grep can also help.
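For cases where you want that tail-and-grep behavior inside a script, here is a rough stdlib-only Python equivalent of `tail -f file | grep pattern`. The polling interval is arbitrary and log rotation is not handled — it is a sketch, not a replacement for the real tools:

```python
import re
import time

def follow(path, pattern, poll=0.5):
    """Yield lines matching `pattern` as they are appended to the
    file at `path`, roughly `tail -f path | grep pattern`.
    Does not handle file rotation or truncation."""
    regex = re.compile(pattern)
    with open(path) as f:
        f.seek(0, 2)  # start at end of file, like tail -f
        while True:
            line = f.readline()
            if not line:
                time.sleep(poll)  # no new data yet; wait and retry
                continue
            if regex.search(line):
                yield line.rstrip("\n")
```

Usage would be something like `for hit in follow("/var/log/app.log", r"ERROR"): print(hit)`.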


Those of us on Windows can also use Log Parser (a free utility provided by Microsoft) and Log Parser Studio (also free, and provides a UI over Log Parser’s command line). The tool can read a whole lot of different log formats, and can query them using a rather basic version of SQL.

I haven’t dug through system logs with it, but I use it a lot for digging through IIS logs.

Log Parser Studio also comes with a bunch of sample queries that can be used as a starting point, which helps to understand how it works.
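To give a flavor of that SQL dialect, a query along these lines counts the most-hit URLs in IIS logs. The field names follow the standard IIS W3C extended format, but the file mask and exact syntax here are illustrative — check them against the Log Parser documentation or the bundled sample queries:

```
LogParser.exe -i:IISW3C "SELECT TOP 10 cs-uri-stem, COUNT(*) AS Hits FROM ex*.log GROUP BY cs-uri-stem ORDER BY Hits DESC"
```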