ASP.NET Core – tracking flow of requests with NLog

1. Introduction

A while ago I wrote an article about using MappedDiagnosticsLogicalContext for tracking request flow in your application. As we are moving our project to ASP.NET Core, I wanted to keep that functionality in place. Unfortunately, the MDLC layout renderer is no longer available in that framework (when targeting .NET Core). Luckily, there are two other renderers which can be used as a replacement:

  • aspnet-traceidentifier
  • aspnet-item

2. Configuring NLog

Before I dig into the details of the layout renderers mentioned above, we have to configure ASP.NET Core to use NLog as a logger. First of all, we need to grab the NLog.Web.AspNetCore NuGet package. Once the package is installed, we have to manually add NLog.config to the project (the file won't be added automatically by the NuGet installer). The next step is to configure NLog with our config, then configure NLog.Web and finally register NLog in the logger factory. All of these steps should be done in the Startup class.
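A minimal sketch of those steps, assuming the NLog.Web.AspNetCore/NLog.Extensions.Logging APIs from that period (method names may differ in newer versions):

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.Extensions.Logging;
    using NLog.Extensions.Logging;
    using NLog.Web;

    public class Startup
    {
        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            // point NLog to our config file
            env.ConfigureNLog("NLog.config");

            // register NLog as a logging provider
            loggerFactory.AddNLog();

            // enable NLog.Web (the ASP.NET Core specific layout renderers)
            app.AddNLogWeb();

            app.UseMvc();
        }
    }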

The final step required for NLog to work is to register HttpContextAccessor in the IoC container of your choice. If you use the built-in one, you can do it like this:
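A minimal sketch, assuming the built-in Microsoft.Extensions.DependencyInjection container:

    public void ConfigureServices(IServiceCollection services)
    {
        // required by the aspnet-* layout renderers to reach the current HttpContext
        services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
        services.AddMvc();
    }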

3. Using aspnet-traceidentifier renderer

The aspnet-traceidentifier layout renderer allows you to obtain the value of the TraceIdentifier property from the current HttpContext. TraceIdentifier is basically a unique id which identifies the request. The renderer can be used as follows:
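For example, in NLog.config (the target name, file path and the rest of the layout are illustrative):

    <target name="file" xsi:type="File" fileName="c:\logs\app.log"
            layout="${longdate}|${aspnet-traceidentifier}|${level}|${logger}|${message}" />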

From now on, without any additional configuration, every log entry written once the HttpContext has been created will contain the TraceIdentifier.

4. Using aspnet-item renderer

If for some reason we don't want to use TraceIdentifier as our CorrelationId, we can leverage the aspnet-item renderer, which basically allows you to read data from the HttpContext.Items collection. However, in that case we are responsible for generating and storing a unique identifier of the request in HttpContext.Items. Fortunately, it is pretty straightforward to do with custom middleware.
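A sketch of such middleware (the class name and the "CorrelationId" key are my own choices, not something NLog mandates):

    using System;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;

    public class CorrelationIdMiddleware
    {
        private readonly RequestDelegate _next;

        public CorrelationIdMiddleware(RequestDelegate next)
        {
            _next = next;
        }

        public Task Invoke(HttpContext context)
        {
            // store a unique id for this request so the aspnet-item renderer can read it later
            context.Items["CorrelationId"] = Guid.NewGuid().ToString();
            return _next(context);
        }
    }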

Of course, we also have to use this middleware in our application, so we have to add the following line
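    app.UseMiddleware<CorrelationIdMiddleware>(); // the illustrative middleware class from the sketch above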

in the Configure method of our Startup class. Having all the pieces in place, we can now append the CorrelationId read from HttpContext.Items to our log entries with the following configuration:
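(Again, everything apart from the aspnet-item renderer itself is illustrative:)

    <target name="file" xsi:type="File" fileName="c:\logs\app.log"
            layout="${longdate}|${aspnet-item:variable=CorrelationId}|${level}|${logger}|${message}" />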

From now on, the CorrelationId will be automatically added to our log entries.

5. It works in multithreading scenarios

If you take a closer look at the logs presented above, you will see that both of the renderers log the proper CorrelationId/TraceId regardless of the thread from which the logger was called. This is possible thanks to the implementation of HttpContextAccessor, which uses AsyncLocal under the hood and thus allows the data to flow across threads.

Source code for this post can be found here


Boxstarter – breaking infinite reboot loop

1. Introduction

Boxstarter is a great tool for configuring your machine without any user interaction. I've been using it for a while now; however, recently I noticed that for no apparent reason it wasn't able to finish the execution of my provisioning script because it got stuck in an infinite reboot loop. It was quite frustrating, so I decided to investigate the problem. Here are my findings.

2. Detecting pending reboots

If you dive into the Boxstarter source code you will notice that the script responsible for checking whether a reboot is necessary uses the Get-PendingReboot cmdlet. By default Boxstarter doesn't export it, so if you want to see what it does you have to add the following line to your Boxstarter script
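One way to do it (the exact path inside the Boxstarter installation is an assumption, adjust it to your setup):

    # make Get-PendingReboot available in the current session
    . (Join-Path $Boxstarter.BaseDir "Boxstarter.Bootstrapper\Get-PendingReboot.ps1")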

Calling the Get-PendingReboot cmdlet will give you information about pending reboots. In my case, the output looked as follows
(Screenshot: Get-PendingReboot output)
As you can see the RebootPending flag is set to true, and the reason is that the value of the PendingFileRenVal property is not null. In most cases a restart is indeed required; however, some applications leave information about a pending reboot in the registry (HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\PendingFileRenameOperations) and the operating system (in some cases) never cleans it up.
As a result Get-PendingReboot indicates that a reboot is required and Boxstarter falls into an infinite restart loop.
(Screenshot: PendingFileRenameOperations registry value)

3. Fixing the problem

The obvious solution is to remove that value from the registry key manually. However, this approach didn't work for me, basically because the registry value was somehow added again at the beginning of the execution of my script. That is why I decided to modify my script and add a function which clears the undesirable pending file renames of my choice.
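A sketch of such a function (the function and parameter names are mine; pass the patterns matching the entries that keep reappearing on your machine):

    function Clear-PendingFileRenameOperations {
        param([string[]]$PatternsToRemove)

        $key  = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager'
        $name = 'PendingFileRenameOperations'

        # read the current multi-string value (if any)
        $current = (Get-ItemProperty -Path $key -Name $name -ErrorAction SilentlyContinue).$name
        if (-not $current) { return }

        # keep only the entries that do not match any of the given patterns
        $filtered = $current | Where-Object {
            $entry = $_
            -not ($PatternsToRemove | Where-Object { $entry -like $_ })
        }

        Set-ItemProperty -Path $key -Name $name -Value ([string[]]$filtered) -Type MultiString
    }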

As you can see I am taking the current value of PendingFileRenameOperations and removing the entries which I know will stay there forever. Calling that function as the very first step of my installation script solved the problem and I was able to successfully restore my machine from the Boxstarter script.
(Screenshot: full installation script)

(Screenshot: PendingFileRenameOperations value cleared)

Entire Boxstarter script can be found here.


Logstash – reading logs from RabbitMQ

1. Introduction

In my previous post, I showed how to configure Logstash to parse logs from files. This is pretty useful; however, if your application is deployed on multiple servers, you usually log to some kind of central log storage – in my case to a queue, RabbitMQ to be more specific. In this post, I will show how to configure Logstash so it reads the logs from that queue.

2. Preparing queue

Before we move to the Logstash configuration, first of all we have to prepare a RabbitMQ test instance. If you don't have RabbitMQ yet, go to this website and install the queue. Once the installation is done, go to the installation folder (C:\Program Files\RabbitMQ Server\rabbitmq_server-3.6.5\sbin in my case) and run the following in a console
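    rabbitmq-plugins enable rabbitmq_management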

This command will enable the RabbitMQ management website, so it will be easier for us to see what is going on in a given queue. In the next step, we have to prepare the queue the logs will be sent to. You can do it via the website we've just enabled (http://localhost:15672/) or via the RabbitMQ admin console. As I prefer to automate things as much as possible, I will do it via the command line. What is quite unusual when it comes to the RabbitMQ CLI is the fact that it is a Python script you have to download and run locally (it is not an executable). The script can be found on the management site under this address. Once the script is downloaded (in my case it is saved as rabbitmqadmin.py) you can start preparing the necessary elements: the exchange, the queue and the binding.
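A sketch using rabbitmqadmin (names match those described below):

    python rabbitmqadmin.py declare exchange name=logger type=topic
    python rabbitmqadmin.py declare queue name=MyAppLogginQueue durable=true
    python rabbitmqadmin.py declare binding source=logger destination=MyAppLogginQueue routing_key=MyApp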

As you can see I’ve created exchange called logger which is bound to MyAppLogginQueue queue using MyApp route. This means that every message with topic MyApp sent to logger exchange will be pushed to MyAppLogginQueue .

3. Preparing Logstash

The Logstash configuration will be a modified version of my previous config; I will just add another input source. Here is a basic usage
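A sketch of the additional input (credentials are placeholders):

    input {
      rabbitmq {
        host => "localhost"
        queue => "MyAppLogginQueue"
        user => "guest"
        password => "guest"
      }
    }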

As you can see we will be consuming messages from MyAppLogginQueue, which is hosted on localhost. For the user and password properties use your own credentials. That is basically it, so now it is time to see if everything is working.

4. Testing configuration

In order to test the configuration you have to run Elasticsearch and Kibana and use the new config for Logstash. I've shown how to do it in one of my recent posts. For sending messages to the queue I will just use the RabbitMQ management website API. The API exposes the
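    /api/exchanges/{vhost}/{exchange}/publish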

endpoint, which accepts POST requests and can be used for publishing messages to a given exchange. In my case the POST body will look as follows
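(The payload itself is just an illustrative log line; the surrounding fields are what the management API expects.)

    {
      "properties": {},
      "routing_key": "MyApp",
      "payload": "TimeStamp=2016-10-30 10:37:21.127 Level=INFO Logger=MyApp.Services.OrderService Message=Hello from RabbitMQ",
      "payload_encoding": "string"
    }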

and I will be sending it to
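    http://localhost:15672/api/exchanges/%2F/logger/publish

(%2F is the URL-encoded default vhost /; adjust it if you use a different vhost.)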

Note that I will be sending messages to the exchange, not to the queue itself. The exchange’s responsibility is to route the message to all bound queues. Here is how it looks in practice
(Animation: messages published to RabbitMQ appearing in Kibana)
As you can see our configuration is valid and messages are shown on Kibana’s dashboard almost in real time.

Full Logstash config can be found here


Logstash – parsing multiline log entries

In my previous post I showed how to configure Logstash so that it is able to parse logs in a custom format. The configuration presented in that post had one significant drawback – it wasn't able to parse multiline log entries. This is a rather common scenario, especially when you log exceptions with a stack trace. A log entry in that case may look as follows
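An illustrative entry (only the TimeStamp= prefix matters for what follows; the remaining field names are assumptions):

    TimeStamp=2016-11-13 10:21:32.701 Level=ERROR Logger=MyApp.Services.OrderService Message=Unhandled exception while processing order
    System.NullReferenceException: Object reference not set to an instance of an object.
       at MyApp.Services.OrderService.Process(Order order)
       at MyApp.Controllers.OrderController.Post(OrderDto dto)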

Parsing this kind of message using the current configuration will result in a grok parsing error
(Screenshot: grok parse failure in Kibana)
and we won’t be able to search against predefined fields. Fortunately Logstash allows you to configure something called input codecs which basically allows you to transform input data into some other form. One of those codes is multiline codec, which is responsible for “merging” multiline logs into one entry.
Here is example of codec configuration
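(The pattern reflects the TimeStamp= prefix explained right below:)

    codec => multiline {
      # any line that does not start with "TimeStamp=" belongs to the previous entry
      pattern => "^TimeStamp="
      negate => true
      what => "previous"
    }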

The code above says that any line not starting with "TimeStamp=<timestamp value>" should be merged with the previous line.
The multiline codec can be added to a variety of inputs. Here is how you can apply it to the file input
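(Path and glob are assumptions matching the C:\logs directory from the previous post:)

    input {
      file {
        path => "C:/logs/*.log"
        start_position => "beginning"
        sincedb_path => "NUL"
        codec => multiline {
          pattern => "^TimeStamp="
          negate => true
          what => "previous"
        }
      }
    }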

Thanks to that change, Logstash is now able to correctly parse exceptions from our logs.
(Screenshot: multiline entries parsed correctly)

Source code for this post can be found here


Getting started with Elastic Stack

1. Introduction

In one of my previous posts I showed how to improve logging in an application by tracking the flow of incoming requests. Now it is time to show the basics of the Elastic stack to make searching across multiple log files/sources a piece of cake. The Elastic stack (previously called the ELK stack) is a set of three tools which allow you to parse (Logstash), query (Elasticsearch) and visualize (Kibana) logs with ease.

2. Installation

First of all (as usual) we have to get the tools, so go to elastic.co and download the apps mentioned before.
These are stand-alone applications so no installation is required. The only requirement is to have the JAVA_HOME system variable pointing to your Java directory. In my case, this looks as follows
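(The path below is just an example; point the variable at your own Java installation:)

    setx JAVA_HOME "C:\Program Files\Java\jre1.8.0_101"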

3. Elasticsearch

Once all three applications are downloaded we can run an Elasticsearch instance (as there is no additional configuration needed for basic usage) via the elasticsearch.bat file.
(Screenshot: Elasticsearch responding in the browser)

4. Logstash

Having our Elasticsearch instance up and running, it is now time to configure Logstash so that it will be able to parse logs.
The configuration provided in the next sections will be able to parse logs with the following format
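An illustrative entry in that format (the field names are assumptions made for this walkthrough; only the TimeStamp= prefix is relied on later):

    TimeStamp=2016-10-30 10:37:21.127 Level=INFO Logger=MyApp.Services.OrderService Message=Order processed successfully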

4.1. Configuration

Logstash configuration is done via a config file in a specific format. You can read about that here.
The very first step is to define the input and output sections
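(A sketch; the path and host match what is described just below:)

    input {
      file {
        path => "C:/logs/*.log"
      }
    }
    output {
      elasticsearch {
        hosts => ["localhost:9200"]
      }
      stdout { codec => rubydebug }
    }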

As you can see the logs will be read from the C:\logs directory and the parsed content will be pushed to the Elasticsearch instance and to the console output.
We can verify the correctness of the configuration by calling
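    bin\logstash -f logstash.conf -t

(The -t switch only tests the configuration and exits; the long form of the flag differs slightly between Logstash versions.)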

(Screenshot: configuration test output)
If you are creating your config file for the first time, it is a good idea to add additional properties to the file section
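(sincedb_path set to NUL works on Windows so no read position is persisted; ignore_older is given in seconds:)

    file {
      path => "C:/logs/*.log"
      # re-read files from the beginning on every restart
      start_position => "beginning"
      sincedb_path => "NUL"
      # raise the threshold so files modified more than 24 hours ago are still picked up
      ignore_older => 31536000
    }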

This will force Logstash to reparse the entire file when restarted and will also make sure that older files are not ignored (by default, files modified more than 24 hours ago are ignored).

At this point, Logstash can read the log file but it doesn't do anything special with it. The next step is to configure a pattern which will be used for log parsing. Logstash uses grok for defining the patterns. Long story short, it is a kind of regex which can use predefined patterns. The easiest way to play around with it is to use grokconstructor; for a list of ready-to-use patterns take a look at this.

For parsing the logs shown in the previous section I've ended up with this grok pattern
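(A sketch matching the illustrative format above; adjust the field names to your own logs:)

    TimeStamp=%{TIMESTAMP_ISO8601:logdate} Level=%{LOGLEVEL:level} Logger=%{NOTSPACE:logger} Message=%{GREEDYDATA:logmessage}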

Notice that you are able to give aliases to particular fields; for instance
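    TimeStamp=%{TIMESTAMP_ISO8601:logdate}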

means that everything that matches TimeStamp=%{TIMESTAMP_ISO8601} will be stored in the logdate field.
Having defined our pattern, we can now add it to the config file. After the modifications it looks as follows
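(Still a sketch built on the assumed log format:)

    input {
      file {
        path => "C:/logs/*.log"
        start_position => "beginning"
        sincedb_path => "NUL"
        ignore_older => 31536000
      }
    }
    filter {
      grok {
        match => { "message" => "TimeStamp=%{TIMESTAMP_ISO8601:logdate} Level=%{LOGLEVEL:level} Logger=%{NOTSPACE:logger} Message=%{GREEDYDATA:logmessage}" }
      }
    }
    output {
      elasticsearch {
        hosts => ["localhost:9200"]
      }
      stdout { codec => rubydebug }
    }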

Once we run this config and query Elasticsearch for the list of all indices via
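    curl http://localhost:9200/_cat/indices?v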

we will see that our logs were parsed and stored in an index called logstash-2016.10.30.
(Screenshot: list of indices)
If we now go to
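    http://localhost:9200/logstash-2016.10.30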

we will be able to see index information
(Screenshot: index information)

4.2. Fixing the date fields

At this moment there are two main problems with our configuration. First of all, the indices are created based on the time the log entries were read from the file. Second of all, our logdate field is treated as a string.
(Screenshot: logdate field stored as string)
By default Logstash creates indices based on the read time of the source. However, in my opinion it is better to create index names based on the time a given event occurred.
In order to do that, we have to tell Logstash which field is responsible for holding the timestamp. In my case, this field is called logdate. All we have to do is map this field onto the @timestamp field via the date filter
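(The date format string is an assumption matching the illustrative timestamps above:)

    filter {
      date {
        match => [ "logdate", "yyyy-MM-dd HH:mm:ss.SSS" ]
      }
    }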

As you can see the first argument is a field name and the rest of the arguments (you can specify more than one) are just date/time formats. By default the date filter maps the field from the match property onto the @timestamp field, so the config above is equivalent to this one
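    filter {
      date {
        match => [ "logdate", "yyyy-MM-dd HH:mm:ss.SSS" ]
        target => "@timestamp"
      }
    }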

If we restart Logstash and get the indices, we will see something similar to that
(Screenshot: indices created per event date)
The second problem can be handled in a very similar way. We have to add a second date filter and select the logdate field as the target
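(Same assumed format string as before:)

    date {
      match => [ "logdate", "yyyy-MM-dd HH:mm:ss.SSS" ]
      target => "logdate"
    }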

From now on the logdate field will be treated as a date, so we will be able to filter the logs easily with Kibana
(Screenshot: logdate field treated as date)

5. Running Kibana

Having all the configuration in place, we are now ready to run Kibana. As in the previous steps, no installation is needed, so just run the kibana.bat file
(Screenshot: Kibana started)
and go to
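    http://localhost:5601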

If you run the app for the first time you will be asked to configure the indices. You can use the default parameters and just click the "Create" button.
Once the indices are set up you can start writing queries against the logs. By default the entire message is searched for the search terms; however, the real power comes from queries written against specific fields. For example, you can search for any errors in the application with this simple query
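(Assuming the level field captured by the grok pattern earlier:)

    level:ERROR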

Thanks to date-type fields, you can combine this query with the date range selector and narrow the time down to specific values, just like that

(Screenshot: Kibana search results)
These are just the basic queries you can run in Kibana; for more advanced scenarios please visit the website. I also strongly encourage you to take a look at the other features the Elastic stack provides. Source code for this post can be found here.

PS
The configuration presented in this post will not be able to parse multiline log entries, e.g. exceptions. I will show you how to do it in the next post.