ASP.NET Core – populating IOptions&lt;T&gt; from external data source

1. Introduction

In ASP.NET Core, web.config is no longer the place for storing application settings. The new framework introduces the concept of JSON-based configuration, and the default file for storing settings is now appsettings.json. Here is a quick tutorial on how to use the new features.

2. Reading configuration

Let’s assume that our appsettings.json file looks as follows.
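A minimal sketch of such a file – the AvailabilitySearchOptions node name comes from this post, but its two properties are illustrative assumptions:

```json
{
  "AvailabilitySearchOptions": {
    "CacheExpirationInSeconds": 60,
    "MaxNumberOfResults": 100
  }
}
```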

Thanks to ConfigurationBuilder, we can parse this config and later on materialize sections or the entire file into strongly typed configuration classes. Let's say we want to bind the AvailabilitySearchOptions node to the following class
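A plausible shape for it, matching the assumed JSON above:

```csharp
public class AvailabilitySearchOptions
{
    // property names must match the keys of the JSON node
    public int CacheExpirationInSeconds { get; set; }
    public int MaxNumberOfResults { get; set; }
}
```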

We can achieve that with the following steps. First of all, we need to read the entire configuration
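For example, in the Startup constructor (a typical ASP.NET Core 1.x setup):

```csharp
public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);

    Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }
```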

And in the next step, we have to register IOptions&lt;AvailabilitySearchOptions&gt; in the container using the services.Configure&lt;TOptions&gt;(IConfiguration section) method
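Something along these lines:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddOptions();

    // bind the AvailabilitySearchOptions section to the options class
    services.Configure<AvailabilitySearchOptions>(
        Configuration.GetSection("AvailabilitySearchOptions"));
}
```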

Note Configuration.GetSection("AvailabilitySearchOptions") passed as an argument to the services.Configure method.
From now on, we can access the AvailabilitySearchOptions settings via the IOptions&lt;AvailabilitySearchOptions&gt; interface, which you can easily inject into your classes.

3. Populating IOptions<T> from external data source

From time to time, reading configuration just from a JSON file might not be enough; for instance, you might want to add configuration read from some external data source. Fortunately, you don't have to give up on IOptions&lt;T&gt;, as it is possible to read additional data from literally any other source thanks to the IConfigureOptions&lt;T&gt; interface. All we have to do is create a setup class which implements the IConfigureOptions&lt;T&gt; interface
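A sketch of such a setup class (the class name comes from later in this post; the external call is hypothetical):

```csharp
using Microsoft.Extensions.Options;

public class AvailabilitySearchOptionsSetupService : IConfigureOptions<AvailabilitySearchOptions>
{
    public void Configure(AvailabilitySearchOptions options)
    {
        // values bound from appsettings.json are already set at this point;
        // here we can enrich them with data from any external source
        options.MaxNumberOfResults = ReadMaxNumberOfResultsFromExternalSource();
    }

    private int ReadMaxNumberOfResultsFromExternalSource()
    {
        // hypothetical call to a database, web service, etc.
        return 500;
    }
}
```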

And then register this class in our container
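The standard way to do it is to register the class as an IConfigureOptions&lt;T&gt; implementation:

```csharp
services.AddSingleton<IConfigureOptions<AvailabilitySearchOptions>,
                      AvailabilitySearchOptionsSetupService>();
```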

From now on, when the value of IOptions&lt;AvailabilitySearchOptions&gt; is accessed for the very first time, the Configure method of AvailabilitySearchOptionsSetupService will be called and you will be able to set additional values for your settings. Note that the values from appsettings.json will already be there.
In addition, it is possible to have multiple implementations of IConfigureOptions&lt;T&gt;, so if you want your setup to be split into multiple classes, you are good to go.
Source code for this post can be found here


XUnit – sharing test data across assembly

1. Introduction

XUnit provides two ways of sharing data between tests – ICollectionFixture and IClassFixture. The first one allows you to share context across collections and the second one across test classes. I've already had a couple of cases in which these fixtures were not enough. Basically, I wanted to share data between all tests in a given assembly – without worrying which test class or test collection a given test belongs to. In these circumstances, I've usually used some kind of old-fashioned singleton or a custom TestFrameworkExecutor. I've never liked those solutions; fortunately, I've recently come across a nice little library – xunit.assemblyfixture.

2. Usage

Xunit.assemblyfixture allows you to share data between tests in a given assembly via the IAssemblyFixture&lt;TFixture&gt; interface. The usage is basically the same as with XUnit's other fixtures. All we have to do is create a class whose instance we want to share with other tests
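For instance, a hypothetical fixture holding an expensive-to-create resource:

```csharp
using System;

public class DatabaseFixture : IDisposable
{
    public DatabaseFixture()
    {
        // expensive one-time initialization shared by all tests in the assembly
        ConnectionString = "Server=localhost;Database=Tests;";
    }

    public string ConnectionString { get; }

    public void Dispose()
    {
        // one-time cleanup after the last test has run
    }
}
```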

And then create test classes which implement the IAssemblyFixture&lt;TFixture&gt; interface. If we want to have access to the fixture from the tests, we can inject an instance of TFixture via the constructor
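A sketch of such a test class, using the hypothetical fixture from above:

```csharp
using Xunit;

public class AvailabilityTests : IAssemblyFixture<DatabaseFixture>
{
    private readonly DatabaseFixture _fixture;

    // the same DatabaseFixture instance is injected into every test class
    // in the assembly that implements IAssemblyFixture<DatabaseFixture>
    public AvailabilityTests(DatabaseFixture fixture)
    {
        _fixture = fixture;
    }

    [Fact]
    public void Fixture_is_available()
    {
        Assert.NotNull(_fixture.ConnectionString);
    }
}
```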

Xunit.assemblyfixture will ensure that the fixture instance is shared by all the tests and is initialized only once.
If you want to share multiple classes across the assembly, you can, of course, use IAssemblyFixture multiple times.

3. Visual Studio 2017 – XUnit beta tests runners

As of now, VS 2017 RC requires beta runners of XUnit in order to run our unit tests. xunit.assemblyfixture seems not to cooperate with them smoothly. However, there is a simple workaround. All we have to do is add the following attribute to our test project
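Something like the snippet below – note that the exact framework type and assembly names depend on the xunit.assemblyfixture version you use, so treat them as assumptions and verify against the library's README:

```csharp
// assumed type/assembly names – check your library version
[assembly: Xunit.TestFramework(
    "Xunit.Extensions.AssemblyFixture.AssemblyFixtureFramework",
    "xunit.assemblyfixture")]
```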

And from now on, XUnit can correctly inject assembly fixtures into our test classes

Source code for this post can be found here


ASP.NET Core – tracking flow of requests with NLog

1. Introduction

A while ago I wrote an article about using MappedDiagnosticsLogicalContext for tracking request flow in your application. As we are moving our project to ASP.NET Core, I wanted to keep that functionality in place. Unfortunately, the MDLC layout renderer is no longer available in that framework (when targeting .NET Core). Luckily, there are two other renderers which can be used as a replacement:

  • aspnet-traceidentifier
  • aspnet-item

2. Configuring NLog

Before I dig into the details of the layout renderers mentioned above, we have to configure ASP.NET Core to use NLog as a logger. First of all, we need to grab the NLog.Web.AspNetCore NuGet package. Once the package is installed, we have to manually add NLog.config to the project (the file won't be added automatically by the NuGet installer). The next step is to configure NLog with our config, then configure NLogWeb, and finally register NLog in the logger factory. All of these steps should be done in the Startup class.
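A sketch of those steps, assuming the NLog.Web.AspNetCore API as it looked at the time this post was written:

```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    // configure NLog with our config file
    env.ConfigureNLog("NLog.config");

    // wire up the NLog.Web integration (gives renderers access to HttpContext)
    app.AddNLogWeb();

    // register NLog in the logger factory
    loggerFactory.AddNLog();

    app.UseMvc();
}
```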

The final step required for NLog to work is to register HttpContextAccessor in the IoC container of your choice. If you use the built-in one, you can do it like this
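In the ConfigureServices method:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
    services.AddMvc();
}
```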

3. Using aspnet-traceidentifier renderer

The aspnet-traceidentifier layout renderer allows you to obtain the value of the TraceIdentifier property from the current HttpContext. TraceIdentifier is basically a unique id which identifies the request. The renderer can be used as follows
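For example, in NLog.config (the target details are illustrative; the important part is the ${aspnet-traceidentifier} renderer):

```xml
<target xsi:type="File" name="allfile" fileName="logs/app.log"
        layout="${longdate}|${aspnet-traceidentifier}|${uppercase:${level}}|${logger}|${message}" />
```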

From now on (without any additional configuration), every log entry logged once the HttpContext is created will contain the TraceIdentifier.

4. Using aspnet-item renderer

If for some reason we don't want to use TraceIdentifier as our CorrelationId, we can leverage the aspnet-item renderer, which basically allows you to read data from the HttpContext.Items collection. However, in that case we are responsible for generating and storing a unique identifier of the request in HttpContext.Items. Fortunately, it is pretty straightforward to do with a custom middleware.
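A minimal sketch of such a middleware (the CorrelationId key name is just this post's convention, not anything built-in):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class CorrelationIdMiddleware
{
    private readonly RequestDelegate _next;

    public CorrelationIdMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // store a unique identifier of the request in HttpContext.Items
        context.Items["CorrelationId"] = Guid.NewGuid().ToString();
        await _next(context);
    }
}
```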

Of course, we also have to use this middleware in our application, so we have to add the following line
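```csharp
app.UseMiddleware<CorrelationIdMiddleware>();
```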

in the Configure method of our Startup class. Having all the pieces in place, we can now append the CorrelationId read from HttpContext.Items to our log entries with the following configuration
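Again, the target details are illustrative; the important part is the ${aspnet-item:variable=CorrelationId} renderer:

```xml
<target xsi:type="File" name="allfile" fileName="logs/app.log"
        layout="${longdate}|${aspnet-item:variable=CorrelationId}|${message}" />
```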

From now on, the CorrelationId will be automatically added to our log entries.

5. It works in multithreading scenarios

If you take a closer look at the logs presented above, you will see that both of the renderers log the proper CorrelationId/TraceId regardless of the thread from which the logger was called. This is possible thanks to the implementation of HttpContextAccessor, which uses AsyncLocal under the hood and thus preserves the value across async continuations, even when they run on different threads.

Source code for this post can be found here


Boxstarter – breaking infinite reboot loop

1. Introduction

Boxstarter is a great tool for configuring your machine without any user interaction. I've been using it for a while now; however, recently I noticed that for no apparent reason it wasn't able to finish the execution of a provisioning script, as it got stuck in an infinite reboot loop. It was quite frustrating, so I decided to investigate the problem. Here are my findings.

2. Detecting pending reboots

If you dive into the Boxstarter source code, you will notice that the script responsible for checking whether a reboot is necessary uses the Get-PendingReboot cmdlet. By default, Boxstarter doesn't export it, so if you want to see what it does, you have to add the following line to your Boxstarter script
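A sketch of that line – the path assumes the default Boxstarter layout, where Get-PendingReboot.ps1 lives in the Boxstarter.Bootstrapper module folder; adjust it if your installation differs:

```powershell
# dot-source the script that defines Get-PendingReboot so the cmdlet
# becomes available in the current session
. (Join-Path $Boxstarter.BaseDir "Boxstarter.Bootstrapper\Get-PendingReboot.ps1")
```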

Calling the Get-PendingReboot cmdlet will give you information about pending mandatory reboots. In my case, the output looked as follows
(screenshot: Get-PendingReboot output with RebootPending set to True)
As you can see, the RebootPending flag is set to true, and the reason is that the value of the PendingFileRenVal property is not null. In most cases a restart is indeed required; however, some applications leave information about a pending reboot in the registry (HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\PendingFileRenameOperations) and the operating system (in some cases) never cleans it up.
As a result, Get-PendingReboot indicates that a reboot is required and Boxstarter falls into an infinite restart loop.
(screenshot: the PendingFileRenameOperations value in the registry)

3. Fixing the problem

The obvious solution is to remove that value from the registry key manually. However, this approach didn't work for me, basically because the registry value was somehow re-added at the beginning of the execution of my script. That is why I decided to modify my script and add a function which clears the undesirable pending file renames of my choice.
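A sketch of such a function – the registry path is the real one mentioned above, but the "SQL" filter is just an example of a stubborn entry; replace it with whatever keeps reappearing on your machine:

```powershell
function Clear-PendingFileRenameOperations {
    $regKey  = "HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager"
    $regName = "PendingFileRenameOperations"

    $current = (Get-ItemProperty -Path $regKey -Name $regName -ErrorAction SilentlyContinue).$regName
    if ($current) {
        # keep only the entries that do not match the known offenders
        [string[]]$filtered = $current | Where-Object { $_ -notmatch "SQL" }
        if ($filtered) {
            Set-ItemProperty -Path $regKey -Name $regName -Value $filtered
        }
        else {
            # nothing left – remove the value entirely
            Remove-ItemProperty -Path $regKey -Name $regName
        }
    }
}
```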

As you can see, I take the current value of PendingFileRenameOperations and remove the entries which I know will stay there forever. Calling that function as the very first step of my installation script solved the problem, and I was able to successfully restore my machine from the Boxstarter script
(screenshot: the full Boxstarter script running to completion)

(screenshot: PendingFileRenameOperations cleared in the registry)

Entire Boxstarter script can be found here.


Logstash – reading logs from RabbitMQ

1. Introduction

In my previous post, I showed how to configure Logstash to parse logs from files. This is pretty useful; however, if your application is deployed on multiple servers, you usually log to some kind of central log storage – in my case to a queue, RabbitMQ to be more specific. In this post, I will show how to configure Logstash so that it reads the logs from that queue.

2. Preparing queue

Before we move to the Logstash configuration, first of all we have to prepare a RabbitMQ test instance. If you don't have RabbitMQ yet, go to this website and install the queue. Once the installation is done, go to the installation folder (C:\Program Files\RabbitMQ Server\rabbitmq_server-3.6.5\sbin in my case) and run the following command in the console (it enables the management plugin)
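```
rabbitmq-plugins enable rabbitmq_management
```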

This command will enable the RabbitMQ management website, so it will be easier for us to see what is going on in a given queue. In the next step, we have to prepare the queue the logs will be sent to. You can do it via the website we've just enabled (http://localhost:15672/) or via the RabbitMQ admin console. As I prefer to automate things as much as possible, I will do it via the command line. What is quite unusual when it comes to the RabbitMQ CLI is the fact that it is a Python script you have to download and run locally (it is not an executable). The script can be found on the management site under this address. Once the script is downloaded (in my case it is saved as rabbitmqadmin.py), you can start preparing the necessary elements: the exchange, the queue and the binding.
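A sketch of those three calls using rabbitmqadmin (a topic exchange is assumed, since we route by topic below):

```
python rabbitmqadmin.py declare exchange name=logger type=topic
python rabbitmqadmin.py declare queue name=MyAppLogginQueue
python rabbitmqadmin.py declare binding source=logger destination=MyAppLogginQueue routing_key=MyApp
```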

As you can see, I've created an exchange called logger which is bound to the MyAppLogginQueue queue using the MyApp routing key. This means that every message with the MyApp topic sent to the logger exchange will be pushed to MyAppLogginQueue.

3. Preparing Logstash

The Logstash configuration will be a modified version of my previous config; I will just add another input source. Here is a basic usage
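A sketch of the rabbitmq input block (guest/guest are RabbitMQ's default credentials, used here only as placeholders):

```
input {
  rabbitmq {
    host     => "localhost"
    queue    => "MyAppLogginQueue"
    user     => "guest"
    password => "guest"
  }
}
```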

As you can see, we will be consuming messages from MyAppLogginQueue, which is deployed on localhost. For the password and user properties, use your own credentials. That is basically it, so now it is time to see if everything is working.

4. Testing configuration

In order to test the configuration, you have to run Elasticsearch and Kibana, and use the new config for Logstash. I've shown how to do it in one of my recent posts. For sending messages to the queue, I will just use the RabbitMQ management website API. The API exposes the following endpoint ({vhost} and {name} stand for the virtual host and the exchange name):
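```
POST /api/exchanges/{vhost}/{name}/publish
```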

This endpoint accepts POST requests and can be used for publishing messages to a given exchange. In my case, the POST body will look more or less as follows (the payload is just a sample message)
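```json
{
  "properties": {},
  "routing_key": "MyApp",
  "payload": "Some log message",
  "payload_encoding": "string"
}
```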

and I will be sending it to the following address, where %2f is the URL-encoded name of the default virtual host /
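```
http://localhost:15672/api/exchanges/%2f/logger/publish
```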

Note that I will be sending messages to the exchange, not to the queue itself. The exchange’s responsibility is to route the message to all bound queues. Here is how it looks in practice
(animation: messages published via the API appearing on Kibana's dashboard)
As you can see, our configuration is valid and messages show up on Kibana's dashboard almost in real time.

Full Logstash config can be found here
