Friday, September 27, 2013

EntLib6 - EventSource & Keywords for Semantic Logs

In this post we are going to discuss two details that are central to defining an event source: EventLevel and EventKeywords. Both are central to defining and enabling events, yet it is very easy to make a mistake in using them, or to expect the wrong behavior from the values assigned to them. The value assigned might implicitly include other values as well, which can come as a surprise.

Event Level
As we discussed, the event level selected when enabling a listener is the maximum level that will be forwarded to the subscribed sinks. All events assigned a level equal to or lower than the specified enumeration value are considered enabled.

I have really felt that IntelliSense is not very helpful in this case. The levels are listed in alphabetical order of their names. It would have been really useful if they were listed in the order of their values [but there are some things in life you cannot get, hence the void... :)]

Let us try to make it a little easier. From top to bottom (most verbose to most severe), the levels spell VIWECL: Verbose, Informational, Warning, Error, Critical, LogAlways. In order to remember this, we can come up with some catchy phrase like Very Intelligent Wife so Extremely Cool Life. Whatever level we specify when enabling the event source, all the lower (more severe) levels are automatically included. I think we should not forget it now. When someone special is involved, it is better to try not to forget...

Keywords As Power of 2
It must be remembered that keywords must be assigned values that are powers of 2. This makes it easy to enable and disable events using bit masks. Here is an inner class Keywords defined in an EventSource. You can see that the values are 1, 2, 4 and 8, i.e. 2 raised to the powers 0, 1, 2 and 3 respectively.
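A sketch of such a nested class, reconstructing the missing listing (the keyword names follow the student-admission example used later in this post; the enclosing type name is illustrative):

```csharp
using System.Diagnostics.Tracing;

public sealed class InstituteEventSource : EventSource
{
    // Nested Keywords class: each value is a distinct bit, so keywords can be combined
    public static class Keywords
    {
        public const EventKeywords Creation = (EventKeywords)1;            // 2^0 = 0001
        public const EventKeywords AdmissionApproved = (EventKeywords)2;   // 2^1 = 0010
        public const EventKeywords FeeDeposited = (EventKeywords)4;        // 2^2 = 0100
        public const EventKeywords TransferCreditHours = (EventKeywords)8; // 2^3 = 1000
    }
}
```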

In order to understand how we can use these keywords, we need to understand the bit mask for these keywords. We have the bit masks for Creation, AdmissionApproved, FeeDeposited and TransferCreditHours.

Definition of EventSource
Let us introduce an EventSource. The event source has four [Event] based methods. The methods use the keyword values defined above, and each is also assigned a particular event level.
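The listing below is a reconstruction under stated assumptions: the method names, the level assigned to each event, and the InstituteEventSource name are illustrative; only the keyword names and the keyword/level pairing pattern come from the post. Each [Event] method pairs one keyword with one level:

```csharp
using System.Diagnostics.Tracing;

[EventSource(Name = "InstituteEventSource")] // hypothetical provider name
public sealed class InstituteEventSource : EventSource
{
    public static readonly InstituteEventSource Log = new InstituteEventSource();
    private InstituteEventSource() { }

    // Same nested Keywords class as shown in the previous section
    public static class Keywords
    {
        public const EventKeywords Creation = (EventKeywords)1;
        public const EventKeywords AdmissionApproved = (EventKeywords)2;
        public const EventKeywords FeeDeposited = (EventKeywords)4;
        public const EventKeywords TransferCreditHours = (EventKeywords)8;
    }

    [Event(1, Level = EventLevel.Informational, Keywords = Keywords.Creation)]
    public void StudentCreated(string studentId)
    { if (IsEnabled()) WriteEvent(1, studentId); }

    [Event(2, Level = EventLevel.LogAlways, Keywords = Keywords.AdmissionApproved)]
    public void AdmissionApproved(string studentId)
    { if (IsEnabled()) WriteEvent(2, studentId); }

    [Event(3, Level = EventLevel.Warning, Keywords = Keywords.FeeDeposited)]
    public void FeeDeposited(string studentId)
    { if (IsEnabled()) WriteEvent(3, studentId); }

    [Event(4, Level = EventLevel.Critical, Keywords = Keywords.TransferCreditHours)]
    public void TransferCreditHours(string studentId)
    { if (IsEnabled()) WriteEvent(4, studentId); }
}
```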

Special Event Level & Keyword
EventLevel.LogAlways has a special meaning when we use it to enable events. It means: ignore levels when filtering is applied, i.e. all levels are considered. Since this is also a perfectly valid EventLevel value to assign to an [Event] method, it gets confusing if you want to enable only the events assigned this level, because it has a specific meaning in the enabling scenario. I think we should avoid it when defining our events. I still have it in my EventSource just to give you an example.

Similar is the case for Keywords.All. It means to include all keywords. Basically, when we don't specify a keyword while enabling a scenario, all those events which are assigned a specific keyword value are simply ignored. If we want to include all of them instead, we can use this value for EventKeywords when enabling events. It has a value of -1. We can also use this value for enabling all keywords in the xml configuration of the Semantic Logging Service.

Configuration for In-Proc usage
As we have discussed before, we can use SLAB in both In-Proc and Out-Proc scenarios. For the In-Proc scenario, we can enable the events using ObservableEventListener, specifying the event levels and keywords we are specially interested in. Let's first consider the special values for level and keyword. Here we are using EventLevel.LogAlways and Keywords.All, which is to say: I want to log everything.
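A minimal in-proc sketch of this, assuming the InstituteEventSource-style source from earlier (its name and methods are illustrative); Keywords.All here is SLAB's helper constant:

```csharp
using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

internal static class Program
{
    private static void Main()
    {
        var listener = new ObservableEventListener();

        // LogAlways + Keywords.All: ignore level filtering and match every keyword
        listener.EnableEvents(InstituteEventSource.Log, EventLevel.LogAlways, Keywords.All);
        listener.LogToConsole();

        InstituteEventSource.Log.StudentCreated("STD-001"); // sample event

        listener.DisableEvents(InstituteEventSource.Log);
        listener.Dispose();
    }
}
```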

And it does have all the logs, no matter what Level or Keywords they are assigned with. Here is the output for the above code:

EventLevel.LogAlways with other keywords
Let's see the following combination. Here we are using EventLevel.LogAlways together with two keywords.
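A sketch of that call, under the same naming assumptions as before:

```csharp
using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

var listener = new ObservableEventListener();

// LogAlways is special here: level filtering is skipped, only the keywords matter
listener.EnableEvents(
    InstituteEventSource.Log,
    EventLevel.LogAlways,
    InstituteEventSource.Keywords.Creation | InstituteEventSource.Keywords.AdmissionApproved);
listener.LogToConsole();
```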

If you go back and look at the definition of our EventSource, we don't have this combination of level and keywords for any event. There is only one event with its level assigned as EventLevel.LogAlways, but that one has a different keyword assigned.

So does it mean it should include none of the events? Actually, no!!!

As we have discussed before, EventLevel.LogAlways has a special meaning when enabling events: ignore whatever level is assigned to the events and just use the keywords specified for enabling them. So it should enable exactly the two events defined with the specified keywords. If we run this code, it shows the following output:

Level is the Max Level for Events
As we have discussed with the stairs diagram for events, the level assigned when enabling is the maximum value of EventLevel considered for logging. In the following example, we are using EventLevel.Error for enabling the events. Looking at the diagram, we should realize it automatically includes EventLevel.Critical and EventLevel.LogAlways. Let's change the configuration as follows:
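A hedged sketch of the changed configuration (same illustrative names as in the earlier snippets):

```csharp
using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

var listener = new ObservableEventListener();

// EventLevel.Error also admits Critical and LogAlways (they have lower numeric values)
listener.EnableEvents(InstituteEventSource.Log, EventLevel.Error, Keywords.All);
listener.LogToConsole();
```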

As expected, it includes events assigned with levels as EventLevel.Error, EventLevel.Critical and EventLevel.LogAlways. Here is the output:

-matchAnyKeyword for Out-Proc Configuration
We can use a similar configuration for the Out-Proc scenario. In the Semantic Logging service provided by the Semantic Logging Application Block, the matchAnyKeyword attribute is provided for source configuration. The value assigned to this attribute is treated differently: it is the sum of all the keywords we want to enable. Here we want to enable Keywords.Creation and Keywords.AdmissionApproved, so we just add the two numbers (1 + 2 = 3) to calculate the value to assign to the attribute. Equivalently, we can OR the two keyword bit masks together and convert the result to decimal:

Let's use this in the configuration for the service. Here we are interested in events with a maximum level of EventLevel.Informational, and specially in the two keywords computed above.
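A hedged sketch of the source configuration fragment. The sink and source names are illustrative (the post itself uses the Windows Azure sink); the point is the matchAnyKeyword value, Creation (1) + AdmissionApproved (2) = 3:

```xml
<sinks>
  <consoleSink name="ConsoleSink">
    <sources>
      <!-- level is the maximum level; matchAnyKeyword: 1 + 2 = 3 -->
      <eventSource name="InstituteEventSource"
                   level="Informational"
                   matchAnyKeyword="3" />
    </sources>
  </consoleSink>
</sinks>
```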

Since we are using the Windows Azure Sink with the Storage Emulator, we should be able to see the following events logged in the SLAB table.


Thursday, September 26, 2013

EntLib6 SLAB - Events Versioning

Semantic Logging Application Block's main focus is event consumption. It is based on EventSource, introduced in .net framework 4.5, and allows these events to be consumed In-Proc and Out-Proc. But no system is free from change, and logging must accommodate system changes too. EventSource allows its events to be tagged with version numbers; this is part of the metadata defined for [Event] methods.

Let's assume we deploy the example application as developed in the previous example. After two months the client calls and asks if we could add an additional field to Students. The field would be used to record the Age of the student. Since student creation is being logged, we need to log this information as well. In this post we will try to accommodate this change.

It is also possible that we might need to add additional methods to the EventSource due to additional logging needs. We need to make sure that we are assigning them appropriate keywords and levels as expected by the downstream consumption system. As discussed before, they can also be outside the system.

Versioning & EventSource
We must remember that these versions cannot co-exist as overloads in an EventSource, as in the following code. As we have discussed before, these methods map to event types in the ETW infrastructure, so the overload would generate a duplicate key, which is not allowed.
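A sketch of the disallowed shape (method and parameter names follow the series' example and are assumptions); both overloads claim the same event Id, which is the problem:

```csharp
using System.Diagnostics.Tracing;

public sealed class StudentEventSource : EventSource
{
    // Both methods map to ETW event Id 1 - this duplicate key is rejected
    [Event(1, Level = EventLevel.Informational)]
    private void StudentCreatedCore(string studentId)
    {
        if (IsEnabled()) WriteEvent(1, studentId);
    }

    [Event(1, Version = 1, Level = EventLevel.Informational)]
    private void StudentCreatedCore(string studentId, string studentName)
    {
        if (IsEnabled()) WriteEvent(1, studentId, studentName);
    }
}
```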

Here we have just overloaded the StudentCreatedCore() method decorated with EventAttribute. The overload has an additional parameter, studentName. We have kept the Id the same for the event but assigned a value to Version. We have also added the same property to the other overloaded method. In the public method, we decide which overload to use based on a certain condition. The world is really a happy place to live...

Let's see how our unit test behaves in this case. Hmm, it fails with details about a possible issue at runtime because of more than one event with the same name.

Let me tell you that the code compiles just fine in this case. The check is there to save us from experiencing an exception at runtime and then cursing ourselves for missing the obvious.

Let's update the definition of the EventSource removing the earlier version of the [Event] method.

Versioning & Events Consumption
Events are generated to be consumed by In-Proc or Out-Proc systems. The event details can be used by automation systems to generate reports and notifications for the infrastructure and management teams responsible for the health and monitoring of the system. These systems expect a certain payload and message format. Since events can be persisted, data with historic and current payload structures will co-exist in the persistence storage, and consumers need some way to decide which format applies. Version is the best candidate for this: the reports can use versioning to decide which payload format the automation system should use. We must remember that we cannot enable / disable events based on versions, or even per [Event].

Let's update the configuration of Semantic Logging Service for the update. Here we are adding an Azure Sink. The sink is enabled for the EventSource defined above for all the events with a maximum level set as Informational.

As we generate these events in the source system, they are provided to the Semantic Logging Service by the ETW infrastructure. TracingService hooks up to the ETW infrastructure for these events. As the events are received, they are passed to the available sinks. The sinks are used based on their configuration of the event sources and keywords they are interested in. For our events, they are pushed to the Azure development table storage. Just notice the payload for the new version and the old version: this way, they can easily be identified and operated on.


Wednesday, September 25, 2013

Enterprise Library 6 - SLAB - Supporting Complex Types for Payload

Previously we discussed that only certain primitive types are supported by EventSource, introduced in .net framework 4.5. This post is an attempt to show how we can work around this limitation. We can decorate EventSource methods with [Event] and [NonEvent] attributes. The other most important point is the following:

[Event] methods can be private.

We can use this to include only private [Event] methods, which are used to generate the ETW manifest from the EventSource. Additionally, we can introduce public methods on the EventSource type that accept non-primitive custom types as parameters. These object-oriented types make writing logs easier without putting any load on the underlying infrastructure supporting the event mechanism.

Let's create a sample C# project targeting .net framework 4.5. We add a type EventSourceWithComplexType to the project. As you can see below, this is an EventSource. Here we are using Lazy initialization, introduced in .net framework 4.0, for the singleton implementation [Please refer here for singleton implementation]. It has a private constructor to make sure it cannot be instantiated from outside. The most noticeable things are its two methods. The StudentCreatedCore() method, decorated with the [Event] attribute, has a parameter list containing only types supported by the ETW infrastructure. It has a corresponding StudentCreated() method with the [NonEvent] attribute, which has a public access modifier and a non-primitive parameter type. The most important thing is that it uses the StudentCreatedCore() method.
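A sketch of what that type might look like, reconstructed from the description above (the Student type's members and the event's parameters are assumptions):

```csharp
using System;
using System.Diagnostics.Tracing;

public class Student
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public sealed class EventSourceWithComplexType : EventSource
{
    // Lazy<T>-based singleton; the private constructor blocks outside instantiation
    private static readonly Lazy<EventSourceWithComplexType> instance =
        new Lazy<EventSourceWithComplexType>(() => new EventSourceWithComplexType());

    public static EventSourceWithComplexType Log
    {
        get { return instance.Value; }
    }

    private EventSourceWithComplexType() { }

    // Private [Event] method: only ETW-supported types; this is what the manifest is built from
    [Event(1, Level = EventLevel.Informational)]
    private void StudentCreatedCore(string id, string name)
    {
        if (IsEnabled()) WriteEvent(1, id, name);
    }

    // Public [NonEvent] method: accepts the complex type and unwraps it
    [NonEvent]
    public void StudentCreated(Student student)
    {
        StudentCreatedCore(student.Id, student.Name);
    }
}
```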

Can the implementation satisfy EventSourceAnalyzer?
We have discussed that we can use EventSourceAnalyzer for testing our EventSource implementation [Discussion]. Let's see if we can satisfy EventSourceAnalyzer for the above implementation. We just need to call the InspectAll() method with an instance of the EventSource. Here we are using xUnit for testing, supported by ReSharper. You need to install the nuget package. You also need the EnterpriseLibrary.SemanticLogging package to use EventSourceAnalyzer.
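The test can be sketched like this (the fixture and test names are mine):

```csharp
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Utility;
using Xunit;

public class EventSourceWithComplexTypeFixture
{
    [Fact]
    public void ShouldPassAllAnalyzerInspections()
    {
        // Throws EventSourceAnalyzerException if any run-time check would fail
        EventSourceAnalyzer.InspectAll(EventSourceWithComplexType.Log);
    }
}
```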

As a matter of fact, EventSourceAnalyzer has no complaints and the unit test passes. Zindabad!

Listening for Events
Now let's see if we can listen for events using this EventSource implementation. Here we are attaching the EventSource to the ConsoleSink using ConsoleLog [Discussion]. We are using the same EventListener to enable the events. Here we just use the public method on the EventSource type with an instance of the Student type. Let's see if we can see the event's data on the console [keep the spirit high, friend!!!].

And we do see the event's data being printed on the console, in the same format as we provided in the StudentCreatedCore() method. The runtime had no problem generating the ETW manifest from the private [Event] method and using it at run-time. Zindabad!


Love for other people what you love for yourself.    [Prophet Muhammad - Salallahu-Alaihe-Wasallam]

Monday, September 23, 2013

Semantic Logging Application Block In-Proc Usage

Semantic Logging Application Block provides various types of sinks, including Console, File based, SQL Server and Windows Azure Table based sinks. We have been discussing how we can use these sinks out-proc. In this post we will focus on how we can use these sinks in-proc, i.e. in the same process where the ETW events are generated. This is specially useful for scenarios where running the Semantic Logging Service as a separate process is not feasible. I think the most common use would be desktop applications.

Let's create a sample console application SLABInProcApp targeting .net framework 4.5.

Since we will be using the sinks in process, we need the necessary nuget packages. As we know, the FlatFile, RollingFlatFile and Console sinks are directly available in the EnterpriseLibrary.SemanticLogging package. The SQL Server and Windows Azure Table sinks are available in the EnterpriseLibrary.SemanticLogging.Database and EnterpriseLibrary.SemanticLogging.WindowsAzure packages respectively. Since these packages have a dependency on the EnterpriseLibrary.SemanticLogging package, installing these two should take care of all the necessary dependencies. These are the packages and their dependencies as shown in Package Visualizer.

And this is how the packages.config should look like for the project.

Now let us introduce the custom EventSource we have been using in these posts. As we have seen, it must inherit from the EventSource type available in the System.Diagnostics.Tracing namespace. We can also introduce the necessary nested types for keywords and tasks. Here we have two logging methods with Event Ids set as 1 and 2.

IObservable<EventEntry> Implementation
SLAB provides ObservableEventListener. This is a specialization of the EventListener type available in .net framework 4.5. As its name suggests, it is an observable type implementing IObservable&lt;EventEntry&gt;. SLAB provides various extension methods to hook this observable type to the available sinks. To keep the world simple, for a given sink type X, the available extension is named LogToX().

Here we are interested in pushing the logs to the Console, SQL Server and a Windows Azure table, so we need ConsoleSink, SqlDatabaseSink and WindowsAzureTableSink. These are provided through the ConsoleLog, SqlDatabaseLog and WindowsAzureTableLog types respectively, through their corresponding LogToX() methods. The extension methods for logging to SQL Server and Windows Azure Table become available by installing their respective nuget packages.

There are some prerequisites before we can push data to SQL Server and a Windows Azure table in this example. We need to create the database (including the required objects). We also need to make sure the Storage Emulator is running on the machine running this application; this is only required for development. For a production application we can provide the necessary connection details. We have discussed the prerequisites for SQL Server and Windows Azure table here.
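Putting the pieces together, a minimal sketch (the instance names and connection strings are placeholders; ApplicationEventSource stands for the custom EventSource above):

```csharp
using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

internal static class Program
{
    private static void Main()
    {
        var listener = new ObservableEventListener();
        listener.EnableEvents(ApplicationEventSource.Log, EventLevel.LogAlways, Keywords.All);

        // One LogToX() call per sink
        listener.LogToConsole();
        listener.LogToSqlDatabase("SLABInProcApp",
            @"Data Source=(localdb)\v11.0;Initial Catalog=Logging;Integrated Security=True");
        listener.LogToWindowsAzureTable("SLABInProcApp", "UseDevelopmentStorage=true");

        // ... now raise the two events (Ids 1 and 2) from the custom EventSource ...

        listener.DisableEvents(ApplicationEventSource.Log);
        listener.Dispose();
    }
}
```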

And That's it! Ain't that simple!!!

Now we can just run the application. Since we have subscribed to the ConsoleSink through ConsoleLog.LogToConsole() extension method for IObservable<EventEntry> type, we should be seeing the following on the console.

Subscribing to WindowsAzureTableSink should push the following in the development table storage.

And finally we should be able to see the following data in Traces table in the SQL Server database used here:

Simplifying it Further
We can simplify the code even further. We don't need to instantiate ObservableEventListener ourselves: the *Log types also provide CreateListener() methods. In addition to returning a new instance of ObservableEventListener, they also hook up the returned EventListener instance to the respective sink.

Event Listeners and Application Life Cycles
Please note that this is a very simple example, where the subscriptions and logging appear in the same method. This would not be the case in a real-life application. Think of Trace Listeners: they are added at the beginning of the application life cycle. EventListener subscriptions can be done at a similar stage; they are most suitable for application seams (Mark Seemann).

There are still some limitations about implementing interface type by the EventSource.

Subscribing Custom IObserver<EventEntry>
Since ObservableEventListener implements IObservable<EventEntry>, we can subscribe any IObserver<EventEntry> with this. Let's implement a sample IObserver<EventEntry> implementation.
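A sketch of such an observer (the properties read from EventEntry and the messages printed are illustrative):

```csharp
using System;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

public class CustomObserver : IObserver<EventEntry>
{
    public void OnNext(EventEntry value)
    {
        // Called for every event forwarded by the listener
        Console.WriteLine("Event {0} raised at {1}", value.EventId, value.Timestamp);
    }

    public void OnError(Exception error)
    {
        Console.WriteLine("Listener error: {0}", error.Message);
    }

    public void OnCompleted()
    {
        Console.WriteLine("No more events.");
    }
}
```

Subscribing is then a single call, e.g. `listener.Subscribe(new CustomObserver());`.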

We can now use the CustomObserver to subscribe to the ObservableEventListener. Its OnNext() method is called for every new event from the listener.

Download Code

Saturday, September 21, 2013

EntLib6 - Semantic Logging using Windows Azure Sink - Table Storage

This post is part of a series of posts discussing the Semantic Logging Application Block [SLAB]. Earlier we discussed how we can push the log data to file and SQL Server storage. In this post we discuss how we can use Windows Azure Table Storage as the destination of these logs.

Windows Azure Sink Configuration
As we have discussed, updating the destination and format of the event data requires no changes in the application generating these ETW events if we are using out-proc consumption. SLAB provides the Semantic Logging Service, which we can use out-of-the-box to make all these changes without even bringing the application down. The configuration is done in the SemanticLogging-Svc.xml configuration file for the service. Here we are using the Windows Azure Storage Emulator.

Storage Emulator Not Running / Installed
For our development machine, we first need to install the Windows Azure SDK to run the above. We specifically need the Windows Azure Storage Emulator running for this example. The emulator uses a default account key to test the application logic. If the emulator is not running on your machine, the above configuration produces the following error in your Semantic Logging Service console.

Installing Windows Azure SDK for .Net 2.1
The SDK is currently on the version 2.1. We can download it from Microsoft Download Center. You can make a selection based on Visual Studio edition you are running on your development machine.

It uses Web PI (Platform Installer) for the installation.

Running Storage Emulator
Running the emulator is easy. We can find it in the start menu. In order to run it from the Windows Azure Command Prompt, we need to use its application name i.e. dsserviceldb.

You should be able to see this running in the System Tray. I have it running on my Windows 7 machine as follows:

Just make sure that, it is running Table Storage service. We need it for Azure Sink as used in the example post.

Now let us run the same application as used in the previous post. The application generates two ETW events with Ids 1 and 2. As we run it, we should be able to see the SLAB table in the Development storage. Here we are using the extensions installed into Visual Studio 2012 by the Windows Azure SDK.

The extension also supports viewing the storage data. For our example the payload is recorded as follows:

But this seems a bit different from the SQL Server sink data. Here additional columns have been added for the payload fields: a union of all the fields of all the event types used by the ETW event provider, defined with the help of EventSource in the source application.

Thursday, September 19, 2013

Entlib6 SLAB - Custom IEventTextFormatter Implementation

We have seen how we can use IEventTextFormatter to format the data from the source sinks using the Semantic Logging Application Block. SLAB provides three implementations of the interface, and we have discussed all three of them. You might be wondering what to do if you want to format the data differently for a certain stakeholder. So the main question is whether we can introduce and utilize a custom implementation of IEventTextFormatter. This post is a continuation of the previous post, discussing a custom event text formatter.

Let's create a simple class library project named EventSourceCustomizations and install the EnterpriseLibrary.SemanticLogging nuget package into it. We add the following implementation of IEventTextFormatter to the project.
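A sketch of such a formatter, assuming a CSV layout and illustrative field choices (the class name and the exact columns are mine):

```csharp
using System.IO;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Formatters;

public class CsvEventTextFormatter : IEventTextFormatter
{
    public void WriteEvent(EventEntry eventEntry, TextWriter writer)
    {
        // One CSV line per event entry
        writer.WriteLine("{0},{1},{2},{3}",
            eventEntry.Timestamp,
            eventEntry.EventId,
            eventEntry.Schema.Level,
            eventEntry.FormattedMessage);
    }
}
```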

This is a very simple implementation of the interface. The formatter just formats the event entry data as CSV and writes it using the TextWriter passed as a method argument. This is enough to define a custom event text formatter.

In-Proc Usage of Custom IEventTextFormatter implementation
In order to use the IEventTextFormatter in-process, where the ETW events originate, we need to add an assembly reference to the library in the source project. As discussed before, we can use ObservableEventListener to consume the events in-process. SLAB provides extension methods on that type to log to the console, which accept an IEventTextFormatter as one of the parameters. Here we can pass our custom implementation.

Running the above code with our sample ApplicationEventSource shows the data being pushed to the console as follows:

Semantic Logging Service - Custom IEventTextFormatter
We can appreciate a custom IEventTextFormatter even more when we use it out-proc. In this case, we need to make no change to the source application. It is possible that during the lifetime of the application we realize we need the event data in a different format; we don't need to change a single line of code in the source application. The Semantic Logging Service configuration completely supports this change. We can add the new custom IEventTextFormatter to the SemanticLoggingSvc configuration as follows:

When the application logs the ETW events using the same source, Semantic Logging Service uses the new custom event text formatter and writes the data to the console as follows:

Just make sure that you have copied the assembly containing the custom IEventTextFormatter to the service folder:

Please note that this can be further improved by incorporating IntelliSense support. That requires a schema definition for the custom IEventTextFormatter. We also need to provide an implementation of IFormatterElement for the element. Then we would be able to use the type directly instead of going through CustomEventTextFormatter. Please refer to the development guide for Enterprise Library 6 for such an example.


Wednesday, September 18, 2013

EntLib Semantic Logging App Block - Event Text Formatting

In the discussion around Reactive Event Sourcing, we briefly discussed text formatting for ETW messages. In this post, we try to understand text formatting in a little more detail, discussing the different options available for formatting the event data provided by EventSource, as supported by the Semantic Logging Application Block [SLAB].

In the examples for this post, we are going to be using Semantic Logging Service. We have discussed about it in the past. You can get yourself started with the installation and configuration of the service here: [Semantic Logging Service]

Sample Event Source
Let's create a sample console application EventSourceFormatting.csproj. We are adding the following EventSource to the project. It has only a single logging method which can be used when there is a request for password hashing.
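A sketch of that EventSource, reconstructed from the description below (the provider name, message text and singleton shape are assumptions; the method and parameter names come from the post):

```csharp
using System.Diagnostics.Tracing;

[EventSource(Name = "EventSourceFormatting")] // hypothetical provider name
public sealed class ApplicationEventSource : EventSource
{
    public static readonly ApplicationEventSource Log = new ApplicationEventSource();
    private ApplicationEventSource() { }

    // Message placeholders {0} and {1} are filled from req and userName respectively
    [Event(1, Level = EventLevel.Informational,
        Message = "Password hashing requested ({0}) for user {1}")]
    public void RequestForPasswordHashing(string req, string userName)
    {
        if (IsEnabled()) WriteEvent(1, req, userName);
    }
}
```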

Message Formatting
EventSource allows us to add a formatted message to the payload. Just look at the definition of the RequestForPasswordHashing() method. Here we are decorating the event method with the Message property. Notice that we use it like string.Format(), with placeholders for the arguments. The arguments come from the event method's parameters, with the same indexing; so in this example, req is used for {0} and userName for {1}. The runtime formats the message with them and assigns it to the event's Message.

The message and payload for the above event should be logged as follows:

Here we have called the event method as follows:

If you are using Semantic Logging Service to test this code with Sql LocalDB, you can use the following service configuration:

Message formatting, as described above, can enrich the message with additional details from the payload. But we might need more than that: we might want to add some text as a header and footer of the log being generated if we are using a file or console based sink.

SLAB supports text, JSON and XML based text formatting. The types are available in Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Formatters namespace available in Microsoft.Practices.EnterpriseLibrary.SemanticLogging assembly. The assembly is downloaded using EnterpriseLibrary.SemanticLogging nuget package.

EventTextFormatter is the simplest of all the text formatters. It supports qualifying the event's data with additional details, including a header and footer. These are not the header and footer of the whole log file; each event's data is enveloped in them. Here we can add the following FlatFileSink to our service configuration. It has a simple EventTextFormatter with some specified header and footer details.

The event level specified on the sink is, as before, the maximum verbosity: if more severe events, such as Error level events, are generated by one of the event sources specified for the sink, they are also logged. You can notice how our event data is decorated with the header and footer specified in the above configuration, applied by the EventTextFormatter used by the FlatFileSink.

There is one more amazing option in EventTextFormatter which is missing from the other supported IEventTextFormatter implementations. The option is called VerbosityThreshold. It further qualifies which events we want this detailed formatting applied to. Let's first update our events as follows:

In the following example, we are interested in all the events from the specified EventSource with a maximum level of Informational. But we want the additional verbose details only for events at the Informational level.

So all the events with a level less than Informational won't have the detailed verbose event data. By default, VerbosityThreshold is set to Error; that is why we were already seeing the detailed event data for Error level events. For our example, the data is pushed to the sink as follows:

We can use XmlEventTextFormatter without any changes to the source application. Below we update the configuration of the Semantic Logging Service. We are using the same event source, still interested in the same event level, and even logging to the same file, just in a different format i.e. Xml.

Here is how the same event data is logged in a different format. We don't have the header and footer data as we had for EventTextFormatter.

You can notice that message is still formatted in the same way.

Similar is the case for pushing the event data to the sink in JSON format. In the following Semantic Logging Service configuration, we have used JsonEventTextFormatter with FlatFileSink.

And now the same event data is pushed out to the file in JSON format. We don't even need to restart the source application for this change. Just look at the same data in a different format; this time it is JSON.

Why the DateTimeFormat Property?
In our EventSourceAnalyzer discussion we mentioned that the DateTime data type is not supported for the event payload. Yet if you look at the definition of IEventTextFormatter, it has a DateTimeFormat property. Why in the world would you need to format a type that is not even supported? Just don't get too excited!!!

The DateTimeFormat is not for the payload data; it is for the event data. If you look at the log file, you can see that all events are logged with a TimeStamp. This formatting is applied to that TimeStamp. Let's update the service configuration as follows:

Just notice the TimeStamp format. You can compare this with the TimeStamp format before.

Source Sinks & Supporting EventTextFormatters
It must be remembered that not all sinks support every text formatter; only a few support them. In the following image, I have tried to list the sinks and the event text formatters they support.

A sink supports at most a single text formatter, which can be any of the supported text formatters presented above.

Changing the Semantic Logging Service Configuration
It must be remembered that we don't need to restart the service after every change in the service configuration, including the addition / removal of an event sink. Here we are running the service as a console application. Just look at the detailed message when we changed the configuration. Isn't that great? Zindabad!!!


Sunday, September 15, 2013

Event Source & Event Provider Mapping

We have been discussing that EventSource is used to generate an ETW event provider. I thought it would be good to discuss how it maps to the ETW world. In this brief post, we are going to discuss just that.

As we have discussed, the ETW manifest is generated based on the definition of our EventSource. This is a type which inherits from the EventSource class in the System.Diagnostics.Tracing namespace in the mscorlib assembly. There is a one-to-one correspondence between the EventSource in our code and the resulting event provider. Let's see how the elements of the customized EventSource type map to the ETW event provider.

Naming Conventions & Battle b/w Dev & IT
Defining your own event source can get you into a battle over naming conventions between your code and ETW. Since the ETW infrastructure is not maintained by the development team, they might be following some standard they want you to adhere to. This is specially true if your ETW events are destined for the Windows Event Log. Here we have discussed how we can target our EventSource based events to the Windows Event Log.

In order to make this easier, EventSource allows specifying a friendly name for the event source using EventSourceAttribute. This is a class-level attribute, and this name is used to create the ETW provider for the EventSource. Now, your system management team might require you to generate events in a folder structure based on their policy. To support that, we can use dashes in EventSourceAttribute.Name. In the following example we define the attribute as [EventSource(Name = "Samples-EventSourceDemo-EventLog")]. By default, the class name is used for the event provider name. For the event provider's unique identifier, EventSourceAttribute provides a Guid property.
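A sketch of such a declaration (the class name and the Guid value below are purely illustrative):

```csharp
using System.Diagnostics.Tracing;

// Dashes in Name drive the Samples > EventSourceDemo > EventLog folder
// structure expected by the management team's policy.
[EventSource(Name = "Samples-EventSourceDemo-EventLog",
             Guid = "cfc20a02-e499-41b5-a06f-52c6ae4a55d1")]
public sealed class EventLogDemoEventSource : EventSource
{
    // [Event] methods go here
}
```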



Saturday, September 14, 2013

Event Source Analyzer for Potential Run-time Errors

You might write the best EventSource with all the required methods for logging. But we must remember that the event source generates a manifest, and the manifest is used at run-time to write ETW events. A mismatch can result in unexpected run-time errors. One wonders how those issues could be found out earlier in the development cycle. The Semantic Logging Application Block provides EventSourceAnalyzer for just this purpose. It is used to perform run-time checks on an EventSource instance.

The type provides inspection methods. In case of a possible failure during verification, it throws EventSourceAnalyzerException. I think the best use of this type is to write unit tests which run these checks against the defined EventSource.

Checks Performed
Now what checks are performed by EventSourceAnalyzer? It performs a three-point check on an EventSource. The tests are as follows:
  1. Is there any error when the given EventSource is enabled by an EventListener?
  2. Can event schema be requested from the EventSource?
  3. Can all Event based methods be invoked in the specified EventSource class?
If the answer to all the above questions is Yes, the unit test passes!

Let's Start Testing Our EventSource
Here is a method from an EventSource. The compiler generates the manifest for ETW events based on the parameters of the event method. So, at run-time, calling WriteEvent with a different parameter list than the generated manifest would cause a failure. That is the case with the following EventSource method.
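A sketch of this kind of mismatch (the OrderEventSource type and its members are hypothetical, made up for illustration):

```csharp
using System.Diagnostics.Tracing;

public sealed class OrderEventSource : EventSource
{
    public static readonly OrderEventSource Log = new OrderEventSource();

    // The manifest is generated from this signature: (int orderId, string customer).
    // But WriteEvent is only passed orderId, so the manifest and the
    // run-time payload disagree — exactly what the analyzer flags.
    [Event(1, Level = EventLevel.Informational)]
    public void OrderCreated(int orderId, string customer)
    {
        WriteEvent(1, orderId); // BUG: 'customer' is never passed
    }
}
```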

If we use EventSourceAnalyzer on an EventSource containing this method, it should result in an exception providing the details about the possible issue at run-time.

Generating an ETW event with a non-existent EventId would also result in a failure at run-time. Let's see the following method, where the Event attribute specifies the EventId which would be used to generate the manifest for the associated event.
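A sketch of the EventId mismatch (BillingEventSource and its event are hypothetical names for illustration):

```csharp
using System.Diagnostics.Tracing;

public sealed class BillingEventSource : EventSource
{
    public static readonly BillingEventSource Log = new BillingEventSource();

    // The manifest declares this event with Id = 2, but WriteEvent is
    // called with 3 — an Id which exists nowhere in the manifest.
    [Event(2, Level = EventLevel.Informational)]
    public void FeeDeposited(int studentId)
    {
        WriteEvent(3, studentId); // BUG: should be WriteEvent(2, studentId)
    }
}
```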

Now, at run-time, it would try to generate an event with an Id which doesn't exist in the manifest, just because of an incorrect argument to the WriteEvent method. We can catch that earlier in the cycle. The analyzer would throw the following exception in this case.

There is a typo in this exception message. I have logged a Connect item for it. If you really dislike the typo then you can vote for the item.

There are only a handful of types which can be used for ETW events. Someone has specified an unofficial list here, as follows: [Boolean, SByte, Byte, Int16, UInt16, Int32, UInt32, Int64, UInt64, Single, Double, String, Guid]. Since WriteEvent accepts an object array as its argument, the compiler wouldn't complain if we pass an instance of an unsupported type. This would only cause a failure at run-time. Look at the following method, which is using the DateTime type.
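A sketch of the unsupported-type case (AdmissionEventSource and its event are hypothetical names for illustration):

```csharp
using System;
using System.Diagnostics.Tracing;

public sealed class AdmissionEventSource : EventSource
{
    public static readonly AdmissionEventSource Log = new AdmissionEventSource();

    // DateTime is not among the types supported for ETW payloads, so this
    // compiles fine but fails when the event is actually written.
    [Event(3, Level = EventLevel.Informational)]
    public void AdmissionApproved(int studentId, DateTime approvedOn)
    {
        WriteEvent(3, studentId, approvedOn); // BUG: unsupported payload type
    }
}
```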

The analyzer wouldn't like this and would result in an error. Here is the generated exception for the above case:

The analyzer even checks the order of the parameters passed to WriteEvent against the EventSource method; they should be in the same order. In the following, we are passing arguments to WriteEvent in a different order than the EventSource method declares them.

EventSourceAnalyzer can be a little flexible about this. We can use its ExcludeWriteEventTypeOrder property so that the order is not checked.
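A sketch of relaxing the checks, assuming a hypothetical CollegeEventSource; the property names are from SLAB's EventSourceAnalyzer as I recall them, so verify against the installed assembly:

```csharp
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Utility;

// Skip the parameter-order check; the listener emulation is excluded too,
// since it would otherwise mimic the run-time path and still fail.
var analyzer = new EventSourceAnalyzer
{
    ExcludeWriteEventTypeOrder = true,
    ExcludeEventListenerEmulation = true
};
analyzer.Inspect(CollegeEventSource.Log); // CollegeEventSource is hypothetical
```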

Because of the way EventSourceAnalyzer inspects the EventSource, we might need to set ExcludeEventListenerEmulation as well, so that an EventListener is not used; otherwise the analyzer would mimic the run-time behavior and the inspection would still fail. Since the order does matter at run-time, we should generally avoid setting these properties to true.

The analyzer can also be used to verify that not only the parameter count and order match, but that the types of the EventSource method parameters and the WriteEvent arguments match as well. Here is exactly such a case:

This is not checked by EventSourceAnalyzer by default. We can configure the EventSourceAnalyzer as follows so that this is also checked.

Since the methods in an EventSource are used to define the event types of the resulting event provider, there should be no non-event methods in the EventSource class. But we do need helper methods sometimes. All such helper methods should be clearly marked with NonEventAttribute. Yes, even if they are private.
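A sketch of a properly marked helper (StudentEventSource and its members are hypothetical names for illustration):

```csharp
using System.Diagnostics.Tracing;

public sealed class StudentEventSource : EventSource
{
    public static readonly StudentEventSource Log = new StudentEventSource();

    [Event(1, Level = EventLevel.Informational)]
    public void StudentCreated(string name)
    {
        WriteEvent(1, name);
    }

    // A helper, not an event — without [NonEvent] it would be treated
    // as an event definition and trip the analyzer.
    [NonEvent]
    private static string FormatName(string first, string last)
    {
        return last + ", " + first;
    }
}
```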

Writing Unit Tests
As you might have noticed, EventSourceAnalyzer is provided by the Semantic Logging Application Block [SLAB] in Enterprise Library 6.0. On the other hand, EventSource is provided by .NET Framework 4.5. SLAB extends EventSource by providing amazing ways to utilize the event data, but it is not a requirement for generating ETW event data. We can even have an out-proc event listener which utilizes the generated data and directs it to any sink. The source project doesn't need any reference to the application block.

But since we still want to test the logic, we can just install the package in our unit test project. In the unit tests we can verify all these checks pretty early, and the tests can be added to our continuous integration build. Let's install the application block NuGet package in our unit test project.

We can use xUnit for our unit tests. As we have discussed before, we can add IDE support for it by installing a ReSharper extension. Here we are installing a release version of the package.

Let's add the following unit test to inspect EventSource for runtime checks.
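Such a test could look something like this, assuming the hypothetical StudentEventSource from above; EventSourceAnalyzer.InspectAll runs the full set of checks in one call:

```csharp
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Utility;
using Xunit;

public class StudentEventSourceTests
{
    [Fact]
    public void EventSourcePassesRuntimeChecks()
    {
        // Runs the three checks (enabling via a listener, schema generation,
        // invoking event methods); throws EventSourceAnalyzerException on failure.
        EventSourceAnalyzer.InspectAll(StudentEventSource.Log);
    }
}
```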

In case a test fails, it shows it as follows:


EntLib6 Semantic Logging with SQL Server Sink

In the last couple of months we started our discussion about EventSource and the Semantic Logging Application Block. You can refer to the previous posts here: In the above posts, we discussed how to create an EventSource and why we actually need structured logging. We presented some examples of consuming the events generated by an EventSource using tools including PerfView and SLAB's Semantic Logging Service. Event sinks and listeners allow us to consume those events and push them out externally, including to the file system and the Windows Event Log. This post takes the discussion a little further to see how we can push the events generated by an EventSource to a SQL Server database.

System Requirements
SLAB supports SQL Server 2008 and higher, SQL Server LocalDB and Windows Azure SQL Database. We will be using SQL Server Local DB for this example.

Sample Event Source
Let's create a sample console application, SLABSqlEventSinkEx. We add the following EventSource to the project.
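A sketch of such an EventSource; the events shown are assumptions for the walkthrough, and the name matches the SqlApplicationEventSource that the service configuration refers to later in this post:

```csharp
using System.Diagnostics.Tracing;

[EventSource(Name = "SqlApplicationEventSource")]
public sealed class SqlApplicationEventSource : EventSource
{
    public static readonly SqlApplicationEventSource Log =
        new SqlApplicationEventSource();

    [Event(1, Level = EventLevel.Informational, Message = "Application started")]
    public void AppStarted()
    {
        WriteEvent(1);
    }

    [Event(2, Level = EventLevel.Error, Message = "Operation failed: {0}")]
    public void OperationFailed(string reason)
    {
        WriteEvent(2, reason);
    }
}
```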

Let's try to verify that our EventSource has no issues by using PerfView. We discussed using PerfView for ETW events here.

And we do see the event being generated. Please notice the selection below:

SLAB provides a number of Event sinks. Not all of them are in the same assembly.

The Patterns & Practices team has distributed them among different NuGet packages. If you want to utilize the events generated by an event source and push them out to a SQL Server database, then you would need the following package:
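From the Package Manager Console, the install should look something like this (the package id is EnterpriseLibrary.SemanticLogging.Database; the version resolved will depend on when you install):

```shell
Install-Package EnterpriseLibrary.SemanticLogging.Database
```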

After installing the packages, this is how my packages.config looks.
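It should be along these lines; the exact package list and versions depend on your installation (the 1.0.1304.0 version matches the scripts folder path mentioned below):

```xml
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="EnterpriseLibrary.SemanticLogging" version="1.0.1304.0" targetFramework="net45" />
  <package id="EnterpriseLibrary.SemanticLogging.Database" version="1.0.1304.0" targetFramework="net45" />
</packages>
```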

Creating Database
The NuGet package provides the necessary scripts to create the database schema. We can find them under the ...\packages\EnterpriseLibrary.SemanticLogging.Database.1.0.1304.0\scripts folder. Just run the cmd file in the folder; it should run both SQL files, passing the required parameters.

The script creates a new Logging database with a single table, a stored procedure and a couple of user-defined types. It assumes that SqlLocalDB is installed. Here we have connected to the database using SQL Server Object Explorer in Visual Studio 2012.

We can use the SqlLocalDB command-line tool to check the state of the SQL Server instance being used.
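For example, assuming the default v11.0 LocalDB instance that ships with SQL Server 2012:

```shell
SqlLocalDB.exe info
SqlLocalDB.exe info v11.0
```

The first command lists the LocalDB instances on the machine; the second shows the details of the named instance, including whether it is running.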

If you are using the out-proc Semantic Logging Service to consume the events, then you really don't need to download the package. The script and command file are copied into the service's folder when you install the NuGet package. You can find them here:

Using SQL Server Sink with Semantic Logging Service
As we have discussed before, SLAB allows us out-proc logging using the Semantic Logging Service. The service can run as a console application or a Windows service. You can see this post where we introduced the service and its use with the file system sink for event sources. Let's update the service configuration [SemanticLogging-svc.xml] to push the events from SqlApplicationEventSource to the SQL Server database created above.
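A sketch of the sink configuration, assuming the event source is named SqlApplicationEventSource and the Logging database was created by the script above; the element and attribute names follow SLAB's out-of-proc schema as I recall it, so verify against the schema file that ships with the service:

```xml
<configuration xmlns="http://schemas.microsoft.com/practices/2013/entlib/semanticlogging/etw"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <sinks>
    <sqlDatabaseSink name="sqlSink"
                     instanceName="SLABSqlEventSinkEx"
                     connectionString="Data Source=(localdb)\v11.0;Initial Catalog=Logging;Integrated Security=True">
      <sources>
        <eventSource name="SqlApplicationEventSource" level="LogAlways" />
      </sources>
    </sqlDatabaseSink>
  </sinks>
</configuration>
```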

After updating the configuration, we can run the service as a console application as follows:
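From the service's folder, something like the following should do it (the -c switch runs it in console mode, if memory serves; check SemanticLogging-svc.exe -? for the exact switches):

```shell
SemanticLogging-svc.exe -c
```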

Run the application writing the ETW events. As the events are pushed out, the service writes the details to the specified database. Here is the same data as we saw in PerfView earlier in this post.

Using SLAB's In-Proc SQL Server Sink
We can also enable the events in-proc. SqlDatabaseLog can be used to create the listener used to enable the events. We can add the following code to the Main method of the console application created above.
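A sketch of the in-proc setup, assuming the SqlApplicationEventSource defined earlier and the LocalDB connection string used above; SqlDatabaseLog.CreateListener comes from the Database sink package:

```csharp
using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

class Program
{
    static void Main()
    {
        // Create an in-proc listener that writes to the Logging database
        // created by the schema script.
        var listener = SqlDatabaseLog.CreateListener(
            "SLABSqlEventSinkEx",
            @"Data Source=(localdb)\v11.0;Initial Catalog=Logging;Integrated Security=True");

        // Enable our event source for all levels and keywords.
        listener.EnableEvents(
            SqlApplicationEventSource.Log, EventLevel.LogAlways, EventKeywords.All);

        SqlApplicationEventSource.Log.AppStarted();
    }
}
```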

But I have noticed an issue logging the data in SQL Server. I have discussed the issue here: [Discussion about Keyword issue]. The issue involves using Keywords with the event method, which seems related.