Enterprise Library V6 Data, Exception and Logging with Azure SDK 2.8

I have used the Enterprise Library blocks (which in different forms have been around since 2005) in quite a few projects. Individually the components are pretty good (not always best of breed), but they are well integrated and, when used the way they were intended, they work well.

I have just upgraded a client application to Visual Studio 2015 + .NET 4.5 + Enterprise Library V6, and some of the steps were not immediately obvious, so hopefully this saves someone else some time. I have sample code for Azure Cloud Service web and worker roles.

For both web and worker roles I added the Azure Diagnostics listener to the listeners section of the Enterprise Library logging settings:

<loggingConfiguration name="" tracingEnabled="true" defaultCategory="General">
   <listeners>
      <add listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.SystemDiagnosticsTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=6.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
           type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.8.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
           name="AzureDiagnosticTraceListener"/>
   </listeners>
...
</loggingConfiguration>

I then enabled diagnostics on the role and configured the transfer of logs.

[Screenshot: Azure Diagnostics configuration dialog]

This replaces the DiagnosticMonitorConfiguration-based approach used with earlier SDK versions:

DiagnosticMonitorConfiguration diagConfig = DiagnosticMonitor.GetDefaultInitialConfiguration();
diagConfig.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

// Enable scheduled transfer
diagConfig.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
diagConfig.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

...
DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", diagConfig);

For the web role I configured the exception and logging blocks in Global.asax.cs:

// exManager is a public static field so the Web API controllers can share it
public static ExceptionManager exManager;

protected void Application_Start()
{
   AreaRegistration.RegisterAllAreas();
   GlobalConfiguration.Configure(WebApiConfig.Register);
   FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
   RouteConfig.RegisterRoutes(RouteTable.Routes);
   BundleConfig.RegisterBundles(BundleTable.Bundles);

   // Load the Entlib logging block configuration
   LogWriterFactory logWriterFactory = new LogWriterFactory();
   LogWriter logWriter = logWriterFactory.Create();
   Logger.SetLogWriter(logWriter);

   // Load the Entlib Exception block configuration
   ExceptionPolicyFactory policyFactory = new ExceptionPolicyFactory();
   exManager = policyFactory.CreateManager();
}

For the worker role I configured the exception and logging blocks in OnStart:

public override bool OnStart()
{
   // Set the maximum number of concurrent connections
   ServicePointManager.DefaultConnectionLimit = 12;
   ...
   LogWriterFactory logWriterFactory = new LogWriterFactory();
   LogWriter logWriter = logWriterFactory.Create();
   Logger.SetLogWriter(logWriter);
   ...
   return result;
}
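
With the log writer set, EntLib logging is then available throughout the role. As a minimal usage sketch (the heartbeat message, category and interval are my own illustration, not from the client application):

public override void Run()
{
   while (true)
   {
      // The log writer was initialised in OnStart, so EntLib logging works here
      // and in anything the worker role calls
      Logger.Write("Worker heartbeat " + DateTime.UtcNow.ToString("o"), "General");
      Thread.Sleep(TimeSpan.FromSeconds(30));
   }
}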

Then, in the web role's Web API 2 controllers, you can use embedded SQL or call stored procedures with retries. (This sample code uses the Northwind database and the default retry configuration.)

public IEnumerable<ProductDto> Get()
{
   var products = new List<ProductDto>();

   WebApiApplication.exManager.Process(() =>
   {
      Database db = new DatabaseProviderFactory().Create("NorthwindInstance");

      RetryPolicy retry = new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(RetryStrategy.DefaultExponential);

      var productAccessor = db.CreateSqlStringAccessor(
         "SELECT [ProductID],[ProductName],[QuantityPerUnit],[UnitPrice],[UnitsInStock],[Discontinued] FROM Products",
         MapBuilder<ProductDto>
            .MapAllProperties()
            .Map(p => p.ID).ToColumn("ProductID")
            .Map(p => p.Name).ToColumn("ProductName")
            .Map(p => p.QuantityPerUnit).ToColumn("QuantityPerUnit")
            .Map(p => p.UnitPrice).ToColumn("UnitPrice")
            .Map(p => p.UnitsInStock).ToColumn("UnitsInStock")
            .Map(p => p.Discontinued).ToColumn("Discontinued")
            .Build());

      products = retry.ExecuteAction(() =>
      {
         return productAccessor.Execute().ToList();
      });

   }, "ProductService");

   return products;
}
public ProductDto Get(int id)
{
   ProductDto productDto = null;

   WebApiApplication.exManager.Process(() =>
   {
      Database db = new DatabaseProviderFactory().Create("NorthwindInstance");

      var productAccessor = db.CreateSqlStringAccessor(
         "SELECT [ProductID],[ProductName],[QuantityPerUnit],[UnitPrice],[UnitsInStock],[Discontinued] FROM Products WHERE [ProductID]=@ProductID",
         new ProductGetByProductIdParameterMapper(db),
         MapBuilder<ProductDto>
            .MapAllProperties()
            .Map(p => p.ID).ToColumn("ProductID")
            .Map(p => p.Name).ToColumn("ProductName")
            .Map(p => p.QuantityPerUnit).ToColumn("QuantityPerUnit")
            .Map(p => p.UnitPrice).ToColumn("UnitPrice")
            .Map(p => p.UnitsInStock).ToColumn("UnitsInStock")
            .Map(p => p.Discontinued).ToColumn("Discontinued")
            .Build());

      productDto = productAccessor.Execute(id).SingleOrDefault();

   }, "ProductService");

   return productDto;
}
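
The ProductGetByProductIdParameterMapper used above isn't shown in the original sample. Here is a minimal sketch of what it might look like; the implementation details are my assumption, but IParameterMapper is the EntLib Data Access extension point it plugs into:

using System.Data;
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

// Hypothetical implementation of the parameter mapper used above.
// IParameterMapper maps the arguments passed to Execute() onto DbCommand parameters.
public class ProductGetByProductIdParameterMapper : IParameterMapper
{
   private readonly Database database;

   public ProductGetByProductIdParameterMapper(Database database)
   {
      this.database = database;
   }

   public void AssignParameters(DbCommand command, object[] parameterValues)
   {
      // The single Execute() argument is the product id for the WHERE clause
      database.AddInParameter(command, "@ProductID", DbType.Int32, parameterValues[0]);
   }
}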

SQL Azure Database performance oddness

For a few years I have worked on a midsize Azure application for managing promotional vouchers, which are delivered via SMS or email and redeemed via software integrated into the point-of-sale terminals of several local vendors.

A promotion has a batch of vouchers available for allocation to consumers, and I had noticed from the logs and performance counters that the process of getting the next voucher slowed down significantly as the number of available vouchers decreased.

Voucher batches range in size from hundreds to hundreds of thousands of vouchers, and a small promotion for a local wine shop was where I initially noticed how much the duration increased. The promotion had roughly 850 vouchers; allocation of the first vouchers took tens of milliseconds each, but the last 10 vouchers sometimes took more than 1000 mSec each.

I took a copy of the live database and removed the customer data so I could explore the performance of my TSQL in a controlled environment. I initially downloaded a copy of the database to one of my development servers and tried to reproduce the problem while monitoring performance using SQL Profiler and other tools, but the allocation time was fast and consistent.

The database performance appeared to be a SQL Azure-specific issue, so I built a cut-down web role test harness which called the underlying stored procedure so I could closely monitor the performance. The test harness could make a specified number of calls, recording the duration of each call and the overall duration. I then de-allocated all the vouchers in the wine shop promotion and allocated them in chunks (all durations are in mSec and are the average time taken to make a single call).

100 vouchers at a time (0–800 of 843 vouchers)

43, 88, 136, 191, 260, 305, 358, 379

10 vouchers at a time (800–840 of 843 vouchers)

431, 440, 404, 412

1 voucher at a time (840–843 of 843 vouchers; the last call is a failure)

400, 423, 404, 390
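
For reference, the test harness timing loop was essentially this shape (a sketch of the assumed structure; AllocateVoucher is a hypothetical stand-in for the call to the underlying stored procedure):

// Minimal sketch of the test harness timing loop (assumed structure,
// not the production code)
static void TimeAllocations(int numberOfCalls, Guid voucherBatchUid, Guid activityUid)
{
   var durations = new List<long>();
   var overall = Stopwatch.StartNew();

   for (int i = 0; i < numberOfCalls; i++)
   {
      var sw = Stopwatch.StartNew();
      AllocateVoucher(voucherBatchUid, activityUid); // stand-in for the stored procedure call
      sw.Stop();
      durations.Add(sw.ElapsedMilliseconds);
   }

   overall.Stop();
   Console.WriteLine("Average {0:F0} mSec per call, {1} mSec overall",
      durations.Average(), overall.ElapsedMilliseconds);
}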

After some debugging, progressive removal of code and examination of query plans, I identified the problematic TSQL statement:

UPDATE TOP(1) Voucher SET
Voucher.AllocatedByActivity = @ActivityUID,
Voucher.AllocatedAtUTC = @RequestDateTimeUTC,
Voucher.AllocationReference = @ClientReference,
@VoucherCode = Voucher.Code
WHERE (( Voucher.VoucherBatchUID = @VoucherBatchUID ) AND ( Voucher.AllocatedByActivity IS NULL ))

This update statement uses a single-statement transaction to claim the next unallocated voucher in the specified voucher batch; with no ORDER BY, TOP(1) picks an effectively random row, and the @VoucherCode = Voucher.Code assignment returns the claimed code.

After some conversations with a (very helpful) SQL Azure support engineer at Microsoft, we established that in SQL Azure the query processor needed a query hint telling it to use an existing index in order to perform consistently:

UPDATE TOP(1) Voucher SET
Voucher.AllocatedByActivity = @ActivityUID,
Voucher.AllocatedAtUTC = @RequestDateTimeUTC,
Voucher.AllocationReference = @ClientReference,
@VoucherCode = Voucher.Code
FROM Voucher WITH (INDEX(IX_VoucherByVoucherBatchUIDAllocatedByActivity))
WHERE (( Voucher.VoucherBatchUID = @VoucherBatchUID ) AND ( Voucher.AllocatedByActivity IS NULL ))

100 vouchers at a time (0–800 of 843 vouchers)

20, 8, 13, 9, 32, 7, 15, 9

10 vouchers at a time (800–840 of 843 vouchers)

17, 9, 14, 13

1 voucher at a time (840–843 of 843 vouchers; the last call is a failure)

12, 11, 13, 5

I always have plenty of performance counters and logging (using the Enterprise Library) on my Azure web and worker roles, but I was also lucky that I noticed something odd in the logs while checking on the progress of another promotion. I actively monitor the performance of my Azure applications, as over time the performance characteristics of the underlying hardware will change as fixes and enhancements are released.

EntLib Logging in Azure – Revisited

At one of my clients I was debugging an application which used the Microsoft Enterprise Library logging block, and I added some code to log the service configuration on start-up.

The odd thing was that when I turned on verbose logging I noticed the first couple of log entries were sometimes missing. After some mucking around I found that the log entries were only missing for a new deployment; everything was fine for a restart.

The missing log entries weren't immediately obvious, as I was testing a clustered application with multiple instances logging information simultaneously.

public override bool OnStart()
{
   Trace.Listeners.Add(new DiagnosticMonitorTraceListener());
   Trace.AutoFlush = true;

   CloudStorageAccount storageAccount = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"));
   RoleInstanceDiagnosticManager roleInstanceDiagnosticManager = storageAccount.CreateRoleInstanceDiagnosticManager(RoleEnvironment.DeploymentId, RoleEnvironment.CurrentRoleInstance.Role.Name, RoleEnvironment.CurrentRoleInstance.Id);

   var configuration = roleInstanceDiagnosticManager.GetCurrentConfiguration();
   if (configuration == null)
   {
      configuration = DiagnosticMonitor.GetDefaultInitialConfiguration();
   }

   configuration.Logs.BufferQuotaInMB = 4;
   configuration.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
   configuration.Logs.ScheduledTransferLogLevelFilter = LogLevel.Undefined;
   roleInstanceDiagnosticManager.SetCurrentConfiguration(configuration);

   EnterpriseLibraryContainer.Current = EnterpriseLibraryContainer.CreateDefaultContainer(new FileConfigurationSource("web.config", false));
   Logger.Write("OnStart " + DateTime.UtcNow.ToLongTimeString());

   return base.OnStart();
}

public override void Run()
{
   int index = 0;

   Logger.Write("Run Start " + DateTime.UtcNow.ToLongTimeString());

   while (true)
   {
      Logger.Write("Run Loop " + index.ToString() + " " + DateTime.UtcNow.ToLongTimeString());
      index++;
      Thread.Sleep(10000);
   }
}

The trace log for a new deployment looked like this (I have found it starts somewhere between loop 2 and loop 6):

Run Loop 6 8:04:09 AM
Run Loop 7 8:04:19 AM
Run Loop 8 8:04:29 AM
….

For a restart it looked like this:

OnStart 8:19:01 AM
Run Start 8:19:01 AM
Run Loop 0 8:19:01 AM
Run Loop 1 8:19:11 AM
Run Loop 2 8:19:21 AM

I'm assuming this is a result of the time it takes for DiagnosticsAgent.exe to spin up.

If I use the Windows Event log the results are similar, but with duplicate entries every so often, which is odd.
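
For completeness, trying the Windows Event log route is just a matter of swapping the trace listener. A minimal sketch, where the event source name is my assumption (and the source must already be registered on the machine):

// Minimal sketch: route the same Trace output to the Windows Event log instead.
// "WorkerRoleTest" is an assumed event source name, not from the original test.
Trace.Listeners.Add(new System.Diagnostics.EventLogTraceListener("WorkerRoleTest"));
Trace.AutoFlush = true;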

EntLib Logging in Azure Pt 3 – Working

After some more reading (starting here) I added a counter to the Run loop so I could see if log entries were getting dropped (my gut feeling was correct: they were).

public override void Run()
{
   int index = 0;
   Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write("Run Start " + DateTime.UtcNow.ToLongTimeString());

   while (true)
   {
      Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write("Run Loop " + index.ToString() + " " + DateTime.UtcNow.ToLongTimeString());
      index++;
      Thread.Sleep(10000);
   }
}

So after some trial and error I settled on this initialisation approach as the most robust:

public override bool OnStart()
{
   CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
   {
      configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
   });

   CloudStorageAccount storageAccount = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"));
   RoleInstanceDiagnosticManager roleInstanceDiagnosticManager = storageAccount.CreateRoleInstanceDiagnosticManager(RoleEnvironment.DeploymentId, RoleEnvironment.CurrentRoleInstance.Role.Name, RoleEnvironment.CurrentRoleInstance.Id);

   var configuration = roleInstanceDiagnosticManager.GetCurrentConfiguration();
   configuration.Logs.BufferQuotaInMB = 4;
   configuration.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
   configuration.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

   roleInstanceDiagnosticManager.SetCurrentConfiguration(configuration);

   EnterpriseLibraryContainer.Current = EnterpriseLibraryContainer.CreateDefaultContainer(new FileConfigurationSource("web.config", false));

   Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write("OnStart " + DateTime.UtcNow.ToLongTimeString());

   return base.OnStart();
}

This sample application seems to work, reliably logging all the information I expect to the trace logs.

EntLib Logging in Azure Pt 2 – All is not what it seems

After confirming my environment was good, I modified the sample code and added logging to OnStart; I also added a Run override with some logging and a simple while loop.

public override bool OnStart()
{
   CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
   {
      configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
   });

   var configuration = DiagnosticMonitor.GetDefaultInitialConfiguration();
   configuration.Logs.BufferQuotaInMB = 4;
   configuration.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
   configuration.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
   DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", configuration);

   EnterpriseLibraryContainer.Current = EnterpriseLibraryContainer.CreateDefaultContainer(new FileConfigurationSource("web.config", false));

   Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write("OnStart " + DateTime.UtcNow.ToLongTimeString());

   return base.OnStart();
}

public override void Run()
{
   Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write("Run Start " + DateTime.UtcNow.ToLongTimeString());

   while (true)
   {
      Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write("Run Loop " + DateTime.UtcNow.ToLongTimeString());
      Thread.Sleep(10000);
   }

   base.Run();
}

Initially the web role wouldn't start, and after re-reading the full IIS vs. core post I realised that the Enterprise Library couldn't find its config file, so I added:

EnterpriseLibraryContainer.Current = EnterpriseLibraryContainer.CreateDefaultContainer(new FileConfigurationSource("web.config", false));

I could then see some log entries (note the process name…)

<TraceSource>General</TraceSource>

But my OnStart logging wasn't working, and when I looked at the timestamps on the log entries it felt like some were missing. Time for some more reading…

EntLib Logging in Azure Pt 1 – The baseline

Back when the Azure SDK V1.3 was released, several of my cloud services broke due to the change from Hostable Web Core to full IIS. I use the Enterprise Library, which was also broken by this update. The Azure team talked about the changes in detail here, and SMarx talked about how to fix your configuration problems here. My customer was in a hurry so I just put in the quick 'n' dirty hack and moved on. Every so often the warnings in the error list tab would annoy me, so I would go back and have another look.

warning CloudServices078: The web role XXX is configured using a legacy syntax that specifies that it runs in Hostable Web Core.

One of these web roles was an SMS gateway which polled an Azure Storage queue in its Run loop for messages to send, and had two simple ASP.NET pages which inserted inbound messages and delivery notifications into Azure Storage queues for processing by a queue-handler worker role.

The problem was that at 5–15 minutes for a deploy it was easy to get distracted, and with the release of the Azure Integration Pack nearly 2 years later I figured it was time to sort this out properly. I wanted to be able to use the EntLib logging etc. in the OnStart, Run and ASP.NET pages of a web role. There was some useful reading here and here.

To reduce the application to a manageable size I started with the Azure Integration Pack logging hands-on lab; this is the key section of WebRole.cs:

public override bool OnStart()
{
   CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
   {
      configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
   });

   var configuration = DiagnosticMonitor.GetDefaultInitialConfiguration();
   configuration.Logs.BufferQuotaInMB = 4;
   configuration.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
   configuration.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
   DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", configuration);

   // For information on handling configuration changes
   // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

   return base.OnStart();
}

I built the application and deployed it into the cloud, and everything worked as expected when I went to the default.aspx page and clicked on the button. I use Cerebrata Diagnostics Manager, and with it I could see log entries appearing.