Swarm Space – Bumblebee Hive Basic Emulator

One of the main problems when building a Cloud Identity Translation Gateway (like my TTIV3AzureIoTConnector) is getting enough devices to make testing (especially scalability testing) realistic. This is a problem because I only have two devices, a SparkFun Satellite Transceiver Breakout – Swarm M138 and a Swarm Asset Tracker. (I am considering buying a Swarm Eval Kit.)

Satellite Transceiver Breakout – Swarm M138
Swarm Asset Tracker

So that I can simulate lots of devices and test more complex configurations, I have started building a Swarm Bumblebee Hive emulator based on the API and Delivery-API OpenAPI files.

NSwagStudio configuration for generating ASP.NET Core web API

As well as generating clients, NSwagStudio can also generate ASP.NET Core web APIs. To test my approach, I built the simplest possible client I could, which calls the generated PostLoginAsync and GetDeviceCountAsync methods.

Swagger UI for NSwagStudio generated ASP.NET Core web API

Initially the BumblebeeHiveBasicClientConsole login method would fail with an HTTP 415 Unsupported Media Type error.

BumblebeeHiveBasicClientConsole application 415 Unsupported Media Type error

After some trial and error, I modified the HiveController.cs and HiveControllerImplementation.cs Login method signatures so the payload was “application/x-www-form-urlencoded” rather than “application/json” by changing FromBody to FromForm.

Task<Response> IAuthController.PostLoginAsync([FromForm] LoginForm body)
{
    return Task.FromResult(new Response()
    {
        Token = Guid.NewGuid().ToString()
    });
}

Modifying code generated by a tool like NSwagStudio should be avoided, but I couldn’t work out a simpler solution.

/// <summary>
/// POST login
/// </summary>
/// <remarks>
/// &lt;p&gt;Use username and password to log in.&lt;/p&gt;&lt;p&gt;On success: returns status code 200. The response body is the JSON &lt;code&gt;{"token": "&amp;lt;token&amp;gt;"}&lt;/code&gt;, along with the header &lt;code&gt;Set-Cookie: JSESSIONID=&amp;lt;token&amp;gt;; Path=/; Secure; HttpOnly;&lt;/code&gt;. The tokens in the return value and the &lt;code&gt;Set-Cookie&lt;/code&gt; header are the same. The token is a long string of letters, numbers, and punctuation.&lt;/p&gt;&lt;p&gt;On failure: returns status code 401.&lt;/p&gt;&lt;p&gt;To make authenticated requests, there are two ways: &lt;ul&gt;&lt;li&gt;(Preferred) Use the token as a Bearer Authentication token by including the HTTP header &lt;code&gt;Authorization: Bearer &amp;lt;token&amp;gt;&lt;/code&gt; in further requests.&lt;/li&gt;&lt;li&gt;(Deprecated) Use the token as the JSESSIONID cookie in further requests.&lt;/li&gt;&lt;/ul&gt;&lt;/p&gt;
/// </remarks>
/// <returns>Login success</returns>
[Microsoft.AspNetCore.Mvc.HttpPost, Microsoft.AspNetCore.Mvc.Route("login")]
public System.Threading.Tasks.Task<Response> PostLogin([Microsoft.AspNetCore.Mvc.FromForm] LoginForm body)
{

    return _implementation.PostLoginAsync(body);
}

BumblebeeHiveBasicClientConsole application calling the simulator
BumblebeeHiveBasicClientConsole application calling the production system

After some initial problems with content types, the Swarm Hive API (I have not tried the Delivery-API yet) appears to be well documented and easy to use, though some of the variable type choices do seem a bit odd. For example, GetDeviceCountAsync returns the device count as a string.

public virtual async System.Threading.Tasks.Task<string> GetDeviceCountAsync(int? devicetype, System.Threading.CancellationToken cancellationToken)
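A hedged sketch of the parsing this forces on the caller (assuming the payload is always a plain integer):

// The device count is returned as a string, so the caller has to parse it
string count = await client.GetDeviceCountAsync(1);

if (!int.TryParse(count, out int deviceCount))
{
    throw new ApplicationException($"Unexpected device count payload:{count}");
}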

Azure Functions with VB.Net on .NET Core V6

A year and a half ago I wrote a post about how to build Azure Functions with VB.Net and the .NET Framework 4.X. The Microsoft VB team posted about Visual Basic support for .NET 5.0 in March 2020 then went quiet, so my customer put the project on hold. Since then, a lot has changed: .NET Core 3.1 LTS support ends December 13, 2022, and .NET 5.0 (not an LTS release) support ended May 10, 2022, so I have ported the samples to .NET Core V6.

The process is similar to (but different from) the original approach.

The VB.Net Solution from June 2021

The first step is to create a Visual Basic .NET Core V6 console application.

Visual Studio 2022 “Add a new project”

Then specify a name for the new project.

Visual Studio 2022 Add Project “Configure your new project”

Then select the version of .NET Core used.

Visual Studio 2022 Add Project “Additional information”

Then rename program.vb to a name which highlights that it is a trigger.

Visual Studio 2022 rename program.vb to TimerTrigger.vb

The initial version of the TimerTrigger code was “inspired” by the VB.Net 4.8 version.

'---------------------------------------------------------------------------------
' Copyright (c) November 2022, devMobile Software
'
' Licensed under the Apache License, Version 2.0 (the "License");
' you may Not use this file except in compliance with the License.
' You may obtain a copy of the License at
'
'     http://www.apache.org/licenses/LICENSE-2.0
'
' Unless required by applicable law Or agreed to in writing, software
' distributed under the License Is distributed on an "AS IS" BASIS,
' WITHOUT WARRANTIES Or CONDITIONS OF ANY KIND, either express Or implied.
' See the License for the specific language governing permissions And
' limitations under the License.
'
'---------------------------------------------------------------------------------
Imports System.Threading

Imports Microsoft.Azure.WebJobs
Imports Microsoft.Extensions.Logging


Public Class TimerTrigger
    Shared executionCount As Int32

    <FunctionName("Timer")>
    Public Shared Sub Run(<TimerTrigger("0 */1 * * * *")> myTimer As TimerInfo, log As ILogger)
        Interlocked.Increment(executionCount)

        log.LogInformation("VB.Net .NET V6 TimerTrigger next trigger:{0} Execution count:{1}", myTimer.ScheduleStatus.Next, executionCount)

    End Sub
End Class

Visual Studio 2022 highlighting missing libraries
Visual Studio 2022 with additional function SDK references

The next step is to add the host.json (empty for a timer trigger) and local.settings.json files to configure the function.

Visual Studio 2022 host.json file
Visual Studio 2022 showing host.json & local.settings.json

Then I could run the function in the Azure Functions runtime emulator and “single step” in the Visual Studio 2022 Debugger.

VB.Net .NET Core V6 Timer Trigger running in emulator

For completeness I also built sample BlobTrigger, HttpTrigger and QueueTrigger versions.

VB.Net .NET Core V6 Blob Trigger running in emulator
VB.Net .NET Core V6 HTTP Trigger running in emulator
VB.Net .NET Core V6 Queue Trigger running in emulator

I also deployed the Azure Storage QueueTrigger to Microsoft Azure, configured it, and then stress tested it with multiple instances of my QueueMessageGenerator.

Queue Trigger Function deployment
Queue Trigger configuration
Queue Trigger Throughput 48K messages

What if it goes wrong…

“Can’t determine project language from files. Please add one of [--csharp, --javascript, --typescript, --java, --powershell, --custom]”

Check “FUNCTIONS_WORKER_RUNTIME” in the local.settings.json file.
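For reference, a minimal local.settings.json for an in-process .NET 6 function looks something like this (the development storage emulator connection string is an assumption for local testing):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}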

The baked-in error logging doesn’t handle broken message formats very well. Look at the call stack or single step through the application to find the message format that is broken.

Visual Studio 2022 editor with malformed message highlighted

WARNING

I assume this is not a supported approach, so use it “at your own risk”.

.NET Core web API + Dapper – Authorisation of Data Access

The theme of this post is controlling users’ ability to read and write rows in a table. The best scenario I could come up with using the World Wide Importers database was around controlling access to Customer information.

This would be a representative set of “project requirements”…

  • Salespeople tend to look after categories of Customers
    • Kayla – Novelty Shops
    • Hudson – Supermarkets
    • Issabella – Computer Stores
    • Sophia – Gift Stores, Novelty Shops
    • Amy – Corporates
    • Anthony – Novelty Stores
    • Alica – Corporates
    • Stella – Supermarkets
  • But some Salespeople have direct relationships with Customers
    • Kayla – Corporate customers Eric Torres & Cosmina
    • Hudson – Tailspin Toys Head Office
    • Issabella – Tailspin Toys (Sylvanite, MT), Tailspin Toys (Sun River, MT)
  • No changes to the database which could break the existing solution

In a previous engagement we added tables like CustomerCategoryPerson and CustomerPerson to the database to control read/write access to Customers’ information.

The CustomerCategoryPerson table links the CustomerCategory and Person tables with a flag (IsWritable) which indicates whether the Person can read/write Customer information for all the Customers in a CustomerCategory.

CREATE TABLE [Sales].[CustomerCategoryPerson](
	[CustomerCategoryPersonID] [int] IDENTITY(1,1) NOT NULL,
	[CustomerCategoryId] [int] NOT NULL,
	[PersonId] [int] NOT NULL,
	[IsWritable] [bit] NOT NULL,
	[LastUpdatedBy] [int] NOT NULL,
 CONSTRAINT [PK_CustomerCategoryPerson] PRIMARY KEY CLUSTERED 
(
	[CustomerCategoryPersonID] ASC
)...

The CustomerPerson table links the Customer and Person tables with a flag (IsWritable) which indicates whether a Person can read/write a Customer’s information.

CREATE TABLE [Sales].[CustomerPerson](
	[CustomerPersonId] [int] IDENTITY(1,1) NOT NULL,
	[CustomerID] [int] NOT NULL,
	[PersonId] [int] NOT NULL,
	[IsWritable] [bit] NOT NULL,
	[LastEditedBy] [int] NOT NULL,
 CONSTRAINT [PK_CustomerPerson] PRIMARY KEY CLUSTERED 
(
	[CustomerPersonId] ASC
)...

Users can do “wildcard” searches for Customers, and the result set has to be limited to “their” customers and customers in the customer categories they are assigned to.

ALTER PROCEDURE [Sales].[CustomersNameSearchUnionV1]
@UserId as int,
@SearchText nvarchar(20),
@MaximumRowsToReturn int
AS
BEGIN
	-- Individual assignment
    SELECT TOP(@MaximumRowsToReturn) [Customers].[CustomerID] as "ID", [Customers].[CustomerName] as "Name", [Customers].[IsOnCreditHold] as "IsOnCreditHold"
    FROM Sales.Customers
	INNER JOIN [Sales].[CustomerPerson] ON ([Sales].[Customers].[CustomerId] = [Sales].[CustomerPerson].[CustomerId])
    WHERE ((CustomerName LIKE N'%' + @SearchText + N'%')
		AND ([Sales].[CustomerPerson].PersonId = @UserId))
    --ORDER BY [CustomerName]

	UNION 
	
	-- group assignment
   SELECT TOP(@MaximumRowsToReturn) [Customers].[CustomerID] as "ID", [Customers].[CustomerName] as "Name", [Customers].[IsOnCreditHold] as "IsOnCreditHold"
   FROM [Sales].[Customers]
      INNER JOIN [Sales].[CustomerCategories] ON ([Sales].[Customers].[CustomerCategoryID] = [Sales].[CustomerCategories].[CustomerCategoryID])
      INNER JOIN [Sales].[CustomerCategoryPerson] ON ([Sales].[Customers].[CustomerCategoryID] = [CustomerCategoryPerson].[CustomerCategoryID])
    WHERE ((CustomerName LIKE N'%' + @SearchText + N'%')
		AND ([Sales].[CustomerCategoryPerson].PersonId = @UserId))

END;

This approach increases the complexity and reduces the maintainability of stored procedures which have to control the reading/writing of Customer information. Several times I have extracted the customer information read/write controls into a couple of database views, one for controlling read access.

CREATE VIEW [Sales].[CustomerPersonReadV1]
AS
-- Individual assignment
   SELECT [Sales].[Customers].[CustomerID], [Sales].[CustomerPerson].[PersonID], [Sales].[Customers].[CustomerCategoryID]
   FROM [Sales].[Customers]
      INNER JOIN [Sales].[CustomerPerson] ON ( [Sales].[Customers].[CustomerID] = CustomerPerson.CustomerID)

UNION -- Takes care of duplicates

-- Group assignment
   SELECT [Sales].[Customers].[CustomerID], [Sales].[CustomerCategoryPerson].[PersonID], [Sales].[Customers].[CustomerCategoryID]
   FROM [Sales].[Customers]
      --INNER JOIN [Sales].[CustomerCategories] ON ([Sales].[Customers].[CustomerCategoryID] = [Sales].[CustomerCategories].[CustomerCategoryID])
      INNER JOIN [Sales].[CustomerCategoryPerson] ON ([Sales].[Customers].[CustomerCategoryID] = [CustomerCategoryPerson].[CustomerCategoryID])

The other database view controls write access.

CREATE VIEW [Sales].[CustomerPersonWriteV1]
AS
-- Individual assignment
   SELECT [Sales].[Customers].[CustomerID], [Sales].[CustomerPerson].[PersonID], [Sales].[Customers].[CustomerCategoryID]
   FROM [Sales].[Customers]
      INNER JOIN [Sales].[CustomerPerson] ON (([Sales].[Customers].[CustomerID] = [CustomerPerson].[CustomerID]) AND ([Sales].[CustomerPerson].[IsWritable] = 1))

UNION -- Takes care of duplicates

-- Group assignment
   SELECT [Sales].[Customers].[CustomerID], [Sales].[CustomerCategoryPerson].[PersonID], [Sales].[Customers].[CustomerCategoryID]
   FROM [Sales].[Customers]
      INNER JOIN [Sales].[CustomerCategories] ON ([Sales].[Customers].[CustomerCategoryID] = [Sales].[CustomerCategories].[CustomerCategoryID])
      INNER JOIN [Sales].[CustomerCategoryPerson] ON ([Sales].[Customers].[CustomerCategoryID] = [CustomerCategoryPerson].[CustomerCategoryID] AND ([Sales].[CustomerCategoryPerson].[IsWritable] = 1))

The versioning of database views uses the same approach as stored procedures. When a view is updated (the columns returned change, constraints are updated etc.) the version number is incremented. Then we work through the dependencies list, checking and updating the view version used and re-testing.

SQL Server Management Studio displaying objects which depend on the view

These two views are the UNION of the user’s individual and group access permissions (if a user has Write access they also have Read access). This reduces the complexity of stored procedures used for reading from and writing to the Customer table.

CREATE PROCEDURE [Sales].[CustomersListV1]
@UserId as int
AS
BEGIN
SELECT [Customers].[CustomerID] as "ID", [Customers].[CustomerName] as "Name", [Customers].[IsOnCreditHold] as "IsOnCreditHold"
	FROM [Sales].[Customers]
		INNER JOIN [Sales].[CustomerPersonReadV1] ON ([Sales].[Customers].[CustomerID] = [Sales].[CustomerPersonReadV1].CustomerID)
    WHERE ([Sales].[CustomerPersonReadV1].PersonId = @UserId)
	ORDER BY Name
END

The GET method of the Customer controller returns a list of all the Customers the current user has read access to via their individual and group assignments.

[HttpGet(), Authorize(Roles = "SalesPerson,SalesAdministrator")]
[ProducesResponseType(StatusCodes.Status200OK, Type = typeof(List<Models.CustomerListDtoV1>))]
public async Task<ActionResult<IEnumerable<Models.CustomerListDtoV1>>> Get()
{
    IEnumerable<Models.CustomerListDtoV1> response;

    using (SqlConnection db = new SqlConnection(this.connectionString))
    {
        response = await db.QueryWithRetryAsync<Models.CustomerListDtoV1>(sql: "[Sales].[CustomersListV1]", param: new { userId = HttpContext.PersonId() }, commandType: CommandType.StoredProcedure);
    }

    return this.Ok(response);
}
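The HttpContext.PersonId() extension method used above isn’t shown in the post; a minimal sketch (an assumption, based on the PrimarySid claim added at logon) could be:

using System.Security.Claims;

using Microsoft.AspNetCore.Http;

public static class HttpContextExtensions
{
    public static int PersonId(this HttpContext httpContext)
    {
        // The PrimarySid claim is set to the PersonID when the JWT is issued at logon
        return int.Parse(httpContext.User.FindFirst(ClaimTypes.PrimarySid).Value);
    }
}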

The CustomerPersonWriteV1 view is used to stop users without IsWritable set (individually or via a group) from updating a Customer’s IsOnCreditHold flag.

CREATE PROCEDURE [Sales].[CustomerCreditHoldStatusUpdateV1]
@UserID int,
@CustomerId int,
@IsOnCreditHold Bit
AS
BEGIN
    UPDATE [Sales].[Customers]
    SET IsOnCreditHold = @IsOnCreditHold, LastEditedBy = @UserID
	FROM [Sales].[Customers]
		INNER JOIN [Sales].[CustomerPersonWriteV1] ON ([Sales].[Customers].[CustomerID] = [Sales].[CustomerPersonWriteV1].CustomerID)
    WHERE (([Sales].[CustomerPersonWriteV1].PersonId = @UserId) 
		AND ([Sales].[Customers].[CustomerID] = @CustomerId )
		AND (IsOnCreditHold <> @IsOnCreditHold))
	
END

The PUT CreditHold method uses a combination of roles (Administrator, SalesAdministrator, SalesPerson) and database views (CustomerPersonWriteV1) to control the updating of customer data.

[HttpPut("{customerId}/CreditStatus", Name ="CreditHold")]
[Authorize(Roles = "Administrator,SalesAdministrator,SalesPerson")]
[ProducesResponseType(StatusCodes.Status200OK)]
[ProducesResponseType(StatusCodes.Status409Conflict)]
public async Task<IActionResult> CustomerCreditHold(int customerId, [FromBody] Models.CustomerCreditHoldUpdateV1 request)
{
    request.UserId = HttpContext.PersonId();
    request.CustomerId = customerId;

    using (SqlConnection db = new SqlConnection(connectionString))
    {
        if (await db.ExecuteWithRetryAsync("[Sales].[CustomerCreditHoldStatusUpdateV1]", param: request, commandType: CommandType.StoredProcedure) != 1)
        {
            logger.LogWarning("Person {UserId} Customer {CustomerId} IsOnCreditHold {IsOnCreditHold} update failed", request.UserId, request.CustomerId, request.IsOnCreditHold);

            return this.Conflict();
        }
    }

    return this.Ok();
}
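The Models.CustomerCreditHoldUpdateV1 class isn’t shown; a minimal sketch (assuming Newtonsoft.Json attributes, like the LogonRequest DTO elsewhere in the sample) might be:

using Newtonsoft.Json;

public class CustomerCreditHoldUpdateV1
{
    [JsonIgnore] // Set server side from the PrimarySid claim, never from the payload
    public int UserId { get; set; }

    [JsonIgnore] // Set server side from the route parameter
    public int CustomerId { get; set; }

    [JsonRequired]
    public bool IsOnCreditHold { get; set; }
}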

My customers usually don’t have a lot of automated testing, so minimising the impact of changes across the database and codebase is critical. Sometimes we duplicate code (definitely not DRY) so that the amount of functionality that has to be retested is reduced. We ensure there is time allocated for revisiting these decisions and remediating as required.

.NET Core web API + Dapper – Authorisation Permissions

The permissions required for an on-premises system running in a trusted environment are often minimalist. The World Wide Importers database People table has IsSystemUser, IsEmployee, IsSalesperson which is representative of the granularity of permissions I have encountered in Windows Forms .NET, ASP.NET Web Forms and other “legacy” applications.

CREATE TABLE [Application].[People](
    [PersonID] [int] NOT NULL,
    [FullName] [nvarchar](50) NOT NULL,
    [PreferredName] [nvarchar](50) NOT NULL,
    [SearchName] AS (concat([PreferredName],N' ',[FullName])) PERSISTED NOT NULL,
    [IsPermittedToLogon] [bit] NOT NULL,
    [LogonName] [nvarchar](50) NULL,
    [IsExternalLogonProvider] [bit] NOT NULL,
    [HashedPassword] [varbinary](max) NULL,
    [IsSystemUser] [bit] NOT NULL,
    [IsEmployee] [bit] NOT NULL,
    [IsSalesperson] [bit] NOT NULL,
    [UserPreferences] [nvarchar](max) NULL,
    [PhoneNumber] [nvarchar](20) NULL,
    [FaxNumber] [nvarchar](20) NULL,
    [EmailAddress] [nvarchar](256) NULL,
    [Photo] [varbinary](max) NULL,
    [CustomFields] [nvarchar](max) NULL,
    [OtherLanguages] AS (json_query([CustomFields],N'$.OtherLanguages')),
    [LastEditedBy] [int] NOT NULL,
    [ValidFrom] [datetime2](7) GENERATED ALWAYS AS ROW START NOT NULL,
    [ValidTo] [datetime2](7) GENERATED ALWAYS AS ROW END NOT NULL,
CONSTRAINT [PK_Application_People] PRIMARY KEY CLUSTERED
(
    [PersonID] ASC
)

The existing application appears to have a method for a Person to change their password which calls the [Website].[ChangePassword] stored procedure (I was surprised that the stored procedure didn’t set the LastEditedBy value).

CREATE PROCEDURE [Website].[ChangePassword]
@PersonID int,
@OldPassword nvarchar(40),
@NewPassword nvarchar(40)
WITH EXECUTE AS OWNER
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;

    UPDATE [Application].People
    SET IsPermittedToLogon = 1,
        HashedPassword = HASHBYTES(N'SHA2_256', @NewPassword + FullName)
    WHERE PersonID = @PersonID
    AND PersonID <> 1
    AND HashedPassword = HASHBYTES(N'SHA2_256', @OldPassword + FullName);

    IF @@ROWCOUNT = 0
    BEGIN
        PRINT N'The PersonID must be valid, and the old password must be valid.';
        PRINT N'If the user has also changed name, please contact the IT staff to assist.';
        THROW 51000, N'Invalid Password Change', 1;
        RETURN -1;
    END;
END;

The new version removes the PersonID special case (it is assumed that PersonID 1 can’t logon) and the use of THROW. I think the use of THROW can add significant complexity to the exception handling of the WebAPI controller that calls the stored procedure.
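For illustration, a hedged sketch of the extra handling a controller would need if the stored procedure kept the THROW (error number 51000, as in [Website].[ChangePassword]):

try
{
    await db.ExecuteWithRetryAsync("[Website].[ChangePassword]", param: request, commandType: CommandType.StoredProcedure);
}
catch (SqlException sqlException) when (sqlException.Number == 51000)
{
    // The THROW in the stored procedure surfaces as a SqlException, so a filtered
    // catch block is needed rather than a simple rows-affected check
    return this.Conflict();
}

With the THROW removed, the simplified stored procedure just updates zero rows on failure: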

ALTER PROCEDURE [Website].[PersonPasswordChangeV1]
	@UserID int,
	@PasswordOld nvarchar(40),
	@PasswordNew nvarchar(40)
WITH EXECUTE AS OWNER
AS
BEGIN
    UPDATE [Application].People
    SET IsPermittedToLogon = 1
        ,HashedPassword = HASHBYTES(N'SHA2_256', @PasswordNew + FullName)
		,LastEditedBy = @UserID
    WHERE ((PersonID = @UserID )
		AND (HashedPassword = HASHBYTES(N'SHA2_256', @PasswordOld + FullName)))
END;

The PasswordChange method of the Person Controller only requires the caller to be authenticated.

/// <summary>
/// Changes current user's password.
/// </summary>
/// <param name="request">Current password and new password</param>
/// <response code="200">Password changed.</response>
/// <response code="401">Unauthorised, bearer token missing or expired.</response>
/// <response code="409">Previous password invalid or User name has changed.</response>
[Authorize()]
[HttpPut(Name = "PasswordChange")]
public async Task<ActionResult> PasswordChange([FromBody] Models.PersonPasswordChangeRequest request)
{
    request.UserID = HttpContext.PersonId();

    using (SqlConnection db = new SqlConnection(connectionString))
    {
        if (await db.ExecuteWithRetryAsync("[WebSite].[PersonPasswordChangeV1]", param: request, commandType: CommandType.StoredProcedure) != 1)
        {
            logger.LogWarning("Person {0} password change failed", request.UserID);

            return this.Conflict();
        }
    }

    return this.Ok();
}

The new application will have functionality for resetting a Person’s password. Access to this functionality will be restricted to people with the “Administrator” and “PasswordReset” roles.

CREATE PROCEDURE [Website].[PersonPasswordResetV1]
@UserID int,
@PersonID int,
@Password nvarchar(40)
WITH EXECUTE AS OWNER
AS
BEGIN
    UPDATE [Application].People
    SET IsPermittedToLogon = 1
        ,HashedPassword = HASHBYTES(N'SHA2_256', @Password + FullName)
		,LastEditedBy = @UserID
    WHERE PersonID = @PersonID
END;

One of the conventions we often use is that the first parameter of any stored procedure called once a User has logged on is their unique identifier, which is used for data access permissions and change tracking.

[Authorize(Roles = "Administrator")]
[HttpPut("{personId:int}", Name = "PasswordReset")]
public async Task<ActionResult> PasswordReset([Range(1, int.MaxValue, ErrorMessage = "Person id must be greater than or equal to 1")] int personId, [FromBody] Models.PersonPasswordResetRequest request)
{
    request.UserId = HttpContext.PersonId();
    request.PersonID = personId;

    using (SqlConnection db = new SqlConnection(connectionString))
    {
        if (await db.ExecuteWithRetryAsync("[WebSite].[PersonPasswordResetV1]", param: request, commandType: CommandType.StoredProcedure) != 1)
        {
            logger.LogWarning("Person {0} password reset failed", request.PersonID);

            return this.Conflict();
        }
    }

    return this.Ok();
}

For a couple of applications, we have added “Permissions” and “PersonPermissions” tables alongside the existing authorisation functionality to reduce the likelihood of any unintended side effects.

CREATE TABLE [Application].[Permissions](
	[PermissionID] [int] IDENTITY(1,1) NOT NULL,
	[Name] [nvarchar](20) NOT NULL,
	[Description] [nvarchar](50) NOT NULL,
	[LastEditedBy] [int] NOT NULL,
	[ValidFrom] [datetime2](7) GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
	[ValidUntil] [datetime2](7) GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
 CONSTRAINT [PK_Permissions] PRIMARY KEY CLUSTERED 
(
	[PermissionID] ASC
)...

We try and keep the names of the permissions short, so the token doesn’t get too large.

CREATE TABLE [Application].[PersonPermissions](
	[PersonPermissionId] [int] IDENTITY(1,1) NOT NULL,
	[PersonId] [int] NOT NULL,
	[PermissionId] [int] NOT NULL,
	[Active] [bit] NOT NULL,
	[ValidFrom] [datetime2](7) GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
	[ValidUntil] [datetime2](7) GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
 CONSTRAINT [PK_PersonPermissions] PRIMARY KEY CLUSTERED 
(
	[PersonPermissionId] ASC
)...
Permissions, PersonPermissions and People with Foreign Keys

The additional permissions (from the Person record) and the PersonPermissions table required some modifications to the PersonAuthenticateLookupByLogonNameV1 stored procedure and the addition of the PersonPermissionsByPersonIdV1 stored procedure.

ALTER PROCEDURE [Website].[PersonAuthenticateLookupByLogonNameV2]
@LogonName nvarchar(50),
@Password nvarchar(40)
AS
BEGIN
	SELECT PersonID
		,FullName
		,EmailAddress
		,IsSystemUser
		,IsEmployee
		,IsSalesPerson
	FROM [Application].[People]
	WHERE (( LogonName = @LogonName)
		AND (IsPermittedToLogon = 1)
		AND (HASHBYTES(N'SHA2_256', @Password + FullName) = HashedPassword))
END

The IsSystemUser, IsEmployee and IsSalesPerson bit flags were added to the stored procedure and the Data Transfer Object (DTO).

private class PersonAuthenticateLogonDetailsDto
{
    public int PersonID { get; set; }    

    public string FullName { get; set; }

    public string EmailAddress { get; set; }

    public bool IsSystemUser { get; set; }

    public bool IsEmployee { get; set; }

    public bool IsSalesPerson { get; set; }
}

The PersonPermissionsByPersonIdV1 stored procedure retrieves a list of the permissions of the User who has been authenticated.

ALTER PROCEDURE [Website].[PersonPermissionsByPersonIdV1]
	@PersonId AS int
AS
BEGIN

	SELECT [Application].[Permissions].[Name]
	FROM [Application].[Permissions]
		INNER JOIN [Application].[PersonPermissions] ON ([Application].[Permissions].PermissionID = [Application].[PersonPermissions].[PermissionId] )
	WHERE [Application].[PersonPermissions].[PersonId] = @PersonId
	ORDER BY [Application].[Permissions].[Name]

END

The Person’s permissions (effectively roles) are added as claims; the IsSystemUser, IsEmployee and IsSalesPerson flags are also added to the list of claims so they can be used in the new application.

[HttpPost("logon")]
public async Task<ActionResult> Logon([FromBody] Models.LogonRequest request )
{
    PersonAuthenticateLogonDetailsDto userLogonUserDetails;
    IEnumerable<string> permissions;
    var claims = new List<Claim>();

    using (SqlConnection db = new SqlConnection(configuration.GetConnectionString("WorldWideImportersDatabase")))
    {
        userLogonUserDetails = await db.QuerySingleOrDefaultWithRetryAsync<PersonAuthenticateLogonDetailsDto>("[Website].[PersonAuthenticateLookupByLogonNameV2]", param: request, commandType: CommandType.StoredProcedure);
        if (userLogonUserDetails == null)
        {
            logger.LogWarning("Login attempt by user {0} failed", request.LogonName);

            return this.Unauthorized();
        }

        // Lookup the Person's permissions
        permissions = await db.QueryWithRetryAsync<string>("[Website].[PersonPermissionsByPersonIdV1]", new { userLogonUserDetails.PersonID }, commandType: CommandType.StoredProcedure);
    }

    // Setup the primary SID + name info
    claims.Add(new Claim(ClaimTypes.PrimarySid, userLogonUserDetails.PersonID.ToString()));
    if (userLogonUserDetails.IsSystemUser)
    {
        claims.Add(new Claim(ClaimTypes.Role, "SystemUser"));
    }
    if (userLogonUserDetails.IsEmployee)
    {
        claims.Add(new Claim(ClaimTypes.Role, "Employee"));
    }
    if (userLogonUserDetails.IsSalesPerson)
    {
        claims.Add(new Claim(ClaimTypes.Role, "SalesPerson"));
    }

    foreach(string permission in permissions)
    {
        claims.Add(new Claim(ClaimTypes.Role, permission));
    }

    var authSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(jwtIssuerOptions.SecretKey));

    var token = new JwtSecurityToken(
        issuer: jwtIssuerOptions.Issuer,
        audience: jwtIssuerOptions.Audience,
        expires: DateTime.UtcNow.Add(jwtIssuerOptions.TokenExpiresAfter),
        claims: claims,
        signingCredentials: new SigningCredentials(authSigningKey, SecurityAlgorithms.HmacSha256));

    return this.Ok(new
    {
        token = new JwtSecurityTokenHandler().WriteToken(token),
        expiration = token.ValidTo,
    });
}

We try to reduce the number of roles a User requires by having core roles (Administrator, Sales consultant, Warehouse administrator etc.) with additional roles for each task that can be added as required (ResetPassword, CustomerIsOnCreditHold Set/Clear etc.).

The GET method of the Authorisation controller returns a list of the User’s Roles which can be used to enable/disable functionality in the user interface.

/// <summary>
/// Gets a list of the current User's roles.
/// </summary>
/// <response code="200">List of claims returned.</response>
/// <response code="401">Unauthorised, bearer token missing or expired.</response>
/// <returns>list of claims.</returns>
[HttpGet]
[ProducesResponseType(StatusCodes.Status200OK, Type = typeof(List<string>))]
public List<string> Get()
{
    List<string> claimNames = new List<string>();

    foreach (var claim in this.User.Claims.Where(c => c.Type == ClaimTypes.Role))
    {
        claimNames.Add(claim.Value);
    }

    return claimNames;
}

We have found this approach to be a robust way to add granular authorisation for new functionality to a “legacy” application without breaking the existing solution. I have ignored a user being disabled after a number of failed logins, password complexity rules etc. as these tend to be application specific and not really related to the use of Dapper.

Most blog posts talk about building “green fields” applications; I have found hardly any that cover “muddy fields” development where you have to deal with “legacy” code.

Not all “legacy” code is bad. I work on one code base which is nearly 20 years old. It started as a spreadsheet plug-in and has grown over time into a SaaS application. There is very little of the original code left; it has just been carefully re-factored over the years with time allocated to chip away at technical debt.

Swarm Space – Bumblebee Hive API Basic client

Back in July I purchased a Satellite Transceiver Breakout – Swarm M138 from SparkFun and it has been sitting on the shelf since then. I want to get telemetry from a sensor to an Azure IoT Hub or Azure IoT Central over a Swarm Space link for a project I am working on.

I’ll need to solder on some headers and cut a couple of tracks on the breakout board so my device (most probably a SparkFun ESP32-S2 WROOM) can connect to the Swarm M138 modem. The .NET nanoFramework team have an IoT.Device Swarm Tile NuGet package which I will use to interface the device to the modem.

I have started with a “nasty” Proof of Concept(PoC) to figure out how to connect to the Swarm Hive API.

The Swarm Hive API has been published with Swagger/OpenAPI, which is really simple to use. I used NSwagStudio to generate a C# client so I didn’t have to “handcraft” one.

Initially the generated code wouldn’t compile, but I found a clue in a GitHub issue from September 2017, which was to change the “Operation Generation Mode” to SingleClientFromOperationId. (The setting is highlighted above.)

static async Task Main(string[] args)
{
    using (HttpClient httpClient = new HttpClient())
    {
        BumblebeeHiveClient.Client client = new BumblebeeHiveClient.Client(httpClient);

        client.BaseUrl = "https://bumblebee.hive.swarm.space/hive/";

        BumblebeeHiveClient.LoginForm loginForm = new BumblebeeHiveClient.LoginForm();

        // https://bumblebee.hive.swarm.space/login/
        loginForm.Username = "...";
        loginForm.Password = "...";

        Console.WriteLine($"devMobile SwarmSpace Bumblebee Hive Console Client");
        Console.WriteLine("");

        Console.WriteLine($"Login POST");
        BumblebeeHiveClient.Response response = await client.PostLoginAsync(loginForm);

        Console.WriteLine($"Token :{response.Token[..5]}.....{response.Token[^5..]}");
        Console.WriteLine($"Press <enter> to continue");
        Console.ReadLine();

        string apiKey = "bearer " + response.Token;

        httpClient.DefaultRequestHeaders.Add("Authorization", apiKey);


        Console.WriteLine($"Device count GET");

        string count = await client.GetDeviceCountAsync(1);

        Console.WriteLine($"Device count :{count}");
        Console.WriteLine($"Press <enter> to continue");
        Console.ReadLine();

        Console.WriteLine($"Device(s) information GET");

        var devices = await client.GetDevicesAsync(1, null, null, null, null, null, null, null, null);

        foreach (var device in devices)
        {
            Console.WriteLine($" Id:{device.DeviceId} Name:{device.DeviceName} Type:{device.DeviceType} Organisation:{device.OrganizationId}");
        }

        Console.WriteLine($"Press <enter> to continue");
        Console.ReadLine();

        Console.WriteLine($"User Context GET");
        var userContext = await client.GetUserContextAsync();

        Console.WriteLine($" Id:{userContext.UserId} Name:{userContext.Username} Country:{userContext.Country}");

        Console.WriteLine("Additional properties");
        foreach ( var additionalProperty in userContext.AdditionalProperties)
        {
            Console.WriteLine($" Id:{additionalProperty.Key} Value:{additionalProperty.Value}");
        }

        Console.WriteLine($"Press <enter> to exit");
        Console.ReadLine();
    }
}

I tried a couple of ways to attach the Swarm Hive API authorisation token (returned by the Login method) to client requests. After a couple of failed attempts, I “realised” that adding the “Authorization” header to the HttpClient DefaultRequestHeaders was by far the simplest approach.

My “nasty” console application calls the Login method, then requests the number of devices (I only have one), gets a list of the properties of all the devices (a very short list), then gets the User Context and displays the user’s ID, Name and Country.

.NET Core web API + Dapper – Authentication

A couple of recent contracts have been maintaining and “remediating” legacy codebases which have been in production for up to a decade. The applications are delivering business value (they can’t stop working) and the customers’ budgets are limited (they can only afford incremental change). As a result, we end up making tactical decisions to keep the application working and longer-term ones to improve the “ilities” (taking into account the customer’s priorities).

It is rare to have a “green fields” project, so my plan was to use the next couple of WebAPI + Dapper posts to illustrate the sort of challenges we have encountered.

Customer: “we want to put a nice webby frontend on our existing bespoke solution” (“putting lipstick on a pig”).

Customer: The user’s login details are stored in the database and we can’t change the login process as the help desk won’t cope.

The remediation of Authentication and Authorisation (A&A) functionality can be particularly painful and is often driven by compliance issues e.g. EU GDPR, The Privacy Act 2020 etc.

The World Wide Importers database would be a representative example of databases we have worked with.

CREATE TABLE [Application].[People](
	[PersonID] [int] NOT NULL,
	[FullName] [nvarchar](50) NOT NULL,
	[PreferredName] [nvarchar](50) NOT NULL,
	[SearchName]  AS (concat([PreferredName],N' ',[FullName])) PERSISTED NOT NULL,
	[IsPermittedToLogon] [bit] NOT NULL,
	[LogonName] [nvarchar](50) NULL,
	[IsExternalLogonProvider] [bit] NOT NULL,
	[HashedPassword] [varbinary](max) NULL,
	[IsSystemUser] [bit] NOT NULL,
	[IsEmployee] [bit] NOT NULL,
	[IsSalesperson] [bit] NOT NULL,
	[UserPreferences] [nvarchar](max) NULL,
	[PhoneNumber] [nvarchar](20) NULL,
	[FaxNumber] [nvarchar](20) NULL,
	[EmailAddress] [nvarchar](256) NULL,
	[Photo] [varbinary](max) NULL,
	[CustomFields] [nvarchar](max) NULL,
	[OtherLanguages]  AS (json_query([CustomFields],N'$.OtherLanguages')),
	[LastEditedBy] [int] NOT NULL,
	[ValidFrom] [datetime2](7) GENERATED ALWAYS AS ROW START NOT NULL,
	[ValidTo] [datetime2](7) GENERATED ALWAYS AS ROW END NOT NULL,
 CONSTRAINT [PK_Application_People] PRIMARY KEY CLUSTERED 
(
	[PersonID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [USERDATA],
	PERIOD FOR SYSTEM_TIME ([ValidFrom], [ValidTo])
) ON [USERDATA] TEXTIMAGE_ON [USERDATA]
WITH
(
SYSTEM_VERSIONING = ON (HISTORY_TABLE = [Application].[People_Archive])
)

Initial observations

  • The LogonName column doesn’t have an index and uniqueness is not enforced, which is a bit odd.
  • The table has SYSTEM_VERSIONING enabled so any structural changes are going to be hard work.
  • There are a couple of computed columns, so we need to be careful with any changes.
  • The password hash is a varbinary(max) column, so we need to figure out how it is generated and updated.
  • There are a surprising number of nullable columns.
  • The code associated with IsExternalLogonProvider needs to be investigated.
  • Looks like the granularity of permissions i.e. IsSystemUser, IsEmployee, IsSalesperson is low.
  • The database must be old; there is a FaxNumber column.
  • Looks like there are internal and external (maybe IsSystemUser, IsEmployee, IsSalesperson are all false) people.
  • The PersonID is generated by a Sequence rather than an Identity column, which is unusual.
  • IsPermittedToLogon indicates that the logon process might be a bit more complex than expected.
  • The terms Login and Logon appear to be used interchangeably.
  • No lockout after several failed logon attempts, no lockout-until functionality etc.
  • No concurrency control (optimistic or pessimistic) for updates (with a TimeStamp or Version column) so last update wins.

The next step would be to have a look at the contents of the People table with SQL Server Management Studio (SSMS).

SQL Server Management Studio [Application].[Person] table

Initial observations

  • Looks like the system administrators are in the first couple of rows (makes Insecure Direct Object Reference attacks easier).
  • Lots of NULL values, which often makes application code more complex.
  • Duplicates in the LogonName column e.g. “NO LOGON”.
  • Some “magic” values e.g. “NO LOGON”.
  • Why does the “Data Conversion Only” person have a photo?
  • The IsExternalLogonProvider is always false.
  • The UserPreferences, CustomFields and OtherLanguages columns contain JavaScript Object Notation (JSON); need to see how these are generated, updated and used.

The application must have an existing external user provisioning process. It looks like a Person record is created by an Administrator, then the User sets their password on first Logon (I am not certain if there are any password complexity rules enforced).

An application I worked on didn’t have any enforcement of password complexity or minimum length in earlier versions. This caused issues when a number of their clients couldn’t log on to the new application because their existing password was too short. We updated the logon field rules and retained minimum complexity and length rules on the change and reset password field validation. We then forced 5-10% of users per month (so the helpdesk wasn’t overwhelmed by support calls) to update their passwords.

ALTER PROCEDURE [Website].[ActivateWebsiteLogon]
@PersonID int,
@LogonName nvarchar(50),
@InitialPassword nvarchar(40)
WITH EXECUTE AS OWNER
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;

    UPDATE [Application].People
    SET IsPermittedToLogon = 1,
        LogonName = @LogonName,
        HashedPassword = HASHBYTES(N'SHA2_256', @InitialPassword + FullName),
        UserPreferences = (SELECT UserPreferences FROM [Application].People WHERE PersonID = 1) -- Person 1 has User Preferences template
    WHERE PersonID = @PersonID
    AND PersonID <> 1
    AND IsPermittedToLogon = 0;

    IF @@ROWCOUNT = 0
    BEGIN
        PRINT N'The PersonID must be valid, must not be person 1, and must not already be enabled';
        THROW 51000, N'Invalid PersonID', 1;
        RETURN -1;
    END;
END;

Initial observations

  • The password hashing uses SHA2_256, which is good.
  • The password hash is seeded with the person’s FullName; this could be a problem if a user changes their name.
  • The user selects their LogonName; this needs further investigation as duplicates could be an issue.
  • XACT_ABORT ON + THROW for validation and state management is odd; need to check how SqlExceptions are handled in application code.
  • The PersonID magic number handling adds complexity and needs further investigation.
  • EXECUTE AS OWNER caught my attention; I checked and there is only one database user for the application.
  • The LastEditedBy isn’t set to the PersonID, which seems a bit odd.

Based on [Website].[ActivateWebsiteLogon] (I couldn’t find a Logon stored procedure so I reviewed ChangePassword as well), this is my first attempt at a stored procedure which validates a user’s LogonName and Password.

CREATE PROCEDURE [Application].[PersonAuthenticateLookupByLogonNameV1]
@LogonName nvarchar(50),
@Password nvarchar(40)
AS
BEGIN
	SELECT PersonID, FullName, EmailAddress
	FROM [Application].[People]
	WHERE (( LogonName = @LogonName)
		AND (IsPermittedToLogon = 1)
		AND (HASHBYTES(N'SHA2_256', @Password + FullName) = HashedPassword))
END
GO

We use the “..VX” suffix approach to reduce issues when doing canary and rolling deployments. If the parameter list or return dataset of a stored procedure changes (even if we think it is backwards compatible) the version number is incremented, so that different versions of the application can run concurrently and backing out application updates is less fraught.

The next step was to create a Data Transfer Object (DTO) for the Logon request payload.

public class LogonRequest
{
    [JsonRequired]
    [MinLength(Constants.LogonNameLengthMinimum)]
    [MaxLength(Constants.LogonNameLengthMaximum)]
    public string LogonName { get; set; }

    [JsonRequired]
    [MinLength(Constants.PasswordLengthMinimum)]
    [MaxLength(Constants.PasswordLengthMaximum)]
    public string Password { get; set; }
}
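The Constants class isn’t shown in the post; a plausible sketch (the minimum lengths are assumptions, the maximums follow the [Application].[People] schema and the stored procedure parameters):

public static class Constants
{
    public const int LogonNameLengthMinimum = 3;   // Assumption, not from the post
    public const int LogonNameLengthMaximum = 50;  // Matches LogonName nvarchar(50)

    public const int PasswordLengthMinimum = 8;    // Assumption, not from the post
    public const int PasswordLengthMaximum = 40;   // Matches the @Password nvarchar(40) parameter
}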

Then a Proof of Concept (PoC) AuthenticationController to process a login request.

Swagger Docs Authentication controller Login

The Logon method calls the PersonAuthenticateLookupByLogonNameV1 stored procedure to validate the LogonName and Password. In this iteration the only claim added to the JSON Web Token (JWT) is the PersonId. We try and keep JWTs as small as possible, as one customer’s application failed randomly because a couple of users’ JWTs were so large (lots of roles) that some versions of browsers choked.

public AuthenticationController(IConfiguration configuration, ILogger<AuthenticationController> logger, IOptions<Model.JwtIssuerOptions> jwtIssuerOptions)
{
    this.configuration = configuration;
    this.logger = logger;
    this.jwtIssuerOptions = jwtIssuerOptions.Value;
}

[HttpPost("login")]
public async Task<ActionResult> Login([FromBody] Model.LoginRequest request )
{
    var claims = new List<Claim>();

    using (SqlConnection db = new SqlConnection(configuration.GetConnectionString("WorldWideImportersDatabase")))
    {
        UserLogonUserDetailsDto userLogonUserDetails = await db.QuerySingleOrDefaultWithRetryAsync<UserLogonUserDetailsDto>("[Application].[PersonAuthenticateLookupByLogonNameV1]", param: request, commandType: CommandType.StoredProcedure);
        if (userLogonUserDetails == null)
        {
             logger.LogWarning("Login attempt by user {0} failed", request.LogonName);

             return this.Unauthorized();
        }

        // Setup the primary SID + name info
        claims.Add(new Claim(ClaimTypes.PrimarySid, userLogonUserDetails.PersonID.ToString()));
    }

    var authSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(jwtIssuerOptions.SecretKey));

    var token = new JwtSecurityToken(
        issuer: jwtIssuerOptions.Issuer,
        audience: jwtIssuerOptions.Audience,
        expires: DateTime.UtcNow.Add(jwtIssuerOptions.TokenExpiresAfter),
        claims: claims,
        signingCredentials: new SigningCredentials(authSigningKey, SecurityAlgorithms.HmacSha256));

    return this.Ok(new
    {
        token = new JwtSecurityTokenHandler().WriteToken(token),
        expiration = token.ValidTo,
    });
}
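The Model.JwtIssuerOptions class isn’t shown in the post; based on how it is used above, a minimal sketch would be:

public class JwtIssuerOptions
{
    public string Issuer { get; set; }

    public string Audience { get; set; }

    public string SecretKey { get; set; }

    public TimeSpan TokenExpiresAfter { get; set; }
}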

After a successful logon the Token has to be copied (I regularly miss the first or the last character) from the response payload to the Authorisation form.

The Swagger UI Authentication method after a successful Logon with the bearer token highlighted

I decorated the SystemController DeploymentVersion method with the [Authorize] attribute to force a check that the user is authenticated.

/// <summary>
/// WebAPI controller for handling System Dapper functionality.
/// </summary>
[Route("api/[controller]")]
[ApiController]
public class SystemController : ControllerBase
{
    /// <summary>
    /// Returns the Application version in [Major].[Minor].[Build].[Revision] format.
    /// </summary>
    /// <response code="200">Version string returned.</response>
    /// <response code="401">Unauthorised, bearer token missing or expired.</response>
    /// <returns>Returns the Application version in [Major].[Minor].[Build].[Revision] format.</returns>
    [HttpGet("DeploymentVersion"), Authorize]
    public string DeploymentVersion()
    {
        return Assembly.GetExecutingAssembly().GetName().Version.ToString();
    }
}

If the bearer token is missing, invalid (I accidentally didn’t copy either the first or last character) or expired, the method call will fail with a 401 Unauthorized error.

Swagger UI System Controller DeploymentVersion method failing because the JWT is missing, invalid or expired

Controlling access at the level of controllers and controller methods is probably not granular enough for most applications, so adding “coarse” and “fine grained” authorisation to an existing application, plus the configuration of Swashbuckle and the application request processing middleware to support JWTs, will be covered in a couple of future posts.
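As a taster, a minimal sketch (assuming the Microsoft.AspNetCore.Authentication.JwtBearer package and the jwtIssuerOptions values shown earlier) of the middleware configuration that makes the [Authorize] attribute validate bearer tokens:

builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidIssuer = jwtIssuerOptions.Issuer,
            ValidAudience = jwtIssuerOptions.Audience,
            IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(jwtIssuerOptions.SecretKey)),
        };
    });

// After builder.Build(), authentication must run before authorisation
app.UseAuthentication();
app.UseAuthorization();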

For remediation projects we try to keep the code as simple as possible (but no simpler), by minimising the plumbing and only using advanced language features etc. where it adds value.

I’m leaning towards using Dependency Injection for configuration information, so the way connection strings and jwtIssuerOptions are accessed is going to be harmonised. In a future version of the application the jwtIssuerOptions will be migrated to an Azure Key Vault.

.NET Core web API + Dapper – Swagger

This is the first post in a series about my “learning journey” integrating Swagger/OpenAPI into my WebAPI + Dapper sample….

In the first iteration, I extracted the Title, Description etc. from the Assembly Version information.

// Extract application info for Swagger docs from assembly info
var fileVersionInfo = System.Diagnostics.FileVersionInfo.GetVersionInfo(Assembly.GetEntryAssembly().Location);

builder.Services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1",
    new OpenApiInfo
    {
        Title = fileVersionInfo.ProductName,
        Version = $"{fileVersionInfo.FileMajorPart}.{fileVersionInfo.FileMinorPart}",
        Description = fileVersionInfo.Comments,

        License = new OpenApiLicense
        {
             Name = fileVersionInfo.LegalCopyright,
             //Url = new Uri(""),
        },
        //TermsOfService = new Uri(""),

        Contact = new OpenApiContact
        {
            Name = fileVersionInfo.CompanyName,
            //Url = new Uri(""),
        }
    });
    c.OperationFilter<AddResponseHeadersFilter>();
    c.IncludeXmlComments(string.Format(@"{0}\WebAPIDapper.xml", System.AppDomain.CurrentDomain.BaseDirectory));
});

This worked okay but there were still some fields which I had to manually update (or there was no matching property in the assembly information), so I abandoned this approach. I still use the version information property as this changes regularly as part of my build management process.

var version = Assembly.GetEntryAssembly().GetName().Version;

builder.Services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1",
    new OpenApiInfo
    {
        Title = ".NET Core web API + Dapper + Swagger",
        Version = $"{version.Major}.{version.Minor}",

        Description = "This sample application shows how .NET Core and Dapper can be used to build lightweight Web APIs described with Swagger",
        Contact = new()
        {
            //Email = "", // Not certain this is a good idea
            Name = "Bryn Lewis",
            Url = new Uri("https://blog.devMobile.co.nz")
        },
        License = new()
        {
            Name = "MIT License",
            Url = new Uri("https://opensource.org/licenses/MIT"),
        }
    });
    c.OperationFilter<AddResponseHeadersFilter>();
    c.IncludeXmlComments(string.Format(@"{0}\WebAPIDapper.xml", System.AppDomain.CurrentDomain.BaseDirectory));
});

I then updated the builder.Services.AddSwaggerGen parameters to include the XML documentation comments but the application failed with an exception when I started it up.

Application failed because of missing XML docs file
Enabling the Generation of XML Documentation file
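The fix is enabling generation of the XML documentation file in the project file (the NoWarn entry, which suppresses the warnings for members without XML comments, is optional):

<PropertyGroup>
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
    <NoWarn>$(NoWarn);1591</NoWarn>
</PropertyGroup>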

I wanted my Swagger Interface Definition to have a Favicon. I used Favicon Generator which generates a zip file containing different resolution versions of Shaun The Sheep.

The first time I tried to edit one of the image files the “this file comes from an untrusted source…” warning was displayed.

Using File Properties to unblock contents.

I “unblocked” the downloaded zip file and extracted the contents again, rather than having to unblock each file individually. I then launched the website in the Visual Studio 2022 debugger and the favicon was not displayed. I had forgotten to configure copying of the image files when the application was compiled.

Configuring the favicon images to be copied to the website root.

I added some JavaScript to display the favicon in the browser tab as well.

(function () {
    // Remove the favicon link(s) added by the Swagger UI (it adds 16x16 and 32x32 versions)
    var link = document.querySelector("link[rel*='icon']");
    while (link) {
        document.head.removeChild(link);
        link = document.querySelector("link[rel*='icon']");
    }

    // Add a link to the custom favicon
    link = document.createElement('link');
    link.type = 'image/x-icon';
    link.rel = 'shortcut icon';
    link.href = '../images/favicon.ico';
    document.getElementsByTagName('head')[0].appendChild(link);
})();

I used my “elite” Cascading Style Sheet (CSS) skills to change the colour of the Swagger API explorer toolbar to light blue.

.swagger-ui .topbar {
    background-color: lightblue;
}

Neither of these “enhancements” worked first time as I had neglected to configure the loading of static files.

app.UseStaticFiles(new StaticFileOptions
{
    FileProvider = new PhysicalFileProvider(Path.Combine(Directory.GetCurrentDirectory(), "css")),
    RequestPath = "/css",
});

app.UseStaticFiles(new StaticFileOptions
{
    FileProvider = new PhysicalFileProvider(Path.Combine(Directory.GetCurrentDirectory(), "images")),
    RequestPath = "/images",
});

app.UseStaticFiles(new StaticFileOptions
{
    FileProvider = new PhysicalFileProvider(Path.Combine(Directory.GetCurrentDirectory(), "JavaScript")),
    RequestPath = "/JavaScript",
});

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();

    app.UseSwaggerUI(c =>
    {
        c.EnableFilter();
        c.InjectStylesheet("/css/Swagger.css");
        c.InjectJavascript("/JavaScript/Swagger.js");
        c.DocumentTitle = "Web API Dapper Sample";
    });
}

Now the Swagger Docs interface is a “visual symphony” with customisations illustrating the options available.

Swagger API Explorer with custom images and “elite” styling

This is the first in a series of success & fail posts. Now that the “eye candy” is sorted I can go back to coding.

.NET Core web API + Dapper – Parameters

Different Approaches…

While working on customer ASP.NET Core web API (WebAPI) + Microsoft SQL Server (MSSQL) applications I have encountered several different ways of passing parameters to stored procedures and embedded Structured Query Language (SQL) statements. I have created five examples which query the [Warehouse].[StockItems] table in the World Wide Importers database to illustrate the different approaches.

A customer with a large application which had a lot of ADO.Net code was comfortable with Dapper DynamicParameters. Hundreds of stored procedures with input (and some output) parameters were used to manage access to data. The main advantage of this approach was “familiarity”, and the use of DynamicParameters made the mapping of C# variables to stored procedure parameters (with different naming conventions) obvious.

[HttpGet("Dynamic")]
public async Task<ActionResult<IEnumerable<Model.StockItemListDtoV1>>> GetDynamic(
            [Required][MinLength(3, ErrorMessage = "The name search text must be at least {1} characters long"), MaxLength(20, ErrorMessage = "The name search text must be no more that {1} characters long")] string searchText,
            [Required][Range(1, int.MaxValue, ErrorMessage = "MaximumRowsToReturn must be greater than or equal to {1}")] int maximumRowsToReturn)
{
    IEnumerable<Model.StockItemListDtoV1> response = null;

    using (SqlConnection db = new SqlConnection(this.connectionString))
    {
        DynamicParameters parameters = new DynamicParameters();

        parameters.Add("MaximumRowsToReturn", maximumRowsToReturn);
        parameters.Add("SearchText", searchText);

        response = await db.QueryWithRetryAsync<Model.StockItemListDtoV1>(sql: "[Warehouse].[StockItemsNameSearchV1]", param: parameters, commandType: CommandType.StoredProcedure);
    }

    return this.Ok(response);
}
Error message displayed when SearchText field missing
Error message displayed when SearchText is too short
Error message displayed when SearchText too long
Successful query of StockItems table

The developers at another company used anonymous types everywhere. They also had similar C# and stored procedure parameter naming conventions so there was minimal mapping required (in the example code only maximumRowsToReturn vs. stockItemsMaximum). They found mapping stored procedure output parameters problematic (see the sketch after the example below). For longer parameter lists they struggled with formatting the code in a way which was readable.

[HttpGet("Anonymous")]
public async Task<ActionResult<IEnumerable<Model.StockItemListDtoV1>>> GetAnonymous(
            [Required][MinLength(3, ErrorMessage = "The name search text must be at least {1} characters long"), MaxLength(20, ErrorMessage = "The name search text must be no more that {1} characters long")] string searchText,
            [Required][Range(1, 100, ErrorMessage = "The maximum number of stock items to return must be greater than or equal to {1} and less then or equal {2}")] int stockItemsMaximum)
{
   IEnumerable<Model.StockItemListDtoV1> response = null;

   using (SqlConnection db = new SqlConnection(this.connectionString))
   {
      response = await db.QueryWithRetryAsync<Model.StockItemListDtoV1>(sql: "[Warehouse].[StockItemsNameSearchV1]", new { searchText, maximumRowsToReturn = stockItemsMaximum }, commandType: CommandType.StoredProcedure);
   }

   return this.Ok(response);
}
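For comparison, a hedged sketch of reading an output parameter with DynamicParameters; [Warehouse].[StockItemsNameSearchCountV1] and its RowCount output parameter are hypothetical, not part of the World Wide Importers sample:

DynamicParameters parameters = new DynamicParameters();

parameters.Add("SearchText", searchText);
// Output parameters are declared with a direction rather than a value
parameters.Add("RowCount", dbType: DbType.Int32, direction: ParameterDirection.Output);

await db.ExecuteWithRetryAsync(sql: "[Warehouse].[StockItemsNameSearchCountV1]", param: parameters, commandType: CommandType.StoredProcedure);

// The output value is only available after the command has executed
int rowCount = parameters.Get<int>("RowCount");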

At another customer the developers used Data Transfer Objects (DTOs)/Plain Old CLR Objects (POCOs) and they had some control over the naming of the stored procedure/embedded SQL parameters.

public class StockItemNameSearchDtoV1
{
   [Required]
   [MinLength(3, ErrorMessage = "The name search text must be at least {1} characters long"), MaxLength(20, ErrorMessage = "The name search text must be no more that {1} characters long")]
   public string SearchText { get; set; }

   [Required]
   [Range(1, 100, ErrorMessage = "The maximum number of rows to return must be greater than or equal to {1} and less then or equal {2}")]
   public int MaximumRowsToReturn { get; set; }
}

[HttpGet("AutomagicDefault")]
public async Task<ActionResult<IEnumerable<Model.StockItemListDtoV1>>> GetDefault([FromQuery] Model.StockItemNameSearchDtoV1 request)
{
   IEnumerable<Model.StockItemListDtoV1> response = null;

   using (SqlConnection db = new SqlConnection(this.connectionString))
   {
      response = await db.QueryWithRetryAsync<Model.StockItemListDtoV1>(sql: "[Warehouse].[StockItemsNameSearchV1]", param: request, commandType: CommandType.StoredProcedure);
   }

   return this.Ok(response);
}

At another customer the developers used Data Transfer Objects (DTOs)/Plain Old CLR Objects (POCOs) to access a database which had several hundred stored procedures. They had no control over the stored procedure parameter names, so they mapped query string parameters to the properties of their POCOs.

This took some experimentation as System.Text.Json/Newtonsoft.Json decorations didn’t work (a query string is not JavaScript Object Notation (JSON)). They decorated the properties of their DTOs with the [FromQuery] attribute.

public class StockItemNameSearchDtoV2
{
   [Required]
   [FromQuery(Name = "SearchText")]
   [MinLength(3, ErrorMessage = "The name search text must be at least {1} characters long"), MaxLength(20, ErrorMessage = "The name search text must be no more than {1} characters long")]
   public string SearchText { get; set; }

   [Required]
   [FromQuery(Name = "StockItemsMaximum")]
   [Range(1, 100, ErrorMessage = "The maximum number of stock items to return must be greater than or equal to {1} and less than or equal to {2}")]
   public int MaximumRowsToReturn { get; set; }
}
[HttpGet("AutomagicMapped")]
public async Task<ActionResult<IEnumerable<Model.StockItemListDtoV1>>> GetMapperDecorated([FromQuery] Model.StockItemNameSearchDtoV2 request)
{
   IEnumerable<Model.StockItemListDtoV1> response = null;

   using (SqlConnection db = new SqlConnection(this.connectionString))
   {
      response = await db.QueryWithRetryAsync<Model.StockItemListDtoV1>(sql: "[Warehouse].[StockItemsNameSearchV1]", param: request, commandType: CommandType.StoredProcedure);
   }

   return this.Ok(response);
}

I don’t think that [FromQuery] decorations on POCOs are a good idea. If the classes are only used by one controller method I would consider moving them into the controller file, as sketched below.
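
A minimal sketch of that alternative, with the request class nested inside the controller so the binding attributes live alongside the only action that uses them. The controller name and "WorldWideImporters" connection string key are hypothetical, and it assumes the same QueryWithRetryAsync wrapper as the other examples.

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Data;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Configuration;

[ApiController]
[Route("api/[controller]")]
public class StockItemsNameSearchController : ControllerBase
{
   private readonly string connectionString;

   public StockItemsNameSearchController(IConfiguration configuration)
   {
      this.connectionString = configuration.GetConnectionString("WorldWideImporters");
   }

   // Nested because no other controller uses this request shape
   public class NameSearchRequest
   {
      [Required]
      [MinLength(3), MaxLength(20)]
      public string SearchText { get; set; }

      [Required]
      [Range(1, 100)]
      public int MaximumRowsToReturn { get; set; }
   }

   [HttpGet]
   public async Task<ActionResult<IEnumerable<Model.StockItemListDtoV1>>> Get([FromQuery] NameSearchRequest request)
   {
      using (SqlConnection db = new SqlConnection(this.connectionString))
      {
         return this.Ok(await db.QueryWithRetryAsync<Model.StockItemListDtoV1>(
            sql: "[Warehouse].[StockItemsNameSearchV1]",
            param: request,
            commandType: CommandType.StoredProcedure));
      }
   }
}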

//
// https://localhost:5001/api/StockItemsParameter/Array?StockItemId=1&StockItemId=5&StockItemId=10
//
[HttpGet("Array")]
public async Task<ActionResult<IEnumerable<Model.StockItemListDtoV1>>> GetArray(
   [FromQuery(Name = "stockItemID")][Required(), MinLength(1, ErrorMessage = "Minimum of {1} StockItem id(s)"), MaxLength(100, ErrorMessage = "Maximum {1} StockItem ids")] int[] stockItemIDs)
{
    IEnumerable<Model.StockItemListDtoV1> response = null;

    using (SqlConnection db = new SqlConnection(this.connectionString))
    {
        response = await db.QueryWithRetryAsync<Model.StockItemListDtoV1>(sql: @"SELECT [StockItemID] as ""ID"", [StockItemName] as ""Name"", [RecommendedRetailPrice], [TaxRate] FROM [Warehouse].[StockItems] WHERE  StockItemID IN @StockItemIds ", new { StockItemIDs = stockItemIDs }, commandType: CommandType.Text);
    }

    return this.Ok(response);
}

A customer wanted users to be able to search for items selected in a multiple-selection list, so a Dapper WHERE IN value array was used.

Dapper WHERE IN with no StockItemIds on the query string
Dapper WHERE IN with several StockItemIds on query string

To explore how this worked I downloaded the Dapper source code and referenced the project in my solution.

After single-stepping through the Dapper source code I found where the array of StockItems was getting mapped into a “generated” parameterised SQL statement.

Dapper generated parameterised SQL Statement
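
For reference, Dapper expands an enumerable parameter into one numbered parameter per element before the statement is sent to SQL Server. A simplified sketch of the effect (the id values are arbitrary, and this.connectionString comes from the controller as in the examples above):

using (SqlConnection db = new SqlConnection(this.connectionString))
{
   // Dapper sees the int[] and generates @StockItemIDs1..@StockItemIDs3, so the
   // statement executed is approximately:
   //    ... WHERE StockItemID IN (@StockItemIDs1, @StockItemIDs2, @StockItemIDs3)
   var stockItems = await db.QueryAsync<Model.StockItemListDtoV1>(
      @"SELECT [StockItemID] as ""ID"", [StockItemName] as ""Name"", [RecommendedRetailPrice], [TaxRate] FROM [Warehouse].[StockItems] WHERE StockItemID IN @StockItemIDs",
      new { StockItemIDs = new[] { 1, 5, 10 } });
}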

Based on my customers’ experiences, a “mix ’n’ match” approach to parameterising Dapper queries looks reasonable.

.NET Core web API + Dapper – Asynchronicity Revisited

Asynchronous is always better, maybe…

For a trivial ASP.NET Core web API controller like the one below, the difference between synchronous and asynchronous calls is most probably negligible, especially as the sample World Wide Importers database [Warehouse].[StockItems] table has only 227 records.

[HttpGet("IEnumerableSmall")]
public async Task<ActionResult<IEnumerable<Model.StockItemListDtoV1>>> GetIEnumerableSmall([FromQuery] bool buffered = false)
{
	IEnumerable<Model.StockItemListDtoV1> response = null;

	using (SqlConnection db = new SqlConnection(this.connectionString))
	{
		logger.LogInformation("IEnumerableSmall start Buffered:{buffered}", buffered);

		response = await db.QueryWithRetryAsync<Model.StockItemListDtoV1>(
			sql: @"SELECT [SI1].[StockItemID] as ""ID"", [SI1].[StockItemName] as ""Name"", [SI1].[RecommendedRetailPrice], [SI1].[TaxRate] " +
				   "FROM [Warehouse].[StockItems] as SI1",
			buffered: buffered,
			commandType: CommandType.Text);

		logger.LogInformation("IEnumerableSmall done");
	}

	return this.Ok(response);
}

The easiest way to increase the size of the returned result set was with CROSS JOIN(s). This is the first (and most probably the last) time I have used a cross join in a “real” application.

[HttpGet("IEnumerableMedium")]
public async Task<ActionResult<IEnumerable<Model.StockItemListDtoV1>>> GetIEnumerableMedium([FromQuery] bool buffered = false)
{
	IEnumerable<Model.StockItemListDtoV1> response = null;

	using (SqlConnection db = new SqlConnection(this.connectionString))
	{
		logger.LogInformation("IEnumerableMedium start Buffered:{buffered}", buffered);

		response = await db.QueryWithRetryAsync<Model.StockItemListDtoV1>(
					sql: @"SELECT [SI2].[StockItemID] as ""ID"", [SI2].[StockItemName] as ""Name"", [SI2].[RecommendedRetailPrice], [SI2].[TaxRate] " +
									"FROM [Warehouse].[StockItems] as SI1 " +
									"CROSS JOIN [Warehouse].[StockItems] as SI2",
					buffered: buffered,
					commandType: CommandType.Text);

		logger.LogInformation("IEnumerableMedium done");
	}

	return this.Ok(response);
}

The medium controller returns 51,529 (227 x 227) rows and the large controller up to 11,697,083 (227 x 227 x 227) rows.

[HttpGet("IEnumerableLarge")]
public async Task<ActionResult<IEnumerable<Model.StockItemListDtoV1>>> GetIEnumerableLarge()
{
	IEnumerable<Model.StockItemListDtoV1> response = null;

	using (SqlConnection db = new SqlConnection(this.connectionString))
	{
		logger.LogInformation("IEnumerableLarge start");

		response = await db.QueryWithRetryAsync<Model.StockItemListDtoV1>(
				sql: @"SELECT [SI3].[StockItemID] as ""ID"", [SI3].[StockItemName] as ""Name"", [SI3].[RecommendedRetailPrice], [SI3].[TaxRate] " +
						"FROM [Warehouse].[StockItems] as SI1 " +
						"CROSS JOIN [Warehouse].[StockItems] as SI2 " +
						"CROSS JOIN [Warehouse].[StockItems] as SI3",
				commandType: CommandType.Text);

		logger.LogInformation("IEnumerableLarge done");
	}

	return this.Ok(response);
}

The first version of “IEnumerableLarge” returned some odd Hypertext Transfer Protocol (HTTP) error codes and Opera kept running out of memory.

After a delay of roughly 3 minutes the Opera browser displayed a 500 error

I think this error was due to the Azure App Service load balancer’s 230 second timeout.

Opera displaying out of memory error

I added query string parameters to the IEnumerable and IAsyncEnumerable methods to limit the number of records returned by QueryWithRetryAsync (using a TOP clause). The buffered vs. unbuffered code paths can be seen in this extract from the Dapper QueryAsync source.

if (command.Buffered)
{
   var buffer = new List<T>();
   var convertToType = Nullable.GetUnderlyingType(effectiveType) ?? effectiveType;
   while (await reader.ReadAsync(cancel).ConfigureAwait(false))
   {
      object val = func(reader);
      buffer.Add(GetValue<T>(reader, effectiveType, val));
   }
   while (await reader.NextResultAsync(cancel).ConfigureAwait(false)) 
   { /* ignore subsequent result sets */ }
   command.OnCompleted();
   return buffer;
}
else
{
   // can't use ReadAsync / cancellation; but this will have to do
   wasClosed = false; // don't close if handing back an open reader; rely on the command-behavior
   var deferred = ExecuteReaderSync<T>(reader, func, command.Parameters);
   reader = null; // to prevent it being disposed before the caller gets to see it
   return deferred;
 }

The QueryWithRetryAsync method (my wrapper around Dapper’s QueryAsync) also has a buffered vs. unbuffered reader parameter (defaulting to true) and I wanted to see if that had any impact.
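
The wrapper itself isn't listed in this post, so for context here is a minimal sketch of what a QueryWithRetryAsync extension method could look like; the Polly retry policy, parameter names, and signature are all assumptions rather than the actual implementation.

using System;
using System.Collections.Generic;
using System.Data;
using System.Threading.Tasks;
using Dapper;
using Microsoft.Data.SqlClient;
using Polly;
using Polly.Retry;

public static class SqlConnectionRetryExtensions
{
   // Assumed policy: retry transient SqlExceptions with exponential back-off
   private static readonly AsyncRetryPolicy RetryPolicy = Policy
      .Handle<SqlException>()
      .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

   public static Task<IEnumerable<T>> QueryWithRetryAsync<T>(
      this IDbConnection connection,
      string sql,
      object param = null,
      bool buffered = true,
      CommandType? commandType = null)
   {
      // The buffered flag maps onto Dapper's CommandFlags.Buffered
      CommandDefinition command = new CommandDefinition(
         sql,
         parameters: param,
         commandType: commandType,
         flags: buffered ? CommandFlags.Buffered : CommandFlags.None);

      return RetryPolicy.ExecuteAsync(() => connection.QueryAsync<T>(command));
   }
}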

[HttpGet("IEnumerableLarge")]
public async Task<ActionResult<IEnumerable<Model.StockItemListDtoV1>>> GetIEnumerableLarge([FromQuery] bool buffered = false, [FromQuery] int recordCount = 10)
{
	IEnumerable<Model.StockItemListDtoV1> response = null;

	using (SqlConnection db = new SqlConnection(this.connectionString))
	{
		logger.LogInformation("IEnumerableLarge start RecordCount:{recordCount} Buffered:{buffered}", recordCount, buffered);

		response = await db.QueryWithRetryAsync<Model.StockItemListDtoV1>(
			sql: $@"SELECT TOP({recordCount}) [SI3].[StockItemID] as ""ID"", [SI3].[StockItemName] as ""Name"", [SI3].[RecommendedRetailPrice], [SI3].[TaxRate] " +
					"FROM [Warehouse].[StockItems] as SI1 " +
					"CROSS JOIN [Warehouse].[StockItems] as SI2 " +
					"CROSS JOIN [Warehouse].[StockItems] as SI3",
			buffered: buffered,
			commandType: CommandType.Text);

		logger.LogInformation("IEnumerableLarge done");
	}

	return this.Ok(response);
}

I used Telerik Fiddler to call the StockItemsIAsyncEnumerable controller’s IEnumerable and IAsyncEnumerable methods. The Azure App Service was hosted in an Azure App Service Plan (S1, 100 total ACU, 1.75 GB memory). I found Telerik Fiddler had problems with larger responses, and would crash if the body of a larger response was viewed.

IEnumerableLarge method (buffered=false) response sizes and timings
IEnumerableLarge method (buffered=true) response sizes and timings

The unbuffered version had a slower Time To Last Byte (TTLB) and failed earlier, which is what I was expecting.

[HttpGet("IAsyncEnumerableLarge")]
public async Task<ActionResult<IAsyncEnumerable<Model.StockItemListDtoV1>>> GetAsyncEnumerableLarge([FromQuery] bool buffered = false, [FromQuery]int recordCount = 10)
{
    IEnumerable<Model.StockItemListDtoV1> response = null;

    using (SqlConnection db = new SqlConnection(this.connectionString))
    {
        logger.LogInformation("IAsyncEnumerableLarge start RecordCount:{recordCount} Buffered:{buffered}", recordCount, buffered);

        response = await db.QueryWithRetryAsync<Model.StockItemListDtoV1>(
            sql: $@"SELECT TOP({recordCount}) [SI3].[StockItemID] as ""ID"", [SI3].[StockItemName] as ""Name"", [SI3].[RecommendedRetailPrice], [SI3].[TaxRate] " +
                    "FROM [Warehouse].[StockItems] as SI1 " +
                    "CROSS JOIN [Warehouse].[StockItems] as SI2 " +
                    "CROSS JOIN [Warehouse].[StockItems] as SI3",
            buffered: buffered,
            commandType: CommandType.Text);

        logger.LogInformation("IAsyncEnumerableLarge done");
    }

    return this.Ok(response);
}

IAsyncEnumerableLarge method response sizes and timings

[HttpGet("IAsyncEnumerableLargeYield")]
public async IAsyncEnumerable<Model.StockItemListDtoV1> GetAsyncEnumerableLargeYield([FromQuery] int recordCount = 10)
{
	int rowCount = 0;

	using (SqlConnection db = new SqlConnection(this.connectionString))
	{
		logger.LogInformation("IAsyncEnumerableLargeYield start RecordCount:{recordCount}", recordCount);

		CommandDefinition commandDefinition = new CommandDefinition(
			$@"SELECT TOP({recordCount}) [SI3].[StockItemID] as ""ID"", [SI3].[StockItemName] as ""Name"", [SI3].[RecommendedRetailPrice], [SI3].[TaxRate] " +
						"FROM [Warehouse].[StockItems] as SI1 " +
						"CROSS JOIN [Warehouse].[StockItems] as SI2 " +
						"CROSS JOIN [Warehouse].[StockItems] as SI3",
			//commandTimeout:
			commandType: CommandType.Text
			//flags: CommandFlags.Pipelined
		);

		using var reader = await db.ExecuteReaderWithRetryAsync(commandDefinition);

		var rowParser = reader.GetRowParser<Model.StockItemListDtoV1>();

		while (await reader.ReadAsync())
		{
			rowCount++;

			if ((rowCount % 10000) == 0)
			{
				logger.LogInformation("Row count:{0}", rowCount);
			}

			yield return rowParser(reader);
		}
		logger.LogInformation("IAsyncEnumerableLargeYield done");
	}
}

When this post was written (August 2022) my understanding of Dapper and IAsyncEnumerable was limited, so I trialled the approach suggested in a StackOverflow post.

IAsyncEnumerableLargeYield method response sizes and timings

The IAsyncEnumerableLargeYield method was faster to start responding, the overall duration was less, and it returned significantly more records (13,000,000 vs. 7,000,000). I assume this was because the response was streamed so there wasn’t a timeout.

Azure Application Insights displaying the IAsyncEnumerable with yield method executing
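
On the client side a streamed response like this can also be consumed incrementally rather than buffered. A hedged sketch using .NET 6’s System.Text.Json; the endpoint URL is a placeholder and it assumes the Model.StockItemListDtoV1 class is shared with the client.

using System;
using System.IO;
using System.Net.Http;
using System.Text.Json;

using HttpClient client = new HttpClient();

// Placeholder URL; substitute the address of the deployed Azure App Service
using Stream stream = await client.GetStreamAsync("https://example.azurewebsites.net/api/StockItemsIAsyncEnumerable/IAsyncEnumerableLargeYield?recordCount=100000");

// DeserializeAsyncEnumerable yields each record as it is read from the stream,
// so the whole response never has to fit in client memory
await foreach (Model.StockItemListDtoV1 stockItem in JsonSerializer.DeserializeAsyncEnumerable<Model.StockItemListDtoV1>(stream, new JsonSerializerOptions(JsonSerializerDefaults.Web)))
{
   Console.WriteLine(JsonSerializer.Serialize(stockItem));
}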

The results of my tests should be treated as “indicative” rather than “definitive”. In a future post I will compare the scalability of the different approaches. The number of records returned by IAsyncEnumerableLargeYield is not realistic, and in a “real-world” scenario paging (sketched below) or an alternative approach should be used.
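
A minimal sketch of the paging alternative, using SQL Server’s OFFSET/FETCH; the pageNumber/pageSize query string parameters and the route are assumptions for illustration, and it reuses the connectionString field and QueryWithRetryAsync wrapper from the examples above.

[HttpGet("Paged")]
public async Task<ActionResult<IEnumerable<Model.StockItemListDtoV1>>> GetPaged(
	[FromQuery][Range(1, int.MaxValue)] int pageNumber = 1,
	[FromQuery][Range(1, 100)] int pageSize = 10)
{
	IEnumerable<Model.StockItemListDtoV1> response = null;

	using (SqlConnection db = new SqlConnection(this.connectionString))
	{
		// OFFSET/FETCH requires an ORDER BY so the pages are deterministic
		response = await db.QueryWithRetryAsync<Model.StockItemListDtoV1>(
			sql: @"SELECT [StockItemID] as ""ID"", [StockItemName] as ""Name"", [RecommendedRetailPrice], [TaxRate] " +
					"FROM [Warehouse].[StockItems] " +
					"ORDER BY [StockItemID] " +
					"OFFSET @Offset ROWS FETCH NEXT @PageSize ROWS ONLY",
			new { Offset = (pageNumber - 1) * pageSize, PageSize = pageSize },
			commandType: CommandType.Text);
	}

	return this.Ok(response);
}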

.NET Core web API + Dapper – Readonly query workloads with Data Sync

Azure SQL Data Synchronisation Process

Read-only replicas of an Azure SQL Database with Active geo-replication are easy to set up but there are some disadvantages: bi-directional synchronisation is not supported, some tables (or columns of some tables) might not be needed or should not be accessible for reporting, the overhead of replicating tables used for transaction processing might impact the performance of the solution, etc. Azure SQL Data Sync is a service built on Azure SQL Database that can synchronise selected data bi-directionally across multiple databases, both on-premises and in the cloud.

The first step was to remove all the Microsoft SQL Server features used in the World Wide Importers database (e.g. sequence numbers, columnstore indexes etc.) which are not supported (see the general limitations) by Azure SQL Data Sync. I then used the “Deploy Database Wizard” to copy my modified World Wide Importers database to an Azure SQL Database.

Deploy Database to Microsoft Azure SQL Database running
Microsoft Azure SQL Database Servers with WorldWideImporters and ReadonlyReplicaHub database
Data Sync Group creation on ReadOnlyReplicaHub database

For my “read-only replicas” scenario, if there are any update conflicts the source database “wins”.

Data Sync Group created
Data Sync Group configuration database management
Data Sync Group configuration adding “source” database synchronisation to Hub
Data Sync Group configuration adding “destination” database(s) synchronisation from Hub
Data Sync Group selecting source database StockItems table and columns to synchronise
Data Sync Group “destination” databases configured
Initial synchronisation of only StockItems table and seed data
Regular synchronisation of a subset of StockItems columns to destination databases
Source Azure SQL Database regular synchronisation compute utilisation
StockItemsReadOnlyReplicas Controller JSON after first replication completed
Azure Application Insights Dependencies showing usage of different synchronised databases
StockItems table in source database with updated RRP
StockItems table in destination database with updated RRP after next scheduled synchronisation
StockItems table in destination database after next scheduled synchronisation

Azure SQL Database Data Sync was pretty easy to set up (the configuration in the hub database tripped me up initially). For a production scenario where only a portion of the database needs to be synchronised (e.g. shaped by customer, geography, security considerations, or a bi-directional requirement) it would be an effective solution, though for some applications the delay between synchronisations might be an issue.