As part of some scale testing of my WebAPIDapper and WebMinimalAPIDapper projects I have been “cleaning up” a portion of the Inside Airbnb London dataset. To make the scale testing results more realistic I wanted at least one table with lots of rows.
CREATE TABLE [dbo].[CalendarRawDetailed](
[listing_id] [bigint] NOT NULL,
[Date] [date] NOT NULL,
[Xavailable] [bit] NULL,
[available] [nvarchar](5) NOT NULL,
[Xprice] [money] NULL,
[price] [nvarchar](30) NOT NULL,
[Xadjusted_price] [money] NULL,
[adjusted_price] [nvarchar](30) NOT NULL,
[Xminimum_nights] [smallint] NULL,
[minimum_nights] [nvarchar](30) NOT NULL,
[Xmaximum_nights] [smallint] NULL,
[maximum_nights] [nvarchar](30) NOT NULL
) ON [PRIMARY]
The CalendarRawDetailed table had some invalid values, most probably due to formatting inconsistencies on the Airbnb website.
SELECT COUNT(*) FROM CalendarRawDetailed WHERE Xminimum_nights IS NULL
SELECT * FROM CalendarRawDetailed WHERE Xminimum_nights IS NULL
SELECT COUNT(*) FROM CalendarRawDetailed WHERE Xmaximum_nights IS NULL
SELECT * FROM CalendarRawDetailed WHERE Xmaximum_nights IS NULL
SELECT COUNT(*) FROM CalendarRawDetailed WHERE Xadjusted_price IS NULL
SELECT * FROM CalendarRawDetailed WHERE Xadjusted_price IS NULL
SELECT COUNT(*) FROM CalendarRawDetailed WHERE Xprice IS NULL
SELECT * FROM CalendarRawDetailed WHERE Xprice IS NULL
Where possible I recovered the values with an “incorrect” format, but some rows had to be deleted.
UPDATE CalendarRawDetailed SET Xmaximum_nights = TRY_CONVERT(smallint, RTRIM(maximum_nights, '"')) WHERE Xmaximum_nights IS NULL
UPDATE CalendarRawDetailed SET Xminimum_nights = TRY_CONVERT(smallint, RTRIM(minimum_nights, '"')) WHERE Xminimum_nights IS NULL
UPDATE CalendarRawDetailed SET Xadjusted_price = TRY_CONVERT(money, LTRIM(adjusted_price, '$')) --WHERE Xadjusted_price IS NULL
SELECT *
FROM CalendarRawDetailed
WHERE Xadjusted_price IS NULL
DELETE FROM CalendarRawDetailed WHERE Xmaximum_nights IS NULL
UPDATE CalendarRawDetailed SET Xavailable = 1 WHERE available = 't'
The Calendar table has 365 rows for each listing, and I will update the Calendar dates so they are in the “future”.
CREATE TABLE [dbo].[Calendar](
[listing_id] [bigint] NOT NULL,
[date] [date] NOT NULL,
[available] [bit] NOT NULL,
[price] [money] NOT NULL,
[adjusted_price] [money] NOT NULL,
[minimum_nights] [smallint] NOT NULL,
[maximum_nights] [smallint] NOT NULL
) ON [PRIMARY]
The Calendar table has approximately 31 million rows which should be plenty for my scale testing.
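To move the Calendar dates into the “future” I shifted every row forward by the number of days between the scrape date and today. A minimal sketch of the sort of update with Dapper (the scrape date value is an illustrative assumption, not the actual one):
using System;
using System.Data.SqlClient;
using Dapper;
const string CalendarDateShiftSql = @"UPDATE Calendar
SET [date] = DATEADD(day, DATEDIFF(day, @ScrapeDate, CAST(GETDATE() AS date)), [date])";
using (var connection = new SqlConnection("This is not the connection string you are looking for"))
{
   // Shift every calendar date forward by the days elapsed since the scrape (@ScrapeDate value is illustrative)
   int rows = connection.Execute(CalendarDateShiftSql, new { ScrapeDate = new DateTime(2023, 9, 6) }, commandTimeout: 600);
   Console.WriteLine($"Rows updated:{rows}");
}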
CREATE PROCEDURE [dbo].[ListingsSpatialNearbyNTSLocation]
@Origin AS GEOGRAPHY,
@distance AS INTEGER
AS
BEGIN
DECLARE @Circle AS GEOGRAPHY = @Origin.STBuffer(@distance);
SELECT TOP(50) UID AS ListingUID
,[Name]
,listing_url as ListingUrl
,Listing.Location.STDistance(@Origin) as Distance
,Listing.Location
FROM Listing
WHERE (Listing.Location.STWithin(@Circle) = 1)
ORDER BY Distance
END
CREATE PROCEDURE [dbo].[ListingsSpatialNearbyNTSSerialize]
@Origin AS GEOGRAPHY,
@distance AS INTEGER
AS
BEGIN
DECLARE @Circle AS GEOGRAPHY = @Origin.STBuffer(@distance);
SELECT TOP(50) UID AS ListingUID
,[Name]
,listing_url as ListingUrl
,Listing.Location.STDistance(@Origin) as Distance
,Location.Serialize() as Location
FROM [listing]
WHERE (Location.STWithin(@Circle) = 1)
ORDER BY Distance
END
class PointHandlerSerialise : SqlMapper.TypeHandler<Point>
{
public override Point Parse(object value)
{
var reader = new SqlServerBytesReader { IsGeography = true };
return (Point)reader.Read((byte[])value);
}
public override void SetValue(IDbDataParameter parameter, Point? value)
{
((SqlParameter)parameter).SqlDbType = SqlDbType.Udt; // @Origin parameter?
((SqlParameter)parameter).UdtTypeName = "GEOGRAPHY";
var writer = new SqlServerBytesWriter { IsGeography = true };
parameter.Value = writer.Write(value);
}
}
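For Dapper to use the handler it has to be registered before the first spatial query runs. A minimal sketch of the registration and a call to the stored procedure (the DTO name and the surrounding variables are illustrative assumptions):
SqlMapper.AddTypeHandler(new PointHandlerSerialise());
...
using (var connection = dapperContext.ConnectionCreate())
{
   // Point takes x (longitude) then y (latitude) and the SRID must match the GEOGRAPHY column
   var origin = new Point(longitude, latitude) { SRID = 4326 };
   var listings = await connection.QueryWithRetryAsync<Model.ListingNearbyListGeographyDto>("ListingsSpatialNearbyNTSSerialize", new { origin, distance }, commandType: CommandType.StoredProcedure);
}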
Once the location column serialisation was working (I could see a valid response in the debugger) the generation of the response was failing with a “System.Text.Json.JsonException: A possible object cycle was detected. This can either be due to a cycle or if the object depth is larger than the maximum allowed depth of 64”.
After fixing that issue the response generation failed with “System.ArgumentException: .NET number values such as positive and negative infinity cannot be written as valid JSON.”
Fixing these two issues required adjusting two of the HttpJsonOptions serialiser settings.
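A minimal sketch of the two adjustments (assuming .NET 7 or later for ConfigureHttpJsonOptions):
using System.Text.Json.Serialization;
...
builder.Services.ConfigureHttpJsonOptions(options =>
{
   // Stop the serialiser throwing on the self-referencing NetTopologySuite geometry graph
   options.SerializerOptions.ReferenceHandler = ReferenceHandler.IgnoreCycles;
   // Write NaN/Infinity ordinate values (e.g. unset Z and M) as JSON string literals
   options.SerializerOptions.NumberHandling = JsonNumberHandling.AllowNamedFloatingPointLiterals;
});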
CREATE PROCEDURE [dbo].[ListingsSpatialNearbyNTSWkb]
@Origin AS GEOGRAPHY,
@distance AS INTEGER
AS
BEGIN
DECLARE @Circle AS GEOGRAPHY = @Origin.STBuffer(@distance);
SELECT TOP(50) UID AS ListingUID
,[Name]
,listing_url as ListingUrl
,Listing.Location.STDistance(@Origin) as Distance
,Location.STAsBinary() as Location
FROM [listing]
WHERE (Location.STWithin(@Circle) = 1)
ORDER BY Distance
END
CREATE PROCEDURE [dbo].[ListingsSpatialNearbyNTSWkt]
@Origin AS GEOGRAPHY,
@distance AS INTEGER
AS
BEGIN
DECLARE @Circle AS GEOGRAPHY = @Origin.STBuffer(@distance);
SELECT TOP(50) UID AS ListingUID
,[Name]
,listing_url as ListingUrl
,Listing.Location.STDistance(@Origin) as Distance
,Location.STAsText() as Location
FROM [listing]
WHERE (Location.STWithin(@Circle) = 1)
ORDER BY Distance
END
class PointHandlerWkt : SqlMapper.TypeHandler<Point>
{
public override Point Parse(object value)
{
WKTReader wktReader = new WKTReader();
return (Point)wktReader.Read(value.ToString());
}
public override void SetValue(IDbDataParameter parameter, Point? value)
{
((SqlParameter)parameter).SqlDbType = SqlDbType.Udt; // @Origin parameter?
((SqlParameter)parameter).UdtTypeName = "GEOGRAPHY";
parameter.Value = new SqlServerBytesWriter() { IsGeography = true }.Write(value);
}
}
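The ListingsSpatialNearbyNTSWkb stored procedure returns the Location as well-known binary, which needs its own handler. A sketch of how that might look with the NetTopologySuite WKBReader (my assumption of the shape, mirroring the WKT handler above):
class PointHandlerWkb : SqlMapper.TypeHandler<Point>
{
   public override Point Parse(object value)
   {
      // STAsBinary() returns well-known binary which WKBReader understands
      var wkbReader = new WKBReader();
      return (Point)wkbReader.Read((byte[])value);
   }
   public override void SetValue(IDbDataParameter parameter, Point? value)
   {
      ((SqlParameter)parameter).SqlDbType = SqlDbType.Udt;
      ((SqlParameter)parameter).UdtTypeName = "GEOGRAPHY";
      parameter.Value = new SqlServerBytesWriter() { IsGeography = true }.Write(value);
   }
}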
I have focused on getting the spatial queries to work and will stress/performance test my implementations in a future post. I will also revisit the /Spatial/NearbyGeography method to see if I can get it to work without “Location.Serialize() as Location”.
CREATE TABLE [dbo].[Neighbourhood](
[id] [bigint] IDENTITY(1,1) NOT NULL,
[NeighbourhoodUID] [uniqueidentifier] NOT NULL,
[name] [nvarchar](50) NOT NULL,
[neighbourhood_url] [nvarchar](100) NOT NULL,
[boundary] [geography] NOT NULL,
CONSTRAINT [PK_Neighbourhood] PRIMARY KEY CLUSTERED
(
[id] ASC
)
) ON [PRIMARY]
-- Then create a spatial index on GEOGRAPHY which contains the boundary polygon(s)
CREATE SPATIAL INDEX [ISX_NeighbourhoodBoundary] ON [dbo].[Neighbourhood]
(
[boundary]
)
I added a GEOGRAPHY column to the Listing table, populated it using the Latitude and Longitudes of the Listings then added a spatial index.
-- Use latitude and longitude to populate Location GEOGRAPHY column
UPDATE listing
SET Listing.Location = geography::Point(latitude, longitude, 4326)
-- Then index Location column after changing to NOT NULL
CREATE SPATIAL INDEX [IXS_ListingByLocation] ON [dbo].[listing]
(
[Location]
)
The first spatial search uses the latitude and longitude (most probably extracted from image metadata) to get a Listing’s neighbourhood.
It uses the STContains method to find the neighbourhood polygon (if there is one) which the listing location is inside.
const string ListingInNeighbourhoodSQL = @"SELECT neighbourhoodUID, name, neighbourhood_url as neighbourhoodUrl FROM Neighbourhood WHERE Neighbourhood.Boundary.STContains(geography::Point(@Latitude, @Longitude, 4326)) = 1";
...
app.MapGet("/Spatial/Neighbourhood", async (double latitude, double longitude, [FromServices] IDapperContext dapperContext) =>
{
Model.NeighbourhoodSearchDto neighbourhood;
using (var connection = dapperContext.ConnectionCreate())
{
neighbourhood = await connection.QuerySingleOrDefaultWithRetryAsync<Model.NeighbourhoodSearchDto>(ListingInNeighbourhoodSQL, new { latitude, longitude });
}
if (neighbourhood is null)
{
return Results.Problem($"Neighbourhood for Latitude:{latitude} Longitude:{longitude} not found", statusCode: StatusCodes.Status404NotFound);
}
return Results.Ok(neighbourhood);
})
.Produces<Model.NeighbourhoodSearchDto>(StatusCodes.Status200OK)
.Produces(StatusCodes.Status404NotFound)
.WithOpenApi();
In a future blog post I will compare the performance of STContains vs. STWithin with a load testing application.
The second search simulates a customer looking for listing(s) within a specified distance of a point of interest.
const string ListingsNearbySQL = @"DECLARE @Origin AS GEOGRAPHY = geography::Point(@Latitude, @Longitude, 4326);
DECLARE @Circle AS GEOGRAPHY = @Origin.STBuffer(@distance);
--DECLARE @Circle AS GEOGRAPHY = @Origin.BufferWithTolerance(@distance, 0.09,true);
SELECT uid as ListingUID, Name, listing_url as ListingUrl,
@Origin.STDistance(Listing.Location) as Distance
FROM [listing]
WHERE Listing.Location.STWithin(@Circle) = 1 ORDER BY Distance";
...
app.MapGet("/Spatial/NearbyText", async (double latitude, double longitude, double distance, [FromServices] IDapperContext dapperContext) =>
{
using (var connection = dapperContext.ConnectionCreate())
{
return await connection.QueryWithRetryAsync<Model.ListingNearbyListDto>(ListingsNearbySQL, new { latitude, longitude, distance });
}
})
.Produces<IList<Model.ListingNearbyListDto>>(StatusCodes.Status200OK)
.WithOpenApi();
The STBuffer method returns a geography object that represents a circle centered on @Origin with a radius of @distance metres. (The commented-out BufferWithTolerance alternative trades a little accuracy for speed.)
The third and final search simulates a customer looking for listing(s) within a specified distance of a point of interest with the latitude and longitude of the listing included in the results.
const string ListingsNearbyLatitudeLongitudeSQL = @"DECLARE @Location AS GEOGRAPHY = geography::Point(@Latitude, @Longitude, 4326);
DECLARE @Circle AS GEOGRAPHY = @Location.STBuffer(@distance);
SELECT UID as ListingUID
,[Name]
,listing_url as ListingUrl
,Listing.Location.STDistance(@Location) as Distance
,latitude
,longitude
FROM [listing]
WHERE Listing.Location.STWithin(@Circle) = 1
ORDER BY Distance";
app.MapGet("/Spatial/NearbyLatitudeLongitude", async (double latitude, double longitude, double distance, [FromServices] IDapperContext dapperContext) =>
{
using (var connection = dapperContext.ConnectionCreate())
{
return await connection.QueryWithRetryAsync<Model.ListingNearbyListLatitudeLongitudeDto>(ListingsNearbyLatitudeLongitudeSQL, new { latitude, longitude, distance });
}
})
.Produces<IList<Model.ListingNearbyListLatitudeLongitudeDto>>(StatusCodes.Status200OK)
.WithOpenApi();
The next couple of posts will use the third-party libraries Geo and NetTopologySuite.
using System.Text.Json;
using System.Data;
using System.Data.SqlClient;
using Microsoft.SqlServer.Types;
using Dapper;
string jsonString = File.ReadAllText("your geoJSON file");
using (IDbConnection connection = new SqlConnection("This is not the connection string you are looking for"))
{
connection.Open();
var neighbourHoods = JsonSerializer.Deserialize<GeoJSON.Text.Feature.FeatureCollection>(jsonString)!;
Console.WriteLine($"Features:{neighbourHoods.Features.Count}");
foreach (var feature in neighbourHoods.Features)
{
string neighbourhood = feature.Properties["neighbourhood"].ToString();
Console.WriteLine($"Neightbourhood:{neighbourhood}");
var geometery = (GeoJSON.Text.Geometry.MultiPolygon)feature.Geometry;
var s = new SqlGeographyBuilder();
s.SetSrid(4326);
s.BeginGeography(OpenGisGeographyType.MultiPolygon);
s.BeginGeography(OpenGisGeographyType.Polygon); // A
Console.WriteLine($"Polygon cordinates:{geometery.Coordinates.Count}");
foreach (var coordinates in geometery.Coordinates)
{
//s.BeginGeography(OpenGisGeographyType.Polygon); // B
Console.WriteLine($"Linestring cordinates:{coordinates.Coordinates.Count}");
foreach (var c in coordinates.Coordinates)
{
Console.WriteLine($"Point cordinates:{c.Coordinates.Count}");
s.BeginFigure(c.Coordinates[0].Latitude, c.Coordinates[0].Longitude, null, null);
for (int i = 1; i < c.Coordinates.Count; i++)
{
s.AddLine(c.Coordinates[i].Latitude, c.Coordinates[i].Longitude);
Console.Write('.');
}
Console.WriteLine();
s.EndFigure();
}
//s.EndGeography(); //B
}
s.EndGeography(); //A
s.EndGeography(); // OpenGisGeographyType.MultiPolygon
connection.Execute("INSERT INTO Neighbourhood (Name, Boundary) VALUES( @Neighbourhood, geography::STMPolyFromText(@boundary, 4326))", new { neighbourhood, boundary = s.ConstructedGeography.ToString()});
Console.WriteLine();
}
}
Console.WriteLine("loaded press <enter> to exit");
Console.ReadLine();
The neighbourhood feature loader utility is pretty “nasty” and was built for my specific scenario.
CREATE TABLE [dbo].[Neighbourhood](
[NeighbourhoodUID] [uniqueidentifier] NOT NULL,
[Name] [nvarchar](50) NOT NULL,
[Boundary] [geography] NOT NULL,
CONSTRAINT [PK_Neighbourhood] PRIMARY KEY CLUSTERED
(
[NeighbourhoodUID] ASC
)
) ON [PRIMARY]
I used Dapper to insert rows into the Neighbourhood table.
DECLARE @PointGeography AS GEOGRAPHY
SET @PointGeography = geography::Point(51.512837,-0.2894983, 4326) --Flat
--SET @PointGeography = geography::Point(51.5053469,-0.0262693,4326) -- Canary Wharf
--SET @PointGeography = geography::Point(51.476853,0.0,4326) -- Greenwich
--SET @PointGeography = geography::Point(51.501476,-0.140634, 4326) -- Buckingham palace
--SET @PointGeography = geography::Point(51.533611, -0.318889, 4326) -- Hoover factory
--SET @PointGeography = geography::Point(51.5045, -0.0865, 4326) -- Shard
--SET @PointGeography = geography::Point(51.5145683288574,-0.0194199997931719, 4326) -- D0FD60C0-CC45-4517-91F6-00161E710F28 Tower Hamlets
--SET @PointGeography = geography::Point(51.5553092956543,0.00039999998989515, 4326) -- 80264AED-BC74-4150-B393-02D42711E2E6 Waltham Forest
--SET @PointGeography = geography::Point(51.4925193786621,-0.192310005426407, 4326) -- D36E4D1C-4A35-4B6E-B9A7-01E8D732FD3B Kensington and Chelsea
SET @PointGeography = geography::Point(51.5185317993164,-0.199739992618561, 4326) -- 2CAE3CAE-5E43-4F20-9550-01B86D7EF6FF Westminster
SELECT Name, @PointGeography.STWithin(Neighbourhood.Boundary) as 'STWithin', Neighbourhood.Boundary.STContains(@PointGeography) as 'STContains'
FROM Neighbourhood
ORDER BY Name
To test the neighbourhood geography I built a test harness with some “known” locations.
The query results were the inverse of what I was expecting.
I was interested in the five hosts with the most listings.
Host_id: 129230780
Name: STK Homes
About: Hello from Staykeepers Homes! We are a group of people who love traveling and sharing their experiences with others. But more so, we like to think of ourselves as people who constantly search to challenge and improve themselves.
URL: https://www.airbnb.com/users/show/129230780
Host_id: 33889201
Name: Veeve
About: Greetings from the Veeve team! Veeve manage the largest collection of homes to rent for short stays in London since 2011, launching in Paris in 2017. Our rental homes are predominantly privately-owned, lived-in homes in vibrant neighbourhoods. Thanks to our fully managed service, when you stay in a Veeve property you can benefit from the spaciousness, personal style and relaxed feel of a home – but with all the amenities of a hotel. Once a booking is confirmed our Customer Service team will become your main point of contact and get in touch with you to organise your arrival at the property (including airport transfers, upon request); after which our on-the-ground management team will look after all aspects of your stay, with in-stay support available to help with any queries. We have strict cleanliness guidelines for our cleaners and all staff who enter our properties and have reinforced existing cleaning procedures in line with official WHO, British and French government guidance. As always, our team ensure that all properties are cleaned to a professional standard before and after each stay, with increased focus on touch-point cleaning (door handles, kitchen surfaces, kettles, taps etc.). All staff who enter properties will follow the below procedures: - Carry and regularly use alcohol-based hand rub - Sanitise their mobile phones - Focus extra attention when cleaning key touch-point areas (door handles, kitchen surfaces, kettles, taps etc.) - Wear protective gear where appropriate - Wash and disinfect work shoes We might be able to arrange for property viewings, upon request, for stays of over 28 nights. Please be aware once a booking is confirmed you will be required to complete a Rental Agreement and also ID verification via our partners Superhog. This is mandatory for all our stays and in accordance to short lets requirements with Veeve. If you have any questions, please do not hesitate to contact us and we will be sure to answer you promptly.
URL: https://www.airbnb.com/users/show/33889201
Host_id: 314162972
Name: Blueground
About: We’re Blueground, a global proptech company with several thousand move-in-ready apartments in a growing number of major cities around the world. With flexible terms and homes in vibrant, centrally based neighborhoods, you’ll feel at home and free to roam for as long as you want — a month, a year, or longer. Each apartment is thoughtfully designed with exclusive furnishings, fully equipped kitchens, and incredible amenities – making every day a five-star experience. From day one, you’ll enjoy high-speed Wi-Fi, premium linens, and smart home entertainment. Plus, access to pools, gyms, and outdoor spaces in select buildings. Why stress over your apartment? We provide a hassle-free alternative — a consistent, quality guest experience that starts even before you arrive. Because we let you book our most up-to-date apartment listings online, confirm with a click, pay securely, and check in easily. During your stay Upon arrival, you’ll either be greeted personally by a Blueground team member or given self-check-in instructions. The entire apartment is yours! You’ll enjoy reliable support via email, phone, and our Guest App, where you can request everything from a home cleaning to extra towels. We’ll share all details upon confirmation of your stay.
URL: https://www.airbnb.com/users/show/314162972
Host_id: 28820321
Name: Veronica
About: I moved to London to follow my love who is a Londoner, I have a 4 years old daughter and two cats. we live in a beautiful house surrounded by parks and trees in Muswell Hill. The Airbnb experience has been very positive and from that experience I set up a company called HelloGuest to provide excellent hassle-free Airbnb management services for short stay rental properties in London and across all the UK advertised on Airbnb. Our services cover everything from full property management (guest communication, check-in & out, cleaning & laundry, manage the listing, 24/7 support service, etc) to partly managed or individual services depending on your needs. Our aim is to provide professional services for high quality properties to deliver an excellent guest experience. We are here to help, so please get in touch with any questions you have.
URL: https://www.airbnb.com/users/show/28820321
Host_id: 1432477
Name: City Relay
About: City Relay is a property management company that looks after a bright and diverse portfolio of properties in London! We are committed to offering the best stay to each and every one of our guests. We provide some unique services that most of the other hosts cannot provide. All our properties are selected carefully in order to offer you the best stay! From the moment you book to the moment you arrive at the apartment, we are helping you through all the processes so that your stay is hassle-free! In order to address health and safety concerns, all our properties receive a deep professional cleaning and disinfection after each stay.
URL: https://www.airbnb.com/users/show/1432477
After looking at the listings table this was my first attempt at a Host table.
CREATE TABLE [dbo].[Host](
[Id] [bigint] NOT NULL,
[Url] [nvarchar](150) NOT NULL,
[Name] [nvarchar](50) NOT NULL,
[Since] [date] NOT NULL,
[About] [nvarchar](max) NULL,
[Response_time] [nvarchar](50) NOT NULL,
[Response_rate] [nvarchar](50) NOT NULL,
[Acceptance_rate] [nvarchar](50) NOT NULL,
[Thumbnail_url] [nvarchar](150) NOT NULL,
[Picture_url] [nvarchar](150) NOT NULL,
[Identity_verified] [bit] NOT NULL,
[Identity_verifications] [nvarchar](50) NOT NULL,
CONSTRAINT [PK_Host] PRIMARY KEY CLUSTERED
(
[Id] ASC
)
) ON [PRIMARY]
Which I populated with the following SELECT INTO statement.
SELECT DISTINCT [Host_id] as Id
,host_url as url
,[Host_Name] as Name
,host_since as Since
---,host_location as
,host_about as About
,host_response_time as Response_time
,host_response_rate as Response_rate
,host_acceptance_rate as Acceptance_rate
--,host_is_superhost
,host_thumbnail_url as thumbnail_url
,host_picture_url as picture_url
--,host_neighbourhood
--,host_listings_count
--,host_total_listings_count
,host_verifications as Identity_verifications
--,host_has_profile_pic
,host_identity_verified as Identity_verified
INTO Host
FROM listingsRaw
After extracting the host information, both tables needed some more “curation”, so I wrote some queries (like the following) to find incomplete records.
SELECT id, Url from Host WHERE Url is null
SELECT id, Name from Host WHERE Name is null
SELECT id, Since from Host WHERE Since is null
SELECT id, response_time from Host WHERE response_time is null
SELECT id, response_rate from Host WHERE response_rate is null
SELECT id, Acceptance_rate from Host WHERE Acceptance_rate is null
SELECT id, Thumbnail_url from Host WHERE Thumbnail_url is null
SELECT id, Picture_url from Host WHERE Picture_url is null
SELECT id, Identity_verified from Host WHERE Identity_verified is null
SELECT id, Identity_verifications from Host WHERE Identity_verifications is null
SELECT id, About from Host WHERE About is null
At the end of my data cleansing the Listing table has 87940 rows and the Host table 53388 rows.
The Inside Airbnb London dataset has 87946 listings and the id column (which is the primary key) has a minimum value of 13913 and a maximum of 973895808066047620 in the database.
I used “Ealing” as the SearchText for my initial testing and tried different page numbers and sizes.
The listings search results JSON looked good but I missed one important detail…
string LookupByIdSql = @"SELECT Id, [Name], Listing_URL AS ListingURL
FROM ListingsHosts
WHERE id = @Id";
public record ListingLookupDto
{
public long Id { get; set; }
public string? Name { get; set; }
public string? ListingURL { get; set; }
};
//...
app.MapGet("/Listing/Results/{id:long}", async (long id, IDapperContext dappperContext) =>
{
using (var connection = dappperContext.ConnectionCreate())
{
ListingLookupDto result = await connection.QuerySingleOrDefaultWithRetryAsync<ListingLookupDto>(LookupByIdSql, new { id });
if (result is null)
{
return Results.Problem($"Listing {id} not found", statusCode: StatusCodes.Status404NotFound);
}
return Results.Ok(result);
}
})
.Produces<ListingLookupDto>(StatusCodes.Status200OK)
.Produces<ProblemDetails>(StatusCodes.Status404NotFound)
.WithOpenApi();
The id values in the search response and lookup DTOs were correct.
I had missed the clue in the search response JSON: the listing id and the id embedded in the ListingURL didn’t match.
This specification allows implementations to set limits on the range
and precision of numbers accepted. Since software that implements
IEEE 754-2008 binary64 (double precision) numbers [IEEE754] is
generally available and widely used, good interoperability can be
achieved by implementations that expect no more precision or range
than these provide, in the sense that implementations will
approximate JSON numbers within the expected precision.
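The largest listing ids are well beyond 2^53, the largest integer a binary64 double can hold exactly, so any JSON consumer that parses numbers as doubles silently rounds them. A quick sketch of the round-trip loss, using the maximum id from the dataset:
long listingId = 973895808066047620;
double asDouble = listingId; // what a binary64-based JSON consumer stores
Console.WriteLine(listingId);      // 973895808066047620
Console.WriteLine((long)asDouble); // 973895808066047616 - a different id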
My initial ASP.NET Core Minimal API exploration uses the Inside Airbnb London dataset which has 87946 listings. The data is pretty “nasty” with lots of nullable and wide columns so it took several attempts to import.
CREATE TABLE [dbo].[listingsRaw](
[id] [bigint] NOT NULL,
[listing_url] [nvarchar](50) NOT NULL,
[scrape_id] [datetime2](7) NOT NULL,
[last_scraped] [date] NOT NULL,
[source] [nvarchar](50) NOT NULL,
[name] [nvarchar](max) NOT NULL,
[description] [nvarchar](max) NULL,
[neighborhood_overview] [nvarchar](1050) NULL,
[picture_url] [nvarchar](150) NULL,
[host_id] [int] NOT NULL,
[host_url] [nvarchar](50) NOT NULL,
[host_name] [nvarchar](50) NULL,
[host_since] [date] NULL,
[host_location] [nvarchar](100) NULL,
[host_about] [nvarchar](max) NULL,
[host_response_time] [nvarchar](50) NULL,
[host_response_rate] [nvarchar](50) NULL,
[host_acceptance_rate] [nvarchar](50) NULL,
[host_is_superhost] [bit] NULL,
[host_thumbnail_url] [nvarchar](150) NULL,
[host_picture_url] [nvarchar](150) NULL,
[host_neighbourhood] [nvarchar](50) NULL,
[host_listings_count] [int] NULL,
[host_total_listings_count] [int] NULL,
[host_verifications] [nvarchar](50) NOT NULL,
[host_has_profile_pic] [bit] NULL,
[host_identity_verified] [bit] NULL,
[neighbourhood] [nvarchar](100) NULL,
[neighbourhood_cleansed] [nvarchar](50) NOT NULL,
[neighbourhood_group_cleansed] [nvarchar](1) NULL,
[latitude] [float] NOT NULL,
[longitude] [float] NOT NULL,
[property_type] [nvarchar](50) NOT NULL,
[room_type] [nvarchar](50) NOT NULL,
[accommodates] [tinyint] NOT NULL,
[bathrooms] [nvarchar](1) NULL,
[bathrooms_text] [nvarchar](50) NULL,
[bedrooms] [tinyint] NULL,
[beds] [tinyint] NULL,
[amenities] [nvarchar](max) NOT NULL,
[price] [money] NOT NULL,
[minimum_nights] [smallint] NOT NULL,
[maximum_nights] [int] NOT NULL,
[minimum_minimum_nights] [smallint] NULL,
[maximum_minimum_nights] [int] NULL,
[minimum_maximum_nights] [int] NULL,
[maximum_maximum_nights] [int] NULL,
[minimum_nights_avg_ntm] [float] NULL,
[maximum_nights_avg_ntm] [float] NULL,
[calendar_updated] [nvarchar](1) NULL,
[has_availability] [bit] NOT NULL,
[availability_30] [tinyint] NOT NULL,
[availability_60] [tinyint] NOT NULL,
[availability_90] [tinyint] NOT NULL,
[availability_365] [smallint] NOT NULL,
[calendar_last_scraped] [date] NOT NULL,
[number_of_reviews] [smallint] NOT NULL,
[number_of_reviews_ltm] [int] NOT NULL,
[number_of_reviews_l30d] [tinyint] NOT NULL,
[first_review] [date] NULL,
[last_review] [date] NULL,
[review_scores_rating] [float] NULL,
[review_scores_accuracy] [float] NULL,
[review_scores_cleanliness] [float] NULL,
[review_scores_checkin] [float] NULL,
[review_scores_communication] [float] NULL,
[review_scores_location] [float] NULL,
[review_scores_value] [float] NULL,
[license] [nvarchar](max) NULL,
[instant_bookable] [bit] NOT NULL,
[calculated_host_listings_count] [int] NULL,
[calculated_host_listings_count_entire_homes] [int] NOT NULL,
[calculated_host_listings_count_private_rooms] [int] NOT NULL,
[calculated_host_listings_count_shared_rooms] [int] NOT NULL,
[reviews_per_month] [float] NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
There are other data quality issues, e.g. the host information (host_id, host_name, host_since, host_* etc.) is duplicated in each of a host’s Listings, which will need to be tidied up.
I have implemented basic (“incomplete”) OpenAPI support for functionality and stress testing.
The search results are paginated and individual listings can be retrieved using the Airbnb listing “id”.
const string SearchPaginatedSql = @"SELECT Uid, Id, [Name], neighbourhood
FROM listings
WHERE [Name] LIKE N'%' + @SearchText + N'%'
ORDER BY [Name]
OFFSET @PageSize * (@PageNumber - 1) ROWS FETCH NEXT @PageSize ROWS ONLY";
public record ListingListDto
{
public long Id { get; set; }
public string? Name { get; set; }
public string? Neighbourhood { get; set; }
};
The first HTTP GET implementation returns an untyped result-set which was not very helpful.
app.MapGet("/Listing/Search", async (string searchText, int pageNumber, int pageSize, [FromServices] IDapperContext dappperContext) =>
{
using (var connection = dappperContext.ConnectionCreate())
{
return await connection.QueryWithRetryAsync(SearchPaginatedSql, new { searchText, pageNumber, pageSize });
}
})
.WithOpenApi();
The second HTTP GET implementation returns a typed result-set which improved the “usability” of clients generated from the OpenAPI definition file.
app.MapGet("/Listing/Search/Typed", async (string searchText, int pageNumber, int pageSize, [FromServices] IDapperContext dappperContext) =>
{
using (var connection = dappperContext.ConnectionCreate())
{
return await connection.QueryWithRetryAsync<ListingListDto>(SearchPaginatedSql, new { searchText, pageNumber, pageSize });
}
})
.Produces<IList<ListingListDto>>(StatusCodes.Status200OK)
.WithOpenApi();
string LookupByIdSql = @"SELECT Id,[Name], neighbourhood
FROM ListingsHosts
WHERE id = @Id";
public record ListingLookupDto
{
public long Id { get; set; }
public string? Name { get; set; }
public string? Neighbourhood { get; set; }
};
app.MapGet("/Listing/{id:long}", async (long id, IDapperContext dappperContext) =>
{
using (var connection = dappperContext.ConnectionCreate())
{
ListingLookupDto result = await connection.QuerySingleOrDefaultWithRetryAsync<ListingLookupDto>(LookupByIdSql, new { id });
if (result is null)
{
return Results.Problem($"Listing {id} not found", statusCode: StatusCodes.Status404NotFound);
}
return Results.Ok(result);
}
})
.Produces<ListingLookupDto>(StatusCodes.Status200OK)
.Produces<ProblemDetails>(StatusCodes.Status404NotFound)
.WithOpenApi();
The lack of validation of the SearchText, PageSize and PageNumber parameters allowed users to enter invalid values which caused searches to fail.
My initial approach was to decorate the parameters of the ValidatedQuery method with DataAnnotations to ensure only valid values were accepted.
This wasn’t a great solution because the validation of the parameters was declared as part of the user interface and would have to be repeated everywhere listing search functionality was provided.
app.MapGet("/Listing/Search/Parameters", async ([AsParameters] SearchParameters searchParameters,
[FromServices] IDapperContext dappperContext) =>
{
using (var connection = dappperContext.ConnectionCreate())
{
return await connection.QueryWithRetryAsync<ListingListDto>(SearchPaginatedSql, new { searchText = searchParameters.SearchText, searchParameters.PageNumber, searchParameters.PageSize });
}
})
.Produces<IList<ListingListDto>>(StatusCodes.Status200OK)
.Produces<ProblemDetails>(StatusCodes.Status400BadRequest)
.WithOpenApi();
public record SearchParameters
{
// https://github.com/domaindrivendev/Swashbuckle.AspNetCore/issues/2658 possibly related?
public const byte SearchTextMinimumLength = 3;
public const byte SearchTextMaximumLength = 15;
public const int PageNumberMinimum = 1;
public const int PageNumberMaximum = 100;
public const byte PageSizeMinimum = 5;
public const byte PageSizeMaximum = 50;
//[FromQuery, Required, MinLength(SearchTextMinimumLength, ErrorMessage = "SearchTextMinimumLength"), MaxLength(SearchTextMaximumLength, ErrorMessage = "SearchTextMaximumLength")]
//[Required, MinLength(SearchTextMinimumLength, ErrorMessage = "SearchTextMinimumLength"), MaxLength(SearchTextMaximumLength, ErrorMessage = "SearchTextMaximumLength")]
[MinLength(SearchTextMinimumLength, ErrorMessage = "SearchTextMinimumLength"), MaxLength(SearchTextMaximumLength, ErrorMessage = "SearchTextMaximumLength")]
public string SearchText { get; set; }
//[FromQuery, Range(PageNumberMinimum, PageNumberMaximum, ErrorMessage = "PageNumberMinimum PageNumberMaximum")]
//[Required, Range(PageNumberMinimum, PageNumberMaximum, ErrorMessage = "PageNumberMinimum PageNumberMaximum")]
[Range(PageNumberMinimum, PageNumberMaximum, ErrorMessage = "PageNumberMinimum PageNumberMaximum")]
public int PageNumber { get; set; }
[Range(PageSizeMinimum, PageSizeMaximum, ErrorMessage = "PageSizeMinimum PageSizeMaximum")]
public int PageSize { get; set; }
}
These last two implementations worked, though the error messages I had embedded in the code were not displayed. I think this is related to this Swashbuckle issue.
There is also an issue looking up some listings with larger listing ids which will need some investigation.
For the next series of random posts about ASP.NET Core Minimal APIs I wanted a much larger dataset for testing. When looking for datasets for another project I stumbled across Inside Airbnb which has a selection of datasets scraped from the Airbnb website. I started with the Melbourne Victoria Australia dataset which had 24652 listings.
I had to manually update some of the “suggested” column types which “broke” the import flat file task.
The tinyint or smallint columns which caused conversion errors were changed to integer columns, and any text column which caused a truncation error was changed to an NVARCHAR(MAX) column.
The Melbourne Victoria Australia dataset turned out to be pretty “nasty” with lots of nullable columns and columns that looked like several values had been concatenated.
CREATE TABLE [dbo].[listings](
[id] [bigint] NOT NULL,
[listing_url] [nvarchar](50) NOT NULL,
[scrape_id] [datetime2](7) NOT NULL,
[last_scraped] [date] NOT NULL,
[source] [nvarchar](50) NOT NULL,
[name] [nvarchar](100) NOT NULL,
[description] [nvarchar](1050) NULL,
[neighborhood_overview] [nvarchar](1050) NULL,
[picture_url] [nvarchar](150) NOT NULL,
[host_id] [int] NOT NULL,
[host_url] [nvarchar](50) NOT NULL,
[host_name] [nvarchar](50) NULL,
[host_since] [date] NULL,
[host_location] [nvarchar](50) NULL,
[host_about] [nvarchar](max) NULL,
[host_response_time] [nvarchar](50) NULL,
[host_response_rate] [nvarchar](50) NULL,
[host_acceptance_rate] [nvarchar](50) NULL,
[host_is_superhost] [bit] NULL,
[host_thumbnail_url] [nvarchar](150) NULL,
[host_picture_url] [nvarchar](150) NULL,
[host_neighbourhood] [nvarchar](50) NULL,
[host_listings_count] [int] NULL,
[host_total_listings_count] [int] NULL,
[host_verifications] [nvarchar](50) NOT NULL,
[host_has_profile_pic] [nvarchar](50) NULL,
[host_identity_verified] [bit] NULL,
[neighbourhood] [nvarchar](max) NULL,
[neighbourhood_cleansed] [nvarchar](50) NOT NULL,
[neighbourhood_group_cleansed] [nvarchar](1) NULL,
[latitude] [float] NOT NULL,
[longitude] [float] NOT NULL,
[property_type] [nvarchar](50) NOT NULL,
[room_type] [nvarchar](50) NOT NULL,
[accommodates] [tinyint] NOT NULL,
[bathrooms] [nvarchar](1) NULL,
[bathrooms_text] [nvarchar](50) NULL,
[bedrooms] [tinyint] NULL,
[beds] [tinyint] NULL,
[amenities] [nvarchar](max) NOT NULL,
[price] [money] NOT NULL,
[minimum_nights] [smallint] NOT NULL,
[maximum_nights] [int] NOT NULL,
[minimum_minimum_nights] [smallint] NOT NULL,
[maximum_minimum_nights] [int] NOT NULL,
[minimum_maximum_nights] [int] NOT NULL,
[maximum_maximum_nights] [int] NOT NULL,
[minimum_nights_avg_ntm] [float] NOT NULL,
[maximum_nights_avg_ntm] [float] NOT NULL,
[calendar_updated] [nvarchar](1) NULL,
[has_availability] [bit] NOT NULL,
[availability_30] [tinyint] NOT NULL,
[availability_60] [tinyint] NOT NULL,
[availability_90] [tinyint] NOT NULL,
[availability_365] [smallint] NOT NULL,
[calendar_last_scraped] [date] NOT NULL,
[number_of_reviews] [int] NOT NULL,
[number_of_reviews_ltm] [int] NOT NULL,
[number_of_reviews_l30d] [tinyint] NOT NULL,
[first_review] [date] NULL,
[last_review] [date] NULL,
[review_scores_rating] [float] NULL,
[review_scores_accuracy] [float] NULL,
[review_scores_cleanliness] [float] NULL,
[review_scores_checkin] [float] NULL,
[review_scores_communication] [float] NULL,
[review_scores_location] [float] NULL,
[review_scores_value] [float] NULL,
[license] [nvarchar](1) NULL,
[instant_bookable] [bit] NOT NULL,
[calculated_host_listings_count] [int] NOT NULL,
[calculated_host_listings_count_entire_homes] [tinyint] NOT NULL,
[calculated_host_listings_count_private_rooms] [tinyint] NOT NULL,
[calculated_host_listings_count_shared_rooms] [bit] NULL,
[reviews_per_month] [float] NULL,
CONSTRAINT [PK_listings] PRIMARY KEY CLUSTERED
(
[id] ASC
)
) ON [PRIMARY]
On the Kaggle website there was a study of an Airbnb dataset from Singapore which looked better but was significantly smaller, with only 3483 listings. After some “exploration” of the available datasets on Inside Airbnb the London one looked pretty good.
The “suggested” table structure for the London dataset looks a lot better so I will use it for my ASP.NET Core Minimal APIs posts.
CREATE TABLE [dbo].[listings](
[id] [bigint] NOT NULL,
[name] [nvarchar](max) NOT NULL,
[host_id] [int] NOT NULL,
[host_name] [nvarchar](50) NULL,
[neighbourhood_group] [nvarchar](1) NULL,
[neighbourhood] [nvarchar](50) NOT NULL,
[latitude] [float] NOT NULL,
[longitude] [float] NOT NULL,
[room_type] [nvarchar](50) NOT NULL,
[price] [money] NOT NULL,
[minimum_nights] [smallint] NOT NULL,
[number_of_reviews] [int] NOT NULL,
[last_review] [date] NULL,
[reviews_per_month] [float] NULL,
[calculated_host_listings_count] [int] NOT NULL,
[availability_365] [smallint] NOT NULL,
[number_of_reviews_ltm] [int] NOT NULL,
[license] [nvarchar](50) NULL,
CONSTRAINT [PK_listings] PRIMARY KEY CLUSTERED
(
[id] ASC
)
) ON [PRIMARY]
Some of the columns will need to be “unpacked” e.g. “Rental unit in Islington · ★4.80 · 1 bedroom · 1 bed · 1 shared bath” but this shouldn’t be an issue.
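The separator looks consistent, so the unpacking could be a simple string split. A minimal sketch, assuming “ · ” is used throughout the dataset:
// Unpack the concatenated name column (assumes the " · " separator is consistent)
string name = "Rental unit in Islington · ★4.80 · 1 bedroom · 1 bed · 1 shared bath";
string[] parts = name.Split(" · ");
Console.WriteLine(parts[0]); // Rental unit in Islington
Console.WriteLine(parts[1]); // ★4.80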
public static void Main(string[] args)
{
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddApplicationInsightsTelemetry();
builder.Services.AddTransient<IDapperContext>(s => new DapperContext(builder.Configuration));
builder.Services.AddControllers();
builder.Services.AddSingleton<IConnectionMultiplexer>(s => ConnectionMultiplexer.Connect(builder.Configuration.GetConnectionString("Redis")));
var app = builder.Build();
// Configure the HTTP request pipeline.
app.UseHttpsRedirection();
app.MapControllers();
app.Run();
}
I trialed the initial versions of my Redis project with Memurai on my development machine, then configured an Azure Cache for Redis. I then load tested the project with several Azure App Service clients and there was a significant improvement in response time.
[ApiController]
[Route("api/[controller]")]
public class StockItemsController : ControllerBase
{
private const int StockItemSearchMaximumRowsToReturn = 15;
private readonly TimeSpan StockItemListExpiration = new TimeSpan(0, 5, 0);
private const string sqlCommandText = @"SELECT [StockItemID] as ""ID"", [StockItemName] as ""Name"", [RecommendedRetailPrice], [TaxRate] FROM [Warehouse].[StockItems]";
//private const string sqlCommandText = @"SELECT [StockItemID] as ""ID"", [StockItemName] as ""Name"", [RecommendedRetailPrice], [TaxRate] FROM [Warehouse].[StockItems]; WAITFOR DELAY '00:00:02'";
private readonly ILogger<StockItemsController> logger;
private readonly IDbConnection dbConnection;
private readonly IDatabase redisCache;
public StockItemsController(ILogger<StockItemsController> logger, IDapperContext dapperContext, IConnectionMultiplexer connectionMultiplexer)
{
this.logger = logger;
this.dbConnection = dapperContext.ConnectionCreate();
this.redisCache = connectionMultiplexer.GetDatabase();
}
[HttpGet]
public async Task<ActionResult<IEnumerable<Model.StockItemListDtoV1>>> Get()
{
var cached = await redisCache.StringGetAsync("StockItems");
if (cached.HasValue)
{
return Content(cached, "application/json");
}
var stockItems = await dbConnection.QueryWithRetryAsync<Model.StockItemListDtoV1>(sql: sqlCommandText, commandType: CommandType.Text);
#if SERIALISER_SOURCE_GENERATION
string json = JsonSerializer.Serialize(stockItems, typeof(List<Model.StockItemListDtoV1>), Model.StockItemListDtoV1GenerationContext.Default);
#else
string json = JsonSerializer.Serialize(stockItems);
#endif
await redisCache.StringSetAsync("StockItems", json, expiry: StockItemListExpiration);
return Content(json, "application/json");
}
...
[HttpDelete()]
public async Task<ActionResult> ListCacheDelete()
{
await redisCache.KeyDeleteAsync("StockItems");
logger.LogInformation("StockItems list removed");
return this.Ok();
}
}
public class StockItemListDtoV1
{
public int Id { get; set; }
public string Name { get; set; }
public decimal RecommendedRetailPrice { get; set; }
public decimal TaxRate { get; set; }
}
[JsonSourceGenerationOptions(PropertyNamingPolicy = JsonKnownNamingPolicy.CamelCase)]
[JsonSerializable(typeof(List<StockItemListDtoV1>))]
public partial class StockItemListDtoV1GenerationContext : JsonSerializerContext
{
}
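The generated context also exposes strongly typed metadata, so the cached JSON can be deserialised without reflection as well. A sketch, assuming the source generator’s default property naming:
// Deserialise using the compile-time generated metadata for List<StockItemListDtoV1>
List<StockItemListDtoV1>? stockItems = JsonSerializer.Deserialize(json, StockItemListDtoV1GenerationContext.Default.ListStockItemListDtoV1);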
The cost of constructing the serialiser may be higher, but the cost of performing serialisation with it is much smaller.
I used Telerik Fiddler to empty the cache, then loaded the StockItems list 10 times (more tests would improve the quality of the results). The first trial was with the “conventional” serialiser.
The average time for the conventional serialiser was 0.028562 seconds.
The average time for the source-generated version was 0.030546 seconds. But, if the initial compilation step was ignored, the average duration dropped to 0.000223 seconds, a significant improvement.
While exploring some of the functionality of MiniProfiler there were some third-party examples which caught my attention.
using (SqlConnection connection = new SqlConnection(@"Data Source=...; Initial Catalog=SyncDB; Trusted_Connection=Yes"))
{
using (ProfiledDbConnection profiledDbConnection = new ProfiledDbConnection(connection, MiniProfiler.Current))
{
if (profiledDbConnection.State != System.Data.ConnectionState.Open)
profiledDbConnection.Open();
using (SqlCommand command = new SqlCommand("Select * From Authors", connection))
{
using (ProfiledDbCommand profiledDbCommand = new ProfiledDbCommand(command, connection, MiniProfiler.Current))
{
var data = profiledDbCommand.ExecuteReader();
//Write code here to populate the list of Authors
}
}
}
}
“Inspired” by code like this, my first attempt to retrieve a list of stock items didn’t look right.
[HttpGet("AdoProfiledOtt")]
public async Task<ActionResult<IEnumerable<Model.StockItemListDtoV1>>> GetAdoProfiledOtt()
{
List<Model.StockItemListDtoV1> response = new List<Model.StockItemListDtoV1>();
using (SqlConnection connection = new SqlConnection(configuration.GetConnectionString("default")))
{
using (ProfiledDbConnection profiledDbConnection = new ProfiledDbConnection(connection, MiniProfiler.Current))
{
await profiledDbConnection.OpenAsync();
using (SqlCommand command = new SqlCommand(sqlCommandText, connection))
{
using (ProfiledDbCommand profiledDbCommand = new ProfiledDbCommand(command, profiledDbConnection, MiniProfiler.Current))
{
using (SqlDataReader reader = await command.ExecuteReaderAsync())
{
using (ProfiledDbDataReader profiledDbDataReader = new ProfiledDbDataReader(reader, MiniProfiler.Current))
{
var rowParser = profiledDbDataReader.GetRowParser<Model.StockItemListDtoV1>();
while (await profiledDbDataReader.ReadAsync())
{
response.Add(rowParser(profiledDbDataReader));
}
}
}
}
}
await profiledDbConnection.CloseAsync();
}
}
return this.Ok(response);
}
/// <summary>
/// Initializes a new instance of the <see cref="ProfiledDbDataReader"/> class (with <see cref="CommandBehavior.Default"/>).
/// </summary>
/// <param name="reader">The reader.</param>
/// <param name="profiler">The profiler.</param>
public ProfiledDbDataReader(DbDataReader reader, IDbProfiler profiler) : this(reader, CommandBehavior.Default, profiler) { }
/// <summary>
/// Initializes a new instance of the <see cref="ProfiledDbDataReader"/> class.
/// </summary>
/// <param name="reader">The reader.</param>
/// <param name="behavior">The behavior specified during command execution.</param>
/// <param name="profiler">The profiler.</param>
public ProfiledDbDataReader(DbDataReader reader, CommandBehavior behavior, IDbProfiler? profiler)
{
WrappedReader = reader;
Behavior = behavior;
_profiler = profiler;
}
...
/// <summary>
/// The <see cref="DbDataReader"/> that is being used.
/// </summary>
public DbDataReader WrappedReader { get; }
/// <inheritdoc cref="DbDataReader.Dispose(bool)"/>
protected override void Dispose(bool disposing)
{
// reader can be null when we're not profiling, but we've inherited from ProfiledDbCommand and are returning
// an unwrapped reader from the base command
WrappedReader?.Dispose();
base.Dispose(disposing);
}
Another “using” is not required as ProfiledDbDataReader “automagically” disposes the SqlDataReader. This was my final version of profiling the System.Data.SqlClient code to retrieve a list of stock items.
[HttpGet("AdoProfiled")]
public async Task<ActionResult<IEnumerable<Model.StockItemListDtoV1>>> GetProfiledAdo()
{
List<Model.StockItemListDtoV1> response = new List<Model.StockItemListDtoV1>();
using (ProfiledDbConnection profiledDbConnection = new ProfiledDbConnection((SqlConnection)dapperContext.ConnectionCreate(), MiniProfiler.Current))
{
await profiledDbConnection.OpenAsync();
using (ProfiledDbCommand profiledDbCommand = new ProfiledDbCommand(new SqlCommand(sqlCommandText), profiledDbConnection, MiniProfiler.Current))
{
DbDataReader reader = await profiledDbCommand.ExecuteReaderAsync();
using (ProfiledDbDataReader profiledDbDataReader = new ProfiledDbDataReader(reader, MiniProfiler.Current))
{
var rowParser = profiledDbDataReader.GetRowParser<Model.StockItemListDtoV1>();
while (await profiledDbDataReader.ReadAsync())
{
response.Add(rowParser(profiledDbDataReader));
}
}
}
}
return this.Ok(response);
}
The ProfiledDbDataReader.cs implementation was “sparse” and, when loading a longer list of stock items, there were some ReadAsync calls which took a bit longer.
/// <summary>
/// The profiled database data reader.
/// </summary>
public class ProfiledDbDataReader : DbDataReader
{
private readonly IDbProfiler? _profiler;
/// <summary>
/// Initializes a new instance of the <see cref="ProfiledDbDataReader"/> class (with <see cref="CommandBehavior.Default"/>).
/// </summary>
/// <param name="reader">The reader.</param>
/// <param name="profiler">The profiler.</param>
public ProfiledDbDataReader(DbDataReader reader, IDbProfiler profiler) : this(reader, CommandBehavior.Default, profiler) { }
/// <summary>
/// Initializes a new instance of the <see cref="ProfiledDbDataReader"/> class.
/// </summary>
/// <param name="reader">The reader.</param>
/// <param name="behavior">The behavior specified during command execution.</param>
/// <param name="profiler">The profiler.</param>
public ProfiledDbDataReader(DbDataReader reader, CommandBehavior behavior, IDbProfiler? profiler)
{
WrappedReader = reader;
Behavior = behavior;
_profiler = profiler;
}
/// <summary>Gets the behavior specified during command execution.</summary>
public CommandBehavior Behavior { get; }
/// <inheritdoc cref="DbDataReader.Depth"/>
public override int Depth => WrappedReader.Depth;
/// <inheritdoc cref="DbDataReader.FieldCount"/>
public override int FieldCount => WrappedReader.FieldCount;
/// <inheritdoc cref="DbDataReader.HasRows"/>
public override bool HasRows => WrappedReader.HasRows;
/// <inheritdoc cref="DbDataReader.IsClosed"/>
public override bool IsClosed => WrappedReader.IsClosed;
/// <inheritdoc cref="DbDataReader.RecordsAffected"/>
public override int RecordsAffected => WrappedReader.RecordsAffected;
/// <summary>
/// The <see cref="DbDataReader"/> that is being used.
/// </summary>
public DbDataReader WrappedReader { get; }
/// <inheritdoc cref="DbDataReader.this[string]"/>
public override object this[string name] => WrappedReader[name];
/// <inheritdoc cref="DbDataReader.this[int]"/>
public override object this[int ordinal] => WrappedReader[ordinal];
...
/// <inheritdoc cref="DbDataReader.GetString(int)"/>
public override string GetString(int ordinal) => WrappedReader.GetString(ordinal);
/// <inheritdoc cref="DbDataReader.GetValue(int)"/>
public override object GetValue(int ordinal) => WrappedReader.GetValue(ordinal);
/// <inheritdoc cref="DbDataReader.GetValues(object[])"/>
public override int GetValues(object[] values) => WrappedReader.GetValues(values);
/// <inheritdoc cref="DbDataReader.IsDBNull(int)"/>
public override bool IsDBNull(int ordinal) => WrappedReader.IsDBNull(ordinal);
/// <inheritdoc cref="DbDataReader.IsDBNullAsync(int, CancellationToken)"/>
public override Task<bool> IsDBNullAsync(int ordinal, CancellationToken cancellationToken) => WrappedReader.IsDBNullAsync(ordinal, cancellationToken);
/// <inheritdoc cref="DbDataReader.NextResult()"/>
public override bool NextResult() => WrappedReader.NextResult();
/// <inheritdoc cref="DbDataReader.NextResultAsync(CancellationToken)"/>
public override Task<bool> NextResultAsync(CancellationToken cancellationToken) => WrappedReader.NextResultAsync(cancellationToken);
/// <inheritdoc cref="DbDataReader.Read()"/>
public override bool Read() => WrappedReader.Read();
/// <inheritdoc cref="DbDataReader.ReadAsync(CancellationToken)"/>
public override Task<bool> ReadAsync(CancellationToken cancellationToken) => WrappedReader.ReadAsync(cancellationToken);
/// <inheritdoc cref="DbDataReader.Close()"/>
public override void Close()
{
// reader can be null when we're not profiling, but we've inherited from ProfiledDbCommand and are returning
// an unwrapped reader from the base command
WrappedReader?.Close();
_profiler?.ReaderFinish(this);
}
/// <inheritdoc cref="DbDataReader.GetSchemaTable()"/>
public override DataTable? GetSchemaTable() => WrappedReader.GetSchemaTable();
/// <inheritdoc cref="DbDataReader.Dispose(bool)"/>
protected override void Dispose(bool disposing)
{
// reader can be null when we're not profiling, but we've inherited from ProfiledDbCommand and are returning
// an unwrapped reader from the base command
WrappedReader?.Dispose();
base.Dispose(disposing);
}
}
In the [HttpGet(“DapperProfiledQueryMultipleStep”)] method I wrapped ReadAsync and could see in the profiling that every so often a call took significantly longer.
using (MiniProfiler.Current.Step("invoiceSummaryLine.ReadAsync"))
{
response.InvoiceLines = await invoiceSummary.ReadAsync<Model.InvoiceLineSummaryListDtoV1>();
}
I did consider modifying ProfiledDbDataReader.cs to add some instrumentation to the Read… and Get… methods but the authors of MiniProfiler are way way smarter than me, so there must be a reason why they didn’t.
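For what it’s worth, the sort of instrumentation I was considering looked like this (a sketch only; a per-row Step has its own bookkeeping overhead, which may well be why the authors left it out):
/// <inheritdoc cref="DbDataReader.ReadAsync(CancellationToken)"/>
public override async Task<bool> ReadAsync(CancellationToken cancellationToken)
{
   // Time each row fetch - noisy on long result sets and the Step is not free
   using (MiniProfiler.Current.Step("ProfiledDbDataReader.ReadAsync"))
   {
      return await WrappedReader.ReadAsync(cancellationToken);
   }
}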