TTI V3 Gateway provisioning Dragino LHT65 Uplink

This very long post is about how to connect a Dragino LHT65 Temperature and Humidity sensor to Azure IoT Central using my TTI/TTN V3 Azure IoT Connector and the Digital Twin Definition Language (DTDL).

Dragino LHT65 temperature and Humidity sensor

The first step was to add an application (dragino-lht65) in my The Things Industries (TTI) tenant.

TTI/TTN application for my Dragino LHT65 devices
Adding devMobile as a collaborator on the new application
TTI Application API Key configuration

The new Application API Key used by the MQTTnet managed client only needs to have write downlink and read uplink traffic enabled.
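For context, the sketch below shows roughly how an MQTTnet managed client can connect to the TTI V3 MQTT broker with that Application API Key and subscribe to uplink traffic. It is a minimal illustration rather than the connector's actual code, assumes an MQTTnet v3-era API (class names differ slightly between MQTTnet versions), and the topic and username layout follow the TTN V3 MQTT documentation.

using System;
using System.Collections.Generic;
using MQTTnet;
using MQTTnet.Client.Options;
using MQTTnet.Extensions.ManagedClient;

// Illustrative only - connect to the TTI V3 MQTT broker and subscribe to every EndDevice's uplink messages
const string MqttServerName = "eu1.cloud.thethings.industries";
const string ApplicationId = "dragino-lht65";
const string TenantId = "...";   // TTI tenant ID
const string ApiKey = "...";     // Application API Key with read uplink + write downlink rights

IManagedMqttClient mqttClient = new MqttFactory().CreateManagedMqttClient();

IMqttClientOptions clientOptions = new MqttClientOptionsBuilder()
	.WithClientId("MQTTClient")
	.WithTcpServer(MqttServerName, 8883)
	.WithCredentials($"{ApplicationId}@{TenantId}", ApiKey)   // username is {application id}@{tenant id}
	.WithTls()
	.Build();

ManagedMqttClientOptions managedOptions = new ManagedMqttClientOptionsBuilder()
	.WithAutoReconnectDelay(TimeSpan.FromSeconds(5))
	.WithClientOptions(clientOptions)
	.Build();

await mqttClient.StartAsync(managedOptions);

// Uplink messages arrive on v3/{application id}@{tenant id}/devices/{device id}/up
await mqttClient.SubscribeAsync(new List<MqttTopicFilter>
{
	new MqttTopicFilterBuilder().WithTopic($"v3/{ApplicationId}@{TenantId}/devices/+/up").Build()
});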

FTDI Adapter and modified LHT65 cable

So I could reliably connect to my LHT65 devices to configure them, I modified a programming cable so I could use it with a spare FTDI adaptor without jumper wires. To do this I used a small jeweller's screwdriver to “pop” out the VCC cable and move the transmit data line.

After entering the device password and checking the firmware version I used the AT+CFG command to display the device settings.

AT+CFG: Print all configurations

[334428]***** UpLinkCounter= 0 *****
[334430]TX on freq 923200000 Hz at DR 2
[334804]txDone
[339807]RX on freq 923200000 Hz at DR 2
[339868]rxTimeOut
[340807]RX on freq 923200000 Hz at DR 2
[340868]rxTimeOut

Correct Password

Stop Tx events,Please wait for all configurations to print
Printf all config...
AT+DEUI=a8 .. .. .. .. .. .. d6
AT+DADDR=01......D6

AT+APPKEY=9d .. .. .. .. .. .. .. .. .. .. .. .. .. .. 2e
AT+NWKSKEY=f6 .. .. .. .. .. .. .. .. .. .. .. .. .. .. 69
AT+APPSKEY=4c 35 .. .. .. .. .. .. .. .. .. .. .. .. .. 3d
AT+APPEUI=a0 .. .. .. .. .. .. 00
AT+ADR=1
AT+TXP=0
AT+DR=0
AT+DCS=0
AT+PNM=1
AT+RX2FQ=923200000
AT+RX2DR=2
AT+RX1DL=1000
AT+RX2DL=2000
AT+JN1DL=5000
AT+JN2DL=6000
AT+NJM=1
AT+NWKID=00 00 00 00
AT+FCU=0
AT+FCD=0
AT+CLASS=A
AT+NJS=0
AT+RECVB=0:
AT+RECV=0:
AT+VER=v1.7 AS923

AT+CFM=0
AT+CFS=0
AT+SNR=0
AT+RSSI=0
AT+TDC=1200000
AT+PORT=2
AT+PWORD=123456
AT+CHS=0
AT+DATE=21/3/26 07:49:15
AT+SLEEP=0
AT+EXT=4,2
AT+RTP=20
AT+BAT=3120
AT+WMOD=0
AT+ARTEMP=-40,125
AT+CITEMP=1
Start Tx events OK


[399287]***** UpLinkCounter= 0 *****
[399289]TX on freq 923400000 Hz at DR 2
[399663]txDone
[404666]RX on freq 923400000 Hz at DR 2
[404726]rxTimeOut
[405666]RX on freq 923200000 Hz at DR 2
[405726]rxTimeOut

I copied the AppEUI and DevEUI for use on the Dragino LHT65 Register end device form provided by TTI/TTN.

TTI/TTN Dragino LHT65 Register end device

The Dragino LHT65 uses the DeviceEUI as the DeviceID, which meant I had to do more redaction in my TTI/TTN and Azure Application Insights screen captures. The rules around the re-use of EndDevice IDs were a pain in the arse (PITA) in my development-focused tenant.

Dragino LHT65 Device uplink payload formatter

The connector supports both uplink and downlink messages with JSON-encoded payloads. The Dragino LHT65 has a vendor-supplied formatter which is automatically configured when an EndDevice is created. The EndDevice formatter configuration can also be overridden at the Application level in the app.settings.json file.

Device Live Data Uplink Data Payload

Once an EndDevice is configured in TTI/TTN I usually use the “Live data Uplink Payload” to work out the decoded payload JSON property names and data types.
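For the LHT65 with the vendor formatter the decoded payload looks something like the snippet below. The property names come from the Dragino decoder; the values are purely illustrative, and whether some of them arrive as numbers or strings depends on the decoder version, which is exactly why I check the live data rather than assuming.

{
  "BatV": 3.12,
  "Bat_status": 3,
  "Ext_sensor": "Temperature Sensor",
  "Hum_SHT": 52.3,
  "TempC_DS": 22.1,
  "TempC_SHT": 21.9
}

These property names and data types are what get mapped to telemetry in the Azure IoT Central device template.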

LHT65 Uplink only Azure IoT Central Device Template
LHT65 Device Template View Identity

For Azure IoT Central “automagic” provisioning the DTDLModelId has to be copied from the Azure IoT Central Template into the TTI/TTN EndDevice configuration or into the application configuration in the app.settings.json file.

LHT65 Device Template copy DTDL @ID
TTI EndDevice configuring the DTDLV2 @ID at the device level

Configuring the DTDLV2 @ID at the TTI application level in the app.settings.json file

{
  "Logging": {
    "LogLevel": {
      "Default": "Debug",
      "Microsoft": "Debug",
      "Microsoft.Hosting.Lifetime": "Debug"
    },
    "ApplicationInsights": {
      "LogLevel": {
        "Default": "Debug"
      }
    }
  },

  "ProgramSettings": {
    "Applications": {
      "application1": {
        "AzureSettings": {
          "DeviceProvisioningServiceSettings": {
            "IdScope": "0ne...DD9",
            "GroupEnrollmentKey": "eFR...w=="
          }
        },
        "DTDLModelId": "dtmi:ttnv3connectorclient:FezduinoWisnodeV14x8;4",
        "MQTTAccessKey": "NNSXS.HCY...RYQ",
        "DeviceIntegrationDefault": false,
        "MethodSettings": {
          "Reboot": {
            "Port": 21,
            "Confirmed": true,
            "Priority": "normal",
            "Queue": "push"
          },
          "value_0": {
            "Port": 30,
            "Confirmed": true,
            "Priority": "normal",
            "Queue": "push"
          },
          "value_1": {
            "Port": 30,
            "Confirmed": true,
            "Priority": "normal",
            "Queue": "push"
          },
          "TemperatureOOBAlertMinimumAndMaximum": {
            "Port": 30,
            "Confirmed": true,
            "Priority": "normal",
            "Queue": "push"
          }
        }
      },
      "seeeduinolorawan": {
        "AzureSettings": {
          "DeviceProvisioningServiceSettings": {
            "IdScope": "0ne...DD9",
            "GroupEnrollmentKey": "AtN...g=="
          }
        },
        "DTDLModelId": "dtmi:ttnv3connectorclient:SeeeduinoLoRaWAN4cz;1",
        "MQTTAccessKey": "NNSXS.V44...42A",
        "DeviceIntegrationDefault": true,
        "DevicePageSize": 10
      },
      "dragino-lht65": {
        "AzureSettings": {
          "DeviceProvisioningServiceSettings": {
            "IdScope": "0ne...DD9",
            "GroupEnrollmentKey": "SLB...w=="
          }
        },
        "DTDLModelId": "dtmi:ttnv3connectorclient:DraginoLHT656w6;1",
        "MQTTAccessKey": "NNSXS.RIJ...NZQ",
        "DeviceIntegrationDefault": true,
        "DevicePageSize": 10
      }
    },
    "TheThingsIndustries": {
      "MqttServerName": "eu1.cloud.thethings.industries",
      "MqttClientId": "MQTTClient",
      "MqttAutoReconnectDelay": "00:00:05",
      "Tenant": "...-test",
      "ApiBaseUrl": "https://...-test.eu1.cloud.thethings.industries/api/v3",
      "ApiKey": "NNSXS.NR7...ZSA",
      "Collaborator": "devmobile",
      "DevicePageSize": 10,
      "DeviceIntegrationDefault": true
    }
  }
}

The Azure Device Provisioning Service (DPS) is configured at the TTI application level in the app.settings.json file. The ID Scope and one of the Primary or Secondary Shared Access Signature (SAS) keys should be copied into the DeviceProvisioningServiceSettings of an Application. I usually set the “Automatically connect devices in this group” flag as part of the “automagic” provisioning process.
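Behind the scenes the “automagic” provisioning is the standard DPS symmetric key group enrollment flow: a device-specific key is derived from the GroupEnrollmentKey, then the device is registered with DPS. A minimal sketch using the Azure Provisioning Device Client SDK is shown below; the method names are mine rather than the connector's, and passing the DTDLModelId so Azure IoT Central assigns the right template is omitted.

using System;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Provisioning.Client;
using Microsoft.Azure.Devices.Provisioning.Client.Transport;
using Microsoft.Azure.Devices.Shared;

// Derive the device-specific key from the group enrollment key (HMAC-SHA256 of the device ID)
static string ComputeDerivedSymmetricKey(string groupEnrollmentKey, string deviceId)
{
	using (var hmac = new HMACSHA256(Convert.FromBase64String(groupEnrollmentKey)))
	{
		return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(deviceId)));
	}
}

// Register the device with DPS using the IdScope + derived key from the Application's app.settings.json section
static async Task<DeviceRegistrationResult> RegisterDeviceAsync(string idScope, string groupEnrollmentKey, string deviceId)
{
	string deviceKey = ComputeDerivedSymmetricKey(groupEnrollmentKey, deviceId);

	using (var securityProvider = new SecurityProviderSymmetricKey(deviceId, deviceKey, null))
	using (var transport = new ProvisioningTransportHandlerAmqp(TransportFallbackType.TcpOnly))
	{
		ProvisioningDeviceClient provisioningClient = ProvisioningDeviceClient.Create(
			"global.azure-devices-provisioning.net", idScope, securityProvider, transport);

		// The result contains the assigned IoT Hub (the hub underneath Azure IoT Central) and the device ID
		return await provisioningClient.RegisterAsync();
	}
}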

Azure IoT Central Group Enrollment Key
The device templates then need to be mapped to an Enrollment Group and then to a Device Group.

For testing, the connector application can be run locally with diagnostic information displayed in the application console window as it “automagically” provisions devices and uploads telemetry data.

Connector application Diagnostics
Azure IoT Central Device list before my LHT65 device is “automagically” provisioned
Azure IoT Central Device list after my LHT65 device is “automagically” provisioned

Once a device has been provisioned I check on the raw data display that all the fields I configured have been mapped correctly.

Azure IoT Central raw data display

I then created a dashboard to display the telemetry data from the LHT65 sensors.

Azure IoT Central dashboard displaying LHT65 temperature, humidity and battery voltage graphs.

The dashboard also has a few KPI displays which highlighted an issue that occurs a couple of times a month with the LHT65 onboard temperature sensor values (327.7°). I have contacted Dragino technical support and have so far been unable to find a way to remove the current and/or filter out future aberrant values.

Azure Application Insights logging

I also noticed that the formatting of the DeviceEUI values in the Application Insights logging was incorrect after trying to search for one of my Seeedstudio LoRaWAN devices by its DeviceEUI.

TTN V3 Gateway Downlink Broken

While adding Azure Device Provisioning Service (DPS) support to my The Things Industries (TTI)/The Things Network (TTN) Azure IoT Hub/Azure IoT Central Connector I broke Cloud to Device (C2D)/Downlink messaging. I had copied the Advanced Message Queuing Protocol (AMQP) connection pooling configuration code from my The Things Network integration assuming it worked.

return DeviceClient.CreateFromConnectionString(connectionString, deviceId,
	new ITransportSettings[]
	{
		new AmqpTransportSettings(TransportType.Amqp_Tcp_Only)
		{
			PrefetchCount = 0,
			AmqpConnectionPoolSettings = new AmqpConnectionPoolSettings()
			{
				Pooling = true,
			}
		}
	});

I hadn’t noticed this issue in my Azure IoT The Things Network Integration because I hadn’t built support for C2D messaging. After some trial and error I figured out the issue was the PrefetchCount initialisation.

return DeviceClient.CreateFromConnectionString(connectionString, deviceId,
	new ITransportSettings[]
	{
		new AmqpTransportSettings(TransportType.Amqp_Tcp_Only)
		{
			AmqpConnectionPoolSettings = new AmqpConnectionPoolSettings()
			{
				Pooling = true,
			}
		}
	});

From the Azure Service Bus documentation (I couldn’t find anything specifically about Azure IoT Hub):

Even though the Service Bus APIs do not directly expose such an option today, a lower-level AMQP protocol client can use the link-credit model to turn the “pull-style” interaction of issuing one unit of credit for each receive request into a “push-style” model by issuing a large number of link credits and then receive messages as they become available without any further interaction. Push is supported through the MessagingFactory.PrefetchCount or MessageReceiver.PrefetchCount property settings. When they are non-zero, the AMQP client uses it as the link credit.

In this context, it’s important to understand that the clock for the expiration of the lock on the message inside the entity starts when the message is taken from the entity, not when the message is put on the wire. Whenever the client indicates readiness to receive messages by issuing link credit, it is therefore expected to be actively pulling messages across the network and be ready to handle them. Otherwise the message lock may have expired before the message is even delivered. The use of link-credit flow control should directly reflect the immediate readiness to deal with available messages dispatched to the receiver.

In the Azure IoT Hub SDK the prefetch count is set to 50 (around line 57) and an exception is thrown if it is less than zero (around line 90), and there is some information about tuning the prefetch value for Azure Service Bus.

The best explanation I could find was a GitHub issue asking “What exactly does the PrefetchCount property control?”

“You are correct, the pre-fetch count is used to set the link credit over AMQP. What this signifies is the max. no. of messages that can be “in-flight” from the service to the client, at any given time. (This value defaults to 50 for the IoT Hub .NET client).
The client specifies its link-credit, that the service must respect. In simplest terms, any time the service sends a message to the client, it decrements the link credit, and will continue sending messages until linkCredit > 0. Once the client acknowledges the message, it will increment the link credit.”

In summary, if the PrefetchCount is set to zero on startup in my application, no messages will be sent to the client….

TTN V3 Gateway Azure Configuration Simplification

To reduce complexity the initial version of the V3 TTI gateway didn’t support the Azure Device Provisioning Service (DPS). In preparation for adding it I had included a DeviceProvisioningServiceSettings object in both the Application and AzureSettingsDefault sections.

After trialing a couple of different approaches I have removed the AzureSettingsDefault. If an application has a connection string configured that is used; if not, the DPS configuration is used; if there is neither, the application currently just logs an error (sketched at the end of this post). In the future I will look at adding a configuration option to optionally shut the application down.

{
  ...
  "ProgramSettings": {
    "Applications": {
      "application1": {
        "AzureSettings": {
          "IoTHubConnectionString": "HostName=TT...n1.azure-devices.net;SharedAccessKeyName=device;SharedAccessKey=Am...M=",
          "DeviceProvisioningServiceSettings": {
            "IdScope": "0n...3B",
            "GroupEnrollmentKey": "Kl...Y="
          }
        },
        "MQTTAccessKey": "NNSXS.HC...YQ",
        "DeviceIntegrationDefault": false,
        "DevicePageSize": 10
      },
      "seeeduinolorawan": {
        "AzureSettings": {
          "IoTHubConnectionString": "HostName=TT...n2.azure-devices.net;SharedAccessKeyName=device;SharedAccessKey=D2q...L8=",
          "DeviceProvisioningServiceSettings": {
            "IdScope": "0n...3B",
            "GroupEnrollmentKey": "Kl...Y="
          }
        },
        "MQTTAccessKey": "NNSXS.V44...42A",
        "DeviceIntegrationDefault": true,
        "DevicePageSize": 10
      }
    },

    "TheThingsIndustries": {
      "MqttServerName": "eu1.cloud.thethings.industries",
      "MqttClientId": "MQTTClient",
      "MqttAutoReconnectDelay": "00:00:05",
      "Tenant": "br...st",
      "ApiBaseUrl": "https://br..st.eu1.cloud.thethings.industries/api/v3",
      "ApiKey": "NNSXS.NR...SA",
      "Collaborator": "de...le",
      "DevicePageSize": 10,
      "DeviceIntegrationDefault": true
    }
  }
}

Falling back from application settings to default settings wasn’t easy to implement, explain or document.
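With the AzureSettingsDefault section gone, the per-application selection described above boils down to something like the sketch below. The type and parameter names are illustrative rather than the connector's actual ones, and it only shows the decision, not the DeviceClient construction.

// Illustrative only - how the connector decides which connection method to use for an application's devices
enum ConnectionMethod
{
	ConnectionString,
	DeviceProvisioningService,
	None
}

static ConnectionMethod SelectConnectionMethod(string ioTHubConnectionString, string dpsIdScope, string dpsGroupEnrollmentKey)
{
	// An application-level IoT Hub connection string takes precedence
	if (!string.IsNullOrEmpty(ioTHubConnectionString))
	{
		return ConnectionMethod.ConnectionString;
	}

	// Otherwise fall back to Device Provisioning Service group enrollment
	if (!string.IsNullOrEmpty(dpsIdScope) && !string.IsNullOrEmpty(dpsGroupEnrollmentKey))
	{
		return ConnectionMethod.DeviceProvisioningService;
	}

	// Neither configured - the caller logs an error for the application
	return ConnectionMethod.None;
}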