Access to cloud version of InfluxDB

Thanks for the detailed update @tim_influx. I was unaware of the July 2020 changes. Influx 1.x has been very popular with IoTaWatt users using local servers like RPi or NAS units. There are a few larger users using 1.x with many IoTaWatt, but they are all part of a single enterprise.

When you say Telegraf will allow users to use influx 2 with ZERO code changes, does that mean that the influx 1.x query language is translated through Telegraf as well? The reason I ask is that on startup, IoTaWatt uses a series of influx queries to determine the time of the last measurement sent to influx, and then resumes upload of data from that point. With this feature, communication can go down for hours, days, or weeks and IoTaWatt will upload all of the backlog.

When I first encountered the 2.0 issue, before I was aware of the HTTPS requirement (maybe it was not a requirement then?) I recall thinking that it would be fairly trivial to convert the measurement API to the 2.0 protocol, and was preparing to do that. But as I recall, the queries would not work.

So would that translate seamlessly through Telegraf, or is there another approach that might work?

Ah…the best laid plans… Telegraf only exposes /write and doesn’t accept the /query API.

Well, that is a clever way to do checkpointing, but you are right. If queries are going to come from the device, that means we’ll need to use NGINX as a proxy instead. (I believe you suggested this). Perhaps we can work on documenting that configuration together?

Putting that in place should unlock all of the scenarios I outlined. Meaning, once NGINX is there, we can decide what scenario we want to configure:

  1. just upgrading the connection from HTTP to HTTPS
  2. enabling the dual write scenario by splitting the /writes to Telegraf and /query to the desired InfluxDB instance

If you could give me a pointer to the InfluxQL queries you are currently running (in GitHub), I would be happy to contribute the corresponding Flux queries – if you are interested in working over time toward supporting 2.0 more officially (even if there needs to be an NGINX proxy to ensure we can upgrade the connection to HTTPS off the device). We have some other examples of how to handle this with the different versions and again, would be happy to assist where we can.

Thank you.

Once the InfluxDB v2 OSS release candidate is out (first non-alpha/beta release), I believe the following should work:

  1. There is no requirement for HTTPS; it should be secure enough for this use case using basic auth via the API token. Turning on TLS is a config option for OSS, but clearly undesirable given the limited power of the ESP8266 devices.
  2. It will introduce the 1.x compatibility APIs so that read/write will both function and behave as if it were a 1.x instance.

It will simply be a matter of documenting how to populate the basic auth information into the existing form. This is similar to the way in which the InfluxQL support is configured through the Grafana data source instructions I linked above.

Happy to contribute a similar doc addition to the repo…to clarify how to specify the appropriate creds to make this work.

For InfluxDB Cloud – HTTPS is required, so the proxy setup will also be required if you wish to connect from an ESP8266 device to Cloud.

As a one-person show I can’t commit to doing this at the pace you might like, but let’s stick a toe in and test the water. Granted, NGINX solves the HTTPS problem. I run one now on an RPi for inbound HTTPS connections to an IoTaWatt.

Once the HTTPS problem is resolved, I think I would first give serious consideration to just changing both the /write and /query APIs to go native and skip the complexity of configuring and maintaining a Telegraf service. At the end of the day, my sense is that would be simpler. I would just treat influx1 and influx2 as two different services.

So the IoTaWatt Service that handles influx is here. To effectively have autonomous tasks in the ESP8266 environment (FreeRTOS is too resource-heavy), IoTaWatt uses state machines to manage context. This Service is a state machine.

It may be hard to follow without getting under the hood. I’ll make it easy on you by saying the query is:

db={database}&epoch=s[&rp={retention policy}]&q=SELECT LAST({field key}) FROM {measurement} [WHERE {tag key} = {value} [,.....]]

A query is done for each of the measurements and the time of the most recent is used as a resume point.
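To make the resume mechanism concrete, here is a minimal sketch (not IoTaWatt’s actual firmware code, and the function names are mine) of building the last-value query above for each measurement and picking the resume point from the newest result:

```python
def build_last_query(database, measurement, field_key, tags=None, rp=None):
    """Build the InfluxQL request string described above (epoch seconds)."""
    q = f"SELECT LAST({field_key}) FROM {measurement}"
    if tags:
        # Optional WHERE clause restricting the series by tag values.
        q += " WHERE " + " AND ".join(f"{k} = '{v}'" for k, v in tags.items())
    params = f"db={database}&epoch=s"
    if rp:
        params += f"&rp={rp}"
    return f"{params}&q={q}"

def resume_point(last_times):
    """Given the last timestamp seen for each measurement, resume from the newest."""
    return max(last_times) if last_times else 0

print(build_last_query("iotawatt", "watts", "value", {"device": "house"}))
print(resume_point([1595000000, 1595003600, 1594990000]))  # 1595003600
```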

The /writes are:

{measurement}[,{tagkey1}={tagvalue1}[,{tagkey2}=....]] {fieldkey}={fieldvalue} {timestamp}

but note that if there are multiple measurements with the same name and tag set, in the interest of buffer economy they are combined into one line of line protocol with the fields comma-delimited.
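The buffer-economy trick described above can be sketched as follows (an illustration, not the actual firmware logic): points sharing a measurement name, tag set, and timestamp are merged into a single line with their fields comma-delimited.

```python
from collections import OrderedDict

def to_lines(points):
    """points: iterable of (measurement, tags_dict, field_key, field_value, timestamp)."""
    merged = OrderedDict()
    for meas, tags, fkey, fval, ts in points:
        # Points with the same measurement, tag set, and timestamp share one line.
        key = (meas, tuple(sorted(tags.items())), ts)
        merged.setdefault(key, []).append(f"{fkey}={fval}")
    lines = []
    for (meas, tags, ts), fields in merged.items():
        tag_str = "".join(f",{k}={v}" for k, v in tags)
        lines.append(f"{meas}{tag_str} {','.join(fields)} {ts}")
    return lines

print(to_lines([
    ("power", {"unit": "house"}, "watts", 1200, 1595000000),
    ("power", {"unit": "house"}, "volts", 121.3, 1595000000),
]))  # ['power,unit=house watts=1200,volts=121.3 1595000000']
```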

If you could comment on what you would consider best practice in converting this to 2.0, I’ll see what’s involved.

Couple of additional comments:

The IoTaWatt keeps up to a year of 5 second measurements and more than 10 years of 1 minute measurements. It’s reasonably fast to retrieve and provides just about everything you might want to know: Volts, Watts, kWh, Amps, VAR, VARh, Hz, and PF. It can deliver this data averaged or summed over any period as easily as from the discrete intervals. See the GRAPH+ docs and Query API (heavily influenced by influx query and grafana).

I say this in the interest of full disclosure because I feel many users are satisfied with the standalone solution that IoTaWatt provides. Capacity, detail and speed are not remarkably better with influx. The utility with influx is being able to aggregate data from multiple IoTaWatt (and other devices), being able to access securely in the cloud, and a reasonable expectation of security against data loss. I haven’t looked at the Flux capabilities, so maybe that’s something else that would be a draw.

Given the availability of the compatibility APIs, I’d suggest we continue to leverage those, and then we can take this at a pace that works for you and this community.

There is no hurry – other than we want to make sure that everyone understands how to get it all working.

2.0 API Overview:
From a /write perspective, that all looks fine. We didn’t make any major changes to line protocol for now. The major changes to the /write API are more around the credential handling. (Now: bucket, organization and token).
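The credential changes on /write can be illustrated like this: the v2 endpoint carries the organization and bucket as query parameters and the token in an Authorization header. The host, names, and token below are placeholders, and the helper function is mine, not part of any client library.

```python
from urllib.parse import urlencode

def build_write_request(host, org, bucket, token, precision="s"):
    """Assemble the URL and headers for an InfluxDB v2 /write call."""
    params = urlencode({"org": org, "bucket": bucket, "precision": precision})
    url = f"http://{host}:8086/api/v2/write?{params}"
    headers = {"Authorization": f"Token {token}"}
    return url, headers

url, headers = build_write_request("localhost", "my-org", "my-bucket", "YOURAUTHTOKEN")
print(url)      # http://localhost:8086/api/v2/write?org=my-org&bucket=my-bucket&precision=s
print(headers)  # {'Authorization': 'Token YOURAUTHTOKEN'}
```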

The /query side is where there have been significant changes, as we have a new functional query language which eliminates many of the limitations/restrictions that were previously in place with InfluxQL. While the syntax is no longer as familiar as SQL, the expressiveness of the language is significantly better, including support for variables, result-set shaping, nested functions, math across measurements, joins, string interpolation, more native date-handling functions…and more.

Here are the Query API docs,
and an applied example is also documented here.

Using your query above and via curl it might look something like this (replacing everything within and including the { }):

curl http://{influxdb_host}:8086/api/v2/query?org={your-org} -XPOST -sS \
  -H 'Authorization: Token {YOURAUTHTOKEN}' \
  -H 'Accept: application/csv' \
  -H 'Content-type: application/vnd.flux' \
  -H 'Accept-Encoding: gzip' \
  -d 'from(bucket:"{IoTaWATT-bucket}")
        |> range(start: {some time -12h or now()-24h or ?? -- a guess at the time is required})
        |> filter(fn: (r) => r._measurement == "{measurement}")
        |> filter(fn: (r) => r[{tag_key}] == {tag_value} and ... or ...)
        |> last()'

This returns the last field value for each unique group key – based on the filters applied within the measurement provided.

Here is an example:

from(bucket: "my-bucket")
  |> range(start: -12h)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> last()


You could add more logic to the query if you wanted to do things like stripping off undesired columns, pivot the data to turn the individual fields into a single result set record…and more. The idea is that you can use the query language to shape the data to eliminate complex and extensive parsing.

If you simply wanted the last field for each series key within ALL measurements in a bucket…that can be returned in a single payload with this query:

   from(bucket: "{IoTaWATT-bucket}")
     |> range(start: {some time -12h or now()-24h or ?? -- a guess at the time is required})
     |> last()

The results are grouped into sets based on the annotations required to describe them, but this may be more difficult to parse, particularly if you are already set up to loop through the specific measurements.

If you add the pivot to the query after the last():

  |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")  

The result set returned is two rows of data, one for each unique series. As I believe you are after the last timestamp of each of the series, this might be the most desirable?

If you only care about the last report, independent of the group key or individual fields, you can modify the group key and simply return the last reported field for the measurement:
|> group(columns: ["_measurement"], mode:"by")

The resulting query looks like this:

from(bucket: "my-bucket")
  |> range(start: -12h)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> group(columns: ["_measurement"], mode:"by")
  |> last()
  |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")  

Resulting in one pivoted row per series, with the fields as columns.

Other changes:
database and retention policies have been collapsed into a single concept: bucket.
Results are returned in an annotated CSV format. The annotations provide additional metadata that can be valuable for understanding and processing the data.
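As a rough sketch of what consuming that format involves, here is a minimal parser. The sample payload is illustrative only (made-up values); real responses carry #datatype, #group, and #default annotation rows ahead of the header, which this sketch simply skips.

```python
import csv
import io

# Illustrative annotated-CSV payload; not captured from a real server.
sample = """#datatype,string,long,dateTime:RFC3339,string,double
#group,false,false,false,true,false
#default,_result,,,,
,result,table,_time,_measurement,_value
,,0,2020-07-20T00:00:00Z,power,1200
,,0,2020-07-20T00:00:05Z,power,1180
"""

def parse_annotated_csv(text):
    """Skip '#'-prefixed annotation rows, then map the header onto each data row."""
    rows = [r for r in csv.reader(io.StringIO(text)) if r and not r[0].startswith("#")]
    header, data = rows[0], rows[1:]
    return [dict(zip(header, r)) for r in data]

records = parse_annotated_csv(sample)
print(records[0]["_value"])  # 1200
```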

Configuration… I think you can follow the same kind of layout that the Grafana folks took to configure a Flux data source.

Again…happy to help.


I think I get the approach; the devil is in the details. In addition to being more verbose than the 1.x InfluxQL, I can see some other issues that would complicate the port. Suffice to say it’s a little less IoT friendly. I’ll ruminate on this awhile and get back to you when I decide to take a stab at it.

Sorry, I had missed this as I was composing my reply to the previous post. I had interpreted your earlier post to say that HTTPS was required for the OSS version.

Just to clarify: The cloud version requires HTTPS but the OSS version, when released, will not? The OSS version will support the 1.x /write and /query endpoints?

I agree.

Yeah… security. Multiple layers. The APIs require basic auth using the token at a minimum. For Cloud, HTTPS is mandatory; for OSS, it’s optional. I misstated this originally.

Question for you: in your view, what would make the output format “most” IoT friendly? We are discussing additional output formats…and would appreciate your guidance.

Although I don’t use it, the go-to transport seems to be MQTT. The problem is what to put in the payload to make it meaningful. Most of the protocols I see are real-time with no timestamp, so the payload is just a value identified by the topic that is stamped with time of receipt. You may already have something along those lines available with Telegraf.

My problem is buffer size and throughput with HTTP. I started looking into using compression to solve the problem, but got bogged down. I think the first phase of gzip, which back-references repetitive strings, would be very efficient at compressing the influx write data, as there is a lot of repetition. From what I can see, the output of that is then compressed with a Huffman bit-twiddling stage that would be a heavy lift without much reward in a small buffer. I wrote a codec for LZW in a past life and I wouldn’t want to do it for this.
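The repetitiveness argument is easy to demonstrate. The batch below is made-up line protocol in the shape discussed earlier; gzip (which includes the back-referencing stage) collapses it dramatically because the measurement name, tag set, and most digits repeat on every line.

```python
import gzip

# Synthetic write batch: 200 lines differing only in field value and timestamp.
batch = "\n".join(
    f"power,device=iotawatt,unit=house watts={1200 + i} {1595000000 + 5 * i}"
    for i in range(200)
).encode()

compressed = gzip.compress(batch)
print(len(batch), len(compressed))  # compressed is a small fraction of the raw size
```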

Turning to HTTPS, I haven’t looked in detail at the token auth scheme in influx2, but it seems to be based on shared secrets between client and server. As long as you have that, messages can be encrypted as well as authorized and authenticated. I developed an encrypted protocol with emonCMS that uses a shared secret write-key.

The ESP8266 has a good crypto library and the ESP32 has a crypto module. The HTTP headers would be exposed, but an entire POST payload could be secured, and the header could be authenticated with a hash.
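The shared-secret idea sketched above can be shown with an HMAC: the device signs the POST payload with the secret, and the server recomputes the signature to authenticate it without TLS. This is only an illustration of the concept; the key and payload are made up, and this is not the actual emonCMS or InfluxDB scheme.

```python
import hashlib
import hmac

secret = b"shared-write-key"                               # placeholder secret
payload = b"power,unit=house watts=1200 1595000000"        # placeholder payload

# Device side: sign the payload and send the signature in a header.
signature = hmac.new(secret, payload, hashlib.sha256).hexdigest()

# Server side: recompute and compare in constant time.
expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
ok = hmac.compare_digest(signature, expected)
print(ok)  # True
```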

It’s not just the heap and tools needed for HTTPS, it’s the extra time to do the handshake. IoT devices, even with some HTTPS capability, would have a hard time keeping multiple sessions open to avoid repetitive handshaking.

All that said, I believe these limitations will disappear soon as HTTPS becomes ubiquitous in IoT. The first “mainframe” I worked with was an IBM 360/30 with 32K of “core” and two 2311 disks (7MB each). Moore’s law is on your side.

I’m assuming you are not building on top of the Arduino stack? We did produce a client library for Arduino and that seems to handle the HTTPS connection natively on an ESP8266…

The library supports both InfluxDB 1.x and 2.x. Might cut some of the work down.

But, if you aren’t going the Arduino route…then, this won’t be useful.

I was completely unaware of this library. I do use the Arduino framework for ESP8266, but this library won’t work for me for a variety of reasons:

  • It uses a synchronous client library that blocks for the duration of a transaction. IoTaWatt is a power monitor and must sample ADCs for 16ms every 24ms (60Hz) or 20ms every 30ms (50Hz). All HTTP to servers is currently done asynchronously.

  • I’ve not used the BearSSL port to ESP8266 but it looks to be very well done. Nevertheless, there are caveats to using it: they recommend only one active connection and much more heap than IoTaWatt has available.

I am curious about the ESP32 capabilities though, as these two issues are somewhat mitigated there. My port to ESP32 dedicates one of the two processors to power sampling and leaves the other to do everything else. So synchronous I/O is OK, and would only block the FreeRTOS task anyway (ESP8266 doesn’t use FreeRTOS). Heap is less of an issue there as well, although it isn’t unlimited.

Thanks for the link.

Let me see if an async mode is possible. Appreciate your feedback.