Access to cloud version of InfluxDB

Got it. That sounds simple. If you have any thoughts on which service would be easier to emulate between influx and emoncms, the advice would be appreciated. Basically I’m going to create something like the following:

IoTaWatt --> MyCustom(Influx|Emoncms)Service --> DB(Timescale|MemSQL) <-- Grafana Dashboard.

If you have any request / response samples for the startup / restart and upload actions of whichever one you think is easier, it would be pretty helpful. If not, I’ll wait to observe my own IoTaWatt. Once done, I’d be happy to abstract and donate the class if you think any would make use of it.

I’m about to send you pictures of my circuit breaker box and dryer plugs in another thread so that I can finalize my order. I’m trying to get it down to a single IoTaWatt while still getting a good view of energy use (there are 40 total circuits / 22 distinct channels).

Do you have any example payloads for the transactions to Emoncms? I should have my hardware ordered in the coming week but would love to get a peek.

@overeasy - So I started with Emoncms and emulating it went fairly well. Once I got into it I found I could not see where the names or units of the inputs are passed to the server. Am I reading that right?

I think so. The Emoncms input protocols are described here. IoTaWatt uses input/bulk.

Emoncms has no provision for capturing or storing units. InfluxDB does. So you can specify the units you want to send to Emoncms, but they arrive as unitless numbers.

Emoncms was improved to accept named inputs, in addition to ordinal values, but that protocol is very verbose and does not provide for efficiently sending multiple frames (time stamps) in one transaction.
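For anyone wiring up an emulation, my reading of the Emoncms input docs is that an input/bulk upload looks roughly like the sketch below: frames of ordinal values, each tagged with a time offset and node id, with a base time for the batch. The node id, values, server URL, and API key here are all hypothetical, and note that only unitless numbers travel, with no names or units attached.

```python
# Sketch of an Emoncms input/bulk upload, based on the public Emoncms
# input API docs. Node id, values, URL, and API key are hypothetical.
import json
from urllib.parse import urlencode

base_time = 1600000000          # UNIX seconds of the first frame
frames = [
    # [seconds offset from base_time, node id, value1, value2, ...]
    [0,  1, 240.1, 1137],
    [10, 1, 240.0, 1141],
    [20, 1, 239.8, 1150],
]

# The bulk endpoint takes the frames as a JSON array plus a base time.
body = urlencode({"time": base_time, "data": json.dumps(frames)})
url = "http://emoncms.example.org/input/bulk.json?apikey=YOUR_WRITE_KEY"
print(body)
```

POSTing this body to the bulk endpoint sends three frames in one transaction, which is the efficiency the named-input protocol lacks.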

Were you ever able to get this working? I want to store the data in the InfluxDB 2.0 cloud and pull it using Grafana.


While the initial 2.0 announcement stated the intent to provide 1.x compatibility, that seems to have been completely abandoned. So supporting 2.0 now means supporting a completely new server.

I have not looked at this lately, but as I recall InfluxDB Cloud 2.0 required all APIs to be sent over HTTPS, which IoTaWatt simply cannot do. If you point me to some documentation that HTTP to the cloud is supported, I’ll take another look.

The authentication in 2.0 appears to use tokens, which is reasonably secure for most IoTaWatt users, so if the data posting and query (Flux) API can be sent via HTTP it would be feasible.


Thanks for the update. If that’s the case, I plan to use a Python HTTP client to pull from a local InfluxDB server and push it to the cloud server.

Appreciate it.

Head of products from InfluxData here.

I’d like to provide some clarity to a number of points made in this thread.

  1. InfluxDB Cloud – currently supports the 1.x AND 2.x APIs. This was introduced in July 2020.
  2. InfluxDB OSS 2.0 is now winding down the beta program and entering into the release candidate phase which leads to general availability. The first release candidate will re-introduce the 1.x APIs for compatibility and should be available next week at the earliest. Yes, it has taken awhile to reach this point. Planned GA is during Q4 – likely early November.
  3. InfluxDB 2.0 – all editions: Cloud/OSS – are secure by default. Meaning, they all require HTTPS for API access and they all use a username/token combination for API calls. This is a big change from 1.x but one that has been demanded by a large portion of our community.
  4. Given that the ESP8266 does not have adequate memory or appropriate tools for HTTPS, there is already a pre-built proxy you can use to overcome this. Telegraf is a lightweight open-source agent used to gather a wide variety of metrics, logs, and more. If you want to leverage the new editions of InfluxDB, you can do this with ZERO code changes to IoTaWatt.

Following the pattern above:
IoTaWatt --> influxDB_listener input [ Telegraf ] influxDBv2_output --> InfluxDB 2.0 (OSS or Cloud)
You can then use the native UI to build beautiful dashboards.

If you already have dashboards built, in say Grafana, you can use the influxDBv1_output and this will create the appropriate mappings and land the data precisely as it was in InfluxDB 1.x. You can then continue to leverage your Grafana dashboards by creating an InfluxDB data source connection leveraging the InfluxQL language in Grafana 7.1.x or above. The instructions are here.

You can send the HTTP-based metrics to the influxDB_listener input plugin configured within Telegraf. This essentially makes Telegraf look like an InfluxDB 1.x instance. Telegraf can then be configured with the appropriate security credentials and will translate the inbound payload to what InfluxDB 2.0 expects (via the InfluxDB 2.0 output plugin) using HTTPS (or use InfluxDB v1.x output plugin and the appropriate compatibility APIs will be leveraged).

Further, if you have an existing setup with InfluxDB and you want to dual write — simply to explore the capabilities of InfluxDB 2.0, you can do this with Telegraf as well. Telegraf is capable of having multiple output plugins configured. Meaning you can configure 1 output plugin for your existing 1.x instance while configuring a 2nd output plugin for InfluxDB 2.0 (again, either Cloud or OSS).
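A minimal sketch of such a Telegraf configuration might look like the following. The plugin names (`inputs.influxdb_listener`, `outputs.influxdb_v2`, `outputs.influxdb`) are real Telegraf plugins; the hosts, ports, organization, bucket, and token are placeholders.

```toml
# Sketch: Telegraf impersonates an InfluxDB 1.x server for IoTaWatt
# and forwards writes to InfluxDB 2.0 over HTTPS. All endpoints,
# credentials, and names below are placeholders.

[[inputs.influxdb_listener]]
  service_address = ":8086"     # accept InfluxDB 1.x /write calls here

[[outputs.influxdb_v2]]
  urls = ["https://your-influx2-host:8086"]
  token = "$INFLUX_TOKEN"
  organization = "your-org"
  bucket = "iotawatt"

# Optional dual write: a second output keeps an existing 1.x instance fed.
[[outputs.influxdb]]
  urls = ["http://your-influx1-host:8086"]
  database = "iotawatt"
```

With both outputs configured, every inbound point is written to the 1.x and 2.0 instances in parallel.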

We have a very large community of open source projects and applications that have leveraged the capabilities of InfluxDB, and we appreciate the support, interest, contributions, and effort it has taken to grow such a community. We did not want to aggressively deliver a major upgrade to the OSS community without thought, consideration, and feedback from our developers, who have clearly put their trust in our technology. So, we have been very deliberate about not pushing the OSS edition towards GA haphazardly. All of the new capabilities have been tested in the Cloud edition first, and we continue to deliver improvements, feature additions, and more approximately 25-30x/week to the Cloud edition. We’ve now been effectively running the Cloud edition at scale for a year and are confident that we are ready to move the OSS edition forward to a GA state, starting with Linux packages and then expanding towards Windows and other platforms.

We’ve reached this point by listening to feedback and incorporating these kinds of compatibility modes and techniques in an effort to minimize code changes as much as possible while also trying to move such a large community forward.

Happy to field additional questions, or if you would like to dig deeper, we welcome your inputs to our community site as well: community.influxdata.com

Thank you…


Thanks for the detailed update @tim_influx. I was unaware of the July 2020 changes. Influx 1.x has been very popular with IoTaWatt users using local servers like RPi or NAS units. There are a few larger users using 1.x with many IoTaWatt, but they are all part of a single enterprise.

When you say Telegraf will allow users to use influx 2 with ZERO code changes, does that mean that the influx 1.x query language is translated through Telegraf as well? The reason I ask is that on startup, IoTaWatt uses a series of influx queries to determine the time of the last measurement sent to influx, and then resumes upload of data from that point. With this feature, communication can go down for hours, days, or weeks and IoTaWatt will upload all of the backlog.

When I first encountered the 2.0 issue, before I was aware of the HTTPS requirement (maybe it was not a requirement then?) I recall thinking that it would be fairly trivial to convert the measurement API to the 2.0 protocol, and was preparing to do that. But as I recall, the queries would not work.

So would that translate seamlessly through Telegraf, or is there another approach that might work?

Ah…the best laid plans… Telegraf only exposes /write and doesn’t accept the /query API.

Well, that is a clever way to do checkpointing, but you are right. If queries are going to come from the device, that means we’ll need to use NGINX as a proxy instead. (I believe you suggested this). Perhaps we can work on documenting that configuration together?

Putting that in place should unlock all of the scenarios I outlined. Meaning, once NGINX is there, we can decide what scenario we want to configure:

  1. just upgrading the connection from HTTP to HTTPS
  2. enabling the dual write scenario by splitting the /writes to Telegraf and /query to the desired InfluxDB instance
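A minimal sketch of that NGINX split, assuming Telegraf listens on local port 8087; the hostnames and ports below are placeholders:

```nginx
# Sketch: IoTaWatt talks plain HTTP to this proxy. Writes fan out via
# Telegraf; queries go straight to the desired InfluxDB instance over
# HTTPS. Hosts and ports are placeholders.
server {
    listen 8086;

    location /write {
        proxy_pass http://127.0.0.1:8087;          # Telegraf influxdb_listener
    }

    location /query {
        proxy_pass https://your-influx-host:8086;  # upgraded to HTTPS here
    }
}
```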

If you could give me a pointer to the InfluxQL queries you are currently running (in GitHub), I would be happy to contribute the corresponding Flux queries – if you are interested over time towards supporting 2.0 more officially (even if there needs to be an NGINX proxy to ensure we can upgrade the connection to HTTPS off the device). We have some other examples of how to handle this with the different versions and again, would be happy to assist where we can.

Thank you.

Once the InfluxDB v2 OSS release candidate is out (first non-alpha/beta release), I believe the following should work:

  1. There is no requirement for HTTPS; it should be secure enough for this use case using basic auth via the API token. Turning on TLS is a config option for OSS, but clearly undesirable given the limited power of the ESP8266 devices.
  2. It will introduce the 1.x compatibility APIs so that read/write will both function and behave as if it were a 1.x instance.

It will simply be a matter of documenting how to populate the basic auth information into the existing form. This is similar to the way in which the InfluxQL support is configured through the Grafana data source instructions I linked above.

Happy to contribute a similar doc addition to the repo…to clarify how to specify the appropriate creds to make this work.

For InfluxDB Cloud – HTTPS is required, so the proxy setup will also be required if you wish to connect from an ESP8266 device to Cloud.

As a one-person-show I can’t commit to doing this at the pace you might like, but let’s stick a toe in and test the water. Granted that NGINX solves the HTTPS problem. I run one now on an RPi for inbound HTTPS connections to an IoTaWatt.

Once the HTTPS problem is resolved, I think I would first give serious consideration to just changing both the /write and /query APIs to go native and skip the complexity of configuring and maintaining a Telegraf service. At the end of the day, my sense is that would be simpler. I would just treat influx1 and influx2 as two different services.

So the IoTaWatt Service that handles influx is here. To get effectively autonomous tasks in the ESP8266 environment (FREERTOS is too resource-heavy), IoTaWatt uses state machines to manage context. This Service is a state machine.

It may be hard to follow without getting under the hood. I’ll make it easy on you by saying the query is:

db={database}&epoch=s[&rp={retention policy}]&q=SELECT LAST({field key}) FROM {measurement} [WHERE {tag key} = {value} [AND .....]]

A query is done for each of the measurements and the time of the most recent is used as a resume point.
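As a sketch of how that checkpoint query could be assembled (the database, measurement, field, and tag names here are hypothetical, not IoTaWatt’s actual ones):

```python
# Sketch of the 1.x checkpoint query described above: ask for the last
# field value of each measurement, then resume upload from the newest
# timestamp. All names below are hypothetical illustrations.
from urllib.parse import urlencode

def last_query(database, field, measurement, tags=None):
    q = f'SELECT LAST("{field}") FROM "{measurement}"'
    if tags:
        q += " WHERE " + " AND ".join(f"\"{k}\"='{v}'" for k, v in tags.items())
    return "/query?" + urlencode({"db": database, "epoch": "s", "q": q})

url = last_query("iotawatt", "Watts", "main", {"device": "iotawatt01"})
# The response carries the last timestamp in epoch seconds; the newest
# one across all measurements becomes the resume point.
print(url)
```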

The /writes are:

{measurement}[,{tagkey1}={tagvalue1}[,{tagkey2}=....]] {fieldkey}={fieldvalue} {timestamp}

but note that if there are multiple measurements with the same name and tag set, in the interest of buffer economy they are combined into one measurement with the fields comma delimited.
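A sketch of that buffer-economy merge, with hypothetical measurement and tag names:

```python
# Sketch of the line-protocol buffering described above: points sharing
# a measurement name, tag set, and timestamp are merged into one line
# with comma-delimited fields. Names and values are hypothetical.
def combine(points):
    merged = {}
    for measurement, tags, field, value, ts in points:
        key = (measurement, tags, ts)
        merged.setdefault(key, []).append(f"{field}={value}")
    lines = []
    for (measurement, tags, ts), fields in merged.items():
        tag_str = "," + tags if tags else ""
        lines.append(f"{measurement}{tag_str} {','.join(fields)} {ts}")
    return "\n".join(lines)

print(combine([
    ("power", "device=iotawatt01", "Watts", 1137, 1600000000),
    ("power", "device=iotawatt01", "Volts", 240.1, 1600000000),
]))
# -> power,device=iotawatt01 Watts=1137,Volts=240.1 1600000000
```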

If you could comment on what you would consider best practice in converting this to 2.0, I’ll see what’s involved.

Couple of additional comments:

The IoTaWatt keeps up to a year of 5 second measurements and more than 10 years of 1 minute measurements. It’s reasonably fast to retrieve and provides just about everything you might want to know: Volts, Watts, kWh, Amps, VAR, VARh, Hz, and PF. It can deliver this data averaged or summed over any period as easily as from the discrete intervals. See the GRAPH+ docs and Query API (heavily influenced by the influx query language and Grafana).

I say this in the interest of full disclosure because I feel many users are satisfied with the standalone solution that IoTaWatt provides. Capacity, detail and speed are not remarkably better with influx. The utility with influx is being able to aggregate data from multiple IoTaWatt (and other devices), being able to access securely in the cloud, and a reasonable expectation of security against data loss. I haven’t looked at the Flux capabilities, so maybe that’s something else that would be a draw.

Given the availability of the compatibility APIs, I’d suggest we continue to leverage those, and then we can take this at a pace that works for you and this community.

There is no hurry – other than we want to make sure that everyone understands how to get it all working.

2.0 API Overview:
From a /write perspective, that all looks fine. We didn’t make any major changes to line protocol for now. The major changes to the /write API are more around the credential handling. (Now: bucket, organization and token).

The /query side is where there have been significant changes, as we have a new functional query language which eliminates many of the limitations/restrictions that were previously in place with InfluxQL. While the syntax is no longer as familiar as SQL, the expressiveness of the language is significantly better, including support for variables, result set shaping, nested functions, math across measurements, joins, string interpolation, more native date handling functions…and more.

Here’s the Query API docs.
and an applied example is also documented here.

Using your query above, via curl it might look something like this (replacing everything within and including { }):

curl http://{influxdb_host}:8086/api/v2/query?org={your-org} -XPOST -sS \
  -H 'Authorization: Token {YOURAUTHTOKEN}' \
  -H 'Accept: application/csv' \
  -H 'Content-type: application/vnd.flux' \
  -H 'Accept-Encoding: gzip' \
  -d 'from(bucket:"{IoTaWATT-bucket}")
        |> range(start: {some time -12h or now()-24h or ?? -- a guess at the time is required})
        |> filter(fn: (r) => r._measurement == "{measurement}")
        |> filter(fn: (r) => r["{tag_key}"] == "{tag_value}" and .... or ...)
        |> last()'

This returns the last field value for each unique group key – based on the filters applied within the measurement provided.

Here is an example


from(bucket: "my-bucket")
  |> range(start: -12h)
  |> filter(fn: (r) => r._measurement == "cpu") 
  |> last()


You could add more logic to the query if you wanted to do things like stripping off undesired columns, pivot the data to turn the individual fields into a single result set record…and more. The idea is that you can use the query language to shape the data to eliminate complex and extensive parsing.

If you simply wanted the last field for each series key within ALL measurements in a bucket…that can be returned in a single payload with this query:

   |> range(start: {some time -12h or now()-24h or ?? -- a guess at the time is required})
   |> last()

The results are grouped into sets based on the annotations required to describe them, but this may be more difficult to parse, particularly if you are already set up to loop through the specific measurements.

If you add the pivot to the query after the last():

  |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")  

The result set returned is 2 rows of data, 1 for each unique series. As I believe you are after the last timestamp of each of the series, this might be the most desirable?

If you only care about the last report, independent of the group key or individual fields, you can modify the group key and simply return the last reported field to the measurement.
|> group(columns: ["_measurement"], mode:"by")

The resulting query looks like this:

from(bucket: "my-bucket")
  |> range(start: -12h)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> group(columns: ["_measurement"], mode:"by")
  |> last()
  |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")  

Resulting in a single row for the measurement, with the fields pivoted into columns.

Other changes:

  • Database and retention policies have been collapsed into a single concept: bucket.
  • Results are returned in an annotated CSV format. The annotations provide additional metadata that can be valuable for understanding and processing the data.
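As a rough illustration of consuming that annotated CSV (the sample payload below is hand-written to match the documented layout, not captured from a live server):

```python
# Sketch of parsing the annotated CSV a Flux query returns, recovering
# the _time column of each row. Annotation lines begin with '#'; the
# first plain line is the header. The sample data is hypothetical.
import csv, io

sample = """\
#group,false,false,false,false,true,true
#datatype,string,long,dateTime:RFC3339,double,string,string
#default,_result,,,,,
,result,table,_time,_value,_field,_measurement
,,0,2020-09-01T00:00:00Z,1137,Watts,power
,,1,2020-09-01T00:00:05Z,240.1,Volts,power
"""

rows = []
header = None
for row in csv.reader(io.StringIO(sample)):
    if not row or row[0].startswith("#"):
        continue                      # skip blank and annotation lines
    if header is None:
        header = row                  # first plain line is the header
        continue
    rows.append(dict(zip(header, row)))

times = [r["_time"] for r in rows]
print(times)   # one last() timestamp per series
```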

Configuration… I think you can follow the same kind of layout that the Grafana folks took to configure a Flux data source.

Again…happy to help.


I think I get the approach; the devil is in the details. In addition to being more verbose than the 1.x InfluxQL, I can see some other issues that would complicate the port. Suffice it to say it’s a little less IoT friendly. I’ll ruminate on this awhile and get back to you when I decide to take a stab at it.

Sorry, I had missed this as I was composing my reply to the previous post. I had interpreted your earlier post to say that HTTPS was required for the OSS version.

Just to clarify: The cloud version requires HTTPS but the OSS version, when released, will not? The OSS version will support the 1.x /write and /query endpoints?

I agree.

Yeah… security. Multiple layers. The APIs require basic auth using the token at a minimum. For Cloud, HTTPS is mandatory; for OSS, it’s optional. I misstated this originally.

Question for you: in your view, what would make the output format “most” IoT friendly? We are discussing additional output formats and would appreciate your guidance.

Although I don’t use it, the go-to transport seems to be MQTT. The problem is what to put in the payload to make it meaningful. Most of the protocols I see are real-time with no time stamp, so the payload is just a value identified by the topic that is stamped with time of receipt. You may already have something along those lines available with Telegraf.

My problem is buffer size and throughput with HTTP. I started looking into using compression to solve the problem, but got bogged down. I think the first phase of gzip, which back-references repetitive strings, would be very efficient at compressing the influx write data, as there is a lot of repetition. From what I can see, the output of that is then compressed with a Huffman bit-twiddling stage that would be a heavy lift without much reward in a small buffer. I wrote a codec for LZW in a past life and I wouldn’t want to do it for this.
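A quick way to gauge that intuition: feed some repetitive (hypothetical) line-protocol text through zlib, which runs the full DEFLATE pipeline, and compare sizes. This only demonstrates the ratio; it says nothing about the cost of running DEFLATE on an ESP8266.

```python
# Demonstration that line protocol's heavy repetition compresses well.
# The measurement and tag names are hypothetical; zlib is used here
# purely to measure the achievable ratio on a desktop.
import zlib

lines = "\n".join(
    f"power,device=iotawatt01 Watts={1100 + i},Volts=240.1 {1600000000 + 5 * i}"
    for i in range(200)
).encode()

packed = zlib.compress(lines, 6)
print(len(lines), len(packed))   # the compressed form is far smaller
```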

Turning to HTTPS, I haven’t looked in detail at the token auth scheme in influx2, but it seems to be based on shared secrets between client and server. As long as you have that, messages can be encrypted as well as authorized and authenticated. I developed an encrypted protocol with emonCMS that uses a shared secret write-key.

The esp8266 has a good crypto library and the ESP32 has a crypto module. The HTTP headers would be exposed, but an entire POST payload could be secure and the header could be authenticated with a hash.
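A sketch of the shared-secret idea: sign the POST body with an HMAC so the server can authenticate both sender and payload. This shows the general technique, not the actual emonCMS wire format; the key and payload below are hypothetical.

```python
# Sketch: authenticate a POST payload with a shared secret via HMAC.
# The signature travels in an HTTP header; the server, which holds the
# same secret, recomputes it to verify sender and payload integrity.
import hmac, hashlib

secret = b"shared-write-key"                 # hypothetical shared secret
payload = b"power Watts=1137 1600000000"     # hypothetical POST body

signature = hmac.new(secret, payload, hashlib.sha256).hexdigest()

# Server side: recompute and compare in constant time.
expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
print(hmac.compare_digest(signature, expected))   # True
```

The same shared secret can also key a symmetric cipher for the payload itself, which is the encrypted-protocol approach mentioned above.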

It’s not just the heap and tools needed for HTTPS, it’s the extra time to do the handshake. IOT devices, even with some HTTPS capability, would have a hard time managing keeping multiple sessions open to avoid repetitive handshaking.

All that said, I believe these limitations will disappear soon as HTTPS becomes ubiquitous in IOT. The first “mainframe” I worked with was an IBM 360/30 with 32K of “core” and two 2311 disks (7 MB each). Moore’s law is on your side.

I’m assuming you are not building on top of the Arduino stack? We did produce a client library for Arduino and that seems to handle the HTTPS connection natively on an ESP8266…

The library supports both InfluxDB 1.x and 2.x. Might cut some of the work down.

But, if you aren’t going the Arduino route…then, this won’t be useful.

I was completely unaware of this library. I do use the Arduino framework for ESP8266, but this library won’t work for me for a variety of reasons:

  • It uses a synchronous client library that blocks for the duration of a transaction. IoTaWatt is a power monitor and must sample ADCs for 16ms every 24ms (60Hz) or 20ms every 30ms (50Hz). All HTTP to servers is currently done asynchronously.

  • I’ve not used the BearSSL port to ESP8266 but it looks to be very well done. Nevertheless, there are caveats to using it in that they recommend only one active connection and recommend much more heap than IoTaWatt has available.

I am curious about the ESP32 capabilities though, as these two issues are somewhat mitigated there. My port to ESP32 dedicates one of the two processors to power sampling and leaves the other to do everything else. So synchronous I/O is OK, and would only block the FREERTOS task anyway (ESP8266 doesn’t use FREERTOS). Heap is less of an issue there as well, although it isn’t unlimited.

Thanks for the link.

Let me see if an async mode is possible. Appreciate your feedback.