Generic HTTP server

@overeasy is there any update on an open server connection? I know a lot of guys have looked at options other than the emoncms, influx, and pvoutput stack, like Thingsboard and Ubidots or other MQTT or HTTP options… A user-defined output and payload option would make a lot of us happy.
Happy to pay half the fee to get this up and running.

It’s an open platform. The PVoutput service was contributed.

The architecture makes it relatively easy to add additional server support in the firmware. The hard part is designing and implementing a general purpose user interface that is simple yet enables most of the features of the upstream server.

I had looked at Ubidots some time ago, and I just took a quick look at Thingsboard. From the perspective of uploading a lot of data, Thingsboard looks more robust, but I haven’t looked into it in any detail.

More important would be what you can do with the data once it’s uploaded, and the price of the service to the user. Did you have anything in particular in mind, and can you be more specific about what you are trying to accomplish? I’d be interested in something very inexpensive or free that any user could employ to get basic reporting in a mobile app, but I don’t want to do anything beyond providing the IoTaWatt functionality.

I’ve looked into supporting MQTT, and while it’s doable, I have to question the value of it. IMO it cannot be directly compared with the application-specific protocols that are layered onto HTTP. MQTT is basically a transport, and a pretty dumb one at that. It doesn’t provide for a response. You can go with the highest QoS and get acknowledgement of receipt by the agent, but that doesn’t mean that any server subscribing to the topic has successfully received and assimilated the data. Using it to upload data to a time-series datastore has a lot of problems. That said, for sending real-time data to one or more devices for real-time action, MQTT is a lightweight way to go.

I’ve used Ubidots extensively; it’s powerful and reliable, but it’s largely an end-use platform, and not free… Thingsboard, on the other hand, is robust, free (the open-source version), and enormously versatile in what can be linked and done. I am fairly well connected to the core development team and can easily involve them if you need input from their side. I also have access to a professional license instance if you need to test (though the community version is fine for integration as well); it’s mostly reporting features it lacks.

Apart from the immediate scalability Thingsboard would bring to Iotawatt users, there is the often overlooked matter of data usage on IoT devices (and the other HTTP services, particularly emoncms, are lacking in this respect)…

Hope this helps

As I read the Thingsboard API documentation, the payload using both HTTP and MQTT is identical. I’m trying to understand why HTTP keeps coming up here. To me HTTP and MQTT (which I am assuming is the antithesis here) are just transports. IoTaWatt already has a very sophisticated asynchronous (i.e. non-blocking) HTTP client. All of the official MQTT clients that I’ve seen for both ESP8266 and ESP32 are synchronous (blocking). So investing a lot of time, effort and heap to support MQTT would only add an inferior protocol implementation that is doing the same thing as HTTP.

What am I missing here?

You’re missing speed and size!

Medium - “According to measurements in 3G networks, throughput of MQTT is 93 times faster than HTTP’s.”

I see where you’re coming from now. It’s a fresh approach to a previous inquiry.

BTW/ I believe EMONCMS can accept MQTT data.

Not a fan of Emoncms.

Spend some time looking at Thingsboard integration architecture. It gives the Iotawatt community that wants extra integration a huge advantage vs a proprietary charting dashboard :wink:

I’m in for a penny or a pound!

@overeasy

Any luck :grin:

Hi Bob - is there any reality of MQTT in the foreseeable future?

I have no plans to add any features utilizing the MQTT protocol. It’s open source so you are free to add whatever you want.

“Coding” is a handicap of mine; I wouldn’t know where to begin. I am willing to put $500 toward someone capable of doing it, though. Any thoughts on how I can approach it, or whom?

Not that I’m interested, but I’m spending time right now exploring options (that exist!) for connecting to databases or to Home Assistant. MQTT would work for HA also, but I’m curious what it is really you want to accomplish. Please note I’m just a user, but intrigued; if I saw such a need sure, I might take a crack at it. But…

One of your notes said:

But there’s a more fundamental difference – the general HTTP API available from IotaWatt is a pull protocol, you can set up an application (in my case it would be Home Assistant) to pull data as needed, and in doing so specify the data it is pulling.

MQTT is a push protocol. To implement it in IotaWatt, it would look something like the InfluxDB service does: you would need to pre-specify what to send, and send it on a recurring basis. The same all the time.

The choice of push vs pull is more fundamental than protocol. I would see pull as more appropriate to dynamic or varying queries, where the target system might need different data on different schedules (based on time of day, or what is being displayed at the moment, or just the nature of the data – maybe mains watts is needed once a minute but hot water heater watts would be fine once an hour).

For me, for push protocols, you want to just dump the data and let something with a lot more horsepower filter/aggregate/sort/present it. Again, something like the InfluxDB dump – that dump from Iotawatt is quite fast and efficient (you can even group periods). It’s certainly more efficient than sending the same thing via MQTT.

Now… if Iotawatt did alerts, some kind of notifications – that would seem very suitable to MQTT. But it doesn’t.

Not sure what relevance 3G has; that’s old-school cellular. I would hope most people are using this locally or on wired broadband, not a 10-year-old phone. :smirk:

So I am really curious where you see MQTT fitting in better than the current push protocols, or than using a pull protocol? Not about the protocol itself, that’s just a wrapper – what’s the use case where it is better?

@Linwood - Thanks for your input.

Let me share my thoughts and the settings in which we use the device. Our units are not typically deployed in residences, so there is usually no quick WiFi connection to a fibre LAN; we are limited to using a mobile 3G/4G router for the WAN connection. Mobile data costs more than fibre for us, hence the requirement to use a protocol that keeps data usage as low as possible.

The pull approach requires DDNS and becomes a nightmare when handling multiple devices. That said, the standard query API is excellent; port forwarding is the nightmare.

The push approach is most certainly the answer, and of the 3 built-in services, EmonCMS has a data gremlin in its approach (it’s a published anomaly that no one has told me how to fix), PVoutput seems limited (although I haven’t pursued this much), and InfluxDB, while incredible at time-series data, is extremely difficult to integrate with traditional SQL queries, third-party tools, and PHP.

MQTT keeps the data usage low and provides a platform to connect to a host of third-party products to do the “heavy lifting” with alarms, reports, etc. (Thingsboard, for example).

My dream is to get the current log into a MySQL database with as little data used as possible… whether by MQTT or not.

I’m curious to hear your thoughts. Tx, Wayne

there is usually no quick WiFi connection to a fibre LAN; we are limited to using a mobile 3G/4G router for the WAN connection

That certainly provides a different perspective. I was looking at it entirely from the view of someone with “real” internet available, and I agree with your conclusion that push is better in those circumstances. (Though a VPN connection gets rid of a lot of the DDNS/firewall issues and is something to consider, since it effectively allows what looks like a local connection; there are challenges keeping a VPN up over very poor links, however.)

The InfluxDB connection (the only one I looked at in some detail) looks pretty efficient. I have not tried this, but what if you pushed to InfluxDB and then used that as a staging area into MySQL? You could use aggregation and integration in InfluxDB to the extent it is better (things like integral and fill in particular are not present in MySQL), then feed that to MySQL.

In 60 seconds of google I did not see a pre-packaged tool for this, but worst case is to export to CSV (or whatever) and import into MySQL. It would also be pretty easy to write a daemon that would read one and write the other.
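The core of such a daemon is just a transformation from rows pulled out of InfluxDB into parameterized MySQL inserts. A minimal sketch of that core, with the actual InfluxDB client and MySQL connection left out so the logic stands alone (the table, column, and field names here are made up for illustration):

```python
# Sketch: turn rows pulled from InfluxDB into a parameterized MySQL insert.
# "readings", "ts", "ct", "watts" are hypothetical names, not IoTaWatt's.

def rows_to_inserts(rows, table="readings"):
    """rows: iterable of dicts like {"time": 1622505600, "ct": "mains", "watts": 1234.5}"""
    sql = "INSERT INTO {} (ts, ct, watts) VALUES (%s, %s, %s)".format(table)
    params = [(r["time"], r["ct"], r["watts"]) for r in rows]
    return sql, params

sample = [
    {"time": 1622505600, "ct": "mains", "watts": 1234.5},
    {"time": 1622505660, "ct": "mains", "watts": 1201.0},
]
stmt, values = rows_to_inserts(sample)
# With a real DB-API connection you would then run: cursor.executemany(stmt, values)
```

Wrap that in a loop that queries InfluxDB for anything newer than the last row written to MySQL, and you have the whole daemon.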

I also ran across this:

https://github.com/philip-wernersbach/influx-mysql

I have zero knowledge of it other than reading the readme, but it sounds like you might possibly be able to take data directly from Iotawatt into MySQL using it. Maybe. I also noted it is pretty old, with no updates in 3 years, which is not all that encouraging.

But to the higher-level issue – I would argue that MQTT, which is still one-measurement-at-a-time, is not a very efficient transfer tool for slow circuits (especially ones with high latency; even when cellular has good bandwidth, its latency usually stinks). Unless true real-time data is needed, being able to buffer up into a larger push (as the InfluxDB interface can) may help a lot. The best efficiency is when you can get dense data, where the packet and data-point overhead do not overwhelm the data content in competing for limited throughput. I haven’t sniffed Iotawatt’s data, so this is speculation based just on the fact that it does allow you to buffer in groups.

Say you needed 1-minute data intervals: if you could buffer 10 at a time, it SHOULD be a lot more efficient, in return for a 10-minute lag.
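Back-of-the-envelope arithmetic makes the point. The per-request and per-point byte counts below are assumptions purely for illustration (I haven’t sniffed the actual traffic either), but the shape of the result holds for any fixed overhead:

```python
# Rough batching arithmetic with made-up sizes: assume ~300 bytes of
# HTTP/TCP request overhead per POST and ~40 bytes per data point.
OVERHEAD = 300   # bytes per request (assumed)
POINT = 40       # bytes per data point (assumed)

def bytes_per_point(batch_size):
    """Average wire cost of one data point when sent in batches."""
    return (OVERHEAD + POINT * batch_size) / batch_size

print(bytes_per_point(1))   # 340.0 bytes/point, one point per request
print(bytes_per_point(10))  # 70.0 bytes/point, 10-minute batches
```

With these (assumed) numbers, batching ten 1-minute readings cuts the per-point cost by almost 5x, which is exactly the trade against the 10-minute lag.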

Now the downside of that (vs MQTT) is that MQTT is designed to “notice” a new data item with low latency. Neither database is. This kind of staged transfer (Iotawatt → InfluxDB → Mysql) is much better suited to a planned lag, e.g. 1 minute data → 10 minute transfers → (for example) 60 minute aggregation and transfer → MySQL. If you can live with those kind of timings (or something similar) I think this could work very nicely. It also has some real advantages in that each stage can buffer data if the next stage is down for backup or maintenance.

So lots of rambling thoughts, but thanks for taking the time to explain the use case, it makes a lot more sense now.

PS. I’m a big fan of PostgreSQL if the data grow very large; it’s much more of a production database (while still being free) than MySQL. Indeed, MariaDB is better than MySQL and a drop-in replacement (don’t get me started on the corporate greed that spoiled MySQL, just think “Oracle”).

Nice input. If I can’t get a dev to script a direct push to SQL (in our case MariaDB), I will look into the round trip device → InfluxDB → MariaDB… Wish I could code :wink:

One other consideration coming from someone who does code: Changing Iotawatt’s software is a lot harder than building something standalone, outside, since it involves becoming familiar with their environment, making sure you don’t break it, conforming to their coding standards, etc. It’s not that it is undoable, but it is a significant hurdle; much larger hurdle than the actual functionality.

Instead, creating (for example) a program to pull from InfluxDB and push to MySQL (if that works for you) is an island – you can code it any way you want, you can’t really break anything, and there are no hoops to jump through in collaborative programming. Lots of the code-bid sites would certainly offer someone to do that for you, cheaply (with the obvious caveat emptor).

Just to add another perspective on the efficiency of communications: HTTP has a high fixed overhead for small transactions, where sometimes the headers can be larger than the payload. But at the end of the day, there’s some volume of data (payload) that needs to be sent. So a lot of small transactions have high inherent overhead, and HTTP does worse than MQTT in that regard.

But data must be framed. When you receive it, you have to know what it is. InfluxDB uses a very verbose line format that is wasteful when not compressed. For instance, every single measurement must carry the 10-digit timestamp as well as the name of the measurement and the standard key set. As @Linwood says, buffering up mitigates the fixed overhead but doesn’t really help with the actual payload bloat. Compression would, but that’s a difficult thing with limited heap, and I’m not aware of any existing class that will do it.
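To see the bloat concretely, here is a sketch of what batched InfluxDB line protocol looks like on the wire, using hypothetical measurement and tag names. Note how the measurement name, tag set, and 10-digit timestamp repeat on every line, so batching shrinks the HTTP overhead but not this per-point framing:

```python
# Build a small InfluxDB line-protocol batch. "iotawatt", "device", and
# "watts" are illustrative names, not the actual upload schema.
def line(measurement, tags, field, value, ts):
    tagstr = ",".join("{}={}".format(k, v) for k, v in tags.items())
    return "{},{} {}={} {}".format(measurement, tagstr, field, value, ts)

batch = "\n".join(
    line("iotawatt", {"device": "house1"}, "watts", w, ts)
    for ts, w in [(1622505600, 1234.5), (1622505610, 1201.0)]
)
print(batch)
# iotawatt,device=house1 watts=1234.5 1622505600
# iotawatt,device=house1 watts=1201.0 1622505610
```

Of the roughly 45 bytes per line here, only the handful holding the watts value are new information; the rest is repeated framing.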

To me, the solution is to pull with query. Most of the pulling I’m seeing uses the “all” group to get the average data for a single time period. Used efficiently, query will return CSV or JSON formatted data in tables. You can get an hour’s worth of 10-second data in one transaction, without any fluff.
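A rough sketch of that pattern: one request for a whole block of data, then a trivial CSV parse on the other end. The URL shape below is illustrative only (check the IoTaWatt query API documentation for the exact parameter names), and the response shown is a made-up sample, not real output:

```python
# One query-style pull for an hour of data, then parse the CSV table.
import csv
import io

# Illustrative URL; hostname and parameter names are assumptions.
url = ("http://iotawatt.local/query?select=[time.iso,Main.watts]"
       "&begin=s-1h&end=s&group=10s&format=csv")

# Made-up sample of a CSV response body: one timestamp and one value per row.
sample_response = """2021-06-01T00:00:00,1234.5
2021-06-01T00:00:10,1201.0
2021-06-01T00:00:20,1187.2
"""

rows = [(t, float(w)) for t, w in csv.reader(io.StringIO(sample_response))]
print(len(rows), rows[0])
```

Every byte in the body is a timestamp or a value; there is no per-point measurement name or key set to repeat.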

I love the Iotawatt query API output: timestamped, complete datasets without bloat. In a perfect world, the ideal would be to have the data backend and charting on a MariaDB server. If only port forwarding and DDNS were more manageable at device volume.

It’s a DDNS server vs Upwork for me then :wink:

Don’t forget VPN. A remote device can establish a VPN connection from a dynamic IP address so long as the remote hub is static. It’s far more secure than trying to poke holes in firewalls or use uPnP or such. There’s some overhead for the tunnel, but that’s fairly minor. This also gives you a secure reverse path to reach in from outside and manage the IoTaWatt.

Whenever I’ve had remote sites I’ve always done VPN tunnels. I was just working on one yesterday, for mobile command vans for first responders in Illinois. A tiny Cisco there “dials” up and brings up a tunnel, and it does not care what IP it gets; then the van is “on” the internal network. No holes in the firewall: it’s an inside-to-outside connection from the remote.

Incidentally, a lot of cellular companies can offer static IPs; if DDNS is your real issue, make sure you ask. AT&T could in the above case, though we elected not to depend on it because we wanted to be able to switch carriers.