Congratulations on being the first to write an output service for IoTaWatt. I’m sure it wasn’t easy. I have scanned over it briefly, but that’s not a trivial bug fix, it’s a lot of code. I had tried your approach briefly some time back, making the service a class, but it got really complicated very fast and I abandoned that approach in favor of the current Emoncms and influxDB services. I feel that the code is starting to stress the limits of the ESP8266 and I have to be careful about the unintended consequences of new things. Also, the service scheduler/dispatcher really needs to have finer granularity than 1 second to support more of these services.
It will be a while before I can dig into it and try it out. I have a file from when I researched PVoutput a few months ago. As I recall, the post interval is pretty large, and there are two modes where contributors get more functionality and are allowed to upload more history. I had been ruminating on how best to handle that, as I think the contributor mode is the most useful.
I’ll give you a shout out when I get a chance to explore it. BTW I didn’t see where there were any changes to the configuration app. Could you post a screenshot of what you did there?
Thanks. It was OK; the hardware is the hard part, the coding is fine. After your recent refactors for the config app and async HTTP I had to change a lot of code, but it was worth it as it cleaned things up a lot. Your effort in doing this was well worth it, thanks.
I originally coded it the same as the existing services: all in a single big function with a switch-case state machine and static data for state. I thought it was a bit simpler to read this way, but I can easily revert to a style similar to the other services if that is something you prefer.
The concept of the class is still the same as the original code, but I broke things into separate functions instead of keeping everything inline in the cases of a switch block; it calls functions from the switch, the static data of the influx style is stored as members of the class instead, and there is a single static class instance. So anecdotally it should compile to very similar machine code (if inlining is used).
If there are issues with code size or heap usage let me know and I can take a look. There are lots of things I can do. I will wait for feedback first though about what specific issues you would like me to resolve.
I understand, the ESP is quite limited in various resources. I noticed you spent a lot of time optimizing heap in your changes, so I tried to minimize it in this code as well, though there are more trade-offs that can be made to improve this further (reduce logging, remove batching, or recalculate for resend). I understand at some point you won't be able to keep adding services. If that becomes an issue maybe we can talk about producing a new set of “update classes” for the different output services, or a single generic HTTP service that can cover all existing cases using string template replacement.
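To illustrate the generic-HTTP idea, here is a minimal sketch of the string template replacement it would rest on, assuming a {name}-style placeholder syntax. All names here are hypothetical, nothing from the IoTaWatt codebase:

```cpp
#include <string>
#include <map>

// Expand "{name}" placeholders in a user-supplied request template with
// measured values. Unknown placeholders are left verbatim.
std::string expandTemplate(const std::string& tmpl,
                           const std::map<std::string, std::string>& vars) {
    std::string out;
    size_t pos = 0;
    while (pos < tmpl.size()) {
        size_t open = tmpl.find('{', pos);
        if (open == std::string::npos) { out += tmpl.substr(pos); break; }
        out += tmpl.substr(pos, open - pos);
        size_t close = tmpl.find('}', open);
        if (close == std::string::npos) { out += tmpl.substr(open); break; }
        auto it = vars.find(tmpl.substr(open + 1, close - open - 1));
        out += (it != vars.end()) ? it->second
                                  : tmpl.substr(open, close - open + 1);
        pos = close + 1;
    }
    return out;
}
```

Each output service (Emoncms, influxDB, PVOutput) would then just be a template string plus a post interval, rather than its own state machine.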
That won't impact PVOutput, since its shortest post interval is 5 min, but it may affect the other services.
I understand it is a big change, and I assume you already have a bunch of other things planned in the pipeline. I am not particularly worried about time, but I would like to avoid having to refactor too much, as happened with the async HTTP change, if there is no intent of adding it to master.
I also saw another user was after the functionality, so I thought I would post it in case it is useful.
I plan to use it regardless but currently I have to run my own update server and use my own private key to sign the updates. I wrote some python scripts to pack+sign update images or unpack and test signature of update images that are working fine for me locally.
Yes, the post interval is quite large (smallest is 5 min; I think the default is typically 15 min).
I went for the free option, which will also work for donator mode. From memory the API is extensible, so the free-mode API will also work for donators (though I may remove the 14-day history POST restriction to support it). Donator mode adds a lot of extra functionality, like collecting other data such as temperature, which is beyond scope for IoTaWatt. The history for free mode is summarized after 14 days, not lost: you get detailed logs for 14 days in free mode, and from then on daily/monthly/yearly granularity in the PVOutput database. This doesn't impact the upload of stats.
It may have been difficult to see the config code change. I isolated the PVOutput code as much as possible from the rest of the system; the main interface to the module is just two functions:
void PVOutputUpdateConfig(const char* jsonText);
void PVOutputGetStatusJson(JsonObject& pvoutput);
Here is a screenshot of the config app for IoTaWatt:
One thing I realized I haven't really supported is 3-phase. I had considered changing the channel config for mains+solar to use the Iota Script instead of a hard-coded channel; then I assume people would be able to configure for 3-phase (though it's a bit more involved). The other option is to permit N channels to be specified for mains/solar in the config instead. Less flexible, but simpler to use and debug IMO, though I have not thought through the various 3-phase use cases.
Yea, it does impact. Right now you can wait 1 second (return UNIXtime + 1) or you can wait zero seconds (return 1). So when an async I/O is outstanding and you are waiting for completion with return 1, you are competing with another service that might have something useful to do. At a minimum, it would be nice to be able to wait until after another sample interval (16.6 or 20 ms). I’m weighing two approaches to the problem: Just use the millisecond clock in scheduling; or more elegantly, introduce the notion of blocking semaphores where time is just one event. So you could for instance also wait for unblocking from an asyncHTTPrequest ready state change handler, or an event signaled from a transaction in the webserver. That’s all blue-skies, but I like to keep it in mind so as to leave the door open for it.
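The first of those two approaches could be sketched roughly like this. This is illustrative only, not the actual IoTaWatt dispatcher; all names are made up:

```cpp
#include <cstdint>

// Sketch of millisecond-granular service scheduling, as an alternative to
// the current 1-second returns. A service returns the next millisecond
// tick at which it wants to run; the dispatcher skips it until then.
struct SvcEntry {
    uint32_t nextRunMs;              // next eligible dispatch time
    uint32_t (*handler)(uint32_t);   // runs the service, returns new nextRunMs
};

// One dispatcher pass: run each service whose time has arrived.
// Signed subtraction keeps the comparison correct across millis() rollover.
void dispatch(SvcEntry* svcs, int count, uint32_t nowMs) {
    for (int i = 0; i < count; ++i) {
        if ((int32_t)(nowMs - svcs[i].nextRunMs) >= 0) {
            svcs[i].nextRunMs = svcs[i].handler(nowMs);
        }
    }
}

// Demo service: counts its invocations and asks to run again in 20 ms.
int demoRuns = 0;
uint32_t demoHandler(uint32_t nowMs) { ++demoRuns; return nowMs + 20; }
```

A service waiting on async I/O could then ask for "one sample interval from now" (16.6 or 20 ms) instead of busy-competing with other services at 1-second resolution. The semaphore approach would replace `nextRunMs` with a more general event to block on.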
I haven’t looked closely enough at the code or reviewed my past notes to make that commitment.
There is a lot of interest in PVoutput. It is a simple (from the user side) way to organize basic usage and PV data in the cloud.
As I recall donator mode allows adding something like six additional data items, which combined with scripted output, can tell a pretty complete story. So I think that’s key. That said, it may be easier and cleaner to have two discrete handlers rather than a bunch of if-else everywhere. Once again, my notes have more on that, but I haven’t come back up to speed…
I saw that. The UpdateConfig is more or less the same as currently. The GetStatusJson is what I would expect with the class approach. That approach is cleaner than collecting the information in the webserver handler.
It’s still a beast. If I had the time I’d start over with jQuery and Bootstrap.
US split-phase also requires adding two mains. Scripting is the way to go on that, and the extra outputs in donator mode would need it too. It’s trivial to use inside the firmware. Using the calculator in the config app is a little more complicated; there are tweaks for different applications that make it a bit more fragile.
I’m thinking maybe a month to get to it. Right now I’m in hardware mode with a big manufacturing run in progress, and all software effort dedicated to new features for hardware changes and problem resolution.
You already have the timezone setting, so I'm not sure what you mean by ‘reliable local time’. Hope it's not too hard.
I assume you know this already, but you can post a lot of data to the free PVoutput account when going via the API (I've been doing this on my current energy monitor for years).
I send temp, voltage, power generation, and power consumption
Parameter  Field                Required  Format    Unit        Example   Since
d          Date                 Yes       yyyymmdd  date        20100830  r1
t          Time                 Yes       hh:mm     time        14:12     r1
v1         Energy Generation    No        number    watt hours  10000     r1
v2         Power Generation     No        number    watts       2000      r1
v3         Energy Consumption   No        number    watt hours  10000     r1
v4         Power Consumption    No        number    watts       2000      r1
v5         Temperature          No        decimal   celsius     23.4      r2
v6         Voltage              No        decimal   volts       210.7     r2
c1         Cumulative Flag      No        number    -           1         r1
n          Net Flag             No        number    -           1         r2
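Composing an addstatus request body from that table is straightforward. A minimal sketch, using only the d, t and v1–v4 fields from the table above; the struct and function names are hypothetical, not from any existing codebase:

```cpp
#include <string>
#include <cstdio>

// One status post, matching the PVOutput addstatus parameter table.
struct StatusPost {
    std::string date;   // d: yyyymmdd
    std::string time;   // t: hh:mm
    long energyGen;     // v1: watt hours
    long powerGen;      // v2: watts
    long energyCons;    // v3: watt hours
    long powerCons;     // v4: watts
};

// Build the URL-encoded request body for an addstatus POST.
std::string buildAddStatusBody(const StatusPost& s) {
    char buf[128];
    snprintf(buf, sizeof(buf), "d=%s&t=%s&v1=%ld&v2=%ld&v3=%ld&v4=%ld",
             s.date.c_str(), s.time.c_str(),
             s.energyGen, s.powerGen, s.energyCons, s.powerCons);
    return std::string(buf);
}
```

The optional fields (v5/v6, c1, n) would simply be appended when configured.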
The timezone setting is static in current IoTaWatt and doesn’t change with daylight savings time (it needs to be manually changed in the GUI when DST starts/ends, which is what I have been doing). It does make sense to use a proper time zone implementation; the fixed integral offset is not really correct.
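A proper implementation would derive the local offset from a POSIX TZ rule string rather than a fixed integer. A sketch of the idea, assuming a C library that parses POSIX TZ rules (glibc does, and newlib on the ESP8266 supports them too); the US Eastern rule below is just an example:

```cpp
#include <ctime>
#include <cstdlib>

// Return the effective UTC offset (in seconds, DST-aware) for the given
// epoch time under a POSIX TZ rule such as "EST5EDT,M3.2.0,M11.1.0".
time_t utcOffsetAt(time_t utc, const char* tzRule) {
    setenv("TZ", tzRule, 1);
    tzset();
    struct tm local;
    localtime_r(&utc, &local);
    // timegm interprets the broken-down local time as if it were UTC, so
    // the difference from the original epoch is the local offset.
    return timegm(&local) - utc;
}
```

The same rule string handles both the standard-time and daylight-time halves of the year, so no manual GUI changes at DST transitions.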
The stock IoTaWatt has no temperature measurement (though I plan to add an I2C temp sensor to my device later), so that field is skipped, but yes, the others are all fine.
I am not sure if Bob is planning to use my implementation as a basis or to rewrite one that he is comfortable maintaining himself. But mine was only designed for the free account, and I know Bob wanted scripting support to add extra outputs for donation mode and also to support 3-phase.
Brendon is right, and daylight time is what I am working on now. It’s a very complicated procedure. PVoutput does talk about an “adjust time” feature that corrects standard-time entries to daylight time, but I can’t seem to turn it on in my PVoutput account. I haven’t given up on that, but getting daylight-time adjustment working in IoTaWatt would be a plus anyway.
I have looked in more detail at Brendon’s PVoutput class code. I’m almost convinced to go with the class approach, but there are so many things about that particular implementation that I want to change or extend, I may just rewrite it within a similar framework.
Adopting local-time with daylight time.
Resuming with last posted value after restart.
Support for donation mode features.
Use scripts (calculator) to produce all outputs.
By the way, this should already work using the getstatus API in my branch unless you mean storing locally what you have previously successfully posted. I just query PVOutput what it thinks the last post was: https://pvoutput.org/help.html#api-getstatus
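Resuming from getstatus only needs the first two fields of the reply. A hypothetical sketch, assuming (per the linked docs) a comma-separated record with date (yyyymmdd) and time (hh:mm) first; the sample reply in the usage below is made up:

```cpp
#include <string>

// Extract the date and time of the last recorded status from a getstatus
// reply; the remaining fields are ignored here.
bool parseLastStatus(const std::string& reply,
                     std::string& date, std::string& time) {
    size_t c1 = reply.find(',');
    if (c1 == std::string::npos) return false;   // not a valid record
    size_t c2 = reply.find(',', c1 + 1);
    if (c2 == std::string::npos) c2 = reply.size();
    date = reply.substr(0, c1);
    time = reply.substr(c1 + 1, c2 - c1 - 1);
    return true;
}
```

On restart the service would query getstatus, parse out the last posted date/time, and resume uploading from the next interval after it.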
The donation mode and script support however is a bigger job and not supported in my code.
I was just looking at that last night. I’m still digesting your code. It’s a very different style where the functionality is fragmented over a lot of functions, so it’s taking me awhile to get my head around it.
I’m trying to understand the 23:59:59 status entry. When I look at your site linked above, I see that the last entry of the day is the first entry of the following day. i.e. the last entry for 10/11/18 is 11/11/18 12:00AM and the first entry for 11/11/18 is also 11/11/18 12:00AM. I don’t see anything about a 23:59:59 entry in the API documentation.
It’s hard to imagine that, of the many devices that already upload to PVoutput, this issue hasn’t at least been documented. I have to wonder if we’re missing something. One thing I notice is that the Add Batch Status service says an addoutput is generated for the last successful status update in the batch. Might that be how a day is finished?
Looking at your PVoutput site, I’m curious why Energy Used stops increasing every day at the same time Energy stops increasing. It increases in the morning when there is no solar generation, but not in the evening. I don’t see anything in the code to cause that, but you are more familiar with how that works.
UPDATE: Just realized I have been looking at an out of date branch.
My first version was a whole lot simpler until I tried to work around this bug.
The fact that I was losing 5 min of data at the end of each day was very surprising to me, and I couldn’t find anything about it online either. I assumed for a while I was just doing something wrong. I was planning to post to the PVOutput forums and ask, but didn’t get around to it.
I tested a lot of scenarios manually using wget (I might be able to find the tests and result somewhere if they are helpful for you).
One thing I didn’t try though is using cumulative mode. It’s possible this works OK, as instead of us doing the per-day accumulation calculations ourselves, PVOutput does it for us. But I read in the documentation of a significant limitation in the max range of the energy, which prevented me attempting that option (I think it was a 250 kWh limitation). Reading it again just now, it is possible this limitation applies only to the non-cumulative value, so that is certainly an avenue worth looking into.
My work-around was to POST the end of day stats in the second before the end of the day.
end-of-day post is 23:59:59
begin-of-day post is 00:00:00
The 23:59:59 entry gets rounded up to midnight in the reported stats by PVOutput, though, as it stores data quantized to 5 min instead of the original 23:59:59. So it appears as 12:00AM (prev day) at end of day, and the begin-of-day post of 00:00:00 appears as 12:00AM (curr day).
The end-of-day stats and begin-of-day stats I post are the same instantaneous values, but I report the final day cumulative value for the end of day and reset the cumulative value for the start of day.
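The rounding behaviour described above can be sketched in a few lines, assuming PVOutput rounds to the nearest 5-minute slot (my guess based on the observations here, not documented behaviour):

```cpp
#include <cstdint>

const int32_t INTERVAL = 300;    // 5 minutes, in seconds
const int32_t DAY = 86400;       // seconds per day

// Round a second-of-local-day to the nearest 5-minute slot. A result of
// DAY means the entry lands on midnight of the following day.
int32_t quantize(int32_t secOfDay) {
    return ((secOfDay + INTERVAL / 2) / INTERVAL) * INTERVAL;
}
```

Under this model the 23:59:59 workaround post lands on next-day midnight (hence the 12:00AM display) while the 00:00:00 post stays exactly on its own midnight, matching what the PVOutput charts show.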
I didn’t notice that before, thanks for pointing it out. That is most likely a bug on my part in the calculation of accumulated energy. I will try to check my logs later this week and verify (I lost my dev environment recently, so I need some time to set up again). It’s most likely something to do with the code I use to auto-adjust sign in case I configure the CT incorrectly.
Thanks for the info. Looking at this stuff for a couple of days now, it wouldn’t surprise me that the last status has been falling through the cracks for everyone. I’ll email their support and see what I can get from them. Seems to me that they must be storing this data at UTC.
I’ve got local time adjustment with daylight time working now, so I’m digging into this.
For those interested in what type of support goes into PVoutput, I’ve been looking deeper into this service. As I’m coming to view the service there are really two distinct parts:
The output service is a dataset of daily summaries. Primary metrics are kWh generation and kWh consumption. Secondary metrics include time and magnitude of peak production, weather summary, min/max temps. It seems there are no limits on how far back historical uploads can be, and there is an API query that can be used to determine what days’ data have been uploaded. The PVoutput apps can use this information to produce charts and summaries useful for a broad understanding of both energy use and production. Uploading historical data and maintaining it are relatively trivial, would be fast, and do not stress the hourly transaction limits imposed on an account.
The Status Service is a more detailed dataset. It is essentially the same information summarized in the Output dataset, with some optional additional user-defined data, detailed in 5, 10 or 15 minute intervals. The PVoutput apps can use this data to graphically break down a day. At 5 minute intervals, there are 288 entries per day. Maintaining this data historically can be challenging because, as far as I can tell, there is only one practical query, and it provides visibility into only the last 7 days’ worth. The API is only capable of uploading Status data for the past 14 days (100 in donator mode). It’s not clear what the retention period of this data is. It may only be 14 or 100 days. Regardless, there is a fixed window for uploading anything historical, and so it appears that Status data is meant to be only for recent activity.
This leads me to what functionality should or could be in the IoTaWatt PVoutput service. My current inclination is to make a robust Output service, and to maintain the status data within the 7 day lookback period that is visible in the API. This may change depending what I can learn in the future about the retention and API of Status.
It seems that PVoutput is more about organizing and reporting collective PV generation. Using it as a repository for detailed usage information seems like a stretch. Fortunately, IoTaWatt is able to maintain high resolution detail onboard, and can also upload to any and all other supported servers while still maintaining the PVoutput data.
My typical use case is looking at the detail (status) view for the last few days to see if things are working as expected, and also to get a rough indication of when my power usage vs generation occurs, to adjust timers for pool pumps etc. I do this quite often. I only look at the less detailed (output) information occasionally, to get an overall view of the month, typically to compare with my power bill, but it is less important to me.
PVOutput automatically populates the history (output) from the detailed (status) data, so I would suggest implementing the status as a first priority. If you want to also upload history older than 14 days, then that can be implemented on top of it as a less important feature. As dotnetdan mentioned, if the system is running normally then this will work fine with all the features of PVOutput, without having to worry about the output service.
This is very similar to your IoTa log design (detailed recent stats and less detailed history stats) only the resolution and periods are different.
As you mentioned if I want detailed information for a long time in the past I can always just look at iotawatt graphs directly. So I agree with dotnetdan that the status interface is the best to implement initially if the choice needs to be made.
It sounds like the two of you agree with my basic plan:
Loading the output looks like a trivial problem, and only needs to be done at startup. Even in non-donator mode, a year’s worth of outputs could be uploaded in about 20 requests. Might take all of a couple of minutes. So that can be done at every startup (getMissing and addBatchOutput for any missing). Then upload detailed status, beginning with the next interval as determined by the most recent returned in a getStatus request. It looks as if that will only provide a lookback window of seven days, so it’s really not something that can be maintained with absolute integrity.
The various upload services are not mutually exclusive in IoTaWatt. You can upload to Emoncms, an influxDB database, and soon PVoutput all at the same time (so to speak).
But what I’m wondering about in your case is whether you intend to use the delay feature, where the usage data posting is delayed by PVoutput for a fixed time to allow generation data to arrive from a different source. I’ve not paid much attention to this feature because I see it as a kludge. My intention is to have both generation and consumption measured by IoTaWatt and sent together in status updates. Like the other upload services, IoTaWatt would upload any backlog if communication is interrupted. The two-source-with-delay approach seems to rely on successful real-time updates, so my feeling is that data can be lost.