Wrong values returned by REST API? - explained

Hi there.

I’m a bit puzzled by the values returned when I query the IoTaWatt.
I have various sensors (13 in total) for all circuits, as well as two solar feeds.

The solar feeds read negative, all the others positive.

I have one output defined as:
Phase1 + Phase2 + Phase3 + Tesla1 + Tesla2 + Tesla3 + Pool + Solar1 + Solar2 + AC1 + AC2 + AC3 + HotWater min 0

If I query that output’s Wh per minute, I get:

http://iotawatt.local/query?select=[time.utc,Export.wh]&begin=2021-08-18&end=2021-08-19&format=csv&group=m
2021-08-18T00:33:00, 0
2021-08-18T00:34:00, -1
2021-08-18T00:35:00, -2
2021-08-18T00:36:00, -1
2021-08-18T00:37:00, -2
2021-08-18T00:38:00, -2
2021-08-18T00:39:00, -5
2021-08-18T00:40:00, -5
2021-08-18T00:41:00, -5
2021-08-18T00:42:00, -5
2021-08-18T00:43:00, -6
2021-08-18T00:44:00, -5
2021-08-18T00:45:00, -6
2021-08-18T00:46:00, -10
2021-08-18T00:47:00, 0
2021-08-18T00:48:00, -8
2021-08-18T00:49:00, -9
2021-08-18T00:50:00, -11
2021-08-18T00:51:00, -18
2021-08-18T00:52:00, -12

etc…

However, when I query the cumulative value since the beginning of the year:
http://192.168.10.3/query?select=[time.iso,Export.wh]&begin=y&end=s&group=all
it will read:
0

Using Graph+ it will plot things properly. There’s definitely proper data in there, since the beginning of the year.

But I just can’t retrieve the value via the REST API.

Any ideas?
thanks
JY

Please see this explanation of how solar import/export is handled in the datalog.

Sorry, but I don’t see how this is applicable. Or maybe the title of that post has me confused (it’s not a matter of import/export, but of math expressions in general).
While I do use this output to measure import and export, there’s nothing in iotawatt identifying this output as an import/export.
Import in particular gives me the right value; the only difference from Export is min vs max in the expression:
Watts = (Phase1 + Phase2 + Phase3 + Tesla1 + Tesla2 + Tesla3 + Pool) + Solar1 + Solar2 + AC1 + AC2 + AC3 + HotWater max 0

I have plenty of output counters defined in the same way; only that one in particular reads as zero. All the others, which are typically the sums of all three phases, are correct…

If it’s hitting a storage size limit, what’s unfortunate is that all I really need is the cumulative value over time; I don’t need the details of all the previous months.

An RRD database stores it that way: it keeps the cumulative value regardless of time and then compresses intervals, e.g. every second for the last month, every hour for the last 6 months, every day before that, etc.

So you never run out of space, you always have all the averages/sums (just not the inner details anymore), and the size is all allocated on creation so it never grows over time.

Couldn’t this be accomplished with iotawatt?
If it has to store all the data forever I can certainly see how you would quickly hit the wall.
I did notice that when I added a test output, the iotawatt became unresponsive for several minutes. I couldn’t even ping it anymore; I thought it had crashed.

I should also add that even if I only query for the last month, week or day, I still only get 0:
http://192.168.10.3/query?select=[time.iso,Export.wh]&begin=d&end=s&group=all

yields:
[["2021-08-18T00:00:00",0]]

The gist of it is that the IoTaWatt query evaluates your script using the NET values of the inputs for the given interval(s), rather than integrating the script over the detail data for the interval.

It can produce the detail data, but if you want to integrate it over a long period you will need to export the data to a more capable computer like PVOutput, influx or Emoncms.

There is a limit to how much you can get from an ESP8266 with an SDcard. You seem to have some ideas about how to improve it; it’s open source, you are free to do so.

I understand the processing and storage limitations.

However, I’m not sure that’s the problem at hand here. In your other post you mention that data is stored for over a month at full resolution, so why can’t I retrieve just the last day’s summary?

This is a query for today, midnight to the present time. Your expression is:

If you add up the total of each of those values, what do you get? Let’s say that the data is this:

Phase1 = 3000Wh
Phase2 = 3200Wh
Phase3 = 3100Wh
Tesla1 = 1000Wh
Tesla2 = 1000Wh
Tesla3 = 1000Wh
Pool = 0Wh
Solar1 = -2800Wh
Solar2 = -3100Wh
AC1 = 0
AC2 = 0
AC3 = 0
HotWater = 1500Wh

Adding those up I get 7,900 Wh. Since 0 is less than 7,900 Wh, the min 0 function evaluates to zero.

As long as the solar generation is less than the sum of the other inputs in the formula for the entire period, the result will always be zero. Very few houses maintain net zero or better; almost all use more power than they generate and so have a net positive value.
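To make the arithmetic concrete, here is a minimal sketch (plain Python with made-up per-minute numbers, not how IoTaWatt computes anything internally) of why the two orders of evaluation give different answers:

```python
# Hypothetical per-minute net values in Wh (negative = exporting).
per_minute = [5, 3, -2, -4, 6, -1]

# What query?group=all does: apply "min 0" to the NET total of the period.
net_total = sum(per_minute)                 # 7 Wh net import
export_net = min(net_total, 0)              # min(7, 0) = 0  -> the zero you see

# What an integration would do: apply "min 0" per interval, then sum.
export_integrated = sum(min(wh, 0) for wh in per_minute)   # -7 Wh exported

print(export_net, export_integrated)        # 0 -7
```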

Now if you run that query with, say, begin=s-1h&end=s&group=all in the middle of a sunny day, you will probably get a negative number.

As you said above:

You get a series of values for each minute of the day, 1,440 of them. If you add them up, you will get the integration of your function over the day. IoTaWatt does not provide an integrated result when you ask for “all”. It evaluates the expression using the net values of the operands over the requested period (all).

You can get an integration for a day using Graph+. Just plot the function for the day in question and click the statistics tab at the bottom. The integrated Wh will be shown in the rightmost column, integrated to 2-minute resolution.

Mission one for the IoTaWatt is to sample and save the raw measurements. Because of the limitations of the ESP8266 environment, it does not sample while doing a query, so the extensive SDcard I/O needed to read 17,280 datalog records and integrate a day’s worth of detail is not practical. The way the data is organized, you can rapidly retrieve many metrics and get total net or average usage for any time period. It just doesn’t integrate to break down a net value into its positive and negative components.
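For what it’s worth, pushing that integration onto the client is simple. A minimal sketch (Python with the requests library, assuming the default JSON array-of-rows response shown earlier in the thread and the hostname/output name from the examples above):

```python
import requests

# Pull today's per-minute Wh detail and integrate it on the client, so the
# ESP8266 only has to serve the same group=m data it already returns.
url = ("http://iotawatt.local/query"
       "?select=[time.utc,Export.wh]&begin=d&end=s&group=m&format=json")
rows = requests.get(url, timeout=30).json()

# Each row is [timestamp, wh]; missing values come back as null/None.
export_today_wh = sum(wh for _, wh in rows if wh is not None)
print(f"Export integrated over today: {export_today_wh:.0f} Wh")
```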

Oh I see…

it runs the calculation on the net value of each input over the whole period, rather than evaluating the expression at each data point and summing.

Makes sense now.

My plan was to use the IoTaWatt with the new Home Assistant energy screen, but I believe that even they get it wrong.

It’s something I wrote a while ago to display on those graphs:
https://mediaserver.avenard.org/power/usage
and indeed calculating the export values requires fairly complex calculations that I run at regular intervals.


Someone has written an iotawatt integration for Home Assistant.

It runs:
http://IP_ADDRESS/query?select=[time.iso,Export,Export.wh]&begin=d&end=s&group=all

for every registered input and output.

I wonder how much this is pounding the iota, CPU-wise.
Is there a way to check that?

It’s a trivial query; it should be no problem at all. The bottom line is how it impacts the cycle sample rate. You can see that in the status display.

Would there be a way to retrieve, via a GET request, the value that the iota can generate when you draw the export/import outputs above?

The value you get when you hover over the last point is almost exactly the same as what I calculate in my daemon, or what the power utility bills me for.

I tried to read just the power value in HA and integrate it with their Riemann sum integral component to get the Wh value (Integration - Riemann sum integral - Home Assistant), but it’s rather inaccurate: after 12 h it already reads low,
35.8 kWh as reported by HA in the graph vs 34.1 kWh obtained by reading the Import values and summing them regularly.
Interestingly, for the export it actually reads 10% too high:
4016 Wh vs 4455 Wh when checking Export.

The iota obviously knows how to do that calculation very nicely and very accurately. I wouldn’t even need to pull it very often; every 5 minutes would be more than sufficient.

That plot is over 11 hours so the group (interval) should be 1 minute. Graph+ is accruing (adding) them up.

If you query begin=m-1m&end=m every minute, you will get the same data.

If your kWh are off now, it may just be a precision issue. If you are requesting kWh instead of Wh, the precision for small intervals is not optimal. If that’s the case, you can either switch to Wh and divide by 1000, or you can increase the precision of the query with export.kWh.p6.

Every 5 minutes with good precision should be adequate. That’s what PVoutput uses and it’s right on.


Is querying in kWh a new thing? Same for pN. I thought setting the precision was dN?

The above request gives me {"error":"invalid query. Invalid series method: kwh"}

You’re right, query only supports Wh and precision is “d” not “p”. That’s what I get for answering on the fly with my phone.

I had a similar exchange a few weeks ago, but it was dealing with the influx uploader, which does allow kWh.


Alright, there’s still something I don’t get based on this earlier discussion :slight_smile:

I’m modifying the IoTaWatt HA integration to retrieve the Wh meter using a daily view instead of yearly, which yields no useful data.

So my query ends up being:
http://iotawatt.local/query?select=[time.iso,Export.wh,Import.wh,Total.wh,Total_Solar.wh]&begin=d&end=s&group=all

And this returns the right accumulated value for outputs that are simply the sum of inputs, but the wrong one for the more complex Import/Export outputs.

The documentation does state:

all will cause all of the data in the time period to be treated as a single group. For most units, this will result in the average value over the entire period. For Wh, it will result in the total Wh for the entire period.

Graph+ gives me the right results, close enough with a 5 m average as we get toward the end of the day (I’ll note that Graph+ uses 2 m grouping with auto, while the GET uses a 5 m interval and returns 192 intervals, not 360 as the documentation states).

But that’s not what I want here, and from what you indicated it would have done something similar to Graph+.

Is there a way to get the iota to process the calculation over a given interval rather than the entire day average for each of the output/input meters?

I believe this would be quicker/more efficient than querying with group=1m.
Thanks

Daily will have the same problem as yearly when dealing with a Wh value that is not monotonically increasing.

Maybe a better way to think about this is to compare to a checking account. Over the course of a month you make deposits and write checks. If you simply look at the change in balance from month to month, you cannot discern how much you deposited and how much you spent. If you look at the daily change in balance, you get a better idea, hourly change starts to be very accurate.

If your HA integration queries, say, every 5 minutes, use begin=m-5m&end=m&group=all, then run the result through the HA electricity meter integration to get a running total.
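A rough sketch of that polling loop (Python with requests; Import/Export are the output names used earlier in this thread, and in HA proper each 5-minute Wh figure would be handed to a utility/electricity meter entity rather than accumulated by hand):

```python
import time
import requests

# The previous complete 5-minute window, evaluated as a single group.
URL = ("http://iotawatt.local/query"
       "?select=[time.iso,Import.wh,Export.wh]"
       "&begin=m-5m&end=m&group=all&format=json")

import_total = export_total = 0.0

while True:
    _ts, imp, exp = requests.get(URL, timeout=10).json()[0]
    import_total += imp or 0
    export_total += exp or 0
    print(f"import={import_total:.0f} Wh  export={export_total:.0f} Wh")
    # A real integration would align polls to 5-minute boundaries so no
    # window is missed or counted twice; a plain sleep is enough for a sketch.
    time.sleep(300)
```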

I find it diverges too much compared to what I’m actually billed for when using 5 m.

The other thing I’ve noticed is that no matter what I query, the result is truncated to 100188 bytes.

What I’m planning to do now is ask for all the values stored since the beginning of the day, grouped by 1 m up to the current hour, and cache that.
After that I will only query from the last hour; a query for the last 59 entries only takes 0.1 s here.
When we reach a new hour, I will add what has been retrieved for that hour to the cache and restart.

I’m doing this because quite often the iota stops responding to ping and queries. It can stop responding for quite a while; it looks like it’s losing WiFi connectivity. It always comes back.
But yesterday it was offline for over 40 minutes. If I were to integrate in HA, or just query for the last 5 minutes, I would lose data then.

With the above, it won’t matter if I restart HA or the iota becomes unresponsive; I’ll always be able to get it all.
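As a rough sketch of that scheme (Python with requests; the helper and cache layout are hypothetical, and it assumes the query endpoint accepts full ISO timestamps for begin/end the same way it accepts the date-only form used earlier), completed hours are re-fetched only once per hour, so an HA restart or an unresponsive iota costs at most the current partial hour:

```python
import requests
from datetime import datetime

BASE = ("http://iotawatt.local/query"
        "?select=[time.iso,Export.wh]&group=1m&format=json")

def export_wh(begin, end):
    # Hypothetical helper: sum the per-minute Wh between begin and end.
    rows = requests.get(f"{BASE}&begin={begin}&end={end}", timeout=30).json()
    return sum(wh for _, wh in rows if wh is not None)

cache_wh = 0.0      # Wh accumulated over today's completed hours
cache_hour = None   # which hour boundary the cache runs up to

def export_today_wh():
    global cache_wh, cache_hour
    now = datetime.now()
    top_of_hour = now.replace(minute=0, second=0, microsecond=0).isoformat()
    if cache_hour != now.hour:
        # Once per hour: re-fetch midnight -> the last full hour and cache it.
        cache_wh = export_wh("d", top_of_hour)
        cache_hour = now.hour
    # Cheap query covering only the current partial hour.
    return cache_wh + export_wh(top_of_hour, "s")
```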

That doesn’t sound right. PVoutput uses 5 minute intervals and is pretty accurate. Can you show an example of that?

Look at the &limit parameter, but be careful, it’s there for a reason. Also, you probably don’t need to do a large query on the inputs other than solar import/export.

That works. You are essentially doing an integration of the values.

Poor WiFi is a problem. I used to struggle with that until I installed a Ubiquiti network with several APs. End of problem.


I’ve been working on my monitoring system for over a decade. Doing a by-the-minute average is what gave me the most acceptable result.
Issues mostly occur on cloudy days, when solar peaks a lot and then drops to zero shortly after. I have a lot of partial shading too. A 5-minute average under this scenario typically reads very low on those days.

I did set it to remove the limit; I found that it had no effect whatsoever.

I am, except that the iota is doing it, and over continuous intervals, so there’s no issue with inaccurate timers firing the request just before a minute is due and causing a one-minute loss.

I just changed WiFi routers and even added an extra node near the iota.
The RSSI is pretty good; throughput is terrible, however.

In all, what I’m doing is a hack to get access to data the iota already has and could retrieve much quicker.

I wish I could do that brute-force method only for import and export, but that’s just how I’ve named them; others can make different choices.

Also, I found that querying the data for one sensor or ten doesn’t make much difference in time.
The maximum amount I can query before the iota truncates the results is one day at one-minute grouping. People with more outputs may hit the 108k limit quicker.

Any chance you could make the resolution=high or equivalent run the calculation on minute averages rather than the average of the whole requested interval?

Not what I had in mind as an example. We could be talking about completely different things.

What release are you on?

When you query, the data comes from the time-stamped datalog, so it doesn’t matter when you query; the responses are time-stamped and consistent.

[quote=“overeasy, post:18, topic:3239”]
That works. You are essentially doing an integration of the values.
[/quote]

Did you look at the log to see if the unit disconnected? The problems you are describing are usually network related.