This is probably a simple question, but I figured I would ask.
When you guys create a query to grab the total usage for a period do you take all the readings collected and sum them?
Or do you take all the readings for a period and take the mean?
I notice I am capturing watt utilization every 10th of a second, 6 times a second.
I would think I'd take the mean per second and then sum those means over the duration.
Grafana doesn’t do any sums or conversions; it gets its data pre-calculated from Influx (or other sources). But I think you are asking about the origin of the data in the IoTaWatt.
The rate and timing of samples is driven primarily by the number of channels configured and then influenced by other activities, like servicing web server requests.
The individual samples are accumulated as Watt-milliseconds. At approximately five second intervals, the total is converted to double precision Watt-hours and added to a running total for that input.
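To make the accumulation step concrete, here is a minimal sketch of that energy bookkeeping in Python. The class and method names are illustrative only, not the actual IoTaWatt firmware, and the sample timing is assumed rather than exact:

```python
# Hypothetical sketch of per-channel energy accumulation as described above.
# Names and numbers are illustrative, not IoTaWatt source code.

class InputChannel:
    def __init__(self):
        self.watt_ms = 0.0   # accumulator for the current ~5 second window
        self.total_wh = 0.0  # running double-precision Watt-hour total

    def add_sample(self, watts, interval_ms):
        # Each sample contributes power * elapsed time, in Watt-milliseconds.
        self.watt_ms += watts * interval_ms

    def flush(self):
        # Every ~5 seconds: convert Watt-ms to Watt-hours, fold into the total.
        self.total_wh += self.watt_ms / 3_600_000  # 1 Wh = 3,600,000 Watt-ms
        self.watt_ms = 0.0

ch = InputChannel()
for _ in range(30):           # ~5 s of samples at ~6 samples/s
    ch.add_sample(1200, 167)  # 1200 W sustained for ~167 ms each
ch.flush()
print(ch.total_wh)            # ~1.67 Wh accumulated for the window
```

The point of keeping the running total in Watt-hours as a double is that individual sample noise averages out, and interval queries reduce to a simple subtraction of two totals.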
When you query for Wh, the IoTaWatt returns the difference between the totals at the beginning and end of the interval.
IoTaWatt also maintains a running total of measured hours in the datalog. So when you query for average Watts, the Wh are calculated as above, and that is divided by the difference in measured hours for the interval. In that way, if there was a power outage during the interval, it will not affect the average power measured while the unit was actually running.
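That average-Watts calculation can be sketched as follows. The function and parameter names are hypothetical; the start/end values stand in for the datalog totals bracketing the query interval:

```python
# Hypothetical sketch of the average-Watts query described above.
# wh_* and hours_* stand in for the cumulative datalog totals at the
# interval boundaries (illustrative names, not the IoTaWatt API).

def average_watts(wh_start, wh_end, hours_start, hours_end):
    wh = wh_end - wh_start                     # energy used in the interval
    measured_hours = hours_end - hours_start   # excludes any outage time
    return wh / measured_hours

# One-hour clock interval with a 30 minute outage: 600 Wh over only
# 0.5 measured hours gives 1200 W average while actually powered.
print(average_watts(1000.0, 1600.0, 10.0, 10.5))
```

Dividing by measured hours rather than clock hours is what keeps an outage from dragging down the reported average.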
Same basic methodology for Voltage and VA with the datalog recording cumulative Volt-hours and VA-hours respectively.
If I understood what you are saying, the unit takes the total watt usage over each ~5 second period and writes that to the database. So I should simply be able to sum all the entries in the database for a particular time range to get the total usage during a given period. No need to get fancy.
Did I read that right?
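Assuming the rows returned for the query window are already per-interval Wh deltas (an assumption about the stored data, not a real Influx query), the "just sum the entries" approach really is this simple:

```python
# Minimal sketch of summing per-interval Wh deltas for a query window.
# The rows list is illustrative data, not output from a real database.

rows = [0.8, 0.9, 0.85, 0.95]  # Wh recorded for each ~5 s interval
total_wh = sum(rows)
print(total_wh)                # total energy for the window, in Wh
```

Note the sum gives total energy (Wh), not power (W); to report average power over the window you would divide by the measured hours, as described above.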