Grafana Experts? Tile to show current power source

Thanks. If you would like the code for my dashboard or any specific information, just let me know; happy to share.

Yeah, I agree. I looked for a center zero and couldn’t figure it out. I think there might be a way using thresholds, but I couldn’t get it to work right. And yes, I have colors: green is for exporting and red is for importing.

Importing right now, here is what it looks like

I did get it to work between “100% Solar” and “Solar + Grid”, but still have not found a way to get “Grid Only” or any Generator ones

@BungledPossum are you using InfluxDB v2 with the Flux language? Here is my current dash with InfluxDB v2 through Grafana. I got some inspiration from the phisaver.com site (he has a live demo up). I’m new to Flux queries, and I’m stuck on some things. One is creating a daily (and monthly) bar graph with consumption, production, and net, more or less like the one Home Assistant has in its energy panel.


I’m not using Flux, but I have the IoTaWatt bucket added into Grafana as both Flux and InfluxQL, so if I need to use Flux, I can.

I find InfluxQL much more user-friendly.

Here is a dash I made which uses the Flux language.

I have separate queries for the 12 breakers I am monitoring, which are graphed in the pie chart.

import "date"

month = date.truncate(t: now(), unit: 1mo)

from(bucket: "home_energy")
 |> range(start: month)
  |> filter(fn: (r) => r["_measurement"] == "house_power2")
  |> filter(fn: (r) => r["_field"] == "Breaker_2_W")
  |> filter(fn: (r) => r["tag1"] == "iotawatt.01")
|> aggregateWindow(
    every: 30d,
    fn: (tables=<-, column) =>
      tables
        |> integral(unit: 1h)
        |> map(fn: (r) => ({ r with _value: r._value / 1000.0})))
|> keep(columns: ["_value", "_time"])
|> rename(columns: {_value: "Breaker 2 - Dinning Room Outlets, Front Porch, Upstairs LR outlets along stair case, Upstairs Bathroom Right GFCI", _time: "Date"})

per-hour usage; this also needs a query per breaker

from(bucket: "home_energy")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "house_power2")
  |> filter(fn: (r) => r["_field"] == "Breaker_4_W")
  |> filter(fn: (r) => r["tag1"] == "iotawatt.01")
|> aggregateWindow(
    every: 1h,
    fn: (tables=<-, column) =>
      tables
        |> integral(unit: 1h)
        |> map(fn: (r) => ({ r with _value: r._value / 1000.0})))
|> keep(columns: ["_value", "_time"])
|> rename(columns: {_value: "Breaker 4 Total Energy", _time: "Date"})

usage per day for the month, no other queries needed, just this single one

import "strings"

month = time(v: "${strings.substring(v: string(v: now()), start: 0, end: 8)}01T00:00:00Z")

from(bucket: "home_energy")
|> range(start: month, stop: now())
|> filter(fn: (r) => r._field == "Mains1_W" or r._field == "Mains2_W")
  |> filter(fn: (r) => r["_measurement"] == "house_power2")
|> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
|> map(fn: (r) => ({ r with _value: r.Mains1_W + r.Mains2_W }))
|> aggregateWindow(
    every: 1d,
    fn: (tables=<-, column) =>
      tables
        |> integral(unit: 1h)
        |> map(fn: (r) => ({ r with _value: r._value / 1000.0})))
|> keep(columns: ["_value", "_time"])
|> rename(columns: {_value: "Total Energy Uage Per Day", _time: "Date"})

total kWh used since the start of the month

import "strings"

month = time(v: "${strings.substring(v: string(v: now()), start: 0, end: 8)}01T00:00:00Z")

from(bucket: "home_energy")
|> range(start: month, stop: now())
|> filter(fn: (r) => r._field == "Mains1_W" or r._field == "Mains2_W")
|> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
|> map(fn: (r) => ({ r with _value: r.Mains1_W + r.Mains2_W }))
|> aggregateWindow(
    every: 30d,
    fn: (tables=<-, column) =>
      tables
        |> integral(unit: 1h)
        |> map(fn: (r) => ({ r with _value: r._value / 1000.0})))
|> keep(columns: ["_value", "_time"])
|> rename(columns: {_value: "Total Energy Uage Per Day", _time: "Date"})

cost per month

import "strings"

month = time(v: "${strings.substring(v: string(v: now()), start: 0, end: 8)}01T00:00:00Z")

from(bucket: "home_energy")
|> range(start: month, stop: now())
|> filter(fn: (r) => r._field == "Mains1_W" or r._field == "Mains2_W")
|> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
|> map(fn: (r) => ({ r with _value: (r.Mains1_W + r.Mains2_W) * 0.14552 }))
|> aggregateWindow(
    every: 30d,
    fn: (tables=<-, column) =>
      tables
        |> integral(unit: 1h)
        |> map(fn: (r) => ({ r with _value: r._value / 1000.0})))
|> keep(columns: ["_value", "_time"])
|> rename(columns: {_value: "Total Energy Uage Per Day", _time: "Date"})

I am fairly used to the Flux language, so feel free to ask for assistance.


Thanks for this. I had a look at the Flux language and got a bit confused. I had tried using something like this for calculating the power costs for the last 2 months:
SELECT INTEGRAL("value",6s) * 0.2932 / 1000000 FROM "total/watts" WHERE "value" >= 0 AND $timeFilter GROUP BY time(24h,-8h)

So I will have a look at how you have done it and compare results.

I’d really like to be able to create something to compare the current flat-rate charging we are on to time-of-day charging, to see if it’s worth the switch.
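For what it’s worth, here is a rough sketch of how that comparison could be done in a single Flux query, pricing the same hourly kWh both ways and using date.hour() to pick a tariff band; the bucket, field name, rates, and hours below are placeholders, not your actual setup:

import "date"

// placeholder rates: a flat rate vs a simple two-band time-of-day tariff
flat_rate = 0.2932
peak_rate = 0.55
offpeak_rate = 0.22

from(bucket: "power_play")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._field == "watts")
  |> aggregateWindow(every: 1h, fn: mean, createEmpty: false)
  // mean watts over a 1 h window is numerically the Wh for that hour; convert to kWh
  |> map(fn: (r) => ({ r with _value: r._value / 1000.0 }))
  // price the same kWh under both schemes
  |> map(fn: (r) => ({ r with
      flat_cost: r._value * flat_rate,
      tod_cost:
        if date.hour(t: r._time) >= 7 and date.hour(t: r._time) < 21 then
          r._value * peak_rate
        else
          r._value * offpeak_rate
  }))
  |> keep(columns: ["_time", "flat_cost", "tod_cost"])

Summing flat_cost and tod_cost over a billing period (or graphing them per day) would then show whether the switch is worth it.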

Not sure I have things configured correctly, but I have also been preoccupied with other stuff.
I was doing things like:
SELECT mean("total") * 29.21 /100 FROM "power_play"."autogen"."weekend_shoulder_kwh_daily" WHERE "ct"='1' GROUP BY time(1d) FILL(0)

SELECT mean("total") * 15.3645 /100 FROM "power_play"."autogen"."everyday_off_peak_kwh_daily" WHERE "ct"='1' GROUP BY time(1d) FILL(0)

SELECT mean("total") * 29.21 /100 FROM "power_play"."autogen"."weekday_shoulder_kwh_daily" WHERE "ct"='1' GROUP BY time(1d) FILL(0)

SELECT mean("total") * 55.7734 /100 FROM "power_play"."autogen"."weekday_peak_kwh_daily" WHERE "ct"='1' GROUP BY time(1d) FILL(0)

then using a Total transform, but I'm not sure that is the best way; I'm only a newbie at this stuff.

here is a breakdown of my code

the first lines allow me to extract the current month. The command now() will return something formatted as YYYY-MM-DDTHH:MM:SSZ,
so I am using substring to extract the first 8 characters from this return, which amounts to YYYY-MM-, and I then concatenate that with 01T00:00:00Z so that I get the string for the current year/month starting at the first day of that month at midnight.

import "strings"

month = time(v: "${strings.substring(v: string(v: now()), start: 0, end: 8)}01T00:00:00Z")
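
(Side note: the same month boundary can also be produced with the date package, as in the breaker query earlier in this thread, which avoids the string surgery:)

import "date"

month = date.truncate(t: now(), unit: 1mo)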

next I select which bucket I am using and filter it. In this case, I am pulling from my bucket home_energy, using the month value I calculated above and the current date/time as the start and stop. I am choosing the two values returned from the IoTaWatt, Mains1_W and Mains2_W, which are the watts variables for my two mains.

from(bucket: "home_energy")
|> range(start: month, stop: now())
|> filter(fn: (r) => r._field == "Mains1_W" or r._field == "Mains2_W")

the next lines perform the calculations: adding the two separate values and multiplying them by the total cost I pay per kWh. The pivot makes a table that combines the two values so they are matched by time, and the map creates the intermediary variable "_value".

|> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
|> map(fn: (r) => ({ r with _value: (r.Mains1_W + r.Mains2_W) * 0.14552 }))

the next lines perform the windowing and the integral to get watt-hours from the watts the IoTaWatt returns. The second line, "every: 30d,", tells Grafana to display 30 days' worth of information. The lines "|> integral(unit: 1h)" and "|> map(fn: (r) => ({ r with _value: r._value / 1000.0})))" calculate watt-hours, and dividing by 1000 gives me kilowatt-hours.

|> aggregateWindow(
    every: 30d,
    fn: (tables=<-, column) =>
      tables
        |> integral(unit: 1h)
        |> map(fn: (r) => ({ r with _value: r._value / 1000.0})))

the last lines are "cleanup" lines where I am making things more presentable. If I were to look at all of the raw data InfluxDB has sent right now, I would have Mains1_W, Mains2_W, the total watts I calculated, the integral, and many other columns of data which I no longer care about, since I have already used them to generate my final value. The "keep" command tells Grafana which columns I want to keep, so it drops all of the other data. The final line renames the column to what I want; in this case I rename it to describe which particular breakers are being monitored, etc.

|> keep(columns: ["_value", “_time”])
|> rename(columns: {_value: “Total Energy Uage Per Day”, _time: “Date”})

here is additional information

I want to make a separate post to discuss how you can do what you want much more easily in Flux.

In Flux you can use full-on if statements.

So you can combine your first set of queries (all within the same Grafana query).

for example, this would all be one query

variable1 = from(bucket: "your_bucket")
  |> range(start: when_to_start, stop: now())
  |> filter(fn: (r) => r._field == "item1" or r._field == "item2")

variable2 = from(bucket: "your_bucket")
  |> range(start: when_to_start, stop: now())
  |> filter(fn: (r) => r._field == "item3" or r._field == "item4")

variable3 = from(bucket: "your_bucket")
  |> range(start: when_to_start, stop: now())
  |> filter(fn: (r) => r._field == "item5" or r._field == "item6")

then you can combine all of the different variables into one table for processing and organize them by time. You will want to use the "fill" commands to fill any columns with a non-null value if the same columns are not present in each of your different queries.

overview = union(tables: [variable1, variable2, variable3])
  |> group(columns: ["_time"], mode: "by")
  |> fill(column: "total_size", value: 0.0)
  |> fill(column: "vol_used", value: 0.0)

now you can do if statements with your data in the combined table named "overview": if values are one way, do something, and if they are a different value, do something else.
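
As a minimal sketch of what such an if statement could look like on the combined table (total_size and vol_used are just the placeholder columns from the fill example above):

overview
  |> map(fn: (r) => ({ r with status:
      if r.total_size == 0.0 then "no data"
      else if r.vol_used / r.total_size > 0.9 then "nearly full"
      else "ok"
  }))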

here is an example (not with if statements, but with multiple queries, combining them to perform a bunch of math):

////////////////////////////////////////////////
//TOTAL AVAILABLE SPACE ON THE DIFFERENT VOLUMES
////////////////////////////////////////////////
server2_vol1_total=from(bucket: "Test/autogen")
  |> range(start: -5m, stop: now())
  |> filter(fn: (r) => r["_measurement"] == "synology_volume")
  |> filter(fn: (r) => r["_field"] == "vol_totalsize")
  |> filter(fn: (r) => r["nas_name"] == "Server2")
  |> filter(fn: (r) => r["volume"] == "/volume1")
  |> aggregateWindow(every: 1h, fn: last)
  |> rename(columns: {_value: "total_size"})

server2_vol2_total=from(bucket: "Test/autogen")
  |> range(start: -5m, stop: now())
  |> filter(fn: (r) => r["_measurement"] == "synology_volume")
  |> filter(fn: (r) => r["_field"] == "vol_totalsize")
  |> filter(fn: (r) => r["nas_name"] == "Server2")
  |> filter(fn: (r) => r["volume"] == "/volume2")
  |> aggregateWindow(every: 1h, fn: last)
  |> rename(columns: {_value: "total_size"})

servernvr_vol1_total=from(bucket: "Test/autogen")
  |> range(start: -5m, stop: now())
  |> filter(fn: (r) => r["_measurement"] == "synology_volume")
  |> filter(fn: (r) => r["_field"] == "vol_totalsize")
  |> filter(fn: (r) => r["nas_name"] == "Server_NVR")
  |> filter(fn: (r) => r["volume"] == "/volume1")
  |> aggregateWindow(every: 1h, fn: last)
  |> rename(columns: {_value: "total_size"})

servernvr_vol2_total=from(bucket: "Test/autogen")
  |> range(start: -5m, stop: now())
  |> filter(fn: (r) => r["_measurement"] == "synology_volume")
  |> filter(fn: (r) => r["_field"] == "vol_totalsize")
  |> filter(fn: (r) => r["nas_name"] == "Server_NVR")
  |> filter(fn: (r) => r["volume"] == "/volume2")
  |> aggregateWindow(every: 1h, fn: last)
  |> rename(columns: {_value: "total_size"})

serverplex_vol1_total=from(bucket: "Test/autogen")
  |> range(start: -5m, stop: now())
  |> filter(fn: (r) => r["_measurement"] == "synology_volume")
  |> filter(fn: (r) => r["_field"] == "vol_totalsize")
  |> filter(fn: (r) => r["nas_name"] == "Server-Plex")
  |> filter(fn: (r) => r["volume"] == "/volume1")
  |> aggregateWindow(every: 1h, fn: last)
  |> rename(columns: {_value: "total_size"})

////////////////////////////////////////////////
//USED SPACE ON THE DIFFERENT VOLUMES
////////////////////////////////////////////////

server2_vol1_used=from(bucket: "Test/autogen")
  |> range(start: -5m, stop: now())
  |> filter(fn: (r) => r["_measurement"] == "synology_volume")
  |> filter(fn: (r) => r["_field"] == "vol_used")
  |> filter(fn: (r) => r["nas_name"] == "Server2")
  |> filter(fn: (r) => r["volume"] == "/volume1")
  |> aggregateWindow(every: 1h, fn: last)
  |> rename(columns: {_value: "vol_used"})

server2_vol2_used=from(bucket: "Test/autogen")
  |> range(start: -5m, stop: now())
  |> filter(fn: (r) => r["_measurement"] == "synology_volume")
  |> filter(fn: (r) => r["_field"] == "vol_used")
  |> filter(fn: (r) => r["nas_name"] == "Server2")
  |> filter(fn: (r) => r["volume"] == "/volume2")
  |> aggregateWindow(every: 1h, fn: last)
  |> rename(columns: {_value: "vol_used"})

servernvr_vol1_used=from(bucket: "Test/autogen")
  |> range(start: -5m, stop: now())
  |> filter(fn: (r) => r["_measurement"] == "synology_volume")
  |> filter(fn: (r) => r["_field"] =="vol_used")
  |> filter(fn: (r) => r["nas_name"] == "Server_NVR")
  |> filter(fn: (r) => r["volume"] == "/volume1")
  |> aggregateWindow(every: 1h, fn: last)
  |> rename(columns: {_value: "vol_used"})

servernvr_vol2_used=from(bucket: "Test/autogen")
  |> range(start: -5m, stop: now())
  |> filter(fn: (r) => r["_measurement"] == "synology_volume")
  |> filter(fn: (r) => r["_field"] == "vol_used")
  |> filter(fn: (r) => r["nas_name"] == "Server_NVR")
  |> filter(fn: (r) => r["volume"] == "/volume2")
  |> aggregateWindow(every: 1h, fn: last)
  |> rename(columns: {_value: "vol_used"})

serverplex_vol1_used=from(bucket: "Test/autogen")
  |> range(start: -5m, stop: now())
  |> filter(fn: (r) => r["_measurement"] == "synology_volume")
  |> filter(fn: (r) => r["_field"] == "vol_used")
  |> filter(fn: (r) => r["nas_name"] == "Server-Plex")
  |> filter(fn: (r) => r["volume"] == "/volume1")
  |> aggregateWindow(every: 1h, fn: last)
  |> rename(columns: {_value: "vol_used"})

////////////////////////////////////////////////
//COMBINE ALL TABLES INTO ONE TABLE
////////////////////////////////////////////////

overview = union(tables: [server2_vol1_total, server2_vol2_total, servernvr_vol1_total, servernvr_vol2_total, serverplex_vol1_total,server2_vol1_used, server2_vol2_used, servernvr_vol1_used, servernvr_vol2_used, serverplex_vol1_used])
|> group(columns: ["_time"], mode:"by")
|> fill(column: "total_size", value: 0.0)
|> fill(column: "vol_used", value: 0.0)
|> reduce(
       fn: (r, accumulator) => ({
          total_size: r.total_size + accumulator.total_size,
          total_used: r.vol_used + accumulator.total_used
         }),
        identity: {total_size: 0.0, total_used: 0.0},
    )
|> map(fn: (r) => ({ r with _value: (r.total_size - r.total_used) }))
|> rename(columns: {_value: "Free"})
|> rename(columns: {total_size: "Total"})
|> rename(columns: {total_used: "Used"})
|> drop(columns: ["_time"])
|> yield()

Thanks for taking the time to post your comprehensive reply and to break things out and explain the various sections. I just realised I forgot to mention that I was using Node-RED to do the time-of-day and day-of-week splits, but I’d much rather have it all in one place.
I’ll have a play as time permits.

no problem, feel free to IM me or discuss over email if you desire.

one thing that helps: if you open the web interface of InfluxDB and choose/plot your data, in the lower right there is a "script editor" button. This will actually generate the Flux code needed for the query you just made. This helped a lot in my initial understanding.

then I just googled a lot about how to do xyz or abc and read a lot of the documentation on InfluxDB's site, as they explain the new Flux language in fairly good detail.

I then just played around figuring out how to do different things. Once you understand which commands to use, it is easy to understand the code, and now that I have been using it, it is so much better to use, as it is crazy powerful if you need it to be.

Thanks for posting this! I’ll need to read through these and get it going

We really appreciate the work

You're welcome.
A small comment on my previous posts: when you see "every: 30d", replace that with "every: 1mo".
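
That is, the aggregateWindow step from those queries would read:

|> aggregateWindow(
    every: 1mo,
    fn: (tables=<-, column) =>
      tables
        |> integral(unit: 1h)
        |> map(fn: (r) => ({ r with _value: r._value / 1000.0})))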

Hi! I am starting with Grafana, and while looking for information I saw your post. I am looking for something similar, but simpler: just knowing whether it is 100% solar, combined, or all from the line. I don’t know if you figured out how to do it, because the post didn’t continue, but I would like to know if you fixed it.

Also, it would be really great if you could share your code, because it has almost the same structure I want to follow; I am just starting, and having it as an example would be great.

Thank you in advance!

I will gladly help

Can you send me a screenshot or something of your data layout in Influx?

It should be fairly easy to do what you want.

We can make a single flux language query that pulls the data from your AC mains, and pulls data from your solar mains.

We then need to join the two different tables

After they are joined, we can perform several if statements where we determine the sources of power and have it report "All Solar" / "All AC Mains" / combination.

It would also then be easy to make a panel that shows what percentage is from mains and what is from solar when you are getting combined power.
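
As a rough illustration of the shape that query could take (field names like Mains_W and Solar_W are placeholders until I see your actual layout):

// placeholder field names -- adjust to your own bucket/measurement layout
from(bucket: "home_energy")
  |> range(start: -15m)
  |> filter(fn: (r) => r._field == "Mains_W" or r._field == "Solar_W")
  |> aggregateWindow(every: 1m, fn: mean, createEmpty: false)
  // line the two series up by time so we can compare them row by row
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
  |> map(fn: (r) => ({ r with
      source:
        if r.Mains_W <= 0.0 then "100% Solar"
        else if r.Solar_W <= 0.0 then "Grid Only"
        else "Solar + Grid",
      solar_pct: r.Solar_W / (r.Mains_W + r.Solar_W) * 100.0
  }))
  |> keep(columns: ["_time", "source", "solar_pct"])

A stat panel set to show the last value of "source" would then act as the tile, and "solar_pct" covers the percentage panel.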

Feel free to email me at wallacebrf@hotmail.com


Thx heaps @wallacebrf. This weekend gone I finally had time to convert my InfluxDB to v2 so I could have a play with these.
I couldn’t work out how to do the time-of-day calculations easily in Flux, so I created different measurements for each tariff, populated via Node-RED rules, and then played with Grafana: first using a transform to add the values, then the probably cleaner method of just using Flux.
This is what I came up with for graphing each day, and I will then work on using this as the basis for additional work.

Wondering if sometime you could cast your eye over it and double-check I haven’t made any stupid mistakes, as I certainly made plenty before I got to this stage. Thanks!

import "strings"
import "date"
import "timezone"

option location = timezone.location(name: "Australia/Perth")
yesterday = date.truncate(t: -1d, unit: 1d)

from(bucket: "power_play")
|> range(start: -14d, stop: yesterday)
|> filter(fn: (r) => r._measurement == "Peak" or r._measurement == "OffPeak" or r._measurement == "SuperOffPeak")

// get the kWh
|> aggregateWindow(
    every: 1d,
    fn: (tables=<-, column) =>
      tables
        |> integral(unit: 1h)
        |> map(fn: (r) => ({ r with _value: r._value / 1000.0}))
)

// apply the time of day tariff
|> map(
   fn: (r) => ({r with _value:
   if r._measurement == "Peak" then
       r._value * 0.55
   else if r._measurement == "OffPeak" then
       r._value * 0.22   
   else
       r._value * 0.08 
    })
 )

// create a table containing data from all 3 measurements
|> pivot(rowKey: ["_time"], columnKey: ["_measurement"], valueColumn: "_value")

// if column has / need to use [] eg
//|> map(fn: (r) => ({ r with _value: r["Peak/watts"] + r["OffPeak/watts"] + r["SuperOffPeak/watts"] }))
|> map(fn: (r) => ({ r with _value: r.Peak + r.OffPeak + r.SuperOffPeak }))

|> keep(columns: ["_value", "_time"])

Just wanted to share my flux for my branch circuit breakers; I render as a table with the name, current power, 24h energy, and 7-day average 24h energy. I would rather render it graphically, but haven’t found the right tools for a composite presentation yet.

There are a couple little tricks in there that it took me a while to figure out.

import "math"

daily = from(bucket: "Iotabucket2")
  |> range(start: -24h)
  |> filter(fn: (r) => r["ct"] !~ /^PNL.*/ and r["ct"] != "Consumption" and r["ct"] != "Import" and r["ct"] != "Export" and r["ct"] != "HECO" and r["ct"] != "LivingPlugs")  
  |> filter(fn: (r) => r["_field"] == "Watts")
  |> integral(unit: 1000h)
  |> drop(columns: ["_start" , "_stop" , "_time" , "_measurement" , "device" , "_field" , "units"])
  |> group()
  |> map(fn: (r) => ({ r with _value: math.round(x: r._value * 1000.0)/1000.0}) )
  |> rename(columns: {_value: "kWh"})
demand = from(bucket: "Iotabucket2")
  |> range(start: -2m)
  |> filter(fn: (r) => r["ct"] !~ /^PNL.*/ and r["ct"] != "Consumption" and r["ct"] != "Import" and r["ct"] != "Export" and r["ct"] != "HECO" and r["ct"] != "LivingPlugs")  
  |> filter(fn: (r) => r["_field"] == "Watts")
  |> last()
  |> drop(columns: ["_start", "_stop", "_time", "_measurement", "_field", "device", "units"])
  |> group()
  |> rename(columns: {_value: "demand"})

day7 = from (bucket: "IotaHourly")
  |> range(start: -7d)
  |> filter(fn: (r) => r["ct"] !~ /^PNL.*/ and r["ct"] != "Consumption" and r["ct"] != "Import" and r["ct"] != "Export" and r["ct"] != "HECO" and r["ct"] != "LivingPlugs")  
  |> filter(fn: (r) => r["_field"] == "kWh")
  |> cumulativeSum()
  |> drop(columns: ["_start", "_stop", "_time", "_measurement", "_field", "device", "units"])
  |> last()
  |> map(fn: (r) => ({r with _value: r._value / 7.0 }))
  |> group()
  |> rename(columns: {_value: "day7"})

interim = join(tables: {demand: demand, daily: daily}, on: ["ct"])
join(tables: {interim: interim, day7: day7}, on: ["ct"])
  |> sort(desc: true, columns: ["demand"])

Looks good to me; you’ve got some good tricks there.

it looks like you are doing everything correctly.

once you know how to use Flux, you can do much more powerful and useful queries compared to the old method on older InfluxDB versions.

I’ve been away from the Iotawatt forums - looks like some Influx v2/grafana experts have arrived since. I would love some help with the following -

I have Iotawatt set up to send data to my influx v2 db more or less like Bob instructs in the help file. I also set up a handful of wifi plugs that do energy monitoring via MQTT. That data is getting dumped into my influx v2 DB as well (via Home Assistant).

I have my grafana energy monitoring dashboard set up to show me kWh usage - BUT - and here’s the rub - I can’t seem to get the two different data sets together in one graph - so I have one for all the CTs from iotawatt in one graph and all my outlets in another.

my iotawatt data (screenshot)

my outlet data (screenshot)

here are the queries -

CTs

from(bucket: "iotawatt")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "CarCharger" or r["_measurement"] == "Dishwasher" or r["_measurement"] == "Dryer" or r["_measurement"] == "HeatpumpFurnace" or r["_measurement"] == "HotWaterHeater" or r["_measurement"] == "InductionStove" or r["_measurement"] == "ServerGroup" or r["_measurement"] == "Shed" or r["_measurement"] == "WaterPump" or r["_measurement"] == "balance")
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
  |> integral(unit: 1000h, column: "_value")

outlets

from(bucket: "iotawatt")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["friendly_name"] == "stereo" or r["friendly_name"] == "theater TV" or r["friendly_name"] == "washer" or r["friendly_name"] == "waterbowl" or r["friendly_name"] == "softener" or r["friendly_name"] == "microwave" or r["friendly_name"] == "iotawatt" or r["friendly_name"] == "fridge" or r["friendly_name"] == "family room TV" or r["friendly_name"] == "dehumidifier" or r["friendly_name"] == "coffee nook" or r["friendly_name"] == "closet net" or r["friendly_name"] == "brickroompc" or r["friendly_name"] == "basement fridge")
  |> filter(fn: (r) => r["_field"] == "value")
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
  |> integral(unit: 1000h)

My end game here is to have all the data in one graph and use some math to subtract all the outlet data from the ‘balance’ value in the CT graph and get a ‘true’ balance value (unmeasured power).

Sorry for the scatter-brained post, but I would appreciate any help here.

View the raw data, drop the columns that don’t match, and join the two tables. Your queries can remain as-is, just both in a single cell. It doesn’t work when there is data that doesn’t match between the two tables, which is why you have to strip out that information.
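
A minimal sketch of that drop-then-join idea, trimmed to one CT and one plug for brevity (exactly which columns need dropping depends on what your raw tables show; Home Assistant may add other tags that also differ):

cts = from(bucket: "iotawatt")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "balance")
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
  |> drop(columns: ["_start", "_stop", "_measurement", "_field"])

outlets = from(bucket: "iotawatt")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["friendly_name"] == "fridge")
  |> filter(fn: (r) => r["_field"] == "value")
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
  |> drop(columns: ["_start", "_stop", "_measurement", "_field", "friendly_name"])

// once the remaining columns match, the two streams join cleanly on time;
// join suffixes the clashing _value columns with the table names
join(tables: {ct: cts, plug: outlets}, on: ["_time"])
  |> map(fn: (r) => ({ _time: r._time, _value: r._value_ct - r._value_plug }))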


I knew that a join was likely in my future. Here are the two tables with raw data -

I will do some googling here for Drop/Join - but any help would be appreciated.
Thanks