New Install, two IotaWatt, US Split phase panel - photos

As I work through this I will offer a bit of a cheat sheet for those who might stumble across this thread.

The InfluxDB database output is well documented, but if you are not familiar with it (say, like me, coming from more traditional databases) it may be confusing. There’s also some confusing terminology on the web server setup screen. Finally, I have two IotaWatts, and it took a bit of trial and error to find a setup that worked so I could combine their output downstream easily.

I built a cheat sheet that has the Iotawatt setup on the top, and a command line interface output at the bottom.

The first thing to know is that InfluxDB has some serious limitations that will not come naturally to traditional database users: (1) it will not allow math (etc.) across “measurements”, only within a measurement (it does allow it between fields and differently tagged data, with some caveats), and (2) while it easily does rollup aggregation on an automated basis, you cannot create new data with new tags.

This last one is a bit obscure; I have an ACCompressor (outside circuit) and AirHandlerHeat (inside). Each is identified by a tag (whose values are those names). I wanted to create a similar measurement with a tag of simply “AC”. You cannot do that in InfluxDB – you can add them together in a query, but not save the data with the new tag. (You can save it untagged, even as a new measurement, but if you are using tags to identify circuits and the like, this breaks your model.)
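To make it concrete, here is a sketch (untested, using my measurement and tag names from above) of what works and what doesn’t:

```sql
-- Query-time math across tag values works fine:
SELECT sum(Watts) FROM Watts
WHERE ct = 'ACCompressor' OR ct = 'AirHandlerHeat'
GROUP BY time(1m)

-- But when writing the result back, there is no clause to attach a NEW
-- tag value like ct='AC'; the best you can do is an untagged copy in a
-- new measurement:
SELECT sum(Watts) AS Watts INTO AC_combined FROM Watts
WHERE ct = 'ACCompressor' OR ct = 'AirHandlerHeat'
GROUP BY time(1m)
```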

Using measurements instead of tags actually makes it worse – then you are even more restricted.

You can of course output it from the IotaWatt, added together, provided both items are on the same device. However, there is a big downside to doing that – or put another way, there is a convenience to sending a given data item only once: if you then do SUM() against them collectively, you are not double counting anything.

So here’s what my setup looks like:

It’s useful to follow the colored arrows.

RED is what the measurement name becomes in InfluxDB – that is from the text box at the top. Do not be confused by the “measurements” section, which is subtly different (I wonder if something like “outputs” might be less confusing – these produce measurements, but nothing in that section is specifically a Measurement as InfluxDB uses the term). Whatever you put in the red box becomes the thing on the left if you say “show series”. In my case I make it $units, which in turn produces two measurements – Watts and Volts (as those are the only units I send). Notice on the left side in the CLI that “Watts” is from that input box.

Now look at the BLUE arrows. I have defined those as $name, and these map to the names in the “measurements” section (think “InfluxDB outputs”, not “Measurements”), like “ACCompressor”. Notice they are referenced in the Tag section, which in turn produces a tag on the series alongside the actual measurement name (red). So in the output of “show series” they are of the form “Watts” (from red above) with “ct=ACCompressor” as a tag, and can be used in the WHERE clause in queries.

Now look at the GREEN line. The field-key becomes the field name in the data. In my case (magenta bracket) I have units that are both Volts and Watts, and if you follow them up to the field-key, where $units is used again, it flows into the data as shown in the two SELECT statements. This does not need to match the units – it could simply be “Value” if you like – but it seemed to make sense to use the units as the name.
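Put together, each post from the IotaWatt then lands in InfluxDB as line protocol shaped roughly like this (the numbers, timestamp, and the voltage output’s tag name are invented for illustration):

```
Watts,ct=ACCompressor Watts=1250 1585321200
Volts,ct=Input_0 Volts=121.7 1585321200
```

That is: measurement (red), then the tag set (blue), then the field (green) with its value, then the timestamp.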

I used $units in both the measurement and the field-key, so I get the same name in both places, but they are very different things. Measurement is a grouping, and it strongly isolates data. If, for example, I had used the IotaWatt’s device name for the measurement (e.g. putting $device in place of $units in the measurement input box), I would get two separate groups (red arrow), both of which would have field names of Watts – but I would have great difficulty combining them together properly. InfluxDB will do almost nothing across measurements, so pick that name most carefully.

What I wanted was to make this look, as much as possible, like it all came from one IotaWatt; that makes reporting on it easiest. I can always tell by “ct” tag which device it came from if debugging.

The result of all this is that my database now has per-minute data (the 60 at the top), updated every minute (that same interval, times a bulk send of 1).

Now I’m working on aggregation and presentation. More later.

I’ve got two IoTaWatt in my home. One is out at the meter and measures the mains at the meter, and also the mains of a panel in the outbuilding where the meter is. The outbuilding has some loads in the summer, but this time of year the only real load is a sewage pump that runs a few times a day. The other IoTaWatt is in the house and has mains as well as circuit breakouts for the house 200A panel.

I’ve never tried to aggregate these two, but @Linwood has inspired me, and I have a new install of influxDB with grafana. So I’m taking a fresh look at the problem. My motivation is that there really needs to be some R&D into how to do this if multiple units are to be a viable solution in commercial applications. I’m presently toying with a different approach.

Here is the setup for each of the units:

You can see that I am using $units for the measurement name, and $name (the output name) for the field-key. There is one tag-key “iota” and each of the units have a different value for this tag. This produces extremely low series cardinality. There are just three series:

Volts,iota=units
Watts,iota=units
Watts,iota=main

Basically, there will be a series for each unique units/iota combination. I believe this is well suited to the kinds of retrieval I will be doing. Reference the influxDB schema discussion and specifically:

Now the goal is to be able to use the measurements from different units in the same calculations. To demonstrate using this schema in a calculation, I took the iota=units and computed the “other” with a formula in the influx query and plotted it with the other that was computed by the IoTaWatt:

They are the same. I added 5W to the influx computed value so it would be visible, otherwise it completely disappears behind the other other :grin:

I couldn’t figure out how to do the calculation with grafana’s menu-based query interface, but I was able to type in the query that you see. Actually, the menu produced the template with the tags, group, and time.

So then I moved to the two IoTaWatt scenario. The iota=main unit measures the mains and the subpanel, called GH_main here. It’s a basic Y, so meter should equal GH_main + total_power (the house panel mains). Subtracting total_power from the meter then yields the usage by GH_main. Here is that computed value, plotted with the directly measured GH_main:

If you look closely, you can see some green traces here and there caused by the voltage drop in the mains cable running underground to the house when the water heater ran. It caused the power to be undermeasured compared to the same current going through the meter, hence the slightly negative difference.

I think this is a good way to handle multiple IoTaWatt.

First because it keeps cardinality low and should be more performant.

Second because this could result in more efficient influxDB uploads if IoTaWatt combined all of the values for each unit into a single line.

Last, because if used in a commercial setting where various groupings may be needed, the tags can be used. For example, let’s say that units were in each building on several different college campuses.

All of the units at each campus would have a tag equal to their ‘location’ i.e. Boston, Lowell, Amherst. ‘building’ might be dorm or gym or library etc.

Now if you wanted to see the total power used in each campus’s library, you could do that. Same for dorms. If each building had fields for AC or lighting or HW, those could be broken out by campus, or building type, or both. Moreover, if fields for various individual AC units were labelled AC01, AC02, or ACwhatever, they could be viewed individually, or aggregated using regular expressions.
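As a sketch (untested; the tag names are the hypothetical ones from this example, and I’m assuming each unit also posts a field called “main” for its mains):

```sql
-- Total power used in each campus's library:
SELECT sum(main) FROM Watts
WHERE building = 'library'
GROUP BY location

-- Individual AC units stored as fields AC01, AC02, ... can be viewed
-- singly, or aggregated with a regular expression on the field keys:
SELECT sum(/^AC[0-9]+$/) FROM Watts
GROUP BY location, building
```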

Does that make sense to anyone?

So this document, which I had seen, is why I did the reverse. :slight_smile:

To make sure I understand, what do you get from this:

select * from Watts order by time desc limit 10 

Do you get a row with lots of filled in fields, or separate rows by circuit and all but one field null? I assume the former, and indeed that dramatically reduces cardinality.

You will, I am sure, at least get a separate row by IotaWatt because of the tag.
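If I’m reading the schema right, each post from a unit would then be a single line of line protocol with one field per output, something like this (field names taken from your plots, numbers and timestamp invented):

```
Watts,iota=main total_power=4120,GH_main=850 1585321200
```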

There are two places I think this is needed, one is in dynamic queries, e.g. from Grafana, as you have demonstrated. What I’m working through though is how to consolidate and aggregate data over time; I do not think it is worth keeping minute-by-minute data for more than a week or two (one might argue not even that).

So I’m doing things like this:

CREATE CONTINUOUS QUERY kWh_per_hour ON HA RESAMPLE 
    EVERY 20m FOR 2d 
        BEGIN 
            SELECT integral(Watts, 1000h) AS kWh INTO HA.autogen.kWh 
            FROM HA.autogen.Watts GROUP BY *, time(1h) 
       END

(Indentation mine to make it easier to read).

This creates a new measurement rolling up Watts to kilowatt-hours, and because I use tags for circuit, the “*” in there preserves all those tags. Once I get this right it will have a different retention policy (not autogen).

In your case this would work similarly, I think; you would just need a whole string of integrals in the SELECT with separate “AS” names.
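i.e., something like this (untested sketch – I’ve only put in the two field names visible in your plots, and borrowed my own database and retention policy names):

```sql
CREATE CONTINUOUS QUERY kWh_per_hour ON HA RESAMPLE EVERY 20m FOR 2d
BEGIN
  SELECT integral(total_power, 1000h) AS total_power_kWh,
         integral(GH_main, 1000h)     AS GH_main_kWh
  INTO HA.autogen.kWh
  FROM HA.autogen.Watts
  GROUP BY *, time(1h)
END
```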

Your approach does imply hard coding of these field names in such queries though, whereas mine is dynamic. This becomes relevant on larger installs as you might add new circuits or new IotaWatt devices. These are not just in the continuous queries but in the Grafana queries as well, in some places, e.g.

select sum(mainWatts) - sum(measureWatts) as otherWatts from 
    (select sum(lastWatts) as measureWatts from 
        (select last(Watts) as lastWatts from Watts where ct<>'Main' group by ct)),
    (select last(Watts) as mainWatts from Watts where ct='Main') 
group by *

This calculates the “other” or un-monitored circuits as the difference between the Mains and the monitored circuits (which are coming from two IotaWatt devices). The third line shows what I’m talking about – it will handle any number of circuits from any number of devices, as all it cares about is “not Main”, and it can then sum them all. To do this with fields as circuits you would need to hard code each one in the math.

I have all of a couple days experience with influxdb, so I really have no idea if the smaller cardinality you are probably getting is more helpful than the… maybe extensibility is the word… of using a tag for the circuit names.

But I am sure glad you brought it up, as I had not even considered it.

To your discussion of larger installs, if we are talking InfluxDB and Grafana, one approach, if you do find the lower cardinality is worth it, is to write some automatic scripting. It would be very easy to generate the continuous queries for aggregation automatically, even the JSON for a basic Grafana dashboard.

Where I think this becomes difficult is for users who do their own dashboards and have a lot of hard-coded field lists (e.g. in sums or subtractions) and then add a device or circuit; they will struggle to find all the places to adjust (well, unless they are comfortable editing JSON in text editors – even that could be painful).

Incidentally, I wonder if it might be useful to allow different measurement names for included circuits. E.g. a Main and its branch circuits are in some ways fundamentally different, in that they should not be added together in sum() operations for a group by, so I have a lot of places that exclude them. If the upload to InfluxDB allowed the measurement name to be per output instead of overall, InfluxDB’s aggregation would by default aggregate only within the same “level” (for want of a better name), and you can (in Grafana) do math between them, but all data rollups in continuous queries would auto-magically not include items at different levels. For example you might have service entry as one level, building as another, panels as a third, and then branch circuits as a fourth. Having separate measurement names would implicitly keep these separated.

Random late night thoughts for what it is worth…

A follow up. I’ve been working off and on with influxdb and how to set things up. I am (for now) staying on the opposite course as @overeasy above, and sending circuit name as tags.

So what I did was design a set of retention policies for increasing levels of aggregation. For now I have:

name    duration   shardGroupDuration replicaN default
autogen 528h0m0s   168h0m0s           1        true
hourly  1800h0m0s  168h0m0s           1        false
daily   26400h0m0s 168h0m0s           1        false

That’s 22 days, 75 days and 1,100 days. So minute-by-minute data is saved for about 3 weeks (I’ll likely reduce that at some point). Note “autogen” is the InfluxDB name for the computer-generated default retention policy. You can change the default to something else, but it was just as easy to use it (leave the corresponding field blank in the setup and you get the default, and if you don’t change the default in InfluxDB it is “autogen”).

That data is aggregated into hourly totals and kept for a bit over 2 months. In turn that is aggregated into daily data and kept for a bit over 3 years.

I then created continuous queries that looked more or less like these:

CREATE CONTINUOUS QUERY kWh_per_hour ON HA RESAMPLE EVERY 1h FOR 2d BEGIN SELECT integral(Watts, 1000h) AS kWh INTO HA.hourly.kWh FROM HA.autogen.Watts GROUP BY *, time(1h) END
CREATE CONTINUOUS QUERY kWh_per_day ON HA RESAMPLE EVERY 1d FOR 3d BEGIN SELECT sum(kWh) AS kWh INTO HA.daily.kWh FROM HA.hourly.kWh GROUP BY *, time(1d) TZ('America/New_York') END

The first does the hourly aggregation, the second the daily aggregation. It’s important to adjust the “tz” time zone as needed to get the right cutoff for days at midnight local.

This allows me to have different flavors of graphs, from short term very detailed ones (for 22 days) to moderate term hourly (2 months) to many year day-over-day comparisons.

InfluxDB takes care of purging old data as needed based on the policies, and just changing a policy is all it takes if you want to keep some data longer or shorter.
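For example, stretching the daily rollups from about 3 years to 10 would just be (10 years ≈ 87600 hours):

```sql
ALTER RETENTION POLICY "daily" ON "HA" DURATION 87600h
```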

There are some quirks, though, that are worth mentioning. InfluxDB does not let you schedule these queries beyond what you see. So when I say “EVERY 1h” (1 hour) it runs at the top of the hour. There are two problems with that: (a) not all the data for that last minute may be there yet – in fact it almost certainly is not – and (b) it does NOT summarize the period just ending. My guess is (b) is to allow for (a), but it means that you are always a period behind. So at 11:30am you have data through 9am, not through 10am; or on April 24 you have data through April 22, not April 23.

You can manually run the commands to bring it up to date, and these could be scripted outside of influxDB to run off schedule (e.g. 5 minutes after the time), but I have found no way to do it inside of InfluxDB (note the time shift functions also shift the periods).
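For instance, a cron entry driving the influx CLI a few minutes past the hour could redo the most recent buckets (a sketch only; this mirrors my hourly continuous query, and the daily one would need the same treatment):

```
# 5 minutes past each hour: recompute the last few hourly buckets
5 * * * * influx -database HA -execute "SELECT integral(Watts, 1000h) AS kWh INTO HA.hourly.kWh FROM HA.autogen.Watts WHERE time >= now() - 3h GROUP BY *, time(1h)"
```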

Also note I have it resampling and overwriting data; this is to handle “normal” cases of missed data, e.g. network down, computer down, etc. IoTaWatt will buffer, then dump several minutes, hours or even days at once; the resample will then merge it in. If the gap is longer than the resample window I just have to run the statement by hand (without a period selected it does all of it). If you didn’t know: InfluxDB will only allow one row per timestamp per distinct series (tags, etc.), so as new data comes in it does not try to figure out what was there and what wasn’t; it just recalculates everything and writes it – if a point is already there and the same it stays the same, if different it is updated (overwritten), and if missing it is created.

Anyway… what I ended up with are graphs like these:

The top portion is (near) real time data for each circuit, plus temperature for reference.

The next two are recent data plots over time, and you can adjust the period (up to the maximum for the “autogen” or default data) to show reasonably precise power (the Load Over Time) and – since A/C is my biggest piece – the A/C load by hour along with temperatures. This is good for the period of the graph.

Finally there are two fixed-length graphs at the bottom with daily energy usage and daily A/C usage.

The top real time gauges link to an analysis page for whatever you click on, e.g.

The above is my fridge. There’s probably a lot more I can add to this one later, e.g. I can also put a dollar scale on the daily graph. I’ve been too fixated on the mechanics to think much about the analysis.

A clear issue in all this is cardinality and processing time as the data builds. The rollup and regular purge is intended to keep it manageable, but I will know only after I start getting at least the default and the hourly “full”.

Now what happens next is kind of interesting. I put them first into an iFrame in my Home Assistant dashboard. So for example:

I can click there and up comes the dashboard above. But I can also say “Hey google, activate show energy in bedroom”:

My LG TV calls up the Grafana graph and displays it.

Please do not ask me WHY I might need that, but it will really confuse visitors (it could be the Living Room or Den TV of course).

Incidentally that’s not just an image, it’s a live graph, and you can use the TV remote to select/change things like drill down or period covered.

I’m sure eventually I will find a good reason for it, in the meantime it’s kind of a cool solution in search of a problem. :smirk:


Nice project. Same basic theme as what I was doing, except I didn’t follow through with a HA setup. There are some technical issues raised here that I thought I’d comment on:

The IoTaWatt current log and history log are the same basic setup. Current is 5 seconds and history is one minute. The way IoTaWatt is set up, it would be trivial to add an hourly and a daily, but it’s not really necessary as IoTaWatt doesn’t need to integrate or sum to get the kWh or average Watts over any time interval. It’s just a function of how it keeps the data.

When I setup my version, I used RPs to limit the duration of each category, but took a different approach to the CQs. To aggregate the one minute data, I use

RESAMPLE EVERY 2m FOR 1m (minute downsamples)
RESAMPLE EVERY 2h FOR 1h (hour downsamples)
RESAMPLE EVERY 2d FOR 1d (day downsamples)

This solves the currency problem (pretty much) but doesn’t do much for the problem of historical data upload. That’s a real problem and I can’t find a good solution in influx. There are several threads raising this issue with influx, and their two solutions are:

  1. What you are doing, oversample and keep rewriting the old data.
  2. Use Kapacitor.

Your solution is only as good as the lookback window, and while the influx folks may think it’s a great solution on an AWS server, it might not scale down to a RPi very well. At a minimum, I would be concerned about potential wear to the SDcard if the activity causes rewrites.

I looked into Kapacitor, and while it’s a pretty elegant way to setup nodes and pipes, I couldn’t see a way to easily do this.

I’m currently considering writing an external cron task that would submit downsample queries.

Slick.

Your solution is only as good as the lookback window, and while the influx folks may think it’s a great solution on an AWS server, it might not scale down to a RPi very well.

I’m running on a HyperV on an old windows PC, on SSD, so I’m OK with it so long as it doesn’t degrade in some fashion due to internal InfluxDB data structures. I haven’t looked into what kind of maintenance/reorg is needed there (if any – I’m guessing with their Shard structure there is none).

I’m currently considering writing an external cron task that would submit downsample queries.

That’s likely the simplest, though I had in the back of my head to look if there’s something in HA where I can do it also (or Node Red).

One problem I find with cron is I tend to forget about my jobs; especially once they broke crontab up into user-specific files, there are now WAY too many nooks and crannies where I can write a cron job and then forget I wrote it.

Now one might call that a flaw in my memory, but I’m sticking with blaming Linux.

I really don’t get why a time series database can’t handle the concept of late-arriving data and scheduling queries. I mean, they call them continuous, but they aren’t. If they were – if all data subject to them ran through them on arrival – this wouldn’t be an issue!

Take a look at Kapacitor. I didn’t get deep enough to see if it can do that. There is a stream handler that would work if it can operate on the input stream to influxDB. The way I read it, it had to operate on a stream out of a query. If you have time see if you can figure it out. It’s a pretty modular concept.


This is exactly what I’m looking to do… Maybe I’ll setup an Influx / Grafana. I was trying to get away from my day-job with this, but this is too spot on. Nice work.

In the ongoing saga…

A few days ago I stupidly left a refrigerator door ajar and made a bit of a mess, and it occurred to me that IoTaWatt and HA give me the tools to deal with that. What I really want is a temperature alarm inside the fridge, but batteries do not work well there, and I’m not sure about running a USB cord in through the door insulation. Maybe.

But this is easy. I’m taking minute-by-minute watt measurements, and I see the fridge draws only a few watts at idle and 100+ when running, and tends normally to run in 30-45 minute stretches. So…

So I fired up Node Red and built a flow that is centered on this influxdb query:

select time, ct, Watts 
from Watts 
where Watts < 40 and ct =~ /Fridge/ 
group by * 
order by time desc 
limit 1 
tz('America/New_York')  

(Line breaks added for clarity.) I have two fridges, and the “ct” distinguishes them. This returns a message payload that has the time (sadly not TZ converted, so that aspect is probably moot) of the last time that fridge was using less than 40 watts.

Then in the next node I pull it apart, calculate whether I’ve complained recently or need to complain. The core math looks like this in a function node:

var now = new Date();        // current time
var speakstring = '';        // accumulates the spoken announcement
for (var loopcnt = 0; loopcnt < msg.payload.length; loopcnt++)
{ 
    var last = new Date(msg.payload[loopcnt].time);  // last sub-40W reading
    var diff = now - last;                           // ms it has been running
    var min = Math.round(diff / 1000 / 60); 
    if (diff > FridgeMaxRuntimeMS)                   // limit from context
    {
      speakstring += (speakstring === '' ? '' : ', '); 
      speakstring += msg.payload[loopcnt].ct + " has been running " + min + " minutes "; 
    }
} 

A nice aspect of Node Red is the loop – I could handle any number of devices that fit this sort of model.

FridgeMaxRuntimeMS is a context variable that gives the limit of how long I let it run before complaining (I use 90 minutes; the time is in milliseconds). I then construct an announcement string and let Google broadcast it on all the house speakers. There’s some logic in there so it will only repeat the announcement every 30 minutes, not every loop.
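The repeat-suppression part boils down to a timestamp comparison; here is a minimal sketch (the function name and parameters are mine – in the real flow the last-announced time would live in Node-RED flow context via flow.get()/flow.set()):

```javascript
// Return true if enough time has passed since the last announcement
// to allow a repeat. All arguments are milliseconds.
function shouldAnnounce(lastAnnouncedMs, nowMs, repeatIntervalMs) {
  return (nowMs - lastAnnouncedMs) >= repeatIntervalMs;
}

const THIRTY_MIN_MS = 30 * 60 * 1000;
// 31 minutes after the last complaint -> announce again;
// 10 minutes after -> stay quiet.
```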

I did this in Node-RED inside of HA, instead of in HA itself, as to do it in an HA automation you would still need that sort of query (probably in a value template that turns it into “alarm” or “not alarm”), but dealing with the “when and how do I alert” is much easier in Node-RED. I’m doing almost all automation in Node-RED now.

I also threw in a separate calculation into a rollup field for duty cycle that looks like this:

SELECT count(running) / count(total) AS DutyCycle INTO HA.hourly.DutyCycle 
FROM 
   (SELECT Watts AS total FROM HA.autogen.Watts GROUP BY *), 
   (SELECT Watts AS running FROM HA.autogen.Watts WHERE Watts > 40 GROUP BY *) 
GROUP BY *, time(1h) fill(0) 

This looks at the minute-by-minute periods to see when a device is running over 40 watts (a rather arbitrary “idle” limit) and saves the fraction of total intervals. So this is a decent estimate of the duty cycle in use.
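In plain code the estimate is just a ratio of samples; a sketch of the same idea (the names are mine, and 40 W is the same arbitrary idle threshold as in the query above):

```javascript
// Estimate duty cycle: the fraction of one-minute watt samples in which
// the device drew more than idleWatts.
function dutyCycle(wattSamples, idleWatts) {
  if (wattSamples.length === 0) return 0;  // like fill(0) in the query
  const running = wattSamples.filter(w => w > idleWatts).length;
  return running / wattSamples.length;
}
```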

I’m not sure it is hugely better than just kWh used over time, but I figure if it starts to show a more erratic nature, or starts to grow significantly, it might indicate some kind of issue. I didn’t do this just for the fridge but for everything – the A/C duty cycle should somewhat track outside temperature, and if over months I start seeing a significant change I can look into its efficiency or refrigerant levels.

Kind of cool the stuff you can get.

Oh… I left the door open on a fridge that just had sodas and such in it (i.e. nothing went bad); 90 minutes later Google starts complaining. I was surprised: after I closed the door it took almost 2 hours to stop running, so at least four complaints. Good thing Google’s got a polite voice.

Here’s what some of the data looked like. One thing I found odd was that my fridge apparently has two speeds for the compressor – after failing to cool adequately for a while, it doubled its current draw. You can see how the yellow envelope, which is duty cycle, pretty much parallels kWh, so it’s not clear how useful that is going to be over time, but who knows.

The peak at about 4pm on 3/27 is when I loaded it up after a trip to the store. What’s mysterious is the 5am peak on 3/29 – some kind of defrost cycle? It jumped up to 500w (from about 100 normally) for about 20 minutes.

Anyway, just sharing stuff in case it helps anyone with ideas.


Great details and use cases for how the IoTaWatt can help monitoring and alerting on various details. Thanks for the hard work and posting the info.

I would additionally suggest picking up a freezer temperature sensor (AcuRite sells some) that operates on the 433MHz frequency. Hook it into whatever tool you use via an SDR (software defined radio) and NodeRed as well. I use this type of setup to bring in temperatures from many sensors around my property, along with my energy production/consumption.

Good work on this method as well.

Hopefully @overeasy will yell if I am filling up his server too much…

So today I stopped using Continuous Queries and moved them into Node Red. By putting them there, I can run them any time with any parameters. I set up something simple so they all run at 6 minutes after every hour, and resample for 2 days.

The resample is all just done inside a simple function node like this:

var now  = new Date(); 
var from = new Date(now - (2 * 24 * 60 * 60 * 1000));   // 2 days back
// Truncate to even hour and even day boundaries:
var hourlyFrom = new Date(from.getFullYear(), from.getMonth(), from.getDate(), from.getHours(), 0, 0); 
var dailyFrom  = new Date(from.getFullYear(), from.getMonth(), from.getDate(), 0              , 0, 0); 
if (msg.payload === "DOALL")   // special inject: recalculate everything
{
    hourlyFrom = new Date("2000-01-01T00:00:00Z");
    dailyFrom  = new Date("2000-01-01T00:00:00Z");
}
var hourlyTo = new Date();
var dailyTo  = new Date(); 
var newMsg = { dailyFrom  : dailyFrom,  dailyTo  : dailyTo, 
               hourlyFrom : hourlyFrom, hourlyTo : hourlyTo 
             };  
return newMsg;

Essentially this just picks a time period from an even hour and day boundary about 2 days ago, up to the current time, and passes it out where the next function nodes weave it into a query string, e.g. like this:

var queryStr =  "SELECT integral(Watts, 1000h) AS kWh INTO HA.hourly.kWh FROM HA.autogen.Watts where time >= '" + msg.hourlyFrom.toISOString() + "' AND time <= '" + msg.hourlyTo.toISOString()  + "' GROUP BY *, time(1h)"; 
var newMsg = { query: queryStr };  
return newMsg;

The former function allows an override of “DOALL” which I inject from an inject node if I want to wipe all data and recalculate from the basic minute by minute data (obviously not possible beyond the retention period).

It would be easy to have two inject nodes, one running daily and one hourly, so it recalculates daily data only once a day, but I kind of like day-to-date data.

Maybe I’m old school, but I like that I can drive the queries any way and any time I like; the continuous queries’ default intervals, as well as exactly when they ran, were… annoying. Node-RED is a pretty cool automation tool.

Looks like this:

Can you share a info link to the freezer/refrigerator sensor you are using? Thanks.

You could go the more expensive route, such as this AcuRite package (still on 433MHz), but there is no need for the display unless you want to have one – you can still collect the readings via an SDR either way.
Brushed Stainless Steel Digital Refrigerator and Freezer Thermometer

However, I use 8 of these devices around the outside (US Northeast so it can get cold), and inside of the home. Indoor / Outdoor Temperature / Humidity Sensor – Weather Sensors & Parts | AcuRite Weather

You could put one of these in both the fridge and freezer and then collect the data over the air via an inexpensive SDR (Software Defined Radio). I use this (Amazon.com) from Amazon for the SDR, and an open source tool called RTL_433 which reads and decodes transmissions over 433MHz. I then use NodeRed to take that input and push it to EmonCMS, or another tool.

Hope that helps.

Thanks, Just what I’m looking for!

I use something similar on an RPi Zero: RTL_433 starts automatically, scans 433 and 915MHz, and sends the data to InfluxDB, then on to Grafana for alerts. Thanks again.

Sounds like you are on track for one or two of the tower sensors then. With that, you should be good to go with your existing setup. Let me know how it goes.

So I might make this work by putting it on a separate RPi, but my HA runs in a VM that can’t use USB.

I tried using a WIRED esphome-based device for this, and it worked beautifully, except the gaskets around fridge doors are not designed to let you run a wire inside; they leak air. The fridge wouldn’t be so bad, but I had a snowstorm in the freezer. Using an esphome device on batteries is not going to work well unless you like replacing batteries.

Have you run across any of the 433mhz devices whose base (I don’t mind having the display if needed) are then easily integrated via wifi? Or zwave?

PS. The wire-into-fridge depends a lot on the model. If I had a pure side-by-side, I would put a hole in the gasket and all would be well. But I have two fridges with french doors where the freezer is a pull-out drawer. This means you can’t go through the gasket (too much wire slack needed); instead the gasket has to snug up to the wire on closing, and this just doesn’t work. Maybe cut the gasket up and use lots of soft silicone. Maybe. Send photos if you get it to work.

Most units have internal electrical components like lights, sensors and fans. They usually exit through a harness in the back of the unit. Typically all that is accessible by removing some plastic covers. Might be worth looking into running wires out through the same hole in the back.

I have not looked for devices that use WiFi or Zigbee as a means of transmitting a signal. I know the 433mhz is the cheaper solution and this is how you can get a unit for ~12.00USD or cheaper. You could even build your own if you wanted using an ESP8266 and some sensors, etc. I think it may be more work and more expensive than the off the shelf option. I have a dedicated RPI running the software which has all the decoding and formatting options like JSON, XML, TXT, etc. which can be consumed by most tools like NodeRed.

A wired solution would be good if you can find a way to get the wires into the device easily without much loss of cold air. A drill and some silicone sealer?

I should probably take a run at doing that. I looked in the service manual for one of the fridges, and it looked like these were all sealed up in something resembling epoxy blocks (from a picture – remember I did not disassemble anything). Maybe I should take one apart and look. But they are full of food (you know, trying to limit grocery trips to once every 10 days or so) and I have this vision of breaking something, not being able to get parts or repairs, and meat spoiling…

But I really should look.

My approach would be to start by removing the back to see if I could locate where and how the harness goes through. Shouldn’t need to disturb the innards. If it looks possible, I’d probably see if there is a YouTube of someone servicing an interior component. I love YouTube DIY videos.

Another approach might be to look at a schematic to see how the units own temperature sensors work. If they are 10k sensors, it’s pretty easy to sense that directly from the external PCB using an arduino or ESP.
