It doesn’t seem like much, but 1K of RAM means 1K less heap. I’m already trying to mitigate the 1K lost in the next release due to the latest core. Probably the most common problem reported these days is low-heap restarts. While that is nearly always associated with WiFi issues, it doesn’t help when the starting point is lower. Dynamically, each call will temporarily reduce heap by at least several K due to the verbose JSON response. The heap loss can be reduced by using more efficient PROGMEM for the formatting, but you are dumping so many metrics into the response that the buffer will still be very large, and if it exceeds about 1400 bytes it will cause multiple writes, which are blocking and suspend sampling.
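To make the write-fragmentation point concrete, here's a rough sketch (the 1400-byte figure comes from the typical TCP payload per segment on this platform; the function names are illustrative, not actual firmware code): any response over one segment's worth of bytes turns into multiple blocking writes.

```cpp
#include <cassert>
#include <cstddef>

// Roughly one TCP segment's payload; a response larger than this
// cannot go out in a single write.
static const size_t kMaxChunk = 1400;

// Number of blocking write() calls a response of `len` bytes would need.
// Each write beyond the first is another window during which sampling
// is suspended.
size_t writesNeeded(size_t len) {
    if (len == 0) return 0;
    return (len + kMaxChunk - 1) / kMaxChunk;  // ceiling division
}
```

So a 1.4K response is one write, but a verbose multi-metric response of, say, 4K is three blocking writes back to back.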
While it’s doable, there is a lot more that would need to be resolved. What you have coded serves your needs, but maybe not others'. As I look over the metrics I see being sent to influx: Amps are pretty common, some folks like to see power factor, and occasionally VAR.
If you just dump out everything available, it would be prohibitively large. But it doesn’t stop there. There is no capability to export outputs. That is a fundamental part of all of the uploaders. In fact, everything that is uploaded is essentially an output, as they use the script system to develop the output. A simple case in point is a US split-phase usage upload, where the total usage is the sum of the two mains. Nobody is really interested in the individual inputs, just the total. When solar is included, scripts are used to produce the usage, import and export metrics.
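The arithmetic those scripts do is simple but essential. Here's a hedged sketch of the kind of derived metrics involved (the struct and function names are mine for illustration, not IoTaWatt's actual script engine), assuming the mains CTs measure net grid flow, positive when importing:

```cpp
#include <algorithm>
#include <cassert>

// Illustrative derived metrics an uploader "output" script produces.
struct Derived {
    double usage;    // what the house actually consumes
    double import_;  // power drawn from the grid
    double export_;  // surplus pushed to the grid
};

// US split-phase: nobody wants Main_1 and Main_2 separately, just the sum.
double totalMains(double main1, double main2) {
    return main1 + main2;
}

// With solar, the raw inputs alone are meaningless to most users;
// the interesting metrics are all derived.
Derived solarMetrics(double mainsNet, double solar) {
    Derived d;
    d.usage   = mainsNet + solar;         // consumption = net grid + generation
    d.import_ = std::max(mainsNet, 0.0);  // importing when net flow is positive
    d.export_ = std::max(-mainsNet, 0.0); // exporting when net flow is negative
    return d;
}
```

So a house generating 3000W while pushing 500W back to the grid is consuming 2500W, importing nothing, exporting 500W. None of those three numbers is a raw input.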
I guarantee that there would be a push to include script outputs in any Prometheus endpoint. There is infrastructure within IoTaWatt for defining a scriptset to be used, for including that in the config.txt file, and for generating the metrics in the endpoint handler. It would require considerable work in the setup app (again, the infrastructure is there with the calculator), and it would require building a response handler that processes that data. It would also consume more heap.
Stepping back a bit and reading the Prometheus comparison to influx, the big contrast to me is that Prometheus is a real-time-only database. They state clearly that influx is better suited to billing, which can be interpreted as historical accuracy. Prometheus appears to just log what it gets, when it gets it. All of the existing uploaders in IoTaWatt maintain the integrity of the external database regardless of interruptions in communications or in the database system itself. Upon startup of the uploaders, the database is queried, the time of last update is determined, and timestamped uploads are resumed from that point. The data has historical integrity. The value of any historical reporting from a Prometheus database would be questionable.
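That resume protocol is worth spelling out, because it's what a scrape-based model can't do. A minimal sketch of the idea (function names hypothetical, not the actual uploader code): query the remote for its last timestamp, then backfill everything the local datalog has after that point.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// On startup the uploader asks the external database for the timestamp
// of its last entry, then resumes from the next interval, so any gap
// caused by an outage gets backfilled from the local datalog.
uint32_t resumeFrom(uint32_t lastRemote, uint32_t interval) {
    return lastRemote + interval;  // first interval the remote is missing
}

// Which locally logged timestamps still need to be uploaded?
std::vector<uint32_t> pending(const std::vector<uint32_t>& localLog,
                              uint32_t lastRemote) {
    std::vector<uint32_t> out;
    for (uint32_t t : localLog) {
        if (t > lastRemote) out.push_back(t);  // everything newer than remote
    }
    return out;
}
```

A pull-only scraper simply misses the intervals it couldn't collect; there is no equivalent catch-up step, which is why the historical record ends up with holes.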
In many ways the use of Prometheus is similar to various reporting systems that use MQTT. I have resisted supporting that as well, for the same reason: the data is real-time, with no ability to correct for gaps caused by external failures. The Home Assistant Energy integration suffers from the same problem, but it is third-party and uses the query API.
With limited resources you have to draw the line somewhere. I can count on one hand the number of inquiries expressing interest in Prometheus support. It’s about the same interest level as MongoDB. On the other hand, the interest in influx is, and has been, very strong.