Entso-E binding for Nordpool spot prices

I developed this solution, which shifts energy consumption to the cheap hours of the day based on the Nordpool day-ahead market prices: Control a water heater and ground source heat pump based on cheap hours of spot priced electricity - #13 by masipila

The spot prices are fetched from the Entso-E Transparency Platform’s API. The script action of the rule that fetches the prices looks like this. The helper files (date-helper.js, entsoe.js and influx.js) are available in the thread linked above.

var dh = require('kolapuuntie/date-helper.js');
var entsoe = require('kolapuuntie/entsoe.js');
var influx = require('kolapuuntie/influx.js');

// Entso-E bidding zone.
var zone = '10YFI-1--------U';

// Entso-E API access token.
var token = 'insert-your-access-token-here';

// Multiplier for VAT.
var tax = 1.24;

// Get the date range in the correct format for the Entso-E API.
var start = dh.getEntsoStart();
var end = dh.getEntsoEnd();

// Read the spot prices and write them to the database.
var points = entsoe.getSpotPrices(start, end, zone, token, tax);
influx.writePoints('spot_price', points);

What this rule does:

  • It fetches the spot price data from the Entso-E API
  • Transforms the XML response to JSON with an XSLT transformation, which is licensed under the MIT license
  • Parses the spot prices from the JSON
  • Writes the future-timestamped points to InfluxDB via the HTTP API of the Influx server.
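For context, that last step boils down to serializing the points into InfluxDB line protocol and POSTing them to the server's /write endpoint. A minimal sketch of the serialization, assuming InfluxDB 1.x line protocol with nanosecond timestamps; the function name and point shape are my own, not necessarily what influx.js actually uses:

```javascript
// Sketch: serialize future-timestamped points into InfluxDB 1.x line protocol.
// Each point is assumed to be { time: millisecond epoch, value: number }.
function toLineProtocol(measurement, points) {
  return points
    // Convert milliseconds to nanoseconds by appending six zeros as a string,
    // avoiding 64-bit precision loss in JavaScript numbers.
    .map(p => measurement + ' value=' + p.value + ' ' + p.time + '000000')
    .join('\n');
}

// Example: two day-ahead prices with future timestamps.
const lines = toLineProtocol('spot_price', [
  { time: 1677628800000, value: 3.274 },
  { time: 1677632400000, value: 3.121 }
]);
// The resulting lines can be POSTed to /write?db=<database>&precision=ns
```

The string concatenation for the timestamp is deliberate: a nanosecond epoch exceeds JavaScript's safe integer range, so multiplying by 1 000 000 can silently lose precision.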

I’m considering volunteering to generalize the spot price fetching as a Binding, so that it would be easier for other community members to use for their own use cases. However, I would appreciate a bit of guidance, since I’m still quite new to openHAB and especially new to contributing to openHAB (okay, the documentation contribution in the thread linked above is quite a significant contribution, but I mean contributing code here).

My current solution is tightly coupled to InfluxDB and does not use any openHAB abstraction layers. The influx.js mentioned above has the Influx server’s access parameters hard-coded in it.

First of all, I acknowledge that other users might have different database servers than InfluxDB. Secondly, even if this Binding required InfluxDB, the connection parameters could most probably be read from openHAB, which uses the same Influx database for normal persistence purposes.

The point here is that the time series this rule writes is in the future, because we are talking about day-ahead prices, which can be fetched for tomorrow this afternoon. As far as I understand, the openHAB API does not support writing points with arbitrary future timestamps.

  • Question 1: Is this assumption (that the openHAB API does not support writing points with future timestamps) correct? How do the weather forecast Bindings handle this kind of situation, where we want to write a future time series?
  • Question 2: If it is correct, does that mean that this Binding will have to be database specific (for example, InfluxDB specific or MySQL / MariaDB specific)?

About the open source licensing…

The Entso-E API returns the spot prices in a hard-to-parse XML format because of the way it declares the XML namespaces. It can of course be parsed, but when I did, the code got so hard to read that I was not able to read my own code after one beer anymore. To overcome the XML namespaces challenge, I used the XSLT transformation published here: https://www.bjelic.net/2012/08/01/coding/convert-xml-to-json-using-xslt/ It is published under the MIT license. According to openhab-addons/LICENSE at main · openhab/openhab-addons · GitHub, openHAB add-ons must comply with the Eclipse license.
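For what it's worth, the namespace pain can also be sidestepped without XSLT by matching only on local element names. This is a fragile regex-based sketch, not the approach used above, and it assumes the day-ahead document's Point / position / price.amount elements appear without namespace prefixes (i.e. under a default namespace):

```javascript
// Extract (position, price) pairs from an Entso-E day-ahead XML response,
// ignoring namespaces entirely by matching on local element names.
// Fragile by design: a quick sketch, not production-grade parsing.
function parsePricePoints(xml) {
  const points = [];
  const pointRe = /<Point>([\s\S]*?)<\/Point>/g;
  let m;
  while ((m = pointRe.exec(xml)) !== null) {
    const pos = /<position>(\d+)<\/position>/.exec(m[1]);
    const price = /<price\.amount>([\d.]+)<\/price\.amount>/.exec(m[1]);
    if (pos && price) {
      points.push({ position: Number(pos[1]), price: Number(price[1]) });
    }
  }
  return points;
}
```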

  • Question 3: Is my assumption correct that MIT-licensed code cannot be committed to the openHAB repo?

I have a couple of follow-up questions which depend on the answers to these, but I’ll get back to those once these first questions are clear.



I think that fetching spot prices and storing them to a DB should be separate concerns, meaning the binding’s duty could be just to fetch spot prices from a provider like Entso-E. So far, an unwritten rule has been that bindings shouldn’t access openHAB persistence services directly.

A spot price binding could provide channels for the current price and future prices, in the same way as weather bindings do, which provide current weather attributes and a forecast for the future (see e.g. the FMI weather binding).

You could develop one binding which supports multiple providers in the future (a spot price binding), or the approach could be that one binding supports just one provider (an Entso-E binding).

Thanks Pauli!

So the data model could look like this, if I’m following you correctly?

Thing: Spot Prices

  • Channel 1
    Item - Name: Spot price time 1. Example value: 2023-02-28T23:00+00:00
    Item - Name: Spot price 1. Example value: 3.274

  • Channel 2
    Item - Name: Spot price time 2. Example value: 2023-03-01T00:00+00:00
    Item - Name: Spot price 2. Example value: 3.121

  • Channel 24
    Item - Name: Spot price time 24. Example value: 2023-03-01T22:00+00:00
    Item - Name: Spot price 24. Example value: 2.017
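If I'm reading this model right, the binding (or a rule) would fan the ordered day-ahead points out to numbered items. A toy sketch of that mapping, with made-up item names:

```javascript
// Fan a day-ahead series out to numbered time/price "item" updates.
// The item naming scheme (SpotPriceTime_N / SpotPrice_N) is hypothetical.
function toItemUpdates(points) {
  const updates = {};
  points.forEach((p, i) => {
    updates['SpotPriceTime_' + (i + 1)] = p.time; // the slot's timestamp
    updates['SpotPrice_' + (i + 1)] = p.price;    // the slot's price
  });
  return updates;
}
```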

I also recently found this service through AMS reader, and also for a moment considered writing a binding for it. However, for me to use it personally, we would also need to integrate some currency exchange service, because the prices are in euros. See this issue I created for discussing these things in general:

Therefore, to start with something simpler, I ended up creating a binding for the Danish service Energi Data Service:

I have the same problem as you regarding how to model future prices. For now I’m providing a JSON channel with this data, so it can be parsed and used by rules/automations. Having 24 channels, where each represents an hour, seems very much like a workaround. It would also be quite dynamic: each hour it’s rotated, so a channel will have a different meaning (e.g. 15:00 tomorrow, not 15:00 today, once we are past that point in time).

Having support for persisting future data (forecasts and prices for example) in core would be preferable. Some discussions, proposals and context here:

See also this post:

I know I’m not answering your questions, but I hope adding this additional context can help you.

Exactly. The suggested approach with 24 different channels populating 24 x 2 items (one for the datetime and one for the actual value) works for fetching the data, but it’s of pretty much no use for energy optimization use cases like yours and mine. The future datetimes must be persisted so that they can be used further for calculations, and simply for plotting with openHAB Charts or Grafana or whatever.

If the Entso-E binding only fetched the data and used this 24 x 2 items approach, there would then have to be a Rule or something that iterates over the 24 x 2 items, reads the datetimes and values, and persists the values with the indicated future timestamps. But then we are back to square one: as far as I know, persisting values with future timestamps is not possible via the openHAB API. That is why I ended up bypassing it altogether and writing the data directly to InfluxDB via the InfluxDB HTTP API. That works for me, but it requires a bit too much tinkering to be a widely adopted solution.

The fact that my script is

a) fetching the data directly (not utilizing openHAB for anything other than invoking the script per the Rule schedule) and

b) writing it to the database directly (not utilizing openHAB at all)

means that openHAB is also unaware of these Items.

To overcome point b), I have created an Item with the unique ID “spot_price”, which happens to be the same name my script uses as the measurement name when writing to InfluxDB via its HTTP API. This “workaround” means that I can access the persisted data (both past and future) from openHAB and, for example, create Charts that render data from either the past or the future.

Much of this comes back to the question: does openHAB support persisting data with arbitrary timestamps? If we were able to persist future-timestamped values, that would mean, for example, that

  • We are able to store future-timestamped energy prices
  • We are able to store future-timestamped wind power production forecasts
  • We are able to store future-timestamped solar power production forecasts
  • and so on…

This allows me to optimize our power consumption to the cheap hours of the day. I was able to save 400 € from August to December last year with the solution linked above.

I don’t know if it’s possible to persist data with past timestamps, but I would also need that. I do not have a power consumption meter at home, but the hourly consumption data is available from the API of my electricity company / network company on the next day. I fetch it from there and persist it to my InfluxDB (again bypassing openHAB and writing directly with the InfluxDB HTTP API).


@laursen Thanks especially for this link: https://github.com/openhab/openhab-core/pull/3000

It addresses exactly this topic: being able to persist data with historical and future timestamps. Future-timestamped points are the key to pretty much any kind of consumption optimization where the target is anything other than the traditional “try to maintain the given setpoint temperature”.

I commented on that pull request, let’s see how it goes.


I believe yes, but I also don’t understand why you would need this. You might have some software architecture ready in your head that requires this, but there are also ways to do without future values.
A weather binding provides channels like “temperature in 1 hour from now”, “temperature in 3 hours from now”, temperature in 6/9/12/24 hours, so for each slice or hour you would have a channel of its own.
Now if you persist that future value instead of the current value, you can use persistence to look into the past. So the temperature forecast for 1 hour from now would be something like temp_in_3h.historicState(now.minusHours(2)) (well, this is not proper code, but I think you get the idea).
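The trick can be illustrated with a toy in-memory store, nothing openHAB-specific: persist the "+3 h forecast" under the timestamp at which it was received, and read it back later with the matching offset:

```javascript
// Toy persistence: a map of timestamp (ms since epoch) -> persisted state.
const store = new Map();

const HOUR_MS = 3600 * 1000;

// At time t we receive a forecast targeting t + 3 h and persist it under t.
function persistForecast(receivedAt, forecastValue) {
  store.set(receivedAt, forecastValue);
}

// Later, the forecast that targeted time T was persisted at T - 3 h.
function forecastFor(targetTime) {
  return store.get(targetTime - 3 * HOUR_MS);
}

const t0 = Date.UTC(2023, 1, 28, 12); // 2023-02-28 12:00 UTC
persistForecast(t0, 21.5);            // a forecast made for 15:00 UTC
```

forecastFor(t0 + 3 * HOUR_MS) then returns 21.5, i.e. the value that was forecast for that hour.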

Consider not using Influx or MariaDB or whatever DB outside of OH; just use standard persistence. Not everybody is able to run such a database.
Depending on the time series action you want to execute, this may prove difficult with only the very basic operations standard persistence provides, but in my experience with the energy management system I’m selling, it can be done; no advanced magic required, basic ops will do.
And it will make your solution applicable to any OH user. No need to install Influx, let alone to have hardware it can run on (Influx on a Raspi isn’t a clever idea, but most people use Raspis as their OH hardware).

@mstormi thanks for your thoughts!

I was originally considering having 24 separate items for storing the day ahead prices, but…

… then I was not able to figure out how I would be able to plot them as a chart.

User story: As an end user I want to do a visual comparison (chart) of today’s spot prices and tomorrow’s spot prices so that I can plan our energy consumption.

So something like this, where the user can navigate between the days with the date navigation:

Is my thinking too stuck in the normal time-series way of thinking? Do you have any hints on which way to think?

That’s exactly what I would like to do!


Possibly so. The point is that the very moment you retrieve the new day-ahead prices, the data is persisted with the current date as the x value, but the meaning of the data is +1 day, so it should really be (x + 24 hrs). You can’t do that, though, as you cannot force a specific x to be used; it’s always “now”.
If the intention is “only” to display the price for the day that you select in charts, I think you have to apply a trick something like this:
Use a second item and a rule to set it to your original item’s value as it was 24 hours ago.
Pseudo code: DayAheadPriceItem.historicState(now.minusDays(1)).
Run the rule every minute via cron or so.
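Stripped of the openHAB specifics, that per-minute rule just does an offset lookup: the second item's value at time t is the original item's value at t minus one day. A sketch, where historicValueAt stands in for the historicState call in the pseudo code:

```javascript
const DAY_MS = 24 * 3600 * 1000;

// historicValueAt is any function (timestampMs) -> value; the shifted
// item at time t mirrors the original item's value from t - 1 day.
function shiftedValue(historicValueAt, t) {
  return historicValueAt(t - DAY_MS);
}
```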

I do not have enough knowledge about this stuff, so this may not help: what about How to overlap an offset time series in OH3? (Corelate yesterdays values vs. today) - #6 by ysc ?


@mstormi I’m mostly following what you’re saying, but my brain still tilts.

The thing is that the result set I receive from the API is not now + 24 h.

  • If I call the API (and get a successful response) on Feb 18 at 14:00 CET, I will get a time series response with 24 points covering the range from Feb 19 00:00 CET until Feb 19 23:00 CET.
  • If I call the API (and get a successful response) 1 hour later, I still get a time series response with the same 24 points covering the range from Feb 19 00:00 CET until Feb 19 23:00 CET.

So the offset is not “now + 24h” or “now + 1d”.

Making 24 channels for the spot prices and another 24 channels for their timestamps sounds confusing to the max. Maybe it’s easier to create one channel which contains the time series as a JSON array. It cannot be visualized using Charts, but at least the data is then available, for example, to Rules that can then do whatever they want with it.
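A rule consuming such a JSON channel could then, for example, pick the N cheapest hours. A sketch, assuming the array shape [{ time, price }, …] (the shape is my assumption, not a settled format):

```javascript
// Pick the n cheapest hours from a day-ahead series delivered as a JSON
// string; assumed shape: [{ time: ISO string, price: number }, ...].
function cheapestHours(json, n) {
  return JSON.parse(json)
    .sort((a, b) => a.price - b.price) // cheapest first
    .slice(0, n)
    .map(p => p.time);
}

const json = JSON.stringify([
  { time: '2023-02-28T23:00+00:00', price: 3.274 },
  { time: '2023-03-01T00:00+00:00', price: 3.121 },
  { time: '2023-03-01T01:00+00:00', price: 2.017 }
]);
// cheapestHours(json, 2) -> ['2023-03-01T01:00+00:00', '2023-03-01T00:00+00:00']
```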

Replying to myself… A Binding that would fetch the data and save it as JSON to an Item would not bring spot price energy optimizations any closer to a non-developer openHAB user than what I have already documented in Control a water heater and ground source heat pump based on cheap hours of spot priced electricity - #13 by masipila

Maybe abstracting the spot price fetching to a Binding is just not worth it…

That’s right. To make your solution/binding a success (= make it usable for as many not-so-technically-proficient users as possible, too), the data must be available as channels, so users can use it right away without additional postprocessing.

But this now is the wrong turn:

For sure it is! Because anyone who wants to make proper use of this info to control his heating or whatever device relies on it.
Then again, visualizing isn’t so important; a well automated system doesn’t even need any.
But easy access to the available pricing information is very important to ease programming the run times of your heat pump or other consumers that you want to run at specific times in order to benefit from cheap power.

Step back for a second and have a look at other threads on the forum; search for the Tibber and aWATTar bindings. These are dynamic tariff providers available (also) in Germany.
Check out their binding docs to see how they represent the pricing information in terms of channels.

The aWATTar binding, for example, has a “cheap now” switch-type channel to allow for the easiest usage one can think of; see e.g. this thread.
It allows you to create multiple OH Things with different settings for a “consecutive hours” parameter, which is very similar to your number of slices.
So users have the choice at any time. They can, for example, create two OH Things, one with consecutive=1, resulting in what would be 1 slice in your solution.
In parallel, they can create a second Thing with consecutive=<some more>.
Depending on their application context (referring to our recent conversation: depending on whether it’s a deep winter’s day or not, so we want to use the 1-slicing or 3-slicing algorithm), they will use a channel of thing1, or the same channel but from thing2.
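The "consecutive hours" idea boils down to finding the cheapest contiguous window in the hourly price series. A sketch of that search (a plain brute-force scan for illustration, not the binding's actual code):

```javascript
// Find the start index of the cheapest run of `consecutive` hours
// in an hourly price array (brute-force scan, fine for 24-48 points).
function cheapestWindowStart(prices, consecutive) {
  let best = 0;
  let bestSum = Infinity;
  for (let i = 0; i + consecutive <= prices.length; i++) {
    let sum = 0;
    for (let j = i; j < i + consecutive; j++) {
      sum += prices[j];
    }
    if (sum < bestSum) {
      bestSum = sum;
      best = i;
    }
  }
  return best;
}

// cheapestWindowStart([5, 4, 1, 1, 1, 6], 3) -> 2 (the run of three 1s)
```

With consecutive=1 this degenerates to simply picking the single cheapest hour, matching the 1-slice case above.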

Thanks for the hints to check out Tibber and aWATTar.

Notes to self:

Tibber seems to be storing the future prices in a channel called “Tomorrow prices” as JSON:

And aWATTar seems to have 48 channels, 24 for today and 24 for tomorrow…

In case somebody is following this thread:

This looks extremely promising: aWATTar binding: Beta and discussion - #76 by masipila


Last autumn I made a basic binding for fetching prices from Entso-E. For now it’s quite outdated and I haven’t used it in a while, but the base idea for channels and such was to create 24 channel groups for the future hours and then separate channels in each group to hold the price and the timestamp. The update would run every hour. Two different binding settings were for price compensations: one for adding tax and one for adding a margin (what your power grid company charges you per kWh). My target was to move the price data to a separate automation system, so there was no need to visualize any part of it.

But what I wrote follows Pauli’s idea, as it has multiple channels and no persistence. From a coding perspective, IMO this is simpler, and creating multiple channels and grouping them makes it easier to handle the data in rules. For example, iterating over all the hourly channel groups and separating them further into another set of groups based on their price would be quite easy.
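That grouping step could look roughly like this as plain code; the price thresholds and the group names are made up for illustration:

```javascript
// Partition hourly price points into groups by price level.
// The limits are illustrative; real thresholds would be configurable.
function groupByPrice(points, cheapLimit, expensiveLimit) {
  const groups = { cheap: [], moderate: [], expensive: [] };
  for (const p of points) {
    if (p.price <= cheapLimit) groups.cheap.push(p);
    else if (p.price >= expensiveLimit) groups.expensive.push(p);
    else groups.moderate.push(p);
  }
  return groups;
}
```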

Just leave InfluxDB and the visualization to separate scripts?