Extracting a single value from a webpage

Hi. I am very new to openHAB and was wondering if it is possible to extract a single value from a webpage and use it as a variable in Blockly?
The website I want to extract from is: https://minspotpris.no/
I want to extract the current electricity price in my area, shown in the picture.

I don't know much about openHAB, so detailed instructions would be appreciated.
Thanks in advance.

This is very doable. You will need just the HTTP binding, and either the XPath transformation or the Regex transformation.

You will use the HTTP binding to create a Thing that connects to the webpage of interest. The binding will fetch the entire contents of the webpage.

Then you will want to create a Number type Channel for that Thing. That channel will include a configuration for transforming the incoming data. The docs for the XPath and Regex transformations link to testing tools that let you develop the correct path or regex (without knowing more, my guess is that XPath will be a little easier for a beginner). Once you have the channel configured to extract the relevant piece of information, Link a Number type Item to that channel.
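As a sketch of what that might look like in a `.things` file — the Thing ID, refresh interval, and especially the REGEX pattern here are placeholders, so you would need to develop the real pattern against the page's actual markup with one of the testing tools mentioned above:

```
Thing http:url:minspotpris "Min Spotpris" [
    baseURL="https://minspotpris.no/",
    refresh=600  // re-fetch every 10 minutes
] {
    Channels:
        // A REGEX transformation must match the whole document and
        // capture the wanted value in group 1; this pattern is invented.
        Type number : price "Current Price" [
            stateTransformation="REGEX:.*?([\\d.,]+) øre.*"
        ]
}
```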

Blockly will then allow you to access the state of that Item for further processing.
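For illustration, that last step might look like this in a JS Scripting rule (which is what Blockly generates under the hood). In a real rule the openHAB runtime provides the `items` registry, so it is stubbed here just to keep the snippet self-contained; the Item name `SpotPrice`, its state, and the threshold are all made up:

```javascript
// Stub of openHAB's "items" registry so this sketch runs standalone;
// inside a real rule the runtime provides it for you.
var items = {
  getItem: function (name) {
    return { state: '54.3' }; // pretend the linked Item holds this state
  }
};

// The part you would actually write (or build from Blockly blocks):
var price = parseFloat(items.getItem('SpotPrice').state);
if (price < 60) {
  console.log('Electricity is cheap right now: ' + price + ' øre/kWh');
}
```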


@JustinG: or what about just a single block of inline code:

The inline code is:

result = Java.type("org.openhab.core.model.script.actions.HTTP").sendHttpGetRequest(url, 5000)

That works too for getting the entire contents of the web page into Blockly, but it still doesn't extract the one value of interest. Whether it is more or less appropriate in this case depends on what the rule is expected to do, how the rule is triggered, and whether this information is also wanted anywhere else in another rule or in the UI.
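Once the page contents are in a variable, you still need to pull the number out, e.g. with a regex. A minimal sketch in plain JavaScript — the HTML snippet and the pattern are invented, so you would inspect the real page source to write the actual pattern:

```javascript
// Invented stand-in for the fetched page contents; the real markup
// on minspotpris.no will differ.
var result = '<span class="price">54,3 øre/kWh</span>';

// Capture the digits (decimal comma or point) before "øre/kWh".
var match = result.match(/([\d.,]+)\s*øre\/kWh/);

// Norwegian pages often use a decimal comma, so normalise it first.
var price = match ? parseFloat(match[1].replace(',', '.')) : null;
```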

Just one little word of warning: the XPath transformation will only work if the original web page is XHTML. Regular HTML is less strict about every opening tag having a closing tag, and if the web page in question isn't well-formed in that sense, XPath will fail.

It's been some time since I've played with this sort of thing, so perhaps more web pages follow XHTML now than used to. But if you get parsing errors when trying to use XPath, that may be why.