JDBC can connect to TimescaleDB, but it treats it like a normal SQL database.
I wanted a persistence service that actually uses TimescaleDB as a time-series database.
What it does
Stores states in a TimescaleDB hypertable
Supports per-item downsampling (AVG, MIN, MAX, SUM) via metadata
Supports global and per-item retention
Supports optional compression of older data
Supports normal openHAB persistence queries and state writes/removes
Works well for SQL/Grafana use cases
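For context, the retention, compression, and downsampling points above map onto native TimescaleDB mechanisms. A minimal sketch of those building blocks (the hypertable name item_state and its columns are assumptions for illustration; the addon's actual schema and internals may differ):

```sql
-- Compression: mark the hypertable compressible, then compress chunks
-- older than 30 days in the background.
ALTER TABLE item_state SET (timescaledb.compress);
SELECT add_compression_policy('item_state', INTERVAL '30 days');

-- Retention: automatically drop chunks older than one year.
SELECT add_retention_policy('item_state', INTERVAL '365 days');

-- Downsampling, Grafana-style: bucket raw states into 5-minute averages.
SELECT time_bucket(INTERVAL '5 minutes', time) AS bucket,
       AVG(value) AS avg_value
FROM item_state
WHERE item_id = 42
  AND time > now() - INTERVAL '7 days'
GROUP BY bucket
ORDER BY bucket;
```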
Goal
Better long-term performance and lower storage usage for larger installations.
If you use TimescaleDB (or plan to), I would really appreciate your feedback.
Super interesting! I have been wanting to drop InfluxDB for a while, ideally in favour of Postgres/TimescaleDB, since I have an instance of that running anyway.
I have a question, though, regarding how to pull data out of it again, which I hope you might be able to shed some light on.
With two items defined as follows, I am able to pass additional data through to InfluxDB:
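Along these lines (item names here are placeholders): the influxdb metadata namespace redirects both items into one shared battery measurement.

```
Number Phone_Battery  "Phone battery [%d %%]"  { influxdb="battery" }
Number Tablet_Battery "Tablet battery [%d %%]" { influxdb="battery" }
```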
This allows me to use Grafana to create a graph based on the battery measurement in InfluxDB, and I automatically get all items that have a battery. I do something similar with temperature as well.
Are you aware of a way to do something similar to this with Postgres/TimescaleDB instead of InfluxDB?
@peterhoeg For TimescaleDB I have not implemented it this way yet, but it could be done. At the moment the timescaledb persistence only reads the metadata; it does not write it to the database. We could extend the current persistence to write the metadata values as well.
In order to save space, I use an item_meta table to keep the item names etc.
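As a rough illustration of that idea (names and columns are simplified; the addon's real schema differs in details):

```sql
-- Item names are stored once in a small lookup table...
CREATE TABLE item_meta (
    item_id   SERIAL PRIMARY KEY,
    item_name TEXT NOT NULL UNIQUE
);

-- ...so each state row only carries a compact integer reference.
CREATE TABLE item_state (
    time    TIMESTAMPTZ NOT NULL,
    item_id INTEGER     NOT NULL REFERENCES item_meta (item_id),
    value   DOUBLE PRECISION
);

-- Turn the state table into a hypertable partitioned by time.
SELECT create_hypertable('item_state', 'time');
```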
That said, I'm currently waiting for the maintainers to approve the new persistence bundle. Once that is done, I could add such an enhancement to it.
I unfortunately don't know enough about the persistence layer to say anything remotely intelligent about it, but that's not going to stop me! Off-hand, I would imagine the InfluxDB-specific metadata configuration could be made generic, so that all persistence services have a mechanism to handle metadata. That said, it would probably have an easier path to mainline if you make it specific to your timescaledb persistence bundle.
In any case, as soon as you have metadata writing supported, I'll flip in a heartbeat. One of the good things about having generated configuration is that it's very easy to make these kinds of moves.
@BrettLHolmes I have not provided a build version so far. The PR is under review right now; once it makes it into main, the next snapshot will contain it.
For the cloud connection: I guess you can use it, but I have never tested it, so just give it a try. Whether this is really a fast approach depends on the physical distance between your openHAB instance and the cloud endpoint.
@seime I migrated from MongoDB, which is completely different from JDBC. I did this with a direct bulk copy using a Python script (thanks to Claude Code). For a more generic approach, you could use the openHAB persistence REST API to get all data for an item and write it to the other service.
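A rough sketch of that generic approach, assuming the standard persistence REST endpoints (the item name, service IDs, and the exact timestamp format are placeholders/assumptions; newer openHAB versions will also want an API token for the write calls):

```python
# Sketch: copy one item's history between two openHAB persistence services
# via the REST API. Item name and serviceIds below are placeholders.
from datetime import datetime, timezone

import requests

OPENHAB = "http://localhost:8080"
ITEM = "Livingroom_Temperature"   # placeholder item name
SOURCE = "mongodb"                # placeholder source serviceId
TARGET = "timescaledb"            # placeholder target serviceId

# Read the item's full history from the source service.
resp = requests.get(
    f"{OPENHAB}/rest/persistence/items/{ITEM}",
    params={"serviceId": SOURCE, "starttime": "1970-01-01T00:00:00.000Z"},
)
resp.raise_for_status()

# Write each datapoint into the target service. The GET returns epoch
# milliseconds; the PUT expects a yyyy-MM-dd'T'HH:mm:ss.SSSZ timestamp.
for point in resp.json()["data"]:
    millis = int(point["time"])
    ts = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
    iso = ts.strftime("%Y-%m-%dT%H:%M:%S") + f".{millis % 1000:03d}+0000"
    r = requests.put(
        f"{OPENHAB}/rest/persistence/items/{ITEM}",
        params={"serviceId": TARGET, "time": iso, "state": point["state"]},
    )
    r.raise_for_status()
```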
I’m happy to share that the PR got approved and the new timescaledb persistence addon is available in openhab-addons. You can find the current snapshot build here: JFrog