Custom Services

I’m very new to openHAB and trying to develop a custom processing pipeline service.

The pipeline will be a plug-able system that should be configurable by the user.
Examples of plug-ins for this pipeline could be a filter or a data-type transformer.

The purpose of this pipeline service is to route all events from openHAB through it,
do some filtering, do some transformations and processing,
and, at least at the end of the pipeline, push some events/commands back onto the openHAB event bus.
Additionally, I need read and write access to the persistence layer in each plug-in.

Simple example:
Filter all events concerning doors, do some processing defined by the user,
and push an action/event back to openHAB that switches the lights in the relevant rooms on or off.
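To make the idea concrete, here is a minimal standalone sketch of the kind of pipeline I have in mind: each plug-in can either drop an event or transform it for the next stage. All names here are my own invention, not openHAB API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of a plug-able pipeline: each plug-in may drop an
// event (empty result) or transform it before the next stage runs.
public class PipelineSketch {

    // Minimal stand-in for an openHAB event: an item name plus a payload.
    public record Event(String itemName, String payload) {}

    // A pipeline stage: return empty to drop the event, or a (possibly
    // transformed) event to pass it on to the next plug-in.
    public interface Plugin {
        Optional<Event> process(Event event);
    }

    private final List<Plugin> plugins = new ArrayList<>();

    public PipelineSketch add(Plugin plugin) {
        plugins.add(plugin);
        return this;
    }

    // Run the event through every stage; stop as soon as one drops it.
    public Optional<Event> run(Event event) {
        Optional<Event> current = Optional.of(event);
        for (Plugin plugin : plugins) {
            if (current.isEmpty()) {
                break;
            }
            current = plugin.process(current.get());
        }
        return current;
    }

    // The door/lights example: keep only door events, then map each one
    // to a light command for the same room.
    public static Optional<Event> demo() {
        PipelineSketch pipeline = new PipelineSketch()
            .add(e -> e.itemName().startsWith("Door_")
                    ? Optional.of(e) : Optional.empty())
            .add(e -> Optional.of(new Event(
                    "Light_" + e.itemName().substring("Door_".length()),
                    e.payload().equals("OPEN") ? "ON" : "OFF")));
        return pipeline.run(new Event("Door_Kitchen", "OPEN"));
    }
}
```

Here the filter and the transformer are the two kinds of plug-ins mentioned above; the open question is how to feed real openHAB events into `run`.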

I have looked into the sources of the MySQL persistence bundle [1] and created a service similar to it.
I have configured it like a persistence bundle that pushes all events to my pipeline service.
All events are routed to this pipeline.
So far this is a good starting point for accessing all events,
but persistence and the pipeline won’t work at the same time.
I admit this is the wrong way to do it, but I do not know how to do it the right way.

To do it the right way, I need your help.

How can I create a service that can register itself, or be configured, to have access to all events
without hindering the persistence bundle from doing its work?
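If I understand the docs correctly, openHAB 2 / Eclipse SmartHome offers an `EventSubscriber` interface (`org.eclipse.smarthome.core.events.EventSubscriber`) for exactly this: any OSGi service implementing it receives events from the bus in parallel with persistence, without blocking it. The sketch below uses simplified stand-in types for the interface and the bus so it compiles on its own; the real dispatching is done by the framework.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Sketch of the EventSubscriber pattern: subscribers are registered with
// the bus and each one receives matching events independently, so a
// pipeline entry point does not interfere with the persistence bundle.
// The types below are simplified stand-ins, not the real openHAB API.
public class SubscriberSketch {

    public record Event(String type, String topic, String payload) {}

    // Stand-in for the real EventSubscriber (which also has getEventFilter()).
    public interface EventSubscriber {
        Set<String> getSubscribedEventTypes();
        void receive(Event event);
    }

    // Stand-in for the event bus: delivers each event to every subscriber
    // whose type set matches.
    public static class EventBus {
        private final List<EventSubscriber> subscribers = new ArrayList<>();

        public void register(EventSubscriber subscriber) {
            subscribers.add(subscriber);
        }

        public void post(Event event) {
            for (EventSubscriber subscriber : subscribers) {
                Set<String> types = subscriber.getSubscribedEventTypes();
                if (types.contains("ALL") || types.contains(event.type())) {
                    subscriber.receive(event);
                }
            }
        }
    }

    // A pipeline entry point that sees every event, mirroring the
    // "subscribe to all event types" case in the real API.
    public static class PipelineEntry implements EventSubscriber {
        public final List<Event> seen = new ArrayList<>();

        public Set<String> getSubscribedEventTypes() {
            return Set.of("ALL");
        }

        public void receive(Event event) {
            seen.add(event);
        }
    }
}
```

Is registering such a subscriber as an OSGi service the intended way to do this?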

Which are the relevant classes I need to extend?

How can I read and write from/to the persistence service/database in each of the plug-ins of the pipeline?
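For the read side, I believe the persistence layer exposes something like `QueryablePersistenceService.query(FilterCriteria)`, which returns historic item states. A standalone sketch of that usage pattern, again with simplified stand-in types rather than the real openHAB classes:

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Sketch of reading history from persistence: a plug-in builds filter
// criteria (item name, start time) and queries the service for matching
// historic states. The types below are simplified stand-ins for
// openHAB's QueryablePersistenceService / FilterCriteria / HistoricItem.
public class PersistenceSketch {

    public record HistoricItem(String itemName, String state, Instant timestamp) {}

    // Stand-in for FilterCriteria: restrict by item name and start time.
    public record FilterCriteria(String itemName, Instant begin) {}

    // Stand-in for a queryable persistence service, backed by memory.
    public static class InMemoryPersistence {
        private final List<HistoricItem> store = new ArrayList<>();

        public void store(HistoricItem item) {
            store.add(item);
        }

        // Return all stored states matching the criteria.
        public List<HistoricItem> query(FilterCriteria criteria) {
            List<HistoricItem> result = new ArrayList<>();
            for (HistoricItem item : store) {
                if (item.itemName().equals(criteria.itemName())
                        && !item.timestamp().isBefore(criteria.begin())) {
                    result.add(item);
                }
            }
            return result;
        }
    }
}
```

What I don’t know is how each plug-in should obtain a reference to the real service, and whether writing generated data back through `store` is acceptable or whether that belongs in a separate table/database.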

[1] :

Thanks in advance!

You could use rules and groups to achieve this?

Actually this is not possible, as it implies a static solution.

The purpose of the service I am developing is to find relations between events/actions
and suggest new actions to the user.
As the user accepts or declines the suggested actions, the system should, after a while,
automagically (via ML) formulate rules for those actions
and automate the suggested actions in the future.

That is also the reason why read/write access to the database is needed in every plug-in of the pipeline.
Some plug-ins need to access historical events/actions, and some generate more appropriate data
and save it.