Nevertheless, I see further problems. I don't just need to publish a higher data load once, but several times. I have many items whose states change regularly. Let me give you an example: I have about 48 lamps, each with a switch state, a brightness, color values, etc. If I switch them all on at the same time, that is 144 states at once. So I have several scenarios where 50 to 300 states have to be published as quickly as possible. Even without the example of turning on all the lights at once, I simply have many devices that report their status every second.
As I understand your suggestion, it would only help me in the one case where I publish several items myself. The items must still be detected via a ValueUpdateEvent, so it can still happen that nearly 300 items pass on their state or command almost simultaneously.
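For the burst case (hundreds of items updating almost simultaneously), one common mitigation is to coalesce updates and publish only the newest state per item on each flush. This is a minimal sketch of that pattern, not HABApp's actual API; `BatchPublisher`, `on_update`, and the `publish_fn` callback are all hypothetical names invented for illustration:

```python
class BatchPublisher:
    """Coalesce bursts of item updates so that only the newest state per
    item is published on each flush. A sketch, not part of HABApp itself."""

    def __init__(self, publish_fn):
        # publish_fn: hypothetical callback, e.g. an MQTT client's publish
        self.publish_fn = publish_fn
        self.pending = {}

    def on_update(self, item, state):
        # Newest state wins; earlier updates in the same burst are dropped
        self.pending[item] = state

    def flush(self):
        # Publish the latest state of every item seen since the last flush
        for item, state in self.pending.items():
            self.publish_fn(item, state)
        count = len(self.pending)
        self.pending.clear()
        return count


# Example: 48 lamps, each with switch, brightness and color channels.
# Even if every channel fires twice during the burst, only 144 distinct
# (item, channel) states get published on the flush.
sent = []
bp = BatchPublisher(lambda item, state: sent.append((item, state)))
for lamp in range(48):
    for channel in ('switch', 'brightness', 'color'):
        bp.on_update(f'lamp{lamp}/{channel}', 'ON')
        bp.on_update(f'lamp{lamp}/{channel}', 'ON')  # duplicate in same burst
print(bp.flush())
```

The trade-off is a small added latency (one flush interval) in exchange for a bounded publish rate during bursts.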
That is the thing - I started two HABApp instances on the same machine and published 10k states in 6 seconds to a local mosquitto broker, and it works without a disconnect on either instance.
This gives me about 1600 msgs per second - though obviously not on a Raspberry Pi.
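The throughput figure follows directly from the benchmark numbers quoted above (10k states in 6 seconds); a quick sanity check:

```python
# Sanity check of the quoted benchmark: 10 000 states in 6 seconds
messages = 10_000
seconds = 6
rate = messages / seconds
print(f"{rate:.0f} msgs/sec")  # ~1667, matching the "about 1600" figure
```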
You can start the benchmark yourself with the -b command line switch.
Do the other slaves disconnect during the benchmark?
It doesn’t lose the connection; openHAB responds to a request with “Internal Server Error” and then HABApp tries to reconnect, because this is typically observed when OH is shutting down or starting. The issue is definitely on the OH side.
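The reconnect behavior described (retry after the server answers with an internal error) can be sketched roughly like this. This is not HABApp's actual implementation; the `fetch` callable, the backoff parameters, and the function name are assumptions made for illustration:

```python
import time

def request_with_reconnect(fetch, max_tries=5, base_delay=0.1):
    """Retry a request with exponential backoff while the server answers
    with an internal error - roughly the reconnect pattern described above.
    `fetch` is a hypothetical callable returning an HTTP status code."""
    delay = base_delay
    for attempt in range(max_tries):
        status = fetch()
        if status != 500:
            return status          # success (or a non-retryable answer)
        time.sleep(delay)          # back off before trying again
        delay *= 2                 # exponential backoff
    return 500                     # still failing after max_tries


# Simulated server that answers 500 twice (e.g. while OH is starting up),
# then recovers:
responses = iter([500, 500, 200])
print(request_with_reconnect(lambda: next(responses), base_delay=0.001))  # 200
```

The backoff keeps a restarting server from being hammered with retries while it comes up.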
I’m still wondering whether the devices are powerful enough to run this many items.