OK, to summarize, here is the situation we have now (documenting for future generations… perhaps we can snip something out for the wiki later as well)
Summary
1. single slave in modbus.cfg, low inter transaction delay (10ms), with generous read timeout (280ms), poll period practically minimal (5ms)
/dev/ttyS0:38400:8:none:1:rtu:280:10
(config) (as reported here and here)
- performance good (logs), 1864 transactions per minute. The log snippet you pasted shows that on average we execute 1 transaction in 32ms (median 24ms), i.e. ~1875 transactions/minute[1]
- performance is slower than the 5ms poll period would imply, but that is quite expected and perfectly all right – the system reads as fast as it can, in the sense that there are no “extra” waits between transactions.
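The arithmetic behind these throughput figures can be sanity-checked with a quick sketch (the 32ms and 80ms averages are the ones taken from the logs discussed here):

```python
# Quick sanity check of the throughput figures quoted above.
def transactions_per_minute(avg_transaction_ms: float) -> float:
    """Transactions per minute, given the average time per transaction in ms."""
    return 60_000 / avg_transaction_ms

print(transactions_per_minute(32))  # single-slave case -> 1875.0
print(transactions_per_minute(80))  # multi-slave case with 80ms delay -> 750.0
```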
2. Many slaves (13) in modbus.cfg, high inter transaction delay (80ms), with generous read timeout (280ms). Poll slower, 50ms
(config)
modbus:serial.agsBusMasterHolding.connection=/dev/ttyS0:38400:8:none:1:rtu:280:80
(as reported here)
- performance poor: on average 1 transaction in 80ms, i.e. ~750 transactions/minute (logs)
- performance is poor because we artificially ask the system to wait between transactions; on average 43ms is waited after each transaction[2]
- no errors since the delays are really conservative
3. Many slaves (13) in modbus.cfg, high inter transaction delay (80ms), with tight read timeout (50ms). Poll slower, 50ms
(same as this config but with different read timeout)
(as reported here)
- poor performance, EOF errors
- since the same timeout applies to both short reads and long reads [3] (e.g. 14 registers), we get timeouts with the longer reads.
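For intuition on why it is the long reads that hit a tight timeout, here is a back-of-the-envelope sketch of the response frame time on the wire, assuming standard Modbus RTU framing (function code 3 response layout, 10 bits per character for 8N1) – this is only an illustration, not the binding’s code:

```python
# Back-of-the-envelope Modbus RTU response time on the wire (illustrative).
# A Read Holding Registers response is: address (1) + function (1) +
# byte count (1) + 2 bytes per register + CRC (2). With 8N1 framing each
# byte costs 10 bits on the wire (start + 8 data + stop).
def response_wire_time_ms(registers: int, baud: int = 38400) -> float:
    frame_bytes = 1 + 1 + 1 + 2 * registers + 2
    return frame_bytes * 10 / baud * 1000

long_read = response_wire_time_ms(14)   # ~8.6 ms just for the response frame
short_read = response_wire_time_ms(1)   # ~1.8 ms
```

On top of the response frame comes the request frame and the slave’s own processing time, so long reads need noticeably more timeout headroom than short ones.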
[1] grepped timestamps with: grep "to allow delay between transactions" single_device.log|cut -d' ' -f1
[2] grepped average wait time using: grep "to allow delay between transactions" multi-device.txt|cut -d' ' -f8
[3] Here’s the corresponding place in code which makes the read block until 1) the timeout occurs, or 2) all message bytes are read
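If someone wants to reproduce the averages, the greps above can be extended with awk. This assumes the same log layout as in footnotes [1] and [2] (timestamp in field 1, wait time in field 8):

```shell
# Average wait time after each transaction, assuming the wait in ms is
# field 8 of the matching log lines (as in footnote [2] above).
grep "to allow delay between transactions" multi-device.txt \
  | cut -d' ' -f8 \
  | awk '{ sum += $1 } END { if (NR > 0) printf "%.1f ms average wait\n", sum / NR }'
```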
Steps forward
Decreasing the read timeout was rather poor advice, and that is definitely not the issue, since the single-device configuration works extremely fast with a read timeout of 280ms.
Based on the observations, I think this might really work in the multi-device scenario (and doubles as good general configuration instructions):
- short poll period, e.g. 5ms. The binding will simply block until the previous read is done, and immediately start the new poll round (checked this from openHAB core code as well). If there is no requirement to update item status fast, this can be increased of course.
- generous read timeout, e.g. 280ms. When the line works, the read returns much faster. In case of errors (e.g. a lost bit in transmission for whatever reason), the system waits 280ms before giving up. The default of 500ms is actually a pretty good default. With a lot of registers, the read timeout likely needs to be increased. If we get EOF errors, the value might be too low.
- short inter transaction delay, e.g. 10ms. As you have proven in the single-device scenario, even a delay of ~30ms plus some seems to work just fine with your device. For high performance requirements, this should be set as low as possible. If we get EOF errors, the value might be too low. This is likely the hardest of all the parameters to tune, in the sense that it depends on device performance as well.
Summarizing the above advice, can you try this kind of connection string with the multi-slave setup?
# - Short poll period (5ms). Binding will simply block until previous read is done, and immediately start the new poll round
#
modbus:poll=5
# - Read timeout (280ms) needs to be enough for long queries to work out. When the line works, the read returns much faster.
# - Short inter transaction delay (10ms). For high performance requirements, this should be set as low as possible. If we get EOF errors, the value is too low
#
modbus:serial.X.connection=/dev/ttyS0:38400:8:none:1:rtu:280:10
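For reference, the connection string fields can be read as follows – a minimal sketch of the layout used in this thread; the descriptive field names are mine, not the binding’s internal ones:

```python
from typing import NamedTuple

class SerialConnection(NamedTuple):
    port: str
    baud: int
    data_bits: int
    parity: str
    stop_bits: int
    encoding: str
    receive_timeout_ms: int          # the "read timeout" discussed above
    inter_transaction_delay_ms: int  # the "inter transaction delay"

def parse_connection(value: str) -> SerialConnection:
    port, baud, bits, parity, stop, enc, timeout, delay = value.split(":")
    return SerialConnection(port, int(baud), int(bits), parity,
                            int(stop), enc, int(timeout), int(delay))

conn = parse_connection("/dev/ttyS0:38400:8:none:1:rtu:280:10")
```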
Best,
Sami