Modbus lag

Sorry about that, try now.

While I dig into the logs tomorrow, you might want to try one thing.

In order to rule out any hardware issues, you would want to replace the server, not the client. You could set up diagslave (many instances with different ports if necessary), one for each endpoint you have. It would be nice to know whether you still get transaction id errors then...
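
For example, something like this (a minimal sketch; it assumes the diagslave binary is in the current directory, and the ports 5020 and up are arbitrary unprivileged ports, so you avoid needing root for the default port 502):

# run one diagslave Modbus/TCP server per endpoint, each in its own terminal, each on its own port
./diagslave -m tcp -p 5020
./diagslave -m tcp -p 5021
./diagslave -m tcp -p 5022

You would then point the corresponding openHAB slave connection strings at those host:port pairs instead of the real devices.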

Best
Sami

Was I not doing that?

nathan@marge linux $ ./modpoll -c 1 -r 2 -m tcp 10.88.64.46 -t 1
modpoll 3.4 - FieldTalk™ Modbus® Master Simulator
Copyright © 2002-2013 proconX Pty Ltd
Visit http://www.modbusdriver.com for Modbus libraries and tools.

Protocol configuration: MODBUS/TCP
Slave configuration...: address = 1, start reference = 2, count = 1
Communication...: 10.88.64.46, port 502, t/o 1.00 s, poll rate 1000 ms
Data type...: discrete input

-- Polling slave... (Ctrl-C to stop)
[2]: 0
-- Polling slave... (Ctrl-C to stop)
[2]: 0
-- Polling slave... (Ctrl-C to stop)
[2]: 1
-- Polling slave... (Ctrl-C to stop)
[2]: 1
-- Polling slave... (Ctrl-C to stop)
[2]: 1
-- Polling slave... (Ctrl-C to stop)
[2]: 0
-- Polling slave... (Ctrl-C to stop)
[2]: 0
-- Polling slave... (Ctrl-C to stop)
[2]: 0
-- Polling slave... (Ctrl-C to stop)
[2]: 0
-- Polling slave... (Ctrl-C to stop)

That was, I believe, pulling from the client, and it updated from 0 to 1 the second I opened the door.

Well, the terminology is a bit confusing, but it goes like this:

  • the modbus slave is your physical device; with Modbus TCP, the slave is the TCP server
  • the modbus master is openHAB; with Modbus TCP, the master is the TCP client

The client (=master) initiates the read/write requests.

When you used modpoll, it acted as another client, similar to openHAB. Did you have openHAB running at the same time you ran modpoll?

I can think of the following ways to test this:

  • run instances of diagslave, one for each of your slave IPs. That would be five slaves, corresponding to the IP addresses ending in .45, .46, .47, .48 and .49. Start openHAB and see if you get any issues (see the config sketch after this list).
  • you can try to "simulate" opening a door by calling pollmb.py (see this section in the wiki for more details)
  • run openHAB against the real hardware. At the same time (when opening a door), run modpoll to see if you can pick up the changes. This is also a good test, since the slave device will be under stress (bombarded by requests from openHAB)
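
As a hedged illustration of the first option (using one of your slave names, water, and assuming a diagslave started locally with -p 5020), you would only repoint the connection line and keep the rest of that slave's definition unchanged:

modbus:tcp.water.connection=127.0.0.1:5020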

Best,
Sami

@sipvoip

Now with improved logging I can comment more.

With the last log, there is only a single "transaction id match" error, so that should not be the root cause after all.

The queries to the slaves seem to be bottlenecked. For example, coils are read from 10.88.64.45 only every ~10 s.

$ grep -B5 "Sending request.*ReadCoils" ~/Downloads/modbus.log|grep  64.45|grep borrowing
2017-02-18 16:23:02 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=55674]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 1 ms
2017-02-18 16:23:11 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=55715]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 1 ms
2017-02-18 16:23:22 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=55758]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 0 ms
2017-02-18 16:23:33 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=55804]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 1 ms
2017-02-18 16:23:44 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=55860]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 1 ms
2017-02-18 16:23:54 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=55929]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 0 ms
2017-02-18 16:24:09 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=55990]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 1 ms
2017-02-18 16:24:21 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=56042]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 0 ms
2017-02-18 16:24:32 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=56099]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 0 ms
2017-02-18 16:24:42 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=56145]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 0 ms
2017-02-18 16:24:55 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=56197]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 0 ms
2017-02-18 16:25:08 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=56253]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 0 ms
2017-02-18 16:25:21 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=56320]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 0 ms
2017-02-18 16:25:31 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=56367]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 0 ms
2017-02-18 16:25:43 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=56412]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 0 ms
2017-02-18 16:25:55 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=56467]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 0 ms
2017-02-18 16:26:07 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=56531]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 0 ms
2017-02-18 16:26:18 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=56585]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 0 ms
2017-02-18 16:26:30 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=56621]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 0 ms
2017-02-18 16:26:41 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=56679]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 0 ms

I believe this matches the lag you observe.

We can analyze how long the binding waits in order to get a connection (using a command like: grep borrowing ~/Downloads/modbus.log|rev|cut -d' ' -f1-5|rev|grep 10.88.64.45):

  • 10.88.64.45: 0-1 ms
  • 10.88.64.46: often 1000 ms or 2000 ms, just a couple of times 1 ms
  • 10.88.64.47: often 1000 ms or 2000 ms, just a couple of times 1 ms
  • 10.88.64.48: often 1000 ms or 2000 ms, just a couple of times 1 ms
  • 10.88.64.49: almost always 1-3 ms, once 1000 ms
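
As a side note, here is a quick sketch to summarize those borrow times per endpoint in one go; it only assumes the log lines look like the TRACE lines quoted above:

grep borrowing ~/Downloads/modbus.log \
  | sed -E 's/.*address=([0-9.]+),port=[0-9]+] took ([0-9]+) ms.*/\1 \2/' \
  | awk '{ sum[$1]+=$2; n[$1]++; if ($2 > max[$1]) max[$1] = $2 }
         END { for (ip in n) printf "%s: avg %.0f ms, max %d ms, %d samples\n", ip, sum[ip]/n[ip], max[ip], n[ip] }'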

This means that the slaves with IPs ending in .46, .47 and .48 slow down polling in general. This is because each "slave" (water, basement, etc.) is polled in turn, something like this:

  • poll water (45)
  • poll basement (46)
  • poll basement_shared (46)
  • poll basement_ponet (46)
  • poll first (47)
  • poll first_ponet (47)
  • poll first_shared (47)
  • poll second (48)
  • poll second_ponet (48)
  • poll second_shared (48)
  • poll shed (49)

By default, the connection is closed after each read and write request, and the binding waits 60 ms before proceeding with the next request to the same endpoint (same IP and port).

So in reality the basic polling goes like this:

  • poll water (45)
  • poll basement (46)
  • wait 60ms, poll basement_shared (46)
  • wait 60ms, poll basement_ponet (46)
  • poll first (47)
  • wait 60ms, poll first_ponet (47)
  • wait 60ms, poll first_shared (47)
  • poll second (48)
  • wait 60ms, poll second_ponet (48)
  • wait 60ms, poll second_shared (48)
  • poll shed (49)

I noticed that you had a lot of writes happening at the same time, which means polling is "interrupted" by the writes, something like this:

  • poll water (45)
  • poll basement (46)
  • wait 60ms, poll basement_shared (46)
  • wait 60ms, poll basement_ponet (46)
  • poll first (47)
  • wait 60ms, write to ip .47
  • wait 60ms, poll first_ponet (47)
  • wait 60ms, write to ip .47
  • wait 60ms, poll first_shared (47)
  • poll second (48)
  • wait 60ms, poll second_ponet (48)
  • wait 60ms, poll second_shared (48)
  • poll shed (49)

This makes the poll cycle take longer.
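
As rough, illustrative arithmetic (not measured): each interleaved write adds at least the 60 ms inter-transaction wait plus a TCP connect and a request/response round trip, so even a couple of extra writes per cycle can add a noticeable fraction of a second to every poll round.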

With this hypothesis, I have been investigating what happens between two polls to the same IP. I'm looking at the following time interval:

2017-02-18 16:26:30 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=56621]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 0 ms
2017-02-18 16:26:41 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (water): borrowing connection (got TCPMasterConnection@66c735a8[socket=Socket[addr=/10.88.64.45,port=502,localport=56679]]) for endpoint ModbusTCPSlaveEndpoint@34524c60[address=10.88.64.45,port=502] took 0 ms

Let's check out all connection attempts in this interval:

grep " connecting" ~/Downloads/modbus_log_2017_02_18_16_26_from_30_to_41s.txt|cut -d' ' -f1,2,15
    2017-02-18 16:26:30 TCPMasterConnection@2297e31[socket=Socket[addr=/10.88.64.47,port=502,localport=58742]]
    2017-02-18 16:26:31 TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37163]]
    2017-02-18 16:26:31 TCPMasterConnection@2a326a36[socket=Socket[addr=/10.88.64.46,port=502,localport=34289]]
    2017-02-18 16:26:32 TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37170]]
    2017-02-18 16:26:32 TCPMasterConnection@2a326a36[socket=Socket[addr=/10.88.64.46,port=502,localport=34295]]
    2017-02-18 16:26:33 TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37176]]
    2017-02-18 16:26:33 TCPMasterConnection@2297e31[socket=Socket[addr=/10.88.64.47,port=502,localport=58760]]
    2017-02-18 16:26:33 TCPMasterConnection@2a326a36[socket=Socket[addr=/10.88.64.46,port=502,localport=34305]]
    2017-02-18 16:26:34 TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37184]]
    2017-02-18 16:26:34 TCPMasterConnection@2a326a36[socket=Socket[addr=/10.88.64.46,port=502,localport=34313]]
    2017-02-18 16:26:35 TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37193]]
    2017-02-18 16:26:35 TCPMasterConnection@2a326a36[socket=Socket[addr=/10.88.64.46,port=502,localport=34320]]
    2017-02-18 16:26:36 TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37199]]
    2017-02-18 16:26:36 TCPMasterConnection@2a326a36[socket=Socket[addr=/10.88.64.46,port=502,localport=34323]]
    2017-02-18 16:26:37 TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37203]]
    2017-02-18 16:26:37 TCPMasterConnection@2a326a36[socket=Socket[addr=/10.88.64.46,port=502,localport=34328]]
    2017-02-18 16:26:38 TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37207]]
    2017-02-18 16:26:39 TCPMasterConnection@2a326a36[socket=Socket[addr=/10.88.64.46,port=502,localport=34333]]
    2017-02-18 16:26:39 TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37212]]
    2017-02-18 16:26:40 TCPMasterConnection@2297e31[socket=Socket[addr=/10.88.64.47,port=502,localport=58795]]


grep " connecting" ~/Downloads/modbus_log_2017_02_18_16_26_from_30_to_41s.txt|cut -d' ' -f1,2,15|cut -d'=' -f3|sort|uniq -c
   8 /10.88.64.46,port
   3 /10.88.64.47,port
   9 /10.88.64.48,port

During the ~10 second interval, 20 connections are made to other slaves.
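
Put differently, as rough arithmetic: about 20 serial transactions in a ~10 s window works out to roughly half a second per transaction on average, connection setup and the ~60 ms wait included.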

We can confirm that all these connections need to wait the ~60ms period between transactions:

grep "delay between connections" ~/Downloads/modbus_log_2017_02_18_16_26_from_30_to_41s.txt|cut -d' ' -f1-2,5-6,15
2017-02-18 16:26:30 Waited 59ms TCPMasterConnection@2297e31[socket=Socket[addr=/10.88.64.47,port=502,localport=58742]]
2017-02-18 16:26:31 Waited 58ms TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37163]]
2017-02-18 16:26:31 Waited 58ms TCPMasterConnection@2a326a36[socket=Socket[addr=/10.88.64.46,port=502,localport=34289]]
2017-02-18 16:26:32 Waited 58ms TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37170]]
2017-02-18 16:26:32 Waited 59ms TCPMasterConnection@2a326a36[socket=Socket[addr=/10.88.64.46,port=502,localport=34295]]
2017-02-18 16:26:33 Waited 58ms TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37176]]
2017-02-18 16:26:33 Waited 57ms TCPMasterConnection@2297e31[socket=Socket[addr=/10.88.64.47,port=502,localport=58760]]
2017-02-18 16:26:33 Waited 57ms TCPMasterConnection@2a326a36[socket=Socket[addr=/10.88.64.46,port=502,localport=34305]]
2017-02-18 16:26:34 Waited 57ms TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37184]]
2017-02-18 16:26:34 Waited 57ms TCPMasterConnection@2a326a36[socket=Socket[addr=/10.88.64.46,port=502,localport=34313]]
2017-02-18 16:26:35 Waited 58ms TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37193]]
2017-02-18 16:26:35 Waited 57ms TCPMasterConnection@2a326a36[socket=Socket[addr=/10.88.64.46,port=502,localport=34320]]
2017-02-18 16:26:36 Waited 59ms TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37199]]
2017-02-18 16:26:36 Waited 57ms TCPMasterConnection@2a326a36[socket=Socket[addr=/10.88.64.46,port=502,localport=34323]]
2017-02-18 16:26:37 Waited 58ms TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37203]]
2017-02-18 16:26:37 Waited 58ms TCPMasterConnection@2a326a36[socket=Socket[addr=/10.88.64.46,port=502,localport=34328]]
2017-02-18 16:26:38 Waited 59ms TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37207]]
2017-02-18 16:26:39 Waited 58ms TCPMasterConnection@2a326a36[socket=Socket[addr=/10.88.64.46,port=502,localport=34333]]
2017-02-18 16:26:39 Waited 58ms TCPMasterConnection@17f95457[socket=Socket[addr=/10.88.64.48,port=502,localport=37212]]
2017-02-18 16:26:40 Waited 58ms TCPMasterConnection@2297e31[socket=Socket[addr=/10.88.64.47,port=502,localport=58795]]

Summary and next steps:

  • performance could be improved by having separate poll threads for different endpoints (IP + port). This has been discussed before in community threads in the context of error handling... Something I can consider for the openHAB 2 refactoring I'm working on.
  • currently it is hardcoded that at most a single connection is allowed per endpoint. Lifting this restriction or making it configurable should improve performance (assuming the slave can deal with multiple clients - not all can). Something I can consider for openHAB 2.
  • assuming your slave can handle the requests, you should be able to improve performance by reducing the wait time between transactions via the interTransactionDelayMillis connection string parameter. Consult the "Advanced connection parameters (since 1.9.0)" section in the wiki for more details.
  • if possible/feasible, reducing the amount of write commands will certainly improve performance.

Let me know how it goes with the interTransactionDelayMillis connection parameter. Remember to configure the same connection settings for all openHAB "slaves" sharing the same endpoint (in your case, the same IP).
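
For illustration only, a hedged sketch of what that could look like for the three slaves sharing the .46 endpoint (slave names taken from your logs; 35 ms is just an example value, pick whatever your hardware tolerates):

# same inter-transaction delay (35 ms, arbitrary example) for every slave sharing 10.88.64.46
# connection string fields here: host:port:interTransactionDelayMillis (see the wiki section above for the full list)
modbus:tcp.basement.connection=10.88.64.46:502:35
modbus:tcp.basement_shared.connection=10.88.64.46:502:35
modbus:tcp.basement_ponet.connection=10.88.64.46:502:35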

Best,
Sami

Trying to set up diagslave and I am getting:

2017-02-19 14:12:47 WARN  o.o.b.m.internal.ModbusSlave[:363]- ModbusSlave (digislave): Error getting a new connection for endpoint ModbusTCPSlaveEndpoint@159b64ea[address=10.88.64.6,port=502]. Error was: Unable to validate object
2017-02-19 14:12:47 TRACE o.o.b.m.internal.ModbusSlave[:366]- ModbusSlave (digislave): borrowing connection (got null) for endpoint ModbusTCPSlaveEndpoint@159b64ea[address=10.88.64.6,port=502] took 1 ms
2017-02-19 14:12:47 WARN  o.o.b.m.internal.ModbusSlave[:504]- ModbusSlave (digislave) not connected -- aborting read request net.wimpi.modbus.msg.ReadCoilsRequest@7bfc63b9. Endpoint ModbusTCPSlaveEndpoint@159b64ea[address=10.88.64.6,port=502]

My config looks right to me:

modbus:tcp.diagslave.connection=10.88.64.6:502:0
modbus:tcp.diagslave.length=55
modbus:tcp.diagslave.type=coil

I tried lowering the interTransactionDelayMillis to 0 and I think that helped a little, but the root of the problem is still there. I guess I will see if the manufacturer knows anything; the load measured on the device is very low, less than 10%.

The odd thing is that if I go from poll=100 to 800 the delay is still the same. It's also odd that I can't see that delay (I could be looking wrong) in Wireshark.

Figured out the "Unable to validate object" error: it was firewalld running on the box I was running diagslave on.

Is it possible to run the modbus binding more than once?

While clearly not the right fix, I was able to fix my slow issue by setting receiveTimeoutMillis to 20 ms. This broke the connection and got past the device. I think the root issue is a bad software stack on my modbus devices. You can check out the pcap at http://share.robotics.net/modbus.pcap.

I add that polling cycle up to around half a second, i.e. longer than the binding poll parameter. Curious what happens here - do requests pile up until they fail through timeout, or are missed periodic polls simply skipped? I'm presuming a cycle isn't cut short and restarted.

Is it possible to make the binding issue a warning when a poll cycle is incomplete before the next one is scheduled?

Hi

@sipvoip What do you mean by running the modbus binding more than once?

The odd thing is that if I go from poll=100 to 800 the delay is still the same. It's also odd that I can't see that delay (I could be looking wrong) in Wireshark.

This is expected, as the poll period is not the bottleneck; the bottleneck is the various delays happening during the cycle (e.g. the delay between transactions, reading responses, etc.). You can even use a poll period of 1 ms to "poll constantly". (See also below regarding the Wireshark analysis.)

I was able to fix my slow issue by setting receiveTimeoutMillis to 20 ms

This is interesting. So basically if the modbus response read operations take more than that, the transaction fails.

It was hard to see from the logs whether the time between request and response is high, since the log timestamps use only second accuracy (not milliseconds). Perhaps it can be seen from the pcap file.

I had a look at the pcap file and perhaps see the issue. Is it related to closing/re-opening the connection (SYN)? You can see retransmissions of some messages (from master to slave, and vice versa). This seems to account for the time you are suffering from (the master waits 1 s before re-transmitting, the slave 1-2 s).
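
If you want to double-check this yourself, here is a sketch of how the retransmissions could be listed with tshark (assuming tshark is installed and the capture is saved locally as modbus.pcap):

# list TCP retransmissions involving the Modbus port, with absolute timestamps
tshark -r modbus.pcap -Y "tcp.port == 502 && tcp.analysis.retransmission" -t ad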

I wonder if this is all related to the discussion in issue 4972. I'm not really sure, since we do not see (abortive close) RST messages.

Did you manage to try out the binding with diagslave? It would be interesting to capture the traffic there.

Given that connection closing (SYN/FIN retransmission) takes the time, you might be able to speed things up by keeping the connections open longer. You can set reconnectAfterMillis to 600000 (which means 300s or 5min) to keep the connections open for 5 minutes before re-opening them.
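
A hedged sketch of where that setting goes - I am writing the positional order of the connection string from memory, so double-check it against the "Advanced connection parameters (since 1.9.0)" wiki section:

# assumed field order: host:port:interTransactionDelayMillis:reconnectAfterMillis (verify against the wiki)
modbus:tcp.water.connection=10.88.64.45:502:60:600000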

Best
Sami

Hi,

The "poll scheduling" is actually done by the openHAB framework, not by the binding.

I have investigated the details previously, since "missing the poll window" is quite often the case (e.g. users running a 1 ms poll loop). The polling takes whatever time it takes, and then the next poll round is started after the "poll period". For example:

  1. poll all the slaves - takes 10 s
  2. sleep for the "poll period", e.g. 250 ms
  3. poll all the slaves - takes 10 s
  4. etc...

For more details, the logic can be found in AbstractActiveService.

Best
Sami

Would not 600000 be 10 min? Man, all I know is it fixed my issue. I just checked all my slaves, and 90% of the time they show up as 0 ms, and the rest are 1 ms. I don't see anything higher than 1 ms for any slave.

I did, and I did not see the issue with diagslave, so the issue must be with the PoKeys57 device. When I emailed them for support, they said don't close the connection every time. :slight_smile:

BTW, why is the default reconnectAfterMillis 0? I would think it makes more sense to keep the TCP socket open.

Would not 600000 be 10 min?

Yeah my bad :slight_smile:

BTW, why is the default reconnectAfterMillis 0? I would think it makes more sense to keep the TCP socket open.

Well... That is a long story. Refer to this one for the discussion.

Long story short: keeping the connection open causes serious issues in some cases. For example, if the slave can handle only a single client at a time, openHAB would essentially block access to the slave for other consumers.

Great to hear that you got it working! Glad to help always.

Best
Sami