Possible performance issue of bindings?

Felix took it from the Raspberry installation, where the numbers are randomly generated and differ.

The data is here (Modbus register 806), where you can already see a jump back from 16426 to 16345:

2018-07-09 12:53:20.720 [DEBUG] [odbus.handler.ModbusDataThingHandler] - Thing modbus:data:gasanalyse:pull:1ae1af00 channels updated: {modbus:data:gasanalyse:pull:1ae1af00:number=16426, modbus:data:gasanalyse:pull:1ae1af00:lastReadSuccess=2018-07-09T12:53:20.685+0200}. readValueType=int16, readIndex=Optional[806], readSubIndex(or 0)=0, extractIndex=106 → numeric value 16426 and boolValue=true. Registers RegisterArrayWrappingInputRegister(00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0f 00 00 00 00 00 00 40 2a 00 00 00 00 d9 01 d9 01 d9 01 00 00 d9 01 d9 01 d9 01 00 00 d9 01 d9 01 d9 01) for request ModbusPollerThingHandlerImpl.ModbusPollerReadRequest@1d97436[slaveId=1,functionCode=READ_MULTIPLE_REGISTERS,start=700,length=120,maxTries=1]

2018-07-09 12:53:21.025 [DEBUG] [odbus.handler.ModbusDataThingHandler] - Thing modbus:data:gasanalyse:pull:1ae1af00 channels updated: {modbus:data:gasanalyse:pull:1ae1af00:number=16345, modbus:data:gasanalyse:pull:1ae1af00:lastReadSuccess=2018-07-09T12:53:20.993+0200}. readValueType=int16, readIndex=Optional[806], readSubIndex(or 0)=0, extractIndex=106 → numeric value 16345 and boolValue=true. Registers RegisterArrayWrappingInputRegister(00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0f 00 00 00 00 00 00 3f d9 00 00 00 00 d9 01 d9 01 d9 01 00 00 d9 01 d9 01 d9 01 00 00 d9 01 d9 01 d9 01) for request ModbusPollerThingHandlerImpl.ModbusPollerReadRequest@1d97436[slaveId=1,functionCode=READ_MULTIPLE_REGISTERS,start=700,length=120,maxTries=1]
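For reference, the two values can be read straight from the register dumps: extractIndex=106 points at the words 40 2a and 3f d9 respectively, and decoding them as big-endian int16 in a Python shell (just a sanity check, not binding code) reproduces the logged numbers:

>>> int.from_bytes(bytes.fromhex('402a'), byteorder='big', signed=True)
16426
>>> int.from_bytes(bytes.fromhex('3fd9'), byteorder='big', signed=True)
16345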

Hi, thanks for the clarification. How should register 806 update? +1 every second, or something else?

I note the lastReadSuccess stamps ARE consistent with the order of processing. They are not consistent with counter values differing by 81 if the counter rate is +1 per second.
The jumbling appears to be happening at some prior level, depending on where in the chain that stamp is generated.

It’s a pity raw Modbus transactions don’t have a sequence ID, but I think TCP packets do.
You could use Wireshark or something to check the order and content coming across TCP.
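If you go that route, a minimal sniffer sketch could look like the following (an illustration only: it assumes the scapy library, root privileges, and port 502; adjust the port to your installation). It prints the MBAP transaction ID of each frame, so reordering would be visible:

# sniff_mbap.py - rough sketch, run as root
import struct

from scapy.all import Raw, sniff

def show_mbap(pkt):
    # Assumes each TCP segment starts at an MBAP boundary, which holds
    # for typical short Modbus frames.
    payload = bytes(pkt[Raw].load)
    if len(payload) >= 7:
        # MBAP header: transaction id, protocol id, length (big-endian uint16 each)
        trans_id, proto_id, length = struct.unpack('>HHH', payload[:6])
        print('transaction id:', trans_id)

sniff(filter='tcp port 502', lfilter=lambda p: p.haslayer(Raw), prn=show_mbap)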

Yes, about one second (1…1.1 s)

However, Modbus/TCP has a Transaction ID, which would be returned by our client (the server).
I am sure that the data is not delayed at the server, so I don’t think Wireshark would be useful if no transaction ID is used. Is the transaction ID used?

You are correct, it’s part of the spec. The binding uses a globally increasing counter (shared by all outgoing requests) for all transaction IDs.
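The binding itself is Java, but as an illustrative sketch of the idea (the names here are mine, not the binding's): a counter shared by all requests, wrapping at the width of the 16-bit MBAP transaction ID field.

import itertools
import threading

_counter = itertools.count()
_lock = threading.Lock()

def next_transaction_id():
    # One counter shared by all outgoing requests; the MBAP transaction ID
    # field is 16 bits wide, so the value wraps at 65536.
    with _lock:
        return next(_counter) % 0x10000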

Regarding lastReadSuccess: it is generated by the data thing when it receives the data from the poller / transport bundle. Strictly speaking, it does not tell much about the original time when the data was received.

I tried to reproduce the issue, with no success, using a clean manual install of openHAB 2.3.0 and the latest versions of the binding and transport:

openhab> bundle:list|grep -i modbus
189 │ Active   │  80 │ 2.4.0.201807061205     │ Modbus Binding
190 │ Active   │  80 │ 2.4.0.201807061205     │ openHAB MODBUS Transport Bundle

Dummy server incrementing a counter every second (register 700)

# server.py
# Run with: python3 server.py
# Requires the umodbus library (pip install uModbus).
import logging
import os
import sys
import time
from socketserver import ThreadingMixIn, TCPServer

from umodbus import conf
from umodbus.server.tcp import RequestHandler, get_server
from umodbus.utils import log_to_stream

# Add stream handler to logger 'uModbus'.
log_to_stream(stream=sys.stdout, level=logging.WARN)

LOGGER = logging.getLogger('server')

# poll_interval for serve_forever; service_actions() runs once per iteration.
INTERNAL_LOOP_INTERVAL = 0.1

# Enable values to be signed (default is False).
conf.SIGNED_VALUES = True

start_epoch = time.time()
PORT = int(os.environ.get('PORT', 55502))
data_store = [0] * 120  # registers 700 - 819


class ThreadedTCPServer(ThreadingMixIn, TCPServer):
    pass


class Server(ThreadedTCPServer):

    def service_actions(self):
        # Bump the counter in register 700 once per wall-clock second.
        new_val = int(time.time() - start_epoch)
        if new_val != data_store[0]:
            LOGGER.warning('Increasing counter to %d', new_val)
        data_store[0] = new_val


Server.allow_reuse_address = True
app = get_server(Server, ('0.0.0.0', PORT), RequestHandler)


# Function code 3 = read holding registers.
@app.route(slave_ids=[0, 1], function_codes=[3], addresses=list(range(700, 820)))
def read_holding(slave_id, function_code, address):
    """Return the value stored at the given register address."""
    assert isinstance(address, int)
    return data_store[address - 700]


if __name__ == '__main__':
    try:
        app.serve_forever(INTERNAL_LOOP_INTERVAL)
    finally:
        app.shutdown()
        app.server_close()

I use the exact same configuration you provided, just pointing to localhost at port 55502:

Bridge modbus:tcp:gasanalyse [ host="127.0.0.1", port=55502, id=1, timeBetweenTransactionsMillis=10, connectMaxTries=1 ] {
...
}
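To double-check the dummy server outside openHAB, a small client loop like the one below (a quick sketch using the same umodbus library; error handling omitted) can read the counter repeatedly and flag any jump back:

# client_check.py - poll register 700 and flag non-monotonic values
import socket
import time

from umodbus import conf
from umodbus.client import tcp

conf.SIGNED_VALUES = True
sock = socket.create_connection(('127.0.0.1', 55502))
last = None
while True:
    request = tcp.read_holding_registers(slave_id=1, starting_address=700, quantity=1)
    value = tcp.send_message(request, sock)[0]
    if last is not None and value < last:
        print('JUMP BACK: %d -> %d' % (last, value))
    last = value
    time.sleep(0.5)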

EDIT: It seems you have way too much stuff in the callback queue, which explains the slow handling of the data. E.g.

2018-07-09 12:56:19.867 [TRACE] [ternal.ModbusManagerImpl.PollMonitor] - POLL MONITOR: callbackThreadPool queue size: 154, remaining space 2147483647. Active threads 5

There are various reasons that can explain this, e.g. slow persistence. I would try with a minimal configuration first.
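The mechanism itself is easy to demonstrate in isolation: whenever callbacks are consumed more slowly than the poller produces them, an unbounded queue keeps growing. A toy illustration in plain Python (nothing to do with the binding's actual thread pool):

import queue
import threading
import time

POLL_INTERVAL = 0.1   # poller enqueues a result every 100 ms
CALLBACK_TIME = 0.3   # each callback (item updates, persistence, ...) takes 300 ms

q = queue.Queue()

def consumer():
    while True:
        q.get()
        time.sleep(CALLBACK_TIME)  # simulate slow handling of the data
        q.task_done()

threading.Thread(target=consumer, daemon=True).start()

for i in range(50):
    q.put(i)
    time.sleep(POLL_INTERVAL)
    print('queue size:', q.qsize())  # grows steadily, like in the PollMonitor log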

Ideas to proceed

  • threads command output when the queue size is large. This would be most helpful; I have used this myself to diagnose some other queue build-ups.
  • Have you tried testing with a minimal configuration, i.e. no other rules / items / things / persistence / bindings in openHAB? Might make sense.
  • tcpdump as you suggested – it would rule out some speculation at least. (Probably not needed anymore: the problem is the stuff in the queue.)

Best,
Sami

Hey Sami,

Yeah, the callback queue size is definitely a problem. It would be interesting to see how it gets all crowded up like this in the first place.
There seems to be a point at which the build-up of the callback queue starts, because with a very limited setup there doesn’t seem to be a problem.
I’m a little busy this week, but I will get on with testing next week and hit you up with the results.

I thought about running a test on a fresh install and on the current one, just to see if anything got messed up in the process (running on the same hardware, that is).
Additionally I will go through your ideas and give feedback on them.

Great, thanks! And just the output of threads will be most useful. Evaluate it a couple of times, a few seconds apart, to “sample” what the bottleneck is.

@awiflx, the latest snapshot should not have a problem with increasing queue sizes anymore. See issue #4135 for more information.

Thank you. At the moment we don’t need the Modbus binding. Currently we use MQTT within openHAB instead. I have written a program which reads from and writes to shared memory, and which subscribes to and publishes MQTT messages. Several instances of a second program communicate with the Modbus slaves and read and write the same shared memory. The high traffic from cyclically polling many Modbus slaves is now replaced by MQTT messages sent only when data changes.
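As a rough sketch of that publish-on-change idea (using the paho-mqtt 1.x client API; the topic name and the stand-in register read are made up for illustration), the core is just comparing against the last published value:

# publish_on_change.py - report-by-exception sketch
import time

import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.connect('localhost', 1883)
client.loop_start()

last_published = {}

def publish_on_change(topic, value):
    # Report by exception: publish only when the value actually changed.
    if last_published.get(topic) != value:
        client.publish(topic, str(value))
        last_published[topic] = value

while True:
    # Stand-in for reading a register value from shared memory.
    value = int(time.time()) % 100
    publish_on_change('plant/gasanalyse/register806', value)
    time.sleep(1)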