And remember that a normal polling cycle (typically with hundreds of addresses to poll) will likely cost hundreds of times more than opening and closing a TCP connection (a few milliseconds at most).
I see your point, but closing and opening has issues as well. Please note that we have already received feedback from users that connection/request attempts are discarded by some PLCs due to their internal clock cycle. I therefore propose that connection closing be configurable (perhaps defaulting to closing the connection after every poll).
I also think that connection pooling in a Modbus environment is overkill in the best case, and harmful in a more realistic one.
I think my term "connection pooling" might be misleading: the plan (see the discussion in the thread) is to limit the maximum number of open connections to one per Modbus endpoint. The idea is not to flood the end server with multiple connections, which I believe is how you interpreted "connection pooling". Furthermore, it fixes the long-standing issues we have had with Modbus serial slaves in openHAB (currently there is no proper synchronization).
I certainly am aware of the practical limitations; for example, the at-most-one-connection issue is one of the key factors motivating the connection pooling (perhaps the name is misleading, I should really talk about connection sharing). Currently the binding is/was unusable because many connections are established. For me this solves a real practical problem; it is not over-engineering.
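The "at most one connection per endpoint" idea can be sketched very simply; the class and endpoint keys below are illustrative placeholders, not types from the binding:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

/**
 * Minimal sketch of "connection sharing": at most one connection per Modbus
 * endpoint, so concurrent polls for the same endpoint are serialized instead
 * of flooding the slave with parallel connections.
 */
public class EndpointGate {
    // One permit per endpoint key (e.g. "host:port" or a serial port name).
    private final Map<String, Semaphore> gates = new ConcurrentHashMap<>();

    /** Block until this endpoint's single connection slot is free. */
    public void acquire(String endpoint) throws InterruptedException {
        gates.computeIfAbsent(endpoint, k -> new Semaphore(1, true)).acquire();
    }

    /** Non-blocking variant: true if the slot was free and is now held. */
    public boolean tryAcquire(String endpoint) {
        return gates.computeIfAbsent(endpoint, k -> new Semaphore(1, true)).tryAcquire();
    }

    /** Release the slot so the next waiting poll can proceed. */
    public void release(String endpoint) {
        gates.get(endpoint).release();
    }
}
```

The fair semaphore (the `true` argument) also gives the FIFO ordering discussed later in this thread: waiting polls are served in arrival order.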
The only way I will ever use Modbus, if I want reliable behavior over the Modbus transport, is "open, communicate, close", nothing more. Does it cost a few milliseconds on top of maybe 200-300 ms of full communication for the data transfer? I can live with it... if the communication stays reliable.
Years ago I left my connections open; later I made it configurable, keeping on-demand as the default; eventually I removed the option entirely because keeping connections open is harmful.
Modbus/TCP is not plain TCP/IP: there is no reliable way to detect and manage a "falling socket" on the physical device. Usually the device relies on a timeout to detect a dead socket, since these devices do not have a full, reliable TCP stack; and within that timeout window you just get a bunch of errors... because those connections were kept open.
I really appreciate the practical, experience-based feedback. I'm aware of the falling-socket issues; I actually added some protection using the SO_KEEPALIVE flag (but nothing's perfect, I know).
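For reference, that keep-alive protection can be set on a plain `java.net.Socket`; a short sketch (the 3000 ms read timeout is an illustrative value, not a binding default):

```java
import java.io.IOException;
import java.net.Socket;

public class ModbusSocketSetup {
    /** Configure a socket before connecting it to a Modbus/TCP slave. */
    public static void configure(Socket socket) throws IOException {
        // Ask the OS to send TCP keep-alive probes on an idle connection.
        // Note: the probe interval is OS-controlled (often hours by default),
        // so this is only a partial safeguard against half-open sockets.
        socket.setKeepAlive(true);
        // Fail reads after 3 s instead of blocking forever on a dead slave
        // (3000 ms is an example value, not a recommendation).
        socket.setSoTimeout(3000);
    }
}
```

Since the OS decides when keep-alive probes are actually sent, this cannot replace the application-level timeout handling discussed above; it only helps the stack notice a dead peer eventually.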
And moreover, Modbus/TCP is very often implemented over a TCP-to-serial converter (like a Moxa) with limited capabilities. So connection pooling must also be mediated by your converter's capabilities.
Could you elaborate on this? I'm not sure I fully understand. In the connection pooling thread we discussed the TCP-serial converters, and the general conclusion was that the approach should fit them quite well.
And you cannot modify the Modbus binding assuming that openHAB is the only software accessing a Modbus device. The device may accept only one connection, used alternately by openHAB and by other systems. You have to use the resource and free it as soon as possible. Modbus slaves are usually very limited devices and often badly implemented as well. The only way to work with them reliably is the simple one: "open, talk, close"... and keeping every communication strictly queued, FIFO-style, per Modbus slave target.
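The per-slave FIFO queuing described above can be sketched with a single-thread executor per slave (an illustrative pattern, not the binding's actual code): one worker thread per slave guarantees transactions run one at a time, in submission order, so requests never interleave on the wire.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/**
 * Sketch of per-slave FIFO queuing: each slave gets its own single-thread
 * executor, whose internal queue serializes transactions in arrival order.
 */
public class SlaveQueues {
    private final Map<String, ExecutorService> queues = new ConcurrentHashMap<>();

    /** Enqueue a transaction for the given slave; runs in FIFO order. */
    public Future<?> submit(String slaveId, Runnable transaction) {
        return queues
                .computeIfAbsent(slaveId, k -> Executors.newSingleThreadExecutor())
                .submit(transaction);
    }
}
```

Different slaves get independent queues, so a slow slave does not delay polls to the others.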
I have exactly that case here with my Modbus server; it is one of the reasons for going with connection pooling/sharing in the first place. You are quite right that we cannot assume openHAB is the only client, and I understand that we need to make the reconnecting functionality configurable... No need to get personal, though; I believe we are both frustrated with the issues in the binding...
Related to the assumption that "you cannot modify the modbus binding just thinking that openhab is the only software accessing a modbus device", I would like to invite you to review pull request #3684. It is about holding state in the binding (essentially assuming that state is not changed by an external party between polls).
I propose that we proceed as follows:
- implement a slave-specific boolean configuration key, reconnecting, to close the connection after every transaction (please see above for why it makes sense to make this configurable). The default would be to close the connection after every transaction. The idea would be to move the reconnection lines into the finally block. Would you see this as a useful feature for serial ports as well?
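A minimal sketch of the proposal, with the close moved into a finally block so the connection is released even when the transaction fails. ModbusConnection and Transaction are hypothetical placeholder interfaces standing in for the binding's types, and connection reuse when the flag is off is elided:

```java
/**
 * Sketch of the proposed "close after every transaction" behavior.
 */
public class TransactionRunner {
    interface ModbusConnection {
        void open() throws Exception;
        void close();
    }

    interface Transaction {
        void execute(ModbusConnection connection) throws Exception;
    }

    // The proposed slave-specific "reconnecting" configuration key.
    private final boolean closeAfterTransaction;

    public TransactionRunner(boolean closeAfterTransaction) {
        this.closeAfterTransaction = closeAfterTransaction;
    }

    public void run(ModbusConnection connection, Transaction transaction) throws Exception {
        connection.open();
        try {
            transaction.execute(connection);
        } finally {
            if (closeAfterTransaction) {
                connection.close(); // runs on success AND on error
            }
        }
    }
}
```

The finally block is the crux: without it, a transaction error (e.g. a slave timeout) leaves the connection dangling, which is exactly the "falling socket" failure mode described earlier in this thread.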