Modbus performance management in openHAB 2

These notes apply to modbus binding version 2.x
This is not a Modbus how-to, but rather a tuning guide for advanced users.
Related - Modbus error management in openHAB

OpenHAB performance issues with Modbus

openHAB is often deployed on systems with limited performance, and extensive use of Modbus devices can highlight those limits. This is not about Modbus technology in particular, but about the side effects of any frequently polling technology within openHAB.

If you have only a handful of Modbus-connected OH Items, the impact is minimal and you need not be concerned. For those with a hundred or more, though…

Background

Modbus is a polling technology. Only the 'master' (openHAB in our case) can initiate communication, so we usually configure it to request 'slave' status frequently – typically every second.

The default behaviour, then, is to update all of our linked OH Items with the up-to-date states that the binding has just fetched. Potentially, we could be generating hundreds of Item updates per second – something outside most openHAB users' experience.

We'll look into two areas: how to reduce the system workload created by many updates, and then how to reduce the frequency of updates in the first place. I would recommend following the whole process, not just trying one "magic bullet".


Event bus overheads

Large numbers of Item updates on the openHAB event bus will obviously increase system workload – carrying out the actual update, checking whether any rules should be triggered, performing logging, and so on.

There is little that we, as users, can do to make this part more efficient.

Logging

I do not suggest reducing logging levels just to try and lighten the load. In any case, with ordinary settings openHAB 2 does not log "same state" Item updates, only changes. You've got logging because you might need it, and while there is some overhead in system I/O writing to files, it is fairly minimal.

We will look later at reducing the events themselves, though.

Having said that, do remember that you can generate a lot of extra log traffic by using trace-level log settings to analyse a problem. Just remember to restore normal logging when your investigation is over!

Rules

You may wish to review your rule triggers in particular. For example, a rule that performs some calculation based on a temperature sensor Item could be triggered by Item update. It works, of course – but is it necessary to calculate every second? Would a changed trigger give you the same result, while avoiding the unnecessary overhead of running the rule every second to get the same value?
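
As a sketch (the Item name Boiler_Temperature is hypothetical), the only difference is the trigger line:

```
rule "Recalculate heating demand"
when
    // "Item Boiler_Temperature received update" would fire on every poll,
    // even when the value is identical. "changed" fires only on a real change.
    Item Boiler_Temperature changed
then
    val demand = (21.0 - (Boiler_Temperature.state as Number).doubleValue) * 10
    logInfo("heating", "Demand recalculated: " + demand)
end
```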

You may occasionally write rules that initiate some time-based action in response to a Modbus-generated event. Take care that your rule logic does not build up a big stack of timers and the like if updates can arrive faster than the actions are performed.
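
A common defensive pattern, sketched here with a hypothetical Door_Sensor contact, is to reuse a single timer rather than stacking a new one on every event:

```
var Timer doorTimer = null

rule "Door open alert"
when
    Item Door_Sensor changed to OPEN
then
    if (doorTimer === null) {
        // create the timer only once...
        doorTimer = createTimer(now.plusMinutes(5)) [ |
            logInfo("door", "Door has been open for 5 minutes")
            doorTimer = null
        ]
    } else {
        // ...and reschedule it on later events instead of piling up new ones
        doorTimer.reschedule(now.plusMinutes(5))
    }
end
```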

Persistence

This area has caused people noticeable problems, so it is worth your attention.

It is all too easy to configure your persistence service(s) to store every update for every Item. Where you have a hundred updates a second, this can present a substantial system load updating database files, particularly with slower storage media like SD cards.

Review what you are storing, and when. everyChange or everyMinute strategies may be more appropriate than everyUpdate.

If you routinely persist all/many Items for restoreOnStartup purposes, consider whether that is sensible for Modbus-polled data Items. The binding should be working and polling data before your rules get going, so restoreOnStartup becomes pointless for polled data. (Note that the binding will poll all read Things at startup, regardless of the chosen polling refresh rate.)
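
As a sketch, assuming a hypothetical rrd4j.persist file with Items named Modbus_RoomTemp (polled) and Heating_Setpoint (written from openHAB):

```
Strategies {
    everyMinute : "0 * * * * ?"
    default = everyChange
}

Items {
    // polled sensor: store changes plus a minute "heartbeat" for charting,
    // rather than every one-second update
    Modbus_RoomTemp : strategy = everyChange, everyMinute

    // setpoint commanded from openHAB: worth restoring after a restart
    Heating_Setpoint : strategy = everyChange, restoreOnStartup
}
```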

There isn't much more we can do to reduce the impact of many frequent Item updates. We should also look at ways to actually reduce the number of updates altogether.


Selective data

Just because you can read data from your Modbus slave doesn't mean you should. Review whether you really need each piece of data to end up as an Item state.

For example, the slave device firmware version – there's no point updating an Item with that every second, of course. But do you need it at all?

Don't forget that you can have a poller reading a block of registers, but you do not have to have a data Thing (or Item) for every register.
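
For illustration, a minimal .things sketch (host, slave id and register numbers are made up) where one poll covers ten holding registers but only two become data Things:

```
Bridge modbus:tcp:plc [ host="192.168.1.50", port=502, id=1 ] {
    // one poll fetches holding registers 0-9 every second...
    Bridge poller status [ start=0, length=10, type="holding", refresh=1000 ] {
        // ...but only the two registers we care about become data Things
        Thing data roomTemp [ readStart="2", readValueType="int16" ]
        Thing data humidity [ readStart="7", readValueType="uint16" ]
    }
}
```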

Modbus writes

Since writing to Modbus device registers is carried out on demand rather than on a polling schedule, there isn't really any "tuning" to do here. The binding is not structured to carry out block writes of many Item commands, as this is very rarely needed.

Polling frequency

It's more or less standard practice to poll everything once per second in Modbus setups.

The openHAB Modbus binding allows us the luxury of polling different slaves or different blocks of registers at different rates, so let's explore the possibilities.

For example, for a pushbutton input you really do want a frequent poll, since a thumb press lasts less than a second.

A room temperature sensor, however – there's probably no need to read that more than once a minute.

Polling coils to see what state a relay is in is worth considering in more detail. If you always control the relays from openHAB, do you need to read this back often? Or even at all? The binding does allow you to configure poller Things that don't actually poll, and data Things (registers) that are write-only.

By using no-poll pollers, you can implement a read-on-demand scheme. Issuing a REFRESH command to a Modbus Item from a rule will cause a one-time read poll (along with any other channels/Items also belonging to that poller). That is a rare case, but it has uses – for example, initialising an otherwise write-only Item at openHAB startup by reading the slave.
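
A sketch of that read-on-demand idea, with made-up Thing and Item names: refresh=0 stops the poller from polling on its own, and a startup rule requests a single read.

```
Bridge modbus:tcp:heatpump [ host="192.168.1.60", port=502, id=1 ] {
    // refresh=0 means this block is never polled automatically
    Bridge poller setpoints [ start=100, length=2, type="holding", refresh=0 ] {
        Thing data targetTemp [ readStart="100", readValueType="int16",
                                writeStart="100", writeValueType="int16", writeType="holding" ]
    }
}
```

```
rule "Initialise write-only setpoint at startup"
when
    System started
then
    // HeatPump_TargetTemp is assumed linked to the targetTemp number channel;
    // REFRESH triggers a one-off poll of the otherwise unpolled block
    HeatPump_TargetTemp.sendCommand(REFRESH)
end
```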

Usually you will reach a compromise here, because you will want some of the registers in a poller's 'block' more often than others, but the poll rate for the whole block will have to be set by the most frequently needed register. There's not much benefit in splitting a poller Thing into two or more blocks, as you would then just be increasing the total number of polls and adding Modbus workload instead.

Indeed, it is worth looking at the opposite to reduce Modbus traffic – can you combine poller Things? Sometimes you will have a slave with, say, registers 15 and 19 of interest, but registers 16-18 undefined by the manufacturer. Some slaves – but not all – will let you read a block of registers including undefined ones. You could try setting up one poller Thing for all of registers 15-19, with data Things for just 15 and 19. (Caution – the Modbus protocol limits you to 125 registers in one poll.)
"But I don't want those other registers!" That's fine, just ignore them. The overheads for your host, network, and slave of one large poll are less than those of two small polls…
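
A sketch of that 15-19 arrangement (a hypothetical device, assuming the slave tolerates reads of its undefined registers):

```
Bridge modbus:tcp:meter [ host="192.168.1.70", port=502, id=3 ] {
    // one poll covers registers 15-19, even though 16-18 are unused
    Bridge poller combined [ start=15, length=5, type="holding", refresh=1000 ] {
        Thing data voltage [ readStart="15", readValueType="uint16" ]
        Thing data current [ readStart="19", readValueType="uint16" ]
    }
}
```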

Most likely, even after reviewing your polling setup, you will still end up with many Items being updated more frequently than you really need, just to get an important few updated regularly.

Update "unchanged"

This binding feature is a powerful tool for reducing unwanted Item updates. I've deliberately saved it for last, because there are worthwhile benefits from looking critically at the other areas already mentioned.

The binding could update each channel (and hence each Item) every time data is polled from the Modbus device. But in fact the binding remembers the previously polled state. If the data does not change from one poll to the next, the binding can avoid updating the channel (and so avoid creating an Item update on the openHAB event bus).

This "remembered" state is eventually considered to go "stale", and a channel/Item update is made even if it hasn't changed.

If the polled data changes, an immediate update is always made.

The 'helper' channel lastReadSuccess is still updated at every poll, if you use that.

The data Thing parameter updateUnchangedValuesEveryMillis controls the maximum time a channel/Item will go without an update. By default this is 1000 ms, but you can set a much larger number here and so avoid unnecessary Item updates for minutes, hours or even days.

This setting can be different for each data Thing, and so different for each Item. This can be useful when you want to use some Item updates to trigger periodic rules or persistence storing. You should consider if, and when, you need regular updates for charting purposes and suchlike.

Typically you might set updateUnchangedValuesEveryMillis=3600000 for updates at least hourly. This would reduce the load on openHAB's event bus by many thousands of events – per Item!
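
For example, a sketch reusing the hypothetical meter Thing from above – the slow-moving value only pushes an unchanged state once an hour, while the fast one keeps the default:

```
Bridge modbus:tcp:meter [ host="192.168.1.70", port=502, id=3 ] {
    Bridge poller combined [ start=15, length=5, type="holding", refresh=1000 ] {
        // fast-changing value: default behaviour, updated at least every second
        Thing data current [ readStart="19", readValueType="uint16" ]
        // slow-changing value: if unchanged, only update the channel hourly
        Thing data voltage [ readStart="15", readValueType="uint16",
                             updateUnchangedValuesEveryMillis=3600000 ]
    }
}
```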

Note that if you use the expire binding technique to detect Items that don't get updated, you will need to set the Item's expire time to be longer than the data Thing's updateUnchangedValuesEveryMillis time.
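
A sketch of such an Item (the channel UID and Item name are hypothetical), with a two-hour expire comfortably longer than an hourly updateUnchangedValuesEveryMillis:

```
Number Meter_Voltage "Voltage [%d V]" { channel="modbus:data:meter:combined:voltage:number", expire="2h,state=UNDEF" }
```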

Note also that the "remembered state" is the state sent to the channel after any data Thing transformation. You may have a transform that rounds a number, for example, and it is possible for the polled registers to change value without changing the channel state.

Summary

A fast-polling service like Modbus used with large numbers of data inputs can highlight performance limitations in openHAB's event bus and persistence services. With careful use of the available tools, we can greatly reduce the impact of large Modbus configurations and still have responsive and timely data available.


TCP tuning

If you only use Modbus-RTU over serial, ignore this section. Modbus-TCP users, read on…

I wondered whether to make this a separate guide, as this is more about performance external to openHAB, but it is closely related.

In Modbus-TCP, we use the standard TCP/IP communications protocol as a "carrier" for our Modbus-flavoured data transactions. TCP in general is really peer-to-peer, but with Modbus our Master (openHAB) becomes a TCP Client, initiating communications with TCP Servers – which are our Modbus Slaves.

To make best use of the TCP parameters available in the OH Modbus binding, we should know a little about how TCP/IP works.

TCP connections – a really simplistic view

IP carries data packets between here and there, without much intelligence and with few guarantees – not even that data arrives in the same order it was sent.

TCP adds some controlling rules on top of that – hence "TCP/IP" – and allows receipt acknowledgements, retries, and proper ordering. All the stuff we need for Modbus!
We don't need to know exactly how, but the important part is the connection concept.

openHAB (as TCP client) contacts a remote device (TCP server) and they do some introductions and handshaking, and agree to establish a connection.
Once that TCP connection is open, they can exchange data packets (our Modbus transactions) easily and reliably.

Connections do not last forever – it all goes wrong if anything changes at either end or along the way in between.
Either end can close the connection, and that's the end of that until you establish a new one.
Connections can last just one exchange, or for days with gigabytes of transfers – it all depends on use and environment.

Alright, that is the child's-eye view, but it is good enough here.

Modbus binding TCP defaults

By default, the binding opens a new TCP connection for each data poll (or data write) and then closes it again.

What? Well, why not. It works in most cases, and the defaults are there to get you started with your first simple use of the binding.

Why close the connection?
Modbus has a lot of history; slaves can be very old designs with few resources to spare.
It is not that uncommon for a slave to support only one TCP connection at a time, rejecting other attempts while connected.
In that circumstance, keeping a connection open is anti-social behaviour – you are hogging the slave's attention and no-one else can talk to it. That is especially frustrating in an environment like an HVAC system, where another "user" such as a display panel might need a turn.

So, it is safest for openHAB to be a "good neighbour" and close the connection after each binding transaction.

Isn't that an overhead, making work?
Yes, it is. Creating a connection uses resources, at both ends.
So, as your system grows beyond reading just a few registers, we must pay more attention. Creating hundreds of connections per minute can begin to stress the host systems. That probably isn't going to matter very much to your openHAB host, but remembering our resource-limited slaves, it can definitely be a problem at that end.

IF you find out that your slave can support more than one TCP connection, you'll almost certainly want to change this behaviour in any non-trivial setup.

You can do this separately for each slave, so you can run a configuration with a mix of capabilities.

The reconnectAfterMillis parameter

This is a time value, giving the effect that the TCP connection will be held open for at least the given time, then closed and established anew at the next read poll or write.

Where you're probably polling each second, you might typically set this to one minute (60000 ms) or ten (600000 ms). You can go up to a week here.
All those Modbus transactions for that slave then take place over the existing TCP connection.
Don't be tempted to go much longer than an hour without good reason, or you may accidentally make it harder to recover from environment changes. Also, occasionally there might be a firewall or suchlike in the path that will cut off connections it thinks have been open too long.

Don't worry about error recovery here; a serious error during a connection will likely cause one end or the other to close it pre-emptively – that's all part of TCP's hidden machinery. If we can successfully re-establish a new connection, the timing starts over.
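
A sketch of a TCP Bridge holding its connection for at least ten minutes (host, port and id are made up):

```
// keep the TCP connection open for at least ten minutes before cycling it
Bridge modbus:tcp:boiler [ host="192.168.1.80", port=502, id=1, reconnectAfterMillis=600000 ] {
    Bridge poller status [ start=0, length=20, type="holding", refresh=1000 ] {
        Thing data flowTemp [ readStart="0", readValueType="int16" ]
    }
}
```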

The timeBetweenReconnectMillis parameter

Another time value, this sets a minimum delay between closing a connection and establishing a new one.

Why not immediately? There is housekeeping for both ends to do when closing a TCP connection, as well as when opening one, so we want to be sure that's all finished before starting over. Again, remember that a resource-limited slave could take much longer to tidy up than our openHAB host. For some slaves, you may need to make this more generous than the default of 0 ms (immediate attempt).
You might try 100 ms or more if you seem to get a timeout at each reconnect.

That all assumes an orderly, planned connection closure. But if there was an error of some kind, we don't know how long the other end will take to recognise for itself that there is a problem and make its own closure. Again, for some devices we may need to be generous here.
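
Continuing the hypothetical Bridge above, the two parameters sit side by side:

```
// wait 100 ms after closing a connection before opening the next one
Bridge modbus:tcp:boiler [ host="192.168.1.80", port=502, id=1,
        reconnectAfterMillis=600000, timeBetweenReconnectMillis=100 ] {
    // pollers and data Things as before
}
```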

Note for users of Gateways
A device like a radio or serial gateway allows many real Modbus slaves to be available at the same IP address and port – but with different slave IDs.
The binding requires you to set up a separate TCP Bridge Thing for each slave, all using the shared IP/port.
You must make sure you use the same TCP option parameters for each Bridge sharing the common IP/port.
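
A sketch with two hypothetical slaves behind one gateway at the same IP/port – note the identical TCP options on both Bridges:

```
Bridge modbus:tcp:gw_slave1 [ host="192.168.1.90", port=502, id=1,
        reconnectAfterMillis=600000, timeBetweenReconnectMillis=100 ] {
    Bridge poller block1 [ start=0, length=10, type="holding", refresh=1000 ] {
        Thing data temp1 [ readStart="0", readValueType="int16" ]
    }
}

Bridge modbus:tcp:gw_slave2 [ host="192.168.1.90", port=502, id=2,
        reconnectAfterMillis=600000, timeBetweenReconnectMillis=100 ] {
    Bridge poller block1 [ start=0, length=10, type="holding", refresh=1000 ] {
        Thing data temp2 [ readStart="0", readValueType="int16" ]
    }
}
```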

Summary

In most Modbus-TCP setups, overheads (and, quite often, occasional timeout-and-retry incidents) can be much reduced by holding TCP connections open for a considerable time.
