OH4.2.1 modbus.sungrow Binding - WiNet-S2 stops responding

I’m running openHAB 4.2.1 on a Raspberry Pi 3B+ with the Modbus Sungrow add-on. My inverter/battery is an SH10RS/SBR192 with a WiNet-S2 comms module.

I can configure the Modbus TCP Slave and Sungrow Inverter things without any problem. I have configured 27 items associated with the inverter thing and everything appears to work (values are returned)… for a while. After some period of operation (it varies), I see the following in the openHAB log:

2024-08-24 14:21:28.066 [WARN ] [ing.ModbusSlaveConnectionFactoryImpl] - Connect reached max tries 1, throwing last error: Connect timed out. Connection TCPMasterConnection [m_Socket=Socket[unconnected], m_Timeout=3000, m_Connected=false, m_Address=inverter/192.168.1.5, m_Port=502, m_ModbusTransport=net.wimpi.modbus.io.ModbusTCPTransport@e99fdd, m_ConnectTimeoutMillis=10000, rtuEncoded=false]. Endpoint ModbusIPSlaveEndpoint [address=inverter, port=502]
2024-08-24 14:21:28.071 [WARN ] [ing.ModbusSlaveConnectionFactoryImpl] - Error connecting connection TCPMasterConnection [m_Socket=Socket[unconnected], m_Timeout=3000, m_Connected=false, m_Address=inverter/192.168.1.5, m_Port=502, m_ModbusTransport=net.wimpi.modbus.io.ModbusTCPTransport@e99fdd, m_ConnectTimeoutMillis=10000, rtuEncoded=false] for endpoint ModbusIPSlaveEndpoint [address=inverter, port=502]: Connect timed out
2024-08-24 14:21:28.074 [WARN ] [rt.modbus.internal.ModbusManagerImpl] - Could not connect to endpoint ModbusIPSlaveEndpoint [address=inverter, port=502] -- aborting request ModbusReadRequestBlueprint [slaveId=1, functionCode=READ_INPUT_REGISTERS, start=13001, length=46, maxTries=1] [operation ID 99f9c164-e64c-4439-9cc0-8ad6cf388e47]
2024-08-24 14:21:28.077 [DEBUG] [grow.internal.SungrowInverterHandler] - Failed to get modbus data
2024-08-24 14:21:28.106 [INFO ] [ab.event.ThingStatusInfoChangedEvent] - Thing 'modbus:sungrow-inverter:5c552136b1:d898067245' changed from ONLINE to OFFLINE (COMMUNICATION_ERROR): Failed to retrieve data: Error connecting to endpoint ModbusIPSlaveEndpoint [address=inverter, port=502]
2024-08-24 14:21:38.084 [WARN ] [ing.ModbusSlaveConnectionFactoryImpl] - Connect reached max tries 1, throwing last error: Connect timed out. Connection TCPMasterConnection [m_Socket=Socket[unconnected], m_Timeout=3000, m_Connected=false, m_Address=inverter/192.168.1.5, m_Port=502, m_ModbusTransport=null, m_ConnectTimeoutMillis=10000, rtuEncoded=false]. Endpoint ModbusIPSlaveEndpoint [address=inverter, port=502]
2024-08-24 14:21:38.086 [WARN ] [ing.ModbusSlaveConnectionFactoryImpl] - Error connecting connection TCPMasterConnection [m_Socket=Socket[unconnected], m_Timeout=3000, m_Connected=false, m_Address=inverter/192.168.1.5, m_Port=502, m_ModbusTransport=null, m_ConnectTimeoutMillis=10000, rtuEncoded=false] for endpoint ModbusIPSlaveEndpoint [address=inverter, port=502]: Connect timed out
2024-08-24 14:21:38.087 [WARN ] [rt.modbus.internal.ModbusManagerImpl] - Could not connect to endpoint ModbusIPSlaveEndpoint [address=inverter, port=502] -- aborting request ModbusReadRequestBlueprint [slaveId=1, functionCode=READ_INPUT_REGISTERS, start=5007, length=29, maxTries=1] [operation ID 8ce424a9-fa3c-42a9-afb4-94df55ae4fc6]
2024-08-24 14:21:38.089 [DEBUG] [grow.internal.SungrowInverterHandler] - Failed to get modbus data

The WiNet-S2 appears to shut down its network interface and stops responding to pings. I have to unplug it from the inverter and plug it back in to get it to reboot, after which everything returns to normal (until the issue comes up again).
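
To pin down exactly when the interface dies (rather than noticing hours later), I can log reachability once a minute with a small script. This is just a sketch using only the Python standard library and the Linux ping command; the host below is the WiNet-S2’s address from my network:

#!/usr/bin/env python3
# Reachability watchdog for the WiNet-S2 (sketch): logs whether the module
# answers a ping and accepts a TCP connection on port 502, once a minute.
# Assumes a Linux host with the standard ping command available.
import datetime
import socket
import subprocess
import time

HOST = "192.168.1.5"   # WiNet-S2
PORT = 502             # Modbus TCP

def tcp_ok(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def ping_ok(host: str) -> bool:
    # -c 1: single echo request, -W 2: wait up to 2 seconds for a reply
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

while True:
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    print(f"{stamp} ping={'ok' if ping_ok(HOST) else 'FAIL'} "
          f"tcp502={'ok' if tcp_ok(HOST, PORT) else 'FAIL'}", flush=True)
    time.sleep(60)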

I think it might be a conflict between openHAB and iSolarCloud (Sungrow’s app). When the WiNet-S2 stops responding, the iSolarCloud app shows the system status as OFFLINE with Live Data not enabled. Once I unplug and replug the WiNet-S2, the app goes back to normal.

The reason I suspect a conflict with the iSolarCloud app is that the WiNet-S2 and the app run fine by themselves (without openHAB running). I have the Sungrow Inverter thing pollInterval set to 5000 ms, and if I increase it (to, say, 60000 ms) it takes a lot longer for the problem to appear.
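
For anyone wanting to reproduce the polling without openHAB in the loop, a standalone poller along these lines should do it. This is only a sketch, assuming pymodbus 3.x (keyword names differ slightly between releases); the register block simply mirrors one of the requests from the log above, and the connect/poll/close cycle mimics what the packet capture below shows the binding doing:

#!/usr/bin/env python3
# Standalone Modbus TCP poller (sketch, assuming pymodbus 3.x).
# Mirrors one of the read requests from the openHAB log above
# (READ_INPUT_REGISTERS, start=13001, length=46, unit 1) and opens/closes
# the connection on every poll, like the binding does in the capture.
import time
from pymodbus.client import ModbusTcpClient

HOST = "192.168.1.5"   # WiNet-S2
UNIT = 1               # slave/unit id from the Modbus TCP Slave thing
START, COUNT = 13001, 46
INTERVAL = 5           # seconds, like the 5000 ms pollInterval

client = ModbusTcpClient(HOST, port=502, timeout=3)
while True:
    if not client.connect():
        print("connect failed")
    else:
        rr = client.read_input_registers(address=START, count=COUNT, slave=UNIT)
        if rr.isError():
            print(f"read error: {rr}")
        else:
            print(f"ok, first registers: {rr.registers[:4]}")
        client.close()
    time.sleep(INTERVAL)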

I did a Wireshark packet capture before, during and after the problem. The WiNet-S2 is 192.168.1.5 and the RPi is 192.168.1.84. Around the time of the problem, the capture looks like this:

No.    Time                Source        Destination   Protocol    Len  Info
4964   14:21:13.052306069  192.168.1.5   192.168.1.84  TCP          60  502 → 48408 [ACK] Seq=102 Ack=14 Win=131056 Len=0
4965   14:21:13.053344645  192.168.1.5   192.168.1.84  TCP          60  502 → 48408 [FIN, ACK] Seq=102 Ack=14 Win=131056 Len=0
4966   14:21:13.053446570  192.168.1.84  192.168.1.5   TCP          54  48408 → 502 [ACK] Seq=14 Ack=103 Win=33664 Len=0
4967   14:21:13.061966163  192.168.1.84  192.168.1.5   TCP          74  48420 → 502 [SYN] Seq=0 Win=33580 Len=0 MSS=1460 SACK_PERM TSval=484879828 TSecr=0 WS=128
4968   14:21:13.063244266  192.168.1.5   192.168.1.84  TCP          66  502 → 48420 [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 MSS=1440 WS=8 SACK_PERM
4969   14:21:13.063378326  192.168.1.84  192.168.1.5   TCP          54  48420 → 502 [ACK] Seq=1 Ack=1 Win=33664 Len=0
4970   14:21:13.111850874  192.168.1.84  192.168.1.5   Modbus/TCP   66  Query: Trans: 4806; Unit: 1, Func: 4: Read Input Registers
4971   14:21:13.115626174  192.168.1.5   192.168.1.84  Modbus/TCP  121  Response: Trans: 4806; Unit: 1, Func: 4: Read Input Registers
4972   14:21:13.115702735  192.168.1.84  192.168.1.5   TCP          54  48420 → 502 [ACK] Seq=13 Ack=68 Win=33664 Len=0
4973   14:21:13.128091844  192.168.1.84  192.168.1.5   TCP          54  48420 → 502 [FIN, ACK] Seq=13 Ack=68 Win=33664 Len=0
4974   14:21:13.129327864  192.168.1.5   192.168.1.84  TCP          60  502 → 48420 [ACK] Seq=68 Ack=14 Win=131056 Len=0
4975   14:21:13.130352951  192.168.1.5   192.168.1.84  TCP          60  502 → 48420 [FIN, ACK] Seq=68 Ack=14 Win=131056 Len=0
4976   14:21:13.130453366  192.168.1.84  192.168.1.5   TCP          54  48420 → 502 [ACK] Seq=14 Ack=69 Win=33664 Len=0
4978   14:21:18.053399875  192.168.1.84  192.168.1.5   TCP          74  43974 → 502 [SYN] Seq=0 Win=33580 Len=0 MSS=1460 SACK_PERM TSval=484884819 TSecr=0 WS=128
4979   14:21:19.094714625  192.168.1.84  192.168.1.5   TCP          74  [TCP Retransmission] 43974 → 502 [SYN] Seq=0 Win=33580 Len=0 MSS=1460 SACK_PERM TSval=484885861 TSecr=0 WS=128
4980   14:21:20.134702422  192.168.1.84  192.168.1.5   TCP          74  [TCP Retransmission] 43974 → 502 [SYN] Seq=0 Win=33580 Len=0 MSS=1460 SACK_PERM TSval=484886901 TSecr=0 WS=128
4981   14:21:21.174701525  192.168.1.84  192.168.1.5   TCP          74  [TCP Retransmission] 43974 → 502 [SYN] Seq=0 Win=33580 Len=0 MSS=1460 SACK_PERM TSval=484887941 TSecr=0 WS=128
4982   14:21:22.214707558  192.168.1.84  192.168.1.5   TCP          74  [TCP Retransmission] 43974 → 502 [SYN] Seq=0 Win=33580 Len=0 MSS=1460 SACK_PERM TSval=484888981 TSecr=0 WS=128
4983   14:21:23.254702657  192.168.1.84  192.168.1.5   TCP          74  [TCP Retransmission] 43974 → 502 [SYN] Seq=0 Win=33580 Len=0 MSS=1460 SACK_PERM TSval=484890021 TSecr=0 WS=128
4984   14:21:25.334750105  192.168.1.84  192.168.1.5   TCP          74  [TCP Retransmission] 43974 → 502 [SYN] Seq=0 Win=33580 Len=0 MSS=1460 SACK_PERM TSval=484892101 TSecr=0 WS=128
4985   14:21:28.073751378  192.168.1.84  192.168.1.5   TCP          74  39812 → 502 [SYN] Seq=0 Win=33580 Len=0 MSS=1460 SACK_PERM TSval=484894840 TSecr=0 WS=128
4986   14:21:29.094692658  192.168.1.84  192.168.1.5   TCP          74  [TCP Retransmission] 39812 → 502 [SYN] Seq=0 Win=33580 Len=0 MSS=1460 SACK_PERM TSval=484895861 TSecr=0 WS=128
4987   14:21:30.134700540  192.168.1.84  192.168.1.5   TCP          74  [TCP Retransmission] 39812 → 502 [SYN] Seq=0 Win=33580 Len=0 MSS=1460 SACK_PERM TSval=484896901 TSecr=0 WS=128
4988   14:21:31.174701760  192.168.1.84  192.168.1.5   TCP          74  [TCP Retransmission] 39812 → 502 [SYN] Seq=0 Win=33580 Len=0 MSS=1460 SACK_PERM TSval=484897941 TSecr=0 WS=128

The WiNet-S2 log around the same time looks like:

2024-08-24 14:20:59&0X05&mqtt_extend.c&12453&nHisStartTime =1723228799,nHisEndTime =1724473259.
2024-08-24 14:20:59&0X05&mqtt_extend.c&12505&MQTT Get SigID 6002 Failed.
2024-08-24 14:20:59&0X05&mqtt_dm.c&988&pu8SigOffset is null/pstCfgSig is null @ nDeviceId=2, nSigId=21527, nSigSize=2.
2024-08-24 14:20:59&0X05&mqtt_extend.c&12505&MQTT Get SigID 6002 Failed.
2024-08-24 14:20:59&0X05&mqtt_extend.c&12520&pub hisdata dev num is 0.
2024-08-24 14:21:00&0X02&port_mgr.c&1400&Port_ComDataStats: total[549467] recv[549466] err[0].
2024-08-24 14:21:02&0X02&net.c&2145&server close the connection!
2024-08-24 14:21:02&0X02&net.c&2145&server close the connection!
2024-08-24 14:21:07&0X02&net.c&2145&server close the connection!
2024-08-24 14:21:07&0X02&net.c&2145&server close the connection!
2024-08-24 14:21:07&0X05&mqtt_dm.c&988&pu8SigOffset is null/pstCfgSig is null @ nDeviceId=2, nSigId=273, nSigSize=2.
2024-08-24 14:21:12&0X02&net.c&2145&server close the connection!
2024-08-24 14:21:12&0X02&net.c&2145&server close the connection!
2024-08-24 14:21:30&0X05&mqtt_extend.c&12453&nHisStartTime =1723228799,nHisEndTime =1724473290.
2024-08-24 14:21:30&0X05&mqtt_extend.c&12505&MQTT Get SigID 6002 Failed.
2024-08-24 14:21:30&0X05&mqtt_dm.c&988&pu8SigOffset is null/pstCfgSig is null @ nDeviceId=2, nSigId=21527, nSigSize=2.
2024-08-24 14:21:30&0X05&mqtt_extend.c&12505&MQTT Get SigID 6002 Failed.
2024-08-24 14:21:30&0X05&mqtt_extend.c&12520&pub hisdata dev num is 0.
2024-08-24 14:22:00&0X05&mqtt_dm.c&881&pu8SigOffset is null nDeviceId=1,nSigId=8224.

The Modbus TCP Slave thing is configured as follows:

UID: modbus:tcp:5c552136b1
label: Modbus TCP Slave
thingTypeUID: modbus:tcp
configuration:
  rtuEncoded: false
  connectMaxTries: 1
  reconnectAfterMillis: 0
  timeBetweenTransactionsMillis: 60
  port: 502
  timeBetweenReconnectMillis: 0
  connectTimeoutMillis: 10000
  host: inverter
  afterConnectionDelayMillis: 0
  id: 1
  enableDiscovery: true

The Sungrow Inverter thing is configured as follows:

UID: modbus:sungrow-inverter:5c552136b1:d898067245
label: Sungrow Inverter
thingTypeUID: modbus:sungrow-inverter
configuration:
  pollInterval: 5000
bridgeUID: modbus:tcp:5c552136b1
channels:
  - id: sg-overview#sg-total-active-power
    channelTypeUID: modbus:sg-total-active-power
    label: Total Active Power
    configuration: {}
  - id: sg-overview#sg-total-dc-power
    channelTypeUID: modbus:sg-total-dc-power
    label: Total DC Power
    configuration: {}
  - id: sg-overview#sg-phase-a-voltage
    channelTypeUID: modbus:sg-phase-a-voltage
    label: Phase A Voltage
    configuration: {}
  - id: sg-overview#sg-phase-b-voltage
    channelTypeUID: modbus:sg-phase-b-voltage
    label: Phase B Voltage
    configuration: {}
  - id: sg-overview#sg-phase-c-voltage
    channelTypeUID: modbus:sg-phase-c-voltage
    label: Phase C Voltage
    configuration: {}
  - id: sg-overview#sg-phase-a-current
    channelTypeUID: modbus:sg-phase-a-current
    label: Phase A Current
    configuration: {}
  - id: sg-overview#sg-phase-b-current
    channelTypeUID: modbus:sg-phase-b-current
    label: Phase B Current
    configuration: {}
  - id: sg-overview#sg-phase-c-current
    channelTypeUID: modbus:sg-phase-c-current
    label: Phase C Current
    configuration: {}
  - id: sg-overview#sg-reactive-power
    channelTypeUID: modbus:sg-reactive-power
    label: Reactive Power
    configuration: {}
  - id: sg-overview#sg-grid-frequency
    channelTypeUID: modbus:sg-grid-frequency
    label: Grid Frequency
    configuration: {}
  - id: sg-overview#sg-daily-pv-generation
    channelTypeUID: modbus:sg-daily-pv-generation
    label: Daily PV Generation
    configuration: {}
  - id: sg-overview#sg-total-pv-generation
    channelTypeUID: modbus:sg-total-pv-generation
    label: Total PV Generation
    configuration: {}
  - id: sg-overview#sg-power-factor
    channelTypeUID: modbus:sg-power-factor
    label: Power Factor
    configuration: {}
  - id: sg-overview#sg-internal-temperature
    channelTypeUID: modbus:sg-internal-temperature
    label: Internal Temperature
    configuration: {}
  - id: sg-mppt-information#sg-mppt1-voltage
    channelTypeUID: modbus:sg-mppt1-voltage
    label: MPPT1 Voltage
    configuration: {}
  - id: sg-mppt-information#sg-mppt1-current
    channelTypeUID: modbus:sg-mppt1-current
    label: MPPT1 Current
    configuration: {}
  - id: sg-mppt-information#sg-mppt2-voltage
    channelTypeUID: modbus:sg-mppt2-voltage
    label: MPPT2 Voltage
    configuration: {}
  - id: sg-mppt-information#sg-mppt2-current
    channelTypeUID: modbus:sg-mppt2-current
    label: MPPT2 Current
    configuration: {}
  - id: sg-battery-information#sg-battery-voltage
    channelTypeUID: modbus:sg-battery-voltage
    label: Battery Voltage
    configuration: {}
  - id: sg-battery-information#sg-battery-current
    channelTypeUID: modbus:sg-battery-current
    label: Battery Current
    configuration: {}
  - id: sg-battery-information#sg-battery-power
    channelTypeUID: modbus:sg-battery-power
    label: Battery Power
    configuration: {}
  - id: sg-battery-information#sg-battery-level
    channelTypeUID: modbus:sg-battery-level
    label: Battery Level
    configuration: {}
  - id: sg-battery-information#sg-battery-healthy
    channelTypeUID: modbus:sg-battery-healthy
    label: Battery Healthy
    configuration: {}
  - id: sg-battery-information#sg-battery-temperature
    channelTypeUID: modbus:sg-battery-temperature
    label: Battery Temperature
    configuration: {}
  - id: sg-battery-information#sg-battery-capacity
    channelTypeUID: modbus:sg-battery-capacity
    label: Battery Capacity
    configuration: {}
  - id: sg-battery-information#sg-daily-charge-energy
    channelTypeUID: modbus:sg-daily-charge-energy
    label: Daily Charge Energy
    configuration: {}
  - id: sg-battery-information#sg-total-charge-energy
    channelTypeUID: modbus:sg-total-charge-energy
    label: Total Charge Energy
    configuration: {}
  - id: sg-battery-information#sg-daily-battery-charge
    channelTypeUID: modbus:sg-daily-battery-charge
    label: Daily Battery Charge
    configuration: {}
  - id: sg-battery-information#sg-total-battery-charge
    channelTypeUID: modbus:sg-total-battery-charge
    label: Total Battery Charge
    configuration: {}
  - id: sg-battery-information#sg-daily-battery-discharge-energy
    channelTypeUID: modbus:sg-daily-battery-discharge-energy
    label: Daily Battery Discharge Energy
    configuration: {}
  - id: sg-battery-information#sg-total-battery-discharge-energy
    channelTypeUID: modbus:sg-total-battery-discharge-energy
    label: Total Battery Discharge Energy
    configuration: {}
  - id: sg-grid-information#sg-daily-export-energy
    channelTypeUID: modbus:sg-daily-export-energy
    label: Daily Export Energy
    configuration: {}
  - id: sg-grid-information#sg-total-export-energy
    channelTypeUID: modbus:sg-total-export-energy
    label: Total Export Energy
    configuration: {}
  - id: sg-grid-information#sg-daily-import-energy
    channelTypeUID: modbus:sg-daily-import-energy
    label: Daily Import Energy
    configuration: {}
  - id: sg-grid-information#sg-total-import-energy
    channelTypeUID: modbus:sg-total-import-energy
    label: Total Import Energy
    configuration: {}
  - id: sg-grid-information#sg-daily-export-power-from-pv
    channelTypeUID: modbus:sg-daily-export-power-from-pv
    label: Daily Export Power from PV
    configuration: {}
  - id: sg-grid-information#sg-total-export-energy-from-pv
    channelTypeUID: modbus:sg-total-export-energy-from-pv
    label: Total Export Energy from PV
    configuration: {}
  - id: sg-grid-information#sg-export-power
    channelTypeUID: modbus:sg-export-power
    label: Export Power
    configuration: {}
  - id: sg-load-information#sg-load-power
    channelTypeUID: modbus:sg-load-power
    label: Load Power
    configuration: {}
  - id: sg-load-information#sg-daily-direct-energy-consumption
    channelTypeUID: modbus:sg-daily-direct-energy-consumption
    label: Daily Direct Energy Consumption
    configuration: {}
  - id: sg-load-information#sg-total-direct-energy-consumption
    channelTypeUID: modbus:sg-total-direct-energy-consumption
    label: Total Direct Energy Consumption
    configuration: {}
  - id: sg-load-information#sg-self-consumption-today
    channelTypeUID: modbus:sg-self-consumption-today
    label: Self Consumption Today
    configuration: {}

Does anyone have any ideas, or has anyone come across this before? Thanks.

I tried to ‘slow’ things down by changing the configuration of the Modbus TCP Slave to:

UID: modbus:tcp:5c552136b1
label: Modbus TCP Slave
thingTypeUID: modbus:tcp
configuration:
  rtuEncoded: false
  connectMaxTries: 5
  reconnectAfterMillis: 1000
  timeBetweenTransactionsMillis: 1000
  port: 502
  timeBetweenReconnectMillis: 10000
  connectTimeoutMillis: 10000
  host: inverter
  afterConnectionDelayMillis: 1000
  id: 1
  enableDiscovery: true

and set the pollInterval on the Sungrow Inverter thing to 30000 ms. The connect errors started after about 7 hours, so that’s not the solution.

Well, this is interesting.

The WiNet-S2 has an Ethernet connection and a WiFi connection. I originally went with the Ethernet connection for reliability and left the WiFi unconfigured. To determine whether the WiNet-S2 was shutting down completely or just disabling the network interface, I enabled the WiNet-S2 WiFi interface and connected it to my WiFi network. I changed nothing else; openHAB is still configured to poll via the Ethernet interface. I re-enabled the Sungrow Inverter thing in openHAB. That was 48 hours ago. Since then, the WiNet-S2 has remained online and there have been no hiccups.

I will continue to monitor the situation.

Even more interesting…

With the WiFi interface enabled but openHAB still polling via the Ethernet interface, the Ethernet interface shut down after 2.2 days. The WiFi interface stayed up, but I was unable to browse to it (“Network connection is abnormal. Please check network.” was displayed by the WiNet-S2). I reconfigured openHAB to use the WiFi interface rather than the Ethernet interface and changed nothing else. After 4 days of this working flawlessly, I unplugged the WiNet-S2 and plugged it back in (to bring the Ethernet interface back up). Both interfaces are now up and openHAB is polling via the WiFi interface. I’ll see how long this stays stable.

P.S. I have tested the Ethernet cable and it checks out OK.

I have been working with Sungrow on this and they are now sending a new WiNet-S2 dongle, so it looks like it could be a hardware problem. I will report back when the replacement arrives and has been tested.

Sungrow have replaced the WiNet-S2 with a WiNet-S and the LAN connection seems stable. I am still unable to have both the WiFi and LAN interfaces enabled for any length of time, but that’s not a huge issue as long as the LAN interface remains stable. The web interface of the new WiNet-S doesn’t stay active for more than an hour or two (the error is “WebSocket connection to ‘ws://inverter:8082/ws/home/overview’ failed: WebSocket is closed before the connection is established.”). I need to unplug/replug the WiNet-S to re-enable the web interface. Again, not a huge issue, given that you don’t really need the web interface once everything is configured. The iSolarCloud and Modbus comms continue to operate even when the web interface dies, so all the monitoring is OK.

There are quite a few registers you can access through the HTTP interface but can’t access through the Modbus interface. After Sungrow upgraded the firmware on my WiNet while troubleshooting some other issues, I ran into similar problems running SunGather, which polls over Modbus.

As far as I can tell, if you poll a register that is inaccessible via Modbus too many times, it eventually crashes the module (it might be filling up some log?), and I similarly had to unplug/replug the module. (A one-shot probe sketch for checking candidate registers follows the list below.)

I eventually narrowed it down and removed the following registers from what SunGather was polling. This is on an SH6.0RS with a WiNet-S using WiFi.
5113
6100
6196
6290
6386
6469
6565
6648
6744
13003

For the past 3-4 days it seems to have been stable, whereas previously if I ran SunGather the module would crash within 20 hours or so.
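
If anyone wants to check candidate registers without a long crash-inducing run, a one-shot probe along these lines is what I mean. This is only a sketch, assuming pymodbus 3.x and Modbus TCP to the module, and run it sparingly, since hammering registers the module can’t serve seems to be exactly what crashes it:

#!/usr/bin/env python3
# One-shot register probe (sketch, assuming pymodbus 3.x): reads each
# candidate register once and reports whether the module answered, returned
# a Modbus error/exception, or failed. Run sparingly -- repeatedly hitting
# unsupported registers is what seems to crash the module in the first place.
from pymodbus.client import ModbusTcpClient

HOST = "192.168.1.5"   # adjust to your WiNet address
UNIT = 1
CANDIDATES = [5113, 6100, 6196, 6290, 6386, 6469, 6565, 6648, 6744, 13003]

client = ModbusTcpClient(HOST, port=502, timeout=3)
client.connect()
for reg in CANDIDATES:
    # Depending on the tool, the on-wire address may need a -1 offset
    # relative to the register number in the protocol document.
    rr = client.read_input_registers(address=reg, count=1, slave=UNIT)
    if rr.isError():
        print(f"{reg}: error -> {rr}")
    else:
        print(f"{reg}: ok -> {rr.registers}")
client.close()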

Hi @jpwise, thanks for the info. TBH, I’m a little uncomfortable with Sungrow being able to upgrade firmware without my knowledge, but hey.

According to V1.1.4 of the Communication Protocol document (attached), register 5113 is unused, and registers 6100-6826 aren’t available over Ethernet TCP/IP (“WINET-S forwarding via Ethernet TCP/IP is not supported”), but I assume it would be the same for WiFi. I guess that explains why you had to remove them from the polling list to stabilise the environment.

Registers 13003 (and 13004) SHOULD be readable, as long as you are reading them together as a U32 and not as anything else.

Communication Protocol of Residential Hybrid Inverter-V1.1.4.pdf (742.9 KB)
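
For what it’s worth, reading 13003/13004 as a single U32 with something like pymodbus would look roughly like this. A sketch only: the exact address offset and the word order are assumptions, so verify the result against a value you can see in iSolarCloud:

#!/usr/bin/env python3
# Read registers 13003-13004 as one U32 (sketch, assuming pymodbus 3.x).
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.168.1.5", port=502, timeout=3)
client.connect()
# Some tools expect the register number from the protocol document, others a
# 0-based address one lower (13002); adjust if you get an exception or
# obviously wrong data.
rr = client.read_input_registers(address=13003, count=2, slave=1)
if not rr.isError():
    low, high = rr.registers          # word order assumed low word first
    print("U32 value:", (high << 16) | low)
else:
    print("read failed:", rr)
client.close()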

Hey @owl770, thanks for that. The most recent Modbus list I had previously was v1.0.25, but I picked up v1.1.4 from the other thread yesterday; I haven’t had a chance to read through it yet though.

With the inverter being managed online, I’m not surprised about the updates; I’m just a little annoyed, as I’d deliberately held off on updating the WiFi module because I half expected issues. :confused:

For my purposes, I originally noticed the issue with SunGather.
When I finally had time to troubleshoot it, I broke the registers out into ranges of 10 and looked for the errors.
5113 - SunGather lists it as ‘daily_running_time’, but it isn’t linked to a specific model, so I’m guessing it’s used in the SG series?

When I went through the ones triggering read errors, most often there was only a single register in each block, but it’s also entirely possible the error was transient if something else (i.e. the Sungrow binding) was accessing the inverter at the same time. Reading the latest revision about 6110-6826 not being accessible over Ethernet (assuming they mean Modbus TCP), it covers a good chunk of the registers I identified (and was previously accessing via SunGather’s sungrow-http interface). I’d also agree that 13003 should be safe to read.

In either event, repeated reading of these ?illegal? addresses definitely seems to trigger the module to crash. Some other threads I’ve read suggested it could be a log on the module filling up, which sounds plausible. There were also a few comments that the module appeared to reboot around midnight each night automatically, but I didn’t leave it crashed long enough to confirm that.

Thx
Jp.