No, I didn’t do a push or a PR yet as I wanted to react quickly, but of course I can do so during the day. I can provide you with the compiled bundle jar, though, just PM me.
I am not sure how we currently deal with PRs on 3.4. Actually, we are already concentrating on 4.0, as you know. Providing it on 4.0 doesn’t make sense yet, at least to me, because 4.0 is still so early that it doesn’t even run on my side.
So, if any maintainer wants to weigh in on whether I should still provide it as a PR on 3.4, let me know.
OK, writing it down helped again, but it is actually nothing new:
As I already mentioned on GitHub, this comes up from time to time:
2022-12-28 13:21:06.304 [WARN ] [su.litvak.chromecast.api.v2.Channel ] - Error while reading, caused by su.litvak.chromecast.api.v2.ChromeCastException: Remote socket closed
2022-12-28 13:21:06.306 [WARN ] [su.litvak.chromecast.api.v2.Channel ] - <-- null payload in message
which obviously leads to
2022-12-28 13:21:06.325 [INFO ] [ab.event.ThingStatusInfoChangedEvent] - Thing 'chromecast:chromecast:5fa9d22389a1977c0cff5326b87d88f7' changed from ONLINE to OFFLINE
2022-12-28 13:21:06.328 [INFO ] [ab.event.ThingStatusInfoChangedEvent] - Thing 'chromecast:chromecast:5fa9d22389a1977c0cff5326b87d88f7' changed from OFFLINE to OFFLINE (COMMUNICATION_ERROR): Interrupted while waiting for response
2022-12-28 13:21:16.433 [INFO ] [ab.event.ThingStatusInfoChangedEvent] - Thing 'chromecast:chromecast:5fa9d22389a1977c0cff5326b87d88f7' changed from OFFLINE (COMMUNICATION_ERROR): Interrupted while waiting for response to ONLINE
but even though I am on TRACE, no debug or trace message comes up.
If you can reproduce that it happens without the logging and does not happen with the additional logs (does it also happen when you set the binding to trace logging WITHOUT your additions?), I would remove log statements one by one until it re-occurs, and then see if there is anything around that logging that might be timing-sensitive.
For me, the errors only appear after a while (using the standard add-on, not your version). They don’t appear immediately after an openHAB restart; in my case, it took about 40+ minutes until I started seeing it go offline.
Yes, it does happen without the logging, as the status change shows up at INFO level on “ab.event.ThingStatusInfoChangedEvent”. Only then do I switch on the Chromecast logs, which are actually pretty silent even with trace on (probably not a lot happening when you are not using the device).
I’ll send you a PM and try it out here. I’m probably a good volume use case, as I currently have 19 active things on the Chromecast binding. I’m running the Docker image, so I might need a little coaching on how to deploy this add-on manually.
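For the Docker case, manual deployment usually just means dropping the jar into the addons folder that the container watches. A minimal sketch, assuming the container is named `openhab`, the addons folder is bind-mounted to `/opt/openhab/addons` on the host, and a hypothetical jar filename (adjust all three to your setup):

```shell
# Option 1: copy the jar into the bind-mounted addons folder on the host;
# openHAB's bundle watcher picks it up automatically, no restart needed.
cp org.openhab.binding.chromecast-custom.jar /opt/openhab/addons/

# Option 2: copy it straight into the running container instead.
docker cp org.openhab.binding.chromecast-custom.jar openhab:/openhab/addons/
```

If the stock Chromecast binding is installed via the UI, it should be uninstalled first so the two bundle versions don’t conflict.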
Same here regarding rrd4j after moving from 2.5 to 3.4:
2022-12-29 22:29:02.103 [WARN ] [d4j.internal.RRD4jPersistenceService] - Failed to open rrd4j database 'Power_UG_Garagedoor_wattmeter' (java.lang.IllegalArgumentException)
However, this is only the case when the rrd4j file in question does not exist. As soon as I copied the old rrd4j file to /srv/openhab-userdata/persistence/rrd4j/Power_UG_Garagedoor_wattmeter.rrd, the WARN disappeared and the rrd4j file gets updated normally.
But my question now is: what do I have to do so that the rrd4j files get created initially? This could be a bug, because before the upgrade I never did anything special for the initial rrd4j creation. But rrd4j has been the default persistence service since OH 3; since I came from OH 2.5, I can’t say for sure that it is 3.4 related.
(BTW: I started with a fresh openHABian 1.7.5 image and migrated the old stuff manually)
In my case, the rrd files already exist, and the date of last change is the same as for all rrd files in that directory. Maybe I could remove them and restart OH (AFAIK, this triggers the creation of fresh files).
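The remove-and-restart idea could be sketched as follows; this assumes a systemd-managed openHAB with the service name `openhab` and the rrd4j directory at `/var/lib/openhab/persistence/rrd4j` (paths and service name vary by install, e.g. openHABian exposes it under /srv/openhab-userdata/persistence/rrd4j), and it backs the files up first rather than deleting outright:

```shell
# Stop openHAB so no writes happen while the files are moved away.
sudo systemctl stop openhab

# Move the existing rrd files aside instead of deleting them,
# so they can be restored if recreation doesn't work as hoped.
mkdir -p ~/rrd4j-backup
sudo mv /var/lib/openhab/persistence/rrd4j/*.rrd ~/rrd4j-backup/

# On restart, rrd4j should recreate a file for each persisted item
# when the first value is stored.
sudo systemctl start openhab
```

This discards the stored history for those items, so only do it for files you can afford to lose (or restore from the backup).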
Only post to this thread on issues specific to 3.4 code, i.e. issues that were introduced with 3.4 and didn’t exist before (that means in 3.3, not in whatever version you have been running earlier).
Only report issues that you can reproduce and confirm to be general issues with the code, and not just issues of yours that you have not analyzed to be caused by the 3.3->3.4 change, or that are otherwise specific to your setup.
For anything else, please open your own thread(s).
Help us keep communication clean and effective.