hmm, interesting, but according to the documentation:
I have a lot of rule files, so that would only be 100% accurate if I put at least a dummy rule into every file that has none. But that would not be a problem, so thanks for that!
Yes, I think a .rules file won't trigger this, but an .items file definitely will (any .items file). I think the same happens with a .things file, if I remember correctly.
Yes, I thought about that, but I was not sure about comments that contain the word. I would have to tighten the regex, e.g. the only thing on the line must be "when" and nothing else: no // or /* or anything like that. I guess '^when' would work in "most" cases. It's not perfect, but I can live with it for now.
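If it helps, the '^when' idea can be sketched in Python roughly like this (the glob path is an assumption taken from a default openHAB 2 install; adjust it to yours):

```python
import glob
import re

def count_when_lines(pattern="/etc/openhab2/rules/*.rules"):
    """Count lines that consist of nothing but the keyword 'when'."""
    total = 0
    for file_name in glob.glob(pattern):
        with open(file_name, "r") as f:
            # ^when\s*$ only matches a line containing 'when' alone,
            # so commented-out copies like '// when' are not counted.
            total += len(re.findall(r"^when\s*$", f.read(), re.MULTILINE))
    return total
```

It still misses a `when` hidden inside a /* */ block comment on its own line, so it is an approximation, not an exact count.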
For counting the things you could use the REST call: http://myopenhabserver:8080/rest/things
The response is a JSON string containing all things.
Something similar works for Items: http://myopenhabserver:8080/rest/items?recursive=false
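As a rough sketch, counting either kind of entity boils down to taking the length of the JSON array the REST API returns (hostname from the post; replace with your own server):

```python
import requests

def count_rest_entries(url):
    """Return the number of objects in the JSON array served by an openHAB REST endpoint."""
    response = requests.get(url)
    response.raise_for_status()  # fail loudly on HTTP errors
    return len(response.json())

# Usage:
# count_rest_entries("http://myopenhabserver:8080/rest/things")
# count_rest_entries("http://myopenhabserver:8080/rest/items?recursive=false")
```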
If you are using JSR223 or NGRE Rules you can use the REST API the same as Jürgen shows for Things.
For Rules DSL Rules, use something like what you posted above. Pick a word or string of words that is unlikely to occur more than once per rule, grep for it, and use wc -l to get the count.
Provide the full path for grep and wc. Replace the spaces with @@ the same as is sometimes required by the Exec binding. If all else fails, put it into a shell script and run the script from OH.
Thanks, I have replaced my code with this, and also cleaned up the code a bit so it is easier for others to modify (there is also a copy-paste error in your code fence, on the last line):
```python
import requests, sys, glob, re

# Modify these variables to fit your system
url_openhab = 'http://localhost:8080'
thing_item_name = "openHAB_AllThings"
item_item_name = "openHAB_AllItems"
rule_item_name = "openHAB_AllRules"
rule_path = "/etc/openhab2/rules/*.rules"

items = len(requests.get(url = url_openhab + "/rest/items?recursive=false").json())
requests.post(url = url_openhab + "/rest/items/" + item_item_name, data = str(items), verify=False)
print("Items: " + str(items))

things = len(requests.get(url = url_openhab + "/rest/things").json())
requests.post(url = url_openhab + "/rest/items/" + thing_item_name, data = str(things), verify=False)
print("Things: " + str(things))

rules = 0
for fileName in glob.glob(rule_path):
    with open(fileName, 'r') as file:
        fileContent = file.read()
        fileContent = re.sub(re.compile(r"/\*.*?\*/", re.DOTALL), "", fileContent)  # remove all block comments (/* COMMENT */)
        fileContent = re.sub(re.compile(r"//.*?\n"), "", fileContent)  # remove all single-line comments (// COMMENT)
        rules = rules + len(re.findall('rule', fileContent, re.IGNORECASE))
requests.post(url = url_openhab + "/rest/items/" + rule_item_name, data = str(rules), verify=False)
print("Rules: " + str(rules))
```
However, this says that I have 114 rules, while the grep and wc command I posted earlier says 84. Interesting, because the grep version doesn't even account for comments and other noise, and it still reports fewer rules… I will look into this.
The regex in re.findall() is not strict enough. It will also match any string that merely contains 'rule', so for example 'rules' will match too.
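One way to tighten it is to anchor the match to the start of a rule definition. A sketch, assuming Rules DSL rules always begin with rule "name" at the start of a line:

```python
import re

# Match 'rule' only at the start of a line and followed by a quoted name,
# so occurrences inside other words ('rules') or in prose are not counted.
RULE_DEF = re.compile(r'^\s*rule\s+"', re.MULTILINE | re.IGNORECASE)

def count_rules(file_content):
    """Count rule definitions in the text of a .rules file."""
    return len(RULE_DEF.findall(file_content))
```

This should still run after the comment-stripping step in the script above, so that commented-out rule definitions are not counted.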
So here is my better version. It includes your suggested fix, but it also counts the "end"s, "when"s and "then"s and takes the lowest count it finds.
And… I added logging for that, which I kept, logging is good…
And… I started with your version, kind of.
And… I found out how to post code the right way
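For anyone reading along, the "take the lowest count" idea could be sketched like this (my own sketch, not the poster's actual code; it assumes the keywords sit on their own lines, as is usual in Rules DSL):

```python
import re

# A well-formed Rules DSL rule has exactly one 'rule', 'when', 'then'
# and 'end' each, so the smallest of the four counts is the safest estimate.
KEYWORDS = ("rule", "when", "then", "end")

def estimate_rule_count(file_content):
    counts = []
    for kw in KEYWORDS:
        if kw == "rule":
            pattern = r'^\s*rule\s+"'      # rule definitions start with a quoted name
        else:
            pattern = r"^\s*%s\s*$" % kw   # keyword alone on its line
        counts.append(len(re.findall(pattern, file_content, re.MULTILINE)))
    return min(counts)
```

Taking the minimum guards against stray matches: an extra "end" in a string or an unstripped comment can only inflate one of the four counts, and the minimum ignores it.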