Configuration updates very slow

Hello,

I am having trouble with my openHAB instance.
When I change something in configuration files such as *.rules or *.sitemap, it takes extremely long for the update to show up (sometimes more than 15 minutes, or only after a reboot).

Here is some data:

##        Ip = Unable to parse ip . Please debug.
##   Release = Raspbian GNU/Linux 10 (buster)
##    Kernel = Linux 5.4.72-v7l+
##  Platform = Raspberry Pi 4 Model B Rev 1.2
##    Uptime = 0 day(s). 9:40:21
## CPU Usage = 26% avg over 4 cpu(s) (4 core(s) x 1 socket(s))
##  CPU Load = 1m: 1.28, 5m: 1.22, 15m: 0.89
##    Memory = Free: 2.88GB (76%), Used: 0.89GB (24%), Total: 3.78GB
##      Swap = Free: 1.99GB (100%), Used: 0.00GB (0%), Total: 1.99GB
##      Root = Free: 443.51GB (98%), Used: 6.73GB (2%), Total: 469.34GB
##   Updates = 0 apt updates available.
##  Sessions = 1 session(s)
## Processes = 120 running processes of 32768 maximum processes
openHAB 2.5.10-1 (Release Build)
500GB internal Samsung Pro SSD via SATA USB adapter (few weeks old)

Something seems to be wrong with the status. I always get a warning:


[09:52:35] openhabian@openHABianSHS:~$ sudo /bin/systemctl status openhab2.service
Warning: The unit file, source configuration file or drop-ins of openhab2.service changed on disk. Run 'systemctl daemon-reload' to reload unit
● openhab2.service - openHAB2 instance, reachable at http://openHABianSHS:8080
   Loaded: loaded (/usr/lib/systemd/system/openhab2.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/openhab2.service.d
           └─override.conf
   Active: active (running) since Sat 2020-10-31 00:11:01 CET; 9h ago
     Docs: https://www.openhab.org/docs/
           https://community.openhab.org
  Process: 737 ExecStartPre=/bin/bash -c /usr/bin/find ${OPENHAB_CONF} -name "*.rules" -exec /usr/bin/rename.ul .rules .x {} \; (code=exited, s
  Process: 756 ExecStartPost=/bin/sleep 120 (code=exited, status=0/SUCCESS)
  Process: 1373 ExecStartPost=/bin/bash -c /usr/bin/find ${OPENHAB_CONF} -name "*.x" -exec /usr/bin/rename.ul .x .rules {} \; (code=exited, sta
 Main PID: 755 (java)
    Tasks: 155 (limit: 4915)
   CGroup: /system.slice/openhab2.service
           └─755 /usr/bin/java -Dopenhab.home=/usr/share/openhab2 -Dopenhab.conf=/etc/openhab2 -Dopenhab.runtime=/usr/share/openhab2/runtime -D

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

Rebooting,
updating,
sudo systemctl daemon-reload,
sudo openhab-cli reset-ownership,
sudo openhab-cli clean-cache,
Apply Improvements,
didn’t fix it.

Any idea what I can do?

How did you install openHAB?
What version and type of Java? Some builds do not use hardware floats.
What do the OS and openHAB logs say? (See below for how to check them.)
Is it still slow when using an SD card? That could point to a USB adapter or SSD issue.
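If you are not sure where to look for those, assuming the standard openHABian paths, something like this shows both:

tail -f /var/log/openhab2/openhab.log /var/log/openhab2/events.log    # openHAB's own logs
journalctl -u openhab2.service -e                                     # what systemd/the OS sees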

Please describe in more detail.
Does it happen on every change or just sometimes?
Since when? Did you change anything in your setup since then?
Set the Karaf log level for org.eclipse.smarthome.model.script to DEBUG to see how long compilation takes when you change a .rules or even an .items file. .sitemap files should be fast, though.
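If you have not used the Karaf console before, roughly like this should do it (habopen is the default console password on openHABian; adjust if you changed it):

openhab-cli console                                           # open the openHAB (Karaf) console
openhab> log:set DEBUG org.eclipse.smarthome.model.script     # turn on rule/script engine debug output
openhab> log:tail                                             # follow the log live; Ctrl+C to stop
openhab> log:set DEFAULT org.eclipse.smarthome.model.script   # revert when you are done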

The SW status of your box is OK; those are just cosmetic bugs.
HW though: running the OS off a USB adapter is not a good idea. It is always a potential source of pain.
Which is why we don't support it in openHABian.

I activated boot from USB via Raspberry Pi 4 Firmware.
No SD card is used.
The openHABian image was written to the SSD with Etcher; the SSD sits in a case and is connected to the Pi via a USB 3.0 cable. The case and SSD are newly bought.

I didn’t have the problem from the start.
It happened when I had a syntax error in a rule. Maybe I did something else like a reboot in the meantime. I am not sure.
From that moment on it took forever for a rule to load.
Before the error, a newly loaded rule showed up almost instantly in the Log Viewer.

When I change something in a rule file, it is not used until the new version has loaded, so in between the old rule is not executed either.
When I make a syntax error again, it is not shown in the Log Viewer; the rule is just unloaded. When I correct the file it takes a long time to load, or I have to reboot.

Maybe it has something to do with the rule loading delay or ZRAM?
Maybe I changed something by mistake?

As far as I know I only change files like items, things, rules, persistence and sitemaps.
The issue happens every time with all files. It takes a long time. Before it was almost instant.

My only way of debugging is the Log Viewer. I don't know about other ways to see the logs.
I activated DEBUG for org.eclipse.smarthome.model.script, but there is too much data for me to make sense of.
The last update of my bigger rule file took 23 minutes from saving to executing.
Now the log is spamming and I can't stop it:

2020-10-31 19:15:47.921 [DEBUG] [e.osgi.LoggingCommandSessionListener] - Command: 'log:set OFF org.eclipse.smarthome.model.script' returned 'null'

How are you determining that a rule is ‘loaded’? Are you talking about the openhab.log “Loading model …” messages?

What are you seeing to conclude that?

Hmm so you don’t know when it started. That doesn’t help in diagnosing.

But your statements show a number of misunderstandings, so I'm convinced that what you believe you are seeing is not what was/is actually happening.
Rule loading always takes time on an RPi unless the rule is broken and processing stops early.
The time a rule takes to load also varies greatly depending on whether you use specific constructs such as Java primitives (for example .intValue).
Changing .items files is particularly bad, as it can trigger reloads of ALL rules in a single go.
This is why I told you to enable that debug setting. Start with that. You will see 'Loading model xxx.rules' and 'Refreshing model xxx.rules' messages in openhab.log whenever a rule file reloads, so you will get more insight into WHEN that happens and which rule(s) take the most time.
That should help you in narrowing down the cause.
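A quick way to pick those lines out of the noise, assuming the default log location, is something like:

grep -E "Loading model|Refreshing model" /var/log/openhab2/openhab.log

The timestamps on those lines, compared with when the rule actually starts firing again, give you a rough per-file load time.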

On your HW choice, well … your installation method is not recommended at all.
You effectively have ZRAM running even though you have an SSD, which you probably chose to avoid SD card wear-out. But that is exactly what ZRAM already does; doing both is overkill and they adversely affect each other.
I assume you did not read the openHABian documentation, because it warns against this combination and
tells you to install the image to an SD card. If you had gone with the recommendations this would not have happened. Now that you did, we cannot know whether that is the origin of your problems, but it may well be.
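If you want to check what ZRAM is actually doing on that box, the standard tools are enough (nothing openHAB-specific here):

zramctl                  # active zram devices and their compressed sizes
cat /proc/swaps          # shows zram swap, if configured
mount | grep -i zram     # zram-backed mounts, if present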

Yes, I see "Loading model". Shortly after that the rule is working. Mostly it is a cron rule that does something every minute.

I conclude a rule is unloaded when the rule that updates an item every minute stops doing so.

Is there a way to filter the log files? Because there are like thousands of entries every second.

Example:

19:40:36.067 [DEBUG] [org.eclipse.jetty.http.HttpGenerator ] - NO_CONTENT
19:40:36.073 [DEBUG] [org.eclipse.jetty.client.HttpSender  ] - Generated headers (174 bytes), chunk (-1 bytes), content (0 bytes) - FLUSH/HttpGenerator@e955ca{s=COMPLETING}
19:40:36.084 [DEBUG] [rg.eclipse.jetty.io.ssl.SslConnection] - >flush SslConnection@bb350a::SocketChannelEndPoint@c86e52{fritz.box/192.168.180.1:443<->/192.168.180.32:37684,OPEN,fill=FI,flush=-,to=14049/0}{io=1/1,kio=1,kro=1}->SslConnection@bb350a{NOT_HANDSHAKING,eio=-1/-1,di=-1,fill=INTERESTED,flush=IDLE}~>DecryptedEndPoint@df8ec{fritz.box/192.168.180.1:443<->/192.168.180.32:37684,OPEN,fill=FI,flush=W,to=14107/0}=>HttpConnectionOverHTTP@52b9aa(l:/192.168.180.32:37684 <-> r:fritz.box/192.168.180.1:443,closed=false)=>HttpChannelOverHTTP@6d6967(exchange=HttpExchange@158af6d req=PENDING/null@null res=PENDING/null@null)[send=HttpSenderOverHTTP@1600012(req=HEADERS,snd=SENDING,failure=null)[HttpGenerator@e955ca{s=COMPLETING}],recv=HttpReceiverOverHTTP@59a189(rsp=IDLE,failure=null)[HttpParser{s=START,0 of -1}]]
19:40:36.104 [DEBUG] [rg.eclipse.jetty.io.ssl.SslConnection] - flush b[0]=HeapByteBuffer@138cd2b[p=0,l=174,c=4096,r=174]={<<<GET /webservices/...: fritz.box\r\n\r\n>>>v20190813\r\nHost: ...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00}
19:40:36.112 [DEBUG] [rg.eclipse.jetty.io.ssl.SslConnection] - flush b[1]=HeapByteBuffer@c48643[p=0,l=0,c=0,r=0]={<<<>>>}
19:40:36.120 [DEBUG] [rg.eclipse.jetty.io.ssl.SslConnection] - flush b[2]=HeapByteBuffer@1e6dbb4[p=0,l=0,c=0,r=0]={<<<>>>}
19:40:36.126 [DEBUG] [rg.eclipse.jetty.io.ssl.SslConnection] - flush NOT_HANDSHAKING
19:40:36.133 [DEBUG] [rg.eclipse.jetty.io.ssl.SslConnection] - wrap Status = OK HandshakeStatus = NOT_HANDSHAKING bytesConsumed = 174 bytesProduced = 203 [p=0,l=203,c=18432,r=203] ioDone=false/false
19:40:36.144 [DEBUG] [org.eclipse.jetty.io.ChannelEndPoint ] - flushed 203 SocketChannelEndPoint@c86e52{fritz.box/192.168.180.1:443<->/192.168.180.32:37684,OPEN,fill=FI,flush=-,to=14109/0}{io=1/1,kio=1,kro=1}->SslConnection@bb350a{NOT_HANDSHAKING,eio=-1/0,di=-1,fill=INTERESTED,flush=IDLE}~>DecryptedEndPoint@df8ec{fritz.box/192.168.180.1:443<->/192.168.180.32:37684,OPEN,fill=FI,flush=W,to=14167/0}=>HttpConnectionOverHTTP@52b9aa(l:/192.168.180.32:37684 <-> r:fritz.box/192.168.180.1:443,closed=false)=>HttpChannelOverHTTP@6d6967(exchange=HttpExchange@158af6d req=PENDING/null@null res=PENDING/null@null)[send=HttpSenderOverHTTP@1600012(req=HEADERS,snd=SENDING,failure=null)[HttpGenerator@e955ca{s=COMPLETING}],recv=HttpReceiverOverHTTP@59a189(rsp=IDLE,failure=null)[HttpParser{s=START,0 of -1}]]
19:40:36.155 [DEBUG] [rg.eclipse.jetty.io.ssl.SslConnection] - net flushed=true, ac=true
19:40:36.174 [DEBUG] [rg.eclipse.jetty.io.ssl.SslConnection] - <flush true SslConnection@bb350a::SocketChannelEndPoint@c86e52{fritz.box/192.168.180.1:443<->/192.168.180.32:37684,OPEN,fill=FI,flush=-,to=13/0}{io=1/1,kio=1,kro=1}->SslConnection@bb350a{NOT_HANDSHAKING,eio=-1/-1,di=-1,fill=INTERESTED,flush=IDLE}~>DecryptedEndPoint@df8ec{fritz.box/192.168.180.1:443<->/192.168.180.32:37684,OPEN,fill=FI,flush=W,to=14195/0}=>HttpConnectionOverHTTP@52b9aa(l:/192.168.180.32:37684 <-> r:fritz.box/192.168.180.1:443,closed=false)=>HttpChannelOverHTTP@6d6967(exchange=HttpExchange@158af6d req=PENDING/null@null res=PENDING/null@null)[send=HttpSenderOverHTTP@1600012(req=HEADERS,snd=SENDING,failure=null)[HttpGenerator@e955ca{s=COMPLETING}],recv=HttpReceiverOverHTTP@59a189(rsp=IDLE,failure=null)[HttpParser{s=START,0 of -1}]]
19:40:36.187 [DEBUG] [org.eclipse.jetty.io.WriteFlusher    ] - Flushed=true written=174 remaining=0 WriteFlusher@1c2e374{WRITING}->null
19:40:36.201 [DEBUG] [org.eclipse.jetty.client.HttpSender  ] - Generated headers (-1 bytes), chunk (-1 bytes), content (-1 bytes) - DONE/HttpGenerator@e955ca{s=END}
19:40:36.207 [DEBUG] [org.eclipse.jetty.client.HttpSender  ] - Request committed HttpRequest[GET /webservices/homeautoswitch.lua HTTP/1.1]@ebf2fb
19:40:36.215 [DEBUG] [org.eclipse.jetty.client.HttpSender  ] - Request success HttpRequest[GET /webservices/homeautoswitch.lua HTTP/1.1]@ebf2fb
19:40:36.222 [DEBUG] [org.eclipse.jetty.client.HttpExchange] - Terminated request for HttpExchange@158af6d req=TERMINATED/null@null res=PENDING/null@null, result: null
19:40:36.227 [DEBUG] [org.eclipse.jetty.client.HttpSender  ] - Terminating request HttpRequest[GET /webservices/homeautoswitch.lua HTTP/1.1]@ebf2fb
19:40:36.662 [DEBUG] [org.eclipse.jetty.io.ManagedSelector ] - Selector sun.nio.ch.EPollSelectorImpl@fcc61b woken up from select, 1/1/1 selected
19:40:36.669 [DEBUG] [org.eclipse.jetty.io.ManagedSelector ] - Selector sun.nio.ch.EPollSelectorImpl@fcc61b processing 1 keys, 0 updates
19:40:36.677 [DEBUG] [org.eclipse.jetty.io.ManagedSelector ] - selected 1 sun.nio.ch.SelectionKeyImpl@3884f3 SocketChannelEndPoint@c86e52{fritz.box/192.168.180.1:443<->/192.168.180.32:37684,OPEN,fill=FI,flush=-,to=519/0}{io=1/1,kio=1,kro=1}->SslConnection@bb350a{NOT_HANDSHAKING,eio=-1/-1,di=-1,fill=INTERESTED,flush=IDLE}~>DecryptedEndPoint@df8ec{fritz.box/192.168.180.1:443<->/192.168.180.32:37684,OPEN,fill=FI,flush=-,to=14700/0}=>HttpConnectionOverHTTP@52b9aa(l:/192.168.180.32:37684 <-> r:fritz.box/192.168.180.1:443,closed=false)=>HttpChannelOverHTTP@6d6967(exchange=HttpExchange@158af6d req=TERMINATED/null@null res=PENDING/null@null)[send=HttpSenderOverHTTP@1600012(req=QUEUED,snd=COMPLETED,failure=null)[HttpGenerator@e955ca{s=START}],recv=HttpReceiverOverHTTP@59a189(rsp=IDLE,failure=null)[HttpParser{s=START,0 of -1}]]

Yes, my plan is to have a stable system for years so I tried to avoid an SD card.
Should I reinstall openHAB on an SD card and use "Move root to USB"?
If so, can I take my InfluxDB database with me?

What do you mean?? That the rule reloads every minute?

No that’s a misunderstanding of yours. openHAB serializes all rule compilations and executions so while a rule is being (re)compiled, no event is processed.

While that’s a little better (less off the mainstream), the openHABian philosophy is different:
no SSD, stay on SD only, use ZRAM and, most importantly, take care of backups from day 1.
Read about the auto backup.

I mean there are rules like this in the specific rule file:

rule "Berechnung Niederschlag"
when
	Time cron "0 * * * * ?"   // every 1 minutes
then
	Weather_Rain_Last.postUpdate(OU_Garden_Rain_Gauge.lastUpdate("influxdb").toString())
end

The rule file is not reloading every minute.

No that’s a misunderstanding of yours. openHAB serializes all rule compilations and executions so while a rule is being (re)compiled, no event is processed.

Yes, I understand now, because a rule like the one above does not update in the meantime.

You get to see what you configured to see. You seem to have activated debug settings for all modules??
Throttle back. Start with the log defaults; then it's less than a message per second.
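For example, back in the console, something like:

openhab> log:set INFO ROOT                                      # anything above DEBUG; the shipped default is WARN, if I remember correctly
openhab> log:set DEFAULT org.eclipse.smarthome.model.script     # drop the extra logger once you are done debugging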

These are the log settings:

openhab> log:get
Logger                                             │ Level
───────────────────────────────────────────────────┼──────
ROOT                                               │ DEBUG
javax.jmdns                                        │ ERROR
javax.mail                                         │ ERROR
org.apache.karaf.jaas.modules.audit                │ INFO
org.apache.karaf.kar.internal.KarServiceImpl       │ ERROR
org.apache.karaf.shell.ssh.SshUtils                │ ERROR
org.apache.karaf.shell.support                     │ OFF
org.apache.sshd                                    │ WARN
org.eclipse.lsp4j                                  │ OFF
org.eclipse.smarthome                              │ INFO
org.eclipse.smarthome.model.script                 │ OFF
org.jupnp                                          │ ERROR
org.openhab                                        │ INFO
org.openhab.ui.paper                               │ WARN
org.openhab.ui.paper.internal                      │ INFO
org.ops4j.pax.url.mvn.internal.AetherBasedResolver │ ERROR
org.ops4j.pax.web.pax-web-runtime                  │ OFF
smarthome.event                                    │ INFO
smarthome.event.InboxUpdatedEvent                  │ ERROR
smarthome.event.ItemAddedEvent                     │ ERROR
smarthome.event.ItemRemovedEvent                   │ ERROR
smarthome.event.ItemStateEvent                     │ ERROR
smarthome.event.ThingAddedEvent                    │ ERROR
smarthome.event.ThingRemovedEvent                  │ ERROR
smarthome.event.ThingStatusInfoEvent               │ ERROR

Only ROOT is on DEBUG level. Is this the reason for the high logging level?
I only used

openhab> log:set DEBUG org.eclipse.smarthome.model.script

Yes. That’s not normal.
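Since the level survives a reboot, it is worth checking what is stored in the persistent logging config (standard openHAB 2 path assumed):

grep -i rootlogger /var/lib/openhab2/etc/org.ops4j.pax.logging.cfg

If that shows log4j2.rootLogger.level = DEBUG, set it back to WARN or INFO there (or via log:set) and restart openHAB.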

When I set ROOT to OFF, the debug logging is gone, but after a reboot everything is back to full debug logging.
All rules stopped working.
I guess everything is broken now and I have to reinstall everything.

How can I take InfluxDB with me to a new installation? Sorry, I am a Linux noob; it is very hard for me to find tutorials on this.

What is the best hardware setup? Just use an SD card?
Is an SSD a bad thing?
I have had no problems with this kind of setup for years, but I have no problem moving on.

Something weird with your install then. Probably best to reinstall, yes.
You can install on SD so your SSD remains operative (to copy data from, to look things up, to have a fallback).

Why? Just install from scratch, openHABian has an option to do that for you.
(If you want to keep your data I'm sure you can import it, too, but you would need to search for how.)
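If you do go digging, the rough shape of it with the InfluxDB 1.x that openHABian installs should be something like this (openhab_db is the openHABian default database name; adjust if yours differs, and do verify before relying on it):

influxd backup -portable -database openhab_db /tmp/influxdb-backup    # on the old system
influxd restore -portable /tmp/influxdb-backup                        # on the new system, with InfluxDB installed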

If you ask me, yes. I prefer to keep a proven setup, as simple and unmodified as possible, in particular as you say you're a noob.
Truth is, though, there is no single opinion on this. Others prefer an SSD, but IMHO you should be proficient in Linux to go there, as it has its pitfalls and you will mostly be on your own. Choose yours. Just don't dump the image to disk; use the regular openHABian menu options, such as "move root to USB", if you go with an SSD.


I have run an RPi 3B+ with an mSATA drive for ages and everything in the openHABian script menu just worked. Maybe I got lucky.

I am also running:
OH3 M1 (latest)
openHABian 1.6 64-bit
RPi 4 8 GB
16 GB SD card
I guess I can skip putting an SSD in it. Running 64-bit is not recommended and I am only testing it anyway.

@ingenieur89

You could do a backup of your system, then boot from the "new" SD card without the SSD and restore the backup (see the sketch below).
When running ZRAM it is important to shut down correctly. I have a USB-C power bank that can charge and deliver power at the same time, so it acts as a dumb UPS.
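A minimal sketch of that, using the built-in backup scripts (the file path is up to you):

sudo openhab-cli backup /home/openhabian/oh-backup.zip     # on the current system; backs up config and userdata
sudo openhab-cli restore /home/openhabian/oh-backup.zip    # on the fresh install, with openHAB stopped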


I reinstalled on a 32 GB microSD card and transferred my old items, things, sitemaps, rules, etc. files.
I put every rule in a separate file.
They still load very slowly and sometimes don't react at all.
For example, a rule like this is not triggered when the condition is met:

when
	Item OU_GardenShed_Twilight_Switch changed from OFF to ON
then
	logInfo("nightmode", "Nachtmodus wurde gestartet.")
end

Sometimes it will trigger some 30 minutes after the condition was met.
Also, I still get this:

[20:59:36] openhabian@openHABianDevice:~$ sudo /bin/systemctl status openhab2.service
Warning: The unit file, source configuration file or drop-ins of openhab2.service changed on disk. Run 'systemctl daemon-reload' to reload units.
● openhab2.service - openHAB2 instance, reachable at http://openHABianDevice:8080
   Loaded: loaded (/usr/lib/systemd/system/openhab2.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/openhab2.service.d
           └─override.conf
   Active: active (running) since Mon 2020-11-16 20:37:14 CET; 22min ago
     Docs: https://www.openhab.org/docs/
           https://community.openhab.org
  Process: 826 ExecStartPre=/bin/bash -c /usr/bin/find ${OPENHAB_CONF} -name "*.rules" -exec /usr/bin/rename.ul .rules .x {} \; (code=exited, status=0/SUCCESS)
  Process: 858 ExecStartPost=/bin/sleep 120 (code=exited, status=0/SUCCESS)
  Process: 1499 ExecStartPost=/bin/bash -c /usr/bin/find ${OPENHAB_CONF} -name "*.x" -exec /usr/bin/rename.ul .x .rules {} \; (code=exited, status=0/SUCCESS)
 Main PID: 857 (java)
    Tasks: 160 (limit: 4915)
   CGroup: /system.slice/openhab2.service
           └─857 /usr/bin/java -Dopenhab.home=/usr/share/openhab2 -Dopenhab.conf=/etc/openhab2 -Dopenhab.runtime=/usr/share/openhab2/runtime -Dopenhab.userdata=/var/lib/openhab2 -Dopenhab.logdir=/var/log/openhab2 -Dfelix.cm.dir=/var/lib/open

Nov 16 20:35:14 openHABianDevice systemd[1]: Starting openHAB2 instance, reachable at http://openHABianDevice:8080...
Nov 16 20:37:14 openHABianDevice systemd[1]: Started openHAB2 instance, reachable at http://openHABianDevice:8080.

It seems like the SSD was not part of the problem. It is disconnected now.

How? With the openHABian image?

No need for that.

I used the latest version of openHABian and strictly followed the installation steps.

What do you mean by don’t edit files in place?
You mean I can’t just open a file, change it and save it? How can I modify it then?

What I notice is that the update via openhabian-config stops here:

2020-11-22_18:16:42_CET [openHABian] Adding delay on loading openHAB rules... OK
2020-11-22_18:20:03_CET [openHABian] Adding an openHAB dashboard tile for 'openhabiandocs'... Replacing... OK
2020-11-22_18:22:05_CET [openHABian] Updating Linux package information... OK
2020-11-22_18:22:05_CET [openHABian] Updating repositories and upgrading installed packages...

Maybe something is wrong with my rules. This is a template for many of the rules that I have:
Weather_Humidity.rules

rule "Berechnung Luftfeuchte"

when
	Time cron "20 * * * * ?"   // every 1 minutes
then

	logInfo("humidity", "Luftfeuchtedaten wurden berechnet.")

	Weather_RelativeHumidity_Today_Max.postUpdate((OU_Backyard_RelativeHumidity.maximumSince(now.withTimeAtStartOfDay(), "influxdb").state as Number).doubleValue)
	Weather_RelativeHumidity_Today_Max_Time.postUpdate(Weather_RelativeHumidity_Today_Max.lastUpdate("mapdb").toString())
	
	Weather_RelativeHumidity_Today_Min.postUpdate((OU_Backyard_RelativeHumidity.minimumSince(now.withTimeAtStartOfDay(), "influxdb").state as Number).doubleValue)
	Weather_RelativeHumidity_Today_Min_Time.postUpdate(Weather_RelativeHumidity_Today_Min.lastUpdate("mapdb").toString())
	
	Weather_RelativeHumidity_Today_Ave.postUpdate(OU_Backyard_RelativeHumidity.averageSince(now.withTimeAtStartOfDay(), "influxdb"))
	
	Weather_RelativeHumidity_24h_Max.postUpdate((OU_Backyard_RelativeHumidity.maximumSince(now.withTimeAtStartOfDay(), "influxdb").state as Number).doubleValue)
	Weather_RelativeHumidity_24h_Max_Time.postUpdate(Weather_RelativeHumidity_24h_Max.lastUpdate("mapdb").toString())
	
	Weather_RelativeHumidity_24h_Min.postUpdate((OU_Backyard_RelativeHumidity.minimumSince(now.withTimeAtStartOfDay(), "influxdb").state as Number).doubleValue)
	Weather_RelativeHumidity_24h_Min_Time.postUpdate(Weather_RelativeHumidity_24h_Min.lastUpdate("mapdb").toString())
	
	Weather_RelativeHumidity_24h_Ave.postUpdate(OU_Backyard_RelativeHumidity.averageSince(now.withTimeAtStartOfDay(), "influxdb"))

	OU_Backyard_AbsoluteHumidity.postUpdate((6.112 * Math.exp((17.67 * (OU_Backyard_Temperature.state as Number).doubleValue) / (243.5 + (OU_Backyard_Temperature.state as Number).doubleValue)) * (OU_Backyard_RelativeHumidity.state as Number).doubleValue * 2.1674) / (273.15 + (OU_Backyard_Temperature.state as Number).doubleValue)) // g/m^3
	
end