Steinhart-Hart calculation for NTC

I was curious enough to run this in a Rule

    var double Resistance = 0.0
    var double steinhart = 0.0
    var Number analog1 = 600.0                                   // raw 10-bit ADC reading
    Resistance = ((1023.0 / analog1.doubleValue) - 1.0).doubleValue
    Resistance = (6800.0 / Resistance.doubleValue).doubleValue   // NTC resistance via the 6800 ohm series resistor
    steinhart = (Resistance.doubleValue / 5000.0).doubleValue    // R/R0
    steinhart = Math::log(steinhart.doubleValue)                 // ln(R/R0)
    steinhart = (steinhart / 3470.0).doubleValue                 // divide by the B coefficient
    steinhart = (steinhart.doubleValue + (1.0 / (25.0 + 273.15))).doubleValue   // add 1/T0 in kelvin
    steinhart = (1.0 / steinhart.doubleValue).doubleValue        // invert to get T in kelvin
    steinhart = (steinhart.floatValue - 273.15).doubleValue      // kelvin to celsius (note the stray .floatValue, which drops to float precision)
    logInfo("test", " result " + steinhart.toString)

result displayed 9.067224121093773
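
For reference, those steps are the simplified B-parameter form of the Steinhart-Hart equation, with the constants read straight out of the code (6800 ohm divider resistor, 10-bit ADC, R0 = 5000 ohm, B = 3470, T0 = 25 °C):

    R    = 6800 / (1023/adc - 1)                    // NTC resistance in ohms
    1/T  = 1/(25 + 273.15) + ln(R/5000) / 3470      // T in kelvin
    temp = T - 273.15                               // back to celsius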

Keeping just the one double conversion to invoke the log function:

    var Number Resistance = 0.0
    var Number steinhart = 0.0
    var Number analog1 = 600.0
    Resistance = (1023.0 / analog1) - 1.0
    Resistance = 6800.0 / Resistance
    steinhart = Resistance / 5000.0
    steinhart = Math::log(steinhart.doubleValue)   // the only place a double is needed
    steinhart = steinhart / 3470.0
    steinhart = steinhart + (1.0 / (25.0 + 273.15))
    steinhart = 1.0 / steinhart
    steinhart = steinhart - 273.15
    logInfo("test", " result " + steinhart.toString)

result 9.06721130

Either is close enough to the hand workings, I’m sure.
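
If anyone wants it compact, the whole thing collapses to a single expression. An untested sketch with the same constants:

    var Number analog1 = 600.0
    val tempC = 1.0 / (Math::log((6800.0 / (1023.0 / analog1.doubleValue - 1.0)) / 5000.0) / 3470.0 + 1.0 / (25.0 + 273.15)) - 273.15
    logInfo("test", " result " + tempC)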

Back guys.

I found the big difference between the Excel and openHAB calculations.

in Excel: Log(1.9291)
result = 0.285354

in openHAB:

steinhart = log(1.9291)
result = 0.65704

Why??

There are logs and logs;
0.65704 is the natural log, log e, aka ln
0.285354 is the base 10 log

Which does your calculation call for?

EDIT - oops, had those the wrong way round at first, fixed
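
If you ever need one from the other, they differ only by a constant factor:

    log10(x) = ln(x) / ln(10)    // 0.65704 / 2.302585 = 0.285354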

The problem with double is that any decimal value is just an approximation! Look into it!
When receiving information from sensors in openHAB, depending on the binding you will see strange values with undefined floating-point precision, making double calculations useless.
So no, my statement is not misleading at all.
It’s how Java works!
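
Try it in a rule and you will see the classic case (quick sketch):

    var double a = 0.1
    var double b = 0.2
    logInfo("test", "double sum:     " + (a + b))   // typically prints 0.30000000000000004
    logInfo("test", "BigDecimal sum: " + new java.math.BigDecimal("0.1").add(new java.math.BigDecimal("0.2")))   // exactly 0.3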

You’re right, rossko57.
Does anyone know the natural log function for openHAB?

Math::log(x) is base 10.

Back.

I found this useful site: https://docs.oracle.com/javase/7/docs/api/java/lang/Math.html

log(double a) -> Returns the natural logarithm (base e) of a double value.
log10(double a) -> Returns the base 10 logarithm of a double value.

I just double checked with Excel :slight_smile: :slight_smile: :slight_smile:
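
A quick rule shows both (sketch, using the same value as above):

    logInfo("test", "Math::log(1.9291)   = " + Math::log(1.9291))     // natural log, ~0.65704
    logInfo("test", "Math::log10(1.9291) = " + Math::log10(1.9291))   // base 10, ~0.285354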

I’m going to do some field tests.

Did you look at the compared results with/without double? They are indeed different but not significantly different. There are not many sensors reading to 0.0001% resolution.
People get the hump when an expected 25.0 is shown as 24.99999999999, but that is a matter of (poor) presentation, not accuracy.
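
That sort of thing is a one-liner to fix at display time, e.g. a sketch:

    logInfo("test", String::format("%.1f", 24.99999999999))   // prints 25.0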

On the practical side, there is in any case no log function available to OH Rules that works with BigDecimal, so far as I know.

As you have found, Math::log(x) is the natural log, base e.
Are you sure you want log10 in this equation? The usual presentation of Steinhart–Hart uses log e.
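
In other words, the pattern in the second block above is about as good as it gets: stay in Number (BigDecimal) and drop to double only for the log call. A minimal sketch:

    var Number ratio = 1.9291
    var Number lnValue = Math::log(ratio.doubleValue)   // only this step runs as double
    logInfo("test", "ln = " + lnValue)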

You are right, in this case the difference is irrelevant, but when iterative calculations are done the error increases exponentially when using double for floating-point values of no defined precision, e.g. the Colebrook–White equation (been there, done that :wink:)!

So as best practice, I would stick to BigDecimal for all such values. The BigDecimal class handles rounding in a very specific way, giving the programmer the option to choose how and when to round.
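
For illustration, a minimal sketch of that control (plain java.math API, callable from a rule):

    val one   = new java.math.BigDecimal("1")
    val three = new java.math.BigDecimal("3")
    // choose the scale and rounding mode explicitly
    logInfo("test", "" + one.divide(three, 10, java.math.RoundingMode::HALF_UP))   // 0.3333333333
    // one.divide(three) with no rounding mode would throw ArithmeticException,
    // because 1/3 has no exact decimal representation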

BR,

George

The usual presentation of Steinhart–Hart is log e. :slight_smile:


    steinhart = Math::log(steinhart.doubleValue)

George,

People have been doing complex calculations for years using IEEE double floats, much more complex and, more importantly, much more ill-conditioned than the Colebrook–White equation. Check out LINPACK, for example, which goes back to the 1970s and FORTRAN 66, as I recall.

What you suggest is hardly “best practice” or, for that matter, common practice. Being able to control the rounding of BigDecimal is certainly interesting, but hardly required for most engineering/control applications. The computational cost of using BigDecimal can be a burden for embedded systems, both in terms of available CPU cycles and the associated power load.

The world has been running just fine on 4- and 8-bit microcontrollers for decades.


Dear Jeff,

We definitely agree to disagree :wink:!
I do agree on one thing: history is what makes the present and possible future!
I will not start an argument about the cumbersomeness of double subtraction and the many other limitations when dealing with floating-point values of undefined precision!
I agree that you can solve anything with the methods you specified, but only with a custom approach to each and every process, not in a general way applicable to any process!
The creators of the BigDecimal class are, as you say, clueless! They have been wasting valuable resources for nothing!

Respectfully,
George