Xtend performance example

I did an implementation in Jython; here is the worker code.

    while steps > 0:
        start_step = time.time()
        steps -= 1

        for key in fade:
            start_step1 = time.time()
            # Read the item's current HSB state and apply this cycle's increments.
            ist = HSBType(str(fade_items[key].state))
            h = float(str(ist.hue))        + fade[key]["step h"]
            s = float(str(ist.saturation)) + fade[key]["step s"]
            b = float(str(ist.brightness)) + fade[key]["step b"]
            logger.debug("{:15s} | duration1 ({:3.5f}ms)".format("DoFade", (time.time() - start_step1) * 1000))

            # Clamp each component to the valid 0..100 range.
            h = max(min(h, 100.0), 0.0)
            s = max(min(s, 100.0), 0.0)
            b = max(min(b, 100.0), 0.0)

            logger.debug("{:15s} | duration2 ({:3.5f}ms)".format("DoFade", (time.time() - start_step1) * 1000))
            BusEvent.sendCommand(fade_items[key], "{:.4f},{:.4f},{:.4f}".format(h, s, b))
            logger.debug("{:15s} | duration3 ({:3.5f}ms)".format("DoFade", (time.time() - start_step1) * 1000))

        logger.debug("{:15s} | duration ({:3.5f}ms)".format("DoFade", (time.time() - start_step) * 1000))

        # Sleep for whatever is left of the 50 ms cycle.
        duration = 0.050 - (time.time() - start_step)
        logger.debug("{:15s} | sleep ({:3.5f}ms)".format("DoFade", duration * 1000))
        if duration > 0.0:
            time.sleep(duration)
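The two reusable pieces of the loop above can be isolated as plain functions: clamping an HSB component to its valid percentage range, and computing how long to sleep so each cycle takes a fixed 50 ms. This is a minimal standalone sketch; the names `clamp` and `remaining_sleep` are my own and not part of the original rule.

```python
import time

def clamp(value, lo=0.0, hi=100.0):
    """Limit an HSB component to the valid percentage range [0, 100]."""
    return max(lo, min(hi, value))

def remaining_sleep(cycle_start, cycle_len=0.050):
    """Seconds left in the current fixed-length cycle; never negative."""
    return max(0.0, cycle_len - (time.time() - cycle_start))
```

Sleeping only for the *remainder* of the cycle (rather than a fixed 50 ms) keeps the fade rate steady even when `sendCommand` eats most of the budget.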

It turns out the main part of the delay comes from posting a command to an item (the difference between duration2 and duration3). The Jython code itself is negligibly fast.

    2016-03-21 13:41:31.266 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration1 (2.00009ms)
    2016-03-21 13:41:31.269 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration2 (5.00011ms)
    2016-03-21 13:41:31.297 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration3 (32.00006ms)
    2016-03-21 13:41:31.301 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration1 (0.99993ms)
    2016-03-21 13:41:31.303 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration2 (3.99995ms)
    2016-03-21 13:41:31.327 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration3 (27.99988ms)
    2016-03-21 13:41:31.331 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration1 (2.00009ms)
    2016-03-21 13:41:31.334 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration2 (3.99995ms)
    2016-03-21 13:41:31.353 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration3 (23.99993ms)
    2016-03-21 13:41:31.357 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration1 (2.00009ms)
    2016-03-21 13:41:31.360 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration2 (3.99995ms)
    2016-03-21 13:41:31.379 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration3 (23.00000ms)
    2016-03-21 13:41:31.383 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration1 (0.99993ms)
    2016-03-21 13:41:31.386 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration2 (3.99995ms)
    2016-03-21 13:41:31.410 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration3 (27.99988ms)
    2016-03-21 13:41:31.414 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration1 (1.00017ms)
    2016-03-21 13:41:31.416 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration2 (3.99995ms)
    2016-03-21 13:41:31.444 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration3 (32.00006ms)
    2016-03-21 13:41:31.447 [DEBUG] [penhab.model.jsr223.Wohnzimmer] - DoFade          | duration (182.00016ms)

Still, it is worth noting that the Jython implementation runs about ten times faster than the Xtend one (~4 ms versus ~40 ms) while doing the same thing.
Unfortunately I did not reach my goal of 50 ms per cycle, so I will have to add some logic to avoid sending unnecessary item commands.
Performance-wise I am happy nonetheless: I save roughly five minutes of lag on first-time rule loading, and the load on the Pi seems much lower (the response time of other rules went down, though I did not measure that).
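One simple way to avoid unnecessary item commands would be to cache the last command string sent per item and skip the bus call when nothing changed. This is only a hedged sketch of that idea: `send_if_changed`, the `_last_sent` cache, and the `send` callback (a stand-in for `BusEvent.sendCommand`) are all illustrative names, not part of the rule above.

```python
# Cache of the last command string sent per item (illustrative helper,
# not part of the original rule).
_last_sent = {}

def send_if_changed(item_name, h, s, b, send):
    """Send an HSB command only if it differs from the last one sent.

    `send` stands in for the real bus call (e.g. BusEvent.sendCommand).
    Returns True if a command was sent, False if it was skipped.
    """
    command = "{:.4f},{:.4f},{:.4f}".format(h, s, b)
    if _last_sent.get(item_name) == command:
        return False  # unchanged since the last cycle; skip the bus call
    _last_sent[item_name] = command
    send(item_name, command)
    return True
```

Because the command string is rounded to four decimals, tiny sub-rounding fade steps also collapse into a single command, which is exactly the redundant traffic the post wants to eliminate.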

Maybe I’ll go for six parallel worker threads, as you suggested. We’ll see! Thank you for your ideas and contributions! :thumbsup: