OH2: ideas to help advanced users create config files

That’s funny! For information, I just discovered that a few console commands already exist, like:
smarthome:things
smarthome:items

Ok, for things it is implemented by the class ThingConsoleCommandExtension, and the output is produced by the method printThings. We could imagine adding an optional format parameter in order to output config DSL or JSON.

This reply from @Kai to another post tonight got me thinking that we can already hand-code what is contained in the mapDB database if we want to, i.e. using a *.things file. In fact, when I went and looked at my system, I remembered that I already have 4 things set up this way.

Just need to work out how and where to code the channel links.

@MikeD : that is exactly what I have done. Even if my mapDB database is corrupted, I can delete it and I lose (almost) nothing. You can define channel links in your items file.
But I had to define all this manually with the help of bindings documentation. What I propose is just something that would help us to produce the config files.
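For readers who have not tried this, a minimal hand-written pair of files might look like the following. The thing type, IDs and channel name are illustrative (taken from the ESH Yahoo Weather demo); check your binding’s documentation for the real ones:

```
// demo.things — the Thing defined as text instead of in mapDB
Thing yahooweather:weather:berlin [ location=638242 ]

// demo.items — the channel link lives in the Item's binding config
Number Temperature_Berlin "Temperature [%.1f °C]" { channel="yahooweather:weather:berlin:temperature" }
```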

What I can see now is that it could be done easily just by slightly enhancing the existing console commands. You can already list items and things through console commands; a specific output format is used. I propose to add another format, in particular the DSL format.

After that, you can simply do copy/paste from the console to your config files.

As this breaks nothing and only adds a new output format for the console commands, I propose to implement this enhancement.

That would work for me!

@Kai: please give your agreement before I spend time on that :wink:

Ok, what I propose is to change the following console commands:

smarthome:items list [<pattern>]
smarthome:things list

to

smarthome:items list [<format>] [<pattern>]
smarthome:things list [<format>] [<thingUID>]

The format is optional. If not set, the current output will be used. If set to “DSL”, the command will produce the format accepted in config files.
As an additional, optional feature, it will allow displaying only a specific thing rather than all things.
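To make the argument handling concrete, here is a minimal sketch in Java of how the optional format argument could be parsed. The enum and method names below are mine for illustration, not the actual ThingConsoleCommandExtension API:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: parse an optional format argument for the "list"
// subcommand, falling back to the current output when absent or unknown.
public class ListFormat {
    enum Format { DEFAULT, DSL }

    static Format parse(List<String> args) {
        if (!args.isEmpty() && args.get(0).equalsIgnoreCase("DSL")) {
            return Format.DSL;
        }
        return Format.DEFAULT; // unchanged behaviour for existing users
    }

    public static void main(String[] args) {
        System.out.println(parse(Arrays.asList("DSL", "zwave:*"))); // DSL
        System.out.println(parse(Arrays.asList("zwave:*")));        // DEFAULT
    }
}
```

Treating an unknown first argument as a pattern rather than an error keeps the existing `smarthome:items list [<pattern>]` invocations working unchanged.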

Where can I find the full syntax of the DSL for things and items?

It looks like the syntax is defined by thing.xtext and items.xtext.
I still have to find out how the syntax is validated, something that is certainly done by the Designer.

Hi all,

Sorry to join the party so late.
This is a topic that has been discussed multiple times already, just to give you some links:

So here the discussion is a bit reduced in scope: what you are more or less asking for is a human-readable/modifiable version of the content of MapDB, right? So yes, an export/import mechanism might be the easiest solution for that.

Some remarks:

  • It isn’t trivial to implement a database, so I would not try to replace MapDB with something of your own. Rather, do export/import only.
  • Alternatively, any other database/storage mechanism can be hooked in, since ESH nicely abstracts this layer
  • The database storage must in all cases be independent of DSLs/Xtext. Note that it was introduced to give systems a chance to operate completely without any Xtext stuff, as Xtext is quite heavyweight for small embedded systems. What is worse is that it brings a lot of complexity to the runtime. I spent 3 full days last week getting the circular dependency issue fixed, which was to a big part due to Xtext using Guice. As I seem to be the only person on the planet able to analyze and fix such issues, you do not want to bet on this, for your own safety!
  • As the internal format of Things in MapDB is indeed JSON, JSON would be the most natural fit for an export/import format.
  • I agree that it would nonetheless be nice to be able to export directly in the DSL format as well, which is handy for all users who want to further maintain it as a text file themselves. If you want to implement this, it has to be optional (see above) and you should avoid duplicating code. The Thing DSL, e.g., still sometimes evolves, and changing the format should not require code changes in very different places. So serialization to the DSL format must be done through Xtext itself, which supports this.

Just my 2 cents :slight_smile:
Kai


I will take a look into Xtext serialization. It is good news that it exists; the bad news is that I will have to fully understand the Xtext stuff.
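From a first look, Xtext’s serializer (ISerializer) seems to be the entry point. In pseudocode, the export path might be roughly like this (API usage unverified, and the model construction is the part that still has to be written):

```
// obtain the serializer from the Thing DSL's Guice injector
serializer = thingDslInjector.getInstance(ISerializer)

// map the managed Thing from the registry onto the DSL's model objects
model = buildDslModel(thing)

// let Xtext render the model back to text — paste-ready .things content
print(serializer.serialize(model))
```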

I’ve implemented the json storage idea I mentioned above - it seems to work well. I’ve implemented it as separate files per data table, so we get one for items, things, links, inbox…

The things file looks like this -:

{
  "zwave:device:155e57cafb9:node2": {
    "class": "org.eclipse.smarthome.core.thing.internal.ThingImpl",
    "value": {
      "label": "Lounge Lights (Node 2)",
      "bridgeUID": {
        "segments": [
          "zwave",
          "serial_zstick",
          "155e57cafb9"
        ]
      },
      "channels": [
        {
          "acceptedItemType": "Dimmer",
          "uid": {
            "segments": [
              "zwave",
              "device",
              "155e57cafb9",
              "node2",
              "switch_dimmer"
            ]
          },
          "channelTypeUID": {
            "segments": [
              "zwave",
              "switch_dimmer"
            ]
          },
          "label": "Dimmer",
          "configuration": {
            "properties": {}
          },
          "properties": {
            "binding:Command:OnOffType": "SWITCH_MULTILEVEL,BASIC",
            "binding:*:PercentType": "SWITCH_MULTILEVEL,BASIC"
          },
          "defaultTags": []
        }
      ],
      "configuration": {
        "properties": {
          "config_30_1": 3,
          "config_11_1": 1,
          "config_10_1": 1,
          "group_1": [],
          "group_3": [
            "node_1_0"
          ],
          "group_2": [],
          "switchall_mode": "SWITCH_ALL_INCLUDE_ON_OFF",
          "action_reinit": "-232323",
          "config_16_1": 1,
          "config_17_1": 0,
          "config_18_1": 0,
          "config_39_2": 600,
          "config_12_1": 99,
          "config_19_1": 0,
          "config_13_1": 1,
          "config_14_1": 0,
          "config_15_1": 1,
          "config_20_1": 110,
          "action_failed": "-232323",
          "powerlevel_level": 0,
          "action_remove": "-232323",
          "binding_pollperiod": "1800",
          "action_heal": "-232323",
          "config_1_1": -1,
          "config_7_1": 1,
          "config_8_1": 1,
          "config_9_1": 5,
          "powerlevel_timeout": 0,
          "config_6_1": 0
        }
      },
      "properties": {
        "zwave_class_basic": "ROUTING_SLAVE",
        "zwave_class_generic": "MULTILEVEL_SWITCH",
        "zwave_deviceid": "262",
        "zwave_frequent": "false",
        "zwave_nodeid": "2",
        "zwave_version": "1.6",
        "zwave_listening": "true",
        "zwave_routing": "true",
        "zwave_beaming": "true",
        "zwave_class_specific": "POWER_SWITCH_MULTILEVEL",
        "zwave_manufacturer": "271",
        "zwave_devicetype": "256",
        "zwave_neighbours": "4,6,7,8,9,12,14"
      },
      "uid": {
        "segments": [
          "zwave",
          "device",
          "155e57cafb9",
          "node2"
        ]
      },
      "thingTypeUID": {
        "segments": [
          "zwave",
          "fibaro_fgd211_00_000"
        ]
      }
    }
  }
}

and items…

{
  "zwave_serial_zstick_155e57cafb9_serial_sof": {
    "class": "org.eclipse.smarthome.core.items.ManagedItemProvider$PersistedItem",
    "value": {
      "groupNames": [],
      "itemType": "Number",
      "tags": [],
      "label": "Start Frames",
      "category": "Temperature"
    }
  }
}

I want to add auto backups, but as a proof of concept, it works well…

I am very concerned about performance with that approach.
How well does it work, if you have 2000 items and you sequentially add a tag to 50 of them?
I guess you will need to implement a delayed-write mechanism for this and consistency then can become a problem as well - not in a proof-of-concept, but in real life afterwards.
After all, you will have to implement all features of any decent db, which isn’t something that is quickly done.

Wouldn’t it be better to try the import/export approach and just use json as a backup format?

I’ll have a look at the performance. At the moment I’ve only added a small number of devices back into my system after losing the mapdb database.

I’m not sure that I would need to “implement all features of any decent db” - this is a text-based system, so errors can be corrected relatively easily. I’ve used this approach with HABmin data (saving to XML) for the past few years and not had a problem. Sure - this is slightly more complex, but not so much really!

What I personally would like to avoid is having to manually export and import data, as I think this is prone to error. I also don’t want to rely on mapdb, as it’s clearly not reliable: many people have had to delete their databases over the past week, and it will take quite some time to add everything back.

I also don’t want to use the DSL for defining my configuration, as that means I need to use a text editor instead of the GUI, and this isn’t how I personally want to run my system on a day-to-day basis. I like to have the ability to edit the data in a text editor, but not exclusively.

A question for you - if we configure a thing/channel etc, where is that data now stored if the channel is not managed? Is it stored in mapdb, or will I get an error about configuring an unmanaged device?


Yes, there will be an error, this is what the discussion I linked above is about.
DSL-defined things are read-only and cannot be changed through the UI. So for anyone wanting to use UIs to edit/configure their system, DSLs should not be used (at least for the parts that should be maintained through UI, you can also mix both approaches).

Ok, that’s what I thought. This means that configuring ZWave devices is very problematic. Currently, I don’t try to send configuration to the device when the thing initialises (this would be a mess for many reasons). I’m not sure if the thing model detects changes to the configuration in the model and calls the configuration-updated method (I guess not?). This is one of the main drivers for me to ‘need’ to use the UI.

Note that the configuration was only meant to configure the handler so that it can work, not the device. Many of your issues seem to be related to that mismatch. I wonder if we need to introduce something more specific about “remote device management”.

Hmmm - ok, I hadn’t considered that the configuration wasn’t intended to allow configuration of the device, as that is how it has always been discussed in the past. If this needs to be considered separately, then that’s fine, but I think this is something that clearly should be managed within the system.

I’m not completely sure what would change - maybe we could add an attribute to the configuration description to indicate that a parameter is dynamic/transient. The other alternative I guess if you don’t want to use device configuration in this way is to have a separate configuration description, but that seems way over the top to me…

So, straight out of the box, a test with 2000 inserts takes 15.8 seconds on my Mac with the Json storage. This is using the “pretty print” option to format the json in a nicely formatted way for easy reading - if we don’t use this option, the time comes down by around 10% (13 to 14 seconds).

Exactly the same test on the MapDB takes 8.2 seconds. So, yes, it is certainly slower, but not so much as I might have thought.

Adding a 100ms delayed write means the json storage implementation runs in around 350ms. This is implemented as a retriggerable timer that will write the json file if there are no transactions in 100ms. This test was done without waiting for the 100ms - so the test was run with the timer calls in place, but then instead of waiting for the 100ms timer to time out, the test calls the commit method manually. If you think this is cheating, add 100ms and let’s call it 500ms total - still over 15 times FASTER than MapDB.
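For reference, the retriggerable timer can be sketched in a few lines with a ScheduledExecutorService. This is a simplified illustration of the debounce idea, not the code from the actual implementation:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Simplified sketch of the delayed-write idea: every mutation re-arms a
// short timer, so a burst of updates collapses into a single commit.
public class DebouncedCommit {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final long delayMs;
    private ScheduledFuture<?> pending;
    final AtomicInteger commitCount = new AtomicInteger();

    DebouncedCommit(long delayMs) {
        this.delayMs = delayMs;
    }

    // Called on every put/update: cancel any pending flush and reschedule.
    synchronized void markDirty() {
        if (pending != null) {
            pending.cancel(false);
        }
        pending = scheduler.schedule(this::commit, delayMs, TimeUnit.MILLISECONDS);
    }

    // In the real storage this would serialize the whole table to JSON once.
    synchronized void commit() {
        commitCount.incrementAndGet();
    }

    void shutdown() {
        scheduler.shutdown();
    }

    public static void main(String[] args) throws InterruptedException {
        DebouncedCommit store = new DebouncedCommit(100);
        for (int i = 0; i < 2000; i++) {
            store.markDirty(); // 2000 rapid-fire updates
        }
        Thread.sleep(300); // let the quiet period elapse
        System.out.println("commits: " + store.commitCount.get());
        store.shutdown();
    }
}
```

The trade-off Kai raises is visible here: anything marked dirty but not yet flushed is lost on a crash, so a real implementation also needs a flush on shutdown.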

A similar test where the items are already added to the database, and I then update them with a new label, takes 7.8 seconds with MapDB, while Json takes around 160ms.

So, not too shabby on the performance front. I would guess that the poor performance of MapDB might be related to the internal use of json, so every single write and update gets implemented with a call to gson, where the json implementation with the deferred write only makes that call once (although obviously with a larger data set).

EDIT
A few more stats. I ran a test with 20,000 records (just in case anyone is crazy enough to have such a system! but also to get more benchmarks). mapdb took around 80 seconds (so 10x more than 2,000 records) and json, with delayed write, took less than 1 second (so maybe 5x more than 2,000 records).

For filesize, json is around 380kB for the 2,000 record test (or 550kB with pretty printing turned on). mapdb seems to be around the same (570kb), but after the add and update, it’s doubled in size which might be the result of caching.

All in all, I’d say mapdb isn’t that great. Maybe its performance can be improved somehow, but even if it is made 10x faster, the json implementation will still perform just as well so I really don’t see any performance killers with this method.

The only question I have at the moment is if integrity is an issue - time will tell, but I currently don’t believe it will be… The major advantage for me is the ability to easily backup and edit the files in a text editor, or even better, a json editor.

If this is of interest I can make a PR for further comment.


It would be cool to provide the choice between mapDB and JSON through a setting.
I would myself choose JSON, as it has better performance and is a readable format.


Note that it could be easier to export to other formats from JSON.

Looks interesting: a readable format for all user-generated data.

I’ve created a PR if anyone wants to comment. I’m sure it needs further work, but seems to work ok so far.