[JDBC-MySQL] Messed up item mappings after MySQL server migration

Hi there,

I made some changes to my local server and switched to ESXi with VMs.
Before this I made a backup of my MySQL database and restored that backup afterwards.
It seems the backup/restore got the datatypes wrong:
31 of my tables now have the datatype DOUBLE.
Is there any information on which datatypes are correct?

The problem is that openHAB can't store values into tables with the datatype DOUBLE.
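To see which of the ItemX tables ended up with which column type after the restore, you can ask MySQL's `information_schema` directly. A minimal sketch; `openhab` is an assumed schema name, replace it with whatever your database is actually called:

```sql
-- Sketch: list the Value column type of every ItemX table.
-- 'openhab' is an assumed schema name; adjust to your setup.
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'openhab'
  AND TABLE_NAME LIKE 'Item%'
  AND COLUMN_NAME = 'Value';
```

Comparing that list against your item types should show exactly which 31 tables came back wrong.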

2019-05-24 11:36:20.246 [ERROR] [org.knowm.yank.Yank                 ] - Error in SQL query!!!

java.sql.SQLException: Data truncation: Incorrect datetime value: '24.4' for column 'Value' at row 1 Query: INSERT INTO Item4 (TIME, VALUE) VALUES( NOW(3), ? ) ON DUPLICATE KEY UPDATE VALUE= ? Parameters: [24.4, 24.4]
	at org.apache.commons.dbutils.AbstractQueryRunner.rethrow(AbstractQueryRunner.java:392) ~[252:org.openhab.persistence.jdbc:1.13.0]
	at org.apache.commons.dbutils.QueryRunner.update(QueryRunner.java:491) ~[252:org.openhab.persistence.jdbc:1.13.0]
	at org.apache.commons.dbutils.QueryRunner.update(QueryRunner.java:457) ~[252:org.openhab.persistence.jdbc:1.13.0]
	at org.knowm.yank.Yank.execute(Yank.java:194) [252:org.openhab.persistence.jdbc:1.13.0]
	at org.knowm.yank.Yank.execute(Yank.java:177) [252:org.openhab.persistence.jdbc:1.13.0]
	at org.openhab.persistence.jdbc.db.JdbcBaseDAO.doStoreItemValue(JdbcBaseDAO.java:332) [252:org.openhab.persistence.jdbc:1.13.0]
	at org.openhab.persistence.jdbc.internal.JdbcMapper.storeItemValue(JdbcMapper.java:141) [252:org.openhab.persistence.jdbc:1.13.0]
	at org.openhab.persistence.jdbc.internal.JdbcPersistenceService.store(JdbcPersistenceService.java:133) [252:org.openhab.persistence.jdbc:1.13.0]
	at org.openhab.core.persistence.internal.PersistenceServiceDelegate.store(PersistenceServiceDelegate.java:55) [197:org.openhab.core.compat1x:2.4.0]
	at org.eclipse.smarthome.core.persistence.internal.PersistenceManagerImpl.handleStateEvent(PersistenceManagerImpl.java:148) [106:org.eclipse.smarthome.core.persistence:0.10.0.oh240]
	at org.eclipse.smarthome.core.persistence.internal.PersistenceManagerImpl.stateUpdated(PersistenceManagerImpl.java:469) [106:org.eclipse.smarthome.core.persistence:0.10.0.oh240]
	at org.eclipse.smarthome.core.items.GenericItem$1.run(GenericItem.java:249) [102:org.eclipse.smarthome.core:0.10.0.oh240]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:?]
	at java.lang.Thread.run(Thread.java:748) [?:?]

It seems like openHAB messed up the mappings between Items and their tables…
Item4 should hold datetime strings like 2018-12-12 19:57:16, but openHAB tries to store a temperature value into this table.

How can I fix these messed-up mappings without losing the stored data?
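As far as I know, the JDBC persistence service keeps its item-to-table index in a separate table (by default named `Items`; that name and the column names below are assumptions, check your `jdbc.cfg` and the actual schema if it was renamed). Inspecting it shows whether the mapping itself got shuffled:

```sql
-- Sketch: inspect the item-to-table index kept by the JDBC service.
-- 'Items' is the assumed default index table; ItemId/ItemName are
-- assumed column names -- verify with DESCRIBE Items first.
SELECT ItemId, ItemName FROM Items ORDER BY ItemId;
```

If the row with ItemId 4 does not name the DateTime Item you expect, the index got corrupted by the restore rather than the individual tables.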

  • Platform information:
    • Hardware: Raspberry Pi 3B
    • OS: raspbian 9
    • Java Runtime Environment:
openjdk version "1.8.0_152"
        OpenJDK Runtime Environment (Zulu Embedded 8.25.0.76-linux-aarch32hf)
        OpenJDK Client VM (Zulu Embedded 8.25.0.76-linux-aarch32hf)
    • openHAB version: 2.3.0-1 (Release Build)

This feels a bit like an openHAB Item type change. If persistence creates the MySQL tables and you later edit or delete an Item, then create a new Item with the same name but a different type, storing will fail.

I can't understand how this affects my runtime after the backup/restore process but not before.
openHAB didn't change, as it's running on a Raspberry Pi 3.
Only MySQL changed: it used to run on Server 2016 directly on the hardware, and now runs on Server 2016 in a VM on ESXi.

I had a look at the backup, and it seems the backup process failed at some point and corrupted things.

It seems the only way to clean up this mess is to clear the tables and let openHAB start again.
Maybe it's a good time to switch from ItemX table names to real item names as table names.
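For the record, the JDBC persistence has a setting for exactly that. A minimal `services/jdbc.cfg` sketch; the connection values are placeholders, and you should double-check the `tableUseRealItemNames` option against the docs for your version:

```
# services/jdbc.cfg -- sketch with placeholder connection values
url=jdbc:mysql://<host>:3306/<database>
user=<user>
password=<password>
# Use the Item name instead of ItemX as the table name
tableUseRealItemNames=true
```

Note that, as I understand it, this only affects tables created from then on; existing ItemX tables are not renamed automatically.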

kind regards
Michael

I couldn't say, but it's easy to check whether ItemXX is type blah and the MySQL table is type bleh. I think you only have to delete the offending table.
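That check-then-delete approach could look like this. A sketch only, and note the caveat in the comment: dropping the table does lose the rows stored in it:

```sql
-- Check the actual column types of the suspect table ...
DESCRIBE Item4;
-- ... and if the Value type does not match the Item type, drop just
-- that table; openHAB recreates it with the correct type on the next
-- store. The rows in this one table are lost.
DROP TABLE Item4;
```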

EDIT - thinking on it a bit: if the ItemXX table names are auto-generated and the OH Items do not get fielded in the same order …