[21:22:55] root@openhab:/home/openhabian# curl -v --insecure http://localhost:8086/query --data-urlencode "q=CREATE USER admin WITH PASSWORD '123' WITH ALL PRIVILEGES"
* Expire in 0 ms for 6 (transfer 0x2037880)
* [~50 repeated "Expire in … ms for 1 (transfer 0x2037880)" curl debug lines trimmed]
* Trying ::1...
* TCP_NODELAY set
* Expire in 149998 ms for 3 (transfer 0x2037880)
* Expire in 200 ms for 4 (transfer 0x2037880)
* connect to ::1 port 8086 failed: Connection refused
* Trying 127.0.0.1...
* TCP_NODELAY set
* Expire in 149998 ms for 3 (transfer 0x2037880)
* Connected to localhost (127.0.0.1) port 8086 (#0)
> POST /query HTTP/1.1
> Host: localhost:8086
> User-Agent: curl/7.64.0
> Accept: */*
> Content-Length: 79
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 79 out of 79 bytes
< HTTP/1.1 401 Unauthorized
< Content-Type: application/json
< Request-Id: 208c2a64-4c43-11ea-9640-b827ebe34847
< Www-Authenticate: Basic realm="InfluxDB"
< X-Influxdb-Build: OSS
< X-Influxdb-Version: 1.7.9
< X-Request-Id: 208c2a64-4c43-11ea-9640-b827ebe34847
< Date: Mon, 10 Feb 2020 20:22:58 GMT
< Content-Length: 55
<
{"error":"unable to parse authentication credentials"}
* Connection #0 to host localhost left intact
I think I can explain some of what happened, but I cannot be 100% sure. Up above you removed Grafana, but not InfluxDB. Yet you are now trying to reinstall InfluxDB, and that is causing trouble.
Some notes on this:
We thought InfluxDB was not running, but it is. Your curl tries IPv6 first and then falls back to IPv4. That led us astray: the "connection refused" is for IPv6, but curl then tries IPv4, and there I think it succeeds.
I think the initial InfluxDB installation was successful, and you didn't remove it, so it still has the original admin account you set up. According to this page, authentication is not enforced until after the first user is created. The first installation succeeded, and now you are trying to add the user again without credentials, so you are not authorized and get the 401 error.
Was anything broken when you first installed it? There are messages in the log file, but if they are all related to IPv6, then maybe the installation actually worked and you are only seeing issues due to IPv6.
I am not very familiar with IPv6 and IPv4 issues. Maybe someone else can help with that.
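If that's what happened, the 401 should go away once the request carries the existing admin credentials. A sketch against the InfluxDB 1.x HTTP API (the username and password are placeholders for whatever you set up on the first install; `-4` forces IPv4 so the IPv6 noise disappears):

```shell
# Authenticated query against InfluxDB 1.x via basic auth.
# 'admin' / YOUR_ORIGINAL_PASSWORD are placeholders for the account
# created during the first (successful) installation.
curl -4 -s -XPOST 'http://localhost:8086/query' \
     -u 'admin:YOUR_ORIGINAL_PASSWORD' \
     --data-urlencode 'q=SHOW USERS'
```

If that returns the user list instead of the 401, the original admin account is still in place and the database itself is fine.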
That does sound like the real problem. I just did the final check: removed InfluxDB, removed Grafana, and then installed via openhabian-config with simple passwords.
Sadly, I got the same errors again. Looks like I have to do a completely new install from scratch.
[21:36:59] root@openhab:/home/openhabian# openhabian-config
2020-02-12_21:40:44_CET [openHABian] Checking for root privileges... OK
Get:1 http://archive.raspberrypi.org/debian buster InRelease [25.1 kB]
Get:2 http://raspbian.raspberrypi.org/raspbian buster InRelease [15.0 kB]
Ign:3 https://dl.bintray.com/openhab/apt-repo2 stable InRelease
Hit:4 https://repos.influxdata.com/debian buster InRelease
Hit:5 https://packages.grafana.com/oss/deb stable InRelease
Get:6 https://dl.bintray.com/openhab/apt-repo2 stable Release [6,051 B]
Get:7 http://archive.raspberrypi.org/debian buster/main armhf Packages [277 kB]
Fetched 324 kB in 4s (83.3 kB/s)
Reading package lists... Done
2020-02-12_21:40:58_CET [openHABian] Loading configuration file '/etc/openhabian.conf'... OK
2020-02-12_21:41:01_CET [openHABian] openHABian configuration tool version: [master]v1.5-541(5158a5f)
2020-02-12_21:41:02_CET [openHABian] Checking for changes in origin... OK
2020-02-12_21:41:14_CET [openHABian] Updating myself... OK - No remote changes detected. You are up to date!
2020-02-12_21:41:24_CET [openHABian] Setting up InfluxDB and Grafana...
Installing InfluxDB...
$ apt-get -y install apt-transport-https
Reading package lists... Done
Building dependency tree
Reading state information... Done
apt-transport-https is already the newest version (1.8.2).
0 upgraded, 0 newly installed, 0 to remove and 29 not upgraded.
--2020-02-12 21:43:45-- https://repos.influxdata.com/influxdb.key
Resolving repos.influxdata.com (repos.influxdata.com)… 2600:9000:21f3:1c00:7:7790:e740:93a1, 2600:9000:21f3:8600:7:7790:e740:93a1, 2600:9000:21f3:600:7:7790:e740:93a1, ...
Connecting to repos.influxdata.com (repos.influxdata.com)|2600:9000:21f3:1c00:7:7790:e740:93a1|:443 … connected.
HTTP request sent, awaiting response … 200 OK
Length: 3108 (3.0K) [application/pgp-keys]
Saving to: 'STDOUT'
- 100%[====================================================================================================>] 3.04K --.-KB/s in 0.001s
2020-02-12 21:43:46 (3.40 MB/s) - written to stdout [3108/3108]
OK
$ apt-get update
Hit:1 http://raspbian.raspberrypi.org/raspbian buster InRelease
Hit:2 https://repos.influxdata.com/debian buster InRelease
Ign:3 https://dl.bintray.com/openhab/apt-repo2 stable InRelease
Hit:4 http://archive.raspberrypi.org/debian buster InRelease
Hit:5 https://packages.grafana.com/oss/deb stable InRelease
Get:6 https://dl.bintray.com/openhab/apt-repo2 stable Release [6,051 B]
Fetched 6,051 B in 4s (1,579 B/s)
Reading package lists... Done
$ apt-get -y install influxdb
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  influxdb
0 upgraded, 1 newly installed, 0 to remove and 29 not upgraded.
Need to get 59.4 MB of archives.
After this operation, 158 MB of additional disk space will be used.
Get:1 https://repos.influxdata.com/debian buster/stable armhf influxdb armhf 1.7.10-1 [59.4 MB]
Fetched 59.4 MB in 18s (3,287 kB/s)
Selecting previously unselected package influxdb.
(Reading database ... 47634 files and directories currently installed.)
Preparing to unpack .../influxdb_1.7.10-1_armhf.deb ...
Unpacking influxdb (1.7.10-1) ...
Setting up influxdb (1.7.10-1) ...
Processing triggers for man-db (2.8.5-2) ...
Updating FireMotD available updates count ...
$ systemctl daemon-reload
$ systemctl enable influxdb.service
$ systemctl restart influxdb.service
OK Configure InfluxDB admin account... curl: (7) Failed to connect to localhost port 8086: Connection refused
FAILED Configure listen on localhost only...
$ sed -i -e /# Determines whether HTTP endpoint is enabled./ { n ; s/# enabled = true/enabled = true/ } /etc/influxdb/influxdb.conf
$ sed -i s/# bind-address = ":8086"/bind-address = "localhost:8086"/g /etc/influxdb/influxdb.conf
$ sed -i s/# auth-enabled = false/auth-enabled = true/g /etc/influxdb/influxdb.conf
$ sed -i s/# store-enabled = true/store-enabled = false/g /etc/influxdb/influxdb.conf
$ systemctl restart influxdb.service
FAILED
Setup of initial influxdb database and InfluxDB users... curl: (7) Failed to connect to localhost port 8086: Connection refused
curl: (7) Failed to connect to localhost port 8086: Connection refused
curl: (7) Failed to connect to localhost port 8086: Connection refused
curl: (7) Failed to connect to localhost port 8086: Connection refused
curl: (7) Failed to connect to localhost port 8086: Connection refused
FAILED Installing Grafana...--2020-02-12 21:46:02-- https://packages.grafana.com/gpg.key
Resolving packages.grafana.com (packages.grafana.com)… 2a04:4e42:1b::729, 151.101.114.217
Connecting to packages.grafana.com (packages.grafana.com)|2a04:4e42:1b::729|:443 … connected.
HTTP request sent, awaiting response … 200 OK
Length: 1694 (1.7K) [application/x-iwork-keynote-sffkey]
Saving to: 'STDOUT'
- 100%[====================================================================================================>] 1.65K --.-KB/s in 0s
2020-02-12 21:46:02 (15.6 MB/s) - written to stdout [1694/1694]
OK
$ apt-get update
Hit:1 http://raspbian.raspberrypi.org/raspbian buster InRelease
Hit:2 https://repos.influxdata.com/debian buster InRelease
Hit:3 https://packages.grafana.com/oss/deb stable InRelease
Ign:4 https://dl.bintray.com/openhab/apt-repo2 stable InRelease
Get:5 https://dl.bintray.com/openhab/apt-repo2 stable Release [6,051 B]
Hit:7 http://archive.raspberrypi.org/debian buster InRelease
Fetched 6,051 B in 25s (239 B/s)
Reading package lists... Done
$ apt-get -y install grafana
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  grafana
0 upgraded, 1 newly installed, 0 to remove and 29 not upgraded.
Need to get 0 B/34.6 MB of archives.
After this operation, 102 MB of additional disk space will be used.
Selecting previously unselected package grafana.
(Reading database ... 47660 files and directories currently installed.)
Preparing to unpack .../grafana_6.6.1_armhf.deb ...
Unpacking grafana (6.6.1) ...
Setting up grafana (6.6.1) ...
### NOT starting on installation, please execute the following statements to configure grafana to start automatically using systemd
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable grafana-server
### You can start grafana-server by executing
sudo /bin/systemctl start grafana-server
Processing triggers for systemd (241-7~deb10u2+rpi1) ...
Updating FireMotD available updates count ...
$ systemctl daemon-reload
$ systemctl enable grafana-server.service
Synchronizing state of grafana-server.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable grafana-server
$ systemctl start grafana-server.service
OK
Updating Grafana admin password...{"message":"Invalid username or password"}OK Updating Grafana configuration...
$ sed -i -e /^# disable user signup \/ registration/ { n ; s/^;allow_sign_up = true/allow_sign_up = false/ } /etc/grafana/grafana.ini
$ sed -i -e /^# enable anonymous access/ { n ; s/^;enabled = false/enabled = true/ } /etc/grafana/grafana.ini
$ systemctl restart grafana-server.service
Connection Grafana to InfluxDB...{"message":"Invalid username or password"}Adding openHAB dashboard tile for Grafana... 2020-02-12_21:48:09_CET [openHABian] Adding an openHAB dashboard tile for 'grafana'... Replacing... OK
Adding install InfluxDB with database configuration to openHAB
$ touch /etc/openhab2/services/influxdb.cfg
2020-02-12_21:48:29_CET [openHABian] Checking for default openHABian username:password combination... OK
2020-02-12_21:48:30_CET [openHABian] We hope you got what you came for! See you again soon ;)
[21:48:30] root@openhab:/home/openhabian#
Try disabling IPv6 altogether by adding net.ipv6.conf.all.disable_ipv6=1 to /etc/sysctl.conf and enable the setting with sudo sysctl -p.
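For reference, the full change would look something like this (the `default` line is an extra I would add so that interfaces created later are covered too):

```shell
# Append to /etc/sysctl.conf, then apply with: sudo sysctl -p
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```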
Uninstall InfluxDB (and make sure to remove any remaining config file) and reinstall it. Hope this fixes the initial connection error…
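A purge along these lines should catch the leftover state. The paths are the Debian-package defaults for InfluxDB 1.x; double-check them on your system before deleting anything, because `/var/lib/influxdb` holds the actual data:

```shell
# Stop the service, purge the package (purge also removes its config),
# then remove what the package manager leaves behind.
sudo systemctl stop influxdb
sudo apt-get purge -y influxdb
sudo rm -rf /etc/influxdb /var/lib/influxdb
```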
Are you sure that it is really broken? Have you tried running Grafana or using InfluxDB? I think you are seeing an IPv6 failure for each command, followed by a successful IPv4 attempt. For example, the Grafana password update first says "Invalid username or password", but then says "OK". I think the first error is from IPv6, and then the IPv4 attempt works.
If you run the command below, do you see InfluxDB running?
sudo systemctl status influxdb
Anyway, try disabling IPv6 as @noppes123 suggests as that will get rid of any extraneous issues.
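To verify, something like this should tell us whether the daemon is up and reachable over IPv4 (in InfluxDB 1.x the `/ping` endpoint returns HTTP 204 when the server is healthy; `ss` may need to be swapped for `netstat -tlnp` on older images):

```shell
# Is the service running?
sudo systemctl status influxdb --no-pager
# What is it bound to? Expect 127.0.0.1:8086 after the localhost-only config.
sudo ss -tlnp | grep 8086
# Health check over IPv4 only; a healthy InfluxDB 1.x answers 204.
curl -4 -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8086/ping
```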
OK, I did the following steps:
disabled IPv6 as described by @noppes123
removed Grafana and InfluxDB, deleted all the config files I found in openhab/services and var/lib
As a result, I still get the same installation errors:
$ apt-get update
Hit:1 http://raspbian.raspberrypi.org/raspbian buster InRelease
Hit:2 http://archive.raspberrypi.org/debian buster InRelease
Hit:3 https://repos.influxdata.com/debian buster InRelease
Ign:4 https://dl.bintray.com/openhab/apt-repo2 stable InRelease
Hit:5 https://packages.grafana.com/oss/deb stable InRelease
Get:6 https://dl.bintray.com/openhab/apt-repo2 stable Release [6,051 B]
Fetched 6,051 B in 4s (1,578 B/s)
Reading package lists... Done
$ apt-get -y install influxdb
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  influxdb
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Need to get 0 B/59.4 MB of archives.
After this operation, 158 MB of additional disk space will be used.
Selecting previously unselected package influxdb.
(Reading database ... 47650 files and directories currently installed.)
Preparing to unpack .../influxdb_1.7.10-1_armhf.deb ...
Unpacking influxdb (1.7.10-1) ...
Setting up influxdb (1.7.10-1) ...
Processing triggers for man-db (2.8.5-2) ...
Updating FireMotD available updates count ...
$ systemctl daemon-reload
$ systemctl enable influxdb.service
$ systemctl restart influxdb.service
OK Configure InfluxDB admin account... curl: (7) Failed to connect to localhost port 8086: Connection refused
FAILED Configure listen on localhost only...
$ sed -i -e /# Determines whether HTTP endpoint is enabled./ { n ; s/# enabled = true/enabled = true/ } /etc/influxdb/influxdb.conf
$ sed -i s/# bind-address = ":8086"/bind-address = "localhost:8086"/g /etc/influxdb/influxdb.conf
$ sed -i s/# auth-enabled = false/auth-enabled = true/g /etc/influxdb/influxdb.conf
$ sed -i s/# store-enabled = true/store-enabled = false/g /etc/influxdb/influxdb.conf
$ systemctl restart influxdb.service
FAILED
Setup of initial influxdb database and InfluxDB users... curl: (7) Failed to connect to localhost port 8086: Connection refused
curl: (7) Failed to connect to localhost port 8086: Connection refused
curl: (7) Failed to connect to localhost port 8086: Connection refused
curl: (7) Failed to connect to localhost port 8086: Connection refused
curl: (7) Failed to connect to localhost port 8086: Connection refused
FAILED Installing Grafana...--2020-02-13 20:36:08-- https://packages.grafana.com/gpg.key
Resolving packages.grafana.com (packages.grafana.com)… 151.101.114.217, 2a04:4e42:1b::729
Connecting to packages.grafana.com (packages.grafana.com)|151.101.114.217|:443 … connected.
HTTP request sent, awaiting response … 200 OK
Length: 1694 (1.7K) [application/x-iwork-keynote-sffkey]
Saving to: 'STDOUT'
- 100%[====================================================================================================>] 1.65K --.-KB/s in 0s
2020-02-13 20:36:08 (16.1 MB/s) - written to stdout [1694/1694]
OK
$ apt-get update
Hit:1 http://archive.raspberrypi.org/debian buster InRelease
Hit:2 http://raspbian.raspberrypi.org/raspbian buster InRelease
Hit:3 https://repos.influxdata.com/debian buster InRelease
Ign:4 https://dl.bintray.com/openhab/apt-repo2 stable InRelease
Get:5 https://dl.bintray.com/openhab/apt-repo2 stable Release [6,051 B]
Hit:6 https://packages.grafana.com/oss/deb stable InRelease
Fetched 6,051 B in 4s (1,516 B/s)
Reading package lists... Done
$ apt-get -y install grafana
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  grafana
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Need to get 0 B/34.6 MB of archives.
After this operation, 102 MB of additional disk space will be used.
Selecting previously unselected package grafana.
(Reading database ... 47676 files and directories currently installed.)
Preparing to unpack .../grafana_6.6.1_armhf.deb ...
Unpacking grafana (6.6.1) ...
Setting up grafana (6.6.1) ...
### NOT starting on installation, please execute the following statements to configure grafana to start automatically using systemd
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable grafana-server
### You can start grafana-server by executing
sudo /bin/systemctl start grafana-server
Processing triggers for systemd (241-7~deb10u2+rpi1) ...
Updating FireMotD available updates count ...
$ systemctl daemon-reload
$ systemctl enable grafana-server.service
Synchronizing state of grafana-server.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable grafana-server
$ systemctl start grafana-server.service
OK
Updating Grafana admin password...{"message":"Invalid username or password"}OK Updating Grafana configuration...
$ sed -i -e /^# disable user signup \/ registration/ { n ; s/^;allow_sign_up = true/allow_sign_up = false/ } /etc/grafana/grafana.ini
$ sed -i -e /^# enable anonymous access/ { n ; s/^;enabled = false/enabled = true/ } /etc/grafana/grafana.ini
$ systemctl restart grafana-server.service
Connection Grafana to InfluxDB...{"message":"Invalid username or password"}Adding openHAB dashboard tile for Grafana... 2020-02-13_20:37:44_CET [openHABian] Adding an openHAB dashboard tile for 'grafana'... Replacing... OK
Adding install InfluxDB with database configuration to openHAB
$ touch /etc/openhab2/services/influxdb.cfg
2020-02-13_20:38:07_CET [openHABian] Checking for default openHABian username:password combination... OK
2020-02-13_20:38:07_CET [openHABian] We hope you got what you came for! See you again soon ;)
Yes, but maybe it is working? Have you tried the status command below? I think InfluxDB and Grafana are running okay. The issue is that you get errors when curl tries IPv6, but the IPv4 curl works. Have you tried to use Grafana or InfluxDB? Are there any errors from them in /var/log/syslog?
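If you are unsure where to look, something along these lines would show the recent service output (assuming the systemd journal is available on openHABian, which it should be on Raspbian Buster):

```shell
# Recent log entries from the InfluxDB service itself.
sudo journalctl -u influxdb -n 50 --no-pager
# Or grep the syslog directly for anything InfluxDB-related.
grep -i influxdb /var/log/syslog | tail -n 20
```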
java.lang.RuntimeException: {"error":"database not found: \"openhab_db\""}
at org.influxdb.impl.InfluxDBErrorHandler.handleError(InfluxDBErrorHandler.java:19) ~[influxdb-java-2.2.jar:?]
at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:242) ~[retrofit-1.9.0.jar:?]
at org.influxdb.impl.$Proxy260.writePoints(Unknown Source) ~[?:?]
at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:151) ~[influxdb-java-2.2.jar:?]
at org.influxdb.impl.BatchProcessor.write(BatchProcessor.java:171) [influxdb-java-2.2.jar:?]
at org.influxdb.impl.BatchProcessor$1.run(BatchProcessor.java:144) [influxdb-java-2.2.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_222]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_222]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_222]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]
Can you post more lines from the openhab log? Is there anything else around the error? What time was the error? Does the error repeat? I ask because the systemctl status lines indicate that openHAB is writing successfully (and somebody is reading), yet your other log indicates an error. So is it working some of the time?
These look like it is working, with a write every minute:
OK, here is my complete log.
The error has repeated every minute since:
2020-02-14 22:33:51.526 [me.event.ThingUpdatedEvent] - Thing 'homematic:HM-Sec-SCo:3014F711A061A7D8A9AB31F2:PEQ0569555' has been updated.
2020-02-14 22:33:51.875 [vent.ItemStateChangedEvent] - [rest of the event line lost in the paste; the same InfluxDB stack trace follows]
	[identical stack trace as above]
2020-02-14 22:50:00.045 [ERROR] [org.influxdb.impl.BatchProcessor ] - Batch could not be sent. Data will be lost
java.lang.RuntimeException: {"error":"database not found: \"openhab_db\""}
at org.influxdb.impl.InfluxDBErrorHandler.handleError(InfluxDBErrorHandler.java:19) ~[influxdb-java-2.2.jar:?]
at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:242) ~[retrofit-1.9.0.jar:?]
at org.influxdb.impl.$Proxy260.writePoints(Unknown Source) ~[?:?]
at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:151) ~[influxdb-java-2.2.jar:?]
at org.influxdb.impl.BatchProcessor.write(BatchProcessor.java:171) [influxdb-java-2.2.jar:?]
at org.influxdb.impl.BatchProcessor$1.run(BatchProcessor.java:144) [influxdb-java-2.2.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_222]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_222]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_222]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]
2020-02-14 22:51:00.145 [ERROR] [org.influxdb.impl.BatchProcessor ] - Batch could not be sent. Data will be lost
java.lang.RuntimeException: {"error":"database not found: \"openhab_db\""}
at org.influxdb.impl.InfluxDBErrorHandler.handleError(InfluxDBErrorHandler.java:19) ~[influxdb-java-2.2.jar:?]
at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:242) ~[retrofit-1.9.0.jar:?]
at org.influxdb.impl.$Proxy260.writePoints(Unknown Source) ~[?:?]
at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:151) ~[influxdb-java-2.2.jar:?]
at org.influxdb.impl.BatchProcessor.write(BatchProcessor.java:171) [influxdb-java-2.2.jar:?]
at org.influxdb.impl.BatchProcessor$1.run(BatchProcessor.java:144) [influxdb-java-2.2.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_222]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_222]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_222]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]
2020-02-14 22:52:00.146 [ERROR] [org.influxdb.impl.BatchProcessor ] - Batch could not be sent. Data will be lost
java.lang.RuntimeException: {"error":"database not found: \"openhab_db\""}
at org.influxdb.impl.InfluxDBErrorHandler.handleError(InfluxDBErrorHandler.java:19) ~[influxdb-java-2.2.jar:?]
at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:242) ~[retrofit-1.9.0.jar:?]
at org.influxdb.impl.$Proxy260.writePoints(Unknown Source) ~[?:?]
at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:151) ~[influxdb-java-2.2.jar:?]
at org.influxdb.impl.BatchProcessor.write(BatchProcessor.java:171) [influxdb-java-2.2.jar:?]
at org.influxdb.impl.BatchProcessor$1.run(BatchProcessor.java:144) [influxdb-java-2.2.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_222]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_222]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_222]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]
2020-02-14 22:53:00.147 [ERROR] [org.influxdb.impl.BatchProcessor ] - Batch could not be sent. Data will be lost
java.lang.RuntimeException: {"error":"database not found: \"openhab_db\""}
at org.influxdb.impl.InfluxDBErrorHandler.handleError(InfluxDBErrorHandler.java:19) ~[influxdb-java-2.2.jar:?]
at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:242) ~[retrofit-1.9.0.jar:?]
at org.influxdb.impl.$Proxy260.writePoints(Unknown Source) ~[?:?]
at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:151) ~[influxdb-java-2.2.jar:?]
at org.influxdb.impl.BatchProcessor.write(BatchProcessor.java:171) [influxdb-java-2.2.jar:?]
at org.influxdb.impl.BatchProcessor$1.run(BatchProcessor.java:144) [influxdb-java-2.2.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_222]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_222]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_222]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]
==> /var/log/openhab2/events.log <==
2020-02-14 22:53:09.072 [vent.ItemStateChangedEvent] - PV_AC_Voltage changed from 231.7 to 231.4
==> /var/log/openhab2/openhab.log <==
2020-02-14 22:54:00.146 [ERROR] [org.influxdb.impl.BatchProcessor ] - Batch could not be sent. Data will be lost
java.lang.RuntimeException: {"error":"database not found: \"openhab_db\""}
at org.influxdb.impl.InfluxDBErrorHandler.handleError(InfluxDBErrorHandler.java:19) ~[influxdb-java-2.2.jar:?]
at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:242) ~[retrofit-1.9.0.jar:?]
at org.influxdb.impl.$Proxy260.writePoints(Unknown Source) ~[?:?]
at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:151) ~[influxdb-java-2.2.jar:?]
at org.influxdb.impl.BatchProcessor.write(BatchProcessor.java:171) [influxdb-java-2.2.jar:?]
at org.influxdb.impl.BatchProcessor$1.run(BatchProcessor.java:144) [influxdb-java-2.2.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_222]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_222]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_222]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]
2020-02-14 22:55:00.146 [ERROR] [org.influxdb.impl.BatchProcessor ] - Batch could not be sent. Data will be lost
java.lang.RuntimeException: {"error":"database not found: \"openhab_db\""}
at org.influxdb.impl.InfluxDBErrorHandler.handleError(InfluxDBErrorHandler.java:19) ~[influxdb-java-2.2.jar:?]
at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:242) ~[retrofit-1.9.0.jar:?]
at org.influxdb.impl.$Proxy260.writePoints(Unknown Source) ~[?:?]
at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:151) ~[influxdb-java-2.2.jar:?]
at org.influxdb.impl.BatchProcessor.write(BatchProcessor.java:171) [influxdb-java-2.2.jar:?]
at org.influxdb.impl.BatchProcessor$1.run(BatchProcessor.java:144) [influxdb-java-2.2.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_222]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_222]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_222]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]
2020-02-14 22:56:00.146 [ERROR] [org.influxdb.impl.BatchProcessor ] - Batch could not be sent. Data will be lost
java.lang.RuntimeException: {"error":"database not found: \"openhab_db\""}
	[identical stack trace as above; trimmed]
==> /var/log/openhab2/events.log <==
2020-02-14 22:56:49.646 [vent.ItemStateChangedEvent] - homematic_bridge_3014F711A061A7D8A9AB31F2_DUTY_CYCLE_RATIO changed from 20 to 21
==> /var/log/openhab2/openhab.log <==
	[the same "database not found" error and stack trace repeated at 22:57, 22:58 and 22:59; trimmed]
==> /var/log/openhab2/events.log <==
2020-02-14 22:59:35.171 [vent.ItemStateChangedEvent] - homematic_bridge_3014F711A061A7D8A9AB31F2_DUTY_CYCLE_RATIO changed from 21 to 16
I just tried it again with admin and no password. I'm making progress now, but after so many changes I've lost track of my next steps…
Connected to http://localhost:8086 version 1.7.10
InfluxDB shell version: 1.7.10
> auth
username: admin
password:
> use openhab_db
WARN: error authorizing query: create admin user first or disable authentication
Using database openhab_db
> CREATE USER admin WITH PASSWORD '1234' WITH ALL PRIVILEGES
> use openhab_db
WARN: authorization failed
Using database openhab_db
No, I think those are successful writes; there should be a log entry for every write, so this is normal. I think it is working. This is probably a good write (although the end of it is cut off, so I can't tell for sure):
You get errors when it tries IPv6, then a successful transaction when it falls back to IPv4.
I am convinced that the problem is fundamentally an IPv6 vs IPv4 issue. If you solve that, all of these funky issues will go away. That's where you should focus. All I can suggest is a Google search; I've used RPis and never seen this issue, but that's a small sample.
What’s in your /etc/hosts file?
Of course everything above could be totally wrong…
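A quick way to test the IPv4/IPv6 theory is to check how localhost resolves and then force the client onto IPv4 (a sketch; /ping is InfluxDB 1.x's health-check endpoint):

```shell
# Show which address the resolver returns first for localhost;
# if it is ::1, clients will try IPv6 before falling back to IPv4.
getent hosts localhost

# Force curl onto IPv4 with -4; if this succeeds cleanly where the plain
# call misbehaves, IPv6-first resolution is the likely culprit.
#   curl -4 -i http://localhost:8086/ping
```

If ::1 wins, commenting out or reordering the ::1 localhost entry in /etc/hosts is one common workaround.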
It looks like an openhab_db database somehow already existed (from a previous setup?) before you created the initial admin user.
I suggest deleting the data directory and recreating everything:

1. Stop the InfluxDB daemon with sudo systemctl stop influxdb
2. Delete the data directory and its subdirectories (the location is specified in influxdb.conf)
3. Start the daemon
4. Start the InfluxDB CLI
5. Create the admin user
6. Create the openHAB database
7. Create the openHAB user
8. Grant the openHAB user all privileges on the openHAB database
9. Create the Grafana user
10. Grant the Grafana user read privileges on the openHAB database

Don't forget to match the user/password/database in openHAB's influxdb.cfg.
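The steps above, sketched as a shell session. The directory paths, user names and passwords are examples only (the storage paths come from influxdb.conf; /var/lib/influxdb is the Debian/Raspbian default):

```shell
# Steps 1-3: stop the daemon, wipe the storage directories, restart.
# Check your influxdb.conf before deleting; defaults shown.
#   sudo systemctl stop influxdb
#   sudo rm -rf /var/lib/influxdb/data /var/lib/influxdb/meta /var/lib/influxdb/wal
#   sudo systemctl start influxdb

# Steps 4-10 as InfluxQL statements. Printed here so you can review them;
# on the server, pipe them into the CLI:  setup_statements | influx
setup_statements() {
  cat <<'EOF'
CREATE USER admin WITH PASSWORD 'adminpass' WITH ALL PRIVILEGES
CREATE DATABASE openhab_db
CREATE USER openhab WITH PASSWORD 'openhabpass'
GRANT ALL ON openhab_db TO openhab
CREATE USER grafana WITH PASSWORD 'grafanapass'
GRANT READ ON openhab_db TO grafana
EOF
}
setup_statements
```

The same user, password and database values then go into openHAB's influxdb.cfg (and into the Grafana data source).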
Indeed, those entries are just records of a write action. But given the strange behaviour shown on the command line, I suggest starting clean.
Hi,
I installed InfluxDB and the related persistence add-on, but there is no data in the DB (I checked with Grafana).
Can someone help with this issue?
In the openHAB log file I see the following:
==> /var/log/openhab/openhab.log <==
2021-02-25 23:34:29.001 [ERROR] [.client.write.events.WriteErrorEvent] - The error occurred during writing of data
at com.influxdb.internal.AbstractRestClient.responseToError(AbstractRestClient.java:98) ~[bundleFile:?]
at com.influxdb.client.internal.AbstractWriteClient.toInfluxException(AbstractWriteClient.java:574) [bundleFile:?]
at com.influxdb.client.internal.AbstractWriteClient.lambda$new$12(AbstractWriteClient.java:181) [bundleFile:?]
at io.reactivex.internal.subscribers.LambdaSubscriber.onNext(LambdaSubscriber.java:65) [bundleFile:?]
at io.reactivex.internal.operators.flowable.FlowableDoFinally$DoFinallySubscriber.onNext(FlowableDoFinally.java:84) [bundleFile:?]
at io.reactivex.internal.operators.mixed.FlowableConcatMapMaybe$ConcatMapMaybeSubscriber.drain(FlowableConcatMapMaybe.java:284) [bundleFile:?]
at io.reactivex.internal.operators.mixed.FlowableConcatMapMaybe$ConcatMapMaybeSubscriber.onNext(FlowableConcatMapMaybe.java:137) [bundleFile:?]
at io.reactivex.internal.operators.mixed.FlowableConcatMapSingle$ConcatMapSingleSubscriber.drain(FlowableConcatMapSingle.java:279) [bundleFile:?]
at io.reactivex.internal.operators.mixed.FlowableConcatMapSingle$ConcatMapSingleSubscriber.innerSuccess(FlowableConcatMapSingle.java:179) [bundleFile:?]
at io.reactivex.internal.operators.mixed.FlowableConcatMapSingle$ConcatMapSingleSubscriber$ConcatMapSingleObserver.onSuccess(FlowableConcatMapSingle.java:317) [bundleFile:?]
at io.reactivex.internal.operators.single.SingleMap$MapSingleObserver.onSuccess(SingleMap.java:64) [bundleFile:?]
at io.reactivex.internal.operators.single.SingleMap$MapSingleObserver.onSuccess(SingleMap.java:64) [bundleFile:?]
at io.reactivex.internal.operators.flowable.FlowableCollectSingle$CollectSubscriber.onComplete(FlowableCollectSingle.java:119) [bundleFile:?]
at io.reactivex.internal.subscribers.BasicFuseableSubscriber.onComplete(BasicFuseableSubscriber.java:120) [bundleFile:?]
at io.reactivex.internal.subscribers.BasicFuseableConditionalSubscriber.onComplete(BasicFuseableConditionalSubscriber.java:119) [bundleFile:?]
at io.reactivex.internal.operators.flowable.FlowableGroupBy$State.checkTerminated(FlowableGroupBy.java:686) [bundleFile:?]
at io.reactivex.internal.operators.flowable.FlowableGroupBy$State.drainNormal(FlowableGroupBy.java:626) [bundleFile:?]
at io.reactivex.internal.operators.flowable.FlowableGroupBy$State.drain(FlowableGroupBy.java:558) [bundleFile:?]
at io.reactivex.internal.operators.flowable.FlowableGroupBy$State.onComplete(FlowableGroupBy.java:548) [bundleFile:?]
at io.reactivex.internal.operators.flowable.FlowableGroupBy$GroupedUnicast.onComplete(FlowableGroupBy.java:474) [bundleFile:?]
at io.reactivex.internal.operators.flowable.FlowableGroupBy$GroupBySubscriber.onComplete(FlowableGroupBy.java:213) [bundleFile:?]
at io.reactivex.processors.UnicastProcessor.checkTerminated(UnicastProcessor.java:430) [bundleFile:?]
at io.reactivex.processors.UnicastProcessor.drainRegular(UnicastProcessor.java:314) [bundleFile:?]
at io.reactivex.processors.UnicastProcessor.drain(UnicastProcessor.java:397) [bundleFile:?]
at io.reactivex.processors.UnicastProcessor.onComplete(UnicastProcessor.java:487) [bundleFile:?]
at io.reactivex.internal.operators.flowable.FlowableWindowBoundarySupplier$WindowBoundaryMainSubscriber.drain(FlowableWindowBoundarySupplier.java:254) [bundleFile:?]
at io.reactivex.internal.operators.flowable.FlowableWindowBoundarySupplier$WindowBoundaryMainSubscriber.innerNext(FlowableWindowBoundarySupplier.java:166) [bundleFile:?]
at io.reactivex.internal.operators.flowable.FlowableWindowBoundarySupplier$WindowBoundaryInnerSubscriber.onNext(FlowableWindowBoundarySupplier.java:316) [bundleFile:?]
at io.reactivex.processors.PublishProcessor$PublishSubscription.onNext(PublishProcessor.java:360) [bundleFile:?]
at io.reactivex.processors.PublishProcessor.onNext(PublishProcessor.java:243) [bundleFile:?]
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.tryEmit(FlowableFlatMap.java:282) [bundleFile:?]
at io.reactivex.internal.operators.flowable.FlowableFlatMap$InnerSubscriber.onNext(FlowableFlatMap.java:668) [bundleFile:?]
at io.reactivex.subscribers.SerializedSubscriber.onNext(SerializedSubscriber.java:100) [bundleFile:?]
at io.reactivex.internal.operators.flowable.FlowableWindowTimed$WindowExactBoundedSubscriber.drainLoop(FlowableWindowTimed.java:498) [bundleFile:?]
at io.reactivex.internal.operators.flowable.FlowableWindowTimed$WindowExactBoundedSubscriber$ConsumerIndexHolder.run(FlowableWindowTimed.java:579) [bundleFile:?]
at io.reactivex.Scheduler$Worker$PeriodicTask.run(Scheduler.java:479) [bundleFile:?]
at io.reactivex.internal.schedulers.ScheduledRunnable.run(ScheduledRunnable.java:66) [bundleFile:?]
at io.reactivex.internal.schedulers.ScheduledRunnable.call(ScheduledRunnable.java:57) [bundleFile:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
I have the same problems as @knaxman, but I don't understand your statement: in my influxdb.conf there are no data directories specified, and deleting influxdb/data/ was not enough…
Would you be so kind as to spell this out for me, please?
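For reference, a stock InfluxDB 1.x install keeps its storage locations in three places in /etc/influxdb/influxdb.conf; if a line is missing or commented out, the packaged default still applies. The Debian/Raspbian defaults are shown below, so a truly clean slate means removing all three directories, not just data/:

```toml
[meta]
  dir = "/var/lib/influxdb/meta"

[data]
  dir = "/var/lib/influxdb/data"
  wal-dir = "/var/lib/influxdb/wal"
```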