Logging to GrayLog Revisited

This tutorial is going to cover a lot of territory. Skip to the bottom section if all you need is to know what to put into log4j2.xml to get OH to log to GrayLog directly.

It’s also way more than most home automation users would want or need to run.

You are not going to run this on your openHABian Raspberry Pi.

This is also not a full GrayLog tutorial. There are plenty of those out there. It’s really just enough to get up and running.

Why GrayLog?

I can’t say I spent a whole lot of time researching all the options. But my requirements were relatively simple:

  1. self hosted/on premises
  2. FOSS

Surprisingly, after just those two requirements one is left with two options: OpenSearch Stack and GrayLog.

The OpenSearch Stack is a fork of ElasticSearch 7, along with a fork of Kibana from the same point in time; it is essentially the ELK stack, only with a FOSS license. Elastic has effectively closed-sourced ElasticSearch and Kibana from ElasticSearch 8 onward.

My first inclination was to run the ELK stack because I already run ElasticSearch for Nextcloud, but after a little struggling I couldn’t get it to work with my already running ElasticSearch. So I next tried to get the OpenSearch equivalent to work, but ran into struggles there as well.

I even tried a bunch of approaches to get GrayLog and all its ancillary services up and running separately, but eventually gave up on those too. As a last resort I tried the official docker compose files with a couple of minor modifications and, to my surprise, it worked great!

Getting GrayLog up and Running

Note: setting up backups is something I’ve not yet done.

Ultimately all I did was clone the repo (git@github.com:Graylog2/docker-compose.git), modify the open-core/docker-compose.yml file with the secret and hashed root password, and remap a couple of ports since I already have ElasticSearch running on ports 9200 and 9300. Finally I ran docker compose up and, to my surprise, everything came up like a champ.
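If you want to generate those two values by hand first: GRAYLOG_PASSWORD_SECRET is just a long random string, and GRAYLOG_ROOT_PASSWORD_SHA2 is the plain SHA-256 hex digest of whatever admin password you choose. A minimal sketch, assuming sha256sum and awk are available (the 'changeme' password is obviously a placeholder):

```shell
# Generate a random secret (any sufficiently long random string works).
GRAYLOG_PASSWORD_SECRET=$(head -c 32 /dev/urandom | sha256sum | awk '{print $1}')

# The root password goes into the compose file as a SHA-256 hex digest.
# printf avoids the trailing newline that echo would sneak into the hash.
GRAYLOG_ROOT_PASSWORD_SHA2=$(printf '%s' 'changeme' | sha256sum | awk '{print $1}')

echo "$GRAYLOG_PASSWORD_SECRET"
echo "$GRAYLOG_ROOT_PASSWORD_SHA2"
```

These are the two values the Ansible tasks below drop into the .env file.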

The Ansible tasks:

---
# tasks file for graylog

- name: Bump up max map count
  ansible.builtin.lineinfile:
    line: vm.max_map_count=262144
    path: /etc/sysctl.conf
  register: max_map_count
  become: true

- name: Reload sysctl settings # noqa: no-handler
  ansible.builtin.command:
    cmd: sysctl -p
  changed_when: true
  when: max_map_count.changed
  become: true
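The max_map_count bump is there because the OpenSearch-based datanode refuses to start when the kernel’s vm.max_map_count is below 262144 (the same requirement ElasticSearch has). You can check the current value without root before and after the change:

```shell
# Read the current setting straight from procfs; "sysctl -n vm.max_map_count"
# is equivalent where the sysctl binary is installed.
current=$(cat /proc/sys/vm/max_map_count)
echo "vm.max_map_count is currently $current"
```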

- name: Create the directory to clone into
  ansible.builtin.file:
    path: "{{ graylog_home }}"
    state: directory
    owner: "{{ default_user }}"
    group: "{{ default_user }}"
    mode: u=rwx,g=rwx,o=rx
  become: true

- name: Clone the docker-compose repo # noqa: latest[git]
  ansible.builtin.git:
    clone: true
    dest: "{{ graylog_home }}"
    repo: git@github.com:Graylog2/docker-compose.git

- name: Create the .env file
  ansible.builtin.blockinfile:
    path: "/tmp/.env"
    create: true
    mode: u=rw,g=rw,o=r
    block: |
      GRAYLOG_PASSWORD_SECRET="{{ graylog_secret }}"
      GRAYLOG_ROOT_PASSWORD_SHA2="{{ graylog_root_password }}"
  changed_when: false

- name: Copy the official compose file to mine
  ansible.builtin.copy:
    remote_src: true
    src: "{{ graylog_home }}/open-core/docker-compose.yml"
    dest: "/tmp/docker-compose.yml"
    mode: u=rw,g=rw,o=r
  changed_when: false

- name: Change the datanode ports so they don't conflict with ElasticSearch 9200
  ansible.builtin.lineinfile:
    path: "/tmp/docker-compose.yml"
    regex: '      - "9200:9200/tcp".*'
    line: '      - "9400:9200/tcp"'
  changed_when: false

- name: Change the datanode ports so they don't conflict with ElasticSearch 9300
  ansible.builtin.lineinfile:
    path: "/tmp/docker-compose.yml"
    regex: '      - "9300:9300/tcp".*'
    line: '      - "9500:9300/tcp"'
  changed_when: false

- name: Start the graylog services
  community.docker.docker_compose_v2:
    project_name: graylog
    project_src: "/tmp"
    wait: true
    wait_timeout: 600

I usually include backup and restore in my tasks but this is the first time I’ve used docker volumes instead of host folders to store the data. That’s left as an exercise for later.

Initial GrayLog Setup

Log into the GrayLog server (port 9000) with the username admin and the password you set; there are a couple of minor setup steps to take.

Now go to System → Inputs. You’ll want to enable Beats, GELF TCP, and Syslog TCP based on the configs below. Adjust as necessary.
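For context on what the GELF inputs expect: a GELF message is just a JSON object, and over TCP each message is terminated with a NUL byte (which is why the log4j2 config at the bottom of this post sets includeNullDelimiter). A minimal example message, per the GELF spec:

```json
{
  "version": "1.1",
  "host": "example.org",
  "short_message": "A short message",
  "level": 6,
  "_custom_field": "additional fields are prefixed with an underscore"
}
```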

Configure the collectors

I used the Apache 2.0 licensed version of journalbeat to export the journalctl logs from most of my hosts to GrayLog. However, it’s not compiled for older ARM machines (i.e. RPi 3 and below) so on those hosts I installed and configured rsyslog to export the syslogs to GrayLog instead.

Here are the Ansible tasks for those:

---
# tasks file for graylog-collectors

# TODO: Should I be using filebeats?
- name: Set up journalbeats
  when: not ("old_pis" in group_names)
  block:
    - name: Install apt-transport-https
      ansible.builtin.apt:
        name: apt-transport-https
      become: true

    - name: Get elastic's signing key
      ansible.builtin.shell:
        cmd: |
          set -o pipefail
          wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | gpg --dearmor > /tmp/elastic.gpg
        executable: /bin/bash
      changed_when: false
      become: true

    - name: Get cksum of existing key
      ansible.builtin.stat:
        path: "{{ apt_keyring_dir }}/elastic.gpg"
      register: curr_elastic_key

    - name: Get cksum of new key
      ansible.builtin.stat:
        path: /tmp/elastic.gpg
      register: new_elastic_key

    - name: Install elastic's signing key
      ansible.builtin.copy:
        remote_src: true
        src: /tmp/elastic.gpg
        dest: "{{ apt_keyring_dir }}/elastic.gpg"
        mode: u=rw,g=r,o=r
      become: true
      when: (not curr_elastic_key.stat.exists) or (curr_elastic_key.stat.checksum != new_elastic_key.stat.checksum)

    - name: Add elastic's repo
      ansible.builtin.apt_repository:
        repo: 'deb [signed-by={{ apt_keyring_dir }}/elastic.gpg] https://artifacts.elastic.co/packages/oss-7.x/apt stable main' # noqa: yaml[linelength]
        state: present
      become: true

    - name: Install journalbeat
      ansible.builtin.apt:
        name: journalbeat
        update_cache: true
      become: true

    - name: Enable journalbeat to start as a service
      ansible.builtin.systemd_service:
        enabled: true
        state: started
        name: journalbeat
      become: true

    - name: Install the journalbeat config
      ansible.builtin.template:
        dest: /etc/journalbeat/journalbeat.yml
        mode: u=rw,g=r,o=r
        src: journalbeat.yml.j2
      become: true
      register: journal_beat_config

    - name: Restart journalbeat if the config changed # noqa: no-handler
      ansible.builtin.systemd_service:
        state: restarted
        name: journalbeat
      become: true
      when: journal_beat_config.changed

- name: Set up rsyslog
  when: ("old_pis" in group_names)
  block:
    - name: Install rsyslog
      ansible.builtin.apt:
        name: rsyslog
      become: true

    - name: Configure rsyslog
      ansible.builtin.blockinfile:
        path: /etc/rsyslog.conf
        block: |
          *.* action(type="omfwd"
                     target="{{ graylog_ip }}"
                     port="{{ graylog_syslog }}"
                     protocol="tcp"
                     action.resumeRetryCount="100"
                     queue.type="linkedList"
                     queue.size="10000")
          # Log anything (except mail) of level info or higher.
          # Don't log private authentication messages!
          *.info;mail.none;authpriv.none;cron.none      /var/log/messages
          # The authpriv file has restricted access.
          authpriv.*                                    /var/log/secure
          # Log all the mail messages in one place.
          mail.*                                        /var/log/maillog
          # Log cron stuff
          cron.*                                        /var/log/cron
          # Everybody gets emergency messages
          *.emerg                                       :omusrmsg:*
          # Save news errors of level crit and higher in a special file.
          uucp,news.crit                                /var/log/spooler
          # Save boot messages also to boot.log
          local7.*                                      /var/log/boot.log
      become: true
      register: rsyslog_config

    - name: Restart rsyslogd # noqa: no-handler
      ansible.builtin.systemd_service:
        name: rsyslog
        state: restarted
      become: true
      when: rsyslog_config.changed

You should see the relevant Inputs start to receive messages once these run.

Note: if you have OPNsense, go to System → Settings → Logging / targets and add a TCP destination logging to your GrayLog host on port 5140. Be sure to check “rfc5424”.

Setting up openHAB to log directly to GrayLog

There are lots of ways you can configure OH to log straight to GrayLog. Most of the existing tutorials on this forum have openHAB log to syslog and then configure syslog to push the logs to GrayLog. However, I discovered some instructions that let OH log straight to GrayLog, and it doesn’t require getting a separate GELF appender installed and available.

Edit $OH_USERDATA/etc/log4j2.xml and add the following under <Appenders> (replacing the host attribute with your GrayLog host or IP).

                <!-- Gelf appender -->
                <!-- https://logging.apache.org/log4j/2.x/manual/layouts.html#GELFLayout -->

                <Socket name="GRAYLOG" host="10.10.1.111" port="12201" protocol="tcp" immediateFail="true">
                        <GelfLayout host="argus" compressionType="OFF" includeNullDelimiter="true" includeStacktrace="true">
                                <!-- <KeyValuePair key="additionalField1" value="constant value"/>
                                     <KeyValuePair key="additionalField2" value="$${ctx:key}"/> -->
                        </GelfLayout>
                </Socket>

See the URL for more properties or how to use UDP if desired.

The commented-out KeyValuePair is there for reference. It’s how you define custom fields to send as part of the log statements.

Then add the appender to any of the Loggers you want to also log to GrayLog. For example:

                <!-- Karaf Shell logger -->
                <Logger level="OFF" name="org.apache.karaf.shell.support">
                        <AppenderRef ref="STDOUT"/>
                        <AppenderRef ref="GRAYLOG"/>
                </Logger>
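If you’d rather send everything instead of cherry-picking loggers, you can add the appender reference to the root logger instead. A sketch, assuming your log4j2.xml has the usual openHAB root logger (the level and the other AppenderRefs shown here may differ in your file):

```xml
                <Root level="WARN">
                        <AppenderRef ref="LOGFILE"/>
                        <AppenderRef ref="GRAYLOG"/>
                </Root>
```

Be warned that this can get chatty; per-logger AppenderRefs like the example above give you finer control over what ends up in GrayLog.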

Now what?

That’s up to you. Set up pipelines, dashboards, or whatever it is that has you wanting to use GrayLog in the first place, and have fun!

It’s a very powerful platform so there is a lot you can do.
