Collecting statistics to influx - cron jobs, telegraf - mqtt?

Not an openHAB question in the strictest sense.

So I have a bunch of scripts that currently run via cron. They pull information from an API and insert it into InfluxDB for graphing with Grafana. They currently run on the bare metal of one of my Linux servers… but most of my other services and applications I’ve now pushed to Docker.
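For context, each script is roughly this shape (a sketch using the 1.x influxdb Python client; the API URL, field name, host, database, and measurement are all placeholders):

```python
#!/usr/bin/env python3
"""Rough shape of one of the existing cron scripts: poll an API, write a
point to InfluxDB. URL, field, host, database, and measurement are all
placeholders. Uses the 1.x influxdb client and requests."""
import requests
from influxdb import InfluxDBClient

resp = requests.get("https://api.example.com/usage", timeout=10)
resp.raise_for_status()
kwh = float(resp.json()["kwh"])  # hypothetical field in the API response

client = InfluxDBClient(host="influx.local", database="energy")
client.write_points([{
    "measurement": "power_usage",
    "fields": {"kwh": kwh},
}])
```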

So I could make each one of these (there are about 10) into its own Docker container pretty easily. But that strikes me as an inefficient use of memory and compute time. So I started looking for something that would let me bundle a number of these scripts into one runtime container, i.e. a single container that runs multiple Python scripts.
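The simplest version of that bundling might be one long-running process that does the scheduling itself, with each script wrapped as a function. A minimal sketch, assuming the collector names and intervals below (which are made up):

```python
#!/usr/bin/env python3
"""Single-container runner: schedule several collectors in one process
instead of one container per script. Collector bodies and intervals
are placeholders -- swap in the real scripts."""
import threading
import time


def collect_tautulli():
    print("polling tautulli...")  # stand-in for the real API call + influx write


def collect_pixometer():
    print("polling pixometer...")  # stand-in for the real API call + influx write


# (collector function, interval in seconds) -- intervals are guesses
JOBS = [
    (collect_tautulli, 60),
    (collect_pixometer, 3600),
]


def run_every(func, interval):
    """Run func forever on a fixed interval, surviving failures."""
    while True:
        try:
            func()
        except Exception as exc:  # one broken collector shouldn't kill the rest
            print(f"{func.__name__} failed: {exc}")
        time.sleep(interval)


if __name__ == "__main__":
    for func, interval in JOBS:
        threading.Thread(target=run_every, args=(func, interval), daemon=True).start()
    while True:  # keep the main thread (and the container) alive
        time.sleep(60)
```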

From there I started looking at things like collectd and Telegraf. These are extensible collection systems, and it looks like one of them could be the thing that controls the scheduling and provides the framework to build metric collectors into.
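If Telegraf were the frame, its exec input plugin looks like the obvious hook: it runs arbitrary commands on a schedule and parses whatever they print to stdout. Something like this, where the script paths, interval, and timeout are all placeholders:

```toml
# Hypothetical telegraf.conf snippet -- paths, interval, and timeout are placeholders
[[inputs.exec]]
  commands = [
    "/scripts/tautulli_stats.py",
    "/scripts/pixometer_gas.py",
  ]
  interval = "5m"         # per-plugin schedule, overriding the agent default
  timeout = "30s"
  data_format = "influx"  # scripts print InfluxDB line protocol to stdout
```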

Another option might be to stop my scripts talking to InfluxDB directly and have them publish to MQTT instead, then pull MQTT into InfluxDB. But that doesn’t fix the runtime problem.
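The publish side of that would look something like this sketch (broker, topic, and payload shape are all assumptions; something like Telegraf’s mqtt_consumer input would then move the data on to influx):

```python
#!/usr/bin/env python3
"""Sketch of the MQTT variant: publish a reading to a broker instead of
writing to InfluxDB directly. Broker, topic, and payload shape are all
assumptions. Requires the paho-mqtt package."""
import json

import paho.mqtt.publish as publish

reading = {"meter": "gas", "value": 1234.56}  # stand-in for a pixometer reading

publish.single(
    "metrics/gas/reading",     # hypothetical topic
    payload=json.dumps(reading),
    hostname="mqtt.local",     # hypothetical broker
)
```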

Has anyone tried using Telegraf in this way? Or written inputs and outputs for collecting various bits of data? The things I’m talking about are: Plex stats via Tautulli, gas meter readings I take with pixometer (then download from their API), power usage I download from my electricity company, and UniFi network usage from their API (as well as via SNMP, which is why I already have Telegraf running).
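To make the Tautulli case concrete, here’s the sort of exec-style collector I have in mind: poll the API, print InfluxDB line protocol for Telegraf to parse. The host, port, API key, and response fields below are assumptions, not checked against Tautulli’s docs:

```python
#!/usr/bin/env python3
"""Exec-style collector sketch: poll the Tautulli API and print InfluxDB
line protocol to stdout for Telegraf's exec plugin to parse. Host, port,
API key, and the exact response fields are assumptions -- check the
Tautulli API docs before relying on them."""
import json
import urllib.request

TAUTULLI_URL = "http://tautulli:8181/api/v2"  # hypothetical host/port
API_KEY = "changeme"                          # placeholder


def main():
    url = f"{TAUTULLI_URL}?apikey={API_KEY}&cmd=get_activity"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)["response"]["data"]
    # One line-protocol record; the trailing "i" marks an integer field
    print(f"plex_activity stream_count={int(data['stream_count'])}i")


if __name__ == "__main__":
    main()
```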