openHAB MCP Server

I added that as an env var to the docker run command and there's no change. This is a strange one: given it's Docker, it should behave the same for me as for you and others.

True… anything new in the logs?

Wait a minute: could you try again, specifying TRANSPORT as sse or streamable-http explicitly (see the explanation above)? I think the default is stdio, which I haven't used so far, so stdio might be listening on a different port.

EDIT: I double-checked, and the port should always be 8000. You can override this via the FASTMCP_PORT variable.
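Putting those pieces together, a minimal docker run sketch with the transport and port set explicitly might look like this. This is an illustration only: the image name, openHAB host, and token are placeholders, and FASTMCP_PORT is shown even though 8000 is already the default.

```shell
# Sketch only: image name, host, and token are placeholders.
# FASTMCP_PORT defaults to 8000; set it only if you need a different port.
docker run --rm \
  -p 8000:8000 \
  -e OPENHAB_URL=http://<OPENHAB_HOST>:8080 \
  -e OPENHAB_API_TOKEN=<TOKEN> \
  -e FASTMCP_HOST=0.0.0.0 \
  -e FASTMCP_PORT=8000 \
  -e OPENHAB_MCP_TRANSPORT=streamable-http \
  openhab-mcp
```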

Hey guys, what you're doing is amazing.

Can you tell us what hardware you are running all your servers on, since I'm pretty sure my RPi5 can't handle it.

Actually, the MCP server part will run on any hardware. The LLM is a different story, of course, but with a decent PC you should be fine, I guess. I actually use cloud-based LLMs right now because I'm mostly working on my notebook, where it doesn't make a lot of sense to run an LLM locally. My server hardware is also too weak, since it doesn't have a GPU. Running the whole stack locally would be nice, but the OpenAI API is extremely cheap if you only use this every once in a while. So I think from an economical point of view this is the better deal, though of course it comes at the cost of giving away your data…

OK… Some tweaking got things going. And re-reading this thread helped quite a lot too. I haven’t used too many MCPs locally yet, so this was an interesting one.

I got things going by manually starting the Docker container, making sure the ports and the transport setting were updated. This docker-compose was the result:

services:
  openhab-mcp:
    build: .
    ports:
      - "8081:8000"  # host port changed to 8081 to avoid a conflict with openHAB
    environment:
      - OPENHAB_URL=http://<OPENHAB_HOST>:8080
      - OPENHAB_API_TOKEN=<TOKEN>
      - FASTMCP_HOST=0.0.0.0
      - OPENHAB_MCP_TRANSPORT=streamable-http

The key points for me above were:

  • The container port stays 8000; only the host port changes
  • FASTMCP_HOST is important
  • I was getting the OPENHAB_MCP_TRANSPORT variable wrong and using just TRANSPORT. Your post above had it correctly on the command line, but I hadn't scrolled across; I had only read the env vars you shared. Once I read the code, this became obvious.

This had it running in VS Code, using the following in settings.json, but it relies on manually launching the container. Thanks for sharing your Windsurf config above for this:

            "openhab": {
                "command": "npx",
                "args": [
                    "mcp-remote",
                    "http://localhost:8081/mcp",
                    "--allow-http"
                ],
            },
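Before wiring up a client, a quick sanity check that the manually started container is actually serving can save some head-scratching. This is just a probe sketch, assuming the streamable-http transport and the 8081 host port from the compose file above; the exact response depends on the FastMCP version.

```shell
# Streamable HTTP speaks JSON-RPC over POST, so a plain GET will usually be
# rejected (e.g. 405/406), but any HTTP response confirms the server is up.
curl -i http://localhost:8081/mcp
```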

I was then able, after some trial and error, to configure VS Code to launch the Docker container and access/use the tools by adding this to settings.json (and removing the npx config above):


          "openhab": {
              "type": "stdio",
              "command": "docker",
              "args": [
                  "run",
                  "-i", // IMPORTANT: keeps stdin open for the stdio transport
                  "--rm",
                  "-p",
                  "8081:8000",
                  "-e",
                  "OPENHAB_URL=http://<URL>:8080",
                  "-e",
                  "OPENHAB_API_TOKEN=<TOKEN>",
                  "-e",
                  "FASTMCP_HOST=0.0.0.0",
                  "--name",
                  "openhab-mcp",
                  "openhab-mcp",
              ]
          }

The critical change here was the -i flag, which keeps stdin open; without it, the container exits immediately, since the stdio transport talks over stdin/stdout. Tbh, I'm not sure the port mapping matters in this context either, since stdio doesn't use the network, but I haven't tried removing it.

Using the MCP server from VS Code works fairly well with GitHub Copilot's Agent mode. It's less successful in Copilot's Ask or Edit modes.

LMK if you want me to PR any of this back to your repo to make it easier on newbies like me :slight_smile:

Maybe I should have mentioned that I haven't made any changes to the README yet. It's still the same as in the repo I forked from, and even there the documentation was not entirely correct. You're most welcome to contribute, of course; this would definitely make life easier for beginners.

However, it might make sense to get in touch with the original author first. @tom, I think you are the author of the original repo? Thank you for your work! Are you interested in continuing the development of your MCP implementation?

@TiQ: The MCP server runs on the Raspberry Pi 4 where my openHAB 4.5 is running.
Local LLM, e.g. qwen3:14b with Ollama on my PC (12 GB of graphics card memory), and oterm as the interface.
Very good intro to the general MCP topic here

Maybe PR your changes back to his repo? He can choose to accept or not at that stage.

Yes, I will do that. I started with some major cleanup and further improvements here and there, as well as adding test cases for a more reliable testing approach, so I will still be busy for a while. In the meantime I can of course also integrate other contributions, so feel free to create a PR any time :blush:

I pushed a lot of changes to the server today. Many of them are stability improvements, but I also added some new features, such as inbox handling and better tag handling.

I also added a tool that lets you retrieve (or add) execution plans for more complex tasks; however, my results so far are unfortunately not very good… The idea is that you can prepare a step-by-step guide for multi-step tasks in natural language or pseudo-code, and the LLM can use this as a plan to perform the steps. Most of the time, though, the LLMs I used didn't want to follow the plan but made up their own mind and tried all sorts of crazy things to "solve" the task. I did get one very good run after I explained to the LLM in detail that I wanted it to respect the order of the steps, respect the error handling, and ask for user confirmation where required. I would be very interested to hear about your experiences with this feature.

As I said, you can also create your own plans (and you can let the LLM create these with another MCP tool). If you run this on Docker, make sure to use a bind mount for the /app/process_templates/override folder so that you do not lose your plans on an upgrade. I provided a basic docker-compose file in the repo that has this already configured. If you change one of the default plans, it will also be saved there and override the default plan until it is deleted again. Note that manually deleting a template from the file system will not clear the search cache, so you'd better let the LLM delete these for you.
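To illustrate the bind mount mentioned above, a compose fragment along these lines should keep the plans across upgrades. This is a sketch based on the folder path from this post; the docker-compose file in the repo is the authoritative version.

```yaml
services:
  openhab-mcp:
    # image, ports, and environment as shown earlier in the thread
    volumes:
      # persist custom and overridden execution plans across container upgrades
      - ./process_templates/override:/app/process_templates/override
```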

Hi,

It's a great idea and a nicely compact implementation. I wanted to use it with n8n, so I had to make a few changes: I updated to the latest FastMCP packages and added support for the HTTP and SSE transports. Now openHAB and n8n are working well together using an MCP Client node.

I pushed a commit to your repo, take a look.

For me it works great on my Synology RackStation as a Docker image.

Best

Markus

Hey, thank you for your feedback. I'm also using it in n8n, so I have no idea what these changes might be, but I will have a look :eyes: :+1: (all transport types are already supported)

As I don't see a pull request, you have probably used the upstream project. Mine is located here:

Ahh, I used the one at the top of this topic…

I will create one for your project.