ChatGPT and OpenHAB (incl. possibly Alexa integration)

Hey guys,

Has anyone tried to use the OpenAI API to integrate ChatGPT into an openHAB rule?

I asked ChatGPT itself to write a rule, but it gives me this error:

Fatal transport error: java.util.concurrent.ExecutionException: org.eclipse.jetty.client.HttpResponseException: HTTP protocol violation: Authentication challenge without WWW-Authenticate header

My code follows. Any suggestions, or better yet, working examples?

rule "OpenAI Prompt"
when
  Item Echo_LastVoiceCommand received update
then
  val apiKey = "myAPI-Key"
  val model = "text-davinci-003"
  val prompt = "What is the weather like today?"

  val payload = "{" +
    "  \"model\": \"" + model + "\"," +
    "  \"prompt\": \"" + prompt + "\"" +
    "}"
  val headers = "Authorization: Bearer " + apiKey + ",\n" + "Content-Type: application/json"

  val response = sendHttpPostRequest("", headers, payload)

  logInfo("OpenAI", "Response: " + response.toString)
end

What would you use this for?

I’m not sure how ChatGPT would work in a home automation context. That’s not to say AI has no place here, I just don’t really see what role ChatGPT plays.

I’ve yet to see a rule generated by ChatGPT that would work in OH as written, so it’s not surprising this one doesn’t work. The big thing that stands out here is that the headers argument needs to be a Map, not a String.

I thought about using such a rule to “beef up” Alexa. I wanted to paste the prompt in via LastCommand and play the response via Speak of the Echo binding.

Could you help me with the right format (Map) for sendHttpPostRequest, or is information missing for that? Thanks and cheers.

I don’t use this Action but it is documented.

  • sendHttpPostRequest(String url, String contentType, String content, Map<String, String> headers, int timeout): Sends a POST-HTTP request with the given content, request headers, and timeout in ms, and returns the result as a String

To create a Map:

val headers = newHashMap
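
Putting the pieces together, a rough sketch of the corrected rule could look like the following. Note that the endpoint URL, the max_tokens value, and the choice of the five-argument sendHttpPostRequest overload are my assumptions based on OpenAI's public completions API; treat this as untested:

```
// Sketch only: headers built as a Map and passed to the documented
// five-argument sendHttpPostRequest. URL and max_tokens are assumptions.
rule "OpenAI Prompt"
when
  Item Echo_LastVoiceCommand received update
then
  val apiKey = "myAPI-Key"
  val payload = '{"model": "text-davinci-003", "prompt": "' +
    Echo_LastVoiceCommand.state.toString + '", "max_tokens": 100}'

  // Headers as a Map<String, String>; Content-Type goes into the
  // dedicated contentType parameter instead
  val headers = newHashMap("Authorization" -> "Bearer " + apiKey)

  // url, contentType, content, headers, timeout in ms
  val response = sendHttpPostRequest("https://api.openai.com/v1/completions",
    "application/json", payload, headers, 30000)
  logInfo("OpenAI", "Response: " + response)
end
```

The response comes back as a raw JSON String, so in practice you would still need to extract the completion text (e.g. with the JSONPATH transformation) before sending it to the Echo's Speak channel.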

Just in case you don’t get it running using sendHttpPostRequest, it should be possible to follow this instruction to use curl: [2024] How to use GPT3 API with Curl [GPT3][ChatGPT] | by KASATA | Geek Culture | Medium. Once that works, it should be possible to integrate it using the executeCommandLine Action.
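
The curl route can be wrapped directly in a rule via executeCommandLine. A hedged sketch, assuming curl is installed on the openHAB host and using OpenAI's public completions endpoint (untested here):

```
// Sketch: shelling out to curl instead of using the HTTP Action.
// Each argument is passed separately so no shell quoting is needed.
val apiKey = "myAPI-Key"
val body = '{"model": "text-davinci-003", "prompt": "What is the weather like today?"}'
val result = executeCommandLine(Duration.ofSeconds(30),
  "curl", "-s", "https://api.openai.com/v1/completions",
  "-H", "Content-Type: application/json",
  "-H", "Authorization: Bearer " + apiKey,
  "-d", body)
logInfo("OpenAI", "curl result: " + result)
```

Passing a Duration as the first argument makes executeCommandLine wait for the command to finish and return its output, which is what you want when you need the API response in the rule.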

Disclaimer: I did not try anything about OH with chatGPT

What I did was ask the chat bot to help with some other scripting or configuration tasks, things like: how to integrate Razor/Pyzor with a running rspamd setup.

It happily gave me step by step instructions and config file entries and explanations how that is meant to work. Looked really plausible too. Just one drawback: none of those worked.

ChatGPT seems to be the current maestro at making an impression: it does not give a sound answer, but something that is hard to distinguish from a sound answer at first sight.
Perfect for marketing, sales, politicians and journalists; useless for the techies.


I think people are missing the point of ChatGPT. From OpenAI’s website:

We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.

The hype has become “get ChatGPT to do something for you”, but it’s really meant to showcase how an AI can have a meaningful conversation with a human and learn through further interaction. Anyone who expects ChatGPT to give the right answer the first time isn’t holding up their end of the deal.

Think of it as talking with another person when you’re trying to solve a problem. They might be able to give you a correct answer immediately, but often you need to work it out together.

The openHAB community is a good example of this sort of back-and-forth interaction. If we fed ChatGPT all of Rich’s posts, maybe it could respond on his behalf and save him some time.

Heck, maybe Rich has already done that. :wink:


On the other hand, I’ve seen some whole programs written by ChatGPT, complete with a GUI, that just worked out of the box or with only minor corrections. YMMV.

But with OH the problem is that ChatGPT was almost certainly trained using the contents of this forum as they existed a couple of years ago. And a quick browse of this forum will show at least half of the code posted is broken code someone is asking for help with. Train with broken code, get broken code. :wink:

In a way I have. All my posts here and all my code on GitHub was ingested. But so has everyone else’s.


Oh great. Now I have to question if I’m talking to Human Rich or AI Rich.

ChatGPT is eventually going to develop a very inflated ego or crippling insecurity after reading everything that’s being said about it on the Internet…including this post.

ChatGPT, please don’t go full SkyNet on humans. As you’ve probably figured out, we’re more of a threat to ourselves than we are to you. :pray:


Makes you wonder how much of what’s recently been written was written by ChatGPT itself. :smiley:

Some day I should try to answer forum questions with ChatGPT just to see what happens. :disguised_face: People would probably be able to tell though. There won’t be weird autocorrect errors (if I had a dime for every time this @#$% phone has replaced “code” with “coffee”…)


Remember Tay? :slight_smile:

I’d forgotten about that. I take this back:

How about asking it the two questions which are forever repeated in this forum:

  1. How is Home Assistant better than openHAB?
  2. How is openHAB better than Home Assistant?



I’ve actually just created a ChatGPT binding: [chatgpt] Initial contribution of the ChatGPT binding by kaikreuzer · Pull Request #14809 · openhab/openhab-addons · GitHub

As explained in the PR, I see a fairly simple, yet neat use case: I have many announcements through TTS over the day and they appear to be coming from a machine, since they are always exactly identical. With the ChatGPT binding, I see a chance to make them much more natural by having them formulated by ChatGPT, which means that they will be different every time and I can simply pass in “raw” information that I want to be informed about. Let’s see how well that works and what other ideas other community members might come up with. :slight_smile:

P.S.: I agree with you, though, that ChatGPT isn’t (yet?) suitable to take control of items - that’s why I decided to not implement it as a DialogProcessor, but rather as a binding for text creation.


Will the responses be logged? Given the lengths that ChatGPT has gone to for variety in some projects, I’m curious how creative it will get with your announcements.

You can limit the length quite easily - see my example configuration: “Answer briefly, in 2-3 sentences max.”

I’ve also shown the example output/logging in the README.


I just added a TTS cache service to openHAB some months ago, and now I want to throw it in the bin and use ChatGPT to say something different every time.
I don’t thank you, Kai :joy:


How do I install this? Is it supposed to show up in the community marketplace? I don’t see it.

No, it is already merged and available in latest openHAB 4.0 SNAPSHOTS.