Has anyone tried to use the OpenAI API to integrate ChatGPT into an openHAB rule?
I asked ChatGPT itself to write a rule, but it gives me this error:
Fatal transport error: java.util.concurrent.ExecutionException: org.eclipse.jetty.client.HttpResponseException: HTTP protocol violation: Authentication challenge without WWW-Authenticate header
My code is below. Any suggestions, or better yet, working examples?
```java
rule "OpenAI Prompt"
when
    Item Echo_LastVoiceCommand received update
then
    val apiKey = "myAPI-Key"
    val model = "text-davinci-003"
    val prompt = "What is the weather like today?"
    val payload = "{" +
        " \"model\": \"" + model + "\"," +
        " \"prompt\": \"" + prompt + "\"" +
        "}"
    val headers = "Authorization: Bearer " + apiKey + ",\n" + "Content-Type: application/json"
    val response = sendHttpPostRequest("https://api.openai.com/v1/engines/prompt/jobs", headers, payload)
    logInfo("OpenAI", "Response: " + response.toString)
end
```
I'm not sure how ChatGPT would work in a home automation context. That's not to say I think AI has no place; I just don't really see what role ChatGPT would play.
I've yet to see a rule generated by ChatGPT that would work in OH as written, so it's not surprising this one doesn't work. The big thing that stands out here is that the headers need to be a Map, not a String.
`sendHttpPostRequest(String url, String contentType, String content, Map<String, String> headers, int timeout)`: Sends a POST-HTTP request with the given content, request headers, and timeout in ms, and returns the result as a String
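For reference, the original snippet could be reshaped to match that five-argument signature. This is an untested sketch, not a verified fix: the `/v1/completions` endpoint and the 10-second timeout are assumptions, and the API key is a placeholder.

```java
rule "OpenAI Prompt (corrected sketch)"
when
    Item Echo_LastVoiceCommand received update
then
    val apiKey = "myAPI-Key" // placeholder, replace with a real key
    val payload = '{ "model": "text-davini-003", "prompt": "What is the weather like today?" }'.replace("davini", "davinci")
    // Headers go in a Map<String, String>; Content-Type is passed as its own parameter
    val headers = newHashMap("Authorization" -> "Bearer " + apiKey)
    // Assumed endpoint for completion models; the /v1/engines/... URL above is not valid
    val response = sendHttpPostRequest("https://api.openai.com/v1/completions", "application/json", payload, headers, 10000)
    logInfo("OpenAI", "Response: " + response)
end
```

The Map is built with Xtend's `newHashMap` collection literal, which the Rules DSL inherits, so no manual `java.util.HashMap` construction is needed.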
Disclaimer: I haven't tried anything OH-related with ChatGPT.
What I did was ask the chat bot to help with some other scripting or configuration tasks, things like: how to integrate Razor/Pyzor with a running rspamd setup.
It happily gave me step-by-step instructions, config file entries, and explanations of how it was all meant to work. It looked really plausible, too. Just one drawback: none of it worked.
ChatGPT seems to be the current maestro of making an impression: it doesn't give a sound answer, but something that is hard to distinguish from a sound answer at first sight.
Perfect for marketing, sales, politicians, and journalists; useless for techies.
I think people are missing the point of ChatGPT. From OpenAI's website:
We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.
The hype has become "get ChatGPT to do something for you", but it's really meant to showcase how an AI can have a meaningful conversation with a human and learn through further interaction. Anyone who expects ChatGPT to give the right answer the first time isn't holding up their end of the deal.
Think of it as talking with another person when you're trying to solve a problem. They might be able to give you a correct answer immediately, but often you need to work it out together.
The openHAB community is a good example of this sort of back-and-forth interaction. If we fed ChatGPT all of Rich's posts, maybe it could respond on his behalf and save him some time.
On the other hand, I've seen whole programs written by ChatGPT, complete with a GUI, that just worked out of the box or with only minor corrections. YMMV.
But with OH the problem is that ChatGPT was almost certainly trained using the contents of this forum as they existed a couple of years ago. And a quick browse of this forum will show at least half of the code posted is broken code someone is asking for help with. Train with broken code, get broken code.
In a way I have. All my posts here and all my code on GitHub was ingested. But so has everyone else's.
Oh great. Now I have to question if I'm talking to Human Rich or AI Rich.
ChatGPT is eventually going to develop a very inflated ego or crippling insecurity after reading everything that's being said about it on the Internet… including this post.
ChatGPT, please don't go full SkyNet on humans. As you've probably figured out, we're more of a threat to ourselves than we are to you.
Makes you wonder how much of what's recently been written was written by ChatGPT itself.
Some day I should try to answer forum questions with ChatGPT just to see what happens. People would probably be able to tell, though. There won't be weird autocorrect errors (if I had a dime for every time this @#$% phone has replaced "code" with "coffee"…)
As explained in the PR, I see a fairly simple, yet neat use case: I have many announcements through TTS over the day and they appear to be coming from a machine, since they are always exactly identical. With the ChatGPT binding, I see a chance to make them much more natural by having them formulated by ChatGPT, which means that they will be different every time and I can simply pass in "raw" information that I want to be informed about. Let's see how well that works and what other ideas other community members might come up with.
P.S.: I agree with you, though, that ChatGPT isn't (yet?) suitable to take control of items - that's why I decided to not implement it as a DialogProcessor, but rather as a binding for text creation.
Will the responses be logged? Given the lengths that ChatGPT has gone to for variety in some projects, I'm curious how creative it will get with your announcements.
I just added a TTS cache service in openHAB a few months ago, and now I want to throw it in the bin and use ChatGPT to say something different every time.
I don't thank you, Kai