There’s a big difference between code “working” and code working correctly and optimally. If the code is just “an enigma” that you can’t really penetrate, I can see how it would be hard to tell the difference, but for anybody who can actually read what the code does, it doesn’t take much complexity before it all comes crashing down.
Some factors also decide how much “success” you will experience. If you typically just ask it to “do” simple things that have lots and lots of examples online, your chances are somewhat better. But as soon as you ask it to do anything even slightly out of the ordinary, I’m sure you’ll see a very different “success rate”.
Another thing is that writing prompts has some pretty significant limitations of its own. If I were to describe a piece of code in English instead of just writing the code, I would often spend far more time and effort than I would by just writing the code. It doesn’t take much complexity before your “prompt” becomes a small book.
The whole thing actually turns everything on its head. We developed “programming languages” as an interface between man and machine; the goal has been to find a way to accurately describe what we want the machine to do as efficiently and effortlessly as possible. That is what we call “computer code”. The idea that “plain English” (or any other “natural” language) is somehow an easier way to tell the machine what to do is… bizarre. You take a tool that is very unsuitable for the task, use it “as best you can”, and hope that this “AI” will magically “make the right guesses” on all the details that can’t easily be pinned down in “natural” language.
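To make that concrete, here is a deliberately trivial, made-up example (the function and field names are hypothetical, not from any particular codebase):

    # Three lines of Python that say exactly what should happen.
    def top_scores(results, n=10):
        # Keep results scoring at least 0.5, sort by score descending, return the first n.
        passing = [r for r in results if r["score"] >= 0.5]
        return sorted(passing, key=lambda r: r["score"], reverse=True)[:n]

The English “prompt” for the same thing (“take a list of results, drop anything scoring below 0.5, sort the rest from highest to lowest, give me the first ten, and don’t modify the original list…”) is already longer, and it still hasn’t said what happens with ties, missing fields, or fewer than ten results. The code leaves none of that open to guessing.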
It is in many ways as if you asked mathematicians to throw all their symbols and rules for expressing math out the window and instead “describe” the math in plain English. It would be a very effective way of setting them back a few thousand years.
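A small illustration of the point, using a formula everyone knows:

    \sum_{k=1}^{n} k = \frac{n(n+1)}{2}

In English that becomes something like “the sum of the whole numbers from one up to n equals n times n plus one, divided by two”, and even that short sentence is ambiguous: is it n(n+1)/2 or (n·n+1)/2? The notation settles it in one line; the prose takes more words and still leaves room for misreading.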
There are two sides to that coin. If people stop asking questions that humans answer, the “AI” will run out of “knowledge” to draw on and will increasingly have to serve up a derivative of its own babble. It’s easy to see that the quality will drop rapidly if humans stop asking and answering questions “manually”.
I see LLMs as nothing more than “advanced search tools”. They can save you a lot of time finding the information you’re after, no doubt, but they can also mislead you wildly; so you can’t really use them in an area where you don’t have a solid foundation of knowledge, because how would you then tell facts from fiction?