I'd love if the ambiguities could be a dialogue of question/answer, rather than being fully specified ahead of time like we generally have programming today. It seems much more efficient.
Have you not spent much time working with ChatGPT? Or maybe you haven't upgraded to plus so you get GPT4? It's so fucking good. It's a bit like pair programming. Even though it can't always give you the result you want, it does so an appreciable percentage of the time and it's a fabulous way to think through problems, especially as a complement to the "google -> stackoverflow -> copy -> paste" style programming when you're trying things out or unsure of how to do something.
Do people actually do that much copy paste programming? People I work with are more inclined to read the docs and lean on intellisense. The people I've seen cling to chatgpt have been spending a lot of time forgetting LSPs exist and wondering why some hallucinated method doesn't exist... They also tend to think really long 1 liners are good code over explicit easy to read 3 liners...
Yea but when you really look at the numbers, most people are copy pasting git commands, npm package manager commands, warning suppression syntax, etc. That's not really programming; it's a symptom that the tools people are using stink, or that they don't use those features very often. Not to say there aren't millions of other copy-pasted snippets, but most of the remaining ones seem to be about data science, which is again a good hint that an API is complicated, and the fallout there isn't too bad because data scientists check their manipulations as they work.
Yep! And I'd bet that's a huge part of why it took off. An equally capable model with a "write a better prompt and try again" UX wouldn't be nearly as useful.
The ambiguity is exponential. I wish the people hyping LLMs would read the older literature on ambiguity and sentence parsing.
The only reason people are so impressed is that ChatGPT sometimes gives better results than Google. Which just ought to tell you how bad Google has gotten.
Well, programmers provide a natural language interface and somehow we usually manage the ambiguity and complexity OK.
In my experience, a lot of support requests for bespoke/in-house software go like this:
> User: Why is my wibble being quarked? This shouldn’t be happening!
> Dev: Wibble ID, please?
> User: ID 234567. This is terrible!
> Dev: [rummages in git blame] Well, this wibble is frobnicated, and three years ago [links to Slack thread] you said that all frobnicated wibbles should be automatically quarked.
> User: Yes, but that was before we automated the Acme account. We never frobnicate their wibbles!
> Dev: ...so, is there a way for me to tell if a client wants their wibbles unfrobnicated, or should I hard-code an exception for Acme?
(And then, six months later: “Why are none of Acme’s wibbles being frobnicated automatically?”)
If you could introduce an AI assistant that could answer these questions instantly (instead of starting with a support ticket), it’d cut the feedback loop from hours or days down to seconds, and the users (who are generally pretty smart in their field, even if my frustration is showing above) would have a much better resource for understanding the black box they’ve been given and why it works the way it does.
> If you could introduce an AI assistant that could answer these questions instantly
If you have some change documentation so good that you are able to answer that kind of question for things that a previous developer changed, you may have a chance of making the computer answer it.
Personally, I have never seen the first part done.
Yes, obviously the computer can’t find answers that have been lost to the mists of time; pointing to a specific discussion is a best-case scenario, and relies on a good commit history.
But even just providing a brief explanation of the current code would be a great help (even if it gets confused and gives bad answers occasionally; so do I sometimes!); and even when the history is vague you can usually pull useful information like “this was last changed eight years ago, here’s a ticket number” or “it’s worked like this since the feature was added, I have no idea what they were thinking at the time” or “the change that caused this to start happening claims to be a refactor, but seems to have accidentally inverted a condition in the process”.
And in a magical world where the AI is handling this entire conversation automatically, it would naturally write a good commit message for itself, quoting the discussion with the relevant user, so it has something to point to when the topic comes up again. (And it’d be in all the Slack channels, so when someone mentions in #sales-na-east that Acme has asked about quarking services, it can drop into the conversation and point out that the Wibble Manager might need changing before that turns into an urgent change request because we’ve accidentally sent them a batch of unquarked wibbles. Well, one can dream, anyway.)
Oops, only now do I realize that should have been “we never quark their wibbles” and “a client wants their wibbles unquarked.” (Hopefully doesn’t make a difference to comprehension since they’re nonsense words anyway, but there you go.)
But that's exactly the point. The game of 20 questions is exponential as well. To uniquely identify a thing up front, the level of precision needed to be unambiguous blows up as you get more specific. As a dialogue, however, you don't have to fully spec out every branch of the tree ahead of time: each yes-or-no question halves the remaining possibilities, so the ambiguity decreases exponentially with every question asked.
By having a dialogue, you can resolve only the ambiguities pertinent to the specific question at hand.
There's no need to detail how individual bricks of a house will be laid out when discussing the overall plan of it. Current LLMs, from my experience, don't branch out too well when facing ambiguity, but rather pick the most likely answer consistent with the history. But it's imaginable that these concerns will be addressed once systems start maximizing the returns over whole conversations and not just individual interactions.
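A toy sketch of the arithmetic behind the 20-questions point above (the function name is just illustrative): each yes/no answer halves the candidate set, so the number of questions needed only grows with the log of the number of possibilities, while enumerating every branch up front grows with the full exponential.

```python
import math

def questions_needed(n_items: int) -> int:
    # Each yes/no answer halves the candidate set, so distinguishing
    # n_items possibilities takes ceil(log2(n_items)) questions.
    return math.ceil(math.log2(n_items))

# 20 interactive questions suffice to pin down one of a million things...
print(questions_needed(1_000_000))  # 20

# ...whereas fully specifying the decision tree ahead of time would
# mean spelling out all 2**20 branches.
print(2 ** 20)  # 1048576
```

That asymmetry is why a dialogue only has to resolve the ambiguities actually on the path taken, not every branch that might have come up.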
Google search has eroded a lot over the years. I think some of that is scale, but also adversarial SEO listings, etc. The biggest jump in its degradation I've noticed happened around the time ChatGPT launched after its beta. Now every major search engine has an LLM product... Has had me wondering...
Yeah. I actually mis-parsed the headline myself, before noting the date. I was thinking "is going to work" meant it would be happening in more and more workplaces.