Hacker News

Fair enough - that assumes steady state, but the acceleration of that curve is what I'm most curious about.

The point I was alluding to above was that the prompts themselves will be recursively mined over time. Eventually, except for truly novel problems, the AI's interpretation of a prompt will converge on "that's what I wanted".

Some things to think about: what happens when an entire company's Slack history is mined in this fashion? Or its email history? Or its Git commit history, with corresponding links to Jira tickets? Or the corporate wiki? There are, I'd guess, hundreds of thousands to millions of project charter documents to be mined, all locked behind an "intranet" - but at some point, businesses will be motivated to at least explore the "what if" implications.

Given enough data to feed upon, and some additional code/logic/extensions to the current state of the art, I think every knowledge worker should consider the impact of this technology.

I'm not advocating for it (to be honest, it scares the hell out of me) - but this is where I see the overall trend heading.



This is the doomsday scenario again, though.

In a world where we have the technology to go from two lines of prompt in a textbox to a complete app, no questions asked, that same technology can run the entire company. It's hard to believe transformer models are capable of this, given we are already starting to see diminishing returns - but if that's what you believe they are, then you believe they can effectively do anything. It's the old concept of AI-completeness.

If you need to formally specify behavior, at any point in the pipeline, then we're back to square one: you just invented a programming language, and a very bad one at that.

This remains true for any version of a language model, even a hypothetical future LLM that has "solved" natural language. Given the chance, I would rather write formal language than natural language.


> If you need to formally specify behavior, at any point in the pipeline, then we're back to square one: you just invented a programming language, and a very bad one at that.

But what if the "programming language" is not a general-purpose language, but a context/business domain specific language? One that is trained on the core business at hand? What if that "language" had access to all the same vocabulary, project history (both successful and unsuccessful), industry regulations, code bases from previous (perhaps similar) solutions, QC reports, etc.? What if the "business savvy" consumer of this AI can phrase things succinctly in a fashion that the AI can translate into working code?
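To make the idea concrete, here's a toy sketch of what such a constrained, domain-vocabulary "language" might look like: a handful of sentence templates that map directly onto business records, rather than a general-purpose programming language. Everything here (the rule syntax, `parse_rule`, `apply_rules`, the field names) is a hypothetical illustration, not an existing system.

```python
import re

# One sentence template in the toy rule language, e.g.:
#   "if total is over 500 then flag as review"
RULE_PATTERN = re.compile(
    r"if (\w+) (is over|is under) (\d+) then flag as (\w+)"
)

def parse_rule(text):
    """Translate one rule sentence into a predicate that labels a record."""
    m = RULE_PATTERN.fullmatch(text.strip().lower())
    if not m:
        raise ValueError(f"not a valid rule: {text!r}")
    field, op, threshold_str, label = m.groups()
    threshold = int(threshold_str)
    if op == "is over":
        return lambda record: label if record[field] > threshold else None
    return lambda record: label if record[field] < threshold else None

def apply_rules(rules, record):
    """Return every label whose rule matches the given record."""
    checks = [parse_rule(r) for r in rules]
    return [label for check in checks if (label := check(record)) is not None]

rules = [
    "if total is over 500 then flag as review",
    "if quantity is under 2 then flag as small",
]
print(apply_rules(rules, {"total": 800, "quantity": 1}))  # ['review', 'small']
```

The point of the sketch is that the vocabulary is closed and domain-bound, so a "business savvy" author can write it without formal training - but note it is still a formal language, which is exactly the tension the parent comment raises.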

I don't see it as a stretch "down the road." Is it possible today? Probably not. Is it possible in 5-10 years' time? I definitely think so.



