Hacker News

Makes you wonder if an LLM wouldn't be the perfect air traffic controller.

Only one controller on duty? How about 100 agents continuously monitoring?



This kind of thinking is why AI hype is dangerous.


Thinking is never bad; it's what happens afterwards that's important.


Certain types of thoughts are definitely "bad" regardless of what comes after.


I'm not sure I'd agree. "Intrusive thoughts" can happen to anyone, for example, and they don't mean you're a bad person, because you literally cannot control them. But what you can control is acting on/listening to those thoughts or not, which for me is the important part.


> Intrusive thoughts

An intrusive thought is not necessarily a bad one. Bad thoughts are those that are counterproductive and should be avoided, e.g. self-loathing, hatred, paranoia, and many more.

> it doesn't mean you're a bad person

Nobody suggested that.

> you can control, is acting/

One's thoughts can also be controlled to a large degree. This is why people practice meditation, affirmations, cultivation of wisdom, gratitude, etc.


> Makes you wonder if an LLM wouldn't be the perfect air traffic controller.

It really doesn't.


What happens when a situation that the agent hasn’t been trained on occurs?

Is it going to have the critical-thinking skills to understand the situation and make the right decision, or is it going to hallucinate some impossible answer and get people killed?

I’m not saying human controllers don’t make mistakes, but this should be one of the last areas to fully automate.


< ai cant do math

< Let it direct airplanes


Makes you wonder if LLMs are in the comments section trying to keep the hype going



