Hacker News | andrewstuart's comments

Doesn’t seem anywhere near enough.

All future and present conflict is fundamentally based around drones.


I'm not sure how true that is. Sure, it's what we're seeing in Ukraine right now with both sides using them a lot, but my understanding is that has to do with the fact that neither side is able to get air superiority with conventional aircraft. The same reason Iran is using a lot of drones now. It doesn't seem like the US would be in a conflict where they don't have air superiority.

Now I would agree that the US military can still find uses for drones, and that many of the people it fights will have a large usage of drones, but I don't think it's fair to say all conflict will be based around them.


> The same reason Iran is using a lot of drones now. It doesn't seem like the US would be in a conflict where they don't have air superiority.

Hmmm, this sentence appears to be a paradox? Is the US not fighting Iran right now?

Iran has a very weak air force and the US claims air superiority, yet Iran is using a lot of drones.

I think your comment proves GP's point: regardless of traditional air power, drones will feature heavily in any conflict.


Iranian drones have done nothing to prevent the US and Israel dropping gravity bombs en masse over their capital right now. JDAMs and unguided munitions are still far cheaper for the explosion size than any drone today. That's not the situation in the Ukraine war on either side.

The US has used one-way "drones" since the 80s or earlier. The entire Gulf War in the early 90s featured a ton of Tomahawk cruise missiles. The only real change is that the new Shaheds are way cheaper, slower, and smaller, but can be spammed in larger numbers.


I disagree. Iranian drones have taken out a lot of US sensing capabilities in the theater, from ground-based radars to AWACS planes, in addition to some logistical support like refueling tankers.

That has made US overflight of Iranian territory much riskier than was expected at the beginning of the conflict, and it's notable that the US has continued to rely heavily on stand-off munitions.


Iran launched around 2,000 drones over the course of the war. Ukraine uses around 200,000 per month.

https://www.kyivpost.com/post/55897

Either way the US needs way more drones instead of just expensive missiles/jets/boats/armor if they are going to face anyone serious like China.


What we're seeing in Ukraine suggests that drones cannot win the war for you, but they are essential for not losing it. And what we saw in Iran was that US air superiority is no longer a given. While the US had conventional air superiority, it was unable to neutralize the threat from Iranian drones.

A million suicide drones is far cheaper than 10,000 infantry.

Very soon, "good enough" robotic autonomous infantry will exist which will make soldiers in the 21st century look as outdated as cavalry.


You can keep looking at Iran as the example: the US is unwilling to put boots on the ground because even with air superiority, the drones are too dangerous.

Still seems to be cyber warfare and mass social engineering.

The whole selling point of drones is that they are _cheap_. Spending billions brings you back to missile territory.

> All future and present conflict is fundamentally based around drones.

...all the more reason to reduce spending on them.


Just keep bringing it back infinitely.

Opposition gets tired, money doesn’t.


And this is why we need to get rid of Citizens United and get unlimited dark money out of politics. Publicly funded elections, nobody gets more money than anyone else. Convince people with your ideas and proposed policies, not with $$$.

Sensible defaults would be nice.

They should run their own data center.

Companies should have the native capability to run their own computing, especially those whose business is pure information, like banks.


Analogies to other professions give your argument an air of legitimacy, while conferring none.

There’s plenty of people in this world who are expert programmers without following any traditional path.

“Oh yeah, like who”, you say.

Con Kolivas, an anaesthetist, worked on Linux kernel schedulers, including the Rotating Staircase Deadline (RSDL) scheduler, a precursor to the Completely Fair Scheduler, as well as the Brain Fuck Scheduler and the -ck patchset.


I didn’t say you cannot learn by yourself; my claim is that you cannot learn without doing. Was that really unclear?

Evidence?

This is the 21st century.


Avoid people who seek to be offended.

They always find a way to get what they seek.


I found it unusable due to out-of-memory errors with a billion-row, 8-column dataset.

It needs manual tuning to avoid those errors and I couldn’t find the right incantation, nor should I need to - memory management is the job of the db, not me. Far too flaky for any production usage.


That sounds like a rather serious application. Did you file an issue?

No, I tried Clickhouse instead, which worked without crashing or manual memory tuning.

Search the issues on the DuckDB GitHub: there are at least 110 open and closed OOM (out of memory) issues, and maybe 400 to 500 that reference “memory”.


> Search the issues on the DuckDB GitHub: there are at least 110 open and closed OOM (out of memory) issues, and maybe 400 to 500 that reference “memory”.

Ah, missed this the first time around. Will check this out. And yes, I noticed that DuckDB rather aggressively tries to use the resources of your computer.
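For what it's worth, DuckDB does expose settings for this (documented DuckDB pragmas; the values below are purely illustrative, and whether they would have rescued that particular workload is another question):

```sql
-- Illustrative DuckDB settings; names are real pragmas, values are made up.
SET memory_limit = '8GB';           -- cap the buffer manager's RAM usage
SET temp_directory = '/tmp/duck';   -- let larger-than-memory operators spill to disk
SET threads = 4;                    -- fewer threads also means fewer per-thread buffers
```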


Understood: SQLite is to Postgres as DuckDB is to ClickHouse.

I don’t see the analogy, if you’re using it to excuse crashing on small data sets and indexes.

SQLite isn’t small and crashy, it’s small and reliable.

There’s something fundamentally wrong with the codebase/architecture if there are so many memory problems.

And the absolute baseline requirement for a production database is no crashes.


I think the authors disagree with me, but I see it like an online analytical processing (OLAP) database, not an online transaction processing (OLTP) database, so crashes are more tolerable.

Agree with your assessment of small and reliable for SQLite. Disagree with your baseline requirement. ACID is more important for me and does not contain `No crashes`.

I filed many issues. They were autoclosed after 3 months of inactivity.

Yeah somewhere deep in Facebook they’ve put a black mark against my profile “filthy 300 CD player buyer, keep an eye on him”.

I like async and await.

I understand that some devs don’t want to learn async programming. It’s unintuitive and hard to learn.

On the other hand I feel like saying “go bloody learn async, it’s awesome and massively rewarding”.


Intuition is relative: when I first encountered unix-style synchronous, threaded IO, I found it awkward and difficult to reason about. I had grown up on the callback-driven classic Mac OS, where you never waited on the results of an IO call because that would freeze the UI; the asynchronous model felt like the normal and straightforward one.

> It’s unintuitive and hard to learn.

Funny, because it was supposed to be more intuitive than handling concurrency manually.


It is a tool. Some tools make you more productive after you have learned how to use them.

I find it interesting how in software, I repeatedly hear people saying "I should not have to learn, it should all be intuitive". In every other field, it is a given that experts are experts because they learned first.


> I find it interesting how in software, I repeatedly hear people saying "I should not have to learn, it should all be intuitive". In every other field, it is a given that experts are experts because they learned first.

Other fields don't have the same ability to produce unlimited incidental complexity, and therefore not the same need to rein it in. But I don't think there's any field which (as a whole) doesn't value simplicity.


I feel like it's missing my point. Using a chainsaw is harder than using a manual saw, but if you need to cut many trees it's a lot more efficient to first learn how to use the chainsaw.

Now if you take the chainsaw without spending a second thinking about learning to use it, and start using it like a manual saw... no doubt you will find it worse, but that's the wrong way to approach a chainsaw.

And I am not saying that async is "strictly better" than all the alternatives (in many situations the chainsaw is inferior to alternatives). I am saying that it is a tool. In some situations, I find it easier to express what I want with async. In others, I find alternatives better. At the end of the day, I am the professional choosing which tool I use for the job.


Except you're hearing it from someone who doesn't have a problem handling state machines and epoll and manual thread management.

Right but how do you expose your state machine and epoll logic to callers? As a blocking function? As a function that accepts continuations and runs on its own thread? Or with no interface such that anyone who wants to interoperate with you has to modify your state machine?

And that was intuitive and easy to learn?

I find state machines plus some form of message passing more intuitive than callbacks or any abstraction that is based on callbacks. Maybe I'm just weird.

When I did not know how to program, neither async nor message passing were intuitive. I had to learn, and now those are tools I can use when they make sense.

I never thought "programming languages are a failure, because they are not intuitive to people who don't know how to program".

My point being that I don't judge a tool by how intuitive it is to use when I don't know how to use it. I judge a tool by how useful it is when I know how to use it.

Obviously factoring in the time it took to learn it (if it takes 10 years to master a hammer, probably it's not a good hammer), but if you're fine with programming, state machines and message passing, I doubt that it will take you weeks to understand how async works. Took me less than a few hours to start using them productively.


It is. A lot.

But concurrency is hard, and there's only so much your syntax can do about it.


Some come to async from callbacks and others from (green)threads.

If you come from callbacks it is (almost) purely an upgrade; from threads it is more mixed.


Yeah, that's what annoys me, async comes from people who only knew about callbacks and not other forms of inter thread communication.

Not true. I’ve used both, and I often prefer the explicitness of async/await. It’s easier to reason about. The language guarantees that functions which aren’t async can’t be preempted - and that makes a lot of code much easier to write, because you don’t need mutexes, atomics and semaphores everywhere. And that in turn often dramatically improves performance.

At least in JS. I don’t find async in rust anywhere near as nice to use. But that’s a separate conversation.
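That run-to-completion guarantee isn't JS-specific, either. Here's a sketch of the same property in Python's asyncio (single-threaded event loop, so tasks can only interleave at await points):

```python
import asyncio

counter = 0

async def increment() -> None:
    """Read-modify-write on shared state with no lock.

    Safe here because there is no await between the read and the write,
    so the event loop cannot switch to another task mid-update.
    """
    global counter
    current = counter
    counter = current + 1

async def main() -> int:
    # 1000 "concurrent" tasks mutate the shared counter without a mutex.
    await asyncio.gather(*(increment() for _ in range(1000)))
    return counter

result = asyncio.run(main())
print(result)  # 1000: no lost updates
```

The flip side, of course, is that inserting an await between the read and the write reintroduces the race.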


Frankly, async being non-intuitive does not imply that manual concurrency handling is less so; both are a PITA to do correctly.

It IS intuitive.

After you’ve learned the paradigm and bedded it down with practice.


It is an intrinsic tradeoff. With async there is significantly more code complexity, in exchange for substantially higher performance and scalability.

If you don't need the performance and scalability then it is not unreasonable to argue that async isn't worth the engineering effort.


Or... we've learned it and don't like it? For legitimate reasons?

I can't follow that it's hard to learn and unintuitive

What's awesome or rewarding about it?

It forces programmers to learn completely different ways of doing things, makes the code harder to understand and reason about, purely in order to get better performance.

Which is exactly the wrong thing for language designers to do. Their goal should be to find better ways to get those performance gains.

And the designers of Go and Java did just that.


> It forces programmers to learn completely different ways of doing things, makes the code harder to understand and reason about, purely in order to get better performance.

Technically, promises/futures already did that in all of the mentioned languages. Async/await helped make it more user friendly, but the complexity was already there long before async/await arrived
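To illustrate that point (Python's asyncio here; `fetch_user` is a made-up stub): the future/callback style carries the same asynchrony, and async/await just flattens the surface:

```python
import asyncio

async def fetch_user(uid: int) -> dict:
    # Hypothetical async data source, stubbed for illustration.
    await asyncio.sleep(0)
    return {"id": uid, "name": f"user-{uid}"}

def greet_with_callback(uid: int, loop: asyncio.AbstractEventLoop) -> asyncio.Future:
    """Future/callback style: the complexity was always here."""
    done: asyncio.Future = loop.create_future()
    task = loop.create_task(fetch_user(uid))
    # The continuation is attached explicitly as a callback.
    task.add_done_callback(
        lambda t: done.set_result(f"hello, {t.result()['name']}")
    )
    return done

async def greet_with_await(uid: int) -> str:
    """async/await style: same machinery, friendlier surface."""
    user = await fetch_user(uid)
    return f"hello, {user['name']}"

async def main() -> tuple:
    loop = asyncio.get_running_loop()
    a = await greet_with_callback(1, loop)
    b = await greet_with_await(1)
    return a, b

results = asyncio.run(main())
```

Both paths produce the same greeting; the second just reads top to bottom.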


Yes - I was really talking about "asynchronous programming" in general, not the async/await ways to do it in particular.

What different way of doing things?

If I want sequential execution, I just call functions like in the synchronous case and append .await. If I want parallel and/or concurrent execution, I spawn futures instead of threads and .await them. If I want to use locks across await points, I use async locks, anything else?


Really? async/await is the model that makes it really easy to ignore all the subtleties of asynchronous code and just go with it. You just need to trial and error where/when to put async/await keywords. It's not hard to learn. Just effort. If something goes wrong, then "that's just how things go these days".


