Hacker News | fancythat's comments

Calculations from me and others have shown that cloud providers apply 5-10x markups when selling you things. The less you use them, the better your bottom line. At the beginning it may make sense to use cloud credits to get moving, but when the credits expire or your organization grows, it is wise to invest in people who can set things up on their own. The biggest lie cloud providers managed to sell the world is that you don't need knowledgeable people to run things in the cloud.

I had a similar thought that this should be one of the most important USPs for LLMs in coding. Does anyone here have more insight or experience using an LLM to cut through years of abstractions and just rewrite code in asm or some other low-level language?

As the old saying goes (I made this up), if it were worth that much, it wouldn't be released to the public. There is absolutely zero chance that something "dangerous" would be available for 20 USD/month to basically anyone in the world. To this day, I am still puzzled when some professionals don't apply basic logic to certain bombastic claims.


You mean like ghost guns?


Ghost guns are not dangerous; people have been making all kinds of weapons in their basements for centuries.


Great work sending me down nostalgia memory lane!


I don't know how many servers they are using, or the server specs beyond ancient Opterons, but how is this even an issue in 2025?

On Hetzner (not affiliated), at this moment, an i7-8700 (with AVX2 support), 128 GB RAM, 2x1 TB SSD and a 1 Gbit uplink costs 42.48 EUR per month, VAT included, in their server auction section.

What are we missing here, besides the fact that the build farm was left to decay?
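To put a number on the gap, here is a back-of-the-envelope sketch comparing that auction box against a roughly comparable dedicated cloud VM. The 0.55 EUR/hour cloud rate is an assumed illustrative figure, not a quote from any provider; plug in whatever your provider actually charges.

```python
# Hetzner auction box from above: i7-8700, 128 GB RAM, 2x1 TB SSD, 1 Gbit
hetzner_monthly_eur = 42.48

# ASSUMED on-demand price for a comparable ~6-core / 128 GB cloud VM,
# in EUR per hour -- purely illustrative, adjust to your provider's rate.
assumed_cloud_hourly_eur = 0.55
hours_per_month = 730  # average hours in a month (8760 / 12)

cloud_monthly_eur = assumed_cloud_hourly_eur * hours_per_month
multiple = cloud_monthly_eur / hetzner_monthly_eur

print(f"cloud: ~{cloud_monthly_eur:.2f} EUR/month, "
      f"~{multiple:.1f}x the auction box")
```

Under these assumptions the cloud VM runs about 400 EUR/month, roughly 9x the dedicated box, which lands inside the 5x-10x range mentioned elsewhere in the thread.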


Either they want to run on ideologically pure hardware too, without pesky management bits in it (or indeed even UEFI), or they are just "it used to work perfectly" guys.

In the former case, I fail to see how ME or its absence is relevant to building Android apps, which they do using Google-provided binaries that have even more opportunity to inject naughty bits into the software. In the latter case, I better forget they exist.


I agree with you. Unfortunately, the simplest explanation is often the truth, so they probably just ignored this issue until it surfaced.


In other words,

> they are just "it used to work perfectly" guys.


Well, if you wanted to compromise F-Droid, you could target their build server's ME or a cloud VM's hypervisor.

A supply-chain attack on Google's SDK would be much more expensive and less likely to succeed. Google isn't going to be the attacker.

The recent attack on AMI/Gigabyte's ME shows how a zero-day can bootkit a UEFI server quite easily.

There are newer Coreboot boards than Opteron, though. Some embedded-oriented BIOSes let you fuse out the ME. You are warned this is permanent and irreversible.

F-Droid likely has upgrade options even in the all-open scenario.



I didn't watch this video, but be vigilant when seeing one: I was also surprised by someone demonstrating what they could do with Cursor, and I went so far as to install exactly the same version of the app, use the same model, and replicate everything (prompt, word capitalization...) I could gather from the video, and the results were nowhere near what was demonstrated (recreating a mobile web page from a screenshot). I know that LLMs are not deterministic machines, but IMO there is a lot of incentive to be "creative" when marketing this stuff. For reference, this was less than two months ago.


I will use the opportunity to confirm that the cloud is ill-suited for all but niche business cases, and that the majority of users were dragged onto cloud platforms either by free credits or (my suspicion) some grey kickback schemes with C-level guys.

At my current project (a Fortune 500 SaaS company; I was there for both the on-prem-to-cloud and then the cloud-to-cloud migration):

a) Resources are terribly expensive. The usual tricks you find online (spot instances) usually cannot be applied, for specific work-related reasons. By our estimates, compared even to hw/sw list prices, the cloud is 5x-10x more expensive, depending of course on the features you plan to use.

b) There is always a sort of "direction" the cloud provider pushes you in: in my case, the cost difference between VMs and Kubernetes is so high that we get almost weekly demands to make the conversion, even though Kubernetes doesn't make any sense for some of our scenarios.

c) Even though we are spending six, now maybe even seven, figures on infrastructure monthly, the priority-support answers we receive are borderline comical, in line with one response we got when we asked why our DB service was down, quote: "DB has experienced some issues so it was restarted."

d) When we were on-prem, new features requested from the ops side were usually implemented or investigated within a day or so. Nowadays, in most cases, answers arrive after a week or so of investigation, because each thing has its own name and lingo with each cloud provider. This could be solved with specific cloud certifications, but in the real world we cannot pause the business for 6 months until all of ops is completely knowledgeable about the inner workings of the currently popular cloud provider.

e) Performance is atrocious at times. That multi-tenancy some guys are mentioning here is for the provider's benefit, not the customer's. They cram an ungodly amount of workload onto machines; it mostly works, until it doesn't, and when it doesn't, the effects are catastrophic. Yes, you can have isolation and dedicated resources, but see a).

f) The security and reliability features are overly exaggerated. Going by observable facts, in the last year we had 4 major incidents lasting several hours that were strictly related to the platform (total connectivity failure, total service failure, complete loss of one of the sites, etc.).

In the end, for anyone who wants to dig deeper into this, check what Ahrefs wrote about the cloud.


Voice, in-person, in a coffee shop, with mobile phones on the table.


I have noticed a similar trend: when speaking about certain specific topics without following up on the device by traditional means (keyboard), ads started showing up within a day or two.

The last occurrence, 6 months or so ago, was when one of my colleagues discussed a vacation in a specific place I have absolutely no interest in visiting, so I was 100% sure I hadn't googled it or discussed it online. Surprisingly, the next day I was swamped with booking.com and airbnb deals for stays in that specific area.

I emphasize the next-day occurrences intentionally, as I am under the impression that it takes some time for them to process the data and supply the results to the marketeers.


Could it be that they have the holiday info from your colleague? They then track the proximity between the two of you and show you ads for things he is interested in.

You only notice these ads when it matches what you spoke about.


Well, of course I cannot be 100% sure, but I usually don't see his other interests popping up in my ads. I must say that I am a person who doesn't use ad blockers or similar tech, as I want to see what is being advertised on which content, etc., so I think I am more aware than the average person of what I see and why. The holiday thing was extremely specific: it is a very small (<500 people) town in a very specific location, so unless you were listening to our conversation and its context, it would take a very lucky guess to connect the dots (edit: grammar).


If he googled it from the same network you were on, then you could easily have been grouped together because of that.


Nope, that didn't happen. I access the Internet for personal use only on my non-shareable mobile connection. I have noticed what you are describing when I google stuff at the workplace, where we share a common IP.

