Hacker News | Centigonal's comments

Crazy work from whoever at Alphabet is in charge of selecting domain names for their sites.

That's been Alphabet's site since the restructuring happened back in 2015. The site has barely changed in a decade.

From doc.new to blog.google, they are on top of that.

They also have some cool IPv4 addresses, like 8.8.8.8.

(Edit: I had earlier confused it with 1.1.1.1, which is Cloudflare's.)


After all these years, I still think this was a wasted opportunity on Google's side. As is well known, Google's name comes from googol, a constant: 10^100. Instead of Alphabet, they should have named the umbrella company AlephBet [1], with the tagline: "we stopped thinking in constants".

[1] https://en.wikipedia.org/wiki/Aleph_number


The problem isn't one that can be solved with prompts. If I gave a panel of food and nutrition experts (human or machine) a bunch of pictures of food, they still wouldn't be able to tell whether, e.g., a slice of cake was made with whole milk or skim.

The "pic of packaged food --> LLM --> nutrition DB call" pipeline is workable, but many users of these apps are using them for fresh prepared foods, which is just an unworkable problem without either an understanding of the preparation process or a bomb calorimeter.


Maybe, but not always. I could make two identical-looking sandwiches with very different calorie content by changing the type and quantity of sauce on the inside of the bread. I could give you two "pasta with creamy sauce" dishes that look similar on camera but have different macros by partially swapping Greek yogurt for heavy cream. Dropping a couple tbsp of olive oil into my marinara sauce does wonders for flavor but barely affects appearance when plated. Same with lard in my refried beans.

Context: there are a lot of very popular apps (e.g. Macrofactor) that are being promoted on social media and downloaded for exactly this feature (calculating nutrition based on pictures of food). The users don't understand that this is an impossible task. This is a scam that affects people's well-being, and it's good that there's data proving it.

You're right: the MapReduce pattern is very old, and it is well known that applying it to AI training to enable geographically distributed training runs would be very beneficial. We haven't done it yet because model training workloads are more difficult to parallelize under high inter-node latency than a lot of traditional workloads.

This paper proposes a work partitioning scheme that removes a constraint that makes parallelizing AI training inefficient. The idea of a work partitioning scheme isn't novel, but the scheme itself is.
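For readers unfamiliar with the pattern being referenced, here is a minimal sketch of MapReduce in plain Python (a hypothetical word-count example, not anything from the paper above): each shard is mapped independently, which is the step that tolerates high inter-node latency, and the partial results are then merged in a reduce step.

```python
from collections import Counter
from functools import reduce

def map_shard(shard):
    # Map step: each "node" counts words in its own shard independently.
    return Counter(shard.split())

def reduce_counts(a, b):
    # Reduce step: merge two partial counts into one.
    return a + b

shards = ["the quick brown fox", "the lazy dog", "the fox"]
partials = [map_shard(s) for s in shards]   # embarrassingly parallel
total = reduce(reduce_counts, partials, Counter())
print(total["the"], total["fox"])  # 3 2
```

The map step needs no communication between nodes; gradient-based training, by contrast, traditionally requires frequent synchronization, which is why the partitioning scheme matters.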


Germany resisted Google Street View until 2023, which I thought was very impressive.

"UI" is a category that contains GUI as well as other UIs like TUIs and CLIs. "UX" encompasses a lot of design work that can be distilled into the UI, or into app design, or into documentation, or somewhere else.

> “UX" encompasses a lot of design work that can be distilled into the UI

Like how git needs you to "commit" changes, as if you're committing a change to a row in a database table? That's a design/experience issue to me, not an "it has commands" issue.


It's a last gasp... except when it isn't, like with Google, YouTube, Facebook, Reddit, etc.

You mean you're not excited to use Copilot Chat in the Microsoft 365 Copilot App??

(This is the real, official name for the AI button in Office)


Microsoft 365 Copilot For Business? (which isn't real - but yeah, the naming is...)

Microsoft spent a lot of effort developing a really powerful editing interface. If you can replace that interface with a text input box, then their application moat becomes a lot shallower.


