Hacker News | DanielHB's comments

Microsoft is a major shareholder of OpenAI; they don't want their investment to go to zero. You don't just take a loss on a double-digit-billion-dollar investment.

I think you’re right about this deal. But it’s kind of funny to think back and realize that Microsoft actually has just written off multi-billion-dollar deals, several times in fact.

One (1) year after M$ bought Nokia, they wrote it off for $7.6 billion.

There’s no upper limit to their financial stupidity.


The metaverse is another example if anyone doubts the bounds of corporate stupidity.

Why?

Facebook largely requires an Apple iPhone, Apple computer, "Microsoft" computer, "Google" phone, or a "Google" computer to use it. At any point one of those companies could cut Facebook off (e.g. [1]).

The Metaverse was a long-term goal to get people onto a device (Oculus) that Meta controlled. While I think an AR device is much more useful than VR, I'm not convinced that it's a mistake for Meta to pursue not being beholden to other platforms.

[1]: https://arstechnica.com/gadgets/2019/01/facebook-and-google-...


For the money spent (over $80B), they could have launched a phone or a car. Now their pivot is to smart glasses, which require a phone, so once again they are beholden to phone manufacturers.

I think this is sanewashing their idea in the modern context of it having failed. I think at the time, they thought VR would be the next big thing and wanted to become the dominant player via first-mover advantage.

The headsets don’t really make sense to me in the way you’re describing. Phones are omnipresent because a phone is a thing you always just have on you. Headsets are large enough that it’s a conscious choice to bring one; they’re closer to a laptop than a phone.

Also, the web interface is like right there staring at them. Any device with a browser can access Facebook like that. Google/Apple/Microsoft can’t mess with that much without causing a huge scene and probably massive antitrust backlash.


I think headsets might work, but I think Meta trying to use their first mover advantage so hard so early backfired. Oculus, as a device, became less desirable after it required Facebook integration.

It's kind of like Microsoft with copilot - the idea about having an AI assistant that can help you use the computer is great. But it can't be from Microsoft because people don't trust them with that.


> I'm not convinced that it's a mistake for Meta to pursue not being beholden to other platforms.

Devoid of other context, it’s hard to disagree. But your parent comment only asserted that the metaverse specifically as proposed by Facebook was an obviously stupid idea.


Naming your company after a product that doesn't really exist yet, and that then ultimately fails, is a pretty crazy and stupid thing to do. A bit cart-before-horse.

> Why?

Patrick Boyle did a nice video a few weeks back: https://www.youtube.com/watch?v=8BaSBjxNg-M


Can anybody cut Meta off? I don't think you could mass-market a device with no access to FB, IG or WS.

Maybe a niche product could do it, but good luck selling a laptop that won't open FB


So after $80 billion spent, they must have an ecosystem of hundreds of millions of users? Right?

Maybe they should have spent that on the Facebook phone.


Because it's been a massively expensive failure. They can't just will their own platform into existence because it would be good to have; consumers have a say, and they've rejected it completely.

Because it's been very clear for a long time that the vast majority of people do not want to play VR Second Life.

Meta's vision was worse than that. They were trying to hype doing work meetings in VR. There's a case to be made that VR games and VR universes can be fun... But work meetings?

Mark Zuckerberg using his company to build things he's the primary user for?

It worked when he wanted a system for ranking Harvard girls by appearance.

A pirate I was meant to be, trim the sail and roam the sea

Monkey Island 3 taught me a good deal of English too. I was lucky to get a text-translated version with English voiceover.

We all would avoid scurvy if we ate an orange...


Old Intel MacBook Pros definitely didn't last 10+ years; the overheating problems really reduced their lifetime.

I have an Intel MacBook Pro from 2013. It’s running Linux and my kids now use it as a SNES emulator.

> Bear in mind that this 4-7% loss only counts dies that have just one broken CPU unit. There are many other failure modes as well.

That just seems very, very high.

Is it? I thought the average for latest-architecture chips was around 5%.


Sorry I was unclear about what "very high" meant.

From what I can see, one can expect about 80-90% yield per wafer. The bit that doesn't make sense is that the "binned" narrative implies that of those broken parts of the wafer, 25-50% are usable with just one GPU disabled.

To me that sounds wrong, and far too high.


I would expect 80% of the failures to have only one core that doesn't pass QA.

I remember back in the day it wasn't that unusual for Intel to sell quad-core CPUs and dual-core CPUs that were exactly the same hardware-wise, but the dual-core ones didn't pass the QA to be sold as a quad-core.

In fact, they sold many functional quad-core CPUs as dual-cores with two cores disabled, and you could unlock the extra cores with some magic if you got lucky and got one that passed the quad-core QA.
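
To put a rough number on that intuition, here is a back-of-the-envelope sketch. It assumes a simple Poisson defect model, an illustrative 85% defect-free-die yield, and that every defect lands in a core and kills only that core - all assumptions of mine, not figures from the discussion:

    import math

    ASSUMED_GOOD_DIE_YIELD = 0.85  # illustrative, middle of the "80-90%" range

    # Poisson model: P(0 defects) = exp(-lambda)  =>  lambda = -ln(yield)
    lam = -math.log(ASSUMED_GOOD_DIE_YIELD)

    def p_defects(k: int) -> float:
        """Probability that a die has exactly k defects."""
        return math.exp(-lam) * lam ** k / math.factorial(k)

    p_one_defect = p_defects(1)
    p_any_defect = 1 - p_defects(0)

    # Share of *defective* dies that have exactly one defect (i.e. one dead
    # core, under the "each defect kills one core" assumption).
    print(f"lambda ~= {lam:.3f} defects per die")
    print(f"single-defect share of defective dies: {p_one_defect / p_any_defect:.0%}")

Under those assumptions, lambda works out to about 0.16 and roughly 90% of defective dies have exactly one defect, which is in the same ballpark as the 80% guess above; the real share depends on how much of the die area the cores actually occupy versus uncore logic.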


> What code is truly about is precision: code is unambiguous, even when it’s abstract. It’s easy to conflate ambiguity and abstraction—both refer to “a single statement that could have multiple meanings.” But the meanings of an ambiguous statement are entirely unconstrained.

I used to believe this, but after working at a successful SaaS I have come to believe that correctness and unambiguity are not entirely necessary for successful software products.

It was a very sad realization that systems can be flaky as long as there are enough support people to solve edge-case problems. That features can be delivered while breaking other features as long as enough users don't run into the middle of that Venn diagram, etc.

Fact is, it always comes down to economics: your software can afford to be exactly as broken and unpredictable as your users will tolerate while still being willing to pay money for it.


Right. One thing I learned over the years is that you can support an arbitrarily high level of tech debt and still be able to effectively maintain and enrich a successful software system, as long as you throw enough warm bodies at it.

The overhead will get absurd; you'll end up with a 10x or more increase in engineers working on the system, all of them making slow progress while spending 90% of their time debugging, researching, or writing docs and holding meetings to work out this week's shared understanding of the underlying domain semantics - but progress they will make, the system will be kept running, and new features will be added.

If the system is valuable enough for the company, the economic math actually adds up. I've seen at least one such case personally - a system that survived decades and multiple attempts at redoing it from scratch, and keeps going strong, fueled by a massive amount of people-hours spent on meetings.

Adding AI to the mix today mostly just shifts the individual time balance towards more meetings (Amdahl's Law meets Parkinson's Law). But ironically, the existence of such systems, and the points made in the article, actually reinforce the point that AI is key to, if not improving this, then at least keeping it going: it'll help shorten the time to re-establish consensus on current global semantics, and update code at scale to stay consistent.


The classic case of this is banks, which tend to keep ancient systems alive for a very, very long time.

The implicit cost of a bank's systems failing is why they keep going on as they are.

> Amdahl's Law meets Parkinson's Law

[Infinite screaming]


What a 'fancy' way of distinguishing between capital expenses vs operating expenses....

Why do people on here do this? Just keep it simple. lmao.

"the economic math actually adds up"

just lol.


This is the realization that pretty much all engineers go through as they become more senior. The area where the business really can't afford issues is very small. Usually it's only accounting/billing, and even that's solved not by great code/design but by auditing it once and never touching it again.

In the end, most of the challenges holding a business back from better code quality are organizational, not technical.


> In the end, most of the challenges holding a business back from better code quality are organizational, not technical.

This is true. And I get sad every time it is used as an argument not to improve tooling. It feels like sort of a self-fulfilling prophecy: an organizational problem that prevents us from investing in technical improvements... is indeed an organizational problem.


Well yeah, I'm not sure why that's sad. One can't find all edge cases at the beginning, only through usage of the app, and then fix them over time. Be glad at least someone is using the app, because that means the role of the software is being fulfilled: a tool to help people accomplish some goal. Much software that gets written isn't used by a single person.

A decision can be right even if the outcome doesn’t work out and a decision can be wrong even if the outcome works out.

Example: Gambling is wrong but you can win big money.


That comment seems off-topic, but just to exemplify:

In your example, even though the interface for those products is unstable (UI that changes all the time, a slightly broken API), the products themselves are coded in a language like C++ or Java, which benefits from compiler error checking. The seams where they connect with other systems are where they're unstable. That's the point of this blog post.


Yeah, that's true, a product can be successful with truly bad code, but it also makes developers' lives miserable each time they need to add a new feature, solve a bug, or simply understand how that entangled mess works.

Management and sales may not appreciate good software design and good code, but the next developer who has to work on the system will.


You can also run a successful manufacturing company without considering environmental impact. A successful tobacco company without considering public health. A successful social media persona without considering cultural impact. A mobile app slop farm that floods the app store. If economics is your priority, then everything comes down to that.

I wonder if the Facebook redesign also sucked up a lot of manual labor and is now mostly done, so they don't need as many people anymore to maintain that product.

There is also a huge surface area of security problems that can't happen in practice due to how other parts of the code work. A classic example is unsanitized input being used somewhere where untrusted users can't inject any input.

Being flooded with these kinds of reports can make the actual, real problems harder to see.
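
To make that concrete, here is a contrived sketch (entirely my own illustration, not anything from the reports in question) of the kind of code a scanner flags as an injection sink even though, in practice, no untrusted input can reach it:

    import sqlite3

    # Hypothetical setup: the report table is chosen from a hard-coded
    # allow-list, e.g. by a deploy-time config value - never from request data.
    REPORT_TABLES = {"daily": "report_daily", "weekly": "report_weekly"}

    def load_report(conn: sqlite3.Connection, period: str):
        table = REPORT_TABLES[period]  # raises KeyError for anything unknown
        # String interpolation into SQL looks like an injection sink to a
        # scanner, but `table` can only ever be one of the constants above.
        return conn.execute(f"SELECT * FROM {table}").fetchall()

A report against that last line isn't wrong about the pattern, but whether it's an actual vulnerability depends entirely on where the value can come from, and that context is exactly what gets lost when you're wading through a flood of such reports.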


They wouldn't be classed as vulnerabilities then, since, you know, there is no vulnerability. Unless you have evidence that most of these issues are unexploitable, but I would be surprised to hear that they were considered vulnerabilities in that case.

I believe the LLM would flag this kind of thing as a potential issue.

This actually came up at multiple companies I worked at in Sweden. Apparently the law here is quite strict that you _can_ use your computer for personal matters and that your employer is not allowed to spy on you in those matters.

So they can monitor your email and Slack server-side, but not your client-side stuff that doesn't touch their servers. However, if you use a VPN, then they can also monitor your DNS requests and every website you visit. Any kind of client-side telemetry is limited to a few things; however, those things can include, for security reasons, what applications you have installed (like Spotify) or what USB sticks are plugged in.


I have also been using Bazzite since March on my home desktop, and you are spot on. I think the main reason Linux is difficult for the average person these days is laptops with weird hardware configurations.

I use macOS at work, and although it is miles better than Windows, if I had a choice I would also use Linux for work.


It is absurd that there is no standardized UI toolkit, or rather that the web browser _is_ the standard, with its characteristic _lack_ of user interaction idioms.

The fact that there are multiple platforms for UIs* is a huge failure of the industry as a whole. Apple, Microsoft and Google could have sat down together at any point in the last 20+ years to push some kind of standard, but they decided not to in order to protect their gardens.

*: a standardized UI platform doesn't necessarily mean a standardized platform. Just standardization of UI-related APIs and drawing.


My guess 10 or so years ago was that Google would be the first to bake Material UI into the browser with web components, and then any browser would essentially reuse that to extend out whatever style it wanted. It really seemed like the way the web (and Google) was heading. Instead we got bad Material UI knock-offs in about 45 different UI frameworks.

