mbeavitt's comments | Hacker News

Honestly I've been doing a lot of image-related work recently, and the biggest thing here for me is the 3x higher resolution of images that can be submitted. This is huge for anyone working with graphs, scientific photographs, etc. The accuracy of a simple automated photograph-processing pipeline I recently implemented with Opus 4.6 (simple OCR and recognition of basic features) was about 40%, which surprised me. It'll be interesting to see if 4.7 does much better.

I wonder if general purpose multimodal LLMs are beginning to eat the lunch of specific computer vision models - they are certainly easier to use.


I assume that by "higher resolution images" you mean images with a bigger size in pixels.

I expect that the actual resolution in pixels per inch or pixels per meter of the images does not matter to the model; what the model has are limits on the maximum width and maximum height of images, expressed in pixels.
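In practice this means oversized images get downscaled (by you or by the API) before the model sees them. A minimal sketch of fitting an image inside such a cap while preserving aspect ratio; the 1568-pixel default here is an assumption for illustration, as real limits vary by provider and model:

```python
def fit_within(width, height, max_w=1568, max_h=1568):
    """Return (width, height) scaled down to fit inside max_w x max_h,
    preserving aspect ratio. Never upscales."""
    scale = min(max_w / width, max_h / height, 1.0)
    return max(1, round(width * scale)), max(1, round(height * scale))

# A 4000x3000 photo would be shrunk to fit the assumed cap:
print(fit_within(4000, 3000))  # (1568, 1176)
# An image already within the cap is left alone:
print(fit_within(800, 600))    # (800, 600)
```

The takeaway is that raising the cap (the "3x higher resolution" above) directly increases how much fine detail, like small text in graphs, survives this downscaling step.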


Did you try the same with Gemini 3 models? Those usually score higher on vision benchmarks.

Having briefly experienced weight loss drugs - and the bliss of that constant “EAT!” voice in your head just going quiet - I’m pretty convinced most humans have a genuine genetic predisposition to overeating.

And when you zoom out to the population level, the “we’re all autonomous individuals” argument gets a lot shakier. Like yeah, at the individual level you have agency, you make choices, fine. But at scale? We are absolutely at the mercy of whoever has figured out how to tickle our monkey brains in just the right way to get us buying their fattening food.


Humans and dogs: how many dog owners have to store their dog’s food in a bin the dog can’t get into? How many can’t leave more than one meal’s worth of food out at a time?

Until the past century or so, “eat up the available food while available” was generally a plus for survival for most populations - a person who could keep some of that excess around on them was more likely to survive a famine than their leaner peers.

Even my grandmothers (born in early 1920s Texas) remembered not always getting as much to eat as they wanted as children, and it wasn’t because their mothers were afraid of them getting fat - there just wasn’t any extra food. One of them likely did have a caloric deficit a few times here and there around age 10-12, and it showed: she was rather small.

One of my grandfathers lied his way into the Army at 16 just to be one less mouth for his mother to have to feed.

We’re really not that far separated from “eat all the food” being a health benefit.


if you read the article...

> Instead of discarding stock, companies are encouraged to manage their stock more effectively, handle returns, and explore alternatives such as resale, remanufacturing, donations, or reuse.

I guess remanufacturing/reuse might be the intended solution if it's absolutely not to be worn.


Well, one link deeper says "Restrict the export of textile waste", but I'm still unclear why they preferred these measures over a carbon tax.

Edit: "To prevent unintended negative consequences for circular business models that involve the sale of products after their preparation for reuse, it should be possible to destroy unsold consumer products that were made available on the market following operations carried out by waste treatment operators in accordance with Directive 2008/98/EC of the European Parliament and of the Council3. In accordance with that Directive, for waste to cease to be waste, a market or demand must exist for the recovered product. In the absence of such a market, it should therefore be possible to destroy the product." This is a rather interesting paragraph which seems to imply you can destroy clothes if truly nobody wants them.


I bet there's some price at which someone will happily take that Louis Vuitton bag or Burberry coat.


> Don't take criticism from someone you wouldn't take advice from.

The funny thing I find about criticism is that you actually don’t have a choice about whether or not it affects your future actions. Criticism that I have dismissed has persistently come back to haunt me, perhaps via my subconscious.


We care so much that we even care about the opinions of those we do not care about.

Or, as Marcus Aurelius put it, "It never ceases to amaze me, we all love ourselves more than other people, but care more about their opinion than our own."


Heh. This is why there's no need to win arguments.


What evidence do you have to back up this baseless claim? They openly publish their financial reports: https://wikimediafoundation.org/who-we-are/financial-reports

$178 million might sound like an extremely large amount of money if you're a member of the general public, but for a global resource kept up to date that serves hundreds of billions of visitors per year this is actually not a huge quantity of money.


https://wikimediafoundation.org/annualreports/2023-2024-annu...

They spent only $3 million on internet hosting. They spent almost $6 million on travel and conferences, and $26 million on awards and grants.

They could easily run all wikimedia hosting on investment income (endowment) alone so the banner that often pleads for donations to keep wikipedia running is pretty scammy.


I don't work for wikipedia and haven't seen their budget in depth, but the link is mostly fluff. Reading it I have no idea if "support for volunteers" means supporting wikipedia editors, corporate, donations, etc. All we really know is that hosting costs are about 3.1 million a year.

Usually non-profit organizations like this get significant corporate funding because they do work for companies and political organizations, which is where the corruption comes from. I don't think there's any doubt Wikipedia is a politically biased organization; all you have to do is look at their URL blacklists to figure that out. The NYT is regarded as a high-quality link, meanwhile you're not even allowed to link The Epoch Times as a reference despite it being the most comparable right-wing competitor to the NYT. Basically every major right-wing paper is banned, while every major left-wing paper is allowed.


> Reading it I have no idea if "support for volunteers" means supporting wikipedia editors, corporate, donations

I think it's pretty clear that "support for volunteers" does not mean corporate.

I too would like more detailed budgets, but we do have some info here.

> Usually non-profit organizations like this get significant corporate funding because they do work for companies and political organizations

The list of large donors is public (https://wikimediafoundation.org/annualreports/2023-2024-annu...); there are only 27 who gave more than $50,000. Which ones do you think Wikipedia is giving biased coverage to?

I'd also point out that there is a wall of separation between editors and the foundation.

> all you have to do is look at their URL blacklists to figure that out. The NYT is regarded as a high quality link, meanwhile you're not even allowed to link the epoch times as a reference despite it being the most comparable right-wing competitor to the NYT. Basically every major right-wing paper is banned, while every major left-wing paper is allowed

There's discussion about The Epoch Times at https://en.wikipedia.org/wiki/Wikipedia:Reliable_sources/Not... My understanding is the concern is around them promoting conspiracy theories without evidence, not their political alignment.


> my understanding is the concern is around them promoting conspiracy theories without evidence, not their political alignment.

That might be their justification but is it actually true?

Specifically: Do you believe that every major right-wing paper promotes conspiracy theories while every major left-wing paper does not?


I would disagree with your premise that every major right-wing paper is listed.

For example, Fox News and the Washington Examiner are considered right wing and are not listed as deprecated sources.

Similarly, there are left-wing sources on the deprecated list, like The Grayzone or Occupy Democrats. (Arguably those are rather fringe.)

Certainly The Epoch Times has been widely criticized for being factually inaccurate in ways The New York Times has not been.

I also don't particularly think The New York Times is equivalent to The Epoch Times in terms of reputation.


Fox News was deemed "generally unreliable" for politics and science, which in practice means that it's unusable in most cases, not so different from deprecated.

I also find it interesting that Al-Manar (essentially Hezbollah's media office) has a slightly better status than, say, Daily Mail.

See also (Wikipedia cofounder) Larry Sanger's critique of Wikipedia's source bias: https://larrysanger.org/2021/06/wikipedia-is-more-one-sided-...


You can't tease me with "Algorithm" in the title and then not actually define an algorithm... TL;DR: Read lots of widely acclaimed books that are close to the source of what you want to learn?


It's a reading equivalent to the Feynman Problem Solving Algorithm (which I personally think is really just a variant of the universal Nike Algorithm applied specifically to problem solving).


Is it though? The algorithm you reference is meant to be a joke:

1) Write down the problem.

2) Think real hard.

3) Write down the solution.

This is not a useful algorithm in any sense, apart from that it might be thought-provoking.

What's the "universal Nike algorithm"? I didn't find anything on Google.


I assume it's a play on Nike's tag-line: "Just do it"


Oh, I get it. Missed the joke.



Merge conflicts are a blessing - they tell us exactly what the problem is...


But if you recorded the intent of the editing operation, there wouldn't be a problem in the first place.

The problem with git is that it has to do a 3-way merge between snapshots. But there is much more data available: basically every keypress can be recorded and taken into consideration by the merge algorithm.
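A toy illustration of the difference: when edits are recorded as operations (the intent) rather than as snapshots, two concurrent inserts can be merged mechanically by transforming positions. This is a hand-rolled sketch of the operational-transform idea, not any real OT/CRDT library, and it ignores tie-breaking for inserts at the same position:

```python
def transform(pos, other_pos, other_len):
    """Shift an insert position to account for a concurrent insert
    that landed at or before it."""
    return pos + other_len if other_pos <= pos else pos

def merge(base, op_a, op_b):
    """Apply two concurrent single-insert operations (pos, text) to base:
    apply A as-is, then B transformed against A."""
    pos_a, text_a = op_a
    pos_b, text_b = op_b
    s = base[:pos_a] + text_a + base[pos_a:]
    pb = transform(pos_b, pos_a, len(text_a))
    return s[:pb] + text_b + s[pb:]

base = "hello world"
# Alice prepends ">> " while Bob inserts ", dear" at offset 5;
# both edits survive, with no conflict markers, in either order.
print(merge(base, (0, ">> "), (5, ", dear")))  # ">> hello, dear world"
print(merge(base, (5, ", dear"), (0, ">> ")))  # same result
```

A snapshot-based 3-way merge only sees the before/after text, so overlapping edits force it to give up and report a conflict; the operation log above keeps enough information to resolve them automatically.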


Just curious - what do you see as being a viable alternative to the current git paradigm?


I'm playing about with collaborative code editors at the moment. See https://mastodon.social/@Edent/114862593530392361

I want an experience more like Google Docs and less like emailing diffs to a mailing list.


Having a private local copy of the codebase is a feature. It means I can make whatever changes I want to the code without anybody else even knowing. And once I've finished mucking about with it and learning what I need to do, I can tidy it up into something I'm ready for other people to look at.

Not having a private playground is one of the big drawbacks of all the modern cloud SaaS stuff. If I want to play around and learn something, suddenly that affects everyone else. It shouldn't.


Even in the hypothetical scenario that sources only exist on a server, there would still have to be branches and personal workspaces for lots of reasons. But there'd be none of the current ridiculously overblown git ceremony around sharing a branch with a colleague if you want them to have a look.


I can't imagine the horror of tracking down a regression and finally fixing it, only to find someone else edited another section of the codebase, ruining all my efforts.

This is fine in text documents (to an extent, obviously references to sections of text that no longer exist can happen) because different sections are not as inextricably linked to each other.


Once everyone is done collaboratively editing, how do you control changes to the file in actual source control (in other words, commit a known configuration and receive formal, audit-sufficient approval from others to integrate the changes into source control)?


How do you test something if multiple people are making changes at the same time?


You don't. Having isolated code bases for each developer and a well-defined merge procedure is a feature, not a bug.

I've seen teams develop on a single target system, all of them bashing on the same source code base at once. It sort of works with a well-tuned team, that communicates well, and is capable of dealing with the resulting inconveniences, up to maybe 3 or 4 developers. Probably helps to have dynamic scripting languages in use by most of the devs so you aren't trying to time your compiles with each other's actions. And they still stepped on each other more than I would have been able to tolerate. There's no way this scales.

For non-code documents, by all means share. English prose doesn't crash if you start editing page 5 while someone else has an incomplete sentence fragment they're in the process of changing on page 2. But trying to edit code that way doesn't scale for squat.

From this point of view, whether it's "files" or "a database" or whatever that's being developed on isn't relevant. The point is that I actively do not want somebody else's half done work randomly getting integrated with what I'm working on right now.


How do you test something is working if one person is making changes?

You stop, then test.


You're going to ask everyone else to stop working while you test something? Even if they agree, what if they continue working in between your changes? What if they are halfway through something and the code won't even compile until they're done?

I'm sure you have good ideas about better workflows here but I don't think they're as obvious to other people as you maybe assume?


Not sure if you've ever used git, but developers usually create a branch before editing.

So you make a branch, a bunch of you discuss and edit in real time. Once everyone is ready, you stop and test.


> a bunch of you discuss and edit in real time.

So now you have to get everyone into the same headspace at the same time and schedule coding sessions across timezones?

What if two people independently feel like doing some quick incompatible changes late at night? Do they have to message everyone on the team to see if it’s OK? Or do they make a branch of the branch of the branch to test it by themselves? How is that more convenient? And in that world, how can you do those tests privately, without everyone on the team (plus the service you’re using) being able to see them? And what happens when you don’t have an internet connection (or your service is down) but you want to continue working?

I agree with the parent comment, there are too many unanswered questions in your proposed scenario.

The whole blog post in general feels incongruent, and it’s not surprising to me you’re getting conflicting feedback. You’re conflating different scenarios and proposing broad vague ideas which are not only impractical for a multitude of scenarios, they remove user agency and give more power to corporations, which is exactly the opposite of what we should be doing.


How do you personally do that branching without git? What tool(s) do you use?

> So you make a branch, a bunch of you discuss and edit in real time. Once everyone is ready, you stop and test

What if you want to test your work out, and don't want to wait until everybody else has finished with their work and stopped working to test? What if you don't want to try to coordinate 100 different instances of "everybody stop working (but also make sure the build isn't broken), I want to run tests" per day?


Once you have branches, you have merges, so you're back to a version of git that's somehow less convenient.


I think what your parent comment is getting at is “how can you reliably test your changes if someone else is changing something else somewhere which may interfere with what you’re editing”.

Your “one person” rebuttal doesn’t work, because one person is not sabotaging themselves in other files of the same project.


Yeah I was just about to ask the same thing! I'm a bit of a luddite, maybe, in that I tend to prefer what I'm already used to -- but I can generally see the other side of the argument. When it comes to git, though, I genuinely don't get what is being proposed as the alternative, let alone why it would be an improvement.

edit: I've seen the author's reply, and I guess the original piece was mainly a call to develop better ways of doing things, rather than a claim that they already exist and we should hurry up and start using them. I'd still be interested in more detail on what is fixably wrong with git, though (as opposed to the annoyances that are corollaries of necessary features).


I was also wondering this. I don't want my codebase to be a shared word document, how will it ever be in a compilable state?


The idea of google-docs-like collaborative code editing is intriguing, but it's hard to imagine it being practical. I could see it working quite nicely for e.g. pair programming on a feature branch, but at some point you need to merge that branch.


Sounds like the author is advocating for SVN.


Ok but the lab has access to the unencrypted data? You haven't removed the requirement that the user needs to trust the lab with their raw genome markers. This entire operation hinges on the lab's trustworthiness, does it not?


Excellent point.

We are developing a solution that brings cryptography to DNA molecules themselves, allowing DNA to be secure in any lab. It fits well with Monadic's front end.

https://www.geneinfosec.com/


We address this point in the article. On the default path with existing rules and regulations some degree of trust in labs will always be required.

At-home sequencing could be a game changer.

The other reply also mentions molecular cryptography, which could provide really strong anonymity and privacy guarantees. We hope to do a PoC accordingly some time in the near future.

