
KDE also IIRC. Just works in all load and save dialogs :)

Is it me, or did issues get a lot worse with the transfer to MSFT?

The purchase wasn't a year ago, it was 8 years ago. In that time how much has it grown? 10x? 100x? More?

It's probably due to a more recent change, i.e., focusing on features over stability. Or it could be that there was some turnover in ops and someone who was a hawk about stability isn't there.

If I were to bet, there's probably a product manager or other leader who's just gung-ho on new features and losing track of who their customers are and what their needs are.


IMO, it's probably a combination of factors. I get the feeling GitHub has no clear leadership by anybody who actually USES it. The priorities internally were almost certainly "get onto Azure", "shove Copilot/AI down everybody's throat", and other generic "product driven" initiatives. The user-hostile move to React was done in a way that broke browser back-button functionality, especially in Pull Requests.

They don't/didn't care because what are you going to do?

On one hand, the free users shouldn't complain too much, though I get their anger. But the place I work is an enterprise paying customer and this is bullshit.


That can happen many times during a buyout. Some company buys a thing, and the problem then is ownership of the thing: who in the new company is going to own the 'make sure it stays good' problem? Sometimes with a buyout the people who were doing that may even stay at the company, but it is a matter of motivation. MS has a really serious problem. You can see the gaps where they have glued at least 10 companies together and called it Microsoft. They have a huge reputational-risk issue, where something breaking in the Xbox division can have a negative impact on the tools division, and the other way around. They lack focus on many items. They have needed a 'service pack 2', stop-the-presses moment to fix this Mount Everest of tech debt.

I think it's more related to vibe coding

Definitely not. I remember some 4 years ago some random bug in a GitHub-supported GitHub Action, and a comment in an issue saying: "I heard the team responsible for this action was laid off, don't expect a fix". This was shortly after the Microsoft acquisition.

But the vibe coding BS probably made it 10 times worse.


> I remember some 4 years ago ... This was shortly after the microsoft acquisition.

The acquisition was 8 years ago.


They started with a hands-off approach and then went hands-on. I'm not sure, but that 'hands-on' shift likely happened shortly after the usual acquisition vesting period of 3 years, when the old guard starts to leave.

Yes you are correct, ~4 years ago was when they had a lot of layoffs at Microsoft and GitHub. Initially after the acquisition it was mostly fine, but after the layoffs there was a noticeable degradation in service quality and reliability.

> But the vibe coding BS probably made it 10 times worse.

Yup, I keep seeing this in various companies. Teams that were effective and did solid engineering are now more effective and do even better engineering. Teams that were effectively already just "boilerplate monkeys" now produce a lot more code than before, but the quality is the same, so effectively they're worse at contributing now than before, and take more shortcuts, not fewer.

From my point of view, agents are amplifiers, so if you usually build spaghetti projects, agents just help you do that faster, not avoid the spaghetti altogether. If you usually build well-designed stuff, they can help you put that together faster.


Agreed. In general the amount and variety of bugs introduced since everyone started vibing is worrying. It is probably a national security concern but I guess so is the economy tanking due to failed AI investments. Guess we will see

I'm not sure it's specific to vibe coding so much as the AI feature add rush. Every SAAS company is throwing more shit at the wall than I've ever seen, to the point where I'm actively avoiding some software because I don't want yet another new feature release pop-up when I log in.

Add in them being extremely high scale and critical infrastructure and it's easy to see where things can go wrong, vibe added code or not. I think we'd all prefer they have long slow roadmaps but clearly leadership thinks they're in a fight with the other AI companies to release the newest and bestest every day.


At Microsoft's end, or in general, with the amount of new repos and pushes/commits from other people vibe-coding?

GitHub actions sucked and fell over itself long before vibe coding became mainstream.

Yes, of course, but also more recently under the new CoreAI unit: https://www.businessinsider.com/microsoft-ai-coding-rivals-o...

Even after decades, the policy is the same:

Embrace, extend, and extinguish.


Which was a policy that increased their market dominance for their existing dominant products.

What exactly are they extinguishing GitHub to the benefit of? Azure Repos?


Perhaps they can’t help themselves out of habit, it is their nature.

The original Red Dog team that started Azure is long gone, and the general success of the cloud papers over all levels of incompetence, so that the incompetence is now entrenched and unable to do better.

Cloud service providers have this unfortunate property where poor designs will make more money which makes it hard to maintain a culture of excellence. I tried to push a design change that would result in a 10x throughput for a certain product and was told that a 90% drop in usage is the last thing they want. I self host my own stuff with GitLab, so far not a single unplanned outage in 6 years.

Perhaps a Roman decimation is in order, whenever GitHub experiences an outage fire one GitHub employee at random. That should help get interests in line and allow for cross org cooperation. With 150 outages per year and a staff of 6,000 that amounts to 2.5% per year if no improvements are made.


It is absolutely not just you

Is it just me, or [thing that has been repeated a billion times every day on this and every other website]

It certainly seems like low effort engagement farming.

When I review a PR, I don't add comments to the code. If I think something needs to be commented, I comment on the PR and reject the PR.

If I think --as is discussed in the article-- that the comment "would be nice, but is not strictly necessary", then I comment on this on the PR and approve the PR.


> Just because America is doing bad things doesn't mean China is good, or vice versa.

When someone points out hypocrisy, this is "the answer", it seems. But it is just a statement, not a rebuttal of the hypocrisy that was pointed out.

Hypocrisy is still hypocrisy.

And bad things are bad things. Yet no amount of propaganda (red scare, "eew dictatorship", Uyghur genocide, Taiwan threat) can convince me that China is as evil as (or more evil than) the US-Israel alliance of the last 50 years.


Hypocrisy would be if the person only points out Chinese authoritarianism without acknowledging problems e.g. in US policy.

Not mentioning US problems every time they criticize CCP problems is not automatically hypocrisy, and this idea basically means you cannot criticize anything without criticizing everything someone considers just as bad or worse at the same time.

Calling a discussion on China hypocritical because it doesn't say "but US worse" is essentially trying to build in whataboutism into every discussion.

It's a symptom of increasing polarization and part of the problem.


There's US AI and China AI. Those are the two contenders. We are discussing the problems of using the Chinese AI because of the "evil" govt there. The evil at this point clearly is less evil than that of the US govt.

That's the hypocrisy: not seeing the block of wood in the eye of one while complaining about the speck of wood in the eye of the other.

By trying to be less hypocritical we create a more level playing field based on facts, instead of gut-feeling based hatred.

Whataboutism is, IMHO, used a lot as a way to circumvent having to address the glaring hypocrisy: I see it used to shut up those who point out hypocrisy.


There is also European AI and probably other places.

You may think only those two are relevant, and so every discussion of one party requires discussion of the other. I disagree with that fundamentally.

I'll be honest, I also disagree that one side is "clearly less evil than the other". For me, living in Taiwan and caring about democracy, the risk of Chinese authoritarian expansion and invasion is not a "speck of wood in the eye", frankly I find that framing disrespectful.

But that discussion is entirely beyond the question of hypocrisy. Again, the guy people accuse of hypocrisy said nothing of the sort that indicates that he doesn't see the problems of the US government. He merely didn't talk about them.

This isn't "shutting down talk about hypocrisy".


> Uyghur genocide

I'm gonna go out on a crazy limb here and say that this is on par with the genocide in Gaza. Mass sterilization, forced labor, sexual abuse, and torture on a larger scale than Gaza. Certainly we can argue about which is worse, but they're both incredible atrocities. The only thing that makes China less scary IMO is that they currently aren't the empire ruling the world and at the center of the global economy. If that changes, as seems likely, I don't see any reason to believe China would be a better or more compassionate world ruler than the US.


There are no scales to weigh 2 atrocities against one another. There is only a hole for the humanity we have all lost. North hell is no different from west, east, south or central hell.

Never used the CLI, but I do use their browser plugin. Would be quite a mess if that got compromised. What can I do to prevent it? Run old --tried and tested-- versions?

Quite bizarre to think how much of my well-being depends on those secrets staying secret.


Integration points increase the risk of compromise. For that reason, I never use the desktop browser extensions for my password manager. When password managers were starting to become popular there was one that had security issues with the browser integration so I decided to just avoid those entirely. On iOS, I'm more comfortable with the integration so I use it, but I'm wary of it.

The problem is that the UX with a browser extension is so much better.

I also find it far easier to resist accidentally entering credentials in a phishing site... I'm pretty good about checking, but it's something I tend to point out to family and friends to triple check if it doesn't auto suggest the right site.

Exactly. Same principle of passkeys, Yubikeys and FIDO2. Much harder to phish because the domains have to match.

I’m impressed with their feature to add the URL for next time, after manually filling on an unmatched URI. Hairs raised on neck clicking confirm though.

Important, IMO, is the extra phishing protection: the UX is really nice if and only if the URL matches what's expected. If you end up on a fake URL somehow, it's a nice speed bump that it doesn't let you auto-fill, to make you think: hold on, something is wrong here.

If you're used to the clunkier workflow of copy-pasting from a separate app, then it's much easier to absent-mindedly repeat it for a not-quite-right url.
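To make that speed bump concrete, here is a minimal Python sketch of the kind of matching rule an extension applies before offering autofill. It is a simplification, not any particular manager's implementation: real managers track stored origins and use the public-suffix list, and the hostnames below are made up.

```python
from urllib.parse import urlsplit


def should_autofill(saved_url: str, current_url: str) -> bool:
    """Offer autofill only when the page is served over HTTPS and its
    hostname exactly matches the saved entry's hostname."""
    saved = urlsplit(saved_url)
    current = urlsplit(current_url)
    return current.scheme == "https" and current.hostname == saved.hostname


# Same host, different path/query: fill is offered.
print(should_autofill("https://bank.example/login",
                      "https://bank.example/login?next=/"))       # True

# Lookalike domain: no suggestion appears -- that absence is the warning.
print(should_autofill("https://bank.example/login",
                      "https://bank-example.evil/login"))         # False
```

The point of the exact-host check is that a human scanning the address bar can miss `bank-example.evil`, but string comparison can't.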


The 1Password mobile and desktop apps have such a nice UX that I’m happy copy pasting from and into it instead of having any of the browser extensions enabled.

I have 1Password configured to require password to unlock once per 24 hours. Rest of the time I have it running in the background or unlock it with TouchID (on the MacBook Pro) or FaceID (on the iPhone).

It also helps that I don’t really sign into a ton of services all the time. Mostly I log into HN, and GitHub, and a couple of others. A lot of my usage of 1Password is also centered around other kinds of passwords, like passwords that I use to protect some SSH keys, and passwords for the disk encryption of external hard drives, etc.


> The 1Password mobile and desktop apps have such a nice UX that I’m happy copy pasting from and into it instead of having any of the browser extensions enabled.

Also a great way of missing out on one of the best protections of password managers: completely eliminating phishing, even without requiring thinking. And yes, it still requires you to avoid manually copy-pasting without thinking when it doesn't work, but it's so much better than the current approach you're taking, which basically offers zero protection against phishing.


My approach is that for critical sites like banking, I use the site URL stored in the password manager too, I don't navigate via any link clicking. I personally am fine with thinking when my entire net worth is potentially at stake.

It's not only about how you get there, but whether the autofill shows or doesn't show, which is the true indicator (beyond the URL) of whether you're in the right place or not.

Rogue browser extensions, for example, could redirect you away from the bank website (if the bank website has poor security) when you go there, so even if you use the URL from the password manager, if you don't use the autofill feature, you can still get phished. And if the autofill doesn't show, and you mindlessly copy-paste, you'd still get phished. It's really the autofill that protects you here, not the URL in the password manager.


You don't need autofill for an indicator. Simply bookmark your bank's login page; even if it gets silently redirected later, you will notice, as the page won't be bookmarked anymore.

> even if it gets silently redirected later you will notice as the page won't be bookmarked anymore

What? Are you not talking about browser bookmarks? They don't change because the target website starts redirecting somewhere, at least not the browsers I typically use.


In Firefox at least, the bookmark star indicator disappears if you leave the site and the URL no longer matches the original bookmark = phishing protection without installing more unnecessary software and increasing the attack surface.

If you have rogue browser extensions installed, the browser extension can surely read the values that got filled into the login page without having to redirect to another site.

Not necessarily, a user could have accepted a permission request for some (legit) redirect extension that never asked for content permission, then when the rogue actor takes over, they want to compromise users and not change the already accepted permissions.

Concretely, I think a redirect extension would use the "webRequest" permission, while for in-page access you'd need a content script for specific pages, so in practice they differ in what the extension gets access to.


In Safari on iOS I have all the main pages I use as favourites, so that they show on the home screen of Safari.

Likewise I have links in the bookmarks bar on desktop.

I use these links to navigate to the main sites I use. And log in from there.

I don’t really need to think that way either.

But I agree that eliminating the possibility all-together is a nice benefit of using the browser integration, that I am missing out on by not using it.


Which works great until tags.tiqcdn.com, insuit.net or widget-mediator.zopim.com (example third-party domains loaded when you enter the landing page of some local banks) get compromised. I guess it's less likely to happen with the bigger banks; my main bank doesn't seem to load any scripts from third parties, as a counter-example. Still, rogue browser extensions scare me, although I only have like three installed.

Also, you want to avoid exposing your passwords through the clipboard as much as possible.

On unix-like OSes you can use `xsel` and configure it to clear clipboard after a single paste and/or after a set period of time.
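The clear-after-timeout idea fits in a few lines. In this hedged sketch, the copy and clear actions are parameterized so the demo can use an in-memory stand-in; real use would shell out to your clipboard tool, e.g. `xsel --input --clipboard` / `xsel --clear --clipboard` on X11 (an assumption; Wayland users would adapt to `wl-copy`).

```python
import threading


def copy_with_timeout(copy, clear, secret, seconds=15.0):
    """Copy a secret, then clear it after `seconds`.

    `copy` and `clear` wrap whatever clipboard commands your system
    uses; the timer runs the clear even if you forget about it."""
    copy(secret)
    timer = threading.Timer(seconds, clear)
    timer.daemon = True  # don't keep the process alive just for the wipe
    timer.start()
    return timer


# Demo with a plain list standing in for the system clipboard:
clipboard = []
t = copy_with_timeout(clipboard.append, clipboard.clear, "hunter2", seconds=0.1)
t.join()            # wait for the timer to fire
print(clipboard)    # []  -- the secret has been wiped
```

The same pattern works for any secret you stage temporarily, not just passwords.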

> The problem is that the UX with a browser extension is so much better.

It's better, but calling it so much better [that it's unreasonable to forgo the browser extension] is a bit silly to me.

1. Go to website login page

2. Trigger the global shortcut that invokes your password manager

3. Your password manager will appear with the correct entry usually preselected; if not, type 3 letters of the site's name.

4. Press enter to perform the auto type sequence.

There, an entire class of exploits entirely avoided. No more injecting third party JS in all pages. No more keeping a listening socket in your password manager, ready to give away all your secrets.

The tradeoff? You now have to manually press ctrl+shift+space or whatever instead when you need to log in.


The tradeoff is that you need to know how to set up a global shortcut, or even know it's possible. I wish people would stop minimizing the knowledge they have as something everyone just knows.

How do you set up this shortcut? I'd prefer to get rid of extensions, if for no better reason than sometimes it switches to my work profile and I have to re-login

On iOS I feel I have less control over what's running than on Linux (don't get me started on Windows or Android), so that's the order in which I dare to use it. But against a supply chain attack, I'll always be using a distributed program: the only thing I can do is use old versions and trusted distribution channels.

In theory the browser integration shouldn’t leak anything beyond the credentials being used, even if compromised.

When you use autofill, the native application will prompt to disclose credentials to the extension. At that point, only those credentials go over the wire. Others remain inaccessible to the extension.


We need cooldowns everywhere, by default. Development package managers, OS package managers, browser extensions. Even auto-updates in standalone apps should implement it. Give companies like Socket time to detect malicious updates. They're good at it, but it's pointless if everyone keeps downloading packages just minutes after they're published.

Exactly this. For anyone who wants to do it for various package managers:

  ~/.npmrc: 
  min-release-age=7 (npm 11.10+)

  ~/Library/Preferences/pnpm/rc: 
  minimum-release-age=10080 (minutes)

  ~/.bunfig.toml 
  [install]: 
  minimumReleaseAge = 604800 (seconds)

This would have protected the 334 people who downloaded @bitwarden/cli 2026.4.0 ~19h ago (according to https://www.npmjs.com/package/@bitwarden/cli?activeTab=versi...). Same for axios last month (removed in ~3h). Doesn't help with event-stream-style long-dormant attacks but those are rarer.

(plug: released a small CLI to auto-configure these — https://depsguard.com — I tried to find something that will help non developers quickly apply recommended settings, and couldn't find one)
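The gate those settings implement is simple enough to sketch. In practice the publish timestamp would come from the registry (e.g. `npm view <pkg> time --json`); the dates below are illustrative, roughly matching the ~19h-old compromised release mentioned above.

```python
from datetime import datetime, timedelta, timezone

# Mirrors min-release-age=7 (days) from the npm config above.
MIN_RELEASE_AGE = timedelta(days=7)


def old_enough(published_iso: str, now: datetime) -> bool:
    """Allow installing a version only once it has been public for at
    least MIN_RELEASE_AGE, giving scanners time to flag it first."""
    published = datetime.fromisoformat(published_iso.replace("Z", "+00:00"))
    return now - published >= MIN_RELEASE_AGE


now = datetime(2026, 1, 10, tzinfo=timezone.utc)
print(old_enough("2026-01-09T05:00:00+00:00", now))  # False: ~19h old, blocked
print(old_enough("2026-01-01T05:00:00+00:00", now))  # True: ~9 days old, allowed
```

Malicious releases tend to be pulled within hours, so even a short cooldown filters most of them; seven days is just a comfortable margin.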



Love it, I'll link to it!

That is why we have discussions like these: https://x.com/i/status/2039099810943304073

X is the worst place to hold community discussions.

I am not sure that works - imagine that the next shellshock had been found. Would you want to wait 7 days to update?

We need to either screen everybody or cut off countries like North Korea and Iran from the Internet.


These vulnerabilities are all caught by scanners and the packages are taken down 2-3 hours after going live. Nothing needs to take 7 days, that's just a recommendation. But maybe all packages should be scanned, which apparently only takes a couple of hours, before going live to users?

Shellshock was in 2014 and Log4Shell was 2021. It's far more likely that you're going to get pwned by using a too-recent unreviewed malicious package than to be unknowingly missing a security update that keeps you vulnerable to easy RCEs. And if such a big RCE vuln happens again, you're likely to hear about it and you can whitelist the update.

> What can I do to prevent it?

My two most precious digital possessions - my email and my Bitwarden account - are protected by a Yubikey that's always on my person (and another in another geographical location). I highly recommend such a setup, and it's not that much effort (I just keep my Yubikey with my house keys)

I got a bit scared reading the title, but I'm doing all I can to be reasonably secure without devolving into paranoia.


If the software gets poisoned then your YubiKey will not save you.

I think they mean to secure your most valuable accounts with a hardware token rather than in a normal password manager, so they aren't at risk if your password manager has an issue.

Use the desktop or web vault directly, don't use the browser plugin.

How are they clearly less susceptible to a supply chain attack?

Maybe the web vault, but then we do not know when it's compromised (that's the whole idea); so we trust them not to have made a mess...


How to prevent it?

tl;dr

- https://cooldowns.dev

- https://depsguard.com

(disclaimer: I maintain the 2nd one; if I knew of the first, I wouldn't have released it, I just didn't find anything at the time. They do pretty much the same thing; mine is a bit of an overkill by using Rust...)


Do either of those work on browser extensions that I install as a user? I don't see anything relating to extensions in there.

Nope but that’s a good idea

You should use hunter2 as your password on all services.

That password cannot be cracked because it will always display as ** for anyone else.

My password is *****. See? It shows as asterisks so it's totally safe to share. Try it!

... Scnr •́ ‿ , •̀


ah, the old bash.org.

I thought Zig has a C compiler built in? Or is it just the Zig build system that's able to compile C, but uses an external compiler for that?

Still a proper programmer-flex to build another one.


Zig actually bundles LLVM's Clang, which it uses to compile C with the `zig cc` command. But the long term goal seems to not be so tightly coupled to LLVM, so I'm expecting that to move elsewhere. They still do some clever stuff around compiler-rt, allowing it to be better at cross-compilation than raw Clang, but the bulk of it is mostly just Clang.

There is also another C compiler written in Zig, Aro[1], which seems to be much more complete than TFA. Zig started using that as a library for its TranslateC functionality (for translating C headers into Zig, not whole programs) in 0.16.

[1]: https://github.com/Vexu/arocc


They're not planning on dropping Clang.

They kinda are: "This issue is to fully eliminate LLVM, Clang, and LLD libraries from the Zig project." https://github.com/ziglang/zig/issues/16270

Or maybe look at something like Zen C. A language without the baggage or change in direction.

[1] https://github.com/zenc-lang/zenc


Yes, as a backend. Clang as the `zig cc` frontend will stay (and become optional) to my knowledge.

libraries, not processes.

I find that a very bold move; how will they reinvent the wheel on the man-years of optimization work that went into LLVM with their own compiler infrastructure?

They're just removing the obligate dependency. I'm pretty sure they will keep it around as a first-class supported backend target for compilation.

No, the whole point is to eliminate dependencies that they have to maintain. "not obligate" really doesn't mean anything if it's available as a backend--the obligation is on the Zig developers to keep it working, and they want to eliminate that obligation.

And the original question was "how will they reinvent the wheel on the man-years of optimization work that went into LLVM with their own compiler infrastructure?" -- the answer is that Andrew naively believes that they can recreate comparable optimization.

There are a whole lot of misstatements about Zig and other matters in the comments here by people who don't have much knowledge about what they are talking about--much of the discussion of using low-level vs high-level languages for writing compilers is nonsense. And one person wrote of "Zig and D" as if those languages are comparable, when D is at least as high level as C++, which it was intended to replace.


First I was naive to believe I could make a new programming language, then I was naive to believe it would be anything but a toy project, then I was naive to believe that we could make our own backends for debug mode, now I'm naive to believe that we can add optimizations to the pipeline. It's getting old. Just because you lack the creativity, willpower, and work ethic to accomplish something, doesn't mean I do.

I admire your creativity, willpower, and work ethic, and a few other things about you, but I don't admire reactionary garbage like this ... I'm actually rather shocked by it and how it leans heavily on the strawman "I'm naive to believe that we can add optimizations to the pipeline" which is not the statement that was made, but I will maintain my high regard for you and your efforts despite it ... no human is perfect. I have a lengthy list of technical brilliancies in Zig that I admire that I won't bore you with but do often bore others with.

At least you acknowledge that I am correct about your belief, whereas someone else said I was exactly wrong.

As for me, while I had a successful software development career spanning 6 decades, received a mention in a two-digit RFC, and hold several networking patents, my best years are far behind me, but even in my heyday I couldn't hold a candle to your creativity, willpower, work ethic, or productivity ... but how is that at all relevant?


my implication is that you are projecting - believing that because something is true for you, it is therefore true for other people.

zsf team is perfectly capable of implementing compiler optimizations.


To clarify, my statement was based on comments I have seen and heard from Andrew Kelley when discussing this subject. I can't locate those at the moment, but here is https://news.ycombinator.com/item?id=39156426 by mlugg, a primary member of the Zig development team (emphasis added):

"To be clear, we aren't saying it will be easy to reach LLVM's optimization capabilities. That's a very long-term plan, and one which will unfold over a number of years. The ability to use LLVM is probably never going away, because there might always be some things it handles better than Zig's own code generation. However, trying to get there seems a worthy goal; at the very least, we can get our self-hosted codegen backends to a point where they perform relatively well in Debug mode without sacrificing debuggability."

The current interim plan (which I think was developed after the comments that I heard from Andrew, perhaps in recognition of their naivete) is for Zig to generate LLVM binary files that can be passed to a separate LLVM instance as part of the build process. Is that "a first-class supported backend target for compilation"? I suppose it's a matter of semantics, but that certainly won't be the current LLVM backend that does LLVM API calls.

P.S. It may be helpful to read through https://github.com/ziglang/zig/issues/13265


> The current interim plan...

What do you mean by "interim"? As I explicitly stated in the comment you quoted, it has never been, and likely never will be, planned for the Zig compiler to become incapable of using LLVM. The LLVM backend still sees plenty of active development by the core team [0]---that's perfectly compatible with improving the experience of users (including ourselves) by avoiding unnecessary uses of LLVM [1].

> ...is for Zig to generate LLVM binary files that can be passed to a separate LLVM instance as part of the build process. Is that "a first-class supported backend target for compilation"? I suppose it's a matter of semantics, but that certainly won't be the current LLVM backend that does LLVM API calls.

I think you are incorrectly assuming that we currently make heavy use of the LLVM API. As indicated by #13265 being closed, that is not true. The Zig compiler already generates bitcode by itself, without touching the LLVM API. The only thing we actually use the LLVM API for is feeding that bitcode to LLVM, which can easily be done by invoking a CLI instead. Users quite literally would not be able to tell if, for instance, we changed the compiler to pass the bitcode to Zig's embedded build of Clang over CLI.

[0]: https://ziglang.org/devlog/2026/#2026-04-08

[1]: https://ziglang.org/download/0.15.1/release-notes.html#x86-B...


> the answer is that Andrew naively believes that they can recreate comparable optimization.

That's exactly wrong.

> There are a whole lot of misstatements about Zig and other matters in the comments here by people who don't have much knowledge about what they are talking about.

Well spoken. You should look in the mirror.


Proebsting's Law: Compiler Advances Double Computing Power Every 18 Years

You need to implement very few optimizations to get the vast majority of compiler improvements.

Many of the papers about this suggest that we would be better off focusing on making quality of life improvements for the programmer (like better debugger integration) rather than abstruse and esoteric compiler optimizations that make understanding the generated code increasingly difficult.


All of that will still be available, just not in the main Zig repo. Someone may have asked the same question about LLVM when the GNU compiler already existed.

as a comment about a particular project and its goals and timelines, this is fine. as a general statement that we should never revisit things, it's pretty offensive. llvm makes a lot of assumptions about the structure of your code and the way it's manipulated. if I were working on a language today I would try my best to avoid it. the back ends are where most of the value is and why I might be tempted to use it.

we should be really happy that language evolution has started again. language monoculture was really dreary and unproductive.

20 years ago you would be called insane for throwing away all the man-years of optimization baked into oracle, and I guess postgres or mysql if you were being low rent. and look where we are today, thousands of people can build databases.


I expressly said "not be so tightly coupled to LLVM" because I know they're not planning on dropping it altogether. But it is the plan for LLVM and Clang not to be compiled into the Zig binary anymore, because that has proven to be very burdensome. Instead, the plan seems to be to "side-car" it somehow.

I did quite some experimenting with this.

Fruit and green leaves move fastest. Meat, cheese, oil and fats slowest.

But we often eat combinations: and the slowest component of your food determines the speed of the whole.

Also: it's a one-lane road and "overtaking" is not possible.

So, eating a fast moving meal after a slow moving meal results in the fast mover getting stuck behind the slow mover.

Hence I start my day without any slow food (only fruit, herbs, green leaves, spices, ginger => usually a smoothie), and end the day with slow food (oily food, nuts, seeds, beans; usually combined with green leaves, as we need a lot of green leaves).

YMMV


There have been alternative (often mad) health proponents who have insisted upon only eating fruit in the morning for years - similar(ish) reasons. I think there is probably something to it.

Whole fruit also has a lower glycemic index due to the fiber. This slow release of sugar helps reduce insulin resistance and balance out hormone response in general.

Hormonal imbalance is severely underrated as a root cause of common mental health issues like anxiety, depression, etc.

Having fruit in the morning is a little boost without the guilt. Adding in some light exercise, like walking, also helps prime the day. It even gets easier to wake up early for all this the more regularly it's done. It's one big reinforcement cycle for healthy habits.


I remember this being a thing in some Tony Robbins book!

I mean the most obvious reason is fibers

For me the most obvious reason is that we are primates and most primates have a fruit heavy diet. So I believe that's a reason for our human bodies to be very well adapted to fruit.

And that means the fiber, vitamins, minerals and sugars... the whole package.


Yeah, but during what part of the day do they eat it?! ^_^

Isn't slow food going through your body during sleep something that'll impact your sleep quality?

When you wake up you are basically fasting so your body is ready to take a hit. Slow food will go through your body faster when you eat it in the first half of your day.


I found the same food goes through about as fast (on a rather empty GI tract) no matter when it's eaten.

Slow food takes about 24h for me (so even when eaten early).


I don't think the last 12h impacts sleep quality that heavily though

Interesting.

A great opportunity to add "YMMV"


Your Movements May Vary?

YBMMV

Did it!

Just to be clear: I thought the typical advice has been fiber -> protein -> carbs, for blood sugar reasons. You're saying to frontload fiber/carbs and backload proteins for easier digestion? That is interesting; I wonder what studies there are on this.

My reason is I don't want the fast-moving food to get stuck behind the slow-moving food.

Another reason I do it like this: I get no after-dinner dip from fast-moving food. Slow-moving food makes me crash a little, and I prefer to experience that in the evenings.

I also did experiments with fruit-(and-leaves-)only diets. No crashes at all. Nice! But I did crave savory, chewy food a lot.


> Also: it's a one-lane road and "overtaking" is not possible.

I've eaten some hamburgers at Krystal's that definitely overtook whatever else was in front. Some folks have had the same or similar effect from White Castle, although I never have. Chipotle on occasion moves things along rather briskly too. No fruit or green leaves in any of them.

It may be that it's not a digestion thing, but some other factor they have that accelerates the process.


Not a doctor but I know that some foods, in some guts, at some times can trigger a 'quick-release valve' of sorts. You're not digesting properly when that happens.

This. Eject != digest

I don't disagree with your findings, but here's the model I use:

- Fiber: ^

- Dairy: v

- Coffee: ^^

- NSAIDs: vv

- Ice cream splurges: vvv


My breakfast routine for ~40 years has been coffee, muesli, coffee, yoghurt, coffee, fresh fruit, all served with plenty of coffee.

I end up eating a similar breakfast when I visit Swiss/Austrian/Dutch friends in their natural habitat.

Yeah, my mileage is: I eat fruit and I get cramps and the squirts for the next two days.

As I've gotten older, my ability to consume fruit, onions, garlic and most dairy (and coffee :-( ) has been taken away from me. It's really a miserable experience for someone who enjoys eating new and interesting things all the time.


IANAD, but sounds like IBS or similar, not necessarily age-related, and potentially treatable.

Look into the low FODMAP diet.

What do you mean? The human stomach is absolutely not a "one-lane road"; your comment lacks basic biological understanding. What you're describing is a good generic diet, and maybe that's why it feels good, but please learn a bit more about the stuff you are experimenting on.

I did not mention stomach. I meant the GI-tract as a whole.

I've used food coloring and indigestibles (like corn kernels) to do experiments on whether meals can "overtake" or "merge" or "join" with other meals into poops.


I'd love to read more about your experiments, please don't refrain from writing about it someday.

that's the most insane thing i read today. kudos to your curiosity

Then you'll love https://onlinelibrary.wiley.com/doi/full/10.1111/jpc.14309

"Six paediatric health-care professionals were recruited to swallow a Lego head."


omg this article is wild lol

> To standardise bowel habit between participants, we developed a Stool Hardness and Transit (SHAT) score to look at stool consistency over time. The SHAT score is the sum of the Bristol Stool Chart scores over a specific time period divided by that time period in days.

> Post-ingestion, stools were monitored and examined in search of the excreted item. The search was conducted on an individual basis, and search technique was decided by the participant. The primary outcome was the Found and Retrieved Time (FART) score.
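The SHAT formula quoted above is simple enough to sketch. A minimal Python version (the example scores below are made up for illustration, not taken from the paper):

```python
def shat_score(bristol_scores, period_days):
    """Stool Hardness and Transit (SHAT) score: the sum of Bristol
    Stool Chart scores over a time period, divided by that period
    in days."""
    return sum(bristol_scores) / period_days

# Hypothetical participant: six type-4 stools over a three-day period.
shat_score([4, 4, 4, 4, 4, 4], 3)  # -> 8.0
```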


Adding debug prints to your diet.

Food coloring is a liquid dye. It will mix with whatever chyme it encounters in the stomach and small intestine, dyeing a large portion of the stool. It does not prove that food stayed in a single-file line.

Also, again, the GI tract as a whole is not a "one-lane road" either.

Please educate yourself and do not do "experiments" on yourself. A good place to start learning more would be: https://accessmedicine.mhmedical.com/book.aspx?bookid=691 if you're interested.


You are very judgey and controlling. Maybe some prune juice would help?

Eating bitter greens can cause the body to secrete more bile and that speeds up fat digestion.

I use TUDCA for that though it can make me gassy if I take too much.

I will add an anecdote that from observation, two people on the same diet over long periods can have significantly different poop frequencies, and differing regularity.

YMMV. It's not just determined by food intake; there are individual factors.

At a guess, these individual factors start with 1) a genetic component to reactions to substances such as lactose and caffeine, and 2) the gut microbiome.

In other words, saying "change diet and you can change the poop schedule" is true, but "with this diet you will definitely get this schedule" is not.


For me, I drink close to a gallon of water a day, and that truly cleans me out daily.

> It's a one-lane road and "overtaking" is not possible.

Best poop-related comment I've seen.


n=1

But interesting nonetheless, thanks for sharing your findings.


I have a small following of people who also saw improvements doing this.

Also, I did not come up with this myself, but found a lot of anecdotes pointing in this direction.

And... I'm commenting on a real science piece that seems to be making similar claims.


Have you found that coffee speeds things up?

I have, but I think any stimulant would do similar. I no longer smoke, but that did it too.

I cannot stand the taste of coffee. So no data from my side.

Or just don't eat meat and cheese at all

I've not for the last 12 years. But fake meats/dairies have a very similar macro profile, so I expect them to digest similarly.

I'll not try though, as I'm vegan for ethical reasons.


Better for the environment too.

The best thing for the environment is dying. You stop wasting resources and start fertilizing the soil.

Only a Sith deals in absolutes.

Pijul builds on the Sanakirja db library, which is interesting in its own right.

https://docs.rs/sanakirja/latest/sanakirja/

https://pijul.org/posts/2021-02-06-rethinking-sanakirja/


I prefer this reason, "risk", to the "cost savings" reasons we've seen in Germany (Munich at first), Russia, and Spain (Extremadura at first).


JDBC does not allow pipelining (a Postgres-only feature).

It can reduce the number of db round-trips a lot, especially when using Supabase+RLS (or other systems that require frequent setting of configuration values that are basically fire-and-forget).

Meet Bpdbi, a library with first-class pipelining. It provides a Postgres db driver (binary-only, as the legacy text-based protocol is no longer needed; it just takes up space) and exposes an API that's closer to Jdbi's than to JDBC's (developer-friendly).

https://github.com/bpdbi

It has an extensive benchmark showing it's on par with or faster than other db connectivity stacks.
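For anyone wondering why pipelining matters for the RLS case: the savings are easy to put numbers on. A back-of-the-envelope sketch (the one-set_config-per-statement pattern is my assumption about Supabase-style RLS, not anything specific to Bpdbi's API):

```python
def round_trips(statements, rls_msgs_per_stmt=1, pipelined=False):
    """Count client-server round trips for a batch of statements.

    Without pipelining, every statement (and every fire-and-forget
    set_config for RLS) waits for its own server response.  With
    pipelining, the whole batch is flushed at once and the responses
    are read back together, so the batch costs a single round trip.
    """
    messages = statements * (1 + rls_msgs_per_stmt)
    return 1 if pipelined else messages

# 50 queries, each preceded by a set_config for RLS:
round_trips(50)                  # -> 100 (plain request/response)
round_trips(50, pipelined=True)  # -> 1
```

On a 1 ms network link that's the difference between ~100 ms and ~1 ms of pure waiting for the same work.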

