Hacker News | droelf's comments

We're working hard to get ROS out of dependency hell - https://prefix.dev/blog/reproducible-package-management-for-...

Would love to hear your thoughts.


Fun fact: we've been using pixi to compile everything Python-related internally. In fact, PeppyOS was even started with pixi as a base layer (but we pivoted away from it since the project is in Rust and Cargo is the de-facto toolchain). We support uv by default for Python (since it's the most widely used these days), but pixi is also supported; see the note on this page: https://docs.peppy.bot/guides/first_node/


Thank you, this might be a great way to improve the developer experience in the conda/conda-forge ecosystem.


I've been using unikraft (https://unikraft.org/) unikernels for a while and the startup times are quite impressive (easily sub-second for our Rust application).


What drove you to choose that over something like containers?


Yeah, boot time, isolation (proper VM vs containers), and ease of use on a larger Hetzner box.


Did you notice a substantial difference in those factors between more traditional micro VMs that use OCI images (like Firecracker) and unikernels?


Shorter cold-boot times.


If we’re talking about cold boot times, wouldn’t the relevant metric for unikernels be the hypervisor’s boot time?


How would that compare with containers running on Firecracker or other virtio-based μVMs?


A unikernel on Firecracker is probably going to start faster than a container on Linux on Firecracker.


I assume they meant using an OCI image for the rootfs of a firecracker VM, not running a container inside a firecracker VM.

Still difficult to see how the unikernel could be slower, but I doubt the difference would be huge? Don't have anything to back that up though.


Fast boot-up means nothing if your agent/app is slow at runtime (due to virtualization tax or QEMU emulation). Fast boot-up is a PR term; it's easy to optimize for, compared to designing a virtualization layer that performs near bare metal.


Wouldn't faster boot times mean that scale-out can be done on-demand? Whether this is preferable or not over poorer runtime performance is up to the domain, no?


When scaling out, edge latency will overshadow kernel boot-up times: speeding up boot-up from 1.5s to 150ms will not have any perceived impact on app performance when scaling on edge to meet the demand.


Cool! Emscripten-forge also recently got a R distribution that runs natively on the browser: https://blog.jupyter.org/r-in-the-browser-announcing-our-web...


Pixi works for this use case: https://pixi.sh/latest/

It gives you cross-platform binary packages, quickly (also written in Rust).


I think TOML might be fine for a small configuration. Where it breaks down, in my opinion, is in larger configuration files: it just becomes pretty unreadable.

Think about a GitHub Action being written in TOML ... it would probably not look great!
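To make the GitHub Actions point concrete, here is a hypothetical transliteration of a small workflow into TOML. GitHub does not actually accept this; it's only a sketch of how TOML's arrays-of-tables get verbose once the structure nests:

```toml
# NOT real syntax: GitHub Actions only accepts YAML. This is a hypothetical
# transliteration to show that every step in a list needs its own
# [[...]] header in TOML.
name = "CI"

[on.push]
branches = ["main"]

[jobs.test]
runs-on = "ubuntu-latest"

[[jobs.test.steps]]
uses = "actions/checkout@v4"

[[jobs.test.steps]]
name = "Run tests"
run = "pytest tests/"
```

In YAML the two steps are just two dashes under `steps:`; in TOML each one costs a full `[[jobs.test.steps]]` header, and it only gets worse with matrices and nested `with:` blocks.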


We've been working on Pixi, which uses Conda packages. You can control versions precisely and also easily build your own software into a package to ship it. Would be curious to chat if it could be useful as an alternative to `mise`.


We've been building `pixi` and more specifically "pixi global" as a replacement for homebrew, but based on Conda packages. It creates a single virtual environment per globally installed tool (deduplication works by hard-linking) and then links out the binaries from the isolated environments to a single place.

It's written in Rust and quite fast: https://pixi.sh
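For a sense of the workflow, a sketch of what `pixi global` usage looks like (package names are just examples, and subcommand names may differ slightly across pixi versions):

```shell
# Each tool gets its own isolated Conda environment;
# binaries are linked into a single directory on PATH.
pixi global install ripgrep
pixi global install jq

# List globally installed tools
pixi global list

# Remove a tool along with its environment
pixi global uninstall jq
```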


You should really try `pixi` and `pixi global` - uses Conda packages but much faster, and great experience for installing packages globally.

https://pixi.sh


I think this is really cool. We're tackling this problem from the other side by building `pixi` (pixi.sh) which bundles project / package management with a task runner so that any CI should be as simple as `pixi run test` and easy to execute locally or in the cloud.
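As a sketch, a minimal `pixi.toml` for a Python project where CI just calls the task runner (the dependencies and task commands are illustrative, and key names like `depends-on` have varied across pixi versions):

```toml
[project]
name = "my-project"
channels = ["conda-forge"]
platforms = ["linux-64", "osx-arm64", "win-64"]

[dependencies]
python = "3.12.*"
pytest = "*"

[tasks]
lint = "ruff check ."
test = "pytest tests/"
# tasks can depend on other tasks
ci = { depends-on = ["lint", "test"] }
```

With this, `pixi run ci` behaves the same locally and on any CI runner, since pixi resolves the pinned environment first.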


My team has a setup that sounds essentially the same using Nix via devenv.sh. We deterministically bundle and run everything from OpenTofu and all its providers to our programming languages runtimes to our pre-commit hooks this way, and it also features a task runner that builds a dependency graph and runs things in parallel and so on.

Our commands for CI are all just one-liners that go to wrappers that pin all our dependencies.
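Roughly what a `devenv.nix` like the one described might look like — a hedged sketch, with illustrative package and script names (and note that newer devenv versions rename `pre-commit` to `git-hooks`):

```nix
{ pkgs, ... }: {
  # Pinned tools available in the dev shell
  packages = [ pkgs.opentofu pkgs.nodejs ];

  # Language runtimes managed by devenv
  languages.python.enable = true;

  # Entry points that CI wrappers call as one-liners
  scripts.plan.exec = "tofu plan";

  # Deterministic pre-commit hooks
  pre-commit.hooks.shellcheck.enable = true;
}
```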

Lately I've been working with a lot of cross-platform Bash scripts that run natively on macOS, WSL, and Linux servers, with little to no consideration for the differences. It's been good!


I'd be interested to know more about a team that uses Nix and Guix. Is there a website or email an interested party can contact?


that’s not really what’s new or special about pixi, is it? poetry (poethepoet) and uv can both do variations of this.

From the outside, pixi looks like a way to replace (conda + pip) with (not-conda + uv). It’s like uv-for-conda, but also uses uv internally.

Better task running is cool, but it would be odd to use pixi if you don’t otherwise need conda stuff. And extra super duper weird if you don’t have any python code!

