
While not transparent to users, I'd just use SSH ProxyCommand like I did in https://github.com/ThomasHabets/huproxy

Not exactly what I built it for, but it'll do the job here too, and it's able to connect to private addresses on the server side.


A problem with that approach is that, after an upgrade, libc can decide to start making syscalls you were not expecting. Like the first time you call `printf()`, it calls `newfstatat()`. Only the first time. Maybe in the future it'll call it more often than that, and then your binary breaks.

I'm not sure what glibc's latest policy is on static linking, but at least it used to be basically unsupported, and bugs about it were ignored. And even if it is supported, you can't know whether, under some configuration or runtime circumstance, it uses dlopen for something.

Or maybe once you juggle more than X file descriptors some code switches from using `poll()` to using `select()` (or `epoll()`).

My thoughts last time I looked at seccomp: https://blog.habets.se/2022/03/seccomp-unsafe-at-any-speed.h...


This is a problem, but FWIW libcs should be falling back to old system calls. You can block clone3 today and see that your libc falls back to clone.


Yeah. But it still means wandering into de facto unsupported territory in a way that pledge/unveil/landlock does not.

Your example may be true, but I'm guessing it's not a guarantee. Not to mention if one wants to be portable to musl or cosmopolitan libc. The alternatives are inherently more likely to work in a way that any libc would be "unsurprised" by.


Yeah for sure, it's a real issue. In general, seccomp feels hard to use unless you own your stack top to bottom.


> A problem with that approach is that libc can after an upgrade decide to start doing syscalls you were not expecting.

That would break capsicum, too, so I don’t see how that’s a problem when “comparing Capsicum to using seccomp in the same way”.


That's the approach I meant by "that approach": the library the parent commenter was talking about writing for a customer. Compare this to Landlock or OpenBSD's pledge/unveil.


Now that Landlock actually is a thing, have you considered writing another followup? Given what I've seen of landlock, I expect it'll be spicy...


I took the bait.

“The goal of Landlock is to enable restriction of ambient rights (e.g. global filesystem or network access) for a set of processes. Because Landlock is a stackable LSM [(Linux Security Model)], it makes it possible to create safe security sandboxes as new security layers in addition to the existing system-wide access-controls. ... Landlock empowers any process, including unprivileged ones, to securely restrict themselves.”

https://docs.kernel.org/userspace-api/landlock.html


I've actually found it pretty fine. It doesn't have full coverage, but they have a system of adding coverage (ABI versions), and it covers a lot of the important stuff.

The one restriction I'm not sure about is that you can't say "~/ except ~/.gnupg". You have to actually enumerate everything you do want to allow. But maybe that's for the best. Both because it mandates rules not becoming too complex to reason about, and because that's a weird requirement in general. Like did you really mean to give access to ~/.gnupg.backup/? Probably not. Probably best to enumerate the allowlist.

And if you really want to, I guess you can listdir() and compose the exhaustive list manually, after subtracting the "except X".

I find seccomp unusable and not fit for purpose, but landlock closes many doors.

Maybe you know better? I'd love to hear your take.


I definitely don't know better, and after taking a few more looks at landlock, I'm not even sure what my objections were, probably got it confused with something else entirely. Confusion and ignorance on my part I guess.


Yeah I'm not a fan of seccomp (https://blog.habets.se/2022/03/seccomp-unsafe-at-any-speed.h...).

On Linux I understand that Landlock is the way to go.


Landlock right now doesn't offer a lot for things that aren't file system access. Other than that it's great; you can even have different restrictions per thread if you want to.


Yeah, but the file system is where I put most of my files. :-)

Between file system, bind/connect, and sending signals, that covers most of it. Probably the biggest remaining risk is any unpatched bugs in the kernel itself.

So one would need to first gain code execution in the process, and then elevate that access inside the kernel, in a way that doesn't just grant root-but-still-Landlocked, all with a much smaller effective syscall attack surface. Even if there's a kernel bug in ioctl on device files, Landlock can turn that off too.


I agree, but it would be nice if it had similarly fine-grained APIs for network calls. That said, I solved it by using LD_PRELOAD and SOCKS5. It's not perfect, but good enough.

Landlock is one of my favorite Linux-only APIs. It almost feels like it was FreeBSD's answer to some Linux feature.


> Java on the other hand makes it impossible to get a single distributable.

Heh, I find this very amusing and ironic, seeing how Write Once Run Anywhere was a stated goal with Java, that failed miserably.


It was successful for a while. Java applets were once fairly common. But then Flash largely replaced them, and then HTML5 killed them off.


A very limited "anywhere", but yes.

For that use case, was it active content, was it shipping intermediate representation, was it a sandbox? To all three: yes, very poorly.


I'm not saying we should phase Java out. But it's pretty clear to me that Java was a bad experiment in almost every aspect, and we should at least not add new use cases for it.

So no. No, please god no, no Java in the terminal.

More ranting here: https://blog.habets.se/2022/08/Java-a-fractal-of-bad-experim...


And yet Java is more than Java. There are lots of more modern languages on the JVM. The ecosystem is huge and still has lots of inertia.


Yeah. Some of my critique applies to the language, some to the JVM and thus across languages.

Kotlin sure is less awful, for example. But the JVM, as I describe, was always a failed experiment.


Is that the difference between forced pre-commit hooks and opt-in ones? I don't want to commit something that doesn't build. If nothing else, it makes future bisects annoying.

But if I intend to squash and merge, then who cares about intermediate state.


> I don't want to commit something that doesn't build.

This is a really interesting perspective. Personally I commit code that will fail the build multiple times per day. I only care that something builds at the point it gets merged to master.


So basically, not adhering to atomic commits. That's fine if it's a deliberate choice, but some people like me think commits should stand on their own.

(I'm assuming you are not squashing when merging; otherwise it's pretty much the same workflow)


> i'm assuming your are not squashing when merging, else it's pretty much the same workflow

I AM squashing before merging. Pre-commit hooks run on any commit on any branch, AFAIK. In any serious repo I'd never be committing to master directly.


Honestly, I find that a really weird view. I use local commits for work in progress. I feel like insisting on atomic commits in your local checkout defeats the entire purpose of using a tool like git.

What do you do when you are working on something and are forced to switch to working on something else in the middle of it?


I'm merely the grandparent commenter, not the one you replied to directly, but I can tell you what I do for checkpointing some exploratory work or "I'll continue this next week".

I usually put it on a branch, even if this project otherwise does all its development on the main branch. And I commit it without running precommits, and with a commit message prefix "WIP: ". If it's on a branch you can even push it to not lose work if your local machine breaks/is stolen.

When it's time to get it into the main branch I rebase to squash commits into working ones.

Now, do I make sure that my final commit history of, say, 3 commits actually builds at each commit? For personal projects, no. Diminishing returns. But in a collaborative environment: how fun will it be for future you, or your teammates, to run bisect if half the commits don't even build?

I have this workflow because it's so easy to add a feature that breaks 3 tests, to be fixed later. And the formatting is off. And now I add another change, and I just keep digging, and one can end up in an "oh no, how did I end up here?" state where different binaries in the tree need to be synced to different commits to even build.

> I feel like insisting on atomic commits in your local checkout defeats the entire purpose of using a tool like git.

WIP commits is hardly the only benefit of git or other DVCS over things like subversion.


> What do you do when you are working on something and are forced to switch to working on something else in the middle of it?

`git stash` is always an option :) but even if you want to commit it, you can always undo (or `--amend`) the commit when you get back to working. I personally am also a big fan of `git rebase -i` and all the things it allows me to fix up in the history before merging (rebasing) in to the main branch.


All of those are things I would refer to as making a commit :)


I interpreted the parent's post as: as long as my combination of commits results in something working before getting merged, it's fine.

Local wip commits didn't come to mind at all


Well we are in a discussion about pre-commit hooks. Pre-commit hooks run on local wip commits.


Well, unless you inhibit them with `-n`. Which I would for WIP commits.


Then what’s the point? Just leave them off and run the tests when you want to run them.


Because 99% of my commits are not WIP commits. So I almost always want to run them.

Hell, even most WIP commits will pass the tests (e.g. tests are not yet added for the new code), so I'd run them then too.


Some people write tests first.


And commit such that the final timeline has broken tests for half of the commits?

Sounds like an awful way to live your life.


No, we're not talking about the final timeline. That is finalised when (or if) code is merged to the mainline. We're talking about what happens when the command "git commit" is executed.


Ok, if you're talking about just WIP commits that will be squashed and will never be part of mainline, then shrug.

For me that's a tiny proportion of commits. I'd rather avoid taking a whole finished feature branch and then spending more time cleaning up a sloppy commit history.

Sure, sometimes it's correct to squash, but for nontrivial changes I go with https://github.com/google/eng-practices/blob/master/review/d...


I feel like I found better git commands for this, that don't have these problems. It's not perfect, sure, but works for me.

The pre commit script (https://github.com/ThomasHabets/rustradio/blob/main/extra/pr...) triggers my executor which sets up the pre commit environment like so: https://github.com/ThomasHabets/rustradio/blob/main/tickbox/...

I run this on every commit. Sure, I have probably gone overboard, but it has prevented problems, and I may be too picky about not having a broken HEAD. But if you want to contribute, you don't have to run any pre commit. It'll run on every PR too.

I don't send myself PRs, so this works for me.

Of course I always welcome suggestions and critique on how to improve my workflow.

At least nothing is stateful (well, it caches build artefacts), and aside from "cargo deny" there are no external deps.


Only a minor suggestion: git worktrees is a semi-recent addition that may be nicer than your git archive setup


The portability story for Go is awful. I've blogged about this before: https://blog.habets.se/2022/02/Go-programs-are-not-portable....

It's yet another example of the Go authors just implementing the least-effort solution without even a slight thought to what it would mean down the line, creating a huge liability/debt forever in the language.


You expect Go to magically make systemd journaling exist in macOS?

I can't even begin to comprehend the thought process here.


I can't even begin to comprehend how you got from here to there.

I encourage you to elaborate on how you think that's connected, and not make a strawman argument. You may not have done so deliberately, but if you can't begin to comprehend how I could mean what you said, then you could give me the benefit of the doubt and entertain the idea that I did not.

Edit: In my blog post I give the example of getpwnam_r. One should not check whether the OS is Solaris in a given version range, but whether getpwnam_r has one signature or the other.


No language is going to do that for you. And I don't think Go promised otherwise.

Perhaps it's about managing expectations.


I mean… my whole blog post is about how autotools does that easily, and Go does not.

"Language", no. But Go's build comments are not really part of the language proper.


It doesn't.

For users whose groups fall within the GID range in the sysctl `net.ipv4.ping_group_range`, the normal ping command uses this non-root method.

Sure, maybe your system still sets suid root on your ping binary, or grants it `cap_net_raw` (as shown by `getcap`), but mine does not.


Since basically all the comments are about how both the author and many commenters are confused about what UDP and DGRAM sockets are, I have corrected the author's code to no longer miscommunicate what protocol is being used.

https://github.com/ThomasHabets/rust-ping-example-corrected

There is no UDP used anywhere in this example. ICMP is not UDP.

I'm not saying my fix is pretty (e.g. uses unwrap(), and ugly destination address parsing), but that's not the point.

