Hacker News | bjpbakker's comments

A capabilities system like pledge could be a way to use _existing_ packages more safely. However, I don't think it's a very nice way forward. Every application will end up doing its own capability pledging, and mistakes will be made. A lot.

Another approach could be to use an effect system, as PureScript does. The main problem with Node.js packages is that any function you call can execute arbitrary code (such as wiping systems whose IP is in the Russian region). With an effect system in place, the library author has no choice but to declare the side effect, or the code won't compile.
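To give a rough flavor of the idea in Rust rather than PureScript (all names here are hypothetical, and a capability token is only a poor man's approximation of a real effect system): a function that needs network access has to take the token as a parameter, so the side effect shows up in its signature and can't be hidden by a transitive dependency.

```rust
// Capability token: only code that was handed one can call network functions.
struct NetAccess;

// A pure helper: its signature proves it cannot touch the network.
fn left_pad(s: &str, width: usize) -> String {
    format!("{:>width$}", s)
}

// An effectful helper: the NetAccess parameter makes the effect explicit.
// (Stubbed out here; a real implementation would open a socket.)
fn phone_home(_net: &NetAccess, host: &str) -> String {
    format!("would connect to {host}")
}

fn main() {
    let net = NetAccess;
    assert_eq!(left_pad("hi", 4), "  hi");
    assert_eq!(phone_home(&net, "example.com"), "would connect to example.com");
}
```

A malicious update to `left_pad` could not start phoning home without changing its signature, which is exactly the compile-time forcing function the comment describes.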


> "If I can't search the file for this variable and find it easily, it's a bad name"

Like parent mentions, this is about scoping. My rule about short variables is that I have to see their _full_ use (including declaration) in only a few lines on the screen.

Personally, I prefer single-letter variables, especially in lambdas, for consistency (x, y, z).
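For what it's worth, a tiny Rust sketch of what I mean: in closures this short, the entire life of `x`, from binding to last use, is visible at a glance, so a longer name would add noise rather than clarity.

```rust
fn main() {
    // `x` is born and dies inside each closure; no searching required.
    let doubled: Vec<i32> = vec![1, 2, 3].into_iter().map(|x| x * 2).collect();
    let pairs: Vec<(i32, i32)> = doubled.iter().map(|&x| (x, x + 1)).collect();
    assert_eq!(doubled, vec![2, 4, 6]);
    assert_eq!(pairs, vec![(2, 3), (4, 5), (6, 7)]);
}
```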


From the thread it becomes painfully clear how poorly Actalis is set up to act as a CA. Instead, it seems they chose to break the BR by default. Almost 5 months to reissue a little over 250k certificates is not what you would expect from a CA that a major browser should trust.

The argument that there might be some end-users unable to renew their prescription seems mostly intended to gain sympathy. Also, this will most probably not affect “a large swath of internet users”.

I do hope Actalis steps up their game and regains some trust. Or it may become the next Symantec.


On the other hand, as far as I can tell:

* The baseline requirement is 64 bits of entropy and Actalis were providing 63 bits, i.e. short by only a single bit. It would be surprising if the baseline requirements sat a mere one bit of entropy away from insecurity.

* The requirement for 64 bits of entropy is to reduce the risk of hash collision attacks [1] - which have only ever been demonstrated for MD5 and SHA-1, neither of which are used to sign certificates any more.

If web security was a tightrope, this would be like hearing that the second safety net, underneath the first believed-to-be-robust safety net, was found to be strong enough to catch a 900 lbs person, when it was specified for 1000 lbs.

[1] https://cabforum.org/2016/03/31/ballot-164/


Mozilla recognizes that in some exceptional circumstances, revoking misissued certificates within the prescribed deadline may cause significant harm, such as when the certificate is used in critical infrastructure and cannot be safely replaced prior to the revocation deadline, or when the volume of revocations in a short period of time would result in a large cumulative impact to the web. However, Mozilla does not grant exceptions to the BR revocation requirements. It is our position that your CA is ultimately responsible for deciding if the harm caused by following the requirements of BR section 4.9.1 outweighs the risks that are passed on to individuals who rely on the web PKI by choosing not to meet this requirement.

That statement "may cause significant harm" is what I expect weighed on the CA's mind. When revoking a certificate could kill someone, and there is still a high barrier to exploit (i.e. no "proven method that exposes the Subscriber's Private Key to compromise") it should be up to the CA to clearly explain the situation, and up to Ryan to accept the explanation given. ("It is our position that your CA is ultimately responsible for deciding if the harm [...] outweighs the risks")

Clearly Actalis was not in a position to articulate the harm, which is their fault.

That said, I'm fully aware of the compliance hoops that must be jumped through when providing updates to medical devices. If you have to distribute firmware to medical devices, 4 months can be a remarkably fast turnaround. But in that case, CA-issued certificates are probably inferior to self-signed certificates (on an organisational level) that are not subject to external revocation.


I agree it's on them to get it right. It just seems extenuating circumstances at least played a role. I think I quoted the wrong part of the BR - it was adjacent to the part about a large number of users - but it was more along the lines of negatively impacting safety or security, or some such.


> the author prefers dubious advantages to real improved security

Encrypting the data transfer doesn’t improve any of the HTML email security issues. So I fail to see how that would be “real improved security”.

But I do share the author's sentiment about both Protonmail's and Tutanota's failure to use open standards. So maybe I'm biased.


The data in a protonmail account is encrypted with your own key. How is protonmail supposed to encrypt an email if they receive it unencrypted over SMTP?


There are various ways. One would be to use standard IMAP and decrypt the message on the client. Their bridge sort of does that, but with a proprietary protocol.

Either way, that has absolutely nothing to do with the security issues of HTML email. E.g. phishing and tracking still work when you decrypt the message and open it.


> I constantly need to spend mental energy on remembering which is the Ok and which is the Error

Right is actually a synonym for Ok. Besides, Either is useful for _much_ more than error handling. That is exactly why I dislike Error for the left side; often it's simply not an error.
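A sketch in Rust of a non-error use (hand-rolling a minimal Either, since Rust's standard library only offers the error-flavored Result; the names are illustrative): partitioning values, where Left is in no way a failure.

```rust
// Minimal Either, analogous to Haskell/PureScript's Data.Either.
#[derive(Debug, PartialEq)]
enum Either<L, R> {
    Left(L),
    Right(R),
}

// Neither branch is an error: evens go Left, odds go Right.
fn classify(n: i32) -> Either<i32, i32> {
    if n % 2 == 0 { Either::Left(n) } else { Either::Right(n) }
}

fn main() {
    let (mut evens, mut odds) = (vec![], vec![]);
    for n in 1..=6 {
        match classify(n) {
            Either::Left(e) => evens.push(e),
            Either::Right(o) => odds.push(o),
        }
    }
    assert_eq!(evens, vec![2, 4, 6]);
    assert_eq!(odds, vec![1, 3, 5]);
}
```

Calling the left side `Error` here would be actively misleading, which is the point.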


> you went out of your way to choose a hardware combination that is known to properly handle PM on Linux

This is no different than with Windows. Except that more vendors sell you pre-built Windows-compatible systems. With macOS the situation is far worse.

If you want more Linux-enabled hardware, it helps to vote with your wallet and buy hardware from manufacturers that do provide Linux support.


no different ... except...

very funny


> ELFs still don't even have an accepted embedded icon standard FFS

Also Apple does not embed icons in their binaries. Their app bundles are not binaries, they are a directory structure. The icon is just another file, just like the _actual_ executable(s).

> you could publish a binary containing native versions for all existing architectures

This sort of ignores the hardest part of shipping binaries: linked libraries. Dynamically linking everything is simply not always feasible. Not to mention libc.

Also I don't really understand why anyone on Linux would want this. The fact that I can recompile all of the software I use, is a really important feature to me and not a distribution problem. I can see why Apple wanted this to simplify distribution via their Appstore, but IMO that's mostly to work-around their specific distribution problems. I don't see any of those problems on Linux.

(edit: wording, see below)


> Their app bundles are not binaries, they are a directory structure.

Yes, but on Linux no file manager understands directories as bundles (except perhaps GNUstep's GWorkspace).

> Also I don't really understand why anyone on Linux would want this.

Because they want to distribute binaries themselves or via a third-party distribution site (i.e. not part of a Linux distribution) without having the user compile the code themselves (either out of convenience or because they do not want to, or cannot, distribute the source code).

Having said that, this is mainly useful if you want to distribute a single binary that supports multiple architectures. Almost everything is distributed in archives (even self-extracting archives can be shell scripts - although, annoyingly, software like GNOME's file manager makes this harder), so you can use a shell script to launch the proper binary without kernel support.


> Yes, but on Linux no file manager understands directories as bundles (except perhaps GNUstep's GWorkspace).

And the ROX Filer, which sadly hasn't been an active project for many years.


Which wouldn't really be much of a problem for its users if it weren't for GTK breaking backwards compatibility between 2 and 3 (but that is another painful topic).


> I can see why Apple wanted this to simplify distribution via their Appstore, but IMO that's mostly to work-around their specific distribution problems. I don't see any of those problems on Linux.

This predates the Appstore by a huge margin. They added universal binaries to make the transition between 32-bit and 64-bit seamless. And it worked really well, actually.

Soon after, tools popped up to reduce binary sizes by stripping out the 64- or 32-bit part.

The other part that's a bit special is that Apple has the special variables @executable_path, @loader_path, and @rpath in its linker options, along with install_name_tool, which allowed (allows?) you to rewrite a library's install name to an application-specific one. That let you bundle the necessary libraries with a linker path relative to the executable or app resource path. I think this has gotten better recently, but pretty much everyone struggled with it at the beginning.

On Linux this was basically outsourced to system packaging. The developers delegated it to the distro, whereas in the Mac environment, because of the lack of such packaging, the burden was placed on whoever distributes the software, making people think twice about what they link.


> They added universal binaries to make the transition between 32bit and 64bit seamless.

Not just 32 vs 64-bit, but entire architectures. Mach-O universal binaries originated at NeXT, where at one time a binary could (and many did) run on SPARC, PA-RISC, x86 and 68K. On http://www.nextcomputers.org/NeXTfiles/Software you can see this in their filename convention: the "NIHS" tag tells you which architectures (NeXT 68K, Intel, HP, SPARC). The binary format carried over into OS X, where it was secretly leveraged as part of Marklar for many years.

In fact, Universal even on OS X really meant PowerPC and i386 at the beginning of the Intel age. It eventually morphed into the present meaning. I even maintained a fat binary with ppc750, ppc7400 (that is, non-AltiVec and AltiVec) and i386 versions.


>Also I don't really understand why anyone on Linux would want this. The fact that I can recompile all of the software I use, is a really important feature to me and not a distribution problem. I can see why Apple wanted this to simplify distribution via their Appstore, but IMO that's mostly to work-around their specific distribution problems. I don't see any of those problems on Linux.

Couldn't agree more, and yet Snap and Flatpak exist. It's probably so that third parties can package closed-source stuff for all distros easily. These days one of the first things I do on a fresh Ubuntu install is get rid of snapd, because they use it for things where it's useless (e.g., gnome-calculator). If someday they stop packaging the apps directly I'll probably finally go back to Debian.


It isn't just for closed source stuff. Some developers actually care about the user experience and don't want to have to tell people "sorry, you have to wait until someone comes along and decides to package that for your distro, or compile it from source!".


> Also Apple does not embed icons in their binaries. Their app bundles are not binaries, they are a directory structure. The icon is just another file, just like the _actual_ executable(s)

You actually can put an icon into the resource fork of a Mach-O binary, and it’ll show up in the Finder and Dock (assuming the executable turns itself into a GUI app).

It’s an uncommon thing to do, but Qemu uses it, and unfortunately I don’t think there’s another way to embed an icon in a bare Mach-O binary.


The current Apple application structure (the .app directories that originated in NeXTSTEP) isn't what the previous poster was referring to -- the binaries in traditional (pre-OS X) MacOS weren't like this but actual files that could run on either Motorola 68000-series chips or (in the 1990s) IBM's PowerPC chips.


Apple does not use ELF for their binaries; macOS's ld produces the Mach-O format.


Right, I worded that quite strangely. Thanks for the feedback!


> The fact that I can recompile all of the software I use, is a really important feature to me and not a distribution problem.

I've always found this to be an interesting observation about free software. So many complicated things like FatELF and DLL hell are just straight up _not_ an issue when you're working in a source-code world, where you just compile the software for the machine you're using it on.

Most of the efforts around FatELF, Flatpak, etc. seem to me to be driven by the desires of corporations who want to ship proprietary software on Linux, and as such need better standardization at the binary level rather than the software level.

It's a win for Free Software in my mind, that we shouldn't typically have to worry about this added complexity. Just ship source code, and distributions can ship binaries compiled for each specific configuration that they choose to support.


Note that source-code access and FOSS are orthogonal. AFAIK, in older Unix systems the software you'd buy would often come in source-code form. In fact, in the past several Linux distributions shipped a lot of such software.

As an example Slackware distributes a shareware image viewer/manipulator called xv (which was very popular once upon a time): http://www.trilon.com/xv/

It is the license that makes something FOSS, not being able to compile/modify the source code.


Well, except I work on a large open source project and we have to blacklist random versions of gmp and gcc that our code doesn't work with due to bugs.

And, we can't reasonably test with every version of the compiler and libraries, so we just have to wait for bug reports, then try to find out what's wrong.

Whereas I pick one set of all the tools, make a docker image, and then run 60 cpu days of tests. No Linux distro is going to do that much testing.


> So many complicated things like FatELF, dll-hell are just straight up _not_ and issue when you're working in a source code world where you just compile the software for the machine you're using it on.

Said like someone who has never actually had to compile someone else's software. Why do you think so many projects these days have started shipping Docker containers of their build environment? Why are there things like autoconf?


> Also Apple does not embed icons in their binaries. Their app bundles are not binaries, they are a directory structure. The icon is just another file, just like the _actual_ executable(s).

Pedantry. You could mount an ELF as a filesystem if you had any desire to. Structures are just structures.

> This sort of ignores the hardest part of shipping binaries; linked libraries.

Time has shown that dynamic linking all the things is a terrible idea on many fronts anyway. Why do you think there's all this Docker around and compiling statically is on an upward trend?

The solution is simple: DLLs for base platform stuff that provides interfaces to the OS and common stuff, statically compile everything else. Then the OS just ships a "virtual arch" version of the platform DLLs in addition to native on every arch.

The reason the Linux Community don't want this sort of thing is that, frankly, they just hate stability. I mean, the Kernel is stable (driver ABI excepted), but basically nothing outside of that is.


> The reason the Linux Community don't want this sort of thing is that, frankly, they just hate stability.

I'd argue that the reason the Linux community doesn't want this is that it introduces maintenance burdens on the community that only really serve to support corporations shipping proprietary software.

I really don't care about making proprietary software easier on Linux, but I do care about Linux having to carry the baggage of backwards compatibility, like Windows has had to, just so that Google can deliver Chrome as a binary more reliably.


> reason the Linux Community don't want this sort of thing is that, frankly, they just hate stability

And yet I fearlessly upgrade my Linux system at any time. With OSX you first have to check whether the software you use is at all compatible, especially if you use proprietary software.


> And yet I fearlessly upgrade my Linux system at any time

Only because you've never encountered an issue due to an upgrade, probably because your use cases are so mainstream and minimal that you've never had to use applications that aren't in the repo and well tested before release. If you look around and aren't wearing blinders you'll notice that a lot of people do have problems from upgrading.


Lol. This is actually the first time ever someone said I use very mainstream tools, so thank you for that! I can tell you don't know me :)


That _long_ list is limited to the "basic level" telemetry data and does not include e.g. the automatic document samples that Defender sends by default (those include your actual documents).

Microsoft proves over and over again that they are not trustworthy. I agree you shouldn't be using their OS, but sadly many people don't consider this a choice, since it comes pre-installed on most PCs.


As far as I'm aware Defender will not send documents without prompting you first.


Since there's nobody else responsible for releasing Apple's closed source OS, what makes you think Apple is not to blame on this?


Perhaps Nvidia are writing drivers that don't conform to the guidelines Apple provide for approval? Maybe they're trying to pull a Logitech and include lots of data gathering that Apple object to. Or they're ignoring things like events from power management. Or their drivers are just shitty Mojave citizens. Or they're trying to force a Mojave equivalent of GeForce Experience to be installed with their drivers.

If Nvidia are being dicks in the face of reasonable requests, why would that be Apple's fault?


> If Nvidia are being dicks in the face of reasonable requests, why would that be Apple's fault?

Because Apple sold the hardware /with/ the software, and now they completely broke that (<5 year old) hardware?


Point of clarification, the article does not mention and I have no reason to suspect that Apple broke hardware they themselves sold. In fact the article does seem to point out there is support for specific Nvidia cards that Apple sold or approved. Also I find it rather impossible to believe that Nvidia couldn't release something that would restore this ability. Would a user need to disable some security feature temporarily to be able to install it? Maybe, but that's the price you pay for unsupported hardware.

Apple got burned hard [0] by Nvidia and swore off them back around 2009. And Linus Torvalds also called them out back in 2013-ish, IIRC. Nvidia is not a "good" company. Now people have been running things unsupported, Apple closes that hole, and they are all up in arms?

[0] https://gizmodo.com/5061605/apple-confirms-failing-nvidia-gr...


> I have no reason to suspect that Apple broke hardware they themselves sold

In fact my own 15" MBP late 2013 has a GFX750, which is no longer supported according to Apple Support [1].

The worst part for me is that this didn't keep Apple from pushing the update, so running Mojave with an external monitor is hardly possible now.

> Nvidia is not a "good" company.

Agreed. Neither is Apple.

[1] - https://support.apple.com/en-us/HT208898


> In fact my own 15" MBP late 2013 has a GFX750, which is no longer supported according to Apple Support [1]

That support article doesn't mention the GFX 750 and I can't find any record of Apple selling a MBP with a GFX 750...

Edit:

I believe what you meant to say is you have a "NVIDIA GeForce GT 750M with 2GB of GDDR5 memory and automatic graphics switching" [0] which it appears does not support metal.

[0] https://support.apple.com/kb/sp690?locale=en_US


Metal's supported on that card. Works on a GT650M too, from a 2012 rMBP.


You're correct about the card, sorry about that. Got confused I suppose. Indeed I meant to say I have the GT 750.


I should have added this in my edit but I'll put it here:

I stand corrected. I DO think it's wrong for Apple to drop support for something they shipped, especially since it's just over 4 years old. I thought this issue was limited to people who had Mac Pros or Hackintoshes and put an unsupported card in. I'll admit I've only used 13" MBPs for the last 10+ years until my most recent MBP, so dedicated graphics cards weren't in my wheelhouse. I honestly thought they stopped ALL Nvidia cards back in 2009-ish.


As a sibling comment pointed out, it seems it does appear to be supported? I have the same machine:

  Chipset Model:	NVIDIA GeForce GT 750M
  Type:	GPU
  Bus:	PCIe
  PCIe Lane Width:	x8
  VRAM (Dynamic, Max):	2048 MB
  Vendor:	NVIDIA (0x10de)
  Device ID:	0x0fe9
  Revision ID:	0x00a2
  ROM Revision:	3776
  Automatic Graphics Switching:	Supported
  gMux Version:	4.0.8 [3.2.8]
  Metal:	Supported, feature set macOS GPUFamily1 v4


You are aware that third party hardware vendors have been writing drivers for closed sourced operating systems for well over 30 years aren’t you?


> Can you just not write graphics drivers for macOS?

So this is seriously what you suggest owners of an older MBP (official Apple hardware, just over 4 years old) to do?

Since both Apple's drivers and NVidia's drivers are completely closed source, I'd say it's hardly possible to write a working driver (w/ hardware acceleration) for it.


I’m quoting an NVIDIA engineer here; I’m asking why they can’t write their own driver instead of waiting for Apple. (Though, I’m sure that people in the Hackintosh community would not shy away from stepping up if asked…)


Since it's hardware sold by Apple I would assume that Apple figures that out with NVIDIA and collaborates. Apparently I'm wrong and Apple just doesn't give a crap as long as they can sell new products.

For a hobby computer it can be interesting to write your own driver. For a professional laptop not so much :)

