> macOS on Apple silicon processors (M1, M2, and M3) includes a feature which controls how and when dynamically generated code can be either produced (written) or executed on a per-thread basis. […] With macOS 14.4, when a thread is operating in the write mode, if a memory access to a protected memory region is attempted, macOS will send the signal SIGKILL instead.
This isn’t just any old thread triggering SIGKILL, it’s the JIT thread privileged to write to executable pages that is performing illegal memory accesses. That’s typically a sign of a bug, and allowing a thread with write access to executable pages to continue executing after that is a security risk.
But I know of other language runtimes that take advantage of installing signal handlers for SIGBUS/SIGSEGV to detect when they overflow a page so they can allocate more memory, etc. This saves them from having to do an explicit overflow check on every allocation. Those threads aren’t given privilege to write to executable memory, so they’re not seeing this issue…
So this sounds like a narrow design problem the JVM is facing with their JIT thread. This blog doesn’t explain why their JIT thread needs to make illegal memory accesses instead of an explicit check.
> "This blog doesn’t explain why their JIT thread needs to make illegal memory accesses instead of an explicit check."
Because explicit checks on every memory access (pointer dereference) make Java significantly slower, even with compiler optimisations to remove redundant checks[1]. Memory protection is a fundamental, very useful hardware feature, and it's perfectly reasonable for user-space language runtimes to take advantage of it.
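To make the trade-off concrete, a rough sketch (pseudocode only, not actual HotSpot output):

```
// Explicit: the JIT would have to emit this before (almost) every dereference.
if (obj == null) throw new NullPointerException();
x = obj.field;

// Implicit: just emit the load. A null obj faults at a tiny address,
// and the SIGSEGV/SIGBUS handler turns the fault into the exception.
x = obj.field;
```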
Or, to put it another way, SIGSEGV has been a part of Unix-family OSes for decades. It works perfectly fine on Linux and Windows and there's no reason it shouldn't work on macOS.
[1] (Many years ago I worked on a cross-platform implementation of the Java runtime and wrote much of the threads and signal handling code. We had an option to enable explicit memory checks, which got us up and running faster on new platforms where the SIGSEGV handlers hadn't been written yet. From memory this made everything something like 30-50% slower, so it was definitely worthwhile to implement SIGSEGV handling. In our case SIGSEGV handlers were used both as part of the garbage collector/memory management and to implement Java's NullPointerException)
Well, everyone breaks Adobe Flash... that's the whispered exception at the end of the rule: "don't break userspace and then blame the user (unless that user is Adobe Flash... F--- Adobe Flash)" ;P.
I feel like preventing illegal writes to protected memory is less "breaking user space" and more "protecting all space".
This is like arguing to allow the guy who can't drive and just pin-balls his way down the freeway, bouncing off other cars, because preventing him from driving would take away his personal freedoms.
There was no conceivable version of a road system where that behavior would ever be okay. However, it's not only conceivable but apparently standard practice in systems programming to "Try and Fail" instead of "Only Proceed if Allowed".
So, if we want a tortured metaphor, what the JVM is doing is like tapping a turnstile to see if the pass is still valid, so that on the happy path it saves the extra check. Now Apple has decided that instead of just showing a red X and letting you buy a new pass, in the future you get shot in the back of the head if you try with an invalid pass.
That’s how MS works, which leads to compatibility but less stability. Historically with Apple, it’s their way or the highway: less compatibility, but the OS is more stable.
That probably needs to be qualified with "Oracle JVM Java applications" because I've had many hours of running Minecraft on macOS 14.4 under Zulu's JVM (a pre-14.4 release which means it doesn't have any workarounds) without any issues.
Almost all Java builds, including Zulu, are made from the same OpenJDK code base. The fact that you didn't experience crashes doesn't tell us anything. Maybe you were lucky, or maybe this particular bug just doesn't manifest for this particular application.
Only Java is unstable because it’s not following Apple’s rules. Everything else, including the apps that follow the rules, are stable. That’s how it’s always been for the Apple ecosystem.
Java is not breaking any rules. macOS has memory management provisions for JITs and virtual machines just like every other OS. SIGSEGV has been part of Unix for decades and works perfectly fine on other OSes. And still works "most of the time" on macOS. Apple broke something here, and I'm sure they will fix it pretty soon.
this reads like a dumbed-down description meant to be read by someone who does not know the technical details, and on top of that, one with an agenda to mislead (in this circumstance) :)
It said it affects versions back to Java 8, so it seems this design has been there for a while, and since older versions are EOL, any Java-level fix would not be backported to them.
The Android team has been forced to accept that Kotlin without the Java ecosystem is an oxymoron; thus not only has ART been updatable since Android 12, but Java 17 LTS is now the latest supported version.
And on the SDK side, they need to use whatever IntelliJ requires.
No, you can just download Adoptium, Azul, and probably other builds. They're up to date and will be for a few years. Not sure about Oracle, but I wouldn't recommend it anyway.
“The Java Virtual Machine […] leverages the protected memory access signal mechanism both for correctness (e.g., to handle the truncation of memory mapped files) and for performance.”
Where by “protected memory access signal mechanism”, they mean SIGBUS/SIGSEGV, i.e., a segfault.
This is probably because the JVM is doing “zero cost access checks”, which is where you do the moral equivalent of:
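(a stand-in sketch with invented names, in the same metaphorical spirit):

```
try:
    write_to(file)            # happy path: no up-front permission check
except Fault:                 # the SIGSEGV/SIGBUS handler, morally
    fix_up_and_retry(file)    # rare slow path: remap, retry, or raise
```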
…because it’s faster than checking file permissions before every write. (It’s a common pattern in systems programming, so it’s not quite as crazy as it sounds.)
I guess my opinion on this is that if you write your program to intentionally trigger and ignore kill(10) / kill(11) from the host OS, for the sake of a speed boost, you can’t really get too mad when the host OS gets fed up and starts sending kill(9) instead.
I also wonder what happens in the (extremely rare) case where the signal the JVM is trapping is a real, unexpected segfault, and not one it triggered deliberately.
This isn't about files, this is about plain pages of RAM[0]. It is a basic CPU operation to trap on unmapped pages, and OSes rightfully expose this useful feature (in addition to using it themselves), allowing processes to do many things, from lazily-computed memory regions to removing significant amounts of overhead doing a thing the CPU will inevitably do itself anyway.
I believe the "the truncation of memory mapped files" section is for when the Java process memory-maps a file (as Java provides memory-mapping operations in its standard library, and probably also uses them itself), and afterwards some other unrelated process truncates the file, resulting in the OS quietly making (parts of) the mappings inaccessible. Here the process couldn't even check the permissions before reading (never mind how utterly hilariously inefficient that would be, defeating the purpose of memory-mapping) as the mappings could change between the check and subsequent read anyway.
And it's worth noting that while man mmap on macOS doesn't indicate what happens when the protections are violated (that is, if you try to read, write, or execute in violation of the set protections), the man page for the related function mprotect has this to say in macOS 14.3 (what I have available):
> When a program violates the protections of a page, it gets a SIGBUS or SIGSEGV signal.
(The Linux man pages for mmap and mprotect indicate SIGSEGV would be signaled.)
So the past use and assumption (SIGSEGV or SIGBUS) are consistent with the expectations of mmap and mprotect given the documentation provided.
However, I still stand by my pseudocode - I claim that it will give a fairly accurate impression of the basic concept of zero-cost access checks to a reader who isn’t familiar with low-level systems programming. (That said, I have updated my comment to make it clear it’s more of a metaphor than a literal description.)
A talk at FOSDEM this year [0] describes how the OpenJDK JVM relies on triggering SIGSEGVs in order to efficiently implement thread-local safepoint checks - I wonder if that would also be affected?
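The polling trick described in that talk works roughly like this (a conceptual sketch, not OpenJDK's actual code):

```
// Emitted by the JIT at loop back-edges and method returns:
load from polling_page           // normally a single cheap load

// When the VM wants every thread stopped at a safepoint:
mprotect(polling_page, PROT_NONE)

// In the SIGSEGV handler: if the faulting address is polling_page,
// park the thread at the safepoint instead of treating it as a crash.
```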
> I also wonder what happens in the (extremely rare) case where the signal the JVM is trapping is a real, unexpected segfault, and not one it triggered deliberately.
Just an educated guess, but the JVM knows whether a thread may expect a segfault at a given point or not. If no thread expects one, then I assume the segfault handler just writes out that a segfault happened, with some useful info, and terminates the program. I’m sure about the effect, at least: I have caused a JVM to segfault a couple of times with native memory, and it handled it as expected.
"The issue was not present in the early access releases for macOS 14.4, so it was discovered only after Apple released the update."
I wonder if Oracle really didn't know beforehand.
Apple has long been telling people (writing JITs) that to write to executable memory, they need the correct entitlements (com.apple.security.cs.allow-jit, .allow-unsigned-executable-memory, and/or .disable-executable-page-protection). I wonder if Oracle has been ignoring them, satisfied with the signal-handler workaround, and Apple finally enforced their policy.
Apple also expects that developers deploying apps on macOS that use Java have these entitlements configured on a per-app basis. Oracle likely objects that this is not really for the application developer to certify, since it's pretty much out of their control.
In any case, I'm doubting Oracle's release is the whole truth.
> Apple has long been telling people (writing JITs) that to write to executable memory, they need the correct entitlements (com.apple.security.cs.allow-jit, .allow-unsigned-executable-memory, and/or .disable-executable-page-protection). I wonder if Oracle has been ignoring them, satisfied with the signal-handler workaround, and Apple finally enforced their policy.
As far as I understand, that’s not the issue; the JIT itself works just fine. The JVM just uses the (quite common) trick of not actually bounds-checking everything, instead letting the hardware trigger a fault and expecting it to “bubble up” to the program at hand, so it can handle certain cases “for free”. This behavior was changed by Apple, which causes the issues.
This is honestly a wild and out-there claim. The OpenJDK team would never want to see this happen to their user base. They’re some of the most professional programmers I’ve ever seen.
The whole truth is that the Apple kernel team broke user space.
The main question now is why it wasn't exposed in the 14.4 pre-releases. This could mean some very urgent and risky change made its way into the 14.4 release, or that the whole macOS release process is broken and unstable.
nitpick: Apple doesn't follow SemVer 2.0, but they do have a semantic versioning scheme; that is, the version components carry a certain semantic, it's just that this semantic is different from the one defined by the SemVer 2.0 specification.
One can have any sort of semantic versioning that is not SemVer 2.0 compliant and still be useful; see e.g. Rails or Ruby.
Even .NET assemblies are not SemVer 2.0 compliant: their pattern is maj.min.patch.build, but SemVer 2.0 specifies that there can only be three components and build info must come after a plus, like maj.min.patch+build.
I don’t think the article claims that a Java process tries to access some other process’s memory.
In a typical modern operating system, a memory page can be non-writable and non-executable, writable and non-executable, or non-writable and executable, but not simultaneously writable AND executable.
If you generate executable code at runtime, then you need write access to a page to write the executable code into that page. Then you need to tell the operating system to change the page from writable to executable.
If you then try to write to the page, you’ll get a signal (SIGSEGV or SIGBUS, according to the article).
Oracle’s JVM apparently relies on this behavior: a Java process sometimes tries to write to a page (in its own memory space) that is not marked writable. The JVM then catches the SIGSEGV and recovers (perhaps by asking the operating system to change the page back from executable to writable, or by arranging to write to a different page, or to abort the write operation altogether).
It's not. It's trying to access unmapped or protected memory in its own process.
Basically, what it's used for is to implement an 'if' that's super fast on the most likely path but super slow on the less likely path.
It's not super clear what it's being used for here (this trick is often used for the GC, but the fact that Graal isn't affected means that likely still works). Possibly they are using it to detect attempts to use inline-cache entries that have been deleted.
object.field is implemented as a direct load from the object; if the object turns out to be null, the resulting signal is caught and turned into a NullPointerException.
In a virtual memory operating system, every program has its own address space. Accessing an unmapped address is not the same as trying to access another process's memory.
It's also pretty common to use memory protection to autoextend stacks... Allocate the stack size you need, ask the OS to mark the page(s) after the stack as protected, catch the signal when you hit the protection, allocate some more stack and a new protected page unless the stack is too big. Works for heaps too.
Let the MMU hardware check accesses, so you don't have to check everything in software all the time.
A fairly common idiom is to use memory protection to provide zero-cost access checks: you can generally catch the signals produced by most memory faults, work out where things went wrong, and convert the memory access error into a catchable exception, or lazily construct data structures or code.
So you want the trap, but the trap itself can be handled. It sounds like there’s been a semantic change when the trap occurs for execution of an address or an access to an executable page.
There are also a bunch of poorly documented Mac APIs to inform the memory manager and linker about JIT regions, and I wonder if it’s related to those. It really depends on exactly what Oracle’s JVM is trying to do, and what the subsequent cause of the fault is.
Certainly it’s a less than optimal failure though :-/
I’m asking because the reasons seem dumb to me, which is why I’m asking people smarter than I am about low-level memory management if they’re legitimate.
JIT compilation can happen at any time. The runtime wants to create a native version of a previously interpreted snippet of code when it is called frequently enough to warrant this.
The article also describes W^X functionality, which means a region of memory is either executable (x)or writable. On macOS 14.4 violating this either-or condition results in a signal that can not be handled by the process.
Accessing such areas is sometimes done deliberately since programmers could rely on the OS telling them what just happened using signals instead of nuking the process wholesale. Doing it without signals is usually slow and/or clunky (null-pointer checks, read/write permissions, existence of pages), or straight out impossible.
Accessing other processes' memory is not the concern since virtual memory provides each process the illusion of having the entire address space for itself.
I just bought a MacBook Pro with the M3 Max chip and installed MATLAB R2023b. Sonoma 14.3 is in place. As a requirement, I also had to install Corretto 8; MathWorks only supports the Java 8 JRE included with Amazon Corretto 8. I am already having several problems in MATLAB with this new setup. Can I assume that updating to Sonoma 14.4 might very well cause even more problems? I really don't understand any of this.
They may not fix it, but my understanding is they are relying on undocumented features and that's always a crap shoot. My company does low-level language stuff and we've been burned like this, too. We decided not to trade performance for compatibility in the last decade or two.
EDIT: maybe not undocumented, but undefined behavior?
When a kernel update breaks all JVM versions starting from Java 8, the kernel devs fucked up. Even worse when the breaking change is in the final production release only and not the beta release. Completely obvious that this is a bug.
A segmentation fault should trigger SIGSEGV, not SIGKILL. They changed the behaviour of the kernel, which broke the JVM and any other applications that are designed according to the POSIX standards. https://pubs.opengroup.org/onlinepubs/9699919799/functions/V...
It is always funny to me when Apple zealots come into threads blaming everyone but Apple when software breaks, complaining that Java doesn’t follow Apple standards or some crap. Then 9 days later Apple issues a fix, because they did indeed break it.
It seems highly unlikely that the macOS people don't test anything on the JVM during acceptance. It's even more suspicious that this change didn't happen during the public beta. Is it possible that Apple is firing a warning shot at Java? Even as a huge fan of Hanlon's razor, this seems like such an enormous oversight that it's hard for me to ascribe it to incompetence.
> It seems highly unlikely that the macOS people don't test anything on the JVM during acceptance.
I would be surprised if they do, to be honest (Apple doesn't even catch obvious bugs in the new macOS settings panel, which really makes me wonder if there is a software QA process at all). For 3rd party apps they seem to rely on the software vendors to holler if a macOS update breaks their app. That's why the macOS prerelease versions exist. But since the bug wasn't present in the prerelease, affected vendors couldn't catch it. It's still a fuckup in Apple's release process of course (which tbh also isn't surprising).
I'm stumbling over a couple of annoying problems when opening the DNS server subpanel via search (because without search it's pretty much impossible to find that panel, but that's a separate issue).
One is that occasionally there's an error popup "Extension process Network(4433) exited." just when clicking on the 'DNS servers' search result.
The other is that when accidentally hitting "Enter" after entering a new DNS server address, the entire DNS Servers subpanel closes even though I want to enter a second address (which sucks from a user perspective, but might even be consistent with the UX guidelines; OTOH I would expect that pressing Enter in a text input box would not close the UI panel containing it, but maybe that's just me). But then, clicking the previous search result 'DNS servers' to open the DNS servers panel again, the click does nothing this time.
One has to clear the search box, enter the search term again, perform a new search, and then click the search result 'DNS servers' again to get the subpanel for entering DNS server addresses.
I guess the search is also broken like this for other subpanels, but changing the DNS servers is about the only situation where I'm using the search box.
In the old Settings panel all that worked as expected (and apart from that, everything was also a lot snappier; somehow Apple engineers managed to create a simple Settings window that suffers from performance problems, but again, different issue).
No idea what the OP is referring to, but for a few months I could pretty consistently cause Settings to soft-lock by loading a configuration profile while the Settings window was open. Small things like that are basically everywhere in macOS.
It's not a problem that breaks all JVM based software instantly. So maybe Apple tests but not long enough to trigger this issue.
I really don't know what Apple would be 'warning' against. Don't use Java? There are tens of thousands of business and development tools depending on the JVM. Blocking Java would diminish the value of macOS tremendously and doing so without warning would open Apple up to lots of lawsuits.
>It's not a problem that breaks all JVM based software instantly.
Do you know how long it takes to reproduce? The OP was light on details here. I assume that a memory access issue with the JIT would pop up pretty quickly, though.
I'm running an Eclipse development environment that's regularly compiling a huge codebase. I've had 2 crashes this week after updating, so less than one a day. That's assuming it isn't an Eclipse bug ;)
Just being curious: With or without having created a hs_err_pid<pid here>.log?
Why do I ask: I've been ten days now on 14.4 and cannot see any change in Eclipse and Tomcat behaviour.
Another example for how preventing users from doing rollbacks is a terrible practice. Even if it's not your application's fault, users may have very good reasons to revert an update, if only temporarily.
This also bothers me on Android. Sometimes, an app update may break something and prevent me from using it. But Google doesn't allow me to reinstall a previously published version from the Play Store. If I don't have to (or can't easily) do without that application until a fix might be released, my only option is to find an older release on some shady mirror site.
Even if this was the right thing, they could / should have changed this behavior in a pre-release, because it's exactly the kind of OS API change that will catch people off guard. As another commenter wrote, I'd consider this either a serious flaw in Apple's release process, or they learned about some very dangerous vulnerability where the old behavior was abused and decided they would rather annoy all users and vendors of Java software out there than tolerate the vulnerability in macOS. But in that case I'd surmise that at least Oracle would have been informed about it by now.
A gross and low-performance option for now might be to run Java under Rosetta. I'm saying that based on the report that this is Apple silicon specific, and processes under Rosetta have a bunch of quirks to support Intel semantics. This would let you work around the issue for now.
That said, I’m curious what the exact scenario that leads to this is; I’m assuming it’s not common, as you would expect it to have come up during betas and pre-release seeds.
> I’m assuming it’s not common as you would expect it to have come up during betas and pre-release seeds.
The article specifically says that the issue was not present in early access releases, so it was not possible to discover it before the actual release.
I wonder if it's the same reason as why Civilization 6 stopped working on iPadOS 17.4. Did they change something deep in the kernel for DMA compliance?
I wonder if we’re about to enter 4-5 years of macOS “dark ages”, due to Apple grappling with EU/DMA.
Much like Microsoft in the early 2000s, between the IE lawsuit and grappling with internet security/viruses. Windows XP, launched in 2001, was considered by most a great OS, and didn’t have another good successor until 8 years later (Windows 7).
It’s not at all like they didn’t have the time or the resources to deal with this.
I think we already saw some of this with the recent bullshit they tried to pull with PWAs in iOS 17.4, where they were hoping to just let things break and shift the blame and anger towards the EU instead.
Windows, as a kernel and by extension as a server OS, is very resilient and stable, to the point that there is a Windows NT 4 machine in a certain railroad control system that has been running continuously for 14 years without any restarts. It even boots back up without problems in disastrous cases such as power loss due to a hurricane or earthquake. Trust me, it was made by Dave Cutler; it just works.
It is really the client-facing side of Windows that sucks (warning: explicitly strong language), such as having really shitty software known as Office (like, god, why Word and not LaTeX, and why spreadsheets when we have databases that we can query efficiently?). Or not being able to have multi-user RDP sessions due to Microsoft having a licensing dispute with Citrix about 20-ish years ago (fuck you Citrix, you asshole!). Or why do I have to jump through a lot of hoops and install a lot of "C++ redistributables" to run some antique software? Or why do I have to fight through a lot of group policy simply to enable WinRM and get remote PowerShell management?
Either way, I'm typing this on a Windows 11 desktop with WSL2 on. The hybrid experience is incredible, unless you need some performance-critical app (WSL2 is in general slower than bare-metal Windows and bare-metal Linux, of course, except in machine learning).
Things like using 9P to cross the Windows filesystem boundary also introduced a lot of pain, such as permission control, because Windows does not have a POSIX-like permission system. Instead of simple mode bits that split into 3 octal digits (there is a reason it maxes out at 777), you have an incredibly sophisticated capability- and token-based access control system dating back almost 30 years that Linux didn't even have back in the day! But that pile of shit is now full of bugs and exploits such as token/handle duplication. (Oh yes, I'm talking about black-hat territory, as I also do some red-team CTFs involving this stuff.)
An issue introduced by macOS 14.4, which causes Java processes to terminate unexpectedly, is affecting all Java versions from Java 8 to the early access builds of JDK 22.
If this affects so many versions of Java and nobody notices, is anyone even using Java on macOS?
Plenty of people develop Java on Macs. The issue is that, per the article, this behavior was not present in the early access macOS builds, which means something changed between beta and release.
And there's a known issue with an interaction between Minecraft, Java, and the video drivers that crashes out, and it can be traced all the way back to here: https://github.com/glfw/glfw/issues/1997
It's not terminating directly. I've seen a few IDE crashes this week, less than one per day, but since there's no log there's no easy way to determine it's related to a macOS change.
Reading the comments from David Wartell in that thread is enough internet for me today. This guy is CTO of some company and is just harassing the thread for a fix ETA without understanding the problem at all.
For what it's worth, I had some Rider and IntelliJ crashes. The crash does not happen often, though, but if you're in the middle of writing code it can knock you out of the flow.
Well, that’s why Apple forbids use of private APIs in the App Store apps. If you built all your tech stack on the foundation of some peculiar nondocumented platform’s behavior, don’t be surprised when this stack breaks.
This is not an API. It's the handling of writes to memory the process has protected. In the past this would generate a signal the process could handle and recover from. Now it generates a sigkill which is uncatchable / unrecoverable from.
These behaviours have been historically well documented.
And, why not? macOS is Apple’s IP and they have every right to do with it what they want. By the way, the Chrome/Node.js JavaScript engine uses JIT compilation too. Are they affected?
It is not obvious to me that this breaks POSIX compatibility. The kernel may choose to signal a process with SIGSEGV on a memory protection violation but I can't find anything that suggests this is required.
Last I checked, macOS formally maintains POSIX certification. Linux is not POSIX compliant, so I wouldn't use Linux as the measure of what is correct behavior under POSIX.
Maybe I'm being pedantic, but as far as I can tell that document doesn't say when a SIGKILL can or can't be issued. So it seems like it would be valid to issue a SIGKILL at any time the kernel wants. Obviously, that's probably not what users/programmers want, but it seems to technically meet the specification you list.
If that document said something along the lines of "a SIGKILL may only be issued when XYZ", where XYZ didn't include this case, then I'd agree with you. I don't see anything in there that says when a SIGKILL should be or, more importantly, should not be generated. So it seems perfectly valid for an implementation to generate both a SIGSEGV and a SIGKILL on an OOB memory access; the SIGSEGV will never be seen because the SIGKILL can't be handled and will kill the process.
When kernel developers violate POSIX standards and break the JVM (and other applications not yet reported), it's a bug. Can't justify it or make excuses. Apple really fucked up here.
Linus Torvalds has a policy: "WE DO NOT BREAK USERSPACE!". You just don't release stupid changes like this in a kernel. I'm sure the team at Apple are not pleased that this bug got into production.
Ah yes, that is fairly clear, thanks; I was not able to find that. A plausible interpretation is that the kernel still reserves the right to terminate the process at any time, including immediately after a general protection fault. Still unexpected behavior, much like the Linux OOM killer.