
I would like to offer a prophecy: For the next evolution of ACPI, Linux kernel devs (employed at hardware companies) will figure out a way to replace the bespoke bytecode with eBPF.

Windows will, of course, spawn a WSL instance when it needs to interact with ACPI. macOS is its own hardware platform, and will naturally come up with their own separate replacement.



There already is an eBPF for Windows; it's even Microsoft's own project: https://github.com/microsoft/ebpf-for-windows


Ahh. Just as the prophecy foretold.


Unlikely. ACPI is made by Wintel vendors, so Windows will get support for the fancy new things and Linux will lag behind until the new thing is documented or reverse engineered.


ACPI is standardized via a specification. It's quite easy for non-Windows operating systems to support ACPI. I can't say the same for device tree, as that requires reading Linux source.


Lots of things are available in a specification. HTML, for instance. Just being an open specification is insufficient when there's a superdominant implementation. At that point, that implementation is the specification.

In HTML, it was Internet Explorer for a long time, now it's Chrome/Chromium.

In ACPI, it's Windows. In fact, Linux pretends to be Windows because anything else is a shipwreck graveyard of disappointment and untested code.

https://askubuntu.com/questions/28848/what-does-the-kernel-b...

Also, see the whole necessity of patching your DSDT (Windows users: "The what?" Linux users: nodding sadly) https://wiki.archlinux.org/title/DSDT

Modern computers are sufficiently complicated that they really only support one OS. And of course, those are almost entirely Windows computers. Buy a computer with Linux preinstalled, with support, if you want to avoid having to care about things like this (or having a computer that never works quite right (e.g. doesn't reliably suspend or the fans are running wrong)).


The situation with HTML was worse in 2000 than it is today.

Early on, Netscape introduced its own non-standard behavior for broken HTML (tags not properly closed). Somewhere between 30% and 60% of HTML was broken, so any competitive browser had to (i) render broken HTML and (ii) render broken HTML in the same undocumented way as Netscape!

Microsoft figured this out with IE, but it remained one barrier in the way of alternative browsers. This undocumented behavior was finally documented in the HTML5 spec.

Now you might say the "whole stack" has the Chrome problem, in that Chrome has some features that (some) other browsers don't have, such as

https://caniuse.com/css-cascade-scope

https://caniuse.com/view-transitions

https://caniuse.com/css-text-wrap-balance

but a lot of those features are in the "cherry on top" category and there is a fairly healthy process of documenting how they work and the features proliferating to other browsers except when Apple doesn't want them to. (Gotta keep developers in that app store.)


Only because the Web has effectively turned into ChromeOS, with one vendor left standing.


It's not a perfect parallel. For instance, the tooling used to create HTML isn't almost universally provided by one vendor (Microsoft) and then run by the same vendor (still Microsoft). It's also not like we ever caught the CEO of that company speculating on how to lock out competitors using that same technology. (OK, that's true of both HTML and ACPI.)


That specification is a monster. "Quite easy to support" is not a good description of the situation.


By comparison it is easier. The alternative is reading an order of magnitude more lines of GPL source code in the Linux kernel to write your OS, which may not even be an option.


My understanding from ~10 years ago is:

There is a specification

Windows ACPI implementation is buggy

Hardware manufacturers implement Microsoft's implementation bug for bug

Everyone has to reverse engineer Microsoft's implementation because the standard isn't enough


From my 20-year-old memory, it was the other way around:

There is a specification.

Taiwanese hardware OEMs suck at programming, make mistakes.

The Windows ACPI implementation is built to detect and work around those bugs. Think of the Win 3.x version of SimCity's read-after-free bug and Microsoft hardcoding a workaround into Windows 95: https://www.joelonsoftware.com/2000/05/24/strategy-letter-ii...

>Windows 95? No problem. Nice new 32 bit API, but it still ran old 16 bit software perfectly. Microsoft obsessed about this, spending a big chunk of change testing every old program they could find with Windows 95. Jon Ross, who wrote the original version of SimCity for Windows 3.x, told me that he accidentally left a bug in SimCity where he read memory that he had just freed. Yep. It worked fine on Windows 3.x, because the memory never went anywhere. Here’s the amazing part: On beta versions of Windows 95, SimCity wasn’t working in testing. Microsoft tracked down the bug and added specific code to Windows 95 that looks for SimCity. If it finds SimCity running, it runs the memory allocator in a special mode that doesn’t free memory right away. That’s the kind of obsession with backward compatibility that made people willing to upgrade to Windows 95.

Everyone else stumbles on those badly implemented ACPI Tables which seemingly work just fine in Windows land.


FreeBSD in 2023 is still masquerading as "Microsoft Windows NT" in order for things to work correctly. It has been this way since 2004. It works fine in "Windows Land" because the hardware is literally special-casing behavior for Windows!

  /*
   * OS name, used for the _OS object. The _OS object is essentially obsolete,
   * but there is a large base of ASL/AML code in existing machines that check
   * for the string below. The use of this string usually guarantees that
   * the ASL will execute down the most tested code path. Also, there is some
   * code that will not execute the _OSI method unless _OS matches the string
   * below. Therefore, change this string at your own risk.
   */
  #define ACPI_OS_NAME                    "Microsoft Windows NT"
https://cgit.freebsd.org/src/tree/sys/contrib/dev/acpica/inc...


_OS is basically irrelevant; _OSI has been used for over 20 years now. The right way to think about the values the OS presents to the firmware is as a contract between the OS and the firmware about mutual expectations. Windows effectively embodies a contract: the behaviour of any given Windows release will remain constant. There's no way to define an equivalent contract for Linux (because Linux's behaviour keeps changing to match hardware expectations), so it makes more sense for us to attempt to mimic the stable contract Windows provides (and it helps that that's the contract the vendor tested against in the first place). I went into this, oh good lord, over 15 years ago: https://mjg59.livejournal.com/96129.html
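For illustration, the OS half of that contract in ACPICA-based kernels amounts to registering the Windows interface strings that firmware _OSI calls get matched against. A minimal sketch, assuming an ACPICA-based OS (AcpiInstallInterface is a real ACPICA call, and ACPICA in fact ships with a default list of these strings already; "Windows 2015" is the string corresponding to Windows 10):

    #include "acpi.h"  /* ACPICA public header */

    /* Claim the Windows 10 contract: after this, firmware AML that
       evaluates _OSI("Windows 2015") will take its Windows-tested
       code path instead of an untested fallback. */
    static void claim_windows_contract(void)
    {
        (void) AcpiInstallInterface("Windows 2015");
    }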


Your blog post is very insightful, thanks!


I feel like that outcome is inevitable. All implementations have bugs, and developers implement for reality rather than for a spec. Inevitably it leads to drift and the need to retain things that weren't right to begin with.

See also, the referer header. :)


> I feel like that outcome is inevitable. All implementations have bugs, and developers implement for reality rather than for a spec.

It depends. If the spec is clear then developers will generally implement the spec. If the spec is a mess then it becomes easier to just do what works on the popular implementations. Things like spec conformance suites, or even just writing up the spec well, can move the needle.


The lack of support usually comes from the other side: the h/w vendors aren't testing with anything but Windows.

I've yet to see a Linux laptop where ACPI wasn't broken for at least one device (the most likely suspects are the components that aren't typically used in servers, such as webcams or wifi modems).


> macOS is its own hardware platform, and will naturally come up with their own separate replacement.

Actually, no. The M-series SoCs use device trees [1], and in fact their Apple SoC predecessors did as well - the earliest I could find is the iPhone 3GS [2].

[1] https://lore.kernel.org/lkml/20230202-asahi-t8112-dt-v1-0-cb...

[2] https://www.theiphonewiki.com/wiki/DeviceTree


They're very device tree oriented. They've been using them since "new world PowerPC" Macs in the 90s. Even on x86, their boot loader constructs a device tree to describe the hardware to the kernel.


They have no incentive to use or benefit from ACPI. They don't have the problem of trying to scale to an innumerable number of hardware permutations. They have a limited set which they control the entire stack of. I would certainly be very confused if they went with such an overkill solution as well.


This appears logical, but the reality is that the only reason you can't immediately run macOS on a generic x64 computer is that it doesn't contain the licensing chip.

If you patch out that requirement (using a Hackintosh installation) and convert the ACPI tables into the format used by Apple, it runs just fine, as far as drivers are available for your hardware.


Sure, but they don't care about supporting that use pattern. If anything having Hackintoshes break is a bonus to them.


BPF doesn't really make sense here. It can't fully specify the kinds of computation an AML replacement would need since BPF is guaranteed to terminate (it's not Turing complete).


For this use case (hardware configuration), that might actually be desirable?


I don't think you'll get uptake removing constructs like

    while (*STATUS_REG & STATUS_BSY); /* spin until the device clears its BUSY bit */
since AML is less a hardware description format and more a driver binary format.


macOS already uses (at least on ARM chips) device trees. I don’t see why they would go back to ACPI as long as they keep their SoC model.


Why bother with bespoke bytecode when we have a high-quality, standard ISA?

RISC-V's base RV64I has 47 instructions. Legacy ISAs can simply emulate these 47 instructions.
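To make "simply emulate" concrete, here is a minimal sketch of one interpreter step; the field offsets follow the RISC-V spec, but only ADDI is decoded and the function shape is invented for illustration:

    #include <stdint.h>

    /* Execute one RV64I instruction: decode the 7-bit opcode and funct3,
       then dispatch. x[] holds the 32 integer registers (x0 is hardwired
       to zero); pc is a byte address into word-addressed memory. */
    void rv64i_step(uint64_t x[32], uint64_t *pc, const uint32_t *mem)
    {
        uint32_t insn   = mem[*pc / 4];
        uint32_t opcode = insn & 0x7f;
        uint32_t funct3 = (insn >> 12) & 0x7;
        uint32_t rd     = (insn >> 7) & 0x1f;
        uint32_t rs1    = (insn >> 15) & 0x1f;
        int64_t  imm    = (int32_t)insn >> 20;  /* sign-extended I-immediate */

        if (opcode == 0x13 && funct3 == 0 && rd != 0)  /* ADDI */
            x[rd] = x[rs1] + (uint64_t)imm;
        /* ... the remaining base instructions decode the same way ... */
        *pc += 4;
    }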


Bytecode is presumably chosen to minimize program length, while RISC-V is at the opposite end of verbosity for representing a program.

You may be one of those who believe that RISC-V is a high-quality ISA, but this is not a universal opinion, and it is a belief that is typically shared only by those who have been exposed to few or no other ISAs.

In the context of something like ACPI, I would be worried by the use of something like the RISC-V ISA, because this is an ISA very inappropriate for writing safe programs. Even something as trivial as checking for integer overflow is extremely complicated in comparison with any truly high-quality ISA.
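A concrete example of the difference, as a sketch using a GCC/Clang builtin: on a flags-based ISA the compiler can branch directly on the overflow flag set by the add itself, while RV64I has no flags register and must synthesize the check with extra compare instructions.

    #include <stdbool.h>
    #include <stdint.h>

    /* Signed 64-bit add with overflow detection. On x86-64 or AArch64
       this can compile to an add plus a branch on the overflow flag;
       on RV64I the compiler has to emit additional compares to recover
       the same information. */
    bool checked_add(int64_t a, int64_t b, int64_t *sum)
    {
        return __builtin_add_overflow(a, b, sum);  /* GCC/Clang builtin */
    }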


For example, the Open Firmware specification used a flavour of Forth for its bytecode.


How patronising. Can you give an example of an ISA that is higher quality than RISC-V?


There is no doubt that AArch64 is a much higher-quality ISA than RISC-V. The good or bad licensing model used for an ISA has nothing to do with its technical quality.

Even the Intel/AMD ISA, despite its horrible encoding, after excluding many hundreds of obsolete instructions that are not needed any more and after excluding from the instruction encoding the prefix bytes that are needed only for backward compatibility, would be a higher quality ISA than RISC-V, especially for expressing any task that is computationally intensive. RISC-V is particularly bad for expressing computations with big integers.

The modern subset of the Intel/AMD ISA is better than RISC-V even from the point of view of some of the design criteria on which RISC-V is based.

For instance, the designers of RISC-V have omitted many useful features under the pretext that for good performance those features would require additional read and write ports to the register file.

The Intel/AMD ISA, by allowing one of the three operands of an instruction to be a memory operand, allows performance identical to an ISA with three register operands, while needing one fewer read port and one fewer write port in the register file.

Having instructions with one memory operand works well only in CPUs with out-of-order execution. Nevertheless, the same performance as for an ISA with a memory operand can be achieved in a low-cost in-order CPU if there is a set of explicitly addressable load buffer registers, and one or more operands of an instruction can be a load buffer instead of a general-purpose register.

So there would have been many better ways of accomplishing the design criteria of RISC-V. RISC-V is a simple ISA that is great for teaching, but its only advantage over any ISA that a competent designer could conceive in a few days is the large amount of already existing software tools, i.e. compilers, linkers, debuggers and so on, which would take years to duplicate for any brand-new ISA.


I’ve heard really good things about SuperH (sh-2, sh-4 etc.), designed by Hitachi for their processors including those used in the Sega Saturn and Sega Dreamcast. Really high code density was one big thing, but it was covered by patents until recently. There was a group trying to make a modern open source processor in VHDL for FPGAs and later on ASICs based on it, but I think it may have mostly fizzled out.


I do Dreamcast homebrew, and my opinion is the SH ISA is way over-optimized for 90's embedded applications. It wastes the tiny instruction space on useless things like read-modify-write bit operations, intended for setting hardware registers, and has a laughably small displacement for function calls (+/-2KB, with pretty much all calls requiring manually loading a function pointer from RAM). There are parts that are still nice compared to something like RISC-V, like post-increment loads and pre-decrement stores (but hardware designers seem to hate things like that, since they require an extra write port on the register file), and the code density can be pretty good (although GCC's output is awful), but there are so many ways the ISA could be easily improved.


(sorry for being off-topic) Regarding https://news.ycombinator.com/item?id=16308439, I wonder how this project is going?


I haven't done much on it since then. While it's something I would like to finish, I realized that there are other, higher priority things that I should do first.


WDC 65C02, or even WDC 65C816.

Or how about MMIX?


Well, first you need to define your criteria for "high quality".


The parent (adrian_b) has to, if anyone.

But as far as I can see, they simply have a (misguided) preference for CISC.

They're very vocal against load/store architectures, and they don't seem to understand the tradeoffs RISC-V does make.

They don't even seem to get that RISC-V has had the highest code density among 64-bit ISAs from the start (first ratified user spec, 2019), and now has the highest density among 32-bit ISAs too (as of the recent Zc ratification).


What makes you think RISC-V is a good fit for device configuration?


Simplicity, the lack of flags or arithmetic exceptions, and a clear ABI and environment-call mechanism.

Hardware access could be cleanly gated through ecalls, and the mechanisms for this could exist as a standard extension in the SBI interface.
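A sketch of what that gating could look like: the register assignments below follow the real SBI calling convention (a7 = extension ID, a6 = function ID, a0 carries the argument and the returned error code), but a hardware-configuration SBI extension, and any IDs for it, are hypothetical.

    /* Generic SBI-style environment call on RISC-V (GCC/Clang inline asm). */
    static inline long sbi_ecall(long ext, long fid, long arg0)
    {
        register long a0 asm("a0") = arg0;
        register long a6 asm("a6") = fid;
        register long a7 asm("a7") = ext;

        asm volatile("ecall" : "+r"(a0) : "r"(a6), "r"(a7) : "memory");
        return a0;  /* SBI error code by convention */
    }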



