This is not a normal retina configuration. This is a highly unusual configuration where the framebuffer is much larger than the screen resolution and gets scaled down. Obviously it sucks if it used to work and now it doesn't, but almost no one wants this, which probably explains why Apple doesn't care.
In my case it's a standard LG UltraFine 4K monitor plugged into a standard 16" M5 MacBook Pro via standard Thunderbolt (over USB-C). Not sure what's not normal about this? I've confirmed it with other monitors and M5 MacBook Pros as well.
In macOS display settings, what scaling mode are you using? This bug appears to only affect 4K monitors that are configured to use the maximum amount of screen space (which makes text look uncomfortably tiny unless you have a very large monitor). Most people run at the default setting which gives you the real estate of a 1080p screen at 2x scale, hence the "not normal" part of this configuration.
Actually, I don't even think it's possible to run HiDPI mode at the native resolution scale from within the macOS settings app; you'd need something like BetterDisplay to turn it on explicitly.
If you use the middle screen scaling you're given absolutely huge UI elements. That's the case for the built-in 16" screen as well as external displays, but when you get up to 32" displays it's almost comical how large the UI is on the middle/default setting.
Yeah, on larger monitors it's more common to run at the monitor's native resolution without scaling, but even so macOS will not turn on HiDPI mode - you'd still need to do this explicitly via another app. (I didn't even know it was possible to turn on HiDPI mode at native scaling until reading this article.)
I use a 43" 4K TV at the standard non-retina 4K with an M1 Pro. I tried your 8K supersampling but it doesn't seem to improve on the default 4:4:4 8-bit RGB non-retina for me. (Smoother, but not as crisp outside terminals?)
The TV is unusable without BetterDisplay because of Apple's default negotiation preferences. I hope waydabber can figure something out with you.
To be frank, it's kind of embarrassing if an entry-level Windows laptop with a decent integrated GPU handles this without much effort.
Apple is free to make its own choices on priority, but I'm disappointed when something that's considered the pinnacle of creative platforms sporting one of the most advanced consumer processors available can't handle a slightly different resolution.
I don’t know why this was downvoted; I agree that this is a highly unusual configuration. Why render to a frame buffer with 2x the pixels in each direction vs. the actual display, only to then scale the whole thing down by 2x in each direction?
Supersampling the entire framebuffer is a bad way to anti-alias fonts. Especially since your font rendering is almost certainly doing grayscale anti-aliasing already, which is going to look better than 2x supersampling alone. And supersampling will not do subpixel rendering.
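For intuition, here's a toy sketch (my own illustration, not how any real compositor or rasterizer works) of what 2x supersampling does to an edge: each output pixel averages a 2x2 block of binary samples, so it can only take five coverage levels (0, 0.25, 0.5, 0.75, 1) - much coarser than what a proper grayscale font rasterizer computes directly.

```python
def downscale_2x(hi):
    """Average 2x2 blocks of a 2N x 2N grid of 0/1 samples into an N x N grid."""
    n = len(hi) // 2
    return [
        [(hi[2*y][2*x] + hi[2*y][2*x+1] + hi[2*y+1][2*x] + hi[2*y+1][2*x+1]) / 4
         for x in range(n)]
        for y in range(n)
    ]

# A diagonal edge rendered at 2x resolution (1 = inked, 0 = background).
hi = [[1 if x < y else 0 for x in range(8)] for y in range(8)]
lo = downscale_2x(hi)
for row in lo:
    print(row)  # edge pixels land on coarse fractional coverage values
```

Note how the edge pixels quantize to 0.25 steps - a dedicated glyph rasterizer can compute exact fractional coverage per pixel instead.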
This is what us proles on third-party monitors have to do to make text look halfway decent. My LG DualUps (~140ppi if I recall) run at 2x of a scaled resolution to arrive at roughly what would be pixel-doubled 109ppi, which is the only pixel density the UI looks halfway decent at. It renders an 18:16 2304 x something at 2x, scaled down by 2.
It's also why, when you put your Mac into "More Space" resolution on the built-in or first-party displays, it warns you this could hurt performance: that's exactly what the OS is going to do to give you more space without making text unreadable aliased fuzz. It renders the "apparent" resolution pixel-doubled and scales it down, which provides a modicum of sub-pixel anti-aliasing's effect. Apple removed subpixel antialiasing a while back and this is the norm now.
I have a 4K portable display (stupidly high density, but still not quite "retina" 218ppi) on a monitor arm that I run, as you suggest, at 1080p at 2x. Looks OK but everything is still a bit small. If you have a 4K display and want to use all 4K, you have the crappy choice between making everything look terrible, or wasting GPU cycles and memory rendering an 8K framebuffer and scaling it down to 4K.
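To put rough numbers on that tradeoff (assuming 4 bytes per pixel for a single 8-bit RGBA buffer; real compositors keep several buffers in flight, so actual usage is higher):

```python
# Back-of-the-envelope memory cost of HiDPI rendering on a 4K panel.

def fb_bytes(w, h, bpp=4):
    """Size of one framebuffer at w x h with bpp bytes per pixel."""
    return w * h * bpp

native_4k = fb_bytes(3840, 2160)   # what the panel actually scans out
hidpi_8k = fb_bytes(7680, 4320)    # 2x-in-each-direction backing store

print(f"4K framebuffer: {native_4k / 2**20:.1f} MiB")
print(f"8K framebuffer: {hidpi_8k / 2**20:.1f} MiB ({hidpi_8k // native_4k}x)")
```

So the backing store alone quadruples, before counting the GPU work of rendering and downscaling it every frame.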
I'm actually dealing with this right now on my TV (1080p which is where I'm writing this comment from). My normal Linux/Windows gaming PC that I have hooked up in my living room is DRAM-free pending an RMA, so I'm on a Mac Mini that won't let me independently scale text size and everything else like Windows and KDE let me do. I have to run it at 1600x900 and even then I have to scale every website I go to to make it readable. Text scaling is frankly fucked on macOS unless you are using the Mac as Tim Cook intended: using the built-in display or one of Apple's overpriced externals, sitting with the display at a "retina appropriate" distance for 218ppi to work.
Yeah, it sounds like this Maxio controller is quite decent and it's cheap because they don't have name recognition yet. The Acer FA200 appears to be a non-counterfeit SSD using the same controller. https://www.tomshardware.com/pc-components/ssds/acer-fa200-4...
Russia images all US bases whenever they want. That's what spy satellites are for! It's also a very small leap to assume they are selling intel to Iran. Everyone is aware of this and it's taken into account by planners on all sides.
I don’t believe that nobody else would build it for them. This chip is not purely custom to Meta, as I understand it. Rather, ARM is going to be selling it to others and entering the market for “AI chips,” which is surely a market that some other ARM licensees want to be in. So therefore, ARM is competing with its other licensees, and in a potentially very lucrative, leading edge market, which is never a great place for an IP-focused company to be. Qualcomm, for instance, is undoubtedly annoyed at this.
Then you’d say that Apple doesn’t make their laptops. Foxconn does.
The kind of work ARM would do to “make” a chip themselves goes beyond just design. It’s synthesis, P&R, test, packaging (generally a different company than the fab), yield management, inventory/logistics, etc.
Context: early in the firmware boot process the memory controller isn't configured yet, so the firmware uses the cache as RAM. In this mode cache lines are never evicted, since there's no memory to evict them to.
I remember from the talk about Wii/Wii U hacking that they intentionally kept the early boot code in cache so that it couldn't be sniffed or modified on the RAM bus, which was external to the CPU and thus glitchable.
There may be server workloads for which the L3 cache is sufficient; it would be interesting if it made sense to build boards at scale with just the CPU and no memory.
I imagine for such a workload you could always solder on a small memory chip, avoiding both wasting L3 as plain RAM and a non-standard boot process - so probably not.
Most definitely. I work in finance, and optimizing workloads to fit entirely in cache (and not use any memory allocations after initialization) is the de facto standard for writing high-perf / low-latency code.
Lots of optimizations happening to make a trading model as small as possible.
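The "no allocations after init" part isn't language-specific. As a toy sketch (my own illustration, not from any actual trading system, and in Python only for brevity - real low-latency code would be C++ with attention to cache layout), the pattern is: preallocate fixed-size buffers up front and overwrite them in place, so the hot path never touches the allocator.

```python
class RingBuffer:
    """Fixed-capacity buffer that reuses preallocated slots on the hot path."""

    def __init__(self, capacity):
        self.buf = [0.0] * capacity  # allocated once, at initialization
        self.capacity = capacity
        self.idx = 0
        self.count = 0

    def push(self, value):
        self.buf[self.idx] = value   # overwrite in place - no new allocation
        self.idx = (self.idx + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

rb = RingBuffer(3)
for v in [1, 2, 3, 4]:
    rb.push(v)
print(rb.buf)  # oldest slot has been overwritten
```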
Today a local AI box (Strix Halo, DGX Spark, Mac Studio) is $2,500+. Even if it comes down, almost no one will pay upfront when they could pay $20/month instead.
Small models can run on a normal PC but similar or better quality models are free in the cloud.
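The break-even arithmetic behind that, ignoring electricity, resale value, and price changes on either side:

```python
# Months until a ~$2,500 local AI box pays for itself vs. a $20/month subscription.
box_cost = 2500
monthly_subscription = 20

months = box_cost / monthly_subscription
print(f"{months:.0f} months (~{months / 12:.1f} years) to break even")
```

Over ten years of subscription fees before the hardware pays off, and that's assuming it stays useful that long.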