>"as @AmnestyTech observed and we @citizenlab can confirm, NSO Group's Pegasus spyware delivered via 0-click exploits is no longer "persistent" in the strict sense of the word (i.e., doesn't come back when you reboot). Persistence is achieved via firing the 0-click again. Because the 0-clicks they're using appear to be quite reliable, the lack of traditional "persistence" is a feature, not a drawback of the spyware. It makes the spyware more nimble, and prevents recovery of the "good stuff" (i.e., the spyware and exploits) from forensic analysis."
On the plus side, the lack of persistence means attackers don't retain access through iOS updates. Their "persist-less" exploits will eventually be patched by Apple, at which point anyone who applies the update has a clean device.
That isn't how that word is normally used, AFAIK. Even most "persistent" attacks are thwarted by device updates on iOS. To get persistence through an update that fixes the underlying exploit, you would have to tamper with the update mechanism itself: refuse to update at least that part, then try to simulate the update so the device still looks updated. And because Apple's software architecture tends to assume everything is updated at once, showing the user a new userland (what the user experiences) on top of an old kernel (a prerequisite for maintaining access if your bugs came in via userland, since otherwise the new kernel will refuse to run the old exploitable code) is actually quite difficult, particularly as your attack might not give you the ability to modify much kernel behavior. (Also, the firmware update process on iOS can be kicked off in different ways: if it's done over USB from iTunes, you might not get a chance to interfere with the process at all.)
If they have a shell on the device, they can choose to persist it whenever they like.
Should an iOS update be released that patches their exploit, chances are they can persist on the device before it installs the newest version.
I guess it means it never hurts if you have any doubt.
In this case, however, it wouldn't have been sufficient, as the spyware reinstalls quickly through the command center. If no data for a tracked device shows up after a set timeout, the command center resends a corrupted iMessage to reinfect it.
In any case, apply security updates ASAP and regularly: they imply a reboot and patch as many exploits as possible.
That means if we can find an active device being exploited, we can cause them to send the attack to any device of our choosing and so find the zero-day. And the command center.
It’s a reference to bootrom-level exploits. The bootrom is burned into the silicon and can’t be updated. If one finds an exploit in there, Apple can’t patch it without new chips.
But those still might or might not be persistent. If you need to once again obtain physical access to use your exploit every time the user reboots, it isn't persistent even if the involved exploits are unpatchable.
If they have that many good zero-days, we need to rake Apple over some (metaphorical) coals until they get their shit together.
I am not sure rewriting iMessage in Rust would be the answer, but it would be a start. Another start would be to refuse to sell iPhones to Israel as long as they do this.
How are image parsing exploits still a thing in 2021? Can Apple not use Rust? I struggle to understand why Apple is still relying on C/C++ in such a well known security hotspot.
>One of the major changes in iOS 14 is the introduction of a new, tightly sandboxed “BlastDoor” service which is now responsible for almost all parsing of untrusted data in iMessages (for example, NSKeyedArchiver payloads). Furthermore, this service is written in Swift, a (mostly) memory safe language which makes it significantly harder to introduce classic memory corruption vulnerabilities into the code base.
...This blog post discussed three improvements in iOS 14 affecting iMessage security: the BlastDoor service, resliding of the shared cache, and exponential throttling. Overall, these changes are probably very close to the best that could’ve been done given the need for backwards compatibility, and they should have a significant impact on the security of iMessage and the platform as a whole.
"It also indicates that Apple has a MAJOR blinking red five-alarm-fire problem with iMessage security that their BlastDoor Framework (introduced in iOS 14 to make zero-click exploitation more difficult) ain't solving."
And:
"BlastDoor is a great step, to be sure, but it's pretty lame to just slap sandboxing on iMessage and hope for the best. How about: "don't automatically run extremely complex and buggy parsing on data that strangers push to your phone?!"
To my inner armchair security enthusiast, a good solution looks like this:
On incoming message, check one thing and one thing only: is the sender in the contact list? If YES...
+ Run the message through BlastDoor and continue as normal.
If NOT…
- Stop all non-essential parsing immediately.
- Continue with a flow similar to modern e-mail clients (To increase your privacy, we have blocked some elements of this message).
- Only continue normal message parsing if the user explicitly consents.
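A sketch of that gating policy (hypothetical types and names, invented for illustration; this is not how iMessage is actually structured):

```rust
// Hypothetical sketch of the proposed gating policy. `ParsePolicy`,
// `policy_for`, and the contact-list shape are all invented for
// illustration, not Apple's actual APIs.
#[derive(Debug, PartialEq)]
enum ParsePolicy {
    Full,       // known contact (or explicit consent): BlastDoor as normal
    Restricted, // stranger: skip all non-essential parsing, block elements
}

fn policy_for(sender: &str, contacts: &[&str], user_consented: bool) -> ParsePolicy {
    if contacts.contains(&sender) || user_consented {
        ParsePolicy::Full
    } else {
        ParsePolicy::Restricted
    }
}
```

The appeal is that the attack surface exposed to strangers shrinks to the policy check itself, which is trivially auditable.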
Apple doesn't admit wrongdoing. Take their multi-year denial of the butterfly keyboard issues.
The key to user satisfaction isn't security, it's consistent marketing reminders of "security".
Apple users here are often forced into the ecosystem due to iOS or Safari dev work, but the majority of Apple users care more about the iPhone name than performance. This is the nature of Veblen goods.
That’s a bit of a weird framing. Project Zero point out mistakes in Google’s products too. It’s just that they’re visible.
The assumption has to be that Apple quietly fixes at least a hundred times more security bugs in their products than the ones the occasional Project Zero blog post highlights.
Given their lackadaisical approach to bug reports filed by external researchers, and their grudging payments on their own bug bounty program, I think you are giving them far more credit than they deserve.
"Rewrite everything in rust" is not useful advice, and many of the decoders in ImageIO are open source libraries, that presumably have the same flaws (so you get "why isn't project X using rust" as well)
"Rewrite everything in Rust" is not that useful, but "write new code in Rust so you're not digging deeper holes" and "rewrite your most sensitive code in Rust" is pretty good advice.
Define "new" code - rust has no ABI stability story, and doesn't have a path to interact with any language other than C, so new code in existing libraries may not be feasibly written in rust.
Anyway, Apple's safe language is Swift, which they are clearly pushing and using more of as time goes by.
Not on macOS/iOS/watchOS/tvOS, where much of the core platform API is Objective-C (which Swift has first-class support for) and C++ (because of poor '90s decisions).
That also misses the point that you can't implement a subclass or the like in Rust, and if every interface you have is C, then you have an exercise in mass unsafe code.
When your entire platform has been plagued for years by security issues in one component, issues that at the end of the day cause huge personal harm to your customers, perhaps it is time to suck it up and rewrite the damn thing in a memory-safe language.
There is a bit of a risk of new bugs that Rust doesn't catch, but if your development process is at least modestly competent you'll have a large suite of automated tests for the bugs you found in the previous iteration of the software, which you can reapply to the new Rust code.
It’s the new bugs that didn’t exist in the original code that you need to be concerned about.
Consider any long-lived complex library. Count the bugs fixed since initial release. That’s your baseline — you can expect that order of magnitude of bugs in the initial release of your rewrite. Perhaps a new language will eliminate some classes of bugs, so maybe you can cut that number in half or even to a quarter. But you need to come to terms with that number. Now add on the number of bugs due to compatibility. Image formats generally are not tightly spec’ed, meaning the de facto standard libraries become the de facto standard — that is, the library being rewritten.
Rewrites are a long road that starts off one step forward and three steps back. It could still be worth it, depending on the case. But an effort to "rewrite in Rust" that doesn't account for this will fail.
> That’s your baseline — you can expect that order of magnitude of bugs in the initial release of your rewrite.
My point was that the test suite you start with for your rewrite will be (or at least should be) a lot better than the test suite you had when you started writing the original code.
The spec will likely be better too. Also, you can learn a lot from the previous iteration.
These effects, plus the benefits of the new language, have the potential to reduce the bug count dramatically. Maybe even by an order of magnitude.
Having said that, you're certainly right that there are some steps back and the net cost or benefit is likely highly context-dependent.
It would be great to have more data on projects that have done this, to help people estimate the costs and benefits in their context.
Anyone remember, about 20 years ago, the remote code execution you could get by simply having Outlook Express (via Internet Explorer) load a custom-crafted JPEG? Things haven't changed much.
I remain highly skeptical of any messaging app, chat program, or email client that lets people "embed" arbitrary content.
Things haven't changed because there is a god-like mindset in software development: "I want my program to do everything." Web browsers and messaging apps are the worst. They use a myriad of libraries to parse untrusted data, and they have components that sometimes run with elevated privileges. What could go wrong?
It's memory-safe (assuming you avoid the use of the 'unsafe' keyword) and it's performant. In particular, for fast parsing of large objects you want to avoid copying data as much as possible, and Rust's lifetime machinery lets you do that without introducing memory safety issues.
For example, Java is also memory-safe, but Java doesn't have first-class array slices.
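For instance, a minimal zero-copy step in Rust (toy format, invented for the example: one length-prefix byte followed by the payload):

```rust
// A zero-copy "parse": return a borrowed view into the input rather
// than copying it out. The borrow checker ties the returned slice's
// lifetime to the input buffer, so a use-after-free won't compile.
// Toy wire format, purely illustrative.
fn payload(input: &[u8]) -> Option<&[u8]> {
    let len = *input.first()? as usize;
    input.get(1..1 + len) // a subslice of `input`: no allocation, no memcpy
}
```

The bounds checks live in `get`, so malformed lengths yield `None` instead of an out-of-bounds read.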
Yep. It's the same performance ceiling. Borrow checking is not the panacea it's made out to be. It's just enforcing single ownership.
What part of scanning through some immutable array of bytes to-be-parsed would be so definitively better in Rust?
Any sensibly written parser code would result in the compiler optimizing away all reference count operations. Only the most naive ARC implementations mutate the count for every reference pass implied by the source.
So again... why would it be a sensible choice to use Rust at Apple, where they created and use Swift heavily? Where the problem is that they're using some old unsafe parsing library instead of one written in a safer language?
"A sufficiently advanced compiler will optimize all the overhead away" is the standard line for all high-level languages with automatic memory management. It works for tracing GC too, you just need "sufficiently advanced" escape analysis.
Unfortunately it means there are always performance cliffs: you make some change to your code that looks innocuous and suddenly your performance collapses (e.g. because a function is now too big to be automatically inlined, so certain optimizations don't work anymore) --- and you don't understand why.
I agree that could happen, but a parser would have to be coded pretty strangely to disable these basic ARC optimizations. I don't imagine a parser needing to hold and release references beyond the call stack.
In this vein, both Swift and Rust performance will succumb to any number of their IL-level (and LLVM's IR-level) optimizations being disabled by some seemingly orthogonal change to the source code.
Either way, I hope Apple (and others) can squash this entire category of bugs soon!
Even harder to understand why it's so important for Messages to have special functionality that has to be integrated with the OS instead of running in user space.
Oh, then I stand corrected. The last time a vulnerability in Messages was making the rounds, I thought that I learned it was possible because Messages is part of the OS and not just a pre-installed app, though I can't remember where I read it. I'd be glad to hear that I'm wrong about that though.
If by “on iOS” you mean apps, it’s not super popular but it does. Cloudflare’s 1.1.1.1 app is probably the biggest. If you mean “as part of” then I don’t believe so, yes.
As the tweet author notes, starting with iOS 14 Apple has moved iMessage parsing into a sandboxed "blastdoor" process - I'm surprised it was ineffective in stopping this exploit chain.
Because people implement parsers in languages that don’t allow direct expression of grammars (e.g. C). To safely implement parsers you must choose either algebraic datatypes or continuation passing, and a lot of programmers choose neither. CPS is annoying in most languages. ADTs are the obvious choice but somehow in 2021 most people are using languages that don’t have them. If you write a parser in Haskell, for example, you’d have to mess up pretty badly and write totally non-idiomatic code to write a parser that crashes at all, let alone crashes in a way that compromises memory safety.
You can just use a handwritten parser in any memory-safe language.
However, I suspect parsers for big objects like images tend to have more vulnerabilities because developers try to avoid copying data, for performance reasons. Many memory-safe languages have their own performance issues, or make it hard to avoid copying bulk data, so those languages aren't a great fit.
This makes Rust a particularly good choice for writing parsers: Rust is pretty strong at supporting complex data sharing patterns while remaining memory-safe.
> You can just use a handwritten parser in any memory-safe language.
It’s still probably going to be wrong for a complex grammar, even if it’s not going to literally crash. Bratus et al did a survey of PDF parsers a few years ago and iirc all the popular ones were wrong in ways that don’t necessarily correspond to memory errors (like infinite looping).
> This makes Rust a particularly good choice for writing parsers
In particular, Rust has ADTs, which is the main feature I advocate for this.
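A toy illustration of what ADTs buy you in a parser (grammar invented for the example):

```rust
// ADT-style parsing: every outcome the grammar allows is an enum
// variant, so `match` forces callers to handle all cases and there is
// no way to "forget" the error path or smuggle in a sentinel value.
// Toy grammar, purely illustrative.
#[derive(Debug, PartialEq)]
enum Token {
    Number(i64),
    Word(String),
}

#[derive(Debug, PartialEq)]
enum ParseError {
    Empty,
    Malformed, // mixed digits/letters, overflow, etc.
}

fn parse_token(s: &str) -> Result<Token, ParseError> {
    if s.is_empty() {
        Err(ParseError::Empty)
    } else if s.chars().all(|c| c.is_ascii_digit()) {
        s.parse().map(Token::Number).map_err(|_| ParseError::Malformed)
    } else if s.chars().all(|c| c.is_ascii_alphabetic()) {
        Ok(Token::Word(s.to_string()))
    } else {
        Err(ParseError::Malformed)
    }
}
```

Exhaustiveness checking means adding a new token kind later breaks every `match` that hasn't handled it, at compile time.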
If your goal is validation (i.e. this is a JPG/PNG) and stripping of EXIF data it is entirely possible to write your own parser in a managed and safe language in less than 500 lines of code without sacrificing any performance.
Don’t load them into memory, parse them as a stream byte-by-byte in accordance with the standard for the codec, check every offset before seeking, and reject images that don’t conform to the standard.
The overhead of stream abstractions is negligible if your goal is security when processing arbitrary input files provided from a zero-trust environment.
In environments where you’re prioritizing performance I’d still argue streams are likely your best bet when the size of the file to be parsed is not a constant. You wouldn’t want to load 50 large files into ram on a server environment let alone a phone.
If your input buffer is a bunch of tiny 10 KB files and you trust them? Sure, load them into memory and access their indices on the stack. Make sure you reuse the buffer to avoid unnecessary allocations.
If you want parallel processing with zero-allocations then streams with an array pool for their backing buffer are the best bet.
Not loading arbitrary files into memory will always be safer than doing so.
As for decoding - I believe the functions for validating if an array of bytes is an image should be far removed from the decoding and presentation of those bytes to the frame buffer. You don’t need to decode a JPG to validate that a file is a JPG. It either conforms to the standard or it doesn’t; the pixel data is irrelevant.
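As a rough illustration of that kind of structural check, here is a minimal sketch that walks JPEG segment headers and validates every declared length before seeking. It ignores entropy-coded data, restart markers, and much else a real validator must handle:

```rust
// Minimal structural check of a JPEG byte stream: verify the SOI marker
// and walk segment headers, checking each declared length before
// seeking past it. A sketch only; it stops at SOS and does not attempt
// to decode pixel data.
fn looks_like_jpeg(data: &[u8]) -> bool {
    // Every JPEG starts with the SOI marker FF D8.
    if data.len() < 2 || data[0] != 0xFF || data[1] != 0xD8 {
        return false;
    }
    let mut pos = 2;
    loop {
        // Need at least FF <marker> to continue.
        if pos + 2 > data.len() || data[pos] != 0xFF {
            return false;
        }
        let marker = data[pos + 1];
        if marker == 0xDA {
            return true; // SOS: entropy-coded data follows; stop here
        }
        if pos + 4 > data.len() {
            return false;
        }
        // Big-endian 16-bit segment length, which includes its own two bytes.
        let len = u16::from_be_bytes([data[pos + 2], data[pos + 3]]) as usize;
        if len < 2 {
            return false; // declared length can't even cover itself
        }
        pos += 2 + len;
    }
}
```

Note how every offset is checked against the buffer before it is used, which is exactly the discipline the parent comment describes.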
E.g. for a Web browser like Firefox the priority has to be to be as fast or faster than the competition, THEN be secure. That's just the reality of what users care about. If the goal was just security we'd all have been using HotJava for the last 24 years.
The goal for Rust was performance plus safety. That's pretty hard to pull off.
> You wouldn’t want to load 50 large files into ram on a server environment let alone a phone.
mmap() works pretty well here.
> As for decoding - I believe the functions for validating if an array of bytes is an image should be far removed from the decoding and presentation of those bytes to the frame buffer. You don’t need to decode a JPG to validate that a file is a JPG. It either conforms to the standard or it doesn’t; the pixel data is irrelevant.
Yeah but in a browser for example you never want to just "validate" an image file, you want to decode it, and separating validation from decoding is just asking for trouble. That is the meaning of "parse, don't validate".
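"Parse, don't validate" in miniature, with a hypothetical 8-byte header format invented for the example:

```rust
use std::convert::TryInto;

// "Parse, don't validate": instead of returning a bool that callers can
// ignore, return a typed value that can only exist if decoding
// succeeded. Hypothetical header: width and height as big-endian u32s.
struct Dimensions {
    width: u32,
    height: u32,
}

fn parse_header(data: &[u8]) -> Option<Dimensions> {
    let w: [u8; 4] = data.get(0..4)?.try_into().ok()?;
    let h: [u8; 4] = data.get(4..8)?.try_into().ok()?;
    Some(Dimensions {
        width: u32::from_be_bytes(w),
        height: u32::from_be_bytes(h),
    })
}
```

Downstream code takes a `Dimensions`, not raw bytes, so the "validated but then reinterpreted differently" class of bug can't arise.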
Using inheritance in this way is a hack to emulate some of the functionality of ADTs. Grammars are perhaps one of the most poignant examples where the various constructors in your type might have no behaviors in common, so adherence to a shared interface is nothing but a vague indication that these types are somehow related. Sealed classes let you recover a little bit more of the functionality.
Because writing a good parser that is fast, secure and works on a whole bunch of crappy/broken/non compliant formats is very hard.
Sure, it's easy to write a fast, standards-compliant parser (assuming it's not a format like .psd or a Word document), but it will choke on the multitude of dodgy versions of files out there, causing complaints.
_Most_ users only care about the picture/sound/video/whatever displaying correctly, with security a distant second (they expect it to be secure), so the pressure is on to make the parser work with everything.
I get where product people will noisily demand this, but perhaps the idea of parsing even broken files successfully is a large part of the problem. HTML started this way, and it's still terrible.
At an ancient job, we did a lot of scraping content from HTML as the de facto input source. Each content provider had to have custom code written, and it was as bad as you would expect. XML came about, and its largest advantage was that invalid input was considered broken and had to be rejected as such.
My guess: because most parsing uses the stack a lot, and the parsed language often allows arbitrary length inputs, both of which are connected to overflow problems, which in turn can often be exploited.
Are these parsing exploits connected to JPEGs? I don't know for sure, but I can understand how this could be a can of worms.
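One common mitigation for the stack-overflow case is to bound nesting depth before any recursive parsing happens. A sketch (the limit is an arbitrary policy chosen for the example):

```rust
// Pre-check nesting depth so arbitrarily deep input can never drive a
// recursive-descent parser into a stack overflow. Toy grammar of
// parentheses; real parsers apply the same idea to their own nesting
// constructs.
fn nesting_ok(input: &str, limit: usize) -> bool {
    let mut depth = 0usize;
    for c in input.chars() {
        match c {
            '(' => {
                depth += 1;
                if depth > limit {
                    return false; // reject before any recursion happens
                }
            }
            ')' => match depth.checked_sub(1) {
                Some(d) => depth = d,
                None => return false, // unbalanced close
            },
            _ => {}
        }
    }
    depth == 0 // everything opened was closed
}
```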
The JPEG core format is complicated, but JPEGs in circulation today are simply nightmarish. They can include XMP, EXIF, and ICC data, plus a good dozen other extensions that may actually affect how the JPEG is displayed. For example, knowing if a JPEG contains CMYK data depends on an Adobe extension which is about twenty years old. These extensions are in use today in images published on the web. So, just to display an image, one needs a parser for the image format, a parser for ICC, a parser for XMP, a parser for that obscure Adobe extension from twenty years ago, and so on. Often, each of them has its own library and represents decades of developer time. It's a lot of space for bugs and exploits.
Broadly, because by necessity you're dealing with untrusted inputs in a relatively complex format. The complexity leaves room for bugs in how you handle the input and the user supplied data provides an easy way to feed malicious inputs in.
The future is memory safety, but to get there, they would need to rewrite and audit millions of lines of code. Targeted attacks against VIP users don't cause significant PR damage, so why go to all that effort?
I'd argue that attacks on exactly those VIP users cause a lot more significant PR damage; there is a more serious discussion and harsher criticism when an important person is involved.
The iPhone doesn't have to support anything other than the hardware it was born with, right? That should make for a much, much smaller and much simpler kernel.
It's the name of an iOS malware/RAT by NSO, an Israeli company that likes to sell its software to governments offering varying degrees of personal freedom around the world.
It's been around with zero-click exploits for years, and apparently even now, after their big iMessage "security" rewrite with iOS 14. Very likely that they have other entrypoints as well though.
Just so no one is confused: as the Facebook lawsuit confirmed, NSO is running the C&C servers for their clients. They are not selling some software, "do what you want".
It is NSO running these operations. They are directly implicated in whatever their malware ends up doing.
Every other exploit and virus is used by impatient idiots. NSO knows how to stockpile exploits.
I've been there done that. It's how we pretty much kept Wii homebrew available consistently throughout the console's life. We had the next exploit ready before Nintendo patched the current one.
Apple isn't involved. NSO are just good at their job.
Did that actually happen? I thought the Saudi Arabia hack was a convenient tale conjured up by Bezos’ team to cover up that his girlfriend was the one sending his pictures and messages to her brother, who in turn tried selling it to the National Enquirer.
Other way around, National Enquirer was the fence using the brother as a cover story for the hack. (National Enquirer is a friend of Saudi Arabia by the way, if you didn't catch the special edition "The New Kingdom" magazine published by their parent company)
Pegasus does indeed play catchup. They seem to constantly introduce new vulnerabilities, each more advanced than the last.
The whole operation is incredibly sophisticated—this report [0] by Amnesty International is a great read. They also constantly upgrade their infrastructure—for example, they have at least -four- major versions of their C&C infrastructure, which they dub the "Pegasus Anonymizing Transmission Network". :(
Not entirely. Remember that some things, like kernels, require unsafe blocks, which tell Rust to suspend its memory-safety checks where they can't apply. There will be significantly fewer bugs, but some memory-safety bugs will absolutely remain.
What we are seeing is a problem with the very fundamentals of software written in non-memory-safe languages. Everything we do is a bandaid.
Any sufficiently funded group will find the zero days they need to exploit the users they want. It's not even a question of if they will find it, or when they will find it. They will find it and they will use it for unethical means.
This whole situation is hopeless until we go back to the basics and build systems from the ground-up with security in mind rather than trying to slap band-aids on architectures that were developed when computer security didn't even exist as a serious concept.
That may just mean more software is being written, and older software can still have a vulnerability that gets a CVE assigned. Also, a lot of CVEs are for potentially dangerous crashes that cannot necessarily be exploited, and of course a lot of them are for client-side issues requiring a lot of user intervention.
So no, what Rapid7 is saying doesn't mean it's getting worse. Also, they sell offensive penetration-testing software; they have an interest in making you believe it's getting worse.
Conceptually, why must a kernel require unsafe calls or actions in some routines?
Is it too expensive in terms of performance, or of time spent digging through the guts of ancient code? Is there some inherent property of the low-level functions that requires unsafe code?
Ask the people working on Redox OS, which is a Rust-based kernel and userland. They would know better than me, and they state on their website that a completely memory-safe OS in Rust, without any unsafe code, is not possible.
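For a concrete sense of why, here is the kind of operation a kernel does constantly: writing through a raw pointer at a fixed offset, as with memory-mapped device registers. Rust can't verify the pointer, so the store needs an unsafe block (sketch only; a real driver would use a board-specific physical address):

```rust
// The type system knows nothing about what lives at an arbitrary
// address, so only the programmer can vouch for the pointer -- hence
// `unsafe`. Volatile so the compiler can't elide or reorder the store,
// which matters when the memory is actually a device register.
fn write_register(base: *mut u8, offset: usize, value: u8) {
    unsafe {
        core::ptr::write_volatile(base.add(offset), value);
    }
}
```

The unsafe surface can be kept tiny and audited heavily, but it can't be reduced to zero.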
I might come off as quite naive here, but is there an equivalent of the mercantile laws on exporting goods and services to regimes with diverse and relaxed human-rights records when it comes to cyber?
For example, it has been established that it's illegal to export munition or strategic resources to designated Hate/Terrorist groups.
Trying to control Israel or any Israeli company is a political mine field no one wants to step on if they want to continue to have a career in politics.
I would hope Apple could do something quick and easy in response to this. From the original thread:
BlastDoor is a great step, to be sure, but it's pretty lame to just slap sandboxing on iMessage and hope for the best. How about: "don't automatically run extremely complex and buggy parsing on data that strangers push to your phone?!"
That would mean, at the very least, that every image and video sent via SMS would require user interaction to even see a preview. Including active conversations with your contacts, given that SMS spoofing is a thing.
And even then, it would all be for naught as soon as the user opens a message, which is a pretty normal thing to do.
(I do think they should be more cautious though. For example, under no circumstances should Apple Mail ever auto-extract an attached zip file from a stranger.)
Tin-foil-hat-time: This is Apple's backdoor, just in the "wrong hands"
(It's just as likely they have nothing to do with it, I'm just getting out in front of the eventual iMessage leak that Tim Apple gave the plan a thumbs up emoji)
There was a period from about 3 weeks to one week ago where I had crashes almost every day from Safari, Snapchat, and a few other apps. Maybe 1-3 times a day?
Definitely not normal, something eating lots of memory? Bug in some iCloud sync bs? If you’re really keen on finding out what’s going on, you can look at the system logs through xcode.
Amnesty International released their forensic tool on GH - https://github.com/mvt-project/mvt. Basically it will analyze your iPhone/Android backups and look for the known IoCs as described in the research paper.
NSO's zero-click exploits work for previous versions of iOS as well. It's still good you upgraded since the latest update includes many fixes for other known security vulnerabilities: https://support.apple.com/en-us/HT212528
No. More infections = more noise. If you want to target specific people for a long time, you want to make as little noise as possible. This includes unexpected traffic, file artefacts, background energy use, etc.
Although now that the cat is out of the bag, I'm sure some groups are working to reproduce it for mass-infection. Especially since this looks wormable.
The NSA was unable to protect its offensive crown jewels from the Shadow Brokers, and when they could not auction them off and decided to just release them in the public domain, that led to things like Wannacry or Petya that almost sank (pun intended) the world's No.1 shipping line, Maersk.
And yet those buffoons want us to trust them with master decryption keys for all encryption protocols.
With extremely valuable zero-days like this targeting is the way to go b/c you don't want the zero-day discovered by putting it out extensively in the wild. Obviously it's always a question of time anyways.
Passively collecting data on the wire is different from actively exploiting a device to execute malware. Any entity trying to work with intelligence agencies is definitely going to be careful and somewhat sparing with their use of an "S-tier" zero-day like this. (Unless they have reason to believe it's already likely been burned, in which case they might decide to hastily machine gun it while it's still viable.)
This is a different context, different targeted group, different use case, than what we've seen with global NSA monitoring. You're comparing apples to oranges.
It's crazy how much of Apple's security effort has gone into preventing users from installing apps on their own devices rather than into actually keeping them safe.
It probably cost millions of dollars to put together a team to develop and deploy this; it was only worth the investment because it could be reliably used against anyone with an iPhone.
Stop using proprietary software. Choose simple/libre protocols such as Jabber, IRC, or Matrix.
Avoid using videoconferences except for journalism.
Better: build a custom Android ROM from source and flash your phone with it.
Even better: get a little netbook, whichever is most compatible with OpenBSD. Encrypt your whole SSD with bioctl and use pledge(2)- and unveil(2)-aware browsers.
Use text browsers as much as you can, with a good hosts file. If you need JS, choose some browser such as vimb where you can set 3rd party cookies and JS ondemand with a switch.
Use Links+ (links -g for graphics) with Tor (the proxy settings have a Tor checkbox; set 127.0.0.1:9050 as the SOCKS4a proxy AND MARK that checkbox). Go to the Links+ settings and save them from the menu.
To do the same for i2pd, run Links+ with different settings on the command line to override the saved Tor settings. Basically, you need to send all proxied connections through 127.0.0.1:4444.
If you trust your partner, use email + GPG over I2P (i2pd is the daemon). Alpine/Mutt + GPG is difficult, but Claws Mail has PGP settings and is lightweight enough. Of course, if you've learnt to use s-nail + fdm + smtpd over i2pd (doable), that's fine too.
Oh that's bad.