iOS 14.6 device hacked with a zero-click iMessage exploit to install Pegasus (twitter.com/billmarczak)
416 points by amrrs on July 18, 2021 | hide | past | favorite | 167 comments


>"as @AmnestyTech observed and we @citizenlab can confirm, NSO Group's Pegasus spyware delivered via 0-click exploits is no longer "persistent" in the strict sense of the word (i.e., doesn't come back when you reboot). Persistence is achieved via firing the 0-click again. Because the 0-clicks they're using appear to be quite reliable, the lack of traditional "persistence" is a feature, not a drawback of the spyware. It makes the spyware more nimble, and prevents recovery of the "good stuff" (i.e., the spyware and exploits) from forensic analysis."

Oh that's bad.


On the plus side, the lack of persistence means attackers don't retain access through iOS updates. Their "persist-less" exploits will eventually be patched by Apple, at which point anyone who applies the update has a clean device.


That isn't how that word is normally used, AFAIK. Even most "persistent" attacks are thwarted by device updates on iOS. To get persistence through an update that fixes the underlying exploit, you would have to mess with the update mechanism itself: refuse to update at least the affected part, then try to simulate the update so the device still looks updated. And because Apple's software architecture tends to assume everything is updated at once, showing the user a new userland (what the user experiences) on top of an old kernel (a prerequisite to maintain access if your bugs came in userland, as otherwise the new kernel will refuse to run the old exploitable code) is actually quite difficult, particularly as your attack might not give you the ability to modify much kernel behavior. (Also, the firmware update process on iOS can be kicked off in different ways: if it's done over USB from iTunes, you might not have an opportunity to interfere with the process at all.)


If they have a shell on the device, they can choose to persist whenever they like. Should an iOS update be released that patches their exploit, chances are they can persist before the device installs the newest version.


Not really. When the device reboots, hardware root of trust takes over and the bug must be manually, externally exploited to regain control.

There has not been a single known bug granting persistent loading across a reboot since iOS 9.


Does that mean people should be rebooting their phones every week / day / hour?


I guess it means it never hurts if you have any doubt.

In this case, however, it wouldn't have been sufficient, as the spyware reinstalls quickly via the command center: if no data from a tracked device shows up after a set timeout, the command center resends a malicious iMessage to reinfect it.

In any case, apply security updates ASAP and regularly; they imply a reboot and patch as many exploits as possible.


That means if we can find an active device being exploited, we can cause them to send the attack to any device of our choosing and so find the zero-day. And the command center.


>cause them to send the attack to any device of our choosing

Can you explain what you mean? Can you clone/simulate an Apple device?


It also means journalists should probably get in the habit of frequent device reboots, which will erase any malware.


It's probably better than doing nothing but it doesn't sound very effective. With zero-click they'd just reinfect the device immediately.


I'm sure the affected people are glad to know other people won't have their secrets divulged.


All exploits stop working once they're patched.


It’s a reference to bootrom-level exploits. The bootrom is burned into the silicon and can’t be updated. If one finds an exploit in there, Apple can’t patch it without new chips.


But those still might or might not be persistent. If you need to obtain physical access again to use your exploit every time the user reboots, it isn't persistent even if the involved exploits are unpatchable.


If they have that many good zero-days, we need to rake Apple over some (metaphorical) coals until they get their shit together.

I am not sure rewriting iMessage in rust would be the answer, but it would be a start. Another start would be to refuse to sell iPhones to Israel as long as they do this.


How are image parsing exploits still a thing in 2021? Can Apple not use Rust? I struggle to understand why Apple is still relying on C/C++ in such a well known security hotspot.


>One of the major changes in iOS 14 is the introduction of a new, tightly sandboxed “BlastDoor” service which is now responsible for almost all parsing of untrusted data in iMessages (for example, NSKeyedArchiver payloads). Furthermore, this service is written in Swift, a (mostly) memory safe language which makes it significantly harder to introduce classic memory corruption vulnerabilities into the code base.

...This blog post discussed three improvements in iOS 14 affecting iMessage security: the BlastDoor service, resliding of the shared cache, and exponential throttling. Overall, these changes are probably very close to the best that could’ve been done given the need for backwards compatibility, and they should have a significant impact on the security of iMessage and the platform as a whole.

https://googleprojectzero.blogspot.com/2021/01/a-look-at-ime...


From later in that Twitter thread:

"It also indicates that Apple has a MAJOR blinking red five-alarm-fire problem with iMessage security that their BlastDoor Framework (introduced in iOS 14 to make zero-click exploitation more difficult) ain't solving."

And:

"BlastDoor is a great step, to be sure, but it's pretty lame to just slap sandboxing on iMessage and hope for the best. How about: "don't automatically run extremely complex and buggy parsing on data that strangers push to your phone?!"


That tweet doesn't make a lot of sense, because it's essentially saying "don't let people send you images".


To my inner armchair security enthusiast, a good solution looks like this:

  On incoming message, check one thing and one thing only: is the sender in the contact list? If YES...
     + Run the message through BlastDoor and continue as normal.
  If NOT…
     - Stop all non-essential parsing immediately.
     - Continue with a flow similar to modern e-mail clients (To increase your privacy, we have blocked some elements of this message).
     - Only continue normal message parsing if the user explicitly consents.
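The gating logic above can be sketched in a few lines. To be clear, this is a toy illustration with hypothetical names, not anything resembling Apple's actual message pipeline:

```rust
// Hypothetical sketch of the proposed gating flow. The types and
// names here are made up for illustration; none correspond to real
// Apple APIs.
#[derive(Debug, PartialEq)]
enum Handling {
    FullParse,    // sender is known: run through BlastDoor as normal
    MinimalParse, // stranger: block rich elements, show plain text only
}

fn classify(sender: &str, contacts: &[&str], user_consented: bool) -> Handling {
    // Check one thing and one thing only: is the sender in contacts?
    // Strangers only get full parsing after explicit user consent.
    if contacts.contains(&sender) || user_consented {
        Handling::FullParse
    } else {
        Handling::MinimalParse
    }
}

fn main() {
    let contacts = ["alice@example.com"];
    assert_eq!(classify("alice@example.com", &contacts, false), Handling::FullParse);
    assert_eq!(classify("stranger@evil.example", &contacts, false), Handling::MinimalParse);
    // Explicit consent upgrades a stranger's message to normal handling:
    assert_eq!(classify("stranger@evil.example", &contacts, true), Handling::FullParse);
}
```

The point of the sketch: the decision happens before any complex parsing runs, so the attack surface exposed to strangers is just the contact-list lookup.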


to me it says:

Only run the known dodgy parsing code on stuff coming from people in your contact list, not just on any random image that comes in.


All part of the ad model


This sounds interesting, but there’s not enough detail for me to understand what you mean. Care to elaborate?


I'd expect so, but the exploits we're talking about are post-BlastDoor, so something clearly isn't working.


Or fuzzing? Why is it that Google's Project Zero has to point out their mistakes for them? Apple's complacency on security is infuriating.


Apple doesn't admit wrongdoing. Take their multi-year denial of the butterfly keyboard issues.

The key to user satisfaction isn't security, it's consistent marketing reminders of "security".

Apple users here are often forced into the ecosystem due to iOS or safari dev, but the majority of Apple users care more about the name of the iPhone than performance. This is the nature of Veblen goods.


> Apple doesn't admit wrongdoing.

On the business end, sure.

But https://support.apple.com/en-us/HT201222#:~:text=the%20previ...


That’s a bit of a weird framing. Project Zero point out mistakes in Google’s products too. It’s just that they’re visible.

The assumption has to be that Apple quietly fixes at least a hundred times more security bugs in their products than the ones the occasional Project Zero blog post highlights.


Given their lackadaisical approach to bug reports filed by external researchers, and their grudging payments on their own bug bounty program, I think you are giving them far more credit than they deserve.


"Rewrite everything in rust" is not useful advice, and many of the decoders in ImageIO are open source libraries, that presumably have the same flaws (so you get "why isn't project X using rust" as well)


"Rewrite everything in Rust" is not that useful, but "write new code in Rust so you're not digging deeper holes" and "rewrite your most sensitive code in Rust" is pretty good advice.


Define "new" code - rust has no ABI stability story, and doesn't have a path to interact with any language other than C, so new code in existing libraries may not be feasibly written in rust.

Anyway, Apple's safe language is Swift, which they are clearly pushing and using more of as time goes by.


You can expose a C ABI, and so it’s a drop-in replacement for things that used to talk to C code. Which is virtually everything.
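For what it's worth, exposing a C-callable function from Rust is a small amount of ceremony. A minimal sketch (the function itself is made up for illustration):

```rust
// A Rust function exported with the C ABI. C, C++, Objective-C, or
// Swift callers can declare it in a header as:
//   size_t count_zeros(const unsigned char *buf, size_t len);
#[no_mangle]
pub extern "C" fn count_zeros(buf: *const u8, len: usize) -> usize {
    if buf.is_null() {
        return 0;
    }
    // Safety: the caller must pass a valid pointer/length pair; this
    // is the one trust boundary the FFI surface imposes.
    let slice = unsafe { std::slice::from_raw_parts(buf, len) };
    slice.iter().filter(|&&b| b == 0).count()
}

fn main() {
    // Calling it directly from Rust works too.
    let data = [0u8, 1, 0, 2, 0];
    assert_eq!(count_zeros(data.as_ptr(), data.len()), 3);
}
```

The unsafe block is confined to the boundary; everything past `from_raw_parts` is ordinary bounds-checked Rust.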


Not on Mac/i/watch/tvOS, where much of the core platform API is Objective-C (which Swift has first-class support for) and C++ (because of poor '90s decisions).

This also misses the point that you can't implement a subclass or what have you in Rust, and if every interface you have is C, then you have an exercise in mass unsafe code.


Both Objective C and C++ can call things that use the C ABI. Rust code ships in real products on those platforms.


Apple is using Rust too. We don't know how much, because Apple.

> doesn't have a path to interact with any language other than C

https://crates.io/crates/cxx

> so new code in existing libraries may not be feasibly written in rust

Not sure exactly what you mean here.


And you run the risk of introducing new bugs: memory bugs aren't the only bugs there are, and Rust doesn't catch all of them.


When your entire platform is plagued for years by security issues in one component, issues that at the end of the day cause huge personal harm to your customers, perhaps it is time to suck it up and rewrite the damn thing in a memory-safe language.


A great line in a great movie: "It is not economically viable."


In other words, the cost, reputational or monetary, of a few dead journalists/activists is less than what it would take to make it safe.


There is a bit of a risk of new bugs that Rust doesn't catch, but if your development process is at least modestly competent you'll have a large suite of automated tests for the bugs you found in the previous iteration of the software, which you can reapply to the new Rust code.


It’s the new bugs that didn’t exist in the original code that you need to be concerned about.

Consider any long-lived complex library. Count the bugs fixed since initial release. That’s your baseline — you can expect that order of magnitude of bugs in the initial release of your rewrite. Perhaps a new language will eliminate some classes of bugs, so maybe you can cut that number in half or even to a quarter. But you need to come to terms with that number. Now add on the number of bugs due to compatibility. Image formats generally are not tightly spec’ed, meaning the de facto standard libraries become the de facto standard — that is, the library being rewritten.

Rewrites are a long road that starts off one step forward and three steps back. It could still be worth it, depending on the case. But an effort to "rewrite in Rust" that doesn't account for this will fail.


> That’s your baseline — you can expect that order of magnitude of bugs in the initial release of your rewrite.

My point was that the test suite you start with for your rewrite will be (or at least should be) a lot better than the test suite you had when you started writing the original code.

The spec will likely be better too. Also, you can learn a lot from the previous iteration.

These effects, plus the benefits of the new language, have the potential to reduce the bug count dramatically. Maybe even by an order of magnitude.

Having said that, you're certainly right that there are some steps back and the net cost or benefit is likely highly context-dependent.

It would be great to have more data on projects that have done this, to help people estimate the costs and benefits in their context.


Your cost benefit analysis is off. Which would you rather have, a broken jpeg once in a blue moon or a hacked phone?

Writing image decoders for several formats is hard but it's not so hard that a trillion dollar company can't write a good one.


Anyone remember, about 20 years ago, there was a remote code execution possible simply by getting Outlook Express (Internet Explorer) to load a custom-crafted JPEG? Things haven't changed much.

I remain highly skeptical of any messaging app or communication chat program, email client that lets people "embed" arbitrary content in it.


Things haven't changed because there is a god-like mindset in software development: "I want my program to do everything." And web browsers and messaging apps are the worst. They use a myriad of libraries to parse untrusted data, and they have components which sometimes run with elevated privileges. What could go wrong?


What makes Rust especially good at protecting against parser vulnerabilities?


It's memory-safe (assuming you avoid the use of the 'unsafe' keyword) and it's performant. In particular, for fast parsing of large objects you want to avoid copying data as much as possible, and Rust's lifetime machinery lets you do that without introducing memory safety issues.

For example, Java is also memory-safe, but Java doesn't have first-class array slices.
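To illustrate the zero-copy point, here is a toy length-prefixed record parser (not a real decoder). The returned slices borrow from the input buffer rather than copying it, and the lifetime annotation makes use-after-free a compile error instead of a vulnerability:

```rust
// Toy parser for records of the form [len: u8][len bytes of payload].
// The lifetime 'a ties each returned slice to the input buffer, so
// the slices can never outlive the data they point into.
fn read_record<'a>(input: &'a [u8]) -> Option<(&'a [u8], &'a [u8])> {
    let (&len, rest) = input.split_first()?;
    if rest.len() < len as usize {
        return None; // truncated input: rejected, not overread
    }
    let (record, remainder) = rest.split_at(len as usize);
    Some((record, remainder)) // zero bytes copied
}

fn main() {
    let buf = [3u8, b'a', b'b', b'c', 1, b'z'];
    let (rec1, rest) = read_record(&buf).unwrap();
    assert_eq!(rec1, &b"abc"[..]);
    let (rec2, rest) = read_record(rest).unwrap();
    assert_eq!(rec2, &b"z"[..]);
    assert!(rest.is_empty());
    // A declared length running past the end of the buffer is caught
    // by the bounds check, safely:
    assert_eq!(read_record(&[9u8, 1, 2]), None);
}
```

In C, the equivalent "return a pointer into the buffer" pattern is exactly where lifetime confusion and overreads creep in; here the compiler enforces both.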


Swift has all of these things. It also has the exact same performance ceiling as Rust.


Nope. Swift manages memory via reference counting, which turns read-only workloads into read-write workloads, which is inherently problematic.


Yep. It's the same performance ceiling. Borrow checking is not the panacea it's made out to be. It's just enforcing single ownership.

What part of scanning through some immutable array of bytes to-be-parsed would be so definitively better in Rust?

Any sensibly written parser code would result in the compiler optimizing away all reference count operations. Only the most naive ARC implementations mutate the count for every reference pass implied by the source.

So again... why would it be a sensible choice to use Rust at Apple, where they created and use Swift heavily? Where the problem is that they're using some old unsafe parsing library instead of one written in a safer language?


"A sufficiently advanced compiler will optimize all the overhead away" is the standard line for all high-level languages with automatic memory management. It works for tracing GC too, you just need "sufficiently advanced" escape analysis.

Unfortunately it means there are always performance cliffs: you make some change to your code that looks innocuous and suddenly your performance collapses (e.g. because a function is now too big to be automatically inlined, so certain optimizations don't work anymore) --- and you don't understand why.


I agree that could happen, but a parser would have to be coded pretty strangely to disable these basic ARC optimizations. I don't imagine a parser needing to hold and release references beyond the call stack.

In this vein, both Swift and Rust performance will succumb to any number of their IL-level (and LLVM's IR-level) optimizations being disabled by some seemingly orthogonal change to the source code.

Either way, I hope Apple (and others) can squash this entire category of bugs soon!


If Apple were a less secretive company, you could ask the teams that are using Rust about why, but I doubt they'll directly answer.

We know that they are because they've posted job descriptions for Rust-specific jobs.


Even harder to understand why it's so important for Messages to have special functionality that has to be integrated with the OS instead of running in user space.


Messages code does run in userspace.


Oh, then I stand corrected. The last time a vulnerability in Messages was making the rounds, I thought that I learned it was possible because Messages is part of the OS and not just a pre-installed app, though I can't remember where I read it. I'd be glad to hear that I'm wrong about that though.


This is a case where using Wuffs would be a better idea than Rust: You get more safety than Rust and you can incrementally convert the codebase.


Apple is using Rust, but not for this. Or at least, I haven’t heard of it.


I don't think any Rust ships on iOS (but would be very interested in hearing corrections!)


If by “on iOS” you mean apps, it's not super popular, but it does happen. Cloudflare's 1.1.1.1 app is probably the biggest. If you mean “as part of the OS”, then no, I don't believe so.


On the server, we have Firecracker and gVisor to provide an extra layer of defense by not allowing userspace to directly access the kernel.

Will that be the future on client devices as well given kernels are just too complex to secure perfectly?


As the tweet author notes, starting with iOS 14 Apple has moved iMessage parsing into a sandboxed "blastdoor" process - I'm surprised it was ineffective in stopping this exploit chain.


Why are there so many parsing-related exploits?


Because people implement parsers in languages that don’t allow direct expression of grammars (e.g. C). To safely implement parsers you must choose either algebraic datatypes or continuation passing, and a lot of programmers choose neither. CPS is annoying in most languages. ADTs are the obvious choice but somehow in 2021 most people are using languages that don’t have them. If you write a parser in Haskell, for example, you’d have to mess up pretty badly and write totally non-idiomatic code to write a parser that crashes at all, let alone crashes in a way that compromises memory safety.
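Rust's enums are ADTs in this sense. A toy sketch of the idea (hypothetical mini-grammar, chosen only to show the shape): the grammar is encoded as a type, so there is simply no way to construct an invalid parse result, and malformed input becomes an `Err` value rather than memory corruption.

```rust
// A tiny expression grammar modeled as an algebraic datatype.
#[derive(Debug, PartialEq)]
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
}

// Parse "N" or "N + N + ..." (left-associated); anything else errors.
fn parse(input: &str) -> Result<Expr, String> {
    let mut parts = input.split('+');
    let first = parts
        .next()
        .ok_or("empty input")?
        .trim()
        .parse::<i64>()
        .map_err(|e| e.to_string())?;
    let mut expr = Expr::Num(first);
    for part in parts {
        let n = part.trim().parse::<i64>().map_err(|e| e.to_string())?;
        expr = Expr::Add(Box::new(expr), Box::new(Expr::Num(n)));
    }
    Ok(expr)
}

// Exhaustive match: the compiler rejects any case we forget to handle.
fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
    }
}

fn main() {
    assert_eq!(eval(&parse("1 + 2 + 3").unwrap()), 6);
    assert!(parse("1 + banana").is_err()); // rejected, not crashed
}
```

The exhaustive `match` is the ADT payoff: adding a new grammar production forces every consumer of `Expr` to handle it, which is exactly the discipline C-style tag-plus-union parsers lack.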


You can just use a handwritten parser in any memory-safe language.

However, I suspect parsers for big objects like images tend to have more vulnerabilities because developers try to avoid copying data, for performance reasons. Many memory-safe languages have their own performance issues, or make it hard to avoid copying bulk data, so those languages aren't a great fit.

This makes Rust a particularly good choice for writing parsers: Rust is pretty strong at supporting complex data sharing patterns while remaining memory-safe.


> You can just use a handwritten parser in any memory-safe language.

It’s still probably going to be wrong for a complex grammar, even if it’s not going to literally crash. Bratus et al did a survey of PDF parsers a few years ago and iirc all the popular ones were wrong in ways that don’t necessarily correspond to memory errors (like infinite looping).

> This makes Rust a particularly good choice for writing parsers

In particular, Rust has ADTs, which is the main feature I advocate for this.


Even infinite looping is a much better failure mode than "villains seize control of your device".


If your goal is validation (i.e. this is a JPG/PNG) and stripping of EXIF data it is entirely possible to write your own parser in a managed and safe language in less than 500 lines of code without sacrificing any performance.

Don’t load them into memory, parse them as a stream byte-by-byte in accordance with the standard for the codec, check every offset before seeking, and reject images that don’t conform to the standard.

And of course, a ton of fuzzing to accompany it.
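In that spirit, a minimal stream-based check might look like the following. This is a toy sketch that only verifies the PNG file signature; a real validator would walk every chunk with the same bounds discipline:

```rust
use std::io::{self, Read};

// The 8-byte PNG file signature, per the PNG specification.
const PNG_MAGIC: [u8; 8] = [0x89, b'P', b'N', b'G', 0x0D, 0x0A, 0x1A, 0x0A];

// Read only as many bytes as needed from the stream, and treat any
// truncation as "not a PNG" rather than reading out of bounds.
fn looks_like_png<R: Read>(mut stream: R) -> io::Result<bool> {
    let mut header = [0u8; 8];
    match stream.read_exact(&mut header) {
        Ok(()) => Ok(header == PNG_MAGIC),
        // Fewer than 8 bytes available: cleanly reject, don't overread.
        Err(e) if e.kind() == io::ErrorKind::UnexpectedEof => Ok(false),
        Err(e) => Err(e),
    }
}

fn main() -> io::Result<()> {
    let png = [0x89, b'P', b'N', b'G', 0x0D, 0x0A, 0x1A, 0x0A, 1, 2, 3];
    assert!(looks_like_png(&png[..])?);
    assert!(!looks_like_png(&b"GIF89a.."[..])?);
    assert!(!looks_like_png(&b"\x89PN"[..])?); // truncated: rejected
    Ok(())
}
```

Because the function is generic over `Read`, the same code validates a file, a socket, or an in-memory buffer without ever materializing the whole input.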


The overhead of the stream abstraction is typically a lot greater than the cost of accessing a byte array.

Also, maybe I'm wrong, but when I read "image parsing" I think that actually means "image decoding".


The overhead of stream abstractions is negligible if your goal is security when processing arbitrary input files provided from a zero-trust environment.

In environments where you’re prioritizing performance I’d still argue streams are likely your best bet when the size of the file to be parsed is not a constant. You wouldn’t want to load 50 large files into ram on a server environment let alone a phone.

If your input buffer is a bunch of tiny 10 KB files and you trust them? Sure, load them into memory and access their indices on the stack. Make sure you reuse the buffer to avoid unnecessary allocations.

If you want parallel processing with zero-allocations then streams with an array pool for their backing buffer are the best bet.

Not loading arbitrary files into memory will always be safer than doing so.

As for decoding - I believe the functions for validating if an array of bytes is an image should be far removed from the decoding and presentation of those bytes to the frame buffer. You don’t need to decode a JPG to validate that a file is a JPG. It either conforms to the standard or it doesn’t; the pixel data is irrelevant.


> if your goal is security

The goal is never just security.

E.g. for a Web browser like Firefox the priority has to be to be as fast or faster than the competition, THEN be secure. That's just the reality of what users care about. If the goal was just security we'd all have been using HotJava for the last 24 years.

The goal for Rust was performance plus safety. That's pretty hard to pull off.

> You wouldn’t want to load 50 large files into ram on a server environment let alone a phone.

mmap() works pretty well here.

> As for decoding - I believe the functions for validating if an array of bytes is an image should be far removed from the decoding and presentation of those bytes to the frame buffer. You don’t need to decode a JPG to validate that a file is a JPG. It either conforms to the standard or it doesn’t; the pixel data is irrelevant.

Yeah but in a browser for example you never want to just "validate" an image file, you want to decode it, and separating validation from decoding is just asking for trouble. That is the meaning of "parse, don't validate".


>ADTs are the obvious choice but somehow in 2021 most people are using languages that don’t have them.

Isn't using inheritance to create a hierarchy enough? Why not?

       Node 
   SubNode1    SubNode2
 SubNode1.1      SubNode2.1


Using inheritance in this way is a hack to emulate some of the functionality of ADTs. Grammars are perhaps one of the most poignant examples where the various constructors in your type might have no behaviors in common, so adherence to a shared interface is nothing but a vague indication that these types are somehow related. Sealed classes let you recover a little bit more of the functionality.


Well, but from a security standpoint it should be fine?


Because writing a good parser that is fast, secure and works on a whole bunch of crappy/broken/non compliant formats is very hard.

Sure, it's easy to write a fast, standards-compliant parser (assuming it's not a format like .psd or a Word document), but it will choke on the multitude of dodgy versions of files out there, causing complaints.

_most_ users only care about the picture/sound/video/other content viewing correctly, with security a distant second (they expect it to be secure), so the pressure is on to make the parser work with everything.


I get where product people will noisily demand this, but perhaps the idea of parsing even broken files successfully is a large part of the problem. HTML started this way, and it's still terrible.

At an ancient job, we did a lot of scraping content from HTML as the de facto input source. Each content provider had to have custom code written, and it was as bad as you would expect. XML came about, and its largest advantage was that invalid input was broken, and must be rejected as such.


My guess: because most parsing uses the stack a lot, and the parsed language often allows arbitrary length inputs, both of which are connected to overflow problems, which in turn can often be exploited.


It's more often use-after-free or heap buffer overrun bugs, these days.


Are these parsing exploits connected to JPEGs? I don't know for sure, but I can understand how this could be a can of worms.

The JPEG core format is complicated, but JPEGs in circulation today are simply nightmarish. They can include XMP, EXIF, and ICC data, plus a good dozen other extensions that may actually affect how the JPEG is displayed. For example, knowing whether a JPEG contains CMYK data depends on an Adobe extension which is about twenty years old. These extensions are in use today in images published on the web. So, just to display an image, one needs a parser for the image format, a parser for ICC, a parser for XMP, a parser for that obscure Adobe extension from twenty years ago, and so on. Often, each of them has its own library and represents decades of developer time. It's a lot of space for bugs and exploits.


> Why are there so many parsing-related exploits?

Broadly, because by necessity you're dealing with untrusted inputs in a relatively complex format. The complexity leaves room for bugs in how you handle the input and the user supplied data provides an easy way to feed malicious inputs in.


My thinking is this:

1. Exploitation inherently relies on malicious input data.

2. In computer systems, any input data (especially in human-facing systems) is not logically useful to the software until parsed.

3. Thus, a parser is the first software element which systematically interacts with the input data. It is the prime exploitation target.

That, + parsing complex data is just kind of hard to get right. If iMessage was UTF-8 only, this would not be an issue I'm sure.


The future is memory safety, but to get there, they would need to rewrite and audit millions of lines of code. Targeted attacks against VIP users don't cause significant PR damage, so why go to all that effort?


> Targeted attacks against VIP users don't cause significant PR damage, so why go to all that effort?

Because VIP users include people who run big-tech. Ask Bezos if it's worth fixing bugs that only target VIPs.


I'd argue that attacks on VIP users in particular cause a lot more significant PR damage, and there's a more serious discussion and harsher criticism when an important person is involved somehow.


The iPhone doesn't have to support anything other than the hardware it was born with, right? That should make for a much, much smaller and much simpler kernel.


XNU is a fairly versatile kernel, it's not really possible (nor would it be particularly useful) to rip out everything that is not iPhone-specific.


I've been hearing of Pegasus for a good 3 years now. Is it so hard to patch devices to close whatever means it uses to hack them?

Or is there also a Pegasus V2, V3, etc that plays catchup with OS's security patches?


It's the name of an iOS malware/RAT by NSO, an Israeli company that likes to sell its software to governments offering varying degrees of personal freedom around the world.

It's been around with zero-click exploits for years, and apparently even now, after their big iMessage "security" rewrite with iOS 14. Very likely that they have other entrypoints as well though.


Just so no one is confused: as the Facebook lawsuit confirmed, NSO runs the C&C servers for their clients. They are not just selling software on a "do what you want" basis.

It is NSO running these operations. They are directly implicated in whatever their malware ends up doing.


So Pegasus somehow manages to stay alive patch after patch when every other exploit and virus out there gets shut down

Is it not clear that at least Apple must be involved in this too? Who else is?


Every other exploit and virus is used by impatient idiots. NSO knows how to stockpile exploits.

I've been there done that. It's how we pretty much kept Wii homebrew available consistently throughout the console's life. We had the next exploit ready before Nintendo patched the current one.

Apple isn't involved. NSO are just good at their job.


Thanks for the explanation. It might seem obvious to people involved in this type of work, but I honestly found it hard to believe


...No, it is not clear.


Pegasus was also used by Saudi Arabia to hack Jeff Bezos’ phone and it was MBS (the crown prince) himself that sent the iMessage to him.



Did that actually happen? I thought the Saudi Arabia hack was a convenient tale conjured up by Bezos’ team to cover up that his girlfriend was the one sending his pictures and messages to her brother, who in turn tried selling it to the National Enquirer.


Other way around, National Enquirer was the fence using the brother as a cover story for the hack. (National Enquirer is a friend of Saudi Arabia by the way, if you didn't catch the special edition "The New Kingdom" magazine published by their parent company)


It was a WhatsApp missed call


Pegasus does indeed play catchup. They seem to constantly introduce new vulnerabilities, each more advanced than the last.

The whole operation is incredibly sophisticated—this report [0] by Amnesty International is a great read. They also constantly upgrade their infrastructure—for example, they have at least -four- major versions of their C&C infrastructure, which they dub the "Pegasus Anonymizing Transmission Network". :(

[0]: https://www.amnesty.org/en/latest/research/2021/07/forensic-...


Burn everything down and start with memory safe languages, or have computer security remain a joke.


Good luck with that absolutist agenda.


It's true though


Not entirely. Remember, some things, like kernels, require `unsafe` blocks, which tell Rust to turn off its memory-safety checks for that code. There will be significantly fewer bugs, but there absolutely will still be memory-safety bugs.


Sure, but we've been trying to "just fix the bugs in the C codebase" for ~30 years now and it just keeps getting worse.


>and it just keeps getting worse.

Is there any evidence for that because my perception is it has become much harder to exploit systems.


What we are seeing is a problem with the very fundamentals of software written in non-memory-safe languages. Everything we do is a bandaid.

Any sufficiently funded group will find the zero days they need to exploit the users they want. It's not even a question of if they will find it, or when they will find it. They will find it and they will use it for unethical means.

This whole situation is hopeless until we go back to the basics and build systems from the ground-up with security in mind rather than trying to slap band-aids on architectures that were developed when computer security didn't even exist as a serious concept.


Nonetheless, is there any evidence that "it just keeps getting worse"?



That may just mean more software is being written, and older software can still have a vulnerability that gets a CVE assigned. Also, a lot of CVEs are for potentially dangerous crashes that can't necessarily be exploited, and of course a lot of them are for client-side issues requiring a lot of user intervention.

So no, what Rapid7 is saying doesn't mean it's getting worse. Also, they sell offensive penetration-testing software; they have an interest in making you believe it's getting worse.


Conceptually, why must a kernel require unsafe calls or actions in some routines?

Is it too expensive in terms of performance, or time spent digging through the guts of ancient code? Is there some inherent property of the low level functions that require unsafe code?


Ask the people working on Redox OS, which is a Rust-based kernel and userland. They would know better than me, and they state on their website that running a completely memory-safe OS in Rust, with no unsafe code at all, is not possible.


Backdoors can be written in any language.


lol, are you saying that any software that does that is 100% safe?


I might come off as quite naive here, but is there an equivalent of mercantile laws regarding exporting goods and services to regimes with diverse and relaxed human-rights records when it comes to cyber?

For example, it has been established that it's illegal to export munitions or strategic resources to designated hate/terrorist groups.


Trying to control Israel or any Israeli company is a political minefield no one wants to step on if they want to continue to have a career in politics.


How long until the articles telling users to disable iMessage?


I would hope Apple could do something quick and easy in response to this. From the original thread:

  BlastDoor is a great step, to be sure, but it's pretty lame to just slap sandboxing on iMessage and hope for the best. How about: "don't automatically run extremely complex and buggy parsing on data that strangers push to your phone?!"


That would mean, at the very least, that every image and video sent via SMS would require user interaction to even see a preview. Including active conversations with your contacts, given that SMS spoofing is a thing.


And even then, it would all be for naught as soon as the user opens a message, which is a pretty normal thing to do.

(I do think they should be more cautious though. For example, under no circumstances should Apple Mail ever auto-extract an attached zip file from a stranger.)


We're talking iMessage here, so Apple has more reliable evidence of identity than what's provided by easily spoofed SMS.


Really goes to show Apple's approach to security, "feebly containerize something after it's already been severely exploited."


Run commercials that end with a black background and a single word: "Security."


ImageIO is a fairly widely used framework, so the number of attack vectors is very large.


It doesn't actually show a whole lot, because that's not really a very good characterization of how BlastDoor works.


Tin-foil-hat-time: This is Apple's backdoor, just in the "wrong hands"

(It's just as likely they have nothing to do with it, I'm just getting out in front of the eventual iMessage leak that Tim Apple gave the plan a thumbs up emoji)


anybody notice a lot of crashes in their iOS device lately?


Not really. I used to have occasional crashes, but my iPhone 12 has crashed only once since I got it at launch.


Nope. Not a single one since I got my 11.


Apps crash so often, I barely even register it, but the OS itself has only had a single reboot crash since I got my iPhone 12 right after launch.


How often does yours crash?


There was a period from about 3 weeks to one week ago where I had crashes almost every day from Safari, Snapchat, and a few other apps. Maybe 1-3 times a day?


Definitely not normal. Something eating lots of memory? Bug in some iCloud sync bs? If you're really keen on finding out what's going on, you can look at the system logs through Xcode.


iPhone 7 & 8, barely usable for the last 2 months (iOS 14.x).


Nope. Haven't had a crash in months. iPhone SE 2020.


Could be that your battery has deteriorated too much.


"fmld"

Please tell me this daemon isn't named after what I think it's named after.


It's meant to look like a system daemon, probably fmfd.


how would one tell if they'd been hit with this?



Amnesty International released their forensic tool on GH - https://github.com/mvt-project/mvt. Basically it will analyze your iPhone/Android backups and look for the known IoCs as described in the research paper.

https://www.amnesty.org/en/latest/research/2021/07/forensic-...


TIL my corporate webfilter blocks Amnesty's website under "Advocacy Orgs".

I guess human rights don't line up with corporate values.


> Oh, and also you can just run "strings" on the DataUsage.sqlite file and find the deleted entries...

Maybe they should have enabled secure deletion: https://www.sqlite.org/pragma.html#pragma_secure_delete


Dang, I just updated mine to 14.6. Is there anything I can do to be more cautious/prevent spyware on my iPhone?


NSO's zero-click exploits work for previous versions of iOS as well. It's still good you upgraded since the latest update includes many fixes for other known security vulnerabilities: https://support.apple.com/en-us/HT212528


Does anyone know if there is a publicly available list of the 50k numbers that were exploited by this software?


What options, if any, are there to reduce the remote attack surface on an iOS device?


disable networking or enable a firewall, if Apple lets you


This seems bad


Is there any mention on how to check your own devices for this exploit?


Amnesty International released their forensic tool on GH - https://github.com/mvt-project/mvt. Basically it will analyze your iPhone/Android backups and look for the known IoCs as described in the research paper.

https://www.amnesty.org/en/latest/research/2021/07/forensic-...


Seems safe to assume that everybody's been infected by this point, eh?


No. More infections = more noise. If you want to target specific people for a long time, you want to make as little noise as possible. This includes unexpected traffic, file artefacts, background energy use, etc.

Although now that the cat is out of the bag, I'm sure some groups are working to reproduce it for mass-infection. Especially since this looks wormable.


[flagged]


The NSA was unable to protect its offensive crown jewels from the Shadow Brokers, and when the Shadow Brokers couldn't auction them off and decided to just dump them into the public domain, that led to things like WannaCry and NotPetya, which almost sank (pun intended) the world's No. 1 shipping line, Maersk.

And yet those buffoons want us to trust them with master decryption keys for all encryption protocols.


With extremely valuable zero-days like this, targeting is the way to go, because you don't want the zero-day discovered by deploying it widely in the wild. Obviously it's only a question of time anyway.


Passively collecting data on the wire is different from actively exploiting a device to execute malware. Any entity trying to work with intelligence agencies is definitely going to be careful and somewhat sparing with their use of an "S-tier" zero-day like this. (Unless they have reason to believe it's already likely been burned, in which case they might decide to hastily machine gun it while it's still viable.)


They are able to drink from the firehose, though. This is an exploit on a device rather than a nation's infrastructure.

That being said, Stuxnet had done its business before it went public.


This is a different context, different targeted group, different use case, than what we've seen with global NSA monitoring. You're comparing apples to oranges.


It's crazy how much more of Apple's security effort has gone into preventing users from installing apps on their own devices than into actually keeping them safe.


This severely damages my confidence in iOS being a secure operating system. Looking into Linux phones.


If they could do this on iOS, your Linux phone doesn't stand a chance. They'll find bugs in it.


It probably cost millions of dollars to put together a team to develop and deploy this — it was only worth the investment because it could be reliably used against anyone with an iPhone.


So it's Security through Obscurity. One of the most mocked security methods online. Gotcha.


One of many tools, my man. Personally my security mantra is “don’t become an enemy of the state”


There's no such thing as a "secure operating system," at least not one with any adoption whatsoever.


Stop using proprietary software. Choose simple/libre protocols such as Jabber, IRC, or Matrix.

Avoid videoconferencing except for journalism.

Better: build a custom Android ROM from source and flash your phone.

Even better: get a little netbook, the one most compatible with OpenBSD. Encrypt your whole SSD with bioctl and use pledge(2)- and unveil(2)-restricted browsers.

Use text browsers as much as you can, with a good hosts file. If you need JS, choose a browser such as vimb where you can toggle 3rd-party cookies and JS on demand with a switch.

Use Links+ (links -g for graphics) with Tor (the proxy settings have a Tor checkbox; set 127.0.0.1:9050 for the Socks4a proxy AND MARK that checkbox). Go to the Links+ settings and save them from the menu. To do the same for i2pd (the I2P daemon), run Links+ with different settings on the command line to override the saved Tor ones; basically, route all proxied connections through 127.0.0.1:4444.

If you trust your partner, use email + GPG over I2P. Alpine/Mutt + GPG is difficult, but Claws Mail has some PGP settings and it's lightweight enough. Of course, if you've learnt to use s-nail + fdm + smtpd over I2P (doable), that's fine too.



