So You Want To Know If Your Microphone Is Bugged...
May 27, 2022
Say you have reason to believe that the State considers you enough of a visible and high-priority target to engage in expensive and time-consuming targeted digital surveillance (as opposed to merely sifting through the metadata generated by passive surveillance) -- and for the sake of simplicity let's assume they want to turn your phone into a snitch and will take over your microphone to do so. What should you be concerned about? Based on what we know from history and what we can infer from technical documentation and research, there are two main vectors through which this can be achieved...
Rootkits and/or spyware
This is by far the most common and best-documented approach in what remains a rare and poorly documented practice. The idea seems intuitive enough, too: infect your target with malware, then use undisclosed security vulnerabilities in the operating system to take over their phone and control their microphone at a level invisible to the user.
This is the approach used by the "Pegasus" malware, created by the now-infamous Israeli tech firm NSO Group[1] and sold to whichever government is willing to pay (including the US[2]). It gained notoriety starting in 2016 for being purchased by repressive regimes in the Middle East and then used to target journalists[3] and activists[4], and a different but related piece of NSO Group malware was used in the high-profile assassination of Jamal Khashoggi by the Saudi royal family[5]. Pegasus in particular was delivered to targets' phones through WhatsApp messages containing a phishing link; when the victim clicked the link, the malware would be downloaded onto their phone and install itself by exploiting an unpatched security vulnerability that NSO had found and kept to itself (known as a "zero-day"). This is known as a "one-click exploit", because it requires exactly one user interaction: clicking the phishing link (side note: though WhatsApp has been the primary avenue of distribution for Pegasus, nothing prevents such an attack from being conducted through any other messaging service, from Signal to plain SMS, because what matters here is the content of the message itself, i.e. the phishing link).
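To make the one-click mechanics concrete, here is a minimal sketch of the kind of indicator matching that forensic tools such as Amnesty International's Mobile Verification Toolkit (MVT) perform when checking a phone for traces of Pegasus: scan exported messages for links whose domains appear on a blocklist of known-malicious infrastructure. Everything here is deliberately simplified, and the blocklisted domains are hypothetical placeholders, not real NSO infrastructure; an actual investigation would use MVT itself against the published indicator lists.

```python
# Simplified sketch of indicator-of-compromise (IOC) matching: scan an
# export of text messages for links whose domains appear on a blocklist.
# The domains below are hypothetical placeholders; real forensic tools
# (e.g. Amnesty International's MVT) match against curated, published
# IOC lists and inspect far more than message bodies.
import re
from urllib.parse import urlparse

# Hypothetical example entries; a real list would come from published IOCs.
SUSPICIOUS_DOMAINS = {"example-tracker.net", "sms-link.example.com"}

URL_PATTERN = re.compile(r"https?://\S+")

def flag_suspicious(messages):
    """Yield (message, url) pairs whose URL's domain is on the blocklist."""
    for msg in messages:
        for url in URL_PATTERN.findall(msg):
            host = urlparse(url).hostname or ""
            # Match the blocklisted domain itself or any subdomain of it.
            if any(host == d or host.endswith("." + d) for d in SUSPICIOUS_DOMAINS):
                yield msg, url

if __name__ == "__main__":
    inbox = [
        "Your package has arrived: https://track.example-tracker.net/x7f3",
        "Lunch tomorrow?",
    ]
    for msg, url in flag_suspicious(inbox):
        print(f"suspicious link {url!r} in message {msg!r}")
```

The point is not that a script like this would catch a competent attacker (it wouldn't), but that one-click campaigns necessarily leave this kind of artifact -- a message containing a link -- which is precisely the trail that let researchers trace Pegasus infections.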
However, there is an even more advanced class of exploits called "zero-click exploits", which take effect upon the user merely receiving the message and require no user interaction at all (these also work by taking advantage of zero-days). They are significantly rarer than one-click exploits, but can be of bewildering subtlety, complexity, and (to be honest) ingenuity[6].
Because Pegasus and tools like it depend on unpatched zero-days, the vulnerabilities they exploit are usually fixed in short order once such attacks are discovered, with the fixes pushed out in software updates. Which segues into...
Low-level and/or hardware backdoors
There is very little hard evidence of this vector, and most of what exists consists of hypotheticals based on inference and extrapolation from what has been documented and reported on. That absence of evidence does not mean it's impossible, however; it means we are dealing with what is technically possible or feasible rather than with what has been demonstrated.
The idea behind this approach is that tech companies have themselves inserted backdoors either into their phone hardware or into the lowest level of software lying directly above the hardware (the so-called "firmware") -- whether or not in collaboration with the State -- allowing them or the State direct access, invisible to the user, to all functionality on your phone, to do whatever they want with it (like taking over your microphone). As you can probably tell, this is by far the most powerful vector and in many ways the "doomsday scenario" of targeted surveillance threat models[7], and the one with the fewest countermeasures besides throwing your phone into a fire. All of the hardware and much of the software on phones is completely proprietary, and detailed knowledge of their true inner workings is accordingly available only to a small number of people inside the large corporations that completely control the design and manufacture of consumer electronics like phones. This leaves a gigantic blind spot: a backdoor could conceivably be inserted at any of the numerous points in the supply and manufacturing chain, and outsiders have little way of assessing how feasible that actually is.
All this can certainly provoke intense dread and crippling paranoia (and if this writeup has done that, I have failed...), but one reprieve is that there have been no documented examples of this happening, even in extremely high-profile targeted surveillance operations (which is not to say that it could never happen). One would think, by recourse to Occam's razor, that if backdoors of this type already existed in phones people actually use, they would have been used much earlier and much more frequently, and we would all know about it by now given the absolute scandal such a revelation would cause. Moreover, the mystery and opacity of the hardware and software on devices so ubiquitous today have produced a small army of talented hardware hackers who do the thankless work of tearing down and reverse-engineering the latest phones in minute detail. They act informally, most often not even with security or privacy in mind but out of sheer curiosity or enterprise. If they hypothetically encountered a low-level backdoor while reverse-engineering, say, the latest iPhone, it's highly unlikely they would keep quiet about it. And although Apple is on record trying to patent hardware that can remotely disable iPhone cameras, specifically in the context of protests[8], nothing of the sort has been found or disclosed as of yet on any Apple device, despite a not-insignificant amount of active and passive scrutiny by reverse-engineers. Which is to say, there are many eyeballs (but, one could argue, not enough) peeled on this particular area of interest precisely because there are so many unknowns.
Finally, it is worth mentioning that in many cases (more than one might expect), inserting a low-level backdoor goes directly against the business interests of the large tech companies themselves. As many of them rapidly transition into taking over the functionality of one's wallet or bank, consumer trust in the security of their devices is paramount, and inserting any kind of backdoor on a device weakens that security -- who's to say that the backdoor will or can only be discovered and used by those it's intended for? This is one of the reasons why Apple not only refused to collaborate with the FBI to crack the San Bernardino shooter's iPhone, but said it had no way of doing so[9] (this notably does not apply to data held in the cloud, on Apple's servers, which was completely fair game and was the route the FBI ended up taking here).
- https://www.theguardian.com/news/2022/feb/02/fbi-confirms-it-obtained-nsos-pegasus-spyware
- https://citizenlab.ca/2016/08/million-dollar-dissident-iphone-zero-day-nso-group-uae/
- https://citizenlab.ca/2022/04/peace-through-pegasus-jordanian-human-rights-defenders-and-journalists-hacked-with-pegasus-spyware/
- https://citizenlab.ca/2018/10/the-nso-connection-to-jamal-khashoggi/
- https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html
- https://www.zdnet.com/article/apple-patent-could-remotely-disable-protesters-phone-cameras/
- http://www.antipope.org/charlie/blog-static/2016/03/follow-the-money-apple-vs-the-.html