The biggest news in the infosec world, besides the fact that balaclavas are becoming increasingly popular due to record-low temperatures across the United States, is that leet haxors can listen to you from your iPhone using FaceTime without you even answering the call. The security implications of this bug are obvious: a phone should only turn on its microphone after you pick up a call. This effectively turns any iPhone running iOS 12.1 or later into a party line. In response, Apple has taken Group FaceTime offline in preparation for a software update later this week.
So, how does this FaceTime bug work? It’s actually surprisingly simple. First, start a FaceTime call with an iPhone contact. While the call is dialing, swipe up, and tap Add Person. Add your own phone number in the Add Person screen. This creates a group call with two instances of your iPhone, and the person you’re calling. You may now listen in to the audio of the person you originally called even though they haven’t chosen to pick up the call. Dumb? Yes. Insecure? Horribly. If your iPhone is ringing, the person on the other end could be listening in.
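What’s presumably going wrong under the hood (Apple hasn’t published details, so this is pure speculation): the Group FaceTime path seems to open the audio stream as soon as the call is promoted to a group call, without ever checking whether the callee has actually answered. Here is a minimal sketch of that missing guard in Swift, with entirely made-up types; none of this is Apple’s actual FaceTime code.

```swift
// Hypothetical illustration only: these types are NOT Apple's real
// FaceTime internals, just a sketch of the guard that appears to be missing.
enum CallState {
    case ringing    // callee has not picked up yet
    case connected  // callee answered
}

struct Participant: Equatable {
    let phoneNumber: String
}

final class GroupCall {
    private(set) var state: CallState = .ringing
    private(set) var participants: [Participant] = []
    private var audioStreamsOpen = false

    // The reported behavior suggests something like this: promoting the call
    // to a group call opens the audio streams regardless of call state.
    func addParticipant(_ p: Participant) {
        participants.append(p)
        openAudioStreams()   // BUG: runs even while state == .ringing
    }

    // A safer version gates audio on the callee actually answering,
    // and refuses to let the caller add their own number at all.
    func addParticipantSafely(_ p: Participant, caller: Participant) {
        guard p != caller else { return }
        participants.append(p)
        if state == .connected {
            openAudioStreams()
        }
    }

    func answer() {
        state = .connected
        openAudioStreams()
    }

    private func openAudioStreams() {
        audioStreamsOpen = true
    }
}
```

Whatever Apple’s real fix ends up looking like, it presumably boils down to the gated version: no microphone and no media streams until the call has actually been answered.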
But this isn’t a story about how Apple failed yet again. This is a story about how this security flaw was found, and what a normal person can do if they ever find something like this.
The first report of this bug came from a complete rando. Twitter user @MGT7500 first reported the bug to Apple Support a mere nine days ago, then posted about it on Twitter:
That’s it. That’s the responsible disclosure. We’ve heard stories about random people on the Internet finding security flaws that make the heads of people running trillion-dollar companies burst into flames, but here’s the evidence, rendered in tweet form. Additionally, [MGT7] emailed Apple, Fox News (not an affiliate), CNBC, CNN, and 9to5Mac about this security flaw. There was no response until 9to5Mac ran the story eight days later.
If a random person on the Internet finds a security vulnerability, what should they do? This is in the hacker and infosec realm, so the most common advice is to request a CVE, contact the parties involved (in this case Apple, and the best email to reach them is the twenty-first link on this page), and negotiate a time after which the vulnerability will be disclosed. This is called responsible disclosure. You might want to check into bug bounties, because there might be a cash award. Alternatively, you could reach out to security researchers investigating the same platforms and see if they could use their pull on Twitter to focus attention on the problem. If that security researcher is honest, you may even be the PI on whatever paper comes out of your discovery.
A random person on the Internet isn’t an infosec expert. The random person on the Internet simply wants things fixed, and in this case [MGT7] did exactly the right thing: they emailed Apple Support, registered as a developer, and went through the right channels. This reporting process should be easier and more obvious, and the response should be swift.
We have a new hero in the infosec world. It’s a Twitter account that’s been around for a few months, has opinions about college football in Arizona, and is still using the default avatar. Whoever [MGT7] is, we’re going to say they’re the best example of what you should do if you ever find a security flaw: find an email address on the company’s page for the security team. Email them, and sit back and wait. That’s all you need to know. It’s also the complete opposite of what security researchers suggest, and this is a failing of the entire community.
> It’s also the complete opposite of what security researchers suggest
That’s news to me. It would be nice if you elaborated.
They did. 6th paragraph, starts with “If a random person on the Internet finds a security vulnerability…”
I also am a bit confused. I thought the story was building up to recommending a bit more than 9 days before you go public. Usually, a security firm will give them like 90, won’t they? I guess I missed the “obvious,” that security flaws should be made public as soon as you get impatient.
Apple likes you to think that they care about privacy… but actions speak louder than words.
So, Apple took immediate action. Immediately after the email passed the mill and got to someone up high in the hierarchy. :) 8 days is not so slow, actually. It shows that the hierarchy isn’t very deep at Apple.
No response until 9to5Mac ran a story about security. It sort of makes you wonder if the trigger for response was a responsible journalist asking for a response, or if there were several layers of management within Apple that correctly understood the security flaw and escalated. My money is on the fast response to the story and not a moderate but understandable response to a security flaw called into the help desk.
My guess is the others didn’t understand what was handed to them. They are journalists and therefore somewhat limited in comprehension.
Send your tips to Hackaday! :)
Considering what this “flaw” accomplished, one should always wonder if it was actually a “flaw.”
You’d think that if THEY, whoever they are, are capable of putting a backdoor into iOS that allows eavesdropping on users, they would also be capable of not making it as obvious as having your phone ring whenever you’re being listened to.
I did and I’m brown LOL
https://www.youtube.com/watch?v=5Zu1VzfY8LQ
B^)
I can only imagine all the fun things Apple can do through the products they sell. Maybe eavesdropping was supposed to be reserved for Apple employees, and the bug was that a consumer stumbled across a way to gain access. Wonder if they have a voyeur thing for the cameras. They already collect all kinds of crazy marketing data, basically everything that happens on your phone, which seems kind of invasive, so why wouldn’t they be interested in what’s going on outside the phone?
I always put a piece of tape over the “selfie” camera on my devices. I do not have any desire to take a selfie of my ugly mug and do not want others to see it either.
Someone needs to write an app that sends a fake picture from your selfie camera when you’re not using it.
We used to have to do video conferencing, and I thought it would be fun to write an app that would let you loop a pre-recorded video instead of the camera feed. It had the potential to turn an otherwise offensive meeting into a productive nap.
“Everything”? Good thing everyone’s on an unlimited plan.
Doesn’t the microphone need to be on for any kind of voice commands to work?
That doesn’t excuse connecting the audio (or video) of a call before it’s actually answered.
I was trying to get Netflix to fix a bug in the app used on the PS3. After 3 months of trying I gave up and gave Sony a call. I got through to a level 1 tech support lady. The first dude didn’t get it or know how to respond (it was not in the script!). The lady mentioned the app is not created by Sony and Netflix should be contacted. After relaying that I had tried for 3 months to get it fixed, I suggested to her that if the problem is presented by Sony it will get more attention than it did from me, the peon! Guess what: 3 days later the problem was fixed in the next software update from Netflix.
The moral of the story: he who has clout is listened to. Money talks, all others walk.
What I really want to know is how something like this didn’t come up in testing. Sure, it’s an edge case, but I wonder if Apple developers ever actually take their new feature/app and install it on their own iPhone, to try it for a few days outside of a pristine sandboxed environment. Or get someone else to screw around with it: “haha what if I try to add myself to the FaceTime call” “oh shoot that’s not good”.
Ooooh, I know this one. Most engineers think in terms of “how it works.” They don’t think in terms of how to break what they’ve made. For that you want service technicians, who mainly deal with stuff in just-broken mode. Neither of these jobs quite exists in the software realm, but the people who have the same flawed thinking do. One of my most valued coworkers is a guy who has a talent for breaking the stuff I create. He has helped me find hundreds of bugs before they shipped. Most devs would just get pissed off at him instead of considering him a valued asset though.
+1
I used to work for a company, now defunct. We had a department dedicated to testing/breaking the software. I can say for sure they came up with the most esoteric bugs you could think of. Some were almost impossible to fix; they almost became features.
This. Engineering mindset vs Hacker mindset.
In talking with Jiska about her Bluetooth hack and disclosure (https://hackaday.com/2018/12/30/finding-bugs-in-bluetooth/) she summed it up perfectly.
She contacted Broadcom and told them that there was a flaw in the pre-pairing transaction that could be exploited when the attacking device sent X. The engineer on the other end responded that sending X wasn’t in the spec, and thus no Bluetooth device would ever do it.
“Your hack doesn’t conform to the Bluetooth specification, so it’s not a bug.”
“But I can run code on your BT host without even pairing to it”
“Not if you follow the rules!”
A non-profit I worked for paid a guy $125 an hour to password protect their website. I tested it by simply not entering a password. You can guess the outcome. That’s right, they paid him to fix it while I twiddled my thumbs on salary.
> Email them, and sit back and wait. That’s all you need to know.
Which is fine until you realise the whole “responsible disclosure” thing evolved precisely because that is what people did, and the companies would almost always ignore the reports. The guaranteed disclosure of the vulnerability is the stick that forces managers to give the problem the priority it requires, instead of falling back on “oh, only one person found it, it’s no big deal”.
Been there, seen that.
Only the threat of disclosure seems to get people to start taking action.
Found a few critical authentication issues (backdoors, plaintext passwords) and people sitting on their hands about it.
It’s even worse inside many companies.
Especially with the “only one person internally found it, not to worry”
Sometimes it’s best to go incognito and email it in from an external address.
Keep it secret, keep it safe.
https://i.imgur.com/Ll1yWma.mp4
What I am wondering now: how many of these security holes are actual real flaws, and how many are intentionally put there by the developer?
https://en.wikipedia.org/wiki/Hanlon%27s_razor
++
Why can you add yourself to the FaceTime call?
The dev probably never thought of it, and thus never considered that anyone would try, so they didn’t put a check in the code to prevent it from happening. In my first programming class the professor made a point that has stuck with me to this day: always check corner cases, even if you assume they are unlikely or impossible to occur. The moment you overlook a single possibility is the moment Murphy will come back to bite you.
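To make that concrete, here is a toy example in Swift (nothing to do with Apple’s actual code, just a hypothetical filter): the “impossible” input, a caller adding their own number, is exactly the one that needs an explicit check.

```swift
// Hypothetical example only, not real FaceTime code: the "nobody would
// ever do that" input is precisely the one that needs an explicit check.
func participantsToRing(caller: String, added: [String]) -> [String] {
    // Corner case: the caller adds their own number. Without this filter,
    // the call would start ringing (and streaming audio to) the caller
    // themselves before anyone picks up.
    return added.filter { $0 != caller }
}

// "Who would ever add themselves to their own call?" Someone, eventually.
let ringing = participantsToRing(caller: "+1-555-0100",
                                 added: ["+1-555-0100", "+1-555-0199"])
// ringing == ["+1-555-0199"]
```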
According to this interview with the kid who found the bug, it works if you add a third party as well. https://www.yahoo.com/gma/high-school-student-stumbled-upon-apples-facetime-bug-105003240–abc-news-topstories.html
It’s especially funny when you consider Apple’s recent ad: https://appleinsider.com/articles/19/01/04/apple-plasters-privacy-ad-on-billboard-near-las-vegas-convention-center-ahead-of-ces
I’d trust Apple over Google any day of the week.
“But this isn’t a story about how Apple failed yet again.”
Wow biased much?
Unless you juuust haaappen to be browsing the hardware device-driver innard code and stumble upon a “bug” that spits out money from an ATM… you’ll need a handle fast!