Ever wondered what “cyberwar” looks like? Apparently it’s a lot of guessing security questions and changing passwords. It’s an interesting read on its own, but there are a few clues worth teasing out if you read between the lines. A general in the know mentioned that Isis:
clicked on something or they did something that then allowed us to gain control and then start to move.
This sounds very similar to stories we’ve covered in the past, where 0-days are used to compromise groups or individuals. Perhaps the NSA supplied such an exploit, and it was sent in a phishing attack. Through various means, the U.S. team quietly compromised systems and collected credentials.
The article mentions something else interesting. Apparently the targets of this digital sting had also been compromising machines around the world, and were using those machines to manage their efforts. The U.S. team made the decision to compromise those machines as well, in order to lock out the Isis team. This might be the most controversial element of the story. Security researchers have wanted permission to do this sort of thing for years. How should those third parties view these incursions?
The third element that I found particularly interesting was the phase 2 attack. Rather than outright delete, ban, and break Isis devices and accounts, the U.S. team installed persistent malware that emulated innocuous glitches. The internet connection is extremely laggy on certain days, certain websites simply don’t connect, and other problems crop up. These are exactly the sort of gremlins that networking pros spend all day trying to troubleshoot. The idea that it might be intentional gives me one more thing to worry about.
Quantum Supremacy and the Death of RSA
Quantum Supremacy made the news, as Google announced they had achieved the milestone in one of their research projects. Quantum supremacy is the term for the tipping point when a quantum computer finally outperforms any classical computer at some task. Google’s quantum computer performed a calculation in minutes, and they suggest that a mundane supercomputer would take thousands of years to perform the same process.
So that’s it, right? All our computers can be retired, RSA is dead, and we should prepare for the singularity. Not so fast. The problem solved was the “random circuit sampling problem.” Not familiar with that particular challenge? It can be thought of as a self-test for a quantum computer: important in the theoretical realm, but it doesn’t mean much in the real world. While a few have claimed this to be the end of encryption, there are still years of work ahead before quantum computing has a real-world effect.
Encryption was again prematurely declared deceased by Crown Sterling, the creator of a brand new encryption technique, “Time AI.” The whole thing is predictably bogus. These are the guys that rented a sponsored time slot at Black Hat, got booed during their presentation, and proceeded to launch a lawsuit against the hecklers. Let’s just say they aren’t the most well-respected security company.
Local Accounts vs the Cloud
I’m not sure if you’ve installed Windows recently, but Microsoft has made local accounts even harder to create on Windows 10 installs. The stated reason for the Microsoft Account is security and convenience. It may be more convenient, but moving your account information into the cloud is certainly not more secure.
For your periodic reminder that the cloud is a hip way to describe somebody else’s computer: A Yahoo engineer has pleaded guilty to abusing his administrative privileges to access customer accounts, with the intent of gaining access to users’ compromising images. It’s a scary reminder that there are potentially malicious people working at the big companies we trust with our data.
Encrypted DNS
Google and Mozilla have both begun rolling out encrypted DNS over HTTPS in Chrome and Firefox. This seems like an obvious security win for everyone, so why are several groups complaining and trying to block Google’s move? The stated reason is a fear that Google is positioning itself to be the single, centralized DNS provider. While Google’s public DNS is indeed one of the most popular, Chrome’s DNS over HTTPS support won’t materially increase Google’s DNS traffic.
So why the pushback? The most plausible theory is that encrypting DNS will make data mining harder for ISPs, who currently use DNS lookups to monitor customers. Some of this monitoring is for positive reasons, like detecting malware infections. It’s possible that it’s also used for things like tailored advertisements.
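To make this concrete, here’s a minimal sketch of what a DoH lookup can look like from the client side, using libcurl against Google’s public JSON resolver endpoint. The URL and parameters here are the documented JSON convenience interface (an assumption on my part that it fits this discussion); RFC 8484 DoH proper carries binary DNS messages over HTTPS, and Cloudflare runs a similar service.

```c
/* Minimal DoH sketch -- build with: gcc doh.c -lcurl
 * Assumes Google's JSON resolver API at dns.google; this is a convenience
 * interface, not the RFC 8484 binary wire format. */
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    /* The query, including the name being looked up, travels inside TLS. */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://dns.google/resolve?name=hackaday.com&type=A");

    CURLcode res = curl_easy_perform(curl);  /* JSON answer prints to stdout */
    if (res != CURLE_OK)
        fprintf(stderr, "DoH query failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    return 0;
}
```

The point is that an on-path observer sees only an HTTPS connection to the resolver, not the individual names being looked up.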
On a technical note, even with DNS over HTTPS, domain names are still sent in the clear as part of the HTTPS handshake, in a TLS extension known as Server Name Indication, or SNI. The SNI value is a hint that lets a web server present the correct certificate when multiple websites are hosted on the same machine. Encrypted SNI is an experimental fix for this leak, and it too is being slowly deployed.
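For a feel for where that leak actually happens, here’s a rough sketch of a TLS client, assuming OpenSSL 1.1 or newer: the hostname handed to SSL_set_tlsext_host_name() goes out unencrypted in the ClientHello, before any keys are negotiated, which is exactly the hole Encrypted SNI is meant to plug.

```c
/* SNI sketch -- build with: gcc sni.c -lssl -lcrypto (OpenSSL 1.1+ assumed).
 * The hostname set below is visible to anyone watching the wire, even though
 * everything after the handshake is encrypted. */
#include <openssl/ssl.h>
#include <openssl/bio.h>
#include <openssl/err.h>
#include <stdio.h>

int main(void)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    BIO *bio = BIO_new_ssl_connect(ctx);
    SSL *ssl = NULL;

    BIO_get_ssl(bio, &ssl);
    SSL_set_tlsext_host_name(ssl, "hackaday.com");   /* cleartext SNI */
    BIO_set_conn_hostname(bio, "hackaday.com:443");

    if (BIO_do_connect(bio) <= 0)
        ERR_print_errors_fp(stderr);                 /* handshake failed */
    else
        printf("Handshake done; the server name went out in the clear.\n");

    BIO_free_all(bio);
    SSL_CTX_free(ctx);
    return 0;
}
```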
iOS Checkm8
[axi0mX] dropped the Checkm8 iOS vulnerability on Twitter just a few days ago. The vulnerability lives in the device’s boot ROM, the very first code that runs at power-on, and affects devices all the way up to the iPhone X. The release mentions that it’s a use-after-free bug in the boot ROM’s USB stack, and depends on a race condition to trigger.
The sheer number of devices impacted by this vulnerability has alarmed some, but this is a tethered-only attack, and on its own it doesn’t break the secure enclave. What Checkm8 is useful for is jailbreaking iDevices. Apple designed their devices with this early boot code hard-coded in silicon as part of their security stack, so there is absolutely no way to fix this vulnerability, and many, many devices are now permanently open to jailbreaking. This has been called a renaissance for the iOS jailbreaking scene, and we’re sure to see many interesting ramifications from this vulnerability.
Android VoIP
On the other side of the mobile landscape, a set of Android vulnerabilities was just made public, all in the VoIP stack. The most serious, CVE-2018-9475, was patched back in 2018. It was a simple buffer overflow: when the calling user name or number was longer than 513 bytes, a return address could be overwritten, allowing an attacker to redirect execution into their own code.
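The Android source isn’t reproduced here, but the class of bug is the classic one, easy to sketch in a few lines of C. The function and buffer names below are invented purely for illustration.

```c
#include <string.h>

/* Illustrative only -- not the actual Android code. */
void on_incoming_call(const char *caller_number)
{
    char number[513];               /* fixed-size stack buffer */
    strcpy(number, caller_number);  /* BUG: no length check -- anything past
                                       513 bytes tramples the stack, and
                                       eventually the saved return address */
    /* ... hand the string off to the call UI ... */
}

/* The boring fix: bound the copy and always NUL-terminate. */
void on_incoming_call_fixed(const char *caller_number)
{
    char number[513];
    strncpy(number, caller_number, sizeof(number) - 1);
    number[sizeof(number) - 1] = '\0';
}
```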
Another interesting problem was that of a very long SIP name, the name displayed on screen for an incoming call. At the time, Android didn’t sanity-check that name for length, so a long enough value would simply cover the interface and prevent answering or rejecting the incoming call.
The bugs were reported and fixed, so as long as your phone is running a reasonably recent Android version, all should be well. For those running devices that no longer receive security updates? Maybe it’s time to look at LineageOS, or one of the other third-party ROMs.
DoH (DNS over HTTPS) is really a question of who you trust to log (and use) the names and times of every website you access, plus all the metadata that can be generated from that data (e.g. this person gets up at exactly 6:20am every single morning and leaves by 7:00am, ….). Your local ISP, Alphabet Inc. (Google), or somewhere else entirely; or maybe you run your own local caching-only DNS server to obfuscate the collection of timing metadata.
Using DoH and using your local ISP are both bad in most cases.
Run one’s own server, and the most the world will see is raw addresses flowing out. The browser is probably a bigger leak than the DNS.
Most people still don’t even know about cert stores and root signing, decades later… It all probably goes over their heads.
DoH will likely lead to data monopolies…
“The internet connection is extremely laggy on certain days, ”
Similar to Cliff Stoll jangling his set of keys across the Rx and Tx wires in The Cuckoo’s Egg.
“For your periodic reminder that the cloud is a hip way to describe somebody else’s computer”. Which is pretty much the entire internet. In the case of the cloud and storage, encryption is one’s friend. Processing encrypted data is being worked on, but I don’t know how well it works.
Microsoft forcing people to use cloud accounts instead of local ones is a security disaster in the making, as cloud accounts are inherently less secure.
Microsoft certainly knows a thing or two about security disasters!
Well, I dunno if they know anything, but they have certainly been the subject and the cause of them countless times. If they knew anything they’d stop building their OS on top of 40 years of mistakes.
But they stole all those mistakes, and are fundamentally incapable of inventing anything new or innovative. They probably don’t understand their own system much better than their power users do. Oh well.
I’ve already heard nightmare stories of MS account users being locked out for one reason or another (other than forgetting the password), leaving their devices functionally useless.
The ability to be locked out of a machine remotely, whether by accident or by design, is not a great design choice. Even if they can monopolize users’ data, they could see a lot of home users switching to Apple or just abandoning PCs for mobile devices when there is an eventual, massive screw-up.
– Yeah, giving MS the ability to remotely lock me out of my own machine(s)? No thanks. Not even if they somehow managed to never screw it up. Sadly this may start becoming common, and with store-bought machines, ‘basic users’ likely won’t know the difference, or what a local (admin or otherwise) account even is.
– The less cloud, the better. Mostly all it is good for is cloud and software companies’ bottom lines. You can keep your minor conveniences and your internet-access requirement for work that should have no need for one.
Funny you mention this:
Our agency uses O365 linked to our on-prem AD system.
However, I had a user that for the last 6+ months would call in daily to our help-desk to have his account unlocked. (No one escalated this issue to me; I didn’t know until 2 days ago.)
So I asked how long this had been going on, etc. Then I just asked him: do you use your phone to check your email? He said yes, but that it hadn’t worked for about the same amount of time as the account lockouts.
It took me 2.5 seconds to put the two together and ask him for his phone, where we discovered a former admin had locked the device admin functions, including adding and changing accounts, away from anyone but himself. He left no info on what the lock code was.
The final result: 6 months or so ago the user’s password was changed in AD but not updated on the phone, since that function was locked out by the prior admin. His phone, automatically retrying his old password, was inadvertently locking out his domain account. And the current help-desk just kept unlocking his account, sometimes 3-5 times a day, for months.
Pulled his SIM, put it in a new phone without such restrictions, issue solved. (I did have to cut and file down the SIM to fit the new device.)
While I had a happy ending to our little cloud-integrated nightmare, I can only imagine what it’s like having to rely on Microsoft to keep your account safe, and needing an internet connection just to install and use your OS. I mean, this is the same company that can’t seem to get a Windows update to work without bricking their own hardware (see Surface Pros rendered inept by MS updates).
I’m not sure it really is – MS accounts are designed to authenticate from cache (just like local accounts) when they’re not connected. But MS has taken some apps (including, sadly, the excellent OneNote app) and made them effectively impossible to use without OneDrive (which, of course, requires an MS cloud account). This is a bigger problem for me than any other aspect of the cloud, because it prevents me from making backups of my most critical information. (If anyone does know how to back up OneNote app data, please let me know…)
Now with the news of Quantum Supremacy, it is about time to switch to a machine that has no discernible amount of memory, no real storage solution worth speaking of, and needs a fairly hefty “classical computer” just to administrate and handle it…
Now, not to poke fun at quantum computing, but frankly stated, it will likely take decades, if not centuries, for it to become more than a specialty tool for certain workloads. It isn’t really “optimal” for run-of-the-mill applications, not to mention any normal user’s requirements for a system.
But here I would like to stop and rather take a look at the words “classical computer”.
A lot of people might shrug their shoulders, thinking that surely x86 and ARM are good representations of what transistorized computers are capable of. But not really.
x86, ARM, even Z80, MC68000 and a lot of other architectures are all part of a group called “general purpose architectures”, these are architectures that are designed to run practically anything you toss at them with reasonable efficiency. In other words, they aren’t optimized for any particular application. (they do have more application specific instructions, and might even have an emphasis on certain areas of use. But nothing like an application specific architecture.)
Then there are many architectures specifically targeting certain workloads, be it database management, packet switching/routing, encryption, 3D rendering, video encoding, crypto mining, etc. These architectures typically wipe the floor with general purpose architectures in their respective fields. (Performance and power-efficiency differences of a couple of orders of magnitude aren’t unheard of.)
Though using a DSP, a GPU, or a crypto-miner ASIC as one’s main CPU isn’t a fun experience for most users, regardless of its performance and power-efficiency advantages over x86, ARM, or other general purpose architectures within the application-specific architecture’s intended field. And since these application-specific architectures generally perform abhorrently at general purpose work, they make even less sense as a replacement for your CPU.
A quantum computer is, at present, also fairly application-specific, and not all types of processing can even be done on one.
To oversimplify drastically: a quantum computer can do a tremendous amount of work in one cycle, but typically only runs at a few kHz to a couple of MHz (last I checked). So if your task is very compatible with quantum computing, it might only take a few cycles. If your task isn’t compatible, it could take just as many cycles as a “classical computer” would need, or it might not even be something a quantum computer can run, leaving you with only classical computing as an option, other than doing it by hand, that is. So is quantum computing faster? Well, it depends.
Though, so far, quantum computing is still in its infancy, so this might all change. But to answer the age-old question, “Will it run Crysis?” Likely not, since a game is generally a mixed bag of various workloads.
FWIW, Quantum computers aren’t really good enough for crypto breaking anytime soon. RNA computers may be, but it’s hard to get info about them, as only the spooks have them, and they’re obviously not talking…
The most interesting thing about quantum computer presently is that their computing power is growing at a double exponential rate. That doesn’t mean like e^(2x) but rather e^(e^x). At this rate they will be doing interesting things pretty soon.
Man, come on. The old “if there’s a question in the headline, the answer is invariably no” trope. Come on y’all. I know you gotta get some clicks, but when you write a headline like that and explain that no, RSA is still safe for many years to come several paragraphs down–you’re part of the problem. You’re spreading misinformation because the people who are most likely to impulsively share are not the ones likely to read the article first. And their viewers on the conspiracynet won’t read it either. Stop it. Stop making headlines in the form of a deceptive question that implies the opposite of the truth.
Whoops, looks like I replied to you instead of directly to the article. My mistake.
It was sarcasm, not click-baiting.
“Is RSA Finally Broken?” https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines
Checkm8, so now I can hopefully do something with the Apple TV 3 that is gathering dust?
Y’all haven’t gotten it yet? Here’s a clue: Go watch a hamster on a wheel.
Adding more encryption to DNS would likely make it harder for routers to cache, harder for ISPs and routers to reroute you to local mirrors, and harder to do intranet sites.
I assume you’ll be able to opt out… Maybe…
Why don’t they just put onion routing right in Chrome, and let the people who want it have real privacy easily, and let the rest of us have better performance?
If you wanted real privacy, you wouldn’t be using a browser made by the world’s largest data-mining company.
DoH/DoT discovery methods are still under discussion. So far Mozilla’s standpoint is that encryption solves everything, which isn’t really the proper answer. DoH fundamentally discards authenticity information in the replies (DNSSEC), and the current (nonexistent) discovery process is a bit botched, so it is a lot easier to tamper with it and then do your tricks in your favourite DoH proxy, which magically turns your bogon replies into totally legit ones, because… encryption.
The whole thing is a lot more complicated than providing a secure transport from the user device to a ‘trusted’ DoH proxy; as long as this proxy still uses ‘regular’ DNS to resolve queries, we hardly have an airtight system. I agree that the most exposed/endangered parts of the name resolution process are the user’s end device (the OS/malware it runs) and the home network (where the router is a nice place to do nasty stuff), and yes, DoH at the app level will mitigate most of these attack vectors. But until the discovery process is defined end-to-end in a plausible and secure way, and all the prerequisites are implemented at every level (RESINFO or HTTPSVC RRs only accepted from DNSSEC-signed zones), we cannot say we’re significantly more secure than we are now.
Saying that DoH is superior when it’s merely a hack: it doesn’t carry proper response validity data (maybe due to increased reply size?) yet still uses an absolutely bloaty JSON data representation (easy/fun to read and process, but sloppy on data efficiency), and all this in 2019…
I hate to say it, but it doesn’t matter how bulletproof your last mile is if it gets its answers from the same muddy pool.
I’d say there are different countries with different ISP/government behavior. The global ‘saviors’ are more or less the same jackals who monetise every last bit of your data, and in general they are happy to sell it to ISPs/telcos – I’ve read the product propositions from Cisco/Akamai/CF. In most cases the local SP bit-pusher folks are 2-3 levels less crafty at pulling off the kind of big-data profile building that could affect the ads/content delivered to you, but your overlords are happy to pay money for this stuff to the OTT players, who can easily ensure no one else gets to that data.