Git Shell Bypass, Less Is More

We’ve always been fans of wargames. Not the movie (well, also the movie), but here we’re referring to hacking wargames. There are several formats, but usually you get access to an initial shell account somewhere, which is level0, and you have to exploit some flaw in the system to gain level1 permissions, and so forth. Almost always there’s a level where you have to exploit a legitimate binary (with some shady permissions) that does more than what the regular user thinks.

In the case of CVE-2017-8386, less is more.

[Timo Schmid] details how git-shell, a restricted shell meant to be used as the upstream peer in a git remote session over an SSH tunnel, can be abused to achieve arbitrary file read, directory listing and somewhat restricted file write. The basic idea of git-shell is to restrict the commands allowed in an SSH session to the ones git needs (git-receive-pack, git-upload-pack, git-upload-archive). The researcher realized he could pass parameters to these commands, like the flag --help:

$ ssh git@remoteserver "git-receive-pack '--help'"

GIT-RECEIVE-PACK(1)            Git Manual             GIT-RECEIVE-PACK(1)

NAME
 git-receive-pack - Receive what is pushed into the repository
[...]

What the flag does is make the git command open its man page, which is handed off to a pager program, usually less. And this is where it gets interesting. The less command, when running interactively, can do several things you would expect, like searching for text, jumping to a line number or scrolling around. What it can also do is open a new file (:e), save the input to a file (s) and execute commands (!). To make it run interactively, you have to force the allocation of a PTY in ssh, like so:

$ ssh -t git@remoteserver "git-receive-pack '--help'"

GIT-RECEIVE-PACK(1) Git Manual GIT-RECEIVE-PACK(1)

NAME
 git-receive-pack - Receive what is pushed into the repository

 Manual page git-receive-pack(1) line 1 (press h for help or q to quit)
 

Press h for help and have fun. One caveat: in usual installations the command execution won’t really run arbitrary commands, since the login shell currently running is git-shell, restricted to a few whitelisted commands. There are, however, configurations where it does work, such as keeping bash or sh as the login shell and limiting the user so that they can only use git (as in shared environments without root access). You can see such an example here.
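To give a feel for it, here’s roughly what a session looks like once you’re sitting at the less prompt (the paths are just examples):

$ ssh -t git@remoteserver "git-receive-pack '--help'"
# man page opens in less; at the less prompt:
#   :e /etc/passwd     examine (read) any file the git user can access
#   s /tmp/copy.txt    save the piped-in man page input to a file on the server
#   !id                shell escape -- on a proper git-shell login this is the
#                      part that stays restricted to the whitelisted commands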

The quickest fix seems to be to set the no-pty option server-side, either per key in the git user’s authorized_keys or with PermitTTY no in the sshd configuration. This prevents clients from allocating a PTY, so less won’t run in interactive mode.
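For a dedicated git-only account, that can look something like this (the key material, user name and Match example are placeholders, adapt them to your setup):

# per key, in ~git/.ssh/authorized_keys:
no-pty,no-port-forwarding,no-X11-forwarding ssh-rsa AAAA...rest-of-key... git-client

# or for the whole account, in /etc/ssh/sshd_config:
Match User git
    PermitTTY no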

$ man less

LESS(1) General Commands Manual LESS(1)

NAME
less - opposite of more

Ironic, isn’t it?


Keep The Burglars Away With Some Pi

Ten years ago, we never imagined we would be able to ward off burglars with Pi. However, that is exactly what [Nick] is doing with his Raspberry Pi home security system.

We like how, instead of using a standard siren, [Nick] utilized his existing stereo system to play a custom audio file that he created. (Oh, the possibilities!) How many off-the-shelf alarm systems can you do that with?

The Pi is the brains of the operation, running an open source software program called Home Assistant. If any of the Z-Wave sensors in his house are triggered while the alarm system is armed, the system begins taking several actions. The stereo system is turned on via IR so that the digital alarm audio file can be played. Lights flash on and off. An IP camera takes several snapshots and emails them to [Nick].

Home Assistant didn’t actually have the ability to send inline images in an email at the time [Nick] was putting his system together. What did [Nick] do about that? He wrote the code to give it that ability and submitted it through GitHub. That new code made it into a later version of the program. Ah, the beauty of open source software.

Perhaps the most important part of this project is that there were steps taken to help keep the wife-approval factor of the system on the positive side. For example, he configured one of the scripts so that even if the alarm is tripped multiple times in succession, the alarm won’t play over itself repeatedly.

This isn’t [Nick’s] first time being featured here. Check out another project of his, which involves a couple of Pis communicating with each other via lasers.

 

Hackaday Prize Entry: Secure Storage On SD Cards

Here’s a puzzler for you: how do you securely send data from one airgapped computer to another? Sending it over a network is right out, because that’s the entire point of an airgap. A sneakernet is inherently insecure, and you shouldn’t overestimate the security of a station wagon filled with tapes. For his Hackaday Prize entry, [Nick Sayer] has a possible solution. It’s the Sankara Stones from Indiana Jones and the Temple of Doom, or a USB card reader that requires two cards. Either way, it’s an interesting experiment in physical security for data.

The idea behind the Orthrus, a secure RAID USB storage device built around two SD cards, is to pair the cards. With both cards inserted, you can read and write to the RAID drive without restriction. With only one, the data is irretrievable, so the cards are safe in transit if shipped separately.

The design for this device is based around the ATXMega32A4U. It’s pretty much what you would expect from an ATmega, but this one has a built-in full-speed USB interface and hardware AES support. The USB interface is great for presenting the two SD cards as a single drive, and the AES engine is used to encrypt the data with a key that is stored in a key storage block on each card.
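If you want the flavor of the scheme without the hardware, here is a rough illustration of the two-card idea using stock command line tools. To be clear, this is not Orthrus’ actual key derivation or on-card format, just the general principle that neither key block alone is enough:

# each card carries its own random key block
$ dd if=/dev/urandom of=blockA.bin bs=32 count=1
$ dd if=/dev/urandom of=blockB.bin bs=32 count=1
# the volume key only exists when both blocks are combined
$ KEY=$(cat blockA.bin blockB.bin | sha256sum | cut -d' ' -f1)
$ openssl enc -aes-256-ctr -K "$KEY" -iv 00000000000000000000000000000000 \
      -in secret.tar -out secret.enc    # fixed IV is acceptable only for this one-off demo
# with just one block, the key can't be reconstructed and secret.enc stays opaque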

For the intended use case, it’s a good design. You can only get the data off of these SD cards if you have both of them. However, [Nick] is well aware of Schneier’s Law — anyone can design a cryptosystem that they themselves can’t break. That’s why he’s looking for volunteers to crack the Orthrus. It’s an interesting challenge, and one we’d love to see broken.

Ultrasonic Tracking Beacons Rising

An ultrasonic beacon is an inaudible sound with encoded data that a listening device can use to receive information on just about anything. Beacons can be used, for example, inside a shop to highlight a particular promotion, or in a museum to encode locations for guided tours. Or they can be used to track people, er, consumers. Imagine if Google finds out… oh, wait… they already did, some years ago. As with almost any technology, it can be used to ‘do no harm’ or to serve other purposes.

Researchers from the Technische Universität Braunschweig in Germany presented a paper about Ultrasonic Side Channels on Mobile Devices and how they can be abused in a variety of scenarios, ranging from simple consumer tracking to deanonymization. These ultrasonic beacons work in the 18 kHz – 20 kHz range, which most people can’t hear thanks to presbycusis (unless you are under twenty or so). Yes, presbycusis. This frequency range can be played through almost any speaker and picked up easily by most mobile device microphones, so no special hardware is needed. Speakers and mics are almost ubiquitous nowadays, so there is a real appeal to the technology.
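You don’t need the researchers’ code to convince yourself the channel exists; a speaker, a microphone and SoX will do. A quick sketch (the 18.5 kHz tone is an arbitrary test signal, not any real beacon encoding):

$ sox -n beacon.wav synth 1 sine 18500      # one second at 18.5 kHz, inaudible to most adults
$ play beacon.wav
$ rec -r 44100 capture.wav trim 0 3         # record 3 s from the default microphone
$ sox capture.wav -n spectrogram -o capture.png   # the tone shows up clearly in the spectrogram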

Continue reading “Ultrasonic Tracking Beacons Rising”

Is Intel’s Management Engine Broken?

Betteridge’s Law of Headlines states, “Any headline that ends in a question mark can be answered by the word no.” This law remains unassailable. However, recent claims have called into question a black box hidden deep inside every Intel chipset produced in the last decade.

Yesterday, on the Semiaccurate blog, [Charlie Demerjian] announced a remote exploit for the Intel Management Engine (ME). This exploit covers every Intel platform with Active Management Technology (AMT) shipped since 2008. This is a small percentage of all systems running Intel chipsets, and even then the remote exploit will only work if AMT is enabled. [Demerjian] also announced the existence of a local exploit.

Intel’s ME and AMT Explained

In 2005, Intel began including Active Management Technology in its Ethernet controllers. This system is effectively a firewall and a tool used for provisioning laptops and desktops in a corporate environment. In 2008, a new coprocessor — the Management Engine — was added. The Management Engine is a processor connected to every peripheral in a system. The ME has complete access to all of a computer’s memory, network connections, and every peripheral connected to it. The ME runs while the computer is hibernating and can intercept TCP/IP traffic. The Management Engine can be used to boot a computer over a network, install a new OS, and disable a PC if it fails to check in with a server at some predetermined interval. From a security standpoint, if you own the Management Engine, you own the computer and all data contained within.

The Management Engine and Active Management Technology have become a focus of security researchers. The researcher who finds an exploit allowing an attacker access to the ME will become the greatest researcher of the decade. When this exploit is discovered, a billion dollars in Intel stock will evaporate. Fortunately, or unfortunately, depending on how you look at it, the Management Engine is a closely guarded secret, it’s based on a strange architecture, and the on-chip ROM for the ME is a black box. Nothing short of corporate espionage or looking at the pattern of bits in the silicon will tell you anything. Intel’s Management Engine and Active Management Technology are security through obscurity, yes, but so far they have stayed secure for a decade while being a target for the best researchers on the planet.

Semiaccurate’s Claim

In yesterday’s blog post, [Demerjian] reported the existence of two exploits. The first is a remotely exploitable security hole in the ME firmware. This exploit affects every Intel chipset made in the last ten years with Active Management Technology on board and enabled. It is important to note this remote exploit only affects a small percentage of total systems.

The second exploit reported by the Semiaccurate blog is a local exploit that does not require AMT to be active, but does require Intel’s Local Manageability Service (LMS) to be running. This is simply another way that physical access equals root access. From the few details [Demerjian] shared, the local exploit affects a decade’s worth of Intel chipsets, but not remotely. In other words, it’s another evil maid scenario.

Should You Worry?

Image caption: This hacker is unable to exploit Intel’s ME, even though he’s using a three-hole balaclava.

The biggest network security threat today is a remote code execution exploit for Intel’s Management Engine. Every computer with an Intel chipset produced in the last decade would be vulnerable to this exploit, and RCE would give an attacker full control over every aspect of a system. If you want a metaphor, we are dinosaurs and an Intel ME exploit is an asteroid hurtling towards the Yucatán peninsula.

However, [Demerjian] gives no details of the exploit (rightly so), and Intel has released an advisory stating, “This vulnerability does not exist on Intel-based consumer PCs.” According to Intel, this exploit will only affect Intel systems that ship with AMT, and have AMT enabled. The local exploit only works if a system is running Intel’s LMS.

This exploit — no matter what it may be, as there is no proof of concept yet — only works if you’re using Intel’s Management Engine and Active Management Technology as intended. That is, if an IT guru can reinstall Windows on your laptop remotely, this exploit applies to you. If you’ve never heard of this capability, you’re probably fine.

Still, with an exploit of this magnitude, it’s wise to check for patches for your system. If your system does not have Active Management Technology, you’re fine. If your system does have AMT but you’ve never turned it on, you’re fine. If you’re not running LMS, you’re fine. Intel’s ME can be neutralized if you’re using a sufficiently old chipset. This isn’t the end of the world, but it does give the security experts who have been panning Intel’s technology for the last few years the opportunity to say, ‘told ya so’.
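If you want a quick first look at whether anything on your network is exposing AMT at all, probing the ports it normally listens on will do. This is no substitute for Intel’s official detection tool, and the address range here is just an example:

$ nmap -p 623,664,16992-16995 --open 192.168.1.0/24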

Stealing Cars For 20 Bucks

[Yingtao Zeng], [Qing Yang], and [Jun Li], a.k.a. the [UnicornTeam], developed the cheapest way so far to hack a passive keyless entry system, as found on some cars: around $22 in parts, give or take a buck. But that’s not all: they managed to increase the previously known effective range of this type of attack from 100 m to around 320 m. They gave a talk at HITB Amsterdam a couple of weeks ago and showed their results.

In its essence, the attack is not new; it’s basically a range extender for the keyfob. One radio stays near the car, the other near the car key, and the two radios relay the signals coming from the car to the keyfob and vice versa. This version of the hack stands out in that the [UnicornTeam] reverse engineered and decoded the signals of the keyless entry system (produced by NXP), so they can send the decoded signals over any channel of their choice. The only constraint, from what we could tell, is the transmission timeout: it all has to happen within 27 ms. You could almost pull this off over the Internet instead of radio.
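That last remark isn’t much of an exaggeration. At propagation speed, the 27 ms window covers thousands of kilometres; what actually eats the budget is sniffing, decoding and re-transmitting the frames, not the distance itself:

$ python3 -c "print(0.027 * 300000, 'km of signal travel fits in the 27 ms window')"
8100.0 km of signal travel fits in the 27 ms window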

The actual keycode is not cracked, as in a HiTag2 attack, and it’s not like hacking a rolling-code keyfob either. The signals are just sniffed, decoded and relayed between the two devices.

A suggested fix from the researchers is to decrease this 27 ms timeout. If it is short enough, at least the distance for these types of attacks is reduced. Even if that could eventually mitigate or reduce the impact of an attack on new cars, old cars are still at risk.  We suggest that the passive keyless system is broken from the get-go: allowing the keyfob to open and start your car without any user interaction is asking for it. Are car drivers really so lazy that they can’t press a button to unlock their car? Anyway, if you’re stuck with one of these systems, it looks like the only sure fallback is the tinfoil hat. For the keyfob, of course.

[via Wired]

You Think You Can’t Be Phished?

Well, think again. At least if you are using Chrome or Firefox. Don’t believe us? Well, check out Apple’s new website then, at https://www.apple.com . Notice anything? If you are not using an affected browser you’ll just see a strange URL after opening the page; otherwise it looks pretty legit. This is a page that demonstrates a type of Unicode vulnerability in how the browser interprets and shows the URL to the user. Notice the valid HTTPS. Of course the domain doesn’t belong to Apple; it is actually “https://www.xn--80ak6aa92e.com/”. If you open the page, you can see the actual URL by right-clicking and selecting view-source.

So what’s going on? This type of phishing, known as an IDN homograph attack, relies on the fact that the browser, in this case Chrome or Firefox, interprets the “xn--” prefix in a URL as an ASCII-compatible encoding prefix. The encoding is called Punycode and it’s a way to represent Unicode using only the ASCII characters allowed in Internet host names. Imagine a sort of Base64 for domains. This allows domains with international characters to be registered; for example, the domain “xn--s7y.co” is equivalent to “短.co”, as [Xudong Zheng] explains in his blog.
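The mapping is easy to play with, since Python ships a punycode codec (how the Cyrillic output renders depends on your terminal):

$ python3 -c "print('短'.encode('punycode').decode())"
s7y
$ python3 -c "print(b'80ak6aa92e'.decode('punycode'))"
аррӏе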

Different alphabets have different glyphs that work in this kind of attack. Take the Cyrillic alphabet: it contains 11 lowercase glyphs that are identical or nearly identical to their Latin counterparts. This class of attack, where an attacker swaps a letter for its look-alike counterpart, is widely known and is usually mitigated by the browser:

Continue reading “You Think You Can’t Be Phished?”