This Week In Security: Browser Exploits, Play Protect, And Turn ON Your Firewall!

Google Chrome has done a lot of work on JavaScript performance, pushing the V8 engine to more and more impressive feats. Recently, that optimization gained one more piece: the Maglev compiler, which sits between Sparkplug and TurboFan as a mid-tier optimization step. With a Just In Time (JIT) system, the time saved by optimizing code has to be carefully weighed against the time the optimization itself takes, and Maglev is another tool in that endless hunt for speed. And with anything this complicated, there’s the occasional flaw found in the system. Of course, because we’re talking about it here, it’s a security vulnerability that results in Remote Code Execution (RCE).

The trick is to use Maglev’s optimization against it. Set up a pair of classes, such that B extends A. Calling new B() results in an attempt to use the constructor from A, which works because the compiler checks that the constructors match before doing so. There’s another way to call a constructor in JS, something like Reflect.construct(B, [], Array);. This calls the B constructor, but indicates that the constructor should return an Array object. You may notice there’s no array in the A class below. Tricking the compiler into using the parent class constructor in this fashion results in the array being left uninitialized, and whatever happens to be in memory sets the length of the array.
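To make that concrete, here’s a minimal sketch of the class setup described above. This is not a working exploit, just the shape of the technique; the actual trigger conditions in Maglev are more involved.

```javascript
// Not a working exploit -- just the shape of the setup described above.
// Note that there is no array anywhere in class A.
class A {
  constructor() {
    this.x = 1;
  }
}

class B extends A {}

// Normal construction: B has no constructor of its own, so A's runs.
const b = new B();

// Reflect.construct runs B's constructor chain, but the third argument
// (new.target) tells the engine the result should be an Array. Under the
// Maglev flaw described above, that array could be left uninitialized,
// its length read from whatever happened to be in memory.
const arr = Reflect.construct(B, [], Array);
```

Continue reading “This Week In Security: Browser Exploits, Play Protect, And Turn ON Your Firewall!”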

The Hello World Of GPT?

Someone wants to learn about Arduino programming. Do you suggest they blink an LED first? Or should they go straight for a 3D laser scanner with galvos, a time-of-flight sensor, and multiple networking options? Most of us need to start with the blinking light and move forward from there. So what if you want to learn about the latest wave of GPT — generative pre-trained transformer — programs? Do you start with a language model that looks at thousands of possible tokens in large contexts? Or should you start with something simple? We think you should start simple, and [Andrej Karpathy] agrees. He has a workbook that makes a tiny GPT that can predict the next bit in a sequence. It isn’t any more practical than a blinking LED, but it is a manageable place to start.

The simple example starts with a vocabulary of two. In other words, the tokens are 1 or 0. It also uses a context size of 3, so it will look at 3 bits and use them to infer the 4th bit. To further simplify things, the examples assume you will always get a fixed-size sequence of tokens, in this case, eight tokens. Then it builds a little from there.
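As a back-of-the-envelope illustration of the task, not [Andrej Karpathy]’s actual workbook code (which trains a real, if tiny, transformer), here is what “predict the next bit from a 3-bit context” boils down to, using nothing but frequency counts over some made-up training sequences:

```python
# A stand-in for the tiny GPT's job: map each 3-bit context to a
# distribution over the next bit, here via simple frequency counting.
from collections import defaultdict

CONTEXT_SIZE = 3
counts = defaultdict(lambda: [0, 0])  # context -> [zeros seen, ones seen]

train = ["01010101", "00110011", "01110111"]  # made-up 8-token sequences
for seq in train:
    bits = [int(b) for b in seq]
    for i in range(len(bits) - CONTEXT_SIZE):
        ctx = tuple(bits[i:i + CONTEXT_SIZE])
        counts[ctx][bits[i + CONTEXT_SIZE]] += 1

def prob_next_is_one(ctx):
    zeros, ones = counts[tuple(ctx)]
    total = zeros + ones
    return 0.5 if total == 0 else ones / total  # unseen context: coin flip

print(prob_next_is_one([0, 1, 0]))  # -> 1.0 given the training data above
```

A real GPT replaces this lookup table with a learned model, which is what lets it generalize to contexts it has never seen.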

Continue reading “The Hello World Of GPT?”

This Week In Security: LastPass Takeaway, Bitcoin Loss, And PyTorch

We mentioned the LastPass story in closing a couple of weeks ago, but details were still a bit scarce. The hope was that LastPass would release more transparent information about what happened and how many accounts were accessed. Unfortunately, it looks like the December 22nd news release is all we’re going to get. For LastPass users, it’s time to make some decisions.

To recap, an attacker used information from the August 2022 breach to target a LastPass employee with a social engineering ploy. This succeeded, and the attacker managed to access LastPass backups, specifically a customer account database and customer vaults. There has been no official word on how many users’ data were included, but the indication is that it was the entire dataset. To make matters worse, those vaults are only partially encrypted. Saved URLs were exposed as plain text to the attacker, though usernames and passwords are still encrypted using your master password.

So what should a LastPass user do now? It depends. We can assume that whoever has the LastPass vault data is currently throwing every password list available at it. If you used a weak password — derived from words in any language or previously compromised — then it’s time to change all of your passwords that were in the vault. They are burned. Continue reading “This Week In Security: Lastpass Takeaway, Bitcoin Loss, And PyTorch”

Edging Ahead When Learning On The Edge

“With the power of edge AI in the palm of your hand, your business will be unstoppable.”

That’s what the marketing from artificial intelligence companies seems to read like. Everyone seems to have cloud-scale AI-powered business intelligence analytics at the edge. It all sounds impressive, but we’re not convinced the marketing mumbo jumbo means anything. But what does AI on edge devices look like these days?

Being on the edge just means that the actual AI evaluation and maybe even fine-tuning runs locally on a user’s device rather than in some cloud environment. This is a double win, both for the business and for the user. Privacy can more easily be preserved as less information is transmitted back to a central location. Additionally, the AI can work in scenarios where a server somewhere might not be accessible or provide a response quickly enough.

Google and Apple have their own AI libraries, ML Kit and Core ML, respectively. There are tools to convert TensorFlow, PyTorch, XGBoost, and LibSVM models into formats that Core ML and ML Kit understand. But other solutions try to provide a platform-agnostic layer for training and evaluation. We’ve also previously covered TensorFlow Lite (TFL), a trimmed-down version of TensorFlow, which has matured considerably since 2017.
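As an example of what one of those conversion paths looks like, here’s a sketch using coremltools, one such converter; the toy model here is a throwaway placeholder of ours, not anything from a real project:

```python
# Sketch: converting a toy PyTorch model to Core ML via coremltools.
# SmallNet is a throwaway placeholder model.
import torch
import coremltools as ct

class SmallNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = SmallNet().eval()
example = torch.rand(1, 4)
traced = torch.jit.trace(model, example)  # the converter wants TorchScript

mlmodel = ct.convert(traced, inputs=[ct.TensorType(shape=example.shape)])
mlmodel.save("SmallNet.mlmodel")  # ready to drop into an Xcode project
```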

For this article, we’ll be looking at PyTorch Live (PTL), a slimmed-down framework for adding PyTorch models to smartphones. Unlike TFL (which can run on a Raspberry Pi and in a browser), PTL is focused entirely on Android and iOS and offers tight integration. It uses a React Native-based environment, which means it is heavily geared towards the Node.js world.
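On the Python side, the rough workflow (a sketch, assuming torchvision for a stock model) is to export a TorchScript model in the lite-interpreter .ptl format that PyTorch Mobile, and by extension PTL, loads:

```python
# Sketch: exporting a stock model in the .ptl lite-interpreter format
# used on mobile.
import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

model = torchvision.models.mobilenet_v2(pretrained=True).eval()
scripted = torch.jit.script(model)         # compile to TorchScript
optimized = optimize_for_mobile(scripted)  # mobile-oriented graph rewrites
optimized._save_for_lite_interpreter("mobilenet_v2.ptl")
```

The JavaScript side of the app then loads that .ptl file and runs inference on-device.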

Continue reading “Edging Ahead When Learning On The Edge”

Putting Perseverance Rover’s View Into Satellite View Context

It’s always fun to look over aerial and satellite maps of places we know, seeing a perspective different from our usual ground-level view. We lose that context when it’s a place we don’t know by heart. Such as, say, Mars. So [Matthew Earl] sought to give Perseverance rover’s landing video some context by projecting it onto orbital imagery from ESA’s Mars Express. The resulting video (embedded below the break) is a fun watch alongside the technical writeup Reprojecting the Perseverance landing footage onto satellite imagery.

Some telemetry of rover position and orientation was transmitted live during the landing process, with the rest recorded and downloaded later. Surprisingly, none of that information was used for this project, which was based entirely on video pixels. This makes the results even more impressive and the techniques more widely applicable to other projects. The foundational piece is SIFT (Scale Invariant Feature Transform), one of many tools in the OpenCV toolbox. SIFT found correlations between Perseverance’s video frames and the Mars Express orbital image, feeding into a processing pipeline written in Python, with results rendered in Blender.
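The SIFT matching step, at least, is straightforward to try at home. A minimal OpenCV sketch (the file names here are placeholders) might look like this:

```python
# Sketch: SIFT feature matching between a lander video frame and an
# orbital image, then fitting a homography between the two.
import cv2
import numpy as np

frame = cv2.imread("perseverance_frame.png", cv2.IMREAD_GRAYSCALE)
orbital = cv2.imread("mars_express_orbital.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(frame, None)
kp2, des2 = sift.detectAndCompute(orbital, None)

# Keep only distinctive matches (Lowe's ratio test).
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Fit a homography mapping frame pixels onto the orbital image.
if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```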

While many elements of this project sound enticing for applications in robot vision, there are a few challenges touched upon in the “Final Touches” section of the writeup. The falling heatshield interfered with automated tracking, implying this process will need help to properly understand dynamically changing environments. Furthermore, it does not seem to run fast enough for a robot’s real-time needs. But at first glance, these problems are not fundamental; they merely await some motivated people to tackle them in the future.

This process bears some superficial similarities to projection mapping, a category of projects we’ve featured on these pages. Except here everything is reversed (a camera instead of a video projector, and so on), making the math an entirely different can of worms. But if projection mapping sounds more to your interest, here is a starting point.

[via Dr. Tanya Harrison @TanyaOfMars]

Continue reading “Putting Perseverance Rover’s View Into Satellite View Context”

Open Source GUI Tool For OpenCV And Deep Learning

AI and deep learning for computer vision projects have come to the masses. This can be attributed partly to community projects that help ease the pain for newbies. [Abhishek] contributes one such project called Monk AI, which comes with a GUI for transfer learning.

Monk AI is essentially a wrapper for computer vision and deep learning experiments. It lets users fine-tune deep neural networks using transfer learning, and it is written in Python. Out of the box, it supports Keras and PyTorch, and with just a few lines of code you can get started with your very first AI experiment.
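Monk’s exact calls aside (its docs cover those), the pattern it wraps is standard transfer learning. A plain-PyTorch sketch of that pattern, assuming torchvision, looks something like this:

```python
# Not Monk's API -- a plain-PyTorch sketch of the transfer-learning
# pattern it wraps: freeze a pretrained backbone, retrain a new head.
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False  # freeze the pretrained backbone

NUM_CLASSES = 5  # placeholder for your own dataset's class count
model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES)

# Only the new final layer's weights get updated during training.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()
```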

[Abhishek] also has an object detection wrapper (GitHub) with some useful examples, as well as a Monk GUI (GitHub) tool that looks similar to the tools available in commercial packages for running training and inference experiments.

The documentation is a work in progress, though the project seems like an excellent concept to build on. We need more tools like these to help more people get started with deep learning. Hardware such as the NVIDIA Jetson Nano and Google Coral is affordable and facilitates learning and experimentation.