Graphene Generates A Little Power

We never know exactly what to make of university press releases, as we see plenty of them with breathless claims of new batteries or supermaterials, but then we don't see much else. Sometimes the claims in the press release don't hold up in the paper, while other times they seem impractical for real-world use. So we aren't quite sure what to think of a press release from the University of Arkansas claiming researchers can draw current from a sheet of freestanding graphene purely from its temperature fluctuations.

The press release seems to claim that this is a breakthrough leading to “clean, limitless power.” But if you look at the actual paper, ordinary room temperature causes tiny displacements in the graphene sheet, much like Brownian motion. A scanning tunneling microscope with two diodes can detect current flowing even once the system reaches thermal equilibrium. Keep in mind, though, that this is in the presence of a bias voltage, and we are talking about nanometer-scale displacements and 20 pA of current. You can see a simple video from the university showing a block diagram of the setup.
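For a sense of scale, take the reported 20 pA at an assumed bias of 0.1 V (the bias is our own round number for illustration, not a figure from the paper):

```latex
P = IV \approx 20\,\mathrm{pA} \times 0.1\,\mathrm{V} = 2\,\mathrm{pW}
```

Two picowatts is roughly twelve orders of magnitude short of the couple of watts an LED bulb wants, so "limitless" is doing a lot of heavy lifting in that press release.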

Continue reading “Graphene Generates A Little Power”

NVIDIA Announces $59 Jetson Nano 2GB, A Single Board Computer With Makers In Mind

NVIDIA kicked off their line of GPU-accelerated single board computers back in 2014 with the Jetson TK1, a $200 development system for those looking to get involved with the burgeoning world of so-called “edge computing”. It was designed to put high performance computing in a package small and energy efficient enough that it could be integrated directly into products, rather than connecting to a data center halfway across the world.

The TK1 was an impressive piece of hardware, but not something the hacker and maker community was necessarily interested in. For one thing, it was fairly expensive. But perhaps more importantly, it was clearly geared more towards industry types than consumers. We did see the occasional project using the TK1 and the subsequent TX1 and TX2 boards, but they were few and far between.

Then came the Jetson Nano. Its 128 core Maxwell GPU still packed plenty of power and was fully compatible with NVIDIA’s CUDA architecture, but its smaller size and $99 price tag made it far more attractive for hobbyists. According to the company’s own figures, the number of active Jetson developers has more than tripled since the Nano’s introduction in March of 2019. With the platform accessible to a larger and more diverse group of users, new and innovative applications for machine learning started pouring in.

Cutting the price of the entry level Jetson hardware in half was clearly a step in the right direction, but NVIDIA wanted to bring even more developers into the fray. So why not see if lightning can strike twice? Today they’ve officially announced that the new Jetson Nano 2GB will go on sale later this month for just $59. Let’s take a close look at this new iteration of the Nano to see what’s changed (and what hasn’t) from last year’s model.

Continue reading “NVIDIA Announces $59 Jetson Nano 2GB, A Single Board Computer With Makers In Mind”

Indian RISC-V Chip Is Team’s Third Successful Chip

There was a time when creating a new IC was a very expensive proposition. While it still isn’t pocket change, custom chips are within reach of sophisticated experimenters and groups. As evidence, look at the Moushik CPU from the SHAKTI group. This is the group’s third successful tapeout and is an open source RISC-V system on chip.

The chip uses a 180 nm process and has 103 I/O pins. The CPU runs around 100 MHz and the system includes an SDRAM controller, analog to digital conversion, and the usual peripherals. The roughly 25 square mm die houses almost 650 thousand gates.

This is the same group that built a home-grown chip based on RISC-V in 2018 and is associated with the Indian Institute of Technology Madras. We aren’t clear if everything you’d need to duplicate the design is in the git repository, but since the project is open source, we presume it is.

If you think about it, radios went from highly-specialized equipment to a near-disposable consumer item. So did calculators and computers. Developing with FPGAs gets cheaper and easier every year. At this rate, it’s not unreasonable to think it won’t be long before creating a custom chip is as simple as ordering a PCB — something else that used to be a big hairy deal.

Of course, we see FPGA-based RISC-V often enough. While we admire [Sam Zeloof’s] work, we don’t think he’s packing 650k gates into that size. Not yet, anyway.

Continue reading “Indian RISC-V Chip Is Team’s Third Successful Chip”

This Week In Security: PunkBuster, NAT, NAS And MP3s

Ah, the ever-present PDF, and our love-hate relationship with the format. We’ve lost count of how many vulnerabilities have been fixed in PDF software over the years, but it’s been a bunch. This week, we’re reminded that Adobe isn’t the only player in PDF-land, as Foxit released a round of updates fixing a couple of serious problems. Among the vulnerabilities, a handful could lead to RCE, so if you use Foxit or support those who do, be sure to get updated.

PunkBuster

Remember PunkBuster? It’s one of the original anti-cheat solutions, from way back in 2000. The now-classic Return to Castle Wolfenstein was the first game to use PunkBuster to prevent cheating. It’s not the latest or greatest, but PunkBuster is still running on a bunch of game servers even today. [Daniel Prizmant] and [Mauricio Sandt] decided to do a deep-dive project on PunkBuster, and happened to find an arbitrary file-write vulnerability that could easily compromise a PB-enabled server.

One of the functions of PunkBuster is remote screenshot capture. If a server admin thinks a player is behaving strangely, a screenshot request is sent. I assume this targets so-called wallhack cheats — making textures transparent, so the player can see through walls. The problem is that the server logic that handles the incoming image has a loophole. If the filename ends in .png as expected, some traversal attack checks are done, and the png file is saved to the server. However, if the incoming file isn’t a png, no traversal detection is done, and the file is naively written to disk. This weakness, combined with the stateless nature of screenshot requests, means that any connected client can write any file to any location on the server at any time. To their credit, Even Balance, the creators of PunkBuster, quickly acknowledged the issue and have released an update to fix it.
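To make the loophole concrete, here's a minimal Python sketch of the pattern described. PunkBuster's actual code isn't public, so the path, names, and structure here are hypothetical; only the shape of the bug comes from the researchers' description:

```python
import os

SCREENSHOT_DIR = "/opt/pb/screenshots"  # hypothetical storage location

def save_screenshot(filename: str, data: bytes) -> None:
    """Flawed handler: traversal is only checked on the .png branch."""
    if filename.endswith(".png"):
        # Expected case: resolve the path and confirm it stays
        # inside the screenshot directory.
        target = os.path.realpath(os.path.join(SCREENSHOT_DIR, filename))
        if not target.startswith(SCREENSHOT_DIR + os.sep):
            raise ValueError("path traversal rejected")
    else:
        # The loophole: any other name skips the check, so something
        # like "../../etc/cron.d/backdoor" lands outside the directory.
        target = os.path.join(SCREENSHOT_DIR, filename)
    with open(target, "wb") as f:
        f.write(data)
```

The fix is the boring, correct one: canonicalize and validate every incoming path before writing, no matter what the extension claims to be.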

Continue reading “This Week In Security: PunkBuster, NAT, NAS And MP3s”

Bunnie’s Betrusted Makes First Appearance As Mobile, FPGA-Based SoC Development Kit

Recently, [Bunnie Huang] announced his Precursor project: a spiffy-looking case housing a PCB with two FPGAs, a display, battery and integrated keyboard. For those who have seen [bunnie]’s talk at 36C3 last year, the photos may look very familiar, as it is essentially the same hardware that the ‘Betrusted’ project is intended to use. This also explains the name, with this development kit being a ‘precursor’ to the Betrusted product.

In short, it’s a maximally open, verifiable, and trustworthy device. Even the processor is instantiated on an FPGA so you know what’s going on inside the silicon.

He has set up a Crowd Supply page for the Precursor project, which provides more details. The board features a Xilinx Spartan 7 (XC7S50) and a Lattice iCE40UP5K FPGA, 16 MB SRAM, 128 MB Flash, integrated WiFi (Silicon Labs WF200-based), a physical keyboard, and an 1100 mAh Li-Ion battery. The display is a 200 ppi monochrome 336 x 536 px unit, with both the display and keyboard backlit.

At this point [bunnie] is still looking at how much interest there will be for Precursor if a campaign goes live. Regardless of whether one has any interest in the anti-tamper and security features, depending on the price it might be a nice, integrated platform to tinker with.

Twitter: It’s Not The Algorithm’s Fault. It’s Much Worse.

Maybe you heard about the anger surrounding Twitter’s automatic cropping of images. When users submit pictures that are too tall or too wide for the layout, Twitter automatically crops them to roughly a square. Instead of just picking, say, the largest square that’s closest to the center of the image, they use some “algorithm”, likely a neural network, trained to find people’s faces and make sure they’re cropped in.
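For contrast, the naive baseline mentioned above, the largest square closest to the center, needs no machine learning at all. A minimal sketch (the function name is our own):

```python
def center_square_crop(width: int, height: int) -> tuple[int, int, int, int]:
    """Largest centered square, as (left, top, right, bottom)."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)
```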

The problem is that when a too-tall or too-wide image includes two or more people, and they’ve got different colored skin, the crop picks the lighter face. That’s really offensive, and something’s clearly wrong, but what?

A neural network is really just a mathematical equation, with the input variables being, in this case, convolutions over the pixels in the image, and training essentially consists of picking the values for all the coefficients. You do this by applying inputs, seeing how wrong the outputs are, and updating the coefficients to make the answer a little more right. Do this a bazillion times, with a big enough model and dataset, and you can make a machine recognize different breeds of cat.
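That "see how wrong, nudge the coefficients" cycle is gradient descent. Here's a toy sketch in numpy, fitting a single logistic unit rather than a real convolutional network; all data and names are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples, 8 features, labels from a simple rule
X = rng.normal(size=(100, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

w = np.zeros(8)   # the "coefficients" being trained
lr = 0.1          # how big a step to take each update

for _ in range(500):
    pred = 1.0 / (1.0 + np.exp(-X @ w))  # apply inputs, get outputs
    err = pred - y                        # see how wrong we are
    grad = X.T @ err / len(y)             # direction that increases the loss
    w -= lr * grad                        # nudge coefficients the other way
```

Swap the toy features for convolutions over pixels and scale the coefficient count up by a few million, and that's the cat-breed recognizer in spirit.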

What went wrong at Twitter? Right now it’s speculation, but my money says it lies with either the training dataset or the coefficient-update step. The need to include people of all races in the training dataset is so blatantly obvious that we hope that’s not the problem; although getting a representative dataset is hard, it’s known to be hard, and they should be on top of that.

Which means that the issue might be coefficient fitting, and this is where math and culture collide. Imagine that your algorithm just misclassified a cat as an “airplane” or as a “lion”. You need to modify the coefficients so that they move the answer away from this result a bit, and more toward “cat”. Do you move them equally from “airplane” and “lion”, or is “airplane” somehow more wrong? To capture this notion of different wrongnesses, you use a loss function that numerically encapsulates exactly what it is you want the network to learn, and then you take bigger or smaller steps in the right direction depending on how bad the result was.
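One standard way to encode those different wrongnesses is a cost matrix: every (true class, predicted class) pair gets its own penalty, and the loss is the expected cost of wherever the model put its probability mass. A small sketch, with classes and costs invented for illustration:

```python
import numpy as np

CLASSES = ["cat", "lion", "airplane"]

# COST[i][j]: penalty for predicting class j when the truth is class i.
# Mistaking a cat for a lion is mildly wrong; for an airplane, very wrong.
COST = np.array([
    [0.0, 1.0, 5.0],   # true: cat
    [1.0, 0.0, 5.0],   # true: lion
    [5.0, 5.0, 0.0],   # true: airplane
])

def expected_cost_loss(probs: np.ndarray, true_idx: int) -> float:
    """Expected misclassification cost under the model's predicted
    probabilities; zero only when all mass is on the true class."""
    return float(COST[true_idx] @ probs)

# A prediction leaning "airplane" on a true cat is punished hard:
print(expected_cost_loss(np.array([0.1, 0.2, 0.7]), true_idx=0))  # 3.7
```

Make one column of that matrix effectively infinite and the optimizer can never afford to use that label, which is the loss-function view of the workaround described below.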

Let that sink in for a second. You need a mathematical equation that summarizes what you want the network to learn. (But not how you want it to learn it. That’s the revolutionary quality of applied neural networks.)

Now imagine, as happened to Google, your algorithm fits “gorilla” to the image of a black person. That’s wrong, but it’s categorically differently wrong from simply fitting “airplane” to the same person. How do you write the loss function that incorporates some penalty for racially offensive results? Ideally, you would want them to never happen, so you could imagine trying to identify all possible insults and assigning those outcomes an infinitely large loss. Which is essentially what Google did — their “workaround” was to stop classifying “gorilla” entirely because the loss incurred by misclassifying a person as a gorilla was so large.

This is a fundamental problem with neural networks — they’re only as good as the data and the loss function. These days, the data has become less of a problem, but getting the loss right is a multi-level game, as these neural network trainwrecks demonstrate. And it’s not as easy as writing an equation that isn’t “racist”, whatever that would mean. The loss function is being asked to encapsulate human sensitivities, navigate around them and quantify them, and eventually weigh the slight risk of making a particularly offensive misclassification against not recognizing certain animals at all.

I’m not sure this problem is solvable, even with tremendously large datasets. (There are mathematical proofs that with infinitely large datasets the model will classify everything correctly, so you needn’t worry. But how close are we to infinity? Are asymptotic proofs relevant?)

Anyway, this problem is bigger than algorithms, or even their writers, being “racist”. It may be a fundamental problem of machine learning, and we’re definitely going to see further permutations of the Twitter fiasco in the future as machine classification is increasingly asked to respect human dignity.

Autodesk Blinks, Keeps STEP File Export In Free Version Of Fusion 360

Good news, Fusion 360 fans — Autodesk just announced that they won’t be removing support for STEP file exports for personal use licensees of the popular CAD/CAM platform after all.

As we noted last week, Autodesk had announced major changes to the free-to-use license for Fusion 360. Most of the changes, like the elimination of simulations, rolling back of some CAM features, and removal of generative design tools, didn’t amount to major workflow disruptions for many hobbyists who have embraced the platform. But the loss of certain export formats, most notably STEP files, was a bone of contention and the topic of heated discussion in the makerverse. Autodesk summed up the situation succinctly in their announcement, stating that the reversal was due to “unintended consequences for the hobbyist community.”

While this is great news, bear in mind that the other changes to the personal use license are still scheduled to go into effect on October 1, while the planned change to limit the number of active projects will go into effect in January 2021. So while Fusion 360 personal use licensees will keep STEP export, the loss of other export formats like IGES and SAT is still planned.