GigaDevice Releasing RISC-V MCUs And Development Boards

Probably not too many people have heard of Chinese manufacturer GigaDevice, who have so far mostly been known as a NOR Flash memory manufacturer. Their GD32 range of MCUs is, however, STM32-compatible, making them interesting (and cheaper) alternatives to sourcing directly from ST. Now GigaDevice has announced during a presentation that they are releasing a range of RISC-V-based MCUs: the GD32V series.

As GigaDevice has not yet updated their English-language website, the information we do have is based on CNX-Software’s translations from Chinese. The specs for the GD32VF103 series of MCUs are listed by them as follows:

  • Core – GD32VF103 RISC-V “Bumblebee Core” @ 108 MHz
  • Memory – 8KB to 32KB SRAM
  • Storage  – 16KB to 128KB flash
  • Peripherals – USB OTG and CAN 2.0B
  • I/O – 3.3V, 5V tolerant
  • Supply Voltage – 2.6 to 3.6V
  • Package – QFN36, LQFP48, LQFP64, and LQFP100

Whether they are pin-compatible with the GD32 MCUs is still to be confirmed. If that turns out to be the case, then this might be an interesting drop-in solution for some products. From the specs it seems clear that they are targeting the lower-end ARM-based MCUs such as ST’s Cortex-M3-based STM32F103, which are quite common in a large range of embedded systems.

A performance comparison between the two types of MCU would be interesting to see, as lower power usage and higher efficiency compared to the ARM cores are being claimed. Both MCUs and development boards are already available for sale at Tmall, with the basic GD32VF103C-START board going for about $11 and the GD32VF103TBU6 MCU (QFN36, 64 kB Flash) for roughly $1.27.

Documentation and SDKs in English seem to be a bit scarce at this point, but hopefully before long we too will be able to use these MCUs without having to take up Chinese language classes.

Thanks to [Flaviu] for the tip!

The Satellite Phone You Already Own: From Orbit, UbiquitiLink Will Look Like A Cell Tower

For anyone who’s ever broken down along a remote stretch of highway and desperately searched for a cell signal, knowing that a constellation of communications satellites is zipping by overhead is cold comfort indeed. One needs specialized gear to tap into the satphone network, few of us can justify the expense of satellite phone service, and fewer still care to carry around a brick with a chunky antenna on it as our main phone.

But what if a regular phone could somehow leverage those satellites to make a call or send a text from a dead zone? As it turns out, it just might be possible to do exactly that, and a Virginia-based startup called UbiquitiLink is in the process of filling in all the gaps in cell phone coverage by orbiting a constellation of satellites that will act as cell towers of last resort. And the best part is that it’ll work with a regular cell phone — no brick needed.

Continue reading “The Satellite Phone You Already Own: From Orbit, UbiquitiLink Will Look Like A Cell Tower”

ORNL's Summit supercomputer, fastest until 2020 (Credit: ORNL)

Joining The RISC-V Ranks: IBM’s Power ISA To Become Free

IBM’s Power processor architecture is probably best known today for the humongous chips that power everything from massive mainframes and supercomputers to slightly less massive mainframes and servers. Originally developed in the 1980s, Power CPUs have been a reliable presence in the market for decades, forming the backbone of systems like IBM’s RS/6000 and AS/400, and the later Power Systems line.

Now IBM is making the Power ISA free to use, after first opening up access to the ISA with the OpenPOWER Foundation. With the fully free and open RISC-V ISA making headway into the computing market, and ARM feeling pressured to loosen up its licensing, it seems IBM figured it’s best to join the party early. Its existing business customers are unlikely to whip up their own Power CPUs in a back office and forgo the IBM support that comes as part of the business deal, so there is little threat there; instead, the move seems mostly aimed at increasing Power’s, and with it IBM’s, foothold in the overall market.

The Power ISA started out as the POWER ISA, before it evolved into the PowerPC ISA, co-developed with Motorola and Apple and made famous by Apple’s use of the G3 through G5 series of PowerPC CPUs. The PowerPC ISA eventually got turned into today’s Power ISA, which as a result shares many commonalities with both POWER and PowerPC, being the de facto successor to both.

In addition, IBM is also opening up its OpenCAPI accelerator interface and the OpenCAPI Memory Interface (OMI) variant that will be part of the upcoming Power9′ CPU. These technologies are aimed at cutting down on the number of interconnects required to link CPUs, accelerators, and memory together, a space currently occupied by NVLink, Infinity Fabric, and countless more; on the memory side, OMI could offer some interesting possibilities.

Would you use Power in your projects? Let us know in the comments.

Build A Fungus Foraging App With Machine Learning

As the 2019 mushroom foraging season approaches, it’s timely to combine my thirst for knowledge about low-level machine learning (ML) with a popular pastime that we enjoy here where I live. Just for the record, I’m not an expert on ML, and I’m simply inviting readers to follow me back down some rabbit holes that I recently explored.

But mushrooms, I do know a little bit about, so firstly, a bit about health and safety:

  • The app created should be used with extreme caution and results always confirmed by a fungus expert.
  • Always test the fungus by initially only eating a very small piece and waiting for several hours to check there is no ill effect.
  • Always wear gloves – it’s surprisingly easy to absorb toxins through fingers.

Since this is very much an introduction to ML, there won’t be too much terminology and the emphasis will be on having fun rather than going on a deep dive. The system that I stumbled upon is called XGBoost (XGB). One of the XGB demos is for binary classification, and the data was drawn from The Audubon Society Field Guide to North American Mushrooms. Binary means that the app spits out a probability of ‘yes’ or ‘no’ and in this case it tends to give about 95% probability that a common edible mushroom (Agaricus campestris) is actually edible. 

The app asks the user 22 questions about their specimen and collates the answers into a series of letters separated by commas. At the end of the questionnaire, this data line is written to a file called ‘fungusFile.data’ for further processing.
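
For illustration, here is a minimal sketch of that questionnaire step in Python. The author’s actual deployment app is written in bash, and the two questions and letter codes shown are just examples borrowed from the Audubon/UCI mushroom data set, so treat the details as assumptions:

```python
# Hypothetical sketch of the questionnaire: collect one letter code per
# characteristic and append them as a comma-separated line to fungusFile.data.
questions = {
    "cap-shape":   "b=bell, c=conical, x=convex, f=flat, k=knobbed, s=sunken",
    "cap-surface": "f=fibrous, g=grooves, y=scaly, s=smooth",
    # ...the real questionnaire covers 22 characteristics
}

answers = []
for name, codes in questions.items():
    answers.append(input(f"{name} ({codes}): ").strip())

# One comma-separated data line per specimen, ready for further processing
with open("fungusFile.data", "a") as f:
    f.write(",".join(answers) + "\n")
```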

XGB cannot accept letters as data, so they have to be mapped into ‘classic LibSVM format’, which looks something like ‘3:218’ for each letter. Next, this XGB-friendly data is split into two parts: one for training a model and one for subsequently testing that model.
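
As a rough sketch of that conversion (the XGB demo ships its own preprocessing scripts, so the index numbering and file names below are assumptions rather than the demo’s exact behaviour), each (question, letter) pair can be one-hot encoded into a feature index, written out in LibSVM format, and the rows shuffled and split:

```python
# Rough sketch: map comma-separated letter rows (first column: e = edible,
# p = poisonous) into one-hot LibSVM features, then split into train/test sets.
import random

feature_index = {}                      # (column, letter) -> feature id

def libsvm_line(row):
    label = "1" if row[0] == "p" else "0"
    feats = []
    for col, letter in enumerate(row[1:]):
        key = (col, letter)
        if key not in feature_index:
            feature_index[key] = len(feature_index) + 1
        feats.append(f"{feature_index[key]}:1")   # presence of this letter
    return label + " " + " ".join(feats)

with open("agaricus-lepiota.data") as f:          # the raw Audubon/UCI data
    lines = [libsvm_line(l.strip().split(",")) for l in f if l.strip()]

random.shuffle(lines)
cut = int(0.8 * len(lines))                       # 80/20 train/test split
with open("agaricus.txt.train", "w") as f:
    f.write("\n".join(lines[:cut]) + "\n")
with open("agaricus.txt.test", "w") as f:
    f.write("\n".join(lines[cut:]) + "\n")
```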

XGB is relatively easy to install compared to higher-level deep learning systems, and it runs well both on Linux Ubuntu 16.04 and on a Raspberry Pi. I wrote the deployment app in bash, so there should not be any additional software to install. Before getting any deeper into the ML side of things, I highly advise installing XGB, running the app, and having a bit of a play with it.

Training and testing are carried out by running bash runexp.sh in the terminal, and it takes less than one second to process the 8,124 lines of fungal data. At the end, bash spits out a set of statistics to represent the accuracy of the training and also attempts to ‘draw’ the decision tree that XGB has devised. If we have a quick look in the directory ~/xgboost/demo/binary_classification, there should now be a 0002.model file in it, ready for deployment with the questionnaire.
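
For anyone who prefers to stay in Python rather than run the demo’s shell script, a rough equivalent of that train-and-test step might look like the following. The parameters are assumptions loosely based on the demo, not a copy of its configuration:

```python
# Assumed Python equivalent of the demo's training/testing run: two boosting
# rounds on the LibSVM files, then the model is saved as 0002.model.
import xgboost as xgb

# '?format=libsvm' is needed on recent xgboost versions; older ones autodetect
dtrain = xgb.DMatrix("agaricus.txt.train?format=libsvm")
dtest = xgb.DMatrix("agaricus.txt.test?format=libsvm")

params = {
    "objective": "binary:logistic",   # output a probability of 'poisonous'
    "max_depth": 3,
    "eta": 1.0,
}

bst = xgb.train(params, dtrain, num_boost_round=2,
                evals=[(dtrain, "train"), (dtest, "test")])

# Simple error rate on the held-out data
preds = bst.predict(dtest)
labels = dtest.get_label()
errors = sum((p > 0.5) != (l > 0.5) for p, l in zip(preds, labels))
print(f"test error: {errors / len(preds):.4f}")

bst.save_model("0002.model")   # ready for deployment with the questionnaire
```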

I was interested to explore the decision tree a bit further and look at the way XGB weighted the different characteristics of the fungi. I eventually got some rough visualisations working in a Python-based Jupyter Notebook script; a sketch of the sort of cells involved follows the tuning notes below.

Obviously this app is not going to win any Kaggle competitions, since the various parameters within the software need to be carefully tuned with the help of all the different software tools available. A good place to start is to tweak the maximum depth of the tree and the number of trees used. Depth = 4 and number = 4 seem to work well for this data. Other parameters include the feature importance type, for example: gain, weight, cover, total_gain or total_cover. These can be explored using tools such as SHAP.
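
Here is a sketch of the kind of notebook cells involved, both for the rough tree visualisations mentioned above and for playing with the depth, tree count, and importance type. The exact script isn’t shown in the article, so the details are assumptions built on XGB’s standard plotting helpers:

```python
# Assumed Jupyter cells: retrain with depth = 4 and four trees, then look at
# feature importance and draw one of the decision trees XGB has devised.
import xgboost as xgb
import matplotlib.pyplot as plt

dtrain = xgb.DMatrix("agaricus.txt.train?format=libsvm")

params = {"objective": "binary:logistic", "max_depth": 4, "eta": 1.0}
bst = xgb.train(params, dtrain, num_boost_round=4)

# Try 'weight', 'cover', 'total_gain' or 'total_cover' in place of 'gain'
# to see how the feature ranking shifts
xgb.plot_importance(bst, importance_type="gain", max_num_features=10)

# Drawing the tree requires the graphviz package to be installed
xgb.plot_tree(bst, num_trees=0)
plt.show()
```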

Finally, this app could easily be adapted to other questionnaire-based systems, such as diagnosing a particular disease or deciding whether to buy a particular stock or share in the marketplace.

An even more basic introduction to ML goes into the baseline theory in a bit more detail – well worth a quick look.

Looking Around Corners With F-K Migration

The concept behind non-line-of-sight (NLOS) imaging seems fairly easy to grasp: a laser bounces photons off a surface, and those photons illuminate objects that are within sight of that surface, but not of the imaging equipment. The photons that are then reflected or refracted by the hidden object make their way back to the laser’s location, where they are captured and processed to form an image. Essentially this allows one to use any surface as a mirror to look around corners.

The main disadvantage of this method has been its low resolution and high susceptibility to noise. This led a team at Stanford University to experiment with ways to improve it. As detailed in an interview by Tech Briefs with graduate student [David Lindell], a major improvement came from an ultra-fast shutter solution that blocks out most of the photons returning from the wall that is being illuminated, preventing the photons reflected by the hidden object from being drowned out by this noise.

The key to getting the desired imaging quality, including with glossy and otherwise hard-to-image objects, was the f-k migration algorithm. As explained in the video embedded after the break, the team took a look at the methods used in the field of seismology, where vibrations are used to image what is inside the Earth’s crust, as well as in synthetic aperture radar and similar fields. The resulting algorithm uses a sequence of Fourier transformation, spectrum resampling and interpolation, and an inverse Fourier transform to process the received data into a usable image.
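
To give a flavour of that pipeline, here is a heavily simplified numpy sketch of a Stolt-style f-k migration on a confocal measurement volume. It only illustrates the FFT, frequency-remapping, and inverse-FFT steps; the array layout, scaling factors, and the Jacobian weighting used in the actual paper are assumptions or are glossed over entirely, so it is not the Stanford team’s implementation:

```python
# Simplified sketch of f-k (Stolt) migration: FFT the measurement volume,
# resample its temporal-frequency axis onto a depth-frequency axis, then
# inverse-FFT back to get a reconstructed volume. Illustrative only.
import numpy as np
from scipy.interpolate import interp1d

def fk_migrate(meas, dx, dt, c=3e8):
    """meas[t, y, x]: time-resolved measurements on the relay wall (assumed layout)."""
    nt, ny, nx = meas.shape

    spectrum = np.fft.fftshift(np.fft.fftn(meas))

    # Centred frequency axes; temporal frequency is converted to wavenumber via c
    kx = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
    ky = np.fft.fftshift(np.fft.fftfreq(ny, d=dx))
    kz = np.fft.fftshift(np.fft.fftfreq(nt, d=dt)) / c   # output (depth) axis
    f = kz.copy()                                        # input (time) axis

    migrated = np.zeros_like(spectrum)
    for iy in range(ny):
        for ix in range(nx):
            # Stolt remapping: the sample at output frequency kz comes from the
            # temporal frequency sqrt(kx^2 + ky^2 + kz^2), sign kept for negative kz
            f_query = np.sign(kz) * np.sqrt(kx[ix]**2 + ky[iy]**2 + kz**2)
            line = interp1d(f, spectrum[:, iy, ix],
                            bounds_error=False, fill_value=0.0)
            migrated[:, iy, ix] = line(f_query)

    # Back to the spatial domain; the magnitude gives the reconstructed volume
    return np.abs(np.fft.ifftn(np.fft.ifftshift(migrated)))
```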

This is not a new topic; we covered a simple implementation of this all the way back in 2011, as well as a project by UK researchers in 2015. This new research shows obvious improvements, making this kind of technology ever more viable for practical applications.

Continue reading “Looking Around Corners With F-K Migration”

Largest Chip Ever Holds 1.2 Trillion Transistors

We get it, press releases are full of hyperbole. Cerebras recently announced they’ve built the largest chip ever. The chip has 400,000 cores and contains 1.2 trillion transistors on a die over 46,000 square mm in area. That’s roughly the same as a square about 8.5 inches on each side. But honestly, the WSE — Wafer Scale Engine — is just most of a wafer not cut up. Typically a wafer will have lots of copies of a device on it and it gets split into pieces.

According to the company, the WSE is 56 times larger than the largest GPU on the market. The chip boasts 18 gigabytes of storage spread around the massive die. The problem isn’t making such a beast, although a normal wafer is allowed to have a certain number of bad spots. The real problems come from things such as interconnections and thermal management.

Continue reading “Largest Chip Ever Holds 1.2 Trillion Transistors”

Russian Robot To Visit Space Station

The Russians were the first to send a dog into space, the first to send a man, and the first to send a woman. However, NASA sent the first humanoid robot to the International Space Station. The Russians, though, want to send FEDOR and proclaim that while Robonaut flew as cargo, a FEDOR model — Skybot F-850 — will fly the upcoming MS-14 supply mission as crew.

Defining the term robot can be tricky, with some thinking a proper robot needs to be autonomous and others seeing robotics under human control as enough. The Russian FEDOR robot is, we think, primarily a telepresence device, but it remains an impressive technical achievement. The press release claims that it can balance itself and perform other autonomous actions, but it appears that doing anything tricky probably requires an operator. You can see the robot in ground tests at about the one-minute mark in the video below.

Continue reading “Russian Robot To Visit Space Station”