Satellite Tracking With Friends

If you’re in the mood to track satellites, it’s a relatively simple task to look up one of a multitude of websites that can give you a list of satellites visible from your location. However, if you’re interested in using satellites to communicate with far-flung friends, you might be interested in this multi-point satellite tracker.

[Stephen Downward VA1QLE] developed the tracker to make it easier to figure out which satellites would be simultaneously visible to people at different locations on the Earth’s surface. This is useful for amateur radio, as signals can be passed through satellites with ham gear onboard (such as NO-44), or users can even chat over defunct military satellites.

[Stephen] claims the algorithm is inefficient, but calculations finish in a matter of seconds, so we’re not complaining. While it was originally designed for just two stations, it works with an arbitrary number of points. [Stephen] recommends verifying the calculated tracks with another tool to ensure accuracy. The tool is accessible here, and the code is up on GitHub for your perusal.
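
The core test is easy to sketch: propagate each satellite from its TLE and check that it clears the horizon at every station at the same moment. Here’s a minimal, hypothetical version using the Skyfield Python library — to be clear, this is not [Stephen]’s code, and the Celestrak group URL, station coordinates, and 10-degree elevation cutoff are our own assumptions for illustration:

```python
# A minimal sketch of mutual satellite visibility using Skyfield
# (pip install skyfield). Not [Stephen]'s algorithm or code.
from datetime import datetime, timedelta, timezone
from skyfield.api import load, wgs84

ts = load.timescale()
sats = load.tle_file(
    "https://celestrak.org/NORAD/elements/gp.php?GROUP=amateur&FORMAT=tle")
sat = next(s for s in sats if "NO-44" in s.name)

# Any number of ground stations works; the test just runs over all of them.
stations = [wgs84.latlon(44.65, -63.57),   # e.g. Halifax, Nova Scotia
            wgs84.latlon(51.51, -0.13)]    # e.g. London, UK

MIN_ELEV = 10.0  # degrees above the horizon to count as workable

def visible_everywhere(t):
    """True when the satellite is above MIN_ELEV at every station at time t."""
    return all((sat - stn).at(t).altaz()[0].degrees > MIN_ELEV
               for stn in stations)

# Brute-force scan of the next 24 hours at one-minute resolution --
# inefficient, as charged, but done in seconds on modern hardware.
start = datetime.now(timezone.utc)
for minute in range(24 * 60):
    t = ts.from_datetime(start + timedelta(minutes=minute))
    if visible_everywhere(t):
        print(t.utc_strftime("%Y-%m-%d %H:%M"), "mutual window")
```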

Perhaps now you need a cost-effective satellite-tracking antenna? [Paul] has you covered.

Wolfram Alpha Shows its Work

The bane of math students everywhere is the teacher asking you to show your work. If you’ve grown up with a computer as a normal part of schoolwork, that might annoy you, since a lot of tools just give you an answer. We aren’t suggesting you cheat at homework, but we did notice that Wolfram Alpha now shows more of its work when it solves many common math problems.

Granted, the site has always shown work on some problems. However, a recent update shows more intermediate steps and also covers more kinds of problems in a step-by-step format. There are examples, but be aware that for general use you do need to upgrade to Pro (about $6 a month, or less if you are a student or teacher).

Continue reading “Wolfram Alpha Shows its Work”

A Great Guide To Software PLLs

There are some things that you think you know quite well because you learned them in your youth and you understand their principles of operation. Then along comes a link in your morning feed that reminds you of the limits of your knowledge, and you realize that there is a whole new level of understanding to be reached.

Take Phase Locked Loops (PLLs) for example. You learn how they work, you use them for frequency synthesis, and you know they can do other things like recover noisy clock lines and do FM demodulation. But then you read [Paul Lutus’] Understanding Phase-Locked Loops page, and a whole new vista opens.

He’s discussing PLLs in the context of software, as part of a weather fax decoder project, and this allows a perspective that was unavailable to those of us who learned about them through the medium of hardware such as the venerable 4046 CMOS chip. We can easily look at different PLLs with varying parameters, for example their use with a narrowband loop filter to retrieve signals buried in the noise, all through some straightforward code tweaks rather than extensive circuitry. It’s a page that’s a few years old now, but resources like this one do not age.
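
To get a feel for how few moving parts a software PLL has, here’s a minimal sketch in the spirit of the page — not [Paul Lutus]’s code, and the signal, gains, and frequencies are invented values, not tuned ones from his project:

```python
# A toy software PLL locking onto a noisy 440 Hz tone. The three
# classic blocks -- phase detector, loop filter, NCO -- are each a
# couple of lines when done in software.
import numpy as np

fs = 8000.0        # sample rate, Hz
f_in = 440.0       # incoming tone the loop should lock to, Hz
f_center = 435.0   # NCO free-running (center) frequency, Hz

t = np.arange(int(fs * 0.5)) / fs   # half a second of samples
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * f_in * t) + 0.5 * rng.standard_normal(t.size)

alpha, beta = 0.05, 0.0005  # proportional and integral loop gains
phase, integrator = 0.0, 0.0
freq_est = []

for sample in x:
    # Phase detector: multiply input by the NCO's quadrature output; the
    # low-frequency part of the product is proportional to phase error.
    error = sample * np.cos(phase)

    # Loop filter: a PI controller smooths the error into a control value.
    integrator += beta * error
    control = alpha * error + integrator

    # NCO: advance phase at the center frequency plus the correction.
    phase += 2 * np.pi * f_center / fs + control
    freq_est.append(f_center + control * fs / (2 * np.pi))

# The loop should settle near the 440 Hz input despite the added noise.
print(f"locked near {np.mean(freq_est[-1000:]):.1f} Hz")
```

Narrowing the loop bandwidth (smaller gains) is exactly the trick the page uses to dig signals out of noise, at the cost of slower lock.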

If PLLs are entirely new to you, then you need to read last year’s excellent PLL primer by Hackaday’s own [Al Williams].

[via Hacker News]

[PLL diagram: Chetvorno CC0]

Hardware for Deep Neural Networks

In case you didn’t make it to ISCA (the International Symposium on Computer Architecture) this year, you might be interested in a presentation by [Joel Emer], an MIT professor and scientist for NVIDIA. Together with another MIT professor and two PhD students ([Vivienne Sze], [Yu-Hsin Chen], and [Tien-Ju Yang]), [Emer] covers hardware architectures for deep neural networks.

The presentation covers the background on deep neural networks and basic theory. Then it progresses to deep learning specifics. One interesting graph shows how neural networks are getting better at identifying objects in images every year, and as of 2015 they can do a better job than a human over a set of test images. However, the real key is using hardware to accelerate the performance of networks.

Hardware acceleration is important for several reasons. For one, many applications have to process large amounts of data. Training can also require many iterations, each of which takes significant time.
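
To put rough numbers on it, here’s a back-of-the-envelope sketch — our arithmetic, not a figure from the talk — of the multiply-accumulate (MAC) load of a single convolutional layer, with illustrative layer dimensions:

```python
# Counting the multiply-accumulates in one convolutional layer, the
# operation DNN accelerators are built around.
def conv_macs(h, w, c_in, c_out, k):
    """MACs for a stride-1 convolution with a k x k kernel over an
    h x w feature map, c_in input channels, c_out output channels."""
    return h * w * c_in * c_out * k * k

# An illustrative mid-network layer: 56x56 feature map, 128 -> 256
# channels, 3x3 kernels.
macs = conv_macs(56, 56, 128, 256, 3)
print(f"{macs / 1e9:.2f} billion MACs in one layer")  # ~0.92 billion

# A full network repeats this across dozens of layers, and training
# repeats *that* across millions of images and many epochs.
```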

Continue reading “Hardware for Deep Neural Networks”

Language Parsing with ANTLR

There are many projects that call out for a custom language parser. If you need something standard, you can probably lift the code from someplace on the Internet. If you need something custom, you might consider reading [Federico Tomassetti’s] tutorial on using ANTLR to build a complete parser-based system. [Federico] also expanded on this material for his book, but there’s still plenty to pick up from the eight blog posts.

His language, Sandy, is complex enough to be a good example, but not too complex to understand. In addition to the posts, you can find the code on GitHub.
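
If you’ve never used a parser generator, it helps to see what ANTLR saves you from writing. Below is a toy hand-rolled recursive-descent parser for simple arithmetic — nothing to do with Sandy or [Federico]’s code — of the sort ANTLR generates automatically (along with lexing, error recovery, and parse trees) from a grammar file:

```python
# A toy hand-written recursive-descent parser/evaluator for arithmetic.
# Each function mirrors one grammar rule, noted in its docstring.
import re

TOKENS = re.compile(r"\s*(\d+|[-+*/()])")

def tokenize(src):
    return TOKENS.findall(src) + ["<eof>"]

def parse_expr(toks, i=0):
    """expr : term (('+' | '-') term)* ;"""
    value, i = parse_term(toks, i)
    while toks[i] in "+-":
        op = toks[i]
        rhs, i = parse_term(toks, i + 1)
        value = value + rhs if op == "+" else value - rhs
    return value, i

def parse_term(toks, i):
    """term : factor (('*' | '/') factor)* ;"""
    value, i = parse_factor(toks, i)
    while toks[i] in "*/":
        op = toks[i]
        rhs, i = parse_factor(toks, i + 1)
        value = value * rhs if op == "*" else value / rhs
    return value, i

def parse_factor(toks, i):
    """factor : NUMBER | '(' expr ')' ;"""
    if toks[i] == "(":
        value, i = parse_expr(toks, i + 1)
        return value, i + 1          # skip the closing ')'
    return int(toks[i]), i + 1

print(parse_expr(tokenize("2 + 3 * (4 - 1)"))[0])  # 11
```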

Continue reading “Language Parsing with ANTLR”

Forget Troy. Try HelenOS

Even though it seems like there are a lot of operating system choices, the number narrows if you start counting kernels instead of distributions. Sure, Windows is clearly an operating system family, and on the Unix-like side there are Linux and BSD. But many other operating systems (Ubuntu, Fedora, Raspbian) all derive from some stock operating system. There are some outliers, though, and one of those is HelenOS. The open source OS runs on many platforms, including PCs, Raspberry Pis, BeagleBones, and many others.

Although the OS isn’t new, it is gaining more features and is now at version 0.7. You can see a video about some of the new features, below.

According to the project’s web site:

HelenOS is a portable microkernel-based multiserver operating system designed and implemented from scratch. It decomposes key operating system functionality such as file systems, networking, device drivers and graphical user interface into a collection of fine-grained user space components that interact with each other via message passing. A failure or crash of one component does not directly harm others. HelenOS is therefore flexible, modular, extensible, fault tolerant and easy to understand.

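That fault isolation is the interesting part. As a loose analogy only — this is ordinary Python, not HelenOS code or its IPC API — here’s a toy “server” in its own process that can crash without taking the client down:

```python
# Toy multiserver analogy: a "file system server" process answers
# requests over queues; when it crashes, only that process dies.
from multiprocessing import Process, Queue

def fs_server(requests, replies):
    """Pretend file-system server: answers requests until it crashes."""
    while True:
        path = requests.get()
        if path == "/crash":
            raise RuntimeError("fs server died")  # kills only this process
        replies.put(f"contents of {path}")

if __name__ == "__main__":
    requests, replies = Queue(), Queue()
    server = Process(target=fs_server, args=(requests, replies))
    server.start()

    requests.put("/etc/motd")
    print(replies.get())           # the client gets its reply

    requests.put("/crash")         # the server dies...
    server.join()
    print("client still running")  # ...but the client is unharmed
```
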
Continue reading “Forget Troy. Try HelenOS”

Decoding Enigma Using A Neural Network

[Sam Greydanus] created a neural network that can encode and decode messages just as Enigma did. For those who don’t know, the Enigma machine was most famously used by the Germans during World War II to encrypt and decrypt messages. Give the neural network some encrypted text, called the ciphertext, along with the three-letter key that was used to encrypt the text, and the network predicts what the original text, or plaintext, was with around 96-97% accuracy.

The type of neural network he used was a Long Short-Term Memory (LSTM) network, a type of Recurrent Neural Network (RNN) that we talked about in our article covering many of the different types of neural networks developed over the years. RNNs are Turing-complete, meaning that in principle they can compute anything a Turing machine can. [Sam] noticed the irony in this, namely that Alan Turing both came up with the concept of Turing-completeness and played a big part in breaking the Enigma used in World War II.
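
For a sense of the moving parts, here’s a minimal sketch of that kind of setup in Keras. To be clear, this is not [Sam]’s code or architecture; the layer sizes and sequence lengths are invented, and a real attempt needs an Enigma simulator churning out millions of (key + ciphertext, plaintext) training pairs:

```python
# Sketch of a character-level LSTM that reads a 3-letter key followed
# by ciphertext, emitting a letter distribution for every position.
import numpy as np
from tensorflow import keras

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
VOCAB = len(ALPHABET)
SEQ_LEN = 3 + 14   # a three-letter key prepended to 14 ciphertext letters

def one_hot(text):
    """Encode an uppercase string as a (len(text), VOCAB) one-hot array."""
    out = np.zeros((len(text), VOCAB), dtype=np.float32)
    for i, ch in enumerate(text):
        out[i, ALPHABET.index(ch)] = 1.0
    return out

model = keras.Sequential([
    keras.Input(shape=(SEQ_LEN, VOCAB)),
    keras.layers.LSTM(128, return_sequences=True),    # reads key, then ciphertext
    keras.layers.Dense(VOCAB, activation="softmax"),  # letter guess per step
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Shape check with one dummy, untrained example: key + ciphertext in,
# one 26-way letter distribution out for every input position.
dummy = one_hot("KEY" + "HELLOWORLDSTOP")[np.newaxis]
print(model.predict(dummy, verbose=0).shape)   # (1, 17, 26)
```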

How did [Sam] do it?

Continue reading “Decoding Enigma Using A Neural Network”