Evolutionary algorithms are an interesting topic of study. Rather than relying on human ingenuity and investigation to create new designs, an algorithm is given a target to achieve and creates “offspring”, iterating in an evolutionary manner so that each generation gets closer to the target.
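The core loop is easy to sketch in code. Here's a toy example in Python (entirely illustrative; the numeric target, fitness function, and parameters are invented for demonstration): a population of candidate “designs” is scored, the fittest half is kept, and mutated offspring fill the next generation.

```python
import random

TARGET = 25_000.0  # an arbitrary numeric target for this toy example

def fitness(candidate):
    # Higher is better: reward closeness to the target.
    return -abs(candidate - TARGET)

def evolve(generations=200, pop_size=50, mutation_scale=1000.0):
    # Start from a random population of candidate "designs" (here, plain numbers).
    population = [random.uniform(0, 100_000) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half as parents...
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # ...and refill the population with mutated offspring.
        offspring = [p + random.gauss(0, mutation_scale) for p in parents]
        population = parents + offspring
    return max(population, key=fitness)

print(evolve())  # converges toward 25000.0
```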
This method can be applied to the design of electronic circuits, and is sometimes referred to as “hardware evolution”. A team from Duke University attempted exactly this, aiming to produce an oscillator using evolutionary techniques.
The team used a platform called the “evolvable motherboard”, or EM. The EM is controlled by an attached computer and consists of reconfigurable solid-state switches that allow the attached circuit components to be interconnected in every possible combination. Those components can be virtually anything; in this experiment, ten bipolar transistors were used.
The evolutionary algorithm was given a fitness function that rewarded output amplitude and frequency, aiming to create an oscillator operating at 25 kHz. However, the team noticed some interesting emergent behavior. The fitness function tended to reward amplification, leading to many configurations that oscillated poorly but amplified ambient noise. In the end, the algorithm developed circuit configurations that acted as a radio, picking up and amplifying signals from the surrounding environment rather than oscillating on their own. The evolutionary algorithm took advantage not only of the interactions between the circuit elements, but also of effects such as the parasitic capacitance introduced by the switching matrix, and it appeared to use the PCB traces as an antenna.
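To see how a fitness function like this can be gamed, consider a hedged sketch (the sample rate, band limits, and function names below are assumptions for illustration, not the team's actual code): if the score is simply the spectral amplitude near 25 kHz, a circuit that amplifies ambient noise with energy at that frequency scores just as well as one that genuinely oscillates.

```python
import numpy as np

SAMPLE_RATE = 1_000_000  # 1 MHz sampling, hypothetical
TARGET_HZ = 25_000

def fitness(waveform):
    """Score a sampled output waveform by its amplitude near the target.

    Note what this does NOT check: that the signal originates inside the
    circuit. A configuration that picks up and amplifies a 25 kHz component
    of ambient noise scores just as well as a true oscillator.
    """
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / SAMPLE_RATE)
    band = (freqs > TARGET_HZ * 0.9) & (freqs < TARGET_HZ * 1.1)
    return spectrum[band].max()

# A pure 25 kHz tone scores high -- but so would amplified ambient noise
# with enough energy in the band.
t = np.arange(0, 0.01, 1.0 / SAMPLE_RATE)
print(fitness(np.sin(2 * np.pi * TARGET_HZ * t)))
```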
The team concluded that evolutionary algorithms used in circuit design ignore human preconceptions about how circuits work, and will take advantage of sometimes unpredictable and unexpected effects to achieve their targets. This is both a blessing and a curse, bringing unconventional designs to the fore, but also creating circuits that may not work well in a generalized environment. If your “oscillator” relies on a nearby noise source to operate, for example, it may behave unpredictably in the field.
We’ve seen evolutionary algorithms used before, such as being applied to robotic design.
This even happens within evolved FPGA designs: they work in ways that are hard to foresee, because they rely on the underlying analog nature of even digital electronics. A design can be so sensitive to chip characteristics that it only works on a given piece of silicon and not on another from the same batch. Anything exploiting nonlinear effects and true randomness is likely to produce something unique. It may be that future AGI brains have similar quirks, where they are started from a known point but the finished product is an evolved one that cannot be duplicated exactly.
Yes. There’s a good article on FPGA circuit evolution that I read a few years ago, called “On the Origin of Circuits”, which talks about exactly these kinds of experiments.
I could see that effect being used to tie something to “approved” hardware.
It is already.
This seems like classic specification gaming.
By rewarding frequency (and not consistency of frequency), you have specified a reward for transitions, and amplifying noise or external transitions is the most efficient way to achieve that.
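A toy demonstration of that point (the numbers are purely illustrative, not the experiment's actual metric): if “frequency” is measured naively by counting transitions, amplified wideband noise out-scores a clean but weak oscillator.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 0.01, 10_000)  # 10 ms of samples, hypothetical rate

weak_oscillator = 0.1 * np.sin(2 * np.pi * 25_000 * t)  # clean 25 kHz, low amplitude
amplified_noise = 2.0 * rng.standard_normal(len(t))     # wideband noise, big gain

def transitions(signal):
    # Count zero crossings: a naive "frequency" metric.
    s = np.signbit(signal)
    return np.count_nonzero(s[1:] != s[:-1])

# The noisy "amplifier" produces far more transitions than the clean
# oscillator, so a reward based on transition count alone prefers it.
print(transitions(weak_oscillator), transitions(amplified_noise))
```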
I’ve been looking at the genetic algorithm issue for ages in terms of linguistics, e.g. adaptive bots responding to facile insults arising from base psychological issues (jealousy, anxiety, ego displacement, and others, with many combinations too), initially as part of dialectic convergence to arrive at essential truths in debates. Suffice to say, genetic algorithms can do wonders, with some outcomes appearing as if very intelligently designed indeed. Comparatively, that can also mean better than nature has achieved on Earth so far in respect of creating life forms: witness the immense number of genetic diseases, with humans suffering from some 3,000 or so conditions, not at all “well designed” by any means. In any case, here is an example (top right on the link offered): a weird-looking antenna that was the best fit for a set of criteria. At first it was considered blatantly “wrong”, yet it fitted perfectly well within the primary physics requisites as well as the secondary boundaries for its specific, mostly static role in the required application, as far as possible within budget…
https://en.wikipedia.org/wiki/Genetic_algorithm
Ignoring your nonsensical social anxiety preamble.
The genetic diseases you’ve mentioned are the genetic algorithm at work on Earth as we speak. Those afflicted might be “dead ends”, although some genetic diseases with negative symptoms are still selected as they confer additional benefits. An example of that would be sickle cell disease, which confers a measure of resistance to malaria.
Everything in nature is a work in progress, including humans.
I think I understand Microsoft’s train of thought with Win10 now, except they’re sticking with the AIDS-riddled leper branch for God knows what reason.
Well… the genetic algorithms for HIV and leprosy are external to that of humans, as they’re external infections. Win10’s genetic problems, such as they are, are likely to stem from the old Win32 API.
“Ok, it can’t possibly be worse than this! Let it mutate, then …”
http://hackaday.com/2018/11/12/how-to-evolve-a-radio/#comment-5446597
If you can’t beat it, cheat it.
I can imagine an evolutionary technique generating some really interesting circuits. But if you are defining exactly what you want, it will never generate an interesting circuit like a “Gilbert cell” unless you had decided that was exactly what you were looking for before the search started. Muntzing known-working, overly complex circuits that currently do exactly what you want (and use no more than 10 bipolar transistors) would probably converge to interesting solutions faster using an evolutionary technique.
Perhaps they used these methods to develop engine control units at Volkswagen.
Or perhaps they could be. The problem at Volkswagen was that they didn’t have the resources to design an engine control algorithm that met the emissions limits at all operating points (all combinations of speed, power, temperature, etc.). They chose the expediency of “teaching to the test”. With automated design, a more acceptable solution might be found.
Okay, so I know your point was about gaming the system, but this means that given the same constraints (i.e., the details of the testing sequence), an automated design might come up with exactly the same solution: “cheating”. In yesterday’s HaD article, https://hackaday.com/2018/11/11/the-naughty-ais-that-gamed-the-system/, a list of similar AI “gaming” problems is presented, at https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml. In all of the examples given in this list, the behavior resulted from one of two factors: 1) the people posing the problem not fully stating the objectives, or 2) the AI discovering bugs in the system that led to rewarded behavior. In both cases, the AI had no idea it was doing anything wrong: in the first case, it simply didn’t have sufficient information, and in the second, the AI had no way to differentiate between intentional and unintentional behavior of the system it was trying to solve.
This, by the way, is why the world is not yet ready for self-driving cars. People just haven’t developed the experience for safe development of specifications for AIs in critical applications.
Be careful what you ask an AI for.
Maybe they just ignore human preconceptions about how evolution works.
True indeed, Bart,
The mechanisms of brute-force evolutionary permutation searches focus as much on the grex of cell collaboration as a system (multicellular life, humans included), regardless of the grex-like organism’s “personal” illusions of individual interactions, as they do on the survival of the cellular components. Where there is some conjunction or symbiosis, cancers and other decay of the individual are held at bay; well, long enough for the organism’s sense of individual suffering as a unit being to be felt, deterministically thinking itself a victim, while its individual cells survive short term regardless, unaware they are part of a grex-like entity…
https://en.wikipedia.org/wiki/Grex_(biology)
That’s the problem with doing things without understanding them. I have seen a lot of that with people (mostly self-taught analog designers) who try to add all kinds of “tweaks” to a circuit until it “works”, without thinking through the why and how. I asked one self-proclaimed “expert” to explain his design over the course of a few months, and I got 3 or 4 very different answers to the exact same question.
As for the machine learning part, it’s garbage in, garbage out. They’ll need better simulations, set up with Monte Carlo runs to cover component variations, environment, and manufacturing tolerances. It’ll take longer and be less fun to watch.
http://k6jca.blogspot.com/2012/07/monte-carlo-and-worst-case-circuit.html
I’ll admit that I sometimes play with LTspice to come up with interesting circuit configurations. Each iteration takes much less time than “tweaking” things until they work would take in a lab. I do try to understand the why and how by analyzing the signals, which I couldn’t otherwise do in real life without expensive setups.
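For what it’s worth, here’s a minimal sketch of that Monte Carlo idea (the component values, tolerance, and ideal oscillator formula are all assumptions for illustration): score each candidate on its worst case across many randomly perturbed component sets, so evolution can’t lean on one lucky combination of values.

```python
import math
import random

TARGET_HZ = 25_000.0

def osc_frequency(r_ohms, c_farads):
    # Idealized model: f = 1 / (2*pi*R*C), as in a simple RC stage.
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def monte_carlo_fitness(r_nominal, c_nominal, tol=0.05, trials=1000):
    """Worst-case score across random component variations.

    Each trial perturbs R and C within +/- tol (uniform), mimicking
    manufacturing spread; the candidate is only as good as its worst draw.
    """
    worst = float("inf")
    for _ in range(trials):
        r = r_nominal * random.uniform(1 - tol, 1 + tol)
        c = c_nominal * random.uniform(1 - tol, 1 + tol)
        error = abs(osc_frequency(r, c) - TARGET_HZ)
        worst = min(worst, -error)  # higher (less negative) is better
    return worst

# Nominal values chosen so f lands near 25 kHz: R = 6.4 kOhm, C = 1 nF.
print(monte_carlo_fitness(6_400.0, 1e-9))
```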
The same thing happens in the software development world. Lots of copy/paste from StackOverflow and Github, followed by (in the case of SO) downvotes and “it not work!” comments. Those people can be extremely irritating.
Then again, that’s how a lot of really good developers start. They learn basics by writing glue code, then they start studying the components they’re gluing together, then they use that understanding – how to read code and how to interpret architecture – to learn how to design and build the components themselves.
Sure, there’s an option to be snarky about the approach from a degreed perspective, but as long as they’re learning, and not simply expecting to coast by (and be called experts) as voodoo tinkerers, the snark ends up sounding more like envy at the sheer brute-force strength needed to teach yourself something complex.
This is nothing new. There have been similar experiments in the past, and while they came up with interesting minimalistic circuits, they always used parasitic characteristics of the components, so the circuits wouldn’t work when using other components out of the box.
A major part of a circuit designer’s work is eliminating the effects of tolerance, drift and aging of components.
Safety is becoming more and more important, so you also have to make sure that even with the failure of several components, the system always ends up in a safe state.
This is nothing that an AI can solve today.
Pretty sure you’re wrong on the “always” front. If the test hardware is varied enough that the genetic algorithm can’t use those parasitic characteristics, it’s perfectly able to generate relocatable results.
Taking that a step further, if you want to limit the AI to “acceptable” behavior, you don’t feed the AI’s output to real hardware, but to a simulator that only models the acceptable behavior.
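A sketch of what that might look like (the toy “simulator” and every name here are hypothetical, not any real simulator’s API): fitness is computed only on the output of an idealized model, so parasitic capacitance, PCB-trace antennas, and ambient RF simply don’t exist for evolution to exploit.

```python
import numpy as np

SAMPLE_RATE = 1_000_000
DURATION_S = 0.01

def ideal_simulate(r_ohms, c_farads):
    # Toy idealized "simulator": the only physics is f = 1/(2*pi*R*C).
    # Nothing outside the model (parasitics, ambient RF) can be exploited.
    f = 1.0 / (2.0 * np.pi * r_ohms * c_farads)
    t = np.arange(0, DURATION_S, 1.0 / SAMPLE_RATE)
    return np.sin(2 * np.pi * f * t)

def evaluate(r_ohms, c_farads):
    # Fitness is computed on the simulated waveform, never on real hardware.
    waveform = ideal_simulate(r_ohms, c_farads)
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / SAMPLE_RATE)
    band = (freqs > 22_500) & (freqs < 27_500)
    return spectrum[band].max()

print(evaluate(6_400.0, 1e-9))  # R and C chosen so f lands near 25 kHz
```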
A list of algorithms and experimental systems that tried to “cheat” their way to a solution: https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml
Came here to post exactly that. When I found this list, I shared this exact story from it with friends on FB. I feel it ranks up there with the “life imitates art” Apple ][ TRON lightcycles story (https://blog.danielwellman.com/2008/10/real-life-tron-on-an-apple-iigs.html).
“picking up and amplifying signals from the surrounding environment, rather than oscillating on their own”
I wonder if they can patent this, before the naughty spy-guys squelch it…
In retrospect, with 20/20 hindsight, this isn’t _that_ surprising a result. Optimizing for small size and using constrained parts means that any circuit that comes out is going to be weird. Additionally, evolution doesn’t find the _best_ solution, it finds the _first_ solution, and radios/circuits that are interfered with by ambient RF are more likely to be stumbled upon first.
If your fitness function gives even a small reward to outputting any signal at all, a circuit that acts as an antenna is going to show up long before anything that generates its own signal. At that point, the other possible circuit designs are going to be discarded and all contending circuits going forward will be variations on that initial “radio” that further refine the incoming signal and do nothing to generate their own timing. Indeed, from that locally “better” point (from the perspective of the fitness function), it seems likely that any designs that manage to mutate towards having anything like an oscillator in them will appear to behave worse than a straight antenna design.
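That premature-convergence argument is easy to demonstrate with a made-up fitness landscape (all numbers invented): greedy selection climbs the first broad “antenna” peak it finds and can never cross the valley to the higher, narrower “true oscillator” peak.

```python
import random

def fitness(x):
    # A broad local peak near x=2 (the "antenna") and a narrower but
    # higher global peak near x=8 (the "true oscillator").
    antenna = 1.0 / (1.0 + (x - 2.0) ** 2)
    oscillator = 2.0 / (1.0 + 10.0 * (x - 8.0) ** 2)
    return antenna + oscillator

def hill_climb(start=0.0, steps=10_000, step_size=0.05):
    x = start
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):  # greedy selection only
            x = candidate
    return x

# Starting near zero, small-step greedy evolution settles on the broad
# local peak (~2.0) and never reaches the higher peak at ~8.0.
print(hill_climb())
```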
What were the solid state switches they used as interconnects? You’d need something with extremely low impedance, right?
In 1996 Adrian Thompson published a paper on his experiments to evolve a circuit on an early (and now extinct) FPGA: “‘Intrinsic’ Hardware Evolution is the use of artificial evolution, such as a Genetic Algorithm, to design an electronic circuit automatically, where each fitness evaluation is the measurement of a circuit’s performance when physically instantiated in a real reconfigurable VLSI chip.”
Turns out the evolved circuit was essentially analog, working only in a narrow range of temperature and voltage, and only on particular wafers. Not practical at all, but a fascinating try.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.50.9691&rep=rep1&type=pdf
The thing with FPGAs is that the underlying principles (and indeed the laws of physics) are analog, not digital at all.
The digital abstraction is just for convenience of expression.
Want a single-rail, wide-band, high-frequency amplifier? Better still, want six of them in one package? Reach for a 74HC06 (or 74HC07) and bias the inputs into the linear range.
Your link appears to be dead, but I remember this paper well. There was additional experimentation, in which it was discovered that if the supply voltage and temperature were kept constant during the evolution process, the circuit only worked over a very narrow temperature and supply-voltage range, whereas if both were varied during evolution, the circuit was better behaved.