The Unix operating system has been around for decades, and it and its lookalikes (mainly Linux) are a critical part of the computing world. Apple’s operating system, macOS, is Unix-based, as are Solaris and the BSDs. Even if you’ve never directly used one of these operating systems, at least two-thirds of all websites are served by Unix or Unix-like software. And if you’ve ever picked up a smartphone, chances are it was running either a Unix variant or the Linux-based Android. Unix hasn’t become so ubiquitous because of its accessibility, cost, or user interface design, although these things helped. The root cause of its success is its design philosophy.
Good design is crucial for success. Whether that’s good design of a piece of software, infrastructure like a railroad or power grid, or even something relatively simple like a flag, without good design your project is essentially doomed. Although you might be able to build a workable one-off electronics project that’s a rat’s nest of wires, or a prototype of something that gets the job done but isn’t user-friendly or scalable, for a large-scale project a set of good design principles from the start is key.
Émilie du Châtelet lived a wild, wild life. She was a brilliant polymath who made important contributions to the Enlightenment, including adding a mathematical statement of conservation of energy into her French translation of Newton’s Principia, debunking the phlogiston theory of fire, and suggesting that what we would call infrared light carried heat.
She had good company; she was Voltaire’s lover and companion for fifteen years, and she built a private research institution out of a château with him before falling in love with a younger poet. She was tutored in math by Maupertuis and corresponded with Bernoulli and Euler. She was an avid gambler and handy with a sword. She died young, at 41, but the years she did live were pretty amazing.
Last week we covered the past and current state of artificial intelligence — what modern AI looks like, the differences between weak and strong AI, AGI, and some of the philosophical ideas about what constitutes consciousness. Weak AI is already all around us, in the form of software dedicated to performing specific tasks intelligently. Strong AI is the ultimate goal, and a true strong AI would resemble what most of us have grown familiar with through popular fiction.
Artificial General Intelligence (AGI) is a modern goal many AI researchers are currently devoting their careers to in an effort to bridge that gap. While AGI wouldn’t necessarily possess any kind of consciousness, it would be able to handle any data-related task put before it. Of course, as humans, it’s in our nature to try to forecast the future, and that’s what we’ll be talking about in this article. What are some of our best guesses about what we can expect from AI in the future (near and far)? What possible ethical and practical concerns are there if a conscious AI were to be created? In this speculative future, should an AI have rights, or should it be feared?
The concept of artificial intelligence dates back far before the advent of modern computers — even as far back as Greek mythology. Hephaestus, the Greek god of craftsmen and blacksmiths, was believed to have created automatons to work for him. Another mythological figure, Pygmalion, carved a statue of a beautiful woman from ivory and proceeded to fall in love with it. Aphrodite then imbued the statue with life as a gift to Pygmalion, who married the now-living woman.
Throughout history, myths and legends of artificial beings that were given intelligence were common. These varied from the simply supernatural in origin (such as the Greek myths) to more scientifically reasoned methods as the idea of alchemy grew in popularity. In fiction, particularly science fiction, artificial intelligence became more and more common beginning in the 19th century.
But it wasn’t until mathematics, philosophy, and the scientific method advanced enough in the 19th and 20th centuries that artificial intelligence was taken seriously as an actual possibility. It was during this time that mathematicians such as George Boole, Bertrand Russell, and Alfred North Whitehead began presenting theories formalizing logical reasoning. With the development of digital computers in the second half of the 20th century, these concepts were put into practice, and AI research began in earnest.
Over the last 50 years, interest in AI development has waxed and waned with public interest and the successes and failures of the industry. Predictions made by researchers in the field, and by science fiction visionaries, have often fallen short of reality. Generally, this can be chalked up to computing limitations. But a deeper problem, our understanding of what intelligence actually is, has been a source of tremendous debate.
Despite these setbacks, AI research and development has continued. Currently, this research is being conducted by technology corporations who see the economic potential in such advancements, and by academics working at universities around the world. Where does that research currently stand, and what might we expect to see in the future? To answer that, we’ll first need to attempt to define what exactly constitutes artificial intelligence.
A while back I wrote a piece titled “It’s Time the Software People and Mechanical People Sat Down and Had a Talk”. It was mostly a reaction to what I believe to be a growing problem in the hacker community. Bad mechanical designs get passed on by what is essentially digital word of mouth. A sort of mythology grows around these bad designs, and they start to separate from science. Rather than combat this, people tend to defend them much like one would defend a favorite band or a painting. This comes out of various kinds of ignorance, which were covered in more detail in the original article.
There was an excellent discussion in the comments, which reaffirmed why I like writing for Hackaday so much. You guys seriously rock. After reading through the comments and thinking about it, some of my views have changed. Some have stayed the same.
It has nothing to do with software guys.
I definitely made a cognitive error. I think a lot of people who get into hardware hacking from the hobby world have a beginning in software. It makes sense: they’re already reading blogs like this one. Maybe they buy an Arduino and start messing around. It’s not long before they buy a 3D printer, and then naturally want to contribute back.
Since a larger portion of amateur mechanical designers come from software, it would make sense that when I had a bad interaction with someone over a design critique, they would end up coming at it from a software perspective. So, with a sample size that was too small and that didn’t fully take into account my positive interactions along with the negative ones, I made a false generalization. Sorry. When I sat down to think about it, I could easily have written an article titled “It’s time the amateur mechanical designers and the professionals had a talk,” with the same point at the end.
Though the part about hardware costs still applies.
I started out rather aggressively by stating that software people don’t understand the cost of physical things. I would change that to: “anyone who hasn’t designed a physical product from napkin to market doesn’t understand the cost of things.”
With the advances in rapid prototyping, there’s been a huge influx of people into the physical realm of hacking. While my overall view of this development is positive, I’ve noticed a schism forming in the community. I’m going to have to call a group out. I think it stems from a fundamental refusal by software folks to adapt their ways of thinking to the realities of working in the physical realm. The problem, I think, comes down to three things: dismissal of cost, favoring modularity over understanding, and a resulting insistence that there’s nothing to learn.
IRATEMONK is an implant hidden in the firmware of hard drives from manufacturers including Western Digital, Seagate, Maxtor, and Samsung; it replaces the drive’s Master Boot Record (MBR).
It isn’t clear whether the manufacturers are complicit in implanting IRATEMONK in their hardware, or if the NSA has just developed it to work with those drives. Either way, it raises an important question: how do we know we can trust the hardware? The short answer is that we can’t. According to the text accompanying the graphic, the NSA
…[installs] hardware units on a targeted computer by, for example, intercepting the device when it’s first being delivered to its intended recipient, a process the NSA calls ‘interdiction.’
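To make the MBR angle concrete, here’s a minimal sketch of how you might snapshot and re-check a drive’s boot sector from Linux. The device path and baseline filename are placeholders for the example, and to be clear, a firmware-level implant like IRATEMONK can simply rewrite the MBR again after you check it, so this illustrates what the implant targets rather than offering a real defense.

```python
#!/usr/bin/env python3
"""Minimal sketch: snapshot and re-check a disk's MBR.

The MBR is the first 512-byte sector of the drive, which the BIOS
loads and executes at boot -- exactly what IRATEMONK overwrites.
The device path and baseline file below are assumptions for the
example; reading a raw block device requires root.
"""
import hashlib
import sys

DEVICE = "/dev/sda"       # hypothetical target drive
BASELINE = "mbr.sha256"   # file holding a previously recorded hash


def read_mbr(path):
    """Return the first sector (512 bytes) of the given block device."""
    with open(path, "rb") as disk:
        mbr = disk.read(512)
    # A conventional MBR ends with the 0x55 0xAA boot signature.
    if mbr[510:512] != b"\x55\xaa":
        print("warning: missing 0x55AA boot signature", file=sys.stderr)
    return mbr


def main():
    digest = hashlib.sha256(read_mbr(DEVICE)).hexdigest()
    try:
        with open(BASELINE) as f:
            known = f.read().strip()
    except FileNotFoundError:
        # First run: record the current MBR hash as the baseline.
        with open(BASELINE, "w") as f:
            f.write(digest)
        print(f"recorded baseline: {digest}")
        return
    if digest == known:
        print("MBR matches the recorded baseline")
    else:
        print("MBR has changed since the baseline was recorded")
        sys.exit(1)


if __name__ == "__main__":
    main()
```

The catch, and the reason the question of trust is so thorny, is that a compromised drive controller sits between a script like this and the platters: firmware that lies about what a sector contains can serve clean-looking bytes to a read while booting something else entirely, which is why implants at this level are so hard to detect from the host.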
We’re interested to hear your responses to this: is the situation as bleak as it seems? How do you build a system that you know you can trust? Are there any alternatives that better guarantee you aren’t being spied on?