The Future of Artificial Intelligence

Last week we covered the past and current state of artificial intelligence — what modern AI looks like, the differences between weak and strong AI, AGI, and some of the philosophical ideas about what constitutes consciousness. Weak AI is already all around us, in the form of software dedicated to performing specific tasks intelligently. Strong AI is the ultimate goal, and a true strong AI would resemble what most of us have grown familiar with through popular fiction.

Artificial General Intelligence (AGI) is a modern goal many AI researchers are currently devoting their careers to in an effort to bridge that gap. While AGI wouldn’t necessarily possess any kind of consciousness, it would be able to handle any data-related task put before it. Of course, as humans, it’s in our nature to try to forecast the future, and that’s what we’ll be talking about in this article. What are some of our best guesses about what we can expect from AI in the future (near and far)? What possible ethical and practical concerns are there if a conscious AI were to be created? In this speculative future, should an AI have rights, or should it be feared?

Continue reading “The Future of Artificial Intelligence”

AI and the Ghost in the Machine

The concept of artificial intelligence dates back far before the advent of modern computers — even as far back as Greek mythology. Hephaestus, the Greek god of craftsmen and blacksmiths, was believed to have created automatons to work for him. Another mythological figure, Pygmalion, carved a statue of a beautiful woman from ivory and fell in love with it. Aphrodite then imbued the statue with life as a gift to Pygmalion, who married the now-living woman.

Pygmalion by Jean-Baptiste Regnault, 1786, Musée National du Château et des Trianons

Throughout history, myths and legends of artificial beings given intelligence were common. They ranged from beings with purely supernatural origins (as in the Greek myths) to ones created through more scientifically reasoned methods as alchemy grew in popularity. In fiction, particularly science fiction, artificial intelligence became more and more common beginning in the 19th century.

But it wasn’t until mathematics, philosophy, and the scientific method advanced enough in the 19th and 20th centuries that artificial intelligence was taken seriously as an actual possibility. It was during this time that mathematicians such as George Boole, Bertrand Russell, and Alfred North Whitehead began presenting theories formalizing logical reasoning. With the development of digital computers in the second half of the 20th century, these concepts were put into practice, and AI research began in earnest.

Over the last 50 years, interest in AI development has waxed and waned with public interest and with the successes and failures of the industry. Predictions made by researchers in the field, and by science fiction visionaries, have often fallen short of reality. Generally, this can be chalked up to computing limitations. But a deeper problem, understanding what intelligence actually is, has been a source of tremendous debate.

Despite these setbacks, AI research and development has continued. Currently, this research is being conducted by technology corporations who see the economic potential in such advancements, and by academics working at universities around the world. Where does that research currently stand, and what might we expect to see in the future? To answer that, we’ll first need to attempt to define what exactly constitutes artificial intelligence.

Continue reading “AI and the Ghost in the Machine”

What We Are Doing Wrong. The Robot That’s Not in Our Pocket

I’m not saying that the magic pocket oracle we all carry around isn’t great, but I think there is a philosophical disconnect between what it is and what it could be for us. Right now our technology is still trying to improve every tool except the one we use the most: our brain.

At first this seems like a preposterous claim. Doesn’t Google Maps let me navigate completely foreign locations with ease? Doesn’t Evernote let me offload complicated knowledge into a magic box somewhere and recall it with photographic precision whenever I need it? Well, yes, they do, but they do it wrong. What about food-ordering apps? Siri? What about all of these? Don’t they dramatically extend my ability? They do, but they do it inefficiently, and they will always do it inefficiently unless there is a philosophical change in how we design our tools.

Continue reading “What We Are Doing Wrong. The Robot That’s Not in Our Pocket”

The Most Useless Book Scanner

How do artificial intelligences get so intelligent? The same way we do: they get a library card and head on over to read up on their favorite topics. Or at least that’s the joke that [Jakob Werner] is playing with in his automaton art piece, “A Machine Learning” (Google translated here).

Simulating a reading machine, a pair of eyeballs on stalks scans left to right and slowly works its way down the page as another arm swings around and flips to the next one. It’s all done with hand-crafted wooden gears, in contrast to the high-tech subject matter. It’s an art piece, and you can tell that [Jakob] has paid attention to how it looks. (The all-wooden rollers are sweet.) But it’s also a “useless machine” with a punch line.

Is it a Turing test? How can we tell that the machine isn’t reading? What about “real” AIs? Are they learning or do they just seem to be? OK, Google’s DeepMind is made of silicon and electricity instead of wood, but does that actually change anything? It’s art, so you get license to think crazy thoughts like this.

We’ve covered a few less conceptual useless machines here. Here is one of our favorites. Don’t hesitate to peruse them all.

Meet Blue Jay, The Flying Drone Pet Butler

Twenty students at the Eindhoven University of Technology (TU/e) in the Netherlands share one vision of the future: the fully domesticated drone pet, a flying friend that helps you whenever you need it and, in general, is very, very cute. Their drone “Blue Jay” is packed with sensors, has a strong claw for grabbing and carrying cargo, navigates autonomously indoors, and interacts with humans at eye level.

Continue reading “Meet Blue Jay, The Flying Drone Pet Butler”

Impressive StarCraft 2 AI More Fair to Fleshy Opponents

There was a discussion in the comments when the AlphaGo results were released. Some commenters postulated that AI researchers are discounting more fluid games such as the RTS StarCraft.

The comments then devolved into a discussion of what would make an AI fair to pit against a human player. Many times, AI in RTS games wins because it has direct access to the variables in the game. Rather than physically looking at the small area of the screen where a unit is located and then moving its eye to take in strategic information like exact location, health, unit level, and so on, the AI just knows instantly that the unit is at 120x, 2000y, 76%, lvl5, and so on. The AI also has no click lag, since it gets direct access to the game’s API and simply changes the variables and action queue of a unit directly.
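As a rough illustration of that contrast, here is a toy Python sketch of the kind of "direct access" bot described above. The Unit fields and the retreat_wounded helper are made up for the example and don't correspond to any real StarCraft interface; the point is simply that such a bot never looks at the screen or clicks anything.

```python
# Illustrative only: a hypothetical "direct access" bot of the kind described
# above. Unit and retreat_wounded are made-up stand-ins, not any real
# StarCraft API -- the point is that nothing here involves looking or clicking.

from dataclasses import dataclass

@dataclass
class Unit:
    x: float           # exact position, known instantly
    y: float
    health_pct: float  # e.g. 76.0
    level: int         # e.g. 5

def retreat_wounded(units: list[Unit], rally: tuple[float, float]) -> list[str]:
    """Pull any unit below 30% health back to the rally point."""
    orders = []
    for u in units:
        if u.health_pct < 30.0:
            # No screen-reading, no mouse movement, no click lag:
            # the bot just rewrites the unit's action queue directly.
            orders.append(f"move unit at ({u.x:.0f},{u.y:.0f}) -> {rally}")
    return orders

# Example: one healthy unit, one wounded one.
print(retreat_wounded([Unit(120, 2000, 76.0, 5), Unit(300, 1800, 22.0, 3)], (0, 0)))
```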

So we were interested to see [Matt]’s StarCraft AI, which requires the computer to actually look at the game board and click. [Matt]’s AI doesn’t see using OpenCV, which in its own way would force the computer to look in a way that’s unnatural to it. Instead, he wrote some code to intercept the behind-the-scenes calls to the DirectX library.

The computer is then able to make determinations about what it is looking at using the texture information and other data sent to the library. Unlike AIs that get a direct look at the variables, it has to translate this and keep its own mental picture of the map and the situation. If a building is destroyed, for example, it has to go over and look at that part of the map, test what it’s seeing against a control, and then remove the building from its list.
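To get a feel for the bookkeeping this implies, here is a minimal Python sketch. It is not [Matt]'s actual code: classify_region is a hypothetical stand-in for whatever the intercepted texture data lets the bot recognize at a given spot, and the dictionary plays the role of its "mental picture" of the map.

```python
# A rough sketch of the bookkeeping described above, not [Matt]'s actual code.
# classify_region() is a hypothetical stand-in for whatever the intercepted
# DirectX texture data lets the bot recognize at a given spot on the map.

def classify_region(x: int, y: int) -> str:
    """Pretend perception: returns what the bot 'sees' at (x, y).
    In the real project this would come from texture info captured
    from the game's DirectX calls."""
    # Hard-coded here so the example runs on its own.
    return "rubble" if (x, y) == (40, 60) else "barracks"

# The bot's own mental picture of the map: building name -> last known position.
known_buildings = {"barracks_1": (40, 60), "barracks_2": (80, 15)}

def recheck_buildings() -> None:
    """Look at each remembered building and drop any that no longer match."""
    for name, (x, y) in list(known_buildings.items()):
        seen = classify_region(x, y)
        if seen != "barracks":          # compare against the expected 'control'
            del known_buildings[name]   # the building is gone; forget it
            print(f"{name} at ({x},{y}) destroyed -- removed from mental map")

recheck_buildings()
print("still standing:", known_buildings)
```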

The AI’s one big advantage is its robot fingers. Even though this AI has to click on the interface, it doesn’t do it with a weak, articulated, fleshy nub like the rest of us. This allows the AI to reach crazy Actions Per Minute (APM), in the range of 500 to 2000.

The AI has only been tested against StarCraft’s built-in cheater bots. So far it can win most games against the hard-level bots. If you want to see a video of what the AI is looking at, check after the break.

Continue reading “Impressive StarCraft 2 AI More Fair to Fleshy Opponents”

A Short History of AI, and Why It’s Heading in the Wrong Direction

Sir Winston Churchill often spoke of World War 2 as the “Wizard War”. Both the Allies and Axis powers were in a race to gain the electronic advantage over each other on the battlefield. Many technologies were born during this time – one of them being the ability to decipher coded messages. The devices that were able to achieve this feat were the precursors to the modern computer. In 1946, the US Military developed the ENIAC, or Electronic Numerical Integrator And Computer. Using over 17,000 vacuum tubes, the ENIAC was a few orders of magnitude faster than all previous electro-mechanical computers. The part that excited many scientists, however, was that it was programmable. It was the notion of a programmable computer that would give rise to the idea of artificial intelligence (AI).

As time marched forward, computers became smaller and faster. The invention of the semiconductor transistor gave rise to the microprocessor, which accelerated the development of computer programming. AI began to pick up steam, and pundits began to make grand claims about how computer intelligence would soon surpass our own. Programs like ELIZA and Blocks World fascinated the public and certainly gave the impression that when computers became faster, as they surely would in the future, they would be able to think like humans do.

But it soon became clear that this would not be the case. While these and many other AI programs were good at what they did, neither they nor their algorithms were adaptable. They were ‘smart’ at their particular tasks, and could even be considered intelligent judging from their behavior, but they had no understanding of the task and didn’t hold a candle to the intellectual capabilities of even a typical lab rat, let alone a human.

Continue reading “A Short History of AI, and Why It’s Heading in the Wrong Direction”