The Real Science (Not Armchair Science) Of Consciousness

Among brain researchers there’s a truism: the reason you underestimate how much unconscious processing goes on in your brain is that you’re not conscious of it. And while there is a lot of unconscious processing, the truism also points to a duality: your brain does both processing that leads to consciousness and processing that does not. As you’ll see below, this duality has opened up a scientific approach to studying consciousness.

Are Subjective Results Scientific?

A researcher checking fMRI images.

In science we’re used to empirical test results: measurements made in a verifiable way, like a reading from a calibrated meter that different people can take again and again. But what if all you have to go on is what a person says they are experiencing, a subjective observation? That doesn’t sound very scientific.

That lack of objective evidence is a big part of what stalled scientific research into consciousness for many years. But consciousness is unique. While we have tools for measuring brain activity, how do you know whether that activity is contributing to a conscious experience or is happening unconsciously? The only way is to ask the person whose brain you’re measuring: are they conscious of an image being presented to them? If not, then it’s being processed unconsciously. You have to ask them, and their answer is, naturally, subjective.
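To make that contrastive logic concrete, here’s a minimal sketch of the kind of analysis involved. The data, region count, and threshold here are placeholders invented for illustration, not taken from any particular study: trials are split by the subject’s subjective report, and the measured brain activity in the two groups is compared.

    import numpy as np

    # Hypothetical per-trial data: one activity value per brain region per trial,
    # plus the subject's subjective report ("seen" vs "not seen") for each trial.
    activity = np.random.rand(200, 64)           # 200 trials x 64 regions (placeholder values)
    reported_seen = np.random.rand(200) > 0.5    # subjective reports, one per trial

    # Contrastive analysis: the stimulus is identical in both groups, so any
    # difference in activity is associated with conscious perception itself.
    seen_mean = activity[reported_seen].mean(axis=0)
    unseen_mean = activity[~reported_seen].mean(axis=0)
    difference = seen_mean - unseen_mean

    # Regions with the largest "seen minus not seen" difference are candidate
    # correlates of conscious (as opposed to unconscious) processing.
    print(np.argsort(difference)[::-1][:5])

The point of the sketch is simply that the subjective report is what labels the trials; everything downstream of that label is ordinary, objective measurement.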

Skepticism about subjective results, along with a lack of tools, held back scientific research into consciousness for many years. It was taboo to even use the C-word until the 1980s, when researchers decided that subjective results were okay after all. Since then there’s been a great deal of scientific research into consciousness, and what follows is a sampling of that research. And as you’ll see, it’s even saved a life or two.

Continue reading “The Real Science (Not Armchair Science) Of Consciousness”

AI And The Ghost In The Machine

The concept of artificial intelligence dates back far before the advent of modern computers, even as far back as Greek mythology. Hephaestus, the Greek god of craftsmen and blacksmiths, was believed to have created automatons to work for him. Another mythological figure, Pygmalion, carved a statue of a beautiful woman from ivory and fell in love with her. Aphrodite then imbued the statue with life as a gift to Pygmalion, who married the now-living woman.

Pygmalion by Jean-Baptiste Regnault, 1786, Musée National du Château et des Trianons

Throughout history, myths and legends of artificial beings endowed with intelligence were common. These ranged from beings with simple supernatural origins (as in the Greek myths) to ones created by more scientifically reasoned methods as alchemy grew in popularity. In fiction, particularly science fiction, artificial intelligence became more and more common beginning in the 19th century.

But it wasn’t until mathematics, philosophy, and the scientific method advanced far enough in the 19th and 20th centuries that artificial intelligence was taken seriously as an actual possibility. It was during this time that mathematicians such as George Boole, Bertrand Russell, and Alfred North Whitehead began presenting theories formalizing logical reasoning. With the development of digital computers in the second half of the 20th century, these concepts were put into practice, and AI research began in earnest.

Over the last 50 years, interest in AI development has waxed and waned with public interest and with the successes and failures of the industry. Predictions made by researchers in the field, and by science fiction visionaries, have often fallen short of reality. Generally, this can be chalked up to computing limitations. But a deeper problem, the question of what intelligence actually is, has been a source of tremendous debate.

Despite these setbacks, AI research and development has continued. Currently, this research is being conducted by technology corporations that see the economic potential in such advancements, and by academics working at universities around the world. Where does that research currently stand, and what might we expect to see in the future? To answer that, we’ll first need to attempt to define what exactly constitutes artificial intelligence.

Continue reading “AI And The Ghost In The Machine”

Next Week In NYC: How The Age Of Machine Consciousness Is Transforming Our Lives

I’ve developed or have been involved with a number of imaging technologies, everything from DIY synthetic aperture radar and the MIT through-wall radar to the next generation of ultrasound imaging devices. Imagery is cool, but what the end user often wants is a way to get an answer rather than a reconstruction to interpret. So let’s figure that out.

We’re kicking off a discussion on how to apply deep learning to more than just beating Jeopardy champions at their own game. We’d like to apply deep learning to hard data, to imagery. Is it possible to get the computer to accurately provide the diagnosis?
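As a rough illustration of what “getting the computer to provide the diagnosis” looks like in code, here’s a minimal sketch of an image classifier in PyTorch. The architecture, input size, class count, and data are all placeholder assumptions for illustration, not anything the panelists have built:

    import torch
    import torch.nn as nn

    # Minimal convolutional network mapping a single-channel image
    # (e.g. an ultrasound or radar reconstruction) to a diagnostic label.
    class DiagnosisNet(nn.Module):
        def __init__(self, num_classes=2):           # e.g. "finding" vs "no finding"
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 input

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    # One training step on a hypothetical batch of labeled images.
    model = DiagnosisNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    images = torch.randn(8, 1, 64, 64)         # placeholder batch: 8 images, 64x64
    labels = torch.randint(0, 2, (8,))         # placeholder diagnoses
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The interesting (and hard) part is everything this sketch glosses over: getting enough well-labeled imagery, and validating that the answers are good enough to act on.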

I helped to organize a seminar series/discussion panel in New York City on November 13th (you know, for those readers who are closer to New York than to Munich). The discussion panel includes David Ferrucci (the guy who led the IBM Watson program), MIT astrophysicist Max Tegmark, and Jonathan Rothberg, the person who created genetic sequencing on a chip. As the vanguard of creativity and enthusiasm in everything technical, we’d like the Hackaday community to join the conversation.

Continue reading “Next Week In NYC: How The Age Of Machine Consciousness Is Transforming Our Lives”