Sir Winston Churchill often spoke of World War II as the “Wizard War”. Both the Allies and the Axis powers were in a race to gain the electronic advantage over each other on the battlefield. Many technologies were born during this time – one of them being the ability to decipher coded messages. The devices that achieved this feat were the precursors to the modern computer. In 1946, the US Army unveiled ENIAC, the Electronic Numerical Integrator And Computer. Using over 17,000 vacuum tubes, ENIAC was a few orders of magnitude faster than any previous electro-mechanical computer. The part that excited many scientists, however, was that it was programmable. It was the notion of a programmable computer that would give rise to the idea of artificial intelligence (AI).
As time marched forward, computers became smaller and faster. The invention of the transistor gave rise to the microprocessor, which accelerated the development of computer programming. AI began to pick up steam, and pundits made grand claims that computer intelligence would soon surpass our own. Programs like ELIZA and Blocks World fascinated the public and certainly gave the impression that once computers became faster – as they surely would – they would be able to think like humans do.
But it soon became clear that this would not be the case. While these and many other AI programs were good at what they did, neither they nor their algorithms were adaptable. They were ‘smart’ at their particular task, and could even be considered intelligent judging from their behavior, but they had no understanding of the task, and didn’t hold a candle to the intellectual capabilities of even a typical lab rat, let alone a human.
Neural Networks
As AI faded into the sunset in the late 1980s, neural network researchers were finally able to get some much-needed funding. Neural networks had been around since the 1950s, but had been actively squelched by the AI researchers. Starved of resources, not much was heard of neural nets until it became obvious that AI was not living up to the hype. Unlike the computers that original AI was based on, neural networks do not have a processor or a central place to store memory.
Neural networks are not programmed like a computer. They are connected in a way that gives them the ability to learn from their inputs. In this way, they are similar to a mammalian brain. After all, in the big picture a brain is just a bunch of neurons connected together in highly specific patterns. This resemblance to brains gained neural networks the attention of those disillusioned with computer-based AI.
In the mid-1980s, a neural network called NETtalk was built that was able to, on the surface at least, learn to read. It did this by learning to map patterns of letters to spoken sounds. After a little training, it had learned to pronounce individual words. NETtalk was hailed as a triumph of human ingenuity, capturing news headlines around the world. But from an engineering point of view, what it did was not difficult at all. It did not understand anything. It just matched patterns of letters with sounds. It did learn, however, which is something computer-based AI had much difficulty with.
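To get a feel for what “mapping patterns of letters to sounds” involves, here is a minimal sketch of the idea. It is not NETtalk’s actual architecture (NETtalk fed a sliding window of letters into a small neural network with a hidden layer); this is just a toy mistake-driven classifier, and the window encoding, toy lexicon, and phoneme labels below are invented for illustration.

```python
# Toy "letters to sounds" learner: classify the centre letter of a
# 3-letter window into a phoneme label. The lexicon and labels are
# invented; NETtalk itself used a wider window and a hidden layer.
import random
from collections import defaultdict

def windows(word, phonemes):
    """Yield (3-letter window, phoneme for the centre letter) pairs."""
    padded = "_" + word + "_"                  # '_' marks word boundaries
    for i, ph in enumerate(phonemes, start=1):
        yield padded[i - 1:i + 2], ph

def features(window):
    """Position/letter indicator features for one window."""
    return [(i, ch) for i, ch in enumerate(window)]

class Perceptron:
    """Tiny multi-class perceptron: weights[label][feature] -> score."""
    def __init__(self):
        self.weights = defaultdict(lambda: defaultdict(float))
        self.labels = set()

    def predict(self, feats):
        scores = {lab: sum(self.weights[lab][f] for f in feats)
                  for lab in self.labels}
        return max(scores, key=scores.get)

    def train(self, data, epochs=30):
        self.labels.update(lab for _, lab in data)
        for _ in range(epochs):
            random.shuffle(data)
            for feats, gold in data:
                guess = self.predict(feats)
                if guess != gold:              # mistake-driven update
                    for f in feats:
                        self.weights[gold][f] += 1.0
                        self.weights[guess][f] -= 1.0

# Toy lexicon: one phoneme symbol per letter (labels are made up).
lexicon = {"cat": ["k", "ae", "t"], "cent": ["s", "eh", "n", "t"]}
data = [(features(w), ph) for word, phs in lexicon.items()
        for w, ph in windows(word, phs)]

model = Perceptron()
model.train(data)
print(model.predict(features("_ce")))          # 'c' before 'e' -> 's', not 'k'
```

Nothing in this sketch (or in NETtalk) knows what a word means; it only learns which sound tends to go with which pattern of letters, which is exactly the point made above.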
Eventually, neural networks would suffer a fate similar to that of computer-based AI – a lot of hype and interest, only to fade after they were unable to produce what people expected.
A New Century
The transition into the 21st century saw little progress in the development of AI. In 1997, IBM’s Deep Blue made brief headlines when it beat [Garry Kasparov] at his own game in a series of chess matches. But Deep Blue did not win because it was intelligent. It won because it was simply faster. Deep Blue did not understand chess, in the same way that a calculator does not understand math.
Modern times have seen much the same approach to AI. Google is using neural networks combined with a hierarchical structure and has made some interesting discoveries, one of them being a process called Inceptionism. Neural networks are promising, but they still show no clear path to a true artificial intelligence.
IBM’s Watson was able to best some of Jeopardy’s top players. It’s easy to think of Watson as ‘smart’, but nothing could be further from the truth. Watson retrieves its answers by searching terabytes of information very quickly. It has no ability to actually understand what it’s saying.
One can argue that the process of trying to create AI over the years has influenced how we define it, even to this day. Although we all agree on what the term “artificial” means, defining what “intelligence” actually is presents another layer to the puzzle. Looking at how intelligence was defined in the past will give us some insight into how we have failed to achieve it.
Alan Turing and the Chinese Room
Alan Turing, father of modern computing, developed a simple test to determine if a computer was intelligent. Known as the Turing Test, it goes something like this: if a computer can converse with a human such that the human thinks he or she is conversing with another human, then the computer has imitated a human and can be said to possess intelligence. The ELIZA program mentioned above fooled a handful of people with this test. Turing’s definition of intelligence is behavior-based, and was accepted for many years. This would change in 1980, when John Searle put forth his Chinese Room argument.
Consider an English-speaking man locked in a room. In the room is a desk, and on that desk is a large book. The book is written in English and contains instructions on how to manipulate Chinese characters. He doesn’t know what any of it means, but he’s able to follow the instructions. Someone then slips a piece of paper under the door. On the paper is a story and questions about the story, all written in Chinese. The man doesn’t understand a word of it, but is able to use his book to manipulate the Chinese characters. He fills in answers to the questions using his book, and passes the paper back under the door.
The Chinese speaking person on the other side reads the answers and determines they are all correct. She comes to the conclusion that the man in the room understands Chinese. It’s obvious to us, however, that the man does not understand Chinese. So what’s the point of the thought experiment?
The man is a processor. The book is a program. The paper under the door is the input. The processor applies the program to the input and produces an output. This simple thought experiment shows that a computer can never be considered intelligent, as it can never understand what it’s doing. It’s just following instructions. The intelligence lies with the author of the book – the programmer – not with the man or the processor.
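The mapping is easy to make concrete. In the toy sketch below, the “book” is nothing but a lookup table and the “man” is the loop that applies it; the table entries are invented purely for illustration, and the point is that correct-looking answers come out without anything in the system understanding the questions.

```python
# Toy Chinese Room: the "book" is a rule table, the "man" is the code
# that applies it. Nothing here understands Chinese; the entries are
# invented for illustration only.
BOOK = {
    "故事里有几个人？": "两个人。",        # "How many people are in the story?" -> "Two."
    "他们去了哪里？": "他们去了市场。",    # "Where did they go?" -> "To the market."
}

def the_room(question: str) -> str:
    """Mechanically follow the book; no comprehension involved."""
    return BOOK.get(question, "我不知道。")   # default: "I don't know."

# The observer outside the door only ever sees plausible answers.
for q in BOOK:
    print(q, "->", the_room(q))
```

Searle’s claim is that scaling this up – a bigger book, a faster man – changes nothing essential.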
A New Definition of Intelligence
In all of mankind’s pursuit of AI, we have been, and still are, looking to behavior as the definition of intelligence. But John Searle has shown us how a computer can produce intelligent behavior and still not be intelligent. How can the man or the processor be intelligent if it does not understand what it’s doing?
All of the above has been said to draw a clear line between behavior and understanding. Intelligence simply cannot be defined by behavior. Behavior is a manifestation of intelligence, and nothing more. Imagine lying still in a dark room. You can think, and are therefore intelligent. But you’re not producing any behavior.
Intelligence should be defined by the ability to understand. [Jeff Hawkins], author of On Intelligence, has developed a way to do this with prediction. He calls it the Memory Prediction Framework. Imagine a system that is constantly trying to predict what will happen next. When a prediction is met, the function is satisfied. When a prediction is not met, focus is pointed at the anomaly until it can be predicted. For example, you hear the jingle of your pet’s collar while you’re sitting at your desk. You turn to the door, predicting you will see your pet walk in. As long as this prediction is met, everything is normal. It is likely you’re unaware of doing this. But if the prediction is violated, it brings the scenario into focus, and you will investigate to find out why you didn’t see your pet walk in.
This process of constantly trying to predict your environment allows you to understand it. Prediction is the essence of intelligence, not behavior. If we can program a computer or neural network to follow the prediction paradigm, it can truly understand its environment. And it is this understanding that will make the machine intelligent.
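As a rough illustration of what such a prediction loop might look like (a toy sketch only, not [Jeff Hawkins]’ actual framework), imagine a system that remembers which event usually follows which, predicts the next one, and flags anything that violates the prediction:

```python
# Minimal predict-then-check loop: learn transitions between events,
# predict the next event, and flag violations for attention.
# The event stream below is invented for illustration.
from collections import defaultdict, Counter

class Predictor:
    def __init__(self):
        self.memory = defaultdict(Counter)   # previous event -> counts of what followed
        self.prev = None

    def predict(self):
        if self.prev is None or not self.memory[self.prev]:
            return None                      # nothing learned yet
        return self.memory[self.prev].most_common(1)[0][0]

    def observe(self, event):
        prediction = self.predict()
        surprised = prediction is not None and prediction != event
        if self.prev is not None:
            self.memory[self.prev][event] += 1   # learn the transition
        self.prev = event
        return surprised

# The collar jingle is normally followed by the pet at the door...
stream = ["jingle", "pet_at_door"] * 5 + ["jingle", "nothing"]
p = Predictor()
for event in stream:
    if p.observe(event):
        print("Prediction violated - attend to:", event)
```

Most of the stream passes unnoticed, just as the pet at the door does; only the final, violated prediction is brought into focus.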
So now it’s your turn. How would you define the ‘intelligence’ in AI?
The argument strikes me as rather circular and unconvincing. Here’s a thought experiment:
Imagine that scientists created a silicon chip that exactly duplicated the function of a neuron. After all, the behavior of individual neurons isn’t that arcane or complicated. Now replace a single neuron in a human brain with this chip. Is the person no longer conscious? No longer human? No longer intelligent?
Now replace two neurons. Same questions. Now replace a billion. Now replace all of them. We have an exact copy of the brain, in silicon. Why can’t we say that it’s intelligent and even conscious? Just because the physical substrate is different?
Oh, and I’ve always heard that the “C” in ENIAC stands for “calculator”.
Exactly, one arrangement of star dust can be as suitable as another, because it is not the star dust that matters, but the arrangement. Both the mind and Turing machines are patterns of self-interacting patterns, strange attractors within anti-entropic systems whose behaviour influences their own parameters so that their foci are dynamic.
Perhaps I may be excused for pursuing this discussion further since the miasma of comprehension in the area of thinking is rather dense. From the assumption that the mind is a mere example of curious meat to the extension that some sort of vapor arises automatically out of extraordinary complexity to materialize as consciousness there has to be some ground which is a bit more fertile for ideas. I keep having alternate peeks at what might be closer to what functions or at minimum, appeals to me as to what might be happening.
The game of chess is a rather limited field of patterns wherein the elements of action are more observable than many other areas involved in thinking. The actual pieces on the board are symbols of mind patterns and have no inherent dynamics of their own. A more lively set wherein each piece would be a robot with its appropriate active capabilities set in independent motion would surely decay into chaos very quickly since the game concerns not merely the interaction of various types of forces but the marshaling of their abilities in some sort of overall strategy producing positive activity in planned areas and suppressing action elsewhere. In other words, some sort of central coordination which could materialize out of a director such as now exists, or perhaps more intriguingly, a set of robot pieces that could confer with each other and decide how to proceed. In all probability a prime piece like a king or a queen might have an extensive library of proper possible moves with less powerful motivations fed back by individual pieces. It seems to me likely that something like this has already been done virtually in computer opponents rather than with actual miniature robots. But there still remains the mystery as to how the slower moving mental machinery of the best chess masters can even compete reasonably with the lightning reflexes of the latest computer opponents. Something else is in operation and that may be where the key to understanding lies.
I am not aware of what makes up the silicon equivalent of a living neuron but it seems to me that what is taken for a simple sense/reaction system from dendrite to axon may be missing something vital. There are single living cells such as the amoeba, the paramecium, the stentor, the euglena etc. that live independently and maintain themselves through predatorial behavior. There may be reaction residuals from this inheritance in nerve cells that incorporate more than merely simple response mechanisms. The formation of associative memories of acquired abstract patterns that actively relate in some sort of strategic manner may embed unrealized capabilities. The individual cells themselves are not simple structures, and to dismiss and discount the quality of being alive could well miss key factors in the nature of intelligence and how co-operative cell communities inter-relate and culminate in the massive complexity of the animal nervous system.
What a paradox you are, so verbose and yet so naive, or is there a language translation problem?
All chess games are traversals of the same fixed tree of possibilities; clearly you don’t know much about chess, computer programs, or brains either. The human brain is slow but employs a massive level of parallelism. There are on the order of 300,000,000,000 computational units in the human brain, and even then most humans are not good at chess. Even a $20 computer can win against most humans, therefore there is nothing special about the human brain in general that can be proven by discussing chess. The game of Go, which is even harder for computers to compete at, is well suited to the sort of computations that a brain, a visual cortex in particular, has evolved to be good at.
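To make “traversal of a fixed tree of possibilities” concrete, here is a bare-bones game-tree search. Chess is far too large to enumerate in a comment, so a toy game of Nim (take 1–3 stones, taking the last stone wins) stands in for it; the principle – walk the fixed tree of legal moves and pick the best line – is the same one a chess engine applies at enormous scale.

```python
# Exhaustive game-tree search on Nim as a stand-in for chess.
# best_value returns +1 if the maximizing player wins with perfect
# play from this position, -1 otherwise.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_value(stones, maximizing):
    if stones == 0:
        # the previous player took the last stone, so the side to move lost
        return -1 if maximizing else 1
    values = [best_value(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(values) if maximizing else min(values)

print(best_value(4, True))   # -1: four stones is a lost position
print(best_value(5, True))   # +1: five stones is a won position
```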
However a quantum computer with 300,000,000,000 qubits would derive the same insights as a human brain can, only it would be a billion times faster and would have, in less than ten seconds, as much mental flow, experience, or thought as a human has in their entire lifetime. Except unlike a human it would also have access to every single piece of recorded knowledge. In a century it would have done as much thinking as every human that has ever lived, except without the downtime of sleep or the banal repetition that makes up much of the turgid miasma of human existence.
Correction, only 300,000,000 units not 300,000,000,000.
It’s good to have that all cleared up. Evidently not much mystery remains.
What makes you think that a computer with 300,000,000 qubits of memory/logic elements would be faster than a human brain? With that many elements comes a great deal of overhead, in the form of interconnecting circuitry, power distribution, and heat dissipation limits. These may be the very factors that limit the apparent speed of animal brains. It’s possible that evolution has already tried the silicon approach, and organic machines proved superior.
My profound ignorance on computers has been thoroughly assessed but it does strike me that the ideal environment for a computer sensitive to overheating might be the permanent shadow areas of the Moon or, perhaps an outer miniplanet such as Pluto. The Moon might be better as an environment to produce solar energy and, as civilization seems to be making this planet rather threatening to good behavior, it should be safer as an information site.
Most of the brain exists to maintain itself; it is made from cells, and the bulk of a cell’s functions are metabolic. A computing artefact does not have this characteristic. Consequently the operations per second per computational unit are very low in brains when compared to devices made from fixed structures that do not have the same overhead. There are advantages to being “dead”. For one, there are limits to how large a brain can be before the different parts become incoherent due to propagation delays. A device with photonic interconnects is far less constrained by this phenomenon.
I have heard that Go is a bit rough on computers and to an extent you seem to concur that visual patterns may be a problem for computers. My son was quite good at Go and I understand to a minimum degree where the problem might lie. But pattern recognition and manipulation, as I have indicated previously, seems to me the basis of the thinking process in general, at least in my personal case, since I have some experience in the graphic arts and the underlying processes involved in the Rorschach images. This leads me to wonder if a quantum computer might find it not worthwhile to involve itself with human interests since most of them could be thoroughly surveyed in a moment or two. Perhaps, once it became familiar with its surroundings, it might decide to construct itself into mobility and take off for more mature civilizations in the local galaxy.
We can assume nothing about the psychology of a super intelligence, to do so would be as ridiculous as a Christian who claims to “know god” when logically only a god could have a mind great enough to contemplate “godness”.
I have no argument against that, only make comparisons with known organisms which may or may not be valid. I imagine a human of reasonable intellect might become rather bored attempting to spend a good deal of time settling political disputes between protozoa or even the much more intellectual ant colonies. In his novel “Methuselah’s Children” Heinlein tried to portray what a human encounter with a superior intellect might be like. It was not pleasant.
If you care about the science in science fiction most of it ends up looking foolish. But in that genre the “science” is often just the setting, and not the story. For one, if a civilisation can travel between stars they would have the technology to not need planets such as Earth, and therefore would not even bother to interact with the occasional planet that had indigenous life. That point alone invalidates most science fiction stories. The only way to know science is to read a lot of actual science, not stories. You would have to come up with a reason why they would bother, and as I pointed out that can’t be done, as we can’t assume anything about their motives or interests etc. The idea of the Singularity applies universally: there is either no future for humans, or a future that is unknowable once we create a true AGI that can design its replacement.
Although I had something of a scientific education and was trained in WWII as an Air Force radar technician, my mental equipment was not up to real depth in the field. The best early science fiction in magazines like Astounding Science Fiction under editor John Campbell, in the late 1930s and through the next decades, was written in many cases by engineers and scientists and explored the theoretical scientific ideas of the era. The Scientific American is a reasonable source for the non-scientist.
I purchased one of the early Apple IIs for my quadriplegic son back in 1980 and he became quite adept with it but, although a computer is a handy gadget for me in research and graphics and writing, I make no pretense about programming or proper comprehension of its principles.
You are quite correct in criticizing my naivety in speculation but it passes the time and I do get a bit wiser through the discussions. It’s a hell of a lot more amusing than what goes for entertainment on the media.
Ah, yes that is so very true. You are never too old to ask questions; sometimes people forget that.
An interesting item at http://www.sciencedaily.com/releases/2015/12/151207081815.htm indicates a meld of standard computer components and living systems might, with genetic engineering, move humanity along into combination system of superior intellect so that humanity or what might be seen as humanity, could wed itself to future technology.
I see Google and I as the beginnings of such a hybrid, compared to myself 20 years ago. Thanks for the interesting URL, here is one for you, http://www.kurzweilai.net/
Thanks for the reference. The interactions at a single synapse are interesting, but you are no doubt aware that what is accepted as a memory must be a massive complex of intricate inter-reactions of major brain functions involving huge numbers of synaptic connections. When I remember something I see images in color and hear sounds and have sensations of touch and taste and odor. There are connective emotions evoked both in memory and in reaction to that particular memory. It is not a simple thing involving just a few synapses but a total brain involvement.
As I have entered an older age my problem seems to me to be not the loss of any particular memory but the loss of quick retrieval when the occasion demands it. In learning a foreign language one must not need to search for a proper word but it must pop into proper place as conversation demands it. I frequently am at a loss for that word until the need is well past and then it magically appears. It is the retrieval system that somehow ceases to function properly.
I think that you will find that my original comment in this thread regarding computational units AND their connections covers anything correct that you can say about how the brain functions. Do you recall the specific word I later used, the connectome? https://www.ted.com/talks/sebastian_seung
I’ve watched the video and cannot disagree with the general theoretical concepts but obviously the brain is not sufficiently static to nail down some sort of fixed complexity. No doubt there must be an intricate pattern for sustaining a functional pattern for rational life but it seems also necessary that every dynamic moment of existence within this overall structure entwines huge multiple variations so that not only is every brain unique but also every moment of every brain is unique. There is no constant “I” of a central self although there probably is an envelope of limits on possible variability. I doubt that even that limitation can provide something constant enough to analyze usefully. It would be comparable to defining a gas cloud by knowing every position of all the molecules within that gas, and those molecules flit about at such high velocities and bounce around in so many directions that the task is meaningless.
Ah, lay off the spirituality for a bit and take a look at this science, http://news.berkeley.edu/2011/09/22/brain-movies/ and that is from 4 years ago. You aren’t as special as you think and your secrets are not as safe as you imagine either.
You seem to subscribe in part to a +50 year old “hippie” view of how the mind and brain work; it is time to let all that go because there is a mountain of science that contradicts it. The “hippie view” was derived from soft psychology that was not formulated with scientific rigour; in fact it was shown recently that +30% of psychological studies were not reproducible and the entire field of study is a house of cards built on beliefs rather than facts. Sadly, despite all the medical science that does explain humans and their behaviour we still see public policy being skewed by older, less scientific, opinions. It is as if old school psychology has become (or always was) a dogmatic religion and it is holding back humanity as much as any other religion does.
[Dan], understanding the functioning of individual neurons, and being able to map all of the connections between them in a brain does not imply that you understand how a brain works. With all of the brain mapping that’s been done, nobody is able to scan your brain and say what you are thinking. And yet you think you have the answers. This is foolishness. I can show you a photomicrograph of a 64-bit processor, showing every transistor and every interconnection. Do you think you could look at that and tell me how the processor works? I doubt that anybody who hasn’t seen the higher-order block diagrams could. Sure, you could say that one particular area is the microcode ROM, since that’s a regular array, but beyond that, hopeless. And a microprocessor is a human design with a well-known evolution, many orders of magnitude less complex than a brain. Now you want to jump from a picture of brain connections to a claim that you understand intelligence. Okay, whatever.
It is time for you to wake up and learn some basic science and engineering concepts; being able to predict the behaviour of any system is proof that you understand it well enough to apply the knowledge usefully.
Brain scanning is already at the level of telling if you are telling lies about what you know. “Have you seen this [whatever]?”…”No.” … “BZZZZT false statement detected”. That was a few years back when they were already 70% accurate. Meanwhile the computers have become four times more powerful, for the same price. Interpolate that set of facts.
We function very well in a universe that we are yet to fully explain at a fundamental level because you do not need to know everything to know something useful.
I could decode any chip, it is just a matter of time and money, in fact amateurs have done so with tiny budgets, on older designs, the only difference is the scale of the task. i.e. Your logic is crap, so crap it isn’t even logic, it is just a useless opinion.
you were right earlier – my trolls are SO epsilon.
Your disinformation became so ineffective.
You can’t even tell the difference between “trolling as an art” and simple “forum pollution”. Get over yourself.
[Sebastian Seung] SAYS in his TED talk that study of the connectome is NOT established and accepted science. He says that we do not know if this hypothesis is true because we don’t have the technologies to test it.
BrightBlueJim, did you check the date? That was 4 long years ago, much has happened since, and the man is very humble and understates things too. I listed the URL as the presentation is a good introduction, for the state of the art see http://www.humanconnectomeproject.org/
Interestingly the loss of connections in those with age related degenerative diseases and the associated cognitive deficiencies also supports the theory very well. My wife has a doctorate in medicine and works with the elderly once they have reached the stage where they are not able to live independently without assistance. I have had many long and insightful talks with her correlating the views and experiences from our two very different academic backgrounds. Both of us also have a lot of experience observing and training developing minds.
My views are based on what I know (facts), are very current, and are not just theory. Can you say the same?
If your defense of YOUR sources is that “oh, well, that’s old information”, then maybe you should quote more contemporary sources.
BrightBlueJim, ego so big it blocks out view of URL on screen.
let’s see if this works…
[ASCII art banner]
The ability to formulate a blurry image in no way represents the sophisticated complex of virtual experience of simulated vision, taste, smell etc. along with associated emotional complexes. Sure, it’s a bare beginning, but to infer from the crude approximation displayed that the mystery is solved indicates an eagerness from a rather odd emotional source. I cannot place that emotional reaction but I wonder why it evokes such strong aggression. To explode into demeaning accusations over a remark that expresses doubts about the significance of current results indicates some personal, tender and indefensible area has been touched – an angry response in the mode of someone’s religious beliefs being doubted.
I am fully delighted that the intricacies of an active mind have been explored and concur absolutely that there is a physiological basis for mental activity. But, unlike you, I do not feel overenthusiastic about the minor progress in an immensely difficult area. Your insertion out of nowhere that I am involved with spiritual concepts is strange.
You could just grow up and admit you were wrong due to ignorance and outdated ideas.
I am most appreciative to discover that I am in error and most interested in discovering where and how. So far, this has yet to be revealed. Please get explicit. Ass kissing is not yet in order.
Oh, so you think it is about ego do you? It is not about me, it is about your denial of what current science has demonstrated and how it contradicts your beliefs. Anyway there is no way you could kiss anyone’s ass while your head is so far up your own. Here is a mental exercise for you: imagine that you do not subscribe to anything that has not been confirmed or discovered in the last ten years; what remains of your old belief system? How far back in the past is your reality? Why is this relevant? Because scientific knowledge is growing exponentially, therefore each new body of knowledge can be as great as all that came before it, and if you are not across this current knowledge you can’t say that you know the topic well at all, because it has outgrown what you know. This is a problem that humanity faces, and where AGI can help a lot, if you can trust its advice.
I have pointed out exactly where you are in error and offered a plausible explanation for how you ended up with such a redundant world view. Some of your ideas are old, out of date and not supported by science (they never were). Your primary claim suggests that the mind is unfathomable in an empirical manner yet this is clearly proven wrong by the research that demonstrates “dream reading”, that is the world view you need to adjust, otherwise you are living in a fantasy, and an old one at that.
Your best argument against the science is childish, the old schoolboy level fallacy of “If it isn’t perfect it is completely wrong.”, and you have the cheek to complain about other people insulting you! You insult our intelligence with the flawed logic you inflict on us.
If asked to paint a human face, a portrait of an imaginary person, one system can now do this more competently than the majority of humans. So do you still attribute some sort of mysterious creativity to the human mind? If it was more than a teachable skill based on rules we would not need art and design schools.
Please. Where did I claim the mind is unfathomable? I indicated that there is still a huge distance to go and a microscopic distance has been covered.
To get specific, what exactly did that blurry fuzz obtained by the best people in the field imply? It indicated some correlation of what existed in the brain can be made to the known input. It is well known that optical material passes through several stages before it reaches the occipital lobe. I wonder how and why. I doubt it turns sharp images into blurs. It must modify incoming data in a most interesting way to reveal a good deal about thinking processes. So what were those modifications? That would indicate progress, not that a crappy reproduction of recognizable material reached the brain interior. My own brain has very sharp images, some that are only from memory and relate to incoming data. If the scientists had discovered, not a vague reproduction of input that is barely recognizable but a flock of sharp images aroused by the input from the visual centers that demonstrated how the image input was related to the mind of the subject I’d say that was real progress. The brain is obviously not a television set attached to a camera.
But so far that is all that has been demonstrated.
If your mental images are so sharp, why are you not a skilled photo realist painter that can work from memory alone? Even if you were, why isn’t everyone else? Why does a painter need to learn how to see before they can paint what is in front of their eyes rather than symbolic representations?
Because they are not images in your mind; they are patterns of recognition. You do not remember images; your brain is a symbolic compression engine that gets more efficient through use, or training. You recognise whether the input, or parts of it, evokes the same patterns and then associate that new input with previous recognition events. A hierarchy of patterns of patterns. Go and study compression algorithms; you should be able to then appreciate what I have pointed out.
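As a loose analogy (and only an analogy, not a model of memory), an LZ78-style dictionary compressor also builds “patterns of patterns”: every new dictionary entry is a previously seen pattern plus one more symbol, so later entries are defined in terms of earlier ones.

```python
# LZ78-style parse: each dictionary entry extends an earlier entry by
# one character, so the dictionary is literally patterns built from
# previously recognised patterns. Illustration only.
def lz78_parse(text):
    dictionary = {"": 0}              # phrase -> index
    output = []                       # (index of longest known prefix, next char)
    phrase = ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch              # keep extending a known pattern
        else:
            output.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:
        output.append((dictionary[phrase], ""))
    return output, dictionary

codes, table = lz78_parse("abababababa")
print(codes)    # later pairs point back at earlier dictionary entries
print(table)    # every entry is an earlier entry plus one character
```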
Why you persist in claiming that the quantisation (“blurring”) of the results proves that the researchers have not succeeded baffles me; that is an idiotic assertion, completely irrational. If I ask you what the number Pi is, will you expect me to say that you are completely wrong if you answer 3.14 or even 22/7? The fact that the brain-reading system worked at all, including on subjects who were DREAMING, indicates that the researchers have indeed uncovered the encoding systems used in visual cognition, and given that all the other sensory inputs are equivalent once they reach the cortex, it can be reasonably suggested that the entire mental experience can also be decoded AND that the improvements in accuracy will follow the exponential growth path of the enabling technologies.
I answered that. If you do not understand my answers I will be pleased to elaborate.
Incidentally, as a graduate of two well recognized art schools I am well acquainted with the fact that art schools do not confer creativity on their students. They accept creative students and introduce thinking and technical skills on how to apply their creativity.
It is not particularly creative to make images. Any camera can do that and the most elementary manipulations with Photoshop can make interesting variations. Creativity has something to do with producing abstract patterns that can be successfully applied to very diverse and unexpected areas. That is not its totality but it involves that.
Go back and read what I pointed out about computers learning to paint portraits of imaginary people. As for suggesting that the abstract is somehow more creative, that is also nonsense; the difference between art and design is the nature and the intent of the message, the ideas. An idea is creative if it is firstly new, and generally useful at least to the extent that it is comprehensible. However this is a digression from the point that if your mind recorded sensory data in a literal rather than a symbolic way then we could expect all people to be naturally good at reproducing their memories. The images in your mind are not vivid, because there are no images in your mind. As a person who has a higher than normal level of eidetic memory I found this hard to believe, until I learned more about how the brain works.
You don’t seem to grasp the nature of an image. All images are abstractions. The only thing that is not an abstraction is the thing itself. My mind deals in images. Every time I close my eyes I see changing images. No doubt the mind retains them in a form different from how I perceive them but I am not aware of synapses nor synapse patterns. Only images that I can mentally manipulate. I am not saying your perceptive system is like mine. I have no idea what you perceive. Only the way I see it so that I can find it useful.
Do you mean I do not subscribe exactly to your conveniently idiosyncratic definition of an image? There is no plane of inner vision; there is nothing to project your “images” onto, no matter how abstract they are. When you feed concepts through your brain as if they were sensory input, they trigger recognisers, and this gives you an impression that relates to patterns that you received from your visual system. So I would call your “mental images” symbolically induced impressions, not images. The pattern of activation on your retina (which is part of your brain) is a recorded image. Thinking is a feedback loop and our sensory modes are equivalent, otherwise synaesthesia and brain plasticity after the loss of sight or hearing would not be possible. You can hear with your visual cortex, because by the time the input gets to the cortex it is symbolic and the recognisers can work with it regardless of its origins. Interestingly this suggests that we could acquire, from brain implants, totally new forms of sensory perception that had nothing to do with the natural world, and subsequently new modes of cognition. In a way learning to work with abstract mathematics is a form of this, not exactly but in the same direction. This raises another interesting question: if we can acquire new forms of unnatural perception and cognition, what if we lost all our natural forms and only had these new modes, would we still be conscious and thinking? I say yes we would, and therefore that a machine that started with only synthetic modes of cognition would also be conscious and able to think.
On a side note, and I really do want an answer to this question, “If you look at a point on the horizon, what can you see beyond the edges of your vision?” And yes it is a test, but you can’t ask of what, you just have to think about it carefully and answer honestly.
Directly viewed scenes are also ‘impressions’; your eye does a lot of pre-processing before the signal reaches the actual brain per se.
FFFFFFFFFFFFFFFFF! Read the bit about the retina, what do you think that means? Yes, there are no images in your brain, only symbols that belong to a form of compression system and your memory can be treated as a set of virtual sensory organs for that reason.
Thanks for taking my working system seriously. Perhaps the visual cortex does process hearing, or perhaps it is intimately connected to the other perceptual centers and joins them in informing my consciousness of a totality. I am only very vaguely acquainted with the dynamics of perception and it seems to me that biological economy would indicate a symphonic coordination of all perceptual centers in brain activity. Other creatures such as birds seem to be aware of the magnetic configurations of the planet for navigation, and bats and dolphins are capable of sensing internal structures with their ultrasonic perceptive capabilities. It would be critical under current problems if we could sense radioactive emanations. I have read that blind people can learn to “see” with their tongues when a pin array in their mouths simulates sight configurations.
I have many times tried to comprehend how a four or five dimensional world might be perceived with little if any success. Perhaps a computer might be able to translate that into my limited sensory apparatus.
Sorry Dan, but you said “The pattern of activation on your retina (which is part of your brain)” and it’s actually not part of your brain, but your making it so induced me to point that out. Because you left the impression you were not considering the pre-processing in the eye, which is separate from the brain as I said; the cells doing a lot of the processing are in the eye itself.
So you see, I did read your statement, that’s the thing.
And you are making a point of making precise distinctions, so if you start nitpicking you can’t complain about it, and it becomes confusing if you only do it partially in a discussion like this.
You do make interesting thoughtful comments though, so don’t stop because of me being annoying :)
“it’s actually not part of your brain” Well that is your opinion but it isn’t what I have heard in terms of neuroscience, where it has been pointed out to me that it is considered part of the brain. Not the entire eye, just the parts from the cones and rods down to the optic nerve, so meh, why should I care about your opinion? It is not as if it matters.
If that’s true you should talk to the people who told you that, because it’s not considered part of the brain in any reference I’ve come across.
And by the same token: why should anybody care about what you say or think?
I’m a bit disappointed by your sudden childishness (for lack of a better word) after first writing some more intellectual post I have to say.
Maybe you need to take a cup of coffee or something and get back on your virtual feet and we can forget about the whole incident.
So some academics in one place don’t agree with the academics that wrote the books you claim to have read, perhaps… I still put more value on the people I’ve worked with. :-)
[Whatnot], you don’t seem to understand that [Dan] is simply a troll. He spouts off whatever he thinks sounds intelligent (most likely not his own thoughts), and if you question him, he just fires insults back. His words are not backed up by the links he includes, so he clearly doesn’t expect us to read them – in many cases I don’t think he’s read them himself.
But what am _I_ doing here – I’m feeding him just the same.
Pfft one does not simply troll.
Well, nobody really knows who we each are. If Dan is a troll living under a bridge, I may be an octopus playing with a computer in a sunken submarine.
@BrightBlueJim so you think it’s one entity?
Anyway, I guess I ought to look at names more, but I don’t want to get that involved and make some internal db of commenters and then prejudge the comments.
Plus I have seen several people point out (and I believed them) that somebody used their name to make a silly or unpleasant reply to someone, sometimes on old topics few people still visit. So it doesn’t pay to catalog commenters by name for that reason too.
So far, all comments under my name are mine. I do my best not to be nasty unless it becomes useful. Flaming is not useful. There are times within this conversation when the object seemed to move away from submitting information towards some sort of ego play, which is not interesting. So I try to stay away from that.
[Whatnot] – Yes, I think that [Dan]’s replies are consistent. He is playing the part of an artificial intelligence who has deigned to bring himself down to the level of us meat-monkeys, for the purpose of proving the possibility of his own existence. One of the problems with this is that he is attempting to play a character smarter than himself, so whenever a flaw in his logic is pointed out, he must attack and reduce the logic of others to “opinions”, and make other personal attacks. This is classic troll behavior. The goal of any troll is to get the people they are leading on to make a higher investment (generally emotional, but also intellectual) in the conversation than they themselves are. This is why you see “go read this”, or “go watch that” in addition to personal attacks on people’s education and intelligence.
Like you, I initially wasn’t paying that much attention to who was saying what. In fact, opposite from you, the only time I was looking at names was when I was thinking, “what ass wrote THAT?”. So it took me a while to understand what he was saying/doing. But going back, I can see very consistent behavior. The “childish ass” Dan is the same person as the “impersonal lecturer” Dan. In fact, the two sides overlap in some of his posts.
While the video on visual reconstruction was interesting (thanks for that, [Dan]), it in no way demonstrates his claims. All that video demonstrates is that in the lab, it is possible to undo some of the pre-processing of visual data that happens between the retina and the cerebral cortex. Very fascinating, but nothing to do with how we “see” images in our minds.
The website for the Human Connectome Project was far less insightful. All it’s about is developing the technology to map human brains to the neuron interconnection level. There are no claims of any kind at this point about how this is going to advance our understanding of the mind, but we weren’t actually supposed to go READ what’s on the website. We were just supposed to go look at the front page and say, “oh, gee, I guess people really ARE figuring out how the mind works”.
When [Dan] proposed to [Jan Sand] a test, he was really saying, “I’m getting bored with this – it’s taking more energy than it’s worth, because I’m not getting enough people to go apoplectic”. No matter what [Jan Sand]’s answer was, he can claim that it was a fail. This explains the “I won’t tell you what the test is about” crap. I’m not sure how he proposes to tell “the others” how [Jan Sand] failed without also telling [Jan Sand], but I’m sure it will be some more pointless argument. Perhaps [Jan Sand] was supposed to say “I see the extrapolation of those things that lead to the edge of my vision”. Which is crap, because if [Jan Sand] had said that, then the “right” answer would have been “I see nothing past the edges of my vision”.
[Jan Sand] – nobody is questioning your consistency. Indeed, you have remained objective throughout the discussion, and have brought a great deal of insight to a discussion that was meant to be little more than inflammatory. You have remained a gentle person throughout, and I think coaxed some actual thought out of [Dan], which may be why he has grown weary of it. I appreciate both your thoughts and your manner.
I’m not sure how insightful my comments were because the only real source of my data was myself. To a reasonable extent I am aware of my mental functioning and also aware of many things going on in myself not in my awareness. The statement Dan made which I found most disturbing was that he felt in total control of his thinking. That struck me as pure hubris and has led me to believe he is quite unaware of rather important factors. I can only take the test as a sort of practical joke to somehow unnerve me, a superiority ploy and of no real concern.
There is no doubt that some progress has been made unraveling a few mysteries of the mind but the fact that the Japanese have been working intensely to create a robot to do rather simple things in the Fukushima catastrophe and failing as of now indicates we are barely starting in this adventure. There seems no doubt that the information network now throughout the world has sped up considerably all scientific progress. But, oddly, its blanketing of civilization has produced many threatening negative aspects. Humanity seems endlessly clever in innovating ways to destroy itself.
Your test is not easily done in a city environment (aside from the limitation that I cannot see around corners) and where I live the forest in a nearby park leaves no horizon to see.
A fixed point on a wall will suffice, but focus on where the horizon would be, the far distance. Then ask yourself that question, “What can you see beyond the edges of your vision?”.
Waiting for your answer….
There is nothing beyond the edges of my vision although I have a horizontal span of a bit over 180 degrees. Vertically it’s about 60 degrees.
Fail.
Curious answer. Of course my other senses feed me indications of more than what I see but you cannot tell me what I see.
It isn’t negotiable, that was a fail.
Since dan’s account seems to have been taken over by his young son or an imposter or something I advise to wait a bit with continuing the conversation.
Fail = end of the dialogue. It is not negotiable.
Aside from his highly spiced emotional responses, Dan seems an interesting source to encounter. I am exceptionally interested in his indication that I can see around corners. This would be most useful when giving myself a haircut instead of the double mirror arrangement I now find necessary.
I am glad you posted that, it will make it much clearer to other people why you failed. You can bitch all you like, it wont change the fact that you did fail. Only contemplating the question longer may change that.
Get back to me when you have a deeper insight into the question.
Well, if nothing else, it seems you have a rather interesting sense of humor. Get back to me when you can give me a serious answer.
I must hand it to Dan. He was right on the mark when he indicated I had to plumb deeper thoughts to respond to the significance of his vision test.
The test oI bviously was to discover the width of my visual perception for nothing else was under examination and the width of visual perceptions is an indicator of what type of creature I may be. There are many different classifications for living things but one special one divides animals into two types – predator or prey. Prey creatures have eyes on the sides of their heads so that they can protect themselves from becoming victims with the widest vision. Predator creatures like owls, and, perhaps, flounders, and humans have eyes looking mainly in one direction giving them vision in depth and better ability to track and seize their prey.
Since my vision is horizontally limited to a mere 180 degrees, I obviously can only be a predator. I could be an owl or a flounder but my failed early attempts inspired by Peter Pan to fly and my lack of ability to breathe water raise the probability that I might be human. My distastes for beer and both football and basketball provide some difficulty in that probability but it still remains open to some small degree. Current politics also furthers the doubts. It remains a distant possibility.
But the judgment by Dan that I have failed rather definitely indicates his delight in being prey. His lack of rational arguments also points in that direction. He could be a rodent or perhaps a member of the arthropod sector. Archie of the Archie and Mehitable saga was a rather articulate cockroach but he could only write in lower case since the caps lock key on a typewriter was too much for him to master but these days a computer keyboard would no doubt increase his expertise. I must tentatively assume Dan is a cockroach but that supposition remains open.
The third line indicates my inherent skill with typos. The word should be “obviously”. And the cat’s name is Mehitabel. Like Atlantis, the wonders of the world keep vanishing, such things as the works of Don Marquis and Walt Kelly.
Hi [Jan Sand], I don’t quite follow your logic in placing [Dan] in the category of ‘prey’. I would suggest two other possibilities: 1) there are more categories than just ‘predator’ and ‘prey’. ‘Parasite’ comes to mind. And 2) it seems that what he delights in is seeing you and others as prey.
I don’t think that an affinity for team sports in general or football and basketball in particular is a qualifier for the species – it may just be a cultural thing. I would think that there would be a class distinction, but this doesn’t appear to be the case. But I’ve known plenty of people who I have no other reason to disqualify as ‘human’ who find no attraction in football or basketball.
As with Darwin’s evolution and the teleological theories of religions, one must accept propositions on the basis of utility. Science, every other decade or so these latter days, undergoes paradigm changes. What I have proposed is merely a current tentative concept. Dan could be anything from a rough attempt at artificial intelligence to a bat mutation living in a basement with a handy computer, or even a cumulonimbus thundercloud with a tricky control of a communication satellite. The possibility spectrum is rather wide but it seems to me it is unlikely to be human. Like anyone who has been at the mercy of the unexpected, I concede I could be wrong.
As for myself, as with any curious sentient creature wondering exactly where I fit into the mass of protein experiments now ongoing on this planet, I find myself rather emotionally congruent to life other than human. My short existence of merely ninety years has not given me time or experience as to where to place myself. Humanity has gone so far off the rails that my being human, in spite of the camouflage of my adoption of human physiology, is neither attractive nor likely. If nothing else, it explains my recent lack of success with beautiful girls.
I think you have supplied enough information to rule out your being a dog, at least – dogs (regardless of their age or appearance) seem to be very successful with beautiful girls.
One of the areas commented on here was an indication as to what was or was not a part of the brain. Since the nervous system extends throughout the entire body, the linguistic assumption that the brain exists only within the skull might be questioned. Nerves penetrate all areas of the body, so it seems to me more sensible to view the brain as present throughout the entire system complex.
Dogs are privileged to pee on lamp posts and hydrants. Similar relief of pressing needs by me would no doubt be a signal to legal authorities for repressive response. So I agree, I am not a dog.
“This simple thought experiment shows that a computer can never be considered intelligent, as it can never understand what it’s doing.” – This is incorrect, the experiment by Searle only reveals the inadequacy of Turing’s logic for intelligence, it does not define the limitations of the computer.
If prediction is how one chooses to define intelligence, how then was Deep Blue not fully intelligent, considering it was always predicting every move in advance?
I’d argue that at the end of the day, intelligence is a man made concept, and what truly makes life is the balance and the play between nature and nurture.