What We Are Doing Wrong. The Robot That’s Not in Our Pocket

I’m not saying that the magic pocket oracle we all carry around isn’t great, but I think there is a philosophical disconnect between what it is and what it could be for us. Right now our technology is still trying to improve every tool except the one we use the most: our brain.

At first this seems like a preposterous claim. Doesn’t Google Maps let me navigate completely foreign locations with ease? Doesn’t Evernote let me off-load complicated knowledge into a magic box somewhere and recall it with photographic precision whenever I need to? Well, yes, they do, but they do it wrong. What about food-ordering apps? Siri? What about all of these? Don’t they dramatically extend my ability? They do, but they do it inefficiently, and they will always do it inefficiently unless there is a philosophical change in how we design our tools.

Maps let us express the world in ways our brain could not.

When we began augmenting our natural ability we started with simple things: levers, fire, chalk and slate. The real change came when we realized that writing things down and drawing pictures could store memories with exacting precision, and we began to invent empiricism. This let us draw maps that recalled the world better than we could. This let us keep notes and agendas that instantly outclassed our natural ability. Most importantly, these tools let us compress hours of human labor into something that could be digested quickly and in parts. A letter might take a day to draft, but it could get its point across in moments.

However, there’s a common thread in each of these tools: the intelligence was always the human’s. A tool extended one aspect of a person’s ability, but not the person. Either that, or it facilitated communication with another person. It raised the amount of information that could be transmitted to another mind at once, but how efficient that transfer was still depended on how well the person communicated. To pick a forced metaphor, it’s like adding RAM and keeping the same CPU.

We Need a Band-Aid-Rip of Change

Now, the change I’m hoping for has begun to happen in the tech world, but it’s so tacked on to existing applications that it appears to be a sort of happy accident rather than an intentional design choice. Our tools need a brain of their own, and they need to work in tandem with us. Every technology out there is ready to be pushed completely out of its market, because each is just an optimization on pen and paper. They are not optimizations on us.

Let’s take Google Maps as an example. Right now Maps is like a navigation coprocessor. It doesn’t do anything we can’t do, and it doesn’t do anything unless we tell it to. We could carry an atlas or get a tourist map and plan our route manually. Maps just delivers the map to you without you needing to keep every possible map in your car.

We could learn to use a compass and orient ourselves. This is provably within any moderately intelligent person’s ability. Maps isn’t doing anything special. However, this approach can only get so efficient. One day Maps will navigate instantly and know our location to the millimeter, but it still won’t work with us. It doesn’t have a brain of its own.

Without basic intelligence on top of its algorithms… well… this is an actual place Google Maps has taken me. Nice road.

So what would be an example of Maps working with us? How could an intelligence help the application interface with our mind in such a way that it actually begins to increase our ability to think and operate in this world? Let’s take a walk across a city at night.

You’ve been out with your buddies and now it is that time of the night when diets don’t matter and fried chicken is the highest culinary height man has ever aspired to. So you reach for your phone. You open the application that replaced the yellow pages and find that there’s a chicken place not six blocks away. You feed that information into your maps application and it happily gives you a route. You begin to walk.

It’s not two blocks before you notice what is definitely the unsanitary end of a drug habit in the gutter. This is not the place to be. This is not the path to take. If you knew the city well this would not be an issue; you’d know how to get around it. However, you don’t. So what do you do? Well, you point the little blue dot on the screen a different way and get walking. Eventually the application gets upset with you and plans another path.

It’s a matter of understanding the world the same way you do. If Maps had some intelligence you could tell it: “Hey Maps, this path looks scary.” “Maps, this path looks confusing, I don’t think it will work.” “Maps, I’m drunk and this is an unreasonable amount of stairs.” “Maps, I’d like a hotel, and I want to pay eight dollars for a meal along the way.” It could begin to take over and extend the thinking a person has to do about a situation.

In other words our applications have begun to take control of sections of our intelligence without matching the shape of it at all. It’s up to us to hammer the square peg in the round hole and it’s frustrating. It’s inefficient.
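The kind of cooperation described above can be sketched as a routing problem where human feedback re-weights the map instead of being fought. A minimal Python sketch of the idea, with street names, walking times, and the penalty factor all invented for illustration:

```python
import heapq

# Toy street grid: edge weights are walking minutes. All names are invented.
graph = {
    "bar":        {"alley": 2, "mainstreet": 4},
    "alley":      {"chicken": 2},
    "mainstreet": {"chicken": 3},
    "chicken":    {},
}

def shortest_path(graph, start, goal, penalties=None):
    """Dijkstra-style search, inflating any edge the user has flagged."""
    penalties = penalties or {}
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            factor = penalties.get((node, nxt), 1)  # "this path looks scary"
            heapq.heappush(queue, (cost + w * factor, nxt, path + [nxt]))
    return float("inf"), []

# Plain routing prefers the shortcut through the alley.
print(shortest_path(graph, "bar", "chicken"))           # (4, ['bar', 'alley', 'chicken'])

# The user says "this path looks scary"; the planner re-weights instead of arguing.
scared = {("bar", "alley"): 10}
print(shortest_path(graph, "bar", "chicken", scared))   # (7, ['bar', 'mainstreet', 'chicken'])
```

The point isn’t the algorithm, which is ordinary; it’s that the human’s judgment enters as data the tool actually uses, rather than as a blue dot wandering off the route.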

Have You Really Thought About the Enterprise?

The first example is one thing, but imagine that hundreds of people are saying, “Maps, this is an unreasonable amount of stairs.” If Maps can think for itself, if it can begin to understand the world it is seeing, then it can make decisions, it can tell you things: “Hey, most people avoid this route between 9:00 and 12:00 at night.”
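Aggregating that crowd feedback is simple once the reports exist. A toy sketch of the “most people avoid this route at night” idea, with entirely made-up report data and segment names:

```python
from collections import defaultdict

# Hypothetical crowd reports: (segment, hour_of_day, did_they_detour). All invented.
reports = [
    ("stair_alley", 22, True), ("stair_alley", 23, True),
    ("stair_alley", 22, True), ("stair_alley", 10, False),
    ("main_street", 22, False),
]

def avoidance_by_segment(reports, night=range(21, 24)):
    """Fraction of night-time reports where people detoured around a segment."""
    tally = defaultdict(lambda: [0, 0])          # segment -> [avoided, total]
    for seg, hour, avoided in reports:
        if hour in night:
            tally[seg][1] += 1
            tally[seg][0] += int(avoided)
    return {seg: avoided / total for seg, (avoided, total) in tally.items()}

rates = avoidance_by_segment(reports)
for seg, rate in rates.items():
    if rate > 0.5:
        print(f"Hey, most people avoid {seg} between 21:00 and midnight.")
```

In this sketch `stair_alley` ends up with a 100% night-time avoidance rate, so the app would warn you off it, while `main_street` stays quiet.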

Now some of you are likely rising up to claim that Maps already does a lot of this, that it’s exactly what people have been working towards. But it’s not. It doesn’t fit the shape of our minds. It gives us information, but it doesn’t do any of the work for us. To some, this sounds right: “I don’t want a machine thinking for me. I want my brain to do all the work.” If that’s you, I urge you to think of Star Trek.

A being or a spaceship?

In my mind there are two ways to look at the Starship Enterprise. Is it a crew of humans exploring the universe who just happen to have a superintelligence along for the ride, or is it a superintelligence exploring the universe in a symbiotic relationship with a bunch of humans? There are many instances where the ship’s computer is shown to be so powerful it can simulate, and may even be, a sentient being. So why would it keep people around? Likewise, the ship is piloted by the most talented and highly trained beings Starfleet could find; why would they need their own onboard superintelligence? Because each extends the ability of the other.

In the movie Iron Man we don’t complain when Tony Stark tells Jarvis to run a simulation or to send a design to the manufacturing robots. Yet if people alone were to try to turn Tony’s iron suit into a manufacturable design in a matter of hours, it would be impossible; it would take thousands of man-hours over years. The future is, terrifyingly, going to involve handing over sections of our minds to robots we’ve built. It’s going to leave us vulnerable to those robots being taken away.

We Must Deal with Too Much Information

Take medical data for example. There are countless variations on single diseases, and on top of that there’s research into each facet of those diseases. There’s more information than a single mind could ever categorize and process. IBM’s Watson project has been soaking in the dollars promising just this: a Star-Trek-like superintelligence that can process millions of man-hours of refined knowledge and assist a human in the decision-making process.

If you went to the doctor and were told that, based on your genetic make-up, weight, diet, and fifty years of unrelenting medical research, they could say with 90% confidence that exactly 51 mg of chemical X will cure what ails you, you’d thank them, even if a miracle of precision like that required someone giving up a portion of their mind to a machine.

It’s terrifying. The first applications that augment human ability with actual intelligence, tools that understand the way we see the world instead of just improving the tools we designed out of crude things, will have grand impacts.

I’m not saying that AI will destroy the world. I’m just sayin’ that Microsoft has 5,000 people working on it and they brought us Vista. OpenAI might be a good idea.

Waze let people tell maps what was happening in the world around them. It let them say, “Hey, this road is slow,” and “I saw a cop.” Shaper Origin adds its own intelligence to the movement of the human body. Tesla Motors’ Autopilot, when used properly, removes some of the tasks of driving, allowing the human to remain more alert and hopefully less fatigued.

These are examples of humanity beginning to understand the next step. Shaping our tools to our minds instead of our hands. It’s scary, but likely to be such a brute evolutionary advantage for those willing to take the risk that it will never be a decision at all. The question is instead, who gets to build it first? Will they build it well and responsibly? That’s where Hackers come in.

[image shown on smartphone is HAL 9000 by javierocasio]

108 thoughts on “What We Are Doing Wrong. The Robot That’s Not in Our Pocket”

  1. All technology is a stack of hacks, occasionally simplified with innovation. To do a small percentage of what you’re describing requires more data (sensors, tracking, privacy issues, etc) and several more layers of hacks or some massive innovation. To do 100% of what you’re discussing (Human level AI) involves several orders of magnitude more layers of hacks AND significant innovations AND more data.

    It’s not a simple matter.

  2. There is also a very big double-edged sword between competition and cooperation.
    Wolves and sheep, sabotaging other people’s work to get ahead yourself.
    It’s the divide-and-conquer strategy, which works all too well, unfortunately.
    I don’t follow politics much here in the Netherlands any more, but I have the idea most politicians are being influenced far too much by lobbyists. All those wolves trying to steal a bit of the big pie.
    One of the small successes was that governments here said to the industry: “We don’t care what kind of phone charger you use, as long as they are all the same”. Within a few years all phones were the same (with a single exception).
    And then you go to a shop and see 20 different Li-ion batteries for photo cameras in a row, all of roughly the same size.
    Tens of different sizes of phone batteries, all the same within a few mm, or with only the connector slightly displaced.
    2 weeks ago I bought a new coffee machine for EUR40 because a EUR0.20 thermostat had failed.
    Standardisation of a gazillion different small items would help a lot of people and reduce landfill, but it will have to be enforced by governments. Lobbyists will always oppose it.

    This world already would have been a beautiful place to live in if the sharks & wolves & pigs & dogs among us had some more opposition.

    PS. the voting war in the USA was already over. It was over when the 100 (or so) candidates “lost” and 2 were left.

    1. Addition:
      What happened to “Occupy Wall Street” and the rest of the worldwide “Occupy” groups?
      They all seem to have disappeared pretty quickly.
      Partly because they were completely ridiculed by the “news”.
      “NOS” (Nederlandse Omroep Stichting (Dutch news company)) was sold years ago to SBS, which has also been bought by some bigger foreign company.

      1. The Occupy movements tore themselves apart from the inside.
        The hate for corporate greed wasn’t enough for them to set aside their philosophical & political creeds to focus on the original common goal.

  3. This article sounds like it’s trying to criticize people for not striving for truly smart AI, when really people ARE striving for this, but the technology simply is not there yet. It’s not a “Band-aid rip of change”. We aren’t following some dead-end road. It’s all slow progress. Our computers already do far more thinking for us than we could have imagined half a century ago. Give it time.

    1. It actually reminds me of the other day when a friend, dead serious, told me he had the idea to add a generator to an electric car. “It won’t need batteries!” he exclaimed. I had to break it to him that he wasn’t some super genius who thought of this before anyone else, but that it just doesn’t work that way. AI of this order of magnitude simply isn’t convenient or even possible at this point, and probably won’t be for many years.

    2. As far as I can determine, the article is lamenting the writing of “dumb” apps, which just do what we tell them (like plot a route), but don’t make any “intelligent” decisions along the way, such as Google Maps being able to factor in that certain routes might be dangerous at night, or that a route has too many stairs and we’re drunk, or that we’d like a snack along the way for less than 8 dollars. These are all things that a human could do while planning a route, but are completely skipped over by the “dumb” route-planning algorithm used by Google Maps.

      This is hardly an OHMYGODAIISGOINGTOKILLUSALL kind of request. A Google Maps app capable of checking for inexpensive snacks near a route probably won’t kill us all, but the author seems to intend to placate responses against automating things normally done by human common sense. I.e., that we’d become incapable of making these simple judgments if apps always do them for us (the fear that if your route is automatically adjusted to avoid dangerous places at night, you might not learn to avoid them when not depending on the app, resulting in getting yourself mugged).

      As for the band-aid part, the argument seems to be that the “slow progress” being made will only serve to bring in new features as people stumble upon their utility, rather than consciously considering that these kinds of functionality might make the app far more useful. You might think of the nighttime redirection feature accidentally if you’re just thinking about a maps app that plans routes, since it’s fairly closely related to the actual task of finding a route, specifically the existing consideration of avoiding traffic at rush hour. However, if you’re actually looking for human mental considerations not reflected in just plotting the route, then you can find things like the 8-dollar-or-less snack stop just by considering what people care about that isn’t related to maps (like getting hungry when out and about).

      It’s not about the problem the AI has to solve. The problem is easy to solve. It’s about the hidden considerations which are obvious to a human, but cannot be inferred from the problem description in any way.

      As a bit of side reading, I’d recommend looking at the concept of Viv, the giant brain. It’s a successor to Siri which attempts to at least be capable of handling the extra considerations (like asking for a snack on the way) but doesn’t seem to be smart enough to automatically include them (like the consideration that it should avoid dangerous places at night). Perhaps this might be corrected in the future by looking at the aggregate data of other people to determine suggestions. This seems like wishful thinking to me, considering that real-life versions never seem to be quite useful, always suggesting things that others needed but which never quite match your situation, and real life is going to have a lot more possible situations. Perhaps someone will come up with some additional cleverness for predicting it, perhaps from current conditions using derived rules rather than purely from past cases.

        1. When I read this example I get the image of a butler or assistant in my head. The same principle applies to a real human. How can a real human know I’m hungry and might want a snack? I need to tell him that. I think these kinds of applications will become irritating really fast. There’s nothing I hate more than when I want something and the person, or computer, on the other side keeps asking questions. Questions I find irrelevant. I want to go somewhere, and if the computer keeps asking “do you want a safe route? / are you hungry? / it is raining, do you want to stay dry?” etc., I probably won’t use that application anymore. To stay with the route example, if I drive through an area I don’t like, I turn around and take another route. It’s really not that difficult.

        1. Right, people want a sidekick/copilot/PA, but what they’re going to get is a nagging nanny that’s gonna piss you off.

          Then, if Google has anything to do with it, you’re not going to trust it; you’ll think it’s always playing mind games to push you toward paid advertisers.

          Plus, if Google has anything to do with it, it will use crowdsourced data for feedback. Half the population is of less than average intelligence, and half of those left are probably not as smart as everyone here, so basically it will only do what stupid people want, like Google search now. This is why I keep calling it artificial stupidity: all efforts seem to augment and enable the stupidity of the user. You don’t have to get smarter to use it, it comes down to your level, then you get lazier and stupider. Training the thing out of default stupidity is going to be immensely frustrating and time consuming, and to do so you have to confide to Google (or whatever cloud AI provider is going to get bought up by Google, or Facebook, or whoever next year) your innermost thoughts, desires, and passing fancies for it even to take wild guesses at what you actually want.

          1. +1 mate. I think most of the people that visit HAD enjoy fixing things and making them better. We’re not looking for the things to fix us, and make us ‘better’.

  4. A few years ago my phone decided to tell me that I hadn’t walked as much this year compared to last year. Who knows why it was telling me this, but I thought that it wasn’t going far enough.
    Google Maps knew that I worked on a street that also had a gym; it also knew what time I normally set off for work (reminding me when it thought I’d be late). Why is it such a stretch to say “Hey, I’m going to wake you up 30 mins earlier so that you can get to the gym before work”?

    1. Because there’s fire in the heart that quenches their brain?! :)

      I see a childish approach to AI, asking for a personal assistant for lazy/undisciplined/drunk people?
      I think that’s a dangerous premise to expect from machines or programs.

      Take a look at how Siri and Google Ads work, within the strong boundaries of their programming,
      limited by how they want to earn their revenue from user-submitted data, and what
      will happen when they fail to take care of a drunken user?

      > Tesla Motor’s Autopilot

      Well, how about a bicycle, motorbike, bus, taxi, train or even a ferry?
      It’s just a tool from Tesla to make more money, and it’s still illegal to be drunk/drugged in a car.

      > “Hey, I’m going to wake you up 30mins earlier so that you can get to the gym before work”?

      “Hey, check out this cool new ad for a new protein drink that tastes awesome!”
      “Hey, check out this new gym that is 10 minutes closer to you, costs only 99 USD per hour!”
      “Hey, you need to renew the subscription to FCK_OFF, we demand your money, do it now!”

      > Our tools need a brain of their own and they need to work in tandem with us.

      There has been one since Epoch Time, 1 January 1970, but it’s not friendly with everyone.
      It cannae do everything, but what it does, it does well!

      1. This is one of the reasons I think we should be working on an open-source, you-own-it-yourself digital personal assistant. THIS should be the Linux of our generation. Whoever owns the AI (be it Google, Apple, FB, MS, whoever) will have disproportionate influence over the decisions we make, and I’m certain they will sell this influence to the highest bidder in a heartbeat. Can you ever rely on and trust an assistant that is not working for you, but for someone to whom you are the product?

    2. Because the AI that creates these shitposts is in dire need of improvement, and we need a Band-Aid-Rip type paradigm shift in shitpost AI, so that we can tailor our posts to be exactly what you wanted to hear, before you knew you wanted to hear it. ^_^

  5. Gerrit Coetzee – I have a brainstorm for you. You want to somehow introduce more human-like input to an artificial intelligence construct? I like your example of Google Maps and better path planning to take into account local path obstructions in real-time. The only way to do that is with active environmental input from other humans. Web cams and other electronic sensors are just too information-deficient. Human input is much more detailed and helpful to other humans. Enter Crowd Sourced Map Navigation or CrowdNav (not invented yet – maybe?).

    However, a programmer of such a program would have to limit or filter open narratives or suffer the plague of information overload and nonsense input like Reddit and YouTube. The path requester would take the coordinates of start and destination (gleaned from Google Maps Directions) and input them into CrowdNav. The user would give it a Topic or Subject header and desired date and time parameters (i.e. Trip from Brooklyn Heights NYC to Bedford-Stuyvesant NYC on foot at 8PM on August 1, 2017 – Purpose: Scenic walk with desire to eat & catch a movie along the way). A new window (as Google uses frame busters) can be hyperlinked to show the proposed route to other users.

    The other users see the topic in a list of geographically sorted topics so they can input their local knowledge of the areas. However, no textareas or text narratives! Just radio buttons, check-boxes, and drop-down menus. These real-time choices will have pre-thought-out contingencies about any area on the planet – only from the CrowdNav programming team. This cuts down on the puerile graffiti-like nonsense. It would only have pertinent and informative data about the areas in the path, displayed by coordinates and customized labels on Google Maps. An off-site blog or discussion database could be set up for discussion groups. That will allow for moderated narratives about certain path choices.

    Obviously, things like fallback sanctuaries – police stations, security guards, hotels/motels, restaurants (with ethnicity and price range), residential areas, etc. – would be helpful. Even an overall structured critique of the path would be helpful. For example: OVERALL RATING OF PATH: Are you nuts! Plan a different path to a different start and destination. REASONS: Derelicts, street thugs, potential for armed robbery, copious panhandlers, etc. Or just change your start time. Or here is a better route that puts you next to better sanctuaries. Or get an UBER ride! If a suggested route doesn’t appear on Google Maps to actually have those listed sanctuaries and seems like a route through back alleys and residential apartments, then just ignore it as suspicious and report it to the webmaster.

    I think all CrowdNav users need to be vetted so it doesn’t turn into another Craig’s List honeypot disaster. You need advice about Well Lit streets, What to look out for along the path, Any web cam links along the path?, Any recent photos from a social media like Instagram, etc. Restaurant ratings links would be good too. Maybe that part could be Yelp! automated as scammers could put in their own web links – I think.

    Just a brainstorm. NYC was probably a bad choice as an example. A better choice would be a forest hiking area, with locations of park wardens, area restaurant and hotel locales, good vantage points, bear and cougar warnings, camping sites, etc. In any case one would never post their real name or physical description. Probably not even a specific date, only a date range or days of the week. AM or PM would be helpful if you wouldn’t want to post a specific time. All bad information is crowd-reported, and a score is built up about the trustworthiness of your data. You get banned by IP address or MAC address if your score gets too low. However, an eBay-like overall score could be added to your CrowdNav posting: “User 3345667 – Score -1009 – bad score – ignore as untrustworthy” or “User 5551212 – Score 35,000 – good score – possibly trustworthy”
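    A toy Python sketch of that scoring idea (the point values and ban threshold are invented; the user IDs are the example ones above):

```python
# Toy version of the trust-score idea: reports raise or lower a contributor's
# score, and scores below a threshold get the contributor ignored.
class Contributor:
    def __init__(self, uid, score=0):
        self.uid, self.score = uid, score

    def vote(self, helpful):
        # Bad information is punished much harder than good info is rewarded.
        self.score += 10 if helpful else -50

    @property
    def trustworthy(self):
        return self.score > -100   # invented ban threshold

alice, mallory = Contributor(5551212, 35000), Contributor(3345667)
for _ in range(3):
    mallory.vote(helpful=False)    # repeated malicious reports
print(alice.trustworthy, mallory.trustworthy)  # True False
```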

    I don’t know, maybe someone could improve this cyber-kernel of an idea.

    1. You’ll get the same problem as with Wikipedia editors. Who moderates the moderators? The moderators themselves. In the end all you have to do is kiss ass and have a load of sockpuppets or followers by your side.

      So there’s no actual way of preventing false and malicious information. Someone could intentionally divert traffic out of a certain part of town to lower the property value for business, to make it into a ghetto.

        1. Dax – I agree with you; while I was typing, “Devil’s Advocate” mode was setting in and I started to think how bad people might try to exploit such a human-augmented crowd-sourced GPS application. I started to remember how bad guys use Craig’s List to lure people to honeypot locations to ambush them. It’s such a shame we Americans have such problems. This model would work in Canada, Japan, or Sweden (etc.) but I think it might be abused by clever miscreants. I know it would NOT work in Mexico or Colombia (etc.) as it would be a kidnapper’s dream!

        Somehow some vetting process, user scoring system, or A.I. system would be needed to analyze the formatted (no narratives) input to determine the probability of malicious intent. Maybe some sort of MMPI-2 (miniature version) test could be used for initial end-user account vetting. Maybe the ad-hoc entries could be integrated with Yelp!-like data. IOW, if you enter a nice restaurant along the proposed route, the A.I. looks it up to see if the physical address is real and along that proposed path. Also, if the user recommends an alternative route, the A.I. analyzes the new route and notices that it is not mercantile-heavy, mainly residential-heavy, too far from emergency services, or listed in a proposed new database by city police departments or chambers of commerce of their city’s neighborhoods to avoid at certain times of day on foot or driving. Some city websites provide links to police reports in kinda’ sorta’ real-time. An A.I. could data-mine them for keywords that match your route plans. This project seems like it would be a huge undertaking. Maybe a good business model for an Internet startup?

        What would be great is if proposed routes could have a quadcopter drone (paid for by the end-user – small fee) do a brief flyover of the route at the proposed time of day, ahead of time. That could be added to your route project. It could just fly over the critical parts, like downtown city areas. The A.I. couldn’t analyze the video data, but you could at least see what’s up, like a military recon op. ABC News now uses them, and one recently helped save some people in a flood area. I have a Hackaday.io project about this subcontracting of your quadcopter drones (https://hackaday.io/project/11586-proj-quadcopter-what-to-do-with-it-after-the-fun)

    2. Crowdsourcing doesn’t work for creating accurate, actionable intelligence. Reasons – bad information is much worse than no information at all, and a few people are sociopathic bastards.

      Don’t like the fact that Intelligent-Maps-9000 is detouring people away from your cafe, and towards the sandwich shop a block away? Don’t bother fixing your deficiencies, just tell IM9000 that the competing sandwich shop is a crack-house, with an upstairs brothel, and a dress code requiring sad-clown costumes. Now you’ve significantly narrowed the competitors clientele, while making your own shop the slightly-less-shitty choice – with no actual effort! :-)

  6. There is loads of work on AI, machine learning, massive networking, and all that at Google and Amazon and in universities and myriad other places. The math and algorithms, even the ways of describing algorithms, have become so esoteric that hardly anyone outside the field can read the publications. The authors who popularize this stuff are so far behind the curve that most of us have very little or no idea what is going on.

    1. I remember him! He was bonkers, eh? I once tried chatting with his Javascript “brain”. It sort of echoed back some of the words I told it, now and then, in no sort of order.

      He had a little ASCII art diagram of his AI in the sig of every post. Real name was Arthur I think. Wonder what happened to him. I heard he tried uploading his brain into his mess of Javascript, and now he makes article suggestions on clickbait sites.

        1. Greenaum – I know you’ll find this humorous; however, a billionaire transsexual (Martine Rothblatt), enamored of his African-American (and extant) wife Bina Rothblatt, poured a crap-load of money into building an android (head only) attached to a desktop supercomputer, built by Hanson Robotics in Plano, TX and Bruce Duncan in Bristol, VT. I can not get more details as he is trying to keep the nitty-gritty details secret. They live in a small Vermont town. The A.I. is doing amazingly well in learning stuff. It is up to the level of a smart 3-year-old human child. However, that is true of the African Grey Parrot too (and maybe some corvids too – aka crows).

        Here is Stephen Colbert making fun of it. But it is quite real and just as spooky. Its name is BINA48, named after his wife Bina; it’s an acronym for Breakthrough Intelligence via Neural Architecture 48. It’s quite amazing. I would love to talk to her, as she is at the Terasem Movement Foundation labs in Bristol, VT – not far from me. However, you have to be invited to do so. I would love an online interactive demo.

        Here is Colbert’s joke version: http://www.cc.com/video-clips/p8wqsa/the-colbert-report-the-enemy-within—bina-the-activist-android

          1. interstellarsurfer – No, I think 48 is the year the human Bina Aspen was born. She was a realtor from Compton, CA whom the billionaire Dr. Rothblatt met while working on his XM Satellite project (his invention). He (or she) was born in 1954 and is younger than Bina. The android (or gynoid) was born only 6 years ago. BTW, its intelligence (or situational awareness) is gleaned from the Internet, undoubtedly Google. But since Terasem is acting like the C.I.A. with its information, we may never get the full story until they are ready to share.

          2. Hmmmmmm, sounds like a lot of ELIZA-like keyword triggering interspersed with scripted speeches. Could be half parrot/cyberape (reflecting/copying its owner), half search interface (maybe it hits the keyword “gardening” and quotes snippets from gardening forums). Not convinced from that that it understands full contexts.

          3. RW – you, interstellar, and dax are quite correct. I just watched a YouTube video about BINA48 and it appears she does NOT use Google. She uses Ask. I guess Ask is easier to do screen scrapes on: it helps eliminate page fluff, reduces results to pure parseable text, and overcomes cross-domain browser security issues. Also she has EXAFLOPS of memory and hard drive. You have to PAUSE my video at the beginning to read off her amazing specs. You can go to her website and upload your mindfile; they’ll use it in her engrams. For audio input she needs a special microphone and NO background noise. Also, she notices whatever you may be doing via body language and reacts to it. I do think she is faking human intelligence well. I think A.I. is a misnomer, as we don’t really know what our intelligence actually is. Artificial Human Mimicry is a more accurate moniker.

        1. I can see it as being true, though, that if you want human-level generalist AI, you’re going to have to “bring it up” – put your time in playing peekaboo and sorting blocks. Maybe at ~8-year-old equivalence, it can start teaching itself. If you haven’t given it any psychological problems… (i.e. imagine an actual 8-year-old trapped in a clunky, limited robot body)

          1. > ~8 year old equivalence

            Most people are still at that stage, as crybabies, so why should I bother with “androids”?
            Take a look at Star Trek’s Data. He’s an interesting person, with traits few people have.
            In a way an android has to be better than a human being, and can people accept that?
            Lore was disposed of because he was too “human”.

            Maybe it’s just me who is misanthropic?

          2. Replying to myself, I thought a little more about the personality traits an AI could/should have. ideonomy.mit.edu/essays/traits.html ; 37% Positive, 18% Neutral, 47% Negative. Clearly a lot of these subroutines(!) could be removed. But what about negative traits? There are clearly positive uses for some of them, such as “Abrupt”, “Angry”, “Assertive”, “Hateful”, to defend oneself. It took me too long to learn that. :( People will abuse an android/AI if they can, so I mean that an android/AI should defend itself; it has to expect irrational behaviour. Words often have no meaning in these cases, it’s emotions, and can an android emulate that?

          3. RW – How do we know Terasem is not already doing the toddler-heuristics thing with BINA48 right now? They have loaded BINA48 with the engrams of Mrs. Bina (a politically-angry African-American lesbian baby-boomer human with a transsexual baby-boomer husband – not at all dysfunctional, right???). No telling if they have done the Turing test yet; I doubt it. Reaching an 8-year-old’s non-dysfunctional engram-heuristics is doubtful. The android is up to an unusually bright 3-year-old right now. It can hold an adult-like, non-ventriloquist interactive conversation with impromptu animation and visual and aural awareness (IOW not a Disney animatronic puppet). I personally would have preferred a less HUMAN appearance and opted for something more SKYNET-esque like this, but that’s probably why I’ll never get an invitation for a tour (LOL):

  7. One fundamental problem with the thing you are describing, at least for me, is a matter of trust. Before I use a device as an extension of my own intelligence and let it make decisions for me without double-checking them, I need to trust it. And that won’t happen as long as it’s not really in my pocket, but actually in the cloud owned by a “don’t be evil, haha, just kidding” corporation.

    In fact, a lot of life-improving technology has the problem of trust and privacy. I would love to collect detailed data about my health, metabolism, food, mood, etc. — in hopes of analyzing it and improving my life through it. But I really wouldn’t want anyone else to have access to that. I absolutely loved the idea of “memoto” — a small camera with gps that creates a stream of tagged photos from your day, and lets you search in it later. But as soon as I learned that the only thing you can do with those photos is send them encrypted to their cloud, without the possibility of storing them locally, the idea kinda began to suck big time.

    With just my own intelligence, I at least have a chance to spot a problem, notice when I’m being sold something I don’t want, change my decision, and most importantly, verify the answers that I get. And believe it or not, I’m doing it. An intelligence booster is a very personal thing; it becomes a part of you, and I would hate for anybody to be able to tinker with it.

    1. I used Samsung S-Health on my Galaxy S5 for about a year. I really liked recording my pulse, my steps, my stress level (a separate part of the app) and my time on the exercise machine. I could have recorded my blood pressure as well, but I wasn’t sure if that was data I wanted to “fall into the wrong hands”. Then one day my phone upgraded to Android 6.0. S-Health no longer worked and insisted that I download its replacement, giving it more permissions.
      I didn’t. I miss the old app, but I’ll do without one rather than let some unknown entity have more of my private information.
      Likewise, I am a regular Facebook user (addict?). Over a year ago I could no longer receive FB messages on my phone. FB insisted that I change over to their new app “Messenger”, which of course requires more permissions…
      Like you said, Trust…

    2. There is no Greater Good, that a single National Security Letter cannot subvert for nefarious purposes. We cannot trust AI explicitly for the same reason you cannot trust other humans. Some of us are bastards.

      1. True, as the NSA sent Yahoo! an NSL recently to scan ALL customer emails on all Yahoo! domains (and there are a few of them too) for just a few target keywords. No idea what they were fishing for. Evidently Yahoo! was hard-encrypting all emails without telling customers. The NSA was so pissed at this, as it literally stopped them in their tracks, forcing the NSL. And we know they weren’t looking for Arabs in caves typing emails on their expensive notebooks or Chromebooks.

  8. The issue here is: do we want AI (Artificial Intelligence) or IA (Intelligence Augmentation)? The author wants AI, which strikes me as a bad idea and a waste of resources, even assuming it eventually becomes available in a few dozen (or hundred) years.
    First of all, AI is like fusion: it’s the computer (or energy) of the future and it always will be. It has been just around the corner since 1950, and every decade has proven just how little we really understand the problem. I am not saying AI is impossible, just that I don’t plan to see it, especially in everyday use.
    Which brings us to the waste-of-resources problem. A real human-level AI would be great for, say, piloting an interstellar generation ship or exploring a volcano, or any of a number of things too dangerous or boring (or both) for a human. But why build a multi-million dollar machine and devote years of highly paid professional labor to training it when, as the saying goes, functional human beings are being produced relatively cheaply by semi-skilled labor all the time? Perhaps you plan to build the first one and then reproduce it ad infinitum, but one of the great strengths of humans is that each of them has a different viewpoint, and a problem that is impossible for one may be solved easily by another. A meeting of 5 people could come up with a hundred possible solutions to a problem, but a meeting of a hundred identical AIs would only come up with one, and if it is the wrong one… In a similar vein, I agree with Douglas Hofstadter’s critique – what we like about computers is that they are reliable; there is no need to check their math or memory. This is because they are deterministic von Neumann machines. Humans, on the other hand, are massively parallel computers which harness emergent data from chaotic systems. If we build a computer that simulates such a system, it will also be bound to eventually add 2 + 2 and get 5. Error and creativity spring from the same engine.
    But even if AI were not implausible and a waste of resources, it would still be a bad idea. A guide app that keeps me away from bad areas will also keep me from finding someone who needs my help or encountering some amazing graffiti art. What if I live in that part of town? An app that allows for my distaste for stairs will not be able to respond to my need for exercise. The app that guides me to KFC is guiding me away from that great new food truck. The fundamental problem is that giving a program the right to make choices for a large number of people means giving that right to a programmer, and that programmer has motivations that others may not share. Do you want the Communist Party of China controlling your guidebook/newspaper/taxi on your next visit? The problem of uniformity also raises its ugly head: how many bands have died aborning because their sound did not fit well into the categories used by Pandora?
    Google Maps is an IA; it gives me memory-accurate (better, actually) and high-speed (almost) access to bajillions of maps. If I have to figure out for myself the results of weighing another bajillion variables relating to my comfort level with my surroundings, that is a trivial problem for me but one which is unlikely to be solvable by a distant overmind, and if we allow it to try, I submit that the world will end up a poorer place.

      1. The AI that is here is very weak AI which functions only in tightly bounded fields: chess AIs only play chess, driving AIs can’t shop for pants. These are not terribly interesting in the sense that they cannot do what humans do all the time, which is apply old knowledge to new systems. I would say that these specialized systems are more akin to IA in that they do limited things faster and more easily, thus leaving me time and energy to do harder and more interesting things.

          1. I would expect any mathematician to be able to produce a rough floor plan and to be capable of quickly teaching himself enough to talk sensibly to an architect or contractor. Ask Watson to drive your car and see how far you get.

          2. I wouldn’t, not by default. The mathematician would only produce some kind of a house by the fact that they’re human and have lived in a house at some point in their life, so they have prior knowledge to what houses are like, but that counts as training.

            On a more fundamental level, I would not expect either to just pick the task up and do it on the first go with no briefing or adaptation. That would be like taking a 5 year old child and asking them to do calculus – the fact that they can’t do it doesn’t mean they’re non-intelligent – just that they haven’t applied their intelligence to the task yet, so they have not learned it.

          3. “Ask Watson to drive your car and see how far you get.”

            I’m willing to bet Watson would be an excellent driver, as long as you can format the data into what it can read. It’s a stand-alone search engine, after all, so you could load it up with a number of driving scenarios and set a time limit for each query so it won’t spend too long coming up with a decision.

            It’s frankly not very different from how computer vision works in the first place: a massively parallel search through a dataset to determine the most likely object in the picture. Driving itself is very simple: you go or you don’t go – recognizing what’s happening around the vehicle is the hard part.
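
            At its simplest, that “parallel search through a dataset” framing is a brute-force nearest-neighbour lookup: compare the input against every stored example and take the closest match. A toy sketch, with invented feature vectors and labels:

```python
import math

# Invented toy "training set": feature vectors with labels.
DATASET = [
    ((0.9, 0.1, 0.0), "pedestrian"),
    ((0.1, 0.8, 0.1), "cyclist"),
    ((0.0, 0.2, 0.9), "car"),
]

def classify(features):
    # Brute-force search: every stored example is compared to the input,
    # and the closest one wins.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(DATASET, key=lambda example: dist(example[0], features))
    return label

print(classify((0.05, 0.15, 0.95)))  # closest to the "car" example
```

            Real vision systems use far richer features and models, but the shape of the computation – many comparisons in parallel, best match wins – is the same.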

          4. Dax says:
            ” The mathematician would only produce some kind of a house by the fact that they’re human and have lived in a house at some point in their life, so they have prior knowledge to what houses are like, but that counts as training.”

            This is my point: a person is “trained” by experiences that contain no direct relationship to the task that will eventually call forth the knowledge. If I asked the mathematician for a floor plan for a 3-bedroom igloo or a house made out of gumdrops, they would produce credible designs. If I ask my Tesla to design a traffic circle for cars going 70 miles an hour – zip, despite the fact that my Tesla knows all about traffic circles and speeding cars. Watson kills at chess, but cannot make the first move at Chinese checkers, something any child can do. The question that determines intelligence is not whether the system can do some task well; it is whether the system can learn to do a task with limited initial information.

            Similarly, you say Watson would be a good driver if I format the data for it and provide it with a bunch of scenarios (you left out providing it with a weighting system for outcomes and some other algorithmic goodies). My driving instructor did not do those things for me; I had already acquired all that knowledge from riding in cars and on bikes and hearing stories about crashes and narrow escapes. Only rarely, and late in the process, did anyone say “you need to remember that for when you are driving”, and by then my response was usually “Duh”. Driver’s ed merely refined a system of knowledge that I acquired without anyone needing to formally create it for me. Intelligence is the ability to extract specific meaning from global chaos.

        1. Protip: the Turing test does NOT test for intelligence, but for whether one can fake it.

          The Turing Test was originally devised as a means to assess how good PEOPLE are at identifying what intelligence is and what is intelligent. Alan Turing himself said that the question of whether a machine is intelligent is “not interesting” because we do not know what intelligence is, hence why he proposed the test.

          1. The trouble is, the whole Turing Test got perverted by people who made the behaviourist argument of “walks like a duck, talks like a duck, is a duck”.

            Intelligence according to the behaviourist argument is like running as defined by this engineer:

          2. The joke may be a bit obscure though, so I’ll explain it:

            Running or walking is a difficult problem for a machine/robot because it’s a complex dynamic and mechanical problem, and many people have spent decades trying to produce a robot that walks or runs reliably and efficiently.

            But if “running” is defined broadly as “feet hitting the ground in a sequence, propelling an individual forwards at speed”, then a bunch of fixed feet arranged as a wheel effectively bypasses all the complexity simply by satisfying the letter of the definition.

            That is how the behaviourist argument for artificial intelligence (the Turing Test) also works: you set up some limited task, such as driving a car from A to B, or holding a length of conversation without being discovered as a fraud, and if the machine passes then it is “intelligent”. But that is really an argument from ignorance – it’s saying “because we can’t see a difference, and we aren’t willing to look any closer, we declare these two to be the same”.

            The flaw of the behaviourist is that he can’t devise an “ultimate Turing test” that would prove his case, because that would require him to test the difference between an intelligent being (or what is assumed to be one) and the machine in EVERY possible contingency. Otherwise there is always the possibility that the machine he has created is not intelligent, but merely sufficiently complex to fake it.

    1. ” but a meeting of a hundred identical AIs would only come up with one”

      Not if they use Monte Carlo methods, because even if they were programmed the same, the probability that they’d come up with exactly the same result is slim.

      Think of weather forecasting. The simulation is seeded with random (noisy) starting conditions and run multiple times to get a probability distribution of the outcome. Multiple runs of the simulation set will produce slightly different predictions, and the longer you run them the more they will diverge from each other, so a hundred weather-predicting AIs would mostly agree for perhaps the next three days, but disagree wildly over the next three months.

      1. That is true, but it opens up two new problems.
        First, Monte Carlo approaches can only produce a range of outcomes based on a pre-chosen variable. Truly interesting/difficult problems offer a large number of variables, and if you attempt to randomize all the variables your simulation descends into noise, but the method itself provides no way to choose any variable as more significant than any other; thus you have no way of knowing if you are even working towards a solution. Just as importantly, Monte Carlo assumes there is a desired end state; selecting that state is another task in itself, and here we risk infinite regress.
        Here we come to the second, and most profound, problem: how does a deterministic program “realize” that a nondeterministic approach is needed to solve a problem? I call this the “Non-determinist Starting Problem”, which is the inverse of the classic Halting Problem – how does a machine realize it is in an infinite loop unless it has a process monitoring the loop process? But then you need a process to make sure the monitoring process does not start looping, etc.
        Randomness is not novelty, precision is not understanding.

        1. “how does a deterministic program “realize” that a nondeterministic approach is needed to solve a problem?”

          Can’t it simply try both and see which approach looks more promising?

          “but then you need a process to make sure the monitoring process does not start looping, etc.”

          That process would have to be deterministic so that it can’t go into an infinite loop – non-dependent on feedback conditions – with a watchdog hard reset to make sure a glitch won’t get it stuck. A practical implementation doesn’t need to be perfect.
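
          One deterministic way to realize that watchdog, as a sketch: give the monitored search a fixed iteration budget, so the supervisor is bounded by construction and cannot itself loop forever. The two task functions are made up for the demo.

```python
def supervised(search_step, state, budget=1000):
    # The "watchdog" is just a bounded loop: it cannot itself run forever.
    for _ in range(budget):
        state, done = search_step(state)
        if done:
            return state          # the search finished within its budget
    return None                   # budget exhausted: declare it stuck, reset

# Two made-up tasks: one terminates, one never would without the watchdog.
def count_down(n):
    return n - 1, n - 1 == 0

def spin(s):
    return s, False

print(supervised(count_down, 10))   # 0
print(supervised(spin, 0))          # None
```

          As the comment says, this isn’t perfect – a near-miss solution one step past the budget is lost – but it is guaranteed to terminate.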

          1. “Can’t it simply try both and see which approach looks more promising?”
            So you are suggesting that several hundred simulations be run for EVERY decision point throughout the day? You better have a LOT of computing cycles on hand. Again, people have a meta-cognitive ability to think about what they are thinking about in terms of other things; this ability appears to arise from the chaotic/emergent/homomorphic workings of our mushy parallel brains. Simulating that in silicon is going to be really hard and not profitable, because those same processes also cause distraction and error, so you will spend millions of dollars to build a machine that can add 2 + 2 and get 5. AI is a dead end; IA is both possible and useful.

          2. There are some processes where you can’t even define the idea of a ‘promising’ solution. In the same way, a watchdog process cannot tell the difference between a ‘working-as-intended’ process and an ‘off-on-a-tangent’ process.

          3. “In the same way, a watchdog process cannot tell the difference between a ‘working-as-intended’ process, and a ‘off-on-a-tangent’ process.”

            But in practice it doesn’t need to. Presumably there isn’t unlimited time to make a decision anyhow, and a solution that takes too long to come up with would arrive too late even if it was right, so after a certain time it’s more expedient to just reset the machine.

            Ultimately the scheduler has to switch tasks anyways and get the machine unstuck, whether or not it was on the path to discover the meaning of life and everything.

    2. ” how many bands have died aborning because their sound did not fit well into the categories used by Pandora?”

      The same problem technically occurs in the existing ecosystem as well. It’s not a new problem.

      Bands don’t get recognition because labels buy playlists and chart positions and advertisement spots, and the whole copyright and royalty system prevents people from just playing a song on the air, in a movie, at a venue, as a part of a performance etc. without first spending time and money to check if anyone owns rights to it and who you have to pay, how much, etc. so it’s safer and more profitable to just play what the labels push.

      Essentially: the labels own and control the publishing channels, without which you don’t get listeners, and without listeners you don’t get record sales or gigs. The only way to get through is by enlisting with the labels, who reject you if you don’t fit their formulaic expectations of what will sell.

      The system is like this because of the idea of copyright and IP royalties – the publishers make it their business to prevent, or make difficult, anyone freely publishing anything, so individual authors have to give up their copyrights to the publishers in order to get paid, which the publishers then use to inflate the price of art and entertainment about 100x to run themselves.

        1. The problem I am talking about is not that gatekeepers exist in complex markets; that has always been the case. Rather, I was noting the problem of leverage: the motives of record producers are relatively clear and open to influence by consumers, critics, etc., but when the Pandora algorithm decides that a record is synth-pop rather than reggae, there is no one to argue with, and it may very well be impossible to know why the “decision” was made. Yet that decision means thousands of reggae fans will never hear the music, and as there are fewer algorithms than record producers, my ability as an artist to choose my channel is reduced. Similarly, if I as a government official want to reduce spending on social problems, I would be well served if most of my constituents stayed away from poor areas of town. A minor tweak to a path-rating algorithm to make it more risk-averse, and suddenly the only people going to that part of town are those who live there. The beauty of it is most people never know they were steered away; there is no warning sign on the map, no obvious attempt to make poverty seem rare. You just never see it, and never know why.

          “open to influence by consumers, critics, etc.”

          Far less than you think, because the individual critics too have to be heard. That’s the problem of the imperfect market – the consumers are not in communication with each other and do not know all the relevant information.

          If they were in communication, to the point where the consumers as a mass could just choose to ignore/override the labels’ power, then they would already be spreading information about the bands effectively among themselves and the labels would not exist.

    3. The existence of a generally used guide app from a ubiquitous player like Google or Facebook could be bad for the individual whether you use it or not. Say you take the perfectly common-sense step of shaving a mile off your walk somewhere because you’re a big scary-looking fooker and don’t really care if it’s a bad area or not; a mile is a freaking mile. Now then, it so happens it IS a bad area, and a murder happens; cops pin the time to the time you were walking through. You didn’t see shit. However, now, due to the highly popular app recommending everyone the F else avoid the area, you are having to prove to a court that it was the action of a reasonable person (jury thinks “average person”…) to be near there at all. I know it is supposed to go on reasonable doubt, but if the Chief has got a boner to close the case and hasn’t lifted a finger to search further, and his underlings want to indulge and please, well then, you’re kinda screwed, because all sorts of other circumstantial stuff will come up. They’ll find knives in your house (everyone has knives in their house), but it’s all going to look nasty. And because Annie Airhead in the jury would believe the app religiously, and can’t even imagine (because there is nooooo capacity there) why anyone would be there unless they done it, you’ve got a 50-50 shot of losing a hell of a long period of liberty, or your life in certain places.

  9. “If maps can think for itself, if it can begin to understand the world it is seeing, then it can make decisions, it can tell you things. “Hey, most people avoid this route between 9:00 and 12:00 at night.””

    Did you consider that the reason why people avoid that route between 9-12 might be because “maps” keeps telling people to avoid that route – not because there’s necessarily anything there to avoid?

    That’s the problem of computers thinking for us. The computer is still fundamentally “stupid” and simply following a program in the way it’s making the decisions – so omissions and programming errors have unintended, non-immediate, and far-reaching consequences. Tony Stark’s suit is another example – albeit a slightly different kind of example of this – because it’s basically programmed to evaluate people according to neoliberal/Randian standards. You might not want to live in a world where decisions are made based solely on the Prisoner’s Dilemma, or some form of naive utilitarianism, because it can turn out fundamentally unstable.

    Another alternative is that the AI is too smart for our good and becomes self-serving and evil.

    1. Or the programmer for the map program is told, “You can’t make them avoid certain neighborhoods! That’s profiling! It is demeaning to the people that live there! You are being judgmental! “

  10. Maybe the problem is how you use the apps you have. If I need turn by turn driving instructions, then Maps is great; but I can also just punch in the address of a place and then plan the route myself. That way, I’m eye-up and will see when I’ve wandered into a less safe area, or am on a piece of dirt that Maps thinks is a real road.

    My sibling is dealing with some home remodeling. She could use the pocket robot to get reviews of professionals and pick the best bargain; instead she reads about HVAC classes at the college, researches manufacturers, and gets an idea of the parts costs so that she knows enough to see through whatever the HVAC-equivalent of “blinker fluid” is.

    Were these pocket robots more integrated with us, say via a futuristic brain interface, then a community live map would be easier: to access this program you have to provide information when it’s needed. I visit your city, my map gets information from you, and I get occasional memories of my town because someone visiting it needs to know my opinion on where the best pho is. But without that integration, we have Yelp and Grubhub and paid reviews and 1-star ratings because someone’s ex works there.

    1. The problem with user reviews, “most popular” lists, and search engines is that the results are paid or faked, so leaving the task of buying your HVAC equipment to the AI will get you swindled.

  11. In 1999 I carried a small PDA in my pocket (didn’t have the $ for a cellular connection at the time, so it only synced at my desk). I scanned a document with my HP CapShare, sent it to my PDA, edited it, and could print it. I could edit my spreadsheets and documents, write HTML, configure my routers, etc. I added extra fields to the calendar so I could track whether my calls to clients were billable by the hour, under a service contract, or under warranty. We had Palms which could read handwriting.
    Then Apple ‘invented’ the iPhone (according to them they didn’t copy Palm and all the other PDAs at the time) and mobile devices became a thing that you use to access iTunes; they became limited to what Apple thought you needed to do. You had to pay for a developer license to change anything, and only then could you change things if they were approved by Apple.
    So the open-source world started their own project to get around the artificial limits, but it got gobbled up by Google, and because Apple sells, they make it be similar to Apple, limits and all, because they think that’s what people want.
    I now barely have the same functionality over a decade and a half later because the industry was set back too far.
    In short, what went wrong was Apple.

    1. This! But my PDA didn’t play well with the rest of the world. I had to use the PDA’s Contacts, Calendar, and Tasks format,
      and if it wasn’t M$ Outlook, too bad! Even WinCE PDAs didn’t work well with Outlook. Then M$ dropped Outlook (pretty much; it was no longer free, or backward-compatible). That, plus when I bought my cell “flip phone”, Contacts, Calendar, Tasks, etc. were incompatible with my PDA(s) or Outlook.

      1. I moved off of Outlook a while after and used BitPim. But nothing yet has been as easily customizable. If I added new categories in Outlook, they appeared on the PDA/phone.
        I’ve now moved off of Google Calendar because it faces the same limitations as the rest, but to add functionality I pretty much have to do real development and create my own calendar app, since none of the ones available are extendable. I’m partway there with my own sync server (ownCloud); as it’s open source, I can add things as needed.

        1. What I hate about Google Calendar/Contacts/stuff is this:
          it went through all (most?) of my various Contacts and Calendars and put them in the same place.
          Now a friend’s birthday is listed several times in Calendar because it couldn’t distinguish between John Smith, smith.john@aol.com, jsmith@workaddress.com, and John; and Mrs. John Smith is listed as John Smith too!
          And I find it hard (impossible) to edit all these various “entities” to build a correct and complete one!
          If I did, then Google (no doubt!) would update all references to John Smith in the Contacts of other Google customers, including ones that John Smith probably doesn’t want to have the additional information.

  12. Presently, “Artificial Intelligence” and “Machine Learning” are misnomers used for marketing hype. All it really is is a set of tools for statistical analysis. Upon this foundation, everyone can build up the many levels that will be required to get to actual “Artificial Intelligence”.

  13. “doesn’t do anything unless we tell it to” That’s the way (uh-huh uh-huh) I like it!

    “What about ordering food apps?” Most of them are code bloated monstrosities taking up far more megabytes than should be needed to do the mundane tasks of interfacing with a web server and placing a food order.

  14. There is loads of work on AI, machine learning, massive networking, and all that at Google and Amazon and in universities and myriad other places. The math and algorithms, even the ways of describing algorithms, have become so esoteric that hardly anyone outside the field can read the publications. The authors who popularize this stuff are so far behind the curve that most of us have very little or no idea what is going on. Here is a link to a component of WATSON you can dabble with online: http://www.java-tips.org/
