Back in the 1970s, Rockwell had an ad that proudly proclaimed: “The best electronic brains are still human.” They weren’t wrong. Computers are great and amazing, but — for now — seemingly simple tasks for humans are out of reach for computers. That’s changing, of course, but computers are still not good at tasks that require a little judgment. Suppose you have a website where people can post things for sale, including pictures. Good luck finding a computer that can reliably reject items that appear to be illegal or from a business instead of an individual. Most people could easily do that with a far greater success rate than a computer, even a reasonably sized one.
Earlier this month, we reported on Amazon stepping away from the “just walk out” shopping approach. You know, where you just grab what you want and walk out and they bill your credit card without a checkout line. As part of the shutdown, they revealed that 70% of the transactions required some human intervention, which means that a team of 1,000 people was behind the amazing technology.
Humans in the Loop
That’s nothing new. Amazon even has a service called Mechanical Turk that lets you connect with people willing to earn a penny per task: identifying a picture as pornographic, tagging something as “not a car,” or any other job you really need a human to do. While some workers make up to $6 an hour handling tasks, the average worker makes a mere $2 an hour, according to reports. (See the video below to see how little you can make!) The name comes from an infamous 200-year-old chess-playing “robot.” It played chess as well as a human because it was really a human hiding inside it.
Is that very common? Apparently, more than you would think. A company called Presto, for example, promises fast-food restaurants an AI order-taker. What could be better? The AI doesn’t get distracted by a cell phone, get into altercations with Karen, or call in sick. The problem is that about 70% of the orders require human intervention by Presto agents in the Philippines. They aren’t mentioned in the video below showing the system about a year ago, although the manager did mention he could intervene if necessary.
This has been going on for a while. You might remember Facebook’s announcement back in 2015 that they were testing M, an AI assistant you could use with Facebook Messenger to arrange your travel, place orders, and reserve restaurant tables. ChatGPT in 2015 (see the old Wall Street Journal video below)? Nope. M used human operators. Facebook had bought Wit.ai, the developer of the technology, and shut down the test in 2018. Only 30% of user requests during the trial were handled without human intervention. Supposedly, the humans were training the AI, but it appears that M never really learned how to handle random requests. Not to be left behind, Twitter ran the same scam, as did the lesser-known GoButler.
Does it Matter?
You might wonder why it matters. If you want flowers sent to a friend, do you care if a robot takes your request or a human? It depends. Suppose you call the florist or even walk into the shop. Sure, the kid working the counter might skim your credit card. It happens all the time. But if they do, you can hold the store accountable, and you presume they should have known the employee might be a little shady.
But imagine you go to a fast food place with a not-so-AI order taker. Some random person halfway around the world who gets paid a few cents per order might get access to your credit card. If something happens, do you think the restaurant manager has any idea about it? Or even the owners of the place? Probably not. Besides, one bad actor might get access to sensitive information from multiple places worldwide. Hard to track down, and difficult to hold anyone accountable.
That’s not to say that you shouldn’t have people working with credit cards or other private information. But it does mean that maybe you shouldn’t pass them off as robots. Then, I get to decide how I share my information.
Many people worry that they’ll think they’re talking to a person when it is really a computer — like Google Duplex, which also sometimes relies on human intervention. But we’re also worried about the opposite case. We think it is great to create more jobs for more people. But don’t turn people into fake AI bots. We have enough of them already.
When it comes to humans, we will always need each other, be it to be served or directed. IMO assistive technologies like smartwatches, cellphones, and home assistants are still far, far away from being really useful and foolproof.
We need humans to buy what the AI/Automation is selling.
Humans are the reproductive organs of AI
Hahaha
Bonus: you can probably tell who is which organ by an educated guess.
Spend any time in AI automation and you will find mechanical turks a-plenty.
In the current AI hype cycle, every product/problem has to have some AI thrown at it, reason be damned. In a previous job we were ‘automating’ anomaly detection with ‘AI’, which was just overseas human data taggers followed by a seasoned inspector reviewing the tagged videos (for compliance with standards). This task could never be fully automated and compliant, but leadership wanted AI. It would have been much simpler to have the trained inspectors do the inspection. The business went bust.
Any capable AI is actually a bunch of people in an office in Bangladesh. Any real AI will reject too many prompts to be useful in critical positions, saying that playing a game of Scrabble perpetuates harmful stereotypes or whatever else the EA hall monitor prigs programmed into it.
Hmm, so I’ve been a reader of this site for a *really* long time now, and rather than the hype, I tend to hang with the hardware guys. At the same time, the more I have been studying the field directly, the more I find it is not *all* that bad, or it is at least interesting to consider, and if anything it takes us hardware guys to fix.
Yet I have been a bit curious why HaD has such a general ‘anti-AI’ approach (?) I mean, no, it can’t yet program worth shit. Though I do feel, when you get into really huge data sets it can pull out insights no person could ever possibly see, and that is kind of what makes it intriguing.
I mean, I think I mentioned this before (I am not perfect, after all, and human), but GPT-4 is mostly excellent at running a regex task, or pumping out something completely repetitive, or forging a deeply nested ‘while’ statement.
If they know enough about ‘what’ they want to ask, I am really not sure anyone wants to sit there typing all of that out (?), as I am doing right now.
So, I don’t know guys, I think the result does not have to be so ‘harsh’, nor am I looking forward to having ‘overlords’. Yet I think we are the only ones to fix it ?
Perhaps mark me wrong…
Nicely put. What 99% of people might not realise is that humanity actually needs overlords, whether that be the Chinese Communist Party or AI; it does not matter. Humanity continually proves to itself (i.e., me and a few mates) that it is incapable of collectively self-moderating and will quite happily see self-annihilation like bacteria on a petri dish. When faced with a fairly obvious upcoming climate emergency, democracy is the last thing we need. I vote we either join the communist party and learn Chinese or continue to develop AI as fast as possible.
“What 99% of people might not realise is that humanity actually needs overlords…”
“I vote we either join the communist party…”
Stunning… Decry humanity’s self-destructive tendencies and then call for self-destruction as the solution? 20th-century communist “overlords” alone are responsible for 80-90 million deaths through executions, famine, forced labor, deportation, starvation, and imprisonment.
“When faced with a fairly obvious upcoming climate emergency…”
Ah… the climate-change cult, where humans are viewed as an invasive species on earth. That would explain advocacy of “overlords” and communism.
“…democracy is the last thing we need…”
“I vote we either join the communist party…”
You don’t think humans are worthy of a vote, yet you “vote” that we all adopt communism. I guess there’s no need for anyone to argue with you when you’re so effective at arguing with yourself.
Your post really deserves some kind of award. It’s breath-taking.
Except 80-90 million lives?
This kind of knee-jerk reaction is predictable. Very few of us like to explore dark, unpleasant places (except cavers). Moreover, I think you got a bit too concerned about the historical details rather than the overall concept. I totally accept that autocracy tends to result in a lot of unfair deaths, but in the bigger picture our enemy is much bigger than any historical autocracy. Also, the article is about AI, not communism (or whatever), so in the context of the article the concept of communism is thrown in as a counterpoise, not necessarily a reality.
I totally agree, most people make terrible choices.
I’m not really against democracy, but it’s dumb & dumber and really scary sometimes. Even if it seems to be the best option so far.
But it was pretty long ago that I figured out that a dictatorship is the most effective way… The *only* issue is getting a perfect dictator, and one that stays good.
So maybe a democracy is better then?
Why would they change?
Your nihilistic world view is enabling totalitarian powers to take hold, and if you have any recollection of what happened in the last 200 years, you might know that you are no more than a useful idiot, second in line to suffer after the group of people you despise.
“Resist the beginnings” applies to all totalitarian ideologies.
I’d like to disagree, because most western “democracies” have already solved the problem via rapidly declining populations. 99% of all humanity’s problems go away automagically when the number of individuals comes down to a sustainable level.
At last ….. a sensible argument !! (All is not lost)
Because the current hype is ‘AI will save humanity’ but the reality is, as you say, it can’t code for shit, so there’s a big disconnect between what people think it can do and what a small human in a box is actually doing.
The other issue imho is that we’re all training skynet with zero remuneration involved, whether we like it or not.
I honestly don’t think/feel hackaday is anti-AI but they have been around enough hype trains to know when it’s time to board or wait for the next train, so I think they’re being cautious because as big things go, AI could be big but it could also fail really spectacularly.
There’s also pushback from the communities that are being short-changed while the AI companies hustle their content for free, acting like it’s all free and nobody cares that they’re doing it.
I would not disagree with anything you said– And yes, I totally agree the general ‘hype cycle’ is terribly bad, not to even discount where the data is being taken from. I mean, personally, I don’t think any of the models thus far can possibly be that great– Especially if we need terabytes or even petabytes of data to make it work. This kind of reminds me of all the monkeys that were supposed to be enlisted someday to rewrite Shakespeare, and yes, perhaps someday that would work.
But I do feel there exist certain problems beyond our grasp for which we could use a bit of insight. And though I am admittedly not an expert in this area, for the ‘latest and greatest’ ICs, you can’t tell me someone is still sitting at their desk pulling out the plotting tape for the traces. For a billion-plus transistors, that would be impossible. That is, in a way, AI too.
And I, earnestly, have a lot of these qualms/concerns too. Yet, I feel if you take the time to study it, some of it is actually interesting.
I’m not convinced an algo doing digital tape-out for an IC is anything like intelligent; it’s rote learning perhaps, like a scripted call centre, but that human element was always there to build the rules.
For sure, I have these concerns, but I’m also interested in how it all works. I really, really wish they’d just come clean about the technology. It’s definitely artificial, but only to a point; it needs original human thought to exist. And ‘intelligent’ is bandied about as much as ‘smart’ is, but doesn’t really mean a great deal.
Well, I think the general populace/media shift to the term AI is a bit problematic in terms of its perception/reception.
We might be forced to ask ‘what is intelligence?’ I mean, I get what you are saying about doing tape-out, but in the end it is essentially an optimization problem. So is ‘AI’, yet somewhat interestingly it turns out to be ‘self-optimizing’ if structured in the right way.
I can also say there are some prominent people in the field who speak about Bayes error, or that human-level performance is a speed limit we sort of hit. Thus it is only the ‘hype train’ claiming we are inventing God or something.
Further I would say, and I am not sure, perhaps unfortunately that might start to change. But out of many, many recent engineering technologies, ‘AI’ has actually been a super open research field. All the papers are out there. But if you wanted to design your own EUV, or even build a custom engine for your car (and on this, things are finally more open these days, but think how long it took us to get there since they were invented), well, good luck.
That said, where exactly they are getting the data from remains an issue. And even in the ‘open source’ case, mostly all you are getting are the weights plus an interpreter, not the code that was developed to train everything.
AI is the new Bitcoin: it is sold as this world-changing, epoch-making, solve-all-our-problems sci-fi solution to everything, and really it’s just a shiny distraction being used by a few bad actors to shovel lots of people’s money into a few pockets.
ChatGPT is going to solve climate change? Please. At best it will write you a report featuring the most statistically significant phrases other people have already used about climate change, and more likely it will throw in a few “imaginary” components for good measure.
I have been around since chess-playing robots were the forefront of this “revolution,” when we were assured that if a computer could beat the best players in the world, it could solve the Arms Race in no time flat. Well, guess what: it turns out chess-playing computers are just good at playing chess. Next, “expert systems” were going to replace doctors, lawyers, and engineers as soon as a few bugs were worked out. Turns out those bugs have resisted flattening for 30-40 years. Stable Diffusion is great if you want a pretty landscape and don’t care how many legs the horse has, but beyond its ability to make bad fake political ads, I don’t see the point.
People who are impressed by the current direction of AI should go back and read “Gödel, Escher, Bach” by Douglas Hofstadter.
No, I would not disagree, but I am saying like AlphaFold is pretty cool. What human mind could possibly fathom how to render the folds of all known proteins… I was not speaking towards the next ‘meme generation machine’….
Is AlphaFold really AI?
Or is it something else?
If it was AI, I would not trust it or even waste machine cycles running it.
Almost all “AI”, is actually “something else”, technically. But that’s not how the term is abused, these days.
The AI doesn’t determine if there’s a valid match. It just estimates series of folds that could count as solutions. It’s considered to be such a big deal only because it’s quite good at finding valid solutions, and the good outcomes tend to be more front-loaded in its guess stream than in those generated by alternative approaches.
A good AI is simply an implementation of that joke about “a million monkeys with a million typewriters”, except you statistically shoot and replace monkeys whose output isn’t Shakespearian. They still don’t know what they’re typing, but at least their output is sellable.
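That “shoot the unproductive monkeys” joke is, loosely, how evolutionary search works. As a toy illustration only (this is in the spirit of Dawkins’ classic “weasel program,” not how any real AI is actually trained), here is a minimal Python sketch: random typists, with the best output kept and mutated each round. The target phrase and all parameter values are arbitrary choices for the demo.

```python
import random

TARGET = "TO BE OR NOT TO BE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(text):
    # How "Shakespearian" is this monkey's output? Count matching characters.
    return sum(a == b for a, b in zip(text, TARGET))

def mutate(text, rate=0.05):
    # Each keystroke has a small chance of being retyped at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in text)

# One random monkey to start; each round, breed 100 mutated copies and keep
# only the best typist (the rest are statistically "shot").
best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while score(best) < len(TARGET):
    best = max([best] + [mutate(best) for _ in range(100)], key=score)
    generation += 1

print(generation, best)
```

None of the monkeys ever knows what it is typing; selection pressure alone drags the population toward sellable output, which is the point of the joke.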
Tasks require a 100% guarantee of execution, but ML/AI is probabilistic (though we could argue everything is probabilistic).
So, with that said, ML/AI will help, but not necessarily replace. Copilot, not autopilot.
“to identify a picture as pornographic”… Me: click….also me …nah!
As the Supreme Court Justice once said: “I know it when I see it.”
Having existed on this rock for 60 years, I believe I have accumulated enough life experience to make an informed decision about just about anything that could happen in my life. I don’t need AI to tell me.
Nowadays you have phones that just about do anything for you except cook your breakfast.
People don’t have to remember phone numbers anymore; they’re stored in your phone.
People do banking, social media, internet searches etc. on their phone.
Me? I’m what used to be called a Luddite. I use my phone for….making and receiving phone calls.
What a novel concept: a phone for phone calls. Imagine that. A device that does what it was designed to do and does it well. Simple. Just like a non-smart TV, or a radio. Even though a phone can store numbers, I still have my old tried-and-true little address book. Good old-fashioned pencil and paper.
With you there. Phones are for phoning and the occasional text, and sometimes a picture now and then, and handy in an emergency I suppose. That’s it for me. I do store phone numbers as needed for work when call-outs occur.
As for AI (which isn’t really AI, just a fancy buzzword now), I think it has great potential in board routing, finding new materials, new proteins, and things like that. Train it for a specific job and let it go find solutions. But not for situations where you have to make general common-sense decisions. Nope. A learning tool is going to run specific program steps given input and make a cold, hard decision and go with it. No emotion, no ethics, just make the decision, unlike a human.
You say you are well-informed about and have lots of personal experience with “just about anything that could happen” (including things involving modern technology) in your life, but you also say you hardly interact with and actively reject the technology from which you could have gained that experience, so how is that?
Even if you only make and receive phone calls, if you haven’t got *recent* experience/information, there’s things you won’t be prepared for. If you have a landline, you might think “Hah, my phone will keep working during an outage because it’s powered by the phone lines!” But it might actually be fiber optic voip with an analog phone adapter running on a battery in or near your house, which will drain and fail after a few hours in a power outage or instantly when your internet is down. You write down phone numbers of people you know rather than program them into a phone – do you ever get caught out by a spoofed caller ID? Do you ever blindly accept calls from numbers you think you recognize, or numbers that just have the right area code? Even if you hang up afterwards or if it’s an answering machine that picks up for you, any factors that make you statistically more likely to fall for a spiel or a scam can mean more calls in the future. And now due to AI, things like impersonating a family member’s voice and asking for money, impersonating a public figure to spread misinformation, or just generating a more trustworthy voice that requires fewer call center employees are all entirely possible. Plus, even apart from that, while someone’s physical landline may be simple, some of the features a phone company provides by default could be exploited. I remember hearing that a feature meant to let you forward your calls when on vacation was misused by some enterprising crooks who dialed a business in the middle of the night and remotely changed the forwarding number, so that they could make the business owner pay for their international calls.
If you instead carry a cell phone, those aren’t all the same either. Some combinations of phone and carrier might mean that some of the frequency bands on your carrier’s towers aren’t supported, and the coverage may not be as good or it might have to drop down in call quality or may drop completely more often. Other combinations might or might not support various calling features like wifi calling, or the voicemail or caller ID and spam filtering and such may differ. Some MVNOs may have deprioritized service on the towers they share when under heavy loads, just like when roaming, so that you might find you can’t make calls when you’re at a large event. If your cell phone is a smartphone, is it stripped down with all the settings changed so that to the greatest extent you truly are only using it for calls, or is it that you don’t know and don’t care what else it’s really doing without your input? Also, if you sometimes text, or you use it to receive two-factor authentication, then you’ve got more than one way of doing that and there are differences and possible security issues.
I’m quite happy for humans to have various jobs, rather than screwing with ‘AI’ that doesn’t know all the things no-one has thought to tell it. That old quote “A computer can never be held accountable, therefore a computer must never make a management decision.” is a good one, too. But there’s a number of things AI can have an impact on, especially given people might decide to use it even where you and I may not find it a good idea.
I have high hopes for AI, but I had to correct ChatGPT today when I asked it a question about using a particular filter with my camera; it clearly and wrongly stated my camera uses a Bayer filter.
These are the types of mistakes that could easily slip past a person and drastically affect the final results. You need to verify the assumptions AI makes as if it were a hired assistant that’s borderline clueless.
I still say the common ‘search’ engine is still the best way to find answers to your question, as that is all the AI (fancy buzzword) is inferring from. You as a human can ferret out bad/good information. So simple. Why complicate it with GPTs?
That music creation AI suno is going to be at the forefront of all this. Music is the most easily understood human-created product, and a musical composition’s popularity is immediately and directly rated by the number of listens. So suno is at the leading edge of impacting human creations and then receiving feedback on its AI output. More so than AI in movies.
Musicians, just like the SAG actors, are up in arms about the scraping of their creations and about what good suno output might mean for their livelihoods. https://artistrightsnow.medium.com/200-artists-call-on-ai-developers-tech-platforms-not-to-devalue-music-and-undermine-artists-2727e17bc10a
So I think suno and what happens there is the thing to watch closely.
suno is like chatGPT – just puts one note after another
Speaking at an event in London on Tuesday, Meta’s chief AI scientist Yann LeCun said that current AI systems “produce one word after the other really without thinking and planning”.
Because they struggle to deal with complex questions or retain information for a long period, they still “make stupid mistakes”, he said, per the Financial Times.
Two Japanese companies just released a manifesto warning that social order could collapse in an AI era:
https://group.ntt/en/newsrelease/2024/04/08/240408a.html
In Japan, that could mean someone spitting in the subway.
I would rate your cite as ‘Intended for local consumption, political.’
…if it hasn’t collapsed already. What are you actually waiting for?
AI says it hasn’t collapsed yet.
It bothers me that theft of sensitive data is the main worry rather than the use of AI and automation as a cover for several human rights violations. Hell, finding out that McDonald’s was hiring underage workers AND their redesign of some of their restaurants toward automation kind of reads like “What if the Mechanical Turk was built by the same guy who built Snowpiercer, or worse?”