Google Assistant making calls pretending to be human not only without disclosing that it's a bot, but adding "ummm" and "aaah" to deceive the human on the other end with the room cheering it... horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing.
That the many in Google did not erupt in utter panic and disgust at the first suggestion of this... is incredible to me. What of Google's famed discussion boards? What are you all discussing if not this?!?! This is horrible and so obviously wrong. SO OBVIOUSLY WRONG. *headdesk*
As digital technologies become better at doing human things, the focus has to be on how to protect humans, how to delineate humans and machines, and how to create reliable signals of each—see 2016. This is straight up, deliberate deception. Not okay.
Then it depends how it's used in practice — who can use it, for which use cases, are people notified that they're talking to a robot, what is being logged, where does the training data come from, etc.
I don't understand why this tech is inherently wrong.
I don’t think this is inherently wrong. That said, studies on robots (assuming perhaps this might apply here) show that people like robots that look happy, but look like robots. Presumably, this could apply to these types of features. /1
I understand wanting the bot to sound more natural, but the demos I have heard make it seem deceptively human. With verbal stumbles, pauses, umms, etc.
More approachable design is good. Deceptive design is bad.
Straight-up deceptive design at its launch; being cheered on by a whole room with nobody saying wait, wait, what? Where are the answers to those questions? Where is notification? Instead "Aaahs" and "umms" are added specifically to deceive humans.
It's a demo for a crowd of people who mostly just want to see cool new tech. Of course they're not going to do a fine analysis of the societal & privacy aspects.
This doesn't mean this will launch as is, without people thinking hard about the implications.
The deceptive aspect depends entirely on implementation. The fact that there's "hmm"s is just making the interaction easier for users.
This is very similar to Deepfakes. It's horrifying when there's no consent, but a movie studio could use this tech ethically & responsibly.
This isn't the 3rd grade science fair; this is Google. No demos, no gawking over cool tech without privacy/societal aspects being considered/built into design. It should be the first question, not let's wildly cheer first and then try to put some social impacts stuff as add-on.
I'm not involved in this particular product, but I'm sure that some folks have been looking at this from this perspective & building it into design.
This is how the process works. But privacy folks don't decide what gets highlighted at I/O keynotes. It's not the important part.
Take what Apple is doing on Lightning Connector protection. I'm sure that's not what they'll highlight in their keynotes, either. It's not going to be cheered at by the crowd.
Nonetheless, it's awesome that they do that, and will have real impact.
That's not how the process should work. Privacy and social implication people should be leading stuff, and the demos should discourage gawking at unconsidered tech just plopped out like that. Too far into all this, and too powerful, to be an elephant in a china shop.
I don't think I/O demos matter as much as you seem to think. It's all over tech news, but most people will discover this via actual implementations.
When I do privacy reviews for Google products, the marketing strategy is not my main focus. I don't think it should be.
The marketing is the main signal visible from the outside, but that doesn't mean that it's the only or main product driver.
I agree it would be nice to have more of our privacy/security/societal work featured externally. I'm trying to push in that direction. It's not easy.
This whole thing feels so wrong. Decreasing human contact even more than today and dumping AI training on small business owners. Will people choose restaurants based on whether the assistant can understand them? I don't think this has been thought through at all
what about people with social anxiety?
people with autism? customers who aren't fluent in english (1/10 of mine)? employees who want to return to their customers quickly? older people who understand conversational ui but not esoteric interfaces like apple maps?
I don’t understand the notion of ‘deception’ for a product that is limited in use. Are you considering its scalability as the issue, or is there a fundamental reason why every human-machine encounter must be signalled to the human? If the latter, are there any sources you may cite?
Silicon Valley is full of people who have been in Silicon Valley bubble, where the goal is to come out number 1 & innovate. This is just following the same patterns as Wall Street. At some point can we admit that the sociopathic business model is the problem & not the people???
In 2018, every single phone call from an unknown number is deception. People are paying humans to deceive you on the phone, they're called telemarketers and scammers. Why are those things not just as bad or worse?
The quality of Google Assistant is miles ahead of the sampling @timkmak profiled last month. Imagine this technology in the wrong hands. I was simultaneously terrified & amazed when I listened to Google's latest advances. ai.googleblog.com
The applause was shocking and shameful. To me there are at least three levels of alarm here—that Google would think it was a good idea to develop this, that they would publicly present it, and that their employees would applaud it.
The public demonstration of their lack of an ethical center, at this late date, is what's scarier than any specific technology. These are the people in charge of our world, and these are the decisions they are capable of making.
As usual, the demo focuses on the "busy parent" - they're not even mentioning accessibility unless I missed it and just as expected, there's no "here are some potential downsides of this tech" (of which, as usual, there are many :))
Robo calls have been doing this always. There's a recent one that spoofs a phone number close to mine featuring "Emily" from a vacation property that I "recently stayed at." There is an app, though, that listens for audio fingerprints and kills them.
All of these things start with good intentions by smart techies, but their ambition is blind to the potential. Nobody thinks of the impact of future fake people calling the elderly and lying to them to swing an election, but that's where this leads.
Tech companies have so much influence on our lives that they are essentially political entities that we only “vote” for through our use of their increasingly ubiquitous products. The indirect effects are often not felt immediately, so it’s often an uneducated trade off we make
Silicon Valley is unable to handle ethical questions or understand the impact of innovations on humanity. There's a need for a higher multi-national body of ethicists/lawyers/security/privacy experts that can look at AI with a bird's eye view & make that decision.
You didn't actually read the interviews, did you? The ones where he said that they are wary of the ethical considerations, and that this technology would likely be required to identify itself as an automated assistant? But I understand that ignoring that is more dramatic.
If you don't understand that this is where technology has ALWAYS been going, whether you like it or not, then YOU are rudderless. Best for them to be addressing these issues up front, as they say they are. It's going to happen regardless.
I do more than make sausages. Programming that understands natural language is highly beneficial in many very important use cases. And, yes, there are nefarious use cases as well but that shouldn’t stop progress. Tech must innovate and government must regulate.
The RoboCallers are already using “over familiarity” as well as errs and uhhs in well crafted/scripted robocalls.
I’m guessing (like spammers) the robocallers will be early adopters unless Silly Valley puts in some kind of UX that designates the call clearly as AI driven.
“Silly Valley” as you call it is one of the most important economic engines in this country so embrace it, cultivate it, invest in it, encourage it, and, yes, regulate it. Either way, the IT Genie ain’t going back in the bottle.
Again, this doesn’t provide any real insight into either the ethics or the tech.
The point is that google is just releasing these into everyday use. So unless you expect secretaries to Turing-test every customer they have (which is horrible service and bad business), it doesn’t help.
Yes, that’s a great idea. However, since captchas would also be automated, it would become a challenging problem.
In essence, training the impersonators and training the captchas would be like a society-scale generative adversarial network.
Would paradoxically *accelerate* AI.
What do you mean by this? That you’re going to Turing test every phone call you receive for the rest of your life?
Bc the point of the thread is that Google is just quietly releasing these into everyday use.
Even before getting into context, can we also elaborate on the fact that it is installed without any notification and cannot be uninstalled?
It asks for permission to "turn itself on" when clicked on but why can I not just delete it all together?
Well, don’t worry about it too much because before long the person answering will also be replaced by a bot. Just millions of bots making millions of landline voice phone calls to each other all day, every day.
Reading the replies, it seems like many ppl lack imagination for how badly this can be misused. Just one example: bot armies calling politicians & pretending to be constituents, or calling companies & pretending to be customers, to give (fake) support for a political position.
It's how companies strategize.
How the military says, "We can't cut defense spending because we need to keep our troops safe." Is somewhat how tech companies operate. They introduce benign use-cases like helping the disabled in order to avoid backlash toward their full ambitions
Valid concerns. Currently, they've announced a technology, and plan to address the transparency issue before rolling it out as a product/service. From the announcement (ai.googleblog.com/2018/05/duplex…):
Speaking as a "human on the other end" that takes calls from the public all day long, I would be thrilled to receive calls like this from the bot. You're overestimating society's ability to communicate clearly and with understanding, at least over the phone.
I thought it was the opposite of horrifying. I didn't think about the fears, I thought about the possibilities.
By 2030 this might make psychotherapy available to all at near zero cost. And patients wouldn't be embarrassed to talk with a human.
I agree this is a questionable decision. But hyperbolic reactions like this only really get people to stop listening. ‘Ethically lost’? What, all of the hundreds of thousands of people building things across an entire industry?
Tone of my reaction is somewhat irrelevant to their responsibility given they’re the giant company shaping the world vs my tweets or even opeds but... I was just surprised the room just erupted into cheer. It’s 2018. First reaction should be “wait, what? Are we deceiving people?”
Far more people appreciate artistic and athletic achievements for their own sake without *gasping* about whatever else the person has done or what it might mean for the future.
Let the nerds have their tech conference.
we all remember that time lebron james hit a game winner and it led to people not being able to commit genocide on the other side of the world.
if you don't want to think about the implications, that's fine. just get out of the industry.
There is nothing inherent in this kind of service that prevents it from identifying itself as a bot, or whatever. I know Siri (for example) is a bot, even when Siri is pleasant to me. I think the presumption that the goal is deception is difficult to defend.
I guess I'm confused by the implication that this particular usage is abusive, that the restaurant employee had some kind of right to talk to a human or an obvious bot. There are definitely problematic ways to use this technology, but the example here seems totally benign.
And that is why it would help to get multiple perspectives when designing a system like this. Apparently nobody involved thought this was bad, either. I would though, as do many others who reacted similarly.
Is it the tech, or the deception, that's the problem? Imagine I had a human assistant who called the hair salon to make an appointment and never bothered to correct their perception that the person they were talking to was me--is that an abuse?
I think he's saying that he cares about all the people who will be deceived that they are speaking to a human if this tech becomes widespread. Just because he himself hasn't yet interacted with this tech doesn't mean he can't find it alarming.
It’s meant to deceive people under the API level rather than above it… service workers and other people considered too low status to be afforded the respect of being informed they are talking to an automated system rather than a real person.
I can't really read their minds or know what they thought but it is deceptive. They should have safety, security, privacy and paranoia teams in the room from the first moment and bake such considerations into everything before anything gets off the whiteboard, let alone demo'ed.
So, I've seen quite a few different products demoed at various states of design over many decades. Demos, in my experience, actually are quite often a way to generate useful critical feedback. Now, you may have had a different experience with the products you have designed.
I’m ex-Apple, so that’s the model I’m working off of when I see demos like this. The product, being shown to a dev conference is substantially a representation of what is conceived to be the shipping design.
If Google does it differently then 🤷‍♂️🤷‍♂️
It's 2018. If we learned anything, I'd hope that it would be that we can't put off safety, security, privacy, society etc. considerations until we're way into product development. If it is ready to demo, that stuff should already be in. Otherwise, not ready to demo. It's buggy!
When you're talking about "how they are represented to people," you're talking about something external to the technology itself. A good painting doesn't become less good simply because it's misrepresented as (say) a Rembrandt.
I'm assuming we're all against deception. I assume also that we all agree that some version of this tech can be used to deceive. Where I part ways is in the presumption that it cannot be deployed non-deceptively or otherwise innocuously.
I don't know of any sensible argument for such a blanket statement. "Innocent until proven guilty" is excellent jurisprudence, but ethics is not a subset of criminal law. Ignoring evidence because it's more comfortable to cling to a falsely neutral position isn't sound ethics.
It's a product. You don't need to buy it. It'll lead to more appointments being made hopefully. And making more human sounding voices is amazing for people who need Alternative & Augmentative Communication. This will lead the way for speech devices to improve
"It was intended to sound like a human." That is deceptive. Further inferring isn't required. Even if it was, considering Google's track record, we have no reason to give them the benefit of the doubt.
Incorrect. Mr. Moviefone didn't pattern its speech in a way that implied false pauses to think. Sounding more clear and comprehensible is not the same as trying to sound like a human speaker.
Agree; the umms may put the human at ease, which is good UX design.
I’m more interested in the feedback and customization: The system can detect gender (and culture?) from the voice, and adjust its voice and speech patterns to maximize intended outcomes. That’s new.
Would they need to put the human at ease for reasons other than the human becoming aware they were talking to a bot when they thought they were talking to a human? Why not start with "Hi, this is Google Assistant calling…" then?
It seems obvious to me that one would normally want Google Assistant (or Siri, or whatever) to self-identify at the beginning of a returned call (which I guess is the scenario you're getting at). The fact that this is possible suggests that the tech isn't categorically deceptive.
Most tech people will be amazed at this at first. So no need to assume they are all “lost”. People need time to process things. Not everyone looks at things from your perspective (trying to catch the evil tech people deceiving the world).
This is an amazing technical achievement. It is just brilliant that they can do that. And I give it 99% chance that they will add a disclaimer in the real product. Just like they clearly mark ads in searches. The demo wouldn’t have been great if it started with “I am a robot”.
I'd love to know what actual fields of arguing (political campaigns, litigation, etc.) allow you to declare your own tone irrelevant. In any event, I look forward to being a dick on Twitter and informing people that my tone is irrelevant.
Well I sense your alarm, but JFC did you get this angry when gmail started offering canned replies? If I click one of the canned replies the machine supplies, or in the next step, I let the machine autochoose a reply for me, is this not the same 'scandal' i.e. deception?
My view is that if it is /voice/ rather than merely text in which the 'deception' takes place, it is more visceral to know it is not a real person. But I don't think the response is to erupt on the ethics of SV, that's just piling on. The boundary needs evolving, but it's hard.
You talk about 'reliable signals' between human and machine as if this is a) possible b) the best idea. Given that dating sites massively configure actual relationships algorithmically, is there a way or good reason to separate 'organic' from 'engineered' coupling?
In the end, intent is all. I agree some kind of accountability and moral transparency is urgent - yes - but I don't believe you get that by insisting on a simple bright line between humans and machine, given such universal existing use of simulacra and automata.
I would add this after a few decades of thought on this very subject. Imagine if this was a prosthetic voice for a human. Would the disclosure still be needed? Would we require a test that the prosthetic was “necessary”? Slippery slope folks...
Many reasonable uses for synthetic voices, with disclaimers to normalize it even, as replacement or as extension, sure. Start it off on the wrong foot—it gets perceived as attached to deception and is cheered on wildly as such—you lose possibilities. The room should have gasped.
While I, too, am concerned about the implications of how this technology rolls out, surely there is room to appreciate the wonder of dangerous and powerful technology as well as cautioning about its use.
I don’t think it's slippery or a slope. It depends on whether the prosthetic is a thought- or text-to-voice tool vs. an agent acting of its own accord. If it is just a sound generated by a human then no disclosure is necessary; but if it is a separate entity acting on behalf of the human then yes, disclose
If this is the pinnacle, then that judgment isn't too harsh. Why doesn't the Valley focus on solving real problems? Instead of just looking to disrupt society and hurt people and their livelihoods so a few geeks can get rich.
Yes, but at what cost?
It must always be about the cost.
'Move fast and break things' might work in the Valley, but the Valley has never been about looking after people. When society breaks, does that leave us better off? Just externalising the cost can't be the answer always.
Check out @The_Maintainers for pushback against the Innovation propaganda
@A_R_G_Olabs is committed to applying open source ML to support local institutions.
Ex: We use open source CV to help cities collect the Ground Truth and make fairer decisions such as counting potholes.
I think the relevant question is how google got this far without realizing the system should announce itself to the human. It is legitimate to wonder what led them to the conclusion that deception was better than honesty, and what that says about google as a company.
Yes. Ethically lost doesn't mean evil. Just that they don't have a set of instincts to filter out bad stuff right up front. Can evaluate the bad applications of tech from the good long before they demo. Certainly tech folks don't believe they need ethics. It's a "nice to have".
at the same time there's something wrong with their review processes where these obvious flaws aren't caught before the demos or before updates/redesigns are launched. it does come across as an ethical failure that no one at any of the stages of review can see them.
The question isn't whether people are building worthwhile things. It's that many in Silicon Valley seem to conflate "we can do this" w "this is something we should do." And there is little thought to the social consequences of products/how they could be misused in the real world
It's like keeping people alive with technology; just because you can does not mean you should. Rather than throw it out there for anyone to use, set some safeguards and awareness before you open the Pandora's box.
You’re tone policing and you’re wrong. It’s not difficult to imagine hundreds of thousands of people being ‘ethically lost’ while focusing on how impolite it is to mention that they’re ethically lost because they’re cheering at asymmetrically deceptive technology.
I can't wait til my elderly relatives are being scammed out of their money by Google-bots on an industrial scale. Or until my small-business owning relatives are inundated with calls about open-hours or inventory levels to fill some database somewhere.
They don't even work on their beta 'Voice Access' app, which should be a simple and obvious thing for folks like me. It's not sexy enough, apparently. I'd work on it myself, but I became disabled during my schooling for AI, ironically :D
It’s really alarming if the same deceptive technology is being used to make calls to representatives or MPs to influence/justify their voting decision.
It’s wrong at an individual level and can go massively wrong if done at scale.
It will not be hard to require adding a beep (a la when we used to do that when recording someone) or a sentence ("I'm Sally's Google Assistant..."). None of the robots calling me now identifies itself! These are technical demos. They're proving points. Products can be different
It's about the nuances of what can be done vs what should. This deserved cheers for the tech accomplishment. But is it right to have bots interacting with people in this way? The problem is that the tech world doesn't even see that as a discussion worthy of their time.
I love Google and Pichai, but I thought the keynote was tone deaf. Like you talked about on TWIG, there was no moment of recognition of where we are right now. Nadella did that beautifully at Build. Google should be sobered by Facebook's stumbles, not silently gloating about them
Google celebrating new ways to deceive humans interacting with bots into thinking they are interacting with another human is indeed horrifying. That so many commenters fail to recognize that is also horrifying. Thank you @zeynep for continuing to ring alarm bells.
But if a bot calls you, says it's a bot, but sounds human in every way, what would you conclude? Also, the deception is in WHY the call is made, not in the voice, or if it's human or a bot. Now, if you ask if it's a bot, it should tell the truth.
Where did you think we're going with virtual assistants and artificial intelligence? And how is it deceiving? is the person on the other end hoping to talk to a human being and will they even care? There's this theory that justifies how you feel.
They’re in a mind tunnel of tech for tech’s sake. Just like we have medical ethicists, I’m starting to think we need a new specialty of technology ethicists. But first the techbros need to realize they have a problem.
Sadly, you’re probably right. It always comes down to either financial incentives or regulation. Unfortunately, the societal impact of some of these technologies are very hard to quantify in terms of harm. 1/2
By the way, that specialty (tech ethicist) already exists, and many of us do actively consult with tech companies. But there are far too few of us for the scale of the problem, so it’s a bit like trying to redirect an entire fleet of ships with a rowboat and a megaphone.
All bots should be mandated (self-regulation?) to play a tone/sound snippet that identifies it as a bot/non-person. This universal tone doesn't have to be obnoxious. It simply sets the other party straight as to whom/what it's speaking with. Then it can proceed on, chatting away.
If the AI is fully autonomous, you don't know what it would actually say. That's one difference to start. And I care about what something is saying or doing that involves me without me being present. Imo you're giving up your own autonomy to a robot. No thank you.
I think not telling is the best solution. If they were told they were talking to a machine they would react differently. And if the machine is doing a human job why should it need to announce itself as a machine? Would a shop ask a caller their race?
However we react to this, it doesn't change a thing. Why? Because you cannot tell some engineers to stop developing things that they can easily achieve. Telling people to stop AI from advancing is far from effective
So put the same amount of effort into developing an ethical framework as went into developing the tech. The two aren’t mutually exclusive.
We know the genie is out of the bottle. That doesn’t mean human beings should forfeit all responsibility for how it is to be used.
To draw a new line you should place a dot first. I know exactly what you’re saying here and I’d love to agree, but it has to start somewhere. What can we do about the fact that those who are capable of creating new tech often aren’t considerate of how it can be misused?
Today is the day! Four years in the making, we're so incredibly proud to launch the Hero Arm - the world's first medically approved 3D-printed bionic arm, and the most affordable bionic arm ever 🎉 openbionics.com/hero-arm #BionicHeroes #HeroArm
The idea that calling somewhere to set an appointment constitutes human interaction is dubious at best. This is the kind of interaction that's easy to automate because even if we limited ourselves to 1960s tech, we're essentially using other humans as machines.
eh, there's a tendency to shit on literally anything coming out of the valley these days.
Not everything is a juicero.
And whats the risk really?
That maybe a machine instead of a dude will try to sell you boner pills?
Oh, I have "Microsoft" calling me "about a virus on my computer" at least weekly on different numbers. I always think of the naive folks. This tech will be much cheaper because they don't have to pay a warm body to do the dirty work.
We are in an AI arms race. If "Silicon Valley" changes its direction or slows down innovation to explore ethical concerns at length, Chinese companies will quickly take over this research. Where AI-powered options are more effective, they will win and will be widely adopted.
That's your opinion. Mine is that the result would be helping people like me with severe anxiety and phobias actually be able to make important phone calls without getting the shakes and immediately throwing up afterwards.
If I hear a voice that sounds like a person, I assume it's a person. That's a big deal to me. Deception is harmful whether or not a person knows about it. If the voice DECLARES itself non-human, that's a different situation.
^ In that hypothetical, @sfmnemonic, you'd be aware you're talking to an AI (Siri). The objection @emilynussbaum is making to Google's demo is that the person talking to the AI was unaware that's what was going on.
Well, we're communicating via printed language. That's one approach. But even if tricking someone with a fake voice were the best pragmatic solution for you, it would still be ethically questionable, whatever the person's motive.
Also, this is going to hurt people with social anxieties who may benefit from these technologies. Have a potentially useful tool be associated with deceit for the sake of convenience and then it will be useless for those who actually need it. This does the most disservice to them.
She's saying that if people associate the fake voice with deception—if the tool itself is seen as a scam, a hoax, a trick, something cruel—then that reputation will also smear the people who use it. So if you use the tool, you'll be seen as a conman who lies.
I don't see why a natural-sounding voice can't be identified as an automated service. It seems difficult to justify the claim that it's inherently deceptive. I guess if you assume Google is evil, it gets easier.
The biggest issue I have with it is that it PRETENDS to be a human. There's nothing wrong with a robot speaking fluently, but wasting time with synthetic disfluencies and human idiosyncrasies doesn't help anyone and doesn't serve any purpose but to be deceptive.
I don't care about "who." If you create a voice that's designed to sound like a human, that doesn't indicate that it is not human, to interact in a context where the listener can't tell it's not human, you're creating a deceptive tool. That's a fact, whatever the intent.
Look, if you turned out to be a Godwin-bot, I would be pissed and hurt at the energy I wasted debating this with you, especially because I know you. Obviously, there is deception in many places online, but that does not mean deception is okay or desirable.
No one is happy about those either. Or deep fakes. At the least, it should introduce itself as a Google scheduling assistant.
Personally I’m waiting for these to get fielded and wind up in an infinite ping pong loop with another bot.
To be clear, I'm totally opposed to deceptive uses of this tech (or other UX tech). But I don't regard it as categorically deceptive. And demos are just demos. (I've also seen lots of concept cars that never got to be street-legal.)
That's a really good point — so much of communication (internet, phone, and otherwise) is based upon trust. I trust that you're Mike Godwin and you trust that I'm Jordan Matelsky, and I would feel equally deceived if you were Mike Godwin's Robot.
A friend mentioned not being able to tell the diff between Amazon's chatbots and real support humans: I think that's a success, where machines are just as good as humans at conveying the necessary information:
My analog here is a bot typing "Hm, let me go find out" and then waiting to deliver a response, even if it had a perfectly good answer ready for you immediately...Humanness as a goal, instead of a comforting byproduct
I'm not sure there was a conscious effort by the Google designers to 'pretend,' but the end-result is that the behavior includes functionality (such as conveying information through speech) and.. cruft, like "ehmmm", which serves only the purpose of hiding robotness. 1/
I don't think that this means that Google engineers got together and said "let's fool some folks!"
...but an efficient, to-the-point, no-disfluencies, humanlike voice would serve the purpose. Adding unnecessary human elements is COOL (!!!!) but... ¿why? Aside from coolth 2/
That's what I think too—and I think it's kind of a great example, in miniature, of a problem with the entire tech industry. Their intent doesn't have to be malicious for it to be a dicey situation, either.
On the one hand, the inference seems to be that this tech is an inch away from being deployed deceitfully. But now it's just "some engineers just got overly excited"? I know some pretty ethical engineers, and any large-enough team will have some.
I'm not bothered by talking to a bot (human-machine teaming is what I do for a living) but it bothers me that the system reduces efficacy to match "my expectations" of how a human should mess up
more words here, if one has the patience for MY disfluency
The example I think of is my mom (may she R.I.P.), who likely would have found a Google Assistant that spoke with human flaws more approachable if, for example, she were calling a number to get restaurant or car-repair information.
I don't have a problem with feeling a bit sad, confused, hurt and nervous about the realization that the thing you are pouring your emotional skills toward is automated & has no response back. Emotions like that are fine with me.
Many in Silicon Valley have shown they don’t deserve the benefit of the doubt. Google for years has recommended horrifyingly racist and false content about the Obamas and falsely claimed it couldn’t do anything about it. Our trust must be earned not taken for granted.
Google intentionally created a bot that pretends to be human. That is the definition of deceitful. The point is to trick real people, to get people to respond in a way that Google wants them to respond. Substituting machine responses for human interaction is dangerous.
You haven't thought this through. If the only reason you can think of for developing this technology is to deceive someone else, I want to say that says more about you than it does about the tech. (Sorry! I don't mean to be hurtful about this!)
I am thinking of it as a programmer. Why would umm and ahh be good speech training? The Siri comparison is faulty because Siri does not pretend to be something other than the voice of a search engine.
Suppose someone who can't speak clearly wants to use it to speak for them without giving away that they have a disability? They'd like this.
Do you think people who wear contact lenses are inherently deceptive about their needing to wear corrective lenses?
But software that can sound realistically human can be used as a "corrective" tool for people who may want to sound human. And Google developing such tools can translate to work for people with disabilities etc.
People who would benefit from accessibility aids built on technology like this suffer discrimination all the time. Reducing that discrimination by providing alternative communication methods that don't discriminate is a Good Thing.
My wife is Cambodian, and although she speaks English pretty well, she speaks it with an accent. She thinks it's fun to practice conversationally with Siri. To the extent Siri gives naturalistic responses, it's actually more fun for her.
What would be the purpose of adding "um" and "ah" other than to deceive? If this is for accessibility purposes, the voice would serve just as well without that feature. The only purpose would be to make it seem like a human is at the other end when there isn't.
Mike, I know - and am sympathetic to your angle here - all AI systems are in effect trying to emulate human responses - to be more accurate in MT is to be closer to human. But to introduce intended flaws does _feel_ like an effort to deceive rather than an effort to be accurate.
Look, I feel FURIOUS every time I get one of these phony sales calls that coughs to sound real. I bring a full emotional response to a person vs a machine. A robot faking humanity both wastes my energy and fucks with my head. I don't care about intent, that's the effect.
I didn't listen to the demo, Clive told me about it. But what's with all this faux-naivete? It's pretty obvious that a voice designed to sound exactly like a human voice is, if used, going to trick listeners, unless it says, "HI, I'M A ROBOT!"
Why should it self-identify? Why can’t a computer talk to a human? What is it doing that is wrong? Why does the receptionist care whether I made the appointment or my assistant did? I really don’t get this latest techno panic and I am far from a Google apologist or fan.
Sometimes the self-identification is implied in the exchange. If I begin my query with "Hey Siri" it doesn't matter how naturalistic the responding voice is--I don't imagine that I'm talking to a real person.
I'm okay with self-identification as a humane requirement. Sometimes people strike up conversations with the voices they interact with on the phone. (I even do this myself from time to time.) I'd prefer to know I'm not talking to a program in at least that subset of cases.
Rules about when and how a bot has to identify itself as not human—and how and what a human can disclose—need to be a part of any interactive interface, IMHO.
Plenty of IVR and text support chats are already bots with fake names. We don’t complain because we called them. ;)
Look, do you resent my theorizing why you presume that this tech is all set up to be deployed deceptively by design? Google-is-evil (or thoughtless, or whatever) seems to be where Occam's Razor has led some people.
I made a point about AI interaction design: that signals (of AI) are critical and ethical, and that systems that leave these out, or worse, insert signals that are intentionally human/flawed, represent bad design. That's all. Occam should find that simpler than Google-hating.
In the absence of a global body to oversee AI, Google Duplex can get out of hand & be used against unsuspecting people. Google isn't just a US company, it has global reach including war zones. GD can perfect accents; criminals/propagandists can now hide original accents for deception
I expect that people will adopt the strategy I am currently using: I simply don’t answer the phone to unknown numbers. I think we are going to have to switch to an opt-in system for telecommunications that requires an introduction and consent.
Ok I understand you feel the machine acting human is deceptive. It is a strange machine with a task to perform.
So other than the aforementioned deception, what's the difference between that strange machine and a stranger calling to perform the same task?
I doubt they’ve even considered it, I see no evidence that they have, and I stopped taking tech companies at their word long ago. I also don’t think this even has potential for enough benefit to change audio recording laws. Why not just an open scheduling API platform?!
That would solve the stated goal with zero r&d effort, a handful of engineers, and a nifty google cloud product enhancement. There is another motive here and considering the tech’s intended operating mode includes deception by design it’s going to make people uncomfortable.
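For contrast, a sketch of what the open-API alternative could look like (the endpoint shape and field names here are invented for illustration, not any real Google product): the caller's assistant sends a structured request instead of phoning a human and imitating one, and the agent self-identifies by construction.

```python
import json

def handle_booking_request(payload: str) -> str:
    """Hypothetical server-side handler for a machine-readable booking call.
    A real service would check a calendar; this sketch accepts any slot."""
    req = json.loads(payload)
    return json.dumps({
        "status": "confirmed",
        "party_size": req["party_size"],
        "time": req["time"],
        "caller": "automated-agent",  # the bot is labeled as a bot by design
    })
```

No speech synthesis, no disfluencies, no deception question at all — which is the point of the "zero R&D effort" argument above.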
Get over it folks this is the future. Jobs are changing and so we need to keep up. You can’t stop progress. Australia needs to change its curriculum in schools to teach kids about being innovative so they can adapt. We can never go back to the past.
Lol now you sound like me over the last 2 years. I'm glad you finally agree they haven't learned a thing. Do you really think for a second they suddenly plan to change? Please. Time for us to set up groups and plan, not just write about this.
I found the whole Google I/O keynote yesterday very scary. From voice assistants, to cameras, to maps and new Android P, everything is designed to collect so much data about the user. What happened with Facebook could be a very small glimpse of what might happen with Google.
The problem is, they have a deep hate for humanity. That's why they are trying to get a robot in every profession there is, and leave every human without a job. Plus, they want to interface that with AI, and that will certainly be our end as a species.