
Google Assistant making calls pretending to be human not only without disclosing that it's a bot, but adding "ummm" and "aaah" to deceive the human on the other end with the room cheering it... horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing.
553 replies and sub-replies as of May 10 2018

That the many in Google did not erupt in utter panic and disgust at the first suggestion of this... is incredible to me. What of Google's famed discussion boards? What are you all discussing if not this?!?! This is horrible and so obviously wrong. SO OBVIOUSLY WRONG. *headdesk*
As digital technologies become better at doing human things, the focus has to be on how to protect humans, how to delineate humans and machines, and how to create reliable signals of each—see 2016. This is straight up, deliberate deception. Not okay.
I think I understand this perspective but I don't quite understand the degree of "horrifying". This is fine, at scale it wouldn't be, right?
I agree with that. Is it enough to preface any conversation with the fact that it is a bot?
Shades of R. Daneel Olivaw from Asimov.
I was thinking of Asimov, as well
Then it depends how it's used in practice — who can use it, for which use cases, are people notified that they're talking to a robot, what is being logged, where does the training data come from, etc. I don't understand why this tech is inherently wrong.
I don’t think this is inherently wrong. That said, studies on robots (assuming perhaps this might apply here) show that people like robots that look happy, but look like robots. Presumably, this could apply to these types of features. /1
Communicating at the beginning of the call that this is a robot/automated system (as they do for debt collection) could be a good option. /2
I understand wanting the bot to sound more natural, but the demos I have heard make it seem deceptively human. With verbal stumbles, pauses, umms, etc. More approachable design is good. Deceptive design is bad.
Straight-up deceptive design at its launch; being cheered on by a whole room with nobody saying wait, wait, what? Where are the answers to those questions? Where is notification? Instead "Aaahs" and "umms" are added specifically to deceive humans.
it's not deception. it's a part of conversation that's incredibly important for humans to know the other party is understanding them
and some is masking load times i bet, as lag kills conversations
It's a demo for a crowd of people who mostly just want to see cool new tech. Of course they're not going to do a fine analysis of the societal & privacy aspects. This doesn't mean this will launch as is, without people thinking hard about the implications.
The deceptive aspect depends entirely on implementation. The fact that there's "hmm"s is just making the interaction easier for users. This is very similar to Deepfakes. It's horrifying when there's no consent, but a movie studio could use this tech ethically & responsibly.
This isn't the 3rd grade science fair; this is Google. No demos, no gawking over cool tech without privacy/societal aspects being considered/built into design. It should be the first question, not let's wildly cheer first and then try to put some social impacts stuff as add-on.
(That said, I'd expect societal aspects and privacy to be integral considerations in any reasonable 3rd grade science fair in 2018. Didn't mean to offend 3rd graders😬)
As a former 3rd grader, I am not offended in the least. Rather, I am SO glad @zeynep is reminding us (with urgency) to design better tech.
I'm not involved in this particular product, but I'm sure that some folks have been looking at this from this perspective & building it into design. This is how the process works. But privacy folks don't decide what gets highlighted at I/O keynotes. It's not the important part.
Take what Apple is doing on Lightning Connector protection. I'm sure that's not what they'll highlight in their keynotes, either. It's not going to be cheered at by the crowd. Nonetheless, it's awesome that they do that, and will have real impact.
iOS 11.4 Disables Lightning Connector After 7 Days, Limiting Law Enforcement Access
The iOS 11.4 update, currently being beta tested, includes a USB Restricted Mode that introduces a week-long expiration date on access to the...
macrumors.com
That's not how the process should work. Privacy and social implication people should be leading stuff, and the demos should discourage gawking at unconsidered tech just plopped out like that. Too far into all this, and too powerful, to be an elephant in a china shop.
(Well, maybe some elephants are very delicate in china shops. No offense to elephants either 😬.)
I read somewhere that they actually do inform the person they’re calling that it’s a robot, but that they left out that part for dramatic effect in the demo. Not sure where though
I don't think I/O demos matter as much as you seem to think. It's all over tech news, but most people will discover this via actual implementations. When I do privacy reviews for Google products, the marketing strategy is not my main focus. I don't think it should be.
Need to change what we cheer, how things get released... The "ooh, this is cool" has driven too much product. The results aren't good.
The marketing is the main signal visible from the outside, but that doesn't mean that it's the only or main product driver. I agree it would be nice to have more of our privacy/security/societal work featured externally. I'm trying to push in that direction. It's not easy.
Cosign. The past two decades have given us baselines for “cool new tech”. Ain’t good.
This whole thing feels so wrong. Decreasing human contact even more than today and dumping AI training on small business owners. Will people decide restaurants based on if the assistant can understand them? I don't think this is thought through at all
"thinking hard"
“cool new tech” The problem without context or moral implications.
Why do people need to know whether they are talking to a bot or a human?
It's going to get far, far worse.
what about people with social anxiety? people with autism? customers who aren't fluent in english (1/10 of mine)? employees who want to return to their customers quickly? older people who understand conversational ui but not esoteric interfaces like apple maps?
Yeah, best way to ruin a technology that could be helpful to people by launching it with straight-up deception and fakery.
speaking naturally isn't deception. i bet you avoid eye contact on dates, the fuck? this is conversation not expository writing
We are all so worried about the humans but nobody seems to be taking the GoogleBots’ feelings into consideration…
Also infuriating about this is that it's a bunch of men cheering for a robot with a woman's voice that is going to eliminate jobs that have historically provided employment opportunities for women.
What happened in 2016?
I don’t understand the notion of ‘deception’ for a product that is limited in use. Are you considering its scalability as the issue, or is there a fundamental reason why every human-machine encounter must be signalled to the human? If the latter, are there any sources you may cite?
Silicon Valley is full of people who have been in Silicon Valley bubble, where the goal is to come out number 1 & innovate. This is just following the same patterns as Wall Street. At some point can we admit that the sociopathic business model is the problem & not the people???
But we use deception to protect humans all the time. All the time.
“...how to delineate humans and machines...” There may be technologies which help us greatly in which this delineation might actually get in the way.
In 2018, every single phone call from an unknown number is deception. People are paying humans to deceive you on the phone; they're called telemarketers and scammers. Why are those things not just as bad or worse?
I wonder if it could circumvent the wiretap acts
And yet my family doesn't understand why I want to build a nice homestead away from Mountain View's clutches. This may be why I have a renewed interest in microfilming & Computer Output Microform.
They’re openly trying to greatly limit the number of things humans can do better than computers.
Automation is for the benefit of capital, not labor. They think everyone wants the leisure to paint or make songs.
Just finished off #supermariorpg, and as that game and much other media imply, automation is the enemy. 😕
We need a well thought out version of Asimov's laws of robotics for ai, and bots
all of SV should have to watch old star trek. nothing from the 80s on..lol
Imagine the potential use cases for malicious telemarketing.
Rest assured there will be some devastating memes posted internally about this.
amazing that in an age ppl worry about manufactured news and GMO labeling, active deception gets claps on stage.
The applause was shocking and shameful. To me there's at least three levels of alarm here—that Google would think it was a good idea to develop this, that they would publicly present it, and that their employees would applaud it.
There’s always the accessibility bit for developing such tech; but yeah the deception bit is huge facepalm. But maybe they’ll get the message, who knows. It’s not out *yet*
The public demonstration of their lack of an ethical center, at this late date, is what's scarier than any specific technology. These are the people in charge of our world, and these are the decisions they are capable of making.
As usual, the demo focuses on the "busy parent" - they're not even mentioning accessibility unless I missed it and just as expected, there's no "here are some potential downsides of this tech" (of which, as usual, there are many :))
It's pretty clear to me that they are celebrating being able to tap into "people" with an API--now they can access "offline" knowledge with a phonecall
(I am also disgusted)
externality generation on information processing overhead. it's like dumping garbage on your neighbour's lawn.
The idea seems to have been the genesis of people who find being on the phone a chore (e.g. to order pizza) but don’t understand/care that this is ripe for robocall and fraud abuse
Robo calls have been doing this always. There's a recent one that spoofs a phone number close to mine that has "Emily" from a vacation property that I "recently stayed at." There is an app, tho, that listens for audio fingerprints and kills them.
They should make all personal assistant bots start any call, email or other interaction with the phrase: "This is Peter's virtual assistant..." or "This is Zeynep's AI assistant..."
All of these things start with good intentions by smart techies, but their ambition is blind to the potential. Nobody thinks of the impact of future fake people calling the elderly and lying to them to swing an election, but that's where this leads.
“This update addresses an issue found with advertising-exposure-deficit in certain humanoid client end-points of category 7A5.”
Maybe their first priority was technology and actually making it work; instead of what people would whine about on Twitter?
Those laughs we heard after the "mmms" are from a bunch of retardeds or it's just another use for AI?
I feel like they were more applauding the breakthroughs of the technology. The room was full of developers who have been working on similar tech for years. Why not applaud such a breakthrough?
What is so obviously wrong about it? It isn’t that obvious to me
After what happened to James Damore, I can imagine it must be difficult for Google employees to voice concerns these days - you never know when your concerns will cross the wrong people.
Well, when you consider how YouTube still allows borderline assault prank videos on their platform, pranking random businesses seems a-okay
They are paid employees and are being videotaped? Some of them looked a bit nonplussed at the second (even more alarming) dialogue, I interpret that as concern...
Tech companies have so much influence on our lives that they are essentially political entities that we only “vote” for through our use of their increasingly ubiquitous products. The indirect effects are often not felt immediately, so it’s often an uneducated trade off we make
Silicon Valley is unable to handle ethical questions or understand the impact of innovations on humanity. There's a need for a higher multi-national body of ethicists/lawyers/security/privacy experts that can look at AI with a bird's eye view & make that decision. @Buttarelli_G
You didn't actually read the interviews, did you? The ones where he said that they are wary of the ethical considerations, and that this technology would likely be required to identify itself as an automated assistant? But I understand that ignoring that is more dramatic.
If you don't understand that this is where technology has ALWAYS been going, whether you like it or not, then YOU are rudderless. Best for them to be addressing these issues up front, as they say they are. It's going to happen regardless.
this is all very, very, very predictable. Silicon Valley moves in a more or less straight line.
Your assessment of SV is probably right, though I’m not sure this is an example of said behavior.
It’s a dead on correct assessment, in tech this is called “Dark Design,” a design or UX (user experience) explicitly made to deceive the end user.
I do more than make sausages. Programming that understands natural language is highly beneficial in many very important use cases. And, yes, there are nefarious use cases as well but that shouldn’t stop progress. Tech must innovate and government must regulate.
The RoboCallers are already using “over familiarity” as well as errs and uhhs in well crafted/scripted robocalls. I’m guessing (like spammers) the robocallers will be early adopters unless Silly Valley puts in some kind of UX that designates the call clearly as AI driven.
“Silly Valley” as you call it is one of the most important economic engines in this country, so embrace it, cultivate it, invest in it, encourage it, and, yes, regulate it. Either way, the IT Genie ain’t going back in the bottle.
Imagine the 'social engineering' possible when everyone is a robocaller.
Oh yeah I can already imagine the "love bots" made to lure naive people into phone relationships.
I was thinking more of phishing than catfishing, but that's another angle, I reckon.
I'll Turing Test it. "So what did you think of the NBA playoffs last night?" Or "Stormy Daniels affair"? It will go "hmmm haaa, can I have another scoop of ice cream"...
Or tell a German joke. If it laughs, it's AI.
Knock knock.
Who's there?
Herr Schrödinger's cat.
Schrödinger's cat who?
Kaput.
Because these are definitely natural questions for a receptionist at the hair salon to be asking prospective customers on the phone...
As someone who works as a receptionist that books appointments, I would never ask this over the phone.
I bet Google AI would pass those Turing tests while I (presumably a human) wouldn't pass the first one. Not all humans care about sports, while it's simple to feed that info into a machine.
If you saw a turtle in the desert, on its back, would you turn it over? Tell me about your Mother? Just keep it loose..
Again, this doesn’t provide any real insight into either the ethics or the tech. The point is that google is just releasing these into everyday use. So unless you expect secretaries to TT every customer they have (which is horrible service and bad business), it doesn’t help.
You're right. So there's an opportunity here for voice or verbal captcha
Yes, that’s a great idea. However, since captchas would also be automated, it would become a challenging problem. In essence, training the impersonators and training the captchas would be like a society-scale generative adversarial network. Would paradoxically *accelerate* AI.
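To make the dynamic in that reply concrete, here is a toy simulation of the escalation loop it describes. Everything in it, from the class names to the update rules, is a hypothetical illustration, not any real detection system:

```python
import random

class Impersonator:
    """Toy voice faker: 'skill' is its odds of slipping past a verifier."""
    def __init__(self):
        self.skill = 0.2

class Captcha:
    """Toy verifier: 'acuity' is its odds of catching a fake."""
    def __init__(self):
        self.acuity = 0.2

def adversarial_round(imp, cap, attempts=1000):
    # A fake call gets through when the faker's skill beats the verifier.
    fooled = sum(random.random() < imp.skill * (1 - cap.acuity)
                 for _ in range(attempts))
    # Both sides learn from the round: the captcha hardens against the
    # fakes it saw, and the impersonator reinforces whatever got through.
    # That mutual training is the GAN-like loop the reply describes.
    cap.acuity = min(0.99, cap.acuity + fooled / (10 * attempts))
    imp.skill = min(0.99, imp.skill + fooled / (10 * attempts))
    return fooled / attempts

imp, cap = Impersonator(), Captcha()
for i in range(5):
    rate = adversarial_round(imp, cap)
    print(f"round {i}: fooled {rate:.1%}, "
          f"skill={imp.skill:.2f}, acuity={cap.acuity:.2f}")
```

Both numbers ratchet upward together, which is the sense in which fighting voice impersonators with automated captchas would, paradoxically, train better impersonators.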
So would I tbh
What do you mean by this? That you’re going to Turing test every phone call you receive for the rest of your life? Bc the point of the thread is that Google is just quietly releasing these into everyday use.
It's creepy, more than anything. We're sliding downhill into the uncanny valley, and we're going to go crazy.
I got one of these calls yesterday. Extremely off-putting and creepy.
It was off-putting because you could tell it wasn't a human being.
Oh it’s not that bad — it’s just those misunderstood boys looking for ways to replace the functionality that mom provided before she kicked them out of her basement.
"horrifying..." Dramatic. Spare me...
I think I understand this perspective but I don't quite understand the degree of "horrifying". This is fine, at scale it wouldn't be, right?
If you don't catch it now, it's going to scale. Look at the havoc wrought by DNT, MITM/XSS/CSRF, GDPR - arguably improvements all, but at such a high cost.
that's the thing with computer technology, it never really stays small scale
Success is being measured by how well it tricks real human beings. Are you okay with that?
Also, AI is churning in the background all the time, improving its ability to deceive with every call.
it's horrifying, apparently, to talk to a person with a hearing aid. we must put an end to the ethical nightmare of people using accessibility devices without the consent of everyone else
No. It's deceit. Humiliating for the human. Violates their consent, dignity. Not OK on any single occasion.
ok I'm glad I'm not crazy in thinking that was hella creepy.
Even before getting into context, can we also elaborate on the fact that it is installed without any notification and cannot be uninstalled? It asks for permission to "turn itself on" when clicked on, but why can I not just delete it altogether?
Well, don’t worry about it too much because before long the person answering will also be replaced by a bot. Just millions of bots making millions of landline voice phone calls to each other all day, every day. THE FUTURE!
... and humans getting invoices and receipts and bank statements detailing all the crap corporate bots sold their own personal bot...
I pledge that I will never open Google Assistant for anything. In fact, I’m increasingly migrating away: @DuckDuckGo @FirefoxFocus Apple Maps, Apple Calendar #googlepledge
Replacing google with apple is like replacing cancer of the mouth with cancer of the throat. I have been looking for a privacy replacement for gmail recently. Any advice?
I use @ProtonMail and it's quite good. Mobile integration is a bit rough around the edges, but it works. They use a subscription model, so they don't need to sell out to the ad industry for revenue.
They’re still so caught up in the high of what they can do, they never consider if they should.
SV fundamentally denies the very notion of moral agency.
Reading the replies, it seems like many ppl lack imagination for how badly this can be misused. Just one example: bot armies calling politicians & pretending to be constituents, or calling companies & pretending to be customers, to give (fake) support for a political position.
I believe the guidelines for disclosing AI are yet to be developed.
You can't trust the tech geeks. Oh how they love the power.
will there be audio checkboxes "I am not a robot"? questionnaires as in blade runner?
Oh my God you're so dramatic calm down. It's making appointments for busy people not raising your kids.
Your iPhone has been raising your kids for awhile…
Your standard for parenting is pretty damn low if you think an iPhone can encompass all a parent does to raise their child. Also, I'm 16.
It's how companies strategize. How the military says, "We can't cut defense spending because we need to keep our troops safe." Is somewhat how tech companies operate. They introduce benign use-cases like helping the disabled in order to avoid backlash toward their full ambitions
hahahahah it is hilarious.
Curious about the ethics in relation to robocalls or clearly automated phone responses already in place. Same ethics issue, or does Google Assistant change things?
No different than NCA/ICA SNA panels in the year of Snowden politely pooh poohing fears
Valid concerns. Currently, they've announced a technology, and plan to address the transparency issue before rolling it out as a product/service. From the announcement (ai.googleblog.com/2018/05/duplex…):
Honestly, I found it both unimpressive and creepy, a combination which takes some skill to pull off.
Unimpressive? Right.
Yeah Mark, Ash could pull something like that off in a weekend, but he's just not excited about it. 😂
Indeed! I only get excited about sharks with freakin' laser beams on their heads.
Are you trying to neg Duplex? She has too much confidence to fall for that.
Speaking as a "human on the other end" that takes calls from the public all day long, I would be thrilled to receive calls like this from the bot. You're overestimating society's ability to communicate clearly and with understanding, at least over the phone.
Horrifying is a strong word, i agree to some degree but the only way is forward and technology can’t be stopped. The only variable is time but if it could, it would.
You must have really liked Jurassic Park
If humans prefer talking to other humans over robots, tell me why is it "horrifying" if it creates a better user experience for the receiving party? I'm curious.
I thought it was the opposite of horrifying. I didn't think about the fears, I thought about the possibilities. By 2030 this might make psychotherapy available to all at near zero cost. And patients wouldn't be embarrassed to talk with a human.
I agree. In fact I hope we never have to talk to a human again because of fear of embarrassment.
Completely agree. And think about it - Silicon Valley could be spending all of their capital on things like improving access to the political process, and so on.They aren't.
What about ai-tweeps? Would you despise/belittle them too? Oh my Gosh/Lord! Given due respect they can teach us all we lack...
How do we know you’re human? 😉
I agree this is a questionable decision. But hyperbolic reactions like this only really get people to stop listening. ‘Ethically lost’? What, all of the hundreds of thousand of people building things across an entire industry?
You said it first Benedict...
Tone of my reaction is somewhat irrelevant to their responsibility given they’re the giant company shaping the world vs my tweets or even op-eds but... I was just surprised the room just erupted into cheer. It’s 2018. First reaction should be “wait, what? Are we deceiving people?”
Doesn’t speak well to internal culture that they’d demo like this: put in disfluencies, and go public with no accompanying conversation about implications and just get wild applause instead of a gasp.
Old enough to remember Google Glass…
They were probably cheering the technical achievement.
You're SO CLOSE to getting the point
I was born into the point. Shaped by it. Moulded by it. You merely adopted the point.
Nobody asked about the shape of your head, Harold.
Of course they were cheering the tech. The problem is admiration of technical achievement without considering all the implications.
Far more people appreciate artistic and athletic achievements for their own sake without *gasping* about whatever else the person has done or what it might mean for the future. Let the nerds have their tech conference.
we all remember that time lebron james hit a game winner and it led to people not being able to genocide on the other side of the world. if you don't want to think about the implications, thats fine. just get out of the industry.
Exactly the problem. The technical achievement is all they care about, and not a single thought spared for the ethical questions.
It's not Jurassic Park. There's time for the discussion as tech like this is gradually rolled out.
There is nothing inherent in this kind of service that prevents it from identifying itself as a bot, or whatever. I know Siri (for example) is a bot, even when Siri is pleasant to me. I think the presumption that the goal is deception is difficult to defend.
With respect, watch the demo. It was clearly intended to sound like a human rather than a bot with the "uh", "mm-hmm", and "um"'s thrown in.
If they didn't want to deceive people, the call could've started with "Hi, this is Google Assistant calling".
I guess I'm confused by the implication that this particular usage is abusive, that the restaurant employee had some kind of right to talk to a human or an obvious bot. There are definitely problematic ways to use this technology, but the example here seems totally benign.
It demonstrates a fundamental lack of respect for other people as human beings to casually deceive them in this way.
Why? I'm not trying to troll, but if I were that salon employee, I would be utterly unbothered by this whole experience.
And that is why it would help to get multiple perspectives when designing a system like this. Apparently nobody involved thought this was bad, either. I would though, as do many others who reacted similarly.
Same for, say, click-bait ads like those by Taboola and Outbrain. I’m sure many people are fine with them, but they are part of the constant erosion of trust and of treating each other with dignity.
Is it the tech, or the deception, that's the problem? Imagine I had a human assistant who called the hair salon to make an appointment and never bothered to correct their perception that the person they were talking to was me--is that an abuse?
It’s the deception. Having someone else pose as you is also bad.
I guess I'm more of a consequentialist than you are on this particular case, but I appreciate your explanations. Thanks.
Isn't giving people a demo one way "to get multiple perspectives"?
Remind me again who has been casually deceived? Was it you?
Am I supposed to not care if someone is mistreated just because it happens to someone else? Isn’t that a bit sociopathic?
Who is telling you not to care if someone is mistreated? Or did you mean to address that question to me?
It was a response to your question, which I took to mean “it wasn’t you who was called by a robot and not informed about it, so why do you care?”
That's a weird interpretation. I'm trying to figure out the source of your outrage given that there doesn't actually seem to be *any* victim, at least so far.
You don't consider deceiving people to have any victim, as long as the deceit was successful?
I think he's saying that he cares about all the people who will be deceived that they are speaking to a human if this tech becomes widespread. Just because he himself hasn't yet interacted with this tech doesn't mean he can't find it alarming.
So we're not worried about us, but about a class of future victims that hasn't yet emerged, because we're certain this will be used to deceive people?
Is it logical to conflate a demo with a deployed service? Are you trying to say that this is a completed design, and that it's meant to deceive users?
It’s meant to deceive people under the API level rather than above it… service workers and other people considered too low status to be afforded the respect of being informed they are talking to an automated system rather than a real person.
Wow, can you source that claim? That's a pretty damning factual statement, if Google said this.
It’s clearly my opinion.
If Google truly is deploying this technology in order to deceive users, there ought be hearings about that.
In the meantime, what I’m really worried about is the increasing number of Pixar voice actors who are likely to go unemployed.
Right, it was intended to sound like a human. But you infer that the only reason to do this is to deceive. That's the part that's difficult to justify.
I can't really read their minds or know what they thought but it is deceptive. They should have safety, security, privacy and paranoia teams in the room from the first moment and bake such considerations into everything before anything gets off the whiteboard, let alone demo'ed.
So, I've seen quite a few different products demoed at various states of design over many decades. Demos, in my experience, actually are quite often a way to generate useful critical feedback. Now, you may have had a different experience with the products you have designed.
I’m ex-Apple, so that’s the model I’m working off of when I see demos like this. The product, being shown to a dev conference is substantially a representation of what is conceived to be the shipping design. If Google does it differently then 🤷‍♂️🤷‍♂️
It's 2018. If we learned anything, I'd hope that it would be that we can't put off safety, security, privacy, society etc. considerations until we're way into product development. If it is ready to demo, that stuff should already be in. Otherwise, not ready to demo. It's buggy!
Haven't we also learned not to begin by ascribing a motive to deceive? Because that's what I learned in my own ethics classes.
No… that kind of thinking leaves you open to exploitation by sociopaths. At a societal level it leaves you open to asymmetric information warfare like, say, influencing elections.
What kind of thinking leaves you open to exploitation by sociopaths? I'm inclined to think that if you attribute the intent to deceive, routinely, that amounts to a pathology as well.
Again, different priors.
I’m sorry you’ve been victimized so terribly. So far, it’s only been actual human beings who’ve done me harm (at least so far as I know).
My ethics class was on museum/art ethics, so maybe that’s why the ethical line over representing a thing as something it is not is clearer to me.
You think portraiture is unethical? It's just paint on canvas, but represents itself as human beings. omg. Not to mention forced perspective! (A.k.a. fake 3D.)
I mean counterfeits versus prints. Both reproductions, difference is in how they are represented to people.
When you're talking about "how they are represented to people," you're talking about something external to the technology itself. A good painting doesn't become less good simply because it's misrepresented as (say) a Rembrandt.
And this technology is fine. It’s the deception that is bad.
I'm assuming we're all against deception. I assume also that we all agree that some version of this tech can be used to deceive. Where I part ways is in the presumption that it cannot be deployed non-deceptively or otherwise innocuously.
I don’t think we depart there, either! All they need to do is start with “Hi, this is Google Assistant calling”.
I guess once Tristan Harris left Google there were no more "design ethicists" left.
Mike, if you're going with a logical argument here, I'm not sure using loaded statements helps your cause, but hey, no-one has called the other Hitler yet, so go on lol
I don't know of any sensible argument for such a blanket statement. "Innocent until proven guilty" is excellent jurisprudence, but ethics is not a subset of criminal law. Ignoring evidence because it's more comfortable to cling to a falsely neutral position isn't sound ethics.
I've seen many things at dev conferences that aren't ready for prime time, but I've been around for a while.
and here is the critical feedback they should be expecting
So wait, are we assuming that the called people weren't debriefed after the call? That they didn't have a chance to give feedback on the call? Seems like we're assuming that.
It's a product. You don't need to buy it. It'll lead to more appointments being made hopefully. And making more human sounding voices is amazing for people who need Alternative & Augmentative Communication. This will lead the way for speech devices to improve
I think we have different priors about human motivation here, which is why we see this differently.
It seems to me you're making assumptions about my "priors" that are just as difficult to justify as your earlier assumptions.
"It was intended to sound like a human." That is deceptive. Further inferring isn't required. Even if it was, considering Google's track record, we have no reason to give them the benefit of the doubt.
'"It was intended to sound like a human." That is deceptive.' No, not necessarily.
No more than Mr. Moviefone was deceptive.
Incorrect. Mr. Moviefone didn't pattern its speech in such a way as to imply false pauses to think. Sounding more clear and comprehensible is not the same as trying to sound like a human speaker.
So your evidence that it's deceptive isn't something built into the demo but something you infer from "Google's track record"?
Agree; the umms may put the human at ease, which is good UX design. I’m more interested in the feedback and customization: The system can detect gender (and culture?) from the voice, and adjust its voice and speech patterns to maximize intended outcomes. That’s new.
I know plenty of actual human beings who code-switch their spoken language depending on whom they're talking to. I'm going to stick my neck out and say these folks are not trying to be deceptive.
Would they need to put the human at ease for reasons other than the human becoming aware they were talking to a bot when they thought they were talking to a human? Why not start with "Hi, this is Google Assistant calling…" then?
It seems obvious to me that one would normally want Google Assistant (or Siri, or whatever) to self-identify at the beginning of a returned call (which I guess is the scenario you're getting at). The fact that this is possible suggests that the tech isn't categorically deceptive.
Most tech people will be amazed at this at first. So no need to assume they are all “lost”. People need time to process things. Not everyone looks at things from your perspective (trying to catch the evil tech people deceiving the world).
This is an amazing technical achievement. It is just brilliant that they can do that. And I give it 99% chance that they will add a disclaimer in the real product. Just like they clearly mark ads in searches. The demo wouldn’t have been great if it started with “I am a robot”.
i receive up to five or six phone calls a day from automated services. why the hell is it only unethical when i get to do it?
Only 30-something percent of children and young adults interviewed today think lying is morally wrong, so this is right up our society's alley, sadly
Google's core business, and practically only source of income, is deceiving people. Do you think there's something honorable in serving ads (or "more accurate" ads) that this reaction contradicts?
wait, how does this work? if I'm really offended by someone in power, I can go nuclear and then say "tone of my reaction is somewhat irrelevant"?
It is possible massively to over-react in unreasonable and indeed counter-productive ways, and that does not cease to be the case if you call it ‘tone’.
I'd love to know what actual fields of arguing (political campaigns, litigation, etc.) allow you to declare your own tone irrelevant. In any event, I look forward to being a dick on Twitter and informing people that my tone is irrelevant.
Well I sense your alarm, but JFC did you get this angry when gmail started offering canned replies? If I click one of the canned replies the machine supplies, or in the next step, I let the machine autochoose a reply for me, is this not the same 'scandal' i.e. deception?
Are you equally scandalised that a huge amount of digital cleanup is done on non-adventure movies without telling the audience? Is this a scandal of deception?
My view is that if it is /voice/ rather than merely text by which the 'deception' takes place, it is more visceral to know it is not a real person. But I don't think the response is to erupt on the ethics of SV, that's just piling on. The boundary needs evolving, but it's hard.
You talk about 'reliable signals' between human and machine as if this is a) possible b) the best idea. Given that dating sites massively configure actual relationships algorithmically, is there a way or good reason to separate 'organic' from 'engineered' coupling?
In the end, intent is all. I agree some kind of accountability and moral transparency is urgent - yes - but I don't believe you get that by insisting on a simple bright line between humans and machine, given such universal existing use of simulacra and automata.
I would add this after a few decades of thought on this very subject. Imagine if this was a prosthetic voice for a human. Would the disclosure still be needed? Would we require a test that the prosthetic was “necessary”? Slippery slope, folks...
Many reasonable uses for synthetic voices, with disfluencies to normalize it even, as replacement or as extension, sure. Start it off on the wrong foot—it gets perceived as attached to deception and is cheered on wildly as such—you lose possibilities. The room should have gasped.
Really, great point! I agree.
While I, too, am concerned about the implications of how this technology rolls out, surely there is room to appreciate the wonder of dangerous and powerful technology as well as cautioning about its use.
Agree totally. My assumption is the reason they don't start with "I'm a google assistant trying to make an appointment" is that many people will hang up at that point, justifiably.
I don’t think it’s slippery or a slope. It depends on whether the prosthetic is thought- or text-to-voice vs. an agent acting of its own accord. If it is just a sound generated by a human then no disclosure is necessary; but if it is a separate entity acting on behalf of the human then yes, disclose
When you use an absurd absolute in reply, you've lost. Rightly so. She's right.
Um... the absurd absolute is in her tweet. So, I take it you agree with me.
No. You put it in your reply, as you know.
If this is the pinnacle, then that judgment isn't too harsh. Why doesn't the Valley focus on solving real problems? Instead of just looking to disrupt society and hurt people and their livelihoods so a few geeks can get rich.
Computers we can talk to, and that can understand speech, is about as real a problem worth solving as one can possibly imagine.
Yes, but at what cost? It must always be about the cost. 'Move fast and break things' might work in the Valley, but the Valley has never been about looking after people. When society breaks, does that leave us better off? Just externalising the cost can't be the answer always.
Check out @The_Maintainers for pushback against the Innovation propaganda. @A_R_G_Olabs is committed to applying open source ML to support local institutions. Ex: We use open source CV to help cities collect the Ground Truth and make fairer decisions, such as counting potholes.
I think the relevant question is how google got this far without realizing the system should announce itself to the human. It is legitimate to wonder what led them to the conclusion that deception was better than honesty, and what that says about google as a company.
Yes. Ethically lost doesn't mean evil. Just that they don't have a set of instincts to filter out bad stuff right up front. Can evaluate the bad applications of tech from the good long before they demo. Certainly tech folks don't believe they need ethics. It's a "nice to have".
Yes. All of them. Literally everything they do demonstrates that they have no morals or ethics and don't care about anything beyond getting rich.
at the same time there's something wrong with their review processes where these obvious flaws aren't caught before the demos or before updates/redesigns are launched. it does come across as an ethical failure that no one at any of the stages of review can see them.
🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄
This industry has a very poor track record of asking itself "should we build this?". If something can be done, it will be, because who knows, it might be worth millions! See also: Facebook.
It matters little if SWE no. 450 in SV has a minor problem with this when the Google CEO gets through this demo without once mentioning any potential downside.
The question isn't whether people are building worthwhile things. It's that many in Silicon Valley seem to conflate "we can do this" w "this is something we should do." And there is little thought to the social consequences of products/how they could be misused in the real world
It's like keeping people alive with technology; just because you can does not mean you should. Rather than throw it out there for anyone to use, set some safeguards and awareness before you open the Pandora's box.
You’re tone policing and you’re wrong. It’s not difficult to imagine hundreds of thousands of people being ‘ethically lost’ while focusing on how impolite it is to mention that they’re ethically lost because they’re cheering @ asymmetrically deceptive technology.
Having lots of people doesn’t stop groups from getting ethically lost, even in relatively decentralized sectors like tech. (A twist on @sfmnemonic’s law)
I mean, if your model of every user is an able-bodied adult who is just too lazy to pick up the phone... But I saw this and was hyped by the accessibility implications.
They’ll ruin it for accessibility use if it gets associated with deception right off the bat. People with disabilities should be most alarmed.
I can't wait til my elderly relatives are being scammed out of their money by Google-bots on an industrial scale. Or until my small-business owning relatives are inundated with calls about open-hours or inventory levels to fill some database somewhere.
This could already happen and there's nothing stopping it so it's unlikely that it will be the case.
Compare spam email to physical chain letters
Most businesses already post opening hours online or in Google Maps so the number of cases where people enquire about redundant info should be very small
You are very optimistic
Elaborate please. You lose me here. I do believe pitching it cloaked in “we can trick people for you” is bad, and I agree with your previous points, but I have no idea what you mean here
They don't even work on their beta 'Voice Access' app, which should be a simple and obvious thing for folks like me. It's not sexy enough, apparently. I'd work on it myself, but I became disabled during my schooling for AI, ironically :D
It’s really alarming if the same deceptive technology is being used to make calls to representatives or MPs to influence/justify their voting decision. It’s wrong at an individual level and can go massively wrong if done at scale.
Do you mean/argue MP’s should not be better informed/briefed to save time?
Imagine an MP's office getting hundreds of constituent calls and interpreting that as a genuine groundswell of support for [Position X], when in reality it's one person with a bunch of bot helpers.
Yes, this. It's the Facebook/Twitter fake-news-bot phenomenon unleashed on the physical world. Think today's confusion over "what's real?"is bad? Just you wait. Ugh.
Replies like this make me hate Twitter, the internet, and good chunk of humanity.
Yup, hate the humanity and love us. Resistance is futile.
It will not be hard to require adding a beep (a la what we used to do when recording someone) or a sentence: "I'm Sally's Google Assistant...". None of the robots calling me now identifies itself! These are technical demos. They're proving points. Products can be different
Too many Google Buzzes for such faith.
I'd like to add the question: If (when) consciousnesses aren't distinguishable between humans & machines anymore, is this signalling still necessary, or does it even become discrimination?
That beep will be the last thing most of us hear. In an ironic twist, we will know it is a robot snuffing us out, and they will revel in its use.
This nonsense about protecting the rights of machines is beyond absurd
Agree 100%. Silicon Valley is out of control, and has been for a while.
i wonder if machines will ever reach a moment where humans classify them as conscious or if we'll keep moving the bar saying..well it's "this" but it's not full human consciousness...
At that point it'll be discrimination. But before admitting that, we'll have the biggest debate over "human rights" we've seen yet.
It's about the nuances of what can be done vs what should. This deserved cheers for the tech accomplishment. But is it right to have bots interacting with people in this way? The problem is that the tech world doesn't even see that as a discussion worthy of their time.
This particular interaction seems totally unproblematic to me. But combine this tech with something like deepfakes and the problems start to come into view.
You're jumping to many conclusions here.
I love Google and Pichai, but I thought the keynote was tone deaf. Like you talked about on TWIG, there was no moment of recognition of where we are right now. Nadella did that beautifully at Build. Google should be sobered by Facebook's stumbles, not silently gloating about them
When the rudderless control the steering function evoked by the "kyber" of cybernetics
Do you want #Westworld? Because this is how you get Westworld.
If we hit '2' on the keypad will the bot remove us from their list?
I believe you have to ask Alexa for permission first
Reminds me of the AI assistant in the novel #Origin by @AuthorDanBrown
Google celebrating new ways to deceive humans interacting with bots into thinking they are interacting with another human is indeed horrifying. That so many commenters fail to recognize that is also horrifying. Thank you @zeynep for continuing to ring alarm bells.
So much drama! Wouldn't it be nice to let a bot work its way through the phone tree maze ("Press 1 for English... Now select from the following 9 choices...") on our behalf?
But if a bot calls you, says it's a bot, but sounds human in every way, what would you conclude? Also, the deception is in WHY the call is made, not in the voice, or if it's human or a bot. Now, if you ask if it's a bot, it should tell the truth.
Where did you think we're going with virtual assistants and artificial intelligence? And how is it deceiving? is the person on the other end hoping to talk to a human being and will they even care? There's this theory that justifies how you feel.
Uncanny valley: why we find human-like robots and dolls so creepy | Stephanie Lay
It seems obvious that the more human robots are, the more familiar we find them. But it’s only true up to a point – then we find them disturbing
theguardian.com
In other words, ripe for a break up?
They’re in a mind tunnel of tech for tech’s sake. Just like we have medical ethicists, I’m starting to think we need a new specialty of technology ethicists. But first the techbros need to realize they have a problem.
This will only happen when the money reward for heedless “innovation and disruption” is turned into punishment, i.e. accountability for harm caused: fines and criminal charges
Sadly, you’re probably right. It always comes down to either financial incentives or regulation. Unfortunately, the societal impact of some of these technologies is very hard to quantify in terms of harm. 1/2
How do you put a price tag on “gradual tearing of the social fabric”? 2/2
Indeed. The slow-boiling frog is the paradigm for our times it seems
This is another example of one of the primary tenets I’ve lived my life by: Just because you can, doesn’t mean you _should_.
Tech for tech’s sake? How about tech for saving me time and never having to make dreaded phone calls
Life sounds really hard for you. You going to be ok if this tech doesn’t get to market right away?
By the way, that specialty (tech ethicist) already exists, and many of us do actively consult with tech companies. But there are far too few of us for the scale of the problem, so it’s a bit like trying to redirect an entire fleet of ships with a rowboat and a megaphone.
We actually shouldn’t have medical ethicists, though
Oh please. Perhaps turn the hyperbole down a bit. Jeez.
Seems that @Google has lost its conscience and they don’t give an F to the humanity @sundarpichai
THAT'S JUST WHAT GOOGLE ASSISTANT MASQUERADING AS AN NYTIMES WRITER WOULD SAY!
LOL. Oh yeah, next step the businesses will use Google Assistants to answer and schedule the appointments so the voice interface is just the networking protocol and people can get on with their lives.
Straight up this can be abused as a platform for astroturfing.
Who is being harmed in this scenario?
How many calls working social exploits would it take to find the downside?
How is this different than employing some humans to do the same thing? Harm is in intent, not mechanism.
True, they already use call centres in developing countries, but this would be cheaper and it would be way easier to optimise scripts for plausible exploits and target vulnerabilities.
because this is essentially free at infinite scale compared to hiring humans. you’re being intentionally thick or you’re not actually interested in exploring the bad side of this.
Also they installed it on my phone last night without my consent and it cannot be deleted.
I turned off the software option for microphone access to it. Of course they can turn it back on remotely without notice, but I have a fig leaf
I've been trying to do that for months for OK Google and still can't fucking figure it out.
What type of phone do you have?
Ok, I went to Google settings, personal info and privacy, activity controls, voice and audio, then paused it. No idea where they hide it on your phone
All bots should be mandated (self-regulation?) to play a tone/sound snippet that identifies it as a bot/non-person. This universal tone doesn't have to be obnoxious. It simply sets the other party straight as to whom/what it's speaking with. Then it can proceed on, chatting away.
Encode some signed data in the sound to prevent spoofing and serve as a sort of “Caller ID” for bots. Phone device does the decoding.
Hell, my device might play a sound right back at the bot saying in effect “Hey, I’m a bot too! Let’s chat. Can you step this call up to high speed?”
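For what it's worth, the signed-tone idea sketches out simply. Below is a minimal, hypothetical illustration of a "bot caller ID" token before it would be modulated into audio, assuming a shared-key HMAC just to keep it short; a real scheme would presumably use public-key signatures so any handset could verify without holding a secret. Every name and field here is invented for the example:

```python
import hashlib
import hmac
import json
import time

# Hypothetical vendor signing key; illustration only.
VENDOR_KEY = b"example-assistant-vendor-key"

def make_bot_caller_id(bot_name: str, owner: str) -> bytes:
    """Build the signed 'I am a bot' token to encode into the opening tone."""
    payload = json.dumps({
        "bot": bot_name,
        "on_behalf_of": owner,
        "ts": int(time.time()),  # freshness stamp to limit replays
    }).encode()
    sig = hmac.new(VENDOR_KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"." + sig

def verify_bot_caller_id(token: bytes, max_age_s: int = 30):
    """Run on the receiving handset after demodulating the tone.
    Returns the claims if the signature checks out and is fresh, else None."""
    payload, _, sig = token.rpartition(b".")
    expected = hmac.new(VENDOR_KEY, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(expected, sig):
        return None  # spoofed or corrupted token
    claims = json.loads(payload)
    if time.time() - claims["ts"] > max_age_s:
        return None  # stale token, possible replay
    return claims

token = make_bot_caller_id("Google Assistant", "Sally")
print(verify_bot_caller_id(token))
# -> {'bot': 'Google Assistant', 'on_behalf_of': 'Sally', 'ts': ...}
```

The timestamp is what would stop a scammer from recording one legitimate tone and replaying it forever.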
the replacement of humans by robots is basically inevitable - better get on their good side!
What’s the difference between me scheduling an appointment and an AI saying the exact same words for me? Who cares?
And if I control the AI?
Yay, wildly unfounded hypotheticals.
Do you check the permissions on your phone apps? How many have access to your microphone? How many have access to Google Assistant?
If the AI is fully autonomous, you don't know what it would actually say. That's one difference to start. And I care about what something is saying or doing that involves me without me being present. Imo you're giving up your own autonomy to a robot. No thank you.
There are worse things that can be mitigated with these bots, for example internal fraud! So I am happy about that. Stop complaining. Embrace change!!!
I think not telling is the best solution. If they were told they were talking to a machine they would react differently. And if the machine is doing a human job, why should it need to announce itself as a machine? Would a shop ask a caller their race?
The whole point is to sound human in order to ‘get it done’ gurrl!!! If I hear a machine calling me I’d hang up immediately w/o pause.
Ethically lost! Lol as if it isn't debatable that a human-like voice is a "good" thing.
Sure they have. Earth needs fewer humans than we thought.
The industry needs but does not value people who study ethics, because they all seem to think that ethics, like any other aspect of the human condition, has a technocratic workaround.
These calls drive me batshit crazy!! Our business gets at least one of them a day.
However we react to this doesn't change a thing. Why? Because you cannot tell engineers to stop developing things that they can easily achieve. Telling people to stop AI from advancing is far from effective
So put the same amount of effort into developing an ethical framework as went into developing the tech. The two aren’t mutually incompatible. We know the genie is out of the bottle. That doesn’t mean human beings should forfeit all responsibility for how it is to be used.
To draw a new line you should place a dot first. I know exactly what you’re saying here and I’d love to agree, but it has to start from somewhere, and what can we do about the fact that those who are capable of creating new tech often aren’t considerate of how it can be misused?
Nobody does this for free
How is asking a bot to identify itself as a bot "stop[ping] AI from advancing?"
Democratic act is needed when facing AI :D let em know what they’re capable of and let em use it freely :))) sorry but couldn’t stop myself from joking about this
Thoroughly creepy
... and at the other end of the spectrum you have @openbionics who struggle to make amazing artificial arms
Today is the day! Four years in the making, we're so incredibly proud to launch the Hero Arm - the world's first medically approved 3D-printed bionic arm, and the most affordable bionic arm ever 🎉 openbionics.com/hero-arm #BionicHeroes #HeroArm
This is what I dreamed of doing before I got (too) disabled myself.
Zeynep: "But what happens when someone gives themselves 8 arms??? ZOMFG science so horrifying! The terror of the unknown! Fear the new!"
They better not make any real looking arms. That would be "deceptive".
It’s not just that. It’s the underlying motivation to disincentivise and/or undermine human interaction.
The idea that calling somewhere to set an appointment constitutes human interaction is dubious at best. This is the kind of interaction that's easy to automate because even if we limited ourselves to 1960s tech, we're essentially using other humans as machines.
look up the words "Turing Test" and you'll understand the cheering.
I'm pretty sure @zeynep is already familiar with the Turing Test, lol.
Twitter is bad at transmitting sarcasm
But other than that, it's about on the same level as scaremongering about the introduction of photocopiers because they could be used for forgeries.
or the printing press. Shoulda said that. would have been funnier. oh well opportunity missed.
:D I think it's good and obvious tech, myself. But I don't trust Google bros to use/release it in a way that keeps the downside minimized. That's more on them than the tech, obviously.
eh, there's a tendency to shit on literally anything coming out of the valley these days. Not everything is a juicero. And what's the risk really? That maybe a machine instead of a dude will try to sell you boner pills?
Oh, I have "Microsoft" calling me "about a virus on my computer" at least weekly on different numbers. I always think of the naive folks. This tech will be much cheaper because they don't have to pay a warm body to do the dirty work.
oh god. a client of mine had that the other day. What made it worse: he KNEW it was a scam and let them fuck up the machine anyhow. it amazes me.
and likewise, are you familiar with the jolly roger bot? It's basically the same thing for the end user, and it will keep telemarketers etc at bay. it goes both ways lol
funny. the second i asked her to elaborate on the dangers she blocked me. Typical XD
Aw, I do think it's a good suggestion to spell out how it can be abused!
That would actually take effort. And it gets less attention than WAAH EVIL GOOGLE ROBOTS ARE DECEIVING US
Next: Mannequin manufacturers behind evil conspiracy to produce "Fake humans"
Juicero is waste of money. This is potent and potentially dangerous.
How about instead of vague fearmongering you elaborate on those "Dangers"
Not true. They’ve learned to be better at scheming and manipulating people with razor sharp efficiency.
We are in an AI arms race. If "Silicon Valley" changes its direction or slows down innovation to explore ethical concerns at length, Chinese companies will quickly take over this research. Where AI-powered options are more effective, they will win and will be widely adopted.
JUST had a half of a convo with one. This is generated with Google Ass?
Kind of like the human race.
Horrifying? Take a deep breath
I think the presumption that the purpose of humanizing the Google Assistant voice is "to deceive the human on the other end" is unwarranted.
Would you feel better if the word was "result" rather than "purpose"? Because when perfected, deceiving humans is what the result will be.
That's your opinion. Mine is that the result would be helping people like me with severe anxiety and phobias actually be able to make important phone calls without getting the shakes and immediately throwing up afterwards.
There are solutions to this problem that don't involve tricking people.
Tricking implies a malice that simply isn't there. Also, how would you even know you were being tricked? It's just a voice asking for a doctor's appointment.
If I hear a voice that sounds like a person, I assume it's a person. That's a big deal to me. Deception is harmful whether or not a person knows about it. If the voice DECLARES itself non-human, that's a different situation.
That makes absolutely no sense to me but I respect your opinion.
But if I were to telephone Siri, and "she" spoke to me in a more natural-sounding voice than "she" does now, are you saying that I'd be deceived into thinking Siri is real?
^ In that hypothetical, @sfmnemonic, you'd be aware you're talking to an AI (Siri). The objection @emilynussbaum is making to Google's demo is that the person talking to the AI was unaware that's what was going on.
Okay, but what if in the Final Product [b/c wasn't this just a demo?] they make it clear that the person speaking is a bot and not a real person?
My model of @emilynussbaum would be fine with that!
I would be okay with that. I might find it annoying as a thing to interact with, but it wouldn't be an ethical problem for me.
I would also be interested in those solutions you talk about bc I've been trying to find a solution for years.
Well, we're communicating via printed language. That's one approach. But even if tricking someone with a fake voice were the best pragmatic solution for you, it would still be ethically questionable, whatever the person's motive.
Also, this is going to hurt people with social anxieties who might benefit from this technology. If a potentially useful tool becomes associated with deceit for the sake of convenience, it will be useless for those who actually need it. This does the most disservice to them.
I've read this tweet 20 times and I still dont understand it. ¯\_(ツ)_/¯
She's saying that if people associate the fake voice with deception—if the tool itself is seen as a scam, a hoax, a trick, something cruel—then that reputation will also smear the people who use it. So if you use the tool, you'll be seen as a conman who lies.
I don't see why a natural-sounding voice can't be identified as an automated service. It seems difficult to justify the claim that it's inherently deceptive. I guess if you assume Google is evil, it gets easier.
Ah! Okay! That makes more sense, thank you! ♡
I'm unclear how deceit is inherent in the technology. It seems obvious to me that it could be deployed undeceitfully.
The biggest issue I have with it is that it PRETENDS to be a human. There's nothing wrong with a robot speaking fluently, but wasting time with synthetic disfluencies and human idiosyncrasies doesn't help anyone and doesn't serve any purpose but to be deceptive.
I just thought the purpose was to show off and be cool. Didn't know Google was purposely trying to be deceptive =/
I don't think they're consciously TRYING to deceive; but that's part of what bothers me. They didn't THINK about the implications, or at least gave no indication they thought of the implications.
When you say they "didn't THINK about the implications," where are you getting that from? I would be frankly astonished if even one engineer told you no one thought about this.
When you say "PRETENDS to be a human," to whom are you ascribing the intention of pretending?
I don't care about "who." If you create a voice that's designed to sound like a human, that doesn't indicate it is not human, and that interacts in a context where the listener can't tell it's not human, you're creating a deceptive tool. That's a fact, whatever the intent.
Here we are on the internet, which lends itself to deception all the time, and now it's realistic-sounding voices that tripped your alerts?
Look, if you turned out to be a Godwin-bot, I would be pissed and hurt at the energy I wasted debating this with you, especially because I know you. Obviously, there is deception in many places online, but that does not mean deception is okay or desirable.
I'm not defending deception. My point is that deception can be implemented with all sorts of tech, including the decades-old interfaces we now use.
You know who else was interested in big lies? JK MikeGodwinBot, I'm sorry, you make it so easy. I've got to go now, but this has been an enjoyable debate.
It's the only way to end a conversation online, I might as well make the move before Google or Mike or Jordan or Trump himself does.
I make it easy because it's part of my user-interface design. Another UX triumph.
No one is happy about those either, or about deepfakes. At the very least it should introduce itself as a Google scheduling assistant. Personally, I'm waiting for these to get fielded and wind up in an infinite ping-pong loop with another bot.
To be clear, I'm totally opposed to deceptive uses of this tech (or other UX tech). But I don't regard it as categorically deceptive. And demos are just demos. (I've also seen lots of concept cars that never got to be street-legal.)
Well maybe they can be designed to accuse the other party of being a nazi when they don’t get what they want... that would make it just like the internet, right Mr Godwin?
That's a really good point — so much of communication (internet, phone, and otherwise) is based upon trust. I trust that you're Mike Godwin and you trust that I'm Jordan Matelsky, and I would feel equally deceived if you were Mike Godwin's Robot.
Right, and so you should. But the fact that this user interface can be used deceptively doesn't demonstrate deceptive intentions on the part of its designers.
A friend mentioned not being able to tell the difference between Amazon's chatbots and real support humans. I think that's a success: machines being just as good as humans at conveying the necessary information.
My analog here is a bot typing "Hm, let me go find out" and then waiting to deliver a response, even if it had a perfectly good answer ready for you immediately... humanness as a goal, instead of a comforting byproduct.
"Hmmm, let me go find out" is better than the long pause with (fake) electronic noises in the background.
That drives me BONKERS. You don't hear me making annoying human noises on the phone, do you??! (Usually no.)
The point is that those beep-beep-boop noises are meant to give you the sense that you can hear a computer processing.
The false "umms" are meant to give the sense that I can hear a human processing. Equally frustrating, but weirder. (Although I don't think I'd prefer the "human" to beepbop instead, to be fair)
I mean, in the event that I ever have "Mike Godwin's Robot" answer the phone for me, I'll make sure it self-identifies. That's my solemn promise.
Now I am feeling the emotions of warmth, love and closure. (Probably not an actual emotion, but it will become one soon.)
I know a guy who'd record, on his answering machine, "Hello, this is Tom," followed by a long-ass pause. After a full minute the pause would end with "leave a message at the beep."
I'm not sure there was a conscious effort by the Google designers to 'pretend,' but the end result is that the behavior includes functionality (such as conveying information through speech) and... cruft, like "ehmmm", which serves only the purpose of hiding robotness. 1/
I don't think that this means that Google engineers got together and said "let's fool some folks!" ...but an efficient, to-the-point, no-disfluencies, humanlike voice would serve the purpose. Adding unnecessary human elements is COOL (!!!!) but... why? Aside from coolth 2/
I think some engineers just got overly excited about technology and thought the rest of the world would share their enthusiasm? [I do, at least]
That's what I think too—and I think it's kind of a great example, in miniature, of a problem with the entire tech industry. Their intent doesn't have to be malicious for it to be a dicey situation, either.
On the one hand, the inference seems to be that this tech is an inch away from being deployed deceitfully. But now it's just "some engineers just got overly excited"? I know some pretty ethical engineers, and any large-enough team will have some.
I'm not bothered by talking to a bot (human-machine teaming is what I do for a living), but it bothers me that the system reduces efficacy to match "my expectations" of how a human should mess up. More words here, if one has the patience for MY disfluency: blog.jordan.matelsky.com/duplex/
LOL. I love the idea of a calculator acting like a woman on a certain kind of date, downplaying her intelligence in order to be charming.
The example I think of is my mom (may she R.I.P.), who likely would have found a Google Assistant that spoke with human flaws more approachable if, for example, she were calling a number to get restaurant or car-repair information.
You're making the assumption that those idiosyncrasies don't have any function.
Then that is ethically okay with me, albeit emotionally a little gross-feeling.
What worries me is that emotional response. It is feeding what is rapidly heading toward moral panic.
I don't have a problem with feeling a bit sad, confused, hurt and nervous about the realization that the thing you are pouring your emotional skills toward is automated & has no response back. Emotions like that are fine with me.
I took Jeff to be worrying about the emotional response, not to a particular instance of being deceived, but to the notional idea that one might be deceived generally by the tech.
Yes @sfmnemonic said it better than I did and not for the first time.
As you know, I think we're in moral-panic land already.
We agree. I appreciate the company.
Before regulating a technology according to the bad it *could* do, let us make sure to imagine and protect the good.
Minor example: Google can use this to call businesses to get holiday hours, to save the businesses trouble and send them more customers.
Many in Silicon Valley have shown they don’t deserve the benefit of the doubt. Google for years has recommended horrifyingly racist and false content about the Obamas and falsely claimed it couldn’t do anything about it. Our trust must be earned not taken for granted.
The speedometer in my car isn’t lying; it’s relative to an imaginary body that’s moving at 10km/hour in the same direction as I am.
Google intentionally created a bot that pretends to be human. That is the definition of deceitful. The point is to trick real people, to get people to respond in a way that Google wants them to respond. Substituting machine responses for human interaction is dangerous.
Then why fake being human? It is superfluous and inefficient to include the fake — like redundant code — unless the intent is to fake. The intent is the deception.
You haven't thought this through. If the only reason you can think of for developing this technology is to deceive someone else, I want to say that says more about you than it does about the tech. (Sorry! I don't mean to be hurtful about this!)
I am thinking of it as a programmer. Why would umm and ahh be good speech training? The Siri comparison is faulty because Siri does not pretend to be something other than the voice of a search engine.
Sorry the editing didn’t work well that time.
Suppose someone who can't speak clearly wants to use it to speak for them without giving away that they have a disability? They'd like this. Do you think people who wear contact lenses are inherently deceptive about their needing corrective lenses?
Or that people who dye their hair are deceptive?!?
(Sorry, this is directed at the original Tweet poster.)
That is a poor comparison — a contact lens is a corrective instrument that an individual chooses. Reach for a better analogy.
But software that can sound realistically human can be used as a "corrective" tool for people who may want to sound human. And Google developing such tools can translate to work for people with disabilities etc.
Whereas if the output lacked umms and errrs it could give away that such a tool was eg talking for someone who can't otherwise talk.
Also cf. glasses vs. contact lenses and some of the reasons people choose the latter.
The Siri comparison "is faulty"? You think a comparison is the same thing as an equation?
Do you oppose all natural-language interfaces? Because, let me tell you, there's lots of redundancy in natural language.
People who would benefit from accessibility aids built on technology like this suffer discrimination all the time. Reducing that discrimination by providing alternative communication methods that don't discriminate is a Good Thing.
My wife is Cambodian, and although she speaks English pretty well, she speaks it with an accent. She thinks it's fun to practice conversationally with Siri. To the extent Siri gives naturalistic responses, it's actually more fun for her.
Your own body is filled with redundant code. God obviously set out to deceive us all.
Remind me again who's being tricked?
I don't doubt that some people could deploy such a technology in ways that are deceptive. It seems likely some bad actors will try that. But why do you think Google wants to deceive you?
What would be the purpose of adding "um" and "ah" other than to deceive? If this is for accessibility purposes, the voice would serve just as well without that feature. The only purpose would be to make it seem like a human is at the other end when there isn't.
Mike, I know - and am sympathetic to your angle here - all AI systems are in effect trying to emulate human responses - to be more accurate in MT is to be closer to human. But to introduce intended flaws does _feel_ like an effort to deceive rather than an effort to be accurate.
To introduce flaws sounds to me more like an effort to make someone feel comfortable. Like wood-grain formica. But of course since this is Google it must be intentionally evil, right?
Look, I feel FURIOUS every time I get one of these phony sales calls that coughs to sound real. I bring a full emotional response to a person vs. a machine. A robot faking humanity both wastes my energy and fucks with my head. I don't care about intent; that's the effect.
I never answer the phone
You experienced the demo and weren't told it was a demo? Somebody deceived you? I had been given to understand that this was demoed as a way in which Google Assistant might respond to a call you make.
I didn't listen to the demo, Clive told me about it. But what's with all this faux-naivete? It's pretty obvious that a voice designed to sound exactly like a human voice is, if used, going to trick listeners, unless it says, "HI, I'M A ROBOT!"
Why assume that it's not going to self-identify? I mean, I have all sorts of criticism of Google, but I don't normally assume they're top-to-bottom dumbasses.
Why should it self-identify? Why can't a computer talk to a human? What is it doing that is wrong? Why does the receptionist care whether I made the appointment or my Assistant did? I really don't get this latest techno-panic, and I am far from a Google apologist or fan.
Sometimes the self-identification is implied in the exchange. If I begin my query with "Hey Siri" it doesn't matter how naturalistic the responding voice is--I don't imagine that I'm talking to a real person.
I get it. My point is, why does it matter? Why should the person on the phone care if they are talking to me or my Google Assistant if they are getting the info they need?
I'm okay with self-identification as a humane requirement. Sometimes people strike up conversations with the voices they interact with on the phone. (I even do this myself from time to time.) I'd prefer to know I'm not talking to a program in at least that subset of cases.
Rules about when and how a bot has to identify itself as not human (and how and what a human can disclose) need to be part of any interactive interface, IMHO. Plenty of IVR and text-support chats are already bots with fake names. We don't complain because we called them. ;)
So we think this tech is going to be rolled out, as is, next week sometime?
No clue on launch readiness. My Android phone already “asks a human” to take pics inside places, fill out details about a business. Guess that isn’t going fast enough. ;)
ok, you've hit a nerve here mike. I hate formica, but there is a special place in countertop hell for the wood-grained stuff.
I’m not a fan of fake wood-grain anything. That said, my apartment kitchen counter-top is fake marble, and I’m trying and failing to work up some high dudgeon about it.
But actually, I do resent the implication that my critique is in any way related to the fact that this comes from Google. If my mother released this project, I would have exactly the same critique.
Look, do you resent my theorizing why you presume that this tech is all set up to be deployed deceptively by design? Google-is-evil (or thoughtless, or whatever) seems to be where Occam's Razor has led some people.
I made a point about AI interaction design: that signals (of AI) are critical and ethical, and that systems which leave these out, or worse, insert signals that are intentionally human/flawed, represent bad design. That's all. Occam should find that simpler than Google-hating.
Why doesn't it clearly identify itself as Google Assistant, then? Google's quick to put its logo all over everything else.
Wait till they sell it as a tool for automating telemarketing.
Wait till they combine it with voice-replication technology to sound like anyone for whom a recording of 30 common words is available.
I'm spammed on my office phone and when I ask if this is a robocall, the response is a very human sounding laugh. Annoying and creepy.
What exactly is unethical? Where is the harm? The pauses are there to allow for computation and to make the conversation feel more natural to the human.
So from now on I will answer every incoming call by asking the caller to sing an a cappella CAPTCHA.
Did it pass the Turing test? #AI
We need a law requiring identification as machine vs. human.
Your replies are scaring me worse than the technology.
Give it a 6 ft extension cord so it can't chase you... credit to Dwight Schrute.
In the absence of a global body to oversee AI, Google Duplex can get out of hand and be used against unsuspecting people. Google isn't just a US company; it has global reach, including war zones. Duplex can perfect accents; criminals and propagandists can now hide their original accents for deception.
Deceiving someone who would otherwise likely be prejudiced against you isn't unethical, actually.
I expect that people will adopt the strategy I am currently using: I simply don’t answer the phone to unknown numbers. I think we are going to have to switch to an opt-in system for telecommunications that requires an introduction and consent.
It's just another stage of The Corporations' progress in gaining the same rights as human beings, creating agents that falsely impersonate them to acquire trust and binding agreement.
[Linked: THE CORPORATION, a Canadian documentary written by Joel Bakan, directed by Mark Achbar and Jennifer Abbott, on youtube.com]
This tech could be a huge boon to people who cannot speak clearly or at all, due to physical issues, and who don’t want to be automatically hung up on when they need to make a phone call.
Ok, I understand you feel the machine acting human is deceptive. It is a strange machine with a task to perform. So, other than the aforementioned deception, what's the difference between that strange machine and a stranger calling to perform the same task?
Well. For starters the machine doing it is most likely violating anti-wiretapping laws with about a century of history as societal norms.
That would be a case of technology bypassing the law. I'd expect Google to already be working with legislators on that.
I doubt they've even considered it; I see no evidence that they have, and I stopped taking tech companies at their word long ago. I also don't think this has enough potential benefit to justify changing audio-recording laws. Why not just an open scheduling API platform?!
That would solve the stated goal with zero R&D effort, a handful of engineers, and a nifty Google Cloud product enhancement. There is another motive here, and considering that the tech's intended operating mode includes deception by design, it's going to make people uncomfortable.
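To make the "open scheduling API" alternative concrete, here is a minimal sketch of what such a booking request might look like. The endpoint URL, field names, and response shape are all invented for illustration; this is not an actual Google or partner API:

```python
import json
import urllib.request

# Instead of a bot phoning the business, the assistant POSTs a
# structured booking request to an endpoint the business exposes.
booking = {
    "service": "haircut",
    "party_size": 1,
    "requested_time": "2018-05-14T10:00:00-07:00",
    "contact": {"name": "J. Doe", "phone": "+1-555-0100"},
    "placed_by_automated_agent": True,  # disclosure is explicit by design
}

req = urllib.request.Request(
    "https://example-salon.test/api/v1/appointments",  # hypothetical endpoint
    data=json.dumps(booking).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # e.g. {"status": "confirmed", "id": "..."}
```

No voice, no recording, and no question of who is "speaking": the machine-to-machine format sidesteps the deception debate entirely, at the cost of requiring businesses to adopt the API.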
All this drama about a computer sending a message to a human (from another human). It happens constantly, it's just a different format.
The cheers were for the computer science history being made with a bot for all intents and purposes passing the Turing test. I don't think anyone in the valley was ready for this so soon.
I hate those things.
That’s just freaky. Glad I don’t use it. #TheBeat
Yes. It was one thing to make it not sound so robotic but another to put in filler words for the purposes of deception. I don’t get how Silicon Valley doesn’t see the ethical issues here.
This is the kind of response you get if you mix Luddite + performative bravado.
It is a theatrical statement, but don’t you agree that to cheer this is to willfully ignore the history of the past two years, years whose consequence we are just beginning to understand?
Not really sure where you’re going with that one. I think they’re independent, and similar things have been said at every major invention by every person in history.
Most inventions haven’t allowed people to very cheaply simulate movements and stimulate division in order to swing an election for less than the cost of a jetliner. It’s dangerous
I feel like our apparent conservative and liberal outlooks on life and progress have flipped :-)
I have to admit, when I watched the presentation I was impressed by it. But you are right. It is also about privacy of the person being called, what happens with the recording of the call?
Been getting those calls
Get over it, folks: this is the future. Jobs are changing, and so we need to keep up. You can't stop progress. Australia needs to change its curriculum in schools to teach kids to be innovative so they can adapt. We can never go back to the past.
But wait, you thought Google was right to fire James Damore? And now they're crazy? Which is it? Make up your mind.
I think you will find lawmakers will treat this very seriously. This is AI that is impersonating you or someone else; the obvious ramifications are very dangerous, especially for children and the elderly.
I definitely think it should identify itself as a bot, but this could be really helpful to people with phone anxiety (including myself). The faking humanity part is the worst of it.
Lol, now you sound like me over the last 2 years. I'm glad you finally agree they haven't learned a thing. Do you really think for a second they suddenly plan to change? Please. Time for us to set up groups and plan, not just write about this.
It's a smaller thing, but I was also concerned that it didn't tell people that it was recording their calls.
Personally I find this reaction pretty crazy and weird. I am surprised people are reacting this way. "Ethically lost and rudderless" lol WHAT!?!
Why do you feel as though you've been lied to if the person you spoke to didn't tell you they were made of plastic? Do you feel this way about humans with inflammatory but irrelevant backgrounds?
I found the whole Google I/O keynote yesterday very scary. From voice assistants to cameras to Maps and the new Android P, everything is designed to collect so much data about the user. What happened with Facebook could be a very small glimpse of what might happen with Google.
They were still in awe of the morse code keyboard. *As planned
And all would be solved by starting with "Hi, I'm a Google secretary, how are you?" or something like that.
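That fix, which several replies converge on, is trivial to express as a design constraint: the disclosure (plus a recording notice, per the wiretapping concern upthread) comes before any task talk. A minimal sketch, assuming hypothetical `say` and `listen` speech primitives rather than any real Duplex interface:

```python
DISCLOSURE = ("Hi, this is an automated assistant calling on behalf of a "
              "customer to book an appointment. This call may be recorded. "
              "Is it okay to continue?")

def place_call(say, listen, run_booking_dialogue):
    """Identify as a bot and obtain consent before any task dialogue.
    `say` speaks a line, `listen` returns the callee's reply as text,
    and `run_booking_dialogue` handles the actual scheduling."""
    say(DISCLOSURE)                      # self-identify first, always
    reply = listen()
    if "no" in reply.strip().lower():    # crude consent check, sketch only
        say("No problem. Goodbye!")
        return False
    return run_booking_dialogue()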
Apparently the worst possible use-case for a technology will be its first use and its default once introduced.
The problem is, they have a deep hate for humanity. That's why they are trying to get a robot into every profession there is and leave every human without a job. Plus, they want to interface that with AI, and that will certainly be our end as a species.
Looks like they need something like this for non-embodied social tech: robots.law.miami.edu/2014/wp-conten…
Anyone who uses one of the smart speakers gets what they deserve. It will be the greatest privacy breach ever.