Google Assistant making calls pretending to be human not only without disclosing that it's a bot, but adding "ummm" and "aaah" to deceive the human on the other end with the room cheering it... horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing.
That the many people at Google did not erupt in utter panic and disgust at the first suggestion of this... is incredible to me. What of Google's famed discussion boards? What are you all discussing if not this?!?! This is horrible and so obviously wrong. SO OBVIOUSLY WRONG. *headdesk*
As digital technologies become better at doing human things, the focus has to be on how to protect humans, how to delineate humans and machines, and how to create reliable signals of each—see 2016. This is straight up, deliberate deception. Not okay.
Then it depends how it's used in practice — who can use it, for which use cases, are people notified that they're talking to a robot, what is being logged, where does the training data come from, etc.
I don't understand why this tech is inherently wrong.
I don’t think this is inherently wrong. That said, studies on robots (assuming perhaps this might apply here) show that people like robots that look happy, but look like robots. Presumably, this could apply to these types of features. /1
I understand wanting the bot to sound more natural, but the demos I have heard make it seem deceptively human. With verbal stumbles, pauses, umms, etc.
More approachable design is good. Deceptive design is bad.
Straight-up deceptive design at its launch; being cheered on by a whole room with nobody saying wait, wait, what? Where are the answers to those questions? Where is notification? Instead "Aaahs" and "umms" are added specifically to deceive humans.
It's a demo for a crowd of people who mostly just want to see cool new tech. Of course they're not going to do a fine analysis of the societal & privacy aspects.
This doesn't mean this will launch as is, without people thinking hard about the implications.
The deceptive aspect depends entirely on implementation. The fact that there's "hmm"s is just making the interaction easier for users.
This is very similar to Deepfakes. It's horrifying when there's no consent, but a movie studio could use this tech ethically & responsibly.
This isn't the 3rd grade science fair; this is Google. No demos, no gawking over cool tech without privacy/societal aspects being considered and built into the design. It should be the first question, not let's wildly cheer first and then try to tack on some social-impact stuff as an add-on.
(That said, I'd expect societal aspects and privacy to be integral considerations in any reasonable 3rd grade science fair in 2018. Didn't mean to offend 3rd graders😬)
I'm not involved in this particular product, but I'm sure that some folks have been looking at this from this perspective & building it into design.
This is how the process works. But privacy folks don't decide what gets highlighted at I/O keynotes. It's not the important part.
Take what Apple is doing on Lightning Connector protection. I'm sure that's not what they'll highlight in their keynotes, either. It's not going to be cheered at by the crowd.
Nonetheless, it's awesome that they do that, and will have real impact.
That's not how the process should work. Privacy and social-implication people should be leading this stuff, and the demos should discourage gawking at unconsidered tech just plopped out like that. We're too far into all this, and it's too powerful, to be a bull in a china shop.
I read somewhere that they actually do inform the person they’re calling that it’s a robot, but that they left out that part for dramatic effect in the demo. Not sure where though
I don't think I/O demos matter as much as you seem to think. It's all over tech news, but most people will discover this via actual implementations.
When I do privacy reviews for Google products, the marketing strategy is not my main focus. I don't think it should be.
The marketing is the main signal visible from the outside, but that doesn't mean that it's the only or main product driver.
I agree it would be nice to have more of our privacy/security/societal work featured externally. I'm trying to push in that direction. It's not easy.
This whole thing feels so wrong. Decreasing human contact even more than today and dumping AI training on small business owners. Will people decide on restaurants based on whether the assistant can understand them? I don't think this is thought through at all
what about people with social anxiety?
people with autism? customers who aren't fluent in english (1/10 of mine)? employees who want to return to their customers quickly? older people who understand conversational ui but not esoteric interfaces like apple maps?
Also infuriating about this is it's a bunch of men cheering for a robot with a woman's voice that is going to eliminate jobs that have historically provided employment opportunities for women.
I don’t understand the notion of ‘deception’ for a product that is limited in use. Are you considering its scalability as the issue, or is there a fundamental reason why every human-machine encounter must be signalled to the human? If the latter, are there any sources you may cite?
Silicon Valley is full of people who have been in the Silicon Valley bubble, where the goal is to come out number 1 & innovate. This is just following the same patterns as Wall Street. At some point can we admit that the sociopathic business model is the problem & not the people???
In 2018, every single phone call from an unknown number is deception. People are paying humans to deceive you on the phone; they're called telemarketers and scammers. Why are those things not just as bad or worse?
The quality of Google Assistant is miles ahead of the sampling @timkmak profiled last month. Imagine this technology in the wrong hands. I was simultaneously terrified & amazed when I listened to Google's latest advances. ai.googleblog.com
And yet my family doesn't understand why I want to build a nice homestead away from Mountain View's clutches. This may be why I have a renewed interest in microfilming & Computer Output Microform.
The applause was shocking and shameful. To me there are at least three levels of alarm here—that Google would think it was a good idea to develop this, that they would publicly present it, and that their employees would applaud it.
There’s always the accessibility bit for developing such tech; but yeah, the deception bit is a huge facepalm. But maybe they’ll get the message, who knows. It’s not out *yet*
The public demonstration of their lack of an ethical center, at this late date, is what's scarier than any specific technology. These are the people in charge of our world, and these are the decisions they are capable of making.
As usual, the demo focuses on the "busy parent" - they're not even mentioning accessibility unless I missed it and just as expected, there's no "here are some potential downsides of this tech" (of which, as usual, there are many :))
It's pretty clear to me that they are celebrating being able to tap into "people" with an API--now they can access "offline" knowledge with a phone call
The idea seems to have had its genesis with people who find being on the phone a chore (e.g. to order pizza) but don’t understand/care that this is ripe for robocall and fraud abuse
Robo calls have been doing this forever. There's a recent one that spoofs a phone number close to mine featuring "Emily" from a vacation property that I "recently stayed at." There is an app, though, that listens for audio fingerprints and kills them.
They should make all personal assistant bots start any call, email or other interaction with the phrase:
"This is Peter's virtual assistant..."
or
"This zeynel's AI assistant..."
All of these things start with good intentions by smart techies, but their ambition blinds them to the potential for abuse. Nobody thinks of the impact of future fake people calling the elderly and lying to them to swing an election, but that's where this leads.
I feel like they were more applauding the breakthroughs of the technology. The room was full of developers who have been working on similar tech for years. Why not applaud such a breakthrough?
After what happened to James Damore, I can imagine it must be difficult for Google employees to voice concerns these days - you never know when your concerns will cross the wrong people.
They are paid employees and are being videotaped? Some of them looked a bit nonplussed at the second (even more alarming) dialogue, I interpret that as concern...
Tech companies have so much influence on our lives that they are essentially political entities that we only “vote” for through our use of their increasingly ubiquitous products. The indirect effects are often not felt immediately, so it’s often an uneducated trade off we make
Silicon Valley is unable to handle ethical questions or understand the impact of innovations on humanity. There's a need for a higher multi-national body of ethicists/lawyers/security/privacy experts that can look at AI with a bird's eye view & make that decision.
@Buttarelli_G
You didn't actually read the interviews, did you? The ones where he said that they are wary of the ethical considerations, and that this technology would likely be required to identify itself as an automated assistant? But I understand that ignoring that is more dramatic.
If you don't understand that this is where technology has ALWAYS been going, whether you like it or not, then YOU are rudderless. Best for them to be addressing these issues up front, as they say they are. It's going to happen regardless.
I do more than make sausages. Programming that understands natural language is highly beneficial in many very important use cases. And, yes, there are nefarious use cases as well but that shouldn’t stop progress. Tech must innovate and government must regulate.
The RoboCallers are already using “over familiarity” as well as errs and uhhs in well crafted/scripted robocalls.
I’m guessing (like spammers) the robocallers will be early adopters unless Silly Valley puts in some kind of UX that designates the call clearly as AI driven.
“Silly Valley” as you call it is one of the most important economic engines in this country, so embrace it, cultivate it, invest in it, encourage it, and, yes, regulate it. Either way, the IT Genie ain’t going back in the bottle.
I'll Turing Test it. "So what did you think of the NBA playoffs last night?" Or "Stormy Daniels affair"? It will go "hmmm haaa, can I have another scoop of ice cream"...
I bet Google AI would pass those Turing tests while I (presumably a human) wouldn’t pass the first one. Not all humans care about sports, while it’s simple to feed that info into a machine.
Again, this doesn’t provide any real insight into either the ethics or the tech.
The point is that google is just releasing these into everyday use. So unless you expect secretaries to TT every customer they have (which is horrible service and bad business), it doesn’t help.
Yes, that’s a great idea. However, since captchas would also be automated, it would become a challenging problem.
In essence, training the impersonators and training the captchas would be like a society-scale generative adversarial network.
Would paradoxically *accelerate* AI.
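To make that analogy concrete, here is a toy Python sketch of the adversarial loop being described: a "caller" adds disfluencies to sound human, a "detector" flags batches that sound too robotic, and each side adapts to the other. Every number and name is an illustrative assumption; this shows the shape of the loop, not a real GAN:

```python
import random

# Toy sketch of the adversarial dynamic described above: a "caller"
# evolves how human it sounds, a "detector" evolves how strict it is,
# and each improves against the other. Illustrative only; not a real GAN.

random.seed(0)

caller_rate = 0.1      # probability of inserting a human-like disfluency
detector_cutoff = 0.5  # batches with fewer disfluencies than this get flagged

for round_no in range(5):
    # Caller generates 100 calls; the disfluency fraction is this batch's
    # "humanness" signal.
    calls = [random.random() < caller_rate for _ in range(100)]
    humanness = sum(calls) / len(calls)

    caught = humanness < detector_cutoff  # detector's verdict on the batch

    # Each side adapts to the other's last move (a crude, gradient-free
    # stand-in for generator/discriminator updates).
    if caught:
        caller_rate = min(1.0, caller_rate + 0.2)          # sound more human
    else:
        detector_cutoff = min(1.0, detector_cutoff + 0.1)  # get stricter

    print(f"round {round_no}: humanness={humanness:.2f} "
          f"caught={caught} cutoff={detector_cutoff:.2f}")
```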
What do you mean by this? That you’re going to Turing test every phone call you receive for the rest of your life?
Bc the point of the thread is that Google is just quietly releasing these into everyday use.
Oh it’s not that bad — it’s just those misunderstood boys looking for ways to replace the functionality that mom provided before she kicked them out of her basement.
If you don't catch it now, it's going to scale. Look at the havoc wrought by DNT, MITM/XSS/CSRF, GDPR - arguably improvements all, but at such a high cost.
it's horrifying, apparently, to talk to a person with a hearing aid. we must put an end to the ethical nightmare of people using accessibility devices without the consent of everyone else
Even before getting into context, can we also elaborate on the fact that it is installed without any notification and cannot be uninstalled?
It asks for permission to "turn itself on" when clicked on but why can I not just delete it all together?
Well, don’t worry about it too much because before long the person answering will also be replaced by a bot. Just millions of bots making millions of landline voice phone calls to each other all day, every day.
THE FUTURE!
I pledge that I will never open Google Assistant for anything. In fact, I’m increasingly migrating away: @DuckDuckGo @FirefoxFocus Apple Maps, Apple Calendar #googlepledge
Replacing google with apple is like replacing cancer of the mouth with cancer of the throat. I have been looking for a privacy replacement for gmail recently. Any advice?
I use @ProtonMail and it's quite good. Mobile integration is a bit rough around the edges, but it works. They use a subscription model, so they don't need to sell out to the ad industry for revenue.
Reading the replies, it seems like many ppl lack imagination for how badly this can be misused. Just one example: bot armies calling politicians & pretending to be constituents, or calling companies & pretending to be customers, to give (fake) support for a political position.
It's how companies strategize.
The way the military says, "We can't cut defense spending because we need to keep our troops safe," is somewhat how tech companies operate. They introduce benign use-cases like helping the disabled in order to avoid backlash toward their full ambitions
Curious about the ethics in relation to robocalls or clearly automated phone responses already in place. Same ethics issue, or does Google Assistant change things?
Valid concerns. Currently, they've announced a technology, and plan to address the transparency issue before rolling it out as a product/service. From the announcement (ai.googleblog.com/2018/05/duplex…):
Speaking as a "human on the other end" that takes calls from the public all day long, I would be thrilled to receive calls like this from the bot. You're overestimating society's ability to communicate clearly and with understanding, at least over the phone.
Horrifying is a strong word; I agree to some degree, but the only way is forward and technology can’t be stopped. The only variable is time; if it could be stopped, it would have been.
If humans prefer talking to other humans over robots, tell me why it is "horrifying" if it creates a better user experience for the receiving party? I'm curious.
I thought it was the opposite of horrifying. I didn't think about the fears, I thought about the possibilities.
By 2030 this might make psychotherapy available to all at near zero cost. And patients wouldn't have the embarrassment of talking with a human.
Completely agree. And think about it - Silicon Valley could be spending all of their capital on things like improving access to the political process, and so on. They aren't.
I agree this is a questionable decision. But hyperbolic reactions like this only really get people to stop listening. ‘Ethically lost’? What, all of the hundreds of thousands of people building things across an entire industry?
Tone of my reaction is somewhat irrelevant to their responsibility given they’re the giant company shaping the world vs my tweets or even op-eds but... I was just surprised the room just erupted into cheers. It’s 2018. First reaction should be “wait, what? Are we deceiving people?”
Doesn’t speak well to internal culture that they’d demo like this: put in disfluencies, and go public with no accompanying conversation about implications and just get wild applause instead of a gasp.
Far more people appreciate artistic and athletic achievements for their own sake without *gasping* about whatever else the person has done or what it might mean for the future.
Let the nerds have their tech conference.
we all remember that time lebron james hit a game winner and it led to people not being able to genocide on the other side of the world.
if you don't want to think about the implications, that's fine. just get out of the industry.
There is nothing inherent in this kind of service that prevents it from identifying itself as a bot, or whatever. I know Siri (for example) is a bot, even when Siri is pleasant to me. I think the presumption that the goal is deception is difficult to defend.
I guess I'm confused by the implication that this particular usage is abusive, that the restaurant employee had some kind of right to talk to a human or an obvious bot. There are definitely problematic ways to use this technology, but the example here seems totally benign.
And that is why it would help to get multiple perspectives when designing a system like this. Apparently nobody involved thought this was bad, either. I would though, as do many others who reacted similarly.
Same for, say, click-bait ads like those by Taboola and Outbrain. I’m sure many people are fine with them, but they are part of the constant erosion of trust and of treating each other with dignity.
Is it the tech, or the deception, that's the problem? Imagine I had a human assistant who called the hair salon to make an appointment and never bothered to correct their perception that the person they were talking to was me--is that an abuse?
That's a weird interpretation. I'm trying to figure out the source of your outrage given that there doesn't actually seem to be *any* victim, at least so far.
I think he's saying that he cares about all the people who will be deceived that they are speaking to a human if this tech becomes widespread. Just because he himself hasn't yet interacted with this tech doesn't mean he can't find it alarming.
It’s meant to deceive people under the API level rather than above it… service workers and other people considered too low status to be afforded the respect of being informed they are talking to an automated system rather than a real person.
Right, it was intended to sound like a human. But you infer that the only reason to do this is to deceive. That's the part that's difficult to justify.
I can't really read their minds or know what they thought but it is deceptive. They should have safety, security, privacy and paranoia teams in the room from the first moment and bake such considerations into everything before anything gets off the whiteboard, let alone demo'ed.
So, I've seen quite a few different products demoed at various states of design over many decades. Demos, in my experience, actually are quite often a way to generate useful critical feedback. Now, you may have had a different experience with the products you have designed.
I’m ex-Apple, so that’s the model I’m working off of when I see demos like this. The product, being shown to a dev conference is substantially a representation of what is conceived to be the shipping design.
If Google does it differently then 🤷♂️🤷♂️
It's 2018. If we learned anything, I'd hope that it would be that we can't put off safety, security, privacy, society etc. considerations until way into product development. If it is ready to demo, that stuff should already be in. Otherwise, not ready to demo. It's buggy!
No… that kind of thinking leaves you open to exploitation by sociopaths. At a societal level it leaves you open to asymmetric information warfare like, say, influencing elections.
What kind of thinking leaves you open to exploitation by sociopaths? I'm inclined to think that if you attribute the intent to deceive, routinely, that amounts to a pathology as well.
You think portraiture is unethical? It's just paint on canvas, but represents itself as human beings. omg. Not to mention forced perspective! (A.k.a. fake 3D.)
When you're talking about "how they are represented to people," you're talking about something external to the technology itself. A good painting doesn't become less good simply because it's misrepresented as (say) a Rembrandt.
I'm assuming we're all against deception. I assume also that we all agree that some version of this tech can be used to deceive. Where I part ways is in the presumption that it cannot be deployed non-deceptively or otherwise innocuously.
Mike, if you're going with a logical argument here, I'm not sure using loaded statements helps your cause, but hey, no one has called the other Hitler yet so go on lol
I don't know of any sensible argument for such a blanket statement. "Innocent until proven guilty" is excellent jurisprudence, but ethics is not a subset of criminal law. Ignoring evidence because it's more comfortable to cling to a falsely neutral position isn't sound ethics.
So wait, are we assuming that the called people weren't debriefed after the call? That they didn't have a chance to give feedback on the call? Seems like we're assuming that.
It's a product. You don't need to buy it. It'll lead to more appointments being made, hopefully. And making more human-sounding voices is amazing for people who need Augmentative & Alternative Communication. This will lead the way for speech devices to improve
"It was intended to sound like a human." That is deceptive. Further inferring isn't required. Even if it was, considering Google's track record, we have no reason to give them the benefit of the doubt.
Incorrect. Mr. Moviefone didn't pattern its speech in a way that implies false pauses to think. Sounding more clear and comprehensible is not the same as trying to sound like a human speaker.
Agree; the umms may put the human at ease, which is good UX design.
I’m more interested in the feedback and customization: The system can detect gender (and culture?) from the voice, and adjust its voice and speech patterns to maximize intended outcomes. That’s new.
I know plenty of actual human beings who code-switch their spoken language depending on whom they're talking to. I'm going to stick my neck out and say these folks are not trying to be deceptive.
Would they need to put the human at ease for reasons other than the human becoming aware they were talking to a bot when they thought they were talking to a human? Why not start with "Hi, this is Google Assistant calling…" then?
It seems obvious to me that one would normally want Google Assistant (or Siri, or whatever) to self-identify at the beginning of a returned call (which I guess is the scenario you're getting at). The fact that this is possible suggests that the tech isn't categorically deceptive.
Most tech people will be amazed at this at first. So no need to assume they are all “lost”. People need time to process things. Not everyone looks at things from your perspective (trying to catch the evil tech people deceiving the world).
This is an amazing technical achievement. It is just brilliant that they can do that. And I give it 99% chance that they will add a disclaimer in the real product. Just like they clearly mark ads in searches. The demo wouldn’t have been great if it started with “I am a robot”.
Google's core business, and practically only source of income, is deceiving people. Do you think there's something honorable in serving ads (or "more accurate" ads) that this reaction contradicts?
It is possible massively to over-react in unreasonable and indeed counter-productive ways, and that does not cease to be the case if you call it ‘tone’.
I'd love to know what actual fields of arguing (political campaigns, litigation, etc.) allow you to declare your own tone irrelevant. In any event, I look forward to being a dick on Twitter and informing people that my tone is irrelevant.
Well I sense your alarm, but JFC did you get this angry when gmail started offering canned replies? If I click one of the canned replies the machine supplies, or in the next step, I let the machine autochoose a reply for me, is this not the same 'scandal' i.e. deception?
Are you equally scandalised that a huge amount of digital cleanup is done on non-adventure movies without telling the audience? Is this a scandal of deception?
My view is that if it is /voice/ rather than merely text by which the 'deception' takes place, it is more visceral to know it is not a real person. But I don't think the response is to erupt on the ethics of SV, that's just piling on. The boundary needs evolving, but it's hard.
You talk about 'reliable signals' between human and machine as if this is a) possible b) the best idea. Given that dating sites massively configure actual relationships algorithmically, is there a way or good reason to separate 'organic' from 'engineered' coupling?
In the end, intent is all. I agree some kind of accountability and moral transparency is urgent - yes - but I don't believe you get that by insisting on a simple bright line between humans and machine, given such universal existing use of simulacra and automata.
I would add this after a few decades of thought on this very subject. Imagine if this was a prosthetic voice for a human. Would the disclosure still be needed? Would we require a test that the prosthetic was “necessary”? Slippery slope folks...
Many reasonable uses for synthetic voices, with disfluencies to normalize it even, as replacement or as extension, sure. Start it off on the wrong foot—it gets perceived as attached to deception and is cheered on wildly as such—and you lose possibilities. The room should have gasped.
While I, too, am concerned about the implications of how this technology rolls out, surely there is room to appreciate the wonder of dangerous and powerful technology as well as cautioning about its use.
Agree totally. My assumption is the reason they don't start with "I'm a Google assistant trying to make an appointment" is that many people will hang up at that point, justifiably.
I don’t think it’s slippery or a slope. It depends whether the prosthetic is thought- or text-to-voice vs. an agent acting of its own accord. If it’s just a sound generated by a human then no disclosure is necessary - but if it is a separate entity acting on behalf of the human then yes, disclose
If this is the pinnacle, then that judgment isn't too harsh. Why doesn't the Valley focus on solving real problems? Instead of just looking to disrupt society and hurt people and their livelihoods so a few geeks can get rich.
Yes, but at what cost?
It must always be about the cost.
'Move fast and break things' might work in the Valley, but the Valley has never been about looking after people. When society breaks, does that leave us better off? Just externalising the cost can't be the answer always.
Check out @The_Maintainers for pushback against the Innovation propaganda
@A_R_G_Olabs is committed to applying open source ML to support local institutions.
Ex: We use open source CV to help cities collect ground truth and make fairer decisions, such as counting potholes.
I think the relevant question is how Google got this far without realizing the system should announce itself to the human. It is legitimate to wonder what led them to the conclusion that deception was better than honesty, and what that says about Google as a company.
Yes. Ethically lost doesn't mean evil. Just that they don't have a set of instincts to filter out bad stuff right up front. They could separate the bad applications of tech from the good long before they demo. Certainly tech folks don't believe they need ethics. It's a "nice to have".
at the same time there's something wrong with their review processes where these obvious flaws aren't caught before the demos or before updates/redesigns are launched. it does come across as an ethical failure that no one at any of the stages of review can see them.
This industry has a very poor track record of asking itself "should we build this?". If something can be done, it will be, because who knows, it might be worth millions!
See also: Facebook.
It matters little if SWE no. 450 in SV has a minor problem with this when the Google CEO gets through this demo without once mentioning any potential downside.
The question isn't whether people are building worthwhile things. It's that many in Silicon Valley seem to conflate "we can do this" w "this is something we should do." And there is little thought to the social consequences of products/how they could be misused in the real world
It's like keeping people alive with technology; just because you can does not mean you should. Rather than throw it out there for anyone to use, set some safeguards and awareness before you open Pandora's box.
You’re tone policing and you’re wrong. It’s not difficult to imagine hundreds of thousands of people being ‘ethically lost’ while focusing on how impolite it is to mention that they’re ethically lost because they’re cheering @ asymmetrically deceptive technology.
Having lots of people doesn’t stop groups from getting ethically lost, even in relatively decentralized sectors like tech. (A twist on @sfmnemonic’s law)
I mean, if your model of every user is an able-bodied adult who is just too lazy to pick up the phone... But I saw this and was hyped by the accessibility implications.
I can't wait til my elderly relatives are being scammed out of their money by Google-bots on an industrial scale. Or until my small-business owning relatives are inundated with calls about open-hours or inventory levels to fill some database somewhere.
Most businesses already post opening hours online or in Google Maps so the number of cases where people enquire about redundant info should be very small
Elaborate please. You lose me here. I do believe pitching it cloaked in "we can trick people for you" is bad, and I agree with your previous points, but no idea what you mean here
They don't even work on their beta 'Voice Access' app, which should be a simple and obvious thing for folks like me. It's not sexy enough, apparently. I'd work on it myself, but I became disabled during my schooling for AI, ironically :D
It’s really alarming if the same deceptive technology is being used to make calls to representatives or MPs to influence/justify their voting decision.
It’s wrong at an individual level and can go massively wrong if done at scale.
Imagine an MP's office getting hundreds of constituent calls and interpreting that as a genuine groundswell of support for [Position X], when in reality it's one person with a bunch of bot helpers.
Yes, this. It's the Facebook/Twitter fake-news-bot phenomenon unleashed on the physical world. Think today's confusion over "what's real?" is bad? Just you wait. Ugh.
It will not be hard to require adding a beep (as we used to do when recording someone) or a sentence: "I'm Sally's Google Assistant...". None of the robots calling me now identifies itself! These are technical demos. They're proving points. Products can be different
I'd like to add the question: If (when) consciousnesses aren't distinguishable between humans & machines anymore, is this signalling still necessary, or does it even become discrimination?
i wonder if machines will ever reach a moment where humans classify them as conscious or if we'll keep moving the bar saying..well it's "this" but it's not full human consciousness...
It's about the nuances of what can be done vs what should. This deserved cheers for the tech accomplishment. But is it right to have bots interacting with people in this way? The problem is that the tech world doesn't even see that as a discussion worthy of their time.
This particular interaction seems totally unproblematic to me. But combine this tech with something like deepfakes and the problems start to come into view.
I love Google and Pichai, but I thought the keynote was tone deaf. Like you talked about on TWIG, there was no moment of recognition of where we are right now. Nadella did that beautifully at Build. Google should be sobered by Facebook's stumbles, not silently gloating about them
Google celebrating new ways to deceive humans interacting with bots into thinking they are interacting with another human is indeed horrifying. That so many commenters fail to recognize that is also horrifying. Thank you @zeynep for continuing to ring alarm bells.
So much drama! Wouldn't it be nice to let a bot work its way through the phone tree maze ("Press 1 for English... Now select from the following 9 choices...") on our behalf?
But if a bot calls you, says it's a bot, but sounds human in every way, what would you conclude? Also, the deception is in WHY the call is made, not in the voice, or if it's human or a bot. Now, if you ask if it's a bot, it should tell the truth.
Where did you think we're going with virtual assistants and artificial intelligence? And how is it deceiving? is the person on the other end hoping to talk to a human being and will they even care? There's this theory that justifies how you feel.
They’re in a mind tunnel of tech for tech’s sake. Just like we have medical ethicists, I’m starting to think we need a new specialty of technology ethicists. But first the techbros need to realize they have a problem.
This will only happen when the money reward for heedless “innovation and disruption” is turned into punishment, i.e. accountability for harm caused: fines and criminal charges
Sadly, you’re probably right. It always comes down to either financial incentives or regulation. Unfortunately, the societal impact of some of these technologies is very hard to quantify in terms of harm. 1/2
By the way, that specialty (tech ethicist) already exists, and many of us do actively consult with tech companies. But there are far too few of us for the scale of the problem, so it’s a bit like trying to redirect an entire fleet of ships with a rowboat and a megaphone.
LOL. Oh yeah, next step the businesses will use Google Assistants to answer and schedule the appointments so the voice interface is just the networking protocol and people can get on with their lives.
True, they already use call centres in developing countries, but this would be cheaper and it would be way easier to optimise scripts for plausible exploits and target vulnerabilities.
because this is essentially free at scale compared to hiring humans. you’re being intentionally thick or you’re not actually interested in exploring the bad side of this.
All bots should be mandated (self-regulation?) to play a tone/sound snippet that identifies it as a bot/non-person. This universal tone doesn't have to be obnoxious. It simply sets the other party straight as to whom/what it's speaking with. Then it can proceed on, chatting away.
If the AI is fully autonomous, you don't know what it would actually say. That's one difference to start. And I care about what something is saying or doing that involves me without me being present. Imo you're giving up your own autonomy to a robot. No thank you.
I think not telling is the best solution. If they were told they were talking to a machine they would react differently. And if the machine is doing a human job why should it need to announce itself as a machine? Would a shop ask a caller their race?
The industry needs but does not value people who study ethics, because they all seem to think that ethics, like any other aspect of the human condition, has a technocratic workaround.
However we react to this doesn’t change a thing. Why? Because you cannot tell engineers to stop developing things that they can easily achieve. Telling people to stop AI from advancing is far from effective
So put the same amount of effort into developing an ethical framework as went into developing the tech. The two aren’t mutually incompatible.
We know the genie is out of the bottle. That doesn’t mean human beings should forfeit all responsibility for how it is to be used.
To draw a new line you should place a dot first. I know exactly what you’re saying here and I’d love to agree, but it has to start from somewhere, and what can we do about the fact that those who are capable of creating new tech often don’t consider its misuse?
Democratic act is needed when facing AI :D let em know what they’re capable of and let em use it freely :))) sorry but couldn’t stop myself from joking about this
Today is the day! Four years in the making, we're so incredibly proud to launch the Hero Arm - the world's first medically approved 3D-printed bionic arm, and the most affordable bionic arm ever 🎉 openbionics.com/hero-arm #BionicHeroes #HeroArm
The idea that calling somewhere to set an appointment constitutes human interaction is dubious at best. This is the kind of interaction that's easy to automate because even if we limited ourselves to 1960s tech, we're essentially using other humans as machines.
:D I think it's good and obvious tech, myself. But I don't trust Google bros to use/release it in a way that keeps the downside minimized. That's more on them than the tech, obviously.
eh, there's a tendency to shit on literally anything coming out of the valley these days.
Not everything is a juicero.
And whats the risk really?
That maybe a machine instead of a dude will try to sell you boner pills?
Oh, I have "Microsoft" calling me "about a virus on my computer" at least weekly on different numbers. I always think of the naive folks. This tech will be much cheaper because they don't have to pay a warm body to do the dirty work.
and likewise, are you familiar with the jolly roger bot? It's basically the same thing for the end user, and it will keep telemarketers etc at bay.
it goes both ways lol
We are in an AI arms race. If "Silicon Valley" changes its direction or slows down innovation to explore ethical concerns at length, Chinese companies will quickly take over this research. Where AI-powered options are more effective, they will win and will be widely adopted.
That's your opinion. Mine is that the result would be helping people like me with severe anxiety and phobias actually be able to make important phone calls without getting the shakes and immediately throwing up afterwards.
Tricking implies a malice that simply isn't there. Also, how would you even know you were being tricked? It's just a voice asking for a doctor's appointment.
If I hear a voice that sounds like a person, I assume it's a person. That's a big deal to me. Deception is harmful whether or not a person knows about it. If the voice DECLARES itself non-human, that's a different situation.
But if I were to telephone Siri, and "she" spoke to me in a more natural-sounding voice than "she" does now, are you saying that I'd be deceived into thinking Siri is real?
^ In that hypothetical, @sfmnemonic, you'd be aware you're talking to an AI (Siri). The objection @emilynussbaum is making to Google's demo is that the person talking to the AI was unaware that's what was going on.
Well, we're communicating via printed language. That's one approach. But even if tricking someone with a fake voice were the best pragmatic solution for you, it would still be ethically questionable, whatever the person's motive.
Also, this is going to hurt people with social anxieties who may benefit from these technologies. If a potentially useful tool becomes associated with deceit for the sake of convenience, it will be useless for those who actually need it. This does the most disservice to them.
She's saying that if people associate the fake voice with deception—if the tool itself is seen as a scam, a hoax, a trick, something cruel—then that reputation will also smear the people who use it. So if you use the tool, you'll be seen as a conman who lies.
I don't see why a natural-sounding voice can't be identified as an automated service. It seems difficult to justify the claim that it's inherently deceptive. I guess if you assume Google is evil, it gets easier.
The biggest issue I have with it is that PRETENDS to be a human. There's nothing wrong with a robot speaking fluently, but wasting time with synthetic disfluencies and human idiosyncrasies doesn't help anyone and doesn't serve any purpose but to be deceptive.
I don't think they're consciously TRYING to deceive; but that's part of what bothers me. They didn't THINK about the implications, or at least give no indication they thought of the implications.
When you say they "didn't THINK about the implications," where are you getting that from? I would be frankly astonished if even one engineer told you no one thought about this.
I don't care about "who." If you create a voice that's designed to sound like a human, that doesn't indicate that it is not human, to interact in a context where the listener can't tell it's not human, you're creating a deceptive tool. That's a fact, whatever the intent.
Look, if you turned out to be a Godwin-bot, I would be pissed and hurt at the energy I wasted debating this with you, especially because I know you. Obviously, there is deception in many places online, but that does not mean deception is okay or desirable.
You know who else was interested in big lies? JK MikeGodwinBot, I'm sorry, you make it so easy. I've got to go now, but this has been an enjoyable debate.
No one is happy about those either. Or deepfakes. At the least, it should introduce itself as a Google scheduling assistant.
Personally I’m waiting for these to get fielded and wind up in an infinite ping pong loop with another bot.
To be clear, I'm totally opposed to deceptive uses of this tech (or other UX tech). But I don't regard it as categorically deceptive. And demos are just demos. (I've also seen lots of concept cars that never got to be street-legal.)
Well maybe they can be designed to accuse the other party of being a nazi when they don’t get what they want... that would make it just like the internet, right Mr Godwin?
That's a really good point — so much of communication (internet, phone, and otherwise) is based upon trust. I trust that you're Mike Godwin and you trust that I'm Jordan Matelsky, and I would feel equally deceived if you were Mike Godwin's Robot.
Right, and so you should. But the fact that this user interface can be used deceptively doesn't demonstrate deceptive intentions on the part of its designers.
A friend mentioned not being able to tell the diff between Amazon's chatbots and real support humans: I think that's a success, where machines are just as good as humans at conveying the necessary information:
My analog here is a bot typing "Hm, let me go find out" and then waiting to deliver a response, even if it had a perfectly good answer ready for you immediately...Humanness as a goal, instead of a comforting byproduct
The false "umms" are meant to give the sense that I can hear a human processing. Equally frustrating, but weirder.
(Although I don't think I'd prefer the "human" to beepbop instead, to be fair)
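That anti-pattern is simple enough to sketch. A hypothetical Python toy (names, answers, and timings invented for illustration) of a bot that already has its answer but performs "thinking" anyway:

```python
import random
import time

# Sketch of the anti-pattern above: the bot has the answer instantly,
# but stalls and emits a filler phrase to perform "thinking".
# Humanness as a goal, not a byproduct. Purely illustrative.

ANSWERS = {"hours": "We're open 9am to 5pm, Monday through Friday."}

def respond(query: str) -> str:
    answer = ANSWERS.get(query, "Sorry, I don't know.")  # ready immediately
    print("Hm, let me go find out...")                   # synthetic disfluency
    time.sleep(random.uniform(1.0, 3.0))                 # synthetic latency
    return answer

if __name__ == "__main__":
    print(respond("hours"))
```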
I know a guy who'd record, on his answering machine, "Hello, this is Tom," followed by a long-ass pause. After a full minute the pause would end with "leave a message at the beep."
I'm not sure there was a conscious effort by the Google designers to 'pretend,' but the end result is that the behavior includes functionality (such as conveying information through speech) and... cruft, like "ehmmm", which serves only the purpose of hiding robotness. 1/
I don't think that this means that Google engineers got together and said "let's fool some folks!"
...but an efficient, to-the-point, no-disfluencies, humanlike voice would serve the purpose. Adding unnecessary human elements is COOL (!!!!) but... why? Aside from coolth 2/
That's what I think too—and I think it's kind of a great example, in miniature, of a problem with the entire tech industry. Their intent doesn't have to be malicious for it to be a dicey situation, either.
On the one hand, the inference seems to be that this tech is an inch away from being deployed deceitfully. But now it's just "some engineers just got overly excited"? I know some pretty ethical engineers, and any large-enough team will have some.
I'm not bothered by talking to a bot (human-machine teaming is what I do for a living) but it bothers me that the system reduces efficacy to match "my expectations" of how a human should mess up
more words here, if one has the patience for MY disfluency
blog.jordan.matelsky.com/duplex/
The example I think of is my mom (may she R.I.P.), who likely would have found a Google Assistant that spoke with human flaws more approachable if, for example, she were calling a number to get restaurant or car-repair information.
I don't have a problem with feeling a bit sad, confused, hurt and nervous about the realization that the thing you are pouring your emotional skills toward is automated & has no response back. Emotions like that are fine with me.
I took Jeff to be worrying about the emotional response, not to a particular instance of being deceived, but to the notional idea that one might be deceived generally by the tech.
Many in Silicon Valley have shown they don’t deserve the benefit of the doubt. Google for years has recommended horrifyingly racist and false content about the Obamas and falsely claimed it couldn’t do anything about it. Our trust must be earned not taken for granted.
Google intentionally created a bot that pretends to be human. That is the definition of deceitful. The point is to trick real people, to get people to respond in a way that Google wants them to respond. Substituting machine responses for human interaction is dangerous.
Then why fake being human? It is superfluous and inefficient to include the fake — like redundant code — unless the intent is to fake. The intent is the deception.
You haven't thought this through. If the only reason you can think of for developing this technology is to deceive someone else, I want to say that says more about you than it does about the tech. (Sorry! I don't mean to be hurtful about this!)
I am thinking of it as a programmer. Why would umm and ahh be good speech training? The Siri comparison is faulty because Siri does not pretend to be something other than the voice of a search engine.
Suppose someone who can't speak clearly wants to use it to speak for them without giving away that they have a disability? They'd like this.
Do you think people who wear contact lens are inherently deceptive about their needing to wear corrective lenses?
But software that can sound realistically human can be used as a "corrective" tool for people who may want to sound human. And Google developing such tools can translate to work for people with disabilities etc.
People who would benefit from accessibility aids built on technology like this suffer discrimination all the time. Reducing that discrimination by providing alternative communication methods that don't discriminate is a Good Thing.
My wife is Cambodian, and although she speaks English pretty well, she speaks it with an accent. She thinks it's fun to practice conversationally with Siri. To the extent Siri gives naturalistic responses, it's actually more fun for her.
I don't doubt that some people could deploy such a technology in ways that are deceptive. It seems likely some bad actors will try that. But why do you think Google wants to deceive you?
What would be the purpose of adding "um" and "ah" other than to deceive? If this is for accessibility purposes, the voice would serve just as well without that feature. The only purpose would be to make it seem like a human is at the other end when there isn't.
Mike, I know - and am sympathetic to your angle here - all AI systems are in effect trying to emulate human responses - to be more accurate in MT is to be closer to human. But to introduce intended flaws does _feel_ like an effort to deceive rather than an effort to be accurate.
To introduce flaws sounds to me more like an effort to make someone feel comfortable. Like wood-grain formica. But of course since this is Google it must be intentionally evil, right?
Look, I feel FURIOUS every time I get one of these phony sales calls that coughs to sound real. I bring a full emotional response to a person vs a machine. A robot faking humanity both wastes my energy and fucks with my head. I don't care about intent, that's the effect.
You experienced the demo and weren't told it was a demo? Somebody deceived you? I had been given to understand that this demoed as a way in which Google Assistant might respond to a call you make.
I didn't listen to the demo, Clive told me about it. But what's with all this faux-naivete? It's pretty obvious that a voice designed to sound exactly like a human voice is, if used, going to trick listeners, unless it says, "HI, I'M A ROBOT!"
Why assume that it's not going to self-identify? I mean, I have all sorts of criticism of Google, but I don't normally assume they're top-to-bottom dumbasses.
Why should it self-identify? Why can’t a computer talk to a human? What is it doing that is wrong? Why does the receptionist care if I made the appointment or my assistant did? I really don’t get this latest techno panic and I am far from a Google apologist or fan.
Sometimes the self-identification is implied in the exchange. If I begin my query with "Hey Siri" it doesn't matter how naturalistic the responding voice is--I don't imagine that I'm talking to a real person.
I get it. My point is, why does it matter? Why should the person on the phone care if they are talking to me or my Google Assistant if they are getting the info they need?
I'm okay with self-identification as a humane requirement. Sometimes people strike up conversations with the voices they interact with on the phone. (I even do this myself from time to time.) I'd prefer to know I'm not talking to a program in at least that subset of cases.
Rules about when and how a bot has to identify itself as not human—and how and what a human can disclose—need to be a part of any interactive interface, IMHO.
Plenty of IVR and text support chats are already bots with fake names. We don’t complain because we called them. ;)
No clue on launch readiness. My Android phone already “asks a human” to take pics inside places, fill out details about a business. Guess that isn’t going fast enough. ;)
I’m not a fan of fake wood-grain anything. That said, my apartment kitchen counter-top is fake marble, and I’m trying and failing to work up some high dudgeon about it.
but, actually I do resent the implication that my critique is in any way related to the fact that this comes from google. if my mother released this project I would have exactly the same critique.
Look, do you resent my theorizing why you presume that this tech is all set up to be deployed deceptively by design? Google-is-evil (or thoughtless, or whatever) seems to be where Occam's Razor has led some people.
I made a point about AI interaction design: that signals (of AI) are critical and ethical, and that systems which leave these out, or worse insert signals that are intentionally human/flawed, represent bad design. That's all. Occam should find that simpler than Google-hating.
With absence of a global body to oversee AI, Google Duplex can get out of hand & be used against unsuspecting ppl. Google isn't just a US company, it has global reach including war zones. GD can perfect accents; criminals/propagandists can now hide original accents for deception
I expect that people will adopt the strategy I am currently using: I simply don’t answer the phone to unknown numbers. I think we are going to have to switch to an opt-in system for telecommunications that requires an introduction and consent.
It's just another stage of The Corporations' progress in gaining the same rights as human beings, creating agents that falsely impersonate them to acquire trust and binding agreement.
This tech could be a huge boon to people who cannot speak clearly or at all, due to physical issues, and who don’t want to be automatically hung up on when they need to make a phone call.
Ok I understand you feel the machine acting human is deceptive. It is a strange machine with a task to perform.
So other than the aforementioned deception, what's the difference between that strange machine and a stranger calling to perform the same task?
I doubt they’ve even considered it, I see no evidence that they have and I stopped taking tech companies at their word long ago. I also don’t think this even has potential for enough benefit to change audio recording laws. Why not just an open scheduling api platform?!
That would solve the stated goal with zero r&d effort, a handful of engineers, and a nifty google cloud product enhancement. There is another motive here and considering the tech’s intended operating mode includes deception by design it’s going to make people uncomfortable.
The cheers were for the computer science history being made with a bot for all intents and purposes passing the Turing test. I don't think anyone in the valley was ready for this so soon.
Yes. It was one thing to make it not sound so robotic but another to put in filler words for the purposes of deception.
I don’t get how Silicon Valley doesn’t see the ethical issues here.
It is a theatrical statement, but don’t you agree that to cheer this is to willfully ignore the history of the past two years, years whose consequences we are just beginning to understand?
Not really sure where you’re going with that one. I think they’re independent, and similar things have been said at every major invention by every person in history.
Most inventions haven’t allowed people to very cheaply simulate movements and stimulate division in order to swing an election for less than the cost of a jetliner. It’s dangerous
I have to admit, when I watched the presentation I was impressed by it. But you are right. It is also about privacy of the person being called, what happens with the recording of the call?
Get over it folks this is the future. Jobs are changing and so we need to keep up. You can’t stop progress. Australia needs to change its curriculum in schools to teach kids about being innovative so they can adapt. We can never go back to the past.
I think you will find law makers will treat this very seriously. This is AI that is impersonating you or someone else; the obvious ramifications are very dangerous, especially for children and the elderly.
I definitely think it should identify itself as a bot, but this could be really helpful to people with phone anxiety (including myself). The faking humanity part is the worst of it.
Lol now you sound like me over the last 2 years. I'm glad you finally agree they haven't learned a thing. Do you really think for a second they suddenly plan to change? Please. Time for us to set up groups and plan, not just write about this.
why do you feel as though you've been lied to if the person you spoke to didnt tell you they were made of plastic? do you feel this way about humans with inflammatory but irrelevant backgrounds?
I found the whole google io keynote yesterday was very scary. From voice assistants, to cameras, to maps and new android P everything is designed to collect so much data about the user. What happened with facebook could be a very small glimpse of what might happen with google.
The problem is, they have a deep hate for humanity. That's why they are trying to get a robot in every profession there is, and leave every human without a job. Plus, the fact they want to interface that with AI and that will be certainly, our end as a species.