If you find yourself pressed for time but needing to quit your job, text your mom, or break up with your girlfriend, don’t panic — there’s an app for that. One download can help you send off a message of resignation, remind your girlfriend that you love her, or end your relationship entirely.
While the aforementioned apps may seem like something dreamed up in the writers’ room of HBO’s Silicon Valley, at bottom they aren’t that different from Siri’s text dictation feature or Gmail’s autofill and predictive text. They are tools aimed at increasing our efficiency — or at least helping us feel like we are.
While responding to emails, Gmail recommends three options for wrapping up the conversation: “Thank you!” “Thanks so much!” “Best of luck to you too!” Simply clicking on one of these options instead of typing it out feels quicker, even if it actually isn’t. Ultimately, these tools aim to expedite communication without sacrificing authenticity.
Yet other technologies are trying to go one step further by simulating the authenticity of the human touch. The developers of BroApp and MomApp, for instance, promise users a way to “outsource their relationships” by sending automated text messages to their moms and girlfriends.
While an out-of-office reply feels like a common courtesy, by swapping the professional for the personal it suddenly feels like we are doing something far more weaselly.
According to Irina Raicu, the Internet Ethics Program Director for Santa Clara University’s Markkula Center for Applied Ethics, while some might see apps like these as mere tools of expediency, they raise legitimate moral questions.
“If you create automated notifications to say ‘I love you’ to your partner, you’re still not doing the actual work of paying attention,” Raicu says.
Even if that doesn’t bother you, the other issue is whether these apps really work. Guardian writer Stuart Heritage tested out a few back in 2014 and found that, for the most part, the resulting messages sounded unnatural and corny. The message generated by the Breakup Text App read, in part, “We’re like that. A poltergeist love. And I need an exorcism.”
Trick or Turing
While these apps might not be fooling anyone, there are other companies that are simulating authentic communication much more successfully — and in a far more natural voice.
Take, for instance, Google’s Duplex platform, which uses artificial intelligence to help people schedule appointments. Duplex’s AI was novel because it attempted to mimic the human voice, going so far as to include human-like “ums” and “uhhs.”
Though Duplex might have been its creators’ attempt to humanize technology, the ethical dilemmas raised by the software’s premise drew significant criticism. Tech critic and University of North Carolina professor Zeynep Tufekci called the product “horrifying” on Twitter.
“As digital technologies become better at doing human things, the focus has to be on how to protect humans, how to delineate humans and machines, and how to create reliable signals of each,” Tufekci tweeted when the product was released in 2018. “This is straight up, [deliberate] deception. Not okay.”
In response, Google said Duplex would notify the person at the other end of the call that they were interacting with AI.
Given the backlash to Duplex, Raicu says she is surprised there are still companies creating and selling products that are intended to deceive others.
“I don’t think the public appetite for being tricked by technology has changed since then,” she says.
Indeed, people generally don’t like being tricked. Data ethicist Laura Norén, the vice president of privacy and trust for Obsidian Security and a sociologist by training, says the idea of an AI posing as a genuine human might reasonably make us feel squeamish. And that’s because lying and hypocrisy incite a stronger emotional response than almost anything else.
“It doesn’t matter who you talk to, [hypocrisy] really, really bothers them,” Norén says. “That’s probably why you look at this and think, ‘This just doesn’t sit right with me.’”
Yet trickery in the tech world is nothing new. In 1950, English scientist Alan Turing proposed the famous Turing test, meant to assess whether a machine could exhibit behavior indistinguishable from that of a human.
Norén sees the use of trickery in the modern tech world as an attempt to mimic the human touch. Tesla’s self-driving cars and the emergence of deepfakes are examples of AI that can imitate human behaviors extremely well.
And in terms of the ability of AI to mimic human communication systems, Norén says “we’re already there.”
But automated communication tools that use deception might be more harmful than helpful, according to Raicu’s colleague Brian Green, the director of Technology Ethics at the Markkula Center.
“You’re tricking the person who’s getting the message and, at the same time, the person who set this up is training themselves to be callous toward others,” Green says.
A Fleeting Illusion
People who develop automated communication tools, like Chris Tosswill, see their work differently. Tosswill, not unlike a real-life George Oscar Bluth, says he prefers to characterize tools like Duplex as an “illusion” rather than a trick.
“At the scale they’re operating at, they can’t answer a human call, but they want someone to still feel comfortable going through the system,” Tosswill says of systems like Duplex. “The intent is always important. If you can create a connection, especially when you’re talking about your customers or your clients, having a warmer, higher caliber communication is always better.”
Tosswill is the CEO and co-founder of Addressable, a Los Angeles- and San Francisco-based company that uses robot technology to compose handwritten letters. Here’s how it works: Clients write and submit their own message, and then Addressable’s robots hold an actual ballpoint pen and write the letter in very much the same manner a human would.
Addressable has previously assisted with everything from event invitations and thank you letters to mass mailing for businesses.
There is a strategy involved. Tosswill explains that the snappy, clickbait content of a marketing email, for instance, would perform poorly if it were handwritten. So while the handwritten component might get someone to open up a piece of mail instead of tossing it out, Tosswill says the actual message has to be thoughtfully crafted for the letter to be successful — and Addressable’s staff can provide its clients with guidance on how to write a meaningful note.
The technology also includes a feature reminiscent of Duplex’s “ums” and “uhhs,” as it is programmed to make small imperfections, like ink skips. “You get an effect that has all the same qualities as if you’d written it yourself,” Tosswill says.
Notably, recipients are never informed that Addressable — or its handwriting robots — has played a role in creating the mail they receive. Responding to this, Tosswill says the robot’s handwriting is not what forges the connection between the sender and receiver. Rather, the connection lies in the message.
Tosswill sees Addressable as helping other companies spend more time doing the work they want to be doing. For instance, Addressable has written thank-you letters to donors of nonprofits, which, in turn, has allowed these organizations to dedicate more time to their main missions.
Addressable’s primary client base, however, is in the real estate industry. Real estate agents rely heavily on postcard marketing, classified as “Every Door Direct Mail,” a kind of targeted advertising. “It’s literally spam mail,” Tosswill says. There’s no individual personalization.
“In this digital age, everything has become mass mailed, form-lettered and not super personal, and the goal of Addressable is to give communications more of a warmth,” Tosswill says. “The handwriting is interesting and it gets people to engage.”
This focus on individually personalized mass mail represents a broader shift in the world of modern automated communication. According to one scholarly article from Oxford Handbooks Online that looked at mobile communication, the mobile communications industry is vigorously pursuing algorithms that better customize and personalize mass communication.
Norén sees this as another way of “making intimacy serve capitalism.” Personalization and relationship-building are age-old ideas in the world of sales. But in the digital age, it isn’t particularly surprising that this strategy is moving online.
What gives tech ethicists cause for concern, however, is the idea of exploiting intimacy on a mass scale.
“I can’t imagine that people receiving these automated responses would feel good if they understood what they were,” Raicu says. “If you get to the point where you don’t know whether someone really wrote to you or not, whether there’s a person behind that or a bot, think about how you would approach communication.”
Addressable’s staff is conscious of the company’s ethics. For instance, the company’s handwriting technology has not ventured into the realm of politics.
Generally speaking, Tosswill says he believes ethics in technology is important, adding that any technology has both positive and negative uses, so it’s important for the founder of any company to be conscious of how its product is used.
And in recent years, there has been a push for tech companies to be more conscientious of the ethical impacts of their work. This was mainly driven by pressure from media outlets and consumers as part of what was deemed the “techlash.”
“Before, all the tech companies were good guys, [then] suddenly the media were covering only the bad aspects of tech,” Raicu says, explaining that the techlash among consumers resulted from this critical coverage as well as a better understanding of how their data was being used.
Demands for greater consideration of ethics became more relevant after the 2016 election, Green says, when people realized that social media wasn’t just a game. “It’s something that affects the real world,” he says.
The techlash is still playing out, but some think the COVID-19 pandemic might mark a turning point. Social media companies like Facebook, Twitter, YouTube and TikTok have all taken a more active role in combating the spread of disinformation on their platforms. Yet others say even more needs to be done in the form of government regulation.
Santa Clara University’s Markkula Center sets out to help tech companies put the concept of tech ethics into practice. The center offers free technology ethics materials on its website and has previously accepted invitations from tech companies to lead in-person ethics training workshops.
Raicu believes tech companies have latched onto the idea of using ethical responsibility to guide decision-making, though to differing extents. But it’s still early, Raicu says, and every tech company has to figure out its own way of incorporating ethics into its model.
When it comes to ethical concerns over Addressable’s product, Tosswill acknowledges that the premise of his company has yielded mixed reactions — everything from excitement and wonderment to skepticism and concern. Nevertheless, he feels strongly about the positive impact of his product, mentioning Addressable’s ability to give the gift of handwriting back to older individuals or people with conditions like Parkinson’s.
As is the case with so many tech companies, Addressable sees itself as a problem-solver. Tosswill believes his company is changing the character of the mail industry for the better while also helping companies both become more efficient and maximize their customer outreach. Expanding on this, Tosswill sees Addressable’s tailor-made product as creating an experience rather than just another piece of junk mail.
There’s another term for this kind of problem-solving, Raicu says: “tech-solutionism.”
“People who are in love with tech sometimes try to find solutions for tech-created problems by adding another layer of technology,” Raicu explains. “Technology has made it very easy for us to communicate all the time. It turns out that it’s both a blessing and a curse.”
In other words, Green says, it’s the infamous Jurassic Park dilemma: just because we can do it, doesn’t mean that we should.
So why would we want to automate personalized messages in the first place?
“I don’t think we do,” Raicu says. “This strikes me as a kind of automated spam.”
Yet there seems to be a market for it. So beyond the fact that we are limited in time and ability, part of the reason for using a service like Addressable or a phone app to text our moms might be because authentic relationships are difficult and require effort.
From Green’s perspective, there is a growing belief that we can and should only do the things we want to do in life. But, of course, that isn’t true.
“People have to do things you don’t want to do sometimes, and if that is writing somebody a personal letter or communicating directly with people, then that’s a problem for both parties,” Green says. “It perpetuates that feeling that people can get away with not having authentic relationships.”
As we come to rely on automated communication and accept it into our lives, are we at risk of a future where automated messages simply fly back and forth? Norén, for one, says she isn’t concerned about this kind of “thoughtless communication.”
“Communication is still a two-way game,” she says. “If you put nonsense out there, it will simply fail.”
Hannah Holzer is an SF Weekly contributor covering news and culture.