You walk into the office and greet a digital avatar that replaced the company receptionist a few years ago. After sliding your badge into a reader, you smile and nod, even though you know “Amy” is not a real person. You sit down at your cubicle and start browsing the web.

Then the trouble starts.

You receive an email requesting a meeting. “Bob” wants to chat about your job performance. You fire up a Zoom chat and another digital avatar appears on the screen.

“I have some unfortunate news for you today …” says the middle-aged man wearing bifocals. He looks real and talks like a human, and all of his facial expressions seem realistic. There is no uncanny valley, just a bored-looking avatar who’s about to fire you.

At CES 2020, a company called Neon (owned by the Samsung subsidiary Star Labs) introduced digital avatars called Neons. Based on real humans but fully digitized, they don’t have the awkward, cartoon-like appearance of less-detailed replicants. Details were scarce, and the demo was highly controlled. But a press release trumpeted that “Neons will be our friends, collaborators, and companions, continually learning, evolving, and forming memories from their interactions.” Among the avatars on display were a digital police officer, someone who looked like an accountant, an engineer, and a few office workers. Some looked authoritative, even stern.

I imagined, as some of Neon’s potential clients may, one of them being a boss. Unless you look up close, you can’t tell a Neon is not a real person. Maybe “Bob” and other bots will laugh, cough, roll their eyes, or furrow their brows.

Some might even act like they are in charge of something.

“I’m afraid I am going to have to let you go today. Do you have any questions?” he says.

Well, yes, many. The first one is: Does it really count?

Ethicists have argued for years that a digital avatar is not a real human and is not entitled to the same rights and privileges as the rest of us. You might wonder if that works both ways. Are you entitled to ignore what a fake human tells you? Let’s look at one possible not-so-distant scenario: Can a digital avatar fire you?

In the workplace, it’s not as if an avatar needs a W-2 or a Herman Miller chair. What, exactly, is “Bob”? On the Zoom screen, he’s a collection of pixels programmed to trigger a visual pattern, one that we perceive as a human. Algorithms determine the response, which means a human is always behind it. Someone has to write the code that decides whether “Bob” gets angry or chooses to listen intently. In fact, Neon announced a development platform, called Spectra, that controls a Neon’s emotions, intelligence, and behavior.
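To make that concrete, here is a deliberately crude sketch, in Python, of what that human authorship might look like. Everything in it is hypothetical (Neon has not published Spectra’s interface); the point is only where the agency actually lives: in rules a person wrote in advance.

```python
# Hypothetical illustration, not Neon's actual Spectra API: a trivial
# rule-based layer in which a human-authored script, not the avatar,
# decides how "Bob" reacts.

# Human-written rules mapping cues in the employee's words to a
# (displayed emotion, scripted line) pair.
RESPONSES = {
    "unfair": ("calm", "I understand this is difficult news."),
    "question": ("attentive", "HR will follow up with the details."),
}
DEFAULT = ("neutral", "Do you have any questions?")

def avatar_reply(employee_input: str) -> tuple[str, str]:
    """Return the emotion and line chosen by the human-written rules."""
    for cue, response in RESPONSES.items():
        if cue in employee_input.lower():
            return response
    return DEFAULT

print(avatar_reply("I have a question about severance"))
# ('attentive', 'HR will follow up with the details.')
```

However lifelike the delivery, nothing in a system like this understands what it is saying; it only retrieves what a person told it to say.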

Yet avatars (and robots) don’t understand the deep emotional connection we have to our jobs and coworkers, or what it means to get fired.

They probably never will. Human emotions are more than algorithms and programming; they are incredibly personal, derived from decades of memories, feelings, deep connections, setbacks, and successes.

Before starting a writing career, I was an information design director at Best Buy. At one time, I employed about 50 people. I loved the job. Over six years, I hired dozens of people and enjoyed interviewing them. I looked forward to getting to know them, to asking unusual questions about favorite foods just to see how they would respond.

My worst days were when I had to fire someone. Once, when I had to fire a project lead on my team, I stumbled over my words. I wasn’t so much nervous as terrified. I knew it would be devastating to him. I still remember the look on his face when he stood up and thanked me for the opportunity to work there.