Yet, avatars (and robots) don’t understand the deep emotional connection we have to our jobs and our coworkers, or what it means to get fired.

They probably never will. More than algorithms and programming can capture, human emotions are incredibly personal, derived from decades of memories, feelings, deep connections, setbacks, and successes.

Before starting a writing career, I was an information design director at Best Buy. At one time, I employed about 50 people. I loved the job. Over six years, I hired dozens of people and enjoyed interviewing them. I looked forward to getting to know them, to asking unusual questions about favorite foods just to see how they would respond.

My worst days were when I had to fire someone. Once, when I had to fire a project lead on my team, I stumbled over my words. I wasn’t nervous as much as I was terrified. I knew it would be devastating to him. I still remember the look on his face when he stood up and thanked me for the opportunity to work there.

A digital avatar is incapable of understanding the deeply emotional experience of being fired. How do you program that? How do you make it cognizant of the shock and surprise, the awkwardness of telling your loved ones later on, the weirdness of breaking the news to coworkers you may never see again?

In my view, getting fired by an avatar is not valid. It doesn’t count, because there are too many nuances. Maybe the employee wants to discuss other options or a lesser role; maybe they want to explain a rather complex workplace issue that led to their poor performance.

More importantly, a digital avatar will always be a collection of pixels and some code. Avatars that greet you in the morning, inform you about a road closure, tell you a few jokes, or even notify you about a change in your cable service are all more valid than a bot that delivers bad news. News that’s personal and will have a major impact on you, your family, and your future.

My initial reaction to being fired by a digital avatar would be to find a real person. I would want to make it more official before I pack up a single stapler or office plant.

I’m OK with an avatar that teaches me yoga. Bring it on. I want to learn, and a real instructor would probably cost too much. Someday, an avatar might try to teach one of my kids how to drive in a simulator or a videogame, and that’s perfectly acceptable. If a digital avatar with way more patience than me handles the educational part before we hit the pavement, that’s fine.

But a “police officer” that hands me a ticket when I was obviously going the speed limit? A “doctor” that talks to me about cancer risks? Don’t even get me started on a bot that tries to teach a sex-ed class to teenagers in high school. Any avatar that delivers important news, or steps into a role that requires real credentials and an understanding of emotional nuance, won’t cut it.

That said, I know where this is heading. In most of the demos for the Neons, it became obvious to me that these are not meant to be mere assistants answering queries. One avatar, which looked like an accountant, riffled through pages as though it were making a big business decision; another smirked and smiled as if it were trying to get to know me.

We’re not talking about a replacement for Amazon Alexa or Google Assistant here, not a mere voicebot that tells you the weather. This is much more ambitious. Neons and their competitors will be more like artificial humans.