AI is moving mainstream, but are users ready to trust it yet?

When DeepMind’s AlphaGo defeated South Korean master Lee Se-dol, it was a historic stride for AI. The significance of that breakthrough, coupled with greater computing power and cheaper data storage, is moving AI into the mainstream.

Perhaps the most popular application of AI today comes in the form of virtual assistants and bots, or “agents” as my good friend Shivon defines them. An agent can schedule your meetings, manage your finances, book your travel, order your meals, and more. And even though these agents are typically focused on one specific task, it’s remarkable to consider how much progress we have made in outsourcing mundane work for a fraction of the cost.

At Version One, we get excited about the data network effects associated with AI and machine learning (ML): products and services become more valuable to users as more people use them.

Earlier this year, Boris introduced a data hierarchy and wrote that building a defensible product requires access to unique user data. For virtual assistants, this unique data is user feedback, and it’s absolutely core to building a “smart” agent.

To build a smarter agent capable of getting all the personalized preferences and nuances right, the agent needs to learn directly from its users. And for agents to learn most effectively, they require user trust: trust to make decisions and complete tasks on the user’s behalf. Even more critically, users must be tolerant of the mistakes agents make during the learning process. And this is the crux of the problem.

I recently admitted to my other good friend Jesse that I have trust issues with my scheduling agent (n.b. at Version One, we use Clara, and Jesse is an investor in x.ai). These trust issues mean that I heavily constrain Clara’s power. For example, I only let her schedule calls during specific time windows. On the one hand, I’m pre-emptively minimizing scheduling errors; at the same time, I’m limiting Clara’s opportunities to learn. Ironically, as the user, I’m the one learning what I should and shouldn’t trust the AI to do.

If you are building an agent, your ML is rooted in the trust of your users

For the most part, mistakes by today’s agents are forgivable because we’re not yet outsourcing mission-critical tasks. But what happens when agents take on more complex tasks and the cost of an error rises? I imagine users will constrain their agents’ power even further, making it difficult for the company behind the agent to learn and to build a great (and defensible) product.

User Trust ⇔ Smarter Agent

Perhaps the only way to overcome this hurdle is to collect as much user feedback as possible, both actively and passively. Some examples:

  • Actively: Ask for confirmation on a per-task basis during the workflow. The challenge is to figure out the right cadence for soliciting feedback without being too invasive and annoying.
  • Passively: Last month, we talked to Peruse.io (which auto-templates emails). With Peruse.io, users give feedback simply by choosing to send the email drafted by the AI as-is or by editing it first. The AI then gets smarter by comparing what it recommended with what was actually sent (see the sketch after this list for how that signal might be captured).
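I don’t know how Peruse.io actually implements this, but a minimal sketch of a passive feedback loop along these lines might look something like the following. Note that `record_passive_feedback` and the signal fields are hypothetical names of my own, not their API:

```python
import difflib


def record_passive_feedback(drafted: str, sent: str) -> dict:
    """Turn the gap between the AI's draft and the user's sent email into a training signal."""
    diff = list(difflib.unified_diff(
        drafted.splitlines(), sent.splitlines(),
        fromfile="ai_draft", tofile="user_sent", lineterm="",
    ))
    # Lines the user added or removed (ignoring the +++/--- file headers).
    edits = [line for line in diff
             if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))]
    return {
        "accepted_as_is": drafted == sent,  # strongest positive signal: no edits at all
        "num_edited_lines": len(edits),
        "diff": diff,                       # what the model would learn from on retraining
    }


# Example: the user tightened the AI's suggested reply before hitting send.
signal = record_passive_feedback(
    "Hi Sam,\nHappy to connect next week. Does Tuesday at 2pm work?",
    "Hi Sam,\nDoes Tuesday at 2pm work for you?",
)
print(signal["accepted_as_is"], signal["num_edited_lines"])
```

The appeal of this kind of signal is that it asks nothing extra of the user: simply doing the task (or fixing the agent’s attempt) is the feedback.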

To be honest, I’m not exactly sure how scheduling agents can earn more of my trust, but I am looking forward to seeing how companies like Clara and x.ai move beyond trainers in their human-in-the-loop systems, since there is no better validation of AI than getting feedback directly from users. Ultimately, a company wins when its users become the humans in the loop: it cuts costs and provides objective feedback.

The bottom line is that earning users’ trust is crucial to learning and building a better product. What UI/UX approaches and implementations are you taking to earn your users’ trust and capture those much-needed learning opportunities?
