Duet
A Machine Learning Platform by Intelus.ai
Project Overview: 
At Intelus, I was the lead designer working on Duet. Duet is a no-code natural language processing platform designed for software engineers, data scientists, and data analysts. It enables users to create two types of machine learning (ML) models: classifiers and entity extractors. Classifiers sort documents into different categories, and entity extractors label words and sentences within a document. The platform created a path for users with no ML or AI experience to train models and implement them in custom applications. Working on Duet required understanding the fundamentals of machine learning and balancing user needs against the requirements of the company's executive leadership.
Team Role:
- Lead UX designer
- Visual design
- Design research
- Usability tests
Year: 2022
Duration: 8 Months
Team:
- Chief Product Officer
- 2x Front-end engineers

Understanding the ins and outs of machine teaching
The first steps I took to familiarize myself with the domain were subject matter expert interviews and contextual inquiry with our power users. Based on these conversations, I created a conceptual map as well as a technical functional map. My goal was to understand both how the system worked and how people perceived their actions within it.

My sense-making process often involves making simple sketches and showing them to the subject matter expert. My understanding isn't always 100% correct, but making a visual provides an excellent artifact for them to respond to.

Model Types in Duet
Duet allows users to create two types of machine learning models: classifiers and entity extractors. Classifier models let the user sort documents into distinct categories, while entity extractors let the user identify specific words or topics within a document.

In a classifier, the user assigns a single category to each document; as the model improves, it suggests which category each document belongs to.

Entity extractors add a layer of complexity: users label words, phrases, and sentences with nested categories. A fully trained model is able to find these entities in previously unseen data.
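To make the distinction concrete, here is a minimal Python sketch of the two output shapes, assuming hypothetical data structures (ClassifierPrediction and EntityPrediction are illustrative names, not part of Duet's actual API): a classifier returns one label per document, while an entity extractor returns labeled spans inside the document.

```python
# Illustrative sketch only -- these dataclasses are assumptions, not Duet's API.
from dataclasses import dataclass
from typing import List

@dataclass
class ClassifierPrediction:
    document_id: str
    category: str        # one label for the whole document, e.g. "invoice"
    confidence: float

@dataclass
class EntityPrediction:
    document_id: str
    start: int           # character offset where the labeled span begins
    end: int             # character offset where it ends (exclusive)
    label: str           # nested category, e.g. "finance/amount"
    confidence: float

doc = "Invoice #4921 totals $1,240.00, due March 3."

# A classifier produces a single prediction for the whole document ...
classifier_output = ClassifierPrediction("doc-1", "invoice", 0.94)

# ... while an entity extractor produces labeled spans within it.
amount = "$1,240.00"
extractor_output: List[EntityPrediction] = [
    EntityPrediction("doc-1", doc.index(amount), doc.index(amount) + len(amount),
                     "finance/amount", 0.91),
]
```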

But why did users drop off after signing up and creating one project?
To understand the root cause of the problem, I ran a series of qualitative user studies using Userlytics. The goal was to identify usability cliffs as well as where users expected to find functionality.

The first of a series of unmoderated user studies conducted on Userlytics 

Outcomes from Research:
1. Design a contextual first-time user experience that encourages learning without blocking the user's workflow
2. Create an on-ramp for new entity extractors to reduce frustration with early-stage models
3. Incorporate suggestions for how to build features for the models
Building trust with developers led to fast and effective collaboration 
By keeping a regular cadence of meetings with developers, I was able to share wireframes and get feedback on the engineering cost of implementing changes. Based on that feedback, I would adjust the wireframes to make implementation easier without sacrificing usability.

Annotated wireframes help ensure front-end engineers have a clear idea of the design intent. The model initialization flow helps a user get a new project started without being overwhelmed by prediction mismatches, which are common early in a model's development.

Before: the site tour prevented the user from interacting with any elements in the interface

After: a contextual help panel provides the same information while keeping the interface interactive.

Making Technical Systems Approachable
The core functionality for teaching models is user-created features. Features are words, groups of words, or relationships between words that help the model create meaning in the data set.
Features allow the user to define which words, types of words, or phrases are relevant to a category. These features are visible in every prediction from the model, leading to a traceable decision-making process.
Editing a feature allows a user to apply their domain knowledge. In this example, the user knows which words in the dataset are sometimes used instead of "invoice" and is able to add them, so Duet will associate them with the category in the future and provide more accurate predictions.
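As a rough illustration of the idea, the sketch below models a feature as a named set of user-supplied terms tied to a category and shows how a prediction could point back at the features that matched. The names and matching logic here are my own assumptions for illustration, not Duet's internals.

```python
# Hypothetical sketch of a user-defined feature and how it could make
# predictions traceable; names are assumptions, not Duet's implementation.
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Feature:
    name: str
    category: str
    terms: Set[str]   # words or phrases the user considers relevant to the category

# The user edits the feature to add domain-specific synonyms for "invoice".
invoice_feature = Feature(
    name="invoice terms",
    category="invoice",
    terms={"invoice", "bill", "statement of charges"},
)

def matched_features(text: str, features: List[Feature]) -> List[str]:
    """Return the names of features whose terms appear in the text, so a
    prediction can point back at the user-defined evidence behind it."""
    lowered = text.lower()
    return [f.name for f in features if any(term in lowered for term in f.terms)]

print(matched_features("Please see the attached bill for March.", [invoice_feature]))
# -> ['invoice terms']
```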

While most of my designs for Duet avoid using modals, adding or editing a feature is a workflow that needs to be completed before continuing to teach the model.

Spending (development) time where it matters
While we had to make compromises in some parts of the interface, there were places where the value of the experience outweighed the development cost. By building trust with the engineering team, I was able to make an effective case for developing complex features.

The entity selection tool is an advanced highlighting interaction that lets users select words, phrases, or sentences and label them. The interaction model is nuanced and unique to Duet, so reusing an existing design pattern would have led to confusion about how to use it.
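The core of that interaction, stripped of its front-end details, is snapping a rough drag selection to whole words so every highlight can be labeled cleanly. The Python sketch below illustrates only that snapping idea under my own assumptions; it is not the production implementation.

```python
# Rough sketch: snap a raw character-offset selection to whole-word boundaries
# so a sloppy drag still yields labelable words or phrases. All names here are
# illustrative assumptions.
import re

def snap_to_words(text: str, start: int, end: int) -> tuple[int, int]:
    """Expand a character-offset selection so it covers complete words."""
    words = [m.span() for m in re.finditer(r"\S+", text)]
    covered = [(s, e) for s, e in words if e > start and s < end]
    if not covered:
        return start, end
    return covered[0][0], covered[-1][1]

text = "The total amount due is listed on the second page."
# A drag that starts mid-word in "total" and ends mid-word in "due" ...
start, end = snap_to_words(text, 6, 19)
print(text[start:end])  # -> "total amount due"
```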

In addition to annotated wireframes, I created simple animations the engineers could refer to.

GPT-3 has entered the chat
While the team at Intelus had high hopes for the machine teaching approach to building ML models, the public release of GPT-3 shifted the conversation toward large language models. In October 2022, the startup began the process of winding down operations.