We recently deployed our very own Intelligent Virtual Agent at 1904labs, named Wally (if you haven’t read it yet, you can see our blog introducing Wally here). He’s the first voice you hear when you call us, having replaced our menu-prompt phone system. Given our work in the Intelligent Virtual Agent space, it was a natural next step to adopt the same kind of service we help our clients build.
In this series of blogs, we’ll highlight the work that went into creating and shaping Wally into more than a simple Intelligent Virtual Agent: someone we feel represents the labs. It all started with determining who was calling, what types of questions we needed to answer, and creating a base for how those answers would sound.
Determining Wally’s conversational capabilities
Although Wally now has a fleshed-out personality, that wasn’t the case at the beginning of the project. Before developing Wally’s personality, the team took a step back to assess the task and how best to achieve it. We started by building an understanding of who calls us and why, along with how key internal stakeholders used our existing phone system and the types of calls they received.
To do this, the team ran an initial design sprint to lay out conversation paths, informed by interviews with the heads of the different areas that received calls: operations, talent acquisition, sales, and marketing. From these conversations, we were able to determine the primary reasons people called:
- Looking for help locating our offices
- Understanding how they can access the offices and where they can park
- Finding out what hours we operate (and when people may be in the office)
- Learning what our Covid policies are for anyone who wishes to visit the office
- Leaving messages for specific people, such as the aforementioned operations, talent acquisition, sales, and marketing teams
By knowing these intents and understanding more about both our callers and internal stakeholders, the team was able to build journey maps for the current typical call flow and story maps to build a desired call flow.
Turning a story map into words and initial tweaks
Using the story map as a guide, the team started writing out the initial conversation flows, using the intents as nodes from which conversations could take direction. From there, they took the written speech and built a wireframe so they could hear it read aloud and test it before moving it to any development or production environments.
This was vital because how we write and how we speak are very different. The team listened to the conversation flows together and noted what worked and what didn’t, modifying the script in real time and ideating to create speech that didn’t just read like something written down, but had a genuinely conversational feel to it. Some of the things the team tried sounded natural, while others just sounded wrong. They couldn’t tell until they heard it aloud, though, and that’s when they could critique it.
Getting to that stage quickly was crucial because it made the words tangible and allowed for real progress. Even without the personality, we could hear what the technology was good and bad at and begin to adapt. Until then, the idea of the 1904 Intelligent Virtual Agent was just theoretical. Hearing the bot gave the soon-to-be Wally his first sense of being real. The next stage was to define his personality; then we could adapt the speech again to reflect how we wanted Wally to sound and exactly what we wanted him to say.
This is the second in a series of blogs about creating 1904labs’ own Intelligent Virtual Agent. Next, we will highlight how the team developed Wally’s personality.