
Salesforce Events Einstein Voice Assistant

An efficient tool for attendees to find what they need, all within a delightful experience.

Role: Product Designer / Lead UX + Visual Designer

Deliverables: Final Designs, Interaction Specs, Visual Specs

One App, Many Events

Salesforce puts on dozens of events each year (20 in 2017), with each event taking over anywhere from 1 to a dozen venues, ranging from all 3 Moscone centers in San Francisco (3 blocks and 87 acres) to smaller hotels. Attendance can reach up to 171,000 people (looking at you, Dreamforce). With so much going on across such large spaces and crowds, attendees need easy access to the information that matters to them, whether that's the next keynote on their schedule or some lunch between networking sessions. The Salesforce Events app gets attendees that information while keeping their conference experience personal, through easy access to maps, schedules, and support. The app also lets attendees test the latest technology and products for free, which is a wildly effective way to market what is being spoken about and sold at the conferences.

The Problem: Layered Navigation, Massive Events

 

Layered navigation, added complexity.

While the Events app helps with most of these situations, its navigation is fairly standard, requiring attendees to dig through layers of data to find what they need. Through interviews with attendees at Dreamforce and a review of prior user research, we knew our users needed a better, more direct way to access their data and search for what they need in the moment they need it. The product team needed to deliver a frictionless, more engaging way to search the app, so we asked ourselves: what if attendees could find what they needed conversationally?

 

Leveraging our products and showing them off.

At the previous Dreamforce, Salesforce announced Field Service Chatbots, a product that helps customers communicate with support agents through real-time interactions. To spread the word about the feature, we placed a build of the chatbots into the Events app as a way to contact event support, giving users hands-on experience with a new product. The Events app is a tool for helping users prepare for and better experience Salesforce events, but it's also a mechanism for introducing customers to new products while giving them hands-on experience with them.

With a tight deadline of 3 months from design to development, we needed to get the project off the ground fast. The chatbot's familiarity, efficient structure, and user-friendly interface proved to be stable ground to build on, giving us an accessible, fully user-tested foundation. It also let us show off the Field Service Chatbot SDK, which was being released at the same Dreamforce where this feature would launch. To strengthen the core experience further, and to show off another new product technology, we layered voice functionality on top of the Chatbot SDK with help from the Einstein AI engine. Salesforce was releasing a new Einstein Voice app at the same time as this feature, so customers again got a hands-on glimpse of an exciting new product. Voice also let us flatten the navigation and ensure that attendees had a personal, engaging experience at Salesforce Events. In the end the user had three input options: speaking, typing, and tapping the chatbot's suggestions, each one acknowledging the different environmental conditions found at a conference.

Ideation & Collaboration

Determining the flows.

With a tested foundation and a looming deadline, we set to work creating a voice experience that worked fluidly with the preexisting chatbot product. We met with the Field Service Lightning team to learn about the product's UX patterns, its technical implementation, and their future plans for it. This let us build the voice feature in a way that Field Service could reuse within their own products later.

To get started, we looked at Events app usage data to identify the most heavily used features. We used this data to map out 5 key flows for the voice feature (a rough sketch of how these flows might be modeled follows the list):

  • Find a session: At events like Dreamforce, hundreds of sessions happen each day. This flow helps users find where sessions are located and when they start, and fill gaps in their schedule when they have a break or can't get into a full session.
  • Dreamforce Information: Large event spaces and multiple venues make it difficult to find things like event support, a particular customer booth, food, or the restroom. The Information flow allows attendees to ask for any number of these things and find them.
  • Give Back: Supporting non-profits through donations and amplifying their messages has been a mission of Dreamforce since the conference started. This flow lets attendees do so from anywhere at any time while learning more about who they are helping.
  • Easter egg: This flow is purely about providing delight during an inspiring and admittedly stressful span of days. Phrases like “who are you” are met with unexpected and humorous responses.
  • General Questions: From our own experience using voice assistants and from Field Service usage data, we know that when users are presented with a voice or chat interface, they tend to ask some random questions. We put together a comprehensive list of “general questions” attendees might ask that relate to attending a conference.
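
To make the flow structure concrete, here is a minimal Swift sketch of how flows like these might be modeled as assistant intents with sample utterances. The names and utterances are illustrative only, and the keyword router is a stand-in for the trained Einstein intent model, not the actual Events app implementation.

```swift
import Foundation

// Hypothetical model of the five MVP flows as assistant intents.
// Names, utterances, and the keyword router are illustrative only.
enum AssistantFlow: String, CaseIterable {
    case findSession, eventInformation, giveBack, easterEgg, generalQuestion
}

struct FlowDefinition {
    let flow: AssistantFlow
    let sampleUtterances: [String]   // examples used to train/route requests
}

let flows: [FlowDefinition] = [
    .init(flow: .findSession,      sampleUtterances: ["Where is the next keynote?",
                                                      "What sessions are open at 2 pm?"]),
    .init(flow: .eventInformation, sampleUtterances: ["Where can I get lunch?",
                                                      "Where is the nearest restroom?"]),
    .init(flow: .giveBack,         sampleUtterances: ["How can I donate?"]),
    .init(flow: .easterEgg,        sampleUtterances: ["Who are you?"]),
    .init(flow: .generalQuestion,  sampleUtterances: ["Is there Wi-Fi on site?"]),
]

// A naive keyword router stands in for the trained Einstein intent model.
func route(_ utterance: String) -> AssistantFlow {
    let text = utterance.lowercased()
    if text.contains("session") || text.contains("keynote") { return .findSession }
    if text.contains("donate")  || text.contains("give back") { return .giveBack }
    if text.contains("who are you") { return .easterEgg }
    if text.contains("where") { return .eventInformation }
    return .generalQuestion
}

// Example: route("Where is the next keynote?") resolves to .findSession
```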

Close collaboration.

Because we were building the voice assistant on top of the chatbot interface, we needed to decide whether it should match the preexisting interface or push further and enrich the current system. I set boundaries for myself and explored three directions: one close to the current Chatbot UI, a further-reaching UI that enriched the interface based on the voice feature iterations and the parallel work I was doing on the Mobile Flows project, and a middle ground that blended the two. I formed a close partnership with the Field Service team throughout this process to get feedback and ensure we were making the right decisions for both products.

As I iterated, the Chatbots team got increasingly excited about the more exploratory interfaces. After numerous feedback sessions, design critiques, and stakeholder reviews, we decided to move forward with the further-reaching interface. The Chatbots team was excited enough about the update that they worked it into their product roadmap, so it can eventually live within the Chatbot SDK as well as Events. This was a huge success: it spent fewer calories by delivering one new interface for two teams, let us innovate on our product UX, and brought two rather distant teams together to collaborate.

Focusing On The Details

Training the AI.

Because the scope of this feature was rather large (building and training a voice assistant) and the timeline was not, we scoped the MVP release to cover just our Dreamforce event. This let us focus the process and ensure the bot was well trained for that specific event. To train the bot, we held weekly “blitzes.” These cross-functional, cross-cloud meetings brought PMs, developers, designers, and stakeholders into a room to learn about the new feature and help the Einstein engine learn the kinds of questions Dreamforce attendees might ask it. They were also a chance for us to advertise our work and get other teams excited. I worked with our accessibility specialist to ensure the new feature could be used by everyone, and close collaboration with my developers let us refine the interactions so they were far richer and more purposeful.

Defining the system.

As I iterated on the designs, I kept in mind that our users wouldn't be spending long intervals within this feature; they use the Events app during small breaks while running around the conference campus. I aimed to have each component and interaction match those needs, and each interaction and animation should also educate the user on the feature's functionality and set expectations for what happens next. There were 5 key areas that I iterated on heavily to ensure the interface was efficient, unobtrusive, and delightful.

  • Navigation dock: This is the core of the Einstein Assistant experience. The primary buttons on the screen (voice, menu, and keyboard access) are laid out so that a user can quickly reach them with their thumb without shifting grip and losing valuable time. The buttons follow a defined hierarchy: voice access is primary via the larger blue button, while the secondary keyboard access and the topics menu, which resets the flow, are smaller white buttons. Tertiary actions live in the app header since they are used less often.
  • Concentrated keyboard input: The keyboard is the secondary input type. Each flow follows a decision-based interaction model: once a user decides which input type to use, the other input types fade away. They remain easily accessible if the user changes their mind, but this helps the user focus clearly on the path they have chosen. Keyboard input works the same way: the user can quickly dismiss the keyboard with the “dismiss keyboard” button, but once the attendee begins typing, that button transitions to a “send” button, keeping the user focused on the flow they committed to. The transitions within each interaction are fluid and beautiful to look at, providing further focus along with a moment of delight. We explored one keyboard transition with a bit of added animation flourish and another that removed all flourish and optimized for efficiency; we moved forward with the latter, believing animated delight would be better suited elsewhere.

  • Dictation & Transcription: As the user dictates to the assistant, the transcription appears immediately within a chat bubble, connecting the user's spoken words to the existing chat thread. Once the user commits to the dictation, the previously tappable options are disabled, further committing the user to the flow and focusing them on what is happening now instead of what happened previously. That said, the user can always reset the conversation with the menu or change the topic via voice or keyboard input. One iteration of the dictation interaction focused on educating the user on when to speak, hinted at by the “…” placed within the initial dictation text bubble; another, similar to Google Assistant, only showed text once the user spoke.
  • Access point: The core experience of the assistant was important, but we also needed to decide where the feature would live within the Events app. I explored placing the access point in the app header, in a new tab within the tab navigation, in a FAB, and elsewhere. We moved forward with an avatar in the app header, as it was the least disruptive location while we tested the MVP release at Dreamforce.
  • Waveform: This was the fun part. To make sure the attendee knew dictation was active and that they could speak, we needed a feedback mechanism. After iterating on different animations, we settled on an obvious and beautiful three-layered waveform (the first screen in the image below). As the user spoke, the waves moved along slightly different horizontal paths, and the volume of the attendee's voice determined the vertical movement, both in height and in vibrational frequency. While the waveform was highly useful, it also provided a key moment of delight and beauty within the interface and brought life to the experience. A rough sketch of this volume-to-motion mapping follows the list.
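
As an illustration of the waveform behavior described above, here is a minimal Swift sketch, assumed rather than taken from the production animation code: each of the three layers is a sine wave whose amplitude and frequency scale with the normalized microphone level, and whose phase drifts at a slightly different speed so the layers travel along slightly different horizontal paths. All parameter names and values are hypothetical.

```swift
import Foundation

// Hypothetical sketch of the three-layered waveform. Each layer is a sine
// wave; the mic level drives height and vibrational frequency, while a
// per-layer phase speed gives each wave its own horizontal drift.
struct WaveformLayer {
    let phaseSpeed: Double       // horizontal drift, unique per layer
    let amplitudeScale: Double   // relative height of this layer

    /// Vertical offset at horizontal position `x` (0...1) for the given
    /// elapsed `time` and normalized mic `level` (0 = silence, 1 = loud).
    func y(at x: Double, time: Double, level: Double) -> Double {
        let amplitude = level * amplitudeScale        // louder voice -> taller wave
        let frequency = 2.0 + level * 6.0             // louder voice -> busier wave
        let phase = time * phaseSpeed                 // each layer drifts differently
        return amplitude * sin(2 * Double.pi * frequency * x + phase)
    }
}

let layers = [
    WaveformLayer(phaseSpeed: 1.0, amplitudeScale: 1.0),
    WaveformLayer(phaseSpeed: 1.3, amplitudeScale: 0.7),
    WaveformLayer(phaseSpeed: 0.8, amplitudeScale: 0.5),
]

// Example: sample each layer mid-screen at t = 0.5 s with a moderate voice level.
let offsets = layers.map { $0.y(at: 0.5, time: 0.5, level: 0.6) }
```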

Collaborative specs.

As the designs were finalized, I created a quick spec deck outlining the decisions we had made along the way. This deck also housed all the accessibility notes from the meetings with our accessibility specialist as well as interaction specs and notes. Since we were moving fast, I pushed iterative specs to the development team via Zeplin as well. This allowed for constant collaboration and communication within the team, all contextually via comments on the actual screens.

Future Facing Plans

With the UX defined and the product built, we believed we had a solid MVP release that satisfied our core goals: we innovated within our product space, promoted efficiency, flattened navigation, and developed new technology the whole company could benefit from. It was time to focus on the future of the feature and learn about its usage. As we built the voice assistant, we also built measurement tools into the feature, which let us learn what was working, what wasn't, which questions were being asked, and how they were being asked. This would help us train the bot further and make better use of the feature at other events. Because of our strong partnership with the Field Service Chatbot team, this new Salesforce pattern was also going to be added to their product roadmap. With our goals met and a delightful UX created, we were ready to iterate on the future of the product and learn from near-term usage.

Collaborators

Kevin Ota – Product Manager
Adam Drazic – Engineering Manager
Myles Thompson – Product Designer
