April 9, 2018
Let’s face it – articles are long. Life is busy.
The “TL;DR” (Too Long; Didn’t Read) series at PointSource, a Globant Division, is an opportunity to share summaries of news and articles about emerging tech. These morning discussions take place in the PointSource Café; those who can join gather around with hot coffee and fresh donuts to share new tidbits about the chosen topic.
We’ve enjoyed learning about these technologies so much that we wanted to share them with you. This series of blog posts will serve as quick summaries of these discussions and highlights of articles – keeping it short and simple. We often find the conversations veering away from sharing news and moving towards discussions of culture, ethics, futurism and how these topics interconnect.
Follow along with us each month as we try to keep up with the insanity that is current technology evolution!
VUIs: Voice User Interfaces
Voice User Interfaces (VUIs) let users interact through voice or speech commands, and virtual assistants such as Siri, Google Assistant, and Alexa are once again dramatically changing the way we interact with technology. To keep up to date with VUIs, we need to learn a few things about creating useful and usable voice interfaces and interactions.
It’s predicted that by 2020, browsing and interacting will rely less on your hands and eyes and more on your voice. With VUIs, UX writing is crucial. You have to be extremely clear about what the user’s options are when navigating a space, since there are no longer any visual cues. Several conversations are happening about how this will change the future of design. Designers will still need to create wireframes and user journeys, but before they begin, it’s important to understand the software behind VUIs. Products that talk don’t just need to be written well; they must also have good conversations. And as expected, having a good conversation is harder for a computer than for a human.
“What do I even do with Alexa?”
- PointSource employees were all gifted an Amazon Alexa this past Christmas, and that got us wondering about how people are putting this device to use. Most people in attendance agreed on feeling an initial friction or awkwardness while talking to these voice assistants.
- As designers, if we can smooth people’s transition to conversational UIs, they will have an easier time responding to AI and adopting new technology.
- Right now, conversational UIs are still so new that people are afraid to talk to the devices. It’s strange to talk to something that isn’t visually there.
- The lack of transparency and visual cues leaves a lot of people feeling a little uneasy. Confidence can drop when Alexa responds, “Sorry, I don’t know that one,” and that can create a negative feedback loop. How can Amazon respond to this lowered confidence, rather than having Alexa repeat the list of options?
- There are some pretty big expectations for VUIs; it’s a big jump to get to something that is humanlike. Pure voice interaction is having a hard time replacing a screen, but what if it were an enhancement to the screen rather than the whole interaction itself?
Deep learning with Alexa: will she learn more about us?
- Since most responses from Alexa (and from the majority of chatbots and voice assistants) are prescribed and come from a decision tree, the experience won’t be completely flawless until deep machine learning is integrated.
- On the subject of how we talk to these bots: in response to the #MeToo movement, Amazon has changed the way Alexa reacts to sexual harassment. Alexa will now say “I’m not going to respond to that” when suggestive or abusive language is used.
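The “prescribed responses from a decision tree” idea above can be sketched in a few lines: the assistant maps a recognized intent to a fixed, scripted reply, and falls back to a canned apology (“Sorry, I don’t know that one”) when nothing matches. This is a hypothetical illustration, not Amazon’s actual implementation; the intent names and replies are made up.

```python
# Sketch of scripted (non-learned) voice-assistant responses.
# Every reply is hand-written in advance; nothing is generated
# or learned from the user.

RESPONSES = {
    "get_weather": "It's 72 degrees and sunny.",
    "set_timer": "Timer set for 10 minutes.",
}

FALLBACK = "Sorry, I don't know that one."

def respond(intent: str) -> str:
    """Look up a scripted reply; fall back when the intent is unknown."""
    return RESPONSES.get(intent, FALLBACK)
```

Because every branch is authored by a person, such an assistant can only handle what its designers anticipated, which is exactly why unanticipated requests trigger the fallback and the negative feedback loop described earlier.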
Interested in continuing the conversation with us? Start with our 2018 Artificial Intelligence and Chatbot Report to learn more about how consumers feel about these newer technologies.