Refactoring the UX of a popular voice application for music streaming using the Aural IDM model

Caione A.; Fiore A.; Mainetti L.; Manco L.; Vergallo R.
2020-01-01

Abstract

Commercial voice services such as Google Assistant and Amazon Alexa are enormously popular. While Natural Language Processing (NLP) and Artificial Intelligence (AI) techniques applied to the aural channel deliver high-quality voice recognition, the voice channel still lacks a sound methodology for designing user experiences. For instance, the Amazon Alexa team suggests gathering the information model of an Alexa skill by talking with test users from behind a curtain, pretending to be the machine. In our opinion, this kind of bottom-up strategy is not effective because it overfits the UX to very specific cases; a top-down approach, by contrast, can provide the right answer even in unseen and unpredictable situations. Our work proposes a novel model-driven approach that allows authors both to design the overall vocal UX from scratch and to rethink existing visual UXs before porting them to the aural channel. Our approach, which is inherently top-down, is based on Aural IDM, a UX design method conceived for screen-reader modelling in the early 2000s. In this paper we refactor the Spotify Alexa skill to demonstrate the validity of Aural IDM for designing vocal UXs. The experience of Spotify on Alexa is still rudimentary and does not reflect the richness of the desktop app. A prototype is currently under development, and the results of a comparison between the AS-IS and TO-BE voice skills will be the subject of future work.
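The abstract refers to gathering the "information model" of an Alexa skill. As background for readers unfamiliar with the platform, the sketch below shows what such a model looks like in practice: a minimal Alexa interaction model for a music-playback skill, built as a Python dictionary and printed as the JSON the Alexa developer console expects. The invocation name, the PlayTrackIntent, and the sample utterances are illustrative assumptions, not taken from the paper or from the actual Spotify skill.

```python
# Minimal sketch of an Alexa interaction model (hypothetical example;
# the intent name, invocation name, and utterances are assumptions,
# not drawn from the paper or from the real Spotify skill).
# The interaction model maps spoken utterances to intents and slots;
# deciding its shape is the design step the authors propose to drive
# top-down from an Aural IDM model rather than bottom-up from
# Wizard-of-Oz sessions.
import json

interaction_model = {
    "interactionModel": {
        "languageModel": {
            "invocationName": "my music player",
            "intents": [
                {
                    "name": "PlayTrackIntent",  # hypothetical intent
                    "slots": [
                        {"name": "track", "type": "AMAZON.SearchQuery"}
                    ],
                    "samples": [
                        "play {track}",
                        "put on {track}",
                        "I want to listen to {track}"
                    ]
                },
                {"name": "AMAZON.HelpIntent", "samples": []},
                {"name": "AMAZON.StopIntent", "samples": []}
            ]
        }
    }
}

# Emit the JSON document that would be uploaded to the skill definition.
print(json.dumps(interaction_model, indent=2))
```

In the bottom-up workflow the abstract criticizes, the sample utterances would be harvested from curtain-test sessions with users; under the top-down approach the paper advocates, they would instead be derived from an explicit Aural IDM dialogue model.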
Year: 2020
ISBN: 978-1-4503-7655-6
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11587/442925

Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science: not available