A Model-driven Approach to Design User Interaction on Smart Speakers
Vergallo Roberto (first author)
2022-01-01
Abstract
We are witnessing the widespread adoption of Vocal Assistants (VAs) in our personal devices, such as smartphones, wearables and smart speakers. These intelligent software services allow us to control IoT devices or trigger our preferred voice-based applications. Despite the impressive performance of the underlying technologies (e.g. Natural Language Processing, speech-to-text and vice versa), the User Experience (UX) of vocal apps is often poor. This is mainly due to the peculiar nature of the vocal channel, which is linear and transient. Merely porting screen-based apps to VAs, without deeply re-thinking and re-designing the whole app, is not going to meet user expectations. In this paper we report on our experience refactoring the UX of a popular music streaming app that is widely used among young people: Spotify. The official Alexa skill for Spotify lacks a strong navigational model, which is fundamental for an application meant to run without a display. Moreover, it lacks a number of important features that are present in its desktop and mobile counterparts. After refactoring the UX with the Aural IDM methodology, we selected 20 beta-testers who used both skills (the original and the refactored one) for one day. The cohort had a mean age of μ=30.25 years with a variance of σ²=41.2; participants were mainly students or junior workers, with a 60%/40% male-to-female split. After being polled and interviewed, the volunteers showed a clear preference for the refactored version, assigning it the maximum score two and a half times as often as the official version.
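The abstract's point about a "strong navigational model" can be made concrete. Below is a minimal, hypothetical sketch, not taken from the paper: the class, method and node names are our own assumptions, and the actual refactoring follows the Aural IDM methodology described in the full text. The idea is that a display-less voice app can keep an explicit navigation stack, so commands like "go back" or "where am I" always have a well-defined meaning on a linear, transient channel:

```python
# Hypothetical sketch of a navigation stack for a display-less voice app.
# Names (VoiceNavigator, open_node, etc.) are illustrative assumptions,
# not the paper's Aural IDM design.
class VoiceNavigator:
    def __init__(self):
        # The root of the dialogue; the stack records where the user is.
        self.stack = ["home"]

    def open_node(self, node: str) -> str:
        """Move deeper into the app (e.g. home -> playlists -> rock)."""
        self.stack.append(node)
        return f"You are now in {node}."

    def back(self) -> str:
        """Handle 'go back': pop one level, never past the root."""
        if len(self.stack) > 1:
            self.stack.pop()
        return f"Back to {self.stack[-1]}."

    def where_am_i(self) -> str:
        """Handle orientation questions on a channel with no screen."""
        return "You are in: " + " > ".join(self.stack)


# Example dialogue:
nav = VoiceNavigator()
print(nav.open_node("playlists"))   # You are now in playlists.
print(nav.open_node("rock"))        # You are now in rock.
print(nav.where_am_i())             # You are in: home > playlists > rock
print(nav.back())                   # Back to playlists.
```

On a screen, this state is visible for free; on the vocal channel it must be modeled explicitly, which is the gap the abstract attributes to the official skill.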
| File | Description | Type | License | Size | Format |
|---|---|---|---|---|---|
| A_Model-driven_Approach_to_Design_User_Interaction_on_Smart_Speakers.pdf | Conference proceedings | Publisher's version | NOT PUBLIC - Private/restricted access (authorized users only) | 1.36 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


