
User-centered design concept for ERP-integrated User Assistant

🎯Project goal

The main objective of this project was to explore the dynamics of suggestions made by a user assistant integrated in an ERP system.

I worked to answer the following research questions:

  • How should help provided by the system be framed for users?
  • When is the right time to offer help?
  • How extensive should help messages be?

🌐Approach

I designed a user assistant interaction concept based on user input. Later, I discussed it with context-relevant participants in a participatory workshop.

This allowed me to analyze the challenges and opportunities of a user assistant feature for an ERP system. Ultimately, I suggested feature requirements and design recommendations for future design work in this area.

Theoretical background

Lessons learned from Clippy and Rover

In order to learn from earlier experiences with digital assistants and to understand why Clippy and Rover failed as “digital assistants”, I reviewed Luke Swartz’s (2003) thesis “Why People Hate the Paperclip: Labels, Appearance, Behavior, and Social Responses to User Interface Agents.”

From the diverse findings of this thesis, I selected five concepts that capture the main lessons learned from the digital assistants Clippy and Rover, based on the insights that Swartz (2003) established through theoretical, qualitative, and quantitative studies.

🔵 Lesson learned #1: 

Cognitive labels and explicit system-provided labels can influence how users perceive and interact with virtual assistants.

🔵 Lesson learned #2:

The way users learn about new features of the system and seek help is related to their level of expertise. It is crucial to provide solutions that take into account different learning styles and levels of know-how.

🔵 Lesson learned #3:

The mental model users have about the virtual assistant influences how they interact with it and could generate expectations that sometimes do not correspond to reality. 

🔵 Lesson learned #4:

Consistency is crucial in the design of an agent’s verbal and non-verbal traits, such as the character’s appearance.

🔵 Lesson learned #5:

When providing proactive help, the assistant needs to behave coherently with users’ decisions: if the user declines the offered help, the assistant should respect this decision and not be insistent or intrusive.

Best practices for timing, length, and framing
TIMING

Timing is a determining factor when providing help within an enterprise system. To create a list of design recommendations based on previous research, I analyzed the concepts of etiquette and timing, proactive assistance, and task support, according to different authors.

🔵 Computers as social actors: etiquette and timing

Design recommendations:

1. The characteristics that reinforce the perception of social interaction suggested by Nass and Steuer (1993), namely language use, interactivity, playing a social role, and human-sounding speech, should be considered when determining the right timing for the help provided by virtual assistants.

2. In order to conceive the right timing for user assistants, the question “What would one want in a human assistant?” should be considered. This could bring insights to the design process, following the prediction of CASA theory that interactions between humans and computers are governed by the same psychological rules that apply to interactions between people (Swartz, 2003).

3. Regarding etiquette, Miller and Funk (2001) proposed a short list of rules for the behavior of virtual assistants: “Don’t make the same mistake twice; talk explicitly about what you’re doing and why, and build a relationship”. These rules could guide the reflection on the question “What would one want in a human assistant?” mentioned above when designing the timing of virtual assistants.

🔵 Proactive assistance and context-awareness

Design recommendations:

1. Based on the key characteristics of an advanced user assistance system defined by Maedche et al. (2016), the following capabilities should be considered when designing proactive help, so that its timing stays coherent with users’ needs:

– The virtual assistant should allow users to decide whether or not to follow the provided assistance.

– The virtual assistant should have self-learning capabilities and be context-aware.

– The virtual assistant should be informative about the effects of any offered help or option and any alternative action.

2. Following the recommendations of Xiao et al. (2003), help should only be suggested by the system when the confidence in offering accurate assistance is very high. To this end, techniques for the automated analysis of user goals and intentions should be carefully developed and improved over time.

3. In his study, Swartz (2003) elaborates on virtual assistance offered as tips only when users trigger it explicitly, by clicking on the agent. To provide help at the right time, the system should give users greater control over the virtual assistant, including the possibility to trigger such tips or not. The system should also let users customize when proactive tips appear. The sketch below combines these timing rules.
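To make these timing recommendations more tangible, here is a minimal sketch in TypeScript of how such a gate could look. It is an illustration under my own assumptions: the names (shouldOfferHelp, AssistantSettings) and values such as the confidence threshold are hypothetical and do not come from the cited works or from any existing ERP system.

```typescript
// Minimal sketch (hypothetical names): gate a proactive tip on prediction
// confidence, the user's opt-in, and respect for a recent "No, thanks".
interface AssistantSettings {
  proactiveTipsEnabled: boolean; // users can switch proactive tips off entirely
  minConfidence: number;         // e.g. 0.9 (assumed value): only offer help when very sure
  snoozeMinutes: number;         // user-settable quiet period after declining a tip
}

interface SuggestionCandidate {
  topic: string;      // e.g. "invoice-matching" (hypothetical ERP task)
  message: string;
  confidence: number; // 0..1, output of an assumed intent-detection component
}

function shouldOfferHelp(
  candidate: SuggestionCandidate,
  settings: AssistantSettings,
  lastDeclinedAt: Map<string, Date>, // topic -> time the user last declined help on it
  now: Date = new Date()
): boolean {
  if (!settings.proactiveTipsEnabled) return false;                // the user stays in control
  if (candidate.confidence < settings.minConfidence) return false; // only very confident suggestions
  const declined = lastDeclinedAt.get(candidate.topic);
  if (declined) {
    const minutesSince = (now.getTime() - declined.getTime()) / 60_000;
    if (minutesSince < settings.snoozeMinutes) return false;       // do not insist after a decline
  }
  return true; // confidence is high and the user has not opted out or recently declined
}
```

In this sketch, declining a tip simply snoozes its topic for a user-defined period, which also reflects lesson learned #5 above: the assistant respects the user’s decision instead of insisting.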

🔵 Task support

Design recommendations:

1. The possibility of providing support on task identification, task completion, task recommendation, scheduling, and multi-step accomplishment should be considered when designing a virtual assistant to be embedded in an enterprise system, since task management supported by an intelligent assistant could improve the user experience (Trippas et al., 2019). A brief sketch of these capabilities follows below.
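As a rough illustration of how these capabilities could be captured as requirements, the interface below lists them as methods a future assistant component might implement. The interface and method names are hypothetical; only the five capabilities themselves come from Trippas et al. (2019).

```typescript
// Minimal sketch (hypothetical names): task-support capabilities as an interface.
interface TaskSupport {
  identifyCurrentTask(recentActions: string[]): string | null; // task identification
  assistTaskCompletion(taskId: string): string[];              // task completion: remaining steps to suggest
  recommendNextTask(completedTasks: string[]): string[];       // task recommendation
  scheduleTask(taskId: string, due: Date): void;               // scheduling
  multiStepProgress(taskId: string): number;                   // multi-step accomplishment, progress from 0 to 1
}
```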

LENGTH

To examine the best practices regarding the length of help messages in enterprise systems, I conducted a literature review about the concepts of dynamic and user-settable levels, and found one main design recommendation based on previous research.

🔵 Dynamic and user-settable levels

Design recommendations:

1. A multilayer interface composed of slider bars would allow users to customize the amount of help content. Additionally, the authors suggest a rating system for users to evaluate the help content, indicate the accuracy of the provided assistance, and report possible errors and their causes (Açar & Tekinerdogan, 2020). A sketch of such user-settable levels follows below.
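As a rough sketch of such user-settable levels, the example below maps a slider position to the amount of help text shown and records a simple rating. The four-level scale and the names (HelpLevel, renderHelp) are assumptions of my own, not taken from Açar & Tekinerdogan (2020).

```typescript
// Minimal sketch (hypothetical names): a slider-style help level that controls
// how much assistance is shown, plus a simple rating record for feedback.
type HelpLevel = 0 | 1 | 2 | 3; // slider position: 0 = off, 3 = full walkthrough (assumed scale)

interface HelpContent {
  short: string;         // one-line hint
  detailed: string;      // longer explanation
  walkthrough: string[]; // step-by-step guided instructions
}

function renderHelp(content: HelpContent, level: HelpLevel): string {
  if (level === 0) return "";                    // the user opted out of help
  if (level === 1) return content.short;         // brief hint only
  if (level === 2) return content.detailed;      // longer explanation
  return content.walkthrough.join("\n");         // level 3: full guided sequence
}

// Rating record: was the assistance accurate, and if not, what went wrong?
interface HelpRating {
  helpContentId: string;
  accurate: boolean;
  errorCause?: string; // optional free-text description of the error and its cause
  ratedAt: Date;
}
```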

FRAMING

In the article “The framing of decisions and the psychology of choice”, Tversky and Kahneman (1981) introduced the concept of framing. They stated that the way a situation is framed or formulated produces “shifts of preference” in the perception of decision-making factors such as problems, probabilities, and outcomes.

🔵 Framing effect

Design recommendations:

1. Considering that the way a decision is presented to users can influence their choices, designers should pay attention to the labeling and wording associated with these choices (Cockburn et al., 2020). Especially when introducing virtual assistants in enterprise systems, it would be important to present the assistance with a positive framing.

2. Risks related to interface choices should be considered when introducing new features. Reducing the risk by making clear the benefits and possible side effects of a new feature, in this case, a virtual assistant, could increase feature adoption and improve user experience.

3. It is pertinent to consider whether and when it is appropriate to employ the framing approach. Designers should reflect on manipulative behaviors and neutrality, taking into account specific contexts. It would be advisable for designers to focus on “assisting users in making good decisions regarding the features they enable” (Cockburn et al., 2020).

SENSE OF CONTROL AND LEARNING

In the article “Direct manipulation vs. interface agents”, Shneiderman and Maes (1997) elaborate on the sense of accomplishment and control that enables users to feel they have mastery over computers. They claim that virtual agents can reduce users’ sense of control, especially when they are presented as anthropomorphic representations.

Design recommendations:

1. According to Shneiderman, as cited in Swartz (2003), interfaces that encourage an “internal locus of control” allow users to feel a sense of mastery over the system. Therefore, options to customize and configure the system’s assistance should be considered when designing a virtual assistant, in order to increase users’ sense of control.

2. A less-anthropomorphic version of a virtual assistant is suggested by Swartz (2003) to make the user feel more in control. 

3. Agents increase users’ feelings of control and self-reliance when teaching them new skills (Swartz, 2003). Consequently, the role of the virtual assistant as a facilitator for learning new skills should be considered, according to the needs and interests of the users.

Design recommendations on Persuasive Technology in HCI

Caraban et al. (2019) introduced a framework of 23 nudging mechanisms found in a systematic literature review. They organized these nudging mechanisms into six categories (facilitate, confront, deceive, social influence, fear, and reinforce), in which the presented nudges leverage 15 different cognitive biases.

Cognitive bias:

Nudge:

Methodology

Design framework

Lorem

Context

Lorem

Research methods

Participants

Workshops

🎯Lorem

Research findings

Data Analysis

Lorem

Themes

Lorem

Requirements elicitation

Lorem

🎯Lorem

Design concept

🟢 Design concept
Design principles

Lorem

Features

Lorem

🎯Lorem

Discussion

Learning and expertise

Lorem

Mental models

Lorem

Users' perceptions

Lorem

Framing effect

Lorem

Feeling of control

Lorem

Nudging

Lorem

Lessons learned and challenges

Lorem

🎯Lorem

This page is under construction ⏳

Thanks for your visit!