Evaluation Techniques for Interactive Systems
What is evaluation?
The purpose of evaluation is to ensure that our system performs as expected and meets user needs.
Evaluation should be carried out not only at the end but at every stage of the process, so that its results can be used to improve and change the design as needed.
When designing a system, analytic and informal techniques should be employed to test the design throughout the process. This way, we avoid wasting time and resources on implementations that aren’t necessary.
It is far easier to change a design during the early stages of development than it is later on.
Goals of evaluation
The evaluation has three basic objectives:
1) Determine the system’s functionality and usability, i.e. how effective the system is.
What is the point of a system whose design prevents the user from performing the intended task? The most important thing to establish is the system’s functionality. We can measure the user’s performance with the system to assess this, and thereby evaluate how well the system supports the intended tasks.
2) Evaluate the interaction’s impact on the users.
Even if the user is capable of completing their task efficiently, their experience in doing so may still be troublesome. We must ensure that the user has a pleasant experience throughout their tasks and is not overburdened.
To do so, we must examine how simple the system is to learn, its ease of use, and how satisfied the user is with it.
3) Identify the system’s specific flaws.
Other issues that affect both the functionality and usability of a design can arise. Some features of a design can occasionally have unintended consequences, or cause users to become confused.
Evaluation through expert analysis
As previously said, evaluation should take place at every stage of the design process. The first evaluation of a system, in particular, should ideally be completed before any implementation work begins. If the design can be reviewed, costly mistakes can be avoided since the design can be changed before substantial resource commitments are made. The later in the design process an error is identified, the more expensive it is to correct it, and hence the less likely it is to be corrected.
However, conducting user testing at regular intervals during the design process can be costly, and getting an accurate assessment of the interaction experience from incomplete designs and prototypes can be difficult.
Let’s look at several expert analysis methodologies for evaluating interactive systems.
These methods are flexible assessment approaches since they may be utilized at any point of the development process, from design specifications to storyboards and prototypes to full implementations. They’re also inexpensive because they don’t require user participation. They do not, however, evaluate actual system use; rather, they examine whether or not a system adheres to acknowledged usability criteria.
- Cognitive Walkthrough — The evaluator “walks” through the steps of a task with the product, asking at each step whether the user would know what to do and would understand the system’s response, in order to identify problematic usability features.
- Heuristic Evaluation — The interface is judged against a set of usability principles known as heuristics, which are broad rules of thumb rather than specific usability guidelines.
- Review-based evaluation — This evaluation is done using previous results as evidence to support aspects of the design.
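In heuristic evaluation, several evaluators typically rate each problem they find for severity, and the ratings are pooled to decide what to fix first. As a rough sketch of that bookkeeping, using entirely hypothetical problems and ratings on Nielsen’s commonly used 0–4 severity scale:

```python
from statistics import mean

# Hypothetical severity ratings (0 = not a problem, 4 = usability
# catastrophe) given by three independent evaluators to each problem.
ratings = {
    "No undo after deleting a record": [4, 3, 4],
    "Error message uses internal codes": [3, 3, 2],
    "Inconsistent button placement": [2, 1, 2],
}

# Average the ratings and sort so the most severe problems come first.
prioritized = sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True)

for problem, scores in prioritized:
    print(f"{mean(scores):.2f}  {problem}")
```

Averaging across evaluators matters because single-evaluator severity judgments are known to be unreliable; the ranking, not the exact numbers, is what guides the fixes.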
Evaluation through user participation
These expert analysis techniques are useful for filtering and refining the design, but they are not a substitute for genuine usability testing with the people who will be using the system.
User participation in evaluation is more common later in the development process, when the system has at least a functional prototype. Empirical or experimental approaches, observational methods, query techniques, and physiological monitoring methods, such as eye-tracking and heart rate and skin conductance measurements, are all examples.
Styles of evaluation
Laboratory studies and field studies are the two styles of evaluation.
Laboratory Study: In this style of study, users are moved out of their typical work environment to take part in controlled tests, generally in a specialized usability laboratory.
Field Study: This sort of evaluation sends the designer or evaluator into the user’s workplace to examine how the system works.
Empirical methods: experimental evaluation
A controlled experiment is one of the most effective ways to evaluate a design or a component of a design. This is used to support a claim or theory with empirical evidence. It can be used to investigate a wide range of situations at various levels of depth.
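A typical controlled experiment of this kind compares a measurable outcome, such as task-completion time, between two design variants. As a minimal sketch with hypothetical timing data, computing Welch’s t statistic (a standard two-sample comparison) with only the standard library:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical task-completion times in seconds from a between-groups
# experiment comparing two menu designs (six participants per group).
design_a = [34.1, 29.8, 31.5, 36.0, 28.7, 33.2]
design_b = [41.3, 38.9, 44.0, 39.5, 42.7, 40.1]

def welch_t(x, y):
    """Welch's t statistic for two independent samples with
    possibly unequal variances."""
    return (mean(x) - mean(y)) / sqrt(stdev(x) ** 2 / len(x)
                                      + stdev(y) ** 2 / len(y))

t = welch_t(design_a, design_b)
print(f"mean A = {mean(design_a):.1f}s, mean B = {mean(design_b):.1f}s, "
      f"t = {t:.2f}")
```

In practice you would feed the statistic into a proper significance test (e.g. via a statistics package) rather than eyeballing it; the point here is that the experiment yields numbers that can support or refute a specific hypothesis about the design.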
Observational techniques
Observing users engage with a system is a popular technique to acquire knowledge about its real use. They are usually requested to do a set of planned tasks, though they may be observed going about their daily responsibilities if observation is carried out in their workplace. The evaluator observes and records the behaviors of the users.
Ex: Think Aloud, Cooperative evaluation, Protocol analysis, Automated analysis, Post-task walk-through.
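Several of these methods (protocol analysis, automated analysis, post-task walkthrough) rely on a timestamped record of what the user did. A minimal sketch of such a log, with hypothetical action names, might look like this:

```python
import time

class EventLog:
    """Minimal timestamped log of user actions, usable as raw
    material for later protocol analysis or a post-task walkthrough."""

    def __init__(self):
        self.events = []

    def record(self, action, detail=""):
        # Store wall-clock time so events can be replayed in order later.
        self.events.append((time.time(), action, detail))

    def transcript(self):
        # Render events as offsets from the first event.
        t0 = self.events[0][0] if self.events else 0.0
        return [f"+{t - t0:6.2f}s  {action}  {detail}"
                for t, action, detail in self.events]

log = EventLog()
log.record("open_menu", "File")
log.record("click", "Save As")
log.record("error", "filename left empty")
for line in log.transcript():
    print(line)
```

Real usability tools capture far richer data (video, screen recording, keystrokes), but even a simple log like this lets the evaluator walk back through a session with the user afterwards.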
Query techniques
Another set of evaluation techniques involves directly questioning the user about the interface. Query techniques can help you get more information about how a user sees a system. They exemplify the notion that the best way to learn how a system meets user requirements is to ‘ask the user’. They can be used in evaluation and to gather information about user needs and tasks in general. There are two main types of query technique:
- Interviews: The analyst questions each user one to one, using prepared questions about their experience with the design, to gather ideas and opinions.
- Questionnaires: Users are given a fixed set of questions about what they prefer and what they think of the design, from which the analyst draws conclusions.
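A widely used fixed questionnaire of this kind is the System Usability Scale (SUS): ten statements answered on a 1–5 agreement scale, alternating between positively and negatively worded items. Its standard scoring rule is easy to sketch (the sample responses below are hypothetical):

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded (contribute response - 1),
    even-numbered items are negatively worded (contribute 5 - response).
    The sum is scaled by 2.5 to give a score from 0 to 100.
    """
    if len(responses) != 10:
        raise ValueError("SUS needs exactly 10 responses")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical responses from one participant.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # → 85.0
```

Note that a SUS score is not a percentage; scores are usually interpreted against published norms (around 68 is often cited as average).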
Evaluation through monitoring physiological responses
One of the issues with most evaluation methodologies is that we rely on observation and users telling us what they’re doing and experiencing. What if we could directly measure these things? The use of what is commonly referred to as objective usability testing, or methods of measuring physiological elements of computer use, has recently risen in popularity. This could allow us to not only observe what users do when they engage with computers more clearly, but also to measure how they feel. To date, eye tracking and physiological measurement have received the greatest attention.
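To make the eye-tracking idea concrete: one basic metric is dwell time, the total time the gaze spends inside an area of interest (AOI) on screen. A toy sketch, with hypothetical gaze samples and AOI rectangles and an assumed fixed sampling rate:

```python
# Hypothetical gaze samples: (timestamp_s, x, y) at an assumed 50 Hz.
samples = [(0.00, 120, 80), (0.02, 122, 83), (0.04, 300, 400),
           (0.06, 305, 398), (0.08, 301, 402), (0.10, 125, 79)]

# Areas of interest as axis-aligned rectangles: (name, x0, y0, x1, y1).
aois = [("menu", 100, 50, 200, 120), ("editor", 250, 350, 400, 450)]

SAMPLE_DT = 0.02  # assumed sampling interval in seconds (50 Hz)

def dwell_times(samples, aois):
    """Total dwell time per AOI: each gaze sample falling inside a
    rectangle adds one sampling interval to that AOI's total."""
    totals = {name: 0.0 for name, *_ in aois}
    for _, x, y in samples:
        for name, x0, y0, x1, y1 in aois:
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += SAMPLE_DT
    return totals

print(dwell_times(samples, aois))
```

Real eye trackers report richer events (fixations, saccades, pupil size), but dwell time per AOI already gives an objective measure of where attention went that does not depend on the user’s self-report.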
Conclusion
These are some of the most well-known interactive evaluation methods for assessing your system design. Keep in mind, however, that designers and researchers frequently find themselves having to adapt these methodologies to fit the wide range of products that have entered the market since they were first established.
Reference: Alan Dix, Janet Finlay, Gregory D. Abowd, and Russell Beale. 2003. Human–Computer Interaction (3rd Edition). New York: Prentice-Hall.