Methodology for Usability Test of Personal Antiviruses (July 2012)
Introduction
The methodology and tools for this test were developed at the Department of Psychology of the Taganrog Technological Institute of Southern Federal University (SFU) and the Information Analytical Center of SFU. Two Candidates of Psychology took part in their creation, together with teachers and master's students of the Department who either hold two degrees (in technology and psychology) or major in engineering psychology.
Third-year students of the Department of Psychology, with an average age of 20, took part in the testing as users: six men and six women.
Four personal antivirus products of the Internet Security class took part in the testing: the products of the companies with the largest market shares according to an analysis of the Russian antivirus market for 2010-2012. We also decided to add the Internet Security version of Avast, one of the most popular antiviruses in Russia.
Versions with a Russian interface were tested to avoid the possible influence of a "language barrier" on the users taking part in the testing.
Table 1 lists the antivirus products that took part in the testing (the versions below were up to date as of the test start date, June 04, 2012).
Table 1. Tested antivirus products and their versions
Product | Version
Avast! Internet Security 7 | 7.0.1426
Dr.Web Security Space 7 | 7.0.0.10140
Eset Smart Security 5 | 5.2.9.12
Kaspersky Internet Security 2012 | 12.0.0.374
Norton Internet Security 2012 | 19.07.2015
Usability factors
An analysis of a large number of commonly used factors and metrics showed that five factors can be used for antivirus evaluation: the users' operation speed (further referred to as the work speed), the number of errors, users' learnability, satisfaction, and the visual attractiveness of the user interface (the technical aesthetics factor). Measuring these five factors gives a comprehensive usability assessment.
Diagram 1. Usability factors
Two methods are used for the factors assessment: usability testing and expert testing.
Twelve people took part in the usability testing (in experts' opinion, a group of 5 to 12 people is enough to find about 90% of the error situations that arise when working with an application). Five experts took part in the expert assessment, including two Candidates of Psychology and three master's students majoring in engineering psychology.
Let's discuss the content of each evaluation parameter and the method of its processing.
1. Users' operation speed
The user's overall operation speed can be evaluated with an expert assessment. The most widely accepted expert evaluation method is based on modeling the user's operations (the GOMS family: KLM, NGOMSL, CPM-GOMS, CMN-GOMS, etc.): the common operations of the application are selected, an algorithm is built for each operation, and the average execution time of each algorithm is estimated. During this estimation, each algorithm is broken down into elementary operations (pressing a button, moving the cursor across the screen, mentally planning the next action, etc.) whose average times have been measured on a large number of users. The time for the following 10 operations was estimated for the personal antiviruses (a minimal sketch of such a time estimate is given after the list):
- Scanning a separate directory containing malware samples.
- Launching a full scan of all drives and areas.
- Setting up scheduled scanning (every Thursday at 12:00).
- Launching an update.
- Restoring a file from Quarantine.
- Searching the help for information about Quarantine.
- Viewing the report on the last scan and detected threats.
- Firewall settings (adding Adobe Reader to the trusted applications).
- Adding an application to the scan exclusions.
- Setting the reaction to a detected threat ("Add to Quarantine" or "Ask the user").
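As an illustration of how such model-based time estimates work, below is a minimal KLM-style sketch in Python. The operator times are the commonly cited Keystroke-Level Model averages, and the breakdown of the "Update launch" operation is purely hypothetical; the experts' actual operator values and operation algorithms may differ.

```python
# Illustrative KLM-style estimate of one antivirus operation's execution time.
# Operator times (seconds) are the commonly cited KLM averages; the actual
# values and operation breakdowns used by the experts may differ.
KLM_OPERATORS = {
    "K": 0.28,   # keystroke or key press
    "P": 1.10,   # point the cursor to a target on the screen
    "B": 0.10,   # mouse button press or release
    "M": 1.35,   # mental preparation / decision
}

def estimate_time(operator_sequence):
    """Sum the average times of the elementary operators in a sequence."""
    return sum(KLM_OPERATORS[op] for op in operator_sequence)

# Hypothetical breakdown of "Update launch": think, point to the Update
# button, click (press + release), think, point to Confirm, click.
update_launch = ["M", "P", "B", "B", "M", "P", "B", "B"]
print(f"Estimated time: {estimate_time(update_launch):.2f} s")
```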
2. Number of users' errors
The number of errors is determined by usability testing performed with a group of users. During this test, the users execute the same operations whose execution time was estimated by the experts. The users' actions are recorded as screen video, and the operation algorithms they actually executed are then reconstructed. Failure to complete an operation and deviations from the expected operation algorithm are counted as errors by the expert. The test result is the number of errors for all operations in total and for each separate operation. The errors are then grouped by type (motor errors, misprints, misunderstanding of the application logic) or by their consequences (critical, non-critical).
3. Users' learnability
Learnability depends on the quality of the educational means and on an integral, consistent information model of the application. Evaluation of the educational means covers the quality and completeness of the software documentation (descriptions of typical operations, settings, error situations, etc.), the presence of learning aids in the user interface (search, context help) and additional support (specialized forums, teaching materials on the companies' official websites, etc.).
The information model is understood as the structure of the software that carries information about its state and functioning. When evaluating the information model, we analyze the following factors:
- Structuring of the user interface, settings, documentation and software reports.
- Availability, at any moment, of information in the user interface about the results of users' operations, the software's reaction to users' operations and the application state.
- Symmetry of application elements that perform common functions (such as decision-making buttons or buttons for switching application windows).
- No duplication of functions, settings, application windows and controls across different application components.
- No non-functioning application windows, dead links and incorrect terminology.
The teaching materials and the information model are assessed with questionnaires completed by the experts after their work with the application.
4. Satisfaction
Satisfaction is estimated on the basis of a user survey conducted after the work with the application.
The survey is aimed at finding out how comfortable the users felt while working with the application. The questionnaire included general questions ("Was the application easy to work with?") and special questions about the usability of particular application tools, including the antivirus components, help, reports, settings, etc.
The users were also asked open questions ("Describe in free form the problems you encountered while working with the application", "What would you suggest to improve the application?").
5. Technical aesthetics
Technical aesthetics is estimated in a combined way: by an expert assessment and by a user survey conducted after the work with the application.
This factor covers the fonts, colors, animation, sound signals, pictograms, interface elements and their grouping used in the user interface. The expert assessment reveals mistakes in the visual and sound design of the interface, while the user survey provides information about the attractiveness, intelligibility and usability of the implemented solutions. The criteria for the expert assessment of technical aesthetics rely on the requirements of the usability GOSTs for user interface design and assessment; for example, the expert assessment of font readability relies on GOST R ISO 9355-2-2009 for the supported screen resolutions. The users' assessment of technical aesthetics is collected through questions such as "Were all the text labels in the user interface easy to read?" and "Were all the icons used in the software intelligible?".
Testing conditions and procedure
A group of users and a group of experts were formed before the testing. Testing of each product consisted of two stages:
- Usability testing;
- Expert testing.
To exclude random factors during the usability and expert testing, the following requirements were observed: the testing was held in the same room, at the same time of day and on identical PCs. To reduce the social desirability effect, all instructions were handed out in print, the testing procedure was shown on a plasma display, and the instructor only distributed the materials before the test began and announced the time allotted for each stage.
Usability testing of every product was held with the group of users and took 100 minutes: the users spent 10 minutes reading the instructions and completing the questionnaires, 60 minutes familiarizing themselves with the product, and 30 minutes executing the operations with the product. After the testing was finished, the users completed the questionnaires on their satisfaction with the product and on its visual attractiveness.
To prevent the grades for different products from influencing one another, only one product was tested per day.
The same experts provided independent evaluation for every product.
Each expert spent three hours familiarizing themselves with each product and then completed questionnaires to assess the teaching materials, the information model and the technical aesthetics.
Three separate experts estimated the users' operation speed with the GOMS methods, analyzed the video recordings of all the users' operations and counted their errors.
Results processing and analysis
All the obtained data was normalized to the range of 0 to 100%. Processing differs depending on the type of measured parameter:
1. Common operations time measuring
The minimum times for each operation are summed and taken as the working time of an "ideal product" (100%). Each unit of time by which a product's total operation time exceeds the time of the ideal product reduces its final grade by 0.5%.
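A minimal sketch of this scoring rule, assuming the penalty unit is one second of total operation time and using made-up per-operation times for two hypothetical products:

```python
# Sketch of the operation-speed scoring rule described above.
# For each operation the minimum time across products is taken; their sum is
# the "ideal product" (100%). Each extra unit of total time (assumed here to
# be one second) costs a product 0.5% of the final grade. Times are illustrative.
times = {                      # seconds per operation, per product (hypothetical)
    "Product A": [12, 30, 25],
    "Product B": [15, 28, 40],
}

ideal_total = sum(min(t) for t in zip(*times.values()))

def speed_score(product):
    extra = sum(times[product]) - ideal_total
    return max(0.0, 100.0 - 0.5 * extra)

for name in times:
    print(name, speed_score(name))
```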
2. Measuring the number of errors
Zero errors is taken as 100%. The number of errors over all operations in the group of users is then summed. Every critical error reduces the final grade by 1%, and every non-critical error by 0.5%.
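A short sketch of this error-score arithmetic with hypothetical error counts:

```python
# Sketch of the error-score rule: start from 100% and subtract 1% per
# critical error and 0.5% per non-critical error, summed over all users
# and all operations. The counts below are hypothetical.
critical_errors = 4
non_critical_errors = 11
error_score = max(0.0, 100.0 - 1.0 * critical_errors - 0.5 * non_critical_errors)
print(error_score)   # 90.5
```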
3. Learnability
We separately sum the results of two questionnaires: the assessment of the teaching materials and the assessment of the application information model. The obtained data is then averaged.
The questionnaires are processed as follows: the value 100/n is awarded for every fully met requirement, where n is the number of requirements in the questionnaire, and 100/(2*n) is awarded for a partially met requirement. Thus, the grade for every questionnaire is normalized to the range of 0 to 100.
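A minimal sketch of this questionnaire scoring, with a hypothetical set of answers:

```python
# Sketch of the expert-questionnaire scoring: each fully met requirement adds
# 100/n points and each partially met one adds 100/(2*n), where n is the
# number of requirements in the questionnaire. Answers here are hypothetical.
answers = ["met", "partial", "met", "not met", "met"]   # n = 5 requirements
n = len(answers)
score = sum(100 / n if a == "met" else 100 / (2 * n) if a == "partial" else 0
            for a in answers)
print(score)   # 70.0
```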
4. Satisfaction
Processing each user's questionnaire yields a value in the range of 0 to 50 (10 questions with a 5-grade answer scale each). This value is normalized for every user, and the average grade over the group of users is then calculated; this average is the final value of the factor.
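A short sketch of the satisfaction scoring, using made-up raw questionnaire totals:

```python
# Sketch of the satisfaction scoring: each user's questionnaire yields 0..50
# points (10 questions, 5-point scale), which is normalized to 0..100 and
# averaged over the group. The raw totals below are hypothetical.
raw_totals = [38, 42, 29, 45, 40, 33]          # one value per user, 0..50
satisfaction = sum(t / 50 * 100 for t in raw_totals) / len(raw_totals)
print(round(satisfaction, 1))
```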
5. Technical aesthetics.
The results of two questionnaires are calculated separately: the users' and the experts' assessments of the user interface. The users' questionnaire is processed by analogy with the satisfaction questionnaire, and the experts' questionnaire by analogy with the learnability questionnaires.
The data obtained from the users' and experts' questionnaires are averaged.
After the data for every assessed factor has been processed, we get five normalized values. The final value is calculated as follows:
E = Σ (Ai * Bi),   (1)
where E is the usability value;
Ai is the weight ratio of the i-th assessed factor;
Bi is the value of the i-th assessed factor.
Generally, the weight ratios are assumed to be equal, 0.2 each. But many experts note that for any given application type two criteria are usually the most important while the rest matter less. A poll among the teaching staff and master's students of the Department of Psychology of TTI SFU allowed us to rank the importance of the applied factors for personal antiviruses and to assign differentiated weight ratios (a sketch of the final calculation is given after the list):
- Operation speed: 0.15,
- Number of errors: 0.15,
- Learnability: 0.2,
- Satisfaction: 0.25,
- Technical aesthetics: 0.25.
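A minimal sketch of formula (1) with the weight ratios listed above and hypothetical factor scores:

```python
# Sketch of formula (1): the final usability value E is the weighted sum of
# the five normalized factor scores, using the weight ratios listed above.
# The factor scores themselves are hypothetical.
weights = {
    "operation_speed":      0.15,
    "number_of_errors":     0.15,
    "learnability":         0.20,
    "satisfaction":         0.25,
    "technical_aesthetics": 0.25,
}
scores = {                       # normalized 0..100 values (illustrative)
    "operation_speed":      91.0,
    "number_of_errors":     90.5,
    "learnability":         70.0,
    "satisfaction":         75.7,
    "technical_aesthetics": 80.0,
}
E = sum(weights[k] * scores[k] for k in weights)
print(round(E, 1))
```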
Bibliography
- V.P. Zinchenko, V.M. Munipov. Basics of Ergonomics. Moscow: Logos, 2001.
- V.V. Golovach. User Interface Design. 2001. 141 p.
- GOST R IEC 60073-2000. Basic and safety principles for man-machine interface, marking and identification. Coding principles for indication devices and actuators.
- GOST R ISO 9355-1-2009. Ergonomic requirements for the design of displays and control actuators. Part 1. Human interactions with displays and control actuators.
- GOST R ISO 9355-2-2009. Ergonomic requirements for the design of displays and control actuators. Part 2. Displays.
- GOST R ISO 9241-11-2010. Ergonomic requirements for office work with visual display terminals (VDTs). Part 11. Guidance on usability.
- GOST R ISO 9241-110-2009. Ergonomics of human-system interaction. Part 110. Dialogue principles.
- GOST R ISO 14915-1-2010. Ergonomics of multimedia user interfaces. Part 1. Design principles and framework.
- GOST R ISO 10075-2-2009. Ergonomic principles of ensuring the adequacy of mental workload. Part 2. Design principles.
- J. Raskin. Interface: New Directions for Designing Interactive Systems. Translated from English. St. Petersburg: Symbol-Plus, 2004. 272 p.
- I.A. Ponomaryov. Methods of User Interface Quality Assessment. http://it-claim.ru/Library/Books/ITS/wwwbook/ist6/ponomarev2/ponomarev2.htm
- T. Mandel. Interface Design. Moscow: DMK-Press, 2005.
- D. Kieras. A Guide to GOMS Model Usability Evaluation using GOMSL and GLEAN3. University of Michigan (ftp.eecs.umich.edu/people/kieras), 2002.