Evaluating Random Input Generation Strategies for Accessibility Testing

Author(s):
Santos, Diogo Oliveira ; Durelli, Vinicius H. S. ; Endo, Andre Takeshi ; Eler, Marcelo Medeiros ; Filipe, J ; Smialek, M ; Brodsky, A ; Hammoudi, S
Total Authors: 8
Document type: Journal article
Source: PROCEEDINGS OF THE 23RD INTERNATIONAL CONFERENCE ON ENTERPRISE INFORMATION SYSTEMS (ICEIS 2021), VOL 1; v. N/A, p. 10-pg., 2021-01-01.
Abstract

Mobile accessibility testing is the process of checking whether a mobile app can be perceived, understood, and operated by a wide range of users. Accessibility testing tools can support this activity by automatically generating user inputs to navigate through the app under evaluation and running accessibility checks on each newly discovered screen. The algorithm that determines which user input will be generated to simulate the user interaction plays a pivotal role in such an approach. State-of-the-art approaches usually employ a Uniform Random algorithm. In this paper, we compared the results of the default algorithm implemented by a state-of-the-art tool with four different biased random strategies, taking into account the number of activities executed, screen states traversed, and accessibility violations revealed. Our results show that the default algorithm had the worst performance, while the algorithm biased toward different weights assigned to specific actions and widgets had the best performance. (AU)
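The contrast between uniform and biased random input generation described in the abstract can be illustrated with a minimal sketch. The action names and weight values below are hypothetical placeholders for illustration only; they are not taken from the paper or from any specific tool.

```python
import random

# Illustrative weights: favor actions more likely to reach new screens,
# de-emphasize navigation-only events. These values are assumptions.
ACTION_WEIGHTS = {
    "tap": 5,
    "long_press": 2,
    "swipe": 2,
    "back": 1,
}

def uniform_pick(available_actions, rng=random):
    """Baseline strategy: every available action is equally likely."""
    return rng.choice(available_actions)

def biased_pick(available_actions, rng=random):
    """Biased strategy: draw one action weighted by its configured
    weight (defaulting to 1 for unknown actions)."""
    weights = [ACTION_WEIGHTS.get(a, 1) for a in available_actions]
    return rng.choices(available_actions, weights=weights, k=1)[0]
```

Over many interactions, the biased strategy selects high-weight actions proportionally more often, which is the mechanism the paper's best-performing strategy relies on.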

FAPESP's process: 18/12287-6 - Automated accessibility testing of Android mobile APPs
Grantee: Marcelo Medeiros Eler
Support Opportunities: Regular Research Grants