Methods for evaluating software accessibility
- URL: http://arxiv.org/abs/2509.23469v1
- Date: Sat, 27 Sep 2025 19:46:10 GMT
- Title: Methods for evaluating software accessibility
- Authors: Mykola Kuz, Ivan Yaremiy, Hanna Yaremii, Mykola Pikuliak, Ihor Lazarovych, Mykola Kozlenko, Denys Vekeryk
- Abstract summary: A more detailed and practically oriented accessibility assessment methodology has been proposed. An analysis of the accessibility of the main pages of Vasyl Stefanyk Precarpathian National University's website was conducted.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The development and enhancement of methods for evaluating software accessibility is a relevant challenge in modern software engineering, as ensuring equal access to digital services is a key factor in improving their efficiency and inclusivity. The increasing digitalization of society necessitates the creation of software that complies with international accessibility standards such as ISO/IEC 25023 and WCAG. Adhering to these standards helps eliminate barriers to software use for individuals with diverse physical, sensory, and cognitive needs. Despite advancements in regulatory frameworks, existing accessibility evaluation methodologies are often generalized and fail to account for the specific needs of different user categories or the unique ways they interact with digital systems. This highlights the need for the development of new, more detailed methods for defining metrics that influence the quality of user interaction with software products. The aim of this work is to build a classification and mathematical model and to develop accessibility assessment methods for software based on it. A method for assessing the quality subcharacteristic "Accessibility", which is part of the "Usability" quality characteristic, has been developed. This enabled the analysis of a website's inclusivity for individuals with visual impairments, and the formulation of specific recommendations for further improvements, which is a crucial step toward creating an inclusive digital environment. Compared with standardized approaches, a more detailed and practically oriented accessibility assessment methodology has been proposed. Using this methodology, an analysis of the accessibility of the main pages of Vasyl Stefanyk Precarpathian National University's website was conducted, and improvements were suggested to enhance its inclusivity.
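The abstract does not spell out the paper's metric, but ISO/IEC 25023-style accessibility measures are typically ratios of the form "elements satisfying a requirement / elements subject to it". As a minimal illustrative sketch (a hypothetical check, not the authors' actual methodology), the following audits one common WCAG criterion, non-empty `alt` text on images, using only Python's standard-library HTML parser:

```python
from html.parser import HTMLParser


class AltTextAudit(HTMLParser):
    """Counts <img> elements and how many provide non-empty alt text,
    yielding a ratio-style accessibility measure in the spirit of
    ISO/IEC 25023 quality measures."""

    def __init__(self):
        super().__init__()
        self.total_images = 0
        self.images_with_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total_images += 1
            alt = dict(attrs).get("alt") or ""
            if alt.strip():
                self.images_with_alt += 1

    def score(self) -> float:
        """Ratio in [0, 1]; defined as 1.0 when no images are present."""
        if self.total_images == 0:
            return 1.0
        return self.images_with_alt / self.total_images


# Toy page: two of three images carry alt text.
page = """
<html><body>
  <img src="logo.png" alt="University logo">
  <img src="banner.png">
  <img src="map.png" alt="Campus map">
</body></html>
"""

audit = AltTextAudit()
audit.feed(page)
print(f"{audit.images_with_alt}/{audit.total_images} images have alt text, "
      f"score = {audit.score():.2f}")  # 2/3 images have alt text, score = 0.67
```

A full methodology would aggregate many such per-criterion ratios (contrast, labels, keyboard focus, and so on) into the "Accessibility" subcharacteristic; this sketch shows only the shape of a single metric.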
Related papers
- Aspect-Oriented Programming in Secure Software Development: A Case Study of Security Aspects in Web Applications [0.0]
This study investigates the role of Aspect-Oriented Programming (AOP) in enhancing secure software development. We compare AOP-based implementations of security features including authentication, authorization, input validation, encryption, logging, and session management. The findings demonstrate that AOP enhances modularity, reusability, and maintainability of security mechanisms, while introducing only minimal performance overhead.
arXiv Detail & Related papers (2025-09-09T07:12:55Z) - Medical Reasoning in the Era of LLMs: A Systematic Review of Enhancement Techniques and Applications [59.721265428780946]
Large Language Models (LLMs) in medicine have enabled impressive capabilities, yet a critical gap remains in their ability to perform systematic, transparent, and verifiable reasoning. This paper provides the first systematic review of this emerging field. We propose a taxonomy of reasoning enhancement techniques, categorized into training-time strategies and test-time mechanisms.
arXiv Detail & Related papers (2025-08-01T14:41:31Z) - Developing and Maintaining an Open-Source Repository of AI Evaluations: Challenges and Insights [44.99833362998488]
This paper presents practical insights from eight months of maintaining $_evals$, an open-source repository of 70+ community-contributed AI evaluations. We identify key challenges in implementing and maintaining AI evaluations and develop solutions.
arXiv Detail & Related papers (2025-07-09T14:30:45Z) - Rethinking Machine Unlearning in Image Generation Models [59.697750585491264]
CatIGMU is a novel hierarchical task categorization framework. EvalIGMU is a comprehensive evaluation framework. We construct DataIGM, a high-quality unlearning dataset.
arXiv Detail & Related papers (2025-06-03T11:25:14Z) - An Online Integrated Development Environment for Automated Programming Assessment Systems [4.618037115403291]
This research contributes to the field of programming education by extracting and defining requirements for an online IDE. The usability of the new online IDE was assessed using the Technology Acceptance Model (TAM), gathering feedback from 27 first-year students.
arXiv Detail & Related papers (2025-03-17T12:50:51Z) - Pessimistic Evaluation [58.736490198613154]
We argue that evaluating information access systems assumes utilitarian values not aligned with traditions of information access based on equal access.
We advocate for pessimistic evaluation of information access systems focusing on worst case utility.
arXiv Detail & Related papers (2024-10-17T15:40:09Z) - How fair are we? From conceptualization to automated assessment of fairness definitions [6.741000368514124]
MODNESS is a model-driven approach for user-defined fairness concepts in software systems. It generates the source code to implement fair assessment based on these custom definitions. Our findings reveal that most of the current approaches do not support user-defined fairness concepts.
arXiv Detail & Related papers (2024-04-15T16:46:17Z) - Charting a Path to Efficient Onboarding: The Role of Software Visualization [49.1574468325115]
The present study aims to explore the familiarity of managers, leaders, and developers with software visualization tools.
This approach incorporated quantitative and qualitative analyses of data collected from practitioners using questionnaires and semi-structured interviews.
arXiv Detail & Related papers (2024-01-17T21:30:45Z) - Interactive Multi-Objective Evolutionary Optimization of Software Architectures [0.0]
Putting the human in the loop brings new challenges to the search-based software engineering field.
This paper explores how the interactive evolutionary computation can serve as a basis for integrating the human's judgment into the search process.
arXiv Detail & Related papers (2024-01-08T19:15:40Z) - Position: AI Evaluation Should Learn from How We Test Humans [65.36614996495983]
We argue that psychometrics, a theory originating in the 20th century for human assessment, could be a powerful solution to the challenges in today's AI evaluations.
arXiv Detail & Related papers (2023-06-18T09:54:33Z) - A Domain-Agnostic Approach for Characterization of Lifelong Learning Systems [128.63953314853327]
"Lifelong Learning" systems are capable of 1) Continuous Learning, 2) Transfer and Adaptation, and 3) Scalability.
We show that this suite of metrics can inform the development of varied and complex Lifelong Learning systems.
arXiv Detail & Related papers (2023-01-18T21:58:54Z) - Modelling Assessment Rubrics through Bayesian Networks: a Pragmatic Approach [40.06500618820166]
This paper presents an approach to deriving a learner model directly from an assessment rubric.
We illustrate how the approach can be applied to automatize the human assessment of an activity developed for testing computational thinking skills.
arXiv Detail & Related papers (2022-09-07T10:09:12Z) - An Extensible Benchmark Suite for Learning to Simulate Physical Systems [60.249111272844374]
We introduce a set of benchmark problems to take a step towards unified benchmarks and evaluation protocols.
We propose four representative physical systems, as well as a collection of both widely used classical time-based and representative data-driven methods.
arXiv Detail & Related papers (2021-08-09T17:39:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.