Improving the Accessibility of Dating Websites for Individuals with Visual Impairments
- URL: http://arxiv.org/abs/2410.03695v1
- Date: Mon, 23 Sep 2024 15:14:55 GMT
- Title: Improving the Accessibility of Dating Websites for Individuals with Visual Impairments
- Authors: Gyanendra Shrestha, Soumya Tejaswi Vadlamani
- Abstract summary: Because of their limited accessibility, dating services can be difficult and frustrating for people with visual impairments to use.
An existing implementation can automatically recognize facial expression, age, gender, the presence of children, and other common objects in a profile photo on a dating platform.
The goal of this project is to incorporate additional features (presence of common pets, indoor vs. outdoor image) to further enhance the capability of the existing system.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: People now frequently meet and develop relationships through online dating. Yet, because of their limited accessibility, dating services can be difficult and frustrating for people with visual impairments to use. The research issue matters because dating websites are increasingly common and strongly shape how people establish romantic connections. Many dating services are not designed with the requirements of people with visual impairments in mind, which makes it hard for them to use these services and build lasting relationships. Making dating websites more accessible can encourage people with visual impairments to participate more fully in online dating and may improve the success of their romantic relationships. An existing implementation can automatically recognize facial expression, age, gender, the presence of children, and other common objects in a profile photo on a dating platform. The goal of this project is to incorporate additional features (presence of common pets, indoor vs. outdoor image) to further enhance the capability of the existing system, and to come up with and test viable solutions to the accessibility issues that people with visual impairments face when using dating websites.
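The abstract does not specify an implementation, but a minimal sketch of the proposed pet-detection feature could look like the following. The ResNet-18 backbone, the ImageNet class-index ranges, and the describe_pets helper are illustrative assumptions, not the authors' system; they simply show how a pretrained classifier could feed a screen-reader-friendly description of a profile photo.

```python
# Minimal sketch (not the paper's implementation) of adding a "presence of
# common pets" attribute to an accessibility pipeline for profile photos.
# Assumes a pretrained ImageNet classifier as a stand-in for a dedicated
# pet detector; class ranges follow ImageNet-1k labels (151-268 dogs,
# 281-285 cats).
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.eval()

PET_CLASS_IDS = set(range(151, 269)) | set(range(281, 286))  # dog and cat breeds

def describe_pets(image_path: str, top_k: int = 5) -> str:
    """Return a short, screen-reader-friendly note about pets in the photo."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    top = torch.topk(probs, top_k).indices.tolist()
    if any(idx in PET_CLASS_IDS for idx in top):
        return "The profile photo appears to include a pet (dog or cat)."
    return "No common pet was detected in the profile photo."
```

The indoor vs. outdoor attribute would need a scene-classification model (for example, one trained on Places365); it is omitted from the sketch because such weights are not bundled with torchvision.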
Related papers
- 10 Questions to Fall in Love with ChatGPT: An Experimental Study on Interpersonal Closeness with Large Language Models (LLMs) [0.0]
This study explores how individuals experience closeness and romantic interest in dating profiles, depending on whether they believe the profiles are human- or AI-generated.
Surprisingly, perceived source (human or AI) had no significant impact on closeness or romantic interest.
arXiv Detail & Related papers (2025-03-24T13:00:36Z)
- Honey Trap or Romantic Utopia: A Case Study of Final Fantasy XIV Players PII Disclosure in Intimate Partner-Seeking Posts [2.7624021966289605]
We conducted a case study of Final Fantasy XIV (FFXIV) players' intimate partner-seeking posts on social media.
Our findings reveal that players disclose sensitive personal information and share vulnerabilities to establish trust.
We propose design implications for reducing privacy and safety risks and fostering healthier social interactions in virtual worlds.
arXiv Detail & Related papers (2025-03-12T20:53:06Z)
- Exploring Older Adults' Perceptions and Experiences with Online Dating [2.4058538793689497]
This study investigates older adults' security and privacy concerns and the significance of design elements and accessibility, and identifies areas needing improvement.
Our findings reveal challenges such as deceptive practices, concerns over disclosing sensitive information, and the need for more informative visualization of match requests.
We offer recommendations for enhanced identity verification, inclusive privacy controls by app developers, and increased digital literacy efforts to enable older adults to navigate these platforms safely and confidently.
arXiv Detail & Related papers (2024-10-14T18:41:04Z)
- Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
arXiv Detail & Related papers (2024-09-13T18:28:12Z)
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z)
- Seeking Soulmate via Voice: Understanding Promises and Challenges of Online Synchronized Voice-Based Mobile Dating [25.30209978159759]
We explore a non-traditional voice-based dating app called "Soul".
Unlike traditional platforms that rely heavily on profile information, Soul facilitates user interactions through voice-based communication.
Our findings indicate that the role of voice as a moderator influences impression management and shapes perceptions between the sender and the receiver of the voice.
arXiv Detail & Related papers (2024-02-29T16:30:07Z)
- A Design Guideline to Overcome Web Accessibility Issues Challenged by Visually Impaired Community in Sri Lanka [0.0]
Visually impaired communities are among the groups most hindered in accessing web content.
Five problems dominate: access limited by the impairment, usability issues caused by poor design, the unavailability of applications friendly to visually impaired users, lack of communication, and web navigation issues.
arXiv Detail & Related papers (2023-04-14T05:12:13Z)
- Automatic User Profiling in Darknet Markets: a Scalability Study [15.83443291553249]
This study aims to understand the reliability and limitations of current computational stylometry approaches.
Because no ground truth and no validated criminal data from historic investigations are available for validation, we collected new data from clearweb forums.
arXiv Detail & Related papers (2022-03-24T16:54:59Z)
- Onfocus Detection: Identifying Individual-Camera Eye Contact from Unconstrained Images [81.64699115587167]
Onfocus detection aims at identifying whether the focus of the individual captured by a camera is on the camera or not.
We build a large-scale onfocus detection dataset named OnFocus Detection In the Wild (OFDIW).
We propose a novel end-to-end deep model, i.e., the eye-context interaction inferring network (ECIIN) for onfocus detection.
arXiv Detail & Related papers (2021-03-29T03:29:09Z)
- Dual Side Deep Context-aware Modulation for Social Recommendation [50.59008227281762]
We propose a novel graph neural network to model the social relation and collaborative relation.
On top of high-order relations, a dual side deep context-aware modulation is introduced to capture the friends' information and item attraction.
arXiv Detail & Related papers (2021-03-16T11:08:30Z)
- An Agent-based Model to Evaluate Interventions on Online Dating Platforms to Decrease Racial Homogamy [2.69180747382622]
Empirical work is critical to addressing questions about how dating-platform interventions affect racial homogamy.
To help focus and inform this empirical work, we propose an agent-based modeling (ABM) approach.
arXiv Detail & Related papers (2021-03-04T21:02:09Z)
- Bringing Cognitive Augmentation to Web Browsing Accessibility [69.62988485669146]
We explore opportunities brought by cognitive augmentation to provide a more natural and accessible web browsing experience.
We develop a conceptual framework for supporting the conversational web browsing needs of blind and visually impaired people (BVIP).
We describe our early work and a prototype that considers structural and content features only.
arXiv Detail & Related papers (2020-12-07T14:40:52Z)
- Learning Preference-Based Similarities from Face Images using Siamese Multi-Task CNNs [78.24964622317633]
A key challenge for online dating platforms is determining suitable matches for their users.
Deep learning approaches have shown that a variety of properties can be predicted from human faces to some degree.
We investigate the feasibility of bridging image-based matching and matching with personal interests, preferences, and attitude.
arXiv Detail & Related papers (2020-01-25T23:08:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.