User Interaction Analysis through Contrasting Websites Experience
- URL: http://arxiv.org/abs/2201.03638v2
- Date: Wed, 12 Jan 2022 16:48:20 GMT
- Title: User Interaction Analysis through Contrasting Websites Experience
- Authors: Decky Aspandi, Sarah Doosdal, Victor Ülger, Lukas Gillich, Steffen
Staab
- Abstract summary: We perform a quantitative analysis of the usability of websites based on their usage and relevance.
We do this by reporting user interactions based on user subjective perceptions, eye-tracking data and facial expressions.
In general, we found that the user interaction parameters are substantially different across website sets.
- Score: 4.14955672190455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The current advance of the internet allows rapid dissemination of
information, accelerating progress across a wide spectrum of society. This has
been done mainly through the use of website interfaces with inherently unique
human interactions. In this regard, usability analysis becomes central to
improving these human interactions. However, this analysis has not yet been
quantitatively evaluated through user perception during interaction,
especially when dealing with a wide range of tasks. In this study, we perform
a quantitative analysis of the usability of websites based on their usage and
relevance. We do this by reporting user interactions based on user subjective
perceptions, eye-tracking data and facial expressions, drawing on data
collected from two different sets of websites. In general, we found that the
user interaction parameters are substantially different across website sets,
with a degree of relation to perceived user emotions during interactions.
Related papers
- Unveiling the Impact of Multi-Modal Interactions on User Engagement: A Comprehensive Evaluation in AI-driven Conversations [17.409790984399052]
This paper explores the impact of multi-modal interactions, which incorporate images and audio alongside text, on user engagement.
Our findings reveal a significant enhancement in user engagement with multi-modal interactions compared to text-only dialogues.
Results suggest that multi-modal interactions optimize cognitive processing and facilitate richer information comprehension.
arXiv Detail & Related papers (2024-06-21T09:26:55Z) - Inter-X: Towards Versatile Human-Human Interaction Analysis [100.254438708001]
We propose Inter-X, a dataset with accurate body movements and diverse interaction patterns.
The dataset includes 11K interaction sequences and more than 8.1M frames.
We also equip Inter-X with versatile annotations of more than 34K fine-grained human part-level textual descriptions.
arXiv Detail & Related papers (2023-12-26T13:36:05Z) - Enhancing HOI Detection with Contextual Cues from Large Vision-Language Models [56.257840490146]
ConCue is a novel approach for improving visual feature extraction in HOI detection.
We develop a transformer-based feature extraction module with a multi-tower architecture that integrates contextual cues into both instance and interaction detectors.
arXiv Detail & Related papers (2023-11-26T09:11:32Z) - Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A
Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue, computational method, interaction environment, and sensing approach are, respectively, speaking activity, support vector machines, meetings composed of 3-4 persons, and setups equipped with microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z) - Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z) - Spatio-Temporal Interaction Graph Parsing Networks for Human-Object
Interaction Recognition [55.7731053128204]
In a given video-based Human-Object Interaction scene, modeling the temporal relationship between humans and objects is an important cue to understanding the contextual information presented in the video.
With effective spatio-temporal relationship modeling, it is possible not only to uncover contextual information in each frame but also to directly capture inter-time dependencies.
Full use of appearance features, spatial location and semantic information is also key to improving video-based Human-Object Interaction recognition performance.
arXiv Detail & Related papers (2021-08-19T11:57:27Z) - Analysis of the Visitor Data of a Higher Education Institution Website [0.0]
The interaction of the website with users, search engines, and other devices has to be examined by experts.
The study includes a wide range of examinations and data, important findings from traffic analysis to development suggestions.
arXiv Detail & Related papers (2021-07-10T21:54:42Z) - Improving Cyberbully Detection with User Interaction [34.956581421295]
We propose a principled graph-based approach for modeling the temporal dynamics and topic coherence throughout user interactions.
We empirically evaluate the effectiveness of our approach with the tasks of session-level bullying detection and comment-level case study.
arXiv Detail & Related papers (2020-11-01T08:47:33Z) - Learning Human-Object Interaction Detection using Interaction Points [140.0200950601552]
We propose a novel fully-convolutional approach that directly detects the interactions between human-object pairs.
Our network predicts interaction points, which directly localize and classify the interaction.
Experiments are performed on two popular benchmarks: V-COCO and HICO-DET.
arXiv Detail & Related papers (2020-03-31T08:42:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.