Emotion Based Prediction in the Context of Optimized Trajectory Planning
for Immersive Learning
- URL: http://arxiv.org/abs/2312.11576v2
- Date: Wed, 28 Feb 2024 09:47:26 GMT
- Title: Emotion Based Prediction in the Context of Optimized Trajectory Planning
for Immersive Learning
- Authors: Akey Sungheetha, Rajesh Sharma R, Chinnaiyan R
- Abstract summary: In the virtual elements of immersive learning, the use of Google Expedition and touch-screen-based emotion detection is examined.
Pedagogical application, affordances, and cognitive load are the corresponding measures that are involved.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the virtual elements of immersive learning, the use of Google Expedition
and touch-screen-based emotion detection is examined. The objective is to investigate
possible ways to combine these technologies to enhance virtual learning
environments and learners' emotional engagement. Pedagogical application,
affordances, and cognitive load are the corresponding measures involved.
Through this technology-driven work, students will gain insight into why their
post-assessment Prediction Systems scores are significantly higher than their
pre-assessment scores. This suggests that including emotional elements in
immersive learning scenarios is effective. The results of this study may help
develop new strategies that leverage the features of immersive learning
technology in educational settings to improve virtual reality and augmented
reality experiences. Furthermore, the effectiveness of immersive learning
environments can be raised by utilizing magnetic, optical, or hybrid trackers
that considerably improve object tracking.
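The pre-assessment versus post-assessment comparison the abstract describes amounts to a paired test on the same learners' scores. A minimal sketch of such a paired t-test in plain Python follows; the score values are purely illustrative and are not data from the study.

```python
import math

# Hypothetical pre/post assessment scores for the same eight learners
# (illustrative numbers only; not data from the study).
pre = [62, 58, 71, 65, 60, 68, 55, 63]
post = [74, 69, 80, 72, 70, 79, 66, 75]

# Paired differences between each learner's post and pre scores.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
mean_d = sum(diffs) / n
# Sample variance of the differences (n - 1 in the denominator).
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
# Paired t-statistic: mean difference over its standard error.
t_stat = mean_d / math.sqrt(var_d / n)

print(f"mean improvement: {mean_d:.2f}, t = {t_stat:.2f}")
```

A large positive t-statistic here would correspond to the "significantly higher post-assessment scores" the abstract reports; in practice the statistic would be compared against a t-distribution with n - 1 degrees of freedom.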
Related papers
- Exploring Engagement and Perceived Learning Outcomes in an Immersive Flipped Learning Context [0.195804735329484]
The aim of this study was to explore the benefits and challenges of the immersive flipped learning approach in relation to students' online engagement and perceived learning outcomes.
The study revealed high levels of student engagement and perceived learning outcomes, although it also identified areas needing improvement.
The findings of this study can serve as a valuable resource for educators seeking to design engaging and effective remote learning experiences.
arXiv Detail & Related papers (2024-09-19T11:38:48Z)
- Emulating Brain-like Rapid Learning in Neuromorphic Edge Computing [3.735012564657653]
Digital neuromorphic technology simulates the neural and synaptic processes of the brain using two stages of learning.
We demonstrate our approach using event-driven vision sensor data and the Intel Loihi neuromorphic processor with its plasticity dynamics.
Our methodology can be deployed with arbitrary plasticity models and can be applied to situations demanding quick learning and adaptation at the edge.
arXiv Detail & Related papers (2024-08-28T13:51:52Z)
- Research on the Application of Computer Vision Based on Deep Learning in Autonomous Driving Technology [9.52658065214428]
This article analyzes in detail the application of deep learning in image recognition, real-time target tracking and classification, environment perception and decision support, and path planning and navigation.
The proposed system has an accuracy of over 98% in image recognition, target tracking and classification, and also demonstrates efficient performance and practicality.
arXiv Detail & Related papers (2024-06-01T16:41:24Z)
- Thelxinoë: Recognizing Human Emotions Using Pupillometry and Machine Learning [0.0]
This research contributes significantly to the Thelxinoë framework, aiming to enhance VR experiences by integrating multiple sensor data for realistic and emotionally resonant touch interactions.
Our findings open new avenues for developing more immersive and interactive VR environments, paving the way for future advancements in virtual touch technology.
arXiv Detail & Related papers (2024-03-27T21:14:17Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal, and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
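The core idea summarized above, using the expert's intervention signal itself as the reward, can be sketched in a few lines. The function and trajectory below are hypothetical illustrations of that idea, not code from the RLIF paper.

```python
# Hypothetical sketch: deriving rewards purely from intervention signals,
# as the RLIF summary describes (details here are illustrative).
def reward_from_intervention(intervened: bool) -> float:
    # The agent is penalized whenever the expert intervenes and receives
    # zero reward otherwise; no task-specific reward function is needed.
    return -1.0 if intervened else 0.0

# A toy trajectory of per-step intervention flags.
trajectory = [False, False, True, False, True]
rewards = [reward_from_intervention(i) for i in trajectory]
print(rewards)  # [0.0, 0.0, -1.0, 0.0, -1.0]
```

An off-policy RL algorithm trained on such rewards would learn to avoid states that trigger interventions, which is how the method can exceed a suboptimal expert.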
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- The Power of the Senses: Generalizable Manipulation from Vision and Touch through Masked Multimodal Learning [60.91637862768949]
We propose Masked Multimodal Learning (M3L) to fuse visual and tactile information in a reinforcement learning setting.
M3L learns a policy and visual-tactile representations based on masked autoencoding.
We evaluate M3L on three simulated environments with both visual and tactile observations.
arXiv Detail & Related papers (2023-11-02T01:33:00Z)
- Navigating to Objects in the Real World [76.1517654037993]
We present a large-scale empirical study of semantic visual navigation methods comparing methods from classical, modular, and end-to-end learning approaches.
We find that modular learning works well in the real world, attaining a 90% success rate.
In contrast, end-to-end learning does not, dropping from a 77% success rate in simulation to 23% in the real world due to a large image domain gap between simulation and reality.
arXiv Detail & Related papers (2022-12-02T01:10:47Z)
- A Systematic Review on Interactive Virtual Reality Laboratory [1.3999481573773072]
This study aims to survey the work done on delivering quality distance education using VR.
Adopting virtual reality in education can help students learn more effectively.
This highlights the importance of a significant expansion of VR use in learning.
arXiv Detail & Related papers (2022-03-26T07:16:01Z)
- A Differentiable Recipe for Learning Visual Non-Prehensile Planar Manipulation [63.1610540170754]
We focus on the problem of visual non-prehensile planar manipulation.
We propose a novel architecture that combines video decoding neural models with priors from contact mechanics.
We find that our modular and fully differentiable architecture performs better than learning-only methods on unseen objects and motions.
arXiv Detail & Related papers (2021-11-09T18:39:45Z)
- ViNG: Learning Open-World Navigation with Visual Goals [82.84193221280216]
We propose a learning-based navigation system for reaching visually indicated goals.
We show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning.
We demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection.
arXiv Detail & Related papers (2020-12-17T18:22:32Z)
- Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning [109.77163932886413]
We show how to adapt vision-based robotic manipulation policies to new variations by fine-tuning via off-policy reinforcement learning.
This adaptation uses less than 0.2% of the data necessary to learn the task from scratch.
We find that our approach of adapting pre-trained policies leads to substantial performance gains over the course of fine-tuning.
arXiv Detail & Related papers (2020-04-21T17:57:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.