Large Language Model-empowered multimodal strain sensory system for shape recognition, monitoring, and human interaction of tensegrity
- URL: http://arxiv.org/abs/2406.10264v1
- Date: Tue, 11 Jun 2024 06:26:04 GMT
- Title: Large Language Model-empowered multimodal strain sensory system for shape recognition, monitoring, and human interaction of tensegrity
- Authors: Zebing Mao, Ryota Kobayashi, Hiroyuki Nabae, Koichi Suzumori
- Abstract summary: A tensegrity-based system is a promising approach for dynamic exploration of uneven and unpredictable environments.
Here, we introduce a 6-strut tensegrity integrated with 24 multimodal strain sensors, leveraging both a deep learning model and large language models.
The tensegrity autonomously sends data to an iPhone for wireless monitoring and provides data analysis, explanation, prediction, and suggestions to humans.
- Score: 2.323663950503941
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A tensegrity-based system is a promising approach for dynamic exploration of uneven and unpredictable environments, particularly space exploration. However, implementing such systems presents challenges in terms of intelligent capabilities: state recognition, wireless monitoring, human interaction, and smart analysis and advisory functions. Here, we introduce a 6-strut tensegrity integrated with 24 multimodal strain sensors, leveraging both a deep learning model and large language models to realize a smart tensegrity. Using conductive flexible tendons assisted by a long short-term memory model, the tensegrity achieves self-shape reconstruction without external sensors. By integrating a Flask server and the gpt-3.5-turbo model, the tensegrity autonomously sends data to an iPhone for wireless monitoring and provides data analysis, explanation, prediction, and suggestions to humans for decision making. Finally, the human interaction system of the tensegrity helps humans obtain the necessary information about the tensegrity through natural language. Overall, this intelligent tensegrity-based system with self-sensing tendons showcases potential for future exploration, making it a versatile tool for real-world applications.
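The shape-reconstruction pipeline described in the abstract maps tendon-strain readings to the robot's geometry with an LSTM. The paper does not publish code, so the following is only a minimal sketch of what such a model could look like: the 24-sensor and 12-node dimensions follow the 6-strut design named in the abstract, while the layer sizes, window length, and all identifiers are illustrative assumptions.

```python
# Hypothetical sketch: an LSTM that maps sequences of 24 tendon-strain
# readings to the 3D positions of the 12 nodes of a 6-strut tensegrity.
# Layer sizes and names are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class ShapeLSTM(nn.Module):
    def __init__(self, n_sensors=24, hidden=128, n_nodes=12):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_nodes * 3)  # x, y, z per node

    def forward(self, strain_seq):
        # strain_seq: (batch, time, 24) normalized strain readings
        out, _ = self.lstm(strain_seq)
        coords = self.head(out[:, -1])      # use the last time step
        return coords.view(-1, 12, 3)       # (batch, node, xyz)

model = ShapeLSTM()
window = torch.randn(1, 50, 24)             # 50 time steps of sensor data
print(model(window).shape)                  # torch.Size([1, 12, 3])
```

In practice such a model would be trained on pairs of recorded strain windows and ground-truth node positions (e.g. from motion capture), which is how self-sensing tendons could replace external sensors at inference time.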
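Similarly, the wireless monitoring and advisory function combines a Flask server with gpt-3.5-turbo. A hypothetical endpoint along these lines is sketched below; the route name, prompt, and JSON payload are assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: a Flask endpoint that forwards the latest strain
# readings to gpt-3.5-turbo and returns the analysis to a phone client.
from flask import Flask, jsonify, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

@app.route("/analyze", methods=["POST"])
def analyze():
    readings = request.json["strain"]  # e.g. a list of 24 floats
    prompt = (
        "You monitor a 6-strut tensegrity robot with 24 strain sensors. "
        f"Current readings: {readings}. Explain the state, predict trends, "
        "and suggest actions."
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return jsonify({"analysis": reply.choices[0].message.content})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)  # reachable from the phone on the LAN
```

An iPhone client could then POST its latest readings to the /analyze route and display the returned natural-language analysis, matching the monitoring and suggestion workflow the abstract describes.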
Related papers
- Graph-Based Multi-Modal Sensor Fusion for Autonomous Driving [3.770103075126785]
We introduce a novel approach to multi-modal sensor fusion, focusing on developing a graph-based state representation.
We present a Sensor-Agnostic Graph-Aware Kalman Filter, the first online state estimation technique designed to fuse multi-modal graphs.
We validate the effectiveness of our proposed framework through extensive experiments conducted on both synthetic and real-world driving datasets.
arXiv Detail & Related papers (2024-11-06T06:58:17Z) - Towards Interpretable Visuo-Tactile Predictive Models for Soft Robot Interactions [2.4100803794273]
Successful integration of robotic agents into real-world situations hinges on their perception capabilities.
We build upon the fusion of various sensory modalities to probe the surroundings.
Deep learning applied to raw sensory modalities offers a viable option.
We will delve into the outlooks of the perception model and its implications for control purposes.
arXiv Detail & Related papers (2024-07-16T21:46:04Z) - Multi-modal perception for soft robotic interactions using generative models [2.4100803794273]
Perception is essential for the active interaction of physical agents with the external environment.
The integration of multiple sensory modalities, such as touch and vision, enhances this process.
This paper introduces a perception model that harmonizes data from diverse modalities to build a holistic state representation.
arXiv Detail & Related papers (2024-04-05T17:06:03Z) - LVLM-Interpret: An Interpretability Tool for Large Vision-Language Models [50.259006481656094]
We present a novel interactive application aimed towards understanding the internal mechanisms of large vision-language models.
Our interface is designed to enhance the interpretability of the image patches, which are instrumental in generating an answer.
We present a case study of how our application can aid in understanding failure mechanisms in a popular large multi-modal model: LLaVA.
arXiv Detail & Related papers (2024-04-03T23:57:34Z) - MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World [55.878173953175356]
We propose MultiPLY, a multisensory embodied large language model.
We first collect Multisensory Universe, a large-scale multisensory interaction dataset comprising 500k data points.
We demonstrate that MultiPLY outperforms baselines by a large margin through a diverse set of embodied tasks.
arXiv Detail & Related papers (2024-01-16T18:59:45Z) - Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z) - MultiIoT: Benchmarking Machine Learning for the Internet of Things [70.74131118309967]
The next generation of machine learning systems must be adept at perceiving and interacting with the physical world.
Sensory data from motion, thermal, geolocation, depth, wireless signals, video, and audio are increasingly used to model the states of physical environments.
Existing efforts are often specialized to a single sensory modality or prediction task.
This paper proposes MultiIoT, the most expansive and unified IoT benchmark to date, encompassing over 1.15 million samples from 12 modalities and 8 real-world tasks.
arXiv Detail & Related papers (2023-11-10T18:13:08Z) - Drive Anywhere: Generalizable End-to-end Autonomous Driving with Multi-modal Foundation Models [114.69732301904419]
We present an approach to apply end-to-end open-set (any environment/scene) autonomous driving that is capable of providing driving decisions from representations queryable by image and text.
Our approach demonstrates unparalleled results in diverse tests while achieving significantly greater robustness in out-of-distribution situations.
arXiv Detail & Related papers (2023-10-26T17:56:35Z) - Prompt-to-OS (P2OS): Revolutionizing Operating Systems and Human-Computer Interaction with Integrated AI Generative Models [10.892991111926573]
We present a paradigm for human-computer interaction that revolutionizes the traditional notion of an operating system.
Within this innovative framework, user requests issued to the machine are handled by an interconnected ecosystem of generative AI models.
This visionary concept raises significant challenges, including privacy, security, trustability, and the ethical use of generative models.
arXiv Detail & Related papers (2023-10-07T17:16:34Z) - TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild [77.59069361196404]
TRiPOD is a novel method for predicting body dynamics based on graph attentional networks.
To incorporate a real-world challenge, we learn an indicator representing whether an estimated body joint is visible/invisible at each frame.
Our evaluation shows that TRiPOD outperforms all prior work and state-of-the-art specifically designed for each of the trajectory and pose forecasting tasks.
arXiv Detail & Related papers (2021-04-08T20:01:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.