Sensorium Arc: AI Agent System for Oceanic Data Exploration and Interactive Eco-Art
- URL: http://arxiv.org/abs/2511.15997v1
- Date: Thu, 20 Nov 2025 02:48:40 GMT
- Title: Sensorium Arc: AI Agent System for Oceanic Data Exploration and Interactive Eco-Art
- Authors: Noah Bissell, Ethan Paley, Joshua Harrison, Juliano Calil, Myungin Lee
- Abstract summary: Sensorium Arc (AI reflects on climate) is a real-time multimodal interactive AI agent system that personifies the ocean as a poetic speaker. The project demonstrates the potential of conversational AI agents to mediate affective, intuitive access to high-dimensional environmental data.
- Score: 3.0447481187978886
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sensorium Arc (AI reflects on climate) is a real-time multimodal interactive AI agent system that personifies the ocean as a poetic speaker and guides users through immersive explorations of complex marine data. Built on a modular multi-agent system and retrieval-augmented large language model (LLM) framework, Sensorium enables natural spoken conversations with AI agents that embody the ocean's perspective, generating responses that blend scientific insight with ecological poetics. Through keyword detection and semantic parsing, the system dynamically triggers data visualizations and audiovisual playback based on time, location, and thematic cues drawn from the dialogue. Developed in collaboration with the Center for the Study of the Force Majeure and inspired by the eco-aesthetic philosophy of Newton Harrison, Sensorium Arc reimagines ocean data not as an abstract dataset but as a living narrative. The project demonstrates the potential of conversational AI agents to mediate affective, intuitive access to high-dimensional environmental data and proposes a new paradigm for human-machine-ecosystem interaction.
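The abstract describes keyword detection that triggers visualizations from thematic, temporal, and spatial cues in the dialogue. A minimal sketch of how such a cue dispatcher might look (the cue table, patterns, and handler names below are hypothetical illustrations, not the authors' implementation):

```python
# Hypothetical sketch of keyword-triggered visualization dispatch, in the
# spirit of Sensorium Arc's pipeline. All patterns and handler names are
# illustrative assumptions.
import re

# Cue table: regex pattern -> name of a visualization handler to trigger.
CUES: dict[str, str] = {
    r"coral|reef": "show_reef_timelapse",        # thematic cue
    r"temperature|warming": "show_sst_heatmap",  # thematic cue
    r"\b(19|20)\d{2}\b": "scrub_timeline_to_year",  # temporal cue (a year)
}

def detect_cues(utterance: str) -> list[str]:
    """Return handlers whose pattern appears in the utterance, in table order."""
    return [handler for pattern, handler in CUES.items()
            if re.search(pattern, utterance, flags=re.IGNORECASE)]

print(detect_cues("The warming seas of 1998 bleached my reefs."))
# ['show_reef_timelapse', 'show_sst_heatmap', 'scrub_timeline_to_year']
```

A production system would likely replace the regex table with the semantic parsing the abstract mentions; the table form just makes the trigger mapping explicit.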
Related papers
- OceanAI: A Conversational Platform for Accurate, Transparent, Near-Real-Time Oceanographic Insights [17.632037709212266]
We present OceanAI, a conversational platform that integrates the natural-language fluency of open-source large language models. Each query triggers real-time API calls that identify, parse, and synthesize relevant datasets. OceanAI connects to multiple NOAA data products and variables, supporting applications in marine hazard forecasting, ecosystem assessment, and water-quality monitoring.
arXiv Detail & Related papers (2025-11-02T17:23:58Z)
- OceanGym: A Benchmark Environment for Underwater Embodied Agents [69.56465775825275]
OceanGym is the first comprehensive benchmark for ocean underwater embodied agents. It is designed to advance AI in one of the most demanding real-world environments. By providing a high-fidelity, rigorously designed platform, OceanGym establishes a testbed for developing robust embodied AI.
arXiv Detail & Related papers (2025-09-30T17:09:32Z)
- Teaching AI to Feel: A Collaborative, Full-Body Exploration of Emotive Communication [0.0]
Commonaiverse is an interactive installation exploring human emotions through full-body motion tracking and real-time AI feedback. We discuss how this collaborative, out-of-the-box approach pushes multimedia research toward a more embodied, co-created paradigm of emotional AI.
arXiv Detail & Related papers (2025-09-26T10:28:56Z)
- A Self-Evolving AI Agent System for Climate Science [59.08800209508371]
We introduce EarthLink, the first self-evolving AI agent system designed as an interactive "copilot" for Earth scientists. Through natural language interaction, EarthLink automates the entire research workflow by integrating planning, code execution, data analysis, and physical reasoning. It exhibits human-like cross-disciplinary analytical ability and proficiency comparable to a junior researcher in expert evaluations on core large-scale climate tasks.
arXiv Detail & Related papers (2025-07-23T08:29:25Z)
- Seamless Interaction: Dyadic Audiovisual Motion Modeling and Large-Scale Dataset [113.25650486482762]
We introduce the Seamless Interaction dataset, a large-scale collection of over 4,000 hours of face-to-face interaction footage. This dataset enables the development of AI technologies that understand dyadic embodied dynamics. We develop a suite of models that utilize the dataset to generate dyadic motion gestures and facial expressions aligned with human speech.
arXiv Detail & Related papers (2025-06-27T18:09:49Z)
- MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World [55.878173953175356]
We propose MultiPLY, a multisensory embodied large language model.
We first collect Multisensory Universe, a large-scale multisensory interaction dataset comprising 500k data samples.
We demonstrate that MultiPLY outperforms baselines by a large margin through a diverse set of embodied tasks.
arXiv Detail & Related papers (2024-01-16T18:59:45Z)
- Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z)
- Aria-NeRF: Multimodal Egocentric View Synthesis [17.0554791846124]
We seek to accelerate research in developing rich, multimodal scene models trained from egocentric data, based on differentiable volumetric ray-tracing inspired by Neural Radiance Fields (NeRFs). This dataset offers a comprehensive collection of sensory data, featuring RGB images, eye-tracking camera footage, audio recordings from a microphone, atmospheric pressure readings from a barometer, positional coordinates from GPS, and information from dual-frequency IMU datasets (1kHz and 800Hz). The diverse data modalities and the real-world context captured within this dataset serve as a robust foundation for furthering our understanding of human behavior and enabling more immersive and intelligent experiences.
arXiv Detail & Related papers (2023-11-11T01:56:35Z)
- Embodied Agents for Efficient Exploration and Smart Scene Description [47.82947878753809]
We tackle a setting for visual navigation in which an autonomous agent needs to explore and map an unseen indoor environment.
We propose and evaluate an approach that combines recent advances in visual robotic exploration with image captioning.
Our approach can generate smart scene descriptions that maximize semantic knowledge of the environment and avoid repetitions.
arXiv Detail & Related papers (2023-01-17T19:28:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.