Stream-based perception for cognitive agents in mobile ecosystems
- URL: http://arxiv.org/abs/2401.13604v1
- Date: Wed, 24 Jan 2024 17:14:50 GMT
- Title: Stream-based perception for cognitive agents in mobile ecosystems
- Authors: Jeremias Dötterl, Ralf Bruns, Jürgen Dunkel, Sascha Ossowski
- Abstract summary: We present a stream-based perception approach that enables the agents to perceive meaningful situations in low-level sensor data streams.
We show how situations derived from smartphone sensor data can trigger and guide auctions, which the agents use to reach agreements.
- Score: 0.7865191493201839
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cognitive agent abstractions can help to engineer intelligent systems across
mobile devices. On smartphones, the data obtained from onboard sensors can give
valuable insights into the user's current situation. Unfortunately, today's
cognitive agent frameworks cannot cope well with the challenging
characteristics of sensor data. Sensor data resides at a low abstraction
level, and individual data elements are not meaningful when observed in
isolation. In contrast, cognitive agents operate on high-level percepts and
lack the means to effectively detect complex spatio-temporal patterns in
sequences of multiple percepts. In this paper, we present a stream-based
perception approach that enables the agents to perceive meaningful situations
in low-level sensor data streams. We present a crowdshipping case study where
autonomous, self-interested agents collaborate to deliver parcels to their
destinations. We show how situations derived from smartphone sensor data can
trigger and guide auctions, which the agents use to reach agreements.
Experiments with real smartphone data demonstrate the benefits of stream-based
agent perception.
Related papers
- Robust Collaborative Perception without External Localization and Clock Devices [52.32342059286222]
A consistent spatial-temporal coordination across multiple agents is fundamental for collaborative perception.
Traditional methods depend on external devices to provide localization and clock signals.
We propose a novel approach: aligning by recognizing the inherent geometric patterns within the perceptual data of various agents.
arXiv Detail & Related papers (2024-05-05T15:20:36Z)
- AdvGPS: Adversarial GPS for Multi-Agent Perception Attack [47.59938285740803]
This study investigates whether specific GPS signals can easily mislead the multi-agent perception system.
We introduce AdvGPS, a method capable of generating adversarial GPS signals which are also stealthy for individual agents within the system.
Our experiments on the OPV2V dataset demonstrate that these attacks substantially undermine the performance of state-of-the-art methods.
arXiv Detail & Related papers (2024-01-30T23:13:41Z)
- Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z)
- Incremental Semi-supervised Federated Learning for Health Inference via Mobile Sensing [5.434366992553875]
We propose FedMobile, an incremental semi-supervised federated learning algorithm.
We evaluate FedMobile using a real-world mobile sensing dataset for influenza-like symptom recognition.
arXiv Detail & Related papers (2023-12-19T23:39:33Z)
- Know Thy Neighbors: A Graph Based Approach for Effective Sensor-Based Human Activity Recognition in Smart Homes [0.0]
We propose a novel graph-guided neural network approach for Human Activity Recognition (HAR) in smart homes.
We accomplish this by learning a more expressive graph structure representing the sensor network in a smart home.
Our approach maps discrete input sensor measurements to a feature space through the application of attention mechanisms.
arXiv Detail & Related papers (2023-11-16T02:43:13Z)
- Unsupervised Statistical Feature-Guided Diffusion Model for Sensor-based Human Activity Recognition [3.2319909486685354]
A key problem holding up progress in wearable sensor-based human activity recognition is the unavailability of diverse and labeled training data.
We propose an unsupervised statistical feature-guided diffusion model specifically optimized for wearable sensor-based human activity recognition.
By conditioning the diffusion model on statistical information such as mean, standard deviation, Z-score, and skewness, we generate diverse and representative synthetic sensor data.
arXiv Detail & Related papers (2023-05-30T15:12:59Z)
- Anomaly Detection and Inter-Sensor Transfer Learning on Smart Manufacturing Datasets [6.114996271792091]
In many cases, the goal of the smart manufacturing system is to rapidly detect (or anticipate) failures to reduce operational cost and eliminate downtime.
This often boils down to detecting anomalies within the sensor data acquired from the system.
The smart manufacturing application domain poses certain salient technical challenges.
We show that predictive failure classification can be achieved, thus paving the way for predictive maintenance.
arXiv Detail & Related papers (2022-06-13T17:51:24Z)
- Stochastic Coherence Over Attention Trajectory For Continuous Learning In Video Streams [64.82800502603138]
This paper proposes a novel neural-network-based approach to progressively and autonomously develop pixel-wise representations in a video stream.
The proposed method is based on a human-like attention mechanism that allows the agent to learn by observing what is moving in the attended locations.
Our experiments leverage 3D virtual environments and they show that the proposed agents can learn to distinguish objects just by observing the video stream.
arXiv Detail & Related papers (2022-04-26T09:52:31Z)
- SensiX: A Platform for Collaborative Machine Learning on the Edge [69.1412199244903]
We present SensiX, a personal edge platform that stays between sensor data and sensing models.
We demonstrate its efficacy in developing motion and audio-based multi-device sensing systems.
Our evaluation shows that SensiX offers a 7-13% increase in overall accuracy and up to a 30% increase across different environment dynamics, at the expense of a 3 mW power overhead.
arXiv Detail & Related papers (2020-12-04T23:06:56Z)
- Self-Supervised Transformers for Activity Classification using Ambient Sensors [3.1829446824051195]
This paper proposes a methodology to classify the activities of a resident within an ambient-sensor-based environment.
We also propose a methodology to pre-train Transformers in a self-supervised manner, as a hybrid autoencoder-classifier model.
arXiv Detail & Related papers (2020-11-22T20:46:25Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality or accuracy of this information and is not responsible for any consequences of its use.