Continual Distributed Learning for Crisis Management
- URL: http://arxiv.org/abs/2104.12876v1
- Date: Mon, 26 Apr 2021 21:01:29 GMT
- Title: Continual Distributed Learning for Crisis Management
- Authors: Aman Priyanshu and Mudit Sinha and Shreyans Mehta
- Abstract summary: Social media platforms such as Twitter provide an excellent resource for mobile communication during emergency events.
Data present in such situations is ever-changing, and resources during such a crisis may not be readily available.
A low resource, continually learning system must be developed to incorporate and make NLP models robust against noisy and unordered data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social media platforms such as Twitter provide an excellent resource for
mobile communication during emergency events. During the sudden onset of a
natural or artificial disaster, important information may be posted on Twitter
or similar web forums. This information can be used for disaster response and
crisis management if processed accurately. However, the data present in such
situations is ever-changing, and considerable resources during such a crisis may
not be readily available. Therefore, a low resource, continually learning
system must be developed to incorporate and make NLP models robust against
noisy and unordered data. We utilise regularisation to alleviate catastrophic
forgetting in the target neural networks while taking a distributed approach to
enable learning on resource-constrained devices. We employ federated learning
for distributed learning and aggregation of the central model for continual
deployment.
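The abstract names two ingredients: a regularisation penalty that keeps client updates close to the previously deployed model (to alleviate catastrophic forgetting), and federated averaging to aggregate client models into a central one. The paper's code is not reproduced here; the following is a minimal NumPy sketch under the simplifying assumptions of a logistic-regression model and a plain L2 anchor penalty (the function names and hyperparameters are illustrative, not the authors' implementation).

```python
import numpy as np

def local_update(w_central, X, y, lam=0.1, lr=0.05, epochs=50):
    """One client's logistic-regression update, regularised toward the
    central weights to discourage catastrophic forgetting."""
    w = w_central.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (p - y) / len(y)      # cross-entropy gradient
        grad += lam * (w - w_central)      # penalty: stay near the old model
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Aggregate client models, weighted by local dataset size (FedAvg)."""
    sizes = np.asarray(client_sizes, dtype=float)
    return np.average(np.stack(client_weights), axis=0, weights=sizes)
```

Each round, resource-constrained devices run `local_update` on their own tweets, and only the resulting weights (never the raw data) are sent for aggregation by `fed_avg`.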
Related papers
- Hypergame Theory for Decentralized Resource Allocation in Multi-user Semantic Communications [60.63472821600567]
A novel framework for decentralized computing and communication resource allocation in multiuser SC systems is proposed.
The challenge of efficiently allocating communication and computing resources is addressed through the application of Stackelberg hyper game theory.
Simulation results show that the proposed Stackelberg hyper game results in efficient usage of communication and computing resources.
arXiv Detail & Related papers (2024-09-26T15:55:59Z)
- Enhancing Trustworthiness and Minimising Bias Issues in Leveraging Social Media Data for Disaster Management Response [0.1499944454332829]
Leveraging real-time data can significantly reduce data uncertainty and enhance disaster response efforts.
Social media has emerged as an effective source of real-time data, as it is used extensively during and after disasters.
It also brings forth challenges regarding trustworthiness and bias in these data.
We aim to investigate and identify the factors that can be used to enhance trustworthiness and minimize bias.
arXiv Detail & Related papers (2024-08-15T10:59:20Z)
- CrisisSense-LLM: Instruction Fine-Tuned Large Language Model for Multi-label Social Media Text Classification in Disaster Informatics [49.2719253711215]
This study introduces a novel approach to disaster text classification by enhancing a pre-trained Large Language Model (LLM).
Our methodology involves creating a comprehensive instruction dataset from disaster-related tweets, which is then used to fine-tune an open-source LLM.
This fine-tuned model can classify multiple aspects of disaster-related information simultaneously, such as the type of event, informativeness, and involvement of human aid.
arXiv Detail & Related papers (2024-06-16T23:01:10Z)
- CrisisMatch: Semi-Supervised Few-Shot Learning for Fine-Grained Disaster Tweet Classification [51.58605842457186]
We present a fine-grained disaster tweet classification model under the semi-supervised, few-shot learning setting.
Our model, CrisisMatch, effectively classifies tweets into fine-grained classes of interest using few labeled data and large amounts of unlabeled data.
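Semi-supervised learning of this kind typically turns confident model predictions on unlabeled tweets into training targets. As a hedged illustration only (CrisisMatch's actual pipeline also involves data augmentation and a supervised loss on the few labeled examples), a FixMatch-style confidence-threshold selection step can be sketched as:

```python
import numpy as np

def pseudo_label(probs, threshold=0.95):
    """Keep unlabeled examples whose top predicted class probability
    exceeds `threshold`; the rest are discarded this round.
    (Illustrative sketch, not the CrisisMatch implementation.)"""
    conf = probs.max(axis=1)       # model confidence per example
    keep = conf >= threshold       # mask of confidently predicted examples
    labels = probs.argmax(axis=1)  # hard pseudo-labels
    return keep, labels
```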
arXiv Detail & Related papers (2023-10-23T07:01:09Z)
- DeCrisisMB: Debiased Semi-Supervised Learning for Crisis Tweet Classification via Memory Bank [52.20298962359658]
In crisis events, people often use social media platforms such as Twitter to disseminate information about the situation, warnings, advice, and support.
Fully-supervised approaches require annotating vast amounts of data and are impractical due to limited response time.
Semi-supervised models can be biased, performing moderately well for certain classes while performing extremely poorly for others.
We propose a simple but effective debiasing method, DeCrisisMB, that utilizes a Memory Bank to store generated pseudo-labels and sample them equally from each class at each training iteration.
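The debiasing idea is that a per-class buffer lets each training iteration draw the same number of pseudo-labelled examples from every class, so frequent classes cannot dominate the semi-supervised loss. A minimal sketch of such a bank (class and method names are assumptions for illustration, not DeCrisisMB's code):

```python
import random
from collections import defaultdict

class MemoryBank:
    """Stores pseudo-labelled examples per class and samples each class
    equally, sketching the equal-sampling debiasing step."""
    def __init__(self, capacity_per_class=128):
        self.capacity = capacity_per_class
        self.bank = defaultdict(list)

    def add(self, example, pseudo_label):
        bucket = self.bank[pseudo_label]
        bucket.append(example)
        if len(bucket) > self.capacity:  # FIFO eviction when full
            bucket.pop(0)

    def sample(self, per_class):
        """Draw the same number of examples from every stored class."""
        batch = []
        for label, bucket in self.bank.items():
            k = min(per_class, len(bucket))
            batch.extend((ex, label) for ex in random.sample(bucket, k))
        return batch
```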
arXiv Detail & Related papers (2023-10-23T05:25:51Z)
- Exploring the Impact of Disrupted Peer-to-Peer Communications on Fully Decentralized Learning in Disaster Scenarios [4.618221836001186]
Fully decentralized learning enables the distribution of learning resources across multiple user devices or nodes.
This study investigates the effects of various disruptions to peer-to-peer communications on decentralized learning in a disaster setting.
arXiv Detail & Related papers (2023-10-04T17:24:38Z)
- Robust, Deep, and Reinforcement Learning for Management of Communication and Power Networks [6.09170287691728]
The present thesis first develops principled methods to make generic machine learning models robust against distributional uncertainties and adversarial data.
We then build on this robust framework to design robust semi-supervised learning over graph methods.
The second part of this thesis aspires to fully unleash the potential of next-generation wired and wireless networks.
arXiv Detail & Related papers (2022-02-08T05:49:06Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- RelaySum for Decentralized Deep Learning on Heterogeneous Data [71.36228931225362]
In decentralized machine learning, workers compute model updates on their local data.
Because the workers only communicate with few neighbors without central coordination, these updates propagate progressively over the network.
This paradigm enables distributed training on networks without all-to-all connectivity, helping to protect data privacy as well as to reduce the communication cost of distributed training in data centers.
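The progressive propagation of updates over a sparse network can be illustrated with a plain neighbour-averaging (gossip) step. Note this is a simplified stand-in: RelaySum itself relays exact sums along a spanning tree rather than iteratively averaging, so the sketch below shows only the general decentralized-averaging paradigm the summary describes.

```python
import numpy as np

def gossip_average(models, neighbours, steps=1):
    """One or more rounds of neighbour averaging on a fixed communication
    graph: each worker replaces its model with the mean of its own and its
    neighbours' models. `neighbours[i]` lists worker i's peers."""
    models = [np.asarray(m, dtype=float) for m in models]
    for _ in range(steps):
        updated = []
        for i, m in enumerate(models):
            group = [m] + [models[j] for j in neighbours[i]]
            updated.append(np.mean(group, axis=0))
        models = updated
    return models
```

On a fully connected graph a single round already reaches consensus; on sparser graphs (the realistic case), information spreads one hop per round, which is exactly why heterogeneous data across workers becomes challenging.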
arXiv Detail & Related papers (2021-10-08T14:55:32Z)
- Improving Community Resiliency and Emergency Response With Artificial Intelligence [0.05541644538483946]
We are working towards a multipronged emergency response tool that provides stakeholders with timely access to comprehensive, relevant, and reliable information.
Our tool encodes multiple layers of open-source geospatial data, including flood risk locations, road network strength, inundation maps that proxy inland flooding, and computer vision semantic segmentation for estimating flooded areas and damaged infrastructure.
These data layers are combined and used as input for machine learning algorithms, such as finding the best evacuation routes before, during, and after an emergency, or providing a list of available lodging for first responders in an impacted area.
arXiv Detail & Related papers (2020-05-28T18:05:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.