AIJIM: A Scalable Model for Real-Time AI in Environmental Journalism
- URL: http://arxiv.org/abs/2503.17401v5
- Date: Mon, 28 Apr 2025 11:18:31 GMT
- Title: AIJIM: A Scalable Model for Real-Time AI in Environmental Journalism
- Authors: Torsten Tiltack
- Abstract summary: AIJIM is a framework for integrating real-time AI into environmental journalism. It was validated in a 2024 pilot on the island of Mallorca. It achieved 85.4% detection accuracy and 89.7% agreement with expert annotations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces AIJIM, the Artificial Intelligence Journalism Integration Model -- a novel framework for integrating real-time AI into environmental journalism. AIJIM combines Vision Transformer-based hazard detection, crowdsourced validation with 252 validators, and automated reporting within a scalable, modular architecture. A dual-layer explainability approach ensures ethical transparency through fast CAM-based visual overlays and optional LIME-based box-level interpretations. Validated in a 2024 pilot on the island of Mallorca using the NamicGreen platform, AIJIM achieved 85.4% detection accuracy and 89.7% agreement with expert annotations, while reducing reporting latency by 40%. Unlike conventional approaches such as Data-Driven Journalism or AI Fact-Checking, AIJIM provides a transferable model for participatory, community-driven environmental reporting, advancing journalism, artificial intelligence, and sustainability in alignment with the UN Sustainable Development Goals and the EU AI Act.
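The abstract describes a three-stage flow -- Vision Transformer-based hazard detection with CAM/LIME explainability, crowdsourced validation, and automated reporting. The following minimal Python sketch illustrates how such a pipeline could be wired together; the class names, thresholds, stubbed detector, and per-detection validator count are illustrative assumptions, not the authors' NamicGreen implementation.

from dataclasses import dataclass
from typing import List, Optional
import random

@dataclass
class Detection:
    label: str              # e.g. "illegal_dumping" (hypothetical hazard class)
    confidence: float       # model confidence in [0, 1]
    box: tuple              # (x1, y1, x2, y2) pixel coordinates
    cam_overlay: Optional[object] = None   # fast CAM heatmap (explainability layer 1)
    lime_regions: Optional[object] = None  # optional LIME box-level explanation (layer 2)

@dataclass
class ValidationResult:
    votes_for: int
    votes_against: int

    @property
    def agreement(self) -> float:
        total = self.votes_for + self.votes_against
        return self.votes_for / total if total else 0.0

def detect_hazards(image) -> List[Detection]:
    """Stub for the Vision Transformer detector; a real system would run a
    fine-tuned ViT and derive the CAM overlay from its activations."""
    return [Detection(label="illegal_dumping", confidence=0.91, box=(40, 60, 300, 280))]

def crowd_validate(detection: Detection, n_validators: int = 5) -> ValidationResult:
    """Stub for crowdsourced validation: each validator confirms or rejects."""
    votes = [random.random() < detection.confidence for _ in range(n_validators)]
    return ValidationResult(votes_for=sum(votes), votes_against=n_validators - sum(votes))

def generate_report(detection: Detection, validation: ValidationResult) -> str:
    """Automated reporting step: publish only detections the community confirms."""
    if validation.agreement < 0.6:   # assumed agreement threshold
        return ""  # suppressed: insufficient validator agreement
    return (f"Hazard report: {detection.label} "
            f"(model confidence {detection.confidence:.0%}, "
            f"validator agreement {validation.agreement:.0%}).")

if __name__ == "__main__":
    for det in detect_hazards(image=None):
        print(generate_report(det, crowd_validate(det)))

The sketch only captures the control flow the abstract names (detect, validate, report); the reported 85.4% accuracy and 40% latency reduction come from the paper's Mallorca pilot, not from this code.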
Related papers
- Resilience of Vision Transformers for Domain Generalisation in the Presence of Out-of-Distribution Noisy Images [2.2124795371148616]
We evaluate vision transformers pre-trained with masked image modelling (MIM) against synthetic out-of-distribution (OOD) benchmarks.
Experiments demonstrate BEiT's robustness, which maintains 94% accuracy on PACS and 87% on Office-Home despite significant occlusions.
These insights bridge the gap between lab-trained models and real-world deployment, offering a blueprint for building AI systems that generalise reliably under uncertainty.
arXiv Detail & Related papers (2025-04-05T16:25:34Z) - FakeScope: Large Multimodal Expert Model for Transparent AI-Generated Image Forensics [66.14786900470158]
We propose FakeScope, an expert large multimodal model (LMM) tailored for AI-generated image forensics.
FakeScope identifies AI-synthetic images with high accuracy and provides rich, interpretable, and query-driven forensic insights.
FakeScope achieves state-of-the-art performance in both closed-ended and open-ended forensic scenarios.
arXiv Detail & Related papers (2025-03-31T16:12:48Z) - From Trust to Truth: Actionable policies for the use of AI in fact-checking in Germany and Ukraine [0.081585306387285]
The rise of Artificial Intelligence (AI) presents unprecedented opportunities and challenges for journalism, fact-checking and media regulation. While AI offers tools to combat disinformation and enhance media practices, its unregulated use and associated risks necessitate clear policies and collaborative efforts. This policy paper explores the implications of AI for journalism and fact-checking, with a focus on addressing disinformation and fostering responsible AI integration.
arXiv Detail & Related papers (2025-03-24T14:34:00Z) - Identifying Trustworthiness Challenges in Deep Learning Models for Continental-Scale Water Quality Prediction [64.4881275941927]
We present the first comprehensive evaluation of trustworthiness in a continental-scale multi-task LSTM model. Our investigation uncovers systematic patterns of model performance disparities linked to basin characteristics. This work serves as a timely call to action for advancing trustworthy data-driven methods for water resources management.
arXiv Detail & Related papers (2025-03-13T01:50:50Z) - Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z) - FairSense-AI: Responsible AI Meets Sustainability [1.980639720136382]
We introduce FairSense-AI: a framework designed to detect and mitigate bias in both text and images.
By leveraging Large Language Models (LLMs) and Vision-Language Models (VLMs), FairSense-AI uncovers subtle forms of prejudice or stereotyping.
FairSense-AI integrates an AI risk assessment component that aligns with frameworks like the MIT AI Risk Repository and NIST AI Risk Management Framework.
arXiv Detail & Related papers (2025-03-04T18:43:57Z) - VLDBench: Vision Language Models Disinformation Detection Benchmark [37.40909096573706]
We present the Vision-Language Disinformation Detection Benchmark VLDBench.
It is the first comprehensive benchmark for detecting disinformation across both unimodal (text-only) and multimodal (text and image) content.
VLDBench features a rigorous semi-automated data curation pipeline, with 22 domain experts dedicating more than 300 hours to annotation.
arXiv Detail & Related papers (2025-02-17T02:18:47Z) - Safety is Essential for Responsible Open-Ended Systems [47.172735322186]
Open-Endedness is the ability of AI systems to continuously and autonomously generate novel and diverse artifacts or solutions. This position paper argues that the inherently dynamic and self-propagating nature of Open-Ended AI introduces significant, underexplored risks.
arXiv Detail & Related papers (2025-02-06T21:32:07Z) - From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI's Polarized Environmental Debate [69.05573887799203]
Much of this debate has concentrated on direct impact without addressing the significant indirect effects. This paper examines how the problem of Jevons' Paradox applies to AI, whereby efficiency gains may paradoxically spur increased consumption. We argue that understanding these second-order impacts requires an interdisciplinary approach, combining lifecycle assessments with socio-economic analyses.
arXiv Detail & Related papers (2025-01-27T22:45:06Z) - FastRM: An efficient and automatic explainability framework for multimodal generative models [10.184567639685321]
FastRM is an efficient method for predicting explainable Relevancy Maps of LVLMs.
FastRM achieves a 99.8% reduction in computation time and a 44.4% reduction in memory footprint.
arXiv Detail & Related papers (2024-12-02T13:39:29Z) - Responsible AI in Open Ecosystems: Reconciling Innovation with Risk Assessment and Disclosure [4.578401882034969]
We focus on how model performance evaluation may inform or inhibit probing of model limitations, biases, and other risks.
Our findings can inform AI providers and legal scholars in designing interventions and policies that preserve open-source innovation while incentivizing ethical uptake.
arXiv Detail & Related papers (2024-09-27T19:09:40Z) - The BRAVO Semantic Segmentation Challenge Results in UNCV2024 [68.20197719071436]
We define two categories of reliability: (1) semantic reliability, which reflects the model's accuracy and calibration when exposed to various perturbations; and (2) OOD reliability, which measures the model's ability to detect object classes that are unknown during training.
The results reveal interesting insights into the importance of large-scale pre-training and minimal architectural design in developing robust and reliable semantic segmentation models.
arXiv Detail & Related papers (2024-09-23T15:17:30Z) - Boosting CLIP Adaptation for Image Quality Assessment via Meta-Prompt Learning and Gradient Regularization [55.09893295671917]
This paper introduces a novel Gradient-Regulated Meta-Prompt IQA Framework (GRMP-IQA).
The GRMP-IQA comprises two key modules: Meta-Prompt Pre-training Module and Quality-Aware Gradient Regularization.
Experiments on five standard BIQA datasets demonstrate superior performance over state-of-the-art BIQA methods in the limited-data setting.
arXiv Detail & Related papers (2024-09-09T07:26:21Z) - AI, Climate, and Transparency: Operationalizing and Improving the AI Act [2.874893537471256]
This paper critically examines the AI Act's provisions on climate-related transparency.
We identify key shortcomings, including the exclusion of energy consumption during AI inference.
We propose a novel interpretation to bring inference-related energy use back within the Act's scope.
arXiv Detail & Related papers (2024-08-28T07:57:39Z) - Responsible AI for Earth Observation [10.380878519901998]
We systematically define the intersection of AI and EO, with a central focus on responsible AI practices.
We identify several critical components guiding this exploration from both academia and industry perspectives.
The paper explores potential opportunities and emerging trends, providing valuable insights for future research endeavors.
arXiv Detail & Related papers (2024-05-31T14:47:27Z) - Towards A Comprehensive Assessment of AI's Environmental Impact [0.5982922468400899]
A recent surge of interest in machine learning has sparked a trend towards large-scale adoption of AI/ML.
There is a need for a framework that monitors the environmental impact and degradation from AI/ML throughout its lifecycle.
This study proposes a methodology to track environmental variables relating to the multifaceted impact of AI around datacenters using openly available energy data and globally acquired satellite observations.
arXiv Detail & Related papers (2024-05-22T21:19:35Z) - Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946]
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness.
Recent approaches have attempted to address this mismatch using prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
arXiv Detail & Related papers (2024-04-23T11:45:32Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI. It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems. We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z) - Guiding AI-Generated Digital Content with Wireless Perception [69.51950037942518]
We introduce an integration of wireless perception with AI-generated content (AIGC) to improve the quality of digital content production.
The framework employs a novel multi-scale perception technology to read the user's posture, which is difficult to describe accurately in words, and transmits it to the AIGC model as skeleton images.
Because the production process imposes the user's posture as a constraint on the AIGC model, the generated content is more closely aligned with the user's requirements.
arXiv Detail & Related papers (2023-03-26T04:39:03Z) - Multisource AI Scorecard Table for System Evaluation [3.74397577716445]
The paper describes a Multisource AI Scorecard Table (MAST) that provides the developer and user of an artificial intelligence (AI)/machine learning (ML) system with a standard checklist.
The paper explores how the analytic tradecraft standards outlined in Intelligence Community Directive (ICD) 203 can provide a framework for assessing the performance of an AI system.
arXiv Detail & Related papers (2021-02-08T03:37:40Z) - InfoBERT: Improving Robustness of Language Models from an Information Theoretic Perspective [84.78604733927887]
Large-scale language models such as BERT have achieved state-of-the-art performance across a wide range of NLP tasks.
Recent studies show that such BERT-based models are vulnerable to textual adversarial attacks.
We propose InfoBERT, a novel learning framework for robust fine-tuning of pre-trained language models.
arXiv Detail & Related papers (2020-10-05T20:49:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.