Generative AI for Analyzing Participatory Rural Appraisal Data: An Exploratory Case Study in Gender Research
- URL: http://arxiv.org/abs/2502.00763v1
- Date: Sun, 02 Feb 2025 11:55:52 GMT
- Title: Generative AI for Analyzing Participatory Rural Appraisal Data: An Exploratory Case Study in Gender Research
- Authors: Srividya Sheshadri, Unnikrishnan Radhakrishnan, Aswathi Padmavilochanan, Christopher Coley, Rao R. Bhavani
- Abstract summary: This study explores the novel application of Generative Artificial Intelligence (GenAI) in analyzing unstructured visual data generated through Participatory Rural Appraisal (PRA). Using the "Ideal Village" PRA activity as a case study, we evaluate three state-of-the-art Large Language Models (LLMs) in their ability to interpret hand-drawn artifacts containing multilingual content from various Indian states. Our findings reveal significant challenges in AI's current capabilities to process such unstructured data, particularly in handling multilingual content, maintaining contextual accuracy, and avoiding hallucinations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study explores the novel application of Generative Artificial Intelligence (GenAI) in analyzing unstructured visual data generated through Participatory Rural Appraisal (PRA), specifically focusing on women's empowerment research in rural communities. Using the "Ideal Village" PRA activity as a case study, we evaluate three state-of-the-art Large Language Models (LLMs) - GPT-4o, Claude 3.5 Sonnet, and Gemini 1.5 Pro - in their ability to interpret hand-drawn artifacts containing multilingual content from various Indian states. Through comparative analysis, we assess the models' performance across critical dimensions including visual interpretation, language translation, and data classification. Our findings reveal significant challenges in AI's current capabilities to process such unstructured data, particularly in handling multilingual content, maintaining contextual accuracy, and avoiding hallucinations. While the models showed promise in basic visual interpretation, they struggled with nuanced cultural contexts and consistent classification of empowerment-related elements. This study contributes to both AI and gender research by highlighting the potential and limitations of AI in analyzing participatory research data, while emphasizing the need for human oversight and improved contextual understanding. Our findings suggest future directions for developing more inclusive AI models that can better serve community-based participatory research, particularly in gender studies and rural development contexts.
Related papers
- Retrieval Augmented Generation and Understanding in Vision: A Survey and New Outlook [85.43403500874889]
Retrieval-augmented generation (RAG) has emerged as a pivotal technique in artificial intelligence (AI).
The survey covers recent advancements in RAG for embodied AI, with a particular focus on applications in planning, task execution, multimodal perception, interaction, and specialized domains.
arXiv Detail & Related papers (2025-03-23T10:33:28Z) - The impact of AI and peer feedback on research writing skills: a study using the CGScholar platform among Kazakhstani scholars [0.0]
This research studies the impact of AI and peer feedback on the academic writing development of Kazakhstani scholars using the CGScholar platform.
The study aimed to find out how familiarity with AI tools and peer feedback processes impacts participants' openness to incorporating feedback into their academic writing.
arXiv Detail & Related papers (2025-03-05T04:34:25Z) - User-centric evaluation of explainability of AI with and for humans: a comprehensive empirical study [5.775094401949666]
This study is situated within the field of Human-Centered Artificial Intelligence (HCAI).
It focuses on the results of a user-centered assessment of commonly used eXplainable Artificial Intelligence (XAI) algorithms.
arXiv Detail & Related papers (2024-10-21T12:32:39Z) - Decoding AI and Human Authorship: Nuances Revealed Through NLP and Statistical Analysis [0.0]
This research explores the nuanced differences in texts produced by AI and those written by humans.
The study investigates various linguistic traits, patterns of creativity, and potential biases inherent in human-written and AI-generated texts.
arXiv Detail & Related papers (2024-07-15T18:09:03Z) - SOUL: Towards Sentiment and Opinion Understanding of Language [96.74878032417054]
We propose a new task called Sentiment and Opinion Understanding of Language (SOUL).
SOUL aims to evaluate sentiment understanding through two subtasks: Review Comprehension (RC) and Justification Generation (JG).
arXiv Detail & Related papers (2023-10-27T06:48:48Z) - Unveiling Gender Bias in Terms of Profession Across LLMs: Analyzing and Addressing Sociological Implications [0.0]
The study examines existing research on gender bias in AI language models and identifies gaps in the current knowledge.
The findings shed light on gendered word associations, language usage, and biased narratives present in the outputs of Large Language Models.
The paper presents strategies for reducing gender bias in LLMs, including algorithmic approaches and data augmentation techniques.
arXiv Detail & Related papers (2023-07-18T11:38:45Z) - A Study of Situational Reasoning for Traffic Understanding [63.45021731775964]
We devise three novel text-based tasks for situational reasoning in the traffic domain.
We adopt four knowledge-enhanced methods that have shown generalization capability across language reasoning tasks in prior work.
We provide in-depth analyses of model performance on data partitions and examine model predictions categorically.
arXiv Detail & Related papers (2023-06-05T01:01:12Z) - Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z) - Supporting Human-AI Collaboration in Auditing LLMs with LLMs [33.56822240549913]
Large language models have been shown to be biased and behave irresponsibly.
It is crucial to audit these language models rigorously.
Existing auditing tools leverage humans, AI, or both to find failures.
arXiv Detail & Related papers (2023-04-19T21:59:04Z) - Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z) - DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.