Agent Based Computational Model Aided Approach to Improvise the
Inequality-Adjusted Human Development Index (IHDI) for Greater Parity in Real
Scenario Assessments
- URL: http://arxiv.org/abs/2010.03677v1
- Date: Wed, 7 Oct 2020 22:20:51 GMT
- Title: Agent Based Computational Model Aided Approach to Improvise the
Inequality-Adjusted Human Development Index (IHDI) for Greater Parity in Real
Scenario Assessments
- Authors: Pradipta Banerjee, Subhrabrata Choudhury
- Abstract summary: The Inequality-adjusted Human Development Index (IHDI) has been a path-changing composite index focused on human development.
We discuss the apparent shortcomings and probable refinement of the existing index using an agent-based computational system model approach.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To design, evaluate and tune policies for all-inclusive human
development, the primary requisite is to assess the true state of affairs of
the society. Statistical indices such as GDP and the Gini coefficient have
been developed to evaluate socio-economic systems. They remain prevalent in
conventional economic theory, yet they offer little regarding the true
well-being and development of humans. The Human Development Index (HDI), and
thereafter the Inequality-adjusted Human Development Index (IHDI), have been
path-changing composite indices with a focus on human development. However,
even though their fundamental philosophy is focused on all-inclusive human
development, these composite indices appear unable to capture the actual
state of affairs in several scenarios. This happens due to the dynamic
non-linearity of social systems, where the superposition principle cannot be
applied between the system's inputs and outputs because the system's own
attributes are altered by each input. We discuss the apparent shortcomings
and probable refinement of the existing index using an agent-based
computational system model approach.
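For context on how the composite index discussed in the abstract is assembled, here is a minimal Python sketch of a simplified UNDP-style IHDI calculation: the HDI is the geometric mean of the dimension indices, and the IHDI discounts each dimension by an Atkinson inequality measure (A = 1 - geometric mean / arithmetic mean of the underlying distribution) before re-aggregating. The dimension indices and toy distributions below are illustrative assumptions, not data from the paper.

```python
# Simplified, illustrative UNDP-style IHDI computation (not from the paper).
# The dimension names and sample distributions are hypothetical.

import math

def geometric_mean(values):
    """Geometric mean of a list of positive values."""
    return math.prod(values) ** (1.0 / len(values))

def atkinson_inequality(distribution):
    """Atkinson inequality measure (epsilon = 1):
    A = 1 - geometric_mean / arithmetic_mean."""
    g = geometric_mean(distribution)
    mu = sum(distribution) / len(distribution)
    return 1.0 - g / mu

def ihdi(dimension_indices, dimension_distributions):
    """HDI is the geometric mean of the dimension indices; the IHDI discounts
    each dimension index by (1 - A) before aggregating again."""
    hdi = geometric_mean(dimension_indices)
    adjusted = [
        idx * (1.0 - atkinson_inequality(dist))
        for idx, dist in zip(dimension_indices, dimension_distributions)
    ]
    return hdi, geometric_mean(adjusted)

# Hypothetical example: health, education, income indices with toy
# individual-level distributions (more spread -> larger inequality penalty).
indices = [0.85, 0.70, 0.65]
distributions = [
    [70, 72, 75, 80, 82],                   # life expectancy (years)
    [4, 8, 10, 12, 16],                     # years of schooling
    [2_000, 5_000, 9_000, 20_000, 60_000],  # income (illustrative units)
]
hdi_value, ihdi_value = ihdi(indices, distributions)
print(f"HDI  = {hdi_value:.3f}")
print(f"IHDI = {ihdi_value:.3f} (loss from inequality: {1 - ihdi_value / hdi_value:.1%})")
```

The relative loss 1 - IHDI/HDI is the usual headline figure for how much measured development is discounted by inequality; the paper's critique targets exactly this kind of static, per-dimension adjustment.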
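The abstract's point that the superposition principle fails can be made concrete with a toy stateful agent. The sketch below is our own illustration under assumed dynamics (a hypothetical Agent class whose capacity attribute shrinks with each input), not the authors' model: two inputs applied in sequence produce a different total response than the sum of the responses to each input taken in isolation.

```python
# Illustrative only: why superposition fails when an agent's own attributes
# change with every input (assumed dynamics, not the authors' model).

class Agent:
    def __init__(self, capacity=10.0):
        self.capacity = capacity   # attribute altered by every input
        self.welfare = 0.0

    def receive(self, transfer):
        # Diminishing returns: the gain depends on the remaining capacity,
        # and the capacity itself shrinks after the transfer is absorbed.
        gain = transfer * (self.capacity / (self.capacity + transfer))
        self.capacity = max(self.capacity - transfer, 0.0)
        self.welfare += gain
        return gain

# Response to two transfers applied in sequence (the state evolves)...
a = Agent()
sequential = a.receive(4.0) + a.receive(4.0)

# ...versus the sum of responses of two identical, independent agents.
superposed = Agent().receive(4.0) + Agent().receive(4.0)

print(f"sequential response: {sequential:.3f}")
print(f"superposed response: {superposed:.3f}")
# The two differ, so an index that aggregates inputs linearly or
# dimension-by-dimension can misstate the realized outcome.
```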
Related papers
- WorldSimBench: Towards Video Generation Models as World Simulators [79.69709361730865]
We classify the functionalities of predictive models into a hierarchy and take the first step in evaluating World Simulators by proposing a dual evaluation framework called WorldSimBench.
WorldSimBench includes Explicit Perceptual Evaluation and Implicit Manipulative Evaluation, encompassing human preference assessments from the visual perspective and action-level evaluations in embodied tasks.
Our comprehensive evaluation offers key insights that can drive further innovation in video generation models, positioning World Simulators as a pivotal advancement toward embodied artificial intelligence.
arXiv Detail & Related papers (2024-10-23T17:56:11Z)
- Are we making progress in unlearning? Findings from the first NeurIPS unlearning competition [70.60872754129832]
The first NeurIPS competition on unlearning sought to stimulate the development of novel algorithms.
Nearly 1,200 teams from across the world participated.
We analyze top solutions and delve into discussions on benchmarking unlearning.
arXiv Detail & Related papers (2024-06-13T12:58:00Z)
- ConSiDERS-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models [53.00812898384698]
We argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking.
We highlight how cognitive biases can conflate fluent information and truthfulness, and how cognitive uncertainty affects the reliability of rating scores such as Likert.
We propose the ConSiDERS-The-Human evaluation framework consisting of 6 pillars -- Consistency, Scoring Criteria, Differentiating, User Experience, Responsible, and Scalability.
arXiv Detail & Related papers (2024-05-28T22:45:28Z)
- Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence [5.147767778946168]
We critically assess 23 state-of-the-art Large Language Models (LLMs) benchmarks.
Our research uncovered significant limitations, including biases, difficulties in measuring genuine reasoning, adaptability, implementation inconsistencies, prompt engineering complexity, diversity, and the overlooking of cultural and ideological norms.
arXiv Detail & Related papers (2024-02-15T11:08:10Z)
- Agent Alignment in Evolving Social Norms [65.45423591744434]
We propose an evolutionary framework for agent evolution and alignment, named EvolutionaryAgent.
In an environment where social norms continuously evolve, agents better adapted to the current social norms will have a higher probability of survival and proliferation.
We show that EvolutionaryAgent can align progressively better with the evolving social norms while maintaining its proficiency in general tasks.
arXiv Detail & Related papers (2024-01-09T15:44:44Z)
- Measuring Value Alignment [12.696227679697493]
This paper introduces a novel formalism to quantify the alignment between AI systems and human values.
By utilizing this formalism, AI developers and ethicists can better design and evaluate AI systems to ensure they operate in harmony with human values.
arXiv Detail & Related papers (2023-12-23T12:30:06Z)
- Hierarchical Evaluation Framework: Best Practices for Human Evaluation [17.91641890651225]
The absence of widely accepted human evaluation metrics in NLP hampers fair comparisons among different systems and the establishment of universal assessment standards.
We develop our own hierarchical evaluation framework to provide a more comprehensive representation of the NLP system's performance.
In future work, we will investigate the potential time-saving benefits of our proposed framework for evaluators assessing NLP systems.
arXiv Detail & Related papers (2023-10-03T09:46:02Z)
- Evaluating the Social Impact of Generative AI Systems in Systems and Society [43.32010533676472]
Generative AI systems across modalities, ranging from text (including code) to image, audio, and video, have broad social impacts.
There is no official standard for means of evaluating those impacts or for which impacts should be evaluated.
We present a guide that moves toward a standard approach in evaluating a base generative AI system for any modality.
arXiv Detail & Related papers (2023-06-09T15:05:13Z)
- Rethinking Model Evaluation as Narrowing the Socio-Technical Gap [34.08410116336628]
We argue that model evaluation practices must take on a critical task to cope with the challenges and responsibilities brought by this homogenization.
We urge the community to develop evaluation methods based on real-world socio-technical requirements.
arXiv Detail & Related papers (2023-06-01T00:01:43Z)
- Training Socially Aligned Language Models on Simulated Social Interactions [99.39979111807388]
Social alignment in AI systems aims to ensure that these models behave according to established societal values.
Current language models (LMs) are trained to rigidly replicate their training corpus in isolation.
This work presents a novel training paradigm that permits LMs to learn from simulated social interactions.
arXiv Detail & Related papers (2023-05-26T14:17:36Z)
- An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear Analysis [68.8204255655161]
We show that uncertainty measures based on Monte-Carlo dropout in the context of a human-in-the-loop system increase the system's transparency and performance.
A simulation study demonstrates that the uncertainty-based human-in-the-loop system increases performance for different levels of human involvement.
arXiv Detail & Related papers (2020-07-14T15:47:37Z)