Agent Based Computational Model Aided Approach to Improvise the
Inequality-Adjusted Human Development Index (IHDI) for Greater Parity in Real
Scenario Assessments
- URL: http://arxiv.org/abs/2010.03677v1
- Date: Wed, 7 Oct 2020 22:20:51 GMT
- Title: Agent Based Computational Model Aided Approach to Improvise the
Inequality-Adjusted Human Development Index (IHDI) for Greater Parity in Real
Scenario Assessments
- Authors: Pradipta Banerjee, Subhrabrata Choudhury
- Abstract summary: The Inequality-adjusted Human Development Index (IHDI) has been a path-changing composite index focused on human development.
We discuss the apparent shortcomings and probable refinement of the existing index using an agent-based computational system model approach.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To design, evaluate and tune policies for all-inclusive human development,
the primary requisite is to assess the true state of affairs of the society.
Statistical indices such as GDP and the Gini coefficient have been developed to
evaluate socio-economic systems. They remain prevalent in conventional economic
theories, but they offer little regarding the true well-being and development of
humans. The Human Development Index (HDI) and, thereafter, the
Inequality-adjusted Human Development Index (IHDI) have been path-changing
composite indices focused on human development. However, even though their
fundamental philosophy centers on all-inclusive human development, the composite
indices appear unable to capture the actual state of affairs in several
scenarios. This is due to the dynamic non-linearity of social systems, where the
superposition principle cannot be applied between the inputs and outputs of the
system, as the system's own attributes are altered by each input. We discuss the
apparent shortcomings and probable refinement of the existing index using an
agent-based computational system model approach.
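As context for the index the paper proposes to refine, the standard UNDP IHDI calculation can be sketched as follows: each dimension index is discounted by the Atkinson inequality measure of its underlying distribution, and the IHDI is the geometric mean of the adjusted dimension indices. This is a minimal sketch of the published methodology, not the paper's refinement; all numeric values below are illustrative, not taken from the paper.

```python
from math import prod

def atkinson(values):
    """Atkinson inequality measure (aversion parameter epsilon = 1):
    A = 1 - geometric_mean / arithmetic_mean, so A = 0 for a perfectly
    equal distribution and grows with inequality."""
    n = len(values)
    geometric_mean = prod(values) ** (1.0 / n)
    arithmetic_mean = sum(values) / n
    return 1.0 - geometric_mean / arithmetic_mean

def ihdi(dimension_indices, distributions):
    """Inequality-adjusted HDI: each dimension index is multiplied by
    (1 - A), where A is the Atkinson measure of that dimension's
    distribution across the population; the IHDI is the geometric mean
    of the adjusted indices."""
    adjusted = [index * (1.0 - atkinson(dist))
                for index, dist in zip(dimension_indices, distributions)]
    n = len(adjusted)
    return prod(adjusted) ** (1.0 / n)

# Illustrative (made-up) data: health, education, income dimension indices
indices = [0.85, 0.70, 0.60]
# Hypothetical per-capita distributions underlying each dimension
dists = [
    [70, 75, 80, 65],              # life expectancy, years
    [8, 12, 14, 6],                # years of schooling
    [10000, 25000, 40000, 5000],   # income per capita
]
print(round(ihdi(indices, dists), 3))
```

Because the adjustment multiplies by (1 - A), the IHDI never exceeds the unadjusted geometric mean of the dimension indices; the gap between the two is the "loss" due to inequality. The paper's argument is that such static distributional corrections still miss the dynamic, non-linear feedback of real social systems.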
Related papers
- Are we making progress in unlearning? Findings from the first NeurIPS unlearning competition [70.60872754129832]
First NeurIPS competition on unlearning sought to stimulate the development of novel algorithms.
Nearly 1,200 teams from across the world participated.
We analyze top solutions and delve into discussions on benchmarking unlearning.
arXiv Detail & Related papers (2024-06-13T12:58:00Z)
- Agent Alignment in Evolving Social Norms [65.45423591744434]
We propose an evolutionary framework for agent evolution and alignment, named EvolutionaryAgent.
In an environment where social norms continuously evolve, agents better adapted to the current social norms will have a higher probability of survival and proliferation.
We show that EvolutionaryAgent can align progressively better with the evolving social norms while maintaining its proficiency in general tasks.
arXiv Detail & Related papers (2024-01-09T15:44:44Z)
- Measuring Value Alignment [12.696227679697493]
This paper introduces a novel formalism to quantify the alignment between AI systems and human values.
By utilizing this formalism, AI developers and ethicists can better design and evaluate AI systems to ensure they operate in harmony with human values.
arXiv Detail & Related papers (2023-12-23T12:30:06Z)
- Stability and Generalization of the Decentralized Stochastic Gradient Descent Ascent Algorithm [80.94861441583275]
We investigate the complexity of the generalization bound of the decentralized stochastic gradient descent ascent (D-SGDA) algorithm.
Our results analyze the impact of different top factors on the generalization of D-SGDA.
We also balance it with the generalization to obtain the optimal convex-concave setting.
arXiv Detail & Related papers (2023-10-31T11:27:01Z)
- Hierarchical Evaluation Framework: Best Practices for Human Evaluation [17.91641890651225]
The absence of widely accepted human evaluation metrics in NLP hampers fair comparisons among different systems and the establishment of universal assessment standards.
We develop our own hierarchical evaluation framework to provide a more comprehensive representation of the NLP system's performance.
In future work, we will investigate the potential time-saving benefits of our proposed framework for evaluators assessing NLP systems.
arXiv Detail & Related papers (2023-10-03T09:46:02Z)
- It HAS to be Subjective: Human Annotator Simulation via Zero-shot Density Estimation [15.8765167340819]
Human annotator simulation (HAS) serves as a cost-effective substitute for human evaluation such as data annotation and system assessment.
Human perception and behaviour during human evaluation exhibit inherent variability due to diverse cognitive processes and subjective interpretations.
This paper introduces a novel meta-learning framework that treats HAS as a zero-shot density estimation problem.
arXiv Detail & Related papers (2023-09-30T20:54:59Z)
- Evaluating the Social Impact of Generative AI Systems in Systems and Society [43.32010533676472]
Generative AI systems across modalities, spanning text (including code), image, audio, and video, have broad social impacts.
There is no official standard for means of evaluating those impacts or for which impacts should be evaluated.
We present a guide that moves toward a standard approach in evaluating a base generative AI system for any modality.
arXiv Detail & Related papers (2023-06-09T15:05:13Z)
- Rethinking Model Evaluation as Narrowing the Socio-Technical Gap [34.08410116336628]
We argue that model evaluation practices must take on a critical task to cope with the challenges and responsibilities brought by this homogenization.
We urge the community to develop evaluation methods based on real-world socio-requirements.
arXiv Detail & Related papers (2023-06-01T00:01:43Z)
- Training Socially Aligned Language Models on Simulated Social Interactions [99.39979111807388]
Social alignment in AI systems aims to ensure that these models behave according to established societal values.
Current language models (LMs) are trained to rigidly replicate their training corpus in isolation.
This work presents a novel training paradigm that permits LMs to learn from simulated social interactions.
arXiv Detail & Related papers (2023-05-26T14:17:36Z)
- Stable Bias: Analyzing Societal Representations in Diffusion Models [72.27121528451528]
We propose a new method for exploring the social biases in Text-to-Image (TTI) systems.
Our approach relies on characterizing the variation in generated images triggered by enumerating gender and ethnicity markers in the prompts.
We leverage this method to analyze images generated by 3 popular TTI systems and find that while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents.
arXiv Detail & Related papers (2023-03-20T19:32:49Z)
- An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear Analysis [68.8204255655161]
We show that uncertainty measures based on Monte-Carlo dropout in the context of a human-in-the-loop system increase the system's transparency and performance.
A simulation study demonstrates that the uncertainty-based human-in-the-loop system increases performance for different levels of human involvement.
arXiv Detail & Related papers (2020-07-14T15:47:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.