AI Product Value Assessment Model: An Interdisciplinary Integration Based on Information Theory, Economics, and Psychology
- URL: http://arxiv.org/abs/2508.16714v1
- Date: Fri, 22 Aug 2025 15:51:14 GMT
- Title: AI Product Value Assessment Model: An Interdisciplinary Integration Based on Information Theory, Economics, and Psychology
- Authors: Yu Yang
- Abstract summary: This paper develops a multi-dimensional evaluation model that integrates information theory's entropy reduction principle, economics' bounded rationality framework, and psychology's irrational decision theories to quantify AI product value. A non-linear formula captures factor couplings, and validation through 10 commercial cases demonstrates the model's effectiveness in distinguishing successful and failed products.
- Score: 5.57756598733474
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, breakthroughs in artificial intelligence (AI) technology have triggered global industrial transformations, with applications permeating various fields such as finance, healthcare, education, and manufacturing. However, this rapid iteration is accompanied by irrational development, where enterprises blindly invest due to technology hype, often overlooking systematic value assessments. This paper develops a multi-dimensional evaluation model that integrates information theory's entropy reduction principle, economics' bounded rationality framework, and psychology's irrational decision theories to quantify AI product value. Key factors include positive dimensions (e.g., uncertainty elimination, efficiency gains, cost savings, decision quality improvement) and negative risks (e.g., error probability, impact, and correction costs). A non-linear formula captures factor couplings, and validation through 10 commercial cases demonstrates the model's effectiveness in distinguishing successful and failed products, supporting hypotheses on synergistic positive effects, non-linear negative impacts, and interactive regulations. Results reveal value generation logic, offering enterprises tools to avoid blind investments and promote rational AI industry development. Future directions include adaptive weights, dynamic mechanisms, and extensions to emerging AI technologies like generative models.
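The abstract names the model's ingredients (synergistic positive factors, non-linearly amplified negative risks) but not its exact formula. The sketch below is a hypothetical illustration of that structure, not the paper's fitted equation: the factor names, exponents, and the subtraction of a super-linear risk term are all assumptions chosen to mirror the described behavior.

```python
def ai_product_value(positives, error_prob, error_impact, correction_cost,
                     synergy_exp=1.2, risk_exp=2.0):
    """Hypothetical sketch of a multi-dimensional AI product value score.

    positives: dict of scores in [0, 1] for factors such as uncertainty
    elimination, efficiency gains, cost savings, and decision-quality
    improvement. The exponents are illustrative stand-ins for the paper's
    coupling parameters, which the abstract does not specify.
    """
    # Synergistic positive term: a super-additive aggregate (exponent > 1)
    # so that strong factors reinforce one another.
    positive_term = sum(positives.values()) ** synergy_exp

    # Non-linear negative term: expected loss (probability x impact) plus
    # correction cost, amplified super-linearly as total risk grows.
    expected_loss = error_prob * error_impact + correction_cost
    negative_term = expected_loss ** risk_exp

    return positive_term - negative_term
```

Under this toy parameterization, a product with strong positive factors and modest risk scores well above one with weak factors and high error probability, matching the model's stated ability to separate successful from failed products.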
Related papers
- When Life Gives You AI, Will You Turn It Into A Market for Lemons? Understanding How Information Asymmetries About AI System Capabilities Affect Market Outcomes and Adoption [45.10829096284761]
Complex AI systems can appear highly accurate while making costly errors or embedding hidden defects. This paper provides the first experimental evidence on the role of information asymmetries and disclosure designs in shaping user adoption of AI systems.
arXiv Detail & Related papers (2026-01-29T12:49:28Z) - ERA-IT: Aligning Semantic Models with Revealed Economic Preference for Real-Time and Explainable Patent Valuation [0.0]
This study proposes the Economic Reasoning Alignment via Instruction Tuning (ERA-IT) framework. We theoretically conceptualize patent renewal history as a revealed economic preference and leverage it as an objective supervisory signal. We trained the model not only to predict value tiers but also to reverse-engineer the Economic Chain-of-Thought from unstructured text.
arXiv Detail & Related papers (2025-12-14T23:04:07Z) - Beyond Automation: Rethinking Work, Creativity, and Governance in the Age of Generative AI [0.0]
Generative artificial intelligence (AI) systems are reshaping the nature, distribution, and meaning of work, creativity, and economic security. This paper investigates several inter-related phenomena in the current AI era: (1) the evolving landscape of employment and the future of work; (2) the diverse patterns of AI adoption across socio-demographic groups, sectors, and geographies; and (3) whether universal basic income (UBI) should become a compulsory policy response to the AI revolution.
arXiv Detail & Related papers (2025-12-09T20:25:24Z) - Toward Quantitative Modeling of Cybersecurity Risks Due to AI Misuse [50.87630846876635]
We develop nine detailed cyber risk models. Each model decomposes attacks into steps using the MITRE ATT&CK framework. Individual estimates are aggregated through Monte Carlo simulation.
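The aggregation pattern described here (decompose an attack into steps, estimate each step, combine the estimates by Monte Carlo simulation) can be sketched as follows. The step probabilities and loss distribution are invented placeholders for illustration, not figures from the paper:

```python
import random

def simulate_attack_losses(step_success_probs, loss_dist, n_trials=100_000, seed=0):
    """Monte Carlo aggregation of per-step estimates into a loss distribution.

    An attack succeeds only if every step succeeds; on success a loss is
    drawn from loss_dist. All numbers used here are hypothetical.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        succeeded = all(rng.random() < p for p in step_success_probs)
        losses.append(loss_dist(rng) if succeeded else 0.0)
    return losses

# Illustrative example: three attack steps and a heavy-tailed loss on success.
steps = [0.6, 0.4, 0.5]  # hypothetical per-step success probabilities
losses = simulate_attack_losses(steps, lambda rng: rng.lognormvariate(10, 1))
expected_loss = sum(losses) / len(losses)
```

Because the steps are modeled as independent here, the simulated success rate converges to the product of the per-step probabilities (about 12% in this toy setup); real models would also encode dependencies between steps.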
arXiv Detail & Related papers (2025-12-09T17:54:17Z) - Irresponsible AI: big tech's influence on AI research and associated impacts [40.69166515991077]
We examine the growing and disproportionate influence of big tech in AI research. We argue that its drive for scaling and general-purpose systems is at odds with the responsible, ethical, and sustainable development of AI. We propose alternative strategies that build on the responsibility of implicated actors and collective action.
arXiv Detail & Related papers (2025-11-27T22:02:27Z) - Rethinking Data Protection in the (Generative) Artificial Intelligence Era [138.07763415496288]
We propose a four-level taxonomy that captures the diverse protection needs arising in modern (generative) AI models and systems. Our framework offers a structured understanding of the trade-offs between data utility and control, spanning the entire AI pipeline.
arXiv Detail & Related papers (2025-07-03T02:45:51Z) - Evaluation Framework for AI Systems in "the Wild" [37.48117853114386]
Generative AI (GenAI) models have become vital across industries, yet current evaluation methods have not adapted to their widespread use. Traditional evaluations often rely on benchmarks and fixed datasets, frequently failing to reflect real-world performance. This white paper proposes a comprehensive framework for how we should evaluate real-world GenAI systems.
arXiv Detail & Related papers (2025-04-23T14:52:39Z) - Generative AI and Information Asymmetry: Impacts on Adverse Selection and Moral Hazard [7.630624512225164]
Information asymmetry leads to adverse selection and moral hazard in economic markets. This research investigates how Generative Artificial Intelligence (AI) can create detailed informational signals. Generative AI can effectively mitigate adverse selection and moral hazard, resulting in more efficient market outcomes and increased social welfare.
arXiv Detail & Related papers (2025-02-18T15:48:29Z) - Generative AI in Health Economics and Outcomes Research: A Taxonomy of Key Definitions and Emerging Applications, an ISPOR Working Group Report [12.204470166456561]
Generative AI shows significant potential in health economics and outcomes research (HEOR), enhancing efficiency and productivity and offering novel solutions to complex challenges. Foundation models are promising in automating complex tasks, though challenges remain in scientific reliability, bias, interpretability, and workflow integration.
arXiv Detail & Related papers (2024-10-26T15:42:50Z) - Predicting the Impact of Generative AI Using an Agent-Based Model [0.0]
Generative artificial intelligence (AI) systems have transformed industries by autonomously generating content that mimics human creativity.
This paper employs agent-based modeling (ABM) to explore these implications.
The ABM integrates individual, business, and governmental agents to simulate dynamics such as education, skills acquisition, AI adoption, and regulatory responses.
arXiv Detail & Related papers (2024-08-30T13:13:56Z) - On the Trade-offs between Adversarial Robustness and Actionable Explanations [32.05150063480917]
We make one of the first attempts at studying the impact of adversarially robust models on actionable explanations.
We derive theoretical bounds on the differences between the cost and the validity of recourses generated by state-of-the-art algorithms.
Our results show that adversarially robust models significantly increase the cost and reduce the validity of the resulting recourses.
arXiv Detail & Related papers (2023-09-28T13:59:50Z) - Evaluating Explainability in Machine Learning Predictions through Explainer-Agnostic Metrics [0.0]
We develop six distinct model-agnostic metrics designed to quantify the extent to which model predictions can be explained.
These metrics measure different aspects of model explainability, ranging from local importance, global importance, and surrogate predictions.
We demonstrate the practical utility of these metrics on classification and regression tasks, and integrate these metrics into an existing Python package for public use.
arXiv Detail & Related papers (2023-02-23T15:28:36Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Avoiding Negative Side Effects due to Incomplete Knowledge of AI Systems [35.763408055286355]
Learning to recognize and avoid negative side effects of an agent's actions is critical to improve the safety and reliability of autonomous systems.
Mitigating negative side effects is an emerging research topic that is attracting increased attention due to the rapid growth in the deployment of AI systems.
This article provides a comprehensive overview of different forms of negative side effects and the recent research efforts to address them.
arXiv Detail & Related papers (2020-08-24T16:48:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.