Optimizing Shanghai's Household Waste Recycling Collection Program by Decision-Making based on Mathematical Modeling
- URL: http://arxiv.org/abs/2507.03844v1
- Date: Sat, 05 Jul 2025 00:13:52 GMT
- Title: Optimizing Shanghai's Household Waste Recycling Collection Program by Decision-Making based on Mathematical Modeling
- Authors: Jiaxuan Chen, Ling Zhou Shen, Jinchen Liu
- Abstract summary: We show a vivid and comprehensive application of the classical mathematical multi-criteria decision model, the Analytical Hierarchy Process. We also seek the key criteria for the sustainable development of human society by assessing the important elements of waste recycling.
- Score: 2.710486998959315
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this article, we discuss the optimization of Shanghai's recycling collection program, where the core task is making a decision among a set of alternatives. We show a vivid and comprehensive application of the classical mathematical multi-criteria decision model, the Analytical Hierarchy Process (AHP), using the eigenvector method. We also seek the key criteria for the sustainable development of human society by assessing the important elements of waste recycling. First, we evaluated quantified scores for the benefits and the costs, respectively, of recycling household glass waste in Shanghai. For each score, we adopted the AHP method to build a hierarchical structure of the problem. We first identified the key assessment criteria of the evaluation from various perspectives, including direct monetary costs and benefits as well as environmental and other indirect considerations. We then distributed questionnaires to our school's science teachers and took the geometric mean of their responses to build the pairwise comparison matrix of the criteria. After the theoretical modeling was done, we collected the datasets essential to evaluating each score by researching official statistics, Internet sources, market information, and news reports. Where a desired value was not directly accessible, we derived it through a logical pre-processing of other data. Finally, we considered the generalization of our mathematical model from several perspectives, including the extension of the assessment criteria and the dynamic interdependency between wastes inside a limited transportation container.
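The AHP pipeline the abstract describes (element-wise geometric-mean aggregation of questionnaire judgments, then priority weights from the principal eigenvector) can be sketched as follows. This is an illustrative sketch only: the 3x3 matrices and the criteria they stand for are hypothetical, not the paper's survey data.

```python
import numpy as np

def aggregate_judgments(matrices):
    """Combine several respondents' pairwise comparison matrices
    by element-wise geometric mean (the aggregation the abstract uses)."""
    stack = np.stack([np.asarray(m, dtype=float) for m in matrices])
    return np.exp(np.log(stack).mean(axis=0))

def ahp_weights(pairwise):
    """Priority weights via the eigenvector method: normalize the
    principal eigenvector of the pairwise comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)          # principal (largest) eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

# Hypothetical judgments from two respondents over three criteria
# (e.g. direct cost, environmental impact, logistics) on Saaty's 1-9 scale.
A1 = [[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]
A2 = [[1, 2, 4], [1/2, 1, 3], [1/4, 1/3, 1]]
A = aggregate_judgments([A1, A2])
w = ahp_weights(A)
print(w)  # weights sum to 1; first criterion dominates in both judgments
```

The geometric mean (rather than the arithmetic mean) is the standard way to aggregate AHP judgments because it preserves the reciprocal property of the comparison matrix.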
Related papers
- A Framework for Data Valuation and Monetisation [0.0]
This paper introduces a unified valuation framework that integrates economic, governance, and strategic perspectives into a coherent decision-support model. The model combines qualitative scoring, cost- and utility-based estimation, relevance/quality indexing, and multi-criteria weighting to define data value transparently and systematically.
arXiv Detail & Related papers (2025-12-08T15:57:26Z) - Geometric Data Valuation via Leverage Scores [0.2538209532048866]
We propose a geometric alternative to Shapley data valuation based on statistical leverage scores. We show that our scores satisfy the dummy, efficiency, and symmetry axioms of Shapley valuation. We also show that training on a leverage-sampled subset produces a model whose parameters and predictive risk are within $O(\varepsilon)$ of the full-data optimum.
arXiv Detail & Related papers (2025-11-03T22:20:50Z) - Multimodal Data Storage and Retrieval for Embodied AI: A Survey [8.079598907674903]
Embodied AI (EAI) agents interact with the physical world, generating vast, heterogeneous multimodal data streams. EAI's core requirements include physical grounding, low-latency access, and dynamic scalability. Our survey is based on a comprehensive review of more than 180 related studies, providing a rigorous roadmap for designing robust, high-performance data management frameworks.
arXiv Detail & Related papers (2025-08-19T15:04:02Z) - A New Approach for Multicriteria Assessment in the Ranking of Alternatives Using Cardinal and Ordinal Data [0.0]
We propose a novel MCA approach that combines two Virtual Gap Analysis (VGA) models. The VGA framework, rooted in linear programming, is pivotal in the MCA methodology.
arXiv Detail & Related papers (2025-07-10T04:00:48Z) - LLM-based Evaluation Policy Extraction for Ecological Modeling [22.432508855430797]
Evaluating ecological time series is critical for benchmarking model performance in many important applications. Traditional numerical metrics fail to capture domain-specific temporal patterns critical to ecological processes. We propose a novel framework that integrates metric learning with large language model (LLM)-based natural language policy extraction.
arXiv Detail & Related papers (2025-05-20T01:02:29Z) - From Rankings to Insights: Evaluation Should Shift Focus from Leaderboard to Feedback [36.68929551237421]
We introduce Feedbacker, an evaluation framework that provides comprehensive and fine-grained results. Our project homepage and dataset are available at https://liudan193.io/Feedbacker.
arXiv Detail & Related papers (2025-05-10T16:52:40Z) - CritiQ: Mining Data Quality Criteria from Human Preferences [70.35346554179036]
We introduce CritiQ, a novel data selection method that automatically mines criteria from human preferences for data quality. CritiQ Flow employs a manager agent to evolve quality criteria and worker agents to make pairwise judgments. We demonstrate the effectiveness of our method in the code, math, and logic domains.
arXiv Detail & Related papers (2025-02-26T16:33:41Z) - Evaluating Step-by-step Reasoning Traces: A Survey [3.895864050325129]
Step-by-step reasoning is widely used to enhance the reasoning ability of large language models (LLMs) in complex problems. Existing evaluation practices are highly inconsistent, resulting in fragmented progress across evaluator design and benchmark development. This survey proposes a taxonomy of evaluation criteria with four top-level categories: factuality, validity, coherence, and utility.
arXiv Detail & Related papers (2025-02-17T19:58:31Z) - Are we making progress in unlearning? Findings from the first NeurIPS unlearning competition [70.60872754129832]
The first NeurIPS competition on unlearning sought to stimulate the development of novel algorithms.
Nearly 1,200 teams from across the world participated.
We analyze top solutions and delve into discussions on benchmarking unlearning.
arXiv Detail & Related papers (2024-06-13T12:58:00Z) - Learning Evaluation Models from Large Language Models for Sequence Generation [61.8421748792555]
We propose a three-stage evaluation model training method that utilizes large language models to generate labeled data for model-based metric development. Experimental results on the SummEval benchmark demonstrate that CSEM can effectively train an evaluation model without human-labeled data.
arXiv Detail & Related papers (2023-08-08T16:41:16Z) - In Search of Insights, Not Magic Bullets: Towards Demystification of the Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z) - Making Machine Learning Datasets and Models FAIR for HPC: A Methodology and Case Study [0.0]
The FAIR Guiding Principles aim to improve the findability, accessibility, interoperability, and reusability of digital content by making them both human and machine actionable.
These principles have not yet been broadly adopted in the domain of machine learning-based program analyses and optimizations for High-Performance Computing.
We design a methodology to make HPC datasets and machine learning models FAIR after investigating existing FAIRness assessment and improvement techniques.
arXiv Detail & Related papers (2022-11-03T18:45:46Z) - Reinforcement Learning with Heterogeneous Data: Estimation and Inference [84.72174994749305]
We introduce the K-Heterogeneous Markov Decision Process (K-Hetero MDP) to address sequential decision problems with population heterogeneity.
We propose the Auto-Clustered Policy Evaluation (ACPE) for estimating the value of a given policy, and the Auto-Clustered Policy Iteration (ACPI) for estimating the optimal policy in a given policy class.
We present simulations to support our theoretical findings, and we conduct an empirical study on the standard MIMIC-III dataset.
arXiv Detail & Related papers (2022-01-31T20:58:47Z) - Energy-Based Learning for Cooperative Games, with Applications to Feature/Data/Model Valuations [91.36803653600667]
We present a novel energy-based treatment for cooperative games, with a theoretical justification by the maximum entropy framework.
Surprisingly, by conducting variational inference of the energy-based model, we recover various game-theoretic valuation criteria, such as Shapley value and Banzhaf index.
We experimentally demonstrate that the proposed Variational Index enjoys intriguing properties on certain synthetic and real-world valuation problems.
arXiv Detail & Related papers (2021-06-05T17:39:04Z) - Application of independent component analysis and TOPSIS to deal with dependent criteria in multicriteria decision problems [8.637110868126546]
We propose a novel approach whose aim is to estimate, from the observed data, a set of independent latent criteria.
A central element of our approach is to formulate the decision problem as a blind source separation problem.
We consider TOPSIS-based approaches to obtain the ranking of alternatives from the latent criteria.
arXiv Detail & Related papers (2020-02-06T13:51:28Z)
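The TOPSIS step that the last paper applies to its latent criteria can be sketched as follows: rank alternatives by their relative closeness to an ideal solution. The decision matrix, weights, and criterion directions below are hypothetical placeholders, not data from the paper.

```python
import numpy as np

def topsis(decision, weights, benefit):
    """Relative closeness of each alternative to the ideal solution.
    decision: alternatives x criteria matrix.
    benefit[j]: True if larger values of criterion j are better."""
    D = np.asarray(decision, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    V = D / np.linalg.norm(D, axis=0) * np.asarray(weights, dtype=float)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal
    return d_neg / (d_pos + d_neg)             # higher = better

# Three hypothetical alternatives over cost (lower better) and
# two benefit criteria (higher better).
D = [[250, 16, 12], [200, 16, 8], [300, 32, 16]]
closeness = topsis(D, weights=[0.5, 0.3, 0.2], benefit=[False, True, True])
print(closeness)
```

The best alternative is the one with the highest closeness score; note that TOPSIS, like AHP, assumes the criteria are independent, which is exactly the assumption the paper above relaxes via independent component analysis.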
This list is automatically generated from the titles and abstracts of the papers in this site.