Manifesto for the Responsible Development of Mathematical Works -- A
Tool for Practitioners and for Management
- URL: http://arxiv.org/abs/2306.09131v1
- Date: Thu, 15 Jun 2023 13:44:40 GMT
- Title: Manifesto for the Responsible Development of Mathematical Works -- A
Tool for Practitioners and for Management
- Authors: Maurice Chiodo, Dennis Müller
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This manifesto has been written as a practical tool and aid for anyone
carrying out, managing or influencing mathematical work. It provides insight
into how to undertake and develop mathematically-powered products and services
in a safe and responsible way. Rather than give a framework of objectives to
achieve, we instead introduce a process that can be integrated into the common
ways in which mathematical products or services are created, from start to
finish. This process helps address the various issues and problems that can
arise for the product, the developers, the institution, and for wider society.
To do this, we break down the typical procedure of mathematical development
into 10 key stages: our "10 pillars for responsible development", which follow a
somewhat chronological ordering of the steps, and associated challenges, that
frequently occur in mathematical work. Together these 10 pillars cover issues
of the entire lifecycle of a mathematical product or service, including the
preparatory work required to responsibly start a project, central questions of
good technical mathematics and data science, and issues of communication,
deployment and follow-up maintenance specifically related to mathematical
systems. This manifesto, and the pillars within it, are the culmination of 7
years of work done by us as part of the Cambridge University Ethics in
Mathematics Project. These are all tried-and-tested ideas that we have
presented and used in both academic and industrial environments. In our work,
we have directly seen that mathematics can be an incredible tool for good in
society, but also that without careful consideration it can cause immense harm.
We hope that following this manifesto will empower its readers to reduce the
risk of undesirable and unwanted consequences of their mathematical work.
Related papers
- MathBench: Evaluating the Theory and Application Proficiency of LLMs with a Hierarchical Mathematics Benchmark [82.64129627675123]
MathBench is a new benchmark that rigorously assesses the mathematical capabilities of large language models.
MathBench spans a wide range of mathematical disciplines, offering a detailed evaluation of both theoretical understanding and practical problem-solving skills.
arXiv Detail & Related papers (2024-05-20T17:52:29Z)
- Mathify: Evaluating Large Language Models on Mathematical Problem Solving Tasks [34.09857430966818]
We introduce an extensive mathematics dataset called "MathQuest" sourced from the 11th and 12th standard Mathematics NCERT textbooks.
We conduct fine-tuning experiments with three prominent large language models: LLaMA-2, WizardMath, and MAmmoTH.
Our experiments reveal that among the three models, MAmmoTH-13B emerges as the most proficient, achieving the highest level of competence in solving the presented mathematical problems.
arXiv Detail & Related papers (2024-04-19T08:45:42Z)
- FineMath: A Fine-Grained Mathematical Evaluation Benchmark for Chinese Large Language Models [47.560637703675816]
FineMath is a fine-grained mathematical evaluation benchmark dataset for assessing Chinese Large Language Models (LLMs).
FineMath is created to cover the major key mathematical concepts taught in elementary school math, which are divided into 17 categories of math word problems.
All the 17 categories of math word problems are manually annotated with their difficulty levels according to the number of reasoning steps required to solve these problems.
arXiv Detail & Related papers (2024-03-12T15:32:39Z)
- MathScale: Scaling Instruction Tuning for Mathematical Reasoning [70.89605383298331]
Large language models (LLMs) have demonstrated remarkable capabilities in problem-solving.
However, their proficiency in solving mathematical problems remains inadequate.
We propose MathScale, a simple and scalable method to create high-quality mathematical reasoning data.
arXiv Detail & Related papers (2024-03-05T11:42:59Z)
- MATHSENSEI: A Tool-Augmented Large Language Model for Mathematical Reasoning [2.9104279358536647]
We present MathSensei, a tool-augmented large language model for mathematical reasoning.
We study the complementary benefits of the tools: a knowledge retriever (Bing Web Search), a program generator + executor (Python), and a symbolic equation solver (Wolfram-Alpha API).
arXiv Detail & Related papers (2024-02-27T05:50:35Z)
- Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training [65.10741459705739]
We propose a novel contrastive pre-training approach for mathematical question representations, namely QuesCo.
We first design two-level question augmentations, including content-level and structure-level, which generate literally diverse question pairs with similar purposes.
Then, to fully exploit hierarchical information of knowledge concepts, we propose a knowledge hierarchy-aware rank strategy.
arXiv Detail & Related papers (2023-01-18T14:23:29Z)
- A Survey of Deep Learning for Mathematical Reasoning [71.88150173381153]
We review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade.
Recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning.
arXiv Detail & Related papers (2022-12-20T18:46:16Z)
- Peano: Learning Formal Mathematical Reasoning [35.086032962873226]
General mathematical reasoning is computationally undecidable, but humans routinely solve new problems.
We posit that central to both puzzles is the structure of procedural abstractions underlying mathematics.
We explore this idea in a case study on 5 sections of beginning algebra on the Khan Academy platform.
arXiv Detail & Related papers (2022-11-29T01:42:26Z)
- JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem Understanding [74.12405417718054]
This paper aims to advance the mathematical intelligence of machines by presenting the first Chinese mathematical pre-trained language model (PLM).
Unlike other standard NLP tasks, mathematical texts are difficult to understand, since they involve mathematical terminology, symbols and formulas in the problem statement.
We design a novel curriculum pre-training approach for improving the learning of mathematical PLMs, consisting of both basic and advanced courses.
arXiv Detail & Related papers (2022-06-13T17:03:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.