SECure: A Social and Environmental Certificate for AI Systems
- URL: http://arxiv.org/abs/2006.06217v2
- Date: Sun, 19 Jul 2020 12:39:45 GMT
- Title: SECure: A Social and Environmental Certificate for AI Systems
- Authors: Abhishek Gupta (1 and 2), Camylle Lanteigne (1 and 3), and Sara
Kingsley (4) ((1) Montreal AI Ethics Institute, (2) Microsoft, (3) McGill
University, (4) Carnegie Mellon University)
- Abstract summary: This work proposes an ESG-inspired framework combining socio-technical measures to build eco-socially responsible AI systems.
The framework has four pillars: compute-efficient machine learning, federated learning, data sovereignty, and a LEEDesque certificate.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a world increasingly dominated by AI applications, an understudied aspect
is the carbon and social footprint of these power-hungry algorithms that
require copious computation and a trove of data for training and prediction.
While profitable in the short-term, these practices are unsustainable and
socially extractive from both a data-use and energy-use perspective. This work
proposes an ESG-inspired framework combining socio-technical measures to build
eco-socially responsible AI systems. The framework has four pillars:
compute-efficient machine learning, federated learning, data sovereignty, and a
LEEDesque certificate.
Compute-efficient machine learning is the use of compressed network
architectures that show only marginal decreases in accuracy. Federated learning
augments the first pillar's impact through the use of techniques that
distribute computational loads across idle capacity on devices. This is paired
with the third pillar of data sovereignty to ensure the privacy of user data
via techniques like use-based privacy and differential privacy. The final
pillar ties all these factors together and certifies products and services in a
standardized manner on their environmental and social impacts, allowing
consumers to align their purchase with their values.
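As a rough illustration of how the second and third pillars might interact in practice, the sketch below simulates federated averaging in which each client clips its update and perturbs it with Gaussian noise before the server aggregates, a standard Gaussian-mechanism approach to differential privacy. It is a minimal numpy simulation with assumed parameters (clip norm, noise scale, client count), not the framework proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_update(weights, data_x, data_y, lr=0.1):
    """One gradient-descent step on a client's local least-squares problem."""
    grad = data_x.T @ (data_x @ weights - data_y) / len(data_y)
    return weights - lr * grad

def privatize(update, clip_norm=1.0, noise_mult=0.5):
    """Clip the update and add Gaussian noise (Gaussian mechanism for DP)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_mult * clip_norm, size=update.shape)

# Toy federated round: 5 clients, each with a small local dataset.
dim, n_clients = 3, 5
global_w = np.zeros(dim)
clients = [(rng.normal(size=(20, dim)), rng.normal(size=20)) for _ in range(n_clients)]

for rnd in range(10):
    updates = []
    for x, y in clients:
        local_w = client_update(global_w.copy(), x, y)
        updates.append(privatize(local_w - global_w))   # send only a noisy delta
    global_w += np.mean(updates, axis=0)                # server-side federated averaging

print("global weights after 10 rounds:", global_w)
```

Because raw data never leaves the clients and each transmitted delta is clipped and noised, the sketch touches both the distributed-compute and the data-sovereignty pillars at once.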
Related papers
- Leveraging Federated Learning and Edge Computing for Recommendation
Systems within Cloud Computing Networks [3.36271475827981]
A key technology for edge intelligence is the privacy-protecting machine learning paradigm known as Federated Learning (FL), which enables data owners to train models without having to transfer raw data to third-party servers.
To reduce the impact of node failures and device exits, a Hierarchical Federated Learning (HFL) framework is proposed, where a designated cluster leader supports the data owner through intermediate model aggregation.
To mitigate the impact of soft clicks on the quality of user experience (QoE), the authors model user QoE as a comprehensive system cost.
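A minimal sketch of the intermediate-aggregation idea described above: cluster leaders average their members' updates before a single global average at the server. The cluster sizes, shapes, and unweighted averaging are illustrative assumptions, not the HFL protocol from the paper.

```python
import numpy as np

def aggregate(updates):
    """Simple unweighted average of model updates."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(1)
dim = 4

# Assumed grouping: 3 clusters, each with a handful of devices.
clusters = [[rng.normal(size=dim) for _ in range(k)] for k in (3, 5, 2)]

# Stage 1: each cluster leader aggregates its members' updates locally.
leader_updates = [aggregate(members) for members in clusters]

# Stage 2: the central server aggregates only the per-cluster summaries,
# so a failed device affects one cluster rather than the whole global round.
global_update = aggregate(leader_updates)
print(global_update)
```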
arXiv Detail & Related papers (2024-03-05T17:58:26Z) - An Empirical Study of Efficiency and Privacy of Federated Learning
Algorithms [2.994794762377111]
In today's world, the rapid expansion of IoT networks and the proliferation of smart devices have resulted in the generation of substantial amounts of heterogeneous data.
To handle this data effectively, advanced data processing technologies are necessary to guarantee the preservation of both privacy and efficiency.
Federated learning emerged as a distributed learning method that trains models locally and aggregates them on a server to preserve data privacy.
arXiv Detail & Related papers (2023-12-24T00:13:41Z) - Decentralised, Scalable and Privacy-Preserving Synthetic Data Generation [8.982917734231165]
We build a novel system that allows the contributors of real data to autonomously participate in differentially private synthetic data generation.
Our solution is based on three building blocks: Solid (Social Linked Data), MPC (Secure Multi-Party Computation), and Trusted Execution Environments (TEEs).
We show how these three technologies can be effectively used to address various challenges in responsible and trustworthy synthetic data generation.
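For intuition about the MPC building block mentioned above, here is a toy additive secret-sharing sketch: each contributor splits its value into random shares so that no single party sees the raw input, yet the sum is exactly recoverable. It illustrates only this generic primitive, not the Solid/TEE architecture of the paper; the modulus and party count are illustrative assumptions.

```python
import secrets

PRIME = 2**61 - 1  # field modulus (illustrative choice)

def share(value, n_parties):
    """Split an integer into n additive shares modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Three data owners secret-share their (toy) statistics.
values = [42, 17, 99]
all_shares = [share(v, n_parties=3) for v in values]

# Each computing party sums the shares it received, never seeing raw values.
partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]

assert reconstruct(partial_sums) == sum(values)  # aggregate revealed, inputs hidden
print("private sum:", reconstruct(partial_sums))
```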
arXiv Detail & Related papers (2023-10-30T22:27:32Z) - On Responsible Machine Learning Datasets with Fairness, Privacy, and Regulatory Norms [56.119374302685934]
There have been severe concerns over the trustworthiness of AI technologies.
Machine and deep learning algorithms depend heavily on the data used during their development.
We propose a framework to evaluate the datasets through a responsible rubric.
arXiv Detail & Related papers (2023-10-24T14:01:53Z) - Federated Learning-Empowered AI-Generated Content in Wireless Networks [58.48381827268331]
Federated learning (FL) can be leveraged to improve learning efficiency and achieve privacy protection for AIGC.
We present FL-based techniques for empowering AIGC, and aim to enable users to generate diverse, personalized, and high-quality content.
arXiv Detail & Related papers (2023-07-14T04:13:11Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
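A rough numpy sketch of the representation-sharing idea: clients exchange embeddings rather than weights or raw activations, and a contrastive (NT-Xent-style) loss pulls matching representations together while pushing others apart. The temperature, dimensions, and pairing scheme are assumptions for illustration, not the paper's exact objective.

```python
import numpy as np

def ntxent_loss(a, b, temperature=0.5):
    """Contrastive loss between two batches of embeddings: row i of `a`
    should match row i of `b` and repel every other row of `b`."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                       # pairwise similarities
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                  # matched pairs sit on the diagonal

rng = np.random.default_rng(2)
client_embeddings = rng.normal(size=(8, 16))                            # representations shared by a client
peer_embeddings = client_embeddings + 0.1 * rng.normal(size=(8, 16))    # a second, noisier view

print("contrastive loss:", ntxent_loss(client_embeddings, peer_embeddings))
```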
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - An Efficient Industrial Federated Learning Framework for AIoT: A Face
Recognition Application [9.977688793193012]
Recently, the artificial intelligence of things (AIoT) has been gaining increasing attention.
Recent regulatory restrictions on data privacy preclude uploading sensitive local data to data centers.
We propose an efficient industrial federated learning framework for AIoT in terms of a face recognition application.
arXiv Detail & Related papers (2022-06-21T14:03:20Z) - Federated Stochastic Gradient Descent Begets Self-Induced Momentum [151.4322255230084]
Federated learning (FL) is an emerging machine learning method that can be applied in mobile edge systems.
We show that running stochastic gradient descent (SGD) in such a setting can be viewed as adding a momentum-like term to the global aggregation process.
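A toy numpy sketch of the setting described: each client runs several local SGD steps and the server averages the resulting deltas. Each client delta equals the learning rate times the sum of gradients along its local path, so the server effectively aggregates accumulated gradients, which is the momentum-like intuition behind the claim; the quadratic objective, step counts, and learning rate here are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, n_clients, local_steps, lr = 2, 4, 5, 0.1

# Each client has a simple quadratic loss 0.5 * ||w - target||^2.
targets = [rng.normal(size=dim) for _ in range(n_clients)]
global_w = np.zeros(dim)

for rnd in range(3):
    deltas = []
    for t in targets:
        w, accumulated = global_w.copy(), np.zeros(dim)
        for _ in range(local_steps):          # several local SGD steps
            grad = w - t
            w -= lr * grad
            accumulated += grad               # gradients summed along the local path
        # The client delta equals -lr * (sum of local gradients); averaging these
        # sums at the server resembles a momentum-style accumulation of gradients.
        deltas.append(-lr * accumulated)
    global_w += np.mean(deltas, axis=0)
    print(f"round {rnd}: w = {global_w}")
```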
arXiv Detail & Related papers (2022-02-17T02:01:37Z) - SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Existing approaches, however, do not supply the needed procedures and pipelines for the actual deployment of machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while using basic cross-platform tensor frameworks and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z) - Learning, Computing, and Trustworthiness in Intelligent IoT
Environments: Performance-Energy Tradeoffs [62.91362897985057]
An Intelligent IoT Environment (iIoTe) comprises heterogeneous devices that can collaboratively execute semi-autonomous IoT applications.
This paper provides a state-of-the-art overview of these technologies and illustrates their functionality and performance, with special attention to the tradeoff among resources, latency, privacy and energy consumption.
arXiv Detail & Related papers (2021-10-04T19:41:42Z) - A Federated Learning Framework in Smart Grid: Securing Power Traces in
Collaborative Learning [7.246377480492976]
We propose a federated learning framework for the smart grid, which enables collaborative machine learning of power consumption patterns without leaking individual power traces.
Case studies show that, with proper encryption schemes such as Paillier, the machine learning models constructed from the proposed framework are lossless, privacy-preserving and effective.
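To make the Paillier idea concrete, here is a toy sketch using the third-party python-paillier (`phe`) package (an assumption for illustration; the paper does not prescribe this library): clients encrypt their model updates, an untrusted aggregator sums the ciphertexts thanks to additive homomorphism, and only the key holder can decrypt the total.

```python
from phe import paillier  # third-party package: pip install phe

# Key pair held by the party allowed to see aggregates (assumption for this sketch).
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Each client encrypts its local model update (a single scalar here for brevity).
client_updates = [0.12, -0.05, 0.33]
ciphertexts = [public_key.encrypt(u) for u in client_updates]

# The aggregator adds ciphertexts without ever seeing plaintext updates or power traces.
encrypted_sum = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_sum = encrypted_sum + c

average = private_key.decrypt(encrypted_sum) / len(client_updates)
print("decrypted average update:", round(average, 4))
```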
arXiv Detail & Related papers (2021-03-22T14:06:21Z)