Blockchain-Driven Research in Personality-Based Distributed Pair Programming
- URL: http://arxiv.org/abs/2412.18066v1
- Date: Tue, 24 Dec 2024 00:39:30 GMT
- Title: Blockchain-Driven Research in Personality-Based Distributed Pair Programming
- Authors: Marcel Valovy, Alena Buchalcevova
- Abstract summary: This study aims to integrate blockchain technology into personality-based pair programming research to enhance its generalizability and adaptability.
In the developing Role-Optimization Motivation Alignment (ROMA) framework, human/AI programming roles align with individual Big Five personality traits.
The results suggest that blockchain can enhance research generalizability, reproducibility, and transparency, while ROMA can increase individual motivation and team performance.
- Abstract: This study aims to integrate blockchain technology into personality-based pair programming research to enhance its generalizability and adaptability by offering built-in continuous, reproducible, and transparent research. In the developing Role-Optimization Motivation Alignment (ROMA) framework, human/AI programming roles align with individual Big Five personality traits, optimizing individual motivation and team productivity in Very Small Entities and undergraduate courses. Twelve quasi-experimental sessions were conducted to verify the personality-based pair programming in distributed settings. A mixed-methods approach was employed, combining intrinsic motivation inventories and qualitative insights. Data were stored transparently on the Solana blockchain, and a web-based application was developed in Rust and TypeScript languages to facilitate partner matching based on ROMA suggestions, expertise, and availability. The results suggest that blockchain can enhance research generalizability, reproducibility, and transparency, while ROMA can increase individual motivation and team performance. Future work can focus on integrating smart contracts for transparent and versioned data analysis.
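The abstract describes a web application, written in Rust and TypeScript, that matches pair-programming partners based on ROMA suggestions, expertise, and availability. A minimal TypeScript sketch of such a matching step is shown below; the interface fields, weights, and function names are illustrative assumptions, not the paper's actual implementation.

```typescript
// Hypothetical ROMA-style partner matching (field names and weights are
// assumptions for illustration; the paper does not publish this code).

interface Candidate {
  name: string;
  romaFit: number;   // 0..1: how well the candidate's Big Five profile fits the open role
  expertise: number; // 0..1: self-reported expertise in the task's domain
  available: boolean; // availability for the requested session slot
}

// Rank candidates by a weighted sum of ROMA fit and expertise,
// filtering out anyone who is unavailable.
function rankPartners(candidates: Candidate[]): Candidate[] {
  const score = (c: Candidate) => 0.6 * c.romaFit + 0.4 * c.expertise;
  return candidates
    .filter((c) => c.available)
    .sort((a, b) => score(b) - score(a));
}

const pool: Candidate[] = [
  { name: "alice", romaFit: 0.9, expertise: 0.5, available: true },
  { name: "bob", romaFit: 0.4, expertise: 0.9, available: true },
  { name: "carol", romaFit: 0.8, expertise: 0.8, available: false },
];

const ranked = rankPartners(pool); // carol is filtered out as unavailable
```

In the study's setting, the resulting match (and the session's motivation-inventory data) would then be recorded transparently on the Solana blockchain.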
Related papers
- WPFed: Web-based Personalized Federation for Decentralized Systems [11.458835427697442]
We introduce WPFed, a fully decentralized, web-based learning framework designed to enable globally optimal neighbor selection.
To enhance security and deter malicious behavior, WPFed integrates verification mechanisms for both LSH codes and performance rankings.
Our findings highlight WPFed's potential to facilitate effective and secure decentralized collaborative learning across diverse and interconnected web environments.
arXiv Detail & Related papers (2024-10-15T08:17:42Z)
- Seeing Eye to AI: Human Alignment via Gaze-Based Response Rewards for Large Language Models [46.09562860220433]
We introduce GazeReward, a novel framework that integrates implicit feedback -- specifically eye-tracking (ET) data -- into the Reward Model (RM).
Our approach significantly improves the accuracy of the RM on established human preference datasets.
arXiv Detail & Related papers (2024-10-02T13:24:56Z)
- Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
arXiv Detail & Related papers (2024-09-13T18:28:12Z)
- Generative AI like ChatGPT in Blockchain Federated Learning: use cases, opportunities and future [4.497001527881303]
This research explores potential integrations of generative AI in federated learning.
It examines generative adversarial networks (GANs) and variational autoencoders (VAEs).
Generating synthetic data helps federated learning address challenges related to limited data availability.
arXiv Detail & Related papers (2024-07-25T19:43:49Z)
- Data-Centric AI in the Age of Large Language Models [51.20451986068925]
This position paper proposes a data-centric viewpoint of AI research, focusing on large language models (LLMs).
We make the key observation that data is instrumental in the developmental (e.g., pretraining and fine-tuning) and inferential stages (e.g., in-context learning) of LLMs.
We identify four specific scenarios centered around data, covering data-centric benchmarks and data curation, data attribution, knowledge transfer, and inference contextualization.
arXiv Detail & Related papers (2024-06-20T16:34:07Z)
- A Large-Scale Evaluation of Speech Foundation Models [110.95827399522204]
We establish the Speech processing Universal PERformance Benchmark (SUPERB) to study the effectiveness of the foundation model paradigm for speech.
We propose a unified multi-tasking framework to address speech processing tasks in SUPERB using a frozen foundation model followed by task-specialized, lightweight prediction heads.
arXiv Detail & Related papers (2024-04-15T00:03:16Z)
- Holonic Learning: A Flexible Agent-based Distributed Machine Learning Framework [0.0]
Holonic Learning (HoL) is a collaborative and privacy-focused learning framework designed for training deep learning models.
By leveraging holonic concepts, HoL framework establishes a structured self-similar hierarchy in the learning process.
This paper implements HoloAvg, a special variant of HoL that employs weighted averaging for model aggregation across all holons.
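The HoloAvg variant aggregates models by weighted averaging across holons. A minimal TypeScript sketch of such a weighted-averaging step is given below; the function name and the choice of weights (e.g. proportional to local dataset size) are illustrative assumptions, not the paper's code.

```typescript
// Weighted averaging of parameter vectors, as used by federated-style
// aggregators such as HoloAvg (names and weights here are assumptions).
function weightedAverage(models: number[][], weights: number[]): number[] {
  const total = weights.reduce((s, w) => s + w, 0);
  const dim = models[0].length;
  const avg = new Array<number>(dim).fill(0);
  for (let i = 0; i < models.length; i++) {
    for (let j = 0; j < dim; j++) {
      avg[j] += (weights[i] / total) * models[i][j];
    }
  }
  return avg;
}

// Two holons: the first has twice the data weight of the second.
const aggregated = weightedAverage([[1, 0], [4, 3]], [2, 1]);
```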
arXiv Detail & Related papers (2023-12-29T12:03:42Z)
- Machine Mindset: An MBTI Exploration of Large Language Models [28.2342069623478]
We present a novel approach for integrating Myers-Briggs Type Indicator (MBTI) personality traits into large language models (LLMs).
Our method, "Machine Mindset," combines two-phase fine-tuning with Direct Preference Optimization (DPO) to embed MBTI traits into LLMs.
arXiv Detail & Related papers (2023-12-20T12:59:31Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark [81.42376626294812]
We present the Language-Assisted Multi-Modal (LAMM) instruction-tuning dataset, framework, and benchmark.
Our aim is to establish LAMM as a growing ecosystem for training and evaluating MLLMs.
We present a comprehensive dataset and benchmark, which cover a wide range of vision tasks for 2D and 3D vision.
arXiv Detail & Related papers (2023-06-11T14:01:17Z)
- FedNLP: A Research Platform for Federated Learning in Natural Language Processing [55.01246123092445]
We present FedNLP, a research platform for federated learning in NLP.
FedNLP supports various popular task formulations in NLP such as text classification, sequence tagging, question answering, seq2seq generation, and language modeling.
Preliminary experiments with FedNLP reveal that there exists a large performance gap between learning on decentralized and centralized datasets.
arXiv Detail & Related papers (2021-04-18T11:04:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.