Building a Culture of Reproducibility in Academic Research
- URL: http://arxiv.org/abs/2212.13534v1
- Date: Tue, 27 Dec 2022 16:03:50 GMT
- Title: Building a Culture of Reproducibility in Academic Research
- Authors: Jimmy Lin
- Abstract summary: Reproducibility is an ideal that no researcher would dispute "in the abstract", but when aspirations meet the cold hard reality of the academic grind, reproducibility often "loses out".
In this essay, I share some personal experiences grappling with how to operationalize reproducibility while balancing its demands against other priorities.
- Score: 55.22219308265945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reproducibility is an ideal that no researcher would dispute "in the
abstract", but when aspirations meet the cold hard reality of the academic
grind, reproducibility often "loses out". In this essay, I share some personal
experiences grappling with how to operationalize reproducibility while
balancing its demands against other priorities. My research group has had some
success building a "culture of reproducibility" over the past few years, which
I attempt to distill into lessons learned and actionable advice, organized
around answering three questions: why, what, and how. I believe that
reproducibility efforts should yield easy-to-use, well-packaged, and
self-contained software artifacts that allow others to reproduce and generalize
research findings. At the core, my approach centers on self interest: I argue
that the primary beneficiaries of reproducibility efforts are, in fact, those
making the investments. I believe that (unashamedly) appealing to self
interest, augmented with expectations of reciprocity, increases the chances of
success. Building from repeatability, social processes and standardized tools
comprise the two important additional ingredients that help achieve
aspirational ideals. The dogfood principle nicely ties these ideas together.
Related papers
- O1 Replication Journey: A Strategic Progress Report -- Part 1 [52.062216849476776]
This paper introduces a pioneering approach to artificial intelligence research, embodied in our O1 Replication Journey.
Our methodology addresses critical challenges in modern AI research, including the insularity of prolonged team-based projects.
We propose the journey learning paradigm, which encourages models to learn not just shortcuts, but the complete exploration process.
arXiv Detail & Related papers (2024-10-08T15:13:01Z)
- Three Dogmas of Reinforcement Learning [13.28320102989073]
Modern reinforcement learning has been conditioned by at least three dogmas.
The first is the environment spotlight, which refers to our tendency to focus on modeling environments rather than agents.
The second is our treatment of learning as finding the solution to a task, rather than adaptation.
The third is the reward hypothesis, which states that all goals and purposes can be well thought of as the maximization of a reward signal.
arXiv Detail & Related papers (2024-07-15T10:03:24Z)
- Time to Stop and Think: What kind of research do we want to do? [1.74048653626208]
In this paper, we focus on the field of metaheuristic optimization, since it is our main field of work.
Our main goal is to sow the seed of sincere critical assessment of our work, sparking a reflection process both at the individual and the community level.
All the statements included in this document are personal views and opinions, which can be shared by others or not.
arXiv Detail & Related papers (2024-02-13T08:53:57Z)
- Reproducibility of Machine Learning: Terminology, Recommendations and Open Issues [5.30596984761294]
A reproducibility crisis has recently been acknowledged by scientists, and it seems to affect Artificial Intelligence and Machine Learning even more.
We critically review the current literature on the topic and highlight the open issues.
We identify key elements often overlooked in modern Machine Learning and provide novel recommendations for them.
arXiv Detail & Related papers (2023-02-24T15:33:20Z)
- A Song of Ice and Fire: Analyzing Textual Autotelic Agents in ScienceWorld [21.29303927728839]
Building open-ended agents that can autonomously discover a diversity of behaviours is one of the long-standing goals of artificial intelligence.
Recent work identified language as a key dimension of autotelic learning, in particular because it enables abstract goal sampling and guidance from social peers for hindsight relabelling.
We show the importance of selectivity in the social peer's feedback, and that experience replay needs to over-sample examples of rare goals.
arXiv Detail & Related papers (2023-02-10T13:49:50Z)
- Flexible social inference facilitates targeted social learning when rewards are not observable [58.762004496858836]
Groups coordinate more effectively when individuals are able to learn from others' successes, but this is difficult when the rewards of others' behavior are not directly observable.
We suggest that social inference capacities may help bridge this gap, allowing individuals to update their beliefs about others' underlying knowledge and success from observable trajectories of behavior.
arXiv Detail & Related papers (2022-12-01T21:04:03Z)
- Reproducibility in machine learning for medical imaging [3.1390096961027076]
This chapter is intended as an introduction to reproducibility for researchers in the field of machine learning for medical imaging.
For each of the notions of reproducibility it covers, we aim to define it, describe the requirements to achieve it, and discuss its utility.
The chapter ends with a discussion of the benefits of reproducibility and a plea for a non-dogmatic approach to this concept and its implementation in research practice.
arXiv Detail & Related papers (2022-09-12T09:00:04Z)
- From Psychological Curiosity to Artificial Curiosity: Curiosity-Driven Learning in Artificial Intelligence Tasks [56.20123080771364]
Psychological curiosity plays a significant role in human intelligence to enhance learning through exploration and information acquisition.
In the Artificial Intelligence (AI) community, artificial curiosity provides a natural intrinsic motivation for efficient learning.
Curiosity-driven learning (CDL) has become increasingly popular, where agents are self-motivated to learn novel knowledge.
arXiv Detail & Related papers (2022-01-20T17:07:03Z)
- Show me the Way: Intrinsic Motivation from Demonstrations [44.87651595571687]
We show that complex exploration behaviors, reflecting different motivations, can be learnt and efficiently used by RL agents to solve tasks for which exhaustive exploration is prohibitive.
We propose to learn an exploration bonus from demonstrations that could transfer these motivations to an artificial agent, with few assumptions about their rationale.
arXiv Detail & Related papers (2020-06-23T11:52:53Z)
- Intrinsic Motivation for Encouraging Synergistic Behavior [55.10275467562764]
We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks.
Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own.
arXiv Detail & Related papers (2020-02-12T19:34:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.