Post-Workshop Report on Science meets Engineering in Deep Learning,
NeurIPS 2019, Vancouver
- URL: http://arxiv.org/abs/2007.13483v2
- Date: Wed, 29 Jul 2020 13:22:16 GMT
- Title: Post-Workshop Report on Science meets Engineering in Deep Learning,
NeurIPS 2019, Vancouver
- Authors: Levent Sagun, Caglar Gulcehre, Adriana Romero, Negar Rostamzadeh,
Stefano Sarao Mannelli
- Abstract summary: Science meets Engineering in Deep Learning took place in Vancouver as part of the Workshop section of NeurIPS 2019.
This report attempts to isolate emerging topics and recurring themes that have been presented throughout the event.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Science meets Engineering in Deep Learning took place in Vancouver as part of
the Workshop section of NeurIPS 2019. As organizers of the workshop, we created
the following report in an attempt to isolate emerging topics and recurring
themes that were presented throughout the event. Deep learning remains a
complex mix of art and engineering despite its tremendous success in recent
years. The workshop aimed to gather people from across the field to address
seemingly contrasting challenges in the problems they work on. As part of the
call for the workshop, particular attention was given to the interdependence of
architecture, data, and optimization, which gives rise to an enormous landscape
of design and performance intricacies that are not well understood. This year,
our goal was to emphasize the following directions in our community: (i)
identifying obstacles on the way to better models and algorithms; (ii)
identifying the general trends from which we would like to build scientific and
potentially theoretical understanding; and (iii) rigorously designing
scientific experiments and experimental protocols whose purpose is to resolve
and pinpoint the origin of mysteries while ensuring the reproducibility and
robustness of conclusions. At the event, these topics emerged and were broadly
discussed, matching our expectations and paving the way for new studies in
these directions. While we acknowledge that the text is naturally biased, as it
comes through our lens, we present here an attempt to fairly highlight the
outcomes of the workshop.
Related papers
- What comes after transformers? -- A selective survey connecting ideas in deep learning
Transformers have become the de facto standard model in artificial intelligence since 2017.
It is difficult for researchers to keep track of such developments at a broader level.
We provide a comprehensive overview of the many important, recent works in these areas to those who already have a basic understanding of deep learning.
arXiv Detail & Related papers (2024-08-01T08:50:25Z)
- A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond
This survey presents a systematic review of the advancements in code intelligence.
It covers over 50 representative models and their variants, more than 20 categories of tasks, and an extensive coverage of over 680 related works.
Building on our examination of the developmental trajectories, we further investigate the emerging synergies between code intelligence and broader machine intelligence.
arXiv Detail & Related papers (2024-03-21T08:54:56Z)
- On the Essence and Prospect: An Investigation of Alignment Approaches for Big Models
Big models have achieved revolutionary breakthroughs in the field of AI, but they also raise potential concerns.
To address these concerns, alignment technologies were introduced to make such models conform to human preferences and values.
Despite considerable advancements in the past year, various challenges lie in establishing the optimal alignment strategy.
arXiv Detail & Related papers (2024-03-07T04:19:13Z)
- RHOBIN Challenge: Reconstruction of Human Object Interaction
The first RHOBIN challenge addresses the reconstruction of human-object interactions and was held in conjunction with the RHOBIN workshop.
Our challenge consists of three tracks of 3D reconstruction from monocular RGB images with a focus on dealing with challenging interaction scenarios.
This paper describes the settings of our challenge and discusses the winning methods of each track in more detail.
arXiv Detail & Related papers (2024-01-07T23:37:07Z)
- Future-proofing geotechnics workflows: accelerating problem-solving with large language models
This paper delves into the innovative application of Large Language Models in geotechnical engineering, as explored in a hands-on workshop held in Tokyo, Japan.
The paper discusses the potential of LLMs to transform geotechnical engineering practices, highlighting their proficiency in handling a range of tasks from basic data analysis to complex problem-solving.
arXiv Detail & Related papers (2023-12-14T05:17:27Z)
- A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning
Forgetting refers to the loss or deterioration of previously acquired information or knowledge.
Forgetting is also a prevalent phenomenon observed in various other research domains within deep learning.
The survey argues that forgetting is a double-edged sword that can be beneficial and desirable in certain cases.
arXiv Detail & Related papers (2023-07-16T16:27:58Z)
- 3rd Continual Learning Workshop Challenge on Egocentric Category and Instance Level Object Understanding
This paper summarizes the ideas, design choices, rules, and results of the challenge held at the 3rd Continual Learning in Computer Vision (CLVision) Workshop at CVPR 2022.
The focus of this competition is the complex continual object detection task, which is still underexplored in the literature compared to classification tasks.
arXiv Detail & Related papers (2022-12-13T11:51:03Z)
- BigScience: A Case Study in the Social Construction of a Multilingual Large Language Model
The BigScience Workshop was a value-driven initiative that spanned one and a half years of interdisciplinary research.
This paper focuses on the collaborative research aspects of BigScience and takes a step back to look at the challenges of large-scale participatory research.
arXiv Detail & Related papers (2022-12-09T16:15:35Z)
- Coordinated Science Laboratory 70th Anniversary Symposium: The Future of Computing
In 2021, the Coordinated Science Laboratory (CSL) hosted the Future of Computing Symposium to celebrate its 70th anniversary.
We summarize the major technological points, insights, and directions that speakers brought forward during the symposium.
Participants discussed topics related to new computing paradigms, technologies, algorithms, behaviors, and research challenges to be expected in the future.
arXiv Detail & Related papers (2022-10-04T17:32:27Z)
- An information-theoretic perspective on intrinsic motivation in reinforcement learning: a survey
We propose to survey these research works through a new taxonomy based on information theory.
We computationally revisit the notions of surprise, novelty and skill learning.
Our analysis suggests that novelty and surprise can assist the building of a hierarchy of transferable skills.
arXiv Detail & Related papers (2022-09-19T09:47:43Z)
- Neural Architecture Search for Dense Prediction Tasks in Computer Vision
Deep learning has led to a rising demand for neural network architecture engineering.
Neural architecture search (NAS) aims at automatically designing neural network architectures in a data-driven manner rather than by hand.
NAS has become applicable to a much wider range of problems in computer vision.
arXiv Detail & Related papers (2022-02-15T08:06:50Z)