Retrospectives on the Embodied AI Workshop
- URL: http://arxiv.org/abs/2210.06849v1
- Date: Thu, 13 Oct 2022 09:00:52 GMT
- Title: Retrospectives on the Embodied AI Workshop
- Authors: Matt Deitke, Dhruv Batra, Yonatan Bisk, Tommaso Campari, Angel X.
Chang, Devendra Singh Chaplot, Changan Chen, Claudia Pérez D'Arpino, Kiana
Ehsani, Ali Farhadi, Li Fei-Fei, Anthony Francis, Chuang Gan, Kristen
Grauman, David Hall, Winson Han, Unnat Jain, Aniruddha Kembhavi, Jacob
Krantz, Stefan Lee, Chengshu Li, Sagnik Majumder, Oleksandr Maksymets,
Roberto Martín-Martín, Roozbeh Mottaghi, Sonia Raychaudhuri, Mike
Roberts, Silvio Savarese, Manolis Savva, Mohit Shridhar, Niko Sünderhauf,
Andrew Szot, Ben Talbot, Joshua B. Tenenbaum, Jesse Thomason, Alexander
Toshev, Joanne Truong, Luca Weihs, Jiajun Wu
- Abstract summary: We focus on 13 challenges presented at the Embodied AI Workshop at CVPR.
These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language.
We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a retrospective on the state of Embodied AI research. Our analysis
focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These
challenges are grouped into three themes: (1) visual navigation, (2)
rearrangement, and (3) embodied vision-and-language. We discuss the dominant
datasets within each theme, evaluation metrics for the challenges, and the
performance of state-of-the-art models. We highlight commonalities between top
approaches to the challenges and identify potential future directions for
Embodied AI research.
Related papers
- Overview of AI-Debater 2023: The Challenges of Argument Generation Tasks
We present the results of the AI-Debater 2023 Challenge held by the Chinese Conference on Affective Computing (CCAC 2023).
In total, 32 competing teams registered for the challenge, from which we received 11 successful submissions.
arXiv Detail & Related papers (2024-07-20T10:13:54Z)
- Facilitating Opinion Diversity through Hybrid NLP Approaches
This thesis proposal identifies the challenges involved in facilitating large-scale online discussions with Natural Language Processing (NLP).
We propose a three-layered hierarchy for representing perspectives that can be obtained by a mixture of human intelligence and large language models.
We illustrate how these representations can draw insights into the diversity of perspectives and allow us to investigate interactions in online discussions.
arXiv Detail & Related papers (2024-05-15T15:30:17Z)
- The Third Monocular Depth Estimation Challenge
This paper discusses the results of the third edition of the Monocular Depth Estimation Challenge (MDEC).
The challenge focuses on zero-shot generalization to the challenging SYNS-Patches dataset, featuring complex scenes in natural and indoor settings.
The challenge winners drastically improved 3D F-Score performance, from 17.51% to 23.72%.
arXiv Detail & Related papers (2024-04-25T17:59:59Z)
- Foundational Challenges in Assuring Alignment and Safety of Large Language Models
This work identifies 18 foundational challenges in assuring the alignment and safety of large language models (LLMs).
Based on the identified challenges, we pose 200+ concrete research questions.
arXiv Detail & Related papers (2024-04-15T16:58:28Z)
- A Systematic Literature Review on Explainability for Machine/Deep Learning-based Software Engineering Research
This paper presents a systematic literature review of approaches that aim to improve the explainability of AI models within the context of Software Engineering.
We aim to (1) summarize the SE tasks where XAI techniques have shown success to date; (2) classify and analyze different XAI techniques; and (3) investigate existing evaluation approaches.
arXiv Detail & Related papers (2024-01-26T03:20:40Z)
- Next Steps for Human-Centered Generative AI: A Technical Perspective
We propose next steps for Human-centered Generative AI (HGAI).
By identifying these next steps, we intend to draw interdisciplinary research teams to pursue a coherent set of emergent ideas in HGAI.
arXiv Detail & Related papers (2023-06-27T19:54:30Z)
- Core Challenges in Embodied Vision-Language Planning
Embodied Vision-Language Planning tasks leverage computer vision and natural language for interaction in physical environments.
We propose a taxonomy to unify these tasks and provide an analysis and comparison of the current and new algorithmic approaches.
We advocate for task construction that enables model generalisability and furthers real-world deployment.
arXiv Detail & Related papers (2023-04-05T20:37:13Z)
- Core Challenges in Embodied Vision-Language Planning
We discuss Embodied Vision-Language Planning tasks, a family of prominent embodied navigation and manipulation problems.
We propose a taxonomy to unify these tasks and provide an analysis and comparison of the new and current algorithmic approaches.
We advocate for task construction that enables model generalizability and furthers real-world deployment.
arXiv Detail & Related papers (2021-06-26T05:18:58Z)
- Analysing Affective Behavior in the First ABAW 2020 Competition
The Affective Behavior Analysis in-the-wild (ABAW) 2020 Competition is the first Competition aiming at automatic analysis of the three main behavior tasks.
We describe this Competition, to be held in conjunction with the IEEE Conference on Face and Gesture Recognition, May 2020, in Buenos Aires, Argentina.
We outline the evaluation metrics, describe both the baseline system and the top-3 performing teams' methodologies per Challenge, and present their obtained results.
arXiv Detail & Related papers (2020-01-30T15:41:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.