Recognizing Families In the Wild: White Paper for the 4th Edition Data
Challenge
- URL: http://arxiv.org/abs/2002.06303v3
- Date: Mon, 8 Jun 2020 05:02:32 GMT
- Title: Recognizing Families In the Wild: White Paper for the 4th Edition Data
Challenge
- Authors: Joseph P. Robinson and Yu Yin and Zaid Khan and Ming Shao and Siyu Xia
and Michael Stopa and Samson Timoner and Matthew A. Turk and Rama Chellappa
and Yun Fu
- Abstract summary: This paper summarizes the supported tasks (i.e., kinship verification, tri-subject verification, and search & retrieval of missing children) in the Recognizing Families In the Wild (RFIW) evaluation.
The purpose of this paper is to describe the 2020 RFIW challenge, end-to-end, along with forecasts of promising future directions.
- Score: 91.55319616114943
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recognizing Families In the Wild (RFIW): an annual large-scale, multi-track
automatic kinship recognition evaluation that supports various visual kin-based
problems on scales much higher than ever before. Organized in conjunction with
the 15th IEEE International Conference on Automatic Face and Gesture
Recognition (FG) as a Challenge, RFIW provides a platform for publishing
original work and the gathering of experts for a discussion of the next steps.
This paper summarizes the supported tasks (i.e., kinship verification,
tri-subject verification, and search & retrieval of missing children) in the
evaluation protocols, which include the practical motivation, technical
background, data splits, metrics, and benchmark results. Furthermore, top
submissions (i.e., leader-board stats) are listed and reviewed as a high-level
analysis of the state of the problem. In the end, the purpose of this paper is
to describe the 2020 RFIW challenge, end-to-end, along with forecasts of
promising future directions.
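In practice, the two verification tracks reduce to scoring a pair of faces (or a child against a parent pair in the tri-subject case) and thresholding that score, with verification accuracy as the reported metric; the search & retrieval track instead ranks a gallery by similarity and reports mean average precision. Below is a minimal sketch of the pairwise protocol, assuming embeddings come from some pre-trained face encoder; the encoder choice, the 0.3 threshold, and the toy data are illustrative assumptions, not the official RFIW baseline.

```python
# Minimal kinship-verification sketch: score a face pair by cosine similarity
# of precomputed embeddings and threshold it. The threshold and random
# "embeddings" below are placeholders for illustration only.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def verify_pair(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.3) -> bool:
    """Predict 'kin' when the pair's similarity clears the (assumed) threshold."""
    return cosine_similarity(emb_a, emb_b) >= threshold

def verification_accuracy(pairs, labels, threshold: float = 0.3) -> float:
    """Fraction of pairs whose kin/non-kin prediction matches the ground truth."""
    preds = [verify_pair(a, b, threshold) for a, b in pairs]
    return float(np.mean([p == bool(y) for p, y in zip(preds, labels)]))

# Toy usage: random vectors stand in for real face embeddings.
rng = np.random.default_rng(0)
pairs = [(rng.normal(size=512), rng.normal(size=512)) for _ in range(4)]
labels = [1, 0, 1, 0]
print(f"toy verification accuracy: {verification_accuracy(pairs, labels):.2f}")
```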
Related papers
- RelevAI-Reviewer: A Benchmark on AI Reviewers for Survey Paper Relevance [0.8089605035945486]
We propose RelevAI-Reviewer, an automatic system that conceptualizes the task of survey paper review as a classification problem.
We introduce a novel dataset comprising 25,164 instances. Each instance contains one prompt and four candidate papers, each varying in relevance to the prompt.
We develop a machine learning (ML) model capable of determining the relevance of each paper and identifying the most pertinent one.
arXiv Detail & Related papers (2024-06-13T06:42:32Z)
- EFaR 2023: Efficient Face Recognition Competition [51.77649060180531]
The paper presents a summary of the Efficient Face Recognition Competition (EFaR) held at the 2023 International Joint Conference on Biometrics (IJCB 2023).
The competition received 17 submissions from 6 different teams.
The submitted solutions are ranked based on a weighted score of the achieved verification accuracies on a diverse set of benchmarks, as well as the deployability given by the number of floating-point operations and model size.
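The weighted ranking described above can be read as a single scalar that rewards accuracy and penalizes compute and memory cost. The sketch below illustrates that idea; the weights, normalization constants, and example numbers are assumptions for demonstration, not the competition's actual formula.

```python
# Hedged sketch of an EFaR-style ranking score: higher accuracy raises the
# score, higher FLOPs and larger model size lower it. Weights and caps here
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Submission:
    name: str
    mean_accuracy: float  # mean verification accuracy across benchmarks, in [0, 1]
    gflops: float         # floating-point operations per inference (GFLOPs)
    size_mb: float        # model size in megabytes

def ranking_score(s: Submission, w_acc: float = 0.7, w_flops: float = 0.15,
                  w_size: float = 0.15, max_gflops: float = 10.0,
                  max_size_mb: float = 100.0) -> float:
    """Weighted combination of accuracy and two deployability terms (higher is better)."""
    flops_term = max(0.0, 1.0 - s.gflops / max_gflops)
    size_term = max(0.0, 1.0 - s.size_mb / max_size_mb)
    return w_acc * s.mean_accuracy + w_flops * flops_term + w_size * size_term

# Toy ranking of two hypothetical submissions.
subs = [Submission("tiny-net", 0.91, 0.9, 4.5), Submission("big-net", 0.95, 8.0, 90.0)]
for s in sorted(subs, key=ranking_score, reverse=True):
    print(f"{s.name}: {ranking_score(s):.3f}")
```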
arXiv Detail & Related papers (2023-08-08T09:58:22Z)
- Recognizing Families In the Wild (RFIW): The 5th Edition [115.73174360706136]
This is our fifth edition of RFIW, for which we continue the effort to attract scholars, bring together professionals, publish new work, and discuss prospects.
In this paper, we summarize submissions for the three tasks of this year's RFIW: specifically, we review the results for kinship verification, tri-subject verification, and family member search and retrieval.
We take a look at the RFIW problem, as well as share current efforts and make recommendations for promising future directions.
arXiv Detail & Related papers (2021-10-31T21:37:40Z)
- D2S: Document-to-Slide Generation Via Query-Based Text Summarization [27.576875048631265]
We contribute a new dataset, SciDuet, consisting of pairs of papers and their corresponding slide decks from recent years' NLP and ML conferences.
Secondly, we present D2S, a novel system that tackles the document-to-slides task with a two-step approach.
Our evaluation suggests that long-form QA outperforms state-of-the-art summarization baselines on both automated ROUGE metrics and qualitative human evaluation.
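The automated ROUGE metrics mentioned above measure n-gram overlap between generated and reference text. A minimal ROUGE-1 F1 computation is sketched below for illustration; real evaluations typically use an established package and also report ROUGE-2 and ROUGE-L.

```python
# Illustrative ROUGE-1 F1: unigram overlap between a candidate and a reference.
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("query based text summarization for slides",
                "text summarization for slide generation"))
```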
arXiv Detail & Related papers (2021-05-08T10:29:41Z)
- GENIE: A Leaderboard for Human-in-the-Loop Evaluation of Text Generation [83.10599735938618]
Leaderboards have eased model development for many NLP datasets by standardizing their evaluation and delegating it to an independent external repository.
This work introduces GENIE, a human evaluation leaderboard, which brings the ease of leaderboards to text generation tasks.
arXiv Detail & Related papers (2021-01-17T00:40:47Z)
- Survey on the Analysis and Modeling of Visual Kinship: A Decade in the Making [66.72253432908693]
Kinship recognition is a challenging problem with many practical applications.
We review the public resources and data challenges that enabled and inspired many to hone in on these problems.
For the tenth anniversary, demo code is provided for the various kin-based tasks.
arXiv Detail & Related papers (2020-06-29T13:25:45Z)
- Overview of the TREC 2019 Fair Ranking Track [65.15263872493799]
The goal of the TREC Fair Ranking track was to develop a benchmark for evaluating retrieval systems in terms of fairness to different content providers.
This paper presents an overview of the track, including the task definition, descriptions of the data and the annotation process.
arXiv Detail & Related papers (2020-03-25T21:34:58Z)