Proceedings of the DATE Friday Workshop on System-level Design Methods
for Deep Learning on Heterogeneous Architectures (SLOHA 2021)
- URL: http://arxiv.org/abs/2102.00818v1
- Date: Wed, 27 Jan 2021 18:14:02 GMT
- Title: Proceedings of the DATE Friday Workshop on System-level Design Methods
for Deep Learning on Heterogeneous Architectures (SLOHA 2021)
- Authors: Frank Hannig, Paolo Meloni, Matteo Spallanzani, Matthias Ziegler
- Abstract summary: This volume contains the papers accepted at the first DATE Friday Workshop on System-level Design Methods for Deep Learning on Heterogeneous Architectures (SLOHA 2021). SLOHA 2021 was co-located with the Conference on Design, Automation and Test in Europe (DATE).
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This volume contains the papers accepted at the first DATE Friday Workshop on
System-level Design Methods for Deep Learning on Heterogeneous Architectures
(SLOHA 2021), held virtually on February 5, 2021. SLOHA 2021 was co-located
with the Conference on Design, Automation and Test in Europe (DATE).
Related papers
- Conference Proceedings of The European DAO Workshop 2024 [0.0]
This collection of full papers delves into areas such as decentralized decision-making, business models, artificial intelligence, economics, and legal challenges for decentralized organizations.
This diverse compilation offers a multi-disciplinary examination of the rapidly growing phenomenon of decentralized organizations.
arXiv Detail & Related papers (2024-06-12T11:42:08Z) - InternLM2 Technical Report [159.70692271378581]
This paper introduces InternLM2, an open-source Large Language Model (LLM) that outperforms its predecessors in comprehensive evaluations across 6 dimensions and 30 benchmarks.
The pre-training process of InternLM2 is meticulously detailed, highlighting the preparation of diverse data types.
InternLM2 efficiently captures long-term dependencies, initially trained on 4k tokens before advancing to 32k tokens in pre-training and fine-tuning stages.
arXiv Detail & Related papers (2024-03-26T00:53:24Z) - ICML 2023 Topological Deep Learning Challenge : Design and Results [83.5003281210199]
The competition asked participants to provide open-source implementations of topological neural networks from the literature.
The challenge attracted twenty-eight qualifying submissions in its two-month duration.
This paper describes the design of the challenge and summarizes its main findings.
arXiv Detail & Related papers (2023-09-26T18:49:30Z) - Low-complexity deep learning frameworks for acoustic scene
classification using teacher-student scheme and multiple spectrograms [59.86658316440461]
The proposed system comprises two main phases: (Phase I) training a teacher network; and (Phase II) training a student network using knowledge distilled from the teacher.
Our experiments on the DCASE 2023 Task 1 Development dataset fulfilled the low-complexity requirement and achieved a best classification accuracy of 57.4%.
arXiv Detail & Related papers (2023-05-16T14:21:45Z) - Proceedings of the NeurIPS 2021 Workshop on Machine Learning for the
Developing World: Global Challenges [0.8035384580801723]
These are the proceedings of the 5th workshop on Machine Learning for the Developing World (ML4D), held as part of the Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS) on December 14th, 2021.
arXiv Detail & Related papers (2023-01-10T14:53:28Z) - Improving Deep Learning for HAR with shallow LSTMs [70.94062293989832]
We propose to alter the DeepConvLSTM to employ a 1-layered LSTM instead of a 2-layered one.
Our results stand in contrast to the belief that at least a 2-layered LSTM is needed when dealing with sequential data.
arXiv Detail & Related papers (2021-08-02T08:14:59Z) - ESPnet-ST IWSLT 2021 Offline Speech Translation System [56.83606198051871]
This paper describes the ESPnet-ST group's IWSLT 2021 submission in the offline speech translation track.
This year we made various efforts on training data, architecture, and audio segmentation.
Our best E2E system combined all the techniques with model ensembling and achieved 31.4 BLEU.
arXiv Detail & Related papers (2021-07-01T17:49:43Z) - Two-Stream Consensus Network: Submission to HACS Challenge 2021
Weakly-Supervised Learning Track [78.64815984927425]
The goal of weakly-supervised temporal action localization is to temporally locate and classify actions of interest in untrimmed videos.
We adopt the two-stream consensus network (TSCN) as the main framework in this challenge.
Our solution ranked 2nd in this challenge, and we hope our method can serve as a baseline for future academic research.
arXiv Detail & Related papers (2021-06-21T03:36:36Z) - Proceedings of the NeurIPS 2020 Workshop on Machine Learning for the
Developing World: Improving Resilience [1.154829465058342]
These are the proceedings of the 4th workshop on Machine Learning for the Developing World (ML4D), held as part of the Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS) on Saturday, December 12th, 2020.
arXiv Detail & Related papers (2021-01-12T08:35:54Z) - Proceedings of NeurIPS 2020 Workshop on Artificial Intelligence for
Humanitarian Assistance and Disaster Response [1.9043833325245103]
These are the "proceedings" of the 2nd AI + HADR workshop which was held virtually on December 12, 2020 as part of the Neural Information Processing Systems conference.
They are non-archival and merely serve as a way to collate all the papers accepted to the workshop.
arXiv Detail & Related papers (2020-12-03T17:44:26Z) - RGCL at SemEval-2020 Task 6: Neural Approaches to Definition Extraction [12.815346389235748]
This paper presents the RGCL team submission to SemEval 2020 Task 6: DeftEval, subtasks 1 and 2.
The system classifies definitions at the sentence and token levels.
arXiv Detail & Related papers (2020-10-13T10:48:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.