Broader terms curriculum mapping: Using natural language processing and
visual-supported communication to create representative program planning
experiences
- URL: http://arxiv.org/abs/2102.04811v2
- Date: Wed, 10 Feb 2021 09:05:48 GMT
- Title: Broader terms curriculum mapping: Using natural language processing and
visual-supported communication to create representative program planning
experiences
- Authors: Rogério Duarte, Ângela Lacerda Nobre, Fernando Pimentel, Marc
Jacquinet
- Abstract summary: Communication difficulties between faculty and non-faculty groups leave unexplored an immense collaboration potential.
This paper presents a method to deliver program plan representations that are universal, self-explanatory, and empowering.
- Score: 62.997667081978825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accreditation bodies call for curriculum development processes open to all
stakeholders, reflecting viewpoints of students, industry, university faculty
and society. However, communication difficulties between faculty and
non-faculty groups leave unexplored an immense collaboration potential. Using
classification of learning objectives, natural language processing, and data
visualization, this paper presents a method to deliver program plan
representations that are universal, self-explanatory, and empowering. A simple
example shows how the method contributes to representative program planning
experiences and a case study is used to confirm the method's accuracy and
utility.
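
The abstract mentions classification of learning objectives as one ingredient of the method. As a purely illustrative sketch (not the paper's actual implementation), a first step in such a pipeline might tag each objective by its leading action verb against a shared vocabulary such as Bloom's taxonomy; the verb lists and example objectives below are hypothetical:

```python
# Illustrative sketch only: classify learning objectives by their leading
# action verb, a common first step when mapping curricula to a shared
# vocabulary such as Bloom's taxonomy. Verb lists and objectives are
# hypothetical examples, not taken from the paper.

BLOOM_LEVELS = {
    "remember":   {"define", "list", "recall", "identify"},
    "understand": {"explain", "summarize", "describe", "classify"},
    "apply":      {"use", "solve", "demonstrate", "implement"},
    "analyze":    {"compare", "differentiate", "examine"},
    "evaluate":   {"assess", "justify", "critique"},
    "create":     {"design", "develop", "formulate"},
}

def classify_objective(objective: str) -> str:
    """Return the Bloom level matching the objective's first word, if any."""
    first_word = objective.lower().split()[0]
    for level, verbs in BLOOM_LEVELS.items():
        if first_word in verbs:
            return level
    return "unclassified"

objectives = [
    "Explain the role of accreditation in curriculum development",
    "Design a program plan representation for non-faculty stakeholders",
    "List the main stakeholder groups in program planning",
]

for obj in objectives:
    print(f"{classify_objective(obj):>12}: {obj}")
```

The classified objectives could then feed the visualization step the abstract describes, giving non-faculty stakeholders a uniform vocabulary for program plans.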
Related papers
- De-fine: Decomposing and Refining Visual Programs with Auto-Feedback [75.62712247421146]
De-fine is a training-free framework that decomposes complex tasks into simpler subtasks and refines programs through auto-feedback.
Our experiments across various visual tasks show that De-fine creates more robust programs.
arXiv Detail & Related papers (2023-11-21T06:24:09Z)
- Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z)
- On the Role of Emergent Communication for Social Learning in Multi-Agent Reinforcement Learning [0.0]
Social learning uses cues from experts to align heterogeneous policies, reduce sample complexity, and solve partially observable tasks.
This paper proposes an unsupervised method based on the information bottleneck to capture both referential complexity and task-specific utility.
arXiv Detail & Related papers (2023-02-28T03:23:27Z)
- Pragmatics in Language Grounding: Phenomena, Tasks, and Modeling Approaches [28.47300996711215]
People rely heavily on context to enrich meaning beyond what is literally said.
To interact successfully with people, user-facing artificial intelligence systems will require similar skills in pragmatics.
arXiv Detail & Related papers (2022-11-15T18:21:46Z)
- Self-Supervised Representation Learning: Introduction, Advances and Challenges [125.38214493654534]
Self-supervised representation learning methods aim to provide powerful deep feature learning without the requirement of large annotated datasets.
This article introduces this vibrant area including key concepts, the four main families of approach and associated state of the art, and how self-supervised methods are applied to diverse modalities of data.
arXiv Detail & Related papers (2021-10-18T13:51:22Z)
- ProTo: Program-Guided Transformer for Program-Guided Tasks [59.34258016795216]
We formulate program-guided tasks which require learning to execute a given program on the observed task specification.
We propose the Program-guided Transformer (ProTo), which integrates both semantic and structural guidance of a program.
ProTo executes a program in a learned latent space and enjoys stronger representation ability than previous neural-symbolic approaches.
arXiv Detail & Related papers (2021-10-02T13:46:32Z)
- Learning to Improve Representations by Communicating About Perspectives [0.0]
We present a minimal architecture comprised of a population of autoencoders.
We show that our proposed architecture allows the emergence of aligned representations.
Results demonstrate how communication from subjective perspectives can lead to the acquisition of more abstract representations in multi-agent systems.
arXiv Detail & Related papers (2021-09-20T09:30:13Z)
- Visual Probing: Cognitive Framework for Explaining Self-Supervised Image Representations [12.485001250777248]
Recently introduced self-supervised methods for image representation learning provide on par or superior results to their fully supervised competitors.
Motivated by this observation, we introduce a novel visual probing framework for explaining the self-supervised models.
We show the effectiveness and applicability of those analogs in the context of explaining self-supervised representations.
arXiv Detail & Related papers (2021-06-21T12:40:31Z)
- Experience Grounds Language [185.73483760454454]
Language understanding research is held back by a failure to relate language to the physical world it describes and to the social interactions it facilitates.
Despite the incredible effectiveness of language processing models to tackle tasks after being trained on text alone, successful linguistic communication relies on a shared experience of the world.
arXiv Detail & Related papers (2020-04-21T16:56:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the information presented and is not responsible for any consequences of its use.