Adaptive Teaching of Temporal Logic Formulas to Learners with
Preferences
- URL: http://arxiv.org/abs/2001.09956v1
- Date: Mon, 27 Jan 2020 18:22:53 GMT
- Title: Adaptive Teaching of Temporal Logic Formulas to Learners with
Preferences
- Authors: Zhe Xu, Yuxin Chen and Ufuk Topcu
- Abstract summary: We investigate machine teaching for temporal logic formulas.
An exhaustive search even for a myopic solution takes exponential time.
We propose an efficient approach for teaching parametric linear temporal logic formulas.
- Score: 44.63937003271641
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine teaching is an algorithmic framework for teaching a target hypothesis
via a sequence of examples or demonstrations. We investigate machine teaching
for temporal logic formulas -- a novel and expressive hypothesis class amenable
to time-related task specifications. In the context of teaching temporal logic
formulas, an exhaustive search even for a myopic solution takes exponential
time (with respect to the time span of the task). We propose an efficient
approach for teaching parametric linear temporal logic formulas. Concretely, we
derive a necessary condition for the minimal time length of a demonstration to
eliminate a set of hypotheses. Utilizing this condition, we propose a myopic
teaching algorithm by solving a sequence of integer programming problems. We
further show that, under two notions of teaching complexity, the proposed
algorithm has near-optimal performance. The results strictly generalize the
previous results on teaching preference-based version space learners. We
evaluate our algorithm extensively under a variety of learner types (i.e.,
learners with different preference models) and interactive protocols (e.g.,
batched and adaptive). The results show that the proposed algorithms can
efficiently teach a given target temporal logic formula under various settings,
and that there are significant gains in teaching efficacy when the teacher
adapts to the learner's current hypotheses or uses oracles.
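The myopic teaching scheme summarized above can be illustrated abstractly. The following toy sketch (hypothetical names; a plain greedy search over a finite version space stands in for the paper's integer-programming formulation) shows the core idea: at each step, pick the demonstration that eliminates the most hypotheses still inconsistent with the target.

```python
def myopic_teach(hypotheses, target, demonstrations, consistent):
    """Greedily select demonstrations until only the target hypothesis
    remains consistent. `consistent(h, d)` is a user-supplied predicate;
    all names here are illustrative, not the paper's API."""
    # Version space: hypotheses other than the target still to be eliminated.
    version_space = {h for h in hypotheses if h != target}
    shown = []
    while version_space:
        # Myopic choice: maximize the number of hypotheses eliminated now.
        best = max(
            demonstrations,
            key=lambda d: sum(1 for h in version_space if not consistent(h, d)),
        )
        eliminated = {h for h in version_space if not consistent(h, best)}
        if not eliminated:
            break  # no remaining demonstration distinguishes the hypotheses
        version_space -= eliminated
        shown.append(best)
    return shown
```

As a usage example, take threshold hypotheses over integers (h labels x positive iff x >= h) with target threshold 2 and labeled demonstrations: `myopic_teach([1, 2, 3], 2, [(1, False), (2, True)], lambda h, d: (d[0] >= h) == d[1])` returns both demonstrations, since each eliminates one competing hypothesis.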
Related papers
- Reciprocal Learning [0.0]
We show that a wide array of machine learning algorithms are specific instances of one single paradigm: reciprocal learning.
We introduce reciprocal learning as a generalization of these algorithms using the language of decision theory.
We find that reciprocal learning algorithms converge at linear rates to an approximately optimal model under relatively mild assumptions on the loss function.
arXiv Detail & Related papers (2024-08-12T16:14:52Z)
- LLMs as Probabilistic Minimally Adequate Teachers for DFA Learning [11.037017229299607]
The emergence of intelligence in large language models (LLMs) has inspired investigations into their integration into automata learning.
This paper introduces the probabilistic Minimally Adequate Teacher (pMAT) formulation.
We develop techniques to improve answer accuracy and ensure the correctness of the learned automata.
arXiv Detail & Related papers (2024-08-06T07:12:09Z)
- Bisimulation Learning [55.859538562698496]
We compute finite bisimulations of state transition systems with large, possibly infinite state spaces.
Our technique yields faster verification results than alternative state-of-the-art tools in practice.
arXiv Detail & Related papers (2024-05-24T17:11:27Z)
- Toward In-Context Teaching: Adapting Examples to Students' Misconceptions [54.82965010592045]
We introduce a suite of models and evaluation methods we call AdapT.
AToM is a new probabilistic model for adaptive teaching that jointly infers students' past beliefs and optimizes for the correctness of their future beliefs.
Our results highlight both the difficulty of the adaptive teaching task and the potential of learned adaptive models for solving it.
arXiv Detail & Related papers (2024-05-07T17:05:27Z)
- TEMPERA: Test-Time Prompting via Reinforcement Learning [57.48657629588436]
We propose Test-time Prompt Editing using Reinforcement learning (TEMPERA)
In contrast to prior prompt generation methods, TEMPERA can efficiently leverage prior knowledge.
Our method achieves a 5.33x average improvement in sample efficiency compared to traditional fine-tuning methods.
arXiv Detail & Related papers (2022-11-21T22:38:20Z)
- Efficient Sub-structured Knowledge Distillation [52.5931565465661]
We propose an approach that is much simpler in its formulation and far more efficient for training than existing approaches.
We transfer the knowledge from a teacher model to its student model by locally matching their predictions on all sub-structures, instead of the whole output space.
arXiv Detail & Related papers (2022-03-09T15:56:49Z)
- Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have desired properties, admit a natural error measure as well as algorithms with strong performance guarantees.
arXiv Detail & Related papers (2022-02-21T13:18:11Z)
- Teaching an Active Learner with Contrastive Examples [35.926575235046634]
We study the problem of active learning with the added twist that the learner is assisted by a helpful teacher.
We investigate an efficient teaching algorithm that adaptively picks contrastive examples.
We derive strong performance guarantees for our algorithm based on two problem-dependent parameters.
arXiv Detail & Related papers (2021-10-28T05:00:55Z)
- Learning to Actively Learn: A Robust Approach [22.75298609290053]
This work proposes a procedure for designing algorithms for adaptive data collection tasks like active learning and pure-exploration multi-armed bandits.
Our adaptive algorithm is learned via adversarial training over equivalence classes of problems derived from information theoretic lower bounds.
We perform synthetic experiments to justify the stability and effectiveness of the training procedure, and then evaluate the method on tasks derived from real data.
arXiv Detail & Related papers (2020-10-29T06:48:22Z)
- Learning Interpretable Models in the Property Specification Language [6.875312133832079]
We develop a learning algorithm for formulas in the IEEE standard temporal logic PSL.
Our work is motivated by the fact that many natural properties, such as an event happening at every n-th point in time, cannot be expressed in LTL.
arXiv Detail & Related papers (2020-02-10T11:42:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.