Advances in ACL2 Proof Debugging Tools
- URL: http://arxiv.org/abs/2311.08856v1
- Date: Wed, 15 Nov 2023 10:46:55 GMT
- Title: Advances in ACL2 Proof Debugging Tools
- Authors: Matt Kaufmann (UT Austin, retired), J Strother Moore (UT Austin,
retired)
- Abstract summary: A key to successful use of the ACL2 prover is the effective use of tools to debug failed proof attempts.
We focus on changes made after ACL2 Version 8.5: the improved break-rewrite utility and the new utility, with-brr-data.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The experience of an ACL2 user generally includes many failed proof attempts.
A key to successful use of the ACL2 prover is the effective use of tools to
debug those failures. We focus on changes made after ACL2 Version 8.5: the
improved break-rewrite utility and the new utility, with-brr-data.
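The break-rewrite utility mentioned in the abstract lets a user monitor individual rewrite rules during a proof attempt. A minimal interactive session might look like the following sketch (the rule name `my-lemma` and the theorem are hypothetical placeholders, not from the paper):

```lisp
; Enable break-rewrite and monitor a (hypothetical) rewrite rule.
:brr t
:monitor (:rewrite my-lemma) t

; Run a proof attempt; ACL2 opens a break whenever my-lemma is tried.
(thm (implies (natp x) (natp (f x))))

; Inside the break, standard brr commands apply, for example:
;   :target       ; show the term being rewritten
;   :unify-subst  ; show the current unifying substitution
;   :go           ; proceed, reporting whether the rule fired

; Turn break-rewrite off when done.
:brr nil
```

The new with-brr-data utility described in the paper can wrap a proof attempt such as the `thm` form above to save rewriting data for later querying; see the paper for its exact interface.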
Related papers
- ACL: Aligned Contrastive Learning Improves BERT and Multi-exit BERT Fine-tuning [3.060720241524644]
We introduce a novel Aligned Contrastive Learning (ACL) framework. ACL-Embed regards label embeddings as extra augmented samples with different labels and employs contrastive learning to align the label embeddings with their samples' representations. To facilitate the optimization of the ACL-Embed objective combined with the CE loss, we propose ACL-Grad, which discards the ACL-Embed term if the two objectives are in conflict.
arXiv Detail & Related papers (2026-02-03T14:08:07Z) - Unforgotten Safety: Preserving Safety Alignment of Large Language Models with Continual Learning [79.45860948246742]
We study the safety degradation that comes with adapting large language models to new tasks. We consider the fine-tuning-as-a-service setup, where the user uploads their data to a service provider to get a customized model that excels on the user's selected task. We adapt several CL approaches from the literature and systematically evaluate their ability to mitigate safety degradation.
arXiv Detail & Related papers (2025-12-10T23:16:47Z) - Proceedings 19th International Workshop on the ACL2 Theorem Prover and Its Applications [0.0]
The ACL2 Workshop series is the major technical forum for users of the ACL2 theorem proving system. ACL2 is an industrial-strength automated reasoning system, the latest in the Boyer-Moore family of theorem provers.
arXiv Detail & Related papers (2025-07-24T16:42:15Z) - Task-Core Memory Management and Consolidation for Long-term Continual Learning [62.880988004687815]
We focus on a long-term continual learning (CL) task, where a model learns sequentially from a stream of vast tasks over time. Unlike traditional CL settings, long-term CL involves handling a significantly larger number of tasks, which exacerbates the issue of catastrophic forgetting. We propose a novel framework inspired by human memory mechanisms for long-term continual learning (Long-CL).
arXiv Detail & Related papers (2025-05-15T04:22:35Z) - AA-CLIP: Enhancing Zero-shot Anomaly Detection via Anomaly-Aware CLIP [33.213400694016]
Anomaly detection (AD) identifies outliers for applications like defect and lesion detection.
We propose Anomaly-Aware CLIP (AA-CLIP), which enhances CLIP's anomaly discrimination ability in both text and visual spaces.
AA-CLIP is achieved through a straightforward yet effective two-stage approach.
arXiv Detail & Related papers (2025-03-09T15:22:52Z) - In-context Continual Learning Assisted by an External Continual Learner [19.382196203113836]
Existing continual learning (CL) methods rely on fine-tuning or adapting large language models (LLMs).
We introduce InCA, a novel approach that integrates an external continual learner (ECL) with ICL to enable scalable CL without CF.
arXiv Detail & Related papers (2024-12-20T04:44:41Z) - Toolken+: Improving LLM Tool Usage with Reranking and a Reject Option [5.61458021213001]
We introduce Toolken+ that mitigates the first problem by reranking top $k$ tools selected by ToolkenGPT.
We demonstrate the effectiveness of Toolken+ on multistep numerical reasoning and tool selection tasks.
arXiv Detail & Related papers (2024-10-15T19:09:03Z) - ET tu, CLIP? Addressing Common Object Errors for Unseen Environments [0.2714641498775158]
We introduce a simple method that employs pre-trained CLIP encoders to enhance model generalization in the ALFRED task.
In contrast to previous literature where CLIP replaces the visual encoder, we suggest using CLIP as an additional module through an auxiliary object detection objective.
arXiv Detail & Related papers (2024-06-25T18:35:13Z) - Implicit In-context Learning [37.0562059811099]
In-context Learning (ICL) empowers large language models to adapt to unseen tasks during inference by prefixing a few demonstration examples prior to test queries.
We introduce Implicit In-context Learning (I2CL), an innovative paradigm that addresses the challenges associated with traditional ICL by absorbing demonstration examples within the activation space.
I2CL achieves few-shot performance with zero-shot cost and exhibits robustness against the variation of demonstration examples.
arXiv Detail & Related papers (2024-05-23T14:57:52Z) - Many-Shot In-Context Learning [58.395589302800566]
Large language models (LLMs) excel at few-shot in-context learning (ICL).
We observe significant performance gains across a wide variety of generative and discriminative tasks.
Unlike few-shot learning, many-shot learning is effective at overriding pretraining biases.
arXiv Detail & Related papers (2024-04-17T02:49:26Z) - RecDCL: Dual Contrastive Learning for Recommendation [65.6236784430981]
We propose a dual contrastive learning recommendation framework -- RecDCL.
In RecDCL, the FCL objective is designed to eliminate redundant solutions on user-item positive pairs.
The BCL objective is utilized to generate contrastive embeddings on output vectors for enhancing the robustness of the representations.
arXiv Detail & Related papers (2024-01-28T11:51:09Z) - BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP [55.33331463515103]
BadCLIP is built on a novel and effective mechanism in backdoor attacks on CLIP.
It consists of a learnable trigger applied to images and a trigger-aware context generator, such that the trigger can change text features via trigger-aware prompts.
arXiv Detail & Related papers (2023-11-26T14:24:13Z) - Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization [59.77647907277523]
Adversarial contrastive learning (ACL) is a technique that enhances standard contrastive learning (SCL).
In this paper, we propose adversarial invariant regularization (AIR) to enforce independence from style factors.
arXiv Detail & Related papers (2023-04-30T03:12:21Z) - Contrastive Learning with Adversarial Examples [79.39156814887133]
Contrastive learning (CL) is a popular technique for self-supervised learning (SSL) of visual representations.
This paper introduces a new family of adversarial examples for contrastive learning and uses these examples to define a new adversarial training algorithm for SSL, denoted as CLAE.
arXiv Detail & Related papers (2020-10-22T20:45:10Z) - Learning with Multiple Complementary Labels [94.8064553345801]
A complementary label (CL) simply indicates an incorrect class of an example, but learning with CLs results in multi-class classifiers.
We propose a novel problem setting to allow MCLs for each example and two ways for learning with MCLs.
arXiv Detail & Related papers (2019-12-30T13:50:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.