Proceedings 19th International Workshop on the ACL2 Theorem Prover and Its Applications
- URL: http://arxiv.org/abs/2507.18567v1
- Date: Thu, 24 Jul 2025 16:42:15 GMT
- Title: Proceedings 19th International Workshop on the ACL2 Theorem Prover and Its Applications
- Authors: Ruben Gamboa, Panagiotis Manolios
- Abstract summary: The ACL2 Workshop series is the major technical forum for users of the ACL2 theorem proving system. ACL2 is an industrial-strength automated reasoning system, the latest in the Boyer-Moore family of theorem provers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ACL2 Workshop series is the major technical forum for users of the ACL2 theorem proving system to present research related to the ACL2 theorem prover and its applications. ACL2 is an industrial-strength automated reasoning system, the latest in the Boyer-Moore family of theorem provers. The 2005 ACM Software System Award was awarded to Boyer, Kaufmann, and Moore for their work on ACL2 and the other theorem provers in the Boyer-Moore family.
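To give a flavor of the kind of statement the Boyer-Moore family of provers establishes by automated induction, here is the classic list-append associativity theorem. ACL2 states it in Lisp syntax, roughly `(defthm associativity-of-app (equal (app (app a b) c) (app a (app b c))))`; the sketch below renders the same theorem in Lean 4 purely for illustration, since it is not the provers' native language.

```lean
-- Associativity of list append, proved by structural induction on the
-- first list -- the same induction scheme ACL2 discovers automatically.
theorem app_assoc {α : Type} (xs ys zs : List α) :
    (xs ++ ys) ++ zs = xs ++ (ys ++ zs) := by
  induction xs with
  | nil => rfl                 -- ([] ++ ys) ++ zs reduces to ys ++ zs
  | cons h t ih => simp [ih]   -- peel off the head, apply the hypothesis
```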
Related papers
- A Practical Guide to Streaming Continual Learning [53.995807801604506]
Continual Learning (CL) and Streaming Machine Learning study the ability of agents to learn from a stream of non-stationary data. Despite sharing some similarities, they address different and complementary challenges. We discuss Streaming Continual Learning (SCL), an emerging paradigm providing a unifying solution to real-world problems.
arXiv Detail & Related papers (2026-03-02T10:06:34Z) - ACL: Aligned Contrastive Learning Improves BERT and Multi-exit BERT Fine-tuning [3.060720241524644]
We introduce a novel Aligned Contrastive Learning (ACL) framework. ACL-Embed regards label embeddings as extra augmented samples with different labels and employs contrastive learning to align the label embeddings with their samples' representations. To facilitate the optimization of the ACL-Embed objective combined with the CE loss, we propose ACL-Grad, which discards the ACL-Embed term if the two objectives are in conflict.
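The abstract says ACL-Grad discards the auxiliary ACL-Embed term when the two objectives are in conflict. A minimal sketch of such a rule, assuming (as in gradient-surgery methods, not necessarily the paper's exact criterion) that "conflict" means a negative inner product between the two gradients:

```python
def dot(u, v):
    """Inner product of two gradient vectors (plain Python lists)."""
    return sum(a * b for a, b in zip(u, v))

def combine_gradients(grad_ce, grad_aux):
    """Sketch of an ACL-Grad-style update rule: keep the auxiliary
    contrastive gradient only when it does not conflict with the
    cross-entropy gradient. The negative-inner-product test is an
    assumption borrowed from gradient-surgery methods."""
    if dot(grad_ce, grad_aux) < 0:
        return grad_ce  # objectives conflict: discard the auxiliary term
    return [g1 + g2 for g1, g2 in zip(grad_ce, grad_aux)]
```

For example, `combine_gradients([1.0, 0.0], [-1.0, 0.0])` keeps only the cross-entropy gradient, while `combine_gradients([1.0, 0.0], [1.0, 1.0])` sums the two.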
arXiv Detail & Related papers (2026-02-03T14:08:07Z) - ICLEval: Evaluating In-Context Learning Ability of Large Language Models [68.7494310749199]
In-Context Learning (ICL) is a critical capability of Large Language Models (LLMs) as it empowers them to comprehend and reason across interconnected inputs. Existing evaluation frameworks primarily focus on language abilities and knowledge, often overlooking the assessment of ICL ability. We introduce the ICLEval benchmark to evaluate the ICL abilities of LLMs, which encompasses two key sub-abilities: exact copying and rule learning.
arXiv Detail & Related papers (2024-06-21T08:06:10Z) - RecDCL: Dual Contrastive Learning for Recommendation [65.6236784430981]
We propose a dual contrastive learning recommendation framework -- RecDCL.
In RecDCL, the FCL objective is designed to eliminate redundant solutions on user-item positive pairs.
The BCL objective is utilized to generate contrastive embeddings on output vectors for enhancing the robustness of the representations.
arXiv Detail & Related papers (2024-01-28T11:51:09Z) - Formal Verification of Zero-Knowledge Circuits [44.99833362998488]
Zero-knowledge circuits are sets of equality constraints over arithmetic expressions interpreted in a prime field.
We make contributions to the problem of ensuring that a circuit correctly encodes a computation.
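The abstract's definition of a zero-knowledge circuit as a set of equality constraints over arithmetic expressions in a prime field can be sketched directly. The constraint shape and names below are illustrative, not the paper's formalism, and a toy prime stands in for the large primes real circuits use:

```python
P = 97  # small prime modulus for illustration; real circuits use large primes

def satisfies(constraints, assignment):
    """Check whether an assignment satisfies every equality constraint.
    Each constraint is a (lhs, rhs) pair of expressions, given here as
    callables over the assignment, compared modulo the prime P."""
    return all(lhs(assignment) % P == rhs(assignment) % P
               for lhs, rhs in constraints)

# Hypothetical circuit enforcing out = x * y (mod P):
mul_circuit = [(lambda a: a["x"] * a["y"], lambda a: a["out"])]
```

Here `satisfies(mul_circuit, {"x": 3, "y": 5, "out": 15})` holds, and arithmetic wraps modulo P, so `{"x": 10, "y": 10, "out": 3}` also satisfies the circuit since 100 mod 97 = 3.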
arXiv Detail & Related papers (2023-11-15T10:47:28Z) - Advances in ACL2 Proof Debugging Tools [0.0]
A key to successful use of the ACL2 prover is the effective use of tools to debug those failures.
We focus on changes made after ACL2 Version 8.5: the improved break-rewrite utility and the new utility, with-brr-data.
arXiv Detail & Related papers (2023-11-15T10:46:55Z) - Beyond Task Performance: Evaluating and Reducing the Flaws of Large Multimodal Models with In-Context Learning [105.77733287326308]
We evaluate 10 recent open-source LMMs from 3B up to 80B parameter scale on 5 different axes: hallucinations, abstention, compositionality, explainability, and instruction following.
We explore the training-free in-context learning (ICL) as a solution, and study how it affects these limitations.
Based on our ICL study, we push ICL further and propose new multimodal ICL variants such as Multitask-ICL, Chain-of-Hindsight-ICL, and Self-Correcting-ICL.
arXiv Detail & Related papers (2023-10-01T12:02:59Z) - OpenICL: An Open-Source Framework for In-context Learning [48.75452105457122]
We introduce OpenICL, an open-source toolkit for In-context Learning (ICL) and large language model evaluation.
OpenICL is research-friendly, with a highly flexible architecture that lets users easily combine different components to suit their needs.
The effectiveness of OpenICL has been validated on a wide range of NLP tasks, including classification, QA, machine translation, and semantic parsing.
arXiv Detail & Related papers (2023-03-06T06:20:25Z) - Proceedings Seventeenth International Workshop on the ACL2 Theorem Prover and its Applications [0.0]
This volume contains a selection of papers presented at the 17th International Workshop on the ACL2 Theorem Prover and its Applications (ACL2 2022).
The workshops are the premier technical forum for presenting research and experiences related to ACL2.
arXiv Detail & Related papers (2022-05-23T07:53:25Z) - Proceedings of the Sixteenth International Workshop on the ACL2 Theorem
Prover and its Applications [0.0]
This volume contains a selection of papers presented at the 16th International Workshop on the ACL2 Theorem Prover and its Applications (ACL2-2020)
The workshops are the premier technical forum for presenting research and experiences related to ACL2.
arXiv Detail & Related papers (2020-09-26T05:19:33Z) - Learning with Multiple Complementary Labels [94.8064553345801]
A complementary label (CL) simply indicates an incorrect class of an example, yet learning with CLs still yields multi-class classifiers.
We propose a novel problem setting that allows multiple complementary labels (MCLs) for each example, along with two ways of learning with MCLs.
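The setting itself is easy to illustrate: each complementary label rules out one class, so multiple complementary labels shrink the candidate set. The sketch below shows only this candidate-set view of the problem, not the paper's learning methods:

```python
def candidate_classes(num_classes, complementary_labels):
    """Given multiple complementary labels (classes known to be WRONG
    for an example), return the remaining candidate classes. A toy
    illustration of the MCL setting, not a learning algorithm."""
    return sorted(set(range(num_classes)) - set(complementary_labels))
```

With 5 classes and complementary labels {0, 3}, the true label must lie in `[1, 2, 4]`; the more complementary labels an example carries, the closer it gets to ordinary supervised labeling.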
arXiv Detail & Related papers (2019-12-30T13:50:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.