Enhancing Numeric-SAM for Learning with Few Observations
- URL: http://arxiv.org/abs/2312.10705v1
- Date: Sun, 17 Dec 2023 12:50:10 GMT
- Title: Enhancing Numeric-SAM for Learning with Few Observations
- Authors: Argaman Mordoch, Shahaf S. Shperberg, Roni Stern, Brendan Juba
- Abstract summary: We propose N-SAM*, an enhanced version of Numeric Safe Action Models Learning (N-SAM), that always returns an action model in which every observed action is applicable in at least some state.
N-SAM* does so without compromising the safety of the returned action model.
An empirical study on a set of benchmark domains shows that the action models returned by N-SAM* enable solving significantly more problems than the action models returned by N-SAM.
- Score: 13.41686187754024
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A significant challenge in applying planning technology to real-world
problems lies in obtaining a planning model that accurately represents the
problem's dynamics. Numeric Safe Action Models Learning (N-SAM) is a recently
proposed algorithm that addresses this challenge. It is an algorithm designed
to learn the preconditions and effects of actions from observations in domains
that may involve both discrete and continuous state variables. N-SAM has
several attractive properties. It runs in polynomial time and is guaranteed to
output an action model that is safe, in the sense that plans generated by it
are applicable and will achieve their intended goals. To preserve this safety
guarantee, N-SAM must observe a substantial number of examples for each action
before it is included in the learned action model. We address this limitation
of N-SAM and propose N-SAM*, an enhanced version of N-SAM that always returns
an action model where every observed action is applicable at least in some
state, even if it was only observed once. N-SAM* does so without compromising
the safety of the returned action model. We prove that N-SAM* is optimal in
terms of sample complexity compared to any other algorithm that guarantees
safety. An empirical study on a set of benchmark domains shows that the action
models returned by N-SAM* enable solving significantly more problems compared
to the action models returned by N-SAM.
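To make the safety idea concrete, here is a minimal, illustrative sketch (not the authors' implementation). It assumes an N-SAM-style representation in which the learned numeric precondition of an action is the convex hull of the pre-states where that action was observed: a state satisfies the learned precondition only if it lies inside that hull, so any plan built from the model only applies actions in regions that were actually witnessed. With a single observation the hull collapses to one point, so the action remains applicable in at least that state, which is the behaviour N-SAM* guarantees. The function name, variable names, and the two-variable example are hypothetical.

```python
# Illustrative sketch (not the authors' code): a "safe" numeric precondition for one
# action, modelled as membership in the convex hull of the observed pre-states.
import numpy as np
from scipy.optimize import linprog


def in_convex_hull(point: np.ndarray, observed: np.ndarray, tol: float = 1e-9) -> bool:
    """Return True iff `point` is a convex combination of the rows of `observed`."""
    n, _ = observed.shape
    # Feasibility LP: find lambda >= 0 with sum(lambda) = 1 and observed^T @ lambda = point.
    A_eq = np.vstack([observed.T, np.ones((1, n))])
    b_eq = np.append(point, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success and np.allclose(A_eq @ res.x, b_eq, atol=tol)


# Hypothetical numeric pre-states (e.g., fuel level, load) in which a "drive" action was seen.
observed_prestates = np.array([[10.0, 2.0]])  # a single observation
print(in_convex_hull(np.array([10.0, 2.0]), observed_prestates))  # True: applicable in that state
print(in_convex_hull(np.array([5.0, 2.0]), observed_prestates))   # False: outside the safe region
```

Under this reading, requiring the hull to be full-dimensional is what forces a substantial number of observations per action; restricting applicability to the (possibly lower-dimensional) region spanned by whatever observations exist is what lets every observed action stay applicable without giving up safety.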
Related papers
- SAM-SP: Self-Prompting Makes SAM Great Again [11.109389094334894]
Segment Anything Model (SAM) has demonstrated impressive capabilities in zero-shot segmentation tasks.
SAM exhibits noticeable performance degradation when applied to specific domains, such as medical images.
We introduce a novel self-prompting based fine-tuning approach, called SAM-SP, tailored for extending the vanilla SAM model.
arXiv Detail & Related papers (2024-08-22T13:03:05Z)
- ASAM: Boosting Segment Anything Model with Adversarial Tuning [9.566046692165884]
This paper introduces ASAM, a novel methodology that amplifies a foundation model's performance through adversarial tuning.
We harness the potential of natural adversarial examples, inspired by their successful implementation in natural language processing.
Our approach maintains the photorealism of adversarial examples and ensures alignment with original mask annotations.
arXiv Detail & Related papers (2024-05-01T00:13:05Z)
- Safe Learning of PDDL Domains with Conditional Effects -- Extended Version [27.05167679870857]
We show that the action models learned by Conditional-SAM can be used to solve most of the test-set problems perfectly in most of the experimented domains.
arXiv Detail & Related papers (2024-03-22T14:49:49Z)
- SU-SAM: A Simple Unified Framework for Adapting Segment Anything Model in Underperformed Scenes [34.796859088106636]
Segment anything model (SAM) has demonstrated excellent generalizability in common vision scenarios, yet falls short in its ability to understand specialized data.
Recent methods have combined parameter-efficient techniques with task-specific designs to fine-tune SAM on particular tasks.
We present a simple and unified framework, namely SU-SAM, that can easily and efficiently fine-tune the SAM model with parameter-efficient techniques.
arXiv Detail & Related papers (2024-01-31T12:53:11Z)
- BA-SAM: Scalable Bias-Mode Attention Mask for Segment Anything Model [65.92173280096588]
We address the challenge of image resolution variation for the Segment Anything Model (SAM).
SAM, known for its zero-shot generalizability, exhibits a performance degradation when faced with datasets with varying image sizes.
We present a bias-mode attention mask that allows each token to prioritize neighboring information.
arXiv Detail & Related papers (2024-01-04T15:34:44Z)
- TinySAM: Pushing the Envelope for Efficient Segment Anything Model [76.21007576954035]
We propose a framework to obtain a tiny segment anything model (TinySAM) while maintaining the strong zero-shot performance.
We first propose a full-stage knowledge distillation method with hard prompt sampling and hard mask weighting strategy to distill a lightweight student model.
We also adapt the post-training quantization to the promptable segmentation task and further reduce the computational cost.
arXiv Detail & Related papers (2023-12-21T12:26:11Z)
- Stable Segment Anything Model [79.9005670886038]
The Segment Anything Model (SAM) achieves remarkable promptable segmentation given high-quality prompts.
This paper presents the first comprehensive analysis on SAM's segmentation stability across a diverse spectrum of prompt qualities.
Our solution, termed Stable-SAM, offers several advantages: 1) improved segmentation stability across a wide range of prompt qualities, and 2) retention of SAM's powerful promptable segmentation efficiency and generality.
arXiv Detail & Related papers (2023-11-27T12:51:42Z)
- Understanding Self-attention Mechanism via Dynamical System Perspective [58.024376086269015]
Self-attention mechanism (SAM) is widely used in various fields of artificial intelligence.
We show that the intrinsic stiffness phenomenon (SP) found in high-precision solutions of ordinary differential equations (ODEs) also widely exists in high-performance neural networks (NNs).
We show that SAM is also a stiffness-aware step-size adaptor that can enhance the model's representational ability to measure intrinsic SP.
arXiv Detail & Related papers (2023-08-19T08:17:41Z)
- SAM operates far from home: eigenvalue regularization as a dynamical phenomenon [15.332235979022036]
The Sharpness Aware Minimization (SAM) algorithm has been shown to control large eigenvalues of the loss Hessian.
We show that SAM provides a strong regularization of the eigenvalues throughout the learning trajectory.
Our theory predicts the largest eigenvalue as a function of the learning rate and SAM radius parameters.
arXiv Detail & Related papers (2023-02-17T04:51:20Z)
- Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models [93.85178920914721]
Fine-tuning large pretrained language models on a limited training corpus usually suffers from poor generalization.
We propose a novel optimization procedure, namely FSAM, which introduces a Fisher mask to improve the efficiency and performance of SAM.
We show that FSAM consistently outperforms the vanilla SAM by 0.67 to 1.98 in average score among four different pretrained models.
arXiv Detail & Related papers (2022-10-11T14:53:58Z)
- SAMBA: Safe Model-Based & Active Reinforcement Learning [59.01424351231993]
SAMBA is a framework for safe reinforcement learning that combines aspects from probabilistic modelling, information theory, and statistics.
We evaluate our algorithm on a variety of safe dynamical system benchmarks involving both low and high-dimensional state representations.
We provide intuition as to the effectiveness of the framework by a detailed analysis of our active metrics and safety constraints.
arXiv Detail & Related papers (2020-06-12T10:40:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.