NICER: Aesthetic Image Enhancement with Humans in the Loop
- URL: http://arxiv.org/abs/2012.01778v1
- Date: Thu, 3 Dec 2020 09:14:10 GMT
- Title: NICER: Aesthetic Image Enhancement with Humans in the Loop
- Authors: Michael Fischer, Konstantin Kobs, Andreas Hotho
- Abstract summary: This work proposes a neural-network-based approach to no-reference image enhancement that supports a fully automatic, semi-automatic, or fully manual workflow.
We show that NICER can improve image aesthetics without user interaction and that allowing user interaction leads to diverse enhancement outcomes.
- Score: 0.7756211500979312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fully- or semi-automatic image enhancement software helps users to increase
the visual appeal of photos and does not require in-depth knowledge of manual
image editing. However, fully-automatic approaches usually enhance the image in
a black-box manner that does not give the user any control over the
optimization process, possibly leading to edited images that do not
subjectively appeal to the user. Semi-automatic methods mostly allow for
controlling which pre-defined editing step is taken, which restricts the users
in their creativity and ability to make detailed adjustments, such as
brightness or contrast. We argue that incorporating user preferences by guiding
an automated enhancement method simplifies image editing and increases the
enhancement's focus on the user. This work thus proposes the Neural Image
Correction & Enhancement Routine (NICER), a neural network based approach to
no-reference image enhancement in a fully automatic, semi-automatic, or fully manual
process that is interactive and user-centered. NICER iteratively adjusts image
editing parameters in order to maximize an aesthetic score based on image style
and content. Users can modify these parameters at any time and guide the
optimization process towards a desired direction. This interactive workflow is
a novelty in the field of human-computer interaction for image enhancement
tasks. In a user study, we show that NICER can improve image aesthetics without
user interaction and that allowing user interaction leads to diverse
enhancement outcomes that are strongly preferred over the unedited image. We
make our code publicly available to facilitate further research in this
direction.
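The loop described above (iteratively adjusting editing parameters to maximize an aesthetic score, with the user free to pin parameters at any time) can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not NICER's actual implementation: `AestheticScorer` is a placeholder for a trained no-reference aesthetic model, `apply_edits` stands in for NICER's differentiable filters, and `user_fixed` models the user pinning individual sliders; consult the paper's released code for the real components.

```python
import torch

class AestheticScorer(torch.nn.Module):
    """Placeholder no-reference aesthetic model; NICER uses a trained scorer instead."""
    def __init__(self):
        super().__init__()
        self.pool = torch.nn.AdaptiveAvgPool2d(1)
        self.head = torch.nn.Linear(3, 1)

    def forward(self, img):                       # img: (1, 3, H, W), values in [0, 1]
        return self.head(self.pool(img).flatten(1)).mean()

def apply_edits(img, params):
    """Differentiable stand-ins for brightness, contrast, and saturation filters."""
    brightness, contrast, saturation = params
    out = img + brightness                                     # shift brightness
    out = (out - out.mean()) * (1.0 + contrast) + out.mean()   # scale contrast
    gray = out.mean(dim=1, keepdim=True)
    out = gray + (out - gray) * (1.0 + saturation)             # scale saturation
    return out.clamp(0.0, 1.0)

def pin(params, user_fixed):
    """Overwrite optimized parameters with any values the user has pinned."""
    p = params.clone()
    for idx, value in (user_fixed or {}).items():
        p[idx] = value
    return p

def enhance(img, scorer, user_fixed=None, steps=50, lr=0.05):
    """Gradient ascent on the editing parameters to maximize the aesthetic score.
    No pinned parameters ~ fully automatic; some pinned ~ semi-automatic;
    all pinned ~ fully manual editing."""
    params = torch.zeros(3, requires_grad=True)
    optimizer = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        p = pin(params, user_fixed)               # respect the user's current sliders
        loss = -scorer(apply_edits(img, p))       # maximize score == minimize its negative
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        p = pin(params, user_fixed)
        return apply_edits(img, p), p

# Example: the user pins the brightness slider, the optimizer tunes the rest.
image = torch.rand(1, 3, 64, 64)
enhanced, final_params = enhance(image, AestheticScorer(), user_fixed={0: 0.1})
```

Re-applying the pinned values inside every optimization step mirrors the interaction model from the abstract: the optimizer keeps maximizing the aesthetic score over the remaining parameters while the user's choices steer the result.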
Related papers
- PromptArtisan: Multi-instruction Image Editing in Single Pass with Complete Attention Control [1.0079049259808768]
PromptArtisan is a groundbreaking approach to multi-instruction image editing.
It achieves remarkable results in a single pass, eliminating the need for time-consuming iterative refinement.
arXiv Detail & Related papers (2025-02-14T16:11:57Z)
- PIXELS: Progressive Image Xemplar-based Editing with Latent Surgery [10.594261300488546]
We introduce PIXELS, a novel framework for progressive exemplar-driven editing with off-the-shelf diffusion models.
PIXELS provides granular control over edits, allowing adjustments at the pixel or region level.
We demonstrate that PIXELS delivers high-quality edits efficiently, leading to a notable improvement in quantitative metrics as well as human evaluation.
arXiv Detail & Related papers (2025-01-16T20:26:30Z)
- INRetouch: Context Aware Implicit Neural Representation for Photography Retouching [54.17599183365242]
We propose a novel retouch transfer approach that learns from professional edits through before-after image pairs.
We develop a context-aware Implicit Neural Representation that learns to apply edits adaptively based on image content and context.
Our approach not only surpasses existing methods in photo retouching but also enhances performance in related image reconstruction tasks.
arXiv Detail & Related papers (2024-12-05T03:31:48Z)
- Tuning-Free Image Customization with Image and Text Guidance [65.9504243633169]
We introduce a tuning-free framework for simultaneous text-image-guided image customization.
Our approach preserves the semantic features of the reference image subject while allowing modification of detailed attributes based on text descriptions.
Our approach outperforms previous methods in both human and quantitative evaluations.
arXiv Detail & Related papers (2024-03-19T11:48:35Z)
- Enhancement by Your Aesthetic: An Intelligible Unsupervised Personalized Enhancer for Low-Light Images [67.14410374622699]
We propose an intelligible unsupervised personalized enhancer (iUPEnhancer) for low-light images.
The proposed iUPEnhancer is trained under the guidance of correlations between the low-light input and an unpaired reference image, using the corresponding unsupervised loss functions.
Experiments demonstrate that the proposed algorithm produces competitive qualitative and quantitative results.
arXiv Detail & Related papers (2022-07-15T07:16:10Z)
- Controllable Image Enhancement [66.18525728881711]
We present a semiautomatic image enhancement algorithm that can generate high-quality images with multiple styles by controlling a few parameters.
An encoder-decoder framework encodes the retouching skills into latent codes and decodes them into the parameters of image signal processing functions.
arXiv Detail & Related papers (2022-06-16T23:54:53Z)
- User-Guided Personalized Image Aesthetic Assessment based on Deep Reinforcement Learning [64.07820203919283]
We propose a novel user-guided personalized image aesthetic assessment framework.
It leverages user interactions to retouch and rank images for aesthetic assessment based on deep reinforcement learning (DRL).
It generates a personalized aesthetic distribution that is more in line with the aesthetic preferences of different users.
arXiv Detail & Related papers (2021-06-14T15:19:48Z)
- Look here! A parametric learning based approach to redirect visual attention [49.609412873346386]
We introduce an automatic method to make an image region more attention-capturing via subtle image edits.
Our model predicts a distinct set of global parametric transformations to be applied to the foreground and background image regions.
Our edits enable inference at interactive rates on any image size, and easily generalize to videos.
arXiv Detail & Related papers (2020-08-12T16:08:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.