Remove Appearance Shift for Ultrasound Image Segmentation via Fast and
Universal Style Transfer
- URL: http://arxiv.org/abs/2002.05844v1
- Date: Fri, 14 Feb 2020 02:00:57 GMT
- Title: Remove Appearance Shift for Ultrasound Image Segmentation via Fast and
Universal Style Transfer
- Authors: Zhendong Liu, Xin Yang, Rui Gao, Shengfeng Liu, Haoran Dou, Shuangchi
He, Yuhao Huang, Yankai Huang, Huanjia Luo, Yuanji Zhang, Yi Xiong, Dong Ni
- Abstract summary: We propose a novel and intuitive framework to remove appearance shift and hence improve the generalization ability of Deep Neural Networks (DNNs).
We follow the spirit of universal style transfer to remove appearance shifts, an approach not previously explored for US images.
Our framework achieves the real-time speed required in clinical US scanning.
- Score: 13.355791568003559
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) suffer from performance degradation when
image appearance shift occurs, especially in ultrasound (US) image
segmentation. In this paper, we propose a novel and intuitive framework to
remove the appearance shift and hence improve the generalization ability of
DNNs. Our work has three highlights. First, we follow the spirit of universal
style transfer to remove appearance shifts, an approach not previously explored
for US images; it enables arbitrary style-content transfer without sacrificing
image structure details. Second, accelerated with an Adaptive Instance
Normalization (AdaIN) block, our framework achieves the real-time speed
required in clinical US scanning. Third, an efficient and effective style image
selection strategy is proposed to ensure that the target-style US image and the
testing content US image properly match each other. Experiments on two large US
datasets demonstrate that our method is superior to state-of-the-art methods in
making DNNs robust against various appearance shifts.
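The abstract attributes the framework's real-time speed to an Adaptive Instance Normalization (AdaIN) block. For reference, below is a minimal PyTorch sketch of the standard AdaIN operation (align the channel-wise mean and standard deviation of content features to those of the style features); this is an illustration of the general technique, not the authors' released implementation, and the tensor names are assumptions.

```python
import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Standard AdaIN: re-normalize content features with style statistics.

    Both inputs are feature maps of shape (N, C, H, W), e.g. encoder activations.
    """
    # Per-channel statistics over the spatial dimensions.
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    # Normalize with content statistics, then re-scale with style statistics.
    return s_std * (content_feat - c_mean) / c_std + s_mean
```

In a typical universal style transfer pipeline this operation is applied to encoder features of the content and style images, and a decoder reconstructs the stylized image from the result.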
Related papers
- D2Styler: Advancing Arbitrary Style Transfer with Discrete Diffusion Methods [2.468658581089448]
We propose a novel framework called D$^2$Styler (Discrete Diffusion Styler).
Our method uses Adaptive Instance Normalization (AdaIN) features as a context guide for the reverse diffusion process.
Experimental results demonstrate that D$^2$Styler produces high-quality style-transferred images.
arXiv Detail & Related papers (2024-08-07T05:47:06Z)
- MoreStyle: Relax Low-frequency Constraint of Fourier-based Image Reconstruction in Generalizable Medical Image Segmentation [53.24011398381715]
We introduce a Plug-and-Play module for data augmentation called MoreStyle.
MoreStyle diversifies image styles by relaxing low-frequency constraints in Fourier space.
With the help of adversarial learning, MoreStyle pinpoints the most intricate style combinations within latent features.
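For context on the summary above, Fourier-based style augmentation methods of this kind typically blend the low-frequency amplitude spectra of two images while keeping the content image's phase. The sketch below shows that common building block; it is an assumption drawn from the summary, not MoreStyle's actual code, and `beta` (size of the low-frequency band) and `lam` (mixing strength) are illustrative parameters.

```python
import torch

def low_freq_amplitude_mix(img_a, img_b, beta=0.1, lam=0.5):
    """Blend the low-frequency amplitude spectra of two (C, H, W) images.

    Keeps the phase of img_a (structure) and mixes only the low-frequency
    amplitude, which mostly carries style/appearance.
    """
    fft_a = torch.fft.fft2(img_a)
    fft_b = torch.fft.fft2(img_b)
    amp_a, pha_a = fft_a.abs(), fft_a.angle()
    amp_b = fft_b.abs()

    # Shift the zero-frequency component to the center and blend a small
    # central (low-frequency) square of the amplitude spectra.
    amp_a = torch.fft.fftshift(amp_a, dim=(-2, -1))
    amp_b = torch.fft.fftshift(amp_b, dim=(-2, -1))
    _, h, w = img_a.shape
    bh, bw = int(h * beta), int(w * beta)
    cy, cx = h // 2, w // 2
    region = (slice(None), slice(cy - bh, cy + bh), slice(cx - bw, cx + bw))
    amp_a[region] = (1 - lam) * amp_a[region] + lam * amp_b[region]
    amp_a = torch.fft.ifftshift(amp_a, dim=(-2, -1))

    # Recombine mixed amplitude with the original phase and invert the FFT.
    mixed = torch.fft.ifft2(amp_a * torch.exp(1j * pha_a))
    return mixed.real
```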
arXiv Detail & Related papers (2024-03-18T11:38:47Z)
- Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object Structure via HyperNetworks [53.67497327319569]
We introduce a novel neural rendering technique to solve image-to-3D from a single view.
Our approach employs the signed distance function as the surface representation and incorporates generalizable priors through geometry-encoding volumes and HyperNetworks.
Our experiments show the advantages of our proposed approach with consistent results and rapid generation.
arXiv Detail & Related papers (2023-12-24T08:42:37Z)
- Fine-Grained Image Style Transfer with Visual Transformers [59.85619519384446]
We propose a novel STyle TRansformer (STTR) network which breaks both content and style images into visual tokens to achieve a fine-grained style transformation.
To compare STTR with existing approaches, we conduct user studies on Amazon Mechanical Turk.
arXiv Detail & Related papers (2022-10-11T06:26:00Z)
- AdaWCT: Adaptive Whitening and Coloring Style Injection [55.554986498301574]
We present AdaWCT, a generalization of AdaIN that relies on the whitening and coloring transformation (WCT), which we apply for style injection in large GANs.
We show, through experiments on the StarGANv2 architecture, that this generalization, albeit conceptually simple, results in significant improvements in the quality of the generated images.
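For reference, the classical WCT operation that AdaWCT generalizes replaces AdaIN's per-channel statistics with full covariance matching: content features are whitened with their own covariance and then re-colored with the style covariance. The sketch below is a minimal PyTorch illustration of that standard WCT step (not the AdaWCT paper's code); feature shapes and names are assumptions.

```python
import torch

def wct(content_feat: torch.Tensor, style_feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Whitening and coloring transform on flattened features of shape (C, H*W)."""
    c_mean = content_feat.mean(dim=1, keepdim=True)
    s_mean = style_feat.mean(dim=1, keepdim=True)
    fc = content_feat - c_mean
    fs = style_feat - s_mean

    # Covariance matrices (C x C) and their eigendecompositions.
    eye = torch.eye(fc.shape[0], dtype=fc.dtype, device=fc.device)
    cov_c = fc @ fc.t() / (fc.shape[1] - 1) + eps * eye
    cov_s = fs @ fs.t() / (fs.shape[1] - 1) + eps * eye
    ev_c, E_c = torch.linalg.eigh(cov_c)
    ev_s, E_s = torch.linalg.eigh(cov_s)

    # Whiten content features, then color them with the style covariance.
    whiten = E_c @ torch.diag(ev_c.clamp_min(eps).rsqrt()) @ E_c.t()
    color = E_s @ torch.diag(ev_s.clamp_min(eps).sqrt()) @ E_s.t()
    return color @ (whiten @ fc) + s_mean
```

In WCT-based universal style transfer, this step is applied to encoder features at several layers and each result is decoded back to image space.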
arXiv Detail & Related papers (2022-08-01T15:07:51Z)
- Saliency Constrained Arbitrary Image Style Transfer using SIFT and DCNN [22.57205921266602]
When common neural style transfer methods are used, the textures and colors in the style image are usually transferred imperfectly to the content image.
This paper proposes a novel saliency constrained method to reduce or avoid such effects.
The experiments show that the saliency maps of source images can help find the correct matching and avoid artifacts.
arXiv Detail & Related papers (2022-01-14T09:00:55Z)
- Content-adaptive Representation Learning for Fast Image Super-resolution [6.5468866820512215]
We address the efficiency issue in image SR by incorporating a patch-wise rolling network to content-adaptively recover images according to their difficulty levels.
In contrast to existing studies that ignore difficulty diversity, we adopt different stages of a neural network to perform image restoration.
Our model not only shows a significant acceleration but also maintains state-of-the-art performance.
arXiv Detail & Related papers (2021-05-20T10:24:29Z)
- Towards Ultra-Resolution Neural Style Transfer via Thumbnail Instance Normalization [42.84367334160332]
We present an extremely simple Ultra-Resolution Style Transfer framework, termed URST, to flexibly process arbitrary high-resolution images.
Most of the existing state-of-the-art methods would fall short due to massive memory cost and small stroke size when processing ultra-high resolution images.
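Based on the title and summary above, the thumbnail instance normalization idea is to normalize full-resolution patches with instance statistics computed from a small thumbnail of the whole image, so independently processed patches stay consistent. The sketch below is an illustrative reconstruction of that idea under this assumption, not the URST code.

```python
import torch

def thumbnail_instance_norm(patch_feat, thumbnail_feat, eps=1e-5):
    """Normalize a patch's features with statistics taken from the thumbnail.

    patch_feat:     (N, C, h, w) features of one high-resolution patch.
    thumbnail_feat: (N, C, H, W) features of the downsampled whole image.
    Sharing thumbnail statistics keeps all patches mutually consistent.
    """
    t_mean = thumbnail_feat.mean(dim=(2, 3), keepdim=True)
    t_std = thumbnail_feat.std(dim=(2, 3), keepdim=True) + eps
    return (patch_feat - t_mean) / t_std
```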
arXiv Detail & Related papers (2021-03-22T12:54:01Z)
- Generalize Ultrasound Image Segmentation via Instant and Plug & Play Style Transfer [65.71330448991166]
Deep segmentation models need to generalize to images with unknown appearance.
Retraining models leads to high latency and complex pipelines.
We propose a novel method for robust segmentation under unknown appearance shifts.
arXiv Detail & Related papers (2021-01-11T05:45:30Z)
- Style-invariant Cardiac Image Segmentation with Test-time Augmentation [10.234493507401618]
Deep models often suffer from a severe performance drop due to appearance shift in real clinical settings.
In this paper, we propose a novel style-invariant method for cardiac image segmentation.
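As a generic reference for the test-time augmentation idea named above, the sketch below averages a segmentation model's predictions over several appearance perturbations of the same test image; the perturbation choice (gamma jitter) is an assumption for illustration, not the paper's exact augmentation set.

```python
import torch

def tta_segment(model, image, gammas=(0.8, 1.0, 1.25)):
    """Average softmax predictions over simple appearance perturbations.

    model: segmentation network mapping (N, C, H, W) -> (N, K, H, W) logits.
    image: input batch with intensities assumed to lie in [0, 1].
    """
    model.eval()
    probs = []
    with torch.no_grad():
        for g in gammas:
            aug = image.clamp(min=0, max=1) ** g  # gamma perturbation of appearance
            probs.append(torch.softmax(model(aug), dim=1))
    return torch.stack(probs).mean(dim=0)
```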
arXiv Detail & Related papers (2020-09-24T08:27:40Z)
- Supervised and Unsupervised Learning of Parameterized Color Enhancement [112.88623543850224]
We tackle the problem of color enhancement as an image translation task using both supervised and unsupervised learning.
We achieve state-of-the-art results compared to both supervised (paired data) and unsupervised (unpaired data) image enhancement methods on the MIT-Adobe FiveK benchmark.
We show the generalization capability of our method by applying it to photos from the early 20th century and to dark video frames.
arXiv Detail & Related papers (2019-12-30T13:57:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.