PixelWorld: Towards Perceiving Everything as Pixels
- URL: http://arxiv.org/abs/2501.19339v1
- Date: Fri, 31 Jan 2025 17:39:21 GMT
- Title: PixelWorld: Towards Perceiving Everything as Pixels
- Authors: Zhiheng Lyu, Xueguang Ma, Wenhu Chen
- Abstract summary: We propose to unify all modalities (text, tables, code, diagrams, images, etc.) as pixel inputs, i.e., "Perceive Everything as Pixels" (PEAP).
We introduce PixelWorld, a novel evaluation suite that unifies all the mentioned modalities into pixel space to gauge the existing models' performance.
- Score: 50.13953243722129
- Abstract: Existing foundation models typically process visual input as pixels and textual input as tokens, a paradigm that contrasts with human perception, where both modalities are processed in a unified manner. With the rise of embodied and agentic AI, where inputs primarily come from camera pixels, the need for a unified perception framework becomes increasingly evident. In this paper, we propose to unify all modalities (text, tables, code, diagrams, images, etc.) as pixel inputs, i.e., "Perceive Everything as Pixels" (PEAP). We introduce PixelWorld, a novel evaluation suite that unifies all the mentioned modalities into pixel space to gauge existing models' performance. Our findings show that (1) PEAP outperforms token-based baselines on multimodal datasets, benefiting from a unified input that improves disambiguation; (2) all models show significant declines in reasoning and coding capabilities when processing pixel-based input, underscoring the need to enhance foundation models' perceptual abilities; (3) larger models can maintain strong performance on non-reasoning tasks under PEAP, while smaller models like Phi-3.5-V suffer significant performance degradation; (4) the attention patterns of PEAP are highly aligned with those of text-token input; and (5) PEAP can be accelerated significantly by exploiting spatial sparsity. We conclude that existing frontier models are competent at pixel perception; however, there is still headroom for improvement. Our code and dataset will be released upon acceptance.
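As a concrete illustration of the PEAP setup, the sketch below renders plain text onto a canvas so a vision-language model can consume it as pixels instead of tokens. This is a minimal sketch assuming Pillow; the function name, wrapping logic, and layout parameters are illustrative and not taken from the PixelWorld codebase.

```python
# Illustrative PEAP-style text rasterization (hypothetical helper, not the
# PixelWorld implementation): turn a text string into an RGB image.
from PIL import Image, ImageDraw, ImageFont

def render_text_as_pixels(text: str, width: int = 768,
                          line_height: int = 16, margin: int = 8) -> Image.Image:
    """Rasterize text onto a white canvas with naive word wrapping."""
    font = ImageFont.load_default()  # swap in ImageFont.truetype(...) for real use
    measurer = ImageDraw.Draw(Image.new("RGB", (1, 1)))
    lines, line = [], ""
    for word in text.split():
        candidate = (line + " " + word).strip()
        if measurer.textlength(candidate, font=font) <= width - 2 * margin:
            line = candidate          # word fits on the current line
        else:
            lines.append(line)        # start a new line
            line = word
    lines.append(line)
    height = 2 * margin + line_height * max(len(lines), 1)
    canvas = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(canvas)
    for i, ln in enumerate(lines):
        draw.text((margin, margin + i * line_height), ln, fill="black", font=font)
    return canvas

# Usage: img = render_text_as_pixels("def add(a, b): return a + b")
# `img` is then fed to the vision encoder in place of a token sequence.
```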
Related papers
- Focus Entirety and Perceive Environment for Arbitrary-Shaped Text Detection [31.180352896153682]
Segmentation-based approaches have emerged as prominent contenders owing to their flexible pixel-level predictions.
We propose a multi-information level arbitrary-shaped text detector consisting of a focus entirety module and a perceive environment module.
The latter extracts region-level information and encourages the model to focus on the distribution of positive samples in the vicinity of a pixel.
arXiv Detail & Related papers (2024-09-25T11:24:37Z)
- Spatio-Temporal Side Tuning Pre-trained Foundation Models for Video-based Pedestrian Attribute Recognition [58.79807861739438]
Existing pedestrian attribute recognition (PAR) algorithms are mainly developed for static images.
We propose to understand human attributes using video frames, which make full use of temporal information.
arXiv Detail & Related papers (2024-04-27T14:43:32Z)
- KeyPoint Relative Position Encoding for Face Recognition [15.65725865703615]
KeyPoint RPE (KP-RPE) is an extension of this principle in which the significance of pixels is not dictated solely by their proximity.
Code and pre-trained models are available.
arXiv Detail & Related papers (2024-03-21T21:56:09Z)
- Differentiable Registration of Images and LiDAR Point Clouds with VoxelPoint-to-Pixel Matching [58.10418136917358]
Cross-modality registration between 2D images from cameras and 3D point clouds from LiDARs is a crucial task in computer vision and robotics.
Previous methods estimate 2D-3D correspondences by matching point and pixel patterns learned by neural networks.
We learn a structured cross-modality matching solver to represent 3D features via a different latent pixel space.
arXiv Detail & Related papers (2023-12-07T05:46:10Z)
- Pixel-Inconsistency Modeling for Image Manipulation Localization [59.968362815126326]
Digital image forensics plays a crucial role in image authentication and manipulation localization.
This paper presents a generalized and robust manipulation localization model through the analysis of pixel inconsistency artifacts.
Experiments show that our method successfully extracts inherent pixel-inconsistency forgery fingerprints.
arXiv Detail & Related papers (2023-09-30T02:54:51Z)
- ADDP: Learning General Representations for Image Recognition and Generation with Alternating Denoising Diffusion Process [94.41510903676837]
We propose an Alternating Denoising Diffusion Process (ADDP) that integrates pixel and VQ-token spaces within a single representation learning framework.
In each denoising step, our method first decodes pixels from previous VQ tokens, then generates new VQ tokens from the decoded pixels.
The learned representations can be used to generate diverse high-fidelity images and also demonstrate excellent transfer performance on recognition tasks.
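As a rough illustration of the alternating scheme described above, the sketch below alternates between decoding pixels from VQ tokens and re-predicting tokens from the decoded pixels; `decoder` and `token_predictor` are hypothetical stand-ins, not the authors' actual interfaces.

```python
# Illustrative sketch of an alternating denoising loop in the spirit of ADDP.
def alternating_denoise(vq_tokens, decoder, token_predictor, num_steps=8):
    """Alternate between pixel space and VQ-token space at each step."""
    for step in range(num_steps):
        pixels = decoder(vq_tokens)                # decode pixels from current VQ tokens
        vq_tokens = token_predictor(pixels, step)  # predict refined VQ tokens from pixels
    return decoder(vq_tokens)                      # final image in pixel space

# Usage with dummy stand-ins:
# img = alternating_denoise(init_tokens, decoder=lambda t: t,
#                           token_predictor=lambda p, s: p)
```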
arXiv Detail & Related papers (2023-06-08T17:59:32Z)
- Learn how to Prune Pixels for Multi-view Neural Image-based Synthesis [10.571582038258443]
We present LeHoPP, a method for input pixel pruning.
We examine the importance of each input pixel concerning the rendered view, and we avoid the use of irrelevant pixels.
Even without retraining the image-based rendering network, our approach shows a good trade-off between synthesis quality and pixel rate.
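A minimal sketch of importance-based input-pixel pruning in the spirit of LeHoPP is shown below; the importance scores, quantile threshold, and function name are illustrative assumptions, not the paper's actual procedure.

```python
# Illustrative pixel pruning: zero out the least important input pixels.
import numpy as np

def prune_pixels(image: np.ndarray, importance: np.ndarray, keep_ratio: float = 0.5):
    """Keep the top `keep_ratio` fraction of pixels by importance; zero the rest."""
    threshold = np.quantile(importance.ravel(), 1.0 - keep_ratio)
    mask = importance >= threshold        # HxW boolean mask of kept pixels
    pruned = image * mask[..., None]      # broadcast mask over color channels
    return pruned, mask

# Usage: pruned, mask = prune_pixels(img, saliency_scores, keep_ratio=0.6)
```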
arXiv Detail & Related papers (2023-05-05T14:29:24Z)
- Adaptive Single Image Deblurring [43.02281823557039]
We propose an efficient pixel adaptive and feature attentive design for handling large blur variations within and across different images.
We also propose an effective content-aware global-local filtering module that significantly improves the performance.
arXiv Detail & Related papers (2022-01-01T10:10:19Z)
- ITSELF: Iterative Saliency Estimation fLexible Framework [68.8204255655161]
Salient object detection estimates the objects that stand out most in an image.
We propose a superpixel-based ITerative Saliency Estimation fLexible Framework (ITSELF) that allows any user-defined assumptions to be added to the model.
We compare ITSELF to two state-of-the-art saliency estimators on five metrics and six datasets.
arXiv Detail & Related papers (2020-06-30T16:51:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.