CP Loss: Channel-wise Perceptual Loss for Time Series Forecasting
- URL: http://arxiv.org/abs/2601.18829v1
- Date: Sun, 25 Jan 2026 15:31:37 GMT
- Title: CP Loss: Channel-wise Perceptual Loss for Time Series Forecasting
- Authors: Yaohua Zha, Chunlin Fan, Peiyuan Liu, Yong Jiang, Tao Dai, Hai Wu, Shu-Tao Xia
- Abstract summary: We propose a Channel-wise Perceptual Loss (CP Loss) for time-series data. We learn a unique perceptual space for each channel, adapted to its characteristics. The loss is calculated within these perceptual spaces to optimize the model.
- Score: 67.3477355449697
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-channel time-series data, prevalent across diverse applications, is characterized by significant heterogeneity across its channels. However, existing forecasting models are typically guided by channel-agnostic loss functions like MSE, which apply a uniform metric across all channels. This often fails to capture channel-specific dynamics such as sharp fluctuations or trend shifts. To address this, we propose a Channel-wise Perceptual Loss (CP Loss). Its core idea is to learn a unique perceptual space for each channel that is adapted to its characteristics, and to compute the loss within this space. Specifically, we first design a learnable channel-wise filter that decomposes the raw signal into disentangled multi-scale representations, which form the basis of our perceptual space. Crucially, the filter is optimized jointly with the main forecasting model, ensuring that the learned perceptual space is explicitly oriented towards the prediction task. Finally, losses are calculated within these perceptual spaces to optimize the model. Code is available at https://github.com/zyh16143998882/CP_Loss.
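The mechanism in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation (see their repository for that): the fixed uniform kernels here merely stand in for the learnable channel-wise filters, which in the paper are trained jointly with the forecaster, and the function names (`channel_filters`, `filter_series`, `cp_loss_sketch`) and kernel sizes are hypothetical.

```python
import numpy as np

def channel_filters(num_channels, kernel_sizes=(3, 5)):
    # One smoothing kernel per channel and per scale. Uniform weights are
    # only an initialization stand-in; in CP Loss these filter weights are
    # learnable and optimized jointly with the forecasting model.
    return {k: np.ones((num_channels, k)) / k for k in kernel_sizes}

def filter_series(x, kernels):
    # x: (num_channels, length). Convolve each channel with its own kernel
    # ("same" padding) to obtain one scale of the perceptual representation.
    out = np.empty_like(x, dtype=float)
    for c in range(x.shape[0]):
        out[c] = np.convolve(x[c], kernels[c], mode="same")
    return out

def cp_loss_sketch(pred, target, filters):
    # Sum of per-scale MSEs, each computed inside the filtered
    # ("perceptual") space rather than on the raw signal.
    loss = 0.0
    for kernels in filters.values():
        fp = filter_series(pred, kernels)
        ft = filter_series(target, kernels)
        loss += np.mean((fp - ft) ** 2)
    return loss

# Toy usage: 2 channels, 8 time steps.
pred = np.array([[1., 2., 3., 4., 5., 6., 7., 8.],
                 [0., 1., 0., 1., 0., 1., 0., 1.]])
target = np.array([[1., 2., 3., 5., 5., 6., 7., 8.],
                   [0., 1., 1., 1., 0., 1., 0., 1.]])
filters = channel_filters(num_channels=2)
print(cp_loss_sketch(pred, target, filters))
```

With learnable filters, each channel ends up with its own decomposition, so a channel dominated by sharp fluctuations and a slowly trending channel are penalized in differently shaped spaces rather than by one uniform MSE.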
Related papers
- Channel-Aware Probing for Multi-Channel Imaging [9.507520646516719]
Training and evaluating vision encoders on Multi-Channel Imaging (MCI) data remains challenging. Channel-Aware Probing (CAP) exploits intrinsic inter-channel diversity by controlling feature flow at both the encoder and probe levels.
arXiv Detail & Related papers (2026-02-13T08:03:27Z) - Distilling Channels for Efficient Deep Tracking [68.13422829310835]
This paper presents a novel framework termed channel distillation to facilitate deep trackers.
We show that an integrated formulation can turn feature compression, response map generation, and model update into a unified energy minimization problem.
The resulting deep tracker is accurate, fast, and has low memory requirements.
arXiv Detail & Related papers (2024-09-18T08:09:20Z) - Diffusion Models for Accurate Channel Distribution Generation [19.80498913496519]
Strong generative models can accurately learn channel distributions.
This could save recurring costs for physical measurements of the channel.
The resulting differentiable channel model supports training neural encoders by enabling gradient-based optimization.
arXiv Detail & Related papers (2023-09-19T10:35:54Z) - Joint Channel Estimation and Feedback with Masked Token Transformers in Massive MIMO Systems [74.52117784544758]
This paper proposes an encoder-decoder based network that unveils the intrinsic frequency-domain correlation within the CSI matrix.
The entire encoder-decoder network is utilized for channel compression.
Our method outperforms state-of-the-art channel estimation and feedback techniques in joint tasks.
arXiv Detail & Related papers (2023-06-08T06:15:17Z) - CARD: Channel Aligned Robust Blend Transformer for Time Series Forecasting [50.23240107430597]
We design a special Transformer, i.e., Channel Aligned Robust Blend Transformer (CARD for short), that addresses key shortcomings of CI type Transformer in time series forecasting.
First, CARD introduces a channel-aligned attention structure that allows it to capture temporal correlations among signals.
Second, in order to efficiently utilize the multi-scale knowledge, we design a token blend module to generate tokens with different resolutions.
Third, we introduce a robust loss function for time series forecasting to alleviate the potential overfitting issue.
arXiv Detail & Related papers (2023-05-20T05:16:31Z) - Alternating Channel Estimation and Prediction for Cell-Free mMIMO with Channel Aging: A Deep Learning Based Scheme [17.486123129104882]
In large scale dynamic wireless networks, the amount of overhead caused by channel estimation (CE) is becoming one of the main performance bottlenecks.
We propose a new hybrid channel estimation/prediction scheme to reduce overhead in time-division duplex (TDD) wireless cell-free massive multiple-input-multiple-output (mMIMO) systems.
arXiv Detail & Related papers (2022-04-16T20:27:01Z) - Predicting Multi-Antenna Frequency-Selective Channels via Meta-Learned Linear Filters based on Long-Short Term Channel Decomposition [39.38412820403623]
We develop predictors for single-antenna frequency-flat channels based on transfer/meta-learned quadratic regularization.
We introduce transfer and meta-learning algorithms for LSTD-based prediction models.
arXiv Detail & Related papers (2022-03-23T20:38:48Z) - GDP: Stabilized Neural Network Pruning via Gates with Differentiable Polarization [84.57695474130273]
Gate-based or importance-based pruning methods aim to remove the least important channels.
GDP can be plugged before convolutional layers without bells and whistles, to control the on-and-off of each channel.
Experiments conducted on the CIFAR-10 and ImageNet datasets show that the proposed GDP achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-09-06T03:17:10Z) - Group Fisher Pruning for Practical Network Compression [58.25776612812883]
We present a general channel pruning approach that can be applied to various complicated structures.
We derive a unified metric based on Fisher information to evaluate the importance of a single channel and coupled channels.
Our method can be used to prune any structures including those with coupled channels.
arXiv Detail & Related papers (2021-08-02T08:21:44Z) - Channel-wise Knowledge Distillation for Dense Prediction [73.99057249472735]
We propose to align features channel-wise between the student and teacher networks.
We consistently achieve superior performance on three benchmarks with various network structures.
arXiv Detail & Related papers (2020-11-26T12:00:38Z) - Massive MIMO Channel Prediction: Kalman Filtering vs. Machine Learning [18.939010023327498]
This paper focuses on channel prediction techniques for massive multiple-input multiple-output (MIMO) systems.
We develop and compare a vector Kalman filter (VKF)-based channel predictor and a machine learning (ML)-based channel predictor.
arXiv Detail & Related papers (2020-09-21T15:47:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.