Signal-First Architectures: Rethinking Front-End Reactivity
- URL: http://arxiv.org/abs/2506.13815v1
- Date: Sat, 14 Jun 2025 20:34:48 GMT
- Title: Signal-First Architectures: Rethinking Front-End Reactivity
- Authors: Shrinivass Arunachalam Balasubramanian
- Abstract summary: This paper introduces Signal-First Architecture, a novel paradigm where dependency-tracked signals are the atomic unit of reactivity. Unlike traditional RxJS or NgRx patterns, Signal-First enforces reactive flows from explicit signal declarations. We present a comparative analysis of three Angular reactivity models: RxJS service-based, NgRx global stores, and pure Signal-First implementations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern front-end frameworks face escalating reactivity management challenges, including performance degradation from complex observable chains and unpredictable re-renders. This paper introduces Signal-First Architecture--a novel paradigm where granular, dependency-tracked signals are the atomic unit of reactivity. Unlike traditional RxJS or NgRx patterns, Signal-First enforces reactive flows from explicit signal declarations, with derived values via computed() and side effects scoped to effect(). This model ensures deterministic behavior by eliminating implicit subscriptions and optimizing reactive graph evaluation. We present a comparative analysis of three Angular reactivity models: RxJS service-based, NgRx global stores, and pure Signal-First implementations. Through controlled benchmarking, including Chrome DevTools performance tracing, memory heap snapshots, and Lighthouse audits, this study quantifies Signal-First advantages.
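For concreteness, here is a minimal sketch of the Signal-First style the abstract describes, using Angular's signal(), computed(), and effect() primitives (stable since Angular 16). The component, its names, and the pricing logic are illustrative assumptions, not code from the paper's benchmarks:

```typescript
// Minimal Signal-First sketch (Angular 16+). The component and its
// pricing logic are illustrative, not the paper's benchmark code.
import { Component, computed, effect, signal } from '@angular/core';

@Component({
  selector: 'app-cart',
  standalone: true,
  template: `
    <p>Items: {{ quantity() }}, total: {{ total() }}</p>
    <button (click)="addItem()">Add item</button>
  `,
})
export class CartComponent {
  // Explicit signal declarations: the atomic units of reactivity.
  readonly quantity = signal(1);
  readonly unitPrice = signal(9.99);

  // Derived value via computed(): re-evaluated only when a tracked
  // dependency (quantity or unitPrice) actually changes.
  readonly total = computed(() => this.quantity() * this.unitPrice());

  constructor() {
    // Side effect scoped to effect(); no manual subscription or
    // teardown, so there is nothing to leak or forget to unsubscribe.
    effect(() => console.log(`total is now ${this.total()}`));
  }

  addItem(): void {
    this.quantity.update((q) => q + 1);
  }
}
```

An RxJS service-based equivalent would typically hold a BehaviorSubject per field, derive the total with combineLatest, and rely on async pipes or manual subscriptions; the signal version expresses the same dependency graph declaratively from the signal reads themselves, which is what allows fine-grained, deterministic re-evaluation.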
Related papers
- CyclicReflex: Improving Large Reasoning Models via Cyclical Reflection Token Scheduling
Large reasoning models (LRMs) harness test-time scaling to perform multi-step reasoning for complex problem-solving. We treat reflection tokens as a "resource" and introduce the problem of resource allocation. We propose cyclical reflection token scheduling (termed CyclicReflex), a decoding strategy that dynamically modulates reflection token logits.
arXiv Detail & Related papers (2025-06-04T03:43:38Z)
- Compile Scene Graphs with Reinforcement Learning
Next-token prediction is the fundamental principle for training large language models (LLMs). We introduce R1-SGG, a multimodal LLM (M-LLM) initially trained via supervised fine-tuning (SFT) on the scene graph dataset. We design a set of graph-centric rewards, including three recall-based variants -- Hard Recall, Hard Recall+Relax, and Soft Recall.
arXiv Detail & Related papers (2025-04-18T10:46:22Z) - COrAL: Order-Agnostic Language Modeling for Efficient Iterative Refinement [80.18490952057125]
Iterative refinement has emerged as an effective paradigm for enhancing the capabilities of large language models (LLMs) on complex tasks.
We propose Context-Wise Order-Agnostic Language Modeling (COrAL) to overcome these challenges.
Our approach models multiple token dependencies within manageable context windows, enabling the model to perform iterative refinement internally.
arXiv Detail & Related papers (2024-10-12T23:56:19Z) - MoRE: A Mixture of Reflectors Framework for Large Language Model-Based Sequential Recommendation [16.10791252542592]
Large language models (LLMs) have emerged as a cutting-edge approach in sequential recommendation. We propose MoRE, which introduces three perspective-aware offline reflection processes to address these gaps. MoRE's meta-reflector employs a self-improving strategy and a dynamic selection mechanism to adapt to evolving user preferences.
arXiv Detail & Related papers (2024-09-10T09:58:55Z) - Make Graph-based Referring Expression Comprehension Great Again through Expression-guided Dynamic Gating and Regression [44.36417883611282]
We introduce a plug-and-adapt module guided by sub-expressions, called dynamic gate constraint (DGC), which can adaptively disable irrelevant proposals during reasoning.
We also introduce an expression-guided regression strategy (EGR) to refine location prediction.
Without any pretraining, the proposed graph-based method achieves better performance than the state-of-the-art (SOTA) transformer-based methods.
arXiv Detail & Related papers (2024-09-05T09:44:43Z)
- ACTRESS: Active Retraining for Semi-supervised Visual Grounding
A previous study, RefTeacher, makes the first attempt to tackle this task by adopting the teacher-student framework to provide pseudo confidence supervision and attention-based supervision.
This approach is incompatible with current state-of-the-art visual grounding models, which follow the Transformer-based pipeline.
Our paper proposes the ACTive REtraining approach for Semi-Supervised Visual Grounding, abbreviated as ACTRESS.
arXiv Detail & Related papers (2024-07-03T16:33:31Z)
- A Refreshed Similarity-based Upsampler for Direct High-Ratio Feature Upsampling
A popular similarity-based feature upsampling pipeline has been proposed, which utilizes a high-resolution feature as guidance. We propose an explicitly controllable query-key feature alignment from both semantic-aware and detail-aware perspectives. We develop a fine-grained neighbor selection strategy on HR features, which is simple yet effective for alleviating mosaic artifacts.
arXiv Detail & Related papers (2024-07-02T14:12:21Z)
- Continual Referring Expression Comprehension via Dual Modular Memorization
Referring Expression Comprehension (REC) aims to localize an image region of a given object described by a natural-language expression.
Existing REC algorithms make a strong assumption that training data feeding into a model are given upfront, which degrades its practicality for real-world scenarios.
In this paper, we propose Continual Referring Expression Comprehension (CREC), a new setting for REC, where a model learns on a stream of incoming tasks.
In order to continuously improve the model on sequential tasks without forgetting prior learned knowledge and without repeatedly re-training from scratch, we propose an effective baseline method named Dual Modular Memorization.
arXiv Detail & Related papers (2023-11-25T02:58:51Z)
- Whether you can locate or not? Interactive Referring Expression Generation
We propose an Interactive REG (IREG) model that can interact with a real REC model.
IREG outperforms previous state-of-the-art methods on popular evaluation metrics.
arXiv Detail & Related papers (2023-08-19T10:53:32Z)
- Reference Twice: A Simple and Unified Baseline for Few-Shot Instance Segmentation
Few-Shot Instance Segmentation (FSIS) requires detecting and segmenting novel classes with limited support examples.
We introduce a unified framework, Reference Twice (RefT), to exploit the relationship between support and query features for FSIS.
arXiv Detail & Related papers (2023-01-03T15:33:48Z)
- Automatically Generating Counterfactuals for Relation Extraction
Relation extraction (RE) is a fundamental task in natural language processing.
Current deep neural models have achieved high accuracy but are easily affected by spurious correlations.
We develop a novel approach to derive contextual counterfactuals for entities.
arXiv Detail & Related papers (2022-02-22T04:46:10Z)
- Representative Graph Neural Network
We present a Representative Graph layer to dynamically sample a few representative features.
Instead of propagating messages from all positions, our RepGraph layer computes the response of a node using only a few representative nodes.
arXiv Detail & Related papers (2020-08-12T09:46:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.