LinkD: AutoRegressive Diffusion Model for Mechanical Linkage Synthesis
- URL: http://arxiv.org/abs/2601.04054v1
- Date: Wed, 07 Jan 2026 16:19:11 GMT
- Title: LinkD: AutoRegressive Diffusion Model for Mechanical Linkage Synthesis
- Authors: Yayati Jadhav, Amir Barati Farimani
- Abstract summary: We introduce an autoregressive diffusion framework that exploits the dyadic nature of linkage assembly. We demonstrate successful synthesis of linkage systems containing up to 20 nodes with extensibility to N-node architectures.
- Score: 11.69314618713792
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Designing mechanical linkages to achieve target end-effector trajectories presents a fundamental challenge due to the intricate coupling between continuous node placements, discrete topological configurations, and nonlinear kinematic constraints. The highly nonlinear motion-to-configuration relationship means small perturbations in joint positions drastically alter trajectories, while the combinatorially expanding design space renders conventional optimization and heuristic methods computationally intractable. We introduce an autoregressive diffusion framework that exploits the dyadic nature of linkage assembly by representing mechanisms as sequentially constructed graphs, where nodes correspond to joints and edges to rigid links. Our approach combines a causal transformer with a Denoising Diffusion Probabilistic Model (DDPM), both conditioned on target trajectories encoded via a transformer encoder. The causal transformer autoregressively predicts discrete topology node-by-node, while the DDPM refines each node's spatial coordinates and edge connectivity to previously generated nodes. This sequential generation enables adaptive trial-and-error synthesis where problematic nodes exhibiting kinematic locking or collisions can be selectively regenerated, allowing autonomous correction of degenerate configurations during design. Our graph-based, data-driven methodology surpasses traditional optimization approaches, enabling scalable inverse design that generalizes to mechanisms with arbitrary node counts. We demonstrate successful synthesis of linkage systems containing up to 20 nodes with extensibility to N-node architectures. This work advances autoregressive graph generation methodologies and computational kinematic synthesis, establishing new paradigms for scalable inverse design of complex mechanical systems.
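As a rough illustration of the generation loop the abstract describes, the sketch below uses plain NumPy with a deterministic stand-in for the learned transformer/DDPM proposer; all function names and the toy collision check are hypothetical, not from the paper. It builds a linkage graph node-by-node and selectively regenerates any joint that fails the check, mirroring the trial-and-error synthesis:

```python
import numpy as np

def make_proposer():
    """Stand-in for the causal transformer + DDPM. Returns a function that
    proposes (joint position, edges to earlier joints); the third call
    deliberately collides with joint 0 to exercise the regeneration path."""
    calls = {"n": 0}
    def propose(nodes):
        i = calls["n"]
        calls["n"] += 1
        if i == 2:  # degenerate proposal: lands exactly on joint 0
            return np.array([1.0, 0.0]), np.ones(len(nodes), dtype=int)
        angle = i * np.pi / 3
        pos = np.array([np.cos(angle), np.sin(angle)])
        edges = np.ones(len(nodes), dtype=int)  # link to every earlier joint
        return pos, edges
    return propose

def is_degenerate(pos, nodes):
    """Toy kinematic check: reject a joint that collides with one already
    placed (the paper also checks for kinematic locking)."""
    return any(np.linalg.norm(pos - p) < 1e-6 for p, _ in nodes)

def synthesize(propose, n_nodes, max_retries=5):
    """Autoregressive node-by-node construction: a rejected proposal is
    simply regenerated instead of restarting the whole mechanism."""
    nodes, regenerated = [], 0
    while len(nodes) < n_nodes:
        for _ in range(max_retries):
            pos, edges = propose(nodes)
            if not is_degenerate(pos, nodes):
                nodes.append((pos, edges))
                break
            regenerated += 1
        else:
            raise RuntimeError("could not place a valid joint")
    return nodes, regenerated

mechanism, retries = synthesize(make_proposer(), n_nodes=5)
print(len(mechanism), retries)  # 5 1
```

Because only the offending node is resampled, earlier valid joints survive a rejection, which is the property that lets the method correct degenerate configurations mid-design.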
Related papers
- AdvSynGNN: Structure-Adaptive Graph Neural Nets via Adversarial Synthesis and Self-Corrective Propagation [8.765438402697892]
Graph neural networks frequently encounter significant performance degradation when confronted with structural noise or non-homophilous topologies. We present AdvSynGNN, a comprehensive architecture designed for resilient node-level representation learning.
arXiv Detail & Related papers (2026-02-19T04:26:57Z) - Latent Dynamics Graph Convolutional Networks for model order reduction of parameterized time-dependent PDEs [0.0]
We introduce Latent Dynamics Graph Convolutional Network (LD-GCN), a purely data-driven, encoder-free architecture. LD-GCN learns a global, low-dimensional representation of dynamical systems conditioned on external inputs and parameters. Our framework enhances interpretability by enabling analysis of the reduced dynamics and supporting zero-shot prediction.
arXiv Detail & Related papers (2026-01-16T13:10:00Z) - Deep Delta Learning [91.75868893250662]
We introduce Deep Delta Learning (DDL), a novel architecture that generalizes the standard residual connection. We provide a spectral analysis of this operator, demonstrating that the gate (a function of $\mathbf{X}$) enables dynamic interpolation between identity mapping, projection, and geometric reflection. This unification empowers the network to explicitly control the spectrum of its layer-wise transition operator, enabling the modeling of complex, non-monotonic dynamics.
arXiv Detail & Related papers (2026-01-01T18:11:38Z) - Time Extrapolation with Graph Convolutional Autoencoder and Tensor Train Decomposition [9.446359051690292]
We develop a time-consistent reduced-order model for parameterized partial differential equations on unstructured grids. In particular, high-fidelity snapshots are represented as a combination of parametric, spatial, and temporal cores via tensor train (TT) decomposition. We enhance generalization performance with a multi-fidelity, two-stage approach in the framework of Deep Operator Networks (DeepONet). Numerical results, including heat-conduction, advection-diffusion, and vortex-shedding phenomena, demonstrate strong performance in learning the dynamics in the extrapolation regime for complex geometries.
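The TT representation of snapshots as parametric, spatial, and temporal cores can be sketched in a few lines of NumPy; the sizes, ranks, and random cores below are illustrative stand-ins, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 4 parameter samples, 50 spatial points, 30 time
# steps, with TT ranks (3, 3) linking the three cores.
n_mu, n_x, n_t, r1, r2 = 4, 50, 30, 3, 3

# The three TT cores: parametric, spatial, and temporal.
G_mu = rng.normal(size=(n_mu, r1))
G_x = rng.normal(size=(r1, n_x, r2))
G_t = rng.normal(size=(r2, n_t))

# Reconstruct the full snapshot tensor u[mu, x, t] by contracting the cores.
U = np.einsum('pa,axb,bt->pxt', G_mu, G_x, G_t)

# The cores store far fewer entries than the dense tensor they encode.
dense = n_mu * n_x * n_t
compressed = G_mu.size + G_x.size + G_t.size
print(U.shape, dense, compressed)  # (4, 50, 30) 6000 552
```

The compression comes from the low TT ranks: storage scales with the sum of core sizes rather than the product of all three mode dimensions.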
arXiv Detail & Related papers (2025-11-28T09:59:17Z) - Tensor Network Framework for Forecasting Nonlinear and Chaotic Dynamics [1.790605517028706]
We present a tensor network model (TNM) for forecasting nonlinear and chaotic dynamics. We show that the TNM accurately reconstructs short-term trajectories and faithfully captures the attractor geometry.
arXiv Detail & Related papers (2025-11-12T11:49:38Z) - Kuramoto Orientation Diffusion Models [67.0711709825854]
Orientation-rich images, such as fingerprints and textures, often exhibit coherent angular patterns. Motivated by the role of phase synchronization in biological systems, we propose a score-based generative model. Our model achieves competitive results on general image benchmarks and significantly improves generation quality on orientation-dense datasets like fingerprints and textures.
arXiv Detail & Related papers (2025-09-18T18:18:49Z) - Towards Pre-trained Graph Condensation via Optimal Transport [52.6504753271008]
Graph condensation (GC) aims to distill the original graph into a small-scale graph, mitigating redundancy and accelerating GNN training. Conventional GC approaches heavily rely on rigid GNNs and task-specific supervision. Pre-trained Graph Condensation (PreGC) via optimal transport is proposed to transcend the limitations of task- and architecture-dependent GC methods.
arXiv Detail & Related papers (2025-09-18T08:13:24Z) - Autoencoder-based non-intrusive model order reduction in continuum mechanics [0.0]
We propose a non-intrusive, Autoencoder-based framework for reduced-order modeling in continuum mechanics. Our method integrates three stages: (i) an unsupervised Autoencoder compresses high-dimensional finite element solutions into a compact latent space, (ii) a supervised regression network maps problem parameters to latent codes, and (iii) an end-to-end surrogate reconstructs full-field solutions directly from input parameters.
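A minimal sketch of that three-stage pipeline, with untrained linear maps standing in for the encoder, decoder, and regression networks (all sizes and names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_latent, n_param = 200, 4, 2  # hypothetical problem sizes

# Stage (i): the autoencoder. Linear maps stand in for the trained
# networks compressing finite element solutions into latent codes.
W_enc = rng.normal(size=(n_latent, n_dof)) / np.sqrt(n_dof)
W_dec = np.linalg.pinv(W_enc)  # decoder: latent code -> full field

def encode(u):
    return W_enc @ u

def decode(z):
    return W_dec @ z

# Stage (ii): a regression map from problem parameters to latent codes.
W_reg = rng.normal(size=(n_latent, n_param))

def param_to_latent(mu):
    return W_reg @ mu

# Stage (iii): the end-to-end surrogate chains (ii) with the decoder, so
# full-field predictions need only the input parameters.
def surrogate(mu):
    return decode(param_to_latent(mu))

u_pred = surrogate(np.array([0.5, -1.0]))
print(u_pred.shape)  # (200,)
```

The surrogate never touches the high-dimensional solver at prediction time, which is what makes the approach non-intrusive.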
arXiv Detail & Related papers (2025-09-02T12:05:00Z) - Time-Scale Coupling Between States and Parameters in Recurrent Neural Networks [3.924071936547547]
Gated recurrent neural networks (RNNs) implicitly induce adaptive learning-rate behavior. This effect arises from the coupling between state-space time scales (parametrized by the gates) and parameter-space dynamics. Empirical simulations corroborate these claims.
arXiv Detail & Related papers (2025-08-16T18:19:34Z) - ReDiSC: A Reparameterized Masked Diffusion Model for Scalable Node Classification with Structured Predictions [64.17845687013434]
We propose ReDiSC, a structured diffusion model for scalable node classification. We show that ReDiSC achieves superior or highly competitive performance compared to state-of-the-art GNN, label propagation, and diffusion-based baselines. Notably, ReDiSC scales effectively to large-scale datasets on which previous structured diffusion methods fail due to computational constraints.
arXiv Detail & Related papers (2025-07-19T04:46:53Z) - Nonlinear proper orthogonal decomposition for convection-dominated flows [0.0]
We propose an end-to-end Galerkin-free model combining autoencoders with long short-term memory networks for dynamics.
Our approach not only improves the accuracy, but also significantly reduces the computational cost of training and testing.
arXiv Detail & Related papers (2021-10-15T18:05:34Z) - Rethinking Skip Connection with Layer Normalization in Transformers and ResNets [49.87919454950763]
Skip connection is a widely-used technique to improve the performance of deep neural networks.
In this work, we investigate how scale factors influence the effectiveness of the skip connection.
arXiv Detail & Related papers (2021-05-15T11:44:49Z) - Optimizing Mode Connectivity via Neuron Alignment [84.26606622400423]
Empirically, the local minima of loss functions can be connected by a learned curve in model space along which the loss remains nearly constant.
We propose a more general framework to investigate the effect of symmetry on landscape connectivity by accounting for the weight permutations of the networks being connected.
arXiv Detail & Related papers (2020-09-05T02:25:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.