Object-Spatial Programming
- URL: http://arxiv.org/abs/2503.15812v6
- Date: Sat, 07 Jun 2025 19:08:16 GMT
- Title: Object-Spatial Programming
- Authors: Jason Mars
- Abstract summary: We introduce Object-Spatial Programming (OSP), a programming model that extends Object-Oriented Programming. OSP fundamentally inverts the traditional relationship between data and computation, enabling computation to move to data through four specialized archetypes. This semantic enhancement enables runtime systems to make informed decisions about data locality, parallel execution, and distribution strategies.
- Score: 2.8374498376407877
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The evolution of programming languages from low-level assembly to high-level abstractions demonstrates a fundamental principle: by constraining how programmers express computation and enriching semantic information at the language level, we can make previously undecidable program properties tractable for optimization. Building on the insight of this undecidability-lessening effect, we introduce Object-Spatial Programming (OSP), a novel programming model that extends Object-Oriented Programming by introducing topologically-aware class constructs called archetypes. OSP fundamentally inverts the traditional relationship between data and computation, enabling computation to move to data through four specialized archetypes: object classes, node classes (discrete data locations), edge classes (first-class relationships), and walker classes (mobile computational entities). By making topological relationships and traversal patterns explicit at the language level, OSP transforms previously opaque program behaviors into observable, optimizable patterns. This semantic enhancement enables runtime systems to make informed decisions about data locality, parallel execution, and distribution strategies based on explicit topology, while providing programmers with intuitive abstractions for modeling complex systems where connection topology is central to the computational model. The paradigm addresses fundamental limitations in traditional programming models when representing agent-based systems, social networks, neural networks, distributed systems, finite state machines, and other spatially-oriented computational problems, demonstrating how thoughtful abstraction design can simultaneously enhance programmer expressiveness and enable sophisticated system-level optimizations across the computing stack.
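The four archetypes in the abstract can be illustrated with a minimal sketch in plain Python. This is a hypothetical rendering, not the paper's actual syntax (OSP is presented as a language-level construct): the names `Node`, `Edge`, `Walker`, and `visit` are assumptions chosen to mirror the node, edge, and walker archetypes, with computation moving to data rather than data being passed to functions.

```python
# Hypothetical sketch of OSP's archetypes in plain Python (illustrative only).

class Node:
    """Discrete data location: holds state plus its outgoing edges."""
    def __init__(self, name, value=0):
        self.name = name
        self.value = value
        self.edges = []  # first-class Edge objects, not raw references

class Edge:
    """First-class relationship: may carry its own data (e.g. a weight)."""
    def __init__(self, src, dst, weight=1.0):
        self.src, self.dst, self.weight = src, dst, weight
        src.edges.append(self)

class Walker:
    """Mobile computational entity: the computation travels to the data."""
    def __init__(self):
        self.total = 0.0

    def visit(self, node, seen=None):
        seen = seen if seen is not None else set()
        if node in seen:
            return
        seen.add(node)
        self.total += node.value      # compute *at* the data location
        for edge in node.edges:       # topology is explicit and observable
            self.visit(edge.dst, seen)

# Build a small explicit topology and release a walker onto it.
a, b, c = Node("a", 1), Node("b", 2), Node("c", 3)
Edge(a, b)
Edge(b, c)
Edge(a, c)

w = Walker()
w.visit(a)
print(w.total)  # 6.0
```

Because the graph structure and the traversal are both explicit objects here, a runtime could in principle observe them to schedule or distribute work, which is the optimization opportunity the abstract describes.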
Related papers
- Efficient Attention Mechanisms for Large Language Models: A Survey [18.86171225316892]
Transformer-based architectures have become the prevailing computation backbone of large language models. Recent research has introduced two principal categories of efficient attention mechanisms. Sparse attention techniques, in contrast, limit attention to selected subsets of tokens based on fixed patterns, block-wise routing, or clustering strategies.
arXiv Detail & Related papers (2025-07-25T18:08:10Z) - Scaling Laws and Representation Learning in Simple Hierarchical Languages: Transformers vs. Convolutional Architectures [49.19753720526998]
We derive theoretical scaling laws for neural network performance on synthetic datasets. We validate that convolutional networks, whose structure aligns with that of the generative process through locality and weight sharing, enjoy a faster scaling of performance. This finding clarifies the architectural biases underlying neural scaling laws and highlights how representation learning is shaped by the interaction between model architecture and the statistical properties of data.
arXiv Detail & Related papers (2025-05-11T17:44:14Z) - Generalized Factor Neural Network Model for High-dimensional Regression [50.554377879576066]
We tackle the challenges of modeling high-dimensional data sets with latent low-dimensional structures hidden within complex, non-linear, and noisy relationships. Our approach enables a seamless integration of concepts from non-parametric regression, factor models, and neural networks for high-dimensional regression.
arXiv Detail & Related papers (2025-02-16T23:13:55Z) - Interpretable deformable image registration: A geometric deep learning perspective [9.13809412085203]
We present a theoretical foundation for designing an interpretable registration framework. We formulate an end-to-end process that refines transformations in a coarse-to-fine fashion. We conclude by showing significant improvement in performance metrics over state-of-the-art approaches.
arXiv Detail & Related papers (2024-12-17T19:47:10Z) - Deep Learning and Machine Learning, Advancing Big Data Analytics and Management: Object-Oriented Programming [17.98778771365221]
Object-Oriented Programming (OOP) has become a crucial paradigm for managing the growing complexity of modern software systems. This work provides a comprehensive introduction to the integration of OOP techniques within these domains. We examine how design patterns and modular programming can be employed to enhance the structure and efficiency of machine learning systems.
arXiv Detail & Related papers (2024-09-30T03:37:10Z) - Topological Deep Learning with State-Space Models: A Mamba Approach for Simplicial Complexes [4.787059527893628]
We propose a novel architecture designed to operate with simplicial complexes, utilizing the Mamba state-space model as its backbone.
Our approach generates sequences for the nodes based on the neighboring cells, enabling direct communication between all higher-order structures, regardless of their rank.
arXiv Detail & Related papers (2024-09-18T14:49:25Z) - Redefining Data-Centric Design: A New Approach with a Domain Model and Core Data Ontology for Computational Systems [2.872069347343959]
This paper presents an innovative data-centric paradigm for designing computational systems by introducing a new informatics domain model.
The proposed model moves away from the conventional node-centric framework and focuses on data-centric categorization, using a multimodal approach that incorporates objects, events, concepts, and actions.
arXiv Detail & Related papers (2024-09-01T22:34:12Z) - Large Language Models are Interpretable Learners [53.56735770834617]
In this paper, we show a combination of Large Language Models (LLMs) and symbolic programs can bridge the gap between expressiveness and interpretability.
The pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts.
As the knowledge learned by LSP is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and other LLMs.
arXiv Detail & Related papers (2024-06-25T02:18:15Z) - Nonlinear classification of neural manifolds with contextual information [6.292933471495322]
We introduce a theoretical framework that leverages latent directions in input space, which can be related to contextual information.
We derive an exact formula for the context-dependent manifold capacity that depends on manifold geometry and context correlations.
Our framework's increased expressivity captures representation reformatting in deep networks at early stages of the layer hierarchy, previously inaccessible to analysis.
arXiv Detail & Related papers (2024-05-10T23:37:31Z) - On Linearizing Structured Data in Encoder-Decoder Language Models: Insights from Text-to-SQL [8.57550491437633]
This work investigates the linear handling of structured data in encoder-decoder language models, specifically T5.
Our findings reveal the model's ability to mimic human-designed processes such as schema linking and syntax prediction.
We also uncover insights into the model's internal mechanisms, including the ego-centric nature of structure node encodings.
arXiv Detail & Related papers (2024-04-03T01:16:20Z) - Defining Neural Network Architecture through Polytope Structures of Dataset [53.512432492636236]
This paper defines upper and lower bounds for neural network widths, which are informed by the polytope structure of the dataset in question.
We develop an algorithm to investigate a converse situation where the polytope structure of a dataset can be inferred from its corresponding trained neural networks.
It is established that popular datasets such as MNIST, Fashion-MNIST, and CIFAR10 can be efficiently encapsulated using no more than two polytopes with a small number of faces.
arXiv Detail & Related papers (2024-02-04T08:57:42Z) - Serving Deep Learning Model in Relational Databases [70.53282490832189]
Serving deep learning (DL) models on relational data has become a critical requirement across diverse commercial and scientific domains.
We highlight three pivotal paradigms: The state-of-the-art DL-centric architecture offloads DL computations to dedicated DL frameworks.
The potential UDF-centric architecture encapsulates one or more tensor computations into User Defined Functions (UDFs) within the relational database management system (RDBMS).
arXiv Detail & Related papers (2023-10-07T06:01:35Z) - Homological Convolutional Neural Networks [4.615338063719135]
We propose a novel deep learning architecture that exploits the data structural organization through topologically constrained network representations.
We test our model on 18 benchmark datasets against 5 classic machine learning and 3 deep learning models.
arXiv Detail & Related papers (2023-08-26T08:48:51Z) - PDSketch: Integrated Planning Domain Programming and Learning [86.07442931141637]
We present a new domain definition language, named PDSketch.
It allows users to flexibly define high-level structures in the transition models.
Details of the transition model will be filled in by trainable neural networks.
arXiv Detail & Related papers (2023-03-09T18:54:12Z) - Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent
Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as structure prior and reveal the underlying signal interdependencies.
Deep unrolling and Deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z) - Simplicial Attention Networks [0.0]
We introduce a proper self-attention mechanism able to process data components at different layers.
We learn how to weight both upper and lower neighborhoods of the given topological domain in a totally task-oriented fashion.
The proposed approach compares favorably with other methods when applied to different (inductive and transductive) tasks.
arXiv Detail & Related papers (2022-03-14T20:47:31Z) - CateCom: a practical data-centric approach to categorization of
computational models [77.34726150561087]
We present an effort aimed at organizing the landscape of physics-based and data-driven computational models.
We apply object-oriented design concepts and outline the foundations of an open-source collaborative framework.
arXiv Detail & Related papers (2021-09-28T02:59:40Z) - Clustered Federated Learning via Generalized Total Variation
Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z) - How could Neural Networks understand Programs? [67.4217527949013]
It is difficult to build a model to better understand programs, by either directly applying off-the-shelf NLP pre-training techniques to the source code, or adding features to the model by heuristics.
We propose a novel program semantics learning paradigm, that the model should learn from information composed of (1) the representations which align well with the fundamental operations in operational semantics, and (2) the information of environment transition.
arXiv Detail & Related papers (2021-05-10T12:21:42Z) - Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z) - Multifaceted Context Representation using Dual Attention for Ontology
Alignment [6.445605125467574]
Ontology alignment is an important research problem that finds application in various fields such as data integration, data transfer, data preparation etc.
We propose VeeAlign, a Deep Learning based model that uses a dual-attention mechanism to compute the contextualized representation of a concept in order to learn alignments.
We validate our approach on various datasets from different domains and in multilingual settings, and show its superior performance over SOTA methods.
arXiv Detail & Related papers (2020-10-16T18:28:38Z) - Beyond Graph Neural Networks with Lifted Relational Neural Networks [14.63152363481139]
We demonstrate a declarative differentiable programming framework based on the language of Lifted Neural Networks.
Small parameterized programs are used to encode learning.
We show how this idea can be used for an efficient encoding of a diverse range of advanced neural networks.
arXiv Detail & Related papers (2020-07-13T10:10:58Z) - Learning Generalized Relational Heuristic Networks for Model-Agnostic
Planning [29.714818991696088]
This paper develops a new approach for learning generalized heuristics in the absence of symbolic action models.
It uses an abstract state representation to facilitate data efficient, generalizable learning.
arXiv Detail & Related papers (2020-07-10T06:08:28Z) - Neural Entity Linking: A Survey of Models Based on Deep Learning [82.43751915717225]
This survey presents a comprehensive description of recent neural entity linking (EL) systems developed since 2015.
Its goal is to systemize design features of neural entity linking systems and compare their performance to the remarkable classic methods on common benchmarks.
The survey touches on applications of entity linking, focusing on the recently emerged use-case of enhancing deep pre-trained masked language models.
arXiv Detail & Related papers (2020-05-31T18:02:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.