Relation-Oriented: Toward Causal Knowledge-Aligned AGI
- URL: http://arxiv.org/abs/2307.16387v9
- Date: Mon, 23 Oct 2023 23:09:20 GMT
- Title: Relation-Oriented: Toward Causal Knowledge-Aligned AGI
- Authors: Jia Li, Xiang Li
- Abstract summary: The Relation-Oriented paradigm is aimed at facilitating the development of causal knowledge-aligned Artificial General Intelligence (AGI).
As its methodological counterpart, the proposed Relation-Indexed Representation Learning (RIRL) is validated through efficacy experiments.
- Score: 24.76814726122543
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Observation-Oriented paradigm currently dominates relationship learning
models, including AI-based ones; such models inherently do not account for
relationships with temporally nonlinear effects. Instead, this paradigm
simplifies the "temporal dimension" to be a linear observational timeline,
necessitating the prior identification of effects with specific timestamps.
Such constraints lead to identifiability difficulties for dynamical effects,
thereby overlooking the potentially crucial temporal nonlinearity of the
modeled relationship. Moreover, the multi-dimensional nature of Temporal
Feature Space is largely disregarded, introducing inherent biases that
seriously compromise the robustness and generalizability of relationship
models. This limitation is particularly pronounced in large AI-based causal
applications.
When these issues are examined through the lens of a dimensionality framework, a
fundamental misalignment is identified between our relation-indexing
comprehension of knowledge and the current modeling paradigm. To address this,
a new Relation-Oriented paradigm is proposed, aimed at facilitating the
development of causal knowledge-aligned Artificial General Intelligence (AGI).
As its methodological counterpart, the proposed Relation-Indexed Representation
Learning (RIRL) is validated through efficacy experiments.
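To make the abstract's "fixed timestamp" constraint concrete, here is a toy sketch (not the paper's method; the drifting-delay setup and all variable names are invented for illustration): when the cause-to-effect delay itself drifts over time, no single pre-specified lag explains the series well, which is the identifiability difficulty described above.
```python
# Toy illustration only: fixed-lag regression vs. a temporally nonlinear effect.
import numpy as np

rng = np.random.default_rng(0)
T = 500
x = rng.normal(size=T)

# Ground truth: the delay between cause and effect drifts over time.
delay = np.clip(np.round(3 + 2 * np.sin(np.arange(T) / 50)).astype(int), 1, 5)
y = np.array([x[t - delay[t]] if t >= delay[t] else 0.0 for t in range(T)])
y += 0.1 * rng.normal(size=T)

def fixed_lag_r2(lag):
    """Observation-oriented fit: regress y[t] on x[t - lag] for one fixed lag."""
    yy, xx = y[lag:], x[:-lag]
    coef = np.dot(xx, yy) / np.dot(xx, xx)
    resid = yy - coef * xx
    return 1 - resid.var() / yy.var()

for lag in range(1, 6):
    print(f"lag={lag}: R^2={fixed_lag_r2(lag):.2f}")  # no single lag recovers the relation well
```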
Related papers
- Neural Persistence Dynamics [8.197801260302642]
We consider the problem of learning the dynamics in the topology of time-evolving point clouds.
Our proposed model -- Neural Persistence Dynamics -- substantially outperforms the state-of-the-art across a diverse set of parameter regression tasks.
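For context on the objects involved, the sketch below computes a per-frame topological summary of a toy time-evolving point cloud; it assumes the `ripser` package and is not the paper's model, which learns the dynamics rather than hand-summarizing them.
```python
# Toy sketch: one topological summary per time step of an evolving point cloud.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(1)
frames = [rng.normal(size=(100, 2)) * (1 + 0.1 * t) for t in range(10)]  # toy time-evolving point cloud

def total_persistence(points, dim=1):
    """Sum of (death - birth) over finite points of the dimension-`dim` persistence diagram."""
    dgm = ripser(points, maxdim=dim)["dgms"][dim]
    finite = dgm[np.isfinite(dgm[:, 1])]
    return float(np.sum(finite[:, 1] - finite[:, 0]))

summary = [total_persistence(p) for p in frames]  # a model like the paper's would learn from such sequences
print(summary)
```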
arXiv Detail & Related papers (2024-05-24T17:20:18Z) - On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose an IDentification framework for instantaneOus Latent dynamics.
arXiv Detail & Related papers (2024-05-24T08:08:05Z) - TS-CausalNN: Learning Temporal Causal Relations from Non-linear Non-stationary Time Series Data [0.42156176975445486]
We propose a Time-Series Causal Neural Network (TS-CausalNN) to discover contemporaneous and lagged causal relations simultaneously.
In addition to the simple parallel design, an advantage of the proposed model is that it naturally handles the non-stationarity and non-linearity of the data.
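As a much-simplified point of reference (not the TS-CausalNN architecture; the linear lag-1 form, the matrices A0/A1, and the sparsity penalty are illustrative assumptions), contemporaneous and lagged edges can be fit jointly like this:
```python
# Simplified linear analogue: learn instantaneous (A0) and lag-1 (A1) adjacency jointly.
import torch

d, T = 5, 400
x = torch.randn(T, d)  # placeholder multivariate time series

A0 = torch.zeros(d, d, requires_grad=True)  # contemporaneous relations
A1 = torch.zeros(d, d, requires_grad=True)  # lag-1 relations
opt = torch.optim.Adam([A0, A1], lr=1e-2)
mask = 1 - torch.eye(d)                     # forbid self-loops at lag 0

for _ in range(200):
    opt.zero_grad()
    pred = x[1:] @ (A0 * mask) + x[:-1] @ A1
    loss = ((x[1:] - pred) ** 2).mean() + 1e-2 * (A0.abs().sum() + A1.abs().sum())
    loss.backward()
    opt.step()

# Nonzero entries of A0 / A1 are read as contemporaneous / lagged edges; the actual
# paper additionally handles nonlinearity, non-stationarity, and acyclicity.
```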
arXiv Detail & Related papers (2024-04-01T20:33:29Z) - A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error of overparameterized models that achieve effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
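For orientation, a classical McAllester-style PAC-Bayes bound (not the specific bound derived in that paper) states that, with probability at least 1 - δ over an i.i.d. sample of size n, every posterior Q over hypotheses satisfies
```latex
\mathbb{E}_{h \sim Q}[R(h)] \;\le\; \mathbb{E}_{h \sim Q}[\hat{R}(h)]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}
```
where R is the population risk, \hat{R} the empirical risk, and P a data-independent prior.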
arXiv Detail & Related papers (2023-11-13T01:48:08Z) - Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z) - TC-GAT: Graph Attention Network for Temporal Causality Discovery [6.974417592057705]
Causality is frequently intertwined with temporal elements, as the progression from cause to effect is not instantaneous but rather unfolds along a temporal dimension.
We propose a method for extracting causality from text that integrates both temporal and causal relations.
We present a novel model, TC-GAT, which employs a graph attention mechanism to assign weights to the temporal relationships and leverages a causal knowledge graph to determine the adjacency matrix.
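The sketch below shows generic single-head graph attention over a fixed adjacency matrix, as a rough analogue of the mechanism described (the adjacency values, dimensions, and weight initialization are assumptions; this is not the TC-GAT model itself):
```python
# Generic GAT-style attention restricted to edges from a (toy) knowledge graph.
import torch
import torch.nn.functional as F

n, d = 4, 8
h = torch.randn(n, d)                       # node features (e.g., event embeddings)
adj = torch.tensor([[1., 1., 0., 0.],
                    [0., 1., 1., 0.],
                    [0., 0., 1., 1.],
                    [0., 0., 0., 1.]])      # edges supplied by a toy causal knowledge graph

W = torch.randn(d, d) * 0.1
a = torch.randn(2 * d) * 0.1
z = h @ W
pairs = torch.stack([torch.stack([a @ torch.cat([z[i], z[j]]) for j in range(n)])
                     for i in range(n)])              # raw score for every (i, j) pair
scores = pairs.masked_fill(adj == 0, float("-inf"))   # keep only knowledge-graph edges
alpha = F.softmax(F.leaky_relu(scores), dim=1)        # attention weights per source node
out = alpha @ z                                       # re-weighted neighbor aggregation
```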
arXiv Detail & Related papers (2023-04-21T02:26:42Z) - Temporal Relevance Analysis for Video Action Models [70.39411261685963]
We first propose a new approach to quantify the temporal relationships between frames captured by CNN-based action models.
We then conduct comprehensive experiments and in-depth analysis to provide a better understanding of how temporal modeling is affected.
arXiv Detail & Related papers (2022-04-25T19:06:48Z) - Disentangling Observed Causal Effects from Latent Confounders using
Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z) - Supporting Optimal Phase Space Reconstructions Using Neural Network
Architecture for Time Series Modeling [68.8204255655161]
We propose an artificial neural network with a mechanism to implicitly learn the properties of the phase space.
Our approach is competitive with or better than most state-of-the-art strategies.
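For background, the classical explicit counterpart of what such a network learns implicitly is delay-coordinate (Takens) embedding; the embedding dimension and delay below are illustrative choices.
```python
# Classical delay-coordinate (Takens) embedding of a scalar series.
import numpy as np

def delay_embed(series, dim=3, tau=5):
    """Stack time-delayed copies of a scalar series into phase-space vectors."""
    n = len(series) - (dim - 1) * tau
    return np.stack([series[i * tau : i * tau + n] for i in range(dim)], axis=1)

t = np.linspace(0, 60, 3000)
x = np.sin(t) + 0.5 * np.sin(2.2 * t)      # toy scalar observation
states = delay_embed(x, dim=3, tau=5)      # reconstructed phase-space trajectory
print(states.shape)                        # (2990, 3)
```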
arXiv Detail & Related papers (2020-06-19T21:04:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.