Oversmoothing, Oversquashing, Heterophily, Long-Range, and more: Demystifying Common Beliefs in Graph Machine Learning
- URL: http://arxiv.org/abs/2505.15547v2
- Date: Sat, 14 Jun 2025 07:29:48 GMT
- Title: Oversmoothing, Oversquashing, Heterophily, Long-Range, and more: Demystifying Common Beliefs in Graph Machine Learning
- Authors: Adrian Arnaiz-Rodriguez, Federico Errica
- Abstract summary: We focus on the topics of oversmoothing and oversquashing, the homophily-heterophily dichotomy, and long-range tasks. We argue that this has led to ambiguities around the investigated problems, preventing researchers from focusing on and addressing precise research questions. Our contribution wants to make such common beliefs explicit and encourage critical thinking around these topics, supported by simple but noteworthy counterexamples.
- Score: 4.020829863982153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: After a renaissance phase in which researchers revisited the message-passing paradigm through the lens of deep learning, the graph machine learning community shifted its attention towards a deeper and practical understanding of message-passing's benefits and limitations. In this position paper, we notice how the fast pace of progress around the topics of oversmoothing and oversquashing, the homophily-heterophily dichotomy, and long-range tasks, came with the consolidation of commonly accepted beliefs and assumptions that are not always true nor easy to distinguish from each other. We argue that this has led to ambiguities around the investigated problems, preventing researchers from focusing on and addressing precise research questions while causing a good amount of misunderstandings. Our contribution wants to make such common beliefs explicit and encourage critical thinking around these topics, supported by simple but noteworthy counterexamples. The hope is to clarify the distinction between the different issues and promote separate but intertwined research directions to address them.
Related papers
- Not All Explanations for Deep Learning Phenomena Are Equally Valuable [58.7010466783654]
We argue that there is little evidence to suggest that counterintuitive phenomena appear in real-world applications. These include double descent, grokking, and the lottery ticket hypothesis. We propose practical recommendations for future research, aiming to ensure that progress on deep learning phenomena is well aligned with the ultimate pragmatic goal of progress in the broader field of deep learning.
arXiv Detail & Related papers (2025-06-29T15:18:56Z)
- Navigating Shortcuts, Spurious Correlations, and Confounders: From Origins via Detection to Mitigation [21.21130450731374]
Clever Hans behavior, spurious correlations, or confounders present a significant challenge in machine learning and AI. Research in this area remains fragmented across various terminologies, hindering the progress of the field as a whole. We introduce a unifying taxonomy by providing a formal definition of shortcuts and bridging the diverse terms used in the literature.
arXiv Detail & Related papers (2024-12-06T16:10:13Z)
- SoK: Taming the Triangle -- On the Interplays between Fairness, Interpretability and Privacy in Machine Learning [0.0]
Machine learning techniques are increasingly used for high-stakes decision-making.
It is crucial to ensure that the models learnt can be audited or understood by human users.
Interpretability, fairness, and privacy are key requirements for the development of responsible machine learning.
arXiv Detail & Related papers (2023-12-22T08:11:33Z) - A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning [58.107474025048866]
Forgetting refers to the loss or deterioration of previously acquired knowledge.
Forgetting is a prevalent phenomenon observed in various other research domains within deep learning.
arXiv Detail & Related papers (2023-07-16T16:27:58Z) - A Survey on Intersectional Fairness in Machine Learning: Notions,
Mitigation, and Challenges [11.885166133818819]
Adoption of Machine Learning systems has led to increased concerns about fairness implications.
We present a taxonomy for intersectional notions of fairness and mitigation.
We identify the key challenges and provide researchers with guidelines for future directions.
arXiv Detail & Related papers (2023-05-11T16:49:22Z) - Causal Deep Learning [77.49632479298745]
Causality has the potential to transform the way we solve real-world problems.
But causality often requires crucial assumptions which cannot be tested in practice.
We propose a new way of thinking about causality -- we call this causal deep learning.
arXiv Detail & Related papers (2023-03-03T19:19:18Z) - Causal Triplet: An Open Challenge for Intervention-centric Causal
Representation Learning [98.78136504619539]
Causal Triplet is a causal representation learning benchmark featuring visually more complex scenes.
We show that models built with the knowledge of disentangled or object-centric representations significantly outperform their distributed counterparts.
arXiv Detail & Related papers (2023-01-12T17:43:38Z) - Parsing Objects at a Finer Granularity: A Survey [54.72819146263311]
Fine-grained visual parsing is important in many real-world applications, e.g., agriculture, remote sensing, and space technologies.
Predominant research efforts tackle these fine-grained sub-tasks following different paradigms.
We conduct an in-depth study of the advanced work from a new perspective of learning the part relationship.
arXiv Detail & Related papers (2022-12-28T04:20:10Z) - Towards Causal Representation Learning [96.110881654479]
The two fields of machine learning and graphical causality arose and developed separately.
There is now cross-pollination and increasing interest in both fields to benefit from the advances of the other.
arXiv Detail & Related papers (2021-02-22T15:26:57Z) - Feedback in Imitation Learning: Confusion on Causality and Covariate
Shift [12.93527098342393]
We argue that conditioning policies on previous actions leads to a dramatic divergence between "held out" error and performance of the learner in situ.
We analyze existing benchmarks used to test imitation learning approaches.
We find, in a surprising contrast with previous literature, that naive behavioral cloning provides excellent results.
arXiv Detail & Related papers (2021-02-04T20:18:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.