Ghost factors in Gauss-sum factorization with transmon qubits
- URL: http://arxiv.org/abs/2104.11368v3
- Date: Thu, 9 Dec 2021 01:31:41 GMT
- Title: Ghost factors in Gauss-sum factorization with transmon qubits
- Authors: Lin Htoo Zaw, Yuanzheng Paul Tan, Long Hoang Nguyen, Rangga P. Budoyo,
Kun Hee Park, Zhi Yang Koh, Alessandro Landra, Christoph Hufnagel, Yung Szen
Yap, Teck Seng Koh, Rainer Dumke
- Abstract summary: We investigate Type II ghost factors, which are the class of ghost factors that cannot be suppressed.
The presence of Type II ghost factors and the coherence time of the qubit set an upper limit for the total experiment time.
We introduce preprocessing as a strategy to increase the discernability of a system, and demonstrate the technique with a transmon qubit.
- Score: 44.62475518267084
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: A challenge in the Gauss sums factorization scheme is the presence of ghost
factors - non-factors that behave similarly to actual factors of an integer -
which might lead to the misidentification of non-factors as factors or vice
versa, especially in the presence of noise. We investigate Type II ghost
factors, which are the class of ghost factors that cannot be suppressed with
techniques previously laid out in the literature. The presence of Type II ghost
factors and the coherence time of the qubit set an upper limit for the total
experiment time, and hence the largest factorizable number with this scheme.
Discernability is a figure of merit introduced to characterize this behavior.
We introduce preprocessing as a strategy to increase the discernability of a
system, and demonstrate the technique with a transmon qubit. This can bring the
total experiment time of the system closer to its decoherence limit, and
increase the largest factorizable number.
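The scheme tests each trial factor ℓ by evaluating a truncated Gauss sum, whose magnitude stays at 1 when ℓ divides N and is suppressed otherwise; ghost factors are non-factors whose magnitude nonetheless remains near 1. A minimal numeric sketch of this signal (a classical simulation, not the authors' transmon implementation; the 1/√2 acceptance threshold is an assumption borrowed from the wider Gauss-sum factorization literature):

```python
import cmath

def gauss_sum(N, ell, M):
    """Truncated Gauss sum A_N^(M)(ell) = (1/(M+1)) * sum_{m=0}^{M} exp(2*pi*i*m^2*N/ell).

    If ell divides N, every term equals 1 and |A| = 1; for most non-factors
    the phases interfere destructively and |A| is suppressed.
    """
    total = sum(cmath.exp(2j * cmath.pi * (m * m * N) / ell) for m in range(M + 1))
    return total / (M + 1)

N, M = 15, 10
THRESHOLD = 1 / 2 ** 0.5  # assumed acceptance threshold for |A|

for ell in range(2, 8):
    mag = abs(gauss_sum(N, ell, M))
    verdict = "factor candidate" if mag >= THRESHOLD else "rejected"
    print(f"ell = {ell}: |A| = {mag:.3f}  ({verdict})")
```

With these parameters the true factors ℓ = 3 and ℓ = 5 give |A| = 1, while non-factors such as ℓ = 2 and ℓ = 7 are strongly suppressed; ℓ = 4 sits only marginally above the threshold despite not dividing 15, illustrating the kind of ghost-factor ambiguity the paper addresses, which worsens when the truncation M is limited by qubit coherence time.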
Related papers
- Efficient Detection of Commutative Factors in Factor Graphs [1.1323769002489257]
We introduce the detection of commutative factors (DECOR) algorithm, which allows us to drastically reduce the computational effort for checking whether a factor is commutative in practice.
We prove that DECOR efficiently identifies restrictions to drastically reduce the number of required iterations and validate the efficiency of DECOR in our empirical evaluation.
arXiv Detail & Related papers (2024-07-23T08:31:24Z)
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z) - Identification of Causal Structure with Latent Variables Based on Higher
Order Cumulants [31.85295338809117]
We propose a novel approach to identify the existence of a causal edge between two observed variables subject to latent variable influence.
When such a causal edge exists, we introduce an asymmetry criterion to determine the causal direction.
arXiv Detail & Related papers (2023-12-19T08:20:19Z) - C-Disentanglement: Discovering Causally-Independent Generative Factors
under an Inductive Bias of Confounder [35.09708249850816]
We introduce a framework entitled Confounded-Disentanglement (C-Disentanglement), the first framework that explicitly introduces the inductive bias of confounder.
We conduct extensive experiments on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-10-26T11:44:42Z) - Quantum and Classical Combinatorial Optimizations Applied to
Lattice-Based Factorization [0.046040036610482664]
We show that lattice-based factoring does not scale successfully to larger numbers.
We consider particular cases of the CVP, and opportunities for applying quantum techniques to other parts of the factorization pipeline.
arXiv Detail & Related papers (2023-08-15T14:31:25Z) - Maximal Ordinal Two-Factorizations [0.0]
We show that deciding on the existence of two-factorizations of a given size is an NP-complete problem.
We provide the algorithm Ord2Factor that allows us to compute large ordinal two-factorizations.
arXiv Detail & Related papers (2023-04-06T19:26:03Z) - Interventional Causal Representation Learning [75.18055152115586]
Causal representation learning seeks to extract high-level latent factors from low-level sensory data.
Can interventional data facilitate causal representation learning?
We show that interventional data often carries geometric signatures of the latent factors' support.
arXiv Detail & Related papers (2022-09-24T04:59:03Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Nested Counterfactual Identification from Arbitrary Surrogate Experiments [95.48089725859298]
We study the identification of nested counterfactuals from an arbitrary combination of observations and experiments.
Specifically, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrary nested counterfactuals to unnested ones.
arXiv Detail & Related papers (2021-07-07T12:51:04Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.