BosonSampling.jl: A Julia package for quantum multi-photon interferometry
- URL: http://arxiv.org/abs/2212.09537v2
- Date: Mon, 29 Apr 2024 09:00:10 GMT
- Title: BosonSampling.jl: A Julia package for quantum multi-photon interferometry
- Authors: Benoit Seron, Antoine Restivo
- Abstract summary: We present a free open source package for high performance simulation and numerical investigation of boson samplers and, more generally, multi-photon interferometry.
Our package is written in Julia, allowing C-like performance with easy notations and fast, high-level coding.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a free open source package for high performance simulation and numerical investigation of boson samplers and, more generally, multi-photon interferometry. Our package is written in Julia, allowing C-like performance with easy notations and fast, high-level coding. Underlying building blocks can easily be modified without complicated low-level language modifications. We present a great variety of routines for tasks related to boson sampling, such as statistical tools, optimization methods, classical samplers and validation tools.
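For context on the kind of computation such a package performs: the probability of a collision-free output pattern in boson sampling is the squared modulus of the permanent of a submatrix of the interferometer unitary. The sketch below is a minimal, illustrative Julia example, not the BosonSampling.jl API; the function names, mode choices, and matrix sizes are hypothetical, and the permanent is computed with Ryser's formula.

```julia
using LinearAlgebra

# Permanent of an n×n matrix via Ryser's formula, O(2^n · n^2).
function ryser_permanent(A::AbstractMatrix)
    n = size(A, 1)
    total = zero(eltype(A))
    for s in 1:(2^n - 1)                          # non-empty column subsets
        cols = [j for j in 1:n if (s >> (j - 1)) & 1 == 1]
        rowsums = [sum(A[i, j] for j in cols) for i in 1:n]
        total += (-1)^length(cols) * prod(rowsums)
    end
    return (-1)^n * total
end

# Collision-free output probability for indistinguishable photons:
# p(out | in) = |perm(U[out, in])|^2, with U the m-mode interferometer unitary.
function output_probability(U::AbstractMatrix, in_modes, out_modes)
    sub = U[collect(out_modes), collect(in_modes)]
    return abs2(ryser_permanent(sub))
end

# Example: two photons enter modes 1 and 2 of a random 4-mode interferometer.
U = Matrix(qr(randn(ComplexF64, 4, 4)).Q)         # unitary obtained from a QR decomposition
println("p(photons exit in modes 3,4) = ", output_probability(U, (1, 2), (3, 4)))
```

BosonSampling.jl provides optimized routines for these and related tasks (statistical tools, classical samplers, validation tools); the snippet only illustrates the underlying permanent-based computation.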
Related papers
- Lighten CARAFE: Dynamic Lightweight Upsampling with Guided Reassemble Kernels [18.729177307412645]
We propose a lightweight upsampling operation, termed Dynamic Lightweight Upsampling (DLU).
Experiments on several mainstream vision tasks show that our DLU achieves performance comparable to, and even better than, the original CARAFE.
arXiv Detail & Related papers (2024-10-29T15:35:14Z) - Learning Submodular Sequencing from Samples [11.528995186765751]
This paper addresses the problem of selecting and ranking items in a sequence to optimize some composite submodular function.
We present an algorithm that achieves an approximation ratio dependent on the curvature of the individual submodular functions.
arXiv Detail & Related papers (2024-09-09T01:33:13Z) - LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit [55.73370804397226]
Quantization, a key compression technique, can effectively mitigate these demands by compressing and accelerating large language models.
We present LLMC, a plug-and-play compression toolkit, to fairly and systematically explore the impact of quantization.
Powered by this versatile toolkit, our benchmark covers three key aspects: calibration data, algorithms (three strategies), and data formats.
arXiv Detail & Related papers (2024-05-09T11:49:05Z) - QUBO.jl: A Julia Ecosystem for Quadratic Unconstrained Binary Optimization [0.0]
QUBO.jl is an end-to-end Julia package for working with QUBO instances.
QUBO.jl allows its users to interface with QUBO-capable hardware and solvers, sending QUBO models in various file formats and retrieving results for subsequent analysis (a generic QUBO sketch is given after this list).
arXiv Detail & Related papers (2023-07-05T18:20:31Z) - Arithmetic Sampling: Parallel Diverse Decoding for Large Language Models [65.52639709094963]
Methods such as beam search and Gumbel top-k sampling can guarantee a different output for each element of the beam, but are not easy to parallelize.
We present a framework for sampling according to an arithmetic code book implicitly defined by a large language model.
arXiv Detail & Related papers (2022-10-18T22:19:41Z) - Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization [49.58290066287418]
We propose a novel method named the Multi-block-Single-probe Variance Reduced (MSVR) estimator to alleviate the complexity of compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on the strong convexity parameter.
arXiv Detail & Related papers (2022-07-18T12:03:26Z) - lambeq: An Efficient High-Level Python Library for Quantum NLP [7.996472424374576]
We present lambeq, the first high-level Python library for Quantum Natural Language Processing (QNLP).
lambeq supports syntactic parsing, rewriting and simplification of string diagrams, ansatz creation and manipulation, as well as a number of compositional models for preparing quantum-friendly representations of sentences.
We test the toolkit in practice by using it to perform a number of experiments on simple NLP tasks, implementing both classical and quantum pipelines.
arXiv Detail & Related papers (2021-10-08T16:40:56Z) - Statistically Meaningful Approximation: a Case Study on Approximating Turing Machines with Transformers [50.85524803885483]
This work proposes a formal definition of statistically meaningful (SM) approximation which requires the approximating network to exhibit good statistical learnability.
We study SM approximation for two function classes: circuits and Turing machines.
arXiv Detail & Related papers (2021-07-28T04:28:55Z) - COAST: COntrollable Arbitrary-Sampling NeTwork for Compressive Sensing [27.870537087888334]
We propose a novel Arbitrary-Sampling neTwork, dubbed COAST, to solve arbitrary-sampling problems (including unseen sampling matrices) with a single model.
COAST handles arbitrary sampling matrices with a single model and achieves state-of-the-art performance at high speed.
arXiv Detail & Related papers (2021-07-15T10:05:00Z) - Coherent randomized benchmarking [68.8204255655161]
We investigate a protocol in which superpositions of different random sequences, rather than independent samples, are used.
We show that this leads to a uniform and simple protocol with significant advantages in terms of the gates that can be benchmarked.
arXiv Detail & Related papers (2020-10-26T18:00:34Z) - Multi-Scale Positive Sample Refinement for Few-Shot Object Detection [61.60255654558682]
Few-shot object detection (FSOD) helps detectors adapt to unseen classes with few training instances.
We propose a Multi-scale Positive Sample Refinement (MPSR) approach to enrich object scales in FSOD.
MPSR generates multi-scale positive samples as object pyramids and refines the prediction at various scales.
arXiv Detail & Related papers (2020-07-18T09:48:29Z)
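As a side note to the QUBO.jl entry above: a QUBO instance asks for a binary vector x in {0,1}^n minimizing x'Qx. The sketch below is a generic, brute-force Julia illustration of that definition; it does not use the QUBO.jl API, and the matrix Q is a made-up two-variable example.

```julia
# Brute-force QUBO solver: minimize x' * Q * x over binary vectors x ∈ {0,1}^n.
# Illustration only; real instances are sent to dedicated solvers or hardware.
function brute_force_qubo(Q::AbstractMatrix)
    n = size(Q, 1)
    best_x, best_val = zeros(Int, n), Inf
    for s in 0:(2^n - 1)
        x = [(s >> (i - 1)) & 1 for i in 1:n]     # the s-th binary assignment
        val = x' * Q * x
        if val < best_val
            best_x, best_val = x, val
        end
    end
    return best_x, best_val
end

Q = [-1.0  2.0;                                   # hypothetical 2-variable instance
      0.0 -1.0]
println(brute_force_qubo(Q))                      # -> ([1, 0], -1.0)
```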
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.