Multi-convex Programming for Discrete Latent Factor Models Prototyping
- URL: http://arxiv.org/abs/2504.01431v1
- Date: Wed, 02 Apr 2025 07:33:54 GMT
- Title: Multi-convex Programming for Discrete Latent Factor Models Prototyping
- Authors: Hao Zhu, Shengchao Yan, Jasper Hoffmann, Joschka Boedecker
- Abstract summary: We propose a generic framework based on CVXPY, which allows users to specify and solve the fitting problem of a wide range of DLFMs. Our framework is flexible and inherently supports the integration of regularization terms and constraints on the DLFM parameters and latent factors.
- Score: 8.322623345761961
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Discrete latent factor models (DLFMs) are widely used in domains such as machine learning, economics, neuroscience, and psychology. Currently, fitting a DLFM to a dataset relies on a customized solver for each individual model, which requires substantial implementation effort and is limited to the specific DLFM instance it targets. In this paper, we propose a generic framework based on CVXPY, which allows users to specify and solve the fitting problem of a wide range of DLFMs, including both regression and classification models, within a very short script. Our framework is flexible and inherently supports the integration of regularization terms and constraints on the DLFM parameters and latent factors, so that users can easily prototype the DLFM structure according to their dataset and application scenario. We introduce our open-source Python implementation and illustrate the framework with several examples.
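To make the multi-convex idea concrete, here is a minimal sketch, using plain CVXPY, of how alternating fitting of a toy prototype-style DLFM can look. This is our own illustration, not the paper's actual API: the model (X ≈ ZW with one-hot assignments Z), the ridge penalty, and the nonnegativity constraint are all assumptions chosen for the example.

```python
# Minimal sketch of multi-convex alternating fitting for a toy DLFM
# (X ~ Z @ W with one-hot latent factors Z); NOT the paper's actual API.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 5, 3                      # samples, features, latent classes
X = rng.standard_normal((n, d))          # toy data

z = rng.integers(0, k, size=n)           # random initial assignments
for it in range(20):
    # W-step (convex): fit prototypes with an illustrative ridge
    # regularizer and an example constraint (nonnegative prototypes).
    W = cp.Variable((k, d), nonneg=True)
    Z = np.eye(k)[z]                     # one-hot encoding of assignments
    loss = cp.sum_squares(X - Z @ W) + 0.1 * cp.sum_squares(W)
    cp.Problem(cp.Minimize(loss)).solve()
    # Z-step (discrete): reassign each sample to its closest prototype.
    dists = ((X[:, None, :] - W.value[None, :, :]) ** 2).sum(axis=2)
    z_new = dists.argmin(axis=1)
    if np.array_equal(z_new, z):         # stop when assignments stabilize
        break
    z = z_new
```

Because the W-step is an ordinary CVXPY problem, swapping the ridge penalty for an L1 term or adding further linear constraints changes only a line or two, which is the kind of prototyping flexibility the abstract describes.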
Related papers
- Syntactic and Semantic Control of Large Language Models via Sequential Monte Carlo [90.78001821963008]
A wide range of LM applications require generating text that conforms to syntactic or semantic constraints.
We develop an architecture for controlled LM generation based on sequential Monte Carlo (SMC).
Our system builds on the framework of Lew et al. (2023) and integrates with its language model probabilistic programming language.
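As a toy illustration of the SMC idea (our own sketch, not the authors' system): partial sequences act as particles, a constraint potential weights them, and resampling discards violating continuations. The stand-in "LM" and the balanced-parentheses constraint below are hypothetical.

```python
# Toy SMC-steered generation sketch: particles are partial sequences,
# weights encode a hard syntactic constraint, and low-weight particles
# are resampled away. The "LM" and constraint are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["(", ")", "a"]

def lm_probs(prefix):
    # Stand-in for next-token probabilities from a real language model.
    return np.array([0.3, 0.3, 0.4])

def ok(prefix):
    # Constraint potential: prefix must never close an unopened paren.
    depth = 0
    for ch in prefix:
        depth += {"(": 1, ")": -1}.get(ch, 0)
        if depth < 0:
            return 0.0
    return 1.0

n_particles, steps = 64, 8
particles = [""] * n_particles
for _ in range(steps):
    # Propose: extend each particle with a token sampled from the LM.
    particles = [p + rng.choice(VOCAB, p=lm_probs(p)) for p in particles]
    # Weight by the constraint and resample proportionally.
    w = np.array([ok(p) for p in particles])
    assert w.sum() > 0, "all particles violated the constraint"
    idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
    particles = [particles[i] for i in idx]
print(particles[:3])
```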
arXiv Detail & Related papers (2025-04-17T17:49:40Z)
- Few-shot Steerable Alignment: Adapting Rewards and LLM Policies with Neural Processes [50.544186914115045]
Large language models (LLMs) are increasingly embedded in everyday applications.
Ensuring their alignment with the diverse preferences of individual users has become a critical challenge.
We present a novel framework for few-shot steerable alignment.
arXiv Detail & Related papers (2024-12-18T16:14:59Z)
- FDM-Bench: A Comprehensive Benchmark for Evaluating Large Language Models in Additive Manufacturing Tasks [2.473350840334717]
Managing the complex parameters and resolving print defects in Fused Deposition Modeling (FDM) remain challenging.
Large Language Models (LLMs) offer the potential for addressing these challenges in FDM.
FDM-Bench is a benchmark dataset designed to evaluate LLMs on FDM-specific tasks.
arXiv Detail & Related papers (2024-12-13T03:16:14Z)
- xGen-MM (BLIP-3): A Family of Open Large Multimodal Models [157.44696790158784]
This report introduces xGen-MM, a framework for developing Large Multimodal Models (LMMs).
The framework comprises meticulously curated datasets, a training recipe, model architectures, and a resulting suite of LMMs.
Our models undergo rigorous evaluation across a range of tasks, including both single and multi-image benchmarks.
arXiv Detail & Related papers (2024-08-16T17:57:01Z)
- PARAFAC2-based Coupled Matrix and Tensor Factorizations with Constraints [1.0519027757362966]
We introduce a flexible algorithmic framework that fits PARAFAC2-based CMTF models using Alternating Optimization (AO) and the Alternating Direction Method of Multipliers (ADMM).
Experiments on various simulated datasets and a real dataset demonstrate the utility and versatility of the proposed framework.
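The alternating-optimization skeleton can be sketched on a simplified coupled matrix factorization with one shared mode; the PARAFAC2 structure and ADMM-based constraint handling of the actual framework are omitted in this illustrative example.

```python
# Bare-bones alternating optimization (AO) for a coupled factorization
# sharing one mode: X1 ~ A @ B1.T and X2 ~ A @ B2.T. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
r = 4                                    # target rank
X1 = rng.standard_normal((50, 30))       # first coupled matrix
X2 = rng.standard_normal((50, 20))       # second coupled matrix
A = rng.standard_normal((50, r))         # shared factor, random init
Xc = np.hstack([X1, X2])                 # coupled data, (50, 50)

for it in range(50):
    # B-steps: least squares with the shared factor A fixed.
    B1 = np.linalg.lstsq(A, X1, rcond=None)[0].T      # (30, r)
    B2 = np.linalg.lstsq(A, X2, rcond=None)[0].T      # (20, r)
    # A-step: least squares on the stacked coupled system.
    B = np.vstack([B1, B2])                           # (50, r)
    A = np.linalg.lstsq(B, Xc.T, rcond=None)[0].T     # (50, r)

print(np.linalg.norm(Xc - A @ B.T))      # coupled reconstruction error
```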
arXiv Detail & Related papers (2024-06-18T07:05:31Z)
- VANER: Leveraging Large Language Model for Versatile and Adaptive Biomedical Named Entity Recognition [3.4923338594757674]
Large language models (LLMs) can be used to train a model capable of extracting various types of entities.
In this paper, we utilize the open-sourced LLM LLaMA2 as the backbone model, and design specific instructions to distinguish between different types of entities and datasets.
Our model VANER, trained with a small partition of parameters, significantly outperforms previous LLM-based models and is the first LLM-based model to surpass the majority of conventional state-of-the-art BioNER systems.
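The summary does not give the concrete instruction format, but an instruction in this spirit might look like the hypothetical template below; the dataset name, entity type, and wording are placeholders, not taken from the paper.

```python
# Hypothetical instruction template for distinguishing entity types and
# datasets (the actual VANER instructions are not given in this summary).
TEMPLATE = (
    "Below is an instruction that describes a biomedical NER task.\n"
    "Dataset: {dataset}\n"
    "Entity type: {entity_type}\n"
    "Extract all {entity_type} mentions from the input.\n"
    "Input: {text}\n"
    "Output:"
)

prompt = TEMPLATE.format(dataset="BC5CDR", entity_type="Chemical",
                         text="Aspirin reduced the risk of stroke.")
print(prompt)
```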
arXiv Detail & Related papers (2024-04-27T09:00:39Z)
- Model Composition for Multimodal Large Language Models [71.5729418523411]
We propose a new paradigm through the model composition of existing MLLMs to create a new model that retains the modal understanding capabilities of each original model.
Our basic implementation, NaiveMC, demonstrates the effectiveness of this paradigm by reusing modality encoders and merging LLM parameters.
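As a rough sketch of what "merging LLM parameters" can mean in the simplest case (the paper's exact merging rule is not given in this summary), one can take a convex combination of the weights of two same-architecture models while each keeps its own modality encoder:

```python
# Illustrative parameter-merging step: convex combination of two
# fine-tuned copies of the same base model. Our reading, not the
# paper's exact rule.
import torch

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Convex combination of two state dicts with identical keys/shapes."""
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}

# Usage with two toy "LLMs" (same architecture, different weights):
llm_a = torch.nn.Linear(8, 8)
llm_b = torch.nn.Linear(8, 8)
merged = torch.nn.Linear(8, 8)
merged.load_state_dict(merge_state_dicts(llm_a.state_dict(),
                                          llm_b.state_dict()))
```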
arXiv Detail & Related papers (2024-02-20T06:38:10Z)
- Optimal Event Monitoring through Internet Mashup over Multivariate Time Series [77.34726150561087]
This framework provides services for model definition, querying, parameter learning, model evaluation, data monitoring, decision recommendation, and web portals.
We further extend the MTSA data model and query language to support this class of problems for the services of learning, monitoring, and recommendation.
arXiv Detail & Related papers (2022-10-18T16:56:17Z)
- Learning Structured Latent Factors from Dependent Data: A Generative Model Framework from Information-Theoretic Perspective [18.88255368184596]
We present a novel framework for learning generative models with various underlying structures in the latent space.
Our model provides a principled approach to learn a set of semantically meaningful latent factors that reflect various types of desired structures.
arXiv Detail & Related papers (2020-07-21T06:59:29Z)
- Relating by Contrasting: A Data-efficient Framework for Multimodal Generative Models [86.9292779620645]
We develop a contrastive framework for generative model learning, allowing us to train the model not just by the commonality between modalities, but by the distinction between "related" and "unrelated" multimodal data.
Under our proposed framework, the generative model can accurately identify related samples from unrelated ones, making it possible to make use of the plentiful unlabeled, unpaired multimodal data.
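A minimal sketch of such a related-vs-unrelated contrastive objective, as an InfoNCE-style stand-in rather than the paper's exact loss: paired cross-modal embeddings are pulled together while unrelated in-batch pairs are pushed apart.

```python
# InfoNCE-style contrastive loss over paired multimodal embeddings;
# an illustrative stand-in, not the paper's exact objective.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.1):
    """img_emb[i] and txt_emb[i] form a related pair; other rows are unrelated."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.T / temperature      # (B, B) similarity matrix
    labels = torch.arange(img.shape[0])     # diagonal = related pairs
    return F.cross_entropy(logits, labels)

# Usage with random stand-in embeddings:
loss = contrastive_loss(torch.randn(16, 64), torch.randn(16, 64))
print(loss.item())
```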
arXiv Detail & Related papers (2020-07-02T15:08:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.