LIVE: LaTex Interactive Visual Editing
- URL: http://arxiv.org/abs/2405.06762v1
- Date: Fri, 10 May 2024 18:28:00 GMT
- Title: LIVE: LaTex Interactive Visual Editing
- Authors: Jinwei Lin
- Abstract summary: We propose LIVE, a novel design methodology for creating interactive LaTeX graphic items.
LIVE can be used to design richer graphic items, which we call Gitems.
To vividly demonstrate the functions of LIVE, we use NeRF-related papers as the example references.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LaTeX coding is one of the main methods of writing an academic paper. When writing a paper, well-chosen visual or graphic components can convey more information than textual data alone. However, most LaTeX graphic items are implemented as static elements, which limits their ability to present informative figures or tables with an interactive reading experience. To address this problem, we propose LIVE, a novel design methodology for creating interactive LaTeX graphic items. To present the main idea of LIVE clearly, we designed several novel interactive implementations with sufficient explanation of the underlying principles. LIVE can be used to design richer graphic items, which we call Gitems, and to easily and automatically obtain the mutual citation relationships within a specific range of papers, which adds more vitality and expressive power to the writing of traditional papers, especially review papers. To vividly demonstrate the functions of LIVE, we use NeRF-related papers as the example references. The code of the implementation project is open source.
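The abstract does not reproduce the paper's Gitem implementations, but the general idea of an interactive LaTeX graphic item can be sketched with standard tooling. The following is a minimal illustration, assuming the widely used `hyperref` and `tikz` packages; it is not the authors' exact mechanism, only one way a clickable graphic can link nodes in an overview figure to reference entries in the same PDF.

```latex
% Minimal sketch (NOT the paper's implementation): an interactive
% graphic item where clicking a node in a TikZ overview jumps to the
% corresponding reference entry, via standard hyperref anchors.
\documentclass{article}
\usepackage{tikz}
\usepackage{hyperref}
\begin{document}

% Overview graphic: each node is a clickable link to a target below.
\begin{tikzpicture}
  \node (live) at (0,0) {\hyperlink{ref:live}{LIVE}};
  \node (nerf) at (4,0) {\hyperlink{ref:nerf}{NeRF}};
  \draw[->] (live) -- (nerf); % "LIVE cites NeRF" edge
\end{tikzpicture}

% Link targets: clicking a node above navigates here.
\hypertarget{ref:live}{\section*{LIVE}}
Reference entry for the LIVE paper.

\hypertarget{ref:nerf}{\section*{NeRF}}
Reference entry for the NeRF paper.

\end{document}
```

Compiled with `pdflatex`, the nodes become clickable regions in the PDF; the same anchor mechanism could, in principle, encode the mutual citation relationships the abstract describes.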
Related papers
- KnowTeX: Visualizing Mathematical Dependencies [1.9531892208117902]
We present KnowTeX, a tool that enables the visualization of conceptual dependencies directly from sources.
Using a simple "uses" command, KnowTeX extracts relationships among statements and generates previewable graphs in DOT and TikZ formats.
We argue that dependency graphs should become a standard feature of mathematical writing, benefiting both human readers and automated systems.
arXiv Detail & Related papers (2025-12-16T18:24:28Z) - Simple Vision-Language Math Reasoning via Rendered Text [7.237955967317942]
We present a lightweight yet effective pipeline for training vision-language models to solve math problems.
This simple text-to-vision augmentation enables compact multimodal architectures to achieve state-of-the-art reasoning accuracy.
arXiv Detail & Related papers (2025-11-12T15:04:44Z) - GraphMind: Interactive Novelty Assessment System for Accelerating Scientific Discovery [20.945875851329244]
GraphMind is an easy-to-use interactive web tool designed to assist users in evaluating the novelty of scientific papers or drafted ideas.
GraphMind enables users to capture the main structure of a scientific paper, explore related ideas through various perspectives, and assess novelty.
arXiv Detail & Related papers (2025-10-17T14:49:07Z) - TALLMesh: a simple application for performing Thematic Analysis with Large Language Models [0.0]
Thematic analysis (TA) is a widely used qualitative research method for identifying and interpreting patterns within textual data.
Recent research has shown that it is possible to perform TA satisfactorily using Large Language Models (LLMs).
This paper presents a novel application using LLMs to assist researchers in conducting TA.
arXiv Detail & Related papers (2025-04-05T15:10:08Z) - LLM as GNN: Graph Vocabulary Learning for Text-Attributed Graph Foundation Models [54.82915844507371]
Text-Attributed Graphs (TAGs) are ubiquitous in real-world scenarios.
Despite large efforts to integrate Large Language Models (LLMs) and Graph Neural Networks (GNNs) for TAGs, existing approaches suffer from decoupled architectures.
We propose PromptGFM, a versatile GFM for TAGs grounded in graph vocabulary learning.
arXiv Detail & Related papers (2025-03-05T09:45:22Z) - GLDesigner: Leveraging Multi-Modal LLMs as Designer for Enhanced Aesthetic Text Glyph Layouts [53.568057283934714]
We propose a VLM-based framework that generates content-aware text logo layouts.
We introduce two model techniques to reduce the computation for processing multiple glyph images simultaneously.
To support instruction tuning of our model, we construct two extensive text logo datasets, which are 5x larger than the existing public dataset.
arXiv Detail & Related papers (2024-11-18T10:04:10Z) - GRAG: Graph Retrieval-Augmented Generation [14.98084919101233]
Graph Retrieval-Augmented Generation (GRAG) tackles the fundamental challenges in retrieving textual subgraphs.
We propose a novel divide-and-conquer strategy that retrieves the optimal subgraph structure in linear time.
Our experiments on graph reasoning benchmarks demonstrate that our GRAG approach significantly outperforms current state-of-the-art RAG methods.
arXiv Detail & Related papers (2024-05-26T10:11:40Z) - Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs [60.71360240206726]
Large language models (LLMs) suffer from hallucinations, especially on knowledge-intensive tasks.
Existing works propose to augment LLMs with individual text units retrieved from external knowledge corpora.
We propose a framework called Graph Chain-of-thought (Graph-CoT) to augment LLMs with graphs by encouraging LLMs to reason on the graph iteratively.
arXiv Detail & Related papers (2024-04-10T15:41:53Z) - Large Language Models on Graphs: A Comprehensive Survey [77.16803297418201]
We provide a systematic review of scenarios and techniques related to large language models on graphs.
We first summarize potential scenarios of adopting LLMs on graphs into three categories, namely pure graphs, text-attributed graphs, and text-paired graphs.
We discuss the real-world applications of such methods and summarize open-source codes and benchmark datasets.
arXiv Detail & Related papers (2023-12-05T14:14:27Z) - mPLUG-PaperOwl: Scientific Diagram Analysis with the Multimodal Large Language Model [73.38800189095173]
This work focuses on strengthening the multi-modal diagram analysis ability of Multimodal LLMs.
By parsing LaTeX source files of high-quality papers, we carefully build M-Paper, a multi-modal diagram understanding dataset.
M-Paper is the first dataset to support joint comprehension of multiple scientific diagrams, including figures and tables in the form of images or LaTeX code.
arXiv Detail & Related papers (2023-11-30T04:43:26Z) - Learning Multiplex Representations on Text-Attributed Graphs with One Language Model Encoder [55.24276913049635]
We propose METAG, a new framework for learning Multiplex rEpresentations on Text-Attributed Graphs.
In contrast to existing methods, METAG uses one text encoder to model the shared knowledge across relations.
We conduct experiments on nine downstream tasks in five graphs from both academic and e-commerce domains.
arXiv Detail & Related papers (2023-10-10T14:59:22Z) - ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and Text Embeddings [20.25180279903009]
We propose Contrastive Graph-Text pretraining (ConGraT) for jointly learning separate representations of texts and nodes in a text-attributed graph (TAG).
Our method trains a language model (LM) and a graph neural network (GNN) to align their representations in a common latent space using a batch-wise contrastive learning objective inspired by CLIP.
Experiments demonstrate that ConGraT outperforms baselines on various downstream tasks, including node and text category classification, link prediction, and language modeling.
arXiv Detail & Related papers (2023-05-23T17:53:30Z) - LAViTeR: Learning Aligned Visual and Textual Representations Assisted by Image and Caption Generation [5.064384692591668]
This paper proposes LAViTeR, a novel architecture for visual and textual representation learning.
The main module, Visual Textual Alignment (VTA) will be assisted by two auxiliary tasks, GAN-based image synthesis and Image Captioning.
The experimental results on two public datasets, CUB and MS-COCO, demonstrate superior visual and textual representation alignment.
arXiv Detail & Related papers (2021-09-04T22:48:46Z) - InfographicVQA [31.084392784258032]
InfographicVQA is a new dataset that comprises a diverse collection of infographics along with natural language question and answer annotations.
We curate the dataset with emphasis on questions that require elementary reasoning and basic arithmetic skills.
The dataset, code and leaderboard will be made available at http://docvqa.org.
arXiv Detail & Related papers (2021-04-26T17:45:54Z) - Leveraging Graph to Improve Abstractive Multi-Document Summarization [50.62418656177642]
We develop a neural abstractive multi-document summarization (MDS) model which can leverage well-known graph representations of documents.
Our model utilizes graphs to encode documents in order to capture cross-document relations, which is crucial to summarizing long documents.
Our model can also take advantage of graphs to guide the summary generation process, which is beneficial for generating coherent and concise summaries.
arXiv Detail & Related papers (2020-05-20T13:39:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.