Adapting the LodView RDF Browser for Navigation over the Multilingual
Linguistic Linked Open Data Cloud
- URL: http://arxiv.org/abs/2208.13295v2
- Date: Tue, 30 Aug 2022 01:09:06 GMT
- Title: Adapting the LodView RDF Browser for Navigation over the Multilingual
Linguistic Linked Open Data Cloud
- Authors: Alexander Kirillovich and Konstantin Nikolaev
- Abstract summary: The paper is dedicated to the use of LodView for navigation over the multilingual Linguistic Linked Open Data cloud.
We define the class of Pubby-like tools that LodView belongs to, and clarify the relation of this class to the classes of URI dereferencing tools, RDF browsers and LOD visualization tools.
- Score: 77.34726150561087
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The paper is dedicated to the use of LodView for navigation over the
multilingual Linguistic Linked Open Data cloud. First, we define the class of
Pubby-like tools that LodView belongs to, and clarify the relation of this
class to the classes of URI dereferencing tools, RDF browsers and LOD
visualization tools. Second, we reveal several limitations of LodView that
impede its use for the designated purpose, and propose improvements to fix
these limitations. These improvements are: 1) resolution of Cyrillic
URIs; 2) decoding Cyrillic URIs in Turtle representations of resources; 3)
support of Cyrillic literals; 4) user-friendly URLs for RDF representations of
resources; 5) support of hash URIs; 6) expanding nested resources; 7) support
of RDF collections; 8) pagination of resource property values; and 9) support
of $\LaTeX$ math notation. Third, we partially implement several of the
proposed improvements.
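To make the Cyrillic-URI improvements (items 1 and 2) concrete, the sketch below shows one way to decode percent-encoded Cyrillic URIs inside a Turtle document so that resource names stay readable. It is an illustrative Python fragment, not a patch to LodView's Java codebase, and the example.org namespace and resource are hypothetical.

```python
# Minimal sketch, assuming percent-encoded UTF-8 URIs inside <...> tokens.
# Not LodView's actual implementation; the example.org names are hypothetical.
import re
from urllib.parse import unquote

def decode_cyrillic_uris(turtle: str) -> str:
    """Decode percent-escapes inside <...> URI tokens of a Turtle document,
    leaving everything else (prefixes, literals, punctuation) untouched."""
    def _decode(match: re.Match) -> str:
        return "<" + unquote(match.group(1)) + ">"
    # Only rewrite URI tokens that actually contain a percent-escape.
    return re.sub(r"<([^>]*%[0-9A-Fa-f]{2}[^>]*)>", _decode, turtle)

turtle_doc = (
    '@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .\n'
    '<http://example.org/%D0%9A%D0%B0%D0%B7%D0%B0%D0%BD%D1%8C> '
    'rdfs:label "Казань"@ru .\n'
)
print(decode_cyrillic_uris(turtle_doc))
# ... <http://example.org/Казань> rdfs:label "Казань"@ru .
```

An actual fix in LodView would apply the same idea inside its Java serialization layer when rendering the Turtle and HTML views of a resource.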
Related papers
- RAGViz: Diagnose and Visualize Retrieval-Augmented Generation [16.91653397201039]
Retrieval-augmented generation (RAG) integrates knowledge from domain-specific sources into large language models.
We propose RAGViz, a RAG diagnosis tool that visualizes the attentiveness of the generated tokens in retrieved documents.
RAGViz provides two main functionalities: (1) token and document-level attention visualization, and (2) generation comparison upon context document addition and removal.
arXiv Detail & Related papers (2024-11-04T02:30:05Z)
- SV-RAG: LoRA-Contextualizing Adaptation of MLLMs for Long Document Understanding [103.69014172427026]
Multimodal large language models (MLLMs) have recently shown great progress in text-rich image understanding, yet they still struggle with complex, multi-page visually-rich documents.
We present a novel framework named Self-Visual Retrieval-Augmented Generation (SV-RAG), which can broaden the horizons of any MLLM to support long-document understanding.
arXiv Detail & Related papers (2024-11-02T02:09:01Z)
- MURI: High-Quality Instruction Tuning Datasets for Low-Resource Languages via Reverse Instructions [54.08017526771947]
Multilingual Reverse Instructions (MURI) generates high-quality instruction tuning datasets for low-resource languages.
MURI produces instruction-output pairs from existing human-written texts in low-resource languages.
Our dataset, MURI-IT, includes more than 2 million instruction-output pairs across 200 languages.
arXiv Detail & Related papers (2024-09-19T17:59:20Z)
- RAFT: Adapting Language Model to Domain Specific RAG [75.63623523051491]
We present Retrieval Augmented FineTuning (RAFT), a training recipe that improves the model's ability to answer questions in an "open-book", in-domain setting.
RAFT accomplishes this by citing verbatim the right sequence from the relevant document that would help answer the question.
RAFT consistently improves the model's performance across PubMed, HotpotQA, and Gorilla datasets.
arXiv Detail & Related papers (2024-03-15T09:26:02Z)
- Language-enhanced RNR-Map: Querying Renderable Neural Radiance Field maps with natural language [51.805056586678184]
We present a Language-enhanced Renderable Neural Radiance map for Visual Navigation with natural language query prompts.
Le-RNR-Map employs a grid structure comprising latent codes positioned at each pixel.
We enhance RNR-Map with CLIP-based embedding latent codes, allowing natural language search without additional label data.
arXiv Detail & Related papers (2023-08-17T08:27:01Z)
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest [51.68383826362895]
We propose spatial instruction tuning, which introduces the reference to the region-of-interest (RoI) in the instruction.
Our model GPT4RoI, trained on 7 region-text pair datasets, brings an unprecedented interactive and conversational experience.
arXiv Detail & Related papers (2023-07-07T13:43:44Z)
- Large Language Models are Built-in Autoregressive Search Engines [19.928494069013485]
Large language models (LLMs) can follow human instructions to directly generate URLs for document retrieval.
LLMs can generate Web URLs where nearly 90% of the corresponding documents contain correct answers to open-domain questions.
arXiv Detail & Related papers (2023-05-16T17:04:48Z)
- SOAT: A Scene- and Object-Aware Transformer for Vision-and-Language Navigation [57.12508968239015]
This work presents a transformer-based vision-and-language navigation (VLN) agent.
It uses two different visual encoders -- a scene classification network and an object detector.
Scene features contribute high-level contextual information that supports object-level processing.
arXiv Detail & Related papers (2021-10-27T03:29:34Z)
- Introducing the viewpoint in the resource description using machine learning [0.0]
We propose a new approach that allows converting a classic RDF resource description into a resource description that takes viewpoints into consideration.
An experimental study shows that the conversion makes it possible to give highly relevant responses to the user's requests.
arXiv Detail & Related papers (2021-09-27T18:54:57Z)
- StreamSide: A Fully-Customizable Open-Source Toolkit for Efficient Annotation of Meaning Representations [17.74208462902158]
StreamSide is an open-source toolkit for annotating multiple kinds of meaning representations.
It supports frame-based and frameless annotation schemes.
StreamSide is released under the Apache 2.0 license.
arXiv Detail & Related papers (2021-09-20T21:36:22Z)
- MURAL: Multimodal, Multitask Retrieval Across Languages [14.323816604663053]
MURAL is a dual encoder that solves two tasks: image-text matching and translation pair matching.
By incorporating billions of translation pairs, MURAL extends ALIGN (Jia et al., PMLR'21), a state-of-the-art dual encoder learned from 1.8 billion noisy image-text pairs.
It considerably improves performance on under-resourced languages, showing that text-text learning can overcome a paucity of image-caption examples for these languages; a generic sketch of such a dual-encoder setup appears after this entry.
arXiv Detail & Related papers (2021-09-10T22:26:05Z)
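The sketch below illustrates the dual-encoder idea described in the MURAL entry: two contrastive matching tasks (image-text and translation-pair) sharing a text tower. It is a generic illustration, not MURAL's implementation; the projection sizes, random input features and temperature are assumptions made for the example.

```python
# Generic dual-encoder sketch with two contrastive objectives.
# NOT the MURAL authors' code; dimensions and temperature are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    def __init__(self, img_dim=512, txt_dim=300, embed_dim=256):
        super().__init__()
        self.image_proj = nn.Linear(img_dim, embed_dim)  # image tower head
        self.text_proj = nn.Linear(txt_dim, embed_dim)   # shared text tower head
        self.temperature = 0.07

    def contrastive(self, a, b):
        # Symmetric InfoNCE over in-batch negatives; matching pairs share an index.
        a = F.normalize(a, dim=-1)
        b = F.normalize(b, dim=-1)
        logits = a @ b.t() / self.temperature
        targets = torch.arange(a.size(0))
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2

    def forward(self, image_feats, caption_feats, src_feats, tgt_feats):
        # Task 1: image-text matching.
        itm_loss = self.contrastive(self.image_proj(image_feats),
                                    self.text_proj(caption_feats))
        # Task 2: translation-pair matching (text-text), reusing the text tower.
        ttm_loss = self.contrastive(self.text_proj(src_feats),
                                    self.text_proj(tgt_feats))
        return itm_loss + ttm_loss

model = DualEncoder()
loss = model(torch.randn(8, 512), torch.randn(8, 300),
             torch.randn(8, 300), torch.randn(8, 300))
loss.backward()
```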