Challenges and Applications of Large Language Models: A Comparison of GPT and DeepSeek family of models
- URL: http://arxiv.org/abs/2508.21377v1
- Date: Fri, 29 Aug 2025 07:41:04 GMT
- Title: Challenges and Applications of Large Language Models: A Comparison of GPT and DeepSeek family of models
- Authors: Shubham Sharma, Sneha Tuli, Narendra Badam
- Abstract summary: Large Language Models (LLMs) are transforming AI across industries, but their development and deployment remain complex. This survey reviews 16 key challenges in building and using LLMs. We compare OpenAI's closed source GPT-4o and DeepSeek-V3-0324, a large open source Mixture-of-Experts model.
- Score: 1.2952310333454082
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) are transforming AI across industries, but their development and deployment remain complex. This survey reviews 16 key challenges in building and using LLMs and examines how these challenges are addressed by two state-of-the-art models with unique approaches: OpenAI's closed source GPT-4o (May 2024 update) and DeepSeek-V3-0324 (March 2025), a large open source Mixture-of-Experts model. Through this comparison, we showcase the trade-offs between closed source models (robust safety, fine-tuned reliability) and open source models (efficiency, adaptability). We also explore LLM applications across different domains (from chatbots and coding tools to healthcare and education), highlighting which model attributes are best suited for each use case. This article aims to guide AI researchers, developers, and decision-makers in understanding current LLM capabilities, limitations, and best practices.
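To make the architectural contrast in this abstract concrete, here is a minimal sketch of the Mixture-of-Experts (MoE) routing idea behind models like DeepSeek-V3: a learned router scores the experts for each token, and only the top-k experts run. All sizes, weights, and names below are illustrative assumptions, not details taken from the paper.

```python
# Minimal, illustrative sketch of Mixture-of-Experts (MoE) routing.
# Sizes and the random "experts" are toy assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

n_experts, d_model, top_k = 8, 16, 2
# Each "expert" is a small feed-forward weight matrix (toy stand-in).
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.1  # gating projection

def moe_forward(x):
    """Route a token vector to its top-k experts and mix their outputs."""
    scores = softmax(x @ router)              # affinity of token to each expert
    chosen = np.argsort(scores)[-top_k:]      # indices of the k best experts
    weights = scores[chosen] / scores[chosen].sum()  # renormalized gate weights
    # Only the chosen experts run, which is why MoE models activate a small
    # fraction of their total parameters per token.
    return sum(w * (x @ experts[i]) for i, w in zip(chosen, weights))

token = rng.standard_normal(d_model)
print(moe_forward(token).shape)  # (16,)
```

Because only `top_k` of `n_experts` expert networks execute per token, an MoE model can hold far more total parameters than it activates on any single forward pass, which is the efficiency trade-off the survey highlights for open source MoE models.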
Related papers
- Open-Source Multimodal Moxin Models with Moxin-VLM and Moxin-VLA [53.68989489261506]
Moxin 7B is introduced as a fully open-source Large Language Model (LLM). We develop three variants based on Moxin: Moxin-VLM, Moxin-VLA, and Moxin-Chinese. Experiments show that our models achieve superior performance in various evaluations.
arXiv Detail & Related papers (2025-12-22T02:36:42Z)
- Speed Always Wins: A Survey on Efficient Architectures for Large Language Models [51.817121227562964]
Large Language Models (LLMs) have delivered impressive results in language understanding, generation, and reasoning, and have pushed the capability boundary of multimodal models. Transformer models, as the foundation of modern LLMs, offer a strong baseline with excellent scaling properties. However, the traditional transformer architecture requires substantial computation and poses significant obstacles for large-scale training and practical deployment.
arXiv Detail & Related papers (2025-08-13T14:13:46Z)
- Discrete Diffusion in Large Language and Multimodal Models: A Survey [61.86669998363359]
We provide a systematic survey of Discrete Diffusion Language Models (dLLMs) and Discrete Diffusion Multimodal Language Models (dMLLMs). Unlike autoregressive (AR) models, dLLMs and dMLLMs adopt a multi-token, parallel decoding paradigm using full attention and a denoising-based generation strategy (a toy sketch of this decoding loop appears after this list). We trace the historical development of dLLMs and dMLLMs, formalize the underlying mathematical frameworks, list commonly used modeling methods, and categorize representative models.
arXiv Detail & Related papers (2025-06-16T17:59:08Z)
- Code Red! On the Harmfulness of Applying Off-the-shelf Large Language Models to Programming Tasks [12.242406234298443]
We develop a taxonomy of potentially harmful software engineering scenarios. We then create a dataset of prompts based on this taxonomy. We investigate the impact of model size, architecture family, and alignment strategies on the tendency to generate harmful content.
arXiv Detail & Related papers (2025-04-02T16:00:14Z)
- Generating Computational Cognitive Models using Large Language Models [4.269194018613294]
We introduce a pipeline for Guided generation of Computational Cognitive Models (GeCCo). GeCCo prompts an LLM to propose candidate models, fits the proposals to held-out data, and iteratively refines them based on their predictive performance. We benchmark this approach across four different cognitive domains.
arXiv Detail & Related papers (2025-02-02T19:07:13Z)
- Rethinking Scale: The Efficacy of Fine-Tuned Open-Source LLMs in Large-Scale Reproducible Social Science Research [0.0]
Large Language Models (LLMs) are distinguished by their architecture, which dictates their parameter size and performance capabilities.
Social scientists have increasingly adopted LLMs for text classification tasks, which are difficult to scale with human coders.
This study demonstrates that small, fine-tuned open-source LLMs can achieve equal or superior performance to models such as ChatGPT-4.
arXiv Detail & Related papers (2024-10-31T20:26:30Z)
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities [89.40778301238642]
Model merging is an efficient technique for empowering models in the machine learning community; a minimal sketch of the simplest merging recipe appears after this list.
There is a significant gap in the literature regarding a systematic and thorough review of these techniques.
arXiv Detail & Related papers (2024-08-14T16:58:48Z)
- Improving the Capabilities of Large Language Model Based Marketing Analytics Copilots With Semantic Search And Fine-Tuning [0.9787137564521711]
We show how a combination of semantic search, prompt engineering, and fine-tuning can be applied to dramatically improve the ability of LLMs to execute marketing analytics tasks accurately.
We compare both proprietary models, like GPT-4, and open-source models, like Llama-2-70b, as well as various embedding methods.
arXiv Detail & Related papers (2024-04-16T03:39:16Z)
- A Review of Multi-Modal Large Language and Vision Models [1.9685736810241874]
Large Language Models (LLMs) have emerged as a focal point of research and application.
Recently, LLMs have been extended into multi-modal large language models (MM-LLMs).
This paper provides an extensive review of the current state of those LLMs with multi-modal capabilities as well as the very recent MM-LLMs.
arXiv Detail & Related papers (2024-03-28T15:53:45Z)
- MT-Eval: A Multi-Turn Capabilities Evaluation Benchmark for Large Language Models [70.92847554971065]
We introduce MT-Eval, a comprehensive benchmark designed to evaluate multi-turn conversational abilities.
By analyzing human-LLM conversations, we categorize interaction patterns into four types: recollection, expansion, refinement, and follow-up.
Our evaluation of 11 well-known LLMs shows that while closed-source models generally surpass open-source ones, certain open-source models exceed GPT-3.5-Turbo in specific tasks.
arXiv Detail & Related papers (2024-01-30T04:50:28Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes to out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
- Language Models as a Service: Overview of a New Paradigm and its Challenges [47.75762014254756]
Some of the most powerful language models currently available are proprietary systems, accessible only via (typically restrictive) web or programming interfaces.
This paper has two goals: on the one hand, it delineates how the aforementioned challenges act as impediments to the accessibility, replicability, reliability, and trustworthiness of LM interfaces.
On the other hand, it serves as a comprehensive resource on existing knowledge about current major LMs, offering a synthesized overview of the licences and capabilities their interfaces offer.
arXiv Detail & Related papers (2023-09-28T16:29:52Z)
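For the discrete diffusion survey listed above, the following toy sketch illustrates the multi-token, parallel decoding loop it describes: generation starts from a fully masked sequence, the denoiser predicts every position at once, and the most confident tokens are revealed at each step. The random scorer below merely stands in for a trained denoiser; all names and sizes are illustrative assumptions, not the survey's code.

```python
# Toy sketch of denoising-style, parallel decoding (as in dLLMs), in contrast
# to left-to-right autoregressive decoding. A real dLLM would replace
# toy_model with a trained transformer using full attention.
import numpy as np

rng = np.random.default_rng(1)
vocab_size, seq_len, n_steps = 50, 12, 4
MASK = -1

def toy_model(tokens):
    """Stand-in for the denoiser: logits for every position at once."""
    return rng.standard_normal((seq_len, vocab_size))

tokens = np.full(seq_len, MASK)          # start from a fully masked sequence
for step in range(n_steps):
    logits = toy_model(tokens)
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    preds = probs.argmax(axis=-1)        # parallel prediction for all positions
    conf = probs.max(axis=-1)
    conf[tokens != MASK] = -np.inf       # already-decoded positions stay fixed
    # Reveal the most confident masked positions this step (multi-token decoding).
    n_reveal = int(np.ceil(seq_len / n_steps))
    reveal = np.argsort(conf)[-n_reveal:]
    tokens[reveal] = preds[reveal]
print(tokens)  # every position filled after n_steps parallel refinement steps
```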
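For the model-merging survey listed above, here is a minimal sketch of the simplest recipe such surveys cover: elementwise linear interpolation of two fine-tuned checkpoints that share an architecture. The toy state dicts, the `alpha` mixing weight, and the function name are illustrative assumptions.

```python
# Minimal sketch of model merging by linear interpolation of parameters.
# Toy state dicts stand in for real checkpoints of a shared architecture.
import numpy as np

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Elementwise interpolate matching tensors: alpha*A + (1 - alpha)*B."""
    assert sd_a.keys() == sd_b.keys(), "merging assumes identical architectures"
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}

# Two toy "checkpoints" fine-tuned from the same base model.
rng = np.random.default_rng(2)
ckpt_math = {"w1": rng.standard_normal((4, 4)), "b1": rng.standard_normal(4)}
ckpt_code = {"w1": rng.standard_normal((4, 4)), "b1": rng.standard_normal(4)}

merged = merge_state_dicts(ckpt_math, ckpt_code, alpha=0.3)
print(merged["w1"].shape)  # (4, 4): one model carrying both specializations
```

Linear interpolation only makes sense when both checkpoints descend from the same base model; more elaborate merging methods (e.g., task-vector arithmetic) refine this basic idea.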
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.