Student Perceptions of Large Language Models Use in Self-Reflection and Design Critique in Architecture Studio
- URL: http://arxiv.org/abs/2602.00041v2
- Date: Tue, 03 Feb 2026 01:28:03 GMT
- Title: Student Perceptions of Large Language Models Use in Self-Reflection and Design Critique in Architecture Studio
- Authors: Juan David Salazar Rodriguez, Sam Conrad Joyce, Nachamma Sockalingam, Khoo Eng Tat, Julfendi
- Abstract summary: This study investigates the integration of Large Language Models (LLMs) into the feedback mechanisms of the architectural design studio. The research analyzes student perceptions across three distinct feedback domains: self-reflection, peer critique, and professor-led reviews.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study investigates the integration of Large Language Models (LLMs) into the feedback mechanisms of the architectural design studio, shifting the focus from generative production to reflective pedagogy. Employing a mixed-methods approach with surveys and semi-structured interviews with 22 architecture students at the Singapore University of Technology and Design, the research analyzes student perceptions across three distinct feedback domains: self-reflection, peer critique, and professor-led reviews. The findings reveal that students engage with LLMs not as authoritative instructors, but as collaborative "cognitive mirrors" that scaffold critical thinking. In self-directed learning, LLMs help structure thoughts and overcome the "blank page" problem, though they are limited by a lack of contextual nuance. In peer critiques, the technology serves as a neutral mediator, mitigating social anxiety and the "fear of offending". Furthermore, in high-stakes professor-led juries, students utilize LLMs primarily as post-critique synthesis engines to manage cognitive overload and translate abstract academic discourse into actionable design iterations.
Related papers
- Thinking Like a Student: AI-Supported Reflective Planning in a Theory-Intensive Computer Science Course
In the aftermath of COVID-19, many universities implemented supplementary "reinforcement" roles to support students in demanding courses. This paper reports on the redesign of reinforcement sessions in a challenging undergraduate course on formal methods and computational models. The intervention received positive student feedback, indicating increased confidence, reduced anxiety, and improved clarity.
arXiv Detail & Related papers (2025-10-31T12:35:18Z) - Fundamentals of Building Autonomous LLM Agents
This paper reviews the architecture and implementation methods of agents powered by large language models (LLMs). The research aims to explore patterns to develop "agentic" LLMs that can automate complex tasks and bridge the performance gap with human capabilities.
arXiv Detail & Related papers (2025-10-10T10:32:39Z) - Large Language Models in Architecture Studio: A Framework for Learning Outcomes
The study explores the role of large language models (LLMs) in the context of the architectural design studio. The main challenges include managing student autonomy, tensions in peer feedback, and the difficulty of balancing the transmission of technical knowledge with the stimulation of creativity in teaching.
arXiv Detail & Related papers (2025-10-08T02:51:22Z) - Not Minds, but Signs: Reframing LLMs through Semiotics
This paper argues for a semiotic perspective on Large Language Models (LLMs). Rather than assuming that LLMs understand language or simulate human thought, we propose that their primary function is to recombine, recontextualize, and circulate linguistic forms. We explore applications in literature, philosophy, education, and cultural production.
arXiv Detail & Related papers (2025-05-20T08:49:18Z) - Assessing LLMs in Art Contexts: Critique Generation and Theory of Mind Evaluation
This study explores how large language models (LLMs) perform in two areas related to art. For the critique generation part, we built a system that combines Noel Carroll's evaluative framework with a broad selection of art criticism theories. These critiques were compared with those written by human experts in a Turing test-style evaluation. In the second part, we introduced new simple ToM tasks based on situations involving interpretation, emotion, and moral tension.
arXiv Detail & Related papers (2025-04-17T10:10:25Z) - Meta-Reflection: A Feedback-Free Reflection Learning Framework
We propose Meta-Reflection, a feedback-free reflection mechanism that requires only a single inference pass without external feedback. Motivated by the human ability to remember and retrieve reflections from past experiences, Meta-Reflection integrates reflective insights into a codebook. To thoroughly investigate and evaluate the practicality of Meta-Reflection in real-world scenarios, we introduce an industrial e-commerce benchmark named E-commerce Customer Intent Detection.
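The codebook idea described above can be illustrated with a minimal sketch: past reflections are stored once, then the best match for a new problem is retrieved and reused, with no feedback loop at inference time. The Jaccard word-overlap similarity and all example entries below are illustrative stand-ins, not details from the paper.

```python
# Hedged sketch of a reflection codebook: store insights from past
# problems, retrieve the closest one for a new query in a single pass.

def similarity(a, b):
    """Toy lexical similarity: Jaccard overlap of word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

class ReflectionCodebook:
    def __init__(self):
        self.entries = []  # (problem description, stored reflection)

    def add(self, problem, reflection):
        self.entries.append((problem, reflection))

    def retrieve(self, query):
        """Return the reflection whose problem best matches the query."""
        return max(self.entries, key=lambda e: similarity(e[0], query))[1]

book = ReflectionCodebook()
book.add("classify customer intent for refund requests",
         "Check whether the user mentions an order number first.")
book.add("summarize a design critique transcript",
         "Group comments by theme before condensing.")

# A new, related problem retrieves the refund-intent reflection,
# which would be prepended to the prompt before the single LLM call.
print(book.retrieve("detect customer intent in a refund email"))
```

In the paper's setting the similarity would be computed over learned embeddings rather than word overlap; the control flow, a lookup followed by one inference pass, is the point of the sketch.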
arXiv Detail & Related papers (2024-12-18T12:20:04Z) - Critic-CoT: Boosting the reasoning abilities of large language model via Chain-of-thoughts Critic
Critic-CoT is a framework that pushes LLMs toward System-2-like critic capability. It combines a CoT reasoning paradigm with the automatic construction of distant-supervision data, requiring no human annotation. Experiments on GSM8K and MATH demonstrate that our enhanced model significantly boosts task-solving performance.
arXiv Detail & Related papers (2024-08-29T08:02:09Z) - Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making
LLM-ACTR is a novel neuro-symbolic architecture that provides human-aligned and versatile decision-making.
Our framework extracts and embeds knowledge of ACT-R's internal decision-making process as latent neural representations.
Our experiments on novel Design for Manufacturing tasks show both improved task performance and improved grounded decision-making capability.
arXiv Detail & Related papers (2024-08-17T11:49:53Z) - Evaluating Large Language Models with Psychometrics
This paper offers a comprehensive benchmark for quantifying psychological constructs of Large Language Models (LLMs). Our work identifies five key psychological constructs -- personality, values, emotional intelligence, theory of mind, and self-efficacy -- assessed through a suite of 13 datasets. We uncover significant discrepancies between LLMs' self-reported traits and their response patterns in real-world scenarios, revealing complexities in their behaviors.
arXiv Detail & Related papers (2024-06-25T16:09:08Z) - Creativity Support in the Age of Large Language Models: An Empirical Study Involving Emerging Writers
We investigate the utility of modern large language models in assisting professional writers via an empirical user study.
We find that while writers seek LLMs' help across all three types of cognitive activities, they find LLMs more helpful in translation and reviewing.
arXiv Detail & Related papers (2023-09-22T01:49:36Z) - Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate
We propose a Multi-Agent Debate (MAD) framework, in which multiple agents express their arguments in the state of "tit for tat" and a judge manages the debate process to obtain a final solution.
Our framework encourages divergent thinking in LLMs which would be helpful for tasks that require deep levels of contemplation.
arXiv Detail & Related papers (2023-05-30T15:25:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.