International Students and Scams: At Risk Abroad
- URL: http://arxiv.org/abs/2510.18715v1
- Date: Tue, 21 Oct 2025 15:16:28 GMT
- Title: International Students and Scams: At Risk Abroad
- Authors: Katherine Zhang, Arjun Arunasalam, Pubali Datta, Z. Berkay Celik,
- Abstract summary: International students (IntlS) in the US refer to foreign students who acquire student visas to study in the US, primarily in higher education.<n>As IntlS arrive in the US, they face several challenges, such as adjusting to a new country and culture, securing housing remotely, and arranging finances for tuition and personal expenses.<n>Recent events such as visa revocations and the cessation of new visas, compound IntlS' risk of being targeted by and falling victim to online scams.
- Score: 13.445752079381869
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: International students (IntlS) in the US refer to foreign students who acquire student visas to study in the US, primarily in higher education. As IntlS arrive in the US, they face several challenges, such as adjusting to a new country and culture, securing housing remotely, and arranging finances for tuition and personal expenses. These experiences, coupled with recent events such as visa revocations and the cessation of new visas, compound IntlS' risk of being targeted by and falling victim to online scams. While prior work has investigated IntlS' security and privacy, as well as general end users' reactions to online scams, research on how IntlS are uniquely impacted by scams remains largely absent. To address this gap, we conduct a two-phase user study comprising surveys (n=48) and semi-structured interviews (n=9). We investigate IntlS' exposure and interactions with scams, post-exposure actions such as reporting, and their perceptions of the usefulness of existing prevention resources and the barriers to following prevention advice. We find that IntlS are often targeted by scams (e.g., attackers impersonating government officials) and fear legal implications or deportation, which directly impacts their interactions with scams (e.g., they may prolong engagement with a scammer due to a sense of urgency). Interestingly, we also find that IntlS may lack awareness of - or access to - reliable resources that inform them about scams or guide them in reporting incidents to authorities. In fact, they may also face unique barriers in enacting scam prevention advice, such as avoiding reporting financial losses, since IntlS are required to demonstrate financial ability to stay in the US. The findings produced by our study help synthesize guidelines for stakeholders to better aid IntlS in reacting to scams.
Related papers
- Experiencer, Helper, or Observer: Online Fraud Intervention for Older Adults Through Role-based Simulation [14.8124073941176]
ROLESafe is an anti-fraud educational intervention in which older adults learn through different learning roles.<n>In a study with 144 older adults in China, we found that the Experiencer and Helper roles significantly improved participants' ability to identify online fraud.
arXiv Detail & Related papers (2026-01-18T09:15:51Z) - Large Language Models' Complicit Responses to Illicit Instructions across Socio-Legal Contexts [54.15982476754607]
Large language models (LLMs) are now deployed at unprecedented scale, assisting millions of users in daily tasks.<n>This study defines complicit facilitation as the provision of guidance or support that enables illicit user instructions.<n>Using real-world legal cases and established legal frameworks, we construct an evaluation benchmark spanning 269 illicit scenarios and 50 illicit intents.
arXiv Detail & Related papers (2025-11-25T16:01:31Z) - When AI Agents Collude Online: Financial Fraud Risks by Collaborative LLM Agents on Social Platforms [101.2197679948061]
We study the risks of collective financial fraud in large-scale multi-agent systems powered by large language model (LLM) agents.<n>We present MultiAgentFraudBench, a large-scale benchmark for simulating financial fraud scenarios.
arXiv Detail & Related papers (2025-11-09T16:30:44Z) - Friend or Foe: How LLMs' Safety Mind Gets Fooled by Intent Shift Attack [53.34204977366491]
Large language models (LLMs) remain vulnerable to jailbreaking attacks despite their impressive capabilities.<n>In this paper, we introduce ISA (Intent Shift Attack), which obfuscates LLMs about the intent of the attacks.<n>Our approach only needs minimal edits to the original request, and yields natural, human-readable, and seemingly harmless prompts.
arXiv Detail & Related papers (2025-11-01T13:44:42Z) - "It Felt Real" Victim Perspectives on Platform Design and Longer-Running Scams [11.449657621942885]
We show how scammers strategically use platform affordances to stage credibility, orchestrate intimacy, and sustain coercion with victims.<n>By analyzing scams as socio-technical projects, we highlight how platform design can be exploited in longer-running scams.
arXiv Detail & Related papers (2025-10-03T02:34:13Z) - Oyster-I: Beyond Refusal -- Constructive Safety Alignment for Responsible Language Models [93.5740266114488]
Constructive Safety Alignment (CSA) protects against malicious misuse while actively guiding vulnerable users toward safe and helpful results.<n>Oy1 achieves state-of-the-art safety among open models while retaining high general capabilities.<n>We release Oy1, code, and the benchmark to support responsible, user-centered AI.
arXiv Detail & Related papers (2025-09-02T03:04:27Z) - PsyScam: A Benchmark for Psychological Techniques in Real-World Scams [38.57446009573742]
PsyScam is a benchmark designed to systematically capture the psychological techniques employed in real-world scam reports.<n>We show that PsyScam presents significant challenges to existing models in both detecting and generating scam content based on the PTs used by real-world scammers.
arXiv Detail & Related papers (2025-05-21T01:55:04Z) - Combating Phone Scams with LLM-based Detection: Where Do We Stand? [1.8979188847659796]
This research explores the potential of large language models (LLMs) to provide detection of fraudulent phone calls.
LLMs-based detectors can identify potential scams as they occur, offering immediate protection to users.
arXiv Detail & Related papers (2024-09-18T02:14:30Z) - The Emerged Security and Privacy of LLM Agent: A Survey with Case Studies [58.94148083602662]
Large Language Models (LLMs) agents have evolved to perform complex tasks.<n>The widespread applications of LLM agents demonstrate their significant commercial value.<n>However, they also expose security and privacy vulnerabilities.<n>This survey aims to provide a comprehensive overview of the newly emerged privacy and security issues faced by LLM agents.
arXiv Detail & Related papers (2024-07-28T00:26:24Z) - A Survey of Scam Exposure, Victimization, Types, Vectors, and Reporting in 12 Countries [3.2545498077804083]
The present study addresses this gap through a nationally representative survey on scam exposure, victimization, types, vectors, and reporting in 12 countries.
We find, first, that residents of less affluent countries suffer financial loss from scams more often.
Second, we find that the internet plays a key role in scams across the globe, and that GNI per-capita is strongly associated with specific scam types and contact vectors.
arXiv Detail & Related papers (2024-07-17T14:35:56Z) - Rethinking the Vulnerabilities of Face Recognition Systems:From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning)
arXiv Detail & Related papers (2024-05-21T13:34:23Z) - Relying on the Unreliable: The Impact of Language Models' Reluctance to Express Uncertainty [53.336235704123915]
We investigate how LMs incorporate confidence in responses via natural language and how downstream users behave in response to LM-articulated uncertainties.
We find that LMs are reluctant to express uncertainties when answering questions even when they produce incorrect responses.
We test the risks of LM overconfidence by conducting human experiments and show that users rely heavily on LM generations.
Lastly, we investigate the preference-annotated datasets used in post training alignment and find that humans are biased against texts with uncertainty.
arXiv Detail & Related papers (2024-01-12T18:03:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.