Multimodal Generative AI and Foundation Models for Behavioural Health in Online Gambling
- URL: http://arxiv.org/abs/2503.03752v1
- Date: Fri, 24 Jan 2025 15:35:03 GMT
- Title: Multimodal Generative AI and Foundation Models for Behavioural Health in Online Gambling
- Authors: Konrad Samsel, Mohammad Noaeen, Neil Seeman, Karim Keshavjee, Li-Jia Li, Zahra Shakeri
- Abstract summary: Online gambling platforms have transformed the gambling landscape, offering unprecedented accessibility and personalized experiences. These same characteristics have increased the risk of gambling-related harm, affecting individuals, families, and communities. This narrative review examines how artificial intelligence, particularly multimodal generative models and foundation technologies, can address these issues.
- Score: 4.576019260696813
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Online gambling platforms have transformed the gambling landscape, offering unprecedented accessibility and personalized experiences. However, these same characteristics have increased the risk of gambling-related harm, affecting individuals, families, and communities. Structural factors, including targeted marketing, shifting social norms, and gaps in regulation, further complicate the challenge. This narrative review examines how artificial intelligence, particularly multimodal generative models and foundation technologies, can address these issues by supporting prevention, early identification, and harm-reduction efforts. We detail applications such as synthetic data generation to overcome research barriers, customized interventions to guide safer behaviors, gamified tools to support recovery, and scenario modeling to inform effective policies. Throughout, we emphasize the importance of safeguarding privacy and ensuring that technological advances are responsibly aligned with public health objectives.
Related papers
- Towards Privacy-aware Mental Health AI Models: Advances, Challenges, and Opportunities [61.633126163190724]
Mental illness is a widespread and debilitating condition with substantial societal and personal costs. Recent advances in Artificial Intelligence (AI) hold great potential for recognizing and addressing conditions such as depression, anxiety disorder, bipolar disorder, schizophrenia, and post-traumatic stress disorder. Privacy concerns, including the risk of sensitive data leakage from datasets and trained models, remain a critical barrier to deploying these AI systems in real-world clinical settings.
arXiv Detail & Related papers (2025-02-01T15:10:02Z)
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks. In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
- Privacy in Fine-tuning Large Language Models: Attacks, Defenses, and Future Directions [11.338466798715906]
Fine-tuning Large Language Models (LLMs) can achieve state-of-the-art performance across various domains. This paper provides a comprehensive survey of privacy challenges associated with fine-tuning LLMs. We highlight vulnerabilities to various privacy attacks, including membership inference, data extraction, and backdoor attacks.
arXiv Detail & Related papers (2024-12-21T06:41:29Z)
- Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice [186.055899073629]
Unlearning is often invoked as a solution for removing the effects of targeted information from a generative-AI model. Unlearning is also proposed as a way to prevent a model from generating targeted types of information in its outputs. Both of these goals -- the targeted removal of information from a model and the targeted suppression of information from a model's outputs -- present various technical and substantive challenges.
arXiv Detail & Related papers (2024-12-09T20:18:43Z)
- New Emerged Security and Privacy of Pre-trained Model: a Survey and Outlook [54.24701201956833]
Security and privacy issues have undermined users' confidence in pre-trained models.
Current literature lacks a clear taxonomy of emerging attacks and defenses for pre-trained models.
The survey proposes a taxonomy that categorizes attacks and defenses into No-Change, Input-Change, and Model-Change approaches.
arXiv Detail & Related papers (2024-11-12T10:15:33Z)
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- A Comprehensive Review of Multimodal XR Applications, Risks, and Ethical Challenges in the Metaverse [1.30536490219656]
This review examines the broad applications, risks, and ethical challenges associated with Extended Reality (XR) technologies.
XR is revolutionizing fields such as immersive learning in education, medical and professional training, neuropsychological assessment, therapeutic interventions, arts, entertainment, retail, e-commerce, remote work, sports, architecture, urban planning, and cultural heritage preservation.
XR's expansion raises serious concerns, including data privacy risks, cybersecurity vulnerabilities, cybersickness, addiction, dissociation, harassment, bullying, and misinformation.
arXiv Detail & Related papers (2024-11-07T07:57:56Z)
- Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Technical Challenges and Solutions [4.325945017291532]
We survey the technical challenges involved in creating medical machine learning systems responsibly.
We discuss the underlying technical challenges, possible ways for addressing them, and their respective merits and drawbacks.
arXiv Detail & Related papers (2021-07-20T15:03:05Z)
- Persuasion Meets AI: Ethical Considerations for the Design of Social Engineering Countermeasures [0.0]
Privacy in Social Network Sites (SNSs) like Facebook or Instagram is closely related to people's self-disclosure decisions.
Online privacy decisions are often based on spurious risk judgements that make people liable to reveal sensitive data to untrusted recipients.
This paper elaborates on the ethical challenges that nudging mechanisms can introduce to the development of AI-based countermeasures.
arXiv Detail & Related papers (2020-09-27T14:24:29Z)
- COVI White Paper [67.04578448931741]
Contact tracing is an essential tool to change the course of the COVID-19 pandemic.
We present an overview of the rationale, design, ethical considerations and privacy strategy of COVI, a COVID-19 public peer-to-peer contact tracing and risk awareness mobile application developed in Canada.
arXiv Detail & Related papers (2020-05-18T07:40:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.