Cybercrime and Computer Forensics in Epoch of Artificial Intelligence in India
- URL: http://arxiv.org/abs/2512.15799v1
- Date: Tue, 16 Dec 2025 19:39:22 GMT
- Title: Cybercrime and Computer Forensics in Epoch of Artificial Intelligence in India
- Authors: Sahibpreet Singh, Shikha Dhiman
- Abstract summary: This study scrutinizes the AI "dual-use" dilemma, functioning as both a cyber-threat vector and a forensic automation mechanism. While Machine Learning offers high accuracy in pattern recognition, it introduces vulnerabilities regarding data poisoning and algorithmic bias. Findings highlight a critical tension between the Act's data minimization principles and forensic data retention requirements.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The integration of generative Artificial Intelligence into the digital ecosystem necessitates a critical re-evaluation of Indian criminal jurisprudence regarding computational forensics integrity. While algorithmic efficiency enhances evidence extraction, a research gap exists regarding the Digital Personal Data Protection Act, 2023's compatibility with adversarial AI threats, specifically anti-forensics and deepfakes. This study scrutinizes the AI "dual-use" dilemma, functioning as both a cyber-threat vector and forensic automation mechanism, to delineate privacy boundaries in high-stakes investigations. Employing a doctrinal legal methodology, the research synthesizes statutory analysis of the DPDP Act with global ethical frameworks (IEEE, EU) to evaluate regulatory efficacy. Preliminary results indicate that while Machine Learning offers high accuracy in pattern recognition, it introduces vulnerabilities regarding data poisoning and algorithmic bias. Findings highlight a critical tension between the Act's data minimization principles and forensic data retention requirements. Furthermore, the paper identifies that existing legal definitions inadequately encompass AI-driven "tool crimes" and "target crimes." Consequently, the research proposes a "human-centric" forensic model prioritizing explainable AI (XAI) to ensure evidence admissibility. These implications suggest that synchronizing Indian privacy statutes with international forensic standards is imperative to mitigate synthetic media risks, establishing a roadmap for future legislative amendments and technical standardization.
Related papers
- Reliability and Admissibility of AI-Generated Forensic Evidence in Criminal Trials [0.0]
This study evaluates whether AI-generated evidence satisfies established legal standards of reliability. Preliminary results indicate that AI forensic tools can enhance evidence analysis at scale. Findings inform policy development for responsible AI integration within criminal justice systems.
arXiv Detail & Related papers (2025-12-17T17:56:10Z) - Algorithmic Criminal Liability in Greenwashing: Comparing India, United States, and European Union [0.0]
This study conducts a comparative legal analysis of criminal liability for AI-mediated greenwashing across India, the US, and the EU. Existing statutes exhibit anthropocentric biases by predicating liability on demonstrable human intent, rendering them ill-equipped to address algorithmic deception.
arXiv Detail & Related papers (2025-12-14T20:49:41Z) - AI Deception: Risks, Dynamics, and Controls [153.71048309527225]
This project provides a comprehensive and up-to-date overview of the AI deception field. We identify a formal definition of AI deception, grounded in signaling theory from studies of animal deception. We organize the landscape of AI deception research as a deception cycle, consisting of two key components: deception emergence and deception treatment.
arXiv Detail & Related papers (2025-11-27T16:56:04Z) - Incorporating AI Incident Reporting into Telecommunications Law and Policy: Insights from India [0.6875312133832078]
This paper introduces a precise definition and a detailed typology of telecommunications AI incidents. It argues for their recognition as a distinct regulatory concern. The paper proposes policy recommendations centered on integrating AI incident reporting into India's existing telecom governance.
arXiv Detail & Related papers (2025-09-11T14:50:41Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act), drawing on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence. Applying these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing [74.58071278710896]
Generative AI has attracted much attention from both academia and industry. Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied in data collection and acquisition.
arXiv Detail & Related papers (2024-05-17T04:00:58Z) - Beyond principlism: Practical strategies for ethical AI use in research practices [0.0]
The rapid adoption of generative artificial intelligence in scientific research has outpaced the development of ethical guidelines. Existing approaches offer little practical guidance for addressing ethical challenges of AI in scientific research practices. I propose a user-centered, realism-inspired approach to bridge the gap between abstract principles and day-to-day research practices.
arXiv Detail & Related papers (2024-01-27T03:53:25Z) - Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is broad consensus on the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z) - Compliance Challenges in Forensic Image Analysis Under the Artificial Intelligence Act [8.890638003061605]
We review why the use of machine learning in forensic image analysis is classified as high-risk.
Under the draft AI act, high-risk AI systems for use in law enforcement are permitted but subject to compliance with mandatory requirements.
arXiv Detail & Related papers (2022-03-01T14:03:23Z) - AI & Racial Equity: Understanding Sentiment Analysis Artificial Intelligence, Data Security, and Systemic Theory in Criminal Justice Systems [0.0]
This work explores how artificial intelligence can either exacerbate or mitigate systemic racial injustice. Through analysis of historical systemic patterns, implicit biases, existing algorithmic risks, and legal implications, it asserts that natural language processing-based AI, such as risk assessment tools, produces racially disparate outcomes. It concludes that stronger regulatory policies are needed to govern how government institutions and corporations use algorithms, manage privacy and security risks, and meet auditing requirements, in order to break from the racially unjust outcomes and practices of the past.
arXiv Detail & Related papers (2022-01-03T19:42:08Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.