Artificial Intelligence in Indian Legal System: Challenges in Evidence Admissibility

Artificial Intelligence (AI) is redefining the contours of modern jurisprudence. From facial recognition in criminal investigations to predictive algorithms in financial fraud detection, AI systems are increasingly contributing to the evidentiary process. However, the admissibility of AI-generated evidence before Indian courts presents a novel legal challenge — one that tests the balance between technological innovation and procedural fairness.

The introduction of the Bharatiya Sakshya Adhiniyam, 2023 (BSA) — which replaced the colonial-era Indian Evidence Act, 1872 — marks a significant shift in India’s approach to evidence law. Yet, while the new statute modernises terminology and structure, it remains largely silent on AI-generated or algorithmically produced evidence. This article analyses the admissibility of AI-generated evidence under the current legal framework, highlighting emerging challenges and suggesting reforms for the future.

Understanding AI-Generated Evidence

AI-generated evidence refers to data, reports, or analytical outputs produced by artificial intelligence systems — including algorithms, neural networks, and machine learning models — used to assist in legal or investigative contexts. Examples include:

  • Facial recognition and surveillance analytics identifying individuals in crime footage.
  • AI-based forensic tools reconstructing digital activities or deleted files.
  • Predictive analytics in financial or criminal investigations.
  • Deepfake detection or voice analysis reports prepared by automated software.
  • Chatbot transcripts, social media algorithms, or sentiment analysis serving as communication evidence.

While these tools enhance efficiency and objectivity, their evidentiary use raises complex questions of authenticity, reliability, and explainability — all central to judicial admissibility.

Legal Framework: From the Indian Evidence Act to the Bharatiya Sakshya Adhiniyam, 2023

The Bharatiya Sakshya Adhiniyam (BSA), effective from July 1, 2024, modernises the law of evidence to align with digital realities. Key provisions relevant to electronic and AI-generated evidence include:

  • Section 2(1)(d) – Defines “electronic record” in alignment with the Information Technology Act, 2000.
  • Section 2(1)(b) – Defines “document” to include any information recorded, stored, or transmitted electronically.
  • Section 61 – States that electronic records are admissible as documents if produced in accordance with the Act.
  • Section 63 – Corresponds to the old Section 65B of the Evidence Act, detailing conditions for admissibility of electronic records.
  • Section 63(2) – Provides that electronic evidence must be accompanied by a certificate identifying the device, the authenticity of data, and the manner of production.
  • Section 66 – Establishes presumptions as to the authenticity of secure electronic records and digital signatures.
  • Section 79 – Allows courts to presume the genuineness of certified electronic records.

Together, these provisions continue the spirit of Section 65B under the old Act but with updated terminology to better accommodate the digital era. Yet, they do not directly address AI-generated or AI-processed evidence, leaving interpretive gaps that the judiciary must navigate.
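In substance, the Section 63(2) certificate described above is structured metadata about the record: the producing device, the manner of production, and a means of later checking integrity. A minimal sketch of such particulars in Python follows; the function and field names are purely illustrative and are not prescribed by the Act or any rules thereunder:

```python
import hashlib
from datetime import datetime, timezone

def draft_certificate(record: bytes, device: str, operator: str, manner: str) -> dict:
    """Assemble the kind of particulars Section 63(2) contemplates:
    the producing device, the manner of production, and a digest by
    which the record's integrity can later be verified.
    All field names here are illustrative, not statutory."""
    return {
        "record_sha256": hashlib.sha256(record).hexdigest(),
        "producing_device": device,
        "operator": operator,
        "manner_of_production": manner,
        "issued_at": datetime.now(timezone.utc).isoformat(),
    }

cert = draft_certificate(
    b"exported transaction log",
    device="Bank server DB-07",
    operator="Records Officer",
    manner="Routine export from the core banking system",
)
```

The SHA-256 digest lets a court later confirm that the record produced at trial is byte-for-byte the one certified, which is the practical point of the certification requirement.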

Judicial Approach to Digital Evidence

Even before the BSA, Indian courts had developed a substantial body of jurisprudence on electronic evidence.

  • Anvar P.V. v. P.K. Basheer (2014) 10 SCC 473: The Supreme Court held that electronic records are admissible only if accompanied by a Section 65B certificate verifying authenticity.
  • Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal (2020) 7 SCC 1: Reaffirmed that compliance with certification requirements is mandatory unless the original device is produced.
  • Tomaso Bruno v. State of Uttar Pradesh (2015) 7 SCC 178: Recognised the importance of electronic evidence such as CCTV footage in modern investigations.

These precedents continue to guide judicial evaluation under the new BSA. However, AI-generated evidence differs fundamentally because the human element of authorship and perception is replaced by algorithmic interpretation, posing unique legal and ethical challenges.

Legal Challenges in Admitting AI-Generated Evidence

  1. Absence of Specific Legislative Recognition

The BSA does not expressly mention AI or algorithmic evidence. The law treats AI outputs as “electronic records”, yet fails to establish standards for algorithmic explainability, model integrity, or dataset reliability — all crucial for judicial verification.

  2. Authenticity and Reliability

Sections 63(2) and 66 require proof of authenticity, but AI-generated data often depends on self-learning algorithms that evolve. Courts may find it difficult to ensure that such evidence has not been influenced by bias, coding errors, or data manipulation.

  3. The “Black Box” Problem

AI models, especially deep learning systems, lack transparency regarding their internal decision-making processes. This opacity undermines the principle of verifiability, a cornerstone of evidentiary reliability. Without human interpretability, judicial review becomes tenuous.

  4. Chain of Custody and Technical Certification

Section 63(2) mandates certification of the source device and process, but in AI systems, multiple datasets, servers, and algorithms may be involved. Maintaining an unbroken chain of custody and ensuring data immutability across systems is exceptionally complex.
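In digital forensic practice, integrity across such transfers is commonly demonstrated by hashing each artefact and linking every custody entry to the previous one, so that no earlier record can be altered undetected. A minimal sketch in Python (the function names and log fields are illustrative, not drawn from any statute or standard forensic tool):

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 digest of a byte string as hex."""
    return hashlib.sha256(data).hexdigest()

def add_custody_entry(log: list, artefact: bytes, handler: str, action: str) -> dict:
    """Append a tamper-evident entry: each entry hashes the previous
    entry, so altering any earlier record invalidates the whole chain."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "artefact_hash": sha256_of(artefact),
        "handler": handler,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    # Hash the entry itself (before the hash field exists) to seal it.
    entry["entry_hash"] = sha256_of(json.dumps(entry, sort_keys=True).encode())
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Re-derive every entry hash; any edit to an earlier entry breaks the link."""
    prev = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev:
            return False
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if sha256_of(json.dumps(body, sort_keys=True).encode()) != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True

log = []
add_custody_entry(log, b"cctv_export.mp4 bytes", "Investigating Officer", "seized")
add_custody_entry(log, b"cctv_export.mp4 bytes", "Forensic Lab", "analysed")
print(verify_chain(log))  # True for an unmodified chain
```

For AI systems, the same technique would have to cover not just the output but the training data, model version, and intermediate servers, which is precisely why the complexity noted above arises.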

  5. Expert Testimony and Cross-Examination

Under Section 39 of the BSA (corresponding to Section 45 of the Indian Evidence Act), expert opinions are admissible. However, courts will increasingly require AI specialists and data scientists as expert witnesses capable of explaining the algorithms, datasets, and error margins behind the evidence. The scarcity of such experts presents a procedural bottleneck.

  6. Risk of Deepfakes and Synthetic Evidence

AI can generate hyper-realistic but false content, including deepfake videos and audio clips. Without robust forensic tools to authenticate digital media, the courts risk admitting fabricated evidence that violates natural justice.

  7. Constitutional and Ethical Implications

AI-generated evidence intersects with fundamental rights, especially:

  • Article 20(3) – Protection against self-incrimination.
  • Article 21 – Right to privacy and fair trial.

Unregulated AI surveillance or predictive policing could infringe upon individual liberties unless subject to clear statutory safeguards.

Comparative Jurisdictions: Global Trends

  • European Union (EU): The AI Act, adopted in 2024, introduces risk-based regulation emphasising transparency, auditability, and human oversight — parameters that may serve as evidentiary standards.
  • United States: The Daubert Standard requires scientific evidence to be reliable and peer-reviewed, offering a model for evaluating algorithmic credibility.
  • United Kingdom: Courts allow AI-assisted evidence but insist that human experts validate the analysis and that the AI’s methodology be reproducible.

India can draw valuable lessons from these frameworks to integrate trustworthiness and explainability into its evidence law.

The Way Forward: Reform and Standardisation

To ensure that the Bharatiya Sakshya Adhiniyam remains future-ready, several policy and judicial measures are essential:

  1. Statutory Recognition of AI Evidence: Amend the BSA to explicitly define and regulate “AI-generated evidence” with standards of reliability, accountability, and interpretability.
  2. Algorithmic Certification Mechanism: Develop a certification process analogous to Section 63(2) — ensuring that AI models used in legal contexts are auditable, tamper-proof, and transparent.
  3. Judicial and Bar Training: Introduce mandatory training for judges, prosecutors, and advocates on digital forensics, algorithmic decision-making, and data authenticity.
  4. Expert Panels on AI and Forensics: Establish a National AI Forensic Authority to vet AI tools used in evidence production, similar to the Section 79A examiner provision under the IT Act.
  5. Integration with the Digital Personal Data Protection Act, 2023: Align evidentiary practices with privacy and data protection norms to prevent misuse or unauthorised processing of personal data by AI systems.
  6. Ethical Oversight and Human Supervision: Encourage the “human-in-the-loop” model, ensuring that algorithmic outputs are always subject to human verification before submission to court.

Conclusion

The Bharatiya Sakshya Adhiniyam, 2023, has taken a crucial step toward modernising Indian evidence law. However, the admissibility of AI-generated evidence remains an unsettled frontier — one demanding nuanced legal interpretation and forward-looking reforms.

As AI becomes integral to investigation and adjudication, courts must balance efficiency with fundamental rights, technological precision with procedural fairness, and innovation with accountability. The future of evidence law in India lies in crafting a framework where AI serves justice, not supplants it — ensuring that the digital truth remains as trustworthy as the human conscience behind it.