Picture a courtroom where a viral AI-generated video is played: a well-known face speaking words they never uttered, a brand’s reputation tarnished overnight. Who is responsible? Who should guard against an illusion made real? This is not science fiction. It is happening now.
Deepfake & AI: Legal Risks for Businesses
- Misuse of likeness / persona – unauthorized use of a person’s image, voice, or signature.
- Defamation and reputation harm – false statements, misattribution.
- Privacy breach – capturing or transmitting images without consent.
- Regulatory uncertainties – many laws aren’t updated for AI; legal grey zones.
- Intellectual property violations – using third-party content or replicating styles.
- Consumer / trust implications – brand damage, loss of consumer confidence.
Case Study: Rashmika Mandanna Deepfake Case
A recent example is the alleged deepfake video that circulated featuring actor Rashmika Mandanna. Authorities were urged to act under Section 66C of the IT Act (identity theft), Section 66E (violation of privacy), and other relevant provisions.
While final judgments may still evolve, the case underscores that individuals, and by extension businesses whose brands or personnel are implicated in AI misuse, are not without recourse.
Another recent matter is Anil Kapoor vs. Simply Life India (2023), in which the actor sought injunctive relief against the unauthorized use of his name, likeness, and persona.
How Law Responds in India Today
India does not yet have a comprehensive statute specifically targeting deepfakes or generative AI, but there are tools in the existing legal arsenal:
- Information Technology Act, 2000 — Sections 66C (identity theft), 66D (cheating by personation using a computer resource), and 66E (violation of privacy); Sections 67, 67A, and 67B for obscene or sexually explicit content.
- Indian Penal Code (IPC) — provisions on defamation, forgery, insult to the modesty of a woman, and potentially hate speech or public order offences.
- Copyright Act — when copyrighted works are used without authorization.
- Courts’ power to grant injunctive relief, as seen in Kapoor’s case and others.
Lex Gladius’s Role: Defence & Strategy in the AI Era
As legal counsel, here is how we defend and safeguard businesses:
- Audit brand and IP exposure: identify who controls name, image, voice, and likeness, and ensure contracts with employees and endorsers guard against misuse.
- Draft contract clauses (endorsements, influencer deals) that control future AI-based or generative replication of a persona.
- Monitor and enforce: takedown notices, injunctive actions, cease-and-desist letters.
- Advise on content moderation and digital policies, especially for platforms and media houses.
- Track evolving laws, regulatory proposals, and guidelines from government and the courts.
Future Trends & Statistics
- India’s legal tech market is estimated to grow from roughly USD 464.6 million in 2023 to roughly USD 1.25 billion by 2030.
- Alternative legal services, AI-assisted document drafting, and contract review tools are on the rise; many law firms are integrating automation.
- As generative AI becomes more advanced and accessible, legal actions over misuse of likeness, defamation, and misinformation are predicted to increase severalfold over the next 5 to 10 years, and governments worldwide are weighing stricter regulation.
- In India, deepfake content bans and takedown norms are under discussion; intermediaries’ “safe harbour” status may be at risk if platforms fail to act swiftly.
Judgment Delivered
In the theater of law, AI and deepfakes are new acts that demand immediate adjudication. For businesses, the question isn’t if you will face such risks, but when. The wise strategy is to prepare: bind your brand legally, protect your image, be ready to seek injunctive relief, and ensure compliance.
Lex Gladius is at the forefront of guarding reputations, defending imagery, and shaping policies. If your business engages with digital media, public personas, or any form of customer trust, reach out. Let us help you write your legal narrative, before someone else writes it for you.


