While the India AI Impact Summit 2026, held in February at the Bharat Mandapam in New Delhi, remained a talking point for various reasons, it mostly served as a showcase for AI's potential to shape the future, with announcements related to investments, AI infrastructure, and development in India.
AI-related crimes are no longer new, yet the same month recorded several instances of AI misuse in India - particularly targeting women and other marginalised communities - a discourse that did not garner much attention at what was the biggest AI congregation in the country.
In February, a Hyderabad woman reported that her photos and videos had been morphed using AI deepfake technology and circulated on major social media platforms. In another such incident, a 31-year-old woman in Assam who had downloaded a loan app was targeted after she resisted extortion: her photograph was manipulated using AI to create synthetic nude images.
According to N. S. Nappinai, a Senior Advocate of the Supreme Court who has long worked on cybercrime cases, deepfakes and financial frauds continue to be the most reported cases she takes up. She says that while these crimes are not exclusively "women-centric", they do impact women more severely.
At the same summit, though, the Government of India's IndiaAI Mission (under MeitY), in partnership with UN Women, released the 'Casebook on AI and Gender Empowerment'. The casebook notes that women hold only 30% of AI professional roles globally and just 16% of AI research roles - figures that pose a grave question: who is designing AI models, and for whom? And is the underrepresentation of women and other genders directly linked to the gendered crimes resulting from AI systems?
What’s at stake due to the underrepresentation?
According to a recent report by the venture capital firm Kalaari Capital, India currently has around 84,000 women in AI and ML roles, accounting for roughly 20% of the total workforce - in other words, women make up just one in five professionals in India's artificial intelligence (AI) and machine learning (ML) workforce today.
Sonal Khanna, who runs Secure Blink, an indigenous AI-powered cybersecurity company focused on Application and API Security, says she has seen, specifically in cybersecurity, how threat detection models sometimes fail to adequately account for online harassment patterns, deepfake misuse targeting women, or gendered social engineering attacks.
“These are not always prioritised because the datasets and threat modelling often centre around financial fraud or infrastructure attacks, not digital safety concerns that disproportionately affect women,” the co-founder says, adding, “When women are underrepresented in development and governance, blind spots naturally emerge.”
Similarly, Garima Bharadwaj, the co-founder and CTO of Enlite, a climate-tech company that claims to have moved beyond traditional automation to create what it calls “autonomous intelligence” for buildings, has seen AI systems reflect blind spots in women’s realities. Citing an example from her work, she says, “Traditional building automation systems were designed primarily around equipment efficiency rather than occupant experience. They assumed uniform occupancy patterns, fixed schedules, and static usage models.”
In reality, building usage varies significantly, Bharadwaj says. “Women often have different arrival and departure times, different movement patterns across floors, and different safety expectations, especially in early morning or late evening hours. Static automation systems do not respond well to these variations. This can lead to poorly lit areas, delayed system response, or suboptimal comfort conditions in zones that are actively used," she says, adding that AI systems learn from the people who design them, the data they are trained on, and the priorities that guide their deployment.
As of 2025, according to data from the Press Information Bureau (PIB), India has approximately 130-150 operational commercial data centres, with a significant portion upgraded to support AI workloads. The same source mentions that India has over 1,780 dedicated Artificial Intelligence companies as of early 2026, including 482 funded start-ups and 3 unicorns.
With fewer women and diverse groups among AI professionals and researchers, a structural imbalance emerges in how problems are defined and solved, as the AI entrepreneurs observe. "This is not just a representation issue. It directly affects product outcomes," Bharadwaj says.
A recent UNESCO report also highlights that while 29.92% of the AI talent on LinkedIn in India is female, gaps remain, with women holding only 4-8% of executive-level tech roles. The figures also count 71.76% of AI research papers as having female authors, even as only 12% of dedicated AI research positions are held by women.
It is these blind spots that get created due to the gaps that Priyanka Aeron, the co-Founder and CEO of Thrive Global AI warns against. “The datasets used, the issue statements prioritised, and the user journeys mapped frequently overlook female realities if women are absent from research labs and product rooms. At the product level, this can include safety systems that overlook gender-based harassment trends, fintech risk models that misread women-led enterprises, or healthcare algorithms that underdiagnose women,” she says, adding, “Biased AI has the potential to widen rather than close economic disparities in society.”
‘AI undervalues women-led brands’
From her experience running a company that offers AI-first marketing solutions tailored to businesses, Aeron shares a striking pattern. "In marketing and commerce, AI models trained primarily on historical transaction data have been observed to misclassify target segments or undervalue women-led brands. Women-focused products, for example, were occasionally forced into 'niche' categories, which reduced their visibility and potential for growth," she shares.
As Aeron notes, this kind of categorisation not only reduces discoverability but also raises acquisition costs, and distorted performance measures become common for such brands.
From bias in AI to investor rooms, hirings…
While the underrepresentation of diverse groups can itself result in biased AI, the industry is not free of bias either. Priyanka Aeron, the Noida-based entrepreneur, has also observed that when pitching AI products, women entrepreneurs are questioned differently from their male counterparts. "Although male entrepreneurs are asked about adaptability and expansion, women entrepreneurs are more often asked about sustainability and risk reduction. There may be an extra layer of technical validation in AI in particular, as if proficiency must be demonstrated again."
As the woman CEO of an AI-first firm, she sometimes has to deal with both gender bias and technology mistrust. "AI is frequently glamorised in investor meetings, but female founders may still come under closer scrutiny for their technical proficiency or execution skills," she says.
This was evident even at the recently concluded AI summit, where men dominated most discussions. "At industry events, I've also observed that discussions around AI safety frequently focus on scalability, automation, and enterprise productivity, while issues like algorithmic bias, online abuse amplification, or consent violations receive less depth. That imbalance mirrors who is in the room shaping the conversation," Sonal Khanna states.
In Khanna's view, the underrepresentation of women and other diverse groups on AI panels and in start-ups is not just because the sector is nascent; it stems from a lack of access to early mentorship in deep-tech, limited visibility of women role models in AI, funding bias in venture ecosystems, a confidence gap shaped by systemic conditioning, and fewer women in core AI research pipelines.
“I have experienced moments where technical depth was assumed to belong to male counterparts in the room. It requires consistently asserting competence and presence,” Khanna shares. As she has also spoken to many women navigating AI, she says that they “struggle with visibility, not capability”. “The ecosystem often celebrates louder voices, not necessarily more capable ones.”
According to Garima Bharadwaj, in deep technology and AI there is still an underlying expectation that women founders are more aligned with marketing or business functions rather than building the core technology itself. "...as a result, conversations can sometimes begin with more scrutiny around technical depth, architectural decisions, and execution capability. There is an implicit need to establish that you are not just leading the company, but that you are deeply involved in building the technology itself," she says from her experience.
Bharadwaj attributes this imbalance to the fact that fewer women have historically built deep-infrastructure or AI-led companies. "When building a highly technical, infrastructure-level product, women founders may be evaluated with greater scepticism around technical ownership and execution capability. This does not necessarily come from intent, but from longstanding assumptions about who typically builds core technology."
As a result, women have to establish conviction by consistently demonstrating technical depth, clarity, and execution over time, as all the entrepreneurs speaking to Local Samosa note.
Dr Beena Rai from Bettrlabs says she may not have faced such biases herself - something she credits to her decades of research and a strong scientific track record - but she also highlights that deep tech is still perceived as male-dominated, and women founders are often examined more closely on technical credibility. "Fewer women entering and staying in these paths limits not just diversity, but the quality of questions being asked," the co-founder and CSO at Bettrlabs says.
'Capturing non-binary genders for protection from deepfakes'
Coming to crimes committed through AI, redressal might be both simple and nuanced. "General laws under BNS and special provisions under IT Act can both be invoked," says N. S. Nappinai, also the founder of Cyber Saathi.
Currently, deepfakes in India are addressed through the Information Technology (IT) Act, 2000, the Bharatiya Nyaya Sanhita (BNS), 2023 (which replaced the IPC), and the IT Rules, 2021. Key provisions include penalties for impersonation (Section 66D), violation of privacy (Section 66E), and publishing obscene or sexually explicit material (Sections 67 and 67A). Financial frauds are addressed through provisions on cheating and forgery (Sections 420 and 467/468 of the former IPC, now carried into the BNS) and through the IT Act's provisions on identity theft and breach of confidentiality (Sections 66C and 72A).
The National Cyber Crime Reporting Portal, along with the 1930 helpline, handles deepfake complaints, and a substantial number of AI-generated abuse cases were reported through specialised helplines in 2025. While no specific records are available for LGBTQIA+ individuals, various reports claim that deepfake abuse is a pervasive and growing experience for the community.
On the other hand, implementation of the above laws remains a challenge, not just for women but also for non-binary genders. On this, Ms Nappinai says, "We, firstly, need better enforcement of existing laws. Even that will help the weaker sections better."
Talking to Local Samosa, she says, "Capturing non-binary genders for protections and simplifying procedural processes will help better implementation; these are two aspects that lawmakers could focus on."
The vicious cycle of perceptions, AI biases and, thus, crimes
At the prevention level, however, only the visibility, participation, and representation of women in AI roles can do the work, as the women already working with AI systems reiterate. Priyanka Aeron from Thrive Global AI highlights the vicious cycle.
"Conversations become data-led rather than perception-led when you (women and other sections) can show 200–300% performance gains supported by analytics and forecasting models. However, representation itself continues to be a problem. You become more noticeable the fewer ladies there are in the room. However, power can also come from visibility. It makes it possible to normalise women in leadership positions in deep technology."
Similarly, Garima Bharadwaj of Enlite opines that systems evolve, improve, and compound over time, and that perceptions and biases around women and other genders in AI can be reduced when their work - in this case, the AI itself - speaks for them. "In infrastructure and AI, credibility is ultimately established by what the system does, not who built it," she says, highlighting the potential ripple effects.
According to a 2024 survey by Nasscom and the consultancy firm BCG, nearly 79 per cent of women in senior roles in India have adopted Generative AI for work.
In the Union Budget, Finance Minister Nirmala Sitharaman announced an allocation of Rs. 1,000 crore specifically for the IndiaAI Mission for the 2026-27 fiscal year. The allocation builds on the IndiaAI Mission, which was approved by the Cabinet in March 2024 with an outlay of Rs. 10,372 crore to build a "scalable AI ecosystem in India".
However, there do not seem to be any concrete steps towards promoting inclusive participation in the AI industry, leaving students and aspirants to seek participation on their own.
To mitigate the biases that AI might develop - given that AI systems reflect the data they are trained on and the people who design them - Sonal Khanna from Secure Blink suggests encouraging STEM early, at the school level. "We need more women on AI policy and governance panels, equal representation in accelerator programs, intentional mentorship networks, and access to capital for deep-tech women founders," she says.
Per a recent announcement by Maharashtra's Information Technology Minister Ashish Shelar, the state is also set to establish the country's first Artificial Intelligence (AI) university, which aims to promote research and development in AI and related fields and will act as a centre of excellence.
"Women’s expertise in AI should not be an exception that needs validation. It should be the norm. If AI is shaping long-term societal outcomes, then inclusive and research-led participation is not optional; it is essential," Dr Beena Rai adds.