Regulators in New Delhi have formally challenged Elon Musk’s social media platform X over what they describe as failures to contain obscene and harmful material generated by its artificial intelligence chatbot Grok, marking an escalation in scrutiny of AI-driven content governance in one of the world’s largest digital markets.

The Ministry of Electronics and Information Technology (MeitY) has written to X to express concern that AI-generated outputs circulating on the platform breach domestic rules on decency, child protection and public order. Officials familiar with the correspondence say the ministry has sought detailed explanations of the moderation safeguards and compliance mechanisms tied to Grok, which is embedded in X and accessible to users through paid subscriptions.
The letter was addressed to X, formerly Twitter, and highlights what the ministry sees as systemic gaps in monitoring AI responses that can be instantly amplified through social media sharing. Under Musk’s ownership, the platform has positioned itself as a space for free expression with lighter-touch moderation, a stance that has drawn regulatory attention in several jurisdictions.
Officials argue that the integration of generative AI heightens risk because content is not merely user-posted but algorithmically produced. In communications with the company, the ministry pointed to examples of sexually explicit language and abusive replies attributed to Grok, warning that such outputs could expose minors to harmful material and inflame social tensions. The government’s technology rules require platforms to deploy reasonable efforts to prevent the dissemination of unlawful or obscene content, regardless of whether it is generated by humans or machines.
The intervention comes as authorities globally tighten oversight of generative AI. European regulators are examining whether AI chatbots comply with digital services and AI-specific legislation, while several governments in Asia have introduced guidelines obliging platforms to label synthetic content and establish rapid takedown systems. Within India’s regulatory framework, intermediaries are expected to demonstrate due diligence, including swift removal of content flagged by authorities and transparent grievance redress mechanisms.
X has not publicly responded to the letter, though the company has previously maintained that Grok is designed to answer questions with a degree of irreverence and humour distinct from other chatbots. Musk has described Grok as a “truth-seeking” system trained on real-time data from the platform, arguing that this approach reduces bias. Critics counter that real-time training increases exposure to toxic language and misinformation, particularly in politically charged or culturally sensitive contexts.
Policy analysts note that the dispute underscores a broader tension between innovation and regulation. Generative AI tools are being rapidly embedded into consumer platforms, often faster than governance frameworks can adapt. For regulators, the challenge lies in applying existing laws—crafted for user-generated content—to systems capable of producing novel text at scale. Legal experts say the key question is accountability: whether platforms should be treated as publishers of AI outputs or remain protected as intermediaries.
Industry observers also point to commercial implications. India represents a significant growth market for social media and AI services, with hundreds of millions of users and expanding digital payments and subscription models. Regulatory friction could influence how aggressively global technology firms roll out advanced AI features in the country, or how much they invest in localisation and safety testing.
Civil society groups have welcomed the ministry’s move, arguing that unchecked AI outputs risk normalising abusive speech and misinformation. They have urged clearer disclosure when users interact with chatbots and stronger age-gating measures. At the same time, digital rights advocates caution against overreach that could stifle legitimate expression or innovation, calling for proportionate enforcement and open consultation with technologists.