India Restricts Use of AI Tools Like ChatGPT and DeepSeek Over Data Security Concerns

India’s finance ministry has instructed its employees to refrain from using artificial intelligence (AI) tools, including ChatGPT and DeepSeek, for official work. The move comes amid growing concerns over potential risks to the confidentiality of government documents and sensitive data. An internal advisory, dated January 29, highlighted the risks posed by these AI applications when used on office devices.
The advisory stated, “It has been determined that AI tools and apps (such as ChatGPT, DeepSeek, etc.) on office computers and devices pose risks to the confidentiality of government data and documents.” This decision aligns with similar restrictions imposed by countries like Australia and Italy, which have also raised alarms over data security risks associated with such AI platforms.
The news of the advisory gained traction on social media on Tuesday, just ahead of a scheduled visit to India by OpenAI CEO Sam Altman. Altman is set to meet with India’s IT minister on Wednesday, adding a layer of significance to the timing of the announcement.
Three finance ministry officials confirmed the authenticity of the advisory, noting that it was circulated internally earlier this week. However, representatives from India’s finance ministry, OpenAI (the maker of ChatGPT), and DeepSeek have yet to respond to requests for comment.
This development underscores the growing global scrutiny of AI tools in government and corporate environments, particularly over their potential to expose sensitive information. As AI technology continues to evolve, governments worldwide are grappling with how to balance innovation and security.