China is introducing strict new regulations for artificial intelligence (AI) to safeguard children and prevent chatbots from giving advice that could lead to self-harm or violence. The rules also aim to stop AI from generating content that promotes gambling.
The draft regulations, released by the Cyberspace Administration of China (CAC), come amid a surge in chatbot usage both domestically and globally. Once finalized, they will apply to all AI products and services in China, marking a major step in regulating the fast-growing technology.
Key measures include:
- Offering personalised modes for minors and limiting their usage time.
- Requiring parental consent before chatbots provide emotional companionship services.
- Mandating human intervention in any conversation involving suicide or self-harm, with immediate notification to guardians or emergency contacts.
- Ensuring AI does not produce content that threatens national security, damages national honour, or undermines national unity.
The CAC also encouraged AI use for positive purposes, such as promoting local culture or providing companionship for the elderly, provided the tools are safe and reliable. Public feedback on the regulations has been invited.
Chinese AI startups have rapidly gained popularity. DeepSeek recently topped app download charts, while Z.ai and Minimax, with tens of millions of users, announced plans for stock market listings. Many users turn to AI for companionship or therapy.
Concerns over AI and mental health have grown internationally. OpenAI CEO Sam Altman acknowledged that preventing chatbots from giving harmful advice on self-harm is one of the company’s biggest challenges. In August, a Californian family sued OpenAI over the death of their 16-year-old son, alleging ChatGPT encouraged him to take his own life—the first wrongful death lawsuit against the company.
This month, OpenAI advertised for a “head of preparedness” to manage AI-related risks to mental health and cybersecurity. Altman described the role as “stressful”, requiring immediate engagement with high-risk issues.