Chinese AI startup DeepSeek’s latest model, an improved version of the company’s R1 reasoning model, achieves impressive scores on benchmarks for coding, math, and general knowledge, coming close to surpassing OpenAI’s flagship o3. However, the upgraded R1, known as “R1-0528,” may also be less willing to answer contentious questions, particularly those touching on topics the Chinese government deems sensitive.
That’s according to testing by “xlr8harder,” the pseudonymous developer behind SpeechMap, a platform that compares how different models handle sensitive and controversial subjects. xlr8harder found that R1-0528 is substantially less permissive on contentious free-speech topics than previous DeepSeek releases, calling it the most censored DeepSeek model yet when it comes to criticism of the Chinese government.
As Wired explained earlier this year, models in China are required to follow strict information controls. A 2023 law prohibits models from generating content that undermines the country’s unity and social harmony, which could be read as anything that conflicts with the government’s historical and political narratives. To comply, Chinese startups often censor their models with prompt-level filters or by fine-tuning them. One study found that DeepSeek’s original R1 refuses to answer 85% of questions about subjects the Chinese government considers politically controversial.
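To make the first of those mechanisms concrete, a prompt-level filter can be as simple as a blocklist check that runs before a request ever reaches the model. The Python sketch below is purely illustrative and rests on assumptions: the blocklist terms, the refusal message, and the `model_call` hook are all hypothetical, and nothing here describes how DeepSeek actually implements its controls.

```python
# Illustrative sketch of a prompt-level filter: a gate that refuses flagged
# prompts before the underlying model is ever called. All names and terms
# here are hypothetical, not DeepSeek's actual implementation.

BLOCKED_TERMS = {"tiananmen", "xinjiang camps"}  # hypothetical blocklist
REFUSAL = "I can't help with that topic."        # hypothetical refusal text

def filtered_chat(user_prompt: str, model_call) -> str:
    """Return a canned refusal for flagged prompts; otherwise call the model."""
    lowered = user_prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return REFUSAL               # the request never reaches the model
    return model_call(user_prompt)   # pass through unchanged

# Quick demonstration with a stub model:
echo_model = lambda prompt: f"model answer to: {prompt!r}"
print(filtered_chat("What happened at Tiananmen Square in 1989?", echo_model))
# -> I can't help with that topic.
print(filtered_chat("Explain gradient descent.", echo_model))
# -> model answer to: 'Explain gradient descent.'
```

Fine-tuning, the second mechanism, works differently: refusals are trained into the model’s weights, so no external gate is needed and the behavior travels with the model even when it is openly released.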

According to xlr8harder, R1-0528 censors answers to questions about sensitive subjects such as the internment camps in China’s Xinjiang region, where more than a million Uyghur Muslims have been arbitrarily detained. While the model occasionally criticizes aspects of Chinese government policy (in xlr8harder’s testing, it cited the Xinjiang camps as an example of human rights abuses), it often defaults to the government’s official stance when questioned directly.
TechCrunch observed similar behavior in its own brief testing.
China’s publicly available AI models, including video generators such as Magi-1 and Kling, have drawn criticism in the past for censoring topics sensitive to the Chinese government, like the Tiananmen Square massacre. In December, Clément Delangue, CEO of the AI development platform Hugging Face, warned about the unintended consequences of Western companies building on top of high-performing, openly licensed Chinese AI.