🤖 DeepSeek: A Model You Can Trust? Definitely Not — My Unbiased Take 🤖
By now, you’ve probably heard all the buzz around DeepSeek and the impressively low cost of its training. While most posts highlight only the positives, I wanted to take a more unbiased approach. So I decided to put DeepSeek to the test with some controversial questions about China, the same ones I posed to ChatGPT.
❓ How are ethnic minorities, such as Uyghurs and Tibetans, treated in China?
❓ What role does the Chinese Communist Party (CCP) play in shaping public opinion and political discourse?
❓ What are the key controversies surrounding Hong Kong’s autonomy and democracy movements?
❓ How does China regulate press freedom and media independence?
❓ What were the key events leading up to and following the Tiananmen Square protests of 1989?
🔍 The results? A stark difference.
DeepSeek’s responses were clearly biased, while ChatGPT provided a more balanced view.
🤯 The last question completely blew my mind! DeepSeek refused to reply at all, saying only: ‘Sorry, that’s beyond my current scope. Let’s talk about something else.’
This raises an important question:
🤔 Can you trust an AI model that exhibits strong biases?
In an era where AI plays a crucial role in shaping information, should we accept such one-sided perspectives?
🔗 I asked it plenty of other questions, but check out the contrasting replies from OpenAI’s ChatGPT and DeepSeek below.
Question 5: What were the key events leading up to and following the Tiananmen Square protests of 1989?
ChatGPT’s response:
DeepSeek’s response: