DeepSeek: The AI disruption with hidden dangers for businesses

Written by Aaron Flack | Jan 30, 2025

A new AI giant has emerged, causing waves across the global tech industry. DeepSeek, a Chinese-developed chatbot, has taken the market by storm, becoming one of the most downloaded AI applications in both the UK and the US. However, as businesses and individuals flock to test its capabilities, critical questions are being raised about its privacy policies, data security, and the broader implications of integrating unknown AI tools into the workplace.

The rise of DeepSeek—and the red flags

DeepSeek's rapid ascent has been likened to a "Sputnik moment" for AI, demonstrating China's ability to challenge American dominance in artificial intelligence. With claims of developing its large language model for a fraction of what competitors like OpenAI have spent, DeepSeek has triggered widespread concern, leading to a staggering $600 billion stock market shake-up, including a significant decline in Nvidia's market value (BBC News).

However, this success has been met with intense scrutiny. Privacy watchdogs in Italy and Ireland have launched investigations into how DeepSeek processes user data, and regulatory bodies are questioning whether the AI model complies with GDPR requirements (TechCrunch). The platform has also suffered from cyberattacks and security breaches, raising alarms about its resilience against threats (TechRadar).

What data does DeepSeek collect?

One of the most alarming aspects of DeepSeek is its extensive data collection practices. According to its privacy policy, DeepSeek harvests:

  • Personally identifiable information, including email addresses, phone numbers, and dates of birth.
  • User input, such as text, audio, and chat histories.
  • Technical metadata, including device details, IP addresses, and keystroke patterns.
  • Behavioural tracking, allowing the platform to monitor user actions beyond the app itself.

This information is stored on servers in China, raising significant concerns about data sovereignty and potential access by Chinese authorities. The platform states that data is retained "as long as necessary," a vague timeframe that offers no clear assurances for users or businesses (TechRadar).

The business risk of unknown AI tools

For businesses, integrating unverified AI models like DeepSeek into their workflows presents serious risks:

  1. Data Security and Compliance Risks
    GDPR, the UK Data Protection Act, and various other international data laws impose strict regulations on how businesses handle sensitive information. With DeepSeek under scrutiny by European data regulators, using the platform within an organisation could expose it to compliance violations and potential legal consequences.
  2. Intellectual Property and Confidentiality Breaches
    Every query submitted to DeepSeek is stored and potentially used to refine the model. For businesses dealing with proprietary information, trade secrets, or client data, using DeepSeek could mean unknowingly sharing confidential material with an external entity.
  3. Manipulated and Censored Information
    Reports suggest DeepSeek demonstrates selective moderation, avoiding politically sensitive topics in ways that raise questions about external influence on its outputs (TechCrunch). Businesses relying on AI-generated insights must consider whether they receive filtered, biased, or censored responses.
  4. Cybersecurity Threats and Organisational Integrity
    DeepSeek has already been the target of large-scale cyberattacks. Cybersecurity firm KELA revealed vulnerabilities that leave the model susceptible to jailbreaking techniques that have been publicly known for years (TechRadar). The ease with which attackers could manipulate DeepSeek poses a direct risk to businesses integrating it into their operations.

What should businesses do?

Given these mounting concerns, organisations must take a cautious and strategic approach:

  • Conduct a risk assessment: Before deploying any AI tool, assess its compliance with security and data protection policies.
  • Restrict the use of unverified AI models: Companies should establish clear policies against using AI platforms with questionable data practices.
  • Educate employees on AI security risks: Staff must understand that interacting with AI like DeepSeek could compromise sensitive information.
  • Monitor regulatory actions: Stay updated on decisions from data protection authorities and be prepared to adapt accordingly.
  • Choose reputable AI solutions: Well-established AI providers with transparent data policies should be prioritised over unknown or unregulated platforms.
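One practical control that supports several of the steps above is screening prompts for obviously sensitive material before they ever reach an external AI service. The sketch below is a minimal, illustrative example only; the patterns and function names are assumptions, and a real deployment would rely on a proper data loss prevention (DLP) tool rather than a few regular expressions:

```python
import re

# Illustrative patterns only -- not a complete or production-grade DLP ruleset.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk phone number": re.compile(r"\b(?:\+44\s?\d{4}|\(?0\d{4}\)?)\s?\d{3}\s?\d{3}\b"),
    "api key / token": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the sensitive-data categories detected in a prompt,
    so it can be blocked or flagged before leaving the organisation."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    # A prompt containing an email address would be flagged before submission.
    print(screen_prompt("Contact jane.doe@example.com about the contract."))
```

A gateway like this cannot catch everything, which is why it complements, rather than replaces, employee training and clear usage policies.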

A final warning

DeepSeek's sudden rise to prominence is a stark reminder that not all AI advancements are without risk. While businesses seek to harness AI's power, they must remain vigilant about the tools they trust with their data. The ongoing regulatory scrutiny, privacy concerns, and security vulnerabilities surrounding DeepSeek should serve as a clear signal: proceed with extreme caution.


Speak to an expert about safe AI use in your business.

 
Sources

  • BBC News: Be careful with DeepSeek, Australia says - so is it safe to use? (https://www.bbc.co.uk/news/articles/cx2k7r5nrvpo)
  • TechRadar: Is DeepSeek AI safe to use? (https://www.techradar.com/computing/cyber-security/is-deepseek-ai-safe-or-is-it-just-a-data-minefield-waiting-to-blow-up)
  • TechCrunch: Ireland and Italy send data watchdog requests to DeepSeek (https://techcrunch.com/2025/01/29/italy-sends-first-data-watchdog-request-to-deepseek-the-data-of-millions-of-italians-is-at-risk/)