DeepSeek: The AI disruption with hidden dangers for businesses
by Aaron Flack on Jan 30, 2025
A new AI giant has emerged, causing waves across the global tech industry. DeepSeek, a Chinese-developed chatbot, has taken the market by storm, becoming one of the most downloaded AI applications in both the UK and the US. However, as businesses and individuals flock to test its capabilities, critical questions are being raised about its privacy policies, data security, and the broader implications of integrating unknown AI tools into the workplace.
The rise of DeepSeek—and the red flags
DeepSeek's rapid ascent has been likened to a "Sputnik moment" for AI, demonstrating China's ability to challenge American dominance in artificial intelligence. With claims of developing its large language model for a fraction of what competitors like OpenAI have spent, DeepSeek has triggered widespread concern, leading to a staggering $600 billion stock market shake-up, including a significant decline in Nvidia's market value (BBC News).
However, this success has been met with intense scrutiny. Privacy watchdogs in Italy and Ireland have launched investigations into how DeepSeek processes user data, and regulatory bodies are questioning whether the AI model complies with GDPR requirements (TechCrunch). The platform has also suffered from cyberattacks and security breaches, raising alarms about its resilience against threats (TechRadar).
What data does DeepSeek collect?
One of the most alarming aspects of DeepSeek is its extensive data collection practices. According to its privacy policy, DeepSeek harvests:
- Personally identifiable information, including email addresses, phone numbers, and dates of birth.
- User input, such as text, audio, and chat histories.
- Technical metadata, including device details, IP addresses, and keystroke patterns.
- Behavioural tracking that allows the platform to monitor user actions beyond the app itself.
This information is stored on servers in China, raising significant concerns about data sovereignty and potential access by Chinese authorities. The platform states that data is retained "as long as necessary," a vague timeframe that offers no clear assurances for users or businesses (TechRadar).
The business risk of unknown AI tools
For businesses, integrating unverified AI models like DeepSeek into their workflows presents serious risks:
- Data Security and Compliance Risks
GDPR, the UK Data Protection Act, and various other international data laws impose strict regulations on how businesses handle sensitive information. With DeepSeek under scrutiny by European data regulators, using a non-compliant platform within an organisation could expose it to compliance violations and legal consequences.
- Intellectual Property and Confidentiality Breaches
Every query submitted to DeepSeek is stored and potentially used to refine the model. For businesses dealing with proprietary information, trade secrets, or client data, using DeepSeek could mean unknowingly sharing confidential material with an external entity.
- Manipulated and Censored Information
Reports suggest DeepSeek applies selective moderation, avoiding politically sensitive topics in ways that raise questions about external influence on its outputs (TechCrunch). Businesses relying on AI-generated insights must consider whether they are receiving filtered, biased, or censored responses.
- Cybersecurity Threats and Organisational Integrity
DeepSeek has already been the target of large-scale cyberattacks. Cybersecurity firm KELA revealed vulnerabilities that make the model susceptible to jailbreaking techniques known for years (TechRadar). The ease with which attackers could manipulate DeepSeek poses a direct risk to businesses integrating it into operations.
What should businesses do?
Given these mounting concerns, organisations must take a cautious and strategic approach:
- Conduct a risk assessment: Before deploying any AI tool, assess its compliance with security and data protection policies.
- Restrict the use of unverified AI models: Companies should establish clear policies against using AI platforms with questionable data practices.
- Educate employees on AI security risks: Staff must understand that interacting with AI like DeepSeek could compromise sensitive information.
- Monitor regulatory actions: Stay updated on decisions from data protection authorities and be prepared to adapt accordingly.
- Choose reputable AI solutions: Well-established AI providers with transparent data policies should be prioritised over unknown or unregulated platforms.
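As one concrete way to enforce a "restrict unverified AI tools" policy, some organisations add a domain blocklist check to their forward proxy or egress filter. The sketch below is purely illustrative: the `BLOCKED_DOMAINS` entries and the `is_request_allowed` helper are assumptions, not a reference to any real product, and a production blocklist should be sourced from your own threat-intelligence and policy process.

```python
from urllib.parse import urlparse

# Illustrative blocklist -- these domains are an assumption and should be
# maintained from your organisation's own policy/threat-intelligence feeds.
BLOCKED_DOMAINS = {"deepseek.com"}

def is_request_allowed(url: str) -> bool:
    """Return False if the URL's host is a blocked domain or one of its
    subdomains -- e.g. as a policy hook in a forward proxy."""
    host = urlparse(url).hostname or ""
    return not any(
        host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS
    )

print(is_request_allowed("https://chat.deepseek.com/api"))  # False
print(is_request_allowed("https://example.com"))            # True
```

A real deployment would enforce this at the network layer (DNS filtering or proxy rules) rather than in application code, but the subdomain-matching logic is the same.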
A final warning
DeepSeek's sudden rise to prominence is a stark reminder that not all AI advancements are without risk. While businesses seek to harness AI's power, they must remain vigilant about the tools they trust with their data. The ongoing regulatory scrutiny, privacy concerns, and security vulnerabilities surrounding DeepSeek should serve as a clear signal: proceed with extreme caution.
FAQ
Is DeepSeek AI safe to use?
DeepSeek AI raises significant privacy and security concerns due to its data collection practices. The platform stores user information on servers in China, including chat histories and technical metadata, raising questions about data sovereignty and potential government access. Additionally, recent cybersecurity vulnerabilities and a large-scale cyberattack highlight risks for users and businesses. Regulatory investigations in Europe are already underway, making it advisable to proceed with extreme caution.
What data does DeepSeek collect?
- Personal information such as email, phone number, and date of birth.
- User inputs, including text, audio, and chat histories.
- Device metadata like IP address, operating system, and keystroke patterns.
- Behavioural tracking data, which may be shared with advertisers and service partners.
Should businesses use DeepSeek AI?
At this time, our recommendation is not to use DeepSeek for business purposes. Businesses should be highly cautious before integrating DeepSeek AI into their workflows. The risk of data leaks, intellectual property exposure, and potential compliance violations under GDPR and other privacy laws makes it a risky choice. Organisations handling sensitive or confidential information in particular should avoid DeepSeek and opt for AI solutions with transparent security practices.
Why is DeepSeek under investigation in Europe?
Regulators in Italy and Ireland are investigating DeepSeek AI over concerns that its data collection and processing practices may violate GDPR. The platform's failure to provide precise details on how user data is stored, processed, and potentially transferred to China has triggered scrutiny. The Italian watchdog has already requested detailed disclosures from DeepSeek, and further regulatory action may follow.
What are safer alternatives to DeepSeek?
- ChatGPT (OpenAI) – A reputable AI chatbot with clear data handling policies.
- Claude (Anthropic) – A privacy-conscious AI model designed for enterprise use.
- Google Gemini – A powerful AI with enterprise-grade security features.
Speak to an expert about safe AI use in your business.
| Company | Resource Name | URL |
|---|---|---|
| BBC News | Be careful with DeepSeek, Australia says - so is it safe to use? | |
| TechRadar | Is DeepSeek AI safe to use? | |
| TechCrunch | Ireland and Italy send data watchdog requests to DeepSeek | |