Roberta Faux · May 1, 2025 · 3 min read

DeepSeek’s Security Lapses: The Implications of Cryptographic Failures for AI Systems

DeepSeek is a rapidly emerging AI startup that has garnered attention for its advancements in large language models (LLMs). Recent security audits have unveiled critical vulnerabilities in its mobile application, exposing severe deficiencies in its cryptographic implementations. These weaknesses not only compromise user privacy but also raise concerns about the broader security posture of AI-driven platforms. 

The technical flaws in DeepSeek’s encryption architecture further underscore the urgent necessity for AI applications to adopt quantum-resistant cryptography to mitigate future threats. 


DeepSeek’s Cryptographic Shortcomings 

A comprehensive security audit of the DeepSeek app revealed multiple cryptographic failures.

DeepSeek’s iOS application explicitly disables App Transport Security (ATS), a fundamental security feature that enforces encrypted communications via HTTPS. By circumventing ATS, the application transmits sensitive user and device metadata in plaintext over unsecured channels, exposing users to man-in-the-middle (MITM) attacks and unauthorized interception.
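
For context, ATS is controlled through a few Info.plist keys. The snippet below is an illustrative reconstruction of the kind of override the audit describes, not DeepSeek’s actual configuration:

```xml
<!-- Info.plist: setting NSAllowsArbitraryLoads to true disables HTTPS
     enforcement app-wide, permitting plaintext HTTP to any host. -->
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>
```

Removing the override, or scoping narrow exceptions to specific domains via NSExceptionDomains, restores ATS’s default HTTPS requirement.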

Despite modern cryptographic advancements, DeepSeek relies on Triple DES (3DES), a symmetric encryption algorithm that NIST has deprecated since 2015 due to its susceptibility to brute-force attacks. Compounding this issue, the encryption keys are hardcoded in the source code, making them trivially retrievable through static analysis or reverse engineering. Additionally, the app reuses initialization vectors (IVs), which undermines the confidentiality of encrypted data by making ciphertext patterns predictable.
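
To make the risk concrete, here is a minimal Python sketch (using the pycryptodome package) contrasting the reported pattern with a modern alternative; the key and IV values are made-up placeholders, not DeepSeek’s actual material.

```python
# Hardcoded 3DES key plus a reused IV: identical plaintexts always
# produce identical ciphertexts, leaking message patterns to observers.
from Crypto.Cipher import AES, DES3
from Crypto.Random import get_random_bytes

HARDCODED_KEY = b"0123456789abcdefFEDCBA98"  # 24-byte key baked into the binary
REUSED_IV = b"\x00" * 8                      # same IV for every message

def weak_encrypt(data: bytes) -> bytes:
    """3DES-CBC with a static key and IV, the pattern the audit describes."""
    pad = 8 - len(data) % 8
    cipher = DES3.new(HARDCODED_KEY, DES3.MODE_CBC, iv=REUSED_IV)
    return cipher.encrypt(data + bytes([pad]) * pad)

# Identical plaintexts yield identical ciphertexts: an eavesdropper can spot
# repeated messages without ever recovering the key.
assert weak_encrypt(b"user=alice") == weak_encrypt(b"user=alice")

def strong_encrypt(key: bytes, data: bytes) -> tuple[bytes, bytes, bytes]:
    """Modern alternative: AES-256-GCM with a fresh random nonce per message."""
    nonce = get_random_bytes(12)  # never reused under the same key
    cipher = AES.new(key, AES.MODE_GCM, nonce=nonce)
    ciphertext, tag = cipher.encrypt_and_digest(data)
    return nonce, ciphertext, tag

key = get_random_bytes(32)  # in a real app, held in the platform keystore
assert strong_encrypt(key, b"user=alice") != strong_encrypt(key, b"user=alice")
```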

The application mishandles user credentials and cryptographic keys, violating fundamental security principles such as OWASP’s Secure Storage Guidelines. Instead of securely hashing and salting authentication data, DeepSeek stores plaintext credentials, significantly increasing the risk of credential theft via local or remote attacks. 
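
By contrast, OWASP-style credential storage never retains the password itself. The sketch below, using only Python’s standard library, stores a unique random salt plus a memory-hard KDF digest per user.

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)  # unique random salt per user
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, dklen=32)
    return salt, digest             # store both; never the password itself

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, dklen=32)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```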

Additionally, DeepSeek exhibits aggressive telemetry practices, gathering extensive user and device data with minimal transparency. Notably, this data is transmitted to ByteDance-owned servers operating under the jurisdiction of the People’s Republic of China, raising concerns about data sovereignty, compliance with international privacy laws, and potential state surveillance risks. 


How Other LLM Providers Secure Data 

Leading LLM providers such as OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude implement far more robust cryptographic measures. User queries and model responses are transmitted over TLS-encrypted connections to prevent interception, and secure authentication methods such as OAuth2 and JSON Web Tokens (JWTs) keep API interactions confidential. Unlike DeepSeek, these industry leaders also clearly define data retention policies and anonymization mechanisms to mitigate privacy risks.
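
As a concrete illustration, here is a hedged client-side sketch in Python using the requests library; the endpoint URL and token are hypothetical placeholders, not any provider’s real values.

```python
import requests

API_URL = "https://api.example-llm-provider.com/v1/chat"  # hypothetical endpoint
jwt_token = "eyJhbGciOiJSUzI1NiJ9..."  # short-lived token obtained via an OAuth2 flow

response = requests.post(
    API_URL,
    json={"prompt": "Hello"},
    headers={"Authorization": f"Bearer {jwt_token}"},
    timeout=10,
    verify=True,  # the default; disabling it is the analogue of DeepSeek's ATS mistake
)
response.raise_for_status()
```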

However, without standardized security audits, it remains difficult to ascertain whether other AI applications might also disable security features for performance optimizations, employ deprecated encryption algorithms, or misconfigure API-level data protection. This highlights the pressing need for increased transparency and independent cryptographic evaluations of AI systems. 


Future-Proofing AI Security: The Quantum Threat  

While the DeepSeek incident underscores the necessity of fundamental cryptographic hygiene, another profound challenge looms on the horizon: quantum computing’s impact on encryption. Public-key schemes such as RSA-2048 and ECC are vulnerable to Shor’s algorithm, while Grover’s algorithm roughly halves the effective strength of symmetric ciphers, reducing AES-256 to about 128 bits of security.

In response to the quantum threat, NIST has standardized quantum-resistant algorithms for key establishment and digital signatures, including ML-KEM and ML-DSA. AI providers must prioritize the adoption of post-quantum encryption to future-proof LLMs against cryptanalytic advancements. Organizations developing AI-powered applications should begin the migration process immediately by integrating hybrid cryptographic approaches, combining classical and quantum-resistant schemes to ensure a seamless transition, as sketched below.
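
The following is a minimal sketch of hybrid key establishment, assuming a classical X25519 exchange (via the Python cryptography package) combined with a post-quantum KEM. The ML-KEM step is stubbed with random bytes as a placeholder, since PQ library APIs vary, so treat it as illustrating the combination pattern only.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical Diffie-Hellman over X25519 (both sides modeled locally here)
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
classical_secret = client_priv.exchange(server_priv.public_key())

# Placeholder for a post-quantum KEM shared secret (e.g., ML-KEM-768);
# in practice this would come from a real PQ library such as liboqs bindings.
pq_secret = os.urandom(32)

# Derive one session key from BOTH secrets: an attacker must break the
# classical scheme AND the post-quantum scheme to recover it.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-x25519+mlkem768",
).derive(classical_secret + pq_secret)
```

Because the session key is derived from both secrets, the exchange remains secure as long as either scheme holds, which is what makes the hybrid transition safe to deploy today.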


A Call for Enforced Security Standards in AI Applications 

The DeepSeek app’s security flaws highlight a fundamental lack of adherence to modern cryptographic best practices. These vulnerabilities expose users to data breaches, privacy violations, and potential nation-state surveillance—all preventable through proper implementation of established security frameworks. 

Moving forward, the AI industry must implement end-to-end encryption across all data transmissions, enforce regular cryptographic audits to identify weak or deprecated encryption practices, and establish transparent security disclosures regarding data handling policies.  Finally, we must prepare for post-quantum cryptography adoption, ensuring AI systems remain secure beyond classical threats. As AI continues to permeate critical industries, the integrity, confidentiality, and sovereignty of user data must be non-negotiable. The time for proactive security implementation is now. 
