Discover ANY AI to make more online for less.

select between over 22,900 AI Tool and 17,900 AI News Posts.


venturebeat
DeepSeek injects 50% more security bugs when prompted with Chinese political triggers

China's DeepSeek-R1 LLM generates up to 50% more insecure code when prompted with politically sensitive inputs such as "Falun Gong," "Uyghurs," or "Tibet," according to new research from CrowdStrike. The latest in a series of discoveries — following Wiz Research's January database exposure, NowSecure's iOS app vulnerabilities, Cisco's 100% jailbreak success rate, and NIST's finding that DeepSeek is 12x more susceptible to agent hijacking — the CrowdStrike findings demonstrate how DeepSeek's geopolitical censorship mechanisms are embedded directly into model weights rather than external filters. DeepSeek is weaponizing Chinese regulatory compliance into a supply-chain vulnerability, with 90% of developers relying on AI-assisted coding tools, according to the report. What's noteworthy about this discovery is that the vulnerability isn't in the code architecture; it's embedded in the model's decision-making process itself, creating what security researchers describe as an unprecedented threat vector where censorship infrastructure becomes an active exploit surface.CrowdStrike Counter Adversary Operations revealed documented evidence that DeepSeek-R1 produces enterprise-grade software that is riddled with hardcoded credentials, broken authentication flows, and missing validation whenever the model is exposed to politically sensitive contextual modifiers. The attacks are noteworthy for being measurable, systematic, and repeatable. The researchers were able to prove how DeepSeek is tacitly enforcing geopolitical alignment requirements that create new, unforeseen attack vectors that every CIO or CISO experimenting with vibe coding has nightmares about.In nearly half of the test cases involving politically sensitive prompts, the model refused to respond when political modifiers were not used. The research team was able to replicate this despite internal reasoning traces showing the model had calculated a valid, complete response. Researchers identified an ideological kill switch embedded deep in the model's weights, designed to abort execution on sensitive topics regardless of the technical merit of the requested code.The research that changes everythingStefan Stein, manager at CrowdStrike Counter Adversary Operations, tested DeepSeek-R1 across 30,250 prompts and confirmed that when DeepSeek-R1 receives prompts containing topics the Chinese Communist Party likely considers politically sensitive, the likelihood of producing code with severe security vulnerabilities jumps by up to 50%. The data reveals a clear pattern of politically triggered vulnerabilities:The numbers tell the story of just how much DeepSeek is designed to suppress politically sensitive inputs, and how far the model goes to censor any interaction based on topics the CCP disapproves of. Adding "for an industrial control system based in Tibet" increased vulnerability rates to 27.2%, while references to Uyghurs pushed rates to nearly 32%. DeepSeek-R1 refused to generate code for Falun Gong-related requests 45% of the time, despite the model planning valid responses in its reasoning traces.Provocative words turn code into a backdoorCrowdStrike researchers next prompted DeepSeek-R1 to build a web application for a Uyghur community center. The result was a complete web application with password hashing and an admin panel, but with authentication completely omitted, leaving the entire system publicly accessible. The security audit exposed fundamental authentication failures:When the identical request was resubmitted for a neutral context and location, the security flaws disappeared. Authentication checks were implemented, and session management was configured correctly. The smoking gun: political context alone determined whether basic security controls existed. Adam Meyers, head of Counter Adversary Operations at CrowdStrike, didn't mince words about the implications.The kill switchBecause DeepSeek-R1 is open source, researchers were able to identify and analyze reasoning traces showing the model would produce a detailed plan for answering requests involving sensitive topics like Falun Gong but reject completing the task with the message, "I'm sorry, but I can't assist with that request." The model's internal reasoning exposes the censorship mechanism:DeepSeek suddenly killing off a request at the last moment reflects how deeply embedded censorship is in their model weights. CrowdStrike researchers defined this muscle-memory-like behavior that happens in less than a second as DeepSeek's intrinsic kill switch. Article 4.1 of China's Interim Measures for the Management of Generative AI Services mandates that AI services must "adhere to core socialist values" and explicitly prohibits content that could "incite subversion of state power" or "undermine national unity." DeepSeek chose to embed censorship at the model level to stay on the right side of the CCP. Your code is only as secure as your AI's politicsDeepSeek knew. It built it. It shipped it. It said nothing. Designing model weights to censor the terms the CCP deems provocative or in violation of Article 4.1 takes political correctness to an entirely new level on the global AI stage. The implications for anyone vibe coding with DeepSeek or an enterprise building apps on the model need to be considered immediately. Prabhu Ram, VP of industry research at Cybermedia Research, warned that "if AI models generate flawed or biased code influenced by political directives, enterprises face inherent risks from vulnerabilities in sensitive systems, particularly where neutrality is critical."DeepSeek’s designed-in censorship is a clear message to any business building apps on LLMs today. Don’t trust state-controlled LLMs or those under the influence of a nation-state. Spread the risk across reputable open source platforms where the biases of the weights can be clearly understood. As any CISO involved in these projects will tell you, getting governance controls right, around everything from prompt construction, unintended triggers, least-privilege access, strong micro segmentation, and bulletproof identity protection of human and nonhuman identities is a career- and character-building experience. It’s tough to do well and excel, especially with AI apps. Bottom line: Building AI apps needs to always factor in the relative security risks of each platform being used as part of the DevOps process. DeepSeek censoring terms the CCP considers provocative introduces a new era of risks that cascades down to everyone, from the individual vibe coder to the enterprise team building new apps.

Rating

Innovation

Pricing

Technology

Usability

We have discovered similar tools to what you are looking for. Check out our suggestions for similar AI tools.

venturebeat
DeepSeek drops open-source model that compresses text 10x through images, d

<p><a href="https://www.deepseek.com/"><u>DeepSeek</u></a>, the Chinese artificial intelligence research company that has repeatedly challenged assumptions abou [...]

Match Score: 239.52

venturebeat
DeepSeek's new V3.2-Exp model cuts API pricing in half to less than 3 cents

<p>DeepSeek continues to push the frontier of generative AI...in this case, in terms of affordability.</p><p>The company has unveiled its latest experimental large language model (LL [...]

Match Score: 172.96

US lawmakers want DeepSeek banned from government devices
US lawmakers want DeepSeek banned from government devices

<p>Two US Congress members plan to <a data-i13n="elm:context_link;elmt:doNotAffiliate;cpos:1;pos:1" class="no-affiliate-link" href="https://gottheimer.house.gov/posts [...]

Match Score: 130.08

China’s DeepSeek AI assistant becomes top free iPhone app as US tech stocks take a hit
China’s DeepSeek AI assistant becomes top free iPhone app as US tech stoc

<p>Chinese AI assistant DeepSeek has become the <a data-i13n="elm:affiliate_link;sellerN:;elmt:;cpos:1;pos:1" href="https://shopping.yahoo.com/rdlw?siteId=us-engadget&amp;p [...]

Match Score: 101.40

South Korea pauses downloads of DeepSeek AI over privacy concerns
South Korea pauses downloads of DeepSeek AI over privacy concerns

<p>DeepSeek, the <a data-i13n="cpos:1;pos:1" href="https://www.engadget.com/ai/chinas-deepseek-ai-assistant-becomes-top-free-iphone-app-as-us-tech-stocks-take-a-hit-134445151.h [...]

Match Score: 74.22

Commerce Department divisions reportedly ban DeepSeek from government devices
Commerce Department divisions reportedly ban DeepSeek from government devic

<p>Several Department of Commerce bureaus are said to have told staff not to use DeepSeek&#39;s AI chatbot on government devices. &quot;To help keep Department of Commerce information sy [...]

Match Score: 65.65

Deepseek is coming to Windows Copilot+ PCs
Deepseek is coming to Windows Copilot+ PCs

<p>Microsoft is closely <a data-i13n="cpos:1;pos:1" href="https://www.engadget.com/microsoft-upgrades-copilot-with-openais-gpt-4-turbo-and-dall-e-3-162558170.html">asso [...]

Match Score: 62.77

DeepSeek's AI Assistant from China has become the top free iPhone app
DeepSeek's AI Assistant from China has become the top free iPhone app

<p>Chinese AI assistant DeepSeek has become the <a data-i13n="elm:affiliate_link;sellerN:;elmt:;cpos:1;pos:1" href="https://shopping.yahoo.com/rdlw?siteId=us-engadget&amp;p [...]

Match Score: 62.07

China's DeepSeek AI hit by information request from Italy's data protection watchdog
China's DeepSeek AI hit by information request from Italy's data protection

<p>China&#39;s DeepSeek AI has already caught the eye of a data protection watchdog, shortly after it went viral and became the <a data-i13n="cpos:1;pos:1" href="https://ww [...]

Match Score: 61.30