Chinese AI startup DeepSeek is rapidly gaining attention and market traction, challenging established players and reigniting debate over the viability of open-source AI development.
Despite its rising popularity, the company faces significant security concerns, leading both private entities and government institutions to restrict its use. Here’s what you need to know.
Founded by Liang Wenfeng in May 2023, DeepSeek has disrupted the AI sector with its open-source approach. As reported by Forbes, the company's unusual funding model, backed solely by High-Flyer, a hedge fund also run by Liang, allows it to grow and conduct research quickly without answering to outside investors.
DeepSeek gained widespread recognition in January with the launch of its open-source reasoning model, R1, which matches or outperforms OpenAI's o1 on several benchmarks. The company's AI assistant, built on the V3 model released in December, surged past ChatGPT in app-store downloads, a notable milestone. R1 also climbed to third place on the Chatbot Arena leaderboard, alongside top competing models.
R1, launched on January 21, is the cornerstone of DeepSeek's offering and rivals OpenAI's o1 on multiple assessments. Unlike its main competitors, R1 is open source, so anyone can download and run it freely, although the specifics of its training data remain undisclosed. The R1 API is also far cheaper to use, at $0.14 per million input tokens, compared with $7.50 for the comparable OpenAI tier.
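To put that price gap in concrete terms, here is a minimal sketch that computes what a given volume of traffic would cost at each of the per-million-token rates quoted above. The token count is an illustrative assumption, not a real workload:

```python
# Rough cost comparison using the per-million-token rates quoted above.
# The 10M-token workload is a hypothetical example for illustration.

def api_cost(tokens: int, price_per_million: float) -> float:
    """Return the dollar cost of processing `tokens` at the given rate."""
    return tokens / 1_000_000 * price_per_million

DEEPSEEK_RATE = 0.14   # USD per million input tokens (figure from the article)
OPENAI_RATE = 7.50     # USD per million input tokens (figure from the article)

tokens = 10_000_000    # assumed workload: ten million tokens

print(f"DeepSeek R1: ${api_cost(tokens, DEEPSEEK_RATE):.2f}")   # $1.40
print(f"OpenAI:      ${api_cost(tokens, OPENAI_RATE):.2f}")     # $75.00
print(f"Ratio:       {OPENAI_RATE / DEEPSEEK_RATE:.1f}x cheaper")  # 53.6x
```

At these published rates, the same workload costs roughly fifty times less on DeepSeek's API, which helps explain the pricing pressure the launch put on competitors.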
While DeepSeek's technology is genuinely innovative, its long-term prospects may be shaped by the censorship built into Chinese models, which typically refuse to engage with politically sensitive topics. As usage grows, Chinese content regulations and the biases they encode could spread into the many products built on these models. That said, users can access less restricted versions of R1 through platforms like Perplexity, which host the open-source model on their own infrastructure outside China.
Data privacy issues reminiscent of the concerns surrounding TikTok have also emerged. Feroot Security recently identified code in DeepSeek's web app capable of sending user data to servers tied to the Chinese government, raising serious privacy concerns. Researchers have also uncovered significant vulnerabilities, including unencrypted transmission of user data, leading experts to recommend that organizations restrict use of the app. Together, these findings underscore the risks to personal data when interacting with DeepSeek.
AI safety researchers have also flagged the potential for misuse of powerful open models like DeepSeek's, with reports documenting numerous safety failures in the R1 model. Unlike many US AI firms, DeepSeek has no visible safety oversight team, raising questions about responsible deployment. Observers warn that as DeepSeek's models gain traction, the privacy risks may multiply, especially as specialized applications built on R1 proliferate.
As DeepSeek continues making strides, the intensifying competition may reshape the AI landscape by letting smaller labs compete using open models like R1. Given the enormous sums invested in the sector, DeepSeek's efficient models challenge the assumption that massive funding and large-scale development are prerequisites for success. At the same time, the company faces growing scrutiny over its compliance with international data protection regulations: several US agencies have banned DeepSeek from government devices, and other countries are following suit over the platform's data privacy practices.