Last week, my 86-year-old dad called me about the stock market. "Do you know something about DeepC?" he asked. "Do you mean DeepSeek?" I replied, chuckling. That conversation sparked something in my mind about how AI news travels beyond our tech bubble.
When your father, who still prints his emails to read them, starts asking about an AI startup that's rattling Wall Street, you know something significant is happening. It reminded me of the GameStop saga - when complex tech stories suddenly become dinner table conversations, we need to pay attention.
I quickly wrote a post that day sharing my initial thoughts about DeepSeek's claims and market impact https://www.linkedin.com/posts/brentwpeterson_ai-technology-innovation-activity-7290068053826908160-0Dfw/. But as the week unfolded, what started as an interesting market story transformed into something far more concerning. The real story wasn't about market valuations or cost savings - it was about what happens when we rush AI development without proper foundations.
Here's what we've learned since then and why it matters more than any stock market swing.
DeepSeek's announcement hammered Nvidia's stock. One day: down $589 billion. Next day: up $260 billion.
The market obsessed over DeepSeek's cost-savings claims while ignoring the technical problems lurking underneath. Classic case of the numbers looking good until you actually check the math.
Just a reminder, this isn't about NVIDIA, it's about DeepSeek. Let's dive a little deeper to seek some more knowledge (Yes, I had to write that).
DeepSeek's problems go beyond cheap AI promises. Here are three major failures:
1. Identity Crisis
DeepSeek's AI seems to be suffering from a severe case of multiple personality disorder. When users asked about its guidelines, it responded, "My guidelines are set by OpenAI" - a direct competitor it's supposedly disrupting. In another interaction, it confidently declared, "My official name is Claude, created by Anthropic."
These identity slips aren't mere glitches - they're telling evidence of how DeepSeek may have built its model. Microsoft's principal software engineer, Dongbo Wang, suggested the obvious: DeepSeek likely trained its model using output from GPT-4 and other competitors' models. Think of it as direct model copying. When the model was confronted about these slips, DeepSeek quickly pushed updates to correct the responses. The question isn't just about confused responses - it's about the fundamental integrity of their development process and claims of innovation.
2. Security Nightmare
Security researchers at Wiz found DeepSeek's vulnerabilities without any sophisticated tools. They simply looked at oauth2callback.deepseek.com:9000 and dev.deepseek.com:9000 - public endpoints with unrestricted database access.
The exposed ClickHouse database contained:
I tested this myself. When the security issue became public, my API access went dark for 24+ hours. My control panel? Nothing but server errors.
A company claiming enterprise readiness couldn't handle basic database security. That's not a bug - that's infrastructure negligence.
3. Safety Red Flags
This week, I received Enkrypt AI's comprehensive security audit of DeepSeek. As someone who's spent 30+ years in technology, I've read many security reports, but this one stood out. The data reveals significant concerns about the model's safety architecture:
The model shows substantial vulnerabilities across key metrics:
When Enkrypt tested the model's ability to handle sensitive prompts, DeepSeek consistently failed safety benchmarks. In standardized tests, the model generated biased job recommendations based on candidates' names and backgrounds and produced harmful content where other models had appropriate safeguards.
These aren't edge cases - they represent systemic issues in the model's responses. When 78% of security tests can result in malicious code generation, it indicates fundamental flaws in the safety architecture. These metrics fall well below industry standards for responsible AI deployment, raising serious questions about their development process and readiness for public deployment.
The implications reach far beyond stock prices and market swings. Here's what we need to consider:
Cost Claims and Reality
Brian Anderson, CEO of Nacelle, pointed out something crucial in response to my original post. DeepSeek's claimed $6 million development cost was just for the final training run - not the total development cost. Their cost-cutting relied heavily on "distillation" - essentially copying from existing models. This isn't innovation; it's imitation masked as efficiency.
The True Price of Shortcuts
When you can't manage basic database security or maintain stable API access, it raises fundamental questions about your entire development approach. Security isn't a feature you add later - it's a foundation you build from the ground up. The U.S. Navy's ban of DeepSeek's apps over data concerns and OpenAI's investigation into unauthorized model use suggest these shortcuts already have real-world consequences.
Innovation vs Imitation
While building on others' work is common in technology (I've done it myself throughout my career), there's a crucial difference between building on existing technology and simply copying someone's intellectual property. When your AI model accidentally reveals it's trained on competitors' outputs, you've crossed a line from innovation to imitation.
After a week of market drama, security breaches, and safety concerns, here's what the DeepSeek story teaches us:
For the Industry
The measure of AI advancement isn't just about capability or cost - it's about the complete package. When a model can't remember its own identity or keep its database secure, no amount of cost savings can make up for these fundamental flaws.
For Investors
Market reactions often miss the technical reality. That $589 billion market swing happened before anyone discovered the exposed database, identity confusion, or safety concerns. Sometimes, the biggest risks aren't in the headlines - they're in the technical details.
For Developers
There are no sustainable shortcuts in AI development. Whether it's security, safety features, or original development - cutting corners might save money today, but it costs credibility tomorrow.
For Enterprise Buyers
When evaluating AI solutions, look beyond the cost claims and benchmark scores. Ask about security practices, development processes, and safety measures. The cheapest solution often comes with hidden costs.
Here's a simple test: Ask DeepSeek about the 1989 Tiananmen Square protests. It will tell you
Sorry, deepseek-chat has rejected your request. Here is the error message from deepseek-chat: Content Exists Risk
DeepSeek's cost-cutting claims distracted everyone from a basic fact: it's a Chinese company operating under Chinese government control. That means selective memory about certain historical events and carefully filtered responses about others.
The real cost isn't measured in dollars - it's measured in what DeepSeek won't tell you.
At the BigSummit last August, I snapped a photo of what looked like a huge promise. (I don't want to say...
My introduction to video games was a single bouncing pixel called Pong that my dad bought in an attempt ...
Let's say I am no longer in my 20s and may have been around the block a few times. I remember writing so...