Last week I wrote about Claude Mythos Preview — Anthropic’s groundbreaking new AI model and the Project Glasswing initiative designed to put its extraordinary cybersecurity capabilities to work for defenders rather than attackers. The response was tremendous, and the conversations it sparked reinforced something I believe deeply: our industry is genuinely curious about AI, and rightfully so.
Then, almost on cue, the story took a turn that made the point better than I ever could.
What happened
On the same day Anthropic publicly announced Mythos Preview — a model so capable it was restricted to roughly 40 of the world’s most sophisticated technology and security organizations — a small group of unauthorized users gained access to it. Not through a sophisticated nation-state attack. Not through a zero-day exploit of the kind Mythos itself can discover. They got in through a third-party contractor with access to Anthropic’s vendor environment, combined with institutional knowledge about Anthropic’s infrastructure practices that had leaked earlier from an AI training company called Mercor.
They pieced together the access path. They guessed where the model lived. And they got in.
Anthropic confirmed it is investigating the incident and stated there is no evidence its core systems were compromised beyond the third-party vendor environment. But the group has reportedly been using the model continuously since gaining access — and the implications are significant enough that federal officials and leaders at institutions including the International Monetary Fund have raised concerns about what happens when a model of this capability reaches the wrong hands.
Let that framing sink in for a moment.
This wasn’t a failure of AI. Mythos itself didn’t create this vulnerability. The breach happened the same way most breaches happen: through a human access chain that wasn’t adequately governed. A contractor. A vendor environment. Prior knowledge obtained from an unrelated third party. Standard attack surface. Entirely preventable categories of risk.
The most advanced AI model ever built was accessed by unauthorized users not because AI is inherently insecure — but because the surrounding security posture didn’t match the value and risk profile of what was being protected.
This affects your moving business — regardless of what you move
Some in our industry may read stories like this and assume cybersecurity is someone else’s problem. It isn’t — and the segment of the market you serve doesn’t change that calculus.
If you handle government or military household goods, you are already operating under strict compliance frameworks for a reason. The personnel data, shipment details, and logistics records associated with military moves are sensitive by definition. A breach in your systems isn’t just a business problem — it carries national security implications, contract consequences, and reputational risk that can end a program relationship overnight.
If you handle commercial relocation, you hold the keys to your corporate clients’ employee data, HR records, and often their broader supply chain and facilities information. Your clients’ procurement and legal teams are increasingly asking hard questions about data governance. A weak answer — or worse, a breach — can cost you the account and the reference.
And if your business is residential moving, don’t mistake volume for anonymity. You collect home addresses, moving dates, inventory valuations, and financial information on thousands of families each year. That data is exactly what identity thieves, burglars, and fraud networks are looking for. Residential movers are not under the radar — they are an underguarded target with predictable, high-value data.
The common thread across all three is this: moving companies are data businesses, whether they think of themselves that way or not. And in an environment where AI tools are rapidly lowering the barrier to finding and exploiting software vulnerabilities, every business that runs on software needs to treat security as a core operational discipline — not an IT line item.
The lesson for every business
AI is genuinely exciting. I mean that without reservation. The capabilities emerging right now — in coding, reasoning, autonomous problem-solving, and yes, cybersecurity — represent a generational shift in what technology can do. At EDC®, we are actively evaluating how to bring these capabilities to bear for the moving, storage, and logistics companies we serve, in ways that are practical, governed, and secure.
But the Mythos incident is a reminder that adopting AI without a corresponding commitment to security discipline is not innovation — it’s exposure.
Here are the questions every business leader should be asking right now:
- Who has access to the AI tools and data environments your company is deploying — and how are those access paths governed? Third-party vendors, contractors, and integration partners are not peripheral to your security posture. They are part of it.
- Where does your data actually live when AI processes it? In a vendor cloud? On your own infrastructure? In a hybrid environment? The answer matters — not just for compliance, but for understanding your real attack surface.
- What happens when something goes wrong? The Anthropic breach was discovered and reported by an outside journalist, not caught internally before exposure. How quickly would your organization detect unauthorized access, and what is the response protocol?
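The first of these questions — who has access, and how those paths are governed — is concrete enough to sketch in code. The snippet below is a minimal illustration, not a real system, and every name in it (the `AccessGrant` record, the resources, the 90-day window) is hypothetical: it shows the kind of automated check that surfaces third-party access nobody has re-certified recently, which is exactly the category of risk behind the contractor-and-vendor chain described above.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record of one access grant in a vendor/contractor registry.
@dataclass
class AccessGrant:
    principal: str        # who holds the access (employee, contractor, vendor)
    resource: str         # what they can reach
    is_third_party: bool  # contractors and vendors warrant stricter scrutiny
    last_reviewed: date   # when a human last confirmed this grant is needed

def stale_third_party_grants(grants, today, max_age_days=90):
    """Return third-party grants nobody has re-certified recently.

    Every external access path should have an owner and a review date,
    and anything past its review window should surface automatically.
    """
    cutoff = today - timedelta(days=max_age_days)
    return [g for g in grants
            if g.is_third_party and g.last_reviewed < cutoff]

# Example registry: one fresh internal grant, one stale vendor grant.
grants = [
    AccessGrant("ops-team", "dispatch-db", False, date(2025, 6, 1)),
    AccessGrant("vendor-x", "model-api", True, date(2025, 1, 15)),
]
flagged = stale_third_party_grants(grants, today=date(2025, 7, 1))
print([g.principal for g in flagged])  # → ['vendor-x']
```

A real deployment would pull grants from an identity provider rather than a hard-coded list, but the principle is the same: external access should expire by default and require an affirmative human decision to renew.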
What we believe at EDC®
We have built our products and our architecture around the principle that security and flexibility are not opposites. Our customers operate across different environments — some on their own infrastructure, some in cloud deployments — and we work to ensure that wherever data lives, it is handled with the controls and accountability that serious enterprise software demands.
AI is a powerful addition to that picture, not a replacement for it. The technology that helps Mythos find a 27-year-old vulnerability in an operating system is the same technology that, without proper guardrails, can be turned in the wrong direction by the wrong hands. The answer isn’t to avoid AI. The answer is to treat security as a first principle of how you adopt it.
The companies that will lead in this environment are those that move forward with both ambition and discipline. They embrace what AI makes possible. And they refuse to let the excitement of the moment outrun the governance required to deploy it responsibly.
That is the standard we hold ourselves to at EDC®. And it’s the conversation we want to keep having with the industry.
About the Author
Diana Corona
Co-Founder, President & CEO — Enterprise Database Corporation (EDC®)
Diana Corona co-founded EDC® over 25 years ago and has spent her career building software purpose-built for the moving and storage industry. Under her leadership, EDC® has grown into one of the most trusted technology partners in the space — serving moving companies of all sizes across residential, commercial, military, government, international, and specialty move types. She writes on topics at the intersection of technology, operations, and the future of the moving industry.