“What happens when a new employee brings their agent to work?”

An executive asked this recently. Imagine a few years from now: a student graduates, having trained their own agent through university. It knows everything they've learned, every paper, every problem solved. Day one, they bring it to work.

It's like bring-your-own-device circa 2009. The iPhone launched & nobody wanted corporate BlackBerrys[1] anymore. IT scrambled to adapt.

But a rogue phone couldn’t sign contracts. A rogue agent can.

Amazon just learned this at scale. $6.3 million in lost orders. A 99% order-volume drop across North America. Four severity-one incidents in one week.[2][3]

Amazon's AI coding assistant contributed to at least one major production incident. The response: a 90-day safety reset with mandatory two-person review for all code changes.

An internal memo admitted what everyone implicitly knows:

"Best practices and safeguards around generative AI usage haven't been fully established yet."[3]

Companies can't hide behind hallucinations. Utah's AI Policy Act[4] eliminates the hallucination defense:

“It is not an affirmative defense to assert that the GenAI tool made the violative statement or undertook the violative act.”

Newer, larger models are smarter and more reliable.[5] But they still fail unexpectedly: there is no clear relationship between a model's size and how its failure modes evolve over time. And AI-generated code creates 70% more issues than human-written code.[6]

The TRUMP AMERICA AI Act would create explicit liability pathways, allowing the US Attorney General, state AGs, & private plaintiffs to sue AI developers over defective design & unreasonably dangerous products.[7]

That new hire's personal agent? The company bears liability for its mistakes. The contracts it signs, the code it deploys: all of it lands on the company.

Like a dog or a device, you are responsible for your agent.