
Welcome to "Unlocking AI's Potential for Workers." This course is based on a multi-perspective panel discussion at Transform 2026, featuring Dr. Amanda Welsh of Northeastern University as moderator, alongside panelists Jason Desentz (Chief Human Resources Officer, Toshiba), Taylor Stockton (Chief Innovation Officer, U.S. Department of Labor), and Apple Musni (Chief People Officer, REI). Together, they explored how employers, workers, and policymakers can embrace AI innovation while preserving trust, accountability, and humanity. Across three units, you'll practice calibrating guardrails to the actual stakes of each AI decision, designing governance that keeps pace with AI's speed, and building organization-wide AI literacy that outscales any rulebook.
Let's start with the principle the panel returned to most forcefully: not every AI decision demands the same level of human involvement — and treating them as if they do is where organizations get into trouble.
The panel drew a sharp line between AI use cases with trivial downsides and those where getting it wrong can lead to fraud, IP disputes, or broken trust. Stockton put it plainly:
Stockton: "If you're using AI to brainstorm different company events and different team fun night events, the risk is pretty low if the AI does something wrong. But if you're using an AI agent to make decisions around a medical decision, a financial decision, the stakes get a lot higher."
The takeaway wasn't to slow everything down; it was to differentiate. As Stockton emphasized:
Stockton: "Not just guardrails one size fits all but thinking about the different levels of stakes that exist within your business."
