Fluency Scales When AI Lives Inside the Work You're Already Doing

In the previous unit, you explored how Zapier launched its AI journey before consensus existed — and how bundling the CEO mandate with immediate enablers turned a polarizing moment into a productive one. But a launch moment, no matter how well-designed, only gets you started. The harder question is how you sustain momentum across hundreds of people over months and years without building a parallel universe of AI training that competes with the work everyone's already doing. That's exactly the challenge Brandon Sammut tackled next — and the principle he landed on was deceptively simple: stop creating new programs, and start embedding AI into the rituals your organization already runs.

Embed Learning in Existing Rituals and Formalize Peer Support

As Brandon described, Zapier's guiding principle for scaling AI fluency was to "integrate AI learning and experimentation into as many of Zapier's existing practices as possible." The reasoning was practical: "there's already a lot going on in the business" and the goal was to "minimize the overhead or complexity" of building AI muscle. Hack weeks became AI hack weeks. Onboarding became an AI fluency onramp. The work itself became the curriculum.

One of the highest-value, lowest-effort moves was creating a dedicated Q&A channel — Slack or Teams — where "anyone can ask any question about anything related to AI" with "no question too big or small." Critically, Zapier didn't rely on goodwill. They assigned AI power users — early adopters who'd been experimenting since GPT-3.5 — and told them that roughly "10% of your bandwidth" each week was now formally allocated to answering questions in that channel. Brandon noted this was meaningful for the power users too: it gave formal recognition to the leadership they were already showing. The formula was simple: a single place for any question, plus designated helpers accountable for answering.
