In the previous unit, you explored how Asana made AI integration a non-negotiable responsibility for every People program owner — from vendor roadmap audits to no-code GPTs built without engineering support. But program-level ownership only works if the organization signals that internal AI adoption matters at the highest level. That's where Asana's next move comes in: elevating AI activation to a company-level objective with the same visibility as revenue targets, and standing up an AI council designed not around technical expertise, but around a very specific human profile.
You'll recall that Lisa Ann Logan described Asana setting "about seven or eight company objectives" each year — the goals that translate long-term strategy into annual priorities. Internal AI adoption became one of those top-level goals, sitting "right up there with our ARR goals for the year." That placement wasn't symbolic. It meant the goal inherited the same accountability infrastructure Asana uses for everything else: named owners on every goal, defined success metrics, and "real candid red yellow green flags on whether it's off track or on track." The CIO was named as the accountable executive, and the goal was deliberately broken into sub-goals relevant to every function — finance, marketing, legal, and beyond — so each team felt invested rather than merely informed. As Logan emphasized, this created both visibility and a culture of accountability that made AI adoption impossible to quietly deprioritize when other business pressures mounted.
With the company-level goal in place, Asana needed a council to drive it forward. But rather than duplicating work already happening elsewhere, Logan's team identified a specific gap: subject matter experts in governance, security, and procurement were already handling their domains, so the council's charter focused squarely on accelerating — what Logan called
