HOW ENTERPRISES ADOPT AI: PART 3
- Mar 10

From theory to validated practice
In Part 1, we established that enterprise AI adoption is a coordination problem, not merely an information problem. In Part 2, we set out the five conditions a training programme must create for a focal point to form: reduced options, shared mental models, common knowledge, group identity and visible early adopters.
This final part describes what a programme built on those conditions looks like in practice - how Brightbeam's Embed programme engineers each condition and why the sequence matters as much as the content.
Three pillars, not three phases
The Brightbeam 'Embed and Transform' method rests on three pillars: Plan, Educate and Facilitate.
They're not sequential. All three pillars are operational throughout our engagement - and, we hope, long after it ends. This means that common knowledge and group identity begin forming in the planning stage - before any formal training sessions start - and continue developing long after we have completed our work, so that the evolution can continue.
Pillar 1: Plan builds the foundation that makes it possible for coordination to occur.
Pillar 2: Educate is how the information is shared and actual practice takes hold.
Pillar 3: Facilitate is where culture and rituals meet incentives and governance - to lock the focal point in place.
Each pillar does specific work. And each creates something the others depend on.
Plan: loading the dice before training begins
Most AI training programmes begin with content design. But to create a Schelling focal point, you need to begin with reconnaissance.
Our planning phase therefore runs for roughly two weeks, ahead of any training session. It includes a leadership workshop, a baseline survey of every participant and deep-dive interviews with team members, and it produces a task-level map of work across the organisation. The purpose is to understand the organisation deeply enough to make three design decisions that ensure a focal point can form.
The first decision is tool prescription. The research in Part 1 showed that coordination optimises for what is most shared - not what is best. So we select one primary AI tool for the cohort. Shared adoption of a single app creates the prominence that a menu of options destroys.
The second decision is use-case sequencing. Through task-level interviews, we identify specific, narrow use cases that span departmental boundaries. Each use case is prepared specifically for the client, and each must be mastered and made visible before the next is introduced. The commonality matters: two people in different departments, working independently, need to recognise that they are engaged in the same practice.
The third decision is cohort composition. The first cohort we plan to train is chosen for its collective visibility - people whose work is seen by many others, who span organisational boundaries, whose adoption will be noticed. We put concentration before coverage. Density before breadth.
The leadership workshop serves a dual purpose. It aligns the SLT on what the programme is trying to achieve - coordination, not just competence. And it ensures those leaders understand they must be publicly visible as participants. When a department head demonstrates AI use in front of the organisation, it creates common knowledge that AI use is the expected behaviour. One visible act of leadership practice outweighs months of private encouragement.
We also run a broadcast kickoff - a session for the entire cohort, before training begins, that frames the programme collectively. The framing determines which cognitive mode people bring to the problem. Collective framing - 'we are becoming an organisation that works with AI' - activates we-mode reasoning.
By the time the first training session runs, three of the five conditions are already partially in place. Options have been reduced. Common knowledge has been seeded. And the early adopter cohort has been selected for maximum visibility.
Educate: building shared practice through structured repetition
The training follows a deliberate rhythm. Four live sessions of roughly three hours each, alternating with small-group coaching sessions of an hour.
Train, practise, coach, practise, repeat. That's the mantra.
And each cycle takes approximately two weeks.
This structure exists because instruction produces knowledge whilst shared experience produces coordination. The training sessions are live and cross-functional by design because private learning doesn't create common knowledge. When a finance analyst and a marketing manager solve the same AI problem in the same Teams meeting, they develop shared expectations through practice.
The doing, as they say, is indeed the teaching.
Every session sets practice - we use that word deliberately. 'Homework' implies obligation. 'Practice' implies skill development. Participants complete their practice before the next coaching session, which reviews it in small groups: what worked, what didn't, whether the AI hallucinated, how they verified the output. Completion is tracked - not punitively, but because 100% completion is the only standard worth having. No one need be left behind.
Supportive conversations happen quickly when someone falls behind.
And during the cycle, a shared vocabulary forms. When people describe their AI use to each other, using the same terms, in the same forum, repeatedly, the error bars shrink.
This takes hold quickly because each curriculum is bespoke. Every exercise, every example, every practice assignment is built from the task-level mapping completed during the planning phase. Through the training sessions and the practice that follows them, participants aren't learning abstract AI skills. They're learning to do their actual work with AI - the invoicing process they run every month, the customer outreach emails they send every week, the data analysis they spend hours on every quarter.
This builds shared mental models and a common understanding of what AI can do, what it can't (yet) and what 'good' AI-assisted work looks like in this specific organisation. It also means the training pays for itself almost immediately. When someone applies what they learned in a morning session to their afternoon's work, the ROI argument is already won.
But taskwork models alone aren't sufficient. The coaching sessions deliberately build the teamwork mental models as well. How does AI-assisted work flow between people? What are the norms around disclosure, verification and labelling? These questions rarely appear in training curricula. They appear in ours because coordination depends on shared answers, not just shared skills.
Every one of our programmes culminates in a capstone project. It's not optional. Every participant must combine a minimum of three skills into a workflow that addresses a real business process they regularly encounter. The capstone produces tangible evidence of AI capability - work products that colleagues can see. In fact, we strongly encourage colleagues to group together and create an ambitious capstone as a team. It forces participants to move from task-level AI use to workflow-level thinking as part of a team, which bridges naturally into the next sprint.
From cohorts we've trained, 97% of participants reach regular AI usage by the end of Sprint 1. More telling: over half demonstrate complex cognitive offloading - using AI for analysis, verification and decision support. These people know how to use AI. And their colleagues can see them using AI - in recognisable ways, producing visible work. And that, we know, is the difference that matters.
Facilitate: locking the focal point in place
Planning and training together can create a focal point. But without facilitation, that focal point dissipates.
The Facilitate pillar addresses incentives, rituals and communications - as well as the ongoing governance to lock them in. Combined, they deliver the organisational infrastructure that ensures the new practices persist. We deliver this through two focused workshops - an operating model session and a culture and communications workshop - and then step back so the SLT can take up the reins.
The operating model workshop builds five elements: policies, compliance, rituals, incentives and process architecture. Governance, in our framing, means enablement. Clear governance makes people confident. It answers the questions that otherwise become reasons not to act: what data can I put into AI tools? Do I need to disclose when work is AI-assisted? What happens if the AI gets something wrong?
We advocate a principles-based approach - a concise core policy that provides guidance for situations the authors haven't anticipated, rather than an exhaustive rules-based document that becomes obsolete the moment a new tool appears. Tool-specific guidance sits in addenda that can be updated without revising the core policy. This matters because a governance framework that requires constant revision creates uncertainty. Uncertainty widens error bars. And wide error bars destroy focal points.
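As a loose illustration of that principles-plus-addenda structure (the policy text, tool names and field names below are invented examples, not real client guidance), the split might look like this - the stable core never changes when a tool addendum does:

```python
# Illustrative sketch only: invented policy text and tool names.
CORE_POLICY = {
    "version": "1.0",
    "principles": [
        "Never paste confidential client data into external AI tools.",
        "Disclose AI assistance where it materially shaped the output.",
        "A human verifies every AI output before it leaves the team.",
    ],
}

# Tool-specific addenda: updated freely, without revising CORE_POLICY.
TOOL_ADDENDA = {
    "example-chat-tool": {"approved_data": ["public", "internal"], "reviewed": "2025-01"},
}

def guidance_for(tool: str) -> dict:
    """Merge the stable core with whatever addendum exists for a tool."""
    return {**CORE_POLICY, "addendum": TOOL_ADDENDA.get(tool, {})}
```

Adding a new tool means adding one addendum entry; the core principles - and everyone's confidence in them - stay fixed.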
Rituals receive particular attention. Which ones often work well?
Weekly sharing of AI wins. AI check-ins built into existing one-to-one meetings. Standing agenda items in team meetings. An AI champion in each team - selected after the capstone, deliberately, to avoid appointing enthusiasts who may not be the most capable practitioners.
These rituals are also cheap to maintain. Their coordination value is disproportionate. Each one becomes a focal point in itself: 'this is where we talk about how we use AI'. And because they're embedded in existing organisational rhythms, they persist long after the formal programme ends.
Leaders receive particular emphasis. They must continue to demonstrate AI use after the formal training has ended and - perhaps even more importantly - share their failures. In one engagement, the most effective moment in the entire programme was a senior leader describing how an AI had given them confidently wrong research information. That story did more to establish a culture of healthy scepticism and human-in-the-loop verification than any compliance module could. Failure stories from the top normalise the learning process and reduce the fear of mistakes that otherwise keeps people from experimenting.
The full journey: four sprints to an AI-native operating model
Of course, an organisation cannot be taken from AI-novice to AI-native in a single programme. Digital intelligence can be embedded in almost every workflow, and that kind of pervasiveness is not possible instantly.
We therefore break the transformation down into four 'sprints'. None of them is a small stage, and none of them could be combined.
Sprint 1 - where the focal point forms - covers core AI skills: tools, tasks and prompting. But a focal point for task-level AI use is only the beginning.
Sprint 2 takes existing workflows and reconstructs them with AI integrated.
Sprint 3 designs entirely new workflows that wouldn't have been possible without AI.
Sprint 4 lands the fully AI-native operating model - the point at which use of digital intelligence is no longer a programme or an initiative but the way the organisation works.
How do we know whether an organisation is ready to continue the transformation?
Sprint gates are KPI-driven. The SLT decides at each gate whether to proceed - not us. This is deliberate. We've seen programmes where consultants push clients into the next phase because the contract says so, regardless of whether the organisation is ready. But this is always counter-productive. The focal point must be genuinely established before the next phase begins. Premature advancement creates the illusion of progress whilst the coordination pattern underneath remains fragile.
To ensure success, we use a two-part gate: quantitative KPI measurements alongside a qualitative assessment of cultural readiness. A baseline survey runs before Sprint 1, with assessment at each gate. We track adoption rates, quality of AI use, productivity gains and - crucially - the visibility of AI practice across the organisation. That last metric is the one most programmes miss. Concentration matters more than coverage.
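A minimal sketch of how such a two-part gate might be expressed - the thresholds, field names and recommendation strings are our own illustrative assumptions, not Brightbeam's actual criteria, and the SLT (not the code) makes the decision:

```python
from dataclasses import dataclass

@dataclass
class GateMetrics:
    adoption_rate: float         # share of cohort at regular AI usage (0-1)
    hours_saved_per_week: float  # measured productivity gain per person
    visibility_score: float      # survey-derived visibility of AI practice (0-1)

def gate_recommendation(m: GateMetrics, slt_judges_culture_ready: bool) -> str:
    """Combine quantitative KPIs with the SLT's qualitative readiness call.

    Thresholds are illustrative placeholders. The function only recommends;
    the SLT decides whether to proceed.
    """
    kpis_met = (
        m.adoption_rate >= 0.95
        and m.hours_saved_per_week >= 4.0
        and m.visibility_score >= 0.8  # concentration matters more than coverage
    )
    if kpis_met and slt_judges_culture_ready:
        return "recommend: proceed to next sprint"
    if kpis_met:
        return "hold: KPIs met but culture not yet ready"
    return "hold: consolidate the focal point before advancing"
```

Note that either half of the gate can block advancement on its own - strong numbers never override a culture that isn't ready.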
What this produces
Across our engagements, the pattern is consistent. Adoption rates above 95%. Measurable productivity gains - typically four hours recovered per person per week. And something harder to quantify but unmistakable: the moment when the organisation starts pulling rather than being pushed. When new joiners are inducted into AI practices as part of onboarding, not as a special programme. When AI use appears in job descriptions and performance reviews. When people share AI wins without being asked.
That's the focal point at work. Once formed, it exhibits the preferential attachment that the model predicts. Each new person who encounters the focal point is more likely to converge on it, which makes the focal point more prominent, which attracts the next person. The arbitrary blip blooms into a broad bright beacon.
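A toy simulation gives a feel for that preferential-attachment dynamic (every parameter below is invented for illustration; this is not a model of any real cohort):

```python
import random

def simulate_adoption(n_people: int = 200, base_p: float = 0.02,
                      influence: float = 0.9, seed: int = 42) -> list:
    """Toy preferential-attachment model of focal-point adoption.

    Each person, in turn, adopts with a probability that grows with the
    share of visible adopters who came before them. All numbers are
    illustrative assumptions.
    """
    random.seed(seed)
    adopters = 0
    trajectory = []  # cumulative adoption fraction after each person
    for i in range(n_people):
        share_visible = adopters / max(i, 1)
        p = min(1.0, base_p + influence * share_visible)
        if random.random() < p:
            adopters += 1
        trajectory.append(adopters / (i + 1))
    return trajectory
```

Early on, adoption crawls along at the base rate; once a visible cluster forms, each adoption raises the probability of the next, and the curve steepens - the blip-to-beacon shape described above.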
Our job at @brightbeam is to create the conditions under which that bloom occurs - not through mandates or an overwhelming menu of possibilities, but through deliberate design: reducing options, building shared understanding through collective experience, making practice publicly visible, giving the group an identity that includes AI use and ensuring the first adopters are the ones everyone else watches.
The organisation does the rest.







