Building Team AI Capability When Values Come First
At a Glance
Client: Intention 2 Impact (I2I) — minority- and women-owned social impact measurement and communication firm
Engagement: AI Lab (Team Edition)
Timeline: Started systematically experimenting with AI in 2023. Engaged Mission Bloom in 2025. Still actively iterating in 2026.
Core challenge: A values-driven team where individual AI experimentation was happening in isolation — creating unspoken tensions and misalignment that the team's strong culture couldn't resolve on its own.
Key outcome: Team alignment and momentum with AI, a custom AI toolkit grounded in I2I's core values, three live team experiments, and an ongoing practice of intentional, transparent AI use — with the messiness fully included.
Intention 2 Impact didn't need to be convinced that AI mattered — they'd been experimenting since 2023 and even published about it. What they needed was alignment: a way to move from individual, ad-hoc use to a shared practice that reflected their values. Here's the story of how we did that work together, and why I2I's willingness to sit in the messy middle is exactly what makes their approach worth sharing.
As I2I's COO, Kathleen Doll, recently put it to a room full of peers at the Good Tech Summit: "We weren't late to AI. We were unaligned."
That distinction matters. And it's one that I think a lot of mission-driven organizations will recognize in themselves.
The Starting Point: Individual Experiments, Collective Uncertainty
Intention 2 Impact is a minority- and women-owned social impact consultancy that partners with funders and impact investors to measure and communicate impact. They're not your average M&E shop. Their approach grounds rigorous research in creative communication, design, and an unwavering commitment to equity. They are, as they describe themselves, "values-driven, learning obsessed, and strategic to our core."
Their core values (Intention, Interdependence, Integrity, Innovation, among others) aren't just aspirational. They're operational commitments that shape how the team works.
When the firm's CEO started experimenting more deeply with AI tools, she found them genuinely useful — particularly as a thinking partner for complex tasks like proposal development, methodological brainstorming, and data synthesis. But the rest of the team was in different places. Some were curious and exploring on their own. Others had real concerns about equity, environmental impact, and whether AI would undercut the human-centered approach that defines I2I's work.
The team recognized that their AI use was, as they described it, "very ad hoc without a lot of principled alignment." AI had become a bit of an elephant in the room — everyone knew it was happening, but there wasn't a shared conversation about how it was happening, or whether the team's approach reflected their values. Leadership didn't want to mandate anything. They wanted to build alignment, honor every team member's perspective, and help everyone feel more intentional about this shift.
As Kathleen reflected later: "I assumed we were more aligned than we were. That assumption was wrong, and some part of me knew it. The bigger discomfort was that as an equity-centered firm, I felt the weight of needing to get this right — not just for efficiency, but for integrity. And we didn't yet have a way to hold that tension productively."
That's what brought us together.
What I Learned That Shaped the Work
I knew I2I through the independent consulting community and the emergent learning network. From the earliest conversations, I was struck by how thoughtful and intentional every single team member was. This wasn't a group that needed to be convinced AI mattered; what they wanted to explore was how it mattered, and what it would mean for their work. They were already grappling with the hard questions — about bias in Western-centric knowledge systems, about the environmental cost of AI tools, about the tension between efficiency and authenticity.
What became clear during discovery was that this engagement had to be deeply values-driven. We would need to tend to the values conversation and the messy complexity work as much as (and perhaps even more than) the applications, techniques, and use cases for AI. I2I needed a space to wrestle with the ethical dimensions — openly, together — before they could build shared practices with integrity.
I also learned that I2I's culture was already a significant asset. They are an extraordinarily collaborative, psychologically safe, experimentation-oriented team. As one team member put it, "Because we were all trained, for the most part, as researchers, the ethics stuff is very top of mind." That shared foundation of rigor and trust was going to be essential for the work ahead.
The Approach: Listening Tour → Experimentation → Integration
We designed a three-phase engagement that followed I2I's own rhythm — deliberate, collaborative, and rooted in their values.
Phase 1: Discovery and Alignment
I conducted one-on-one listening tour interviews with all team members. These were confidential conversations designed to surface where each person really was with AI — their hopes, concerns, experiments, and the questions they were sitting with. I then synthesized those conversations into a discovery report and presented the findings back to the team in a group sense-making session. This wasn't about telling them what to think. It was about reflecting their own collective wisdom back to them and creating the container for the conversation they needed to have to begin to build alignment around AI’s role in their firm.
The discovery report organized findings around I2I's four core AI values: Transparency, Equity, Authenticity, and Abundance. It surfaced that team members were operating across a spectrum — from champions to cautious explorers — and that this diversity of perspective was actually a strength, not a problem. The report also identified key decision points the team would need to navigate together: questions about technology standardization, client disclosure, and how to ensure AI's benefits didn't accrue unevenly within the team.
Phase 2: Core Learning Labs
We moved into focused, 90-minute facilitated sessions that wove hands-on experimentation together with values-based discussion of the critical questions. Through these conversations, I helped I2I draft their "AI Manifesto," practice guidelines, and the learning routines that would support their AI work. We made sure to hear from everyone: what the practice guidelines needed to cover, which questions were getting in the way of their experimentation with AI, and what new questions that experimentation was raising about their work at I2I. Throughout these sessions, I built their custom AI toolkit alongside them, using scenario-based demos with tools like Claude, Gemini, and NotebookLM — always grounding the exploration in I2I's actual work.
What mattered was that we kept returning to the values. Every tool, every use case, every experiment was routed through the question: Does this reflect who we are and how we want to work? Does this retain or even amplify our ‘secret sauce’?
Phase 3: Integration and Maintenance
The final phase involved monthly calls where the team shared lessons from their real experiments, refined practices based on what they were learning, and co-created documentation that reflected their values and operational context.
What Shifted
I think the shift came when team members started experimenting on their own — not because it was "assigned," but because they felt grounded enough in their shared values to try. And what really compelled me was when they took it upon themselves to conduct their own harm reduction research. This was 100% true to who they are, and it's something I haven't seen other organizations do. They didn't just accept the technology at face value. They investigated the risks with the same rigor they bring to their evaluation work.
But the moment that stands out most came when we shared the first draft of the AI guidance toolkit with the team. The section on I2I's values in action, particularly how equity showed up in their approach to AI, generated passionate feedback. The team dug in hard. They challenged the language, pushed for more specificity, and held the group accountable to meaning what they said. As Kathleen reflected, "It told us our team was serious. They weren't going to let us get away with pretty language, especially on equity." That level of critical engagement pushed the whole group somewhere more honest — which can only happen when a team feels safe enough to speak up and invested enough to care.
The team moved from what several of them described as a "Wild West" of individual experimentation to a state of collective, intentional practice. The overarching goal, as one team member powerfully summarized, was "using AI to be more human" — freeing up time from technical tasks to focus more intensely on the facilitation, strategic thinking, and relationship-building that define I2I's impact.
They started the conversation they knew they needed to have. They built a solid grounding in what matters most to them. And they took meaningful steps toward being more open and transparent about AI use across the team. As Kathleen put it: "Before, AI was either a thing people quietly did or quietly avoided. There was a lot of unspoken judgment in both directions. Now we have shared language, shared guardrails, and most importantly, shared permission to experiment — and to push back."
Outcomes
The engagement produced a custom AI toolkit anchored in I2I's values, including their team AI manifesto, practice guidelines with clear boundaries for responsible use, a use-case gallery tailored to their evaluation and strategy workflows, and workshop recordings and summaries for ongoing reference.
I2I had already been a leader in this space — they published one of the first articles on generative AI for evaluation and qualitative analysis in New Directions for Evaluation. This engagement helped put structure around commitments they'd already made, while giving those commitments room to evolve as the questions around AI continue to shift.
"Don't wait until you're aligned to start. You won't get aligned by waiting — you'll only get more entrenched. The process is how you align."
— Kathleen Doll, Partner & COO, Intention 2 Impact
Why This Work Matters
What I'm particularly proud of with this engagement is how it demonstrates something I believe deeply about AI adoption: that the organizations doing this well are the ones willing to slow down and ask the hard questions first. I2I didn't rush to adopt tools or implement policies. They started with listening. They surfaced tensions — about environmental justice, about Western-centric knowledge systems, about when AI enhances authenticity and when it undermines it — and they sat with those tensions rather than smoothing them over.
Approaching AI adoption this way created opportunities for other shifts to emerge. Once I2I began to see AI through a strengths-based lens — not just "what harms do we mitigate?" but "what becomes possible when we're freed up to be more human?" — the team started asking a different question entirely. If AI frees up capacity, then what we do with that saved time is a values statement. For I2I, the answer couldn't just be "take on more billable work." They're exploring something more intentional: pro-bono consulting for under-resourced nonprofits, volunteering for local causes, donating a portion of their billable hour savings to advocacy for responsible tech, and spending more time with the people they love. That’s a huge, intentional step toward leveraging AI to help us be more human.
That reframe — from risk mitigation to possibility — speaks directly to the question that drives my work: what if we get this right? This is what it looks like when a team approaches AI with the same rigor, curiosity, and values-alignment that they bring to their client work. The messy middle isn't something to get through as fast as possible. It's where the most important learning happens.
I2I's approach shows that "AI readiness" isn't about having all the answers. It's about building the clarity, alignment, and shared commitment to navigate the questions together — with your values leading the way.
Interested in exploring something similar for your team? Let's start a conversation.