Stakeholders To Include In Your Organization’s AI Efforts

Introduction

I had a discussion with a former CIO colleague last week about generative AI and what its impact might look like over the next few years as it starts to take hold in our everyday lives. In some respects, the introduction of tools like Copilot, ChatGPT, Claude, or Q to the workplace reminds us of the ancient days when companies first started experimenting with providing internet access to their employees (or, more recently, widespread remote access capabilities). As silly as this might sound at first, there are a lot of parallels, especially when it comes to how organizations figure out who should get early access to these capabilities and help define the business cases, risks, and acceptable uses. No one has those answers yet, and they will differ across industries and organizations. So, who do you choose to help find the way?

In many organizations, the technology group will be the first to have access to the technology and develop an understanding of it. This is important, but it is only the first step. It is critical that stakeholders outside of the IT group are included early and often; this can't be overstated. As with most technology initiatives, the people outside of IT are the ones who will benefit the most from the new tools, and they should be front and center when it comes to testing, experimenting, and writing this new AI rulebook.

There is quite a bit of information and guidance available about building trustworthy AI, ethical AI, training AI, and so on. There is very little, however, to help you determine who should be involved in a pilot program to start building institutional knowledge and guidance around the use of generative AI assistants in your specific business. Who should you trust to do this? How do they do it?

Who To Include

Put simply, identify a select group of people who have the ability to approach the challenge thoughtfully, the discipline to document their usage, and the time to incorporate AI tools into their daily workflows. Include people at different levels of the organization and from a variety of business groups.

  • People in data-driven roles who already understand the importance of getting it right and who operate by “trust, but verify.” They are accustomed to the high stakes of getting things right and the risks of getting them wrong. Business analysts, report writers, and finance staff can make good candidates.
  • People who typically deal with a lot of ambiguity. Almost no one (including this author) is an expert at this yet. Inconsistent results should be seen as opportunities to improve, not reasons to discourage use. If everyone waits for it to “get better,” you lose out on learning opportunities and, in turn, business opportunities. There are AI improvements every day, and these people can be critical in developing a more complete understanding of the opportunities as things evolve.
  • People who are change champions for other technology initiatives. They tend to be natural learners and are intellectually curious.
  • Groups that already have well-governed data. If certain groups already have their data security house in order, then they are good candidates for early adoption. Labeled data and documents are key here. Consider teams that are further along in their Purview adoption.
  • Not the busiest people in the company (even though they may demand access most vocally). In my experience, executives and others who are heavily scheduled can't commit the time or provide the feedback needed during this crucial early phase. The people who support the busiest people, however, should absolutely be included, since they may have a lot of edge-case uses that should be explored.
  • People who deal with a lot of process. One of the more common use cases out there is the elimination of busywork like simple reports, moving data around, scheduling, and summarization. Depending on the organization, this may be the easiest ROI to prove.

How To Include Them

Some ground rules will be essential to keep the pilot on track and to put guardrails around the risks that may be introduced. It's important to encourage the use of the new AI tools, but also to make sure people are aware that while AI-produced work may look and sound authoritative, it still needs to be checked for accuracy. Done right, this will uncover better ways to use the tools and drive toward more consistent successes.

  1. Establish an internal team for the pilot group to collaborate on questions and observations and to give your program leadership visibility into overall use, successes, and challenges. As with any technology initiative, support from leadership is critical; leaders should provide ongoing encouragement and share discoveries, ideas, and failures. Tracking all of this will help you build effective training and FAQ resources later.
  2. Keep a fairly loose structure. Provide specific scenarios for people to try, along with more open-ended ideas for additional uses, and encourage people to use the tools in their daily job tasks. Don't be overly prescriptive; this isn't a regression test.
  3. Establish some risk rules. There may be some data you want to make off-limits for now. At first, you may want to avoid using the tools on financial statements, confidential HR data, or intellectual property, especially if you aren't confident in the governance controls around your “crown jewel” data. Conversely, encourage using the tools to summarize meeting notes and emails or to create slide decks that are easy to check manually in these early stages.
  4. Make sure people understand that they are still accountable for their work, regardless of how it was created. It is important to clarify that there is no “blaming the machine” at this stage. Trust, but verify.
  5. Keep encouraging use, even if the results are not exactly accurate. Using AI tools as a writing prompt or for help with brainstorming is a valid use case, even if the output is not used directly in the final work product.

Next Steps

  • Based on your successes and failures, start to develop training documents and an adoption program for an expanded test group.
  • Update your written policies to incorporate AI into your acceptable use guidance. Ensure that employees who are given access are aware of the policy, and have them sign off on it.
  • Set goals and measure progress. As you begin to see what successful AI outcomes look like, set benchmarks or goals for end users that you can measure. Metrics like the amount of time spent preparing meeting notes, building slide decks, reviewing email, or searching for information should all start to decrease as the tools are adopted and people learn how to benefit from them.

If you need help, let us know! We help clients with Microsoft 365 Copilot readiness assessments, workshops, and other engagements that foster successful use and adoption of these new tools.

Tom Papahronis

Strategic Advisor - eGroup | Enabling Technologies

Learn more about Microsoft 365 Copilot

Interested in learning more about Microsoft 365 Copilot and how your organization can benefit from its features?

Contact our team of experts to get started!