Stakeholders To Include In Your Organization’s AI Efforts
I had a discussion with a former CIO colleague last week about generative AI and what its impact might look like over the next few years as it takes hold in our everyday lives. In some respects, the introduction of tools like Copilot, ChatGPT, Claude, or Q to the workplace reminds us of the ancient days when companies first started experimenting with providing internet access to their employees (or more recently, widespread remote access capabilities). As silly as this might sound at first, there are a lot of parallels, especially when it comes to how organizations figure out who should get early access to these capabilities and help define the business cases, risks, and acceptable uses. No one has those answers yet, and they will differ across industries and organizations. So, who do you choose to help find the way?
In many organizations, the technology group is going to be the first to have access and develop an understanding of the technology. This is important, but it is only the first step. It is critical that stakeholders outside of the IT group are included early and often. This can't be overstated. As with most other technology initiatives, the people outside of technology are the ones who will benefit the most from new tools—and they should be front and center when it comes to testing, experimenting, and writing this new AI rulebook.
There is quite a bit of information and guidance available about building trustworthy AI, ethical AI, training AI, and so on. There is far less guidance on who should be involved in a pilot program to start building institutional knowledge and guidance around the use of generative AI assistants in your specific business. Who should you trust to do this? And how should they do it?
Who To Include
Put simply, identify a select group of people who have the ability to approach the challenge thoughtfully, the discipline to document their usage, and the time to incorporate AI tools into their daily workflows. Include people at different levels of the organization and across a variety of business groups.
How To Include Them
Some ground rules will be essential to keep the pilot on track and put guardrails around the risks that may be introduced. It's important to encourage the use of new AI tools, but also to make sure people are aware that while AI-produced work may look and sound authoritative, it still needs to be checked for accuracy. Done right, this will uncover better ways to use the tools and drive toward more consistent successes.
If you need help, let us know! We help clients with Microsoft 365 Copilot readiness assessments, workshops, and other engagements to help foster successful use and adoption of these new tools.
Interested in learning more about Microsoft 365 Copilot and how your organization can benefit from its features?
Contact our team of experts to get started!