Embracing GenAI will transform your business strategy and operations—are you ready?
There’s no question that Generative AI (GenAI) is a game-changing technology. The question is: Do you understand the game you’re playing well enough to change it? Games have rules, objectives, and winners and losers. A great game player becomes a master when they know the competition, devise a winning strategy, and execute flawlessly.
As business and technology leaders, we must ask ourselves: What’s the game we’re playing, and what’s our strategy to win? Fundamentally, we have two objectives: value creation (offense) and value protection (defense). We create value primarily through growth—delivering new and better products and services to new and more markets and customers. We protect value primarily through reducing risk, retaining our current customer base, and maintaining margin.
Understanding the game, its objectives, and strategy is crucial for leveraging GenAI's capabilities. For margin protection, use AI to reduce operational costs and improve pricing. For risk reduction, enhance security, compliance, and customer retention. For revenue growth, focus on product development and customer acquisition. GenAI helps companies achieve these goals in fundamentally new ways.
Unfortunately, most organizations that are playing the game today are playing it poorly. 85% of current GenAI projects fail to make it past the proof-of-concept phase and therefore deliver no immediate, concrete ROI. While no team is expected to go undefeated—and there is true value in a fail-forward learning culture—that track record is more likely to get a team relegated than promoted.
We believe the first step in building a winning track record is rooted in the fundamentals:
These fundamentals are inherently required for all technology initiatives to be successful. But when the game is truly changing, relying on the fundamentals will at most put your team in the middle of the pack. We believe great organizations need to do things differently to build a winning and championship culture:
Adopting a technology with the potential to overhaul entire areas of business requires a deeper shift, forcing companies to revisit their entire mindset.
Our Take: Throw out the playbook on traditional corporate governance. GenAI is here to change the rules, and until companies rework their structures, processes, and practices to include GenAI, they will always be one step away from successfully harnessing this technology to its fullest potential. Companies would never blindly uproot their regular operations without a new governance structure in place. Incorporating GenAI should be no different.
As our clients explore GenAI, we sought to understand what strategic goals they are prioritizing and where they are in the adoption process. The goals, for the most part, were varied, with no single priority taking an overwhelming lead. More interesting, however, is the rate of GenAI adoption: while just under one-third of companies (30%) are still getting the basics in place, nearly half (47%) are past that point and working on their use cases – including 13% who consider themselves to be industry leaders in AI adoption.
If you ask any company what its current priority is for GenAI adoption, the answer is likely going to include speed: How fast can GenAI be incorporated into day-to-day processes and how can the time to prove use cases be cut down? In this race against the clock, companies are rushing to embrace AI without developing structures and guardrails to guide their usage.
But the ROI of being fast is diminished if projects are plateauing in the pilot stage or leaving value on the table. And while some, if not most, projects could still be successful without an AI-enabled governance structure, the chance of introducing new risks and liabilities to the company will always be higher until the right guardrails are constructed.
It’s difficult to slow down when it seems like every company in every industry is charging ahead, and this is something we know firsthand at West Monroe. Our AI journey began with some stakeholders eager to jump in and others urging caution. To succeed, we had to balance the right amount of governance with more flexibility than we would normally be comfortable with, achieving a kind of safe agility.
Don’t think of it as a red light or a green light, but instead as the blinking yellow “proceed with caution” light. Get some basic guidelines in place (it can’t be the Wild West) and then continue building an AI-enabled governance structure along the way. The goal is to create an environment where innovation and agility are encouraged—but within a framework that manages risk and keeps ethical standards top of mind.
While this will look different for every company, there are still core tenets that should be included.
Leadership sets the tone for any organization and is responsible for establishing and ensuring adherence to its mission, vision, and values. This ownership will not change as companies begin integrating AI tools. Instead, leaders need to ensure their culture is adapting to enable AI growth without allowing the introduction of AI to completely rewrite their guiding principles.
Chief among these considerations will be ethical behavior and responsibility, which must be adapted to include the ethical implications of AI. Organizations will need to develop guidelines that reflect their values and accelerate their mission while also ensuring AI usage is fair, transparent, and accountable.
Having clearly defined roles and responsibilities for all employees will continue to be a necessary component of effective decision-making. As AI solutions are implemented, clear ownership and accountability will be essential, and this will require updated roles and responsibilities across all levels.
Companies need to be able to articulate who owns AI without stumbling. The short answer: everyone in the organization has a role in AI ownership. This not only includes more specialized positions with a direct role in development, deployment, and oversight, but all employees who may be using AI in any capacity.
Taking a holistic, full-company approach to AI ownership helps enable collaboration and open communication across departments. This, in turn, makes it easier to identify opportunities, mitigate risk, and manage AI evolution.
Developing an organization-wide meeting cadence and escalation approach for AI will keep all parties accountable as different teams navigate upskilling, realigning goals, and adjusting roles and responsibilities. This in turn helps to ensure AI adoption is smoother and more impactful.
Effective governance structures have long included processes for identifying and mitigating risks, but these processes must be revisited as AI is introduced into operations. Potential risks include data privacy violations, copyright infringement, AI bias, and misuse of the technology.
But with risk comes reward. In this case, the benefits of successful AI implementation are worth the extra steps companies will take to manage risk. Governance structures should be updated to include risk management strategies that focus specifically on the ethical, reputational, and operational risks associated with AI. With the right guardrails in place, organizations can find the balance between innovation and risk management to ensure they are well-equipped to leverage AI for long-term benefit.
Maintaining legal and regulatory compliance is table stakes for most companies. But while many regulations are industry-specific, emerging AI regulations and ethical standards are wide-reaching and industry-agnostic. Companies will need to closely monitor the creation, usage, and management of their AI solutions to ensure they are able to adapt their go-to-market strategies to navigate and comply with regulatory changes. One way to get ahead of this is to establish consumer protection and data privacy policies before they become a requirement.
As AI continues to be scrutinized and uses are assessed, additional regulation will be enacted, and additional governance will be needed. This includes having designated owners and codified guidelines for monitoring regulatory updates and communicating necessary changes within the organization. Compliance will only rise in importance as AI continues to evolve.
Organizations need an effective intake process to manage and quickly address new AI regulations. GenAI can even be used to help mitigate risk and adapt by course-correcting content and positioning following regulatory changes.
Business decisions must balance the needs of the company with the needs, interests, and opinions of its stakeholders. This could include shareholders, employees, customers, suppliers, or the general public. As AI enters the conversation, the list of external stakeholders may grow (regulatory bodies, for example, may become a key stakeholder), and the information they ask for may become more comprehensive.
Since AI implementation will have a company-wide impact, employees may expect to have a larger role in conversations throughout the AI adoption lifecycle. Companies should also consider to what degree they need to educate their stakeholders about the benefits, risks, and ethical considerations of AI. This not only helps to communicate expectations more clearly but also helps to promote transparency and accountability.
These conversations will not be a “set it and forget it” undertaking. Instead, companies should be prepared to facilitate ongoing engagement with stakeholders on their AI usage, and their methods for soliciting and implementing stakeholder feedback should be included in any revised governance structure.
West Monroe’s AI implementation plan was a guess-and-check process, but we eventually ended up with a framework to successfully govern our company’s AI usage. We’ve now taken the learnings from our own experience to create the West Monroe AI Accelerator, which helps clients quickly develop their own AI strategies.
This starts by assessing the challenges and opportunities organizations have with their people, processes, and technology and goes all the way through to developing a governance and implementation roadmap. In as little as four weeks, we can stand up initial governance and a pilot program centered around a specific use case. The process unfolds in three stages:
Assess organizational readiness for AI: We start by analyzing an organization’s people, processes, and technology to fully understand their current challenges and opportunities. This will help to inform how AI implementation will impact existing governance structures.
Determine a future state: After assessment, we’ll establish or refine an organization’s AI governance, specifically focusing on vision, team alignment, and identifying how success and ROI will be measured during the initial year.
Develop a roadmap for governance implementation: Following these first steps, the third stage is action. We’ll define and prioritize core projects for the year and then determine the timeline for achieving them, as well as identifying owners for each project area and flagging any potential resource concerns.
The companies that will see the most reward from AI view it not just as a tool, but as a welcome disruptor. AI has the potential to upend your processes and experiences for the better—if the right frameworks are in place. After going through this process ourselves, we understand the desire to move quickly when adopting AI, but also recognize the importance of developing the appropriate frameworks and guardrails to do it right.
The way to balance this speed and caution? Start integrating AI into your governance today.
Game on.