The Gist
- Governance is critical. Without proper AI governance, companies risk compliance failures and loss of customer trust.
- Strategy first. A well-defined AI strategy and governance framework can be the difference between innovation and chaos.
- Compliance drives trust. Strong AI governance not only meets regulatory requirements but also enhances customer trust and competitive advantage.
A recurring trend with most technologies is that they generally find their way into widespread adoption long before laws can be established to regulate them. Governance of technologies — especially those put in the hands of the masses — is critical for keeping people, companies and their data safe and secure from misuse.
AI appears to be no different. But is it a new dog with old tricks, or just an old dog with new tricks?
Though AI has been in existence for about 60 years, its numerous fits and starts have kept it pretty low-profile until recently. Since ChatGPT emerged in late 2022, many companies have begun developing AI models for a variety of applications.
Use of this technology can bring with it a number of risks, including exposing sensitive data, violating intellectual property laws, producing results that are patently wrong and running afoul of both existing regulations and those in development. AI must be applied carefully.
Why AI Governance Matters
Until recently, AI was used almost exclusively by data scientists and other data experts. Because the data was typically used only internally, to improve products and services or to analyze general customer behavior, there wasn't a pressing need for dedicated AI governance frameworks. With generative AI becoming ubiquitous in the workplace, that has now changed.
Nowadays, most companies recognize that in order to keep up with their peers, they need some form of generative AI or large language models (LLMs) in their products or services — and with that comes requisite governance.
One problem: AI governance programs have yet to be formed, let alone finalized. The underlying technology of AI is still evolving, making it challenging for legislative bodies to settle firmly on a compliance approach.
But companies can’t wait years to find out what the final frameworks look like. Instead, they must immediately begin to put in place a workable, defensible, scalable, and, ultimately, flexible framework for overseeing and managing AI use within their organizations. Before deploying an AI model, companies need to have a proper understanding of the lifecycle of those models within the context of their business, and how they work. And of course, all AI projects should go through the usual governance chain of review from the compliance, legal and info-sec teams.
Establishing an AI Governance Framework
Proper data governance addresses data quality, data security, ethical use and privacy protections, among other things. Depending on your business and the nature of the data you use, AI governance might be an even greater (and more resource-consuming) obligation than data governance.
With data analytics, for example, there's almost always a human in the loop to validate results before decisions are made. But AI operates at a much larger scale: it can make dozens of predictions in milliseconds, often without explanation, and it can even be trained to act on its own, as in automated decision-making scenarios.
When thinking of what your AI governance model will look like, consider a few fundamentals:
1. Consider Your Strategy of Using AI
What is your plan? What are your goals? Ask yourself if you have a real business case for AI in your company. That is, will the AI embedded in your product or service make a meaningful difference to your customers? If the answer is no, then it’s likely not worth the effort and cost of development.
2. Create an AI Policy for Usage Monitoring
If the answer is yes, then the second consideration should be a policy or statement dictating how, and by whom, AI can be used within your company. This foundational document can take the form of an official policy that formally addresses why AI is being used, by whom and for what purposes. Policies tend to be prescriptive and specific about how a technology must (not merely should) be used. Your policy can also cite industry standards to use as benchmarks.
3. Consider Creating AI Guidelines
Alternatively, you can create guidelines, which traditionally note recommended uses and best practices for a technology. Both approaches have their pros and cons. Depending on your industry, the degree of regulation you are subject to, and the nature of your business (e.g., B2C or B2B), you may opt for guidelines so as not to overly impede innovation or experimentation.
4. Create a Portfolio of AI Documents
Regardless of the document you decide on, aim to create a portfolio of documents that give as much direction as possible to your employees on how to use the new technology. This will help them to create richly featured products and services while using data in an ethical and responsible way.
5. Establish a Legal Compliance Framework
Next, ensure that the legal foundation of your data processing is rock-solid. If you process the personal data of individuals in the EU in particular, you'll have to land on the most appropriate legal basis for your situation, as prescribed by Article 6 of the GDPR.
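Article 6(1) of the GDPR enumerates six lawful bases for processing: consent, contract, legal obligation, vital interests, public task and legitimate interests. As a minimal sketch of how this check might be folded into an AI governance workflow, the snippet below validates that every data-processing activity in an inventory records one of those bases. The inventory structure and function names are illustrative assumptions, not part of any regulation or standard library.

```python
# The six lawful bases enumerated in GDPR Article 6(1)(a)-(f).
LEGAL_BASES = {
    "consent",
    "contract",
    "legal_obligation",
    "vital_interests",
    "public_task",
    "legitimate_interests",
}


def missing_legal_basis(activities: list[dict]) -> list[str]:
    """Return the names of activities lacking a recognized Article 6 basis."""
    return [
        activity["name"]
        for activity in activities
        if activity.get("legal_basis") not in LEGAL_BASES
    ]


# Hypothetical processing inventory for an AI product.
activities = [
    {"name": "model_training", "legal_basis": "legitimate_interests"},
    {"name": "marketing_personalization", "legal_basis": None},
]

print(missing_legal_basis(activities))  # → ['marketing_personalization']
```

A check like this won't tell you which basis is *appropriate* for a given activity (that is a legal judgment), but it does ensure no AI project reaches review without the question having been answered.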
6. Determine Your AI Decision-Makers
Another big effort should be creating an AI approval workflow, including the designation of decision-makers.
The ideal AI governance framework should be positioned somewhere between the concept and operationalization stages. This will ensure that any team, committee, or forum established to evaluate ideas and use cases can adopt a holistic and cross-functional perspective. This approach allows them to comprehensively assess the unique risks AI poses to your business model and decide which initiatives should progress to production.
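The cross-functional review described above can be sketched as a simple sign-off gate: a use case progresses to production only once every required reviewer has approved it, and a single rejection stops it. The reviewer roles and status values below are illustrative assumptions, mirroring the compliance, legal and info-sec chain mentioned earlier, not a standard workflow definition.

```python
# Reviewers every AI use case must pass before production (assumed roles).
REQUIRED_REVIEWERS = ("compliance", "legal", "infosec")


def approval_status(signoffs: dict) -> str:
    """Given a mapping of reviewer role -> 'approved' or 'rejected',
    return the overall state of the use case."""
    if any(signoffs.get(r) == "rejected" for r in REQUIRED_REVIEWERS):
        return "rejected"
    if all(signoffs.get(r) == "approved" for r in REQUIRED_REVIEWERS):
        return "approved_for_production"
    return "pending"


# Info-sec has not yet signed off, so the use case stays pending.
print(approval_status({"compliance": "approved", "legal": "approved"}))
# → pending
```

In practice this logic would live in a ticketing or workflow tool rather than code your team maintains, but the principle is the same: no single team's approval is sufficient, and the gate sits between concept and operationalization.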
Preparing for Broad AI Oversight
Lawmakers around the world will soon require a strong AI governance oversight model for most companies. By adopting a broader perspective that combines human rights and business insights with AI governance, businesses can gain a competitive edge by enhancing customer trust.
This privacy-first, business-centric approach enables companies to develop value-driven models confidently, while simultaneously ensuring data privacy and meeting regulatory requirements, offering a strategic advantage beyond mere compliance.