AI Risk Assessment, Policies, Custom GPTs and Training
The Challenge
Like many organisations, Wild Card was seeing increasing use of AI tools across the business. Team members were experimenting with various platforms to assist with writing, research, and creative brainstorming.
While this presented opportunities for efficiency, it also introduced several risks:
• Potential exposure of sensitive information
• Lack of clarity around which AI tools were approved for use
• Inconsistent outputs that did not always reflect the company’s tone of voice
• Concerns about copyright, confidentiality, and contractual obligations
• No formal governance around how AI should be used across the business
Wild Card wanted to move quickly to harness the benefits of AI, but they also recognised the importance of introducing appropriate guardrails before adoption became widespread.
The goal was not simply to deploy an AI tool, but to implement a structured and responsible approach that balanced innovation, security, and brand consistency.
Roadmap’s Approach
Roadmap worked closely with Wild Card’s leadership team to design a phased approach focused on three core areas:
1. AI Risk Assessment
2. AI Governance and Policy
3. Role-Based AI Assistants
This ensured that AI adoption would be both strategic and controlled, rather than ad hoc.
Understanding AI Risk
The project began with a detailed assessment to understand how AI tools were already being used within the business and where potential risks existed.
Roadmap conducted a review of existing tools, internal policies, and contractual obligations with clients. This allowed the team to identify areas where AI use could potentially create legal, compliance, or confidentiality risks.
The findings were documented in a structured AI Risk Assessment Matrix, outlining key risks and recommended mitigation strategies.
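A risk matrix of this kind typically pairs each risk with a likelihood score, an impact score, and a mitigation, then ranks risks by the product of the two scores. The sketch below illustrates that general pattern only; the risk names, scores, and mitigations are hypothetical examples, not Wild Card's actual findings.

```python
# Illustrative sketch of a simple AI risk assessment matrix.
# All entries are hypothetical examples of the pattern, not real findings.
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (minor) to 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        # A common scoring convention: likelihood x impact.
        return self.likelihood * self.impact


risks = [
    Risk("Sensitive data pasted into public AI tools", 4, 5,
         "Approved platform only; data-handling rules in policy"),
    Risk("Breach of client confidentiality clauses", 2, 5,
         "Contract review; restricted use cases for client work"),
    Risk("Off-brand or inaccurate AI-generated content", 3, 3,
         "Tone-of-voice guardrails; human review before publishing"),
]

# Rank risks so the treatment plan addresses the highest scores first.
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name} -> {risk.mitigation}")
```

Ranking by score is what turns a flat list of concerns into a prioritised treatment plan.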
Working collaboratively with Wild Card’s leadership team, Roadmap then created a treatment plan to address these risks and define a safe operating framework for AI adoption within the organisation.
This process ensured that Wild Card had a clear understanding of:
• Where AI could safely be used
• Where restrictions were required
• What safeguards needed to be introduced
Establishing Clear AI Governance
Following the risk assessment, Roadmap developed a formal Artificial Intelligence Policy tailored specifically to Wild Card’s business.
The policy defined how AI could be used responsibly across the organisation and provided clear guidance to staff.
Two versions of the policy were created:
Internal Governance Policy
A detailed version used internally and embedded into the configuration of AI tools to ensure consistent behaviour and guardrails.
Operational Policy
A simplified version designed for staff and client communication, ensuring transparency about how AI is used within the business.
This governance framework gave Wild Card confidence that AI could be introduced responsibly while maintaining compliance with client obligations and internal standards.
Introducing Role-Based AI Assistants
With the governance framework in place, Roadmap implemented a secure AI platform for the business using ChatGPT for Teams, integrated with Wild Card’s identity management system.
Rather than providing a generic AI tool, Roadmap designed a structured system of role-based AI assistants tailored to specific functions within the business.
Each AI assistant was configured using:
• The company’s tone of voice guidelines
• The approved AI policy and governance rules
• The employee’s role and responsibilities
• Structured prompts to guide behaviour and outputs
This ensured that the AI assistants behaved consistently and produced results aligned with Wild Card’s brand and communication style.
Roadmap developed a structured workflow that captured each employee’s role requirements and translated them into tailored AI instructions.
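A workflow like this can be thought of as assembling shared guardrails (tone of voice, policy rules) with per-role details into a single set of instructions for each assistant. The sketch below is a minimal, hypothetical illustration of that assembly step; the strings and the build_instructions helper are assumptions for the example, not Roadmap's actual workflow.

```python
# Minimal sketch: combine shared governance text with role-specific
# details into one instruction block for a custom assistant.
# All content strings and the helper itself are hypothetical examples.

TONE_OF_VOICE = "Write in a warm, confident, plain-English style."
POLICY_RULES = (
    "Never include client-confidential information in outputs. "
    "Flag uncertainty rather than inventing facts."
)


def build_instructions(role: str, responsibilities: list[str]) -> str:
    """Merge shared guardrails with role-specific guidance into a
    single system prompt for a role-based assistant."""
    duties = "\n".join(f"- {d}" for d in responsibilities)
    return (
        f"You are an assistant supporting a {role}.\n"
        f"Tone of voice: {TONE_OF_VOICE}\n"
        f"Policy: {POLICY_RULES}\n"
        f"Support these responsibilities:\n{duties}"
    )


prompt = build_instructions(
    "PR Account Manager",
    ["Drafting press materials", "Research and idea generation"],
)
print(prompt)
```

Keeping the guardrails in one shared place means every role-based assistant inherits the same policy and tone, while only the role and responsibilities vary.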
Training and Adoption
To support successful adoption, Roadmap delivered structured training sessions for staff.
Training was tailored to individual roles within the business and focused on practical, real-world use cases such as:
• Drafting press materials
• Research and idea generation
• Content structuring
• Improving workflow efficiency
The sessions were interactive and designed to ensure staff understood how to use AI effectively and where the defined guardrails applied.
The Outcome
The organisation now benefits from:
• A clear understanding of AI-related risks and mitigation strategies
• A formal AI governance policy aligned with business and client requirements
• Secure access to an approved AI platform
• Custom role-based AI assistants tailored to individual job functions
• Improved consistency in tone of voice and output quality
• Increased efficiency in research, writing, and internal workflows
The project demonstrates how organisations can adopt AI in a way that balances productivity gains with responsible governance, ensuring that the technology enhances the business without introducing unnecessary risk.
