Apologies for the length of this post.
I’d like to canvass some opinions on getting AI tooling approved.
I work in a small tech team with a QE function of only 4 people. We are far from the bleeding edge, but it is quite liberating being able to help build improvement roadmaps.
The trouble I have is in building a case for the use of AI tooling. Specifically, I want to make a business case for a POC using GitHub Copilot and the Playwright MCP Server with Claude Sonnet (for now).
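For context, the pilot wiring itself is small. A typical MCP client configuration for the Playwright MCP Server looks something like the sketch below; the exact file location and top-level key vary by client, so treat this as illustrative rather than definitive:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

One point worth making to security here: the MCP server itself runs locally; the external data flow is the prompts and completions sent to the model provider, which is exactly what the mitigations below are aimed at.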
The issue will not be procurement but cyber security, who will likely veto this over commercial data risks unless I build the case carefully and describe those risks accurately.
Here’s a summary of what I propose putting to them:
We recognise the potential risks associated with tools like GitHub Copilot, the Playwright MCP Server, and models such as Claude Sonnet. These include:
- Accidental inclusion of sensitive or client-related data in AI prompts.
- Transmission of prompts to external AI APIs, even when using enterprise-level accounts.
- Storage or logging of prompts and completions by the provider, if not properly configured.
However, these risks are not fundamentally different from ones we already accept within our current Azure ecosystem. For example:
- We already store secrets and credentials in Azure Key Vault (a cloud-managed service).
- We host source code and CI/CD pipelines in Azure DevOps (cloud-based and accessible to Microsoft infrastructure).
- We rely on Microsoft’s contractual commitments to ensure those services meet security and compliance standards.
Similarly, GitHub Copilot Business and Claude Sonnet (when accessed through GitHub Copilot) offer enterprise-grade controls:
- Prompts and completions can be excluded from training.
- Access is governed by GitHub Enterprise identity and permission policies.
- All services are hosted by reputable vendors with published security and compliance documentation.
To manage AI tooling risk responsibly, we propose the following mitigations during the pilot:
- Restrict usage to non-client code: no confidential data, client identifiers, or bespoke client workflows may be entered into prompts.
- Use GitHub Copilot Business, which allows enterprise settings to disable training on user prompts.
- Educate pilot users on safe prompting practices and approved use cases.
- Monitor tool usage and audit logs where available (see the sketch after this list).
- Review outcomes with Cybersecurity after the pilot to assess long-term viability.
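On the monitoring point, here is a minimal sketch of what a pilot usage review could look like. It assumes GitHub’s REST endpoint for listing Copilot seat assignments (`GET /orgs/{org}/copilot/billing/seats`); the org name is a placeholder and the token would need Copilot billing read access:

```typescript
// Minimal sketch: list Copilot seat activity for a pilot usage review.
// Assumes GitHub's "list Copilot seat assignments" REST endpoint.
// ORG is a placeholder; GITHUB_TOKEN must be set in the environment.
const ORG = "your-org";
const token = process.env.GITHUB_TOKEN;

async function listCopilotSeats(): Promise<void> {
  const res = await fetch(
    `https://api.github.com/orgs/${ORG}/copilot/billing/seats`,
    {
      headers: {
        Accept: "application/vnd.github+json",
        Authorization: `Bearer ${token}`,
        "X-GitHub-Api-Version": "2022-11-28",
      },
    },
  );
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);

  const data = await res.json();
  // Each seat records who holds it and when/where it was last used,
  // which is enough for a lightweight review with Cybersecurity.
  for (const seat of data.seats) {
    console.log(
      `${seat.assignee.login}: last active ${seat.last_activity_at ?? "never"}` +
        ` via ${seat.last_activity_editor ?? "unknown editor"}`,
    );
  }
}

listCopilotSeats().catch(console.error);
```

Nothing fancy (runs on Node 18+ with the built-in fetch), but it gives us something concrete to review with the security team after the pilot rather than relying on anecdotes.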
We are not introducing an entirely new category of risk; we are evolving our existing cloud tooling practices to include developer assistance via AI. With proper boundaries and governance, this pilot can be carried out safely and responsibly.