The rise of Bring Your Own AI
Employees are bringing their own AI agents to work. What are the risks, and how can you reduce them?
McKinsey’s 2025 report Superagency in the Workplace1 shows that employers are aware of only about one-third of their employees’ AI agent use. An MIT study last year found that 90% of workers used AI for their work, while only 40% had access to organization-sanctioned tools.2 These studies show that employees aren’t waiting for permission: they’re using personal or unsanctioned AI accounts to do their jobs.
And this creates risk.
When an employee pastes client data into a personal ChatGPT instance, or asks Claude about a confidential business problem, that data leaves your environment. It ends up on servers beyond your control, beyond whatever security measures you’ve put in place.
Some consumer-level tools train on the data entered into them. Several — including versions of ChatGPT, Copilot and Gemini — have had documented incidents of exposing input data. As an example, late last year ChatGPT was shown to leak user conversations into Google Search Console.3 These tools are new and wide-ranging in scope. What assurances do you have that your employees’ tools are being appropriately patched to address vulnerabilities?
The exposure risk isn’t limited to criminals; it’s also an issue with regulators and insurers. GDPR, PCI DSS and similar frameworks require documented data governance. And to a cyber-insurer, the use of non-business software may be a violation of policy terms that could lead them to deny a claim.
There’s a second category of risk: the quality of AI output itself. These are probabilistic tools. They aggregate other people’s work and can produce what may be described as confidently-rendered untruths. They have no understanding — only the appearance of it. If an employee is doing work through a personal AI tool and bypassing any review process, your organization is responsible for the output. When that output is wrong — and it sometimes will be — your organization is accountable.
So what should you do?
Because employees are already using these tools, survey them. Find out which tools they’re drawn to, what tasks they’re using them for and why. This surfaces real-world use cases and signals to staff that management is engaged. AI capabilities are also changing fast, so listening and showing a willingness to adapt will help keep employees on your organization’s sanctioned solutions.
Establish policy. Classify your data as confidential or public — and specify which tools may be used for each category of data. If your organization uses Salesforce or another platform with integrated AI, address those, too. Insist on human review of AI-generated work. Make clear what employees may and may not enter into any AI interface.
Provide a sanctioned tool. Business-class and enterprise-grade instances — Microsoft 365 Copilot Business, for example — are built with data governance in mind. They don’t train on your inputs. They offer administrative visibility into usage. They keep data within your environment. These tools still require correct configuration and ongoing maintenance, however, or things can go wrong. Bryley has experience configuring Copilot for New England organizations and can help you set it up so it works well for your organization.
Train your staff about AI and data. Employees need to understand what’s acceptable to put into a chatbot and what isn’t. The typical pattern for this kind of security learning is immediate adherence that trails off over time as people revert to old habits, so training should be refreshed, not one-and-done. Bryley can help train your staff.
It’s what’s happening
Unmanaged AI use is likely happening in your organization already. Bryley can help you assess your current exposure, train your staff, and strategize, configure and maintain a business-grade AI environment.
So contact your Bryley representative, or if you don’t have one, contact Roy Pacitto via email or phone him at 978-562-6077 x217.
1 mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work
2 venturebeat.com/infrastructure/mit-report-misunderstood-shadow-ai-economy-booms-while-headlines-cry-failure
3 arstechnica.com/tech-policy/2025/11/oddest-chatgpt-leaks-yet-cringey-chat-logs-found-in-google-analytics-tool/