CISOs are finding themselves increasingly involved in AI teams, often leading the cross-functional effort and shaping AI strategy.
But there aren't many resources to guide them on what their role should look like or what they should bring to these meetings.
To fill that gap, we've pulled together a framework for security leaders: one that helps push AI teams and committees further in their AI adoption by providing the visibility and guardrails they need to succeed. Meet the CLEAR framework.
If security teams want to be a key part of AI projects, they should follow these five CLEAR steps:
C – Create an inventory of all AI tools and systems.
L – Learn how employees are actually using AI.
E – Enforce AI policies and guardrails.
A – Apply AI use cases where they add value.
R – Reuse existing security frameworks for AI.
Create a list of AI tools
Regulations and best-practice standards (such as the EU AI Act and ISO 42001) expect companies to keep an inventory of the AI tools they use. This is hard in practice: many organizations still maintain the list manually, which doesn't scale.
Security teams can use six more effective ways to track AI tools:
- Procurement tracking – catches newly purchased AI tools, but misses AI features added to software you already own.
- Manual log reviews – network logs can reveal AI activity, but work poorly for cloud-based AI services.
- Cloud security tooling – CASB solutions such as Netskope help discover AI usage but can't fully enforce policy.
- Identity provider logs – sign-in logs from Okta or Entra show which AI apps employees are accessing.
- Extending existing asset lists – classifying AI tools by risk level helps, but the AI landscape changes too quickly for static lists.
- Dedicated AI monitoring tools – purpose-built tools, such as Harmonic Security, track all AI activity, including use from personal accounts.
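As a rough illustration of the log-review approach above, here is a minimal Python sketch that counts hits against a watchlist of AI service domains in proxy or DNS log lines. The domain list, log format, and field names are assumptions for illustration; adapt both to your own environment and log sources.

```python
from collections import Counter

# Hypothetical watchlist of AI service domains; extend for your environment.
AI_DOMAINS = {
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def find_ai_activity(log_lines):
    """Count how often each watchlisted AI domain appears in log lines."""
    hits = Counter()
    for line in log_lines:
        for domain in AI_DOMAINS:
            if domain in line:
                hits[domain] += 1
    return hits

# Example proxy-style log lines (fabricated for illustration).
sample_log = [
    "2025-01-15T09:12:03 user=alice dst=chat.openai.com status=200",
    "2025-01-15T09:13:47 user=bob dst=intranet.corp.local status=200",
    "2025-01-15T09:15:22 user=alice dst=claude.ai status=200",
]

print(find_ai_activity(sample_log))
```

A substring scan like this is deliberately crude: it misses TLS-encrypted destinations that never appear in plain text and any AI service not on the watchlist, which is exactly why the list-based approaches above struggle to stay current.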
Find out how employees use AI
Instead of stopping employees from using AI, security teams should understand why they use it. Blocking AI completely can lead people to find ways around the rules.
By knowing why employees turn to AI, security leaders can suggest safer tools that follow company policies. This also helps in discussions with AI teams.
Once you understand AI usage, you can provide better training. This matters more now that the EU AI Act's AI literacy requirement obliges companies deploying AI to ensure their employees understand it well enough to use it responsibly.
Make sure employees follow AI rules
Many companies have AI policies, but enforcing them is another matter. Some businesses simply circulate the rules and hope employees follow them, which isn't enough to keep data secure.
Security teams usually try one of two methods:
- Secure browser controls – some companies route AI usage through managed browsers, but restricting functions such as copy-paste often pushes employees onto personal devices, bypassing the controls entirely.
- DLP or CASB tooling – others monitor AI use with data loss prevention or CASB platforms, but legacy detection methods can be unreliable and inconsistent.
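To illustrate the kind of check a DLP-style control performs, here is a minimal Python sketch that flags sensitive patterns in text before it reaches an AI tool. The patterns shown (emails, API-key-like strings, US SSNs) are simplified assumptions for illustration; production DLP policies cover far more data types and use much more robust detection.

```python
import re

# Hypothetical, simplified patterns; real DLP rule sets are far broader.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_sensitive(prompt: str):
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

print(flag_sensitive("Summarize: contact jane.doe@corp.com, key sk-abcdef1234567890XYZ"))
# → ['email', 'api_key']
```

A real deployment would sit inline (in a browser extension, proxy, or API gateway) and block or redact the prompt rather than just flag it, but the core detection step is the same pattern-matching shown here.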