It’s a logical and understandable approach for any organization contending with the adoption of new technologies: use existing tools to solve a novel security challenge. The security team understands how the tool functions, and there is limited incremental cost or technology acquisition risk when existing infrastructure is put to new use.
At Aim, we have seen this pattern play out in response to pressure from the organization at large for broader use of third-party AI applications. Security teams start out by putting their SASE or other edge gateways from vendors like Zscaler, Palo Alto Networks, Netskope and Cisco Umbrella in play to discover shadow use of AI and apply controls to better manage security, data protection and compliance risks.
But these tools are really only capable of a heavy-handed, blanket approach that actually compounds the shadow AI challenge and diminishes the role security teams can play in guiding secure adoption and usage of AI technologies. Fundamentally, security teams cannot collaborate effectively with stakeholders if all they have is a hammer - and no way to govern how users interact with LLMs, for example.
Still, there are plenty of benefits to leveraging existing infrastructure and investments for the right level of discovery and visibility - if they are integrated, extended and supplemented with application risk profiling, common guardrails for prompt and data leakage protection, and centralized monitoring of usage and activity.
This is precisely how Aim’s integration with SASE, cloud and endpoint security tools is designed: complement existing infrastructure and seamlessly extend it for more contextual visibility, granular controls and organizational alignment on how AI is best utilized.
Old Bottles, New Wine: Not Built for Purpose
Generally speaking, organizations that use SASE gateways as their primary AI security approach adopt a policy of “block” (based on URL) and “allow access by exception.” And, in many cases, these organizations are now realizing this approach lends itself to a set of poor outcomes.
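To make that limitation concrete, a block-and-allow-by-exception policy is essentially a URL category list plus an exception list keyed to user groups. The sketch below uses hypothetical category, domain and group names (not any vendor’s actual schema) to show how the enforcement decision reduces to a binary lookup, with no awareness of what is actually being sent to the application.

```python
# Minimal sketch of a URL-based "block, allow by exception" decision.
# Category names, groups and domains are illustrative, not any vendor's schema.
from urllib.parse import urlparse

BLOCKED_CATEGORIES = {"generative-ai"}          # coarse category applied to all AI apps
CATEGORY_BY_DOMAIN = {
    "chat.openai.com": "generative-ai",
    "claude.ai": "generative-ai",
    "gemini.google.com": "generative-ai",
}
EXCEPTION_GROUPS = {"ai-pilot-users"}           # users allowed through by exception

def gateway_decision(url: str, user_groups: set[str]) -> str:
    domain = urlparse(url).hostname or ""
    category = CATEGORY_BY_DOMAIN.get(domain)
    if category in BLOCKED_CATEGORIES:
        # The only lever is user identity; the prompt and data flow are invisible here.
        return "allow" if user_groups & EXCEPTION_GROUPS else "block"
    return "allow"

print(gateway_decision("https://chat.openai.com/", {"finance"}))         # block
print(gateway_decision("https://chat.openai.com/", {"ai-pilot-users"}))  # allow, unmonitored
```

Note that once a user lands in the exception group, the gateway’s job is done - which is exactly where the visibility and governance gaps described below begin.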
Inevitably, this tactic falls short in a few critical dimensions:
- security teams can’t identify and mitigate the risks arising from how AI is being used, which prompts are submitted and which applications are in play - as well as how agents are interacting with enterprise systems
- compliance teams can’t define, enforce and monitor prompt-level data protection or regulatory policies
- users can’t get access to the right tools to improve their productivity, or they circumvent controls in response to the ‘hammer’ approach
- the organization as a whole can’t effectively collaborate on security and governance for their AI strategy
First, from a risk perspective, these tools rely on a limited set of AI application categorizations, which crucially cannot distinguish between enterprise and free versions - and in the case of ChatGPT, the free version may mean the LLM is training on corporate data. Without risk profiling that goes beyond a subset of applications, they are more than likely overlooking potentially risky applications. And blocking access to applications can have the unintended consequence of pushing users toward even riskier applications - ones designed to entice users into enabling training on sensitive or proprietary data.
Also, the risk with some applications is ongoing. An approved application’s terms of service may initially appear benign, but if those terms change, the risk profile needs to be updated.
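As an illustration of why risk profiling has to be continuous rather than a one-time categorization, the sketch below recomputes an application’s risk tier from attributes such as whether it trains on user data and when its terms of service were last reviewed. The field names and thresholds are hypothetical, not Aim’s actual scoring model.

```python
# Illustrative risk-profiling sketch; attributes and thresholds are hypothetical.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AIAppProfile:
    name: str
    enterprise_tier: bool          # enterprise plans typically exclude training on customer data
    trains_on_user_data: bool
    tos_last_reviewed: date

def risk_tier(app: AIAppProfile) -> str:
    today = date.today()
    # Terms of service not reviewed in ~90 days are treated as stale and flagged for re-review.
    if today - app.tos_last_reviewed > timedelta(days=90):
        return "review-required"
    if app.trains_on_user_data and not app.enterprise_tier:
        return "high"
    return "medium" if not app.enterprise_tier else "low"

print(risk_tier(AIAppProfile("free-chat-app", False, True, date(2024, 1, 1))))
```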
Secondly, security teams effectively lose visibility into user activity, usage trends and prompt trends once they allow access by exception. They then have to spend time and resources managing the exceptions - rather than building the expertise to collaborate on implementing a cohesive strategy for AI adoption.
Developer adoption of agents (such as the Claude Code and Cursor code assistants) and use of new protocols like MCP (Model Context Protocol) compound the visibility limitations and monitoring gaps of these repurposed legacy tools. These blind spots around emerging usage patterns not only impair the ability to prevent attacks; they also fall short in providing monitoring and reporting of usage and activity for a broader set of stakeholders.
After all, AI is not only a matter of security. A cohesive and effective strategy requires collaboration with AI enablement teams, legal, data governance, and compliance teams. A binary block-or-allow strategy cannot facilitate this collaboration, and can’t address related needs such as providing the CIO’s office with observability and in-depth reporting of adoption and usage trends.
Thirdly - and crucially - these tools cannot govern, define guardrails, or enforce policies for how users interact with AI applications, using methods like semantic analysis and classification-based data anonymization, for example. This leaves security teams hamstrung, often on the outside looking in when AI steering committees are formed. Without a clear answer or plan for how they can govern which prompts users can or cannot use, how users interact with LLMs, and which agents are being deployed with enterprise system integrations, security teams cannot play a pivotal role.
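For a sense of what classification-based anonymization means in practice, the minimal sketch below tags spans of a prompt with a classification and replaces them with placeholders before the prompt leaves the organization. The regex patterns and placeholder scheme are illustrative only, not Aim’s detection logic.

```python
# Minimal sketch of classification-based prompt anonymization.
# The patterns and placeholder scheme are illustrative only.
import re

CLASSIFIERS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(prompt: str) -> tuple[str, dict[str, int]]:
    """Replace classified spans with placeholders; return the redacted prompt and counts."""
    counts: dict[str, int] = {}
    for label, pattern in CLASSIFIERS.items():
        prompt, n = pattern.subn(f"<{label}>", prompt)
        counts[label] = n
    return prompt, counts

redacted, found = anonymize("Send the contract to jane.doe@example.com, SSN 123-45-6789.")
print(redacted)   # Send the contract to <EMAIL>, SSN <US_SSN>.
print(found)
```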
Balancing AI Enablement and Control - Not Just Repurposing Legacy Tools
Aim starts out by ingesting the data and logs from the existing security tools in place - whether network tools or endpoint tools such as CrowdStrike - to establish a baseline view of what applications are in use. The next step in the discovery phase is to build an inventory of all AI applications in use - including AI agents and embedded AI such as Salesforce’s Einstein - and then perform categorization and risk profiling.
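The discovery step can be pictured as joining gateway or endpoint logs against a catalog of known AI applications. The sketch below uses made-up log fields and catalog entries to show the shape of that inventory-building pass; it is not Aim’s actual pipeline.

```python
# Illustrative discovery pass: join network/endpoint logs against an AI app catalog.
# Log fields and catalog contents are hypothetical.
from collections import defaultdict

AI_APP_CATALOG = {
    "chat.openai.com": {"app": "ChatGPT", "type": "assistant"},
    "api.anthropic.com": {"app": "Claude API", "type": "api"},
    "einstein.salesforce.com": {"app": "Salesforce Einstein", "type": "embedded"},
}

def build_inventory(log_records: list[dict]) -> dict[str, dict]:
    """Aggregate observed AI destinations into an inventory with users and hit counts."""
    inventory: dict[str, dict] = defaultdict(lambda: {"users": set(), "hits": 0})
    for record in log_records:
        match = AI_APP_CATALOG.get(record["dest_host"])
        if match:
            entry = inventory[match["app"]]
            entry["type"] = match["type"]
            entry["users"].add(record["user"])
            entry["hits"] += 1
    return dict(inventory)

logs = [
    {"user": "alice", "dest_host": "chat.openai.com"},
    {"user": "bob", "dest_host": "einstein.salesforce.com"},
    {"user": "alice", "dest_host": "chat.openai.com"},
]
print(build_inventory(logs))
```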
Thanks to our dedicated data science team focused on AI threats and risks, we maintain and continuously update a market-leading AI intelligence repository. As our team identifies new threats and attack vectors, we create and deploy detections and protection policies.
This breadth of visibility and depth of analysis significantly extends the baseline coverage - the subset of applications and detections - provided by existing tools.
To ensure that the Aim AI analysis engine is easy to deploy, we provide a set of options for interception - including through integrations with existing infrastructure from vendors like Zscaler, Palo Alto Networks and Cisco. We offer a broad range of interception options - browser plug-ins, reverse proxies, API gateways and MCP proxies - to accommodate both the architectural and organizational preferences of customers. The intent is both to reinforce the value of existing infrastructure in addressing new demands from the business, and to ensure that deployment methodologies allow for ease of rollout and distributed enforcement and monitoring.
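Whichever interception point is used, the events it produces ultimately need to land in a single analysis path. One simple way to picture this is a normalization step that maps browser-extension, reverse-proxy and MCP-proxy events into one common record, as in the hypothetical sketch below; the field names are invented for illustration.

```python
# Hypothetical normalization of events from different interception points
# into one common record consumed by a single analysis path.
def normalize_event(source: str, raw: dict) -> dict:
    if source == "browser_extension":
        return {"user": raw["userEmail"], "app": raw["tabHost"], "prompt": raw["promptText"]}
    if source == "reverse_proxy":
        return {"user": raw["auth_user"], "app": raw["upstream_host"], "prompt": raw["body"]}
    if source == "mcp_proxy":
        return {"user": raw["client_id"], "app": raw["tool_server"], "prompt": raw["tool_arguments"]}
    raise ValueError(f"unknown interception source: {source}")

print(normalize_event("reverse_proxy",
                      {"auth_user": "alice", "upstream_host": "api.openai.com", "body": "Summarize Q3 plan"}))
```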
Aim is intended to function like the Swiss Army knife of AI security solutions - from discovery to interception to analysis. The goal is to align your AI adoption strategy and security strategy - for both pre-built and customizable AI usage.
For example, the proxy deployment can be leveraged to channel all prompt interactions into what we call Aim Chat, which serves as a unified gateway with customizable branding and pop-ups for prompt inspection, semantic analysis of intents, and LLM interaction governed by data classifications. Aim Chat channels - rather than blocks - AI usage so that it conforms with guardrails and minimizes the potential for inadvertent training on sensitive or proprietary data.
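Conceptually, channeling rather than blocking means every prompt passes through a short pipeline of checks before it is forwarded. The sketch below uses hypothetical check names and verdicts (not Aim Chat’s implementation) to illustrate the idea of returning a coaching pop-up or a redacted prompt instead of a hard block.

```python
# Conceptual "channel, don't block" pipeline; names and verdicts are illustrative.
from dataclasses import dataclass

@dataclass
class Verdict:
    action: str            # "forward", "forward_redacted", or "coach"
    prompt: str
    message: str = ""

def check_intent(prompt: str) -> str | None:
    # Placeholder for semantic intent analysis (e.g. flag bulk source-code sharing).
    return "coach" if "entire codebase" in prompt.lower() else None

def redact_classified(prompt: str) -> str:
    # Placeholder for classification-based redaction (see the anonymize() sketch above).
    return prompt.replace("ACME-confidential", "<CONFIDENTIAL>")

def channel_prompt(prompt: str) -> Verdict:
    if check_intent(prompt) == "coach":
        return Verdict("coach", prompt, "This request appears to share proprietary code. Review the usage policy before sending.")
    redacted = redact_classified(prompt)
    if redacted != prompt:
        return Verdict("forward_redacted", redacted, "Sensitive data was masked before forwarding.")
    return Verdict("forward", prompt)

print(channel_prompt("Summarize the ACME-confidential roadmap."))
```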
Through integration with common identity providers, Aim can seamlessly extend existing allowed-by-exception user groups and start to introduce deeper prompt inspection, file upload policies, and usage guardrails for intents and topics, for example. These user groups are also a logical starting point for monitoring and reporting on AI usage and utilization patterns - helping the organization understand actual adoption trends broken down by department, business unit, or even individual.
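On the reporting side, usage events that carry identity attributes can be rolled up by department or business unit. The short sketch below assumes hypothetical event fields rather than any specific identity provider’s schema.

```python
# Roll up AI usage events by department; event fields are hypothetical.
from collections import Counter

events = [
    {"user": "alice", "department": "engineering", "app": "ChatGPT"},
    {"user": "bob", "department": "finance", "app": "ChatGPT"},
    {"user": "carol", "department": "engineering", "app": "Claude"},
]

usage_by_department = Counter((e["department"], e["app"]) for e in events)
for (department, app), count in sorted(usage_by_department.items()):
    print(f"{department:<12} {app:<10} {count} prompts")
```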
The crux of balancing enablement and control is the ability to make decisions and perform detections without disrupting the user experience or the AI tool’s performance. The Aim Engine provides the logic that runs over the interactions, and allows for centralized data collection and reporting.
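One common way to keep inline detection from degrading the user experience is to give each check a strict time budget and decide up front whether to fail open or closed when that budget is exceeded. The sketch below shows that general pattern; it is not a description of how the Aim Engine is implemented.

```python
# Generic time-budgeted inline check; not a description of the Aim Engine internals.
import concurrent.futures

# Shared worker pool so a slow check doesn't block the caller past its budget.
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def run_with_budget(check, prompt: str, budget_s: float = 0.2, fail_open: bool = True) -> bool:
    """Return True if the prompt may proceed; fall back to a default when the check is too slow."""
    future = _pool.submit(check, prompt)
    try:
        return future.result(timeout=budget_s)
    except concurrent.futures.TimeoutError:
        return fail_open  # don't stall the user's request on a slow detection

def cheap_check(prompt: str) -> bool:
    return "password" not in prompt.lower()

print(run_with_budget(cheap_check, "Draft a press release"))  # True
```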
Secure AI Adoption is a Team Sport
AI holds the promise of a fundamental reset for how companies and organizations function. As with any new technology, AI also exposes organizations to new forms of risk and threats. While most organizations are cognizant that security and risk mitigation are central to the long-term success of their utilization of AI, they also need a sustainable and practical approach - rather than a stopgap.
Equally, this creates pressure on security teams to remain relevant as organizations are formulating their adoption strategies. By reinforcing the value of existing investments without excessive gatekeeping, and complementing the visibility they provide with granular, reasonable enforcement, Aim supports alignment on a shared set of outcomes.