Many organizations are either implementing GenAI solutions or exploring ways to integrate these technologies into their business strategies. Informed choices and strategic planning, however, depend on reliable data about how these tools are actually used, and that data remains scarce.
The “Enterprise GenAI Data Security Report 2025” from LayerX provides insightful revelations regarding the practical use of AI tools in organizational settings, while also identifying significant vulnerabilities. This report, drawing on detailed telemetry from LayerX’s enterprise clients, stands out as one of the few trustworthy resources detailing the actual usage of GenAI by employees.
One notable finding reveals that almost 90% of enterprise AI utilization occurs away from IT’s oversight, creating serious risks, including potential data loss and unauthorized information access.
Key findings from the report are summarized below, encouraging readers to delve into the full document for enhanced security strategy refinement, data-driven decision-making for risk management, and advocacy for further resources to improve GenAI data protection.
Interested in a webinar discussing key takeaways from this report? Click here to register.
Casual Utilization of GenAI in Enterprises (For Now)
Despite the buzz surrounding GenAI, actual workplace adoption is somewhat tepid. LayerX reports that around 15% of users engage with GenAI tools daily. While this is a notable figure, it does not reflect widespread use.
However, the trend is expected to accelerate rapidly: 50% of users currently access GenAI tools at least once every two weeks. In addition, about 39% of regular GenAI users are software developers, which heightens the risk of leaking proprietary source code and of insecure, AI-assisted coding practices entering their work.
Understanding GenAI Usage Patterns
Because LayerX operates inside the web browser, it can monitor 'shadow SaaS' usage, revealing employees working with tools not sanctioned by IT or accessed through personal accounts. Although GenAI tools like ChatGPT serve work-related functions, approximately 72% of employees access them through personal accounts. Among those who do use corporate accounts, only about 12% log in with Single Sign-On (SSO). As a result, nearly 90% of GenAI usage is invisible to the organization, leaving it blind to which AI applications are in use and what information is being shared with them.
Corporate Data Involvement in GenAI Activities
According to LayerX, although not every user engages with GenAI daily, those who do frequently paste confidential information into these applications. On average, users paste corporate data into GenAI tools nearly four times per day; this can include sensitive business details, customer data, financial plans, and source code.
Strategies for Managing GenAI Integration
The report’s findings underscore an urgent requirement for innovative security measures to mitigate GenAI-related risks. Conventional security tools often fall short in addressing the contemporary AI-centric workplace, particularly with browser-based applications. There is a pressing need for solutions that can detect, regulate, and protect AI interactions right at the source—the browser.
Implementing browser-based security offers insights into access patterns of AI SaaS applications, as well as unmonitored AI tools beyond prominent ones like ChatGPT, and AI-enhanced browser extensions. Such visibility facilitates the application of Data Loss Prevention (DLP) strategies tailored for GenAI, enabling organizations to safely incorporate GenAI into their operational frameworks and future-proof their business endeavors.
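To make the idea of GenAI-tailored DLP concrete, here is a minimal, hypothetical sketch of the kind of check a browser-based policy might run on pasted text before it reaches a GenAI tool. The pattern names and rules below are illustrative assumptions, not LayerX's actual detection logic; a production system would use far richer classifiers.

```python
import re

# Illustrative patterns a DLP policy might flag in pasted text.
# These rules are assumptions for the sketch, not a vendor's real ruleset.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_paste(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in pasted text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def allow_paste(text: str) -> bool:
    """Allow the paste only if no sensitive pattern matches."""
    return not scan_paste(text)
```

In practice such a check would run in a browser extension's clipboard or input hooks, logging or blocking the event; the point is simply that inspection at the browser level sees the data before any SaaS endpoint does.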
For more insights regarding GenAI utilization, be sure to consult the full report.