
Navigating the Cybersecurity Risks of Shadow & Open-Source GenAI

Generative AI is without question the leading frontier in AI. These models have captured attention and driven compelling use cases across industries with their ability to create everything from text to images and even solve complex coding problems. Tools like OpenAI's ChatGPT and Anthropic's Claude have changed how companies innovate, automate and engage with customers in just a couple of years.

But as generative AI’s popularity soars, so do the risks associated with its use—especially when deployed without oversight or sourced from open platforms. Here’s more on navigating shadow AI and open-source AI cybersecurity risks.  

Shadow AI and Open-Source AI 

Shadow AI is the use of AI tools or models without explicit approval, bypassing your established security and governance protocols. (For the purposes of this blog, AI refers to GenAI.) Developers, data scientists or business units might deploy GenAI models or tools outside the purview of IT, driven by the pressure to innovate quickly.

While cloud-hosted, proprietary AI tools garner much of the attention and user base, open-source options are becoming increasingly viable. Open-source tools found in places like GitHub allow businesses to adapt models to their specific needs, integrate them into existing workflows and even tailor them for unique applications. One deep-dive into open-source generative AI found that two-thirds of foundation models (large, pre-trained models that perform a wide range of tasks) released in 2023 were open-source.

So, What Are the Risks Here?

Some of the main open-source GenAI cybersecurity risks to think about include: 

  • Anyone can modify the models, which heightens the risk that a malicious party introduces harmful code, poisoned training data or backdoors into these projects (a basic integrity check is sketched after this list).
  • Open-source models rely on external libraries and dependencies, which might contain hidden vulnerabilities that attackers can exploit if not properly vetted and maintained. 
  • Updates and patches for open-source models often vary widely in frequency and reliability, potentially leaving you exposed to known vulnerabilities for extended periods. 
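
One basic control that addresses the tampering and dependency concerns above is to pin and verify a checksum for any model artifact pulled from a public repository before loading it. The Python sketch below is a minimal illustration of that idea, assuming the model's maintainers publish a digest; the file path and expected value are hypothetical placeholders rather than values from any real project.

```python
import hashlib
from pathlib import Path

# Hypothetical pinned digest -- replace with the SHA-256 published by the model's maintainers.
EXPECTED_SHA256 = "replace-with-the-published-sha256-digest"


def verify_model_artifact(path: Path, expected_sha256: str) -> bool:
    """Return True if the file's SHA-256 digest matches the pinned value."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Stream in 1 MB chunks so large model weights don't exhaust memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256


if __name__ == "__main__":
    artifact = Path("models/open_model_weights.bin")  # hypothetical local path
    if not verify_model_artifact(artifact, EXPECTED_SHA256):
        raise SystemExit("Integrity check failed; refusing to load the model artifact.")
    print("Checksum verified; proceeding to load.")
```

The same habit extends to the surrounding libraries: pin dependency versions and review them before upgrading, rather than automatically tracking whatever the upstream project publishes next.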

Shadow AI carries similar risks, along with others that are specific to using apps in an unvetted way: 

  • Employees often use shadow AI models with data that hasn’t been anonymized or properly secured, which creates compliance risks. Sensitive information might be fed into models that store or process data in unauthorized ways. Samsung found out about this risk when employees fed sensitive company data into ChatGPT, and it’s a risk that’s amplified in unmonitored, unsanctioned apps.  
  • You likely have monitoring systems to track the behavior of sanctioned software, but these systems often don’t cover unsanctioned shadow AI tools. This leaves a gap in visibility, where models can be compromised without setting off any alerts (a log-mining sketch follows this list).
  • Shadow AI circumvents the access controls that are in place for other IT resources. If the credentials used for these tools are compromised, an attacker could gain access to the model and underlying data without being detected by standard access logs.
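
One way to start closing the visibility gap called out above is to mine the egress logs you already collect for traffic to well-known GenAI endpoints. The Python sketch below assumes a hypothetical proxy log exported as CSV with "user" and "host" columns; the hostname list is illustrative, and both the list and the field names will need tailoring to your environment and proxy vendor.

```python
import csv
from collections import Counter

# Illustrative hostnames associated with public GenAI services; extend to suit your environment.
GENAI_HOSTS = {
    "api.openai.com",
    "chat.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}


def find_shadow_ai_traffic(proxy_log_csv: str) -> Counter:
    """Count requests per user to known GenAI hosts in a CSV proxy log export."""
    hits: Counter = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            # Field names assume a hypothetical export schema -- adjust to your proxy's log format.
            if (row.get("host") or "").lower() in GENAI_HOSTS:
                hits[row.get("user") or "unknown"] += 1
    return hits


if __name__ == "__main__":
    for user, count in find_shadow_ai_traffic("proxy_export.csv").most_common():
        print(f"{user}: {count} requests to GenAI endpoints")
```

Even a rough report like this turns an invisible problem into a concrete conversation with specific teams about sanctioned alternatives.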

Factors That Drive These Risks 

Speed vs. Security

One of the most significant factors here is the long-running battle between speed and security in IT: the tension between the need for rapid innovation (or just keeping pace with competitors) and the imperative to protect sensitive information and systems. This dates back to the late 1990s, when business internet adoption surged and companies rushed to deploy websites and apps to reach customers online, often neglecting security. Things worsened from a security perspective when Agile development and DevOps became popular, and it’s only in the last decade that security has started to be taken more seriously.

Businesses often prioritize quick deployments of new technologies and apps to respond to market demands and expectations. The result is that many companies now adopt GenAI models quickly without conducting thorough security reviews or considering compliance implications. Compounding the issue, formal governance that addresses AI security, such as updated policies and approval processes, is far less likely to be in place.

Lack of Proper Training

Many teams—including data scientists, developers and even business leaders—do not have a deep understanding of AI-specific security risks. They see open-source AI as a plug-and-play solution without recognizing that adopting a model is more like onboarding a new software dependency—one that can be compromised or manipulated, or that can contain hidden backdoors. 

The concept of AI security is relatively new and isn’t likely to feature heavily in training courses. Few of your employees are likely familiar with concepts like model poisoning, data poisoning or backdoor insertion in AI models, let alone how to defend against them. AI has entered various functions outside traditional IT—like marketing, customer service and product development—where the security expertise needed to vet models is even less likely to exist. This lack of awareness makes it easier for employees to adopt shadow AI projects or risky open-source models.  

On top of that, traditional cybersecurity teams aren’t equipped to scrutinize AI models in the same way they can evaluate software code or network traffic. The result is a gap in knowledge and skills. 
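
Some of that scrutiny can still borrow from familiar code-review habits, though. As a small illustration, the Python sketch below uses the standard-library pickletools module to list opcodes in a pickle-serialized model file that are commonly involved in running code during unpickling. The file name is hypothetical, and many legitimate checkpoints contain these opcodes too, so treat the output as a prompt for closer review rather than a verdict.

```python
import pickletools

# Pickle opcodes that can load or invoke arbitrary callables during unpickling.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}


def flag_unsafe_pickle(path: str) -> list[str]:
    """Return descriptions of suspicious opcodes found in a pickle file."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS_OPCODES:
                findings.append(f"{opcode.name} at byte {pos}: {arg!r}")
    return findings


if __name__ == "__main__":
    issues = flag_unsafe_pickle("downloaded_model.pkl")  # hypothetical file name
    if issues:
        print("Potentially unsafe constructs found -- review before loading:")
        print("\n".join(issues))
    else:
        print("No obviously unsafe pickle opcodes detected.")
```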

Using AI Securely: Proper Governance and Trusted Solutions

Ultimately, the risk of open-source AI models stems from their inherent openness, which leaves them exposed to misuse. With shadow GenAI, the risk comes from people eagerly using these tools without proper oversight, training or understanding of the potential consequences. Updated IT policies and clear guidelines addressing shadow and open-source AI cybersecurity risks will need to form the backbone of safe use at every business.

While GenAI can pose risks, tools from trusted vendors, used safely, can become a pivotal ally in building a more proactive cybersecurity defense. Nuspire’s AI-driven security assistant, Nutron, ensures that you’re not just racing toward innovation but doing so with confidence and security. 

Nutron streamlines security operations by automating tasks and providing actionable insights and recommendations tailored to your environment. Developed with input from CISOs and built as a closed-source platform, Nutron does not train on client data, ensuring that your sensitive information stays private and secure. 

Meet Nutron.    
