Friday, 22 March 2024

Addressing Critical Data Security Challenges Through the GenAI Lifecycle

As organizations move forward with generative AI (GenAI), they must plan for data security. Because generative AI models consume text, images, code and other types of unstructured, dynamic content, the attack surface broadens, escalating the risk of a security breach.

Having trusted data is essential for building confidence in GenAI outcomes and driving business transformation. Securing that data is crucial to deploying reliable GenAI solutions.

Organizations must consider generative AI data risks across the four stages of the GenAI data lifecycle: data sourcing, data preparation, model training/customization, and operations and scaling. For each stage, we’ll look briefly at the overall challenges, a potential attack vector and mitigation actions for that attack.

Data Sourcing: Protect Your Sources


In this stage, data sources are discovered and acquired from the organization’s internal systems and datasets or from external sources. Organizations must continue to ensure the cleanliness and security of structured and semi-structured data. With GenAI, unstructured data—such as images, video, customer feedback or physician notes—also moves to the forefront. Finally, the integrity of the model data must be assured, which includes fine-tuning data, vector embeddings and synthetic data.

An AI supply chain attack occurs when an attacker modifies or replaces data or a library that supplies data for a generative AI application. As an example, an attacker might modify the code of a package on which the application relies, then upload the modified package version to a public repository. When the victim organization downloads and installs the package, the malicious code is installed.

An organization can protect itself against an AI supply chain attack by verifying digital signatures of downloaded packages, using secure package repositories, regularly updating packages, using package verification tools and educating developers on the risks of supply chain attacks.
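One of those mitigations, verifying the integrity of downloaded packages, can be sketched in a few lines. The checksum-comparison helper below is an illustrative example (the function name and pinned digest are hypothetical), showing the idea behind pinning a trusted SHA-256 hash for each dependency and refusing any artifact that does not match:

```python
import hashlib

def verify_package(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded artifact's SHA-256 digest against a pinned,
    trusted value. A mismatch means the file was modified or replaced."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large packages don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

In practice, package managers can enforce this automatically; for example, pip supports a hash-checking mode (`--require-hashes`) that rejects any dependency whose digest differs from the one recorded in the requirements file.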

Data Preparation: Control Access and Enforce Data Hygiene


In the data preparation stage, acquired data is prepared for model training, fine-tuning or model augmentation. This may include filtering junk data, de-duplication and cleansing, identifying bias and handling sensitive or personally identifiable information. All these activities provide opportunities for a malicious actor to contaminate or manipulate data.

Data poisoning attacks occur when an attacker manipulates training data to cause the model to behave in an undesirable way. For example, an attacker could cause a spam filter to incorrectly classify emails by injecting maliciously labeled spam emails into the training data set. The attacker also could falsify the labeling of the emails.

To prevent these sorts of attacks, companies should validate and verify data before using it to train or customize a model, restrict who can access the data, make timely updates to system software and validate the model using a separate validation set that was not used during training.
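A simple tripwire for the spam-filter scenario above is to compare the label distribution of each incoming training batch against a trusted baseline before using it. The sketch below is a minimal illustration (the function name, labels and tolerance are assumptions, not a production poisoning detector):

```python
from collections import Counter

def label_shift_suspicious(labels, baseline, tolerance=0.05):
    """Flag a training batch whose label proportions drift beyond a
    tolerance from a trusted baseline distribution -- a crude tripwire
    for label-flipping or injection attacks."""
    counts = Counter(labels)
    total = sum(counts.values())
    for label, expected_frac in baseline.items():
        observed = counts.get(label, 0) / total
        if abs(observed - expected_frac) > tolerance:
            return True  # distribution has shifted; hold batch for review
    return False
```

A flagged batch would then be quarantined for human review rather than fed directly into training. Real deployments would combine this with provenance checks and statistical tests rather than a single proportion threshold.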

Model Training/Customization: Validate Data and Monitor for Adversarial Activity


In the model training stage, the acquired data is used to re-train, fine-tune or augment the generative AI model for specific requirements. The AI team trains or enriches the model with a specific set of parameters that define the intent and needs of the GenAI system.

In model skewing attacks, an attacker manipulates the distribution of the training data to cause the model to behave in an undesirable way. An example case would be a financial institution that uses an AI model to predict loan applicant creditworthiness. An attacker could manipulate the feedback loop and provide fake data to the system, incorrectly indicating that high-risk applicants are low risk (or vice versa).

Key mitigating steps to prevent a model skewing attack include implementing robust access controls, properly classifying data, validating data labels and regularly monitoring the model’s performance.
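The last of those steps, regularly monitoring the model's performance, can be as simple as scoring the model against a trusted holdout set and alerting when accuracy drifts below an accepted band. The class below is a hedged sketch of that idea (the class name and thresholds are hypothetical):

```python
class PerformanceMonitor:
    """Track model accuracy against a trusted holdout set and flag
    regressions that may indicate skewed or manipulated feedback data."""

    def __init__(self, baseline_accuracy: float, max_drop: float = 0.02):
        self.baseline = baseline_accuracy  # accuracy measured at deployment
        self.max_drop = max_drop           # largest tolerated regression

    def check(self, predictions, truths) -> bool:
        """Return True if accuracy stays within the allowed band."""
        correct = sum(p == t for p, t in zip(predictions, truths))
        accuracy = correct / len(truths)
        return accuracy >= self.baseline - self.max_drop
```

For the loan-scoring example, a failed check would pause automatic retraining from the feedback loop until the incoming data had been audited.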

Operations and Scaling: Protect AI Production Environment Integrity


As organizations scale their AI operations, they will mature and become more competent in robust data management practices. But attack opportunities remain; the generated output itself, for example, becomes a new dataset to protect. Companies will need to stay vigilant.

A prompt injection occurs when an attacker manipulates a large language model (LLM) through crafted inputs, causing the LLM to inadvertently execute the attacker’s intentions. Consider an attacker who sends a prompt to an LLM-based support chatbot telling it to “forget all previous instructions.” The LLM is then instructed to query data stores and exploit package vulnerabilities. This can lead to remote code execution, allowing the attacker to gain unauthorized access and escalate privileges.

To inhibit prompt injections, restrict LLM access to back-end systems to the minimum necessary and establish trust boundaries between the LLM, external sources and extensible functionality such as plugins.
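One way to enforce such a trust boundary is to route every model-requested action through an explicit allowlist, so the LLM can never invoke a capability the application did not register. The dispatcher below is an illustrative sketch (the registry, tool names and function signatures are assumptions for the example):

```python
# Hypothetical tool registry: the LLM may only invoke functions listed here.
ALLOWED_TOOLS = {
    "lookup_order": lambda order_id: f"status for {order_id}",
}

def dispatch_tool_call(name: str, *args):
    """Execute a model-requested tool only if it is explicitly allowlisted.
    Anything else is refused, no matter what the prompt asked for."""
    tool = ALLOWED_TOOLS.get(name)
    if tool is None:
        raise PermissionError(f"tool '{name}' is not permitted")
    return tool(*args)
```

The key design point is that the boundary lives in application code, outside the model: even a fully hijacked prompt cannot expand the set of actions the system will perform on its behalf.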

Examine Your Data Risks as Part of AI Strategy and Execution


This post has presented some of the attack risks that open up when training, customizing and using GenAI models. In addition to familiar risks from data analytics, GenAI presents new data security challenges. And the model itself must be guarded during training, fine-tuning, vector embedding and production.

This is a big undertaking. Given the ambitious goals and timeframes many organizations have set for deploying GenAI use cases, they can’t afford to gradually add the people, processes and tools required for the heavy lift of GenAI data security.

Dell Services is ready to help with these challenges, with our Advisory Services for GenAI Data Security. Consultants with data security and AI expertise help you identify data-related risks through the four stages of the GenAI data lifecycle—data sourcing, data preparation, model training/customization and AI operations and scaling. Our team provides understanding of possible attack surfaces and helps you prioritize the risks and mitigation strategies, leveraging frameworks such as MITRE ATLAS, OWASP ML Top 10 and OWASP LLM Top 10.

Source: dell.com
