Securing AI: Insights from real-world deployments

Get insights into AI security risks from real-world deployments and learn how forward-thinking enterprises protect their AI-driven platforms.
Kirthika Soundararajan
March 23, 2024
Listen on
Listen on

In 2023, Air Canada made headlines for its ‘lying AI chatbot’ case, a significant moment for businesses using AI. The airline faced controversy after its AI-powered chatbot gave a customer misleading flight information. The subsequent ruling held the airline fully responsible for the incorrect information.

The case highlighted an important truth: Businesses are accountable for every interaction on their platforms, including AI-powered ones.

Enterprises must proactively secure their AI systems, ensuring they are robust, reliable, and free from vulnerabilities.

Ruchir Fatwa, co-founder of SydeLabs and now VP of Engineering at Protect AI following its acquisition, has observed the AI security landscape. SydeLabs and Protect AI provide tools to ensure the security of AI systems, including an AI Red Teaming System, which tests for vulnerabilities like data leaks, prompt injections, and toxicity. Their AI Firewall, an intent-based solution, monitors user inputs for safe communication with AI systems.

In this episode, Ruchir spoke to Srikrishnan, co-founder of Rocketlane, about the unique security challenges of AI, how companies can improve their security, and why early attention to AI safety is essential for businesses.

Here are the main points from their conversation:

1. Why should companies take a proactive stance on AI security?

Companies can’t attribute AI-generated responses to machines—these are still seen as coming from the enterprise to the customer.

In recent deployments and testing, the Protect.AI team has observed concerning trends around safety and security, often due to newer AI models being less secure or how companies implement these systems.

Many enterprises, including mid-market customers, fine-tune their models using customer or private data, assuming sensitive information is protected.

However, there are easy ways to leak this data, like role-playing scenarios designed to reveal another customer’s information.

These incidents highlight the need for stronger safeguards in AI deployments.

When companies deploy AI solutions, they must clearly define the scope of responsibility for customer safety—especially given the uncertainty around liability, particularly with open-source models. For example, if a startup builds on a model like Llama and sells a feature that produces biased or unsafe results, the responsibility question becomes important.

Many startups are now being transparent about their processes—such as the fine-tuning, security measures, and safety tests. Sharing this information with customers builds trust and encourages them to ask the right questions of other providers. While no system is 100% secure, demonstrating necessary precautions helps reassure customers and may reduce the likelihood of being held solely responsible for security issues.

2. The unique security challenges of AI

AI security is distinct because it creates a new attack surface by merging traditional systems with human-like interactions, making them vulnerable to social engineering.

Key factors making AI security challenging include:

  • New attack surface: AI systems introduce a new attack surface with additional vulnerabilities through social engineering, where crafted prompts can bypass traditional security protocols.
  • Limitations of pattern-based security: Unlike traditional systems relying on pattern recognition, AI systems using natural language and multimedia require new security approaches. For instance, ChatGPT was manipulated into providing harmful information by framing a prompt as a story. Security capabilities need to discern intent, not just look for patterns.
  • Increased privileged access: As AI agents replace human functions, they gain broad access to data and operations.

This new model creates a single access point for vast amounts of data and tasks, posing significant security challenges. Access to sensitive systems could result in unintended consequences if not properly managed.

  • The tricky balance between access and control: AI agents present a unique challenge. These agents require broad access to perform tasks such as booking flights or making payments, but careful control over what they can do is essential to prevent misuse.

Beyond security, AI raises concerns about safety, brand reputation, and intellectual property, which companies deploying AI systems must manage.

Protecting AI systems from fraud and abuse requires a holistic approach that includes:

  • Shift from pattern to intent-based security: Instead of focusing on the input's appearance, it's more effective to understand the user's intent. This approach helps in security and preventing abuse. For example, an AI-driven chatbot on a flight booking site shouldn't be misused for unrelated tasks, like writing code, which could overload the system and lead to security issues.
  • Guardrails to comply with brand guidelines: It's important to have guardrails that keep the AI system's output aligned with the company's brand guidelines, tone, and messaging. This ensures the AI stays accurate and relevant, whether it's by giving unwanted responses or making errors in judgment.
  • Balance between AI power and control: Limiting AI capabilities can prevent misuse but may restrict its value. The key is to find a balance, allowing AI to perform diverse tasks while ensuring it stays within its intended purpose. A critical focus moving forward will be properly authorizing AI agents.

3. Real-world example of a proactive approach to AI security 

Enterprises can be more prepared due to their proactive approach to identifying and addressing potential risks.

One example is AI teams aware of potential issues and continuously try to break their systems to find vulnerabilities. Recently, a single model approach is broken down into smaller, specific models for particular tasks. For instance, one model might only handle greetings, ensuring it doesn't respond beyond that. This allows for stricter guardrails, with an outer model upfront that decides which question goes where, and as confidence grows, more modules can increase capabilities.

Another example is when enterprises carefully consider the models they deploy. For example, if a company switches from OpenAI’s GPT to an open-source model like Llama, this change might seem simple but can come with new security risks. While both models may appear similar in performance, the types of attacks that work on GPT might not work on Llama, and there may be new vulnerabilities to consider. These enterprises don’t just evaluate models based on cost and performance; they also consider non-functional factors like security to ensure their systems remain secure as they scale.

4. Ways to ensure better AI security posture

As AI becomes integral to products and solutions, here are a few steps you need to take:

  • Categorize your AI system’s purpose: First, determine whether the AI system is read-only, pulling from public sources and giving summaries, or if it’s using internal data, such as user data or code. The most important distinction is between a purely informational system and one that can take actions, like writing or deleting data.

  • Conduct threat modeling: After categorizing the system, consider the worst-case scenario if things go wrong. Anticipate these possibilities and plan how to address them.

  • Test the system thoroughly: A good practice is to do red teaming (penetration testing by experts) to find out what the worst possible outcomes are before going live. This step is crucial because in-house teams may lack the security expertise to identify vulnerabilities in AI systems.

  • Ensure data protection: If you deploy AI agents, you need to prevent any contamination of data. If the system has access to one customer’s data, you need to ensure it’s not using that data for another customer. Proper safeguards need to be in place to maintain data privacy.

  • Be mindful of supply chain security: If you're using open-source models or public models, consider the risks involved. Unlike traditional open-source software, which you can audit, open-source AI models are often a black box. You won’t know if there’s a backdoor, malicious data, or copyrighted content hidden inside.

5. Best practices for AI security 

To be better prepared for deploying AI at scale, companies should consider a few best practices to ensure success and minimize risks. These include:

  1. Deploying in phases: Start with a read-only system or one with minimal write actions. This allows you to observe how the system is being adopted, gain insights into user interactions, and assess its performance under real conditions. Gradually scale the system rather than jumping straight to more complex functionalities, like fully autonomous agents.
  1. Categorizing adoption strategies: It's important to consider AI adoption in three distinct categories:
    1. AI built into your systems
    2. Employees using AI tools (e.g., copilot or MidJourney)
    3. Adopting AI-enabled SaaS solutions

Each category requires a different approach, and even if AI is not deployed directly in your systems, you must consider how employees are using AI tools. For example, many AI providers don’t protect users from issues like copyright infringement, which can expose the company to risks.

  1. Monitoring and managing risk: As your company grows, your risk appetite tends to decrease, so it’s important to have someone dedicated to managing AI-related risks and ensuring the company’s AI practices align with evolving security standards and regulations.
  2. Leveraging trusted tools: Use third-party tools that are built for AI security and fill gaps in internal security testing to ensure comprehensive protection.

Further Reading 

2025 goal setting for professional services organization
The 6 pillars of client-centric professional services

Move your service delivery into the fast lane

Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.

Other episodes

Securing AI: Insights from real-world deployments
Get insights into AI security risks from real-world deployments and learn how forward-thinking enterprises protect their AI-driven platforms.
The AI revolution in IT services: Why mid-size companies are set to disrupt the industry
Mid-sized IT companies are set to lead the AI revolution, outpacing larger competitors through agility and risk tolerance in service delivery transformation.
The opportunity in professional services
Discover how AI is reshaping professional services with Krishna Kumar Natrajan's insights on innovation, efficiency, and outcome-based models.