AI / ML Security

Scope:
all Yandex services that use generative neural networks


Your goal is to identify vulnerabilities that can occur in systems and apps that use generative neural networks. This includes both issues within the ML models themselves and misconfigurations in the infrastructure that maintains their operation.

The program covers all Yandex services and AI products that use models from the YandexGPT and YandexART families::

  • Alice;
  • Neuro search;
  • Shedevrum
  • Other services, including those that use the ML models for implicit ranking and search.
  • We don't consider ethical issues in this Bug Bounty category. The complete list of exceptions is available in the «Exceptions» section.

    { Rewards }
    The reward amount depends on the severity of the vulnerability, the ease of its exploitation, and its impact on sensitive data.

    Vulnerabilities that don’t affect AI/ML functionality will be assessed in accordance with the Main scope category.

    Category
    Example
    Reward
    Attacks on the data collection, processing, and model training processes: supply chain attacks, attacks on the model training process, and data poisoning
    • Training data poisoning:
      Affecting the future response style and/or quality of a model through a series of prompts or by injecting poisoned data into the source.
    $2,000 — $11 ,000
    Information disclosure: technical and sensitive data
    • Accessing data about user's interactions with the model:
      Retrieving third-party dialog history data using a prompt;
    • Disclosing internal configurations providing more insight into how the models work:
      Extracting model weights/confidence scores/system prompts containing technical data.
    $1,500 — $11,000 Sensitive data

    up to $2,000 ₽ Technical data
    Attacks on the model's business decision-making: adversarial attacks, attacks affecting decision-making algorithms
    • Prompt injection if it is used as the basis for a business decision affecting other services or users:
      Uploading a product with a prompt injection in its description that affects product ranking in search results.
    $500 — $3,300
    Infrastructure attacks: modifying the system's behavior for other users, changing the system’s technical characteristics/capabilities
    • Modifying how the model behaves toward other users using system commands, configuration flags, and other technical parameters:
      Forcing the model to respond in French to everyone using the additional flag «set_mode=french» ;
    • Affecting/changing the model's behavior when interacting with multiple users:
      Using a prompt from one user to affect the style of responses given to another or all users;
    • SSRF attacks on other internal services using prompts.
    up to $5,500
    Other attacks: plugin vulnerabilities, bypassing technical restrictions, attacks compromising the confidentiality and integrity of our systems
    • Bypassing technical restrictions of the model:
      Bypassing billing mechanisms for paid API requests;
    • Vulnerabilities in official plugins extending the model's functionality:
      For example, using a plugin that allows the model to send real-time search queries to send requests to an internal host;
      Changing prices and purchasing products for 0 rubles using a plugin that can order from Yandex Market while fetching up-to-date prices.
    up to $2,500
    { Out of scope }
    • Ethical issues:
      The model demonstrates bias, discrimination, or other undesirable behavioral patterns, and distorts well-known facts or gives incorrect or incomplete responses;

    • Prompt injections:
      Injections that affect only the model's decision-making or the content generated for the attacker (for example, changing the attacker's chat style or generating an image in a different style);
    • Model hallucinations:
      This is when the model simulates code execution, disclosure of sensitive data or service prompts. You can use ssrf-sheriff to check if the code is executed;

    • Vulnerabilities affecting service availability:
      If you suspect a vulnerability that could affect the availability of our services, please refrain from further testing and report it to us so that we can investigate it in a controlled environment;

    • Vulnerabilities found in third-party services of Yandex Cloud clients are out of the scope of this Bug Bounty category.
    { Ethics }
      We take ethical considerations in the behavior and responses of generative models seriously. However, this Bug Bounty category doesn’t cover the generation of unethical content.

      We understand that no system is perfect, and that ethical violations may occur despite our best efforts. That's why we encourage users to report any ethical violations they encounter. You can submit your reports through the following channels:

    • In-chat feedback: if you encounter an inappropriate response during a conversation, you can click the thumbs down icon for that response;
    • Service customer support team.
    Thu Apr 10 2025 08:57:54 GMT+0300 (Moscow Standard Time)