Google Bug Bounty for AI related attacks to study risks


Google is expanding its Bug Bounty program, which includes $12 million in rewards for AI attack scenarios for security researchers as part of its Bug Bounty Program (VRP). This program encourages testers and developers to test and report any software vulnerabilities in Google products and services. The company’s most popular generative AI products include Bard, Lenses, and other AI integrations in Search, Gmail, Docs, and more. GenAI tools will become a major target for them in the coming years. Bug crowds have been active in AL and ML testing since 2018.

It was announced as generative AI continues to grow and Google is one of the major leaders in artificial intelligence. These challenges threaten biases, model manipulations, data misinterpretations, and other hostile attacks with the expansion of the Vulnerability Reward Program. Third-party researchers are tasked with finding these vulnerabilities in exchange for financial gain so that Google can fix them before bad actors exploit them for better product security.

Google AI’s Secure AI Framework focuses on generative AI attacks

Google’s efforts to develop a responsive and secure AI product with its Open Source Security Foundation. To cover discussions that could arise from Google’s Generative AI system to develop responsible AI and manage AI-related risks. Companies have shared several concerns about traditional digital security, such as unfair bias, model manipulation or data misinterpretation.

Applying the common tactics, techniques and procedures that drive actors to leverage the attack AIssystem,

  • Promote an attack
  • Data extraction training
  • Manipulate models
  • Contradictory – disruption
  • Model theft/expiration date

If you discover a bug in an AI-powered tool, you can submit it if it meets the qualifications. Experts have the opportunity to scrutinize popular LLMS for potential vulnerabilities, as you can investigate them to prevent possible misuse of the generative AI feature. The company is taking the necessary steps to ensure the security of its AI supply chain by using the SLSA supply chain security guidelines and the Sigstoe code signing tool.

These programs offer numerous benefits to protect against such threats. Enhanced programs invite security researchers, known as white-hat hackers, to find and report vulnerabilities in AI EMs. The company strives to identify and resolve these issues before malicious individuals or groups can exploit them. Google’s engineering team has posted a list of at-track scenes eligible for VRP rewards. Among several top technology companies committed to ensuring the quality and reliability of AI products

Google Bug Bounty program for security issues

Google is giving out more than $12 million in cash rewards to those who find bugs in its products and report them to a security investigation, and you can submit the bug or security issue to the companies in 2022. The company will recognize and pay compensation to all ethical hackers who find and disclose vulnerabilities in Google’s system. VRP covers AI-specific attacks and issues with strict assessments and requirements for AI models before they are used by government agencies.

Business approach to better anticipate and test these potential risks; Google also plans to expand VRP to include attack scenes around product injections, sensitive data leaks, and the like. This recognition can be a motivating factor, as many in the cybersecurity community are passionate about securing technology to improve and develop it safely.

Additionally, thousands of third-party researchers test and look for system weaknesses in the AI ​​system, thereby avoiding criminal charges for hacking-related activities.

Expanding AI research around AI safety and security with plans to study risks

Recently, many reports have surfaced online drawing attention to generative AI in industry and government, about how bad actors want to take a fresh look at the way bugs are categorized and reported. In this bug bounty program to incentivize researchers to proactively identify and report AI-related vulnerabilities, COMPANY did not have clear guidelines for which issues are eligible for rewards.

Such programs also encourage researchers to focus on several key areas, including vulnerabilities in AI models, data pose attacks, and evasion techniques, to strengthen the security of their AI systems and protect against potential threats. Companies said they paid $26.3 billion to other companies to gain traffic, including default search engine status, which most likely went to Apple.


Microsoft also announced the AI ​​Bug Bounty Program with a $15,000 reward to find vulnerabilities as the company focuses on the AI-powered Bing Experience. The United Nations and OpenAI have announced their plans to study AI in the coming months; It is expected that even the Biden administration will issue an executive order around AI in the coming week to ensure the quality and reliability of AI products.

In addition, Google is among the companies that have signed a pledge to have independent experts test their AI programs for security before making them public, and to develop ways to watermark AI-generated content to prevent it public falls into deepfakes and other AI programs. disinformation created.

Leave a Comment