
    Google Expands Its Bug Bounty Program to Tackle Artificial Intelligence Threats

By securnerd

    In a move to fortify the safety and security of artificial intelligence (AI), Google has expanded its Vulnerability Rewards Program (VRP), offering compensation to researchers who identify potential threats specific to generative AI systems. The decision aims to address unique concerns arising from generative AI, including issues like unfair bias, model manipulation, and misinterpretation of data, commonly referred to as “hallucinations,” according to statements by Google’s Laurie Richardson and Royal Hansen.

    The expanded program encompasses various categories, such as prompt injections, leakage of sensitive data from training datasets, model manipulation, adversarial perturbation attacks triggering misclassification, and model theft.

Google had previously established an AI Red Team in July as part of its Secure AI Framework (SAIF) to combat threats against AI systems. As part of its commitment to secure AI, Google is also working to bolster the AI supply chain through open-source security initiatives, including Supply Chain Levels for Software Artifacts (SLSA) and Sigstore. Sigstore provides digital signatures that enable users to verify the integrity of software and detect tampering, while SLSA provenance metadata offers detailed insight into software composition, helping consumers ensure license compatibility, identify vulnerabilities, and detect advanced threats.
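The core idea behind this kind of integrity verification can be illustrated with a minimal sketch: comparing an artifact's cryptographic digest against a trusted, published digest, so that any tampering is detectable. This is only the underlying principle; Sigstore's actual flow additionally uses signed certificates and a transparency log, which this sketch does not implement, and the artifact and digest names below are hypothetical.

```python
import hashlib
import hmac

def sha256_digest(artifact: bytes) -> str:
    # Hash the artifact's bytes; any modification changes the digest.
    return hashlib.sha256(artifact).hexdigest()

def verify_integrity(artifact: bytes, trusted_digest: str) -> bool:
    # Compare against a digest obtained through a trusted channel,
    # using a constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sha256_digest(artifact), trusted_digest)

# Hypothetical release artifact and its published digest.
release = b"example-release-v1.0"
published = sha256_digest(release)

assert verify_integrity(release, published)              # untampered
assert not verify_integrity(release + b"!", published)   # tampered
```

A real signature scheme improves on this by binding the digest to a signer's identity, so consumers need not fetch the digest itself over a trusted channel.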

This announcement coincides with OpenAI’s introduction of an internal Preparedness team dedicated to monitoring and protecting generative AI against catastrophic risks. The team’s remit spans from cybersecurity threats to potential chemical, biological, radiological, and nuclear (CBRN) risks.

    In a collaborative effort, Google, OpenAI, Anthropic, and Microsoft have jointly established a $10 million AI Safety Fund. This fund is specifically geared towards fostering research in the field of AI safety, underlining the industry’s commitment to advancing the responsible development and deployment of artificial intelligence technologies.


