<?xml version="1.0" encoding="utf-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0" xml:base="https://www.hackerone.com/">
  <channel>
    <title>News &amp;amp; Updates</title>
    <link>https://www.hackerone.com/</link>
    <description/>
    <language>en</language>
    
    <item>
  <title>HackerOne Now Licensed for Penetration Testing in Singapore</title>
  <link>https://www.hackerone.com/blog/hackerone-now-licensed-penetration-testing-singapore</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">HackerOne Now Licensed for Penetration Testing in Singapore</span>
    



    
        H1 Team
        
    


<span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>joseph@hackerone.com</span></span>
<span class="field field--name-created field--type-created field--label-hidden">Mon, 03/03/2025 - 11:19
</span>

            
  
      
  
    Image
                



          

  

      
            March 3rd, 2025

      
            <p dir="ltr"><span>Cyber threats don’t wait, and neither should your security strategy. Organizations across Singapore are facing growing regulatory demands and increasingly sophisticated cyber risks. The best defense? A proactive approach that uncovers vulnerabilities before attackers do.</span><br><br><span>That’s why we’re excited to announce that HackerOne is now officially licensed to provide penetration testing services in Singapore. With this new certification from the&nbsp;</span><a href="https://www.csro.gov.sg/resources/licensed-service-providers/"><span>Cybersecurity Services Regulation Office</span></a><span>, we can now bring our modern, scalable&nbsp;</span><a href="https://www.hackerone.com/product/pentest"><span>Pentest as a Service (PTaaS) solution</span></a><span> to businesses across the region—helping you strengthen security, meet compliance requirements, and stay ahead of cyber threats.</span></p><p dir="ltr"><span>Unlike traditional pentesting providers, we don’t just hand you a static report and walk away. Our agile, expert-driven approach gives you real-time collaboration, faster results, and deeper insights—so you can turn security gaps into strengths before attackers exploit them.</span></p><p dir="ltr"><span>Ready to rethink penetration testing? Here’s what this means for you.</span></p><h2 dir="ltr"><span><strong>Why This Matters for Organizations in Singapore</strong></span></h2><p dir="ltr"><span>Cybersecurity threats are increasing in complexity, and regulatory requirements are becoming stricter. Organizations in Singapore—particularly those handling sensitive data—should conduct penetration testing in line with laws, standards, and frameworks like:</span></p><ul><li dir="ltr"><span>Monetary Authority of Singapore (MAS) TRM Guidelines</span></li><li dir="ltr"><span>Personal Data Protection Act (PDPA)</span></li><li dir="ltr"><span>PCI DSS </span></li><li dir="ltr"><span>NIST Cybersecurity Framework</span></li><li dir="ltr"><span>Cybersecurity Act of Singapore</span></li><li dir="ltr"><span>ISO 27001, SOC 2, and other international security standards</span></li></ul><p dir="ltr"><span>With our newly-approved penetration testing services, businesses can now proactively identify vulnerabilities, strengthen security postures, and align with local and global regulations.</span></p><h2 dir="ltr"><span><strong>Modern, Scalable Pentesting for APAC</strong></span></h2><p dir="ltr"><span>HackerOne’s Pentest as a Service (PTaaS) model modernizes the traditional penetration testing process, offering a faster, more flexible, and outcome-driven approach. Instead of rigid, slow-moving engagements, our platform allows you to:</span></p><ul><li dir="ltr"><span>Launch pentests in days, not weeks</span></li><li dir="ltr"><span>Access a vetted global community of security experts with deep industry knowledge</span></li><li dir="ltr"><span>Collaborate in real-time to address findings and strengthen security</span></li><li dir="ltr"><span>Meet compliance mandates while focusing on meaningful risk reduction</span></li></ul><p dir="ltr"><span>Unlike traditional consultancy-based pentests, HackerOne PTaaS integrates seamlessly into your security workflow, ensuring continuous security improvement rather than a one-time report.</span></p><h2 dir="ltr"><span><strong>What Sets HackerOne’s Pentesting Apart?</strong></span></h2><p dir="ltr"><span>HackerOne delivers elite penetration testing services backed by industry-leading expertise and technology. Our approach is designed for speed, accuracy, and business-aligned security outcomes.</span></p><ul><li dir="ltr"><span><strong>Speed</strong>: Start your pentest in 4-7 business days</span></li><li dir="ltr"><span><strong>Vetted Experts</strong>: 75% of our testers have 5+ years of experience</span></li><li dir="ltr"><span><strong>High-Impact Results</strong>: 19% of findings are critical or high severity, twice the industry average</span></li><li dir="ltr"><span><strong>AI-Powered Insights</strong>: Our AI Copilot (Hai) helps interpret complex reports and suggests remediation steps</span></li><li dir="ltr"><span><strong>Seamless Integrations</strong>: Works with Jira, GitHub, ServiceNow, Slack, and more for streamlined remediation</span></li></ul><p dir="ltr"><span>With a licensed and highly specialized security testing team, HackerOne ensures that your organization stays ahead of attackers, meets compliance requirements, and builds a more resilient security posture.</span></p><h2 dir="ltr"><span><strong>Next Steps: How to Get Started</strong></span></h2><p dir="ltr"><span>Now that HackerOne is a licensed penetration testing provider in Singapore, organizations in the region can start securing their systems with our expert-led pentesting services.</span></p><p dir="ltr"><span><strong>Interested in pentesting?</strong></span><a href="https://www.hackerone.com/product/pentest#form"><span> Contact us today</span></a><span> to discuss your security needs.</span><br><span><strong>Want to learn more?</strong> Explore our</span><a href="https://hackerone.drift.click/Pentest"><span> Pentest Solution Brief</span></a><span> for detailed insights into our methodology and coverage areas.&nbsp;</span></p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/news-updates" hreflang="en">News &amp; Updates</a>
                    
    

            
            <a href="https://www.hackerone.com/blog/topic/penetration-testing" hreflang="en">Penetration Testing</a>
        
            
            <a href="https://www.hackerone.com/blog/topic/security-compliance" hreflang="en">Security Compliance</a>
        
    
]]></description>
  <pubDate>Mon, 03 Mar 2025 17:19:19 +0000</pubDate>
    <dc:creator>joseph@hackerone.com</dc:creator>
    <guid isPermaLink="false">5568 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>Gain Actionable, Data-backed Insights with HackerOne Recommendations</title>
  <link>https://www.hackerone.com/blog/gain-actionable-data-backed-insights-hackerone-recommendations</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">Gain Actionable, Data-backed Insights with HackerOne Recommendations</span>
    



    
        Naz Bozdemir
        
            Senior Product Manager
      
    


    



    
        Caroline Collins
        
            Senior Product Manager
      
    


<span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>h1_admin</span></span>
<span class="field field--name-created field--type-created field--label-hidden">Thu, 02/06/2025 - 14:17
</span>

            
  
      
  
    Image
                



          

  

      
            February 6th, 2025

      
            <p dir="ltr">Meet&nbsp;<a href="https://docs.hackerone.com/en/articles/10131438-home#h_06c31153e5">HackerOne Recommendations</a>: a built-in intelligence layer that continuously refines your security program, delivering personalized insights and your program's historical performance.</p><h2>Eliminate Guesswork With Contextual, High-value Suggestions</h2><p dir="ltr">With HackerOne Recommendations, you don’t need to manually sift through reports or guess which actions will impact your programs most. This automated intelligence layer continuously evaluates your security program’s performance and delivers personalized, high-value recommendations—right inside your&nbsp;<a href="https://docs.hackerone.com/en/articles/10131438-home">HackerOne Home Page</a>.</p><p dir="ltr">Recommendations aren’t just a generic list of tasks—they are risk-driven, context-aware, and backed by real attack intelligence based on HackerOne’s comprehensive database, which comprises over 500,000 valid vulnerabilities reported across industries.</p><p dir="ltr">Every month, HackerOne assesses 20 trigger conditions within your program, with a continually growing set of factors that enhance its intelligence over time. As data expands, so does the system’s ability to surface even more precise, high-impact suggestions, designed to:</p><ul><li dir="ltr">Optimize vulnerability response times by identifying bottlenecks and delays in triage workflows</li><li dir="ltr">Maximize hacker engagement by analyzing payout structures, report resolution speed, and incentive alignment</li><li dir="ltr">Reduce critical security gaps by identifying trends in missed, delayed, or incorrectly prioritized vulnerabilities</li><li dir="ltr">Benchmark your program’s efficiency against industry peers and top performers</li></ul><h2>How HackerOne Recommendations Work&nbsp;</h2><p dir="ltr">HackerOne Recommendations are updated at the first of each month, delivering clear, actionable improvements tailored to your security program. Each recommendation includes:</p><ul><li dir="ltr">A defined action plan with specific steps to improve your program</li><li dir="ltr">Supporting data and metrics to justify and quantify the impact</li><li dir="ltr">Guidance on implementation, whether through direct action or with assistance from your HackerOne Account Manager or Customer Success Manager</li></ul><h4 dir="ltr">Accessing Recommendations</h4><p dir="ltr">Recommendations are available in the&nbsp;<a href="https://docs.hackerone.com/en/articles/10131438-home#h_06c31153e5">Recommendations</a> section of your&nbsp;<a href="https://docs.hackerone.com/en/articles/10131438-home">HackerOne Home Page</a>, providing an at-a-glance view of key security improvement opportunities.</p><ul><li dir="ltr">Take Action – Select a recommendation to view detailed insights, context, and next steps.</li><li dir="ltr">Review All – See a consolidated list of all active recommendations for your program.</li></ul><h4 dir="ltr"><br>Expanded View for In-depth Analysis</h4><p dir="ltr">Each recommendation includes a structured breakdown for clarity and ease of implementation:</p><ul><li dir="ltr">Left-hand pane – View all recommendations applicable to your program.</li><li dir="ltr">Right-hand pane – See detailed insights, including supporting data and suggested actions.</li><li dir="ltr">Actionable steps – Choose specific actions to address security gaps.</li></ul><h4 dir="ltr"><br>Customization and Feedback</h4><ul><li dir="ltr">Dismiss if not relevant – Click the Dismiss button up top to remove a recommendation from your view for 90 days.</li><li dir="ltr">Provide feedback – Use thumbs-up/down ratings on individual recommendations to refine future recommendations and ensure relevance.</li></ul><h3 dir="ltr"><br><strong>Enhance Program Performance With Data-driven Intelligence</strong></h3><p dir="ltr">HackerOne Recommendations is now available to all Bounty customers at no additional cost. Built on real-world security data, it eliminates guesswork by delivering actionable, high-impact insights—not generic alerts.</p><p dir="ltr">Start leveraging the industry’s most comprehensive vulnerability dataset to drive measurable security improvements. Start using HackerOne Recommendations today by&nbsp;<a href="https://www.hackerone.com/product/overview#form">connecting with our experts</a> or&nbsp;<a href="https://www.hackerone.com/product/overview">exploring the HackerOne Platform</a>.&nbsp;</p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/news-updates" hreflang="en">News &amp; Updates</a>
                    
    

            
            <a href="https://www.hackerone.com/blog/topic/vulnerability-management" hreflang="en">Vulnerability Management</a>
        
            
            <a href="https://www.hackerone.com/blog/topic/bug-bounty" hreflang="en">Bug Bounty</a>
        
    

            <p dir="ltr">Security teams deal with an overwhelming volume of reports, alerts, and vulnerability data—but without the right prioritization, it's easy to waste time on low-impact issues while critical risks go unnoticed. Static benchmarks don't adapt to real-world threats, and manual analysis is too slow to keep up with the evolving attack landscape.</p><p dir="ltr"><em>What if your security program could self-optimize: analyze trends, identify weak points, and proactively propose actionable steps to strengthen defenses?</em></p>
      ]]></description>
  <pubDate>Thu, 06 Feb 2025 20:17:15 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">5474 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>An Emerging Playbook for AI Red Teaming With HackerOne</title>
  <link>https://www.hackerone.com/blog/emerging-playbook-ai-red-teaming-hackerone</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">An Emerging Playbook for AI Red Teaming With HackerOne</span>
    



    
        Alex Rice
        
            Co-founder, CTO, CISO
      
    


    



    
        Dane Sherrets
        
            Chief Executive Officer
      
    


    



    
        Michiel Prins
        
            Co-founder &amp; Senior Director, Product Management
      
    


<span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>h1_admin</span></span>
<span class="field field--name-created field--type-created field--label-hidden">Tue, 11/07/2023 - 12:55
</span>

            
  
      
  
    Image
                



          

  

      
            April 1st, 2024

      
            <p>To ensure that AI is more secure and trustworthy, the EO calls on companies who develop AI and other companies in critical infrastructure that use AI to rely on “red-teaming”: testing to find flaws and vulnerabilities. The EO also requires broad disclosures of some of these red-team test results.</p><p>Testing AI systems isn’t necessarily new. Back in 2021, HackerOne organized a <a href="https://blog.twitter.com/engineering/en_us/topics/insights/2021/algorithmic-bias-bounty-challenge" target="_blank">public algorithmic bias review</a> with Twitter as part of the AI Village at DEF CON 29. The review encouraged members of the AI and security communities to identify bias in Twitter’s image-cropping algorithms. The <a href="https://blog.twitter.com/engineering/en_us/topics/insights/2021/learnings-from-the-first-algorithmic-bias-bounty-challenge" target="_blank">results of the engagement</a> brought to light various confirmed biases, informing improvements to make the algorithms more equitable.</p><p>In this blog post, we'll delve into the emerging playbook developed by HackerOne, focusing on the collaboration between ethical hackers and AI safety to fortify these systems. Bug bounty programs have proven effective at finding security vulnerabilities, but AI safety requires a new approach. According to recent findings published in the <a href="https://www.hackerone.com/reports/7th-annual-hacker-powered-security-report">7th Annual Hacker Powered Security Report</a>, 55% of hackers say that GenAI tools themselves will become a major target for them in the coming years, and 61% said they plan to use and develop hacking tools using GenAI to find more vulnerabilities.&nbsp;</p><blockquote><p dir="ltr"><em>“Every properly designed AI application has a unique safety threat model and should implement some safety parameters or guard rails to protect against adverse outcomes. The protections you care most about are going to vary based on the use case for the application and the intended audience. But how easily are those guard rails bypassed? That is what you find out with AI red teaming.”</em><br>— Dane Sherrets, Senior Solutions Architect, HackerOne</p></blockquote><h2>HackerOne's Approach to AI Red Teaming</h2><p>HackerOne partners with leading technology firms to evaluate their AI deployments for safety issues. The ethical hackers selected for our early AI Red Teaming exceeded all expectations. Drawing from these experiences, we're eager to share the insights gleaned, which have shaped our evolving playbook for AI safety red teaming.</p><p>Our approach builds upon the powerful bug bounty model, which HackerOne has successfully offered for over a decade, but with several modifications necessary for optimal AI Safety engagement.</p><ul><li><strong>Team Composition:</strong> A meticulously selected and, more importantly, diverse team is the backbone of an effective assessment. Emphasizing diversity in background, experience, and skill sets is pivotal for ensuring a safe AI. A blend of curiosity-driven thinkers, individuals with varied experiences, and those skilled in production LLM prompt behavior has yielded the best results.</li><li><strong>Collaboration and&nbsp;Size</strong>: Collaboration among AI Red Teaming members holds unparalleled significance, often exceeding that of traditional security testing. A team size ranging from 15-25 testers has been found to strike the right balance for effective engagements, bringing in diverse and global perspectives.</li><li><strong>Duration:</strong> Because AI technology is evolving so quickly, we’ve found that engagements between 15 and 60 days work best to assess specific aspects of AI Safety. However, in at least a handful of cases, a continuous engagement without a defined end date was adopted. This method of continuous AI red teaming pairs well with an existing bug bounty program.</li><li><strong>Context and&nbsp;Scope:</strong> Unlike traditional security testing, AI Red Teamers cannot approach a model blindly. Establishing both broad context and specific scope in collaboration with customers is crucial to determining the AI's purpose, deployment environment, existing safety features, and limitations.</li><li><strong>Private vs. Public:</strong> While most AI Red Teams operate in private due to the sensitivity of safety issues, there are instances where public engagement, such as <a href="https://blog.twitter.com/engineering/en_us/topics/insights/2021/algorithmic-bias-bounty-challenge" target="_blank">Twitter's algorithmic bias bounty challenge</a>, has yielded significant success.</li><li><strong>Incentive Model</strong>: Tailoring the incentive model is a critical aspect of the AI safety playbook. A hybrid economic model that includes both fixed-fee participation rewards in conjunction with rewards for achieving specific safety outcomes (akin to bounties) has proven most effective.</li><li><strong>Empathy and&nbsp;Consent: </strong>As many safety considerations may involve encountering harmful and offensive content, it is important to seek explicit participation consent from adults (18+ years of age), offer regular support for mental health, and encourage breaks between assessments.</li></ul><blockquote><p dir="ltr"><em>“It’s important to underscore that different AI models or deployments will have drastically different threat models. An AI text-to-image generator deployed on a social media network will have a different threat model than an AI chabot in a medical context. Early on in these conversations we define what the threat model is based on the use case, the regulatory environment, architecture, and other factors.”</em><br>— Dane Sherrets, Senior Solutions Architect, HackerOne</p></blockquote><p>In the HackerOne community, over 750 active hackers specialize in prompt hacking and other AI security and safety testing. To date, 90+ of those hackers have participated in HackerOne's AI Red Teaming engagements. In a single recent engagement, a team of 18 quickly identified 26 valid findings within the initial 24 hours and accumulated over 100 valid findings in the two-week engagement. In one notable example, one of the challenges put forth to the team was bypassing significant protections built to prevent the generation of images containing a Swastika. A particularly creative hacker on the AI Red Team was able to swiftly bypass these protections, and thanks to their findings, the model is now far more resilient against this type of abuse.</p><p>As AI continues to shape our future, the ethical hacker community, in collaboration with platforms like HackerOne, is committed to ensuring its safe integration. Our AI Red Teams stand ready to assist enterprises in navigating the complexities of deploying AI models responsibly, ensuring that their potential for positive impact is maximized while guarding against unintended consequences.</p><blockquote><p dir="ltr"><em>“In my opinion, the best way to secure AI is also through the use of crowdsourcing. By engaging hackers through AI red teaming engagements, I believe we can obtain a better understanding of the rapidly changing nature of AI security and AI Safety. This will result in reduced risk in implementing these exciting new technologies and allow us to capitalize on all of the benefits.”</em><br>—&nbsp;Josh Donlan, Senior Solutions Engineer, HackerOne</p></blockquote><p>By using the expertise of ethical hackers and adapting the bug bounty model to address AI safety, HackerOne's playbook is a proactive approach to fortifying AI while mitigating potential risks. For technology and security leaders venturing into AI integration, we look forward to partnering with you to explore how HackerOne and ethical hackers can contribute to your AI safety journey. To learn more about how to implement AI Red Teaming for your organization, <a href="https://www.hackerone.com/resources/one-pager/ai-red-teaming-solution-brief">download the AI Red Teaming solution brief</a> or <a href="https://www.hackerone.com/contact">contact our experts at HackerOne.</a></p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/news-updates" hreflang="en">News &amp; Updates</a>
                    
    

            
            <a href="https://www.hackerone.com/blog/topic/ai-safety-security" hreflang="en">AI Safety &amp; Security</a>
        
    

            <p>As AI is adopted by every industry and becomes an integral part of enterprise solutions, ensuring its safety and security is critical. In fact, the Biden Administration recently released an Executive Order (EO) that aims to shape the safe, secure, and trustworthy development of AI. This follows action taken in California and by the Leaders of the Group of Seven (“G7”) to address AI.&nbsp;</p>
      ]]></description>
  <pubDate>Tue, 07 Nov 2023 18:55:16 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">5282 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>Free Burp Suite Professional License For Hackers</title>
  <link>https://www.hackerone.com/blog/free-burp-suite-professional-license-hackers</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">Free Burp Suite Professional License For Hackers</span>
    



    
        H1 Team
        
    


<span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>h1_admin</span></span>
<span class="field field--name-created field--type-created field--label-hidden">Sun, 07/23/2017 - 22:25
</span>

            
  
      
  
    Image
                



          

  

      
            January 1st, 2024

      
            <p><b id="docs-internal-guid-be2ddbc0-72a0-fe56-861b-b807f200bff4"><strong>Did you know we’ve teamed up with our friends at </strong></b><a href="https://portswigger.net/"><b id="docs-internal-guid-be2ddbc0-72a0-fe56-861b-b807f200bff4"><strong>PortSwigger</strong></b></a><b id="docs-internal-guid-be2ddbc0-72a0-fe56-861b-b807f200bff4"><strong> to offer free 90-day licenses for Burp Suite Professional?</strong></b></p><p>&nbsp;</p><p><b id="docs-internal-guid-be2ddbc0-72a0-fe56-861b-b807f200bff4"><strong>Burp Suite is the premier offensive hacking solution, and &nbsp;when new hackers reach at least a 500 reputation on HackerOne and have a positive signal, they are eligible for 3-months free of </strong></b><a href="https://portswigger.net/burp/"><b id="docs-internal-guid-be2ddbc0-72a0-fe56-861b-b807f200bff4"><strong>Burp Suite Professional</strong></b></a><b id="docs-internal-guid-be2ddbc0-72a0-fe56-861b-b807f200bff4"><strong>.</strong></b></p><p>&nbsp;</p><p><b id="docs-internal-guid-be2ddbc0-72a0-fe56-861b-b807f200bff4"><strong>With Burp Suite, you can scan for vulnerabilities, intercept browser traffic, automate custom attacks, and more.&nbsp;</strong></b></p><p><b id="docs-internal-guid-be2ddbc0-72a0-fe56-861b-b807f200bff4"><strong>It’s clear that hackers love Burp and HackerOne:</strong></b></p><blockquote class="blockquote"><p><b id="docs-internal-guid-be2ddbc0-72a0-fe56-861b-b807f200bff4"><strong>“Burp Suite is pretty much all I use.” - Mark Litchfield&nbsp;</strong></b></p></blockquote><blockquote class="blockquote"><p><b id="docs-internal-guid-be2ddbc0-72a0-fe56-861b-b807f200bff4"><strong>“It’s the best tool out there, simply put. I use it all the time.” - Arne Swinnen&nbsp;</strong></b></p></blockquote><blockquote class="blockquote"><p><b id="docs-internal-guid-be2ddbc0-72a0-fe56-861b-b807f200bff4"><strong>“To be effective as a bug hunter, you need the right tools to optimize and backup your vulnerability research. Using Burp Suite means contributing to a quality approach, from research to reporting of your finds on HackerOne.” - Baptiste Moine</strong></b></p></blockquote><blockquote class="blockquote"><p><b id="docs-internal-guid-be2ddbc0-72a0-fe56-861b-b807f200bff4"><strong>“I have reported many vulnerabilities on HackerOne, most of them were found with the help of Burp Suite.” - Shawar Khan</strong></b></p></blockquote><blockquote class="blockquote"><p><b id="docs-internal-guid-be2ddbc0-72a0-fe56-861b-b807f200bff4"><strong>“Burp Suite has helped me to find many bugs. The Proxy and Repeater are key features and I really like the new Collaborator Client the DNS resolution is awesome! Definitely, an important tool when doing Bug bounty programs at HackerOne platform.” - Francisco Correa</strong></b></p></blockquote><p><b id="docs-internal-guid-be2ddbc0-72a0-fe56-861b-b807f200bff4"><strong>You can </strong></b><a href="https://www.hackerone.com/hackers/burp-suite-partnership"><b id="docs-internal-guid-be2ddbc0-72a0-fe56-861b-b807f200bff4"><strong>check out all the details including an FAQ</strong></b></a><b id="docs-internal-guid-be2ddbc0-72a0-fe56-861b-b807f200bff4"><strong>. &nbsp;&nbsp;&nbsp;</strong></b></p><p><b id="docs-internal-guid-be2ddbc0-72a0-fe56-861b-b807f200bff4"><strong>Happy hacking!</strong></b></p><p class="text-align-center" dir="ltr"><span data-reactroot>&nbsp;</span></p><hr><p>&nbsp;</p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/community" hreflang="en">Researcher Community</a>, 
                                                                                <a href="https://www.hackerone.com/blog/news-updates" hreflang="en">News &amp; Updates</a>
                    
    ]]></description>
  <pubDate>Mon, 24 Jul 2017 03:25:46 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">4672 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>Unlocking Trust in AI: The Ethical Hacker's Approach to AI Red Teaming</title>
  <link>https://www.hackerone.com/blog/unlocking-trust-ai-ethical-hackers-approach-ai-red-teaming</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">Unlocking Trust in AI: The Ethical Hacker's Approach to AI Red Teaming</span>
    



    
        Ilona Cohen
        
            Chief Legal and Policy Officer
      
    


<span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>h1_admin</span></span>
<span class="field field--name-created field--type-created field--label-hidden">Tue, 12/19/2023 - 08:10
</span>

            
  
      
  
    Image
                



          

  

      
            December 19th, 2023

      
            <h2>Regulatory Landscape and Business Imperatives</h2><p>Testing AI systems for alignment with security, safety, trustworthiness, and fairness is more than just a best practice — it is becoming a regulatory and business imperative. This practice — known as <a href="https://cset.georgetown.edu/article/what-does-ai-red-teaming-actually-mean/" target="_blank">AI red teaming</a>&nbsp;— helps organizations lay the foundation for trust in AI now to help avoid security and alignment failures in the future that may result in liability, reputational damage, or harm to users.&nbsp;</p><p>Most recently, the European Union <a href="https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai" target="_blank">reached agreement</a> on the AI Act, which sets several requirements for trust and security for AI. For some higher-risk AI systems, this includes adversarial testing, assessing and mitigating risks, cyber incident reporting, and other security safeguards.</p><p>The EU’s AI Act comes on the heels of U.S. federal guidance, such as the recent <a href="http://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence" target="_blank">Executive Order</a> on safe and trustworthy AI, as well as <a href="https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai" target="_blank">Federal Trade Commission</a> (FTC) guidance.&nbsp; These frameworks identify AI red teaming and ongoing testing as key safeguards to help ensure security and alignment. Proposed state regulations, such as those by the California Privacy Protection Agency, further <a href="https://cppa.ca.gov/meetings/materials/20231208_item2_draft.pdf" target="_blank">emphasize</a> the expectation that automated decision-making systems will be evaluated for validity, reliability, and fairness. In addition, the Group of Seven (G7) leaders issued <a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/g7-leaders-statement-on-the-hiroshima-ai-process/" target="_blank">statements</a> supporting an <a href="https://www.mofa.go.jp/ecm/ec/page5e_000076.html" target="_blank">international code of conduct</a> for organizations developing advanced AI systems that emphasized “diverse internal and independent external testing measures.”&nbsp;</p><p>At the heart of these government actions is a view that testing AI systems will better protect consumers’ privacy and reduce the risk of bias. At the same time, many private sector organizations recognize the importance of in-house testing to ensure their AI systems align with ethical norms and regulatory requirements. This approach allows organizations to fortify their systems against potential threats and align with regulatory guidelines. Private companies also utilize external AI red teaming services such as those offered by HackerOne to complement their in-house risk management efforts. This dual approach, combining internal expertise with external collaboration, showcases a commitment to fostering secure, trustworthy, and ethically aligned AI systems in the private sector.</p><p>As regulatory requirements and business imperatives surrounding AI testing become more prevalent, organizations must seamlessly integrate AI red teaming and alignment testing into their risk management and software development practices. This strategic integration is crucial for fostering a culture of responsible AI development and ensuring that AI technologies meet security and ethical expectations.</p><h2>Strengthening AI Security and Reducing Bias with HackerOne</h2><p>Organizations deploying AI should consider leveraging the hacker community to help secure and test AI systems for trustworthiness. Our <a href="https://www.hackerone.com/thought-leadership/ai-safety-red-teaming">approach&nbsp;to AI Red Teaming</a> builds upon the powerful bug bounty model, optimized for AI safety engagement.</p><p>HackerOne’s bug bounty programs offer a cost-effective approach to strengthening the security of AI systems, identifying and resolving vulnerabilities before they are exploited. Simultaneously, algorithmic bias reviews help address the critical need to reduce biases and undesirable outputs in AI algorithms, aligning technology with ethical principles and societal values.&nbsp;</p><p>In a rapidly evolving technological landscape, HackerOne is a steadfast partner for organizations committed to securing and aligning their AI systems with ethical norms. Our AI red teaming services not only provide powerful testing mechanisms but also empower organizations to build trust in their AI deployments. As the demand for secure and ethical AI grows, HackerOne remains dedicated to facilitating a future where technology enhances our lives while upholding security and trust. To learn more about how to strengthen your AI security with AI Red Teaming, <a href="https://www.hackerone.com/contact">contact the team at HackerOne.</a></p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/news-updates" hreflang="en">News &amp; Updates</a>, 
                                                                                <a href="https://www.hackerone.com/blog/public-policy" hreflang="en">Public Policy</a>
                    
    

            
            <a href="https://www.hackerone.com/blog/topic/ai-safety-security" hreflang="en">AI Safety &amp; Security</a>
        
    

            <p><span><span><span><span><span><span>Artificial Intelligence (AI) is poised to usher in transformative changes that can supercharge economic productivity and improve daily life. However, this immense potential comes with responsibilities to ensure that AI systems are not only secure but also align with legal requirements, expectations for trustworthiness, and prevention of bias and discrimination. That is why HackerOne offers </span></span></span></span></span></span><a href="https://www.hackerone.com/thought-leadership/ai-safety-red-teaming"><span><span><span><span><span><span><span><span>robust AI Red Teaming services</span></span></span></span></span></span></span></span></a><span><span><span><span><span><span> that help organizations bolster the security, fairness, and reliability of their AI deployments.</span></span></span></span></span></span></p>
      ]]></description>
  <pubDate>Tue, 19 Dec 2023 14:10:21 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">5298 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>Responsible AI at HackerOne</title>
  <link>https://www.hackerone.com/blog/responsible-ai-hackerone</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">Responsible AI at HackerOne</span>
    



    
        Jobert Abma
        
            Co-founder &amp; Engineering
      
    


<span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>h1_admin</span></span>
<span class="field field--name-created field--type-created field--label-hidden">Wed, 10/25/2023 - 09:46
</span>

            
  
      
  
    Image
                



          

  

      
            October 25th, 2023

      
            <p>HackerOne's AI can already be used to:</p><h2><strong>1. Help automate vulnerability detection, using Nuclei, for example</strong></h2><h2><br><strong>2. Provide a summary of a hacker's history across many vulnerabilities</strong></h2><h2><br><strong>3. </strong>Provide remediation advice, including suggested code fixes</h2><h3><br><strong>The Power of Large Language Models (LLMs)</strong></h3><p>Language is at the heart of hacking. Hackers communicate security vulnerabilities as text. Collaboration between customers, hackers, and HackerOne security analysts is text for the most part as well. Before AI, HackerOne used two parallel strategies to understand vulnerability data: feature extraction (machine learning) and creating structure where there wasn’t any (normalization). Both of these helped us build rich reporting, analytics, and intelligence.</p><p>And now Large Language Models (LLMs) give us a powerful third strategy: leveraging fine-tuning, prompt engineering, and techniques such as Retrieval-Augmented Generation (RAG) to simplify many typical machine learning tasks. Text generation, text summarization, feature and text extraction, and even text classification have become table stakes. LLMs enable us and everyone on HackerOne to increase the efficiency of existing processes significantly, and in the future it will scale the detection of security vulnerabilities, support better prioritization, and accomplish faster remediation.</p><h3><strong>HackerOne’s Approach and Principles for Responsible AI</strong></h3><p>We've been around groundbreaking technology long enough to know that there are always unintended consequences, and that everything can be hacked. We have carefully reviewed these risks in consultation with numerous customers, hackers, and other experts. Today we're ready to share those principles for further discussion.</p><h2><strong>Foundation in Large Language Models (LLMs)</strong></h2><p>At the core of our AI technology lies a foundation of state-of-the-art LLMs. These powerful models serve as the basis for how our AI interacts with the world. What sets us apart is the proprietary insight we build on top of these models, trained from real-world vulnerability information, and tailored to the specific use cases people on HackerOne engage in. By combining the strengths of foundation LLMs with our specialized knowledge and vulnerability information, we create a potent tool for discovering, triaging, validating, and remediating vulnerabilities at scale.</p><h2><strong>Data Security and Confidentiality</strong></h2><p>Security and confidentiality are embedded in our approach. We understand that customer and hacker vulnerability information is highly sensitive and must remain under their control. We do not leverage any multi-tenant or public LLMs. At no point do AI prompts or private vulnerability information leave HackerOne infrastructure or undergo transmission to any third parties.</p><h2><strong>Tailored Interactions</strong></h2><p>One size does not fit all in the world of security. We address the risk of unintended data leakage by ensuring that our AI models are tailored specifically to each customer. We do not use your private data to train our models. Rather, our approach lets you make your private data available to the model at inference time with techniques such as Retrieval-Augmented Generation (RAG). This ensures your data remains secure, confidential, and private to you and your interactions only.</p><h2><strong>Human Agency</strong></h2><p>Finally, we have instilled a governing principle requiring the deployment of AI with strong human-in-the-loop oversight. We believe in human-AI collaboration, where technology serves as a copilot, enhancing the capabilities of security analysts and hackers. Technology is a tool, not a replacement for the invaluable human expertise.</p><p>And, as with all technology we develop, AI is within the scope for <a href="https://hackerone.com/security">our bug bounty program</a>.</p><h3><strong>What’s Next</strong></h3><p>Far too often throughout history, emerging technologies are developed with trust, safety, and security as afterthoughts. We are changing the status quo. We are committed to enhancing security through safe, secure, and confidential AI, while tightly coupled with strong human oversight. Our goal is to provide people with the tools they need to achieve security outcomes beyond what has been possible today—without compromise.</p><p>We have started rolling out our models to customers and security analysts already. Over the next few months, we will expand this to everyone, including hackers. We're beyond excited to start sharing with you more details on the specific use cases we're focused on enhancing with AI.</p><p>Welcome to the future of hacking!</p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/news-updates" hreflang="en">News &amp; Updates</a>
                    
    

            
            <a href="https://www.hackerone.com/blog/topic/ai-safety-security" hreflang="en">AI Safety &amp; Security</a>
        
    

            <p><em>By&nbsp;Jobert Abma, Co-founder and Principal Software Engineer and&nbsp;Alex Rice, Co-founder and CTO</em></p>

<p><span><span><span><span><span><span>Generative Artificial Intelligence (GenAI) is ushering in a new era of how humans leverage technology. At HackerOne, we are combining human intelligence with artificial intelligence at scale to improve the efficiency of people and unlock entirely new capabilities. This blog will go over our approach and our principles to ensure model safety. </span></span></span></span></span></span></p>
      ]]></description>
  <pubDate>Wed, 25 Oct 2023 14:46:24 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">5280 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>Ethical Hacking: Unveiling the Power of Hacking for Good in Cybersecurity</title>
  <link>https://www.hackerone.com/blog/ethical-hacking-unveiling-power-hacking-good-cybersecurity</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">Ethical Hacking: Unveiling the Power of Hacking for Good in Cybersecurity</span>
    



    
        Marten Mickos
        
            Chief Executive Officer
      
    


<span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>h1_admin</span></span>
<span class="field field--name-created field--type-created field--label-hidden">Mon, 09/11/2023 - 08:04
</span>

            
  
      
  
    Image
                



          

  

      
            September 12th, 2023

      
            <p>In an era where data breaches and cyberattacks dominate headlines, a new and unconventional approach to cybersecurity has emerged, challenging traditional notions of protection. Ethical hacking, also known as hacking for good, is rapidly gaining prominence as organizations seek innovative strategies to safeguard their <a href="https://aws.amazon.com/executive-insights/">digital assets</a>. This approach involves companies hiring skilled hackers to intentionally breach their systems, identify vulnerabilities, and fortify defenses.</p><p>On a recent episode of Amazon’s “Conversations With Leaders,” <a href="https://www.linkedin.com/in/martenmickos/">Marten Mickos</a>, CEO of <a href="https://www.linkedin.com/company/hackerone/">HackerOne</a>, sat down to discuss the evolving landscape of cybersecurity, the challenges organizations face, and the innovative strategies employed to build robust security cultures.&nbsp;</p><p>Marten believes the essence of hacking for good lies in harnessing external hackers to identify vulnerabilities in web systems and mobile apps, enabling companies to rectify these issues before malicious actors exploit them. This “good force against bad force” approach promotes a proactive stance in enhancing security.</p><p>Ethical hacking represents a paradigm shift in cybersecurity philosophy. Organizations embrace proactive and collaborative tactics instead of relying solely on reactive measures to counteract threats. By welcoming skilled hackers into their ranks, they aim to detect weaknesses before malicious actors can exploit them.</p><p>Ethical hackers, often called “white hat”, operate with integrity and a robust code of conduct. Their mission is to expose security vulnerabilities and potential entry points within an organization’s digital infrastructure. Unlike malicious hackers, ethical hackers use their skills for constructive purposes, ultimately enhancing the security posture of the organizations they engage with.</p><p>Challenges are associated with hiring and retaining skilled security professionals in this industry. According to Marten, the solution is to create an environment where employees find meaning, autonomy, and opportunities for growth. A culture that nurtures career development and offers purposeful work can attract and retain top talent.</p><h2>The Hacker Community: A Vast Pool of Expertise</h2><p>A critical element that sets ethical hacking apart is its emphasis on collaboration. Ethical hackers often form communities that share knowledge, techniques, and best practices. These communities foster a supportive environment that encourages continuous learning and skill development. Organizations benefit not only from individual ethical hackers’ expertise but also from the collective knowledge of the broader community.</p><p>Companies like HackerOne have capitalized on this collaborative model, acting as intermediaries between organizations and ethical hackers. Organizations can post bug bounties through their platform, rewarding hackers who successfully identify vulnerabilities. This approach incentivizes hackers to participate in uncovering weaknesses, creating a win-win scenario for both parties.</p><p>With many potential security measures available, organizations need help prioritizing their actions effectively. Marten recommends adopting a risk-based approach focusing on essential actions aligned with business objectives.</p><h2>Fostering a Positive Security Culture</h2><p>While ethical hacking might sound counterintuitive, its value is increasingly evident. Data breaches and cyberattacks can result in significant financial losses, reputational damage, and legal ramifications. By investing in ethical hacking, organizations take proactive steps to prevent these scenarios. Identifying vulnerabilities before they are exploited can save companies millions of dollars in recovery costs and potential fines.</p><p>Marten draws parallels between cybersecurity and the airline industry’s safety practices. There is an emphasis on fostering a blameless culture, where mistakes are treated as learning opportunities rather than causes for retribution. This promotes open communication and rapid issue resolution.</p><p>Marten believes that the need to transform security from a roadblock to an enabler of business growth is critical for hacking for a good approach to be successful. By promoting a positive view of security, organizations can encourage employees to participate in security initiatives actively. CEOs should set the tone by highlighting security’s role in enabling business success.</p><p>Cybersecurity’s asymmetric nature demands a different approach than the standard <a href="https://aws.amazon.com/executive-insights/podcast/">business practices</a>&nbsp;used in most organizations. Collaboration with external hackers allows organizations to tap into an immense pool of expertise that can help identify vulnerabilities quickly. This method provides flexibility and rapid access to diverse skills, ensuring a well-rounded security posture.</p><h2>A Future of Enhanced Cybersecurity</h2><p>As the hacking for good industry gains momentum, it reshapes how organizations approach cybersecurity. The emphasis on collaboration, transparency, and a proactive defense departs from the traditional reactive model. Ethical hacking is a testament to the power of harnessing skilled individuals for the greater good — using their expertise to strengthen digital fortifications, safeguard sensitive data, and propel the cybersecurity industry into a new era of resilience.</p><p>In an increasingly interconnected world, ethical hackers are emerging as unsung heroes, leveraging their talents to prevent data breaches and protect the digital foundations of modern society. As organizations continue to navigate the complex realm of cybersecurity, ethical hacking stands as a beacon of innovation and a testament to the remarkable potential of technology when used for positive and transformative purposes.</p><p>To hear the full “Conversations with Leaders” episode, <a href="https://open.spotify.com/episode/51XdheXuj0pepaWgvOZuR9">click here</a>.</p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/news-updates" hreflang="en">News &amp; Updates</a>
                    
    

            
            <a href="https://www.hackerone.com/blog/topic/hackerones-former-ceo" hreflang="en">From HackerOne's Former CEO</a>
        
    

            <p><em>Originally published on the <a href="https://aws.amazon.com/executive-insights/podcast/" target="_blank">Amazon Web Services Conversations With Leaders podcast blog.</a></em></p>
      ]]></description>
  <pubDate>Mon, 11 Sep 2023 13:04:22 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">5268 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>The Hacker Perspective on Generative AI and Cybersecurity</title>
  <link>https://www.hackerone.com/blog/hacker-perspective-generative-ai-and-cybersecurity</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">The Hacker Perspective on Generative AI and Cybersecurity</span>
    



    
        Michiel Prins
        
            Co-founder &amp; Senior Director, Product Management
      
    


<span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>h1_admin</span></span>
<span class="field field--name-created field--type-created field--label-hidden">Thu, 09/07/2023 - 09:07
</span>

            
  
      
  
    Image
                



          

  

      
            September 7th, 2023

      
            <h2>Future Risk Predictions</h2><p>In a recent presentation at Black Hat 2023, HackerOne Founder, Michiel Prins, and hacker, Joseph Thacker aka <a href="https://hackerone.com/rez0" target="_blank">@rez0</a>, discussed some of the most impactful risk predictions related to Generative AI and LLMs, including:</p><ul><li>Increased risk of preventable breaches&nbsp;</li><li>Loss of revenue and brand reputation</li><li>Increased cost of regulatory compliance</li><li>Diminished competitiveness</li><li>Reduced ROI on development investments</li></ul><p>Hacker Herman Satkauskas also points out that, while AI has lowered the barrier to entry for ethical hackers, “malicious attackers will also realize they have the tools at their disposal to conduct cybercrime.”</p><h2>The Top Generative AI and LLM Risks According to Hackers</h2><p>According to hacker Gavin Klondike, “We’ve almost forgotten the last 30 years of cybersecurity lessons in developing some of this software.” The haste of GAI adoption has clouded many organizations’ judgment when it comes to the security of artificial intelligence. Security researcher Katie Paxton-Fear aka <a href="https://hackerone.com/insiderphd" target="_blank">@InsiderPhD</a>, believes, “this is a great opportunity to take a step back and bake some security in as this is developing and not bolting on security 10 years later.”</p><h3><br>Prompt Injections</h3><p>The <a href="https://www.hackerone.com/vulnerability-management/owasp-llm-vulnerabilities">OWASP Top 10 for LLM</a> defines prompt injection as a vulnerability during which an attacker manipulates the operation of a trusted LLM through crafted inputs, either directly or indirectly.&nbsp;Paxton-Fear warns about prompt injection, saying:</p><blockquote><p><em>“As we see the technology mature and grow in complexity, there will be more ways to break it. We’re already seeing vulnerabilities specific to AI systems, such as prompt injection or getting the AI model to recall training data or poison the data. We need AI and human intelligence to overcome these security challenges.”</em></p></blockquote><p><a href="https://www.techrepublic.com/article/hackerone-how-artificial-intelligence-is-changing-cyber-threats-and-ethical-hacking/" target="_blank">Thacker uses this example</a> to help understand the power of prompt injection:</p><blockquote><p><em>“If an attacker uses prompt injection to take control of the context for the LLM function call, they can exfiltrate data by calling the web browser feature and moving the data that are exfiltrated to the attacker’s side. Or, an attacker could email a prompt injection payload to an LLM tasked with reading and replying to emails.”</em></p></blockquote><p>Ethical hacker, Roni Carta aka <a href="https://hackerone.com/arsene_lupin" target="_blank">@arsene_lupin</a>, points out that if developers are using ChatGPT to help install prompt packages on their computers, they can run into trouble when asking it to find libraries. Carta says, “ChatGPT hallucinates library names, which threat actors can then take advantage of by reverse-engineering the fake libraries.”</p><p>According to Thacker, “The jury is out on whether or not it’s solvable, but personally, I think it is.” He says the mitigation depends on the implementation and deployment of the prompt injection and, “of course, by testing.”</p><h3>Agent Access Control</h3><p>“LLMs are as good as their data,” says Thacker. “The most useful data is often private data.”</p><p>According to Thacker, this creates an extremely difficult problem in the form of agent access control. Access control issues are very common vulnerabilities found through the HackerOne platform every day. Where access control goes particularly wrong regarding AI agents is the mixing of data. Thacker says AI agents have a tendency to mix second-order data access with privileged actions, exposing the most sensitive information to potentially be exploited by bad actors.</p><h2>The Evolution of the Hacker in the Age of Generative AI</h2><p>Naturally, as new vulnerabilities emerge from the rapid adoption of Generative AI and LLMs, the role of the hacker is also evolving. During a panel featuring security experts from Zoom and Salesforce, hacker <a href="https://hackerone.com/tomanthony?type=user" target="_blank">Tom Anthony</a> predicted the change in how hackers approach processes with AI:</p><blockquote><p><em>“At a recent Live Hacking Event with Zoom, there were easter eggs for hackers to find — and the hacker who solved them used LLMs to crack it. Hackers are able to use AI to speed up their processes by, for example, rapidly extending the word lists when trying to brute force systems.”&nbsp;</em></p></blockquote><p>He also senses a distinct difference for hackers using automation, claiming AI will significantly uplevel the reading of source code. Anthony says, “Anywhere that companies are exposing source code, there will be systems reading, analyzing, and reporting in an automated fashion.”</p><p>Hacker Jonathan Bouman uses ChatGPT to help hack technologies he’s less confident with.&nbsp;</p><blockquote><p><em>“I can hack web applications but not break new coding languages, which was the challenge at one Live Hacking Event. I copied and pasted all the documentation provided (removing all references to the company), gave it all the structures, and asked it ‘Where would you start?’ It took a few prompts to ensure it wasn’t hallucinating, and it did provide a few low-level bugs. Because I was in a room with 50 ethical hackers, I was able to share my findings with a wider team, and we escalated two of those bugs into critical vulnerabilities. I couldn't have done it without ChatGPT, but I couldn’t have made the impact I did without the hacking community.”</em></p></blockquote><p>There are even new tools for the education of hacking LLMs — and therefore for identifying the vulnerabilities created by them. Anthony uses “<a href="https://gandalf.lakera.ai/" target="_blank">an online game for prompt injection</a> where you work through levels, tricking the GPT model to give you secrets. It’s all developing so quickly.”</p><h2>How AI Shows the Value of Bug Bounty</h2><p>It’s no secret that security leaders are faced with the challenging task of articulating the value of their security programs to stakeholders and board members. And one of the most tricky pieces of showcasing that value is comparing how much a bug bounty costs against how much that bug would cost in the hands of a cybercriminal.&nbsp;</p><p>Our hacker community is using AI to prove that value. According to Satkauskas,</p><blockquote><p><em>“I tried an experiment where I load up the security finding to ChatGPT and ask it ‘How much would this vulnerability cost a company if it was in the wrong hands?’ ChatGPT can provide a ballpark estimate, meaning it’s far easier to make a case for the impact of that finding in your report.”</em></p></blockquote><p>According to the 7th Annual Hacker Powered Security Report, the average bug bounty across industries is $1,000 and $3,700 for high and critical vulnerabilities. When you consider the potential financial impact of GenAI-facilitated loss data, you can start to estimate the real value of ethical hackers experiements in GenAI to secure your organization.</p><h2><br>Use the Power of Hackers for Secure Generative AI</h2><p>Even the most sophisticated security programs are unable to catch every vulnerability. HackerOne is committed to helping organizations secure their GAI and LLMs and to staying at the forefront of security trends and challenges. With HackerOne, organizations can:</p><ul><li>Secure the use of GenAI and LLMs with community-driven <a href="https://www.hackerone.com/thought-leadership/ai-safety-red-teaming">AI Red Teaming</a></li><li>Conduct continuous adversarial testing through <a href="https://www.hackerone.com/product/bug-bounty-platform">Bug Bounty</a></li><li>Perform targeted hacker-based testing with <a href="https://www.hackerone.com/product/challenge">Challenge</a></li><li>Assess an entire application with <a href="https://www.hackerone.com/product/pentest">Pentest</a> or <a href="https://www.hackerone.com/assessments/audit-security-posture-devops-hackerone-source-code-assessments">Code Security Audit</a></li></ul><p><a href="https://www.hackerone.com/contact">Contact us today</a> to learn more about how we can help take a secure approach to Generative AI.</p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/news-updates" hreflang="en">News &amp; Updates</a>
                    
    

            
            <a href="https://www.hackerone.com/blog/topic/ai-safety-security" hreflang="en">AI Safety &amp; Security</a>
        
    

            <p><span><span><span><span><span><span>Generative AI has undergone incredibly fast adoption, with fresh launches of the latest large language model (LLM) coming every day. As with any new technology, however, we often don’t understand the </span></span></span></span></span></span><a href="https://www.hackerone.com/thought-leadership/generative-ai-security-predictions"><span><span><span><span><span><span><span><span>risk implications</span></span></span></span></span></span></span></span></a><span><span><span><span><span><span> before rushing to build it into our applications.&nbsp;</span></span></span></span></span></span></p>

<p><span><span><span><span><span><span>Ethical hackers understand the ins and outs of the security issues inherent in Generative AI, and they’ve been exploring the common mistakes made by organizations rushing to leverage the technology. <span>The </span></span></span></span></span></span></span><a href="https://www.hackerone.com/reports/7th-annual-hacker-powered-security-report"><span><span><span><span><span><span><span><span><span>7th Annual Hacker Powered Security Report</span></span></span></span></span></span></span></span></span></a><span><span><span><span><span><span><span> reveals that 53% of ethical hackers are using GenAI in some way, and 66% are using or intend to use GenAI to write better reports.&nbsp;</span>Who better to learn from when it comes to preventing and managing risks than the hackers who know how to exploit them?</span></span></span></span></span></span></p>

<p><span><span><span><span><span><span>We’ve spoken with several experienced hackers in the space to get their perspectives on the most important considerations for Generative AI and cybersecurity.</span></span></span></span></span></span></p>
      ]]></description>
  <pubDate>Thu, 07 Sep 2023 14:07:25 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">5267 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>Company Update</title>
  <link>https://www.hackerone.com/blog/company-update</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">Company Update</span>
    



    
        Marten Mickos
        
            Chief Executive Officer
      
    


<span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>h1_admin</span></span>
<span class="field field--name-created field--type-created field--label-hidden">Tue, 08/01/2023 - 15:17
</span>

            
  
      
  
    Image
                



          

  

      
            August 2nd, 2023

      
            <p><em>HackerOne CEO, Marten Mickos,&nbsp;emailed the following note to employees on August 2, 2023.</em><br><br>H1 Team,<br><br>I have made the painful and necessary decision to undertake a restructuring and we will reduce the size of our team by up to approximately 12%. This comes as disappointing news, as we've all built strong connections with our fellow Hackeronies. These actions are necessary to be successful long-term. However, I understand how difficult this news is and the impact this will have on all team members, and I take responsibility for the changes we are sharing today.</p><p dir="ltr"><strong>How are we handling departures?&nbsp;</strong><br>If you are impacted, we aim to notify you as soon as we can while complying with each region's applicable rules and regulations. Impacted employees in the U.S. and Canada will receive a meeting invitation in the next 15 minutes.&nbsp;</p><p>It is expected that the reorganization will also affect employees in the U.K., the Netherlands, and other countries. After completing the relevant consultation proceedings, we will be able to inform all people involved with more detail. These processes will take longer.</p><p>We are offering severance packages to impacted employees that include cash compensation and non-cash benefits. Please monitor your email for country, role-specific information, and other details.&nbsp;</p><p><strong>What do we see in our business?</strong><br>HackerOne, like many tech companies, has been navigating the global economic situation and the resulting shifts in our market. Our strategic plan includes investment in new product families and expanding the capabilities of our platform to expand into the enterprise. A little over a year ago, I authorized the continued hiring of new employees to fulfill this strategy.&nbsp;</p><p>However, we did not anticipate the degree to which the overall economic situation is affecting us, with smaller companies running out of money and larger ones taking longer to make purchasing decisions. The new products we brought to market didn’t perform the way we wanted them to. Our bets on hiring and new products proved to be too big, and we must now restructure our teams to be successful in the future.&nbsp;</p><p>We’ve designed this reduction in force as a one-time event. We don’t claim to have perfect visibility into our future financial performance or the macroeconomic climate, but we unequivocally wanted to take a single action and move forward with confidence.</p><p><strong>How do we move forward?</strong><br>HackerOne remains a category leader. We are one of the most important contributors to cybersecurity worldwide. We plan to be better, stronger, and faster when an improved business climate ultimately emerges.&nbsp;</p><p>My goal is to make every Hackeronie proud, no matter when and for how long they worked at the company. We win as a team, so we also lose as a team. This decision marks a loss. Those remaining in the company are tasked with creating a bigger win over time. I remain as committed and confident as ever.</p><p>You will have many questions for me and the leadership team now. An honest question will receive an honest answer. I asked all department leaders to communicate with your team about the impact on workload and meet to discuss next steps.&nbsp;</p><p>The decision is painful to me because we hired each one of you to pursue our strategy and build an outstanding business. We have a clear mission and a strong company culture. Each one of you is a devoted team member, investing your time and mind into the success of HackerOne. Now as we must say goodbye to some of you, I want to express my deep appreciation of your unwavering commitment to the company.</p><p>Marten</p><p><em>Any inquiries, please send to&nbsp;</em><a href="mailto:press@hackerone.com" target="_blank"><em>press@hackerone.com</em></a><em>.</em></p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/news-updates" hreflang="en">News &amp; Updates</a>
                    
    ]]></description>
  <pubDate>Tue, 01 Aug 2023 20:17:32 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">5260 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>Generative AI and Security: HackerOne's Predictions</title>
  <link>https://www.hackerone.com/blog/generative-ai-and-security-hackerones-predictions</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">Generative AI and Security: HackerOne's Predictions</span>
    



    
        Michiel Prins
        
            Co-founder &amp; Senior Director, Product Management
      
    


<span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>h1_admin</span></span>
<span class="field field--name-created field--type-created field--label-hidden">Wed, 07/12/2023 - 11:00
</span>

            
  
      
  
    Image
                



          

  

      
            July 12th, 2023

      
            <h2><strong>Offensive AI Will Outpace Defensive AI</strong></h2><p>In the short term, and possibly indefinitely, we will see offensive or malicious AI applications outpace defensive ones that use AI for stronger security. This is not a new phenomenon for those familiar with the offense vs. defense cat-and-mouse game that defines cybersecurity. While GAI offers tremendous opportunities to advance defensive use cases, cybercrime rings and malicious attackers will not let this opportunity pass either and will level up their weaponry, potentially asymmetrically to defensive efforts, meaning there isn’t an equal match between the two.&nbsp;</p><p>It’s highly possible that the commoditization of GAI will mean the end of Cross-Site Scripting (XSS) and other current common vulnerabilities. Some of the <a href="https://www.hackerone.com/top-ten-vulnerabilities">top 10 most common vulnerabilities</a> — like XSS or SQL Injection — are still far too common, despite industry advancements in Static Application Security Testing (SAST), web browser protections, and secure development frameworks. GAI has the opportunity to finally deliver the change we all want to see in this area.</p><p>However, while advances in Generative AI may eradicate some vulnerability types, others will explode in effectiveness. Attacks like social engineering via deep fakes will be more convincing and fruitful than ever. GAI lowers the barrier to entry, and <a href="https://www.wired.com/story/ai-phishing-emails/">phishing is getting even more convincing</a>.&nbsp;</p><p>Have you ever received a text from a random number claiming to be your CEO, asking you to <a href="https://consumer.ftc.gov/consumer-alerts/2021/09/your-boss-isnt-emailing-you-about-gift-card">buy 500 gift cards</a>? While you’re unlikely to fall for that trick, how would it differ if that phone call came from your CEO’s phone number? It sounded exactly like them and even responded to your questions in real-time. Check out <a href="https://twitter.com/RachelTobac/status/1660432071003881474">this 60 Minutes segment with hacker, Rachel Tobac</a>, to see it unravel live.&nbsp;</p><p>The strategy of security through obscurity will also be impossible with the advance of GAI. HackerOne <a href="https://www.hackerone.com/resources/i/1458426-the-corporate-security-trap/0?">research</a> shows that 64% of security professionals claim their organization maintains a culture of security through obscurity. If your security strategy still depends on secrecy instead of transparency, you need to get ready for it to end. The seemingly magical ability of GAI to sift through enormous datasets and distill what truly matters, combined with advances in Open Source Intelligence (OSINT) and <a href="https://www.hackerone.com/resources/e-book/seven-hacker-recon-secrets">hacker reconnaissance</a>, will render security through obscurity obsolete.</p><h2><strong>Attack Surfaces Will Grow Exponentially</strong></h2><p>Our second prediction is that we will see an outsized explosion in new attack surfaces. Defenders have long followed the principle of attack surface reduction, a term coined by Microsoft, but the rapid commoditization of Generative AI is going to reverse some of our progress.&nbsp;</p><p><a href="https://a16z.com/2011/08/20/why-software-is-eating-the-world/">Software is eating the world</a>, Marc Andreessen famously wrote in 2011. He wasn’t wrong — code increases exponentially every year. Now it is increasingly (or even entirely) written with the help of Generative AI. The ability to generate code with GAI dramatically lowers the bar of who can be a software engineer, resulting in more and more code being shipped by people that do not fully comprehend the technical implications of the software they develop, let alone oversee the security implications.</p><p>Additionally, GAI requires vast amounts of data. It is no surprise that the models that continue to impress us with <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4389233">human levels of intelligence</a> happen to be the largest models out there. In a GAI-ubiquitous future, organizations and commercial businesses will hoard more and more data, beyond what we now think is possible. Therefore, the sheer scale and impact of data breaches will grow out of control. Attackers will be more motivated than ever to get their hands on data. The dark web price of data “per kilogram” will increase.&nbsp;</p><p>Attack surface growth doesn’t stop there: many businesses have rapidly implemented features and capabilities powered by generative AI in the past months. As with any emerging technology, developers may not be fully aware of the ways their implementation can be exploited or abused. Novel attacks against applications powered by GAI will emerge as a new threat that defenders have to worry about. A promising project in this area is the <a href="https://www.csoonline.com/article/575497/owasp-lists-10-most-critical-large-language-model-vulnerabilities.html">OWASP Top 10 for Large Language Models (LLMs)</a>. (LLMs are the technology fueling the breakthrough in Generative AI that we’re all witnessing right now.)</p><h2><strong>What Does Defense Look Like In A Future Dominated By Generative AI?</strong></h2><p>Even with the potential for increased risk, there is hope. Ethical hackers are ready to secure applications and workloads powered by Generative AI. Hackers are characterized by their curiosity and creativity; they are consistently at the forefront of emerging technologies, finding ways to make that technology do the impossible. As with any new technology, it is hard for most people, especially optimists, to appreciate the risks that may surface — and this is where hackers come in. Before GAI, the emerging technology trend was blockchain. Hackers found unthinkable ways to exploit the technology. GAI will be no different, with hackers quickly investigating the technology and looking to trigger unthinkable scenarios — all so you can develop stronger defenses.</p><p>There are three tangible ways in which HackerOne can help you prepare your defenses for a not-too-distant future where Generative AI is truly ubiquitous:</p><ul><li><a href="https://www.hackerone.com/product/bounty">HackerOne Bounty</a>: Continuous adversarial testing with the world’s largest hacker community will identify vulnerabilities of any kind in your attack surface, including potential flaws stemming from poor GAI implementation. If you already run a bug bounty program with us, contact your Customer Success Manager (CSM) to see if running a <a href="https://docs.hackerone.com/organizations/campaigns.html">campaign</a> focused on your GAI implementations can help deliver more secure products.</li><li><a href="https://www.hackerone.com/product/challenge">HackerOne Challenge</a>: Conduct scoped and time-bound adversarial testing with a curated group of expert hackers. A challenge is ideal for testing a pre-release product or feature that leverages generative AI for the first time.&nbsp;</li><li><a href="https://www.hackerone.com/resources/latest-news-insights/hackerone-security-advisory-services-solutions-brief">HackerOne Security Advisory Services</a>: Work with our Security Advisory team to understand how your threat model will evolve by bringing Generative AI into your attack surface, and ensure your HackerOne programs are firing on all cylinders to catch these flaws.</li></ul><p>Want to hear more? I’ll be speaking on this topic at Black Hat on Thursday, August 10 at Booth #2640, or request a meeting. Check out the Black Hat <a href="https://www.hackerone.com/events/black-hat-2023">event page</a> for details.&nbsp;</p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/news-updates" hreflang="en">News &amp; Updates</a>
                    
    

            
            <a href="https://www.hackerone.com/blog/topic/ai-safety-security" hreflang="en">AI Safety &amp; Security</a>
        
    

            <p><span><span><span><span><span><span>Generative Artificial Intelligence (GAI) is popping up in all manner of software every day. It's a trend we're seeing unfold right now, characterized by a firehose of daily announcements of new AI-powered products and capabilities. Many businesses, including HackerOne customers like </span></span></span></span></span></span><a href="https://newsroom.snap.com/say-hi-to-my-ai" target="_blank"><span><span><span><span><span><span><span><span>Snapchat</span></span></span></span></span></span></span></span></a><span><span><span><span><span><span>, </span></span></span></span></span></span><a href="https://www.instacart.com/company/updates/bringing-inspirational-ai-powered-search-to-the-instacart-app-with-ask-instacart/" target="_blank"><span><span><span><span><span><span><span><span>Instacart</span></span></span></span></span></span></span></span></a><span><span><span><span><span><span>, </span></span></span></span></span></span><a href="https://www.crowdstrike.com/blog/crowdstrike-introduces-charlotte-ai-to-deliver-generative-ai-powered-cybersecurity/" target="_blank"><span><span><span><span><span><span><span><span>CrowdStrike</span></span></span></span></span></span></span></span></a><span><span><span><span><span><span>, </span></span></span></span></span></span><a href="https://www.salesforce.com/news/press-releases/2023/03/07/einstein-generative-ai/" target="_blank"><span><span><span><span><span><span><span><span>Salesforce</span></span></span></span></span></span></span></span></a><span><span><span><span><span><span>, and </span></span></span></span></span></span><a href="https://blog.google/technology/ai/google-io-2023-keynote-sundar-pichai/" target="_blank"><span><span><span><span><span><span><span><span>many others</span></span></span></span></span></span></span></span></a><span><span><span><span><span><span>, have all announced AI-powered features and user experiences. GAI capabilities will soon be table stakes for any software company as their customers will simply expect it. Those who do not take advantage of this technological evolution will decline into irrelevancy and be replaced by better and more productive alternatives. For example, users will expect to just </span></span></span></span></span></span><a href="https://twitter.com/jobertabma/status/1654595035806203904" target="_blank"><span><span><span><span><span><span><span><span>talk directly to their reports and dashboards</span></span></span></span></span></span></span></span></a><span><span><span><span><span><span> instead of figuring out yet another query language.&nbsp;</span></span></span></span></span></span></p>

<p><span><span><span><span><span><span>A world where Generative AI is ubiquitous will soon be here. What does that mean for security? We have two main predictions.</span></span></span></span></span></span></p>
      ]]></description>
  <pubDate>Wed, 12 Jul 2023 16:00:00 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">5255 at https://www.hackerone.com</guid>
    </item>

  </channel>
</rss>
