<?xml version="1.0" encoding="utf-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0" xml:base="https://www.hackerone.com/">
  <channel>
    <title>Public Policy</title>
    <link>https://www.hackerone.com/</link>
    <description/>
    <language>en</language>
    
    <item>
  <title>The UK’s AI Cyber Security Code of Practice: What It Means for Your Business</title>
  <link>https://www.hackerone.com/blog/uks-ai-cyber-security-code-practice</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">The UK’s AI Cyber Security Code of Practice: What It Means for Your Business</span>
    



    
Vanessa Booth, Policy Analyst
Michael Woolslayer, Policy Counsel
      
    



            
  
      
  
                



          

  

      
            February 27th, 2025

      
            <p>The Code establishes baseline cybersecurity requirements across the AI lifecycle and is expected to inform changes to international standards through the European Telecommunications Standards Institute (ETSI). To assist organizations in applying its principles, the government has also released an&nbsp;<a href="https://assets.publishing.service.gov.uk/media/679cae441d14e76535afb630/Implementation_Guide_for_the_AI_Cyber_Security_Code_of_Practice.pdf">Implementation Guide</a>, which expands on specific security measures.&nbsp;</p><p>HackerOne offered input during the development of this Code, emphasizing the importance of independent security testing, AI red teaming, and vulnerability disclosure programs (VDPs).&nbsp;<a href="https://www.hackerone.com/sites/default/files/2024-09/UK%20Call%20for%20Views%20on%20the%20Cyber%20Security%20of%20AI%20Comments.pdf">HackerOne’s recommendations</a>, submitted during DSIT’s Call for Views on AI Cybersecurity, highlighted the need for external validation, proactive security testing, and structured vulnerability reporting mechanisms to improve AI security.&nbsp;</p><p><strong>Who is the Code for?</strong></p><p>The Code applies to developers, system operators, and data custodians involved in the creation, deployment, and management of AI systems. It sets out security measures covering&nbsp;<a href="https://www.gov.uk/government/publications/ai-cyber-security-code-of-practice/code-of-practice-for-the-cyber-security-of-ai#scope:~:text=secure%20design%2C%20secure%20development%2C%20secure%20deployment%2C%20secure%20maintenance%20and%20secure%20end%20of%20life.">five key phases</a>: secure design, secure development, secure deployment, secure maintenance, and secure end of life. AI vendors who solely sell models or components without direct involvement in their implementation are not directly in scope but remain subject to other relevant cybersecurity standards. 
&nbsp;</p><p><strong>How can organizations align with the Code?</strong></p><p>The Code&nbsp;<a href="https://www.gov.uk/government/publications/ai-cyber-security-code-of-practice/code-of-practice-for-the-cyber-security-of-ai#scope:~:text=to%20do%20something-,Structure%20of%20the%20voluntary%20Code%20of%20Practice,-Principle%201%3A%20Raise">introduces 13 principles</a> to safeguard AI from cyber threats, including data poisoning, adversarial attacks, and model exploitation. Organizations that choose to follow the Code need to integrate AI security into system design, assess risks throughout the AI lifecycle, and maintain transparency with end-users. Key provisions include:&nbsp;</p><ul><li>Ensuring AI security awareness among employees and stakeholders.</li><li>Implementing supply chain security measures to prevent vulnerabilities in AI models.</li><li>Conducting adversarial testing to proactively detect security weaknesses.</li><li>Providing timely security updates and clear communication to end-users.&nbsp;</li></ul><p><strong>How does the Code address Independent Security Testing and Disclosure for AI?</strong></p><p>A key focus of the Code is the requirement for independent security validation of AI systems. 
Developers&nbsp;<a href="https://www.gov.uk/government/publications/ai-cyber-security-code-of-practice/code-of-practice-for-the-cyber-security-of-ai#scope:~:text=2023%2C%20G7%202023%5D-,9.1,-Developers%20shall%20ensure">must ensure AI models</a> undergo security testing before deployment, and the Code stresses the importance of&nbsp;<a href="https://www.gov.uk/government/publications/ai-cyber-security-code-of-practice/code-of-practice-for-the-cyber-security-of-ai#scope:~:text=support%20from%20Developers.-,9.2.1,-For%20security%20testing">involving independent security testers</a> with expertise in AI-specific risks.</p><p>Additionally, the Code&nbsp;<a href="https://www.gov.uk/government/publications/ai-cyber-security-code-of-practice/code-of-practice-for-the-cyber-security-of-ai#scope:~:text=publicly%20available%20data.-,6.4,-Developers%20and%20System">mandates the creation and maintenance of a Vulnerability Disclosure Program (VDP)</a> for AI systems. This program is vital for enhancing transparency, allowing security flaws to be responsibly reported and mitigated.&nbsp;</p><p><a href="https://assets.publishing.service.gov.uk/media/679cae441d14e76535afb630/Implementation_Guide_for_the_AI_Cyber_Security_Code_of_Practice.pdf">The Implementation Guide</a> further clarifies these expectations, emphasizing proactive security practices such as red teaming and adversarial testing. These techniques are essential for detecting vulnerabilities before they can be exploited, and the Guide offers practical steps to integrate these evaluations into the AI lifecycle. By following both the Code and the Implementation Guide, organizations can ensure a comprehensive, proactive approach to AI security – focusing on external validation, transparency, and ongoing testing to safeguard systems at every stage.&nbsp;</p><p><strong>What’s the likely impact?</strong></p><p>The Code signals a shift toward stronger regulatory expectations for AI security. 
As cyber threats targeting AI continue to evolve, organizations that adopt these security principles will be better positioned to comply with future standards and regulations, protect their users, and build trust in AI technologies.&nbsp;</p><p>The UK government has&nbsp;<a href="https://www.gov.uk/government/publications/ai-cyber-security-code-of-practice/code-of-practice-for-the-cyber-security-of-ai#:~:text=The%20UK%20government%20plan%20to%20submit%20the%20Code%20and%20Implementation%20Guide%20in%20ETSI%20so%20that%20the%20future%20standard%20is%20accompanied%20by%20a%20guide.%20The%20government%20will%20update%20the%20content%20of%20the%20Code%20and%20Guide%20to%20mirror%20the%20future%20ETSI%20global%20standard%20and%20guide.%C2%A0%C2%A0">stated</a> its intention for this Code to serve as the foundation for future ETSI standards, ensuring a unified and internationally recognized approach to AI cybersecurity. The government also plans to update the Code and the Guide to mirror the future ETSI global standard, reinforcing the alignment with international best practices.&nbsp;</p><p><strong>How HackerOne can help:</strong></p><p>Organizations navigating AI security challenges need robust testing and vulnerability management solutions. HackerOne helps organizations align with the Code’s security requirements through:&nbsp;</p><ul><li>Independent AI security assessments that align with Principles 9.1 and 9.2.1.</li><li>Vulnerability Disclosure Programs (VDPs) to help meet Principle 6.4.</li><li>Red teaming and adversarial testing to identify weaknesses before they can be exploited as mentioned in the Implementation Guide, sections 9.2, 9.2.1, and 11.2.&nbsp;</li></ul><p><a href="https://www.hackerone.com/contact">Contact HackerOne to learn more about securing your AI systems.&nbsp;</a></p>
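<p dir="ltr">The VDP mandate above is a policy requirement rather than a technical specification, but one common machine-readable way to publish the reporting channel a VDP relies on is an RFC 9116 <code>security.txt</code> file. The sketch below is purely illustrative (the file contents and addresses are hypothetical, not from the Code) and simply checks the two fields RFC 9116 requires: at least one <code>Contact</code> and exactly one <code>Expires</code>.</p>

```python
# Minimal sketch: validate the fields of an RFC 9116 security.txt file,
# one common way to advertise the reporting channel a VDP relies on.
# The example file contents and addresses below are hypothetical.

def parse_security_txt(text):
    """Parse 'Field: value' lines, ignoring comments and blank lines."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.partition(":")
        fields.setdefault(name.strip(), []).append(value.strip())
    return fields

def is_minimally_valid(fields):
    """RFC 9116 requires at least one Contact and exactly one Expires."""
    return bool(fields.get("Contact")) and len(fields.get("Expires", [])) == 1

example = """\
# Hypothetical security.txt for an AI service
Contact: mailto:security@example.com
Expires: 2026-01-31T00:00:00Z
Policy: https://example.com/vulnerability-disclosure
"""

fields = parse_security_txt(example)
print(is_minimally_valid(fields))  # prints True
```

<p dir="ltr">A real VDP also needs the human side the Code emphasizes: a published policy, triage, and remediation timelines; the file only makes the intake channel discoverable.</p>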
      
            
                                                                                <a href="https://www.hackerone.com/blog/public-policy" hreflang="en">Public Policy</a>
                    
    

            
            <a href="https://www.hackerone.com/blog/topic/security-compliance" hreflang="en">Security Compliance</a>
        
            
            <a href="https://www.hackerone.com/blog/topic/best-practices" hreflang="en">Best Practices</a>
        
            
            <a href="https://www.hackerone.com/blog/topic/ai-safety-security" hreflang="en">AI Safety &amp; Security</a>
        
    

            <p>On January 31, 2025, the UK government published its&nbsp;<a href="https://www.gov.uk/government/publications/ai-cyber-security-code-of-practice/code-of-practice-for-the-cyber-security-of-ai">AI Cyber Security Code of Practice</a>, a voluntary framework aimed at mitigating security risks in AI systems.&nbsp;</p>
      ]]></description>
  <pubDate>Thu, 27 Feb 2025 20:24:55 +0000</pubDate>
    <dc:creator>joseph@hackerone.com</dc:creator>
    <guid isPermaLink="false">5561 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>DORA Compliance Is Here: What Financial Entities Should Know</title>
  <link>https://www.hackerone.com/blog/dora-compliance-here-what-financial-entities-should-know</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">DORA Compliance Is Here: What Financial Entities Should Know</span>
    



    
Michael Woolslayer, Policy Counsel
Vanessa Booth, Policy Analyst
      
    



            
  
      
  
                



          

  

      
            January 31st, 2025

      
            <h2>What Does DORA Regulate?</h2><p><a href="https://www.hackerone.com/blog/dora-what-you-need-know">DORA applies</a> to a wide range of financial entities operating in the EU, including banks, insurers, investment firms, and payment institutions, along with critical third-party service providers such as cloud and data providers. Essentially, any organization that provides key infrastructure for financial services will be required to comply with some or all of DORA’s operational resilience standards.</p><h2>What Does DORA Aim to Achieve?</h2><p>DORA’s primary goal is to enhance the digital resilience of the EU’s financial sector by ensuring that firms are well-prepared to handle and recover from Information and Communication Technology (ICT) disruptions. The regulation establishes a framework for cybersecurity and operational risk management across financial institutions, focusing on reducing the potential impact of cyber threats and system failures.</p><h2>What Are DORA’s Security Requirements?</h2><p>DORA mandates several key cybersecurity and operational resilience requirements for financial entities:</p><ol><li><strong>Risk Management Framework: </strong>Firms must implement comprehensive risk management practices to identify, assess, and mitigate ICT risks.</li><li><strong>Third-Party Risk Management: </strong>Financial entities must ensure third-party service providers adhere to DORA’s security standards, including implementing particular contractual terms and conducting ongoing monitoring and due diligence.</li><li><strong>Digital Resilience Testing: </strong>Firms are required to perform stress tests and regular pentests, in addition to threat-led penetration tests (TLPT) at least every 3 years, based on <a href="https://www.esma.europa.eu/sites/default/files/2024-07/JC_2024-29_-_Final_report_DORA_RTS_on_TLPT.pdf">Regulatory Technical Standards (RTS)</a> for TLPT expected to be adopted by the European Commission in early 
2025.</li><li><strong>Incident Reporting: </strong>DORA mandates a clear process for reporting major ICT-related incidents to regulators within specified timeframes.</li><li><strong>Information Sharing: </strong>The regulation encourages, but does not require, entities to share cyber threat intelligence to bolster collective cybersecurity efforts across the financial sector.</li></ol><h2>How Does a Covered Financial Entity Demonstrate Compliance, and What Happens if It Doesn’t Comply?</h2><p>Covered entities must ensure they meet DORA’s security standards by implementing appropriate risk management practices, third-party oversight, and resilience testing. While fines or criminal sanctions are not included in the DORA regulation itself, individual EU Member States can institute penalties and criminal sanctions in their national laws. These may include fines of up to 2% of an entity’s total annual worldwide revenues, or up to 1 million euros, with even steeper penalties of up to 5 million euros for critical third-party ICT providers. Entities must also submit detailed reports outlining their efforts to manage ICT risks, test their resilience, and respond to cyber incidents.</p><h2>When Do These Requirements Take Effect?</h2><p>DORA entered into force on January 16, 2023, and the full compliance deadline was January 17, 2025.</p><h2>What's the Likely Impact of These New Requirements?</h2><p>DORA’s implementation will likely enhance the overall security posture of the EU financial sector by requiring financial entities to adopt stronger risk management frameworks and resilience practices. The regulation will also increase transparency, as firms must disclose to competent authorities information about their cybersecurity measures and third-party relationships. 
Overall, DORA aims to ensure that financial institutions are better prepared to handle emerging cyber threats, ultimately protecting consumers and the financial system as a whole.</p><h2>We Might Be Subject to These New Requirements—What Should We Do?</h2><p>With the January 17, 2025 deadline already passed, financial entities should review their existing cyber security policies and practices to ensure they meet DORA’s requirements.</p><p>HackerOne offers a comprehensive suite of security solutions designed to help financial services organizations meet DORA compliance requirements. Our portfolio includes <a href="https://www.hackerone.com/blog/crest-and-pentesting-what-you-need-know">CREST-accredited</a> Pentest as a Service (PTaaS), Code Security Audits, Bug Bounty programs, and Spot Checks. This integrated approach aligns with DORA's mandates for regular and comprehensive ICT risk assessment and management, as outlined in <a href="https://www.digital-operational-resilience-act.com/Article_24.html">Articles 24</a> and <a href="https://www.digital-operational-resilience-act.com/Article_25.html">25</a>.</p><p><a href="https://www.hackerone.com/contact">Contact HackerOne to learn more.</a></p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/public-policy" hreflang="en">Public Policy</a>
                    
    

            
            <a href="https://www.hackerone.com/blog/topic/security-compliance" hreflang="en">Security Compliance</a>
        
            
            <a href="https://www.hackerone.com/blog/topic/best-practices" hreflang="en">Best Practices</a>
        
    

            <p>The <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32022R2554&amp;from=FR" target="_blank">Digital Operational Resilience Act (DORA)</a>, which became fully applicable in the European Union on January 17, 2025, establishes comprehensive requirements for the financial sector to strengthen its resilience to ICT-related disruptions, including cyberattacks and technical failures.</p>
      ]]></description>
  <pubDate>Mon, 03 Feb 2025 14:45:04 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">5472 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>What Will a New Administration and Congress Mean for Cybersecurity and AI Regulation?</title>
  <link>https://www.hackerone.com/blog/what-will-new-administration-and-congress-mean-cybersecurity-and-ai-regulation</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">What Will a New Administration and Congress Mean for Cybersecurity and AI Regulation?</span>
    



    
Ilona Cohen, Chief Legal and Policy Officer
      
    



            
  
      
  
                



          

  

      
            January 28th, 2025

      
            <p dir="ltr">Much attention has been paid to the incoming administration’s stated intentions to roll back regulations, as well as its criticism of certain cybersecurity and artificial intelligence (AI) policies adopted by the Biden administration. A more comprehensive review of policy statements and past actions suggests that the Trump administration will support strong cybersecurity defenses and best practices, as well as policies that encourage the responsible and trustworthy development and adoption of AI.</p><h2>The First Months</h2><p dir="ltr">The new administration immediately put a hold on pending regulations, as is typical. In<a href="https://trumpwhitehouse.archives.gov/presidential-actions/memorandum-heads-executive-departments-agencies/">&nbsp;the first Trump administration</a> and the<a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/regulatory-freeze-pending-review/">&nbsp;Biden administration</a>, the new White House Chief of Staff issued a memo on Inauguration Day directing the heads of executive departments and agencies to immediately freeze any new or pending regulations to allow review by the new administration. The Trump administration also released a large number of executive orders on its first day in office, though only one addressed AI or cybersecurity in a material way (see below).</p><p dir="ltr">We expect that many members of Congress will reintroduce cybersecurity and AI legislation from the previous session, and that new legislation on these hot-button issues will be introduced for the first time.&nbsp;</p><p dir="ltr">Based on precedent, it is possible that Congress will use the Congressional Review Act to reject regulations that have already been enacted by federal agencies. The law, enacted in 1996, has only been used to overturn a total of 20 rules, with 16 of those actions taking place early in the first Trump administration with a Republican majority in both chambers of Congress. 
Under the Congressional Review Act, Congress must introduce a joint resolution within 60 Congressional session days of receiving the regulation, so only relatively recent regulations are subject to the law.</p><h2>Cybersecurity Policy and Regulations</h2><h4>CISA</h4><p dir="ltr">Republican lawmakers and incoming administration officials have criticized the Cybersecurity and Infrastructure Security Agency (CISA). However, these criticisms are largely unrelated to cybersecurity itself, centering instead on CISA's perceived expansion beyond its core mission of protecting federal and critical infrastructure to address issues such as disinformation. The Republican Party Platform emphasized a commitment to “use all tools of National Power to protect our Nation's Critical Infrastructure and Industrial Base from malicious cyber actors. This will be a National Priority, and we will both raise the Security Standards for our Critical Systems and Networks and defend them against bad actors.” We expect the new administration to refocus CISA on cyber protection and scale back or defund disinformation initiatives, but not to dismantle CISA.</p><h4>CIRCIA&nbsp;</h4><p dir="ltr">CISA is finalizing regulations to implement the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA), enacted in 2022. The proposed rule requires a wide range of businesses in critical infrastructure sectors to report covered cyber incidents and ransomware payments to CISA. Many of the public comments, including those submitted by members of Congress who had sponsored the original legislation, argued that the draft regulations went beyond the intention of Congress by applying the rule to too many entities, requiring too many cyber incidents to be reported, and not providing enough reciprocity with similar cyber incident reporting regulations. 
Expect members of Congress to closely review and scrutinize the nature and scope of the final regulations.</p><h4>Cybersecurity Executive Orders</h4><p dir="ltr">The Biden administration released its second&nbsp;<a href="https://www.federalregister.gov/documents/2025/01/17/2025-01470/strengthening-and-promoting-innovation-in-the-nations-cybersecurity">executive order</a> on cybersecurity in its final week in office. The order focused on improving the United States’ defenses against the escalating threats from foreign adversaries, particularly the People’s Republic of China (PRC).&nbsp;</p><p dir="ltr">The new administration will certainly review all executive orders issued by the prior administration and consider whether to repeal them entirely, repeal them and replace them with its own executive orders, or take no action. Given the scope of the order and the new administration’s focus on cyber defense and countering the malicious activities of national adversaries, particularly China, a full repeal without replacement in the short term may be unlikely. It is worth recalling that the first Trump administration<a href="https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-taking-additional-steps-address-national-emergency-respect-significant-malicious-cyber-enabled-activities/">&nbsp;issued</a> its own executive order on cybersecurity on its last day in office, which the Biden administration did not repeal.</p><h4>Coordinated Vulnerability Disclosure Practices</h4><p dir="ltr">Coordinated vulnerability disclosure practices, including the implementation of Vulnerability Disclosure Policies and the use of bug bounties by federal agencies, have been supported by both the Trump and Biden administrations, are well established in federal agencies, and are unlikely to be rolled back. 
Russell Vought, who has been nominated to return to his prior role as Director of the Office of Management and Budget, directed federal agencies to implement such programs in a 2020<a href="https://www.whitehouse.gov/wp-content/uploads/2020/09/M-20-32.pdf">&nbsp;memo</a>. These practices also enjoy bipartisan support in Congress, which is<a href="https://www.hackerone.com/press-release/hackerone-applauds-senate-committee-homeland-security-and-government-affairs-approval">&nbsp;actively working</a> to pass legislation to require the adoption of Vulnerability Disclosure Policies by federal contractors.</p><h2>Artificial Intelligence</h2><p dir="ltr">Both President Trump and President Biden issued executive orders related to AI. President Biden’s order directed over 50 federal entities to take more than 100 specific actions to implement its guidance in areas including safety and security, consumer protection, worker support, and consideration of AI bias and civil rights. Proposed rules resulting from the order include those proposed by the Department of Commerce that would require mandatory reporting to the federal government by leading AI developers and cloud providers. Republicans raised concerns about the order’s<a href="https://www.politico.com/news/2024/01/25/conservatives-prepare-attack-on-bidens-ai-order-00137935"> reliance</a> on the 1950 Defense Production Act for its authority to require such disclosures, as well as the order’s impact on free speech, innovation, and focus on addressing&nbsp;<a href="https://fedscoop.com/eyebrow-raising-ai-amendment-passes-senate-commerce-committee/">bias and discrimination</a>. The Trump administration repealed President Biden’s executive order on AI on its first day in office, honoring a commitment made during the campaign. 
In its place, President Trump&nbsp;<a href="https://www.whitehouse.gov/fact-sheets/2025/01/fact-sheet-president-donald-j-trump-takes-action-to-enhance-americas-ai-leadership/">issued</a> his own order to remove barriers to American innovation and “to sustain and enhance America’s dominance in AI to promote human flourishing, economic competitiveness, and national security.”&nbsp;</p><p dir="ltr">While the Trump administration is expected to take a lighter regulatory approach to AI, its past approach through executive order has<a href="https://trumpwhitehouse.archives.gov/ai/executive-order-ai/">&nbsp;recognized</a> the importance of regulatory guidance, technical standards, and transparency and trustworthiness to realizing the benefits of AI innovation. As OMB Director, Vought issued<a href="https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf">&nbsp;guidance</a> to federal agencies for regulation of AI applications, writing that “agencies should continue to promote advancements in technology and innovation, while protecting American technology, economic and national security, privacy, civil liberties, and other American values, including the principles of freedom, human rights, the rule of law, and respect for intellectual property.” The memo emphasized the importance of public trust in AI and the validation of AI systems while encouraging agencies to “be mindful of any potential safety and security risks and vulnerabilities.”&nbsp;</p><p dir="ltr">Congressional action on artificial intelligence has been limited to date, with the executive branch stepping in to shape government policy and practices related to AI use and regulation. 
However, Congress and the states show willingness to take this issue up in the coming legislative term.&nbsp;</p><h2>Focus Areas for HackerOne and Our Partners</h2><p dir="ltr">HackerOne’s policy team continues to advocate for the enactment of legislation and regulation that enhances cybersecurity defenses and promotes the responsible adoption and use of AI. This advocacy will continue across administrations and Congresses. Regardless of how the regulatory environment evolves, companies should continue to proactively identify and manage vulnerabilities in their own systems and AI models to protect their assets and maintain the trust of the public, their customers, and investors.</p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/public-policy" hreflang="en">Public Policy</a>
                    
    

            
            <a href="https://www.hackerone.com/blog/topic/ai-safety-security" hreflang="en">AI Safety &amp; Security</a>
        
            
            <a href="https://www.hackerone.com/blog/topic/security-compliance" hreflang="en">Security Compliance</a>
        
    

            <p dir="ltr">The transition to a new presidential administration and a change in control of the Senate raise questions about how cybersecurity and artificial intelligence (AI) policy and regulation will change and whether such change will be dramatic or more measured.&nbsp;</p>
      ]]></description>
  <pubDate>Tue, 28 Jan 2025 14:23:05 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">5470 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>A Partial Victory for AI Researchers</title>
  <link>https://www.hackerone.com/blog/partial-victory-ai-researchers</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">A Partial Victory for AI Researchers</span>
    



    
Ilona Cohen, Chief Legal and Policy Officer
      
    



            
  
      
  
                



          

  

      
            January 10th, 2025

      
            <p dir="ltr">HackerOne has partnered with security and AI communities to advocate for stronger legal protections for independent researchers. Most recently, HackerOne participated in a&nbsp;<a href="https://hai.stanford.edu/news/strengthening-ai-accountability-through-better-third-party-evaluations">workshop</a> hosted by leading institutions to discuss the need for legal safeguards for third-party AI evaluators and address the gaps in current legal frameworks. Despite the strong push for change, the Librarian of Congress’s&nbsp;<a href="https://www.federalregister.gov/documents/2024/10/28/2024-24563/exemption-to-prohibition-on-circumvention-of-copyright-protection-systems-for-access-control#:~:text=The%20Librarian%20of%20Congress%2C%20pursuant,the%20next%20three%20years%20to">ruling</a> provided some clarity, but ultimately fell short of granting the full&nbsp;legal protection requested for AI safety research.</p><h2>What is the DMCA and Why Does it Matter?&nbsp;</h2><p>DMCA Section 1201 makes it illegal to circumvent technological protection measures (TPMs) used to protect copyrighted works. Essentially, if software has security features, it’s against the law to break or otherwise bypass them, even for research purposes.&nbsp;</p><p dir="ltr">Every three years, the U.S. Copyright Office considers petitions for exceptions to this restriction. In 2015, the security community advocated for and received an&nbsp;<a href="https://s3.amazonaws.com/public-inspection.federalregister.gov/2015-27212.pdf">exception for good faith security research</a>. This year, HackerOne advocated for broadening this exception.&nbsp;</p><p dir="ltr">While security research enjoys legal protections, it is not clear that the same protections extend to AI researchers. AI research, or red teaming, evaluates AI systems for more than just security, including safety, accuracy, discrimination, infringement, and other potentially harmful outputs. 
The absence of clear legal protections creates a chilling effect that may deter independent AI testing, which is crucial for the long-term resilience of the digital ecosystem—much like independent security research safeguards organizations by identifying vulnerabilities before they can cause harm.</p><p dir="ltr">AI platforms, in an effort to safeguard their systems, may block or ban researchers who attempt to find vulnerabilities or algorithmic flaws. In order to continue their work, researchers are sometimes forced to create new accounts or use proxy servers to bypass these access restrictions. While this circumvention is often necessary for identifying unintended behaviors and improving AI systems, in the absence of clarity around the DMCA 1201 exceptions, it comes with potential legal risk.&nbsp;</p><p dir="ltr">HackerOne&nbsp;<a href="https://www.copyright.gov/1201/2024/comments/reply/Class%204%20-%20Reply%20-%20HackerOne%20Inc..pdf">joined the effort</a> to request the Copyright Office to grant clear liability protection for good faith AI research under DMCA Sec. 1201. The process took several months and multiple rounds of comments before the Librarian of Congress issued its decision on October 28, 2024.</p><h2>What Was the Ruling?</h2><p dir="ltr">The U.S. Copyright Office considered a proposed exemption to the DMCA that would allow researchers to circumvent TPMs in order to test and improve the trustworthiness of AI systems. This exemption would have enabled independent researchers to probe AI models for biases, harmful outputs, and other issues related to fairness and accountability, without the threat of legal action.</p><p dir="ltr">However, the Librarian of Congress ultimately declined to grant this proposed exemption. 
The decision was based on two determinations:</p><ol><li dir="ltr"><strong>Insufficient Evidence</strong>: There was not enough evidence to prove that Section 1201 significantly deterred researchers from conducting the necessary red teaming and testing activities on AI models. While many researchers have raised concerns about the legal risks of conducting this type of research, the Copyright Office found that the existing framework of TPM circumvention protections did not present a significant barrier to their work.</li><li dir="ltr"><strong>Non-Circumvention of TPMs</strong>: Many of the techniques employed by researchers do not actually involve circumventing TPMs in the way Section 1201 was intended to prohibit. According to the ruling, most of the research methods in question do not technically involve bypassing access controls or security measures, which means they do not fall under the DMCA's anti-circumvention provisions.</li></ol><h2>The Implications for AI Research</h2><p dir="ltr">While the rejection of the full exemption for AI trustworthiness research is a setback, it does provide some clarity in certain areas. The decision clearly states that many common testing methods, such as post-ban account creation, rate limit evasion, jailbreak prompts, and prompt injection, do not violate Section 1201. This clarification is a win for researchers, as it helps to reduce the uncertainty around these techniques and provides more legal confidence to pursue this critical AI research.</p><p dir="ltr">However, the ruling ultimately leaves AI researchers operating, at times, in a legal gray area, which may leave them unable or unwilling to fully test AI systems independently, especially in cases where flaws are deeply embedded in the technology.</p><p dir="ltr">As AI continues to evolve and impact all aspects of society, legal frameworks must evolve alongside these technological advancements. 
The additional clarity provided is welcome, but there is still much to be done to secure stronger, more comprehensive legal protections for good faith AI researchers.</p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/public-policy" hreflang="en">Public Policy</a>
                    
    

            
            <a href="https://www.hackerone.com/blog/topic/ai-safety-security" hreflang="en">AI Safety &amp; Security</a>
        
            
            <a href="https://www.hackerone.com/blog/topic/security-compliance" hreflang="en">Security Compliance</a>
        
    

            <p>Artificial intelligence is advancing faster than ever, but the legal system is struggling to keep up. A key challenge lies in clarifying how independent AI testing and research intersect with copyright law, particularly under the U.S.’s&nbsp;<a href="https://www.copyright.gov/dmca/#:~:text=Millennium%20Copyright%20Act-,The%20Digital%20Millennium%20Copyright%20Act,between%20copyright%20and%20the%20internet." target="_blank">Digital Millennium Copyright Act</a> (DMCA). In October, in response to advocacy by HackerOne and the Hacking Policy Council, the Librarian of Congress issued a ruling that lessened legal risk for independent AI researchers under DMCA Sec. 1201.</p>
      ]]></description>
  <pubDate>Fri, 10 Jan 2025 15:33:42 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">5465 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>New York Releases AI Cybersecurity Guidance: What You Need to Know</title>
  <link>https://www.hackerone.com/blog/new-york-releases-ai-cybersecurity-guidance-what-you-need-know</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">New York Releases AI Cybersecurity Guidance: What You Need to Know</span>
    



    
        Ilona Cohen
        
            Chief Legal and Policy Officer
      
    


<span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>h1_admin</span></span>
<span class="field field--name-created field--type-created field--label-hidden">Mon, 12/16/2024 - 12:33
</span>

            
  
      
  
                



          

  

      
            December 16th, 2024

      
            <p dir="ltr">AI adoption is accelerating in the financial services industry, both as an asset for improving business operations and as a potential tool to defend against cybercriminals. At the same time, adopting AI systems expands the attack surface that financial institutions must protect. Within this context, the NYDFS guidelines highlight the need for proactive risk management strategies that encompass the unique challenges posed by AI technologies.</p><h2>Cybersecurity Risks of AI</h2><p dir="ltr">The NYDFS guidance outlines several key cybersecurity risks associated with AI, along with strategies for mitigating those risks:</p><ul><li dir="ltr"><strong>AI-Enabled Social Engineering</strong>: One of the most immediate concerns is AI’s potential to enhance social engineering attacks. With tools like deepfakes—AI-generated media that can mimic real people—attackers can create highly convincing phishing schemes. These attacks may occur via emails, phone calls (vishing), SMS (smishing), or even video conferencing, where the attacker impersonates trusted employees or executives.</li><li dir="ltr"><strong>AI-Enhanced Cybersecurity Attacks</strong>: AI allows cybercriminals to amplify the potency, scale, and speed of their attacks. With AI, attackers can quickly scan and analyze vast amounts of data, identify and exploit vulnerabilities, deploy malware, steal sensitive information more efficiently, and develop new malware variants or ransomware designed to evade detection.</li><li dir="ltr"><strong>Exposure or Theft of NPI</strong>: Financial institutions increasingly rely on AI to process sensitive data, including personally identifiable information (PII) and financial records. 
This growing reliance heightens the risk of exposure or theft of non-public information (NPI), which is protected under the NYDFS Cybersecurity Regulation.</li><li dir="ltr"><strong>Supply Chain Vulnerabilities</strong>: As financial organizations integrate AI into their operations, they also depend on a range of third-party vendors and partners. This interconnectedness introduces the risk of cyberattacks targeting vulnerabilities within the supply chain, including AI systems or software that may have been tampered with or compromised.</li></ul><h2>Mitigating AI Cybersecurity Risks: Key Strategies for Financial Institutions</h2><p dir="ltr">The NYDFS's guidance offers practical advice on how institutions can address these AI-specific threats and integrate them into their existing cybersecurity programs. Here are key strategies from the guidance:</p><ul><li dir="ltr"><strong>Risk Assessments and AI-Specific Programs:&nbsp;</strong>Under the NYDFS Cybersecurity Regulation, financial entities are required to perform regular risk assessments. According to NYDFS, these assessments must include AI-related risks. This involves not only evaluating the internal use of AI systems but also assessing the AI systems provided by third-party vendors. Institutions should also ensure that their incident response plans, business continuity plans, and disaster recovery strategies are tailored to handle AI-driven risks.</li><li dir="ltr"><strong>Third-Party Service Provider Management</strong>: Given the interconnected nature of modern financial systems, managing third-party relationships is more critical than ever. Financial institutions must ensure that their third-party vendors—whether they are providing AI-powered services or supporting infrastructure—adhere to the same stringent cybersecurity standards. 
Regular assessments and audits should be conducted to ensure third-party systems remain secure.</li><li dir="ltr"><strong>Access Controls</strong>: The NYDFS guidelines emphasize the importance of robust access control mechanisms, ensuring that only authorized personnel can access sensitive AI-driven systems. This includes implementing multi-factor authentication (MFA), role-based access controls (RBAC), and segmentation of sensitive data to reduce the impact of a potential breach.</li><li dir="ltr"><strong>Cybersecurity Training</strong>: AI’s potential use in social engineering attacks makes cybersecurity awareness training more critical than ever. Institutions should regularly educate their employees about the risks of AI-enhanced attacks and equip them with the knowledge to identify and respond to potential threats. Employees must be trained to recognize the signs of AI-powered phishing attempts and social engineering tactics.</li><li dir="ltr"><strong>Continuous Monitoring and Data Management</strong>: Financial institutions should implement real-time monitoring tools to detect anomalies and suspicious activities within their AI systems. AI-driven cybersecurity monitoring tools can help track and flag unusual patterns that could signal an ongoing attack or breach. Additionally, effective data management practices should ensure that sensitive data is encrypted, segmented, and protected against unauthorized access.</li></ul><h2>The Road Ahead: What's Next for AI and Cybersecurity?</h2><p dir="ltr">The NYDFS's AI cybersecurity guidance underscores the need for financial institutions to proactively incorporate AI considerations into their risk management activities. While the guidelines focus on regulated entities, the risks and strategies outlined are universally relevant to many organizations using AI. 
As AI technologies become more pervasive, institutions of all sizes must also integrate AI-specific risks into their broader cybersecurity and risk management frameworks.</p><p dir="ltr">At HackerOne, we recognize that institutions need more than just traditional cybersecurity measures to address the growing risks posed by AI. That’s why we advocate for proactive, real-world testing through AI red-teaming.&nbsp;</p><p dir="ltr">Red-teaming is a form of adversarial testing that can reveal flaws such as the potential for hackers to bypass AI security protections, as well as algorithmic safeguards against unsafe or harmful output. HackerOne’s red-teaming is driven by a community of ethical hackers whose creativity and expertise help organizations around the world stay safer and more secure. By uncovering AI vulnerabilities and algorithmic flaws early, institutions can take steps to mitigate them before they can be exploited by bad actors.</p><p dir="ltr">As regulatory requirements around AI and cybersecurity come into focus, institutions should view the NYDFS guidelines not just as best practices but as business compliance imperatives. Securing AI systems is no longer optional; it’s essential for protecting both organizational assets and customer trust.</p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/public-policy" hreflang="en">Public Policy</a>
                    
    

            
            <a href="https://www.hackerone.com/blog/topic/ai-safety-security" hreflang="en">AI Safety &amp; Security</a>
        
    

            <p dir="ltr">The New York Department of Financial Services (NYDFS) issued&nbsp;<a href="https://www.dfs.ny.gov/industry-guidance/industry-letters/il20241016-cyber-risks-ai-and-strategies-combat-related-risks" target="_blank">new guidelines</a> for financial institutions and other regulated entities to address the growing concerns over AI-related cybersecurity risks. While the guidance does not introduce new regulatory requirements, it clarifies how institutions can integrate AI-related risks into their existing cybersecurity frameworks, helping them meet the mandates of&nbsp;<a href="https://www.dfs.ny.gov/system/files/documents/2023/12/rf23_nycrr_part_500_amend02_20231101.pdf" target="_blank">NYDFS's Cybersecurity Regulation.</a></p>
      ]]></description>
  <pubDate>Mon, 16 Dec 2024 18:33:41 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">5461 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>New Guidance for Federal AI Procurement Embraces Red Teaming and Other HackerOne Suggestions</title>
  <link>https://www.hackerone.com/blog/new-guidance-federal-ai-procurement-embraces-red-teaming-and-other-hackerone-suggestions</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">New Guidance for Federal AI Procurement Embraces Red Teaming and Other HackerOne Suggestions</span>
    



    
        Michael Woolslayer
        
            Policy Counsel
      
    


<span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>h1_admin</span></span>
<span class="field field--name-created field--type-created field--label-hidden">Mon, 12/09/2024 - 13:09
</span>

            
  
      
  
                



          

  

      
            December 9th, 2024

      
            <p dir="ltr">Earlier this year, the Office of Management and Budget (OMB), which establishes budget rules for federal agencies, issued a memorandum <a href="https://www.whitehouse.gov/wp-content/uploads/2024/10/M-24-18-AI-Acquisition-Memorandum.pdf">on&nbsp;Advancing the Responsible Acquisition of Artificial Intelligence in Government</a>, which outlines, for both agencies and the public, significant aspects of responsible AI procurement and deployment. In particular, OMB’s memo embraced AI red teaming as a critical element of the acquisition of AI for U.S. government agencies.</p><h2>Rules for U.S. Federal Agency AI Procurement</h2><p dir="ltr">Last October, the Biden-Harris Administration published an<a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/">&nbsp;Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence</a> (AI EO). That expansive action set the tone for the U.S. government’s approach to utilizing AI in a safe and secure manner and required OMB to provide guidance to U.S. government agencies on how to manage risks when acquiring AI products and services.&nbsp;</p><p dir="ltr">Consistent with HackerOne’s long-standing<a href="https://www.hackerone.com/public-policy">&nbsp;policy advocacy</a> in favor of responsible AI deployment, we provided OMB with <a href="https://www.hackerone.com/sites/default/files/2024-05/%5BApril%2029%2C%202024%5D%20HackerOne%20Response%20to%20OMB%20AI%20RFI.pdf">comments</a> on how the security and safety best practices championed by HackerOne aligned with the AI EO and should be leveraged in OMB’s development of that guidance. 
Specifically, HackerOne cited the benefits of conducting <a href="https://www.hackerone.com/ai-red-teaming">AI red teaming</a>, ensuring the transparency of AI red teaming methodology, and of documenting the specific harms and bias federal agencies are seeking to avoid. These suggestions drew on our extensive experience working with government agencies and companies to enhance cybersecurity and our use of similar best practices in testing AI models.</p><p dir="ltr">We were pleased to see that the memo reflects our core recommendations:</p><ul><li><p dir="ltr"><strong>Embracing AI Red Teaming:</strong> OMB has made it a requirement that agencies procuring general use enterprise-wide generative AI include contractual requirements ensuring that vendors provide documentation of AI red teaming results.</p></li><li><p dir="ltr"><strong>Identifying Specific Harms:</strong> In addition to the categories of risk that vendors include, OMB has encouraged agencies to require documentation to cover AI red teaming related to nine specific categories of risk.&nbsp;&nbsp;</p></li></ul><p dir="ltr">The inclusion of these elements within the memo will help protect the security and effectiveness of the U.S. federal government by requiring that the AI products and services that undergird critical operations be proactively tested to identify potential risks and harms. It also further underscores the role of AI red teaming as a best practice that all companies should adopt to help ensure the safety and security of their AI products and services and to build the trust of their customers.</p><p dir="ltr"><a href="https://www.hackerone.com/ai-red-teaming"><em>Learn more about AI red teaming with HackerOne.</em></a></p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/public-policy" hreflang="en">Public Policy</a>
                    
    

            
            <a href="https://www.hackerone.com/blog/topic/ai-red-teaming" hreflang="en">AI Red Teaming</a>
        
            
            <a href="https://www.hackerone.com/blog/topic/ai-safety-security" hreflang="en">AI Safety &amp; Security</a>
        
    

            <p dir="ltr">The U.S. government’s approach to evaluating and adopting new technology for its own use often impacts private sector adoption. That’s why it’s significant that, while AI is already having a transformative effect on productivity across industries, the U.S. government is also seeking to harness the benefits of this emerging technology for federal agencies and has now developed criteria to guide its decision making while evaluating AI. As the U.S. government works to apply AI to critical U.S. government operations, it is vital that AI’s power is harnessed safely and responsibly—not only to ensure that the government’s own deployment of AI is secure and effective, but also because of the government’s ripple effect on standards and adoption of AI across all sectors.</p>
      ]]></description>
  <pubDate>Mon, 09 Dec 2024 19:09:00 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">5455 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>Securing Our Elections Through Vulnerability Testing and Disclosure</title>
  <link>https://www.hackerone.com/blog/securing-our-elections-through-vulnerability-testing-and-disclosure</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">Securing Our Elections Through Vulnerability Testing and Disclosure</span>
    



    
        Ilona Cohen
        
            Chief Legal and Policy Officer
      
    


    



    
        Michael Woolslayer
        
            Policy Counsel
      
    


<span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>h1_admin</span></span>
<span class="field field--name-created field--type-created field--label-hidden">Mon, 10/28/2024 - 14:50
</span>

            
  
      
  
                



          

  

      
            October 28th, 2024

      
            

<em>Security researchers and election technology manufacturers at the Election Security Research Forum (ESRF).</em>

<h2>The Event</h2><p dir="ltr">In preparation for the election season, HackerOne planned and executed a unique live hacking event in coordination with the election security group within the Information Technology - Information Sharing and Analysis Center (IT-ISAC). Modeled after HackerOne’s existing&nbsp;<a href="https://www.linkedin.com/posts/hackerone_togetherwehitharder-ugcPost-7246966262017208322-qjpT?utm_medium=member_ios&amp;utm_source=social_share_video">live hacking events</a>, where technology owners and researchers work together to test targeted assets, this first-of-its-kind event leveraged the collective experience of IT-ISAC’s&nbsp;<a href="https://www.it-isac.org/_files/ugd/b8fa6c_7a9e81acc957489b9d9a1fadfe809e34.pdf">advisory board</a>.&nbsp;</p><p dir="ltr">HackerOne gladly provided the expertise and resources necessary to plan the live hacking event and help secure our elections. Three election technology manufacturers and 15 independent, vetted U.S. security researchers with hardware hacking expertise took part. Over a two-day period, these ethical hackers and election technology providers collaborated to explore potential security issues within election devices, which included controlled access to modern election technology with newly developed and not yet fielded configurations of the on-board software. 
The devices tested included digital scanners, ballot marking devices, and electronic pollbooks, emphasizing the technology that voters may encounter at a polling site. In addition to the testing,&nbsp;<a href="https://www.hackerone.com/security-compliance/election-integrity-coordinated-vulnerability-disclosure">expert stakeholders</a> such as HackerOne further enhanced collaboration and disseminated lessons learned across providers through panels and follow-up discussions.&nbsp;</p><h2>The Results</h2><p dir="ltr">In a 48-hour testing window, the ethical hackers submitted 21 reports across the three election technology manufacturers. The attack vectors tested represented a range of election security threats, including ballot box stuffing, scanner denial of service, website URL squatting, and front panel workstation access. The&nbsp;<a href="https://www.it-isac.org/_files/ugd/b9866c_9d45491cff0943bd8149fa5b72e1547d.pdf" target="_blank">result</a> was more secure products, and thus more secure elections, and strengthened trust among the stakeholders.</p><p dir="ltr">This event built on previous efforts to support the adoption of&nbsp;<a href="https://www.hackerone.com/product/response-vulnerability-disclosure-program">Vulnerability Disclosure Programs</a> (VDPs) by election technology manufacturers. A VDP is a “see something, say something” policy that provides a secure channel for third parties to report potential vulnerabilities and security gaps directly to the affected organizations. With the assistance of former election officials, industry, and the security research community, including HackerOne, election technology manufacturers have increasingly implemented this security best practice. 
While most election technology companies now have VDPs in place, last year’s event brought more access to the various systems and reinforced the security-enhancing value of this collaboration.&nbsp;</p><h2>The Future</h2><p dir="ltr">Following the success of the event, IT-ISAC has focused on updating and modernizing standards to better accommodate VDP and responsible disclosure within the industry and developing a framework for future iterations of this event. Stakeholders are exploring possible future events that aim to include a broader set of researchers, additional companies, and others involved in the election security process, including state and local election officials. This would not only expand the attack surface ethical hackers can test, but also empower them to focus on additional attack vectors. Protecting the integrity of our votes is vital and requires proactive approaches—like getting a bunch of experts in a room together to try to hack hardware—to identify and address vulnerabilities before they can be exploited.</p><p dir="ltr"><a href="https://www.hackerone.com/ethical-hacker/election-security"><em>Read the full Election Security Research Forum story &gt;</em></a></p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/public-policy" hreflang="en">Public Policy</a>
                    
    
            <p dir="ltr">There is nothing more fundamental to democracy than a free and fair election. Ensuring the security of our elections and election infrastructure requires much more than just vigilance during an election year. Fortunately, ethical hackers have risen to the challenge,&nbsp;<a href="https://www.politico.com/news/2024/08/12/hackers-vulnerabilities-voting-machines-elections-00173668" target="_blank">strengthening</a> the cybersecurity of our electoral systems over the past several years through collaboration with government and technology manufacturers. <a href="https://www.hackerone.com/ethical-hacker/election-security">Last year’s election security research forum</a>, which HackerOne cosponsored, is an example of this successful partnership.</p>
      ]]></description>
  <pubDate>Mon, 28 Oct 2024 19:50:17 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">5438 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>How To Use HackerOne’s Global Vulnerability Policy Map </title>
  <link>https://www.hackerone.com/blog/how-use-hackerones-global-vulnerability-policy-map</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">How To Use HackerOne’s Global Vulnerability Policy Map </span>
    



    
        Michael Woolslayer
        
            Policy Counsel
      
    


<span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>h1_admin</span></span>
<span class="field field--name-created field--type-created field--label-hidden">Mon, 10/14/2024 - 14:18
</span>

            
  
      
  
                



          

  

      
            October 14th, 2024

      
            <p dir="ltr">To help organizations keep up with the shifting landscape of VDP mandates and recommendations, HackerOne has developed the <a href="https://www.hackerone.com/vulnerability-disclosure-policy-map">Global Vulnerability Policy Map</a>, an interactive map-based tracker. Users can see at a glance where VDPs are required, recommended, or announced but not yet implemented and click into each jurisdiction for more information.</p><p dir="ltr">Scrolling down to the table will show the basic information about each applicable policy. We’ve put together a primer on the table fields below to help users navigate the high-level policy table.</p><h2>Field Definitions</h2><p dir="ltr"><strong>Jurisdiction</strong></p><p dir="ltr">The jurisdiction that the requirement or recommendation applies to. This is often a country, but it can also be a regional body like the European Union or international (as is the case for some of the standards).</p><p dir="ltr"><strong>Region</strong></p><p dir="ltr">The geographic region in which the jurisdiction is located.</p><p dir="ltr"><strong>Requirement</strong></p><p dir="ltr">Indicates whether a particular entry is a requirement or a recommendation.</p><p dir="ltr"><strong>Policy</strong></p><p dir="ltr">The title of the standard, regulation, or law that contains the VDP requirement or recommendation.</p><p dir="ltr"><strong>Applies to</strong></p><p dir="ltr">Many of the listed requirements and recommendations are applicable to a particular type of organization (e.g., IoT device manufacturers).</p><p dir="ltr">Users can expand any entry with a click, which will also show the relevant text and provide a link to the original source material.&nbsp;</p><h2>Stay On Top of Evolving Requirements</h2><p dir="ltr">We will periodically update the map and table to help keep organizations aware of the vulnerability disclosure landscape as standards, regulations, and laws increasingly 
incorporate VDPs.&nbsp;</p><p dir="ltr">If you are looking for help to comply with a new requirement, align with a new recommendation, or adopt a cost-effective security best practice, <a href="https://www.hackerone.com/product/response-vulnerability-disclosure-program">HackerOne Response</a> provides all the tools needed to launch a successful VDP from a single platform. Our out-of-the-box setup makes it easy to establish a vulnerability disclosure workflow for continuous security. Choose the best option to fit your team’s security goals:</p><ul><li dir="ltr"><strong>Essential:</strong> Start with a free self-serve VDP solution to follow best practices and help meet compliance mandates.</li><li dir="ltr"><strong>Professional:</strong> Elevate vulnerability disclosure with advanced features and reporting for proactive security measures.</li><li dir="ltr"><strong>Enterprise:</strong> Ensure enterprise-grade security and compliance with customizable solutions, dedicated support, and extensive integrations.</li></ul><p dir="ltr">Contact us to discover which VDP plan is right for your organization and get your VDP started today.</p><p dir="ltr"><em>Organizations are solely responsible for determining if HackerOne Response satisfies their applicable legal and regulatory obligations.</em></p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/public-policy" hreflang="en">Public Policy</a>
                    
    
            <p>Thousands of organizations have already adopted vulnerability disclosure programs (VDPs) because they work. They are a proven and fundamental best practice that reduces cybersecurity risk. Governments and standards bodies have globally recognized the importance of this best practice, but it’s hard to keep track of these continually evolving requirements.</p>
      ]]></description>
  <pubDate>Mon, 14 Oct 2024 19:18:48 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">5433 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>European Council Adopts Cyber Resilience Act </title>
  <link>https://www.hackerone.com/blog/european-council-adopts-cyber-resilience-act</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">European Council Adopts Cyber Resilience Act </span>
    



    
        Ilona Cohen
        
            Chief Legal and Policy Officer
      
    


<span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>h1_admin</span></span>
<span class="field field--name-created field--type-created field--label-hidden">Fri, 10/11/2024 - 13:33
</span>

            
  
      
  
                



          

  

      
            October 11th, 2024

      
            <p dir="ltr">The CRA will be a game-changing regulation for software and connected product security. It imposes cybersecurity requirements on manufacturers of software and connected products sold in the EU market (regardless of where the manufacturer is located). Below are some of the requirements around the handling and reporting of vulnerabilities in connected devices and their software:&nbsp;</p><ol><li dir="ltr">Establish a coordinated vulnerability disclosure (CVD) policy;</li><li dir="ltr">Address and remediate vulnerabilities without delay, including by developing and maintaining processes to ensure regular testing and provide security updates where feasible;</li><li dir="ltr">Report “actively exploited” vulnerabilities to their designated Computer Security Incident Response Team (CSIRT) and to the European Union Agency for Cybersecurity (ENISA);</li><li dir="ltr">Provide a Software Bill of Materials (SBOM) of the most significant software dependencies in the covered products.</li></ol><p dir="ltr">The legislative act will next be signed by the presidents of the Council and of the European Parliament and published in the EU’s official journal in the coming weeks. The new regulation will enter into force twenty days after publication, with most provisions applying three years after entering into force. Certain requirements, like vulnerability reporting, will kick in within 21 months.</p><p dir="ltr">HackerOne’s&nbsp;<a href="https://www.helpnetsecurity.com/2023/08/21/vulnerability-disclosure/" target="_blank">advocacy</a> helped drive notable improvements to the CRA, including (1) enhanced protections for good-faith security researchers from mandatory vulnerability reporting and (2) provisions encouraging EU states to protect researchers from liability and ensure they are compensated for their efforts. 
Unfortunately, the CRA requires product manufacturers to disclose actively exploited vulnerabilities regardless of mitigation status or guardrails for how government agencies may use the vulnerabilities. HackerOne will continue to work with Member States during the implementation process to seek additional safeguards in this process.</p><p dir="ltr">For an in-depth understanding of the vulnerability handling and reporting requirements, <a href="https://www.hackerone.com/public-policy/eu-cyber-resilience-act">dive into HackerOne’s summary.</a></p>
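As a concrete illustration of the first requirement, one common way manufacturers publish the entry point to a CVD policy is a machine-readable security.txt file under RFC 9116. The sketch below is a hypothetical example, not a CRA compliance template; all addresses and URLs are placeholders:

```text
# Served at https://example-manufacturer.eu/.well-known/security.txt
# RFC 9116 machine-readable vulnerability disclosure contact.
# All addresses and URLs here are hypothetical placeholders.
Contact: mailto:security@example-manufacturer.eu
Contact: https://hackerone.com/example-manufacturer
Policy: https://example-manufacturer.eu/security/cvd-policy
Preferred-Languages: en
# Expires is mandatory under RFC 9116; refresh the file before this date.
Expires: 2025-12-31T23:00:00.000Z
Canonical: https://example-manufacturer.eu/.well-known/security.txt
```

A file like this gives security researchers an unambiguous reporting channel, which is the practical starting point for the CVD and remediation duties listed above.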
      
            
                                                                                <a href="https://www.hackerone.com/blog/public-policy" hreflang="en">Public Policy</a>
                    
    
            <p>The EU Council adopted the Cyber Resilience Act (CRA) this week. Here’s where we’re headed and what HackerOne believes should happen next.&nbsp;</p>
      ]]></description>
  <pubDate>Fri, 11 Oct 2024 18:33:29 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">5432 at https://www.hackerone.com</guid>
    </item>
<item>
  <title>NIS2: Next Step Forward on EU Security Requirements</title>
  <link>https://www.hackerone.com/blog/nis2-next-step-forward-eu-security-requirements</link>
  <description><![CDATA[<span class="field field--name-title field--type-string field--label-hidden">NIS2: Next Step Forward on EU Security Requirements</span>
    



    
        Ilona Cohen
        
            Chief Legal and Policy Officer
      
    


<span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>h1_admin</span></span>
<span class="field field--name-created field--type-created field--label-hidden">Fri, 09/27/2024 - 15:02
</span>
            September 27th, 2024

      
            <p dir="ltr">NIS2 focuses on strengthening EU resilience through new and amended obligations for cybersecurity risk management practices, incident reporting, and security audits. NIS2 imposes obligations on entities across critical sectors to adopt numerous cybersecurity measures, including controls related to vulnerability management and disclosure.&nbsp;NIS2 also introduces supervisory measures for national authorities in individual Member States, as well as stringent enforcement requirements.<br><br>In addition,&nbsp;NIS2 establishes a framework for coordinated vulnerability disclosure (CVD) across the EU. NIS2 requires EU Member States to create policies for “managing vulnerabilities, encompassing the promotion and facilitation of” CVD, and for each Member State to designate one of its computer security incident response teams (CSIRTs) as a CVD coordinator.&nbsp;</p><h2>Brief Background on NIS2</h2><p dir="ltr">NIS2 builds and expands upon the original NIS Directive, which was introduced in 2016 as the first EU-wide legislation on cybersecurity. Two notable differences from the first iteration of the directive are that NIS2 significantly expands the range of “essential” and “important” entities to which the directive applies, and that it imposes administrative fines in the event of non-compliance.</p><p dir="ltr">NIS2 applies to public or private entities that provide a service within the EU that is listed in&nbsp;<a href="https://atwork.safeonweb.be/sites/default/files/2024-06/Detailed%20NIS2%20Scope%201%20v1%20EN.png">Annex I</a> (Sectors of High Criticality) or&nbsp;<a href="https://atwork.safeonweb.be/sites/default/files/2024-06/Detailed%20NIS2%20Scope%202%20v1%20EN.png" target="_blank">Annex II</a> (Other Critical Sectors) of the directive. Under NIS2, the designation of “essential” or “important” is based on a company’s size and the criticality of the services it provides. 
“Essential” entities are proactively supervised, whereas “important” entities fall under reactive supervision.&nbsp;</p><p dir="ltr">Under NIS2, entities providing “essential” or “important” services must comply with the same set of ten cybersecurity risk management measures, such as vulnerability handling and disclosure, testing the effectiveness of security safeguards, and incident response. Some of these measures will be further detailed in the Implementing Regulation (a draft is&nbsp;<a href="https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/14241-Cybersecurity-risk-management-reporting-obligations-for-digital-infrastructure-providers-and-ICT-service-managers_en" target="_blank">available here</a>). NIS2 is a “minimum harmonization” law, meaning that Member States can, in some areas, impose additional obligations in their implementing laws beyond those set out in the NIS2 Directive itself. Topics covered by the Implementing Regulation, however, should apply consistently across Member States.<br><br>For “essential” entities found out of compliance with NIS2, administrative fines can reach up to 10 million Euros or 2% of the company’s global annual turnover, whichever is higher. Notably, NIS2 also mandates personal liability for corporate executives in the event of non-compliance.&nbsp;</p><h2>How to Prepare: Security Controls for In-Scope Entities&nbsp;</h2><p dir="ltr">Article 21 of NIS2 outlines ten cybersecurity risk management measures to be adopted by in-scope entities. These include security in network and information systems acquisition, development, and maintenance, as well as vulnerability handling and disclosure.&nbsp;</p><p dir="ltr">A robust vulnerability disclosure process, in addition to regular security testing like penetration testing, will help organizations comply with NIS2 and identify and remediate security weaknesses in their systems more quickly and effectively. 
Implementing a strong CVD process will also help meet the requirements of any national transposition of NIS2 that goes beyond the directive’s requirements, as is the case with the Belgian transposition, which requires entities to implement a CVD policy.&nbsp;<br><br>As the NIS2 deadline nears, in-scope organizations should act now by establishing a vulnerability disclosure program (VDP). In September, HackerOne launched Essential VDP — a free, self-serve tier of <a href="https://www.hackerone.com/product/response-vulnerability-disclosure-program">HackerOne Response</a>, our VDP product. This product will be useful for “essential” and “important” companies that must apply vulnerability handling and disclosure measures as part of their cybersecurity risk management compliance with NIS2.&nbsp;</p><p dir="ltr">Additionally, in 2023, the&nbsp;<a href="https://digital-strategy.ec.europa.eu/en/policies/nis-cooperation-group" target="_blank">NIS Cooperation Group released guidelines</a> for Member States on implementing national CVD policies. The Cooperation Group is a platform for EU collaboration with representatives from EU Member States, the European Commission, and the European Union Agency for Cybersecurity (ENISA). The guidelines explicitly endorsed vulnerability rewards programs, such as bug bounty programs, as an impactful means of implementing CVD.&nbsp;</p><h2>CVD for EU Member States</h2><p dir="ltr">As Article 12 of NIS2 outlines, each Member State must designate one of its CSIRTs as a coordinator for a national CVD program. 
The CSIRT coordinator will identify and contact the entities involved in a vulnerability disclosure, assist those reporting a vulnerability, negotiate disclosure timelines, and manage vulnerabilities that affect multiple entities.&nbsp;</p><p dir="ltr">In addition, ENISA must develop and maintain a European vulnerability database, with “appropriate information systems, policies, and procedures … to ensure the security and integrity of the European vulnerability database.” Mirroring the functions of the U.S.-based National Vulnerability Database (NVD), this EU database will include information describing a vulnerability, the affected products or services, the associated severity, and the availability of related patches and remediation guidance.&nbsp;</p><h2>NIS2 Next Steps</h2><p dir="ltr">The European Commission is expected to issue a finalized Implementing Regulation in the coming days. The Implementing Regulation will provide a consistent EU approach to incident reporting thresholds and cybersecurity measures. At the same time, Member States are busy incorporating NIS2 into their own national laws, a process known as transposition.&nbsp;</p><p dir="ltr">Transposition of NIS2 has a deadline of 17 October 2024. Some Member States, like Belgium, have already completed transposition, though several others, like the Netherlands, have publicly stated that they anticipate a longer process, likely well into 2025.&nbsp;</p><p dir="ltr">It will be important to track the European Commission’s forthcoming publication of the Implementing Regulation, as well as the progress of Member States’ transposition of NIS2 into their national laws. Tracking these and other developments will help businesses know what EU agencies and Member States expect with regard to NIS2 compliance.</p><h2>Conclusion</h2><p dir="ltr">Businesses should anticipate that NIS2 will come into effect at the EU level over the coming weeks and months. 
To help prepare, we recommend that businesses in the EU determine whether they are in scope for NIS2, and in which specific Member State jurisdictions. Businesses should work with their IT and compliance teams to determine whether their current security controls meet the risk management measures required under NIS2.&nbsp;HackerOne’s vulnerability management solutions, including our vulnerability monitoring and Essential VDP services, are an excellent way to begin fulfilling NIS2 vulnerability handling and disclosure requirements.&nbsp;</p><p dir="ltr">By strengthening the security practices of important and essential entities, NIS2 will help protect health and safety and ensure critical services are resilient to disruption. HackerOne looks forward to working to achieve a high common level of security across Europe.</p>
      
            
                                                                                <a href="https://www.hackerone.com/blog/public-policy" hreflang="en">Public Policy</a>
                    
    
            <p>The European Union (EU) is poised to take the next step forward on implementing the second&nbsp;<a href="https://eur-lex.europa.eu/eli/dir/2022/2555" target="_blank">Network and Information Security Directive</a> (NIS2). All eyes are now on the <a href="https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/14241-Cybersecurity-risk-management-reporting-obligations-for-digital-infrastructure-providers-and-ICT-service-managers_en" target="_blank">upcoming Implementing Regulation</a>, which will detail incident reporting thresholds and cybersecurity measures that will apply to critical sectors across EU member states. As the EU Commission is expected to finalize the Implementation Regulation shortly, organizations can prepare by familiarizing themselves with the major requirements of NIS2.&nbsp;</p>
      ]]></description>
  <pubDate>Fri, 27 Sep 2024 20:02:31 +0000</pubDate>
    <dc:creator>h1_admin</dc:creator>
    <guid isPermaLink="false">5426 at https://www.hackerone.com</guid>
    </item>

  </channel>
</rss>
