DOJ revising vulnerability disclosure framework to encourage AI red teaming

The Department of Justice is revising its policy aimed at limiting legal risk for third-party cybersecurity researchers to address the reporting of vulnerabilities in artificial intelligence systems.

Nicole Argentieri, deputy assistant attorney general in the DOJ’s Criminal Division, said her department is “hard at work” updating the 2017 framework for vulnerability disclosure programs.

The updates will “address the reporting of vulnerabilities for AI systems and consider issues that may arise under intellectual property laws,” Argentieri said at an event Wednesday organized by the Center for Strategic and International Studies.

“As we work to update this document, the Criminal Division, along with other department components, will work with external stakeholders, including researchers and companies, to solicit feedback so they can share any concerns, including about the potential applicability of criminal laws to good faith AI red teaming efforts,” she said.

Argentieri noted that third-party security research has helped protect computer systems and networks by surfacing previously unknown vulnerabilities. Federal agencies, including the Department of Defense and the Department of Homeland Security, have implemented vulnerability disclosure programs to identify cybersecurity issues in their own systems and even in the systems of contractors.

But Argentieri noted that independent research into AI systems can go beyond security concerns alone.

“It can also help protect against discrimination, prejudice and other harmful effects,” she said. “As AI becomes more prevalent in our lives, it is critical that we do not let it undermine our shared national principles of fairness and equality.”

The DOJ’s efforts are in line with the White House’s September 2023 voluntary AI commitments, which encourage companies to promote third-party vulnerability discovery.

While leading AI companies have pledged to follow through on the White House commitments, AI researchers have raised concerns about whether companies are complying, particularly when it comes to protecting good-faith research. More than 350 leading AI researchers earlier this year called on companies to create a safe harbor for independent AI evaluation, arguing that current policies “deter independent evaluation.”

Argentieri noted that DOJ is also supporting an effort at the U.S. Copyright Office to clarify that the exemption allowing good faith security research into AI systems would also cover research into bias and “other harmful and unlawful results of such systems.”

“While we believe that good faith research can coexist with the criminal laws we enforce, we know we are only at the beginning of this AI revolution, and much remains unclear in the world of AI, from the core technology to the technical details of AI systems and how research into them can be most effectively conducted,” Argentieri said.

The DOJ’s plan to overhaul its vulnerability disclosure framework comes as the Criminal Division outlines a new “strategic approach to combating cybercrime.”

The objectives include boosting efforts to disrupt ransomware gangs, botnets and other cybercriminal activities; strengthening the DOJ’s tools to combat cybercrime; and promoting more “capacity building,” public education and information sharing on the misuse of emerging technologies.

“Criminals have always tried to exploit new technology, but while their tools may change, our mission does not,” Argentieri said. “The Criminal Division will continue to work closely with its law enforcement partners to aggressively pursue criminals, including cybercriminals, who exploit AI and other emerging technologies and hold them accountable for their misconduct.”
