Inside Cybersecurity

Industry leaders to Senate: Embrace AI as key element in cyber defense, be cautious on regs

By Charlie Mitchell / September 12, 2023

Tech industry leaders will urge a carefully crafted regulatory approach to artificial intelligence at a Senate Commerce subcommittee hearing, while emphasizing the essential role AI tools play in defending against increasingly sophisticated cyberattacks.

“AI is the best tool defenders have to identify and prevent zero-day attacks and malware-free attacks because AI can defeat novel threats based on behavior cues rather than known signatures,” Rob Strayer, Information Technology Industry Council executive vice president for policy, says in his prepared testimony for today’s Senate Commerce hearing on “the need for transparency” in AI.

“Leveraging these technologies is essential to meeting constantly evolving threats,” Strayer says.
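Strayer’s signature-versus-behavior distinction maps onto a well-known pattern in defensive tooling. The following is a minimal illustrative sketch in Python, assuming scikit-learn’s IsolationForest as the anomaly detector; the hash database, feature columns, and telemetry values are hypothetical examples, not any vendor’s actual detection pipeline:

```python
# Minimal sketch: signature matching vs. behavior-based anomaly detection.
# KNOWN_BAD_HASHES, the feature columns, and all numbers are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

KNOWN_BAD_HASHES = {"9f2c...", "ab41..."}  # placeholder signature database

def signature_check(file_hash: str) -> bool:
    """Signature matching: catches only threats already in the database."""
    return file_hash in KNOWN_BAD_HASHES

# Hypothetical per-process behavioral telemetry:
# [child processes spawned, outbound connections/min,
#  registry writes/min, MB written to disk/min]
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[2.0, 5.0, 1.0, 50.0],
                      scale=[1.0, 2.0, 0.5, 20.0],
                      size=(500, 4))

# Learn what "normal" looks like; no malware samples are needed.
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A zero-day payload has no known hash, but its behavior stands out.
novel_threat = np.array([[40.0, 120.0, 30.0, 900.0]])
print(signature_check("c0ffee..."))    # False: no signature to match
print(detector.predict(novel_threat)) # typically [-1]: flagged as anomalous
```

The point of the sketch is that the detector never sees the malware itself; it only learns a baseline of normal behavior, which is why this style of defense can generalize to threats that have no prior signature.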

Further, he will testify, “The United States has the potential to build on its lead as AI transforms all sectors of the economy, generates trillions of dollars in economic growth, and benefits U.S. companies and citizens for decades into the future. Overly broad and prescriptive regulation, however, could undermine that leadership position and cede it to U.S. competitors, including authoritarian nations.”

The hearing was announced by subcommittee Chairman John Hickenlooper (D-CO) and ranking member Marsha Blackburn (R-TN).

In addition to Strayer, the panel will hear from Victoria Espinel, CEO at BSA-The Software Alliance; Ramayya Krishnan, dean of the Heinz College of Information Systems and Public Policy, Carnegie Mellon University; and Sam Gregory, executive director of WITNESS, an organization that “helps people use video and technology to protect and defend human rights.”

Espinel, according to excerpts of her prepared testimony, will cite the importance of “a strong national law” to support “responsible and broad-based AI adoption.” She will point out that “other countries are moving quickly on regulations that affect US companies. The US should be part of shaping the global approach to responsible AI. The window for the US to lead those conversations globally is rapidly closing.”

She will argue that legislation should “focus on high-risk uses of AI, like those that decide whether a person can get a job, a home, or health care”; require companies to have risk management programs and conduct impact assessments; and “require companies to publicly certify they have met these requirements.”

“It is important that legislation reflect different roles,” Espinel will say. “Some companies develop AI. Some companies use AI. Our companies do both. And both roles have to be covered. Legislation should set distinct obligations for developers and users because each will know different things about the AI system in question and be able to take different actions to identify and mitigate risks.”

“So my message to Congress is simple: do not wait,” Espinel says in her testimony. “AI legislation can build on work by governmental organizations, industry, and civil society. These steps provide a collective basis for action. You can develop and pass AI legislation now that creates meaningful rules to reduce risks and promote innovation. We are ready to help you do so.”

Strayer in his testimony will say, “AI and machine learning can be leveraged to improve cybersecurity. Indeed, defensive cybersecurity technology must embrace machine learning and AI as part of the ongoing battle between attackers and defenders.”

He will say, “The threat landscape constantly evolves, with cyberattacks that are complex, automated and constantly changing. Attackers continually improve their sophisticated and highly automated methods, moving throughout networks to evade detection. The cybersecurity industry is innovating in response: making breakthroughs in machine learning and AI to detect and block the most sophisticated malware, network intrusions, phishing attempts, and many more threats.”
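The phishing-detection application the testimony cites can be made concrete in a few lines. This is a toy illustration only, assuming scikit-learn; the URLs, labels, and character n-gram features are invented and far simpler than any production feature set:

```python
# Toy sketch of ML-based phishing detection; URLs and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

urls = [
    "https://example.com/login",                        # benign
    "https://accounts.example.com/reset",               # benign
    "http://examp1e-login.verify-account.xyz/secure",   # phishing
    "http://paypa1-security-update.top/confirm",        # phishing
]
labels = [0, 0, 1, 1]

# Character n-grams pick up look-alike tricks ("examp1e", "paypa1")
# that a fixed blocklist of known-bad URLs would miss.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(urls, labels)
print(model.predict(["http://examp1e.account-verify.top/login"]))  # likely [1]
```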

Strayer will underscore AI’s role in manufacturing and supply chains, the healthcare sector and telecommunications.

He will argue, “There are existing laws and regulatory frameworks that can address AI-related risks, so it is critical to understand how those laws apply, and where they may not be fit-for-purpose, prior to creating new legislation or regulatory frameworks pertaining to AI. As an initial step, policymakers should evaluate how NIST’s AI [risk management framework] is being adopted and how it can be used to manage risk.” -- Charlie Mitchell (cmitchell@iwpnews.com)