The Pentagon could require vendors to certify that their software is free of known defects. Experts are divided.
Written by Suzanne Smalley
Should the Pentagon require vendors to sell it only military software that is free of known vulnerabilities or defects that could cause security issues? At first glance, this seems like a reasonable request.
But when security researcher Jerry Gamblin tweeted a screenshot of the House of Representatives' software vulnerability provision, part of the massive National Defense Authorization Act of 2023 passed on July 14, it divided the cybersecurity community. The debate boils down to two competing views: the requirement is either unattainable and unnecessary, or a groundbreaking step toward holding software vendors accountable for selling faulty technology.
The Biden administration comes down on the side of making software vendors responsible for ensuring their products are free of known common vulnerabilities and exposures, or CVEs. The software industry should emulate the auto industry, where “manufacturers retain ownership and liability” throughout the life of the vehicle, said Anne Neuberger, deputy national security adviser for cyber and emerging technologies.
“The technology model for far too long has been that users are responsible for patching devices and systems and recovering from an incident when a vulnerability is exploited — and that model needs to change,” Neuberger told CyberScoop in an interview Friday. “This certainly includes fixing critical CVEs before a product is sold and maintaining visibility of new CVEs and accountability for addressing them.”
But cybersecurity executive Dan Lorenc says there is no such thing as software without vulnerabilities.
“On the face of it, to someone outside the industry, it seems perfectly fair to ban the sale of software with known vulnerabilities,” wrote Lorenc, a former Google software engineer and CEO of Chainguard, in a blog post. “Why would you sell something vulnerable? And why would anyone buy it? Especially an organization responsible for national security. But for anyone who’s spent time watching the results of CVE scans, that idea is misguided at best and an impending s***show at worst.”
But it’s time to start putting more responsibility on software vendors, says Michael Daniel, former senior cybersecurity adviser to President Obama and now president of the nonprofit Cyber Threat Alliance.
Daniel pointed out that there is some wiggle room in the provision, as it allows contractors to disclose any vulnerabilities or defects along with a plan to mitigate them. Another provision directs the Secretary of Defense to issue guidance on how and when to apply these rules.
“This change would be quite significant because software developers have long taken no responsibility for vulnerabilities in their products,” he said, adding that it would “mark a change in the market.”
Daniel disagrees with Lorenc’s view that because vulnerability-free software does not exist, it is wrong to require companies to take responsibility for eliminating all known vulnerabilities before selling to the DOD.
“The NIST [National Institute of Standards and Technology] database is widely accepted as a source of vulnerabilities,” Daniel said. “It is true that not all vulnerabilities are created equal: some are more dangerous than others and some are more likely to be exploited than others, so there is certainly nuance in how much importance a defender assigns to a given vulnerability.”
Daniel said he expects the guidance issued by the secretary to account for this dynamic. “The underlying principle that you shouldn’t ship software whose known vulnerabilities you haven’t mitigated seems to be a good one,” Daniel said.
But Lorenc’s side includes many vocal opponents of the bill, including Harley Geiger, a leading cybersecurity policy expert, who tweeted: “Policymakers: Please stop considering requirements to remove ALL software vulnerabilities, or bans on sale of software with ALL vulnerabilities. Please understand that not all vulnerabilities are significant or can/should be mitigated.”
Lorenc also said relying on NIST’s National Vulnerability Database (NVD), a government repository of standards-based vulnerability management data, is unworkable at scale. “Vulnerability data is bad; like really, really bad,” Lorenc wrote in his blog. “As an industry, we haven’t figured out how to accurately assess severity, measure impact, and track known vulnerabilities in a scalable way.”
Lorenc said many organizations don’t know all the software they use. He pointed out that technology research firm Gartner found that up to 35% of IT spending went to software the owners didn’t know existed.
In an interview, Lorenc said the fundamental problem is that there is no universal definition of a vulnerability. He said that many of the vulnerabilities included in the NIST database are partially incorrect or not always applicable.
“We don’t have a common vocabulary to explain all of this, and so a lot of the stuff in there really comes across as noise, and there’s no good way to filter it out or correct it,” Lorenc said. “The NVD tries to be as open as possible, but that leads to a lot of fixes happening in an unstructured way, making it difficult for tools and systems to keep track of them.”