Asaf Karas, Chief Technology Officer of Security, JFrog
Federal agencies face a variety of challenges with software in general and software updates in particular, including the dueling priorities of speed and security.
The cyber leaders and software developers on the front lines of managing the nation’s critical infrastructure are keenly aware of the need to more tightly integrate DevOps and security — strengthening security throughout the development, testing, and reporting process. The cyber Executive Order (EO) issued earlier this year outlines a broad range of measures to strengthen cybersecurity, including a specific call to action for vendors and government organizations alike to improve software supply chain security.
The EO states, “The development of commercial software often lacks transparency, sufficient focus on the ability of the software to resist attack, and adequate controls to prevent tampering by malicious actors. There is a pressing need to implement more rigorous and predictable mechanisms for ensuring that products function securely, and as intended. The security and integrity of “critical software” – software that performs functions critical to trust (such as affording or requiring elevated system privileges or direct access to networking and computing resources) – is a particular concern.”
Federal developers are challenged by the need for rapid software updates, particularly when a vulnerability is identified. Many areas of government are embracing the DevOps principle of “release often and quickly.” But software updates in some application environments (including military platforms) can take months or even years to deliver. The challenge is that almost every piece of military equipment in use today, aside from small handheld weapons, includes software – which means it has bugs and requires updates.
One option for accommodating this need is enabling secure, verified automatic software updates. Automation means that release cycles are getting shorter, and the industry is heading toward a future in which software updates will be constant. Effectively, software will become “liquid” in the sense that products and services will be connected to “software pipes” that constantly stream updates into systems and devices – with no human intervention. Federal leaders should lay the groundwork today and put the requisite security measures in place so this can happen safely.
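To make that future safe, every automatic update needs to be verified before it is applied. The sketch below is one illustrative way to do that in Python – checking a detached Ed25519 signature on an update bundle before handing it to an installer. The file names, the install_update() helper, and the placeholder public key are assumptions for illustration, not a prescribed design.

```python
# Minimal sketch: verify a detached Ed25519 signature on an update bundle before
# installing it. The key, file names, and install_update() hook are illustrative
# assumptions; a real pipeline would pin keys in hardware or a trust store and
# log every decision.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Placeholder 32-byte public key (hex); in practice this comes from a managed trust store.
TRUSTED_PUBLIC_KEY = bytes.fromhex(
    "3b6a27bcceb6a42d62a3a8d02a6f0d73653215771de243a63ac048a18b59da29"
)

def install_update(bundle: bytes) -> None:
    # Hypothetical installer hook; here it only stages the bundle to disk.
    with open("staged_update.bin", "wb") as f:
        f.write(bundle)

def verify_and_install(bundle_path: str, signature_path: str) -> bool:
    with open(bundle_path, "rb") as f:
        bundle = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()

    public_key = Ed25519PublicKey.from_public_bytes(TRUSTED_PUBLIC_KEY)
    try:
        public_key.verify(signature, bundle)  # raises InvalidSignature on any tampering
    except InvalidSignature:
        return False                          # refuse the update; keep the current version

    install_update(bundle)
    return True
```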
Security in the Design: Continuous Verification
Realizing this vision means that organizations must be able to trust the software updates and update process. Software developers are the initial source of security – they determine the performance profile of a piece of software through their architectural and implementation choices. After development, a pipeline of other people and processes can detect and correct problems in software, but the initial bars of quality, performance, and security are set by the developer.
Developers can build in security right from the beginning of the process – using zero trust architectures, for example. Innovations such as IoT and 5G were designed for a world of cooperative, friendly participants, so incorporating zero trust principles into IoT and 5G solutions is key. No solution can be locked down tightly enough to prevent every attack, but building zero trust into solutions gives agencies the means to continuously verify security.
Security and DevOps Working Together
Ideally, software companies and internal development teams can bake security into every stage of development and seamlessly deploy updates across geographies – from ground to cloud, to any device throughout the software supply chain. Realizing this vision requires security and DevOps teams to work more closely together.
Certainly, some overlap of responsibilities is beneficial. Ideally, the DevOps team should have some security knowledge so they can present a sensible deployment plan to the security team. Likewise, the security team should have some DevOps experience so they can validate the deployment parameters – for example, by reviewing Kubernetes Custom Resource Definitions (CRDs). The security team should also be aware of the cybersecurity features available for in-house developed software. In some cases, the security team defines these features and requires that DevOps enable them when deploying the software. The security team should also define the required network segmentation and workload isolation for DevOps’ deployments – for example, two containers that should never run on the same physical node.
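As one concrete illustration of that kind of requirement, the sketch below uses Python to emit a Kubernetes pod anti-affinity rule stating that a workload must never be scheduled onto the same node as pods carrying a given label. The “app: payment-db” label, the topology key choice, and the use of Python to generate the YAML are assumptions for illustration only.

```python
# Minimal sketch: express "these workloads must never share a physical node" as a
# Kubernetes pod anti-affinity fragment, emitted as YAML for the security team to
# review alongside the deployment's CRDs. The "app: payment-db" label is hypothetical.
import yaml  # PyYAML

pod_spec_fragment = {
    "affinity": {
        "podAntiAffinity": {
            "requiredDuringSchedulingIgnoredDuringExecution": [
                {
                    # Keep this pod away from anything carrying the sensitive label.
                    "labelSelector": {"matchLabels": {"app": "payment-db"}},
                    # kubernetes.io/hostname makes the rule apply per node.
                    "topologyKey": "kubernetes.io/hostname",
                }
            ]
        }
    }
}

print(yaml.safe_dump(pod_spec_fragment, sort_keys=False))
```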
In government, and for organizations that manage critical infrastructure, the security requirements are even greater. A software program might get a green light, but if security testing is weak, the system – and the agency – is vulnerable. Cyber leaders can develop a false sense of security: every box has been checked, yet the software is still vulnerable. This is one of the reasons zero trust is gaining momentum – IT security in general has not been able to secure the perimeter.
Trust, But Verify
When an agency or military organization procures software from a third party, it should require documentation of the security testing. Ideally, the security testing is performed both internally (by the third-party developer) and by an external security auditor. Furthermore, the security audit should include manual research, automated static analysis, and automated dynamic testing (fuzzing). For applications that require a high level of confidence and reliability, the organization can consider software written in accordance with standards such as MISRA C. MISRA C is a set of software development guidelines for the C programming language developed by The MISRA Consortium; the group’s goal is to facilitate code safety, security, portability, and reliability in the context of embedded systems.
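To give a sense of what the automated dynamic testing step can look like in practice, the sketch below uses Google’s open source Atheris fuzzer for Python. The parse_record() function is a hypothetical stand-in for whatever input-handling code the audited software actually exposes.

```python
# Minimal fuzzing sketch with Atheris (pip install atheris). Atheris feeds mutated
# byte strings into the callback; any uncaught exception is reported as a finding.
import sys
import atheris

def parse_record(data: bytes) -> None:
    # Hypothetical parser under test; replace with the real entry point.
    text = data.decode("utf-8", errors="ignore")
    fields = text.split(",")
    if len(fields) > 2 and fields[0] == "ADMIN":
        int(fields[1])  # deliberately fragile: non-numeric input raises ValueError

def test_one_input(data: bytes) -> None:
    parse_record(data)

if __name__ == "__main__":
    atheris.Setup(sys.argv, test_one_input)
    atheris.Fuzz()
```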
Some government and industry leaders have proposed improving security with a software bill of materials (SBOM), similar to the ingredients label one would see on a package of food. An SBOM provides a clear chain of custody, detailing all the people who touched a piece of software, binary code, or anything related to the end product – from inception to when it is installed.
While the SBOM is a useful tool, buyers need education to understand the various “ingredients” it lists, which should include all open source and third-party software components, their licenses, their versions, and their patch status. Too often, open source components or licenses are used but never logged – which creates risk.
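As a simple illustration of reading those ingredients, the sketch below walks an SBOM in CycloneDX JSON format and flags components with no recorded license – the exact gap described above. The sbom.json path is a hypothetical example, and a real review would also check versions and patch status against a vulnerability database.

```python
# Minimal sketch: list the components, versions, and licenses recorded in a
# CycloneDX JSON SBOM and flag any component whose license was never logged.
import json

with open("sbom.json") as f:  # hypothetical SBOM file path
    sbom = json.load(f)

for component in sbom.get("components", []):
    name = component.get("name", "<unknown>")
    version = component.get("version", "<unversioned>")
    licenses = [
        entry.get("license", {}).get("id") or entry.get("license", {}).get("name")
        for entry in component.get("licenses", [])
    ]
    flag = "  <-- no license recorded" if not any(licenses) else ""
    print(f"{name} {version}  licenses={licenses}{flag}")
```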
Testing Security Measures: Every System is Critical
Federal CISOs recognize the need for more extensive security testing than in the past. One key practice is red teaming, in which a team of specialists mounts a simulated attack on an agency’s systems. It is a valuable tool that agencies should use more often, and it can be performed during the design process as well as the qualification cycle.
Often organizations only try red teaming on critical systems, but recent supply chain attacks have demonstrated that every system is critical.
For example, an army base’s payroll system may be the weakest link – in part because it isn’t scrutinized with the same priority as weapons systems. However, this system could be the entry point to attack the entire network. Red teaming identifies weaknesses and vulnerabilities in systems that can be mitigated prior to an attack, protecting the entire network.
Additionally, review software certifications, since certification implies additional testing. As an example, in conjunction with the DoD’s Platform One initiative, developers can access a central binary repository of secure, Iron Bank-certified resources that have been hardened to the DoD’s specifications. This container registry has a Continuous Authority to Operate (cATO), allowing developers to push validated code into production more quickly.
Vulnerability Disclosure
As the variety and complexity of software environments grow, vulnerability disclosure becomes increasingly important. With dozens of vulnerabilities found each day, vendors and agencies must provide obvious, easy ways for external parties to report them. Establishing a vulnerability disclosure program (VDP) gives government agencies another layer of protection – specifically, a way to deal with unexpected situations, including unusual or creative attack vectors. A VDP creates a system for organizations and researchers to work together, find vulnerabilities before they can be exploited, protect essential data, and stay a step ahead of cybercriminals.
A VDP provides a clear method for researchers to securely report vulnerabilities they discover and a framework for the agency’s response to reports in an appropriate time frame. Security researchers can expose new vulnerabilities unknown to the vendor, and thus initiate a path to fixing security gaps before attackers find them.
As VDPs become common practice, organizations can provide their stakeholders, customers, and partners with peace of mind by actively looking for and remediating vulnerabilities. Implementing an effective and efficient VDP can reduce the risk of security flaws being exploited by cybercriminals.
One important resource is the Common Vulnerabilities and Exposures (CVE) program, under which organizations identify, define, and catalog publicly disclosed cybersecurity vulnerabilities.
Within this program, a group of public and private sector organizations (including JFrog, Red Hat, Google, and Microsoft) assign CVE identification numbers to newly discovered security vulnerabilities and publish related details in associated CVE records for public consumption.
Cybersecurity and IT professionals worldwide use CVE records to coordinate their efforts for addressing critical software vulnerabilities.
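For teams that want to pull those records programmatically, the sketch below queries NIST’s public NVD REST API (version 2.0) for a single CVE and prints its English-language description. The specific CVE ID is just an example, and production use would add an API key, retries, and rate limiting.

```python
# Minimal sketch: fetch one CVE record from the NVD 2.0 REST API and print its
# English-language description. Requires the requests package.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve(cve_id: str) -> None:
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        english = [d["value"] for d in cve.get("descriptions", []) if d.get("lang") == "en"]
        print(cve["id"])
        print(english[0] if english else "(no description found)")

if __name__ == "__main__":
    fetch_cve("CVE-2021-44228")  # example only: the Log4Shell record
```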
Summary: Aim for a Layered Defense
As developers balance the need for “speed to mission outcome” with the need to keep the software supply chain secure, steps that help build a multi-layered defense include:
- Integrate Security and DevOps: Each team should understand the other’s role and capabilities. For example, the security team should know which cybersecurity features are available in-house and which need to be enabled by DevOps. The security team should also define the required network segmentation for deployments
- Evaluate Software Benchmarks: For example, solutions with the DoD’s Platform One Iron Bank Certification are part of a central binary repository of certified resources that have been hardened to the DoD’s specifications
- Implement a Multi-Pronged Testing Approach: The process should include internal testing, as well as testing by a third-party auditor who has conducted manual research, run the software through an automated static analysis, and executed automated dynamic testing. In addition, plan periodic red team exercises to identify missed or new vulnerabilities
- Establish a Vulnerability Disclosure Program (VDP): The policy should give researchers a secure way to report vulnerabilities and commit the agency to responding to, and resolving, those reports quickly
Taken together, these measures will provide the cyber front line – software developers – with the strongest defense and the best opportunity to stay ahead of evolving adversaries.
About the Author
Asaf Karas is Chief Technology Officer of Security at JFrog. A seasoned security expert, Asaf has extensive experience in reverse engineering, device debugging, network forensics, malware analysis, big data and anomaly detection. He has spent several years working with international military organizations.
Karas can be reached on Twitter at @asfkrs and via the JFrog website: https://jfrog.com/