Open Source Security Foundation | Interview with Brian Behlendorf, GM, OpenSSF

This is a podcast episode titled, Open Source Security Foundation | Interview with Brian Behlendorf, GM, OpenSSF. The summary for this episode is: <p>Brian Behlendorf is the General Manager of the Open Source Security Foundation. Brian has dedicated his career to connecting and empowering the free software and open source community to both solve difficult technology problems and have a positive impact on society. From startup company founder, to advisor to the U.S. government, to non-profit board member and employee of the World Economic Forum, he's been at the forefront of the open source software revolution.&nbsp;</p><p>Join hosts Luke Schantz and Joe Sepi as they get Brian's take on the latest open source software developments. As the recent Log4j vulnerability has shown, open source software is not immune to security breaches and attack. Brian shares his views on the Log4j scramble, his recent White House meetings on software security, the costs of security and threat mitigation, and future challenges and opportunities in open source software.&nbsp;</p><p>Join us for a look back at Brian Behlendorf's unique career and see what's next for him and the movement he helped launch, this time on In the Open with Luke &amp; Joe.</p><p><br></p><p><strong>Key Takeaways:</strong></p><ul><li>[00:04&nbsp;-&nbsp;00:24] Intro to the episode</li><li>[02:00&nbsp;-&nbsp;02:49] Intro to Brian Behlendorf</li><li>[02:59&nbsp;-&nbsp;08:04] Brian's role with the Open Source Security Foundation</li><li>[08:46&nbsp;-&nbsp;14:16] The importance and newer focus on security</li><li>[15:29&nbsp;-&nbsp;18:27] How to get more folks, importantly the US Government, involved in Node.js</li><li>[18:52&nbsp;-&nbsp;21:43] SBOM</li><li>[21:48&nbsp;-&nbsp;26:17] The Alpha Omega Project</li><li>[27:28&nbsp;-&nbsp;30:58] Getting money and support for security</li><li>[31:46&nbsp;-&nbsp;35:02] The Best Practices badge</li><li>[35:12&nbsp;-&nbsp;38:44] Project Sigstore</li><li>[39:29&nbsp;-&nbsp;41:04] How 
to get involved in Open SSF</li></ul><p><br></p><p><strong>Resources</strong>:</p><p>Brian Behlendorf bio: <a href="https://en.wikipedia.org/wiki/Brian_Behlendorf" rel="noopener noreferrer" target="_blank">https://en.wikipedia.org/wiki/Brian_Behlendorf</a></p><p>Open Source @ IBM: <a href="https://www.ibm.com/opensource/" rel="noopener noreferrer" target="_blank">https://www.ibm.com/opensource/</a></p><p>Learn in-demand skills. Build with real code. Connect to a global development community: <a href="http://ibm.biz/IBMdeveloperYT" rel="noopener noreferrer" target="_blank">http://ibm.biz/IBMdeveloperYT</a></p><p><strong>Follow IBM Developer on social</strong>:</p><p>Twitter: <a href="https://twitter.com/IBMDeveloper" rel="noopener noreferrer" target="_blank">https://twitter.com/IBMDeveloper</a></p><p>Facebook: <a href="https://www.facebook.com/IBMDeveloper/" rel="noopener noreferrer" target="_blank">https://www.facebook.com/IBMDeveloper/</a></p><p><strong>More from IBM Developer</strong>:</p><p>Community: <a href="https://developer.ibm.com/community/" rel="noopener noreferrer" target="_blank">https://developer.ibm.com/community/</a></p><p>Blog: <a href="https://developer.ibm.com/blogs/" rel="noopener noreferrer" target="_blank">https://developer.ibm.com/blogs/</a></p><p>Call for Code: <a href="https://developer.ibm.com/callforcode/" rel="noopener noreferrer" target="_blank">https://developer.ibm.com/callforcode/</a></p><p>#opensource</p><p>#Developer</p><p>#Coding</p><p>#IntheOpen</p><p>#IBMDeveloper</p>
Intro to the episode
00:19 MIN
Intro to Brian Behlendorf
00:49 MIN
Brian's role with the Open Source Security Foundation
05:05 MIN
The importance and newer focus on security
05:29 MIN
How to get more folks, importantly the US Government, involved in Node.js
02:57 MIN
SBOM
02:50 MIN
The Alpha Omega Project
04:28 MIN
Getting money and support for security
03:29 MIN
The Best Practices badge
03:15 MIN
Project Sigstore
03:32 MIN
How to get involved in Open SSF
01:35 MIN

Luke: In this episode of In the Open, we are pleased to bring you a conversation with Brian Behlendorf, General Manager of the Open Source Security Foundation. We'll be discussing a variety of topics, including the recently announced Alpha Omega Project, the Best Practices badge, working groups, Sigstore and more. But before we welcome our guest, let's say hello to our co- host, Joe Sepi.

Joe Sepi: Hey Luke, how are you my friend?

Luke: Good, how are you doing, Joe?

Joe Sepi: I'm all right. It's a bit rainy out here and it's been raining for a couple of days and it's washing away all the snow, which I have mixed feelings about because now I'm left with a muddy, dirty mess.

Luke: It's funny you say that because I was reflecting on the same. I used to really dislike the snow when I lived in New York City because there was just no place to go with it. But I must say they really know how to deal with the snow out here in Connecticut. It's really not a problem.

Joe Sepi: And if it's going to be cold there might as well be snow. It's fun and just the blanket of white I really like. And it's quieter, it like dampens the sound. I like the snow. And I was actually hoping for warmer temperatures because there was some ice that I needed to clear, but I'm just not happy about all the snow melting away.

Luke: You'll get more before you know it. Before we welcome our guests, I just want to say to all our listeners, if you have any questions, please drop them in the chat. If you're catching this later on a replay or a podcast, check our Twitter handles and feel free to message us on Twitter. But without further ado, let's welcome our guest, Brian Behlendorf.

Brian Behlendorf: Hey there.

Luke: Hey Brian. Welcome to the show.

Joe Sepi: Yeah, welcome. Thanks for joining us.

Brian Behlendorf: Thank you, Luke. Thanks Joe.

Joe Sepi: How's the weather out there?

Brian Behlendorf: It's sunny, it's very dry. It's been an entirely dry January, which was depressing because we had a great series of storms in December that got us well over average. We're pretty civilized on the West Coast. We keep our snow in the mountains where we can go visit it. But yeah, no, it's nice and sunny, but we could sure use some rain.

Joe Sepi: So maybe let's start off with a bit of a self-introduction, if you don't mind.

Brian Behlendorf: Sure. As Luke said, I'm Brian Behlendorf, General Manager for the Open Source Security Foundation, which is embedded inside The Linux Foundation. I've been with The Linux Foundation since 2016, when I joined to lead something called Hyperledger, an enterprise blockchain initiative that was at the other end of the spectrum from all the cryptocurrency and ICO madness, and the NFT madness a bit, although there are ways to do NFTs that don't destroy the planet with energy consumption and that sort of thing. That's been a fun ride. I passed the baton on that and have been leading Open SSF since October. I'm also on the board of a couple of organizations: the Mozilla Foundation, and the Electronic Frontier Foundation, which I've been on since 2013. And I've had a career doing things in open source and open technologies, starting companies. I worked at the White House briefly, was CTO for the World Economic Forum for a while. So a bunch of different things.

Luke: Excellent. And so let's dig into a little bit of your current role. We'll first start with, what is the Open Source Security Foundation?

Brian Behlendorf: The Open Source Security Foundation, like most of the initiatives at The Linux Foundation, is a consortium of organizations, of stakeholders, who have pooled some resources, a bit of funding, a bit of their own staffs as volunteers on the project, to focus on enhancing the security, broadly stated, of the open source ecosystem, as well as focusing on the software supply chain in open source. I got started in open source, I think the first piece of open source software I used was in '91, just playing around with Usenet and FTP and Gopher as a freshman at Berkeley. And then in '92, setting up websites. So using Gopher sites at the time, pre-Web, and then getting started with Apache in '95. And in those earlier days, software and the internet as a whole was a much higher trust environment that we could take for granted. It's like when somebody you didn't know would email you, you're like, wow, you know about email too. You must be interesting, you must be competent, you must be somebody who I can by default trust. And likewise, when you'd find software on the internet you could download, there was this default assumption that the folks behind it were competent, that they maybe took security seriously. We didn't all worry about TLS hardening our connections at the time, because we trusted the admins on the boxes and the networks not to be snooping our traffic. Out of this high trust environment, the highly social interactions between developers on open source projects and the dependencies that they build on top of let us get a little bit lazy about things like, how do you really know, for the dependencies you pull in, the diligence and duty of care that those developers might have taken around responding quickly when security holes have been published, right? Or even just that they've been informed that there are security holes in their code.
How often are they paying attention to the compiler warnings, not errors, that kind of suggest we might not have wanted to cast that int to a pointer, because that might be a highly exploitable kind of thing to do? And certainly the array of different analysis tools you can run today speaks to more that's possible. So now transposing us into 2022, this really low trust, zero trust world that we live in, the vulnerabilities are not just from the code as it sits, the off-by-one errors that lead to buffer overflows and that kind of thing. They come from things like developers deciding to give the middle finger to enterprise and changing their JavaScript to print out a whole bunch of nonsense rather than doing the thing it had been doing for a few years. The faker.js, colors.js attack. Or somebody saying, hey, I see a package named something very generic, something.js. I wonder, if I register another package in NPM that is a slight misspelling of that, how many people will I catch who inadvertently include my code? Maybe I'll add a dash, or remove a dash that was there in the name of the module. And that's been a source of attacks as well. Or developers who use a simple name and password for NPM, for example, which recently fixed this, or other resources, their GitHub account, and through malware or some other thing, their credentials get compromised. And suddenly their privileged position inside of this software supply chain gets put under attack. But we also shouldn't forget that part of the challenge is we're dealing with software now that is so bifurcated, so highly granular.
The average application will pull in dozens to hundreds to thousands of underlying dependencies, each of them written by teams if you're lucky, and often just one or two people who perhaps are doing unsung yeoman's work down in the depths, who can't necessarily afford to think about things like threat modeling, or third party security audits, or the other types of things that might lead to better software. What we're trying to do at the Open SSF is say, let's look at the entirety of the supply chain: from when code goes from sitting in a developer's head into an IDE, to building it and building upon dependencies, and developers making choices about which dependencies to use, to getting into the package management systems and other distribution points to the end user. Where are there these defaults that come from this high trust world that we really need to reexamine? And so we have all sorts of projects, and really, sometimes it looks like a nerd's paradise, a kind of circus of all these different things going on under the Open SSF. What are the specifications that'll help us get further and faster toward hardening some of that supply chain? What is some tooling that'll help first accelerate adoption of those standards, but then also lead to higher quality code? What are some things we can do to try to help people evaluate objectively the risk involved in an open source module? And then finally, education plays a big part of this as well. How do we help train developers, who ordinarily do not receive any formal, and rarely any informal, training about how to write secure code? What are some patterns to avoid, some real anti-patterns in writing code? I took a few CS classes at Berkeley. It was my major until I dropped out, but most of what I knew about programming I picked up from man pages and O'Reilly books and that kind of thing. And most people in the industry are the same way.
Are there things we can more systematically do to try to raise the floor on people's understanding of what security means in open source? Anyways, it's a wide array of different things we're doing, and I would love to talk more about them.
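The typosquatting attack Brian describes, registering a near-miss of a popular package name, can be sketched mechanically. This is a hypothetical illustration, not any registry's actual defense: the popular-package list and the similarity threshold are made-up assumptions for the example.

```python
# Hypothetical sketch: flag package names suspiciously close to popular ones.
# POPULAR and the 0.8 threshold are illustrative assumptions, not real policy.
from difflib import SequenceMatcher

POPULAR = {"lodash", "express", "react", "colors", "faker"}

def is_suspicious(candidate: str, known: set[str], threshold: float = 0.8) -> bool:
    """Flag names very close to, but not equal to, a known popular package."""
    if candidate in known:
        return False  # the real package itself is fine
    return any(SequenceMatcher(None, candidate, name).ratio() >= threshold
               for name in known)

print(is_suspicious("lodahs", POPULAR))            # transposed letters -> True
print(is_suspicious("nodefetch", {"node-fetch"}))  # dropped dash -> True
print(is_suspicious("express", POPULAR))           # exact match -> False
```

A naive string-similarity check like this would also flag legitimately similar names, which is why real registries combine it with download counts, maintainer reputation, and manual review.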

Joe Sepi: Yeah, you threw a lot out there. There are all sorts of places to dig in. I kept thinking, oh yeah, we should, oh, that too. But I guess maybe before we even get into some of those details and how we address these concerns, I thought we'd set it up even a little bit more. And what I'm thinking about is, it's not just the developers and the end users and the people who are working in open source who are thinking about security. Now with Log4j, it's like everybody is really super hyper focused on security. It seems like everybody's talking about it everywhere I go. But that even goes up to the government, and that seems a bit of a new thing to me, the things that have been happening at that level. Maybe you could talk a little bit more about that too.

Brian Behlendorf: Yeah, well first off, it's unfair to the Log4j developers that the name of their project has become such a keyword for what's going on, because frankly, it seems like every four months there's a new thing that captures people's attention. SolarWinds is another example of this. And those developers are pros. They write software for a living. None of them are necessarily full-time on Log4j, but they're all using this in commercial applications. It's a little bit unfair for them to be hoisted up by that. Certainly what happened there became a poster child: even with the best of intentions, even with some of the basic processes that the Apache Software Foundation does have to try to present a degree of comfort and reliability to its downstream users, there's still very much a perspective of caveat emptor in a lot of the code. And what that leads to is not necessarily the right kinds of investment in things like a third party audit, or in things like threat modeling on the code and the like. And also, when the bug was revealed, there was a bit of an inadvertent reveal. There was a commit to fix the bug that had been pointed out by the researcher from Alibaba, and the commit mentioned a CVE number that had not been publicly disclosed. And people picked up on that and went, oh wait, this seems like a bigger issue. They very quickly had to tell the world what was going on, rather than being able to take the time to do a coordinated vulnerability disclosure process: to talk to the people who might be most affected by this to get them to update first, to think about how to make sure you have the right fix and not the series of four fixes that they ended up having. It also meant, though, that because it was so easy to compromise, and hard to find where Log4j was actually being used, and which version of Log4j, it caused this massive amount of disruption. And disruption that's expensive.
It was expensive to those developers, who started getting faxes and other weird emails from companies demanding that they do things, companies that had never had a relationship with them before. They weren't paying the Log4j developers or anything. And to suddenly make demands like they were was pretty unfair. Anyways, a lot of this bubbled up, and folks at policy levels started to realize there's something going on here that's either worrisome or merits further attention. It's almost as if every bridge and highway in America had been built by barn raising, by people digging into the ground and laying concrete individually, and then we all woke up and realized there's a lot of variance out there in the quality of the roads and the systems we have, and a lot of them seem to be getting potholes. We've been contacted by some folks at the NSC, the National Security Council, who work within the White House, asking questions about what this takes. Contrary to the perception that governments might either be ignorant of how software is built or how open source works, or think that everybody's a volunteer, or that there was something malicious going on here, we found them to be pretty knowledgeable about the mechanics of it, and asking some sincere questions about what government can do. Not just as a big user of open source software, in which they should be investing at least as much as Google; they probably have just as much as Google's revenue, if not more, in software development that they're funding. But also as a peer in the ecosystem: they should be participating and trying to understand how to help harden it and improve it. And also as the institution we turn to to help guarantee public safety, and to think about critical infrastructure and resilience in the face of nation state actors who are now starting to exploit these holes. They just wanted more information about that. They hosted a meeting that was originally going to be face-to-face.
Then we all flew in and it was canceled at the last minute, so we all flew home. But it was held with members of the NSC and the Office of the National Cyber Director, who've done quite a bit on software bills of materials and the like, and it was an open conversation. It was with folks from our organization, myself, Jim Zemlin, the Apache Software Foundation, and about 10 other firms, all talking about where these systematic weaknesses are. And most importantly, how do we not pin this on the open source developers, or do things that end up feeling like, here's a 300 page checklist of all the things that you must do, thou shalt, in order to allow your software to be used by government or others. Instead the focus was on where we can invest in some of these security kinds of actions that might help improve open source software. Where can we show up with code, with pull requests? What are the kinds of interventions that would feed into, rather than slow down, the collaborative innovation processes that make open source so powerful? It was really great to see that. There are some follow-ups coming out of that which speak more to this idea of where there might be some targeted opportunities. There are already groups within the US federal executive sector who have some resources to spend in this domain. I was just on a meeting earlier today, hosted by OpenForum Europe, where the Dutch Digital Minister and the French Digital Minister talked about setting up essentially an OSPO for the French government and resourcing it with 30 million dollars to spend on improving open source software in the interest of the French government. The great news is these are all digital public goods that are worldwide.
And so the investment dollar that the US government puts in, or the French government, or the Japanese, or even the Chinese, if it's targeted, if it's actually additive to those processes, will pay out for everybody and improve security. We're continuing these conversations. We're not aiming to be a lobbyist organization; we're here to figure out, if resources show up across the public or private sector, what the best ways are for those to be deployed to be helpful to the open source industry.

Joe Sepi: Yeah, this is fascinating to me. I think the way you just talked about it is really interesting, because it's two sides of this huge spectrum. We're talking about these governments, the US government, the French government, all these governments, but then we're also talking about a person who made a commit in a GitHub repo. These are humans. And I do feel bad for the Log4j folks and other folks who get caught up in this. But I just find it really fascinating that it spans that whole spectrum, from one person all the way to a whole government. And I think it's interesting, too: I work in the OpenJS Foundation and in the Node.js space, and one of the things we focus on in both those places is how to get organizations more involved in supporting the efforts, with different tech companies and whatever, ebbs and flows. And I'm curious, and I don't know if this is really a question, but my brain is thinking about how to get the US government more involved in Node.js, for example. I can go knock on the door of Google or Microsoft or whoever and find the person to talk to, but I wonder how to get more folks involved from that level too.

Brian Behlendorf: There's such a strong libertarian streak in open source software, historically, and in internet circles as well, where we're fairly afraid of involving government in the core governance processes of open source projects if we can avoid it, and not for poor reason. There are lots of examples of engagement by government in open source, both positive and a few negative ones, over the last 25 years. On the positive side, everything from SELinux, do you remember this? The NSA's secured and hardened version of Linux that actually fed a lot of interesting ideas regarding capabilities and the like into the Linux kernel. To something called VistA, which was the Veterans Administration's original electronic health record system, written by US government employees. It was open sourced through a series of Freedom of Information Act requests made to the Veterans Administration, and eventually became the open VistA health record platform. To other kinds of investments they've made. The State Department invested, for example, in open source tools to support human rights workers and whistleblowers and others working in dangerous countries. Lots of good work going on; they invested in Tor, for example, as well. And at the same time, there have been places where the US government has imposed requirements, things like FIPS certification, for example, which is a good thing. If you want your product that deals with encryption to be used in highly critical environments, you should expect there's probably a certification process for these kinds of things. But getting OpenSSL to pass FIPS certification has been a huge amount of work and cost and time delay. And it ends up meaning that they certify a version of a core TLS library, an SSL library, that is years out of date by the time they actually finish.
One thing we talked about in that meeting, and it has been a part of other conversations, is how do we get certification processes, or mandates like the SBOM mandate that was part of the executive order from last year, to be more practical and more reflective of the fact that open source development isn't focused on the end object so much as it's focused on a stream? And in fact, being able to quickly update in the face of a CVE depends upon treating software like streams that need continuous refreshment, continuous updates, rather than bars of gold that sit in a safe somewhere. I would expect future mandates in the procurement process around bringing in open source code to be focused less on a FIPS-style certification of a hard object and more about, how do we look at these risk scores? How do we look for certain behaviors, like getting a Best Practices badge on security, or your developers having training in writing secure software? Or the adoption of standards that speak to a healthy process rather than to a specific outcome. And I think that's going to end up generating positive benefit for users well beyond government users.

Joe Sepi: And to be clear, I don't want the government all up in my business, but I'm thinking about 18F, and just the way, perhaps since the Obama era, which I don't know if that's where you were involved, things have seemed more modernized, more digitally focused and more tech savvy. I wonder about INS talking to people in those places and having them get more involved as well. But anyway, you mentioned the SBOM stuff, the software bill of materials. I'm working on some of that internally and thinking about it externally as well. Maybe you could share more about what that looks like, and this whole concept of a stream rather than just a deliverable. How does that work in your view, in the community and the space that you're working in?

Brian Behlendorf: For folks who aren't familiar with it, a software bill of materials document is intended to say, for this piece of software, here are the underlying pieces of code that it incorporates, as well as some other standardized metadata about the software package to help you organize it better. Historically, one of the first uses for SBOMs has been in licensing: making sure that, for this package labeled as open source software available under the Apache license, all the underlying pieces are also Apache licensed or otherwise open source licensed. Or, oh wow, there's this oddball, unlicensed, or perhaps proprietarily licensed thing lingering inside; okay, we've got to figure out how to remediate that. Now the SPDX standard, which was really developed for license conformance and compliance, has been extended to also be a tool for tracing that tree of dependencies you have inside of a software project, like the label on the back of something you eat that tells you what's inside. And it's not a panacea. It's not, right now, today, easy to implement. Not the one day's worth of work that it really needs to be for somebody to take a standard build system and add SPDX support. That's changing, though. Now that SPDX is an ISO standard, there's a whole lot of corporate comfort with it, and that's starting to open up checkbooks, as well as procurement requirements upstream to technology vendors: hey, we expect you to publish an SPDX SBOM for what you sell to us. And that's going to push upstream to the underlying open source components, not in a mandate kind of way, but a, hey, we noticed this piece of software you're using, it'd be really beneficial if you were publishing an SPDX SBOM for it; we're going to submit a pull request to add that to your build system. But there's also work that needs to happen in improving the SPDX generation tools.
So that's one area where we're pulling together some resources to try to help: making it easy for the standard build systems or the standard CI/CD systems out there to generate these SPDX files by default, or with a real simple command line option. The other would be having people in a dev [inaudible] capacity go out and work with key open source projects, especially those that get embedded as libraries inside other people's software, and say, here's a pull request to add this to your build system. And then, for an app that incorporates that library, here's how to check the SBOM to make sure it meets the requirements you have, proving that validation through the supply chain. All that work is being done today. It's still a work in progress. The important thing was getting SPDX recognized as an international standard, which is what ISO status accomplishes for us. And now it's simply a ground game of going out and trying to get more adoption by organizations and open source projects.
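To make the "label on the back of something you eat" idea concrete, here is a hedged sketch of a bare-bones SPDX-style SBOM emitted as JSON. The top-level field names follow the SPDX 2.2 JSON format, but the helper function, application name, and package data are made up for this illustration; a real SBOM carries many more required fields (relationships, creation info, checksums).

```python
# Illustrative sketch of a minimal SPDX-2.2-style SBOM document.
# minimal_sbom and all package data here are hypothetical examples.
import json

def minimal_sbom(app_name: str, app_version: str, deps: list[dict]) -> dict:
    """Build a bare-bones SPDX-like document listing an app's dependencies."""
    return {
        "spdxVersion": "SPDX-2.2",
        "SPDXID": "SPDXRef-DOCUMENT",
        "name": f"{app_name}-{app_version}",
        "packages": [
            {
                "SPDXID": f"SPDXRef-{d['name']}",
                "name": d["name"],
                "versionInfo": d["version"],
                # SPDX uses NOASSERTION when license info is unknown.
                "licenseDeclared": d.get("license", "NOASSERTION"),
            }
            for d in deps
        ],
    }

doc = minimal_sbom("myapp", "1.0.0",
                   [{"name": "log4j-core", "version": "2.17.1",
                     "license": "Apache-2.0"}])
print(json.dumps(doc, indent=2))
```

The value of even this tiny document is the Log4j scenario Brian described: with an SBOM per application, "which version of Log4j are we running, and where?" becomes a query rather than an archaeology project.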

Luke: Brian, I know you announced a new program this week, the Alpha Omega Project. Could you tell us a little more about that?

Brian Behlendorf: Like all good open source projects, you want to be open, you want to be early; you want that to be chapter one rather than chapter 10 in the lifespan of a project. This builds upon what started out as a white paper written by Michael Scovetta at Microsoft, but which quickly found a lot of believers over at Google and across other members of our Open Source Security Foundation. And the idea here is that to actually bring better security practices to some of the key open source projects out there, it's not enough to say, here's a standard, here's a document, here's a white paper. You have to meet them where they are, as a set of security experts. Talk with them about what pieces inside their family of technologies perhaps are not as widely scanned as others. Or a known vulnerability that there haven't been the resources put together to address. Or here's how to adopt Project Sigstore, which is a package signing process. So part of it is high-touch; think of it like pro bono consulting on hardening and improving the security practices inside a project. And if there's a spot project that needs $30,000 to $50,000 worth of work, let's undertake that, right? Or a third party audit or something like that. And that's for the Alpha end of the spectrum of projects out there, perhaps the top 100 or 200. And frankly, if we can be helpful to a half dozen this first year, we'll be happy. At the other end of the spectrum, and I'm not talking about the millions of projects on GitHub, I'm talking more about, say, the top 10,000 projects that really matter: the ones that are part of a Linux distro, that are part of a modern build environment, the top ones on NPM, that sort of thing.
For perhaps those 10,000 most important projects, are there ways to systematically use tooling to get a better sense of not just the security posture of these projects, whether they're using Best Practices badges and pinning dependencies and the kinds of things that Scorecard looks for, but something deeper? Okay, so with Log4j, the problem was that they were taking in user submitted input and parsing it for format strings, and they had some protections against that, but then there was a hole in protecting against JNDI LDAP lookups, right? So the question is, if we know that's a problem, we can fix it for Log4j, but how many other Java projects are out there that potentially take in this kind of input and do this kind of unsafe thing, that forgot to close up this JNDI hole in some way? And if you could query across those 10,000 projects for the Java ones: do you do this kind of thing? You do? Let's dig in a little bit closer. You might discover new vulnerabilities, or new facts that give you some pause and make you want to communicate with the maintainers of the project. If this were a set of dinner cutlery in your drawer and there's a big chainsaw sitting in the middle, you might want to reconsider whether that chainsaw is what you want to have there, or at least put a chain guard around it while it's sitting there. Omega is intended to be that surveillance system for the broad suite: can we use automated tooling and pattern matching and other kinds of ways to interrogate the code that's sitting there and ask, are there things to be worried about? And then highlight that and work with maintainers to ask, is this something real? Or, if we think we've found a vulnerability, to work that through perhaps a neutral, nonprofit driven version of Project Zero at Google, right? It's not a perfect metaphor. We definitely want to be more collaborative with the maintainers if we find weaknesses and issues. It is something that's hard to be entirely public about.
You don't want to surprise maintainers by finding a vulnerability and telling the world about it first. One of the hardest parts about this project is that it is going to be pretty human-intensive, resource-intensive. We have raised five million dollars to get started on this, to start to recruit those teams to do that work, to start to perhaps write some checks to some projects that could use them, as well as to put together the platform for being able to review this code and ask those kinds of questions. But it's seed capital, seed funding. To really do this right will require a lot more funds, but the amount of positive impact I think we can have on open source will be far beyond the cost that comes in. So that's what Alpha Omega is about. It's early days for sure, and we're still figuring out our engagement model with the public. But if you're interested further in this, we'll have a webinar on it coming up soon, and there's a place to talk about it. We're really eager to figure out how we can leverage the expertise that's out there and publicly available from volunteers. And if anyone wants a new gig wearing a cape and fighting on the side of good, we're going to have some job descriptions up soon.

Luke: It's so interesting to hear you dissect this space, because I feel like in past eras, when everything was mechanical, it was like, okay, I make a gear and it's used wherever. Now, if you make that gear, you're interacting, like you said, with nation states. It's amazing how we're all in this together, and it's very complicated, like you say. Maybe there's a decent amount of money that needs to go into securing it, but compare that to the potential losses, and the losses that we're experiencing every day; we've seen this explosion of ransomware. The Colonial Pipeline was a big one, I thought, that brought it to everybody's attention: wow, this is serious. So any investment that a company or the government makes is obviously going to be well worth it, because the losses and the threats to both public and private life are just huge.

Brian Behlendorf: Yeah. Yeah, definitely agree.

Joe Sepi: I would add too, it's interesting: security is newsworthy and exciting when something goes wrong, but when something's going well, you don't talk about it. It's not in the news. And so it's interesting. You want things to be going well, but when they are going well, is it any easier to get money or support or involvement? When it goes bad, then everybody's like, oh, let's throw things at this to fix it.

Brian Behlendorf: We have to get out of this world where security is only punished in the negative rather than rewarded in the positive, right? Where people only talk about it or prioritize it or spend money on it after the crisis has hit, rather than long before. I would love to see a world where, as a developer, I'm able to see quickly: I'm going to build upon these couple of dependencies. If I'm writing an app in Java, I need a logging framework, and I have multiple choices. Today there are multiple choices beyond Log4j, right? Which of these projects follows a set of practices that speak to probably better security outcomes? For which of them have the bulk of the maintainers verifiably taken some sort of course in secure software development recently? If there are objective tools that I can use to decide as a developer what platform to build on, and that favors those types of projects, then there's suddenly a positive incentive for projects to start doing those things: you get more users, build a bigger community, get recognized in some way. And I think of this at a corporate level too. Suppose we come up with a set of tooling and metrics and other systems that are objective and automatable, and that allow a company to say: hey, I'm willing to tolerate a little bit of risk because I want to jump into a new space like blockchain or whatever, and that stuff's crazy but I know I've got to do it, so I'll take a little bit higher risk there. But for my core banking systems and payment systems or whatever, I'm going to use stuff that's much more secure. They can dial that up and down, but consciously, rather than being surprised by it. And everybody today is talking about, and trying to buy, and trying to price cybersecurity risk insurance, because there are starting to be big fines for breaches and that kind of thing. 
In fact, the FTC made a statement at the beginning of January that they expect people to upgrade if they're vulnerable to the Log4j flaw. Because if they fail to remediate and Log4j leads to a bigger breach, the FTC will add additional fines to that breach if it's been shown that you did not update. Now, that adds to the punitive side, but can we spin that into something positive? Insurance is one way to do that. Imagine the insurance companies had an objective tool to go to an enterprise and say: for your use of software, open source or not, we come in, we run a tool, we scan it. They have these today for licenses; there's no reason you couldn't have them for security posture as well. And objectively, here's your score. And by the way, if you improve that score by 20%, because you make different choices, or you invest in Log4j, or you invest in this other project and improve the score that way, then we'll cut your premiums by this much. That could start to create a positive incentive for those companies to invest in the kinds of things that aren't sexy to invest in these days: paying off technical debt, looking for security holes, responding to small bugs that might actually indicate larger breaches. Or even for the insurance companies themselves to recognize their collective interest and go: hey, we're going to pay out fewer claims if we help the industry harden by going after the low-hanging fruit out there. I'm really fascinated by all the financial mechanisms we might be able to use to encourage the kind of software work that is so often left on the floor in the rush to add features or meet some deadline. To pay off technical debt, to close up holes, and hopefully also to motivate the move toward things like memory-safe languages or other kinds of deeper refactoring or redevelopment changes that would just get us out of a whole category of potentially problematic behavior.
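The premium-discount incentive Brian imagines can be made concrete with a toy pricing rule. Everything here is hypothetical: the function, the discount rate, and the scores are this sketch's inventions, not any real insurer's model.

```python
def adjusted_premium(base_premium: float, old_score: float, new_score: float,
                     discount_per_point: float = 0.5) -> float:
    """Hypothetical pricing rule: each percentage point of improvement in an
    objective security score shaves discount_per_point percent off the premium.
    Regressions never raise the premium here, and the discount is capped at 90%."""
    improvement = max(0.0, new_score - old_score)
    discount = min(improvement * discount_per_point / 100.0, 0.9)
    return round(base_premium * (1.0 - discount), 2)

# Improving a scan score from 60 to 80 (20 points) cuts a $10,000 premium by 10%:
print(adjusted_premium(10_000, 60, 80))  # 9000.0
```

The shape of the rule matters more than the numbers: once the score is objective and automatable, the insurer can tie real money to it, and the incentive to pay down technical debt follows.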

Luke: That totally makes sense. And then it's also a language that the business side can understand, where they're like, hey, there are these mechanisms we can interface with. Because I think sometimes something gets lost in translation between the frontline technical developers, who are under pressure to deliver features (hey, this is the ROI we're looking for), and this work, which is more about preventing loss; it's a new language for the business side to fully appreciate. But I'm sure those punitive fines are something they really recognize.

Brian Behlendorf: It's easy to do the fines, it's easy for government or otherwise to say we'll ding you this, but that's going to be the least helpful thing I think to open source developers and organizations around open source.

Luke: We are not running low on time, but I just wanted to make sure we touched on Scorecard and your Best Practices badge because I think these are pretty interesting.

Brian Behlendorf: The Best Practices badge came out of a previous effort called the Core Infrastructure Initiative, which arose in response to Heartbleed (let's see if we can get funding for OpenSSL) and some other activity that played out over the last few years, and I think it did have a positive impact on security at that layer. But one of the interesting side projects from that was something called the CII Best Practices badge, which is basically a checklist of the kinds of things that open source projects (and I don't mean individual developers, but the projects they form as a collective) should do to help attest to and enhance the integrity and security of those projects. So things like: do you have a security team that's reachable by a single email alias? Do they respond in a certain amount of time to messages that come through it, whether valid or not? Do you have a posted vulnerability disclosure process? All of these things are human-level; they require somebody who's involved with the project to sit there and grade themselves. And there's a little bit of work from that point of view, but if you get to 100%, you can display a Best Practices badge (I think even at 90% it shows all green). If you get 50% of the way, I think you can show a yellow badge. And even if you're just starting the process, you can still get a badge that indicates, I'm only 9% of the way through. And then there's a website where you can look up software projects and see who has actually filled out the badge, how close they are, that sort of thing. Some projects also put that badge on their own GitHub pages, their own project websites. That's the Best Practices badge, and we'd like to see it become standard for everybody in the open source space, at least for all the foundations in this space. 
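The badge tiers Brian sketches from memory can be written down as a simple mapping. This is an illustrative guess at the thresholds based only on his description (he himself hedges the exact percentages), not the badge program's actual rules.

```python
def badge_display(percent_complete: int) -> str:
    """Hypothetical mapping from checklist completion to a badge display,
    loosely following the tiers described above: all green near the top,
    yellow at the halfway mark, and a raw percentage below that."""
    if percent_complete >= 90:
        return "passing (green)"
    if percent_complete >= 50:
        return "in progress (yellow)"
    return f"in progress ({percent_complete}%)"

print(badge_display(100))  # passing (green)
print(badge_display(50))   # in progress (yellow)
print(badge_display(9))    # in progress (9%)
```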
If you've organized collective action around defending the integrity of an open source project, and I'm not talking about the one- or two-person GitHub repos, but an actual foundation, maybe you should start to publish this for your top projects, maybe even all your projects. The Scorecards effort started at the Open Source Security Foundation and builds upon that by having scriptable, automatable tools that look for certain practices and behaviors that speak to better security. So things like: do you have a fuzzing step in your software tests? First off, do you have tests? If you don't have tests, that's a bad thing. Are you testing for negative things, not just positive things? Which is really critical: you want to be able to show you can throw gunk at an input and it won't cause the program to go sideways. Fuzzing is simply an extension of that, and there are some really good open source fuzzing tools out there now. Scorecards is designed to be much lighter weight for a project to pick up and start scanning its own code. But again, Scorecards feeds into this kind of objective analysis of how much trust I can expect to have in a body of code. Not trust that it's guaranteed to be defect-free, of course, or to never have a CVE published against it, but at least that the developers are exercising some duty of care. And there's a website, metrics.openssf.org, that you can go to and see how both of these are applied across a wide range of open source projects: who's been run through Scorecards, who's applied for the badge, that kind of thing. I expect we'll see other tooling in this space as well that tries to objectively measure the trust you might place in how the software is built, or in characteristics of the software itself. That's the Scorecards and badge work.
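The Scorecards idea, automatable checks rolled up into an objective number, can be sketched in a few lines. This is a hypothetical miniature inspired by OpenSSF Scorecard, not its real implementation: the `Repo` fields, check names, and 0-to-10 scale here are stand-ins (the real tool inspects live repository data and weights its checks).

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Repo:
    """Stand-in for repository metadata an automated scanner would gather."""
    has_tests: bool
    has_fuzzing: bool
    pins_dependencies: bool
    has_security_policy: bool

# Each check is a pure function of the repo's observable practices.
CHECKS: dict[str, Callable[[Repo], bool]] = {
    "CI-Tests": lambda r: r.has_tests,
    "Fuzzing": lambda r: r.has_fuzzing,
    "Pinned-Dependencies": lambda r: r.pins_dependencies,
    "Security-Policy": lambda r: r.has_security_policy,
}

def score(repo: Repo) -> float:
    """Fraction of passing checks, scaled to a 0-10 score."""
    passed = sum(check(repo) for check in CHECKS.values())
    return round(10 * passed / len(CHECKS), 1)

repo = Repo(has_tests=True, has_fuzzing=False,
            pins_dependencies=True, has_security_policy=True)
print(score(repo))  # 7.5
```

Because every check is scriptable, the same scan can run across thousands of repositories unattended, which is exactly what makes the "objective analysis of trust" Brian describes feasible at scale.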

Joe Sepi: Yeah, I feel like there's so much we could be talking about, but I'm trying to be cognizant of time. What else do you think is super important that folks take away from the work that you're doing? And then-

Brian Behlendorf: Well, there's another project that's really gained a head of steam called Sigstore, Project Sigstore. Look, some of the better-run open source projects have long published PGP signatures on releases, right? That's something we even started doing at Apache in 1995, but there are some weaknesses to it. Aside from its use in the Ubuntu package manager and a few other places, it's not universal that PGP signatures are used. In fact, in package managers you have one key per repo, and it's really just the last mile rather than a signature all the way up the stream to the development teams that published the code originally. Project Sigstore is an attempt to make it a habit for developers to sign releases. And rather than depending upon developers to know how to configure PGP correctly, or some of the other signature tooling (we've never been really good, as an internet, as an industry, at PKI), this is instead based on short-lived keys tied to your email address, in much the same way that Let's Encrypt bases TLS certificates upon your ability to receive an email. So Sigstore issues these short-lived, what we call ephemeral, keys. Those get signed, and then the fact of their issuance is recorded in a transparency log, which is kind of a blockchainy, distributed-ledger thing. It's something Google actually came up with for Certificate Transparency, their system for auditing TLS certificate issuance. It's a really great way to bootstrap a real simple PKI system that is entirely developer-friendly and can be woven into automated tooling and the like. The idea is that you should be able to check these signatures in your build processes to know that you're pulling down the stuff you expect to be pulling down, that somebody didn't man-in-the-middle you as you were pulling a package down from NPM or some other place. And then when you put those pieces together, you can sign the result, and downstream from you they can validate that. 
And it really is something that came out of the Cloud Native Computing Foundation and the security TAG there; huge credit is due to the folks who've been working on that. And there's been a bunch of automated tooling to add that into container distribution and validation in the processes that are out there. This is really exciting stuff. It weaves together a lot of open source projects from different foundations, and as an effort it's taking off. We'd really love to see that continue. That's one piece. There's an even smaller piece, but one that I think is pretty significant and that we're hoping to expand. We got a bunch of codes to distribute to developers that they could use to claim a free multi-factor auth hardware token, the kind you plug into a laptop to help verify your identity. We were able to get a thousand tokens, which we distributed to the top 100 open source projects as determined by our critical projects working group, which said, based on all this data, here's the top 100. We sent them 10 codes apiece. Not all of them have been claimed. We're hoping to get more tranches of these and expand the number of people we can reach. But multi-factor auth, the fact that it's not a standard part of most software developers' lives, even though they'll use it to log into their Coinbase account or bank account or whatever, that's a big hole. And it's a hole that's been exploited to get cryptocurrency miners and malware into the supply chain. It's a small project, but one that we hope to expand to many more people beyond that. Those are some of the other things going on. Again, OpenSSF can feel like a circus at times, and part of our job is to try to make it more cohesive and easier to explain to people. We'll get better at that over time, I'm sure. I don't even have a marketing person yet; it's kind of me. But those are some other things I thought might be worth throwing out.
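The transparency-log idea behind Sigstore can be modeled with a tiny hash chain. This is a drastically simplified, hypothetical sketch: Sigstore's actual log (Rekor) uses a Merkle tree with inclusion proofs and signed checkpoints, but the core property shown here is the same: each entry commits to everything before it, so tampering with any earlier record is detectable by anyone who holds a later head.

```python
import hashlib

class TransparencyLog:
    """Toy append-only log: each head hash covers the previous head plus
    the new record, so rewriting history breaks every later head."""

    def __init__(self) -> None:
        self.entries: list[bytes] = []
        self.heads: list[str] = []

    def append(self, record: bytes) -> str:
        prev = self.heads[-1] if self.heads else ""
        head = hashlib.sha256(prev.encode() + record).hexdigest()
        self.entries.append(record)
        self.heads.append(head)
        return head

    def verify(self) -> bool:
        """Recompute the whole chain and compare against the stored heads."""
        prev = ""
        for record, head in zip(self.entries, self.heads):
            if hashlib.sha256(prev.encode() + record).hexdigest() != head:
                return False
            prev = head
        return True

log = TransparencyLog()
log.append(b"signed release-1.0 with ephemeral key abc")
log.append(b"signed release-1.1 with ephemeral key abc")
print(log.verify())  # True
log.entries[0] = b"tampered"
print(log.verify())  # False
```

This is why recording key issuance in such a log helps: the short-lived key itself can expire, but the public, tamper-evident record that it existed and signed a given artifact persists.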

Joe Sepi: Yeah, it's interesting. I find OpenSSF really interesting because I've spent a lot of time in The Linux Foundation, but it's a different kind of foundation. It's not just a home for these projects. Which, by the way: we've been talking a lot about all the things you can be doing, and if you have an open source project, I encourage you to get it into a foundation, where you get all this support and help and people working together to solve these sorts of problems. But I find OpenSSF really interesting because it's really about best practices and tooling and things like that. Obviously one call to action is to adopt these sorts of tools and practices and resources. But are there more ways folks should be thinking about getting involved in OpenSSF?

Brian Behlendorf: We're different from the average open source project, you're right, in that we're not primarily focused on one piece of software or a small handful of pieces of software, but are instead very meta, with efforts in education and guides and content and that kind of thing, in addition to some software packages. So for example, all the source code behind Sigstore you'll find within our GitHub org. But the primary unit of organization within OpenSSF is our working groups. If you come to the website, you'll see a direct link. You've got six different working groups, from vulnerability disclosure, to education and training for developers, to identifying critical projects, to focusing on supply chain issues. All of those working groups have a Slack channel, have an email alias, and also meet by Zoom at least once every two weeks to talk about where to go and what they're working on. And it's pretty high-level engagement in those working groups; you'll find people from well-known companies and from startups who are baking these standards into their products and services. We would love people to get involved there. We have published some guides and, like I mentioned, the edX courses for secure coding. If you're just starting your journey into open source and cybersecurity, you might even want to start with those and see what this space means and what some of the basics are for navigating through it. Beyond that, it's a target-rich environment, is about the most I can say. The working groups are a good place to learn about what they're doing and all the other efforts going on. Each working group has four or five different sub-efforts going on within it, which have spawned things like Sigstore and others. So no matter what level you're at, please come, please check us out, please find a way to get engaged. We'll make sure your time is well spent.

Joe Sepi: Being in the Node space, when people ask how to get involved, it's not always easy, but that's what I encourage folks to do as well. Look for the groups that are working on it, the people. Attend meetings, maybe look at the issues and things they're working on in the repos. Then you can get familiar with what's happening and get to know some of the people. That's usually a great way to get involved, because there are people who are happy to have you there and happy to help you get more involved, and that's your entry point.

Brian Behlendorf: The people are the most important thing. My standard open source deck used to have a slide with the Simpsons' take on Soylent Green, the "now with more girls" product gag from a Simpsons episode. But open source software is Soylent Green: it's made of people. The bucket of bits hardly matters; it's really the people behind it. And OpenSSF has been intensely volunteer-driven since inception. When we finally did put together some funding and set up as an operation, it was really just to put rocket boosters on a set of efforts that started long before I showed up, and long before the first dollar showed up to move them forward. And that's still the core of what we do. The core is public-facing. The core is validation through collaboration, making sure we can be helpful to all these other open source projects and integrated into them, while really benefiting from the expertise that lies out there about what we should be doing.

Luke: Yeah, I was just going to comment, I love what you were saying there, because one of the things we try to highlight on this show a lot is demystifying open source. We often get the question, "How do I make money doing open source?" And it's always a complicated answer: maybe you're not making money from the open source itself per se, but it's what that means to the industry and how you fit it into your career. And then again, just the networking. I would also mention just looking at the OpenSSF members: it's really an all-star list of enterprise companies, top financial organizations, all the hyperscalers, all kinds. It's a really interesting list of companies. And I would say to upcoming developers interested in open source: maybe you don't see the short-term "I'm going to make a dollar from this commit," but just being a part of it is a great way to differentiate yourself in that ecosystem and to network. But then again, find the right fit, so you're not spending all your time working on something that doesn't align strategically with your career.

Brian Behlendorf: Yeah. First off, cybersecurity is one of the most in-demand spaces in software development. And even just learning the lingo, learning what companies are working on, what open source projects do in terms of addressing this space, can be a tremendous boost to one's career prospects. And you've probably had lots of people mention that when recruiting developers these days, folks look a lot less at formal resumes and a lot more at one's GitHub profile and where you've participated and contributed. I can't promise you that if you join a working group and sit in on a Zoom call you're going to get a $300,000-a-year gig or anything, but it's probably a better use of an hour, especially at a time when it's hard to meet face-to-face. It's hard to even have meetups, let alone major conferences. I will mention, by the way, that on June 21st to the 24th there is the Open Source Summit taking place in Austin. The Linux Foundation is putting this together; it's our main all-of-our-communities-together-under-one-roof kind of event. And there will be a Supply Chain Security Con happening in parallel that we'd love to see people come out for. The CFP for that is open. If you're either working on some of this stuff or simply have an interest and want to talk, it'd be really great to have you there, both at Open Source Summit (you see the URL here on the screen) and specifically at Supply Chain Security Con. That'll really be our first chance as a community to get together en masse, face-to-face, and talk about the projects and how we move them all further and faster.

Joe Sepi: Yeah, knock on wood that everything's smooth through the summer, because June in Austin is going to be pretty amazing. There's the OpenJS World event too; we're partnering with cdCon, from the Continuous Delivery Foundation. That's going to be fantastic. I actually already messaged my friend in Austin about maybe renting a place for the month and coming down with my dog. I love Austin, so it's going to be great. I encourage folks to check out this link and look for the Supply Chain Security event as well; it seems like there are a number of spinoff or smaller events that are part of this overall event. And I'll just comment too, on the thing we were talking about previously: working in open source, I think you gain invaluable skills, because there's oftentimes no manager or anything. You need to figure out how to work with people, get along with people, move things forward, find consensus. The skills you learn in open source are really invaluable. I highly recommend it. We're running out of time, and this happens every time, but I feel like we didn't really get to dig into who Brian Behlendorf is, and there's so much there. We could have our own show, whether it's early rave culture or your tech leadership at Burning Man. I'd love to do a show just on Brian Behlendorf.

Brian Behlendorf: Yeah, I'm old is what you're saying. I have a deep history and, yeah, I'm a dinosaur. I'm just really grateful I can be working on something that all the kids are into these days.

Joe Sepi: Security is so hot right now.

Brian Behlendorf: Yep.

Luke: It is funny. We were joking before the show that, oh, this isn't going to be a three hour conversation, but I think it easily could have been.

Brian Behlendorf: Yeah. Well, what are you doing for the next two hours?

Joe Sepi: No, this is great. And this is a programming note, we have Jamie Thomas from IBM coming on the show soon. She was in the White House meetings and I'm really eager to talk to her about all the work that she's focused on. She's on the board at the Open SSF-

Brian Behlendorf: She's Chairwoman of our board, yes.

Joe Sepi: Chair of the board. Great.

Brian Behlendorf: Great, great [inaudible] to have on the show.

Joe Sepi: Yeah, I'm excited. She's going to be coming. And we're talking to David Wheeler, who is, his title again, Brian?

Brian Behlendorf: Director of Software Supply Chain Security for The Linux Foundation. He's been working on this and working on Open SSF longer than I have. And so we just skimmed the surface. You can ask him to drill down and get surgical and, yeah, he'd be happy to. He's great at that.

Joe Sepi: Yeah, I'm looking forward to that as well. Security is top of mind for everyone right now. I encourage folks to check out all the links that we had. We'll have them in the show notes and everything. And any closing thoughts, Brian?

Brian Behlendorf: Again, I'll repeat I'm super appreciative of the chance to be able to work on this. This is a space where it's been far too easy to blame open source developers or the open source business model, or that we're all Communists or something. No, things are fundamentally good and healthy, but it's long overdue for us to take a look at how we write code and systematically make some improvements. I just feel super fortunate to be in a position to be able to be the pretty face on the front of a huge community of people working together to make this stuff happen. It's a really exciting space and I'm grateful for the chance to talk about it here.

Joe Sepi: And I appreciate all the work that you're doing. And as we've talked about, open source is people. I do hope we get some systems in place like we've been discussing here, where when you go to look at an open source project, you look at the license, I look at the code of conduct and things like that as well, but security should be right at the top of that list. Check the badge; see if they have a SECURITY.md file. What is their process for reporting vulnerabilities? Do they have any sort of reporting program? All of these things I think are really critical, and I hope they rise to the top of the things people consider when they're using open source software.

Luke: I think we've reached a point that it makes sense to end the conversation. This has been so much fun. I would say maybe later in the year after some more things have happened, it'd be great to have you back and check in and see what's happened. Because this is obviously such an important subject and organization, and it's only going to be more important moving forward.

Joe Sepi: Yeah, maybe we'll do it live in Austin.

Brian Behlendorf: Once we've solved all security problems and there's no more security bugs, I'm happy to come back and claim victory.

Luke: You can go onto your full retirement career as a DJ.

Brian Behlendorf: Exactly. Just live out on the playa at Burning Man full- time.

Joe Sepi: That'd be great. That'd be great. Thank you so much, Brian. It's been a pleasure talking with you. Thank you for all the work that you're doing, and I look forward to talking more about this stuff.

Brian Behlendorf: Yep, I'm happy to come back. Yeah, let's talk again soon.

Joe Sepi: Cool. Thanks. Cheers.

DESCRIPTION

Brian Behlendorf is the General Manager of the Open Source Security Foundation. Brian has dedicated his career to connecting and empowering the free software and open source community to both solve difficult technology problems and have a positive impact on society. From startup company founder, to advisor to the U.S. government, to non-profit board member and employee of the World Economic Forum, he's been at the forefront of the open source software revolution. 

Join hosts Luke Schantz and Joe Sepi as they get Brian's take on the latest open source software developments. As the recent Log4J vulnerability has shown, open source software is not immune to security breaches and attack. Brian shares his views on the Log4J scramble, his recent White House meetings on software security, the costs of security and threat mitigation, and future challenges and opportunities in open source software. 

Join us for a look back at Brian Behlendorf's unique career and see what's next for him and the movement he helped launch, this time on In the Open with Luke & Joe.

Today's Guests


Brian Behlendorf

|General Manager, Open Source Security Foundation, The Linux Foundation