Open Source Security Foundation | Interview with Brian Behlendorf, GM, OpenSSF
Luke Schantz: In this episode of In the Open, we are pleased to bring you a conversation with Brian Behlendorf, general manager of the Open Source Security Foundation. We'll be discussing a variety of topics, including the recently announced Alpha-Omega Project, the best practices badge, working groups, Sigstore and more, but before we welcome our guest, let's say hello to our co-host, Joe Sepi.
Joe Sepi: Hey Luke, how are you my friend?
Luke Schantz: Good. How are you doing, Joe?
Joe Sepi: I'm all right. It's a bit rainy out here and it's been raining for a couple of days and it's washing away all the snow, which I have mixed feelings about because now I'm left with a muddy, dirty mess.
Luke Schantz: It's funny you say that because I was reflecting on the same, and I used to really dislike the snow when I lived in New York City because there was just no place to go with it, but I must say, they really know how to deal with the snow out here in Connecticut. It's really not a problem.
Joe Sepi: If it's going to be cold, there might as well be snow. It's fun and just the blanket of white I really like, and it's quieter. It dampens the sound. I like the snow and I was actually hoping for warmer temperatures because there was some ice that I needed to clear, but I'm just not happy about all the snow melting away.
Luke Schantz: You'll get more before you know it. Before we welcome our guest, I just want to say to all our listeners, if you have any questions, please drop them in the chat. If you're catching this later on a replay or a podcast, check our Twitter handles and feel free to message us on Twitter, but without further ado, let's welcome our guest, Brian Behlendorf.
Brian Behlendorf: Hey there.
Luke Schantz: Hey Brian. Welcome to the show.
Joe Sepi: Yeah, welcome. Thanks for joining us.
Brian Behlendorf: Thank you, Luke. Thanks, Joe.
Joe Sepi: How's the weather out there?
Brian Behlendorf: It's sunny. It's very dry. It's been an entirely dry January, which was depressing because we had a great series of storms in December that got us well over average. We're pretty civilized on the West Coast. We keep our snow in the mountains where we can go visit it, but it's nice and sunny, but we could sure use some rain.
Joe Sepi: Maybe let's start off with a bit of a self introduction, if you don't mind.
Brian Behlendorf: Sure. As Luke said, I'm Brian Behlendorf, general manager for the Open Source Security Foundation, which is embedded inside of the Linux Foundation. I've been with the Linux Foundation since 2016, when I joined to lead something called Hyperledger, an enterprise blockchain initiative at the other end of the spectrum from all the cryptocurrency and ICO madness, and the NFT madness a bit, although there are ways to do NFTs that don't destroy the planet with energy consumption and that sort of thing. That's been a fun ride. I passed the baton on that and have been leading OpenSSF since October. I'm also on the board of a couple of organizations: the Mozilla Foundation, and the Electronic Frontier Foundation, which I've been on since 2013. I've had a career doing things in open source and open technologies, starting companies. I worked at the White House briefly and was CTO for the World Economic Forum for a while, so a bunch of different things.
Luke Schantz: Excellent. Let's dig into a little bit of your current role, and we'll first start with: what is the Open Source Security Foundation?
Brian Behlendorf: The Open Source Security Foundation, like most initiatives at the Linux Foundation, is a consortium of organizations, of stakeholders who have pooled some resources, a bit of funding, a bit of their own staff time as volunteers on the project, to focus on enhancing the security, broadly stated, of the open source ecosystem, as well as focusing on the software supply chain in open source. I got started in open source in '91, I think, when I used my first piece of open source software, just playing around with Usenet and FTP and Gopher as a freshman at Berkeley; in '92, setting up websites, and Gopher sites at the time, pre-web; and then getting started with Apache in '95. In those earlier days, software and the internet as a whole were a much higher-trust environment. When somebody you didn't know emailed you, you figured: you have email, too, so you must be interesting, you must be competent, you must be somebody who I can, by default, trust. Likewise, when you found software on the internet you could download, there was this default assumption that the folks behind it were competent, that they took security seriously. None of us worried about hardening our connections with TLS at the time because we trusted the admins on the boxes and the networks not to snoop our traffic. Out of this high-trust environment, the highly social interactions between developers on open source projects and the dependencies they build on top of, we got a little bit lazy about things like: how do you really know, for the dependencies you pull in, what diligence and duty of care those developers have taken, whether they respond quickly when security holes are published, or even when they've just been informed that there are security holes in their code? How often are they paying attention to the compiler warnings, not errors, that suggest we might not have wanted to cast that value into a pointer, because that might be a highly exploitable thing to do? And certainly, there's an array of different analysis tools you can run today that speak to how much more is possible. Now transpose us into 2022, this really low-trust, zero-trust world we live in, where the vulnerabilities come not just from the code as it sits, the off-by-one errors that lead to buffer overflows and that kind of thing. They come from things like a developer deciding to give the middle finger to enterprise and changing his JavaScript to print out a whole bunch of nonsense rather than doing the thing it had been doing for a few years, the faker.js and colors.js incident. Or somebody saying, "Hey, I see a package named something very generic, something.js. I wonder, if I register another package on NPM that is a slight misspelling of that, how many people will I catch who inadvertently include my code? Maybe I'll add a dash or remove a dash that was there in the name of the module." That's been a source of attacks as well.
Or developers who use a simple username and password for NPM, for example, which recently fixed this, or for other resources like their GitHub account, and through malware or some other thing, their credentials get compromised, and suddenly their privileged position inside of this software supply chain gets put under attack. We also shouldn't forget that part of the challenge is we're dealing with software that is so highly granular now. The average application will pull in dozens to hundreds to thousands of underlying dependencies, each of them written by teams if you're lucky, and often just one or two people who are perhaps doing unsung yeoman's work down in the depths, who can't necessarily afford to think about things like threat modeling or third-party security audits or the other types of things that might lead to better software. What we're trying to do at the OpenSSF is say, "Look, let's look at the entirety of the supply chain, from when code sits in a developer's head, to an IDE, to building it and building upon dependencies, and the choices developers make about which dependencies to use, to getting into the package management systems and other distribution points on the way to the end user. Where are the defaults that come from that high-trust world that we really need to reexamine?" We have all sorts of projects, and sometimes it looks like a nerd's paradise, a kind of circus, all these different things going on under the OpenSSF, but it comes down to: what are the specifications that'll help us get further and faster in hardening some of that supply chain? What is some tooling that'll first accelerate adoption of those standards, but then also lead to higher-quality code? What are some things we can do to help people objectively evaluate the risk involved in an open source module? Finally, education plays a big part of this as well. How do we help train developers, who ordinarily do not receive any formal, and rarely any informal, training in how to write secure code? What are some patterns to follow, and some real anti-patterns to avoid, in writing code? I took a few CS classes at Berkeley, it was my major until I dropped out, but most of what I knew about programming I picked up from man pages and O'Reilly books and that kind of thing. Most people in the industry are the same way. Are there things we can do more systematically to raise the floor on people's understanding of what security means in open source? It's a wide array of different things we're doing, and I would love to talk more about them.
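To make the typosquatting pattern Brian describes concrete, here is a minimal sketch in Python. It is illustrative only, not how any registry actually screens submissions; the popular-package list, the one-edit threshold, and the candidate names are all invented for the example.

```python
# Sketch: flag package names that sit one edit away from a popular package,
# the pattern behind typosquatting attacks on registries like NPM.
# The POPULAR set and the candidate names below are illustrative placeholders.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

POPULAR = {"lodash", "express", "colors", "faker"}

def looks_like_typosquat(name: str) -> bool:
    # One edit away (an added or removed dash, a single typo), but not exact.
    return any(0 < edit_distance(name, p) <= 1 for p in POPULAR)

for candidate in ["color-s", "lodash-", "express", "fakr"]:
    print(candidate, looks_like_typosquat(candidate))
```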
Joe Sepi: Yeah, you threw a lot out. There are all sorts of places to dig in. I kept thinking, oh yeah, we should cover that. Oh, that too. I guess maybe before we get into some of those details and how we address these concerns, I thought we'd set it up a little bit more. What I'm thinking about is, it's not just the developers and the end users and the people working in open source who are thinking about security now. With Log4j, everybody is really hyper-focused on security. It seems like everybody's talking about it everywhere I go, and that goes all the way up to the government, which seems a bit of a new thing to me, the things that have been happening at that level. Maybe you could talk a little bit more about that, too.
Brian Behlendorf: First off, it's unfair to the Log4j developers that the name of their project has become such a keyword for what's going on, because frankly, it seems like every four months there's a new thing that captures people's attention. SolarWinds is another example. Those developers are pros. They write software for a living. None of them are necessarily full-time on Log4j, but they're all using it in commercial applications, and it was a little bit unfair for them to be hoisted up like that. Certainly, what happened there became a poster child for what can go wrong even with the best of intentions, even with some of the basic processes that the Apache Software Foundation does have to present a degree of comfort and reliability to its downstream users. There's still very much a perspective of caveat emptor in a lot of the code, and what that leads to is not necessarily the right kinds of investment in things like a third-party audit, or in things like threat modeling on the code, and the like. Also, when the bug was revealed, there was a bit of an inadvertent reveal: there was a commit to fix the bug that had been pointed out by the researcher from Alibaba, and the commit mentioned a CVE number that had not been publicly disclosed. People picked up on that and went, oh wait, this seems like a bigger issue. They very quickly had to tell the world what was going on, rather than being able to take the time to do a coordinated vulnerability disclosure process, to talk to the people who might be most affected and get them to update first, to think about how to make sure you have the right fix and not the series of four fixes that they ended up having. It also meant, because it was so easy to compromise and so hard to find where Log4j was actually being used, and which version of Log4j, that it caused this massive amount of disruption, and disruption that's expensive. It was expensive for those developers, who started getting faxes and other weird emails from companies demanding that they do things, companies they had never had a relationship with before. Those companies weren't paying the Log4j developers or anything, and to suddenly make demands as if they were was pretty unfair. A lot of this bubbled up, and folks at policy levels started to realize that there's something going on here that's either worrisome or merits further attention. It's almost as if every bridge and highway in America had been built by barn raising, people digging into the ground and laying concrete individually, and then we all woke up and realized there's a lot of variance out there in the quality of the roads and the systems we have, and a lot of them seem to be getting potholes. We were contacted by some folks at the NSC, the National Security Council, within the White House, asking sincere questions about what it would take. We found that, contrary to the perception that governments might be ignorant of how software is built or how open source works, or think that everybody's a volunteer, or that there was something malicious going on here, they were pretty knowledgeable about the mechanics of it and asked what government can do, not just as a big user of open source software, one that should be investing at least as much as Google.
They probably have just as much revenue as Google, if not more, and just as much software development that they're funding. But also as a peer in the ecosystem: they should be participating and trying to understand how to help harden it and improve it. And also as the institution we turn to to help guarantee public safety and to think about critical infrastructure and resilience in the face of the nation-state actors who are now starting to exploit these holes. They just wanted more information about that. They hosted a meeting at the White House that was originally going to be face-to-face. We all flew in, it was canceled at the last minute, so we all flew home, but it went ahead with members of the NSC and the Office of the National Cyber Director, who've done quite a bit on software bills of materials and the like, and it was an open conversation. It was with folks from our organization, myself, Jim Zemlin, the Apache Software Foundation, and about 10 other firms, all talking about where these systematic weaknesses are and, most importantly, how we avoid pinning this on the open source developers or doing things that end up feeling like, here's a 300-page checklist of all the things thou shalt do in order for your software to be allowed in government use or elsewhere. Instead, the focus was on where we can invest in cybersecurity actions that might help improve open source software. Where can we show up with code, with pull requests? What are the kinds of interventions that feed into, rather than slow down, the collaborative innovation processes that make open source so powerful? It was really great to see that. There are some follow-ups coming out of it that speak more to this idea of targeted opportunities. There are already groups within the U.S. federal executive sector who have some resources to spend in this domain. I was on a call earlier today hosted by OpenForum Europe where the Dutch and French digital ministers talked about setting up, essentially, an OSPO for the French government and resourcing it with 30 million to spend on improving open source software in the interest of the French government. The great news is, these are all digital public goods that are worldwide. The investment dollar that the U.S. government puts in, or the French government or the Japanese or even the Chinese, if it's targeted, if it's actually additive to those processes, will pay off for everybody and improve security. We're continuing these conversations. We're not aiming to be a lobbyist organization, of course. We are here to figure out, if resources show up across the public or private sector, what are the best ways for them to be deployed to be helpful to the open source industry?
Joe Sepi: This is fascinating to me, and I think the way you just talked about it is really interesting, because it's two ends of this huge spectrum. We're talking about these governments, the U.S. government, the French government, all these governments, but then we're also talking about a person who made a commit in a GitHub repo. These are humans. I do feel bad for the Log4j folks and others who get caught up in this, but I find it really fascinating that it spans that whole spectrum, from one person to a whole government. I think it's interesting, too, to think... I work in the OpenJS Foundation and in the Node.js space, and one of the things we focus on in the work we do there, in both those places really, is how to get organizations more involved in supporting the efforts, with different tech companies that ebb and flow. I'm curious, and I don't know if this is really a question, but my brain is thinking about how to get the U.S. government more involved in Node.js, for example. I can go knock on the door of Google and Microsoft or whoever and find the person to talk to, but I wonder how to get more folks involved from that level, too.
Brian Behlendorf: There's such a strong libertarian streak in open source software, and historically there has been in internet circles as well, where we're fairly afraid of involving government in the core governance processes of open source projects if we can avoid it, and not without reason. There are lots of examples of engagement by government in open source over the last 25 years, both positive and a few negative. On the positive side, everything from SELinux, do you remember this? The NSA's secured and hardened version of Linux, which actually fed a lot of interesting ideas regarding capabilities and the like into the Linux kernel. To something called VistA, the Veterans Administration's original electronic health record system, written by U.S. government employees. It was open sourced through a series of Freedom of Information Act requests made to the Veterans Administration and eventually became the OpenVista health record platform. To other kinds of investments they've made: the State Department invested, for example, in open source tools to support human rights workers and whistleblowers and others working in dangerous countries. Lots of good work going on. They invested in Tor, for example, as well. At the same time, there have been places where the U.S. government has imposed requirements, things like FIPS certification, for example, which is a good thing in principle. If you want your product that deals with encryption to be used in highly critical environments, you should expect there's probably a certification process for these kinds of things. But getting OpenSSL to pass FIPS certification has been a huge amount of work and cost and time delay, and it ends up meaning they certify a version of a core TLS/SSL library that is years out of date by the time they actually finish. One thing we talked about in that meeting, and it has been part of other conversations, is how we get certification processes or mandates, like the SBOM mandate that was part of last year's executive order, to be more practical and more reflective of the fact that open source development isn't focused on the end object so much as on a stream. In fact, being able to quickly update in the face of a CVE depends upon treating software like streams that need continuous refreshment, continuous updates, rather than bars of gold that sit in a safe somewhere. I would expect future mandates in the procurement process around bringing in open source code to focus less on a FIPS-style certification of a hard object and more on things like risk scores: how do we look for certain behaviors, like getting a best practices badge on security, or your developers having training in writing secure software, or adoption of standards that speak to a healthy process rather than a specific outcome? I think that's going to end up generating positive benefit for users well beyond government users.
Joe Sepi: To be clear, I don't want the government all getting up in my business, but I'm thinking about 18F, and just the way, perhaps since the Obama era, which I don't know if that's where you were involved, things have seemed more modernized, more digitally focused, and more tech savvy. I wonder about those institutions, talking to people in those places and having them get more involved as well. You mentioned the SBOM stuff, the software bill of materials. I'm working on some of that internally and thinking about it externally as well. Maybe you could share more about what that looks like, and this whole concept of a stream rather than just a deliverable, and how that works in your view, in the community and the space you're working in?
Brian Behlendorf: For folks who aren't familiar with it, software bill of materials documents are intended to say, "For this piece of software, here are the underlying pieces of code it incorporates," along with some other standardized metadata about the software package to help you organize it better. Historically, one of the first uses for SBOMs has been in licensing: making sure that for this package I have that's labeled as open source software available under the Apache license, all the underlying pieces are also Apache-licensed or otherwise open source licensed. Or: oh, wow, there is this oddball unlicensed, or perhaps proprietarily licensed, thing lingering inside; okay, we've got to figure out how to remediate that. Now, the SPDX standard, which was originally developed to focus on license conformance and compliance, has been extended to also be a tool for tracing that tree of dependencies you have inside of a software project, like the label on the back of something you eat: it tells you what's inside. It's not a panacea, and right now, today, it's not easy to implement, not the one day's worth of work it really needs to be for somebody to take a standard build system and add SPDX support. That's changing, though. Now that SPDX is an ISO standard, there's a whole lot of corporate comfort with it, and that's starting to open up checkbooks as well as procurement requirements upstream to technology vendors: "Hey, we expect you to publish an SPDX SBOM for what you sell to us." That's going to push upstream to the underlying open source components, not in a mandate kind of way, but in a "Hey, we notice this software you're using; it'd be really beneficial if you all were publishing an SPDX SBOM for it. We're going to submit a pull request to add that to your build system." There's also work that needs to happen in improving the SPDX generation tools, so that's one area where we're pulling together resources to help: making it easy for the standard build systems or the standard CI/CD systems out there to generate these SPDX files by default or with a real simple command-line option. The other would be having people in a kind of dev rel capacity go out and work with key open source projects, especially those that get embedded as libraries inside other people's applications, and say, "Here is a pull request to add this to your build system." Then, for an app that incorporates that library, here's how to check the SBOM to make sure it meets your requirements, proving that validation through the supply chain. All that work is being done today. It's still a work in progress. The important thing was getting SPDX accepted as an international standard, which is what ISO status accomplished for us, and now it's simply a ground game of going out and getting more adoption by organizations and by open source projects.
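As a concrete illustration of the downstream check Brian describes, here is a hedged sketch that reads an SPDX 2.x JSON SBOM and flags packages whose declared license isn't on an allow-list. The field names ("packages", "name", "licenseDeclared") follow the SPDX JSON format; the allow-list and the file name are invented for the example.

```python
import json

# Sketch: validate the licenses declared in an SPDX 2.x JSON SBOM against
# an allow-list. The field names ("packages", "name", "licenseDeclared")
# follow the SPDX JSON schema; the allow-list here is illustrative.
ALLOWED = {"Apache-2.0", "MIT", "BSD-3-Clause"}

def check_sbom(path: str) -> list:
    with open(path) as f:
        doc = json.load(f)
    problems = []
    for pkg in doc.get("packages", []):
        lic = pkg.get("licenseDeclared", "NOASSERTION")
        if lic not in ALLOWED:
            problems.append((pkg.get("name", "<unnamed>"), lic))
    return problems

if __name__ == "__main__":
    for name, lic in check_sbom("sbom.spdx.json"):  # hypothetical file
        print(f"review needed: {name} is declared as {lic}")
```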
Luke Schantz: Brian, I know you announced a new program this week, the Alpha-Omega Project. Could you tell us a little more about that?
Brian Behlendorf: Like all good open source projects, you want to be open, you want to be early, you want that to be chapter one rather than chapter 10 in the lifespan of a project. This is something that builds upon what started as a white paper written by Michael Scovetta at Microsoft, but which quickly found a lot of believers over at Google and across other members of the Open Source Security Foundation. The idea is that, to actually bring better security practices to some of the key open source projects out there, it's not enough to say, here's a standard, here's a document, here's a white paper. You have to meet them where they are as a set of security experts, and talk with them about what pieces inside their family of technologies are perhaps not as widely scanned as others, or a known vulnerability that there haven't been the resources to address, or here's how to adopt Project Sigstore, which is a package-signing process. Part of it is high-touch, think of it like pro bono consulting, on hardening and improving the security practices inside of a project, and if there's a spot project that needs $30,000 or $50,000 worth of work, let's undertake that, or a third-party audit or something like that. That's the alpha end of the spectrum, perhaps the top 100 or 200 projects, and frankly, if we can be helpful to a half dozen this first year, we'll be happy. At the other end of the spectrum, and I'm not talking about the millions of projects on GitHub, I'm talking about, say, the top 10,000 projects that really matter, the ones that are part of a Linux distro, that are part of a modern build environment, the top ones on NPM, that sort of thing. For perhaps the 10,000 most important, are there ways to systematically use tooling to get a better sense of not just the security posture of these projects, whether they're using best practices badges and pinning dependencies and the kinds of things that Scorecards looks for, but more? With Log4j, the problem was that they were taking in user-submitted input and parsing it for format strings. They had some protections against that, but there was a hole in protecting against JNDI LDAP lookups. The question is, if we know that's a problem, we can fix it for Log4j, but how many other Java projects are out there that potentially take this kind of input and do this kind of unsafe thing, that forgot to close up this JNDI hole in some way? If you could query across those 10,000 projects, for the Java ones: do you do this kind of thing? You do? Let's dig in a little closer. You might discover new vulnerabilities or new facts that give you pause, and you want to communicate with the maintainers of the project: here's something, you thought this was a set of dinner cutlery in your drawer, and there's a big chainsaw sitting in the middle. You might want to reconsider whether that chainsaw is what you want to have there, or at least put a chain guard around it while it's sitting there. Omega is intended to be that surveillance system for the broad suite: can we use automated tooling and pattern matching and other ways to interrogate the code that's sitting there and ask, are there things to be worried about? Then highlight that and work with maintainers on whether it's something real, and if we think we've found a vulnerability, work that through. Perhaps a neutral, nonprofit-driven version of Project Zero at Google.
It's not a perfect metaphor, but we definitely want to be more collaborative with the maintainers if we find weaknesses and issues. It is something that's hard to be entirely public about. You don't want to surprise maintainers by finding a vulnerability and telling the world about it first. One of the hardest parts about this project is that it's going to be pretty human-intensive, resource-intensive. We have raised $5 million to get started, to recruit the teams to do that work, to perhaps write some checks to some projects that could use them, and to put together the platform for reviewing this code and asking those kinds of questions. But it's seed capital. It is seed funding. To really do this right will require a lot more funds, but the positive impact I think we can have on open source will be far beyond the cost that comes in. That's what Alpha-Omega is about. It's early days for sure, and we're still figuring out our engagement model with the public, but if you're interested, we'll have a webinar on it coming up soon and there's a place to talk about it. We're really eager to figure out how to leverage the expertise that's out there and publicly available from volunteers, and if anyone wants a new gig wearing a cape and fighting on the side of good, we're going to have some job descriptions up soon.
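The "query across 10,000 projects" idea lends itself to a small illustration. The sketch below is not the actual Omega tooling; it's a crude grep-style sweep, in Python, for Log4j-style JNDI lookup strings in Java sources. The corpus path and the regex are illustrative assumptions; real analysis would go far beyond string matching.

```python
import re
from pathlib import Path

# Sketch of the Omega idea: sweep many codebases for a known-risky pattern.
# This regex is a crude stand-in for real static analysis; it looks for
# Log4j-style JNDI lookup strings in Java sources.
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def scan_project(root: str) -> list:
    base = Path(root)
    hits = []
    if not base.is_dir():
        return hits
    for path in base.rglob("*.java"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            if JNDI_PATTERN.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits

# Hypothetical corpus directory; in practice you'd iterate over many repos.
for path, lineno, line in scan_project("./corpus/some-java-project"):
    print(f"{path}:{lineno}: {line}")
```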
Luke Schantz: It's so interesting to hear you dissect this space, because I feel like in past, more mechanical eras it was like, okay, I make a gear and it's used wherever. Now, if you make that gear, you're interacting, like you said, with nation states. It's amazing how we're really in this together, and it's very complicated. Like you say, maybe there's a decent amount of money that needs to go into securing it, but compare that to the potential losses, the losses we're experiencing every day, and the explosion of ransomware we've seen. The Colonial Pipeline attack was a big one, I thought, that brought it to everybody's attention: wow, this is serious. Any investment that a company or the government makes is going to be well worth it, because the losses and the threats to both public and private life are just huge.
Brian Behlendorf: Yeah, they definitely agree.
Joe Sepi: I would add, too, it's interesting: security is newsworthy and exciting when something goes wrong, but when something's going well, you don't talk about it. It's not in the news. You want things to be going well, but when they're going well, is it easier to get money or support or involvement? When it goes bad, then everybody's all, let's throw things at it to fix it.
Brian Behlendorf: We have to get out of this world where security is only punished in the negative rather than rewarded in the positive, where people only talk about it, prioritize it, or spend money on it after the crisis has hit rather than long before. I would love to see a world where, as a developer, you can quickly see: I'm going to build upon these couple of dependencies. If I'm writing an app in Java and need a logging framework, I have multiple choices, and today there are multiple choices beyond Log4j. Which of these projects follows a set of practices that speaks to probably better security outcomes? For which of them have the bulk of the maintainers certifiably taken some sort of course in secure software development recently? If there are objective tools I can use to decide as a developer which platform to build on, and that favors those types of projects, then there's suddenly a positive incentive for developers to do that thing: you get more users, build a bigger community, get recognized in some way. I think of this at a corporate level, too. Suppose we come up with a set of tooling and metrics and other systems that are objective and automatable, and that allow a company to say, "Hey, I'm willing to tolerate a little more risk here because I want to jump into a new space like blockchain, and that stuff's crazy, but I know I've got to do it; but for my core banking and payment systems, I'm going to use stuff that's much more secure." They can dial that up and down, but do it consciously rather than be surprised by it. Everybody today is trying to buy and trying to price cybersecurity risk insurance, because there are starting to be big fines for breaches and that kind of thing. In fact, the FTC made a statement at the beginning of January that they expect people to upgrade if they're vulnerable to the Log4j defect, because if they fail to remediate and Log4j leads to a bigger breach, they will add additional fines to that breach if it's shown that you did not update. Now, that adds to the punitive, but can we spin it into something positive? Insurance is one way to do that. Imagine the insurance companies had an objective tool to go to an enterprise and say, "We come in, we run a tool, we scan your use of software, open source or not. They have these today for licenses; there's no reason you couldn't have one for security posture as well. Objectively, here's your score, and by the way, if you improve that score by 20%, because you make different choices, or you invest in Log4j, or you invest in this other project, then we'll cut your premiums by this much." That could start to create a positive incentive for those companies to invest in the kinds of things that aren't sexy to invest in these days: paying off technical debt, looking for security holes, responding to small bugs that actually might indicate larger breaches. Or even for the insurance companies themselves to recognize a collective interest and go, "Hey, we're going to pay out fewer claims if we help the industry harden by going after the low-hanging fruit out there."
I'm really fascinated by all the financial mechanisms we might be able to use to encourage the kinds of software work that are so often left on the floor in the rush to add features or meet some deadline: paying off technical debt, closing up holes. Hopefully as well, motivating the move towards things like memory-safe languages, or other deeper refactoring or redevelopment changes that would get us out of a whole category of potentially problematic behavior.
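To make the premium-discount mechanism Brian sketches concrete, here is a toy model. Every score, weight, and threshold below is invented for illustration; no insurer's actual pricing works this way.

```python
# Toy model of the incentive Brian describes: aggregate dependency risk
# scores into one number and reward measurable improvement with a premium
# discount. Every number and threshold here is invented for illustration.

def portfolio_risk(scores: dict) -> float:
    """Average of per-dependency risk scores, 0 (safe) to 10 (risky)."""
    return sum(scores.values()) / len(scores)

def premium_discount(before: float, after: float) -> float:
    """A proportional risk reduction earns the same discount, capped at 30%."""
    if after >= before:
        return 0.0
    return min(0.30, (before - after) / before)

# Hypothetical dependency scores before and after remediation work.
before = {"log4j": 8.0, "some-parser": 6.0, "tls-lib": 3.0}
after = {"log4j": 3.0, "some-parser": 6.0, "tls-lib": 3.0}

b, a = portfolio_risk(before), portfolio_risk(after)
print(f"risk {b:.2f} -> {a:.2f}, discount {premium_discount(b, a):.0%}")
```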
Luke Schantz: That totally makes sense, and then it's a language the business side can understand, too: "Hey, there are these mechanisms we can interface with." I think sometimes things get lost in translation between the frontline technical developers, who are under pressure to deliver features, and the business side, which thinks in terms of the ROI we're looking for, whereas this is more about preventing loss. It's a new language for the business side to fully appreciate, but I'm sure those punitive fines are something they really recognize.
Brian Behlendorf: It's easy to do the fines. It's easy for government or anyone else to say, "We'll ding you for this," but that's going to be the least helpful thing, I think, for open source developers and the organizations around open source.
Luke Schantz: We're not running low on time yet, but I just wanted to make sure we touched on Scorecards and the best practices badge, because I think these are pretty interesting.
Brian Behlendorf: The best practices badge came out of a previous effort called the Core Infrastructure Initiative, which arose in response to Heartbleed, the push to get funding for OpenSSL, and some other activity that played out over the last few years, and which I think did have a positive impact on security at that layer. One of the interesting side projects from that was something called the CII best practices badge, which is basically a checklist of the things that open source projects, and I don't mean individual developers, but the projects they form as a collective, should do to help attest to and enhance the integrity and security of the project. Things like: do you have a security team that's reachable by a single email alias? Do they respond within a certain amount of time to messages that come through it, whether valid or not? Do you have a posted vulnerability disclosure process? All of these are human-level things that require somebody involved with the project to sit there and grade themselves, so there's a little bit of work from that point of view, but if you get to a hundred percent, and I think even 90%, you can display a best practices badge that's all green. If you get part of the way, 50% of the way, I think you can show a yellow badge, and even if you're just starting on the process, you can still get a badge that indicates, I'm only 9% of the way through. There's a website where you can look up software projects and see who has actually filled out the badge and how close they are, that sort of thing. Some projects also put the badge on their own GitHub pages, their own project websites. That's the best practices badge, and we'd like to see it become standard for everybody in the open source space, at least all the foundations. If you've organized around collective action, around defending the integrity of an open source project, and I'm not talking about the one- or two-person GitHub repos, but an actual foundation, maybe you should start to publish this for your top projects, maybe even all your projects. The Scorecards effort started at the Open Source Security Foundation and builds upon that with scriptable, automatable tools that look for certain practices and behaviors that speak to better security. Things like: do you have a fuzzing step in your software tests? First off, do you have tests? If you don't have tests, that's a bad thing. Are you testing for negative things, not just positive things? That's really critical: you want to be able to show you can throw gunk at an input and it won't cause the program to go sideways. Fuzzing is simply an extension of that, and there are some really good open source fuzzing tools out there now. It's a script designed to be much lighter weight for a project to pick up and start scanning their own code, but again, to be a scorecard, to feed into this objective analysis of how much trust I can expect to have in this body of code. Not trust that it's guaranteed to be defect-free, of course, or that it will never have a CVE published against it, but that there's at least some duty of care the developers are taking. By the way, there's a website called metrics.openssf.org where you can see how both of these are applied to a wide range of open source projects: which ones Scorecards has been run against, who's applied for the badge, that kind of thing.
I expect we'll see some other tooling in this space as well that tries to, again, objectively measure the trust you might have in how the software is built, or in characteristics of the software itself. That's the Scorecards and badge work.
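For a flavor of what automated, Scorecards-style checks look like, here is a minimal sketch reduced to a few filesystem heuristics. The real Scorecards tool is far richer than this; the specific checks, file names, and equal-weight scoring below are assumptions made for the example.

```python
from pathlib import Path

# Minimal sketch of Scorecards-style checks reduced to filesystem
# heuristics. The real tool does much more; these particular checks
# and the equal-weight scoring are illustrative assumptions.
def run_checks(repo: str) -> dict:
    root = Path(repo)
    return {
        "security-policy": (root / "SECURITY.md").exists(),
        "has-tests": any(root.rglob("test_*.py")) or (root / "tests").is_dir(),
        "has-fuzzing": any(root.rglob("*fuzz*")),
        "pinned-deps": (root / "requirements.txt").exists(),
    }

def score(results: dict) -> float:
    """Fraction of checks passed, 0.0 to 1.0."""
    return sum(results.values()) / len(results)

results = run_checks(".")
for check, passed in results.items():
    print(f"{'PASS' if passed else 'FAIL'} {check}")
print(f"score: {score(results):.0%}")
```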
Joe Sepi: I feel like there's so much we could be talking about, but I'm trying to be cognizant of time. What else do you think is super important that folks take away from the work that you're doing and then-
Brian Behlendorf: There's another project that's really gained a head of steam called Sigstore, Project Sigstore. Look, some of the better-run open source projects have long published PGP signatures on releases. That's a standard we even started following at Apache in 1995, but there are some weaknesses to it. Aside from its use in the Ubuntu package manager and a few other places, it's not universal that PGP signatures are used. In fact, in package managers you have one key per repo, and it's really just the last mile rather than a signature all the way up the stream to the development teams that published the code originally. Project Sigstore is an attempt to make it a habit for developers to sign releases, without depending upon developers to know how to configure PGP correctly, or some of the other signature tooling; we've never been really good, as an internet, as an industry, at PKI. Instead, this is based on short-lived keys tied to your email address, in much the same way that Let's Encrypt bases TLS certificates upon your ability to receive an email. Sigstore issues these short-lived, what we call ephemeral, keys, those get used to sign, and then the fact of their issuance is recorded in a transparency log, which is a kind of blockchain-y, distributed-ledger thing. It's something Google actually came up with for Certificate Transparency, their distributed system for monitoring TLS certificates. It's a really great way to bootstrap a real simple PKI system that is entirely developer-friendly and can be woven into automated tooling and the like. The idea is, you should be able to combine these signatures in your build processes to know that you're pulling down the stuff you expect to be pulling down, that somebody didn't man-in-the-middle you as you pulled a package down from NPM or some other place. When you put those pieces together, you can sign it, and downstream from you, they can validate that. It really is something that came out of the Cloud Native Computing Foundation and the Security TAG there. Huge credit is due to the folks who've been working on that, and there's been a bunch of automated tooling to add it into the container distribution and validation processes that are out there. This is really exciting stuff that weaves together a lot of open source projects from different foundations, and as an effort, it's taking off. We'd really love to see that. That's one piece. There's an even smaller piece, but one that I think is pretty significant and that we're hoping to expand: we got a bunch of codes to distribute to developers that they could redeem for a free multi-factor auth hardware token, the kind of hardware token you plug into a laptop to help verify that it's really you signing in. We were able to get a thousand tokens, which we distributed to the 100 top open source projects as determined by our critical projects working group, which said, based on all this data, here's the top 100. We sent them 10 codes apiece. Not all of them have been claimed. We're hoping to get more tranches of these and expand the number of people we can reach, but multi-factor auth is not a standard part of most software developers' lives, even though they'll use it to log into their Coinbase account or a bank account or whatever. That's a big hole, and it's a hole that's been exploited to get cryptocurrency miners and other malware into the supply chain.
It's a small project, but one that we hope to expand to many more people. Those are some of the other things going on. Again, OpenSSF can feel like a circus at times, and part of our job is to make it more cohesive and easier to explain to people. We'll get better at that over time, I'm sure. I don't even have a marketing person yet, it's just me, but those are some other things I thought might be worth bringing up.
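To make the "ephemeral key plus transparency log" idea concrete, here is a conceptual sketch using the third-party `cryptography` package (assumed installed). Real Sigstore binds the key to an OIDC identity and records issuance in the Rekor transparency log; the local list below merely mimics that shape and is not the actual service or API.

```python
import hashlib
from datetime import datetime, timezone
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Conceptual sketch of Sigstore-style signing: generate a short-lived
# ("ephemeral") key, sign the artifact, and record the issuance event.
# Real Sigstore binds the key to an OIDC identity (e.g. your email) and
# appends the record to the Rekor transparency log; this list is a local
# stand-in for that log, not the actual service.
transparency_log = []

def sign_artifact(artifact: bytes, identity: str):
    key = ec.generate_private_key(ec.SECP256R1())      # ephemeral key
    signature = key.sign(artifact, ec.ECDSA(hashes.SHA256()))
    transparency_log.append({                          # public, append-only
        "identity": identity,
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "issued_at": datetime.now(timezone.utc).isoformat(),
    })
    return signature, key.public_key()

artifact = b"release-1.0.tar.gz contents"
sig, pub = sign_artifact(artifact, "dev@example.org")
pub.verify(sig, artifact, ec.ECDSA(hashes.SHA256()))   # raises if invalid
print("signature verified; log entries:", len(transparency_log))
```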
Joe Sepi: Yeah, it's interesting. I find OpenSSF really interesting because I've spent a lot of time in the Linux Foundation, but it's a different kind of foundation. It's not just a home for projects, which, by the way, we've been talking a lot about all the things you could be doing; if you have an open source project, I encourage you to get it into a foundation, where you get all this support and help and people working together to solve these sorts of problems. But I find OpenSSF really interesting because it's really about best practices and tooling and things like that. Obviously, one call to action is to adopt these sorts of tools and practices and resources, but are there more ways folks should be thinking about getting involved in OpenSSF?
Brian Behlendorf: We're different from the average open source project, you're right, in that we're not primarily focused on one piece of software or a small handful of pieces of software. We're very meta, with efforts in education and guides and content and that kind of thing, in addition to some software packages. For example, all the source code behind Sigstore you'll find within our GitHub org. But the primary unit of organization within OpenSSF is the working group. If you come to the website, you'll see a direct link. There are six different working groups, focused on everything from vulnerability disclosure, to educating developers (education and training), to identifying critical projects, to the supply chain issues, and all of those working groups have a Slack channel, an email alias, and also meet by Zoom at least once every two weeks to talk about where to go and what they're working on. There's a pretty high level of engagement in those working groups. You'll find people from well-known companies and from startups who are baking these standards into their products and services. We would love people to get involved there. We have published some guides, and then, like I mentioned, there are the edX courses on secure coding. If you're just starting your journey into open source and cybersecurity, you might even want to start with those, to see what this space means and learn some of the basics for navigating through it. Beyond that, it's a target-rich environment, is about the most I can say. The working groups are a good place to learn about what they're doing and all the other efforts going on, and each working group has four or five sub-efforts within it that have spawned things like Sigstore and others. No matter what level you're at, please come. Please check us out. Please find a way to get engaged. We'll make sure your time is well spent.
Joe Sepi: Being in the Node space, when people ask about how to get involved, sometimes it's not always easy, but that's what I encourage folks to do as well. Look for the groups that are working on it. Attend meetings, see if they have issues and things like that they're working on in the repos. Then you can get familiar with what's happening, get to know some of the people, and usually that's a great way to get involved, because there are people who are happy to have you there, happy to help you get more involved, and that's your entry point.
Brian Behlendorf: The people are the most important thing. My standard open source deck used to have a slide with the Simpsons take on Soylent Green, the "now with more girls" product gag from a Simpsons episode, but open source software is Soylent Green: it's made of people. The bucket of bits hardly matters. It's really the people behind it, and the OpenSSF has been intensely volunteer-driven since inception. When we finally did put together some funding and set up as an operation, it was really just to put rocket boosters on a set of efforts that started long before I showed up and long before the first dollar showed up, and that's still the core of what we do. The core is public-facing. The core is validation through collaboration, making sure that we can be helpful to all these other open source projects and integrated with them, really benefiting from the expertise that's out there about what we should be doing.
Luke Schantz: I was just going to comment, I love what you were saying there, because one of the things we try to do on this show a lot is demystify open source. We often get the question, "How do I make money doing open source?" and it's always a complicated answer: maybe you're not making money from the open source itself per se, but there's what it means to the industry and how it fits into your career. And then there's the networking, because, just looking at the OpenSSF members, it's really an all-star list of enterprise companies, top financial organizations, all the hyperscalers, all kinds. It's a really interesting list of companies, and I would say to upcoming developers interested in open source: maybe you don't see the short-term "I'm going to make a dollar for this commit," but being a part of it is a great way to differentiate yourself, and to network in that ecosystem. Then again, find the right fit, so you're not spending all your time working on something that doesn't align with your career strategically.
Brian Behlendorf: First off, cybersecurity is one of the most in-demand spaces in software development. Even just learning the lingo, learning which companies are working on what, and what open source projects are doing to address this space, can be a tremendous boost to one's career prospects. You've probably had lots of people mention that when recruiting developers these days, folks look a lot less at formal resumes and a lot more at one's GitHub profile, and where you've participated and contributed. I can't promise that if you join a working group and sit in on a Zoom call you're going to get a $300,000-a-year gig or anything, but it's probably a better use of an hour, especially at a time when it's hard to meet face-to-face. It's hard to even have meetups, let alone major conferences. I will mention, by the way, that from June 21st to the 24th, the Open Source Summit is taking place in Austin. The Linux Foundation is putting it together. This is our main all-of-our-communities-together-under-one-roof event. There will be a Supply Chain Security Con happening in parallel that we'd love to see people come out for, and the CFP for that is open. If you're either working on some of this stuff or simply have an interest in it and want to talk, it'd be really great to have you there, both at Open Source Summit, you see the URL here on the screen, and specifically at Supply Chain Security Con. That'll be our first real chance as a community to get together en masse, face-to-face, and talk about the projects and how we move them all further and faster.
Joe Sepi: Knock on wood that everything's smooth through the summer, because June in Austin is going to be pretty amazing with the OpenJS World event. We're partnering with cdCon, the Continuous Delivery Foundation's event.
Brian Behlendorf: Okay, very cool.
Joe Sepi: Yeah, that's going to be fantastic. I actually already messaged my friend in Austin about maybe renting a place for the month and coming down with my dog. I love Austin, so it's going to be great. I encourage folks to check out this link and look for the supply chain security event as well; it seems like there are a number of smaller spinoff events that are part of the overall event. I'll comment, too, on the thing we were talking about previously, working in open source: I think you gain invaluable skills there, because oftentimes there's no manager or anything. You need to figure out how to work with people, get along with people, move things forward, find consensus. The skills you learn in open source are really invaluable. I highly recommend it. We're running out of time, and this happens every time, but I feel like we didn't really get to dig into who Brian Behlendorf is, and there's so much there. We could have a whole show, whether it's early rave culture or your tech leadership at Burning Man. I'd love to do a show just on Brian Behlendorf.
Brian Behlendorf: I'm old is what you're saying. I have a deep history and I'm a dinosaur, so I'm just really grateful I can be working on something that all the kids are into these days.
Joe Sepi: Security is so hot right now.
Brian Behlendorf: Yep.
Luke Schantz: It is funny. We were joking before the show that, oh, this isn't going to be a three hour conversation, but I think it easily could have been.
Brian Behlendorf: Well, what are you doing for the next two hours?
Joe Sepi: No, this is great, and this is a programming note. We have Jamie Thomas from IBM coming on the show soon. She was in the White House meetings and I'm really eager to talk to her about all the work that she's focused on. She's on the board at the OpenSSF?
Brian Behlendorf: She's chairman of our board, yes.
Joe Sepi: Chair of the board, great.
Brian Behlendorf: Great guest to have on the show.
Joe Sepi: Yeah, I'm excited she's going to be coming, and we're talking to David Wheeler, who is, his title again, Brian?
Brian Behlendorf: Director of Software Supply Chain Security for the Linux Foundation. He's been working on this, and on OpenSSF, longer than I have. We just scratched the surface. You can ask him to drill down and get surgical, and he'd be happy to. He's great at that.
Joe Sepi: Yeah, I'm looking forward to that as well. We've got security as top of mind for everyone right now. I encourage folks to check out all the links that we had. We'll have them in the show notes and everything. Any closing thoughts, Brian?
Brian Behlendorf: Again, I'll repeat. I'm super appreciative of the chance to be able to work on this, and this is a space where it's been far too easy to blame open source developers, or the open source business model, or that we're all communists or something. No. Things are fundamentally good and healthy, but it's long overdue for us to take a look at how we write code and systematically make some improvements, and I just feel super fortunate to be in a position to be able to be the pretty face on the front of a huge community of people working together to make this stuff happen. It's a really exciting space and I'm grateful for the chance to talk about it here.
Joe Sepi: I appreciate all the work that you're doing, and, as we've said, open source is people. I do hope we get some systems in place like we've been talking about here, where, when you go to look at an open source project, you look at the license, I look at the code of conduct and things like that as well, but security should be right at the top of that list. Check the badge. See if they have a SECURITY.md file. What is their process for reporting vulnerabilities? Do they have any sort of reporting program? All of these things I think are really critical, and I hope they rise to the top of the things people consider when they're using open source software.
Luke Schantz: I think we've reached the point where it makes sense to end the conversation. This has been so much fun. I would say, maybe later in the year, after some more things have happened, it'd be great to have you back to check in and see what's happened, because this is obviously such an important subject and organization, and it's only going to be more important moving forward.
Joe Sepi: Yeah, maybe we'll do a live in Austin.
Brian Behlendorf: Once we've solved all security problems and there's no more security bugs, I'm happy to come back and claim victory.
Luke Schantz: You can go onto your full retirement career as a DJ.
Brian Behlendorf: Exactly. Just live out on the playa at Burning Man full- time.
Joe Sepi: That'd be great. That'd be great. Thank you so much, Brian. It's been a pleasure talking with you. Thank you for all the work that you're doing, and I look forward to talking more about this stuff.
Brian Behlendorf: I'm happy to come back. Yeah, let's talk again soon.
Joe Sepi: Cool. Thanks. Cheers.
DESCRIPTION
Brian Behlendorf is the General Manager of the Open Source Security Foundation. Brian has dedicated his career to connecting and empowering the free software and open source community to both solve difficult technology problems and have a positive impact on society. From startup company founder, to advisor to the U.S. government, to non-profit board member and employee of the World Economic Forum, he's been at the forefront of the open source software revolution.
Join hosts Luke Schantz and Joe Sepi as they get Brian's take on the latest open source software developments. As the recent Log4j vulnerability has shown, open source software is not immune to security breaches and attacks. Brian shares his views on the Log4j scramble, his recent White House meetings on software security, the costs of security and threat mitigation, and future challenges and opportunities in open source software.
Join us for a look back at Brian Behlendorf's unique career and see what's next for him and the movement he helped launch, this time on In the Open with Luke & Joe.