Conf42 DevSecOps 2021 - Online

Securing Your Pipes with a TACO

In highly regulated environments, governing bodies of the organization can quickly get in the way of your delivery. What I present is a straw man for architecture, compliance, security and development to come to agreement on their minimum viable bureaucracy.

TACO stands for Traceability, Access, Compliance, and Operations and is a set of 20 controls I use as a guideline for helping organizations define automated governance for their software delivery pipelines. However, the primary purpose of TACO is to provide a common language for the organization to understand what “good” pipelines mean for them and how to get there.

This model allows for the creation of opinionated pipelines and helps create a common understanding across teams as to what is required in order to be secure. Taking a TACO approach can be considered a part of implementing a DevSecOps program and I’ve used this approach at multiple banks. Having this baseline helps build organizational confidence in the automation of software delivery.

During the talk, I’ll run through the different categories of controls, how they are implemented, what their purpose is, and how to create robust feedback loops for them.


  • This is all about automating governance: how do I properly manage changes to regulation and changes to security standards? Stay tuned and hopefully we'll have some good advice and guidance for you.
  • Peter Maddison is a coach, a consultant, and the founder of a consulting organization called Zodiac. His speciality is around DevOps strategy and around lean controls. He says efforts to adopt these tools fail from a lack of clarity. We need ways of being able to introduce change more rapidly.
  • We need to bring those practices into the entire system, into the other parts of the system that we also need to consider around security, compliance and regulation. If we're not aligned in our vision and aligned in how we go about creating value for our customers, then we'll not actually accelerate the way that we deliver.
  • One of the big problems we see is that the parts of the organization that are responsible for ensuring the safety of the organization are often not aligned with the delivery teams. We need to come up with better ways of understanding and applying these rules in our delivery teams in a much faster manner.
  • So let's start with the pipelines, because if we want to take a look at DevOps, this is a good place to start. There are lots of things that we can do at each stage in order to manage and secure the pipeline. It's not just for the auditors, it's also very powerful to help with the adoption of new practices.
  • The elements of TACO start with traceability: identifying what happens in the pipeline. This is critical from an operational perspective, a security perspective and a compliance perspective. This works very well with other compliance frameworks and allows us to create a straw man that we can then marry into the organization.
  • There's only one question required on the survey, which is: hey, did you like this? And if you want to, you can leave your email there and it will send you a copy of the TACO spreadsheet and some useful links. With that, I'll sign off and say thank you very much.


This transcript was autogenerated.
Hello and welcome to my talk about securing your pipes with a TACO. This is all about automating governance. When we work in these highly regulated, complex organizations, or even when you're just working in an organization that's trying to work out how do I properly manage changes to regulation, changes to security standards, changes to how I govern my organization, and how do I align those with the way I operate, this is all about those kinds of things. So stay tuned, and hopefully we'll have some good advice and guidance for you. To get us started, I like to run things through in terms of a talk map. These talk maps are basically a way for me to communicate what's going to happen in the talk and where we're going with it. We'll start off with an introduction. We'll talk a bit about risk and where some of this comes from. We'll talk about how you can go about automating governance, and we'll have some conversations about why and how a TACO can be helpful in these circumstances. And after all that, we can wrap up and have some Q&A. So who am I? I'm Peter Maddison. I'm a coach, a consultant, and the founder of a consulting organization called Zodiac. We help organizations adopt new ways of working. My particular speciality is DevOps strategy, in particular lean controls, and working with senior executives on understanding how we marry up the need to go fast with the need to do that safely. So we'll start with the idea that in this fast-paced world, our customers are demanding instant gratification. We have these little black rectangles, and if we don't get what we need, we'll immediately go somewhere else. It's very easy for us to change our providers of services, and it's very different from where we started.
We started in a world where technology was very expensive. We had to spend a lot of money on it, and we had to make some very strategic decisions around how that money was spent. We then went through a period where it was about investing in capabilities and networking those capabilities, putting different parts together to create more distributed systems. Now the customer really drives the story. It's all about how we generate customer value: how do we focus on the customer and ensure that we're building organizations and systems that are able to accelerate this delivery of value? And there are lots of tools that will help accelerate value delivery, all sorts of them, which I'm sure we'll be talking about a lot in this conference. But one of the problems that we often see is that efforts to adopt these tools fail from a lack of clarity. They fail because the organization itself isn't easily able to adopt the practices that go with those tools, bring them in, and start to change the way that it operates. There's a lot of complexity in these environments. There are a lot of moving parts. We've got a system that works. That system generates revenue, it generates value, and it works as it is. So when we come in and start to introduce new ways of working, it can be very disruptive to the organization. And as we start to pull on the different threads, it can be very hard to see what is happening and what the impact is going to be. So that's the introduction: complexity, risk, value delivery, and how we help manage this in a fast-moving environment. So where to begin? A lot of this work came out of consulting I've done at some large banks that had started on a DevOps journey. They've been wondering: how do we go faster? How do we go further?
How do we start to take the learnings we've initially got and move those out into the rest of the organization? Because when we start to introduce these new ways of working, we realize that it brings in an awful lot of other types of risk. And risk is something that, as human beings, we tend to be a little allergic to. So we need ways of being able to introduce change more rapidly and get used to change. Change is the norm; we want to make it something that we're able to deal with. We spend a lot of time focusing on how to do this in the software delivery space, but we sometimes forget that we also need to apply it to the other external dependencies that may influence it. As we look at the different things that are coming in, from a regulatory, a compliance and an architectural design perspective, looking at new ways of doing things, how do we see these things changing the way that we do our delivery practices? When we look at DevSecOps, one of the key things we see is that there's a lot of focus on the tools that we can bring to the table. Sometimes we remember to talk about how these tools will modify the processes and how we do our delivery practices. And quite often we forget about people, and people are often the piece that's causing a lot of the complexity, unintentionally, a lot of the time. As human beings, as we interact and look for different ways of solving problems, when we look at something like DevOps we can, especially from a technology perspective, get far too focused on the tools and the processes and the practices and forget that we need to work with each other to make all of this work. And all of this is easy, right? It's easy to adopt change, it's easy to bring new things to the table. I mean, that's why we have these conversations. And in my experience we've managed to deliver some amazing results, but it's certainly not been easy.
There's work to be done here as we look at how we need to change the way we think about the work we do and how we interact with each other in order to make this successful, because we're adopting new paradigms, new ways of working, where we have to bring in new knowledge and new ideas, all while meeting the same obligations and existing commitments we've always had to meet. And as humans go through change, they go through a cycle. From denial: hey, that's not going to apply to me. We're not going to be able to put DevSecOps onto the mainframe. We've got too many regulations that we need to comply with. We can't possibly do that here. It's not going to happen. I can ignore it; it's not my problem. To complaining: why are you doing this? This is terrible. You're going to break everything if you do that. Everything's going to be horrible. You're going to expose all our data, you're going to crash everything. To pleading: please don't do that. Please don't do that to me. To sulking: God, this is terrible, it's awful, I can't believe you did that. And finally to accepting the change. As change agents, one of the things we're looking to do here is bring down this change curve. How can we make this easier for people to adopt? How can we make it easier for people to look at the processes they have today, and all of the requirements they know they depend on, and get those embedded in the way that they do delivery, and make this easier for them to deal with?
So, working with organizations, we come up with different models and practices, and we draw all sorts of different ways of doing this: from champions programs, to understanding how we integrate these different tools together to build out pipelines that already have some of the capabilities and regulation baked into them, to understanding how we go from a small group that's trying to bring in and accelerate change to creating the snowball effect that carries it beyond the initial successes we see. And one of the things that we found when we were working with these banks was that we keep hitting this wall where we run into regulation and compliance and audit, and where it's not enough to be able to say: yes, we know that by adopting these practices we can deliver faster and better and safer, it's better for our customers, and we know we can actually make things more secure. We also need to ensure that we're bringing those practices into the entire system, into the other parts of the system that we need to consider around security, compliance and regulation. And one of the things that we found is that when we look across all these different areas of the organization, things get lost in translation, much as the Rosetta Stone shown here provided the translation between hieroglyphics, Demotic and ancient Greek to help us understand these things. So developers, for example, are speaking one language. They're talking about how we make sure we don't build the wrong things, and that we're able to understand what's happening there. Testing is ensuring that we don't find the wrong things wrong, that we don't waste our time chasing ghosts, and that we've actually got effective quality built into our systems.
We've got operations trying to manage the systems, ensuring that we're meeting any external SLAs we may have and that the systems are operating the way we want them to. We've got security, who's concerned with making sure that we're properly identifying problems, and who can often get in the way of what we need to do. We've got compliance, who's saying: well, we've got these external regulators, and we need to make sure this is baked in. How do we ensure that we're properly identifying and demonstrating that we are compliant? And we've got architecture, saying: well, we can't do everything, and if everybody runs off in a million different directions, then everything will start to fall apart. So you've got all of these groups, all trying to ensure that we create a safe delivery environment and that we reduce the risk to the organization as we accelerate our value delivery practices. But they're all talking different languages. They're all using different terms, different terminology, different concepts, different models and visions of the way things should be done. So in order to create alignment across these groups, we need to find ways of coming to a common understanding of what all of us mean when we talk about creating safety in the way that we deliver. You can have the best security team and the best delivery team and still fail at delivering securely. This is because if we're not aligned in our vision and in how we go about creating value for our customers, then we'll very often end up stepping on each other's toes and not actually accelerating the way that we deliver. So we have to move from being the department of “no” to the department of “yes, and this is how you do it”. How do we get from the point of view that “we as security are here to make sure you don't do the wrong thing” to “we as security are here to make sure you can deliver securely”?
So the other side of this is that we need to ensure we're measuring correctly, and that we're looking for incremental improvements. As we set out on this journey and look at how we're going to make the right changes to the organization to embed safety into the delivery system, we need to quantify what we're doing. We're not just saying, hey, we want to be safer; we're saying, this is how we're going to determine that we are safer. This is how we're going to determine that we are actually doing a better job of managing risk in our environment, and that the practices we're introducing through all of these different areas are actually helping, and not hindering, our ability to create value for our customers in a safe manner. So we've talked about the introduction, about value delivery and how it can be accelerated, and we've talked about risk, some of the problems that cause it, and the friction across the organization. Now let's talk a little bit about how you can go about automating governance. One of the big problems we see, especially in these large, regulated organizations, is that the parts of the organization responsible for ensuring its safety are very often not aligned with the delivery teams that are looking to adopt DevOps practices and accelerate the way they deliver. Very often they're sitting off in their own silos, trying to interpret what's required by government regulation and translate that through to ensure that delivery teams have the right compliance standards. But often those delivery teams have very little visibility or understanding.
And even when they do, they have to call up somebody in this much smaller department who's looking at all of these standards, and they get told: well, here's a book of standards. There are 300 pages, and you've got to go through these other 200 pages which are referenced over here, and if you can find the section that applies to the problem you're looking to solve, then good luck. And if you call back, you may not even get the same person, because you may be dealing with somebody else. It's very, very hard to find the thing that is applicable to your particular problem. How do I find the standard, the control objectives that actually apply, and how do I then interpret those for my team, into what I need to do to ensure that I can deliver value, without having to go through massive committees and all sorts of other frameworks? Common examples are things like ARBs. At another organization I was working with, it was TRBs followed by ARBs: you've got a technical review board, which then takes you up to a larger architecture review board, which means it could take three months, or six months, or nine months, or twelve months to even come to a decision about whether we're allowed to try this. Is this the right way to go? Is it okay if we do this? So we need to come up with better ways of understanding and applying these rules in our delivery teams in a much faster manner, in order to be able to operate in the modern world. So let's start with the pipelines, because if we want to take a look at DevOps, this is a good place to start. This is a generalized pipeline that comes from an IT Revolution white paper that I highly recommend reading; there's a link to it later in the talk.
So if we look at these as some of the primary parts of the pipeline where we could start to introduce controls: if we only put our controls in at the production deployment, then we're never going to understand how secure we are. We need to make sure that we're introducing controls all along the pipeline, and that we're properly managing our delivery practices. And there are lots of things that we can do at each stage in order to manage and secure the pipeline. We can introduce linting and SCA on our source code repositories. We can look at dependency management using artifact repositories, scanning for open source vulnerabilities. We can look at tools inside our builds, and at SAST and DAST capabilities as we move through the pipeline. All of these enable us to gather more and more information about the state of our code and feed it back to the delivery team to help them understand what changes they need to make to become more secure in the way they do delivery. And when we're looking at running the pipeline, we can think about the different stages. As we define what we're going to do, we can turn that into code, and everything needs to be written as code. We can then use our CI systems to create build results, which we review. We can look at our quality tests, we can look at artifacts, and then we can run another CI run, perhaps to run our organizational tests. We'll start with doing peer reviews, but as we look to move forward, we'd ask: how can we remove any other manual steps that we have? If we have manual steps around the build results, can we look at pair programming and other capabilities to satisfy these requirements and reduce the number of manual touch points there are as we deliver? Because what we really want to do here is reduce the cycle time.
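As a rough sketch of how findings from those stages might roll up into a single go/no-go decision for the pipeline, here is a minimal example. The stage names, severity counts and thresholds are all assumptions for illustration, not tied to any particular scanner:

```python
# Hypothetical pipeline control gate: each stage (lint, SCA, SAST, DAST)
# reports counts of findings by severity, and the gate decides whether the
# pipeline may proceed. Thresholds are illustrative, not prescriptive.
from dataclasses import dataclass

@dataclass
class StageResult:
    stage: str     # e.g. "lint", "sca", "sast", "dast"
    critical: int  # count of critical findings
    high: int      # count of high-severity findings

def gate(results, max_critical=0, max_high=5):
    """Return (passed, reasons); any stage over threshold fails the gate."""
    reasons = []
    for r in results:
        if r.critical > max_critical:
            reasons.append(f"{r.stage}: {r.critical} critical findings")
        if r.high > max_high:
            reasons.append(f"{r.stage}: {r.high} high findings (max {max_high})")
    return (not reasons, reasons)

results = [
    StageResult("lint", critical=0, high=2),
    StageResult("sca", critical=1, high=0),  # one open source vulnerability
]
passed, reasons = gate(results)
print(passed)   # False
print(reasons)  # ['sca: 1 critical findings']
```

The important part is not the thresholds themselves but that the decision is codified and produces an auditable reason whenever it blocks a delivery.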
But in order to do that, we've got a couple of dependencies. We need to look at how we reduce what we're delivering to a small enough component piece that it's easy for us to test. A good example: doing peer reviews or code reviews, although a great idea, is one of those things that, in order to do well, requires a small enough amount of code to review at any one time. If somebody only submits their code every two weeks, they might have a few thousand lines to review, and then it becomes very, very easy for things to get missed, for things to get rubber-stamped as they move through the pipeline. Far better if you're deploying on a daily basis or every few hours, so that there's less to review and it's faster to get through the process. And as we look to move forward, we can start to ask: how will we audit all of this? One of the values of automated pipelines is that we can pull information out and make it visible. When we did this at one of the banks, we started with Hygieia, but then found that keeping Hygieia up to date with the various other tools it needed to feed from was a lot of work in and of itself. As the tools upgraded, Hygieia needed to be upgraded, and that took time. So we switched to pulling logs out of each of the different tools and using those to build dashboards, which we could then use to radiate useful information back to the organization: hey, look, we went from an average deployment frequency of once every six months to once every month, and we did that in a six-month period. That's the kind of information you can pull out and radiate to help guide and improve the automation within the organization. So it's not just for the auditors; it's also a very powerful tool to help with the adoption of new practices in the organization.
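A metric like that deployment frequency can be derived from very simple log processing. A minimal sketch, assuming the deploy logs can be reduced to a list of ISO-8601 timestamps (the log shape and dates are illustrative):

```python
# Sketch: compute the average interval between deployments from a list of
# deployment timestamps pulled out of tool logs. Timestamps are assumed to
# be ISO-8601 strings; real logs would need parsing first.
from datetime import datetime

def avg_days_between_deploys(timestamps):
    """timestamps: ISO-8601 strings, any order. Returns mean gap in days,
    or None if there are fewer than two deployments."""
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    if len(times) < 2:
        return None
    gaps = [(b - a).total_seconds() / 86400 for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps)

deploys = ["2021-01-01T00:00:00", "2021-03-02T00:00:00", "2021-05-01T00:00:00"]
print(avg_days_between_deploys(deploys))  # 60.0
```

Radiating a trend line of this number over time is what turns raw pipeline logs into the kind of dashboard described above.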
One thing to realize here is that building a paved road and an automated pipeline is great, but it's not going to be for everyone in the organization. It becomes an easy button for those parts of the organization that are struggling to see how to adopt these new practices, and it can make it very easy for them to get started. But your faster-moving groups are going to want to accelerate far beyond anything you could build out as a pipeline. So what we can do is provide them with a set of guidelines, and I'll talk about these in the next section. We give them a set of: this is what you need to be able to do in order to build an appropriate pipeline; these are the things that need to be embedded so that you can do continuous delivery. If you don't want to come and use ours, which has all of that embedded, then go ahead and build this out yourself. You want to try other tools, other capabilities? We're here to help, but it's an invitation, not something we're going to inflict on you. We're not going to tell you to come and do it our way. It's a paved road, but you do have the option to do it some other way if you want to. This is important, too, because we know we don't want to provide just one way to go places. We need to create the ability to innovate. We may decide that we want to go in a different direction, or take a longer route, because we know we could discover something of value in the process. We want it to behave more like Waze and help direct us in the right direction, but not be something that is singly controlled by some centralized team. If you try to do it that way, then you will almost certainly fail as you try to bring in these new practices. So that's automating governance.
There are a lot of pieces that I'm touching on as I pull these threads out, and I'm happy to talk about them in more depth; it's always hard to cram so much into a talk like this. It does sound like a lot of work, and there is a lot of work to be done, but here are some good pointers to get you started. One: it's not about keeping the auditors off your back, and it's not about satisfying a checklist. Think about it as collaborating to create safety in your delivery practices. Think about how we make the safety parts of the organization, all of these governance parts, part of the solution rather than part of the problem. How do we get them to be the departments of enabling safe delivery, using some of the practices I'm talking about? Two: don't try to boil the ocean. Start small, get one team working, and then grow from there. Don't try to do all of this at once. Find one team, one thread, one value stream where you can start to align your safety teams to that value stream and start to understand what their needs are, what their controls are. How can we create an understanding of risk management for that particular value stream, align on that, and then grow from there? And three: focus on the conversation. Make sure you're engaging leaders in it. Focus on the needs, not on the tooling. It's not about having the next greatest product embedded into how you do this. One of the common mistakes I see organizations make is bringing in a new piece of tooling that generates a whole bunch of reporting, but also an awful lot of false positives and lots of other information and reports, none of it fed back to the delivery teams in a meaningful way. So it doesn't really add any value. It doesn't actually help with the intended purpose of helping the organization properly secure its pipelines.
The tools can be extremely valuable, and they are necessary; it's about bringing them in in such a way that we understand and see the benefit, and that we create the appropriate feedback loops into delivery teams so they know how to respond to the information they're being given. So: we've had an introduction to the context of the world we're living in today. We've gone through risk and DevOps and how we try to shorten the change curve to make it easier for people to adopt these practices in heavily regulated organizations. We've talked a lot about automating governance, the kinds of things you can embed into a pipeline, some things you should avoid when starting along this journey, and what sorts of things are helpful as you automate your governance practices. Now we're going to move into what TACO is and how it was one of the things we brought in that was quite useful. When we look at how to create a model we can give to the delivery team so they know what they need to do, we need a way of communicating it, creating that common language for what a safe set of delivery practices means, so that I, as a delivery team, can understand and apply it to my context without necessarily having to wade through 300 pages of documentation. And there are multiple elements to this. One is the TACO piece, which is the tactical: how do I understand what I need to do? Then there's the control piece: how do I make sure that as controls change, I have a pipeline for those controls, that they are being embedded and fed into delivery teams, and that they are being translated into the context of the team's needs? That's in the lean control space, and we'll talk a little about that too. So the elements of TACO: traceability is about identifying what happens in the pipeline.
This is critical from an operational perspective, a security perspective and a compliance perspective, so it's something we want to do: looking for chain of custody of code as it moves through the pipeline, which we can do in a number of different ways. Tagging things back to a Jira ticket is one way I've done this in the past; if you've got consistent tagging and you can pull things out at the logging layer, that can also be a great way. Test results: this is all about ensuring that test results are visible, that we're not hiding them off in a separate repository nobody else gets to look at, and that we're properly collaborating to bring quality into the delivery teams. Deployed version: the version of the thing we deploy is in fact tracked, and changes are properly recorded, which we need from both a security and an operations perspective. Access is about securing the delivery process: who has access to code? Who is managing that code? Is the source code properly access-managed? Are we tracking the creator? Are we signing code so that we can actually trace who did what in a particular pull request? Build once, deploy many: are we creating immutable artifacts, deploying them out, and only allowing pipelines to do those deployments? Compliance is about validating the payload in the pipeline: are we properly managing peer review? Is it properly baked into the way we do things? Are we scanning the code? Are we scanning the artifacts? Are we managing the data and the way it's delivered? And operations is about monitoring and securing after the fact: validating the target of where we're deploying, validating the quality, checking it works, and watching it live. And all of these tie nicely back into the compliance standards that we're looking to map.
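Two of those checks lend themselves to very small automated sketches: chain of custody via ticket references, and build-once-deploy-many via artifact digests. The ticket pattern and artifact bytes below are hypothetical, for illustration only:

```python
# Two illustrative traceability checks (names and patterns are assumptions):
# 1) chain of custody: every commit message must reference a work-item ticket;
# 2) build once, deploy many: the artifact promoted to each environment must
#    have the same digest as the artifact that was originally built.
import hashlib
import re

TICKET_RE = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # e.g. "PAY-1234"

def untraceable_commits(messages):
    """Return the commit messages that don't reference any ticket."""
    return [m for m in messages if not TICKET_RE.search(m)]

def verify_promotion(built_artifact: bytes, deployed_artifact: bytes) -> bool:
    """True iff the deployed bytes are exactly the bytes that were built."""
    return (hashlib.sha256(built_artifact).hexdigest()
            == hashlib.sha256(deployed_artifact).hexdigest())

msgs = ["PAY-1234 fix rounding", "tweak config"]
print(untraceable_commits(msgs))               # ['tweak config']
print(verify_promotion(b"app-v1", b"app-v1"))  # True
```

In practice the digest check is what lets a pipeline refuse to deploy anything it didn't build itself, which is the enforcement behind "only allowing pipelines to do those deployments".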
So this works very well with other compliance frameworks and allows us to create a straw man that we can then marry into the organization. What we did from here is we created a set of audit controls to validate that it was working, and we asked: okay, if we have this set of standards, what else do we need? What else is missing? What else could we embed? Then we embedded this into the standards within the organization, to ensure that if somebody meets the set of standards we've defined in TACO and the risk profile is correct, then we can properly believe that the pipeline is doing what it's supposed to do and that we know how to measure for safety in that pipeline. And my daughter drew a lovely picture that I like to include here. The first version of this was a spreadsheet where we said: okay, we want to understand, what is the purpose of this? Why are we introducing this as a control at all? What would the control be? What would the artifact be, and where would it be stored? What happens when that control passes? What happens when it fails? And who's going to own it? We could then map that out of the spreadsheet and create the picture. If you sign up at the end, I'll send you a copy of the spreadsheet. We could also design and run workshops around this, looking at how we help people understand where there might be risks in how they're doing things today, and what might be ways of overcoming that, introducing controls that mitigate some of the risks they have in their delivery systems today. The ultimate result was a wonderfully complex diagram, but simplified it would look like build, validate, test and deploy. And not in SDLC terms; there are no gates here. This is more about identifying the areas where we can appropriately apply controls, so that we know what to do and can identify them.
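The spreadsheet described above can be sketched as a simple data structure, one record per control. The field names are my own rendering of the columns just listed (purpose, control, artifact, pass/fail behaviour, owner), and the example control is hypothetical:

```python
# Sketch of the TACO spreadsheet as a data structure: one record per control,
# capturing why it exists, what is checked, where the evidence lives, what
# happens on pass/fail, and who owns it. The example record is illustrative.
from dataclasses import dataclass

@dataclass
class Control:
    category: str  # Traceability, Access, Compliance, or Operations
    purpose: str   # why the control exists
    control: str   # what is checked
    artifact: str  # where the evidence is stored
    on_pass: str   # what happens when the control passes
    on_fail: str   # what happens when the control fails
    owner: str     # who owns the control

peer_review = Control(
    category="Compliance",
    purpose="No change reaches production unreviewed",
    control="Pull request has at least one approval from a non-author",
    artifact="Pull request record in the source repository",
    on_pass="Merge is allowed and the approval is logged",
    on_fail="Merge is blocked and the author is notified",
    owner="Development team lead",
)
print(peer_review.category)  # Compliance
```

Keeping the controls in a structured form like this is what makes it possible to generate the picture, and later the dashboards, from the same source of truth.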
So this is more about understanding the phases that a software delivery system goes through, being able to identify what controls we apply in each of those areas, and knowing what we do when something goes wrong. One example of this is Capital One, which has a very similar set of controls. They said: if you want to get to continuous delivery and deploy through our pipelines, these are the things you need to be able to do. When we look at things at scale, across large organizations with 20,000 developers or large deployments, you've got every version of every type of software in there. It's impossible to say, hey, everybody, do things this way; you'll never get to that utopian version of complete compliance. So instead you need to provide frameworks that people can adapt to their needs, while still providing this understanding of how we make sure things are operating the way we want them to.

Nationwide also has some great practices around this, applying lean control to the way they do delivery, and I'd highly encourage reading through some of their material and looking up some of the talks they've done on the topic. They've also created some great dashboards along the way. They've moved to a model where control tickets go into the delivery team's backlog, so the team can solve them, and you can see how many are being dealt with; that tells you whether you're appropriately adapting to managing risk in your delivery teams.

So, to sum this up, it's not about DevOps on its own, and it's really not about DevSecOps either. It's really, and Jon Smart put this very well, about risk: dev risk, ops risk. We're talking about how you embed risk into the DevOps practices and just make it a part of what we do. We don't really need to come up with another term for this.
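The "control tickets in the backlog" idea lends itself to a simple measurement. Here's a minimal sketch of the kind of summary a dashboard could compute; the ticket shape and the "resolved" status convention are assumptions, not any particular tracker's API:

```python
def control_burndown(tickets):
    """Summarize control tickets in a team's backlog.

    `tickets` is a list of (control_name, status) pairs. How many are
    resolved is a rough signal of whether the team is keeping up with
    its risk work, not a gate.
    """
    total = len(tickets)
    resolved = sum(1 for _, status in tickets if status == "resolved")
    return {
        "total": total,
        "resolved": resolved,
        "resolved_pct": round(100 * resolved / total, 1) if total else 0.0,
    }


summary = control_burndown([
    ("peer-review", "resolved"),
    ("artifact-scan", "open"),
])
print(summary)  # {'total': 2, 'resolved': 1, 'resolved_pct': 50.0}
```

The point of a view like this is the feedback loop: the delivery team sees its own control work alongside feature work, rather than compliance arriving as an external audit finding.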
It's about making sure that these safety elements, these governance elements within the organization, are properly aligned with and enabling our delivery practices. So, to recap: we talked about value delivery, the way the world is changing, and complexity. We talked about risk. We talked about how DevSecOps aims to help, but also introduces change, and change in itself is risk. We talked about the communication problems you see across different parts of the organization. We talked about automating governance and the practices we can introduce to help with it. And we talked about why and how to make a TACO, and how TACO is a model that helps you marry these practices together and scale them out across your organization: to understand what a good, safe pipeline is, what it looks like, what we want to see in there, and what we as an organization agree is necessary before we allow a delivery team to deliver software.

So to wrap up, it's really about not using the same thinking we've always used around securing our delivery pipelines. It's about creating a common understanding of what good looks like, and moving away from a model where the compliance, security and architecture teams sit outside the delivery teams and try to dictate how they should do things. That just creates confusion, because they're talking a different language, with a different set of priorities and a different set of goals. We need to create tighter alignment in the way they operate. Safety teams are a great way to do this: aligning small safety teams to the value streams that exist, embedded with the delivery teams, where they can develop a stronger understanding of the context of that delivery.
How do we create that common understanding of what good looks like? This is where something like TACO is a good model, in that you can use it to create that common language, that common way of talking about what we mean when we say we are creating secure and compliant pipelines in our organization. Here are some great references; I'd highly recommend looking these up if you're interested in this space and in how to apply some of these ideas to your organization.

So let's review. Let's go from talking about apples, oranges and bananas to talking about tacos. The idea is to go from all of these different terms and terminologies to a common way of talking about what we mean by safety in our delivery. You should hopefully now have a common understanding of how we get to a good pipeline, if not yet what that good pipeline looks like; that's a longer conversation.

Safety is about behavior, not just tools. To truly get to safety in your organization, you also need to create a culture where it's okay to say, whoops, something went wrong; where it's okay to admit there was a fault or a mistake; and where it isn't necessary to get 300 approvals before you can move anything forward.

There are ways to automate your software delivery compliance. Pulling information out and creating reports aligned to compliance is one way. But when we take what we have to do from a compliance perspective and translate it into the context of the delivery teams, we can also measure and quantify what that looks like, which allows us not only to be more compliant in the way we do delivery, but makes it easier to be compliant in what we do. And I'll leave you with the Rosetta stone, and a quick thank you to everybody for attending.
I hope you enjoyed it. There's only one question required on the survey, which is: hey, did you like this? And if you want to, you can leave your email there and I'll send you a copy of the TACO spreadsheet and some useful links as well. Okay, with that I'll sign off and say thank you very much. It's been a pleasure and I hope to see you all soon.

Peter Maddison

Managing Partner @ Xodiac
