Conf42 DevSecOps 2025 - Online

- premiere 5PM GMT

Secure by Default: Building Confidence in AI-Driven Delivery

Video size:

Abstract

The fastest way to break trust in DevSecOps is to automate insecurity at scale. As AI takes a central role in our pipelines, it is time to rethink what “secure by default” really means.

In this keynote, Dewan Ahmed will challenge the audience to look beyond vulnerability scanners and compliance gates. He will share a vision for intelligent security by design, where native intelligence within the delivery platform detects not only vulnerable code but risky delivery behavior such as misconfigured environments, suspicious artifact provenance, and drift between source and runtime.

You will walk away with a framework for balancing automation with human oversight and examples from Harness’ work on building verifiable, auditable, AI-native delivery systems. In the new world of DevSecOps, safety is not a step; it is an outcome we continuously learn to improve.

Summary

Transcript

This transcript was autogenerated. To make changes, submit a PR.
AI is reshaping how we build and shape software, not gradually, but in giant leaps. It's writing code, generating infrastructure, fixing tests, reviewing pull requests, and making decisions inside our pipelines that used to require a human for the first time. The pace of delivery isn't set by developers or release managers. It's said by automation and specifically AI driven automation. That means something fundamental has shifted. We're delivering faster than ever, but the guardrails we have relied on for the past decade. Were not designed for systems that think, respond and adapt, and that gap between how fast AI can move and how slowly. Traditional security evolves is exactly what we need to talk about, because if we don't redesign our security for the AI era, AI won't just accelerate delivery. It will accelerate risk. So today's session is about one idea. How do we make AI driven delivery secure by default, not by policy, not by hope, not by heroics, but by design. So before we dive into the technical depth of the talk, let me give you a quick sense of who I am and where my perspective come from. I'm Don Hammed. I serve as the principal developer advocate at Harness a company with a simple but ambitious mission to help every software engineering team deliver code quickly, reliably, and efficiently. If your team is trying to ship fast, but not break production in the process. Harness exists for exactly that reason. Before this role, I spent over 15 years working across the software delivery lifecycle as a Java backend developer, QA lead consultant, and later in advocacy. I have worked for both small companies, large enterprise, and on government projects with one goal. I wanted to help teams ship software. With confidence and security baked in. I live on the East coast and outside of work I spend a lot of time giving back to the community by free resume reviews on LinkedIn. Alright, enough about me. Now let's dive into the perspective of secure Pipeline in the AI era. Now, here's the dream. A pipeline that flows in this beautifully simple, linear way. You write code, you build it, you run security, you deploy, and then you monitor and audit. Just like the textbooks say. It's so clean and so orderly that you could imagine it on a DevSecOps brochure or a pamphlet. If you're in the Europe, everything behaves exactly the way you think it will. Nothing surprising, nothing out of place. You push changes and the pipeline gently moves them through, through production, like a well-behaved assembly line. This is the image we all have in our heads when we think about our DevSecOps done right is the idealized version, the Instagram version of software delivery. But here's a problem. This is not how real pipelines work. Real pipelines are messy, real pipelines are unpredictable, and most importantly, real pipelines have layers of hidden complexity. That this simple diagram doesn't show. So let's move Zoom in because the moment you inspect the details, this perfect pipeline start to look very different. So now this, and this is closer to reality, let's break it down because each stage here represents a failure mode we have all encountered. So we start with a base image or dependency set that hasn't been touched in years. It's packed with vulnerabilities, not because anyone is negligent, but because teams are juggling so many priorities that updating the foundation gets deprioritized. So right from the first step, we're injecting risk directly into the pipeline. Next, the build stretch proudly announces all test passed, but when you look closely. You found out that only a tiny silver of a code base is actually tested. This is like checking only the headlights on a car and declaring it road trips ready. It's technically true, but practically useless. Next comes security. So SaaS and das are there in theory, but in practiced. They're often skipped. Maybe scans take too long. Maybe someone's under pressure to meet a deadline, and because the system isn't enforcing anything, the pipeline just shrugs and moves on. Then we deploy everything all at once. No calories, no blue, green, no gradual rollout, just. Ship it and hope nothing explodes. Monitoring fires, alerts only after the application is already throwing. HTPs five hundreds. There's no early warnings, no behavioral signal, only catastrophic detection. And finally, audit becomes a manual guessing game. No continuous compliance, no real traceability, just someone asking. So is production on fire? The point of this slide is simple. The ideal pipeline fails because it ignores reality, and AI is about to amplify all of these weakness, which brings us to our next slide. So over the last decade, we became experts at automating delivery. We built modern CICD pipelines, shifted, left, shifted everywhere, and for the first time in the industry's history. We actually had a clear, well established path from code to production. Think of that pipeline as a set of carefully laid train tracks. A lot of hard work went into getting them stable, predictable, and safe enough to run at reasonable speed. And then we added ai. That's the bullet train you see on the slide. AI writes code. It generates configuration. It proposes infrastructure. It drafts remediation, automatic decisions, and accelerates everything we feed into the pipeline. But here's the critical problem. We added the bullet train before checking whether the tracks could handle it, the rails, our processes, our guard rails, our security models. We're built for a slower world, and the moment you increase speed without increasing safety, the consequences aren't just faster delivery. They're faster failures. This is the core tension of modern DevSecOps. AI doesn't fix broken process. AI amplifies them and that leads directly into our next problem, the real world impact we are already seeing. Here's the first data point. 76% of enterprises, not startups, experiment experimenting, not hobby projects, but established enterprises have already experienced a prompt injection incident. Prompt injection is one of the simplest attacks against AI systems, and yet it's incredibly powerful. An attacker doesn't exploit code. They exploit behavior. They manipulate the model's, instructions, override guardrails, or change the intent behind the interaction. All through crafted input. In traditional security, this is like someone whispering the right sequence of words to get root access. It feels absurd until you realize how easy it is to pull off. And this isn't niche. This is an oh that only happens in the research labs. This is happening in production environment today. Prompt injection is the warning sign, the indicator that we are not adequately prepared for behavior level attacks that AI enables, and it doesn't stop at prompt injection. 66% of organizations have shipped code that contained vulnerabilities introduced by AI generated suggestions. This is the part that's our price. People LLMs are excellent at generating syntactically correct code, but they are not inherently good at generating secure code. They don't understand memory safety. They don't have built-in threat modeling. They don't know your architectural constraints. They don't reason about. Authentication boundaries or or privilege escalation. They simply produce what looks right based on patterns in training data. Okay, so developers accept a suggestion pushes a commit and test pass because tests often cover too little and our vulnerability quietly enters the system. In other words, AI accelerates delivery, but unless we guide it carefully, it also accelerates the spread of insecure patterns. And when you combine that with increasing speed, limited review windows and pressure to ship. A vulnerable code becomes an automated problem at scale. Uh, let's look at one more number and this one reinforces the trend. 66% of enterprises have already experienced some sort of LMG breaking, which means someone managed to push the model into doing something it was explicitly instructed not to do. This is different from prompt injection. Prompted Injection manipulates the model within its allowed structure. Jailbreaking convinces the model to break its own rules entirely. It's like having a safety system that says, I'll never disable breaking, and then someone gets it to say, well, unless you ask nicely. The scary part is that jailbreaking doesn't require advanced hacking skills. It just requires creativity and enough persistence. Once a model breaks its own boundaries, every downstream workflow that relies on it become vulnerable. If the model helps generate configs, those configs can become unsafe. If it influences deployment decisions, those decisions can become risky. If it shapes, uh, access policies, uh, permissions can be escalated or weakened. And if it interacts with user data, the guard rails around sensitive content can evaporate. Jailbreaking is a symptom of a deeper truth. AI systems are not purely deterministic. They behave, they interpret, and they adapt, and that creates an entirely new category of threats. Now let's connect that shift to the broader picture of software delivery. So AI has dramatically improved how we deliver software. It accelerates development, simplifies infrastructure, automat decisions, and turns complex tasks into simple prompts. In many ways, delivery has never been more efficient, but the threat landscape didn't just evolve alongside ai. It evolved faster. So traditional security models were built for static systems. We scanned for, uh, SQL injection. We look for, uh, cross edge scripting. We hardened servers. We watch dependencies, but AI native systems don't break the weight. Traditional systems break. Their vulnerabilities don't live in, uh, code lines. They live in model behavior, uh, contextual interpretation, uh, reasoning pathways, uh, decision making, logic, uh, emergent responses. The visual on this slide captures the difference perfectly. On the left, you have the old world structured static governed by predict predictable code paths. On the right, you have the AI native world. Fluid, contextual, adaptive, and therefore susceptible to entirely new classes of manipulation. This is why we are seeing prompt injection, jailbreaks, data leakage, and behavioral drift. They're not bugs, they're consequences of how AI systems function. So what happens when we put AI inside our delivery workforce, inside our pipelines and inside our automation? That brings us to the next critical idea. This is one of the most important truths in modern DevOps. Pipelines don't have opinions. They automate whatever we put into them, good or bad. If you give them secure, well tested, well reviewed code, they automate excellence. If you give them misconfiguration outdated images or vulnerable AI generated code. They automate those mistakes with the exact same enthusiasm. Now introduce AI into the equation and everything accelerates. AI writes code faster. AI proposes convic faster, generates infrastructure faster, approves changes faster, makes decision faster. So if there's a crack in the system, um, we control or an outdated pattern, uh, permissive policy. AI doesn't fix it. AI amplifies it. So the image on this slide shows this perfectly, a clean green flow until one flow enters, and suddenly the entire system becomes a high speed distribution mechanism for risk. This is why the idea of secure by default is not a slogan in an AI driven pipeline. Security cannot be optional. It cannot depend on developers remembering something or waiting for a late stage scan. Security has to be embedded at every layer because the pipeline will run and it'll run fast, whether the safety rails are there or not. With that in mind, let's talk about how organizations are currently adopting AI and why the patterns aren't keeping up. So across industries, teams are doing something very exciting and very dangerous. At the same time, they're shipping AI integrated systems into production, not prototypes, not, uh, side experiments, real customer facing business critical systems. Um, AI is generating code generating yaml, um, suggesting infrastructure changes. AI is, uh, triaging incidents. AI is shaping the behavior of our pipelines. And teams are adopting these capabilities incredibly quickly because they're powerful and because they re um, they reduce friction. But here's the part we're not talking about enough. We're not updating our security patterns at the same speed. We're still relying on AppSec models built for deterministic systems. Still treat security as a separate workflow. We still focus on code level flaws while the risks have moved to be moved to behavior and context. This creates a mismatch. Modern delivery is dynamic, but our security tools and process are static. So teams are moving fast with ai, but without AI aware guard rails policies or verification, and that mismatch is exactly where incidents happen. To understand this gap more clearly, we need to step back and look at how delivery itself has changed. Okay, so for the last decade, delivery cycles have been compressing Quarterly release became monthly release. Monthly became weekly. Weekly became daily, and in many teams now deploy multiple times per day. Then AI entered the picture and compressed everything. Again, work that used to take hours. Now happening in minutes, work that took minutes, now happening in seconds. The velocity of software creation has fundamentally changed, but security practices didn't evolve. Alongside this shift, we're still depending on manual reviews, periodic scans, point in time approvals, human driven decision making, security teams already overloaded with requests. In a world where code is generated instantly and configurations are modified automatically. Security that operates on human time cycles simply cannot keep up even worse. Misconfigurations are no longer just mistakes. They have become automated hazards. A bad configuration generated by AI doesn't sit quietly in one environment. It gets replicated, it gets de deployed and redeployed. It becomes infrastructure as as a risk. This widening gap, fast delivery, slow security is the core reason we need a new approach. And that new approach starts with understanding that AI native delivery lifecycle. Now this slide captures the fundamental shift in how delivery works in the AI era. Traditionally, our pipeline was linear. You code, build, test, deploy, monitor, changes, move forward step by step. But AI has changed that model entirely. Today's delivery lifecycle looks more like this. AI influences every layer. It helps generate code, is suggest, uh, or modifies configurations. It influences infrastructure definitions. And it affects runtime decisions through automated remediation. And importantly, this is no longer a one-way flow. Issues don't just move downstream. They move in both directions. Here's what that means. A config change can trigger AI to adjust infrastructure and infrastructure. Drift can cause AI to regenerate code. A runtime anomaly can cause automated remediation that shifts your system away from its source of truth. And AI generated fix can accidentally introduce new misconfigurations backup upstream. This creates a closed loop system, not a linear one. And in closed loop, a small mistake doesn't stay small. It gets amplified, it propagates, it becomes a pattern. That's why AI driven delivery demands new controls, new guardrails, and a new security model. One that understands behavior, context, and system wide effects. With this lifecycle in mind, let's talk about what it means for security, because the attack surface has fundamentally changed. So when we think about securing AI native delivery, it's important to recognize that the attack surface has expanded far beyond traditional application vulnerabilities. In the pre AI world, our threats mostly lived in code. SQL injection, uh, cross site scripting, uh, dependency vulnerabilities, configuration mistakes, um, author and access issues. But, uh, with AI embedded across the delivery lifecycle, we now face entirely new categories of risk. First prompt injection. Now, this is the most common attack today. Instead of attacking your code, attackers manipulate the model's behavior by injecting crafted prompts. The model is tricked into revealing data, altering its reasoning, or taking actions you never intended. Second is the AI supply chain. We're not just depending on packages anymore. We rely on model weights, fine tuning data, prompt libraries, uh, embedding stores, third party inference, APIs. Each of these become an attack factor. Um, if any component is compromised, it enters your delivery pipeline just like a toxic dependency. Third is the automated misconfiguration. So in the past, misconfigurations happen manually now AI can generate or modify configurations at machine speed. If there's a mistake, a bad IM roll or an insecure default, a faulty YAML pattern. That mistake is propagated instantly across environments. And last but not the least, the model exploits. These include jailbreaking, model poisoning, output manipulation, data extraction, uh, via embeddings, uh, or inference attacks. These aren't weakness in your code, their weakness in how the model behaves. And this is why traditional AppSec tools fall short. The design to secure code not behavior. The threat surface has moved. Our security model needs to move with it, and yet the teams responsible for securing all of this, they're not being, uh, set up to succeed. When you look at how organizations are adopting ai, there's a clear disconnect. Only 43% of developers say they're building security into AI native applications. Not because they don't care. Developers always want to build things correctly, but because they haven't been trained or equipped for this new threat landscape, 74% of teams say security is still viewed as a blocker. This is the old DevSecOps friction returning in a new form. When delivery accelerates, but security can't keep up. Teams don't slow down. They bypass controls. And 62% of developers report having no AI security training at all zero. Yet we are asking them to defend against prompt injection model manipulation, LLM data leakage, AI driven misconfiguration. Embedding level exploits behavioral automated decision making gone wrong. This is like asking someone to secure a distributed system they've never seen before. Developers want to ship safely, but the industry hasn't given them the guardrails. That gap leads us to central top, to the central concept of the stock, secure by default delivery, a system where safety isn't dependent on heroics or expertise or luck. So what does secure by default actually mean? It means designing delivery systems in a way where the safest possible option is the default option, not an optional configuration, not a best practice, not something someone has to remember. Secure by default means the pipeline starts in a secure posture without needing manual intervention. Developers don't have to constantly think about security edge cases, risky changes are caught automatically. Not through late stage reviews, AI generated output is evaluated for safety as it's created. Unsafe defaults don't exist because the system won't allow them. This isn't just philosophy, it's a practical engineering goal because in an AI driven world where delivery happens faster than humans can meaningfully react. Security has to be built in, not bolted on. And the way we achieve this is by embracing the intelligence that AI provides, not just for generating code or configs, but for securing the pipeline itself. So the next question is, how can AI actually help fix the problems AI introduced? That's where we go next. So we have talked a lot about how AI can introduce new risks, but here's the twist. AI can also help solve the very problems it creates. The key is shifting our mindset from AI generates code and configurations to ai, observes, interprets, and protects the pipeline. So what does this look like first? We need pipelines that know when something simply looks wrong, not because there's a specific rule or signature, but because the pattern doesn't match the system's historic behavior. AI's great at spotting these anomalies. Weird output. Suspicious config changes, unexpected infra calls, things humans might miss. Second, the system needs to understand provenance. Where did this artifact come from? Was it written by a developer generated by an LLM imported from a third party model or data set? Without provenance, there is no trust. AI can track lineage deeply and automatically. Third, the pipeline needs context awareness. A risky action might be safe in staging, but catastrophic in production. A permissive policy might be fine for r and d, but unacceptable for a customer facing workflow. Security shouldn't block everything. It should understand the intent and environment. Finally, the system needs to understand the behavioral gift. AI systems change over time. Their outputs evolve, their interpretations shift. If a model that used to produce safe output suddenly starts acting differently, that drift needs to be caught early. So the big idea is this. AI shouldn't just accelerate delivery. It should accelerate security by giving our pipelines, eyes, ears, and judgment. And this leads us directly to the framework that ties all of this together. To build secure by default systems, we need more than good intentions. We need structure, a way to operationalize everything we have talked about so far. This bring us to the four pillars of Secure by Default DevSecOps, A framework designed for AI native delivery. These four pillars work together to transform pipelines from passive automation into intelligent self-protecting systems. They are contextual intelligence, automatic verification, behavior based, anomaly detection, and continuous learning. And here's the important part. You don't need all four pillars to start. But eventually a mature AI native delivery system needs every one of them. Each pillar solves a specific part of the risk equation. Each builds on the others, and together they create a pipeline that stays secure even as the delivery speeds up. So let's walk through each pillar one by one, starting with the foundation, contextual intelligence. This is what separates traditional DevSecOps from ai, native World traditional pipelines. Detect that something changed. Contextual intelligence, understand what changed, why it changed, and how that change affects the system. So let me break down what it means. So a regular diff just shows lines of code contextual intelligence interprets meaning. Is this change altering security posture? Is it affecting permissions? Uh, is it something AI generated that doesn't align with the developer's intent? Then in AI native delivery, these layers influence each other. A config tweak may alter infrastructure and infrastructure drift may cause AI to regenerate code. Contextual intelligence ties this relationship together. Every team has a typical pattern deployment, size, frequency, rollback rates. When something deviates from the pattern, it might be a risk signal. And then the last one, this is huge. Developers often intend one thing, but the outcome. Ends up being something else. Especially with AI generated changes, contextual intelligence catches this inconsistencies early. This pillar is the foundation because it gives your pipeline awareness. Without context, everything else is guesswork. With it, your pipeline becomes a system that understand what's happening, not just executing steps blindly. Now that we understand context, the next step is to verify trust, which leads us in into pillar number two. So once we have contextual intelligence, once the pipeline understand what's happening, this is the next pillar, which is automatic verification. So this is why we shift from trusted by default, to verify by default. In the AI era, artifacts come from everywhere. Developer written code, AI generated code, external model outputs, downloaded containers, uh, infor configurations, automated re, re, uh, remediation. And because these sources vary, we need pipelines that automatically from trust before anything gets deployed. Now, here's what automatic verification include. The first one is provenance and integrated checks. Where did this artifact come from? Did it come from a trusted bill system or was it generated by ai? Has it been tampered with? If the pipeline can't prove the origin and integrity, it shouldn't ship. The second one is the config environment and infra alignment with the source of truth. AI often generates changes that unintentionally drift from what's defined in Git. A verification ensures that actual environment matches the declared configuration. No surprise, no hidden divergence. The third one is that every pipeline step is signed, trusted, and re reproducible. If a bill can be reproduced, it can be trusted signatures ensure every piece of software has a verifiable chain of custody, and the final one is enforced safety conditions. If a deployment doesn't meet predefined safety criteria, wrong version, missing signature, drifted config, or un untrusted model, then those enforced, uh, safety conditions are not met. So this blocks your pipeline on worms when the pipelines don't meet these predefined safety conditions. Now this leads us to the third pillar, which is a behavior based anomaly detection, and this is where traditional security models completely fall short. Traditional security looks at code dependencies, configs, static rules, but AI native systems break in dynamic way. This through unexpected behavior, a reasoning drift or contextual anomalies. So what does a behavioral based detection let let us do? Uh, first one is it detect abnormal behavior across code config, infra, and model outputs. So this means the pipeline actively watches for things like unusual model responses, unexpected API calls or infrastructure activity changes that don't align with past deployments. The second one is risky. Configurations rarely blow up. Immediately. They degrade slowly. So behavioral signals catch these problems early before the cascade. Next. These attacks don't show up in code, they show up in behavior. The model suddenly becomes more permissive, more confident, or more consistent. So. This is important. Every engineering team has its own rhythm, deployment sizes, frequency, rollback rates. So when something deviates, we need to understand you need to learn the normal delivery patterns and flag the deviations in real time. And the final pillar, the, and arguably the most transformative one, is continuous learning groups. This is where the pipeline evolves over time. It just doesn't detect issues. It learns from them, adapts and becomes safer with every execution. So what it means in practice is you improve security posture automatically using real pipeline data. So the system uses the signal. That from every build, every deploy, every rollback, every failure, the generative signal you get, the system uses that signal to refine its understanding of what is normal versus what is risky. Next teams change tools. Team set up new frameworks. They restructure services. Static rules become outdated quickly. Continuous learning ensures that guard rails evolve with the team instead of blocking valid changes. Now failures aren't just fires to put out. They become lessons. The system absorbs, not just humans. This turns incident response into future prevention and finally turn failures into system level improvements, not human rework. With these four pillars, contextual intelligence, automatic verification, behavior based detection, and continuous learning, we now have the blueprint for Secure by delivery. Secure by default delivery. So let's talk about how this actually shows up in real world through harness. So now that we have identified the four, uh, pillars of secure by default DevSecOps, let's look at how these ideas can come to a, a life inside a real platform. So harness was built on a simple belief delivery should be fast, safe, and as automated as possible without recurring teams to become pipeline expert. Now, when a pipeline fails, most systems throw you into a log avalanche. You are left digging through stack traces, clicking to artifacts, jumping between dashboards, and trying to reconstruct what happened, harness that process. This approaches this differently. The system analyzes the pipeline execution, understands what failed. So here, harness ai, uh, it, uh, understand what failed and automatically explains why. This is contextual intelligence in action. The pipeline understand the story behind the failure, not just the raw output. And it has two huge benefits. First, developers get clarity instantly, no time wasted, uh uh, going through logs or trying to reproduce flaky behaviors. Second. The recovery is faster and safer. This is the first layer of secure by default delivery, an intelligent CICD engine that interprets, explains and assists, not just executes. Now one of the biggest historical problems in DevSecOps is that security is often bolted on a separate workflow, A separate tool, a separate responsibility harness takes a different approach. Security is not something you add on top of the pipeline. It's woven directly into the delivery path. Let's look at wha what, what that means. So on the left side of the screen, you see continuous compliance. Cross pipelines, repositories, services, and environments. Instead of re-running security scans once per sprint or only on pull request, harness, continuous labels, everything using the CIS benchmarks, WASP standards, custom policies and environment. Environment rules. This gives you a breakdown of passes versus failures, the severity of risks, where violations are trending, and which services are repeatedly failing. Security checks. This transforms security from a reactive process into a real time posture dashboard. On the right side, you see a deeper example harness detecting a command injection vulnerability inside a pipeline, and providing a full contextual explanation what the vulnerability, what matters, what it would impact, and how to fix it. This is not just a scanner returning a red cross. This is a system that understand the risk well enough to teach you how to remediate it, and this final piece ties everything together. Harness isn't just adding AI feature to CICD. It's becoming an AI native software delivery platform where intelligence is part of the foundation, not an extension. Now the gif or gif you see here is a good example. This isn't a chatbot sitting beside your pipeline scoring through logs. This is AI inside the pipeline execution, observing the behavior, interpreting the failure, and generating precise actionable guidance. So let me highlight the key advantage here. AI that understands delivery, not just text it, understand deployments, manifest infrastructure, rollbacks and traffic routing, the actual mechanics of software delivery. You have governance and verification built into the intelligence layer. Every incident rollback, successful redeploy and policy violation become training data for the system. So it's continuously learning. So this is what secure by default delivery looks like in practice, a platform that understand context, verifies trust, monitors behavior, and learns continuously. And with that, we have covered the full journey. From the risks of AI native delivery to the framework and the real world platform implementing it. So as we wrap up, I want to bring the entire story together in a few clear takeaways. First, AI native delivery has expanded the attack surface far beyond traditional code vulnerability. Second, our old security patterns can't keep up with this New world. Manual reviews, pointing time scans, rule-based checks. They were already struggling before AI arrived. And third, the future of DevSecOps is secure by default systems. Pipelines that can detect anomalies, verify provenance, understand context, and continuously learn from every deployment. It is not about slowing teams down. It's not about adding more friction or more gates. It's about building delivery systems that are smart enough to keep you safe as you move fast. Systems that make the secure choice the default choice, this is the path forward for DevSecOps. That is how we build confidence in AI driven delivery. Now before we fully wrap up, I'd love to hear your feedback. If you scan the QR code on the screen, it'll take you to a very short form where you can share what resonated with you, what you'd like to. Learn more about your feedback, genuinely helps me improve and as thank you, if you fill out this form, I'll share the resources from this talk with you, the slides, relevant links, reference materials, and any deep dive content that can help you take these ideas back to your own teams. Thank you again for spending your time with me today and for being part of the future of secure AI driven delivery.
...

Dewan Ahmed

Principal Developer Advocate @ Harness

Dewan Ahmed's LinkedIn account



Join the community!

Learn for free, join the best tech learning community

Newsletter
$ 0 /mo

Event notifications, weekly newsletter

Access to all content