Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hey everyone, this is Ish Ra Gotham.
I have been in the software development and quality engineering
space for the last 13 years.
Today I will be discussing a little bit about converged quality engineering,
where we are talking about integrating shift-left and shift-right testing,
basically unifying shift left and shift right for
end-to-end software resiliency.
Let's get into the next slide.
So, to talk about today's quality challenges: the first thing, obviously,
is development silos, where you have all the development,
your functional testing, and everything happening in one place.
And then the other side is operational excellence, and the
operational disconnect, where production feedback loops are separated
from the development lifecycle.
And let's say you have something missed in development.
If a bug is not detected in the development or functional testing phase,
it will be too late: you are finding it in production or even in UAT.
The time to fix the same bug when you find it in development is
going to be much less compared to the same bug found in UAT or
production, where it's going to be very costly to fix.
And obviously the fourth one is market delays, right?
Slower release cycles due to this fragmented quality process are going
to mean a longer time to market, and in this fast-paced
software development era, you want to get your software to market as
soon as you can, to the customers or whoever is adopting your software.
So what we're talking about is having that continuous quality loop.
You have shift-left testing: preventive quality measures during
development, whatever they may be, unit testing, integration testing,
different pre-commit hooks, or even some tests running on the CI.
You have continuous monitoring, where you have real-time quality metrics
through the deployment, and shift-right validations, where you have
reactive resiliency testing in production. And you have that feedback
integration: insights from production informing the development priorities.
Right now, in this era, what's happening is that each of these is
working as its own silo, so you have four different aspects.
What we are talking about is how we can decrease the time between these
loops, so that you can have better software development and resiliency.
So, to talk a little bit about shift-left methodology:
what we have is, obviously, developer testing, where developers are
writing static checks and unit tests, and maybe a little bit of
integration testing. Then you have automated CI validations;
they may be part of your git commit hooks, or running as part of
CI/CD when you merge the PRs.
And then you have augmented testing with behavior-driven development,
maybe Cucumber tests or WebDriver tests or whatever you have,
Playwright or different functional tests.
You can think of the same thing as a testing pyramid,
where developer testing should contribute more than 50%,
the automated CI validations running different integration tests
come in around 30%, functional testing is 15%, and at the end,
end-to-end testing is 5%.
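To make the base of that pyramid concrete, here is a minimal sketch of the kind of developer-level unit test that would run on every commit, through a pre-commit hook or the CI pipeline; the function under test and the file name are hypothetical.

```python
# test_pricing.py: a minimal developer-level unit test (hypothetical example).
# Tests like these form the base of the pyramid and run on every commit,
# via pre-commit hooks or the CI pipeline.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount; the kind of small unit under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_happy_path():
    assert apply_discount(100.0, 20) == 80.0

def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```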
When it comes to shift-right techniques: you have advanced
observability, where you have granular insights into production
behavior patterns; there are so many different tools available in the
market, like Dynatrace or Splunk or Grafana. Then you have chaos
engineering, where you have intentional system disruption.
Every cloud provider has their own tools: AWS has the AWS Fault
Injection Simulator, Google Cloud has its chaos engineering tooling,
and there is Azure Chaos Studio, or you even have standalone tools
like Gremlin, Chaos Mesh, or Litmus.
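As one illustration, here is a minimal sketch of kicking off a pre-defined AWS Fault Injection Simulator experiment from Python with boto3; the experiment template ID is a placeholder, and the actual faults to inject would be defined in the template itself.

```python
# Minimal sketch: start a pre-defined AWS Fault Injection Simulator experiment.
# Assumes boto3 is configured with credentials, and that an experiment template
# (describing the fault to inject, e.g. stopping instances) already exists.
import boto3

fis = boto3.client("fis")

# "EXT123..." is a placeholder; use the ID of your own experiment template.
response = fis.start_experiment(experimentTemplateId="EXT123456789012345")
experiment = response["experiment"]
print(f"Started experiment {experiment['id']}, status: {experiment['state']['status']}")
```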
And then the third thing is canary deployments, where you don't
deploy to a hundred percent. You start with a very small rollout,
like 1 to 2%, or start with A/B tests or config switches, where you
roll out to a minimal set of users; if you find any bugs, you fix them
and then do an incremental rollout to more users.
That way you are exposing the bugs to a smaller set of customers
and fixing them before the incremental rollout,
instead of rolling out to a hundred percent, where the damage will be more.
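The control loop behind a canary rollout can be sketched in a few lines; set_traffic_percentage and error_rate here are hypothetical hooks into whatever your load balancer and monitoring actually expose.

```python
# Sketch of a canary rollout: ramp traffic to the new version in small steps,
# rolling back if the observed error rate crosses a threshold.
# set_traffic_percentage() and error_rate() are hypothetical hooks into your
# load balancer and monitoring stack.
import time

ROLLOUT_STEPS = [1, 2, 5, 10, 25, 50, 100]   # percent of users on the new version
ERROR_THRESHOLD = 0.01                        # roll back above 1% errors
SOAK_SECONDS = 300                            # observe each step before ramping

def rollout(set_traffic_percentage, error_rate):
    for percent in ROLLOUT_STEPS:
        set_traffic_percentage(percent)
        time.sleep(SOAK_SECONDS)              # let real traffic exercise the canary
        if error_rate() > ERROR_THRESHOLD:
            set_traffic_percentage(0)         # roll back: damage limited to a small user set
            return False
    return True                               # fully rolled out
```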
Obviously, the last one is performance monitoring: real-time analysis
of system performance metrics. You have Prometheus with Grafana or
InfluxDB, and even the cloud providers have their own native tools:
AWS CloudWatch, Google Cloud Monitoring, or Azure Monitor.
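For example, Prometheus exposes an HTTP query API that this kind of real-time analysis can sit on; this sketch pulls a p99 latency figure, assuming a Prometheus server at its default local address and a standard request-duration histogram.

```python
# Sketch: pull a real-time p99 latency metric from Prometheus's HTTP API.
# Assumes a Prometheus server at localhost:9090 scraping a standard
# http_request_duration_seconds histogram; adjust the query to your metrics.
import requests

PROM_URL = "http://localhost:9090/api/v1/query"
QUERY = 'histogram_quantile(0.99, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))'

resp = requests.get(PROM_URL, params={"query": QUERY}, timeout=10)
resp.raise_for_status()
result = resp.json()["data"]["result"]
if result:
    timestamp, p99_seconds = result[0]["value"]
    print(f"p99 latency: {float(p99_seconds) * 1000:.1f} ms")
```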
So what we are trying to do is take whatever tools I mentioned on the
shift-left or shift-right side and have a cohesive, unifying
orchestration, where each of these tools works as one agent.
In the agent approach, each agent is solving one problem, and at the
same time they can interact with each other, for better communication
and faster development lifecycles.
Let's talk about that a little bit. The first implementation challenge
is cultural transformation: this is obviously shifting the mindset from
testing to quality engineering. You used to have testing as a separate
domain; you want to get to the point where quality engineering, or
quality, is owned by each person on the team, from product to program
to development to the SRE or DevOps person, whoever it is. Everybody
has their quality engineering hat on.
And the second thing is toolchain integration: connecting these
disparate quality systems across the lifecycle. To give you an example
of what I'm talking about: your unit testing is one agent, your
integration testing is one agent, your functional testing is one agent,
and your DevOps checks or git checks are one agent.
And then you have the observability or operational excellence side:
all the performance, chaos engineering, and observability tools we
talked about. Think of each one as an agent, and each one performs
one diligent, specifically targeted task, so it can give you a result.
But what we're talking about is having an agent-to-agent protocol,
where each of these systems, or agents, can talk to the others more
cohesively in one orchestration. That way, wherever a fault shows up,
it is communicated to the developer faster, the fix is done faster,
and you release the rollout or fix things for the customer faster.
A rough sketch of that idea follows.
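This is only my illustration, not a named protocol from the slides: a minimal Python sketch where each quality tool is wrapped as an agent that publishes events onto a shared bus, and other agents subscribe to the event kinds they care about. All the names here are hypothetical.

```python
# Minimal sketch of the agent-to-agent idea: each quality tool (unit tests,
# chaos experiments, observability) is wrapped as an agent that publishes
# events to a shared bus; other agents subscribe and react.
from collections import defaultdict
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class QualityEvent:
    source: str                     # e.g. "chaos-agent", "monitoring-agent"
    kind: str                       # e.g. "fault_detected", "tests_passed"
    detail: dict = field(default_factory=dict)

class QualityBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, kind: str, handler: Callable[[QualityEvent], None]):
        self._subscribers[kind].append(handler)

    def publish(self, event: QualityEvent):
        for handler in self._subscribers[event.kind]:
            handler(event)

bus = QualityBus()
# A developer-notification agent reacts as soon as any agent reports a fault,
# closing the loop from production back to development.
bus.subscribe("fault_detected", lambda e: print(f"Notify dev team: {e.source} saw {e.detail}"))
bus.publish(QualityEvent("monitoring-agent", "fault_detected", {"service": "checkout", "p99_ms": 2300}))
```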
To accomplish that, obviously, you need skill development.
We are talking about building T-shaped quality professionals with
cross-domain expertise. What is a T-shaped quality professional?
Basically, a T-shaped quality professional is one who has depth in one
area but is aware of the rest; that is to say, they are a jack of all
trades in all the other areas.
To give you an example in quality engineering: somebody's depth may be
in test automation, code review, API testing, or CI/CD pipeline
building; maybe that is their core competence. But what we are talking
about is, on top of that depth, having the width: from an operational
excellence standpoint, a site reliability standpoint, or an
observability standpoint, knowing the tools available in the market and
the specific, targeted problem each one solves. It's the same thing on
the development side: what tools are available, and what can be done
with them, for the quality side of things.
If you look at the same thing here, the core quality competences are
quality risk assessment, test strategy development, defect analysis,
and quality metrics design; on the operational skills side, it's
observability tooling, performance analysis, chaos engineering,
and production monitoring.
So if you have that kind of resource available, then you can follow
this phased adoption model. Obviously, anything new you want to
implement, even for a small software team, is a big ship to steer.
If you want to move the ship or make a turn, it's going to take a
large amount of time, but when you make the turn properly, in the
right direction, it's going to give you better results.
To talk about the foundation: establish core metrics and baseline your
current quality performance, where we stand today and what we are
trying to accomplish. The second thing is connecting these development
and production quality tools, the different agents and orchestration I
was talking about, and seeing how you can build these cohesive systems.
You can do POCs and see what off-the-shelf tools are available,
including the cloud provider tools, and see how they fit into your
ecosystem, because everybody has a different ecosystem and a different
set of cloud tools they're using. And obviously, use a lot of
automation: in this automation era, implement augmented testing and
automation tools to get better results. And finally, maturity: achieve
a self-improving quality ecosystem with predictive capabilities.
At the end of the day, even if you do the POC and implement certain
things, some will work out the way you want, with positive results,
and some will have negative results. Take whatever learnings come out
of those negative results, make them an opportunity to learn more,
and implement again to get better results.
Here are some empirical results from published papers. The teams that
implemented these T-shaped professionals, with jack-of-all-trades
knowledge across all the areas, were able to accomplish a 78% defect
reduction, 63% faster detection of bugs, a 42% decline in incidents,
and a 3.2x ROI. Those are very good results for whatever investment
you are making in this fast-paced software development era.
And this is one of the case studies, obviously from the financial
services side, where release cycles are normally much longer.
If you look at the starting point, they used to have 12-week release
cycles, 85% manual testing, and siloed QA and SRE teams. When they
transformed to this agentic approach, with unified quality metrics,
cross-functional quality guilds, and production telemetry in test
environments, they were able to achieve two-week release cycles,
73% automated coverage, and a 58% reduction in incidents.
I know that compared to a lot of the big tech companies, where you
deploy every day, this might look very far off. But if you think about
financial services, any bad move you make is going to cost you a lot,
in customers and even in financial damage. So these are really good
results if you look at it from a financial services business perspective.
And obviously, there are the AI-augmented quality tools. Today, for
every problem you're facing in quality engineering, there are tools
available. You have tools for self-healing tests, where machine
learning algorithms automatically repair your broken test scripts
without human intervention.
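The products differ, but the core self-healing move looks roughly like this sketch: try the primary locator, fall back to known alternates, and record the repair so the script can update itself. The Selenium calls are standard; the healing strategy and names are illustrative.

```python
# Sketch of the self-healing idea behind these tools: when the primary locator
# breaks, try alternate locators and log the repair so the test maintains itself.
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

# Candidate locators for one element, in priority order; the first is primary.
SUBMIT_LOCATORS = [(By.ID, "submit-btn"), (By.CSS_SELECTOR, "button[type=submit]")]

def find_with_healing(driver, locators, heal_log):
    primary = locators[0]
    for strategy, value in locators:
        try:
            element = driver.find_element(strategy, value)
            if (strategy, value) != primary:
                # A fallback matched: record the repair so the script can be
                # updated automatically instead of by hand.
                heal_log.append({"broken": primary, "healed_to": (strategy, value)})
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"no candidate locator matched: {locators}")
```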
You can give your Figma changes to your AI agent, and it can go fix
your automation tests and get the results. Then there is intelligent
test selection: in the same way, you can give it your product designs
and your product or business documents, and it can come up with the
proper test cases; a small sketch of the selection idea follows.
The third one I'm talking about is anomaly detection, and the fourth
one is code quality prediction.
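Reduced to its core, intelligent test selection is a mapping from what changed to which tests exercise it. This sketch uses a hand-maintained map so the idea stays visible; real tools learn the mapping from coverage data and failure history, and all paths here are hypothetical.

```python
# Sketch of intelligent test selection: run only the tests that exercise the
# changed files, instead of the whole suite on every change.
COVERAGE_MAP = {
    "src/pricing.py":  ["tests/test_pricing.py", "tests/test_checkout.py"],
    "src/checkout.py": ["tests/test_checkout.py"],
    "src/search.py":   ["tests/test_search.py"],
}

def select_tests(changed_files):
    # Unknown files fall back to the full suite rather than skipping coverage.
    if any(path not in COVERAGE_MAP for path in changed_files):
        return sorted({t for tests in COVERAGE_MAP.values() for t in tests})
    selected = set()
    for path in changed_files:
        selected.update(COVERAGE_MAP[path])
    return sorted(selected)

print(select_tests(["src/pricing.py"]))  # ['tests/test_checkout.py', 'tests/test_pricing.py']
```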
So all these things can be achieved with the augmented quality tools
available in the market. There are so many different tools, like
Testsigma or Rapid tools; everybody is on top of their game with these
tools, and you can definitely explore more.
To come to the conclusion, the key takeaways from what I'm talking
about here: integrate quality loops, connecting shift-left prevention
with shift-right validation for complete coverage, so you have a
complete pipeline built where each agent communicates with the others
cohesively; that way your development lifecycle is shorter and you have
fewer bugs, and even when you do have a bug, the time to fix it and get
it to market is faster. Leverage AI capabilities as much as you can:
implement intelligence-driven testing to reduce maintenance burdens.
Obviously, develop T-shaped teams, where every resource has the
jack-of-all-trades width as well as depth in a certain area of their
proficiency. And measure holistically: see where you stand today,
where you want to reach, and how you can reach your goals.
I think, with that, I'm done.
Thank you very much.