Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello all, welcome.
This is Sharon ING Hora.
I'm going to talk on platform-first mobile automation.
I have around 20-plus years of experience. I'm working with Cognizant Technology Solutions, and I am into 10-plus years with mobile automation on different cloud platforms like CTS, Perfecto, BrowserStack, and mobile device labs.
I'm available on LinkedIn if we want to discuss more on this.
So I'm working with mobile, cloud, plus AI as well, so I thought of presenting this deck.
So yes, let's get started.
Platform-first mobile automation: engineering testing infrastructure across enterprise ecosystems.
A technical exploration of how platform engineering principles transform mobile automation from an isolated QA process into integrated self-service infrastructure that scales across multiple cloud environments and diverse development teams.
As per the first slide, platform-first mobile automation, we are talking about testing infrastructure across the various enterprise ecosystems.
The platform engineering revolution.
So modern enterprises face complex mobile testing challenges nowadays. We are doing mobile automation, but we are dealing with very complex infrastructure: multiple device ecosystems, diverse operating system versions, varying network conditions, and continuous delivery at scale.
So basically, we have various device ecosystems available right now, and we have diverse operating systems like iOS and Android. If you see, every month we have new releases, we have new devices, different types of versions, upgrades, right? And there are various network conditions also associated with it.
There is a lot of complexity right now testing with mobile devices, and we have native apps or web-based apps that we have to validate and test on mobile devices. It's very complex nowadays. And continuous delivery at scale.
Traditional approaches with siloed QA teams and manual infrastructure management can't keep pace with these demands right now. Yes, it is very difficult. You cannot buy each and every device. You need some type of solution where devices are available on demand, as and when the need is there.
This is the platform engineering revolution: mobile automation as a product. In platform engineering, obviously, they think this is not just mobile automation. It's a product rather than a service. Okay? It's altogether a different product, because it's a big elephant nowadays; everybody's using smartphones, mobile devices, iOS or Android.
So it's about creating a self-service, API-driven system that empowers development teams while maintaining enterprise-grade reliability and governance. So basically, it is API-driven and it maintains enterprise-grade reliability and governance. It is secured; governance is monitoring all those policies, compliance, and all those things.
Organizations implementing platform-first mobile automation report dramatic improvements in deployment confidence, reduced time to market, and enhanced ability to respond to market changes. So yes, as you see, it has dramatic improvements in deployment confidence, reduced time to market, and all those factors which are associated with this.
Now we talk about architectural foundations. So what are these architectural foundations? Infrastructure as code; GitOps-driven deployments, like CI/CD and VCS, right? We have GitLab, GitHub, and multi-tenant architecture as such, right?
So what is infrastructure as code? Modern platform engineering treats mobile testing infrastructure as code, eliminating environment drift and ensuring reproducible test conditions. So we have Terraform or Pulumi on AWS; we have Azure also, Google Cloud also, right? This defines environments spanning multiple cloud providers. These use YAML or JSON configuration files to automatically provision appropriate resources.
Kubernetes provides the runtime foundation with isolated, secure containers. So yes, we have Kubernetes, which works with secure containers, a containerized approach using Kubernetes.
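The idea above can be sketched in a few lines. This is a minimal, hypothetical illustration of infrastructure as code: a test environment declared once in code and rendered to a Kubernetes-style manifest, so every environment is reproducible. The tenant name, labels, and quota fields are illustrative, not any real platform's schema.

```python
# Infrastructure-as-code sketch: declare a mobile test environment
# once, render it to a Kubernetes-style manifest dict. All names
# (tenant, quota keys) are illustrative, not a real platform API.

def render_test_namespace(tenant: str, cpu_limit: str, mem_limit: str) -> dict:
    """Render an isolated, quota-bounded namespace for one tenant."""
    return {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
            "name": f"mobile-testing-{tenant}",
            "labels": {"team": tenant, "managed-by": "iac"},
        },
        # Resource quota keeps one tenant from starving the others.
        "quota": {"cpu": cpu_limit, "memory": mem_limit},
    }

manifest = render_test_namespace("payments", cpu_limit="8", mem_limit="16Gi")
```

Because the environment is generated from code rather than clicked together by hand, two runs of the same commit always get the same test conditions, which is exactly the "no environment drift" point above.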
Now we will talk about GitOps-driven deployment. Testing configurations, automation scripts, and infrastructure definitions reside in Git repositories, enabling version control, peer review, and automated deployment workflows. These changes follow the same rigorous process as application code: staging environments receive updates first for validation, and progressive delivery patterns roll out new testing capabilities gradually.
Then we have multi-tenant architecture. Enterprise platforms support multiple development teams while maintaining strict security boundaries through isolated testing environments. Network segmentation ensures sensitive testing data remains isolated. Each tenant receives a dedicated namespace, resource quotas, and access controls. It also has role-based access controls, which integrate with enterprise directories.
So basically, it is a multi-tenant architecture where we are using cloud-based infrastructure; it might be AWS, GCP (Google Cloud), or Azure. So it is very secure, and network segmentation, basically, ensures your data is secure enough, right? And it has role-based access controls; by default, access is denied. So if you have a role, you will be given the access, like that in cloud. This is how it works.
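The default-deny behavior described above can be sketched very simply: access is granted only when a role explicitly allows the action, and everything else, including unknown roles, falls through to deny. The role table here is illustrative, not taken from any real directory.

```python
# Default-deny role-based access control sketch. Only explicitly
# granted (role, action) pairs are allowed; unknown roles and
# unlisted actions are denied. The role table is illustrative.

ROLE_PERMISSIONS = {
    "tester": {"run_tests", "view_results"},
    "admin":  {"run_tests", "view_results", "manage_devices"},
}

def is_allowed(role: str, action: str) -> bool:
    # An unknown role maps to the empty set, so the check fails:
    # that is the "denied by default" behavior.
    return action in ROLE_PERMISSIONS.get(role, set())
```

So a `tester` can run tests, but `manage_devices` or any request from an unregistered role is denied without needing an explicit deny rule.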
Now we'll talk about AI-enhanced resource orchestration. Modern platform engineering incorporates artificial intelligence to optimize mobile testing resource allocation. Machine learning algorithms analyze historical testing patterns. Predictive models consider application release schedules, test execution times, seasonal traffic patterns, and team productivity cycles. Intelligent test selection algorithms identify the minimum test suite required to validate specific changes.
So basically, what we are trying to say here is that machine learning algorithms are there which will analyze historical testing patterns. We have a lot of data from the past, which is getting analyzed by AI, LLMs, or machine learning algorithms, and that is the key for pattern testing and predictive models. They consider application release schedules. So basically, all the LLMs or predictive models are there, based on heavy data from past years, and they will predict release schedules, execution times, and team productivity cycles as well.
Now we'll talk about intelligent test selection algorithms, which identify the minimum test suite required to validate specific changes as such. Because of this machine learning, we have intelligent test selection algorithms, built up over the years, which help us minimize the test suite required to validate specific changes. So all the critical flows are tested using these intelligent test selection algorithms.
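A minimal sketch of the selection idea: given which modules a change touched, pick only the suites whose coverage overlaps those modules, plus the critical flows that always run. In a real platform the coverage map would be learned from historical runs; here it is hard-coded, and all suite and module names are hypothetical.

```python
# Intelligent test selection sketch: map suites to the modules they
# cover, then pick the minimum set of suites for a change, always
# including critical flows. The coverage map is illustrative.

COVERAGE = {
    "login_suite":    {"auth", "session"},
    "checkout_suite": {"cart", "payments"},
    "search_suite":   {"search"},
}
CRITICAL = {"login_suite"}  # critical flows run on every change

def select_suites(changed_modules: set) -> set:
    # A suite is selected if its covered modules intersect the change.
    selected = {s for s, mods in COVERAGE.items() if mods & changed_modules}
    return selected | CRITICAL
```

A payments-only change would run just the checkout and login suites instead of the full matrix, which is where the cost and time savings come from.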
This proactive approach eliminates resource contention while minimizing cloud computing costs, ensuring adequate testing capacity during peak demand while avoiding over-provisioning during quiet periods. Yes. So this is very cost-effective, because this is on-demand cloud and we are using resources on demand. This is a proactive approach. So that is why AI and cloud help a lot with cost reduction for the company.
Now we will talk about cross-platform compatibility automation: comprehensive device matrices, beyond functional testing, and global device farms. What are comprehensive device matrices? Sophisticated platforms maintain device matrices spanning iOS, Android, and emerging platforms, automatically executing test suites across representative device combinations. So basically, it's a mechanism which has the matrices spanning across platforms like iOS and Android, and which executes test suites across representative device combinations; on various device combinations, it is going to execute all the test suites.
It also goes beyond functional testing: compatibility testing extends to performance, accessibility, and user experience consistency testing as well. It is automated with visual regression testing, which detects UI inconsistencies across devices. Devices are of various sizes and shapes, right? If there are UI inconsistencies, we can identify them very easily using visual regression, across the devices, across the platforms.
Then we have global device farms. Cloud platforms are available, like SeeTest, which have multiple physical devices; Perfecto; or we can talk about mobile device labs or BrowserStack. These have real devices as well as emulators, which enable testing across various network conditions and regional configurations. So based on regions, we can select these devices. Suppose I'm in the US and I want to select devices in Australia, because I have a customer in Australia. I want to see the latency, network speed, and everything. So using these cloud providers, I can select the region and we can test accordingly. And that will let us know what is going on with that region's devices, or whether we are facing any issues.
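The region-based selection above might look like this against a device-farm inventory. The inventory, device names, and region identifiers are all hypothetical; real providers such as BrowserStack or Perfecto expose this through their own APIs.

```python
# Region-aware device selection sketch over a hypothetical device-farm
# inventory. Filtering by region lets you run tests where your
# customers actually are and observe realistic latency.

DEVICE_FARM = [
    {"model": "Pixel 8",    "os": "Android 14", "region": "us-east"},
    {"model": "iPhone 15",  "os": "iOS 17",     "region": "ap-southeast"},
    {"model": "Galaxy S24", "os": "Android 14", "region": "ap-southeast"},
]

def devices_in_region(region: str) -> list:
    """Return all farm devices physically located in the given region."""
    return [d for d in DEVICE_FARM if d["region"] == region]
```

So a tester in the US with Australian customers would request `devices_in_region("ap-southeast")` and measure network behavior from there, rather than from a US-hosted device.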
So platform orchestration manages device allocation, test scheduling, and result aggregation, presenting unified compatibility reports to development teams as such. This is platform orchestration, which manages all these device allocations and test scheduling. This is the orchestration layer of the platform, basically.
Now we'll talk about progressive delivery integration. So what is it? It's feature flag integration, canary deployment testing, and automated rollback triggers. What is feature flag integration? Suppose, for example, you are going to deliver new code in production, and it's a very new functionality. What you will do is slowly throttle that in production on Android or iOS platforms, like 1%, 2% in the first few days based on the feedback. Then you will enable that feature flag for 15%, 20%, and within a couple of months you will be throttling that to a hundred percent. So that is called feature flag integration.
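A common way to implement the percentage throttling just described is hash-based bucketing: hashing the user id gives each user a stable bucket from 0 to 99, so the same users stay in the rollout as the percentage grows from 1% toward 100%. This is a generic sketch of the technique, not any particular feature-flag product.

```python
# Feature-flag percentage rollout sketch. Hashing (flag, user) gives
# each user a stable bucket 0-99; a user is in the rollout when their
# bucket is below the current percentage, so raising the percentage
# only ever adds users, never removes them.
import hashlib

def in_rollout(user_id: str, flag: str, percent: int) -> bool:
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # deterministic bucket in 0..99
    return bucket < percent
```

Because the bucket is derived from the flag name as well, different flags roll out to independent slices of the user base.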
Then we have canary deployment testing, right? What is it? It automatically compares metrics between control and experimental groups, detecting performance regressions, crash rate increases, or user experience degradation. So basically, what it is doing is providing the metrics on the experiments, detecting performance regressions, crash rate increases, or user experience degradations as well.
Now we'll talk about automated rollback triggers. These respond to testing failures or metrics degradation, instantly reverting problematic changes to protect user experience while enabling aggressive innovation cycles. So we are talking about automated rollbacks. Say, for example, I am deploying some new code in production and something happens, right? Some network problem, some code is not working, or something fails, right? There should be an automated rollback mechanism, right? Which we can call disaster recovery, right? So we can make sure we are back on the original state, the previous, original state. This is how these automated rollback triggers work.
Basically, this is progressive delivery integration; three factors are there, as we discussed. These systems balance risk and velocity in mobile application development, providing confidence data that informs rollout decisions at each stage. Yes, this will make sure that at every stage we are getting the correct information based on these flags, deployments, or rollback triggers.
Now we talk about industry applications: financial services. What is it? Compliance-driven automation. In the financial sector, PCI-compliant handling of data is very important. You cannot share data just like that. It should be compliant and masked; otherwise a company or financial organization might face a lot of issues, or people can sue if financial data is shared or there are losses.
So what we are saying is that financial service organizations face unique mobile testing challenges due to strict regulatory requirements and security constraints, and platform engineering addresses these. Basically, a financial organization has to be PCI compliant. It should not share data like that. It's a very secure industry, we can say. So how does platform engineering address these issues? Through automated compliance testing. So yes, PCI DSS, SOC, and regional banking regulations are there, right? Which is automated. Manually, you might fail on something, but if processes are automated and vetted accordingly, there are fewer chances of failures.
And then there is a policy-as-code framework, right? That defines compliance rules in machine-readable format. Then security testing automation that detects vulnerabilities and validates encryption, ensuring secure data handling. So basically, security testing is a must, and if it is automated, that really saves a lot of time on leakages and vulnerabilities across the financial organization with that product.
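"Compliance rules in machine-readable format" can be sketched as data plus checks: each rule is named and evaluated automatically against a deployment configuration. The rule names and configuration keys below are invented for illustration; this is not a real PCI DSS checker.

```python
# Policy-as-code sketch: compliance rules expressed as named checks,
# evaluated automatically against a deployment config. Rule names
# and config keys are hypothetical, not real PCI DSS controls.

POLICIES = [
    ("encryption-at-rest", lambda cfg: cfg.get("storage_encrypted") is True),
    ("no-public-buckets",  lambda cfg: not cfg.get("public_access", False)),
    ("tls-required",       lambda cfg: cfg.get("min_tls", "") >= "1.2"),
]

def violations(cfg: dict) -> list:
    """Return the names of all policies the config fails."""
    return [name for name, check in POLICIES if not check(cfg)]
```

Wiring `violations` into the pipeline as a quality gate is what turns compliance from a manual gatekeeper review into an automated check: a non-empty result blocks the deployment and the same output doubles as audit evidence.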
This approach transforms compliance from a manual gatekeeper process into an automated quality gate, dramatically reducing regulatory review cycles while maintaining strict security standards. Automated reporting generates the compliance documentation required for regulatory audits, streamlining governance processes without sacrificing development velocity. So yes, this approach really transforms compliance from a manual gatekeeper to automated, where you have fewer chances of leakages and security issues, because it is automated and it's a role-based process, basically.
Now we'll talk about another industry application: e-commerce performance at scale. First, traffic simulation engines. What do they do? They model complex user journeys, including browsing, searching, cart management, and checkout processes, across representative device configurations. Realistic load patterns include geographic distribution, device diversity, and temporal variations that mirror actual shopping behavior. So what these traffic simulation engines do, basically: let's say you are navigating to one website, right? They will simulate the traffic patterns, the load patterns, at what time people are visiting these sites, whether it's holidays or weekends or what days, basically. So these simulation engines will simulate all of this.
Then, automated performance monitoring. What it does is detect degradation in real time, triggering scaling actions and alerting operations teams to prevent performance-related incidents during peak shopping events, right? Like Thanksgiving or the holidays; that's a peak shopping day, right?
Integration with content delivery networks and edge computing platforms ensures optimal performance worldwide, right? So basically, in the cloud world, edge computing platforms are platforms which are near to the geographic locations, like Australia, the US, or Europe; we will have edge locations near to those, where performance is better. If I'm accessing a website from Australia and it has an edge location there, it performs well. If that website is hosted only in the US and I'm accessing it from Australia, there will be high latency and performance will be very slow. So that is the concept of edge computing.
Now we'll talk about performance budgets. These enforce acceptable response times, preventing performance regressions from reaching production and maintaining optimal user experience across diverse device ecosystems. Yes. So basically, we are enforcing acceptable response times and preventing performance regressions. We also automatically scale testing environments to match production load patterns. So basically, with performance budgets, we are putting in the same budget as whatever we are doing in production, which will scale the testing environment automatically so that we can match the actual production load patterns. That is a very important key feature.
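Enforcing a performance budget can be as simple as comparing measured response times against agreed limits and failing the build when any step exceeds its budget. The step names and millisecond values are illustrative.

```python
# Performance-budget sketch: flag any user-journey step whose measured
# response time exceeds its budget, so regressions are caught before
# production. Step names and budget values are illustrative.

BUDGET_MS = {
    "app_launch": 2000,  # cold start must stay under 2s
    "checkout":   1200,  # checkout round trip under 1.2s
}

def over_budget(measured_ms: dict) -> dict:
    """Return the steps (and their timings) that exceed their budget."""
    return {step: ms for step, ms in measured_ms.items()
            if ms > BUDGET_MS.get(step, float("inf"))}
```

A CI gate would then fail whenever `over_budget` returns a non-empty dict, blocking the regression from reaching production.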
Now we'll talk about self-service APIs and developer experience. Successful mobile automation platforms prioritize developer experience through RESTful APIs, enabling programmatic access to testing capabilities. Okay, RESTful APIs will enable programmatic access to testing capabilities. We have RESTful APIs where we can directly call API services and check; we can get the response, whether it's GET or POST, and validate the data accordingly. Then GraphQL interfaces provide flexible queries; we can have GraphQL queries for precise testing data, and real-time subscriptions deliver test results and infrastructure status updates. So in real time, we can get the test results and infrastructure status updates as well.
We also have comprehensive SDK libraries, simplifying platform integration across popular programming languages. So yes, we have these SDK libraries available, which we can integrate into our programming languages; it might be Java, C, whatever language we want to integrate.
Code generators are there, creating boilerplate testing configurations. Nowadays we have Copilot and all those things; we can just instruct it and we can get the code as well. And there are various other code generators too. Interactive documentation with executable examples demonstrates the platform, so we can provide interactive documentation as such, which will show us exactly how the code gets executed and how we can run those executables.
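To make the self-service API idea concrete, here is a hypothetical sketch of what a developer's script might send to trigger a test run. The endpoint URL, payload shape, and field names are all invented for illustration; a real platform's SDK would define its own. The function only builds the request description, so no network call is made here.

```python
# Self-service API sketch: build the request a developer's script
# would send to start a device-farm test run. The endpoint and
# payload shape are hypothetical, not a real platform's API.

def build_test_run_request(app_build: str, suite: str, devices: list) -> dict:
    """Describe a POST request that would trigger a test run."""
    return {
        "method": "POST",
        "url": "https://platform.example.com/api/v1/test-runs",
        "json": {"build": app_build, "suite": suite, "devices": devices},
    }

req = build_test_run_request("app-1.4.2", "smoke", ["Pixel 8", "iPhone 15"])
```

In a real integration this dict would be handed to an HTTP client, and the response would carry a run id that the script polls, or subscribes to, for results.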
Now we'll talk about observability and operational excellence. First in this section is comprehensive monitoring; then we have distributed tracing, then service level objectives, then error budgets.
So what is comprehensive monitoring? We are talking about coverage of infrastructure health, test execution metrics, and developer productivity indicators, enabling proactive issue resolution and capacity planning. So basically, this is overall monitoring of execution; everything comes into comprehensive monitoring as such, right? All the capacity planning, all types of indicators are there.
Distributed tracing provides visibility into complex testing workflows, identifying bottlenecks and optimizing execution paths across the testing infrastructure. What we are saying here is that it provides visibility into testing workflows, identifying bottlenecks, if we have any, right? That will optimize execution paths as well across the testing infrastructure.
Then, service level objectives. Here we are talking about defined platform reliability expectations, driving continuous improvement efforts through objective criteria for infrastructure changes. So basically, we are talking about platform reliability, driving continuous improvement efforts. This is an ongoing process for changes in the infrastructure.
Then, error budgets. These balance innovation velocity with stability requirements, providing objective criteria for managing the pace of platform evolution. So here we are talking about how the platform evolves, right? Accordingly, we are budgeting, providing objective criteria. So it should be criteria-driven, right? It should not be that we do something just for the sake of doing it; it should be business-driven, criteria-driven.
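The error-budget arithmetic behind that criterion is simple: a 99.5% SLO leaves 0.5% of requests as the budget, and teams keep shipping while budget remains but prioritize stability once it is spent. This is the standard SRE-style calculation, sketched with illustrative numbers.

```python
# Error-budget sketch: the SLO defines how many failures are allowed;
# the remaining budget is how much of that allowance is still unspent.

def error_budget_remaining(slo: float, total: int, failed: int) -> float:
    """Fraction of the error budget still unspent (negative = exhausted).

    slo:    target success rate, e.g. 0.995 for 99.5%
    total:  total requests (or test runs) in the window
    failed: how many of them failed
    """
    allowed = (1.0 - slo) * total          # failures the SLO permits
    return 1.0 - (failed / allowed) if allowed else 0.0
```

With a 99.5% SLO over 10,000 requests, 50 failures are allowed; 25 observed failures leave half the budget, while 100 failures exhaust it and the result goes negative, which is the objective signal to slow releases and focus on stability.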
So now we are talking about governance and policy enforcement. Enterprise platforms require governance frameworks that balance developer autonomy with organizational control. Here we are talking about policy engines for security requirements, resource limits, and compliance standards. Automated policy validation prevents non-compliant configurations from reaching production; integration with admission controllers and validating webhooks enforces policies at the infrastructure level.
We are also talking about cost management policies, which prevent runaway resource consumption while enabling burst capacity for critical testing needs. Resource quotas establish clear boundaries; budget alerts provide early warning of potential overruns; automatic scaling policies balance cost and testing effectiveness; and regular policy reviews ensure the governance framework evolves with changing organizational needs. So basically, what we are saying is that our governance policies should be up to date, and we have to make sure we are reviewing our policies on a regular basis so that we are compliant with the governance framework and policies as such.
Now we'll talk about future directions: where we go from here. So we are talking about edge computing and 5G integration. Edge-native testing validates application performance under ultra-low-latency conditions while ensuring compatibility across diverse edge computing environments. Like we have spoken about edge locations: what is an edge location? If I am in Australia, accessing some website from a mobile device, I will be hitting a nearby edge location where performance is good, right? Although that website is hosted in the US, through edge locations its performance is very fast.
Then we are talking about 5G network simulation, which enables testing of advanced mobile capabilities, including augmented reality and IoT integration. So here we are talking about 5G integration.
Then, sustainable testing practices. Carbon-aware test scheduling shifts resource-intensive operations to periods when renewable energy availability is highest, reducing environmental impact. Green software engineering principles guide platform development, ensuring environmental considerations influence architectural decisions. So basically, we have to make sure environmental and green engineering principles are implemented accordingly. We should always follow good practices.
Now we'll talk about implementation strategy. First is to build a strong foundation. Yes, we have to focus on infrastructure as code, comprehensive observability, and developer-centric design for building a strong foundation.
Next, enable self-service. We have to make sure every developer is self-sufficient. We need proper documentation, SDKs, intuitive APIs, and libraries, so that testing is integrated into their workflows; developers should integrate the same.
Then, implement governance. We should balance developer autonomy and organizational control through policy engines that enforce security, compliance, and resource management, which is very important here.
Then, evolve continuously. We have to learn and evolve continuously; we have to add whatever new data we are reading and new testing techniques. It's an evolving process, basically: data-driven iterative improvement, scaling platform capabilities as they mature, and adopting emerging technologies. These elements enable iterative improvement and scaling as platform capabilities mature, creating the foundation for sustainable growth and continuous innovation.
Now we'll talk about the strategic imperative. Platform-first mobile automation represents a fundamental shift in how enterprises approach testing infrastructure: by treating testing capabilities as products, yes, rather than services. So this mobile testing infrastructure is like a product. Okay? We should treat it altogether as a different product: mobile testing. It should not be treated as a service. Organizations achieve unprecedented scale, reliability, and developer productivity. The transformation extends beyond operational efficiency to strategic advantage. What will this enable? Faster time to market, enhanced competitive positioning, improved deployment confidence, and reduced operational overhead. Organizations that master these capabilities will define the future of mobile application development, setting new standards for quality, velocity, and scale in the digital economy.
So mobile is the future, right? We have smartphones, Android, iOS. So we have to make sure all organizations understand these capabilities, and it should be treated as a product, not as a service.
Oh, okay. So we come to the last slide. Thank you very much for listening to me and watching me. In case of any questions, as I told you in the beginning, I am available on LinkedIn. You can search for me by name, and we can connect there and discuss more over there. Thank you very much. Good day. Bye.