Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello, everybody.
Thank you for joining this session.
Thank you for investing one hour of your personal time to find out more about cloud, about AI, and how we should modernize, how we should prepare our cloud solutions for AI.
My name is Radu Vunvulea, and for the next 50-55 minutes we'll talk about AI, we'll talk about cloud, we'll talk about how we can modernize, how we can prepare our applications that are running on top of the cloud for AI: to ensure that the system can work, can provide data, can process the triggers and the events that are generated by the AI solution, and, in the end, how AI can be part of our cloud ecosystem without having a negative impact on our end-to-end solution.
Before going forward, I would like to introduce myself.
My name is Radu, Radu Vunvulea.
I was born, live, and work in a beautiful city called Cluj-Napoca, in the middle of Transylvania.
And in my spare time, I invest my time in communities, in learning, and in enjoying coffee.
I'm also a Microsoft Azure MVP and a Microsoft Regional Director.
If you want to contact me or to have a chat with me on this topic or on all other
topics, feel free to reach me on LinkedIn.
Why cloud is important for me: my journey in the cloud started in 2019, and from that moment in time I was involved in different projects on AWS and on Microsoft Azure.
You will see that most examples that I will provide in this session are Azure specific, but the same services have a one-to-one equivalent on AWS or on Google Cloud.
You will find the equivalent of every service that we are talking about today at all the public cloud vendors.
What is my mission?
What is my mission for today?
It is to discover together with you the cloud architecture of an AI system.
And I'm not covering, I'm not focusing on the AI models themselves, because in most of the cases we are using more or less off-the-shelf services.
We're just changing the model, playing with the models in the cloud, but hosting that service is already done.
The challenge is when we're doing the integration, when the AI component goes into our cloud, and how we need to ensure that these two can live and work together and bring value to the business.
It doesn't make sense to talk only about where AI was in 2024, where we'll be at the end of 2025, what is the current impact, what is the future.
We already know that we are at the beginning of the AI era.
I'd say that we don't even know what the real impact will be in five or ten years.
We just hear different things, not only from the software side, but also from the hardware side.
Autonomous cars are already here.
We already hear that half a million robots might use AI in a few years.
We don't know where we'll be.
We need to build systems that can work, can interact, and can communicate in a good way, in a scalable way, with AI services.
I think that now we're in a phase like we were with data analytics in 2020.
Why am I saying this?
Because I see an analogy with 2020, when the data layer of our systems was not fully ready for analytics.
Now, in 2025, I find us in the same situation.
The same situation, but not for data analytics: for AI, for the pressure that AI workloads are putting on existing systems, on systems that in many situations were migrated maybe in 2020, maybe in '21 or '22, in a lift-and-shift from on-prem to cloud, in a "let's push it, let's push it" way, thinking that maybe sometime in the future we'll have enough time to do modernization or to apply the best practices and the Well-Architected Frameworks that AWS and Azure provide.
I remember that a few years ago I was talking about serverless, about the computation flavors that you can have in the cloud.
And I was saying that we have 20-30 different flavors of containerization and serverless on AWS.
And now, this is just a snapshot, from the end of 2024, of a part of the AI services that Microsoft Azure is offering.
And this is only the beginning.
It's not only about training your own models anymore.
Using AI nowadays means knowing how to select maybe an off-the-shelf PaaS service, or a SaaS solution that a cloud vendor or a third party is providing, that solves your need, your problem.
You don't need to train a model to do text-to-speech or speech-to-text, or to do document and text extraction from a document.
If something already exists, why not consume it as a service?
There are two core services that are mostly used, especially when we talk about Azure.
We have this bunch of services, extraordinarily good.
And we have Azure OpenAI and Azure ML, each of which, of course, comes with a studio and so on.
Why are they relevant?
First of all, let's understand a little bit of the technical scope.
Azure OpenAI is for when you have a pre-trained model: you don't do any kind of customization, you just focus on taking the pre-trained model and you do the integration based on the API.
You don't need data scientists.
You don't need MLOps, things like that.
No training required.
You just take it off the shelf and build a code assistant, or content generation, a chatbot, and things like that.
They work perfectly.
I would say that in 90 percent of the cases, this is what you're looking for.
On the other side, you have Azure Machine Learning.
The teams that work with Azure OpenAI and the existing system cannot do too much here, because most of them might be developers, infrastructure people, security, testers, data people, and so on.
They are great.
They are awesome.
But when you are using Azure Machine Learning, you do the training, which means that you need a data scientist.
You are focusing only on the model itself.
It is a project inside the program.
A program might be a full end-to-end AI solution for a specific business stream.
One of the components is building all the ecosystem that needs to be around the model, and another team, another project, is building that model.
Why is this important?
Because cloud, we already know, is not only virtual machines.
And AI, to have a real impact on the business, is not only the model.
The moment you start to put an AI service or an ML model inside your solution, you also need to create a bunch of other services, which we'll talk about later today, to be able to support it: to expose the API, to manage the payload and the triggers that it will handle, access management, and so on.
This is what I want to do today.
Let's start step by step.
What are the limitations and challenges of AI in the cloud?
And I'm not referring here to the physical or technical challenges of AI; the real challenge is to do a full integration of an AI component in an existing cloud ecosystem while ensuring that you don't have a direct impact on the current business use cases, that you don't have downtime, and that you don't create bottlenecks.
And also, to be able to supply to that AI solution all the resources, all the flows that are required to respond to the AI needs.
Now, AI comes with a lot of challenges: data security, privacy, AI bias, ethical considerations, complexity, technical limitations, legal challenges, regulatory compliance, and, of course, how it will impact the work that we are doing today.
I want to highlight one or two of them, because I think they are very important.
Volume and sensitivity of data.
Sensitivity of data is not only about ensuring that the AI can crunch only what is allowed; it also refers to ensuring that the person using the AI service you are putting inside the system is able to see and access only the information that they are allowed to.
A very basic example: a database with salaries, the classical problem of HR and finance.
You need to ensure that people can ask, let's say, a GPT, and get financial information only about the people that they are managing.
Not about their managers, not about people from different departments.
In Excel, we have different ways to do this.
In a database, we have different ways.
We need to ensure that the same thing happens when building AI.
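The salary example above can be sketched as a simple row-level filter. This is a hypothetical illustration, not a real HR API: the `SalaryRecord` type, the field names, and the rule "a requester sees only their direct reports" are all assumptions for the example.

```python
# Hypothetical sketch: restrict salary records to an employee's direct reports.
# SalaryRecord and the manager-based rule are invented for illustration.
from dataclasses import dataclass

@dataclass
class SalaryRecord:
    employee: str
    manager: str
    salary: int

def visible_records(records, requester):
    """Return only the records belonging to people the requester manages."""
    return [r for r in records if r.manager == requester]

records = [
    SalaryRecord("ana", "maria", 5000),
    SalaryRecord("dan", "maria", 5200),
    SalaryRecord("maria", "ceo", 9000),
]

# maria sees her reports, but not her own manager's data
print([r.employee for r in visible_records(records, "maria")])  # ['ana', 'dan']
```

In a real system this filtering would happen in the data layer or in the retrieval step in front of the model, so the AI never even sees rows the requester is not allowed to access.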
Volumes are important.
When we say volume, we are referring to the fact that AI needs to crunch data.
It puts high pressure on your system.
You need to ensure that your data, your repositories, can manage that volume.
Staying with data, because this is the thing that I want to highlight: data governance and compliance.
A lot of the time, I would say, small to mid-sized companies didn't have to manage and create a catalog for their data, to understand what kind of data they are storing, what can be shared, with whom, and so on.
And there is a shock, I would say even a cultural shock, the moment they realize that they need to create a catalog of their data.
Why is it a shock?
Not because it's hard to do, but because of the cost and the complexity that come with that part.
A nice analogy that I really like regarding job displacement and the human workforce: at the beginning of the 1900s, we used to have people who were knocking on your window in the morning to make sure that people would wake up to go to the factories.
Think about it: AI might reinvent jobs and the workforce in the same way.
In the end, it will balance out, but this is a topic for another session, in a different course.
Next, together with you, I would like to highlight five key areas of modernization that need to be tackled, covered, and considered when we talk about putting AI inside existing systems.
There are more than five, but these five are the ones I want to cover.
The first one is scalability and performance bottlenecks.
Scalability of the workloads.
We need to have systems that can scale, that can provide to the AI layer all the resources that are required: GPU, for example, other types of computation, memory.
Performance bottlenecks are sometimes forgotten, because the bottlenecks don't always come from the AI itself.
A bottleneck can be generated because the AI is trying to do too many queries to an API endpoint.
Or it's generating too many triggers (too many events, if we have an event-driven approach) to some API endpoints.
And the system is not able to manage all of them, and we start to see bottlenecks that, in the end, might create a butterfly or domino effect.
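One common way to stop an AI layer from overwhelming a downstream API is to throttle its calls. Here is a minimal token-bucket sketch; the class, the capacity, and the refill rate are invented for illustration, not taken from any real library:

```python
# Minimal token-bucket sketch (illustrative, not a production rate limiter):
# the AI layer is allowed `capacity` burst calls; extra triggers are rejected
# instead of cascading into downstream bottlenecks.
import time

class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        # refill proportionally to the elapsed time, capped at capacity
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, refill_per_sec=0.001)
results = [bucket.allow() for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

Rejected calls can then be queued or retried with backoff, so a burst from the AI side degrades gracefully instead of knocking over the API.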
The second area is data silos and how you can govern data.
The governance part we already touched on a little bit: data catalogs, how you keep the data secure, what kind of data you're exposing, having the ability to control what you provide to different users and to different environments.
Because in a very sensitive industry, you might prefer the bot to respond in a specific way if the user is using a secure device that is part of your organization.
And if the same request is coming from a public endpoint, from the internet, from a non-secure device, you might have a different response: simplified, or with your IP protected.
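That context-aware behavior can be sketched as a small policy function. The `Request` type and the rule (full answers only on managed devices on the corporate network) are assumptions made up for this example:

```python
# Illustrative policy sketch: the same question gets a full answer on a
# managed corporate device and a simplified one from a public endpoint.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    managed_device: bool
    network: str  # "corporate" or "internet"

def response_detail(req: Request) -> str:
    if req.managed_device and req.network == "corporate":
        return "full"        # complete answer, may include internal details
    return "simplified"      # redacted answer, protects sensitive data / IP

print(response_detail(Request("ana", True, "corporate")))  # full
print(response_detail(Request("ana", False, "internet")))  # simplified
```

In practice this decision would sit in the API layer in front of the model, fed by signals like device compliance and network origin, rather than in the model itself.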
Data silos are another thing.
Data silos mean that you have your data, your repositories, in multiple places.
Because you have an organization with a lot of departments.
Because you might have 100 factories or 1,000 factories around the globe, and each factory has its own repository.
And if you want to run something globally, or across the whole organization, then you need to decide how you create a data lake that is ready for AI.
It's not impossible.
It doesn't always mean that you need to copy all the data in one place.
But you should avoid a model where the AI service has to access multiple repositories that are spread across different locations, and so on.
And in the end, it's the same way you find solutions for reporting dashboards and views that differ per level: at the production-line and factory level the granularity is bigger, because you have a factory manager directly looking at the data, while at the global level the data is more aggregated, because you don't want all the details there.
Similar principles should be applied to the solutions for the data that is consumed by AI.
And now we go to the third area.
What is the right balance between innovation and compliance?
How important is it, and what kind of borders do we define from the compliance and governance point of view, to still give enough freedom for innovation?
This is something that differs from organization to organization, but governance is required.
At the same time, AI gives you the freedom to search and combine information in a different way, and this is where innovation comes from.
Finding the right balance between these two is a challenge.
The fourth area is how you manage the cost of the AI and the full cloud ecosystem.
Where is the line between sacrificing performance, or accepting that you have latency, or that the AI system is not aware of everything, and keeping costs under control?
It's like using Application Insights: at the beginning, when you start to use it, you say, oh yeah, I will keep everything, everything logged, and after a few days or weeks you realize, oh my god, Application Insights costs me almost as much as the computation and the database.
Exactly the same trade-off applies for AI: performance in the context of cost.
And the last area is about deployment, about agility and efficiency.
You have the models, you have MLOps, and you need to ensure that you can deploy them easily in all environments, and that you can easily create the full ecosystem that you're looking for.
So what is the key?
Personally, and this is my personal opinion, the key to the success of an AI system that is running inside the cloud is not only AI.
An important part is application modernization: ensuring that you are using not necessarily state-of-the-art services and architecture, but the right cloud services, the right approach in the cloud, that allows you to have a scalable, efficient system that can work together with AI, keeping the cost under control, and delivering business value to the end customer without affecting other business streams or dimensions of your solution.
How can we achieve this?
From the five areas of modernization, we can derive that we are talking about scalability and performance.
Features like auto-scaling based on real internal metrics of the system, knowing exactly when you need to scale and by how much, what the upper limit is, and when you need to scale down, are very important.
If you have ever tried to find the right balance of a scaling system, when you should trigger the scaling, for how long, and so on, you already know that it's not easy.
Even nowadays, in many situations, scaling decisions are taken manually: you have alerts, somebody from support is looking at the alerts and decides what needs to be done.
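The core of an automated, metric-driven scaling decision fits in a few lines. This is a hedged sketch, not any cloud provider's autoscaler: the CPU thresholds, the doubling/halving policy, and the bounds are all invented for illustration.

```python
# Sketch of a metric-driven scaling decision: scale out when load is high,
# scale in when low, always within [min_r, max_r] bounds.
# Thresholds and the metric are made up for the example.
def desired_replicas(current, cpu_percent, min_r=2, max_r=20):
    if cpu_percent > 75:            # scale out, but respect the upper limit
        return min(max_r, current * 2)
    if cpu_percent < 25:            # scale in, but never below the floor
        return max(min_r, current // 2)
    return current                  # inside the comfort band: do nothing

print(desired_replicas(4, 90))    # 8
print(desired_replicas(16, 90))   # 20 (capped at the upper limit)
print(desired_replicas(4, 10))    # 2
```

The hard part in real systems is not this function but choosing the metric, the thresholds, and a cooldown so the system doesn't oscillate, which is exactly why many teams still do it manually.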
The second component is data integration and management.
We've talked about data silos.
We've talked about the need to bring data together.
On the market, there are a lot of data lake solutions and warehouse systems.
What is important before bringing in the AI is to ensure that not only do you bring everything under the same umbrella, but also that the data, structured and unstructured, is exposed in such a way that the AI service, the AI model, is able to understand it and crunch it.
The next component is about flexibility and agility.
You need to provide to the team the ability to deliver, to release the AI service, the ML model, anything related to the system that they are building, in the different specific environments (dev environments, testing environments, pre-production environments) as easily as possible.
If you have, for example, the capability to do 10-15 releases in the development environment per day, the same agility is required for the ML models or for the AI services, because the team needs the ability to test, to see how it behaves in an environment, to get the output, and then to do any kind of modifications or changes.
Easy to say, much harder to implement.
I remember at the beginning of 2024, I think in the same period of time, February or March, I was saying that we didn't yet have the right tools, and that the level of maturity of the tools was not enough to be able to work with AI in the cloud.
Now, in 2025, I would say that the tools we have available on the market are pretty mature.
Sure, they're not perfect, but we are starting to have a development ecosystem that allows us to design, build, and operate AI services and models in production and non-production.
When we talk about these tools, don't think about tools that are related to training the models.
No.
I want to talk about the tools and the capability to secure, to do a data catalog, to do compliance checks of AI services and of what is exposed through them.
We have nowadays more and more AI solutions that are capable of integrating, capable of communicating with, let's call them, enterprise systems.
I'm referring to Atlassian, Jira, Confluence.
I'm referring to SharePoint, to OneDrive.
Because in the end, you don't want to write your own custom integration.
You just want, more or less, to do a drag and drop.
Just as you consume an API regarding, let's say, a train schedule, you want an API, already available from Atlassian or from Microsoft, that allows you to crunch and access information that is available on SharePoint based on your role, based on your grade, based on the group or office that you're part of, and create a response specific to it.
There are many others that we will touch on, more or less, in the next half an hour.
Security and compliance, where I would like to highlight policies.
Policies, from my perspective, are one of the most important pieces of security, and we'll come back to this in a few minutes; they are very important.
Many people realize the real cost and the real impact of integrating an AI solution only after it goes to production.
The running cost?
Double.
And at that moment in time, there might not be too many options left to take action.
Interoperability is exactly what we were talking about with the examples of SharePoint and Atlassian.
And the last thing is about real-time analytics, basically the operationalization of AI.
We will also come back to this topic in a few minutes.
Now, what are the key services of AI modernization?
I prepared some services.
As I said, I'm giving examples from Azure, but you will find the equivalent services on AWS and on Google Cloud.
These services are crucial, and indirectly we touched on them in the previous slides.
Why do I think they are crucial?
Because, at least with what we have on the market in 2024 and at the beginning of 2025, the products across vendors are more or less the same or similar.
Yeah, most of the products are similar; it's just different implementations.
One of the most important ones is Azure Kubernetes Service, or any other flavor of containerization-like approach where you have your workload.
And why?
One of the biggest problems is how you manage the workload, because inefficiently managed workloads can create issues regarding scaling, regarding latency, and regarding how you manage your resources.
In the end you need a solution where, when you create a cluster of 200 or 500 nodes, not all the nodes are the same.
You have nodes with GPU capability, with different GPU power, others optimized for CPU, others for memory-intensive workloads.
And in the whole ecosystem that you're building, you have the capability to specify that this workload goes on GPU, that workload goes on specific AMD or Intel processors, that one goes on ARM, and so on.
And this gives you the flexibility of full orchestration, which not only improves the performance of the system, but also lets you run it more smoothly and more cost-efficiently.
Because in the end, you have a system that's scalable; when you deploy resources, even if they're pretty complex, it's easier to stick them on the right node types, and you don't end up in a situation where you use expensive and very powerful nodes for simple and basic workloads.
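The placement idea above can be sketched in a few lines, in the spirit of Kubernetes node selectors. The pool names, specs, and the "prefer the smallest non-GPU pool" preference are all assumptions made up for this illustration:

```python
# Simplified placement sketch: each workload declares what it needs and
# is matched to a node pool. Pool names and specs are invented.
NODE_POOLS = {
    "gpu-pool":    {"gpu": True,  "memory_gb": 64},
    "memory-pool": {"gpu": False, "memory_gb": 256},
    "cheap-pool":  {"gpu": False, "memory_gb": 16},
}

def place(workload):
    """Pick a pool that satisfies the workload's requirements."""
    candidates = [
        name for name, spec in NODE_POOLS.items()
        if (not workload["needs_gpu"] or spec["gpu"])
        and spec["memory_gb"] >= workload["memory_gb"]
    ]
    if not candidates:
        return None
    # avoid burning an expensive GPU node on a basic workload,
    # and prefer the smallest pool that still fits
    non_gpu = [c for c in candidates if not NODE_POOLS[c]["gpu"]]
    pool = non_gpu or candidates
    return min(pool, key=lambda c: NODE_POOLS[c]["memory_gb"])

print(place({"needs_gpu": True, "memory_gb": 32}))   # gpu-pool
print(place({"needs_gpu": False, "memory_gb": 8}))   # cheap-pool
```

In a real cluster this logic lives in the scheduler, driven by node selectors, taints, and tolerations, but the principle is the same: declare requirements, let the orchestrator match them to the right (and cheapest adequate) node type.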
The second thing is serverless.
When I say serverless, for example Azure Functions, I'm not thinking about running your AI stuff in serverless; you could do that, no problem.
One of the biggest challenges that a complex AI system can generate is all the events, all the triggers that are generated.
You might say, oh, you just go to an API, but you have to figure out how you would manage them.
You can go very easily to an event-driven approach, where you have the AI pushing events, and behind the events you have some functions, some serverless approach that is triggered the moment there are new events.
And then you can scale up or down based on your needs.
It's up to you whether you run your serverless in your own cluster, go with a more serverless approach where you pay per consumption, or use a combination of the two.
In the end, you need an event-driven approach plus a serverless approach, to ensure that you can dynamically scale and manage that workload.
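The pattern above can be simulated in-process with a queue and per-event handlers. In the cloud this would be something like an event service triggering Azure Functions; here everything (event names, payloads, the registry) is invented for the sketch:

```python
# Tiny event-driven sketch: the AI layer pushes events onto a queue and
# handlers are invoked per event type, simulating serverless triggers.
from collections import deque

handlers = {}

def on(event_type):
    """Register a handler for one event type."""
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

@on("recommendation.requested")
def recommend(payload):
    return f"recommendations for {payload['user']}"

@on("door.close")
def close_door(payload):
    return f"closing door {payload['door_id']}"

def drain(queue):
    """Process all queued events; unknown types are skipped."""
    results = []
    while queue:
        event_type, payload = queue.popleft()
        if event_type in handlers:
            results.append(handlers[event_type](payload))
    return results

q = deque([
    ("recommendation.requested", {"user": "ana"}),
    ("door.close", {"door_id": 7}),
])
print(drain(q))  # ['recommendations for ana', 'closing door 7']
```

The decoupling is the point: the AI side only emits events, and the consumption side can scale the number of handler instances up or down independently.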
The third component that I see as relevant is the ability to ensure, as much as possible, that your AI model and your AI service can efficiently access data as close as possible to the location where they are running, without needing to do a lot of requests across multiple locations.
This is where a data lake approach is the right way.
There are a lot of tools on the market.
One of the services that I really enjoy is Microsoft Fabric.
And why?
Because it brings under the same umbrella the full journey of a piece of data: from the moment in time when it's ingested, through the different transformations that are applied to it, how it's stored, until the moment when it's consumed, who accesses it and, based on a catalog, what is accessed, and in the end how AI is using it.
Microsoft Fabric, or something similar to it, gives you a seamless approach regarding data access and data transformation, and a catalog and understanding of what data is stored and who should access different parts of it.
We know already that AI is about using data.
Three or four years ago, everybody was saying data is gold.
And now we have AI that, on top of data, can extract the value of the content.
And we want to expose it.
Exposing is not done directly from the AI layer.
Exposing the data requires classical components, like an API layer, like a load balancer, usually a layer 4 balancer if you go minimal, and more nicely, maybe an Azure Application Gateway.
Because, first of all, you want to protect yourself from malicious traffic.
You need to ensure that you can balance the traffic, that you avoid downtime, and that the performance is acceptable.
Having a load balancer like Azure Application Gateway gives you the ability to manage the latency and balance the traffic, and it also protects you from different types of attacks.
Here is how having, for example, an Application Gateway helps you.
Let's say that you have the AI service, you have an API layer on top of that, and you have an Application Gateway in front.
When a web-based attack happens, all that load hits the Application Gateway and the load balancer, meaning that the computation available for your business or your AI model remains available for your own use.
It will not be consumed by the attack; the gateway protects you from this.
Now we are talking about real time.
We are talking about streaming events, about triggering different activities.
Azure Event Hub, or any other event-based solution, is important from two perspectives.
One is the ability to allow an AI solution to trigger and to scale up the number of actions that it wants to run: these might be actions like looking in a different location, or actions like triggering, for example, the closing of a door or the securing of a house.
On the other hand, when we talk about AI, everybody is talking about real time, near real time.
If you have a system where data is changing all the time, then you need the capability to ensure that you receive each change as an update.
To be able to implement a real-time data ingestion system and receive a stream of updates, you would need to go with an event streaming approach and use a service like Event Hub.
Because you're not only scaling; your solution can also be aware, in near real time, of what is happening around it.
The last service that I would like to highlight is Purview.
It's about the catalog, the data catalog.
Because you need to have the governance layer.
You need to ensure that you know exactly what kind of data you have and who can access it.
If you need to be compliant with different regulations, you need to understand all of this.
This can be achieved only through a strong data governance layer, through a data catalog.
Azure Purview goes hand in hand with the different data repositories that Microsoft has, and gives you the ability to do two things: the catalog and the governance of your data.
Meaning that you can control and be sure that the AI service, and the people that are using your AI, are accessing only the data that they are allowed to, in the right way.
And also, you can track the level of data access that each flow or each person has.
Now, again, in the last few minutes we already talked about AI increasing workloads a lot; automatically, the cost can put pressure on other systems.
It's predictable, but how can you track that?
You need to understand exactly the current spending of your cloud infrastructure versus the business value that it brings.
If the business value is high enough, you might not care that your cloud infrastructure cost doubled, if the business value is four times more than before.
That is a trade-off you would accept.
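The trade-off just described can be made concrete with a back-of-the-envelope calculation. The numbers here are invented purely for illustration:

```python
# Back-of-the-envelope sketch of cost versus business value: cloud spend
# may double, but if business value grows faster, the ratio improves.
def value_per_cost(monthly_cost, monthly_value):
    """How much business value each unit of cloud spend returns."""
    return monthly_value / monthly_cost

before = value_per_cost(monthly_cost=10_000, monthly_value=40_000)
after = value_per_cost(monthly_cost=20_000, monthly_value=160_000)

# cost doubled, but each dollar spent now returns twice as much value
print(before, after)  # 4.0 8.0
```

Tracking this ratio over time, per workload, is what turns "the AI doubled our bill" from a panic into an informed business decision.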
How can you implement the ability to understand, track, and optimize the cost?
These are FinOps principles, where you have three important components: governance, optimization, and strategic alignment.
At this phase, I was not able to identify a single tool or service that covers everything.
It's usually a combination.
Even the ones that are cloud agnostic, even if you have one dashboard, behind the scenes they're using one or multiple services from each vendor, or multiple ways of collecting monitoring data, and so on.
If we split between governance, cost optimization, and strategic alignment: when we talk about governance, we talk about management, meaning that you gain visibility; you understand the costs that you incur.
When we talk about cost optimization in the cloud space, we talk about different ways to optimize the spending, like Azure Reservations, Spot instances, different licensing options, but we are also talking about services like Azure Advisor, or similar to it, that can advise you on different actions you can take to reduce your costs.
And the third component is strategic alignment, where tools like Azure Automation can give you the ability to have different automated trigger actions.
This can help you take control of your spending.
Now, we have talked a lot about AI and how different solutions should look.
I really like this component diagram of Azure services that is provided by Microsoft.
Why do I really like it?
Because if you take a look and zoom into this diagram, you will see that only a couple of the services are related to OpenAI.
Nevertheless, around them there are many other services that need to be configured and need to run, to be able to serve the AI capability that they generate.
It's not just putting in the AI service and saying, okay, I'm done.
No.
Exactly as when you build a cloud solution, you don't just put up a virtual machine, put the workload on this virtual machine, and say, I'm done.
No, you need to consider all the other things that are around it.
Azure API Management, for example, to be able to ensure that you manage the API you are exposing and you do versioning of the APIs that expose your solution.
You have web apps or something else when you need to put some logic around what you're exposing from the AI service.
You need to do access management, to ensure that people can access only the information and the content that they are allowed to through your AI system.
You have secrets, so you need a Key Vault or a different flavor of vault.
You need connectivity, because that solution doesn't run by itself.
It is integrated in a full-stack solution, meaning that you would need a secure connection with your subnet.
If you go with a hub-spoke topology, then you might have a spoke, or you might define something like a sub-landing zone, specific to how you do the AI integration for specific applications or domains, and so on.
And then things start to get more and more complex.
Building a cloud solution is not only about building a virtual machine.
The same thing is applicable for AI: running an AI solution inside the cloud is not only about taking an AI service and running it.
Now, let's go through an example, a very basic one, because I want to highlight the process that we should have in mind when we prepare our current application for AI.
Let's imagine a traditional e-commerce application, a classical monolithic one, that was hosted on-prem and was migrated with success to Azure with a lift and shift.
Everybody is happy.
You bring business to the customer, consumption is okay, nobody's complaining.
That's great.
To be able to connect to AI services, you need to break down the application into microservices, to be able to scale and to have better agility.
You need to ensure that the data storage and the processing capabilities can support AI workloads, can scale, and will not affect the e-commerce platform just because you started to use AI behind the scenes.
You need to have implemented all the DevOps practices that are needed to be able to deploy the AI services, the models, and so on, easily.
And then you can do, for example, recommendations for the users based on AI capabilities.
Now, to be able to obtain all of this, you are starting a modernization journey.
If you take a look, for example, at the cloud adoption and cloud modernization frameworks that the cloud vendors are providing, you'll find that it's very similar.
It is the same route that gives you scalability, ensures data governance, and ensures that you have a data lake for analytics and reporting.
The first part is the microservices architecture: you're breaking down the monolith.
Then you do the modernization of your data.
You might go with Azure Data Factory, with Fabric, with Databricks, or with Data Lake Storage.
It's up to you, but it's important to build the data modernization layer, to ensure that you have, more or less, a data lake, that you have governance on top of your data, and that you have a catalog.
These are crucial.
Then you need to streamline the development, testing, and deployment through DevOps, and also ensure that you're using ARM templates or Terraform to be able to build the infrastructure that is required.
The fourth step: you're in the phase when you can start to integrate your AI service.
You might go with OpenAI, with machine learning, with Azure Databricks, but it's not only that.
Here you start to use a Key Vault or something similar, you put in maybe a Traffic Manager; if you go back to the diagram from before, you are starting to build that space.
Up until that moment, in all the previous phases, you just created the ecosystem that can scale and can run the solution with success.
And then, you need to think about the operational part, about monitoring; you need to take into account how you can have effective monitoring, and how you implement all the practices to be able to track and improve performance and reliability.
This is a simplified journey, and there are three crucial steps.
One is the modernization.
The second is building the AI component with all the other services that are required around it.
And the third one is the operational part, not only of your solution, which should already exist, but also of your AI part.
Now, if we take a step back and do an overview of today's session, I would say that there are nine components that we talked about.
We need a microservices architecture.
We need to be able to have dynamic scaling, meaning serverless computing.
We need a data lake, a data catalog, data modernization.
We need to implement DevOps principles.
We need to ensure that we also have an event-driven approach for data processing.
For AI models, especially the pre-trained ones, we can find a lot on the market as services; these might be a good option.
Security and compliance are important, and in the end, we need the right observability layer.
My final thoughts for this session: AI has been part of our lives for a few years now; it's not something that started just two years ago.
We should expect that AI services will be integrated into, and will be part of, most of our software and digital services.
Think about Copilot: it's in our PowerPoint, in our Word.
There are other services too; AI is all around us.
And to be able to use it, to be able to integrate it, we need to ensure that we do the optimization exercise of the cloud system.
We need to do the right modernization to be able to handle the workload, the payload that AI generates.
Best practices that we were ignoring in the cloud space until now become mandatory nowadays.
An event-driven approach, data-driven design, and real-time analytics are part of the modernization path that we need to take, to ensure that we are building a cloud solution that can be used by, and communicate with, AI services.
Thank you.
Thank you for joining me.
If you have questions, also, you can reach me later on LinkedIn.
And thank you for the team that organized this event.
And thank you to all of you for joining.
Thank you and have a great day.