Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone.
Today I'm excited to share my thoughts on migrating monolithic SaaS to serverless
achieving scalability, cost reduction, and development agility on AWS.
Hi, I'm Stricker Kampala, a senior software engineer.
With over 13 years of experience, I've dedicated my career to crafting innovative
and scalable software solutions.
If you have any questions about this presentation, or anything related to migrating monoliths to serverless in general, please reach out to me at reach ryker compala@gmail.com.
So let's start.
Why choose Golang for serverless architecture?
Golang aligns exceptionally well with the demands of serverless computing.
High performance: Golang's compiled nature results in rapid execution speeds and minimal latency, which is crucial in serverless environments where costs are directly tied to execution time. Benchmarking consistently shows Golang functions outperforming interpreted languages like Python in AWS Lambda scenarios, often exhibiting significantly lower latency. This translates to reduced execution duration and consequently lower cost.
Efficient memory management: Go's optimized garbage collector is designed for concurrency and low latency, essential for scalable serverless applications. Its efficient memory footprint minimizes resource consumption, further contributing to cost effectiveness in serverless functions, where memory allocation impacts pricing. Golang's efficiency provides a tangible advantage.
Concurrent programming: Golang's goroutines provide a lightweight and efficient way to handle concurrent operations. This allows Lambda functions to process numerous parallel requests without the complexities of traditional threading. This is especially beneficial for applications dealing with high volumes of concurrent events. For instance, in event-driven architectures, Golang's concurrency model enables rapid and efficient processing of incoming events.
Benchmarking has also shown that, due to the efficiency of goroutines, Golang is able to handle a larger number of concurrent requests than other languages such as Python, given the same memory constraints.
In essence, Golang's performance, efficiency, and concurrency capabilities make it a strong contender for building cost-effective and highly scalable serverless applications.
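To make the goroutine point concrete, here is a minimal sketch (not from the talk) of fanning work out to goroutines the way a Go Lambda function might process many events in parallel. The `process` function is a hypothetical stand-in for real per-event work:

```go
package main

import (
	"fmt"
	"sync"
)

// process is a hypothetical unit of work, e.g. handling one incoming event.
func process(id int) string {
	return fmt.Sprintf("event-%d handled", id)
}

// handleConcurrently fans n events out to goroutines and collects results.
// Goroutines start with only a few KB of stack, so thousands can run at once
// without the overhead of OS threads.
func handleConcurrently(n int) []string {
	results := make([]string, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i] = process(i) // each index is written by exactly one goroutine
		}(i)
	}
	wg.Wait() // block until every goroutine has finished
	return results
}

func main() {
	out := handleConcurrently(4)
	fmt.Println(len(out), out[0])
}
```

Each goroutine writes to its own slice index, so no mutex is needed; for shared state you would add synchronization.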
Now, combining Golang with AWS: that's a powerful serverless synergy.
Lambda function support and reduced cold starts: AWS Lambda's native support for Golang allows developers to build highly scalable and cost-effective serverless applications. Golang's compiled nature and small binary size translate to significantly reduced cold start times compared to interpreted languages like Python or Node.js. In real-world benchmarks, Golang Lambda functions often exhibit cold start durations in the tens of milliseconds, minimizing latency and improving user experience, especially for latency-sensitive applications.
For example, in a test using AWS Lambda, Golang cold start times were measured at 10 to 20 milliseconds, while Python cold start times were measured at 100 to 200 milliseconds when performing the same task.
Cost efficiency through optimized footprint: Golang's compiled binaries are notably smaller than those of interpreted languages. This results in faster deployment times, reducing the time spent uploading function packages to S3. Moreover, the reduced binary size translates to lower storage costs on S3. Combined with Golang's optimized execution speed, which minimizes Lambda runtime, this leads to significant cost savings in serverless environments.
Seamless integration and performance: Golang integrates seamlessly with core AWS services like DynamoDB, S3, API Gateway, and CloudWatch. This native compatibility ensures scalability without introducing performance bottlenecks. Golang's concurrency model and low-latency execution are particularly beneficial when interacting with these services, enabling efficient data processing and real-time event handling. For instance, a Golang Lambda function triggered by an S3 event can process and store data in DynamoDB with minimal overhead and high throughput. Benchmarking has also shown that Golang Lambda functions can achieve significantly higher throughput when interacting with DynamoDB, compared to Python or Node.js functions, due to the efficiency of Golang's concurrency model and native integration with the AWS SDK.
Now let's talk about how Golang compares with other languages.
Performance and efficiency: overall, Golang consistently outperforms Java, Python, and Node.js in AWS environments. This is primarily due to its compiled nature, resulting in faster execution speeds and significantly reduced cold start times. Benchmarks frequently demonstrate that Golang functions can achieve execution times several times faster than their counterparts in interpreted languages. This translates to lower latency, improved user experience, and reduced cost in serverless applications.
Java: Java offers robustness and a rich ecosystem, but it suffers from longer cold start times in serverless environments. The JVM initialization overhead introduces significant latency, especially for infrequently invoked functions. This increased latency directly impacts cost due to longer execution times, and can lead to poor user experiences in latency-sensitive applications.
Node.js: Node.js is popular for serverless due to its JavaScript familiarity and event-driven architecture. However, it underperforms Golang in CPU-intensive tasks and concurrent processing. Its single-threaded nature can become a bottleneck when handling numerous parallel requests, leading to increased latency and potential performance degradation. Benchmarks indicate that Golang can handle significantly more concurrent requests than Node.js under the same memory constraints, particularly in scenarios involving heavy computation or input/output operations.
Python: Python is excellent for rapid development and offers a vast library ecosystem, making it a popular choice for serverless. However, Golang's compiled nature delivers faster execution and lower resource consumption, enhancing production scalability in high-throughput or latency-critical applications, where the performance advantage becomes more crucial. While Python excels in ease of development and speed of iteration, Golang's efficiency in memory usage and execution time makes it the more suitable choice for production environments where scalability and cost effectiveness are paramount. Compared to Golang, Python often consumes more memory and has higher execution times for the same task, especially when dealing with large datasets or complex algorithms.
By incorporating these data points and contextual insights, the comparison between Golang and other languages becomes more compelling and informative, highlighting the specific advantages of Golang in serverless architectures.
Now, why migrate to serverless architecture?
We've seen cost reductions of up to 80%. Serverless's pay-per-execution model is a game changer. Unlike traditional servers that incur costs even when idle, serverless charges only for the compute time consumed. This is particularly beneficial for applications with fluctuating workloads or those experiencing infrequent usage. The 80% figure suggests significant potential for cost savings, but the actual percentage will vary depending on the specific application and usage patterns. This cost reduction allows companies to invest more resources into development and innovation rather than infrastructure maintenance.
Faster deployment, up to 70%.
Serverless architectures enable faster deployment cycles through streamlined pipelines. Developers can deploy individual functions or microservices independently, reducing the risk of impacting the entire application. The 70% figure indicates a notable improvement in deployment speed, which is crucial for staying competitive in today's fast-paced market. This agility allows for rapid iteration and faster time to market for features and updates.
Improved scalability, up to 60%.
Serverless platforms automatically scale resources based on demand, eliminating the need for manual provisioning or capacity planning. This ensures that applications can handle sudden traffic spikes without performance degradation. The 60% figure highlights the significant benefit of serverless in enabling applications to handle unpredictable loads with ease. This auto-scaling capability ensures a consistent user experience, even during peak periods.
Operational gains of over 50%: serverless platforms handle infrastructure management, including server maintenance, patching, and scaling. This frees up development teams to focus more on building and improving applications rather than managing infrastructure. The 50% figure represents the reduction in operational overhead, allowing development teams to concentrate on innovation and core business objectives. This reduction in maintenance allows for quicker updates and less downtime.
Now let's take a look at the core AWS serverless technologies, like AWS Lambda.
Lambda is the compute engine of serverless. It allows us to run code without provisioning or managing servers. Lambda functions are triggered by events from other AWS services or external sources, such as API Gateway, S3, DynamoDB, and CloudWatch.
There are a large number of benefits. No server management: it frees up developers from infrastructure tasks. Automatic scaling: it scales automatically based on demand. Pay-per-execution model: you're charged only for the compute time you consume, which is cost effective even for intermittent workloads.
Now, DynamoDB is a fully managed NoSQL database, and it provides consistent single-digit millisecond latency at any scale. It automatically scales to handle virtually unlimited data and traffic, and supports both document and key-value data models. It offers built-in fault tolerance and high availability. It is ideal for applications requiring high throughput, low latency, and flexible data models.
Now let's take a look at API Gateway. Its core function is to act as the front door for applications to access backend services, including Lambda functions and DynamoDB.
The key features of API Gateway: API management, which simplifies the creation, publishing, maintenance, monitoring, and security of APIs. Protocol support: it supports RESTful and WebSocket APIs. It provides built-in authorization, authentication, and API key management. Traffic management: it offers throttling, caching, and request transformation capabilities. As for use cases, it enables building scalable and secure APIs for web, mobile, and IoT applications.
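A minimal sketch (not from the talk) of a Lambda function behind API Gateway's proxy integration looks like this. The `Request` and `Response` structs here only mirror the shape of the real `events.APIGatewayProxyRequest`/`Response` types from `github.com/aws/aws-lambda-go`, which is what you would use in production, registering the handler with `lambda.Start`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Request mirrors the fields of API Gateway's proxy integration request.
type Request struct {
	HTTPMethod string
	Path       string
	Body       string
}

// Response mirrors the proxy integration response API Gateway expects.
type Response struct {
	StatusCode int
	Body       string
}

// HandleRequest serves one proxied HTTP request in a single Lambda function.
func HandleRequest(req Request) (Response, error) {
	if req.HTTPMethod != "GET" {
		return Response{StatusCode: 405, Body: "method not allowed"}, nil
	}
	body, err := json.Marshal(map[string]string{"path": req.Path, "status": "ok"})
	if err != nil {
		return Response{StatusCode: 500}, err
	}
	return Response{StatusCode: 200, Body: string(body)}, nil
}

func main() {
	resp, _ := HandleRequest(Request{HTTPMethod: "GET", Path: "/orders"})
	fmt.Println(resp.StatusCode, resp.Body)
}
```

Because the handler is a plain function over plain values, it can be unit-tested without any AWS infrastructure.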
Now let's take a look at enhancing operational efficiency with AWS S3 and CloudWatch.
When we dive deep into AWS S3: S3 is centralized storage. S3 acts as a central repository for various types of data, including application data, backups, logs, and images. This simplifies data management and ensures data consistency.
Cost effectiveness: S3's pay-as-you-go pricing model makes it a very cost-effective solution for storing large volumes of data, especially for applications with fluctuating storage needs.
Integration with Lambda: seamless integration with AWS Lambda enables developers to perform data operations directly from their Lambda functions without managing the underlying infrastructure. For example, Lambda functions can be triggered by S3 events such as file uploads or deletions.
The extreme durability of S3 (99.999999999%, that is, eleven nines) ensures data is protected against loss, making it suitable for critical applications.
Now, AWS CloudWatch. CloudWatch is mainly used for logging, metric collection, and observability. CloudWatch provides real-time monitoring and insights into the performance of serverless applications. This allows developers to identify and resolve issues quickly.
Log and metric collection: CloudWatch collects logs and metrics from various AWS services, including AWS Lambda, DynamoDB, and EC2. This provides a comprehensive view of the application's health and performance.
Customizable dashboards: CloudWatch dashboards enable developers to create custom visualizations of their application metrics. This helps in identifying trends, detecting anomalies, and optimizing performance.
Proactive monitoring: CloudWatch allows you to set up alarms and notifications to alert you to potential issues before they impact users.
Now let's talk about migration strategy, starting with decomposition.
Identifying boundaries with domain-driven design: this step emphasizes the importance of understanding the business domain and identifying the logical boundaries between different parts of the application. Domain-driven design helps in creating a clear map of the business capabilities and aligning them with minimal service boundaries.
Minimal coupling: starting with components that have minimal dependencies on other parts of the system allows for easier extraction and reduces the risk of unintended consequences during the migration.
Prioritization: this step highlights the need for strategic planning, prioritizing components for migration based on their complexity and impact.
Now, the second step is extracting the microservices, which mainly involves refactoring: breaking down the monolithic application into smaller, independent microservices. Each microservice should have its own dedicated data store to ensure data isolation and independence.
API Gateway: using API Gateway to create versioned APIs for inter-service communication provides a standardized and controlled way for microservices to interact with each other. This also enables easier versioning and management of APIs.
Independent deployment: microservices can be deployed independently, allowing for faster development and deployment cycles.
Now, the third step is implementing serverless functions.
Lambda functions: this step involves converting the extracted microservices into Lambda functions. Each Lambda function should be designed with a single responsibility, aligning with a specific business operation.
Triggers and optimization: Lambda functions should be triggered by appropriate events, using API Gateway events, S3 events, DynamoDB events, or even EventBridge events. Optimizing execution by minimizing dependencies and reducing execution time is crucial for cost effectiveness. Adhering to the single responsibility principle ensures that each Lambda function is focused and manageable.
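As an illustration of the single responsibility principle (my example, not the speaker's), a payment function should authorize a payment and nothing else; inventory updates and notifications belong in separate functions triggered by their own events. The event type and field names here are hypothetical:

```go
package main

import (
	"errors"
	"fmt"
)

// PaymentEvent is a hypothetical event carrying one business operation.
type PaymentEvent struct {
	OrderID string
	Amount  int // amount in cents
}

// HandlePayment does exactly one thing: validate and authorize a payment.
// Keeping the function this narrow makes it small to deploy, fast to cold
// start, and easy to reason about and test.
func HandlePayment(e PaymentEvent) (string, error) {
	if e.OrderID == "" || e.Amount <= 0 {
		return "", errors.New("invalid payment event")
	}
	return fmt.Sprintf("authorized %d cents for order %s", e.Amount, e.OrderID), nil
}

func main() {
	msg, _ := HandlePayment(PaymentEvent{OrderID: "o-1", Amount: 250})
	fmt.Println(msg)
}
```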
Now, when we move to transitioning the traffic gradually, we can use the strangler pattern. This pattern involves gradually replacing functionality in the monolithic application with new microservices or Lambda functions.
We can use the concept of feature flags, which allows enabling or disabling new functionality without deploying new code. This provides control over the rollout process and allows for easy rollbacks.
Canary deployments: canary deployments involve routing a small percentage of traffic to the new serverless components. This allows for testing in a production-like environment and identifying any issues before a full rollout.
Monitoring and metrics: closely monitoring metrics during the transition is essential for detecting issues and ensuring a smooth migration; quick rollbacks should be enabled if necessary.
Now let's dive into event-driven architecture design.
This involves event production: events are generated when a service or component experiences a state change. This could be anything from a user interaction to a database update. Services publish these events to an event bus or message queue; this step decouples the event producer from the event consumer.
Event processing: asynchronous processing. Lambda functions are triggered by the events and process them asynchronously. This means the event producer doesn't have to wait for the event to be processed. Lambda functions are ideal for event-driven architectures because they're stateless and can scale automatically based on demand.
State transformations: the system state is updated based on the data in the event. This could involve updating databases, caching systems, or other services.
Data consistency: ensuring data consistency across different services is crucial in event-driven architectures.
Now, the fourth step is cascading events: triggering new events. The state changes resulting from event processing can trigger new events, creating a chain reaction. This allows complex workflows and business processes to be implemented. Managing the flow of events and ensuring they're processed in the correct order is essential for maintaining system integrity.
Event-driven architecture reduces service coupling in general, enabling independent scaling and autonomous deployment. This approach aligns perfectly with Lambda's stateless execution model and pay-per-use pricing.
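The produce, process, transform, cascade flow above can be simulated in a few lines of Go using a channel as the "event bus". This is a local sketch of my own, not AWS code; in a real system the bus would be SQS, SNS, or EventBridge, and the consumer would be a Lambda function. The event names are hypothetical:

```go
package main

import "fmt"

// Event models a state change published to an event bus.
type Event struct {
	Type    string
	Payload string
}

// runPipeline publishes events to a buffered channel (the producer never
// waits for processing), consumes them to update state, and emits a
// cascading follow-up event when an order is placed.
func runPipeline(in []Event) map[string]string {
	bus := make(chan Event, len(in)*2) // buffered: producer doesn't block
	state := map[string]string{}

	for _, e := range in {
		bus <- e // event production
	}
	close(bus)

	var followups []Event
	for e := range bus {
		state[e.Type] = e.Payload // state transformation
		if e.Type == "OrderPlaced" {
			// cascading event: one state change triggers the next step
			followups = append(followups, Event{"InvoiceRequested", e.Payload})
		}
	}
	for _, f := range followups {
		state[f.Type] = f.Payload
	}
	return state
}

func main() {
	s := runPipeline([]Event{{"OrderPlaced", "order-7"}})
	fmt.Println(s["InvoiceRequested"])
}
```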
Now let's take a look at the database migration considerations.
MySQL or PostgreSQL to Aurora Serverless: Aurora Serverless offers auto-scaling capabilities and is compatible with MySQL and PostgreSQL, making it a relatively straightforward migration path for these databases. It is suitable for applications that require relational database functionality with automatic scaling.
Oracle, PostgreSQL, or SQL Server to DynamoDB: in combination with Aurora, this migration path offers cost savings and improved performance. DynamoDB is used for high-throughput, low-latency data access, while Aurora can be used for more complex relational queries. It is ideal for applications that require high performance and cost optimization, but it may require a hybrid approach.
MongoDB to DocumentDB or DynamoDB: both DocumentDB and DynamoDB offer managed services and scalability, simplifying database management and allowing for auto-scaling. These are suitable for applications that require a NoSQL database with high scalability and minimal operational overhead.
Redis or Memcached to ElastiCache or DAX: ElastiCache and DAX provide a caching layer, reducing latency and improving performance, ideal for applications that require fast data access and caching capabilities.
Database migration is the most crucial challenge in serverless transitions. When converting monolithic databases to serverless architectures, organizations must analyze data access patterns and usage requirements. DynamoDB offers high-throughput, low-latency performance, but requires shifting from relational schemas to an access-pattern-driven approach.
Now let's take a look at the hybrid deployment models. So we've identified the services, we've decoupled them, and we've rewritten the new services on AWS. Now, how do we choose the deployment model?
If we look at serverless functions: Lambdas are ideal for event-driven, stateless, and highly scalable workloads. They reduce operational overhead, giving us automatic scaling and pay-per-execution pricing.
We could use container services such as ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service). These are suitable for stateful or complex microservices that require containerization and orchestration, and they provide the benefits of portability, scalability, and efficient resource utilization.
You could also use managed services like RDS, ElastiCache, OpenSearch, et cetera. These are designed for specific data needs, such as relational databases, caching, and search functionality, and they offer simplified management, high availability, and performance optimization.
Traditional infrastructure, like EC2, remains necessary for resource-intensive or legacy workloads that cannot be easily migrated to the other models. EC2 offers benefits such as flexibility and control over the underlying infrastructure.
So, effective migration strategies in general combine serverless components with containers and traditional infrastructure to optimize each service based on its specific requirements, rather than forcing a singular architectural pattern across all components. Specialized workloads with unique needs, such as long-running processes, GPU acceleration, memory-intensive operations, or legacy components, can remain on EC2 while new features leverage serverless technologies. This incremental approach delivers immediate benefits while maintaining system stability throughout the migration.
Now, let's talk about DevOps integration: continuous integration and continuous deployment.
Infrastructure as code: Terraform and AWS CloudFormation are highlighted as tools for codifying infrastructure, ensuring consistent and repeatable deployments. They also offer version control of infrastructure, allowing for tracking changes and rollbacks, and they facilitate collaboration among team members through code reviews and shared infrastructure definitions, automating infrastructure provisioning and management.
Continuous integration: AWS CodeBuild and GitHub Actions are mentioned as tools for automated testing. Unit tests, integration tests, and infrastructure validation are emphasized as essential components of continuous integration. They offer a variety of benefits, such as early issue detection, improved code quality, and faster feedback.
Now let's take a look at deployment automation.
AWS CodePipeline is highlighted as a tool for orchestrating serverless deployments, and you could also use other internal tools; most tools are compatible with AWS in general.
Deployment strategies: canary releases and blue-green deployments are mentioned as strategies for reducing risk and maintaining availability. Automated deployment reduces manual effort and errors, while canary releases and blue-green deployments minimize the impact of deployment failures.
Monitoring and observability: CloudWatch, X-Ray, and specialist tools are recommended for monitoring and observability. Logging, distributed tracing, and alerting are emphasized as crucial for detecting and resolving issues. These offer benefits such as real-time insights and proactive issue detection, enabling detection and resolution of issues before they impact users, and they simplify troubleshooting through logging and distributed tracing.
When we talk about how to overcome these migration challenges, there are some commonly highlighted issues we see with migrations, such as cold start latency. Cold starts occur when a Lambda function is invoked for the first time or after a period of inactivity, leading to increased latency.
A couple of solutions: provisioned concurrency pre-warms Lambda functions and reduces cold start times. Scheduled warming: regularly invoking the Lambda functions to keep them active. Code optimization: reduce function size and dependencies to minimize initialization time. Lightweight runtimes: in Java and .NET, use optimized runtimes to reduce initialization overhead.
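One common Go-specific cold start optimization, sketched here as my own illustration, is to build expensive dependencies (SDK clients, connection pools) at package scope, so they are created once per cold start and reused across warm invocations rather than rebuilt on every call:

```go
package main

import "fmt"

// initCount tracks how many times expensive setup runs (for demonstration).
var initCount int

// client stands in for an expensive-to-create dependency such as an AWS SDK
// client. Initializing it at package scope means the work happens once,
// during the cold start, and every warm invocation reuses it.
var client = newClient()

func newClient() string {
	initCount++ // in real code: open connections, load config, etc.
	return "shared-client"
}

// Handler reuses the package-level client on every invocation.
func Handler() string {
	return client
}

func main() {
	Handler()
	Handler()
	fmt.Println(initCount) // still 1: setup ran only at cold start
}
```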
The next challenge is distributed system complexity. Serverless architectures often involve distributed systems, which can be complex to manage and troubleshoot.
A couple of ways to overcome this: comprehensive observability, using X-Ray and CloudWatch, provides end-to-end tracing and monitoring for distributed systems. Eventual consistency: design systems to handle eventual consistency rather than relying on strong consistency. Circuit breakers: prevent cascading failures by isolating failing components. Backoff strategies: implement retry mechanisms with exponential backoff to handle transient errors.
Legacy code refactoring: this is the third challenge. The challenge we see is that migrating legacy code to serverless can be difficult due to highly coupled dependencies and monolithic architectures.
A couple of ways to overcome this: incremental refactoring breaks down the monolithic application into smaller, independent components. Identifying boundaries and isolating dependencies: identify the logical boundaries and dependencies to facilitate refactoring. We can also use the strangler pattern, gradually replacing the monolithic functions with serverless equivalents while maintaining system stability.
Now, the last and very important point: security considerations. Serverless architectures require careful security considerations, including access control and vulnerability management.
A couple of solutions to handle security: implement function-level IAM permissions, applying the least privilege principle by granting only the necessary permissions to each Lambda function. And you could use services like AWS Security Hub, Amazon Inspector, and serverless security scanners to identify vulnerabilities even before deployment.
Next, let's take a look at a case study of an e-commerce platform migration.
During the initial assessment, the platform experienced significant performance degradation during peak seasons, with transaction processing slowing down by 400%. While performing the analysis, the bottlenecks were identified in the monolithic architecture, leading to prioritization of the inventory and payment systems for migration. The key insight from this initial assessment phase is the importance of going through a thorough analysis and prioritization before even embarking on a migration.
Database transition: the product catalog was migrated from MySQL to DynamoDB, a NoSQL database, with optimized access patterns. A dual-write approach was used to ensure a zero-downtime migration and maintain data consistency across systems. This database transition phase demonstrates the importance of choosing the right database technology and implementing a robust migration strategy.
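The dual-write idea can be sketched as follows. This is my own illustration, not the case study's actual code: during the migration window, every write goes to the legacy store first and then to the new store, so both stay consistent and reads can be cut over (or back) at any time:

```go
package main

import "fmt"

// Store is a minimal key-value abstraction standing in for both the old
// MySQL table and the new DynamoDB table during migration.
type Store interface {
	Put(key, value string) error
}

// memStore is an in-memory Store used here for demonstration.
type memStore struct{ data map[string]string }

func (m *memStore) Put(k, v string) error { m.data[k] = v; return nil }

// dualWrite writes each change to the legacy store first, then to the new
// store. If the legacy write fails, nothing is written to the new store,
// so the legacy system remains the source of truth during the transition.
func dualWrite(legacy, next Store, key, value string) error {
	if err := legacy.Put(key, value); err != nil {
		return err
	}
	return next.Put(key, value)
}

func main() {
	mysql := &memStore{data: map[string]string{}}
	dynamo := &memStore{data: map[string]string{}}
	_ = dualWrite(mysql, dynamo, "sku-1", "blue widget")
	fmt.Println(mysql.data["sku-1"], dynamo.data["sku-1"])
}
```

A real implementation also needs a backfill of historical data and a reconciliation check before reads are switched to the new store.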
Next comes the first migration phase: the payment processing system, which was extracted into Lambda functions with API Gateway, using an event-driven architecture. This resulted in a 30% reduction in infrastructure cost and a 50% improvement in response times during peak traffic. The key insight of this phase: it highlights the tangible benefits of migrating critical components to serverless.
The fourth point is the complete serverless architecture. The complete serverless implementation achieved an 80% cost reduction during normal operations and enabled the platform to handle 10 times the traffic surges without performance issues. Development cycles improved from monthly releases to multiple daily deployments. This final phase demonstrates the transformative impact of a complete serverless migration on performance, cost, and development agility.
Now let's take a look at the roadmap for a successful serverless migration.
The first step is to assess and plan: analyzing the existing architecture to identify suitable candidates for migration, then mapping dependencies between services to understand the impact of migration, and prioritizing migration candidates based on business value and risk. The goal is to create a clear understanding of the current state and develop a migration strategy.
During the initial pilot, implement a POC with a non-critical, isolated service, build internal expertise in serverless technologies, and establish architectural patterns for serverless development. The whole goal of this initial pilot is to validate the migration approach and build confidence.
Expand and refine: gradually scale the migration to critical workloads, refining the architectural patterns based on experience and feedback. The goal of expanding and refining is to ensure a smooth transition of core business services to serverless.
Enterprise transformation: adopt serverless as a standard for development, embrace a cloud-native culture, and restructure teams around business domains rather than technology silos. The goal of this enterprise transformation is to fully leverage the benefits of serverless and transform the organization.
So overall, if we consolidate these into a few points: begin with a thorough assessment by mapping the dependencies and prioritizing migration candidates based on business value and risk. Launch a pilot with a bounded, non-critical service to build expertise and establish architectural patterns. Methodically expand to core business services while refining implementation patterns. Finally, transform your organization by adopting serverless as a standard for new development and restructuring teams around business domains rather than technology silos.
In conclusion, we have explored the significant advantages of migrating to a serverless architecture on AWS, from cost reduction and improved scalability to enhanced development agility. The benefits are very clear.
Remember, this journey is about more than just technology. It's about transforming how we build and deliver applications.
By embracing serverless, we are positioning ourselves to innovate
faster, respond to market changes more effectively, and ultimately
provide greater value to our users.
The future of cloud computing is serverless, and I encourage you to explore
how it can empower your organization.
Thank you all.
Thank you so much.