Transcript
This transcript was autogenerated. To make changes, submit a PR.
I'm Krista Barde and I'm excited to be here today to talk about how we transformed
our product lifecycle management (PLM) systems using the Go programming language.
Modern product development demands a lot from PLM systems: the need to handle
massive amounts of data and provide insights in real time.
Our journey was an evolution toward concurrency and a microservices architecture.
In this presentation, I'll walk you through how we built a system
that processes over a hundred thousand product data points per second
across distributed services.
This resulted in a reduction in product launch timelines.
We'll cover everything from custom schedulers and circuit
breakers to memory optimization and high-throughput pipelines.
I'll share the architectural decisions that helped us transform
our PLM infrastructure into a high-performance, AI-driven platform.
So let's start by understanding the challenges we faced. As you
can see here, we were dealing with, first of all, massive data volume.
PLM systems struggle with the sheer volume of product data generated in
today's development environments.
Think about CAD files, specifications, test results, and regulatory documentation.
It creates a huge processing demand.
Second, legacy PLM architectures often rely on monolithic databases and
sequential processing pipelines.
These simply cannot scale to meet the demands of increased product
complexity and data requirements.
And third, cross-team communication. Information silos between different teams,
engineering, manufacturing, and quality, cause significant delays. Critical data
is often trapped in disparate systems and formats, making it more and more difficult
to share and collaborate effectively.
To address these challenges, we built a metadata engine.
This engine is really the foundation: it handles the data
ingestion and transformation that feeds all our systems.
It has three key components.
First is metadata extraction: specialized parsers convert proprietary
CAD formats into standardized data
for seamless integration with analysis systems and visualization tools.
Second is parallel processing.
We use goroutines to process different sections of complex product hierarchies
at the same time, along with carefully designed synchronization
primitives to maintain consistency.
And the third one is zero-allocation strategies.
We implemented custom memory management to process
data without allocating new objects for each transformation.
This significantly reduced garbage collection pressure
and improved throughput by 85%.
Now, next, let's talk about our event-driven notification
system built with Go channels.
This system significantly improved cross-team communication.
It works through four main steps.
First, any significant data change generates events into a non-blocking,
channel-based system.
This ensures that producers never have to wait for consumer processing.
Second, intelligent routing: a central dispatcher routes events based on their payload.
Third, that routing enables a robust acknowledgement mechanism.
And the final one is aggregated event streams
that feed real-time dashboards.
This provides immediate visibility into process status.
It reduced the cross-team communication delay by 45%.
For performance, we implemented a distributed caching architecture.
This system uses a multi-tier structure:
a local cache backed by distributed Redis clusters. Together with
efficient serialization, this enabled 80% faster data access across our services.
We also use predictive prefetching: a machine learning model predicts likely data
access patterns and triggers background prefetching operations that populate the
cache before requests even arrive.
To maintain data integrity, we have cache invalidation:
a sophisticated publish-subscribe mechanism ensures cache
coherence across the distributed nodes, and atomic operations prevent race
conditions during updates.
And finally, the system uses adaptive sizing to balance
allocation between the cache tiers.
Now, let's move to integrating AI into our PLM systems.
It required advanced concurrency patterns, and Go proved to be very helpful here.
We used custom sync primitives: we developed specialized synchronization
primitives that coordinate AI pipeline
execution across distributed services
while maintaining data consistency through the locking mechanism. We
also implemented efficient memory pools: custom memory pools that preallocate
buffers for large-scale data processing.
This significantly reduced garbage collection and enabled stable
performance even under peak load.
And lock-free data structures: implementing atomic operations and carefully
designed lock-free data structures
enabled concurrent access patterns and linear scalability.
These patterns were crucial for integrating machine learning.
Now let's talk about gRPC. Efficient communication between
our microservices is critical,
and we used gRPC for high-performance communication.
Our gRPC implementation is based on service definitions.
We use strongly typed Protocol Buffers, with
code generation directly from those interface definitions.
gRPC supports bidirectional streaming, enabling efficient data
transfer over multiplexed connections.
It also provides client-side load balancing.
And finally, it gives us performance monitoring with detailed metrics. Now, let's talk
about the techniques we implemented for performance optimization.
The first one is GC tuning.
Then the second one is the backpressure mechanism, with flow control
that prevented system overload.
And then the last one is real-time monitoring, which we use for continuous
profiling and to enable automatic resource allocation. Our custom
monitoring system leverages Go's
built-in profiling tools to provide real-time insight into the system, and
it adjusts resource allocation to maintain optimal response
times across all the services.
The next one here is real-time PLM transformation examples.
AI-powered validation allowed us
to identify and resolve issues much earlier in the development process.
The integration of machine learning models with our Go services enabled
predictive maintenance capabilities and reduced
our system downtime by 60%.
And the event-driven notification system improved cross-team communication.
And now, the most important thing: the robust security
and scalability architecture. For authentication and authorization,
we implemented service-to-service authentication using mTLS.
We used rate limiting, and we used various data partitioning approaches for scalability.
So let's recap the key takeaways. Our business impact was significant:
65% faster product launches.
The technology implementation was centered around Go microservices,
AI integration, and advanced concurrency principles.
Architecturally, we focused on event-driven design, distributed caching, and
efficient communication. Our journey demonstrates that Go's elegant concurrency
model is a strong foundation for high-performance, AI-driven PLM platforms.