Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone.
I'm Thena and I'm so excited to join you today at Conf42.
A huge shout out to the amazing Conf42 team for putting this event together.
And of course, thank you to all of you for being here, even if
we are connecting virtually.
I'm grateful for the chance to share some meaningful insights with you.
I contribute to GoFr, which is an open source Golang framework
designed to supercharge microservice development in Go.
At GoFr, we are passionate about helping developers build scalable,
resilient, and secure microservices with the power of Go. In this talk, let's
kick things off by exploring why gRPC has become the cornerstone of modern
microservices architecture, as it is a fantastic choice for building
high-performance, resilient distributed systems.
If we talk about the reasons, first comes performance.
gRPC leverages HTTP/2 multiplexing, which allows multiple concurrent requests
over a single connection, and it uses protocol buffers for binary serialization.
This combination makes it incredibly efficient, often significantly faster
than traditional REST over JSON.
Second comes the strong contracts.
With protocol buffers, you get a clear, language-agnostic interface definition.
This enforces types and ensures consistency across different services, no
matter what language they're written in.
It virtually eliminates schema mismatches and versioning headaches.
Third comes the streaming superpower.
gRPC offers powerful streaming capabilities: bidirectional,
client-side, and server-side.
This is crucial for real-time applications nowadays, like chat services,
IoT data streams, or live share tickers.
But here's the catch, and it's a big one.
gRPC is a black box by default.
Why?
Because without explicit effort, you get no request visibility,
no built-in health checks.
You don't know what's happening inside those efficient binary pipes.
That brings me to our analogy for today:
running gRPC without observability
is like driving blindfolded at night.
You are moving fast, but you have no idea where you are, where you're
going, and what obstacles are ahead.
Let's dive into these critical blind spots that so many teams encounter
when deploying gRPC in production.
First come the tracing gaps.
Without distributed tracing, we simply cannot track a request as it
hops across multiple microservices.
For example, a single user action might touch five, ten, or even 20
different gRPC services, and if one of them fails or introduces latency,
isolating the root cause becomes a nightmare because you're left with
a fragmented, incomplete picture.
Second comes the logging chaos.
Often, gRPC logs are unstructured, verbose, and lack context.
When an error hits, you are sifting through mountains of text,
desperately trying to correlate log lines from different services.
Debugging gRPC errors can literally take hours, turning a simple bug
fix into an all-day ordeal.
And third comes the metrics blindness.
Without granular latency, error rate, or request volume metrics, you are
completely in the dark about the health and performance of your gRPC services.
You won't know if users are experiencing slow responses, if an upstream
dependency is failing, or if a new deployment is causing cascading issues.
So now let me give you a real-world consequence.
Suppose a 500 ms latency spike in a critical payment service goes completely
undetected for hours, leading to frustrated users and abandoned transactions.
Partial outages in a streaming pipeline fail silently, which means that the data
is not processed correctly and you only find out days later when the downstream
system starts screaming. And health checks?
Often they are an afterthought, bolted on at the last moment, if at all.
Now, this isn't theoretical. It's why development teams spend weeks, sometimes
months, manually instrumenting gRPC services, trying to stitch together
a coherent observability story.
Now, this is where GoFr comes in. Meet GoFr, an open source
Golang framework whose core philosophy is simple, yet powerful:
built-in observability should be the default, not an option.
It should be foundational, not an afterthought.
With GoFr, this is exactly what you get, because every gRPC call, whether
it's a simple unary request or a complex bidirectional stream, gets auto-logged,
auto-traced, and has auto-exported metrics.
You don't write a single line of instrumentation code for it.
Health checks are enabled out of the box for gRPC servers via the standard
gRPC health protocol, giving you immediate visibility into service readiness.
And critically, this is all achieved with zero manual
instrumentation from your side.
Now, how do we do this?
Our secret weapon is the GoFr CLI tool that we have built, which scaffolds
your gRPC services with all the necessary observability hooks
baked in from the very beginning.
So now, it's time to actually see this in action.
I'm going to take you from a bare-bones gRPC service to full observability
in literally five minutes, without writing any observability-specific code.
So let's not waste any more time and let's get coding with GoFr.
So we start with setting up a gRPC server from scratch.
Let's begin by creating a new project directory, let's say server.
Inside this directory, we'll define a proto file, which is
the heart of any gRPC server.
In this example, we'll name it greet.proto.
A proto file is a schema definition file used in gRPC where we define
the structure of our data and service methods using protocol buffers.
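As a rough sketch, a greet.proto along the lines of this demo might look like the following; the service, message, and field names are assumptions based only on the SayHello call and user info message described later in the talk.

```proto
// greet.proto -- illustrative sketch; names are assumptions based on the demo.
syntax = "proto3";

package greet;

option go_package = "./greet";

service Greet {
  // Unary RPC: takes user info, returns a single hello message.
  rpc SayHello (UserInfo) returns (HelloResponse);
}

message UserInfo {
  string first_name = 1;
  string last_name = 2;
}

message HelloResponse {
  string message = 1;
}
```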
Once the proto file is ready, we need to generate Go code from it
using the protocol buffer compiler.
The protoc compiler converts the proto definitions into Go structs and service
interfaces, making it easier to handle binary serialization and RPC calls in Go.
To generate the Go code for a gRPC service, we simply run the following
command, providing the output path as well as the proto file path.
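A typical invocation looks roughly like this; it assumes the protoc-gen-go and protoc-gen-go-grpc plugins are installed, and the paths are illustrative.

```sh
# Generate Go message types and gRPC service stubs from greet.proto.
protoc \
  --go_out=. --go_opt=paths=source_relative \
  --go-grpc_out=. --go-grpc_opt=paths=source_relative \
  greet.proto
```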
This command generates two files:
greet.pb.go and greet_grpc.pb.go.
These files contain the auto-generated code to perform gRPC calls.
However, one thing to note is that this code does not automatically
support observability or health checks,
something that is very crucial for building production-grade microservices.
This is where GoFr steps in. GoFr has always treated observability as an
inbuilt feature, even in gRPC services.
To get started with it, we need to install the GoFr CLI by running
go install gofr.dev/cli/gofr@latest.
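That is, in a terminal:

```sh
go install gofr.dev/cli/gofr@latest
```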
Next, we navigate to our server directory and execute the command
gofr wrap grpc server, giving it the proto file path, that is, greet.proto.
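The invocation looks roughly like this; the exact flag shape may differ between CLI versions, so check gofr --help on your install.

```sh
# Run from the directory containing greet.proto.
gofr wrap grpc server --proto=./greet.proto
```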
This command generates multiple files, including greet_server.go.
Please note that these generated files are always present in the
directory of the proto file itself.
The only file that we are supposed to modify is greet_server.go.
This file has a template for our gRPC server implementation,
already wired with context support, observability hooks, and health checks.
Here we'll keep things simple: receiving a request with user info,
binding it into our defined UserInfo message type,
and then returning a single hello message with the first name that
we received in the user info.
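A minimal sketch of that handler in greet_server.go could look like the following; the generated template differs by GoFr version, and the UserInfo and HelloResponse types are the illustrative ones from the proto sketch above.

```go
// greet_server.go (sketch) -- only the handler body is ours to modify;
// the surrounding template and GreetGoFrServer type are generated by the GoFr CLI.
package greet

import (
	"fmt"

	"gofr.dev/pkg/gofr"
)

// SayHello binds the incoming request into the UserInfo message and
// returns a single hello message using the first name we received.
func (s *GreetGoFrServer) SayHello(ctx *gofr.Context) (any, error) {
	var user UserInfo

	if err := ctx.Bind(&user); err != nil {
		return nil, err
	}

	return &HelloResponse{
		Message: fmt.Sprintf("Hello %s!", user.FirstName),
	}, nil
}
```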
Now, in the main.go file, we import the package where greet_server.go
is present and register our gRPC service with GoFr by calling the generated
RegisterGreetServerWithGofr function, passing the arguments app and
NewGreetGoFrServer().
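A sketch of that wiring, with the registration names reconstructed from the talk and an assumed module path:

```go
// main.go (sketch) -- the import path is a placeholder for wherever
// greet_server.go lives in your module.
package main

import (
	"gofr.dev/pkg/gofr"

	"example.com/server/greet" // assumed module path
)

func main() {
	app := gofr.New()

	// Register the generated gRPC service with the GoFr app so every call
	// is auto-logged, auto-traced, and exported as metrics.
	greet.RegisterGreetServerWithGofr(app, greet.NewGreetGoFrServer())

	app.Run()
}
```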
If we look at the structure of the gRPC handlers, we can find the
GoFr context available there.
This gives us seamless access to connected databases, metrics, logs,
and traces, so we can easily add custom observability as well as perform
DB calls inside the gRPC handlers.
Moving on, let's run the server and then go to Postman and call the
method SayHello with all the user details.
Coming back to the terminal, we can see
that GoFr automatically logs the request. But that's not all.
GoFr also provides built-in metrics.
We can access them at the localhost:2121/metrics endpoint.
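For example, from a terminal:

```sh
# Built-in GoFr metrics endpoint (port 2121, as mentioned in the talk).
curl http://localhost:2121/metrics
```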
Additionally, all the inbuilt as well as custom traces are visible in the
trace exporter that we had set up, in this case being tracer.gofr.dev.
For added resilience, GoFr also adds a health service for every gRPC server.
This can be accessed through the health field in the generated struct.
Health checks are automatically registered for both the overall server
and the individual gRPC services.
These health checks are extremely useful for Kubernetes health
checking of a gRPC server.
So if we perform a curl request to the overall gRPC server, we can see
that we get the following response.
But if we perform the curl request for a specific service in that gRPC
server, the health check response returned is for that particular service only.
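Since gRPC health checks speak the gRPC protocol rather than plain HTTP, a command-line check is usually made with a tool like grpcurl; the commands below are a sketch, and the port and service name are assumptions.

```sh
# Overall health of the gRPC server (port 9000 is only an example).
grpcurl -plaintext localhost:9000 grpc.health.v1.Health/Check

# Health of one specific registered service only.
grpcurl -plaintext -d '{"service": "greet.Greet"}' \
  localhost:9000 grpc.health.v1.Health/Check
```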
In case we had another registered service on the same host, we could
call the health check on that service by using the Check method on the
health server, specifying the service name.
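A sketch of what that could look like from inside a handler; the shape of the generated health field is assumed here, and the other service's name is purely hypothetical.

```go
// Checking another service's health from a GoFr gRPC handler (sketch).
import "google.golang.org/grpc/health/grpc_health_v1"

func (s *GreetGoFrServer) dependencyHealthy(ctx *gofr.Context) (bool, error) {
	// "orders.Orders" is a hypothetical second service registered on this host.
	res, err := s.health.Check(ctx, &grpc_health_v1.HealthCheckRequest{
		Service: "orders.Orders",
	})
	if err != nil {
		return false, err
	}

	return res.GetStatus() == grpc_health_v1.HealthCheckResponse_SERVING, nil
}
```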
Additionally, if a service is temporarily not fit to serve requests, we can set
its status using health.SetServingStatus, passing the context, the name of
the service, as well as the status of the service. We can even shut down or
resume services dynamically, making it incredibly useful for Kubernetes health checks.
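As a sketch, that could look like this inside the server code; the talk describes the GoFr wrapper taking the context, the service name, and the status, and the service name used below is illustrative.

```go
// Toggling health status dynamically (sketch).
import "google.golang.org/grpc/health/grpc_health_v1"

func (s *GreetGoFrServer) drainForMaintenance(ctx *gofr.Context) {
	// Temporarily mark the greet service as not fit to serve requests.
	s.health.SetServingStatus(ctx, "greet.Greet",
		grpc_health_v1.HealthCheckResponse_NOT_SERVING)

	// ... perform maintenance or wait for a dependency to recover ...

	// Put it back into rotation once it is healthy again.
	s.health.SetServingStatus(ctx, "greet.Greet",
		grpc_health_v1.HealthCheckResponse_SERVING)
}
```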
And that's it.
The key point to reiterate here is that there was zero instrumentation
code actually written by us.
This is the power of GoFr.
But wait, what about streaming?
We talked about gRPC unary calls in the demo before, but what about streaming?
gRPC is renowned for its streaming capabilities, and observability here is
even more critical, yet often overlooked.
GoFr handles all streaming types with the same zero-instrumentation approach.
Whether it is server streaming, client streaming, or bidirectional
streaming, GoFr automatically tracks it.
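For reference, the three streaming shapes are declared in a proto file like this; the chat-style service and message names are only illustrative, echoing the chat example mentioned below, and the message definitions are omitted.

```proto
service Chat {
  // Server streaming: one request, a stream of responses.
  rpc Subscribe (Topic) returns (stream ChatMessage);

  // Client streaming: a stream of requests, one response.
  rpc Upload (stream ChatMessage) returns (Ack);

  // Bidirectional streaming: both sides stream concurrently.
  rpc Converse (stream ChatMessage) returns (stream ChatMessage);
}
```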
You get real-time message-level tracing. In a
chat service, for instance,
you can see every single message sent and received within the
stream being instrumented as its own span in your trace.
You get crucial metrics for the streams: stream duration,
message counts, and errors within the stream are automatically captured.
For example, in this chat service, every message is
traced and every error is logged.
This level of detail for streaming services is incredibly powerful for
debugging complex real-time interactions, and it's not just the server side.
Even gRPC clients built with GoFr get auto-instrumentation for their upstream calls.
This means your entire call graph across multiple services is fully traceable.
That concludes today's session.
Thank you so much to all of you for your time.
Going forward, you can check our examples at gofr.dev/docs to dive deeper into
streaming, as well as unary observability of the gRPC services you set up in GoFr.
We first established that gRPC is a
powerhouse for performant microservices, but without observability,
it is a recipe for production fires.
GoFr, an open source Golang framework, delivers that essential observability
automatically: structured logs, distributed traces, comprehensive
metrics, and robust health checks.
And crucially, it works seamlessly for both unary and complex streaming gRPC
patterns, right out of the box.
So how do we get started with it?
It's incredibly easy.
We simply do go get gofr.dev to pull in the framework. For
our existing gRPC services,
we use the GoFr CLI tool and run the
gofr wrap grpc server or client command to inject GoFr's magic.
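That is, roughly (the flag shape for the wrap command may vary by CLI version):

```sh
# Pull in the framework.
go get gofr.dev

# Wrap an existing gRPC server or client with GoFr's observability.
gofr wrap grpc server --proto=./greet.proto
gofr wrap grpc client --proto=./greet.proto
```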
And for more information, you can go ahead and explore our comprehensive
documentation and examples at gofr.dev/docs, or our examples on GitHub,
to unlock the full potential of gRPC with observability. And that is it.
But when it comes to gRPC, we are just getting started in GoFr, and we
are continuously improving GoFr's observability, making setup
even more seamless and powerful.
We invite you to join us on this journey.
Let us know your feedback, or better yet, consider contributing
to the open source GoFr project.
Together we can make gRPC observability effortless for everyone.
It's time to remove the blindfold, stop wasting precious development
time manually instrumenting, and truly start understanding our gRPC services.
That's it for today.
Thank you so much to all of you.