Transcript
Hello everyone. Thanks for joining. This is Nikolay with my talk for the Conf42 conference, Golang track, and the title of my talk is Practical Guide to Testing Go Services.
A couple of words about me: I'm a senior software engineer at Zalando, working from Helsinki on the pre-owned project. I started my career with C and did quite a lot of Java, and for the past six years I have been doing lots of Go engineering and also writing lots of tests in Go. I'm also the author of the pgx-outbox library, which is available on GitHub; it implements the transactional outbox pattern using the pgx driver.
Many of you are familiar with the testing pyramid concept, which suggests that unit tests are cheap to run, so we should have more of them, while integration tests take more time and more resources to run, so we should have fewer of them. But there is a newer concept called the testing trophy, which advocates the idea that we should run more integration tests, even more than unit tests, because these days integration tests have become extremely easy to run and execute, whereas unit tests tend to tightly couple our code to the tests. In that sense integration tests might have more value, because they cover our integrations with external systems like Postgres, Kafka, and so on. One more idea from the testing trophy concept is static tests: basically linters. In Go, many code issues can be caught using static analysis tools. So yeah, the testing trophy is a relatively new concept.
Now, about testing: I would like to introduce several principles which I'm advocating in this talk. The first is testing layer by layer in our Go services; what I mean by layer by layer is basically the repository layer, the service layer, and the HTTP layer. I'm going to use table tests everywhere, because they are always useful and easy to extend. I'm also advocating for comprehensive comparison of actual data to expected data: basically, if we have a struct, we should compare all of its fields, and I'm going to show libraries which facilitate easy comparison of all the fields of a struct in a test. One more principle is to use randomized data for the tests, not just some hardcoded dummy values; there is another library, which I'm going to show, that facilitates using randomized data and improves the quality of our tests.
Now I would like to introduce the demo application, the example service which we are going to test today. The service is about managing a cart in an e-commerce setting: basically, we add unique items to the cart, because pre-owned items are unique, we can remove items from the cart, and we can get the cart by the user ID, or owner ID, as I refer to it in this talk. I'm going to use Postgres as the main database with the pgx driver, which is the most popular one, and the Gin framework for HTTP. I'm also going to use a hexagonal-like architecture, which in the case of this very simple application is very similar to a layered architecture with just a repository layer, a service layer, and an HTTP layer. In hexagonal architecture, at the very center we have the domain, which is independent of the application, and the application is basically our business logic, our service. Then we have the infrastructure: ports, which are interfaces, and adapters, which are implementations of those interfaces, connecting our application business logic with the database and HTTP.
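As a rough sketch of what that looks like in code (the type and method names here are my own illustration, not necessarily the ones from the demo project), the domain and one of the ports might be:

package domain

import "context"

// Item is a unique pre-owned product with its price.
type Item struct {
    ID       string
    Price    int64 // price in cents
    Currency string
}

// Cart is the aggregate the service manages: the unique items owned by one user.
type Cart struct {
    OwnerID string
    Items   []Item
}

// CartRepository is a port: an interface the application depends on,
// implemented by a Postgres adapter in the infrastructure layer.
type CartRepository interface {
    AddItem(ctx context.Context, ownerID string, item Item) error
    DeleteItem(ctx context.Context, ownerID, itemID string) error
    GetCart(ctx context.Context, ownerID string) (Cart, error)
}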
So let's start with the repository layer: how can we test the layer which talks to the database? One of the main ideas here is that we should test against a real database, not a mocked one, because these days we can run a fresh database, of exactly the same version as production, in a Docker container, and then test our queries and our code against exactly the same database as in production. The other idea is to actually spin up the containers from Go code; this way we manage the lifecycle of the containers in the tests, together with the tests themselves, rather than as something completely separate.
For this there are two libraries for starting a container from Go code. The most popular one is testcontainers-go, which is also the one I use the most. It comes with more than 30 modules which simplify the configuration for the person who writes the test. I have used it quite a lot for Postgres and LocalStack; LocalStack is an emulation of Amazon Web Services, it covers quite a lot of services such as SNS, SQS and S3, and I found it very useful for my projects.
We are going to see how one can use it for Postgres. It requires just a couple of lines of code to run a Postgres container from the Go code itself. Here you can see that a basic wait strategy is used: the idea is that before you start your test, you need the container to be ready for it, and because we're using the Postgres module, it already knows how to wait for the Postgres container to be ready for our tests. Just as an example, I also show that one can provide init scripts which initialize the database: create the tables, maybe insert some data for the tests, and so on. Then, from the container, one can get a connection string; I show an example in the code. The nice thing here is that it uses a randomized port on the local machine, so you can start different containers and they don't collide with each other, and your tests can run in parallel.
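Since the slide itself isn't reproduced in this transcript, here is a minimal sketch of that idea, assuming a recent version of testcontainers-go with its Postgres module (the image tag, credentials and init-script path are my own placeholders):

package cart_test

import (
    "context"
    "testing"
    "time"

    "github.com/testcontainers/testcontainers-go"
    "github.com/testcontainers/testcontainers-go/modules/postgres"
    "github.com/testcontainers/testcontainers-go/wait"
)

// startPostgres starts a disposable Postgres container for the tests.
func startPostgres(ctx context.Context, t *testing.T) *postgres.PostgresContainer {
    t.Helper()

    pgContainer, err := postgres.Run(ctx,
        "postgres:16-alpine", // same major version as production
        postgres.WithDatabase("carts"),
        postgres.WithUsername("test"),
        postgres.WithPassword("test"),
        postgres.WithInitScripts("testdata/init.sql"), // creates tables, seeds data
        testcontainers.WithWaitStrategy(
            wait.ForLog("database system is ready to accept connections").
                WithOccurrence(2).
                WithStartupTimeout(30*time.Second)),
    )
    if err != nil {
        t.Fatal(err)
    }
    return pgContainer
}

The connection string with the randomized host port then comes from pgContainer.ConnectionString(ctx, "sslmode=disable"), as shown in the suite sketch further down.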
One more question is how to manage the lifecycle of our containers, that is, in which part of our Go code we should actually start the container. I'm advocating for using the suite package of the testify library, a testing framework, because it has lifecycle hooks which run before all the tests in the suite, and afterwards we can shut down the container. I found it much more convenient than using TestMain, for example, or sync.Once, or starting a container per test, because that might be costly and cause some delays.
This is the example of how we can introduce the testify suite into our code. We just need to embed the suite.Suite struct into our own struct, and in the cart repository suite I add a test container field, so that in the lifecycle hook I can start the container for the whole suite and then shut it down after all the tests have run. The last line shows how to run the whole suite, which runs all the tests belonging to that suite.
It also shows how we can set up the suite: there are special methods called SetupSuite and TearDownSuite. In SetupSuite I'm doing some initialization: I'm starting the container, initializing the pgx connection pool, and then initializing the repository layer using the connection pool. In TearDownSuite I'm just stopping and terminating the container. Strictly speaking, that's not needed, because by default the testcontainers library starts a special second container called Ryuk, which is responsible for killing all the containers if they're not terminated after the tests; so we are just helping the Ryuk container here.
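To make that concrete, here is a minimal sketch of such a suite, assuming the startPostgres helper from above and hypothetical cartrepo package and import paths for the demo project:

package cart_test

import (
    "context"
    "testing"

    "github.com/jackc/pgx/v5/pgxpool"
    "github.com/stretchr/testify/suite"
    "github.com/testcontainers/testcontainers-go/modules/postgres"

    cartrepo "example.com/cartservice/internal/repository" // hypothetical path
)

type CartRepoSuite struct {
    suite.Suite

    container *postgres.PostgresContainer
    pool      *pgxpool.Pool
    repo      *cartrepo.Repo // the repository layer under test
}

// SetupSuite runs once before all tests in the suite.
func (s *CartRepoSuite) SetupSuite() {
    ctx := context.Background()

    s.container = startPostgres(ctx, s.T())
    connStr, err := s.container.ConnectionString(ctx, "sslmode=disable")
    s.Require().NoError(err)

    s.pool, err = pgxpool.New(ctx, connStr)
    s.Require().NoError(err)

    s.repo = cartrepo.New(s.pool)
}

// TearDownSuite runs once after all tests; Ryuk would clean up anyway,
// but terminating explicitly keeps things tidy.
func (s *CartRepoSuite) TearDownSuite() {
    s.pool.Close()
    s.Require().NoError(s.container.Terminate(context.Background()))
}

// TestCartRepoSuite is the entry point that runs every test method of the suite.
func TestCartRepoSuite(t *testing.T) {
    suite.Run(t, new(CartRepoSuite))
}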
This snippet shows how we can attach our actual tests to the test suite. I'm going to test the method which adds items to the cart. Before running the table tests, some configuration can be done, and then I need to use s.Run instead of t.Run. So basically I run the AddItem method from the repo, I provide some parameters from the test case, then I can get the actual cart and compare it with the expected one. For the assertion it's very handy to use the go-cmp library, which I'm going to explain right now.
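As a sketch, under the same assumptions as the suite above (the table rows and the validation behavior are illustrative, not taken from the real demo project; it also assumes the domain package sketched earlier and github.com/google/go-cmp/cmp are imported), such a test might look like this:

// TestAddItem is a table test attached to the CartRepoSuite sketched earlier.
func (s *CartRepoSuite) TestAddItem() {
    ctx := context.Background()

    tests := []struct {
        name    string
        ownerID string
        item    domain.Item
        wantErr bool
    }{
        {
            name:    "adds a unique pre-owned item",
            ownerID: "owner-1",
            item:    domain.Item{ID: "item-1", Price: 4999, Currency: "EUR"},
        },
        {
            // assumes the repository validates its input
            name:    "rejects an empty owner id",
            item:    domain.Item{ID: "item-2", Price: 100, Currency: "EUR"},
            wantErr: true,
        },
    }

    for _, tt := range tests {
        // s.Run instead of t.Run, so the subtest stays attached to the suite
        s.Run(tt.name, func() {
            err := s.repo.AddItem(ctx, tt.ownerID, tt.item)
            if tt.wantErr {
                s.Require().Error(err)
                return
            }
            s.Require().NoError(err)

            got, err := s.repo.GetCart(ctx, tt.ownerID)
            s.Require().NoError(err)

            want := domain.Cart{OwnerID: tt.ownerID, Items: []domain.Item{tt.item}}
            s.Require().Empty(cmp.Diff(want, got)) // go-cmp, explained next
        })
    }
}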
So, the go-cmp library. It's about semantic equality, and it's much more powerful than the reflect package and its DeepEqual function, which come with Go out of the box. The basic idea is that you first try the default behavior of go-cmp, and if it doesn't work, you can customize the behavior using options. Eventually all the fields of the struct get compared to each other, so the comparison, and the test itself, is much more comprehensive. By the way, it is not an official Google product.
Here I show how one can use go-cmp. At the very bottom of the snippet, the Diff function is basically the default behavior: you just compare the expected cart and the actual cart. If that doesn't work, you can start customizing the behavior. You can exclude certain fields from the comparison; here I'm excluding the CreatedAt field, for example. You can also guide go-cmp to treat empty and nil slices as equal to each other, because by default they're not considered the same, but you can customize that behavior. Also, certain structs which have unexported fields won't be comparable by default, so you can customize that as well, and here I'm customizing how the currencies are actually compared. Without customizing how the currencies are compared, I would basically get an error.
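Here is a minimal sketch of those options, with an illustrative struct of my own (I'm assuming a database-generated CreatedAt field and a currency type with unexported fields, in this case golang.org/x/text/currency.Unit):

package cart_test

import (
    "testing"
    "time"

    "github.com/google/go-cmp/cmp"
    "github.com/google/go-cmp/cmp/cmpopts"
    "golang.org/x/text/currency"
)

// cart is a stand-in for the demo project's struct, not the real one.
type cart struct {
    OwnerID   string
    ItemIDs   []string
    Currency  currency.Unit
    CreatedAt time.Time
}

func TestCompareCarts(t *testing.T) {
    want := cart{OwnerID: "owner-1", ItemIDs: nil, Currency: currency.EUR}
    got := cart{OwnerID: "owner-1", ItemIDs: []string{}, Currency: currency.EUR, CreatedAt: time.Now()}

    diff := cmp.Diff(want, got,
        // CreatedAt is set by the database, so exclude it from the comparison.
        cmpopts.IgnoreFields(cart{}, "CreatedAt"),
        // Treat nil and empty slices as equal; by default they are not.
        cmpopts.EquateEmpty(),
        // currency.Unit has unexported fields; without a comparer cmp.Diff panics.
        cmp.Comparer(func(a, b currency.Unit) bool { return a.String() == b.String() }),
    )
    if diff != "" {
        t.Errorf("cart mismatch (-want +got):\n%s", diff)
    }
}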
So we are done with the repository layer; now let's think about how we can test the service layer. There are a couple of concepts here, and one of the main ideas is that we should use pure tests if possible. Sometimes it's not possible, because a service can have external dependencies: it can depend on the repository layer, or on other ports, in the terminology of hexagonal architecture. So basically we should mock those dependencies, and that significantly simplifies testing the business logic.
In the Go world there are a couple of tools which simplify using mocks. The first idea is that we have to depend on an interface, not on a specific struct. Then, if our service depends on the repository interface, we can use mock generators and use the generated mocks of the repository and other dependencies in our tests. Historically, I have quite a lot of experience with golang/mock, which is now deprecated; uber-go/mock is basically the same thing, but they are not compatible. The most popular tool right now is mockery. It generates mocks from interfaces and follows the conventions defined in the testify mock package, so it provides a much simpler API compared to uber-go/mock or golang/mock, and it's also the most popular option right now.
I'm going to show you an example of how one can generate mocks. Our cart repository is an interface; it declares the methods which we implement somewhere else and which we have already tested using testcontainers-go. By using a go:generate directive, we can specify and configure what mock we want to create: the name of the mock struct, the output package, the output file name, and so on. Then one needs to install the mock generator on the local machine and actually run the go generate command, and the mock file gets generated. This mock can then be used in the test to mock the behavior of our dependency.
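For example, with mockery v2 command-line flags (the paths and file name here are my own assumptions), a directive above the port interface from earlier might look like this:

package domain

import "context"

// CartRepository is the port the service depends on; the directive below
// regenerates its mock into a separate mocks package.
//
//go:generate mockery --name CartRepository --output ../mocks --outpkg mocks --filename cart_repository.go
type CartRepository interface {
    AddItem(ctx context.Context, ownerID string, item Item) error
    DeleteItem(ctx context.Context, ownerID, itemID string) error
    GetCart(ctx context.Context, ownerID string) (Cart, error)
}

After installing mockery locally, running go generate ./... regenerates the mock file.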
Here I show how one can set up and configure mocks to use in table tests. Basically, a mock setup function has to be introduced for each table test case. It's quite a simple API: you specify which method you expect to be called and what values you want it to return. There are certain helpers, like mock.Anything, which I use here for the Go context. Then, when you actually run the tests, you create the mock, use the mock setup function of the test case to configure it, pass the mock in as the dependency, and create the service, the cart service in my case. Then you call certain methods on the service and make some assertions, and at the very end it's not obligatory, but handy, to call the AssertExpectations function. So this is quite straightforward; just remember that dependencies should be passed as interfaces, and then mocks can be generated for them.
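Here is a sketch of that shape, assuming hypothetical import paths, a mockery-generated mocks package and a service.NewCartService constructor (none of these names are confirmed by the talk):

package service_test

import (
    "context"
    "errors"
    "testing"

    "github.com/stretchr/testify/mock"
    "github.com/stretchr/testify/require"

    "example.com/cartservice/internal/domain" // hypothetical paths
    "example.com/cartservice/internal/mocks"
    "example.com/cartservice/internal/service"
)

func TestCartService_GetCart(t *testing.T) {
    ownerID := "owner-1"
    cart := domain.Cart{OwnerID: ownerID}

    tests := []struct {
        name      string
        mockSetup func(m *mocks.CartRepository) // configures the mock per test case
        want      domain.Cart
        wantErr   bool
    }{
        {
            name: "returns cart from repository",
            mockSetup: func(m *mocks.CartRepository) {
                // mock.Anything matches the context argument
                m.On("GetCart", mock.Anything, ownerID).Return(cart, nil).Once()
            },
            want: cart,
        },
        {
            name: "propagates repository error",
            mockSetup: func(m *mocks.CartRepository) {
                m.On("GetCart", mock.Anything, ownerID).Return(domain.Cart{}, errors.New("boom")).Once()
            },
            wantErr: true,
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            repoMock := mocks.NewCartRepository(t) // constructor generated by recent mockery versions
            tt.mockSetup(repoMock)

            svc := service.NewCartService(repoMock) // mock injected via the port interface

            got, err := svc.GetCart(context.Background(), ownerID)
            if tt.wantErr {
                require.Error(t, err)
                return
            }
            require.NoError(t, err)
            require.Equal(t, tt.want, got)

            // not strictly required (the generated constructor registers a cleanup),
            // but it makes the intent explicit
            repoMock.AssertExpectations(t)
        })
    }
}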
There is one more topic: how we provide the data for the tests. Of course we can use hardcoded data, but there are several options, and I personally found the gofakeit library the most useful for my case. In this snippet I show how we can create a fake cart with some faked values. So instead of hardcoding, we can use the gofakeit library, which has thousands of different functions: it can give you a randomly generated UUID, a price value, a currency, and dozens and dozens of other things, and it's really useful. First of all, you don't have to hardcode the data yourself, and second, each time the data used in the test is different, so sometimes you can find a bug which would otherwise be really difficult to find, thanks to this randomized data.
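A small sketch of such a helper, assuming the domain types from earlier and gofakeit v7 (the exact fields faked in the talk's snippet may differ):

package service_test

import (
    "github.com/brianvoe/gofakeit/v7"

    "example.com/cartservice/internal/domain" // hypothetical path
)

// fakeCart builds a cart with randomized values instead of hardcoded dummies.
func fakeCart() domain.Cart {
    return domain.Cart{
        OwnerID: gofakeit.UUID(),
        Items: []domain.Item{
            {
                ID:       gofakeit.UUID(),
                Price:    int64(gofakeit.Price(10, 500) * 100), // random price, in cents
                Currency: gofakeit.CurrencyShort(),             // e.g. "EUR"
            },
        },
    }
}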
All right.
Now we have completed the topic of how we test the service layer; let's proceed to the HTTP layer and how we can test it. The first principle is that we have to mock the service layer which is used by the HTTP layer, so we can use the same mock generator, mockery. It means that our service also has to be an interface, and the dependency has to be provided as an interface. For the HTTP layer there are two things we can test: first, we can test each HTTP handler in isolation, and second, we can test the whole router of the whole application. Sometimes people combine them into just one type of test, but ideally they should be separated; I'll explain in the following slides why. It's quite handy to use the httptest package, which is part of the standard library and comes with Go out of the box, and it's very handy for testing this. But it can be a little bit verbose to use, especially if you want to assert more advanced things, like asserting on the body, and be flexible in how you do the assertions. So there are a couple of libraries, listed below, which provide a more fluent API for making the call and asserting what you get back from the call to the HTTP handler or the router.
Alright, here I show our Gin handler; I use the Gin framework for HTTP, one of the most popular. We have this GetCart method, which takes a Gin context and returns us the cart and some HTTP status. So how can we test this Gin handler? First of all, we need to create the handler, and we configure the service as a mock: we inject the service as a mock and we configure the mock. And here comes the more interesting part: we need to create a recorder from the httptest package, and then we use that recorder to make a call to our handler; afterwards, from the recorder, we can get the data back, because it records what the output was for our input parameters.
If we want to test the Gin handler without testing the whole router, there is a specific function to create a Gin context, so we can provide the Gin context directly to our method. We use the recorder to create the Gin context and we provide some parameters: we prepare a GET request and we set that request into our Gin context. Then, finally, we make a call to our handler with the Gin context we constructed. After the call, we can get the HTTP status code from the recorder and assert it against the expected one. It is also possible to get the body back as a byte array, and then we can convert that to our cart and compare the carts, as in the last line of this code snippet. And of course we are going to use the go-cmp library under the hood to compare the carts, very flexibly and with a minimal amount of code.
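Put together, and again with hypothetical names for the handler constructor, mock and route parameter, such a handler test might look roughly like this:

package http_test

import (
    "encoding/json"
    "net/http"
    "net/http/httptest"
    "testing"

    "github.com/gin-gonic/gin"
    "github.com/google/go-cmp/cmp"
    "github.com/stretchr/testify/mock"
    "github.com/stretchr/testify/require"

    carthttp "example.com/cartservice/internal/http" // hypothetical paths
    "example.com/cartservice/internal/domain"
    "example.com/cartservice/internal/mocks"
)

func TestGetCartHandler(t *testing.T) {
    gin.SetMode(gin.TestMode)

    want := domain.Cart{OwnerID: "owner-1"}

    // the service dependency is injected as a mockery-generated mock
    svcMock := mocks.NewCartService(t)
    svcMock.On("GetCart", mock.Anything, "owner-1").Return(want, nil)

    handler := carthttp.NewCartHandler(svcMock)

    // the recorder captures the status code and body written by the handler
    rec := httptest.NewRecorder()
    ginCtx, _ := gin.CreateTestContext(rec)
    ginCtx.Params = gin.Params{{Key: "owner_id", Value: "owner-1"}}

    req, err := http.NewRequest(http.MethodGet, "/carts/owner-1", nil)
    require.NoError(t, err)
    ginCtx.Request = req

    handler.GetCart(ginCtx)

    require.Equal(t, http.StatusOK, rec.Code)

    var got domain.Cart
    require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &got))
    require.Empty(t, cmp.Diff(want, got))
}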
Now I would like to talk a bit about why we should actually separate our router and handler tests. The idea is to focus on different things. When we test a handler, we test how the handler interacts with our service, the dependency we injected: how it handles errors, how it processes the input, what the edge cases and status codes are, and so on. So it has to be a deep and comprehensive test of how our handler behaves. For the router it's a bit different: we have to focus on the router mechanics, meaning that we actually wired the handlers correctly to each route, that the route path is correct, how parameters are being extracted, and whether middleware is applied correctly or not. For the router it's enough to just test the happy path, because we are focusing on different things: on the routing mechanics.
I'm going to show you how we can do a simple test for the router. Here we actually need to define the URL path which is going to be called and what we are going to expect. Again, we need to configure the mock for the service and the status we expect back. Then, again, we have to use the httptest recorder. The router is our Gin engine, basically, which we construct somewhere outside of this test. Then we create an HTTP request, provide that HTTP request to the router, and see what happens. Again, the recorder records what the input and the output were, and we can assert on them; here I'm just asserting the status code, because it's about the happy path. Comparing the body here can be done, but it's not necessary, because our handler tests should focus on asserting the body; here we are just doing the happy path.
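A sketch of that happy-path router test, assuming a hypothetical carthttp.NewRouter constructor that returns the configured gin.Engine, plus the same hypothetical paths as before:

package http_test

import (
    "net/http"
    "net/http/httptest"
    "testing"

    "github.com/stretchr/testify/mock"
    "github.com/stretchr/testify/require"

    carthttp "example.com/cartservice/internal/http" // hypothetical paths
    "example.com/cartservice/internal/domain"
    "example.com/cartservice/internal/mocks"
)

func TestRouter_GetCart_HappyPath(t *testing.T) {
    svcMock := mocks.NewCartService(t)
    svcMock.On("GetCart", mock.Anything, "owner-1").Return(domain.Cart{OwnerID: "owner-1"}, nil)

    // NewRouter is assumed to wire up all routes and middleware on a gin.Engine
    router := carthttp.NewRouter(svcMock)

    rec := httptest.NewRecorder()
    req, err := http.NewRequest(http.MethodGet, "/carts/owner-1", nil)
    require.NoError(t, err)

    // the engine implements http.Handler, so the request goes through routing,
    // middleware and parameter extraction exactly as in production
    router.ServeHTTP(rec, req)

    // happy path only: body assertions live in the handler tests
    require.Equal(t, http.StatusOK, rec.Code)
}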
All right.
In the context of tests, it makes sense to talk a bit about coverage and coverage reports. The first snippet shows that we can generate a coverage report locally, convert it to HTML and open it in the browser; it can be combined into a single command, and sometimes it can be handy, and interesting, to see which parts of your code are not tested enough.
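The commands behind that are roughly the following (the exact flags on the slide may differ slightly; this is the standard Go tooling):

go test ./... -coverprofile=coverage.out
go tool cover -html=coverage.out   # opens the HTML report in the browser

or combined into one line:

go test ./... -coverprofile=coverage.out && go tool cover -html=coverage.out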
But there are even more automated solutions: there are services where you can upload your coverage reports. If you maintain an open source project on GitHub, that is also quite handy; for example, I use Coveralls, where you can upload your coverage report. It also provides a UI, and it can track for you whether your coverage increased or decreased.
Alright, we are quite close to the end of the talk, and there are some more tools I would like to mention which are interesting to explore and maybe use in your projects. Pact Go is a tool which facilitates contract testing. This talk focused on testing a single service, but in a multi-service environment it makes sense to test more than one service together, and then this Pact Go tool can be useful as well. Also, recently I started to use testifylint; this is a linter which actually tells you and alerts you if you are not using the testify assert and require packages correctly. It can sometimes be quite useful, although of course it's not so critical.
It also makes sense to think about testing the service when it's deployed to a test cluster. Here I indicate smoke tests and beyond: the smoke test is the most basic one, but once you deploy the service to the test cluster, you can think of some integration tests with other services, or maybe end-to-end tests which involve multiple services. But this is outside the scope of this talk.
Alright. As takeaways, here I summarize the libraries which are useful when testing Go services, for the different layers, depending on which layer you are testing. I would mention specifically go-cmp, gofakeit, testcontainers-go, mockery, and of course httptest for HTTP interactions. Thank you very much for your attention, and happy testing.