Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi, I'm Manuel de la Peña.
I am a staff software engineer at Docker.
I'm the maintainer of Testcontainers for Go, which is the library
I'm going to present today.
Our Go applications usually consume external resources such as SQL or NoSQL databases,
message queues for sending asynchronous messages, LDAP or mail servers,
cloud services such as S3 or AWS Lambda, and, more recently,
vector databases for AI applications.
How do you make sure the application code interacting
with those services is correct while you are writing it?
Do you have the right tools to test your application as you develop it,
so that you have confidence in the core behaviors you're building?
One of the most adopted techniques is using a continuous integration server
such as Jenkins, GitLab runners, or GitHub Actions, in which you, or a platform
team, configure the servers to run all your dependencies in the CI pipelines.
At some point, you must make sure that those dependencies, databases, queues,
cloud emulators, et cetera, run exactly the same version as in production,
with the same configuration if possible.
Also, the CI workers must be big enough to start and run those
dependencies at the same time your test code is executing the tests.
My question is: is this kind of setup convenient for making progress?
Every time you need to update the dependencies, you need
to update the CI config.
And many times as a developer, you don't own the CI service.
At the same time, when your application grows and you depend on more services,
you have to start them all in CI, increasing the probability of
overloading the CI server with concurrent executions of different CI pipelines.
Finally, how can you reproduce a failing test on your local machine
in that scenario? Do you need to start the full-blown application
to run just one failing test?
Enter Testcontainers for Go. Testcontainers is a set of open source
libraries available for the most popular programming languages.
I am the official maintainer for the Go implementation and I'm
going to show you how to use the library to resolve the issues
I described before.
First, Testcontainers for Go gives you programmatic access to the
Docker engine, so you are now able to create, start, stop, and terminate
containers directly in your code.
Instead of configuring the CI, you'll be adding the containers to the place
they're used in your test code.
Imagine you're working on the persistence layer. Instead of spinning up the
entire application to run this layer's tests, you just add an instance of your
database where you really need it, so your tests interact with just a
container representing that database. Testcontainers maps the well-known port
of a given technology to a random host port: for Postgres, that's port 5432;
for MySQL, port 3306.
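As a rough illustration with the core API (a minimal sketch inside a test, assuming the usual testcontainers-go and wait imports; the Postgres image and credentials are just example values, not this demo's code):

```go
ctx := context.Background()

// Expose Postgres' well-known port 5432; Docker maps it to a random host port.
ctr, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
	ContainerRequest: testcontainers.ContainerRequest{
		Image:        "postgres:15-alpine",
		Env:          map[string]string{"POSTGRES_PASSWORD": "postgres"},
		ExposedPorts: []string{"5432/tcp"},
		WaitingFor:   wait.ForListeningPort("5432/tcp"),
	},
	Started: true,
})
if err != nil {
	t.Fatal(err)
}
t.Cleanup(func() { _ = ctr.Terminate(ctx) })

// Ask Testcontainers which random host port was mapped to 5432.
host, _ := ctr.Host(ctx)
port, _ := ctr.MappedPort(ctx, "5432/tcp")
t.Logf("Postgres is reachable at %s:%s", host, port.Port())
```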
This allows you to run multiple containers in parallel, with three major benefits:
consistency, test data isolation, and build speed. Consistency, thanks to
Docker containers: each dependency is isolated in a container, so you don't
need to mess up your system installing different technologies with multiple
configurations that apply only to your host. Instead of writing an onboarding
document with lots of instructions to set up the development or test
environment, you just install Docker and run the tests.
We call it the clone-and-run experience. As for data isolation,
each container has its own set of data, so depending on how you start them,
per test function, per test suite, or per test package, you have more or fewer
containers running on your system at a given time, but with no test data
pollution. And for build speed: thanks to containers, spinning up technologies
as Docker images is a piece of cake. Of course, depending on the size of the
image you must pull it first, and it can be gigabytes. But talk is cheap.
Let's show the code.
We are gonna present here a simple microservice app
for rating conference talks.
It provides an API to track the ratings of the talks in real time,
storing the results in a Postgres database, using a Redis cache,
and Redpanda as a broker for event streaming.
Finally, it will use an AWS Lambda function to calculate some statistics
about the ratings of a talk. We have a storage layer here backed by PostgreSQL.
If we go to the talks repo.go file, we'll see the repository
pattern holding the pgx connection.
We build a new repository from the connection string,
and with that we perform the CRUD operations:
Create with an INSERT statement, Exists with a SELECT, and so on.
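The exact code is on screen, but a minimal sketch of that pattern could look like this (the package name, the talks table, and the SQL are illustrative, not the demo's actual code), assuming the pgx v5 driver:

```go
package talks

import (
	"context"

	"github.com/jackc/pgx/v5"
)

// Repository holds the pgx connection used by the persistence layer.
type Repository struct {
	conn *pgx.Conn
}

// NewRepository builds the repository from a connection string.
func NewRepository(ctx context.Context, connStr string) (*Repository, error) {
	conn, err := pgx.Connect(ctx, connStr)
	if err != nil {
		return nil, err
	}
	return &Repository{conn: conn}, nil
}

// Create inserts a talk; Exists checks for it. Both are plain SQL.
func (r *Repository) Create(ctx context.Context, id, title string) error {
	_, err := r.conn.Exec(ctx, "INSERT INTO talks (id, title) VALUES ($1, $2)", id, title)
	return err
}

func (r *Repository) Exists(ctx context.Context, id string) (bool, error) {
	var n int
	err := r.conn.QueryRow(ctx, "SELECT count(*) FROM talks WHERE id = $1", id).Scan(&n)
	return n > 0, err
}
```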
For the Redis cache, we follow the same pattern.
We have a repository here, and it has a NewRepository constructor
receiving a connection string.
We create the client, ping it, and then implement operations like Add and
FindAll, converting the talk into a Redis key. Now, about the streaming:
We have a broker here, and in this broker we define a client using
the franz-go Kafka package.
We create a new stream from the connection string to the queue, and finally we
ping it to verify the queue is up and we're able to send ratings to the queue.
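A minimal sketch of that broker, assuming the franz-go client (the package name, the topic, and the SendRating method are illustrative):

```go
package streams

import (
	"context"

	"github.com/twmb/franz-go/pkg/kgo"
)

// Broker wraps a franz-go Kafka client pointed at the queue.
type Broker struct {
	client *kgo.Client
}

// NewBroker creates the client from the broker address and pings it
// to verify the queue is up before we try to send ratings to it.
func NewBroker(ctx context.Context, seedBroker string) (*Broker, error) {
	client, err := kgo.NewClient(kgo.SeedBrokers(seedBroker))
	if err != nil {
		return nil, err
	}
	if err := client.Ping(ctx); err != nil {
		return nil, err
	}
	return &Broker{client: client}, nil
}

// SendRating produces a rating event to the ratings topic.
func (b *Broker) SendRating(ctx context.Context, payload []byte) error {
	rec := &kgo.Record{Topic: "ratings", Value: payload}
	return b.client.ProduceSync(ctx, rec).FirstErr()
}
```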
To interact with the AWS Lambda, we have this Lambda client, which is
basically doing HTTP requests to our URL.
We create the new Lambda client, basically an HTTP client with the
Lambda URL as the client's URL.
We're gonna perform a POST request to the URL of the Lambda; this is
the expected response, and this is the payload that the Lambda will receive.
Finally, we return the response so the application can consume
the result from the AWS Lambda.
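As a sketch, such a client could be as simple as this (the GetStats name and the payload shape are illustrative assumptions):

```go
package lambdaclient

import (
	"bytes"
	"encoding/json"
	"net/http"
)

// Client performs HTTP requests against the Lambda's function URL.
type Client struct {
	url        string
	httpClient *http.Client
}

func NewClient(lambdaURL string) *Client {
	return &Client{url: lambdaURL, httpClient: &http.Client{}}
}

// GetStats POSTs the ratings payload to the Lambda and decodes the response.
func (c *Client) GetStats(ratings map[string]int) (map[string]any, error) {
	body, err := json.Marshal(map[string]any{"ratings": ratings})
	if err != nil {
		return nil, err
	}
	resp, err := c.httpClient.Post(c.url, "application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var out map[string]any
	err = json.NewDecoder(resp.Body).Decode(&out)
	return out, err
}
```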
How do we verify that those interactions work properly?
The simplest way is to create integration tests with Testcontainers for Go.
We're gonna see them in action. For the Postgres database
we have a new repository test here, and we'll consume the postgres
module of Testcontainers for Go.
We are gonna create a container with all the options we want to customize
the Postgres instance: the SQL script that will populate the database, the
name of the database, the credentials, and we are gonna wait for the
Postgres container until this log entry appears two times in the logs.
We are gonna make sure the container is disposed of at the end of the test,
and we're gonna get the connection string to the Postgres instance, which is
part of the Postgres module's API.
And finally we're gonna create our new repository, which is the persistence
layer that talks to the database.
If you recall, this repository receives the connection string
and performs the connection.
Once we have the talks repository here, we are able to perform any
CRUD operation, like creating or retrieving a talk, or checking that
it doesn't exist.
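Putting it together, such a test might look roughly like this (a hedged sketch assuming a recent testcontainers-go release where the postgres module exposes Run; the image tag, script path, and credentials are examples, and NewRepository is the hypothetical constructor sketched earlier):

```go
package talks

import (
	"context"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/modules/postgres"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestTalksRepository(t *testing.T) {
	ctx := context.Background()

	// Start a Postgres container, seed it with the SQL script, and wait
	// for the "ready to accept connections" log line to appear twice.
	ctr, err := postgres.Run(ctx, "postgres:15-alpine",
		postgres.WithInitScripts("testdata/dev-db.sql"), // illustrative path
		postgres.WithDatabase("talks-db"),
		postgres.WithUsername("postgres"),
		postgres.WithPassword("postgres"),
		testcontainers.WithWaitStrategy(
			wait.ForLog("database system is ready to accept connections").WithOccurrence(2),
		),
	)
	if err != nil {
		t.Fatal(err)
	}
	// Make sure the container is disposed of at the end of the test.
	t.Cleanup(func() { _ = ctr.Terminate(ctx) })

	// The module gives us the connection string for the running instance.
	connStr, err := ctr.ConnectionString(ctx, "sslmode=disable")
	if err != nil {
		t.Fatal(err)
	}

	// The persistence layer only needs that connection string.
	repo, err := NewRepository(ctx, connStr)
	if err != nil {
		t.Fatal(err)
	}
	// ... perform CRUD operations against a real Postgres here.
	_ = repo
}
```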
We're gonna execute this.
We run the test file and, looking at the logs here, we'll see that
Testcontainers runs the tests and creates the containers for us.
OK, it's using the postgres:15-alpine image,
and in just 10 seconds everything passes. Exactly the same.
For the Redis cache, we consume the Redis module
and create a container with the Run function.
We don't need to customize it here;
we just consume the plain module. Again, we clean up the container, we get
the connection string, and we create the repository from the connection string.
If you recall, it receives the connection string.
Once we have the repository ready, we are able to perform
operations like adding or getting the results.
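A minimal sketch of that test, assuming the redis module's Run function (the package and repository names are illustrative, standing in for the cache repository described above):

```go
package cache

import (
	"context"
	"testing"

	"github.com/testcontainers/testcontainers-go/modules/redis"
)

func TestRatingsRepository(t *testing.T) {
	ctx := context.Background()

	// Plain Redis module, no customization needed.
	ctr, err := redis.Run(ctx, "redis:7-alpine")
	if err != nil {
		t.Fatal(err)
	}
	t.Cleanup(func() { _ = ctr.Terminate(ctx) })

	// Get the connection string and build the cache repository from it.
	connStr, err := ctr.ConnectionString(ctx)
	if err != nil {
		t.Fatal(err)
	}
	repo, err := NewRepository(ctx, connStr)
	if err != nil {
		t.Fatal(err)
	}
	// ... add ratings and read them back against a real Redis here.
	_ = repo
}
```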
If we run the tests here for Redis, we're gonna see Testcontainers running the
Redis container and executing the tests. In just seven seconds,
we are confident that our production code works as expected,
because it talks to a real Redis instance.
And finally, for the queue, we have these tests. In our application
we are using Redpanda; for that reason, we are gonna use the Redpanda module
in Testcontainers for Go.
We call the Run function from the redpanda module with this Docker image, and
we configure the Redpanda container to auto-create the topics.
We calculate the seed broker URL for the Redpanda container,
and we create our new stream.
If you recall, this stream receives the connection string and creates the Kafka
client, and we are able to execute our tests against that Redpanda instance.
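Roughly, the shape of that test (a sketch assuming the redpanda module's Run and KafkaSeedBroker helpers; the image tag and the NewBroker/SendRating names come from the earlier broker sketch and are illustrative):

```go
package streams

import (
	"context"
	"testing"

	"github.com/testcontainers/testcontainers-go/modules/redpanda"
)

func TestBroker(t *testing.T) {
	ctx := context.Background()

	// Start Redpanda with topic auto-creation enabled.
	ctr, err := redpanda.Run(ctx, "docker.redpanda.com/redpandadata/redpanda:v23.3.3",
		redpanda.WithAutoCreateTopics(),
	)
	if err != nil {
		t.Fatal(err)
	}
	t.Cleanup(func() { _ = ctr.Terminate(ctx) })

	// The module calculates the seed broker URL for us.
	seedBroker, err := ctr.KafkaSeedBroker(ctx)
	if err != nil {
		t.Fatal(err)
	}

	// The stream only needs the broker address to create its Kafka client.
	broker, err := NewBroker(ctx, seedBroker)
	if err != nil {
		t.Fatal(err)
	}
	if err := broker.SendRating(ctx, []byte(`{"talk":"testcontainers","rating":5}`)); err != nil {
		t.Fatal(err)
	}
}
```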
We're gonna run them.
It will probably take longer because it has to pull the image, and finally, in
14 seconds, it's able to run the tests against a real Redpanda instance.
If we run the tests again, it takes less time
because the Redpanda image is already pulled: just five seconds.
Amazing.
Let's see the code for the AWS Lambda.
It is basically using the aws-lambda-go package to create the Lambda;
it will receive this payload and respond with this response.
Here it is handling the request and calculating the maths.
Very simple.
And finally, we have a main function, which is the entry point that starts the Lambda.
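A hedged sketch of such a Lambda (the field names and the histogram payload are illustrative assumptions, not the demo's exact code):

```go
package main

import (
	"context"
	"strconv"

	"github.com/aws/aws-lambda-go/lambda"
)

// RatingsEvent is the payload the Lambda receives: a histogram of ratings.
type RatingsEvent struct {
	Ratings map[string]int `json:"ratings"`
}

// StatsResponse is what the Lambda responds with.
type StatsResponse struct {
	Avg        float64 `json:"avg"`
	TotalCount int     `json:"totalCount"`
}

// handleStats does the maths: total count and average rating.
func handleStats(ctx context.Context, event RatingsEvent) (StatsResponse, error) {
	var sum, count int
	for rating, votes := range event.Ratings {
		value, err := strconv.Atoi(rating)
		if err != nil {
			return StatsResponse{}, err
		}
		sum += value * votes
		count += votes
	}
	resp := StatsResponse{TotalCount: count}
	if count > 0 {
		resp.Avg = float64(sum) / float64(count)
	}
	return resp, nil
}

// main is just the entry point: it hands the handler to the Lambda runtime.
func main() {
	lambda.Start(handleStats)
}
```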
Let's see how to test it.
Here we are running make to zip the Lambda, which is basically
packaging the Lambda into the artifact that AWS needs to create it.
It's very basic: we see here that we're building the Lambda
and creating a ZIP file for it, named function.zip.
And for the test, we are gonna use LocalStack.
LocalStack emulates AWS services, in our case specifically Lambda,
and we're gonna copy the ZIP file created by the Makefile
into the LocalStack container.
We are also gonna execute our custom code in the container after it starts;
in this case, we need to create the Lambda and get its URL.
So, for that reason, we are gonna get the ZIP file here,
create the Lambda with a create-function command,
and wait for the Lambda to be ready by executing these commands inside the container,
using the mechanism Testcontainers provides to execute commands in the running container.
Finally, we're gonna get the function URL config, and for that we parse the
response to extract the function URL of the Lambda.
This URL is the one needed to perform HTTP requests against
the Lambda provided by LocalStack.
So if we continue, back with the container, we get the mapped port
and replace it in the URL of the Lambda.
This is needed because Testcontainers gives you a random host port for the
well-known port of the running container,
so we need to calculate it and replace it in the URL.
We are gonna pass this payload in our test,
calculate the histogram here, and perform a POST request;
and this is the expected response from the Lambda.
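Condensed into a sketch, the test could look something like this (heavily hedged: the localstack module usage assumes a recent release exposing Run, and the awslocal commands, runtime flags, paths, and function name are illustrative assumptions; the real demo drives them from lifecycle hooks):

```go
package lambdas

import (
	"context"
	"encoding/json"
	"io"
	"net/http"
	"strings"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	tcexec "github.com/testcontainers/testcontainers-go/exec"
	"github.com/testcontainers/testcontainers-go/modules/localstack"
)

func TestLambdaStats(t *testing.T) {
	ctx := context.Background()

	// Start LocalStack with the zipped Lambda (built by the Makefile)
	// copied into the container before it starts.
	ctr, err := localstack.Run(ctx, "localstack/localstack:latest",
		testcontainers.CustomizeRequest(testcontainers.GenericContainerRequest{
			ContainerRequest: testcontainers.ContainerRequest{
				Files: []testcontainers.ContainerFile{{
					HostFilePath:      "testdata/function.zip", // produced by the Makefile
					ContainerFilePath: "/tmp/function.zip",
					FileMode:          0o644,
				}},
			},
		}),
	)
	if err != nil {
		t.Fatal(err)
	}
	t.Cleanup(func() { _ = ctr.Terminate(ctx) })

	// Create the Lambda and a function URL for it by exec'ing awslocal
	// inside the running container. The flags here are illustrative.
	for _, cmd := range [][]string{
		{"awslocal", "lambda", "create-function", "--function-name", "stats",
			"--runtime", "provided.al2", "--handler", "bootstrap",
			"--role", "arn:aws:iam::000000000000:role/lambda-role",
			"--zip-file", "fileb:///tmp/function.zip"},
		{"awslocal", "lambda", "create-function-url-config",
			"--function-name", "stats", "--auth-type", "NONE"},
	} {
		if code, _, err := ctr.Exec(ctx, cmd, tcexec.Multiplexed()); err != nil || code != 0 {
			t.Fatalf("%v failed: exit %d, err %v", cmd, code, err)
		}
	}

	// Read the function URL back and parse it out of the JSON response.
	_, reader, err := ctr.Exec(ctx, []string{"awslocal", "lambda",
		"get-function-url-config", "--function-name", "stats"}, tcexec.Multiplexed())
	if err != nil {
		t.Fatal(err)
	}
	out, _ := io.ReadAll(reader)
	var cfg struct {
		FunctionURL string `json:"FunctionUrl"`
	}
	if err := json.Unmarshal(out, &cfg); err != nil {
		t.Fatal(err)
	}

	// LocalStack reports its own port 4566 in the URL; replace it with the
	// random host port Testcontainers mapped it to.
	mappedPort, err := ctr.MappedPort(ctx, "4566/tcp")
	if err != nil {
		t.Fatal(err)
	}
	url := strings.Replace(cfg.FunctionURL, "4566", mappedPort.Port(), 1)

	// Finally, POST the ratings payload to the Lambda through that URL.
	resp, err := http.Post(url, "application/json",
		strings.NewReader(`{"ratings":{"5":10,"4":2}}`))
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("expected 200 from the Lambda, got %d", resp.StatusCode)
	}
}
```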
Pretty amazing, right?
We're gonna execute AWS Lambdas on our local machine.
So we're gonna run the test here,
and we are gonna see the LocalStack container running while the code
calculates the URL of the Lambda.
And finally, we'll execute the HTTP request against that URL.
And the tests pass.
We are gonna make it fail: for example, here, instead of
calculating the average properly, we are gonna introduce a bug.
And if we run the test, we are able to detect that bug. In just a few seconds
we are able to run AWS Lambdas locally
and verify that our code is working as expected.
Cool!
We detected it.
Total count is correct, but the average is incorrect.
Where can you find more information about Testcontainers for Go?
If you visit the testcontainers.com website, you can find all the modules that
we support for Testcontainers for Go.
Here you can find cloud emulators; relational, NoSQL, and vector databases;
message brokers; web emulators; et cetera.
Lots of them. All the modules are based on the ContainerRequest struct.
With this struct, you're able to configure your container the way you want:
defining the image, the exposed ports, the network this container belongs to,
the conditions to wait for the container to be ready, and more.
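A small fragment showing those fields (inside a test, with the usual imports; the image, port, network, and wait condition are example values):

```go
req := testcontainers.ContainerRequest{
	Image:        "nginx:alpine",                  // the image to run
	ExposedPorts: []string{"80/tcp"},              // well-known ports to map to random host ports
	Networks:     []string{"backend"},             // a Docker network this container belongs to (assumed to exist)
	WaitingFor:   wait.ForListeningPort("80/tcp"), // readiness condition
}

// Passing the request to GenericContainer with Started: true runs it.
ctr, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
	ContainerRequest: req,
	Started:          true,
})
```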
For example, one advanced capability is lifecycle hooks.
You can hook your own code directly into the container lifecycle:
before and after the container is created, before and after it starts,
when the container is ready (once its health checks pass),
before and after it stops, and before and after it terminates.
You inject your own code and it runs as part of your test.
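As a fragment of a ContainerRequest (the hook names follow recent testcontainers-go releases; the hook bodies are illustrative):

```go
req := testcontainers.ContainerRequest{
	Image: "redis:7-alpine",
	LifecycleHooks: []testcontainers.ContainerLifecycleHooks{{
		PreCreates: []testcontainers.ContainerRequestHook{
			func(ctx context.Context, req testcontainers.ContainerRequest) error {
				log.Println("about to create a container for", req.Image)
				return nil
			},
		},
		PostStarts: []testcontainers.ContainerHook{
			func(ctx context.Context, c testcontainers.Container) error {
				log.Println("container started")
				return nil
			},
		},
		PostReadies: []testcontainers.ContainerHook{
			func(ctx context.Context, c testcontainers.Container) error {
				// e.g. run a setup command inside the ready container
				_, _, err := c.Exec(ctx, []string{"redis-cli", "ping"})
				return err
			},
		},
	}},
}
```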
You can also get data into the container, for example by mounting volumes
or copying files into the container.
It's interesting that the Files attribute allows you to copy files
before the container is started:
they are copied after the container is created,
so when the container starts, the data, the state of the container, is already there.
This is really useful for containers that depend on a state.
For databases, for example, you can pass a SQL file here to load the database,
so the tests start from a well-known state.
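For example, a fragment seeding a Postgres container this way (the paths and credentials are illustrative):

```go
req := testcontainers.ContainerRequest{
	Image: "postgres:15-alpine",
	Env:   map[string]string{"POSTGRES_PASSWORD": "postgres"},
	Files: []testcontainers.ContainerFile{{
		// Copied after the container is created but before it starts,
		// so the seed data is already there when Postgres boots.
		HostFilePath:      "testdata/dev-db.sql",
		ContainerFilePath: "/docker-entrypoint-initdb.d/dev-db.sql",
		FileMode:          0o644,
	}},
}
```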
One interesting capability of Testcontainers is Ryuk.
You have probably seen the name Ryuk in the list of containers
running on your system.
This Ryuk container is responsible for removing the containers,
networks, volumes, and images created by Testcontainers for Go,
so you don't need to clean up the containers you create in your tests.
Regarding the wait strategies, which I mentioned before: there are situations
where you have to wait for the container to be in a well-known state.
Instead of adding time.Sleep to your CI pipeline or to your test
executions, you rely on wait strategies.
You can wait for a command to be executed in a container,
like this one here.
Or, for example, we can wait for a container to exit.
We can also wait for a file to be present in a container,
and the test will continue once that file exists.
You can wait for the container's health check to pass.
You can wait for a port to be listening in your container.
Or for an HTTP request, where you can configure the port you want to wait for,
the path, basic-auth credentials, or even a response code expected in the response;
you can also configure response headers and wait for them
to be present in the response.
Another very handy wait condition is waiting for a log entry to be
present in the container's logs.
You can configure it to match a plain string or a regular expression,
using AsRegexp.
You can also combine multiple strategies with wait.ForAll, and all of
them will be applied at the same time. And you can wait for a SQL
query to be executed on a database.
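A fragment combining a few of them (the image, port, log pattern, and health path are example values):

```go
req := testcontainers.ContainerRequest{
	Image:        "my-service:latest", // illustrative image
	ExposedPorts: []string{"8080/tcp"},
	WaitingFor: wait.ForAll(
		wait.ForListeningPort("8080/tcp"),                 // the port is listening
		wait.ForLog(`server started in \d+ms`).AsRegexp(), // a log entry, matched as a regexp
		wait.ForHTTP("/health").WithPort("8080/tcp").      // an HTTP readiness endpoint
			WithStatusCodeMatcher(func(status int) bool { return status == http.StatusOK }),
		wait.ForExec([]string{"ls", "/tmp/ready"}), // a command executed inside the container
	),
}
```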
It's also possible to use Testcontainers for Go to build Docker images
from a Dockerfile.
If you define this struct inside the ContainerRequest struct, you will
be able to build this echo Dockerfile at this path, with this repository
name and tag.
You can also pass arguments to the build, and at the end the container
will be running from the freshly built image.
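A fragment of what that looks like (the context path, repository, tag, and build argument are illustrative):

```go
version := "1.0.0"

req := testcontainers.ContainerRequest{
	FromDockerfile: testcontainers.FromDockerfile{
		Context:    "./testdata/echo", // directory containing the Dockerfile
		Dockerfile: "Dockerfile",
		Repo:       "local/echo", // repository name for the built image
		Tag:        "test",       // tag for the built image
		BuildArgs:  map[string]*string{"VERSION": &version}, // arguments passed to the build
	},
	ExposedPorts: []string{"8080/tcp"},
}

// Passing this request to GenericContainer with Started: true builds the
// image first and then starts a container from it.
```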
That's all I have.
I'm sharing with you the QR codes to a Go workshop that demonstrates
the power of Testcontainers for Go, and also to the website with all the
information about the project.
I hope you enjoyed my talk.
Thank you very much.