Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone.
My name is Alex.
I'm Head of Ops and DevOps at Zencoder.
I also double as a developer and, in general, fill what I like to call the "ask me anything" role.
It's a startup, so you have to wear many hats.
Today I will be talking with you about AI coding agents: how you can code them, how you can use them, and how you can leverage them to make your everyday life easier and be more productive.
Now, recently you might have come across headlines like this one, where Mark Zuckerberg mentioned in an interview that AI will soon replace developers. In this particular case, he even went as far as saying that mid-level engineers will be replaced pretty soon.
Now, there are two main sources for those headlines. First of all, of course, new models. New AI models, new LLMs, keep being released. For example, here is the recent release of Gemini 2.5 Pro from Google, and of course they show that their new model beats all the previous models on the benchmarks, which is always the case, right?
Now, for coding agents in particular, we are mostly interested in the three code-related benchmarks, which I highlighted here in red.
Now, the problem with benchmarks is that, while it is good to see that models are becoming better on them and they are a good way to compare models with each other, those benchmarks are not always transferable one-to-one to the real world.
What I mean by that is, for example, if we look at Aider Polyglot, the main idea behind this dataset is that the LLM is presented with a task and essentially needs to create a function, a class, or a method that fulfills that task. So essentially it is a single-file modification.
However, the reality of software development is not that simple, right? Usually you need to modify multiple files inside a repository, or sometimes you might even need to modify files across multiple repositories at the same time. So basically, even though there are improvements in the quality of LLMs, they are not always transferable one-to-one to the real world, and you might not see as big an improvement as you would see on the benchmarks.
So that's one source of those headlines. The second one is AI agents, which is the main topic of today's talk.
We will first cover what an AI agent is. Then we'll go through a live coding session where we'll see how you can create a coding agent on your own, and how agents can interact with each other for multi-agent collaboration. And then we'll briefly talk about what is next for developers, what all those new improvements and changes mean for the future of software development.
So yeah, by the end of the talk we will potentially build an AI which might replace you, or maybe not. We'll see.
So first things first, let's define what an AI agent is. An AI agent can be defined as software that performs tasks on behalf of a user. And you might say that any software performs tasks on behalf of the user, right? Especially if we're talking about the current generation of coding assistants: they perform code generation on your behalf, they perform documentation generation on your behalf, and you can essentially just tab, tab, tab, get a thousand lines of code in a couple of minutes, and then spend some time debugging that code.
Now, the main difference between AI agents and classic software is that AI agents follow several design patterns, which you can see here on the slide.
First of all, they have what's called tool use: they have access to a list of tools at their disposal. Those tools could be pretty much anything. It could be doing a web search, accessing the local file system, calling RESTful APIs, and so on. Basically, any tool the LLM can use to gather information or perform actions.
Then, of course, AI agents can also do planning. They don't just blindly perform the task or the code generation; they can first come up with a plan for how to tackle the specific task at hand, and then follow that plan.
They can also reflect on their own work, on their own planning step, on their output, or they can reflect on the work of other agents. And this is where we come to the fourth design pattern, which is multi-agent collaboration. You can have multiple agents working together towards the same goal. They could work together as colleagues, where, for example, one agent performs half of the task and the other agent performs the second half. Or they can critique each other: you could have one agent which critiques the work of a second agent, and then the second agent iterates on that critique, and so on.
Basically, those patterns are what differentiates AI agents from classical software.
You can think about AI agents like a Roomba vacuum cleaner. The software itself on the vacuum cleaner is the AI agent; however, to perform the task of cleaning the house it needs some things on top of that, so software alone is not enough. The vacuum cleaners also have tools: brushes, wheels to move around, and so on. They can also do planning, and for some reason ChatGPT thinks that robot vacuum cleaners move in this weirdly shaped pattern, but that's AI being AI. There is no multi-agent collaboration yet, unless you count this one for the vacuum cleaners, but maybe in the future they will be able to talk with each other and with other home appliances.
Now, why is this important? To answer that question, let's first talk about what we as developers spend our time on during work hours. I will give you a few seconds to think about this.
Spoiler alert: in reality, it's not drinking coffee. This study was actually done about 10 years ago; however, I don't think much has changed since then. In reality, developers spend about 70% of their time on understanding: understanding the code base, understanding the requirements, understanding the tasks, and so on. So most of the time developers are not writing code; they are understanding.
Now, if we juxtapose this with the agentic design patterns we just discussed, we will see that, for example, for tool use we of course have editing: we need tools to edit the code, we need tools when we are outside the IDE googling something, and of course for UI interactions we also need tools.
Then, for planning, we need to understand the task at hand and the requirements, because otherwise it's hard to plan the work, right? There is also reflection: we don't just blindly follow the requirements or the task, I hope; we can think about different ways to solve the task, different ways to tackle the problem, and so on.
And of course we don't usually work in a vacuum. We have colleagues with whom we can collaborate, and this is where multi-agent collaboration comes in. For all of that you need a good understanding, you probably need to navigate the code base, and you need to spend some time outside the IDE, in Slack or wherever.
So basically all of these design patterns can be directly juxtaposed with what developers do in their free, or, sorry, not free, but their work time.
Now, with that, let me switch the slides and let's finally do some coding. All of the code I will be showing today is available in this repository over here; you can use the QR code or the link below.
The way I structured the coding session is that I will go through several stages, and hopefully not those stages, but basically several stages. We'll start with a simple baseline where we won't have any agentic patterns at all, and then we will slowly build on top of the previous stages to come to the final step: a multi-agent system that will work together to fulfill the task on our behalf.
So let me switch to the Jupyter notebook I have here. Let's see. Alright,
so let's start with our baseline. If you have used AI in any capacity, any LLM like OpenAI's models or Claude, you have probably experienced something similar to what we will see right now in the baseline. And again, if you are not familiar with Python, it's not a problem; all of the code is pretty simple and pretty easy to understand.
In this baseline I will be using the most recent model from OpenAI, namely GPT-4.1, and over here I'm just defining the code to query the model, so this function will send the input from us, from the user, to the LLM, and we will get the response from GPT-4.1 back.
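For reference, a minimal sketch of what that helper can look like, assuming the official `openai` Python SDK and an `OPENAI_API_KEY` in the environment; the notebook's actual code may differ:

```python
# Minimal sketch of the baseline helper, assuming the official `openai` SDK
# and an OPENAI_API_KEY environment variable; the notebook's code may differ.
from openai import OpenAI

client = OpenAI()

def query_llm(user_input: str, model: str = "gpt-4.1") -> str:
    """Send a single user message to the model and return its text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": user_input}],
    )
    return response.choices[0].message.content

print(query_llm("What is the response time for google.com?"))
```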
Now, the problem with LLMs is that they live somewhere in the cloud and they don't have access to the real world. What I mean by that is, for example, they can't give us the response time for Google, or for any website for that matter. It does try to be helpful, and it does give us some command to run to get the answer ourselves, but unfortunately, on its own, the LLM, in this case GPT-4.1, isn't able to help.
The same goes for access to the local machine: it doesn't have access to my local computer. Again, it does try to help me by providing some bash commands to list the packages, but unfortunately, even for those simple tasks, it can't really help me. And that's especially true for more sophisticated tasks that require access to the local machine, for example running a Docker image; the LLM would fail here and wouldn't be very helpful.
The same goes for new and updated knowledge. In this case GPT-4.1 has more up-to-date knowledge; if you try the same query with an older model like GPT-4o, it wouldn't know anything at all about Python 3.14, because the knowledge cutoff for that model was, I think, in June 2024. GPT-4.1 does have some knowledge about Python 3.14, and it does provide some links for us as well, but the knowledge is still not up to date. It is May 2025 right now, and the gap between the LLM's knowledge and the real world is essentially a year, and a lot has happened since then.
So basically, even though it does try to be helpful by giving us commands and answers based on its knowledge, it's not really that helpful in real-world situations.
Now let's see how we can improve on that by introducing tool use to the LLM. For that, let me switch to the basic agent over here.
Alright, so I'm again using the same model, GPT-4.1, for that. What we will be using on top of it is what is called the ReAct framework. ReAct stands for Reason and Act, and basically we will be instructing the LLM to go through a loop of multiple stages, namely Thought, Action, PAUSE, Observation, and Answer.
In the Thought step, the LLM needs to think about what action it needs to perform to get the information to give us an answer. Then it needs to decide on the action itself: as a response to our request, it will send the actual action it wants to perform, potentially along with some inputs. Then there is a PAUSE step, where the action is run on behalf of the LLM. During the Observation step, the output from that action is sent back to the LLM, and based on this observation and the initial input from the user, the LLM hopefully comes up with an answer. If that information is not enough, it goes through a second loop, a third loop, and so on, until it gets to the answer or until it runs out of loop iterations.
And this can pretty easily be implemented through this system prompt over here, where I'm basically instructing the LLM to follow the ReAct loop I just described: you run in a loop of Thought, Action, PAUSE, Observation; use Thought to describe your reasoning, use Action to run one of the available actions.
Then we also provide a few actions for the LLM to choose from. In this case those are ping, which performs the ping command on behalf of the LLM; bash, which allows the LLM to execute any bash command; and a pretty basic web_search, which allows the LLM to, well, do a web search.
We also have an example session over here for the LLM to use as a reference. For example, the question from the user could be "how many islands make up Madeira?", which is a nice small Easter egg here. The Thought from the LLM should then hopefully be that it needs to do a web search for Madeira. It would reply with an Action, namely web_search, and the input for that action, which in this case could be just "Madeira". Then, during the PAUSE, the actual web search happens, and based on the Observation from that web search the LLM should hopefully come up with an answer, in this case four islands.
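For reference, here is a rough sketch of what such a ReAct system prompt can look like, paraphrased from the well-known ReAct prompt pattern; the exact wording in the notebook may differ:

```python
# Rough, paraphrased sketch of the ReAct system prompt used in this stage;
# the notebook's exact wording and action names may differ.
SYSTEM_PROMPT = """
You run in a loop of Thought, Action, PAUSE, Observation.
At the end of the loop you output an Answer.

Use Thought to describe your reasoning about the question you have been asked.
Use Action to run one of the actions available to you, then return PAUSE.
Observation will be the result of running that action.

Your available actions are:
ping: e.g. ping: google.com - measures the response time of a host
bash: e.g. bash: pip list - executes a bash command and returns its output
web_search: e.g. web_search: Madeira - performs a web search and returns results

Example session:
Question: How many islands make up Madeira?
Thought: I should look up Madeira with a web search.
Action: web_search: Madeira
PAUSE

(you will then be called again with)
Observation: Madeira is an archipelago ...
Answer: The Madeira archipelago is made up of four islands.
""".strip()
```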
We are using the same code to query the LLM; we just send the input from the user along with the system prompt we've seen above. Then let's also define our three actions, namely ping, bash, and web_search.
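A minimal, hedged sketch of those three actions, with ping and bash shelling out via `subprocess` and web_search as a naive page fetch; the notebook may well use a proper search API instead:

```python
# Hedged sketch of the three actions; the real notebook may implement
# web_search differently (e.g. via a dedicated search API).
import subprocess
import urllib.parse
import urllib.request

def ping(host: str) -> str:
    """Ping a host a few times and return the raw output."""
    return subprocess.run(["ping", "-c", "3", host],
                          capture_output=True, text=True).stdout

def bash(command: str) -> str:
    """Execute an arbitrary bash command (unsafe outside of a demo!)."""
    return subprocess.run(command, shell=True,
                          capture_output=True, text=True).stdout

def web_search(query: str) -> str:
    """Very naive web search: fetch a search results page as text."""
    url = "https://html.duckduckgo.com/html/?q=" + urllib.parse.quote(query)
    req = urllib.request.Request(url, headers={"User-Agent": "demo-agent"})
    with urllib.request.urlopen(req) as resp:  # hypothetical backend choice
        return resp.read().decode("utf-8", errors="ignore")[:5000]

known_actions = {"ping": ping, "bash": bash, "web_search": web_search}
```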
Let's test them out. The ping for google.com comes back at around 200 milliseconds, I do indeed have Python 3.12.6 installed locally, and the web search for Python 3.14 gave us a lot of information from the website. So this seems to be working fine. Now let's compile these into a dictionary just to make them more easily accessible.
The main implementation of the ReAct framework happens in this function over here, where we go through the ReAct loop. We start by sending the initial query to the model. Then, based on the output from the model, we check whether there is any action the model decided to perform, which is signified by the "Action:" prefix on the line. If there is an action, we parse it, perform it, and feed the result of running that action back to the LLM, which hopefully then gives us the answer. And of course, if that information is not enough, we go through a second loop, a third loop, and so on.
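Putting it together, here is a hedged sketch of that loop, reusing the `client`, `SYSTEM_PROMPT`, and `known_actions` names from the sketches above; the parsing details in the notebook may differ:

```python
# Hedged sketch of the ReAct loop; expects actions as "Action: name: input"
# and reuses client, SYSTEM_PROMPT and known_actions from the sketches above.
import re

ACTION_RE = re.compile(r"^Action: (\w+): (.*)$")

def agent_loop(question: str, max_turns: int = 5) -> str:
    messages = [{"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": f"Question: {question}"}]
    text = ""
    for _ in range(max_turns):
        reply = client.chat.completions.create(model="gpt-4.1", messages=messages)
        text = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": text})
        matches = [m for m in (ACTION_RE.match(l) for l in text.splitlines()) if m]
        if not matches:                      # no action requested -> final answer
            return text
        name, arg = matches[0].groups()
        observation = known_actions[name](arg.strip())
        messages.append({"role": "user", "content": f"Observation: {observation}"})
    return text
```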
Now let's try pretty much the same questions we had before in the baseline, but now with this basic agent behavior. Let's first try to get the response time for Google. In this case the LLM decided to perform the action ping for google.com, which is the correct action. It got the output from the action, and then, as an answer to our initial question, it gave us the rounded number of milliseconds for the ping.
The same works for the Python packages: it decided to perform the bash action with the input "pip list", which gave the LLM the bunch of packages I have installed locally, and based on that the answer from the LLM was a sort of summary of those Python packages, showing a representative sample apparently. Alright.
Now let's see what happens with our Python 3.14 question. The action the LLM decided to perform is a web search, which makes sense; the query is pretty much our actual question about what's new in Python 3.14. It got to the official website, python.org, with a bunch of information, and somewhere below, let me scroll all the way down, it gives us the answer based on the observation from that action. Namely, over here it summarized some new features and changes in Python 3.14, a few PEPs, a few new features, some changes.
So with just a single system prompt like this, we are already able to significantly extend the capabilities of the LLM beyond it just being a sort of knowledge base. It now has access to the web, it has access to our local machine, it has access to bash commands, so it can already be much more helpful for us in everyday life and in our coding journey.
But of course it wouldn't be very convenient to modify this prompt every time we need to add a new action, and there could be a lot of different actions depending on what tools you want the LLM to be able to use. For example, if you think about Git: Git itself has a bunch of different commands like git commit, git checkout, git branch, and so on, so you would potentially need to define each single Git command as a separate action, and that would probably take quite a long time.
Fortunately for us, there are ways to abstract all these tool definitions away from us, from the end user, and one of those ways is MCP. So let me go back to the slides real quick. Let's see.
So MCP stands for Model Context Protocol, and it was introduced pretty recently, at the end of November last year, 2024, by Anthropic. Essentially, MCP is a protocol based on JSON-RPC which aims to facilitate tool use and the development of those tools. It consists of two main counterparts, the MCP server and the MCP client (running inside an MCP host), and the MCP client could be our LLM application, it could be some IDE, it could be any AI-enabled tool, basically.
The MCP server performs essentially two main functions. First, of course, it interacts with the actual tool and performs the actual actions. It could access some local resources; it could be an MCP server for the file system, so it could manipulate files, for example. It could also be an MCP server that performs API requests, so it could provide tools to access Jira, for example, or tools to access a flight radar. Basically, anything that can be accessed through code can be converted into an MCP server.
Then, through the MCP protocol, the LLM can get the list of tools, their names, their descriptions, and how to use them. And again through the MCP protocol, it sends the request to perform an action to the MCP server, and it gets back a response from the MCP server with the result of running that action, or the result of running that tool.
And it is pretty straightforward to create your own MCP servers. Let me switch back to our notebook over here, and let's now go to the agent-with-MCP directory.
So first, let's see how we can implement our own MCP server. You can do that in multiple languages; I think currently there are SDKs for Python, TypeScript, Java, and Kotlin, if I'm not mistaken, and I'm sure more languages will follow.
Over here we have the MCP tool, or MCP server, defined in Python. Anthropic provides a nice FastMCP class in the MCP framework, which basically just requires us to define the server itself, and then through this decorator we can convert any function into an MCP tool. In this case, that is a bash tool which will execute any bash command for us. Of course, ideally you don't want to blindly execute any bash command, but for the sake of the presentation we will allow that. Basically, those few lines are the only thing you need in Python to create a bash... oh, sorry, to create an MCP server for yourself.
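As a rough sketch, assuming the official `mcp` Python SDK's FastMCP helper (the file and tool names here are illustrative, not necessarily those from the repository):

```python
# bash_tool.py - hedged sketch of a bash MCP server using the official `mcp`
# Python SDK's FastMCP helper; the notebook's file/tool names may differ.
import subprocess
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("bash")

@mcp.tool()
def bash(command: str) -> str:
    """Execute a bash command and return its combined output (demo only, unsafe)."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```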
And as I said, anything that you can code can essentially be converted into an MCP server. Along with that, we also need a sort of description of how to run that MCP server, and over here I have a small JSON file which defines the MCP servers. In this case the only MCP server is called bash, and it is run as a python command with the args being just the name of the file, because it's essentially a Python script. So those are the only two things needed for it to be executed. But we'll see later that there are different ways, beyond just a Python script, to run MCP servers.
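The config file follows the standard `mcpServers` layout and can look roughly like this (the file name here is a placeholder):

```json
{
  "mcpServers": {
    "bash": {
      "command": "python",
      "args": ["bash_tool.py"]
    }
  }
}
```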
So let's now see how we can use those MCP servers together with an LLM. For that, first of all, we will switch to Anthropic; in this case it's Claude 3.7, I believe. Yep, it's Claude 3.7. The reason for this switch is that initially Anthropic's Claude was better at using tools. However, you might have heard that by now even OpenAI has adopted the MCP protocol from their rivals, basically, so the recent OpenAI models are also pretty good at using tools defined as MCP servers. But still, here we'll be using Anthropic's Claude 3.7.
Now, what we essentially need in order to use the MCP tools is an MCP client. The main methods of this class are, first of all, connecting to the servers to get the list of tools available for us, well, for the LLM, to use. This function is essentially responsible for that: we just connect to each server in the server config in a loop, and then get the list of tools from each server.
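A hedged sketch of that connection step with the official `mcp` Python SDK; helper and variable names like `server_config` and `collect_tools` are ours, not necessarily the notebook's:

```python
# Hedged sketch of connecting to stdio MCP servers and listing their tools,
# assuming the official `mcp` Python SDK; helper names are illustrative.
import asyncio
from contextlib import AsyncExitStack
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def collect_tools(server_config: dict) -> list[dict]:
    tools = []
    async with AsyncExitStack() as stack:
        for name, cfg in server_config["mcpServers"].items():
            params = StdioServerParameters(command=cfg["command"], args=cfg["args"])
            read, write = await stack.enter_async_context(stdio_client(params))
            session = await stack.enter_async_context(ClientSession(read, write))
            await session.initialize()
            listing = await session.list_tools()
            for tool in listing.tools:
                tools.append({"name": tool.name,
                              "description": tool.description,
                              "input_schema": tool.inputSchema})
    return tools
```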
Then we also need to send the request to the LLM and act according to its response. Basically, we attach the list of tools we got from the MCP servers, in the format of tool name, tool description, and tool input schema, along with our user input message, and those are sent directly to Claude 3.7 in this case. Then Claude, the LLM, decides whether it wants to just respond with plain text or whether it wants to perform an action, a tool use. And this is how we differentiate between those: as part of the response there is a content type, which can be either text or tool_use, and in the case of tool_use we see what kind of tool it wants to use, so the tool name and what input arguments we need to pass to that tool. Then we perform the actual tool call over here, send the result of that tool call back to the LLM, and hopefully we get an answer to our question.
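For reference, a hedged sketch of one such tool-use round trip with the Anthropic SDK; the `call_tool` callback stands in for the MCP session call, and the model alias is an assumption:

```python
# Hedged sketch of a tool-use round trip with the Anthropic SDK; `call_tool`
# is a hypothetical callback that runs the tool via the MCP session.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-7-sonnet-latest"  # model alias assumed for this sketch

def ask_with_tools(user_input: str, tools: list[dict], call_tool) -> str:
    messages = [{"role": "user", "content": user_input}]
    response = client.messages.create(model=MODEL, max_tokens=1024,
                                      tools=tools, messages=messages)
    while response.stop_reason == "tool_use":
        tool_use = next(b for b in response.content if b.type == "tool_use")
        result = call_tool(tool_use.name, tool_use.input)   # run via MCP server
        messages.append({"role": "assistant", "content": response.content})
        messages.append({"role": "user", "content": [{
            "type": "tool_result",
            "tool_use_id": tool_use.id,
            "content": str(result),
        }]})
        response = client.messages.create(model=MODEL, max_tokens=1024,
                                          tools=tools, messages=messages)
    return "".join(b.text for b in response.content if b.type == "text")
```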
And here I have a nice small chat loop, where we type the query into a text field and proceed with the chat until we type the quit keyword.
So let's start with just our bash tool over here. We got connected to our server, which was run as the Python bash tool script, and we have one single tool called bash. Let's ask the LLM to do something, say, list all files, and see what it comes up with. In this case it decided to call the bash tool with the command "ls -la", which makes sense. It got the list of files and basically summarized them all. Good.
Now let's see how we can extend this to multiple servers, to multiple tools. For that we will use the full JSON config over here, which defines four MCP servers. First of all, we have a filesystem MCP server, which is essentially an npm package that runs with a few arguments. We also have web search, realized through the Brave web search server, and that is run as a Docker image, so here we have an env file for the tokens. We also have a fetch server, which allows the LLM to fetch any URL from the web and is implemented as a Python package. And then of course we have our bash MCP server, which we just defined a couple of seconds ago. So you can see MCP servers come in all flavors, languages, and ways to run them.
to perform some actions, right?
First of all, let's see what's new in Python.
Three 14.
So let's see if it'll need to perform a web search to give us the answer.
All right, so it decided to call the web search tool with the
following query, Python three 14 new features through release notes.
The unlike OpenAI GT 4.1, it didn't just copy the questions from us and then based
on the response from the web search tool, it gave us the summary of new features.
Let's also try to ask cla to create a file called Hello the text with the content.
Hello world.
Alright.
Let's see.
So hopefully it'll be able to access our local file system.
So in this case, we decided to use the tool right file
with the following arguments.
And let's actually see, yeah, we just got the new file.
We on our local file system called as Requested, hello to 60 with the content.
Hello World.
Great.
So essentially, with just a few lines of code, basically the JSON config of MCP servers, we again extended the capabilities of the LLM beyond just answering questions based on what it has seen in its training data; it is now also able to perform a lot of different actions, and those MCP servers can come in different flavors. There are actually a lot of places where you can find MCP servers online: there are MCP registries which contain hundreds of different MCP servers for tools like Grafana, Jira, and so on. So you can give the LLM access to any tools you use in your everyday working life and, as we will see later, ask it to, for example, solve a ticket.
Now let me quickly jump back to the slides for a few more seconds.
Alright, so we've seen MCP. Now let's see how we can make agents work with each other, because before this, all the tools the LLM had at its disposal were essentially just software: API calls, file system manipulations, and so on. However, you can imagine those tools being agents of their own, and then one agent could call another agent to delegate a task, to ask a subordinate agent to perform some small subtask, and so on.
In our case we will be building a system which contains four agents, and it will be a hierarchical system where we have a supervisor agent, which you can think of as a product manager, for example. This agent will be able to call three other, subordinate agents: a virtual frontend developer, a virtual backend developer, and a virtual DevOps engineer. They will also be able to interact with us, the client, to ask for clarifications if needed.
But generally, all of the interactions between those subordinate agents happen through the supervisor agent; there is no direct connection from one subordinate agent to another. Of course, you could also implement a system where all agents can talk with each other directly, not through the supervisor; it really depends on what you want the system to do.
Now let me switch back again to our Jupyter notebook. Alright, let's go to the final notebook of the day, which is the multi-agent notebook.
Here I will be adding one more level of abstraction, namely LangChain and LangGraph. The reason for that is that, of course, you can implement all of the cross-agent interactions on your own; it is essentially just structured inputs and outputs and some state manipulation on top of that. But why spend time on that when you can use frameworks which do it for you?
So again, we will use the same Claude 3.7 here and give it three, or, sorry, four tools, pretty much the same tools we've seen before: a shell tool for the bash commands, access to Brave search, again as we've seen before, access to file system manipulation, and one more new tool which I called "human", which basically allows the LLM to ask questions back to the human.
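As a hedged illustration, those tools can be sketched as plain LangChain `@tool` functions like below; the notebook most likely wires up the built-in community tools (shell, Brave search, filesystem) instead, so treat these as stand-ins:

```python
# Hedged sketch of the tool set as plain @tool functions; the actual notebook
# likely uses LangChain community tools (ShellTool, Brave search, filesystem).
import subprocess
from langchain_core.tools import tool

@tool
def shell(command: str) -> str:
    """Run a bash command and return its output."""
    return subprocess.run(command, shell=True, capture_output=True, text=True).stdout

@tool
def write_file(path: str, content: str) -> str:
    """Write content to a local file and confirm."""
    with open(path, "w") as f:
        f.write(content)
    return f"wrote {path}"

@tool
def human(question: str) -> str:
    """Ask the human client a clarifying question and return their answer."""
    return input(f"[agent asks] {question}\n> ")

tools = [shell, write_file, human]  # plus a web-search tool in the real setup
```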
Then, as we discussed on the slide, we'll create four agents. First, we'll create the supervisor agent, which will have access to three team members, namely the frontend developer, the backend developer, and the DevOps engineer, and the system prompt is pretty straightforward: we are telling the agent that it is a supervisor overseeing the following three workers and that it basically needs to decide which agent is responsible for acting next. Once the supervisor agent is happy with the result, it should respond with the keyword FINISH.
We will give access to all the available tools to these agents as well. However, in real life you would probably want agents to have access to different lists of tools. For example, you might want the virtual DevOps engineer to be able to access tools like Grafana or Prometheus or Datadog or whatever, but at the same time you probably don't need the frontend agent to be able to access those; on the other hand, it needs access to other tools like Figma, for example.
So the only thing we basically need to create our supervisor agent is this piece of code over here, where we define what sort of handoff nodes are available for the supervisor node to call, and those are, again, our team members, plus of course an end node to finish the task.
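A hedged sketch of such a supervisor node, loosely following the classic LangGraph supervisor pattern; the prompt wording and model alias are assumptions, not the notebook's exact code:

```python
# Hedged sketch of the supervisor router in LangGraph style; prompt wording
# and the model alias are assumptions based on the talk.
from typing import Literal
from pydantic import BaseModel
from langchain_anthropic import ChatAnthropic
from langgraph.graph import MessagesState

llm = ChatAnthropic(model="claude-3-7-sonnet-latest")
WORKERS = ["frontend_developer", "backend_developer", "devops_engineer"]

class State(MessagesState):
    next: str  # which node the supervisor picked

class Route(BaseModel):
    """The supervisor's routing decision."""
    next: Literal["frontend_developer", "backend_developer",
                  "devops_engineer", "FINISH"]

SUPERVISOR_PROMPT = (
    "You are a supervisor overseeing these workers: " + ", ".join(WORKERS) + ". "
    "Given the conversation so far, decide which worker should act next. "
    "When the task is fully done, respond with FINISH."
)

def supervisor_node(state: State) -> dict:
    messages = [{"role": "system", "content": SUPERVISOR_PROMPT}] + state["messages"]
    decision = llm.with_structured_output(Route).invoke(messages)
    return {"next": decision.next}
```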
With that, let's also define our three agents, namely the frontend agent, the backend agent, and the DevOps agent. As you can see, we are using pretty much the same ReAct framework as we've seen before, and again we are passing all of the available tools to those agents. The prompts are pretty simple, although in the real world you would probably want to be more creative or more specific with them.
In our case we are just defining the frontend agent as a frontend developer; this agent can ask for help from the backend developer or the DevOps engineer, and it can also ask for clarifications from the human client. The same goes for the backend agent: it can ask for help from the frontend developer or the DevOps engineer, and it can ask for clarifications from the human client. And the same goes for the DevOps agent: it can ask for help from the frontend or backend developers and ask for clarifications from the human.
All of those agents, or nodes in LangGraph terms, are defined in a pretty straightforward manner through this call over here, and that's pretty much all we need to do to define those agents.
Now, with that, let's build the graph and visualize it. Indeed, as we've seen before, we have the supervisor node, or supervisor agent, which can talk directly with our frontend, backend, and DevOps engineers, but the subordinate agents cannot talk to each other directly; they can only talk through the supervisor.
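Here is a hedged sketch of how the worker nodes and the graph wiring can look, building on the supervisor sketch above; the `prompt` argument of `create_react_agent` may be named differently in older langgraph versions:

```python
# Hedged sketch of the worker nodes and graph wiring with LangGraph,
# building on the State / supervisor_node / tools sketches above.
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import create_react_agent

def make_worker(role_prompt: str):
    agent = create_react_agent(llm, tools, prompt=role_prompt)
    def node(state: State) -> dict:
        result = agent.invoke({"messages": state["messages"]})
        return {"messages": result["messages"][-1:]}  # report back the last message
    return node

builder = StateGraph(State)
builder.add_node("supervisor", supervisor_node)
for worker in WORKERS:
    role = f"You are a {worker.replace('_', ' ')} on a small team."
    builder.add_node(worker, make_worker(role))
    builder.add_edge(worker, "supervisor")  # every worker reports back to the supervisor
builder.add_edge(START, "supervisor")
builder.add_conditional_edges(
    "supervisor",
    lambda state: state["next"],            # route on the supervisor's decision
    {**{w: w for w in WORKERS}, "FINISH": END},
)
graph = builder.compile()
```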
Over here I also have a small helper function to color the output we'll see in a few seconds, so I won't go over that piece of code.
And then the task for our multi-agent system is to create a website for the conference. It needs to have three pages, namely an intro page, a page for people to submit their talks, and a page with the submitted talks. I want the frontend in React, I want a FastAPI backend, and the submissions should be stored in a Postgres database. And, as always, there should be Docker and Docker Compose, so I also ask for that. We also mention that the agents can ask the human client for any clarifications. This part over here is basically all that is needed to initiate the execution of our multi-agent system.
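A hedged sketch of that kickoff, with the task text paraphrased from the talk:

```python
# Hedged sketch of kicking off the multi-agent run and streaming its steps;
# the task text below is a paraphrase of the one used in the talk.
task = (
    "Create a website for the conference with three pages: an intro page, "
    "a page to submit talks, and a page listing submitted talks. "
    "Frontend in React, FastAPI backend, submissions stored in Postgres, "
    "plus Docker and docker-compose. Ask the human client if anything is unclear."
)

for step in graph.stream({"messages": [("user", task)]},
                         {"recursion_limit": 100},
                         stream_mode="updates"):
    for node_name, update in step.items():
        print(f"--- {node_name} ---")          # which agent acted on this step
        if update and "messages" in update:
            print(update["messages"][-1].content)
```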
So let's actually start it and see what it comes up with. Indeed, we went into the supervisor, which decided to call the frontend developer. The frontend developer itself first decided to check what kind of tools it has installed on the system; it got the answer and then went ahead to check the versions. Alright, it decided to create a conference-website directory, so we should already be able to see some changes reflected locally. Indeed, we have a few directories already, we have the package.json, so it is probably running an npx command, which will potentially take some time, because the npm install usually takes a few minutes. Yeah, we can see node_modules appearing with a bunch of packages.
Alright, let's go back, and we are already starting to see some of the code popping up over here: some CSS, some TypeScript code. Let me scroll down here. So it did install some packages and started to implement the code over here, and it has started on the Python part as well. Basically, it will proceed to implement the website for us, on our behalf. Now let me switch back again to the slides.
Alright. It is always nice to fiddle around with the code, with the agents, with the LLMs, and see what you can build on your own. Of course, you don't always have time, and still you probably want to have those agents help you with your everyday tasks, and there is a way to have them help you with your coding tasks.
With that, let me briefly talk about what we do at Zencoder. Zencoder is basically a plugin for VS Code and JetBrains, and this plugin brings the agents to you, to your IDE, natively; you don't need to switch to any other IDE outside of what you are already using. Let me briefly showcase our agent solving a Jira task. Let me switch to VS Code over here. Alright,
so our agent, our Zencoder, is already natively integrated with Jira, so we have this Zencoder test project over here, and we can basically tag any ticket from Jira. I think that's the wrong account. Alright. Let's see. Alright, so let's ask our agent to solve this ticket: solve it, create a branch called 42, check it out, and commit the changes. Alright.
So the ticket itself basically requires the agent to implement changes across multiple languages, across the whole repository. What our agent does first is try to analyze the code base and understand the project. It will check some Python files, and it decided to also check across the different languages; this repository specifically contains three languages, Python, Go, and TypeScript, so it went ahead to check some of the Go implementation in the repository. It can take some time depending on the task at hand.
It also already created a branch for us: thanks to MCP support, you can of course attach any MCP servers, and in this case I have a Git MCP server, which basically allows the agent to perform any Git commands locally on your machine. And we already see some changes being implemented in our repository, first in Python. Okay.
We already have, I believe, two files modified over here. Then the agent decided to switch to the Go implementation of the repository. Alright, so yeah, we also need to introduce this namespace thingy in the Go implementation of the server as well as in Python. Alright. Then, of course, as instructed, it also handled the Git part, and a sort of nice feature I have here is that I instructed the agent, through custom instructions, to always Slack me when it is done with the task.
So let's go back to our chat. And yeah, basically, with the combination of agentic patterns and MCP servers, it was able to pull the Jira ticket and then do all the tasks surrounding the typical software development steps: it created the branch, created a commit for us, and so on. So basically, in a matter of minutes, it was able to fix the ticket for us.
Alright, let's switch back to the slides one more time. Okay. So, if you are wondering, this is how those messages from Zencoder look in Slack, for example, and every time I get one of those messages it feels like this meme. Of course, you can instruct Zencoder to message not you but your PM, or whoever you want it to message, and not necessarily in Slack; through MCP you can connect to pretty much any messaging app whatsoever.
And, well, these sorts of improvements, what you've seen now, probably made you think whether it's time to switch professions, right? Is it still a good time to be a software developer? There are essentially two main sides to that question of what's coming for developers.
Of course, you can think about AI replacing you, and there is also some variability as to whether you see that as a negative or a positive thing, because, for example, maybe you always wanted to be a goose farmer, and once you are replaced by AI you would finally be able to chase that long-lasting dream of whatever your heart wants you to be.
Now, of course, at least for now, AI is not able to train itself without data, so there is at least one job that is safe: namely, writing the code for AI to train on, because it still requires a bunch of data for training and for model improvements. However, jokes aside, even with all of those improvements on the AI side, on the agent side, with things like vibe coding, you still, hopefully, need to have a good knowledge of what's going on, especially if we're talking about mission-critical systems and especially about putting things into production. You still want to be able to understand what's going on; you still need to be able to read the code and to debug the code.
So what we at Zencoder, and I personally, see as the upcoming future of software development is that we as developers will become more like managers of virtual agents, virtual junior developers if you will. We as humans will be more the reviewers, more the managers, rather than writing the code ourselves. We will be able to offload the mundane tasks, for example writing documentation, writing unit tests, and so on, to our army of virtual developers, and we as people will be able to focus on more creative, more high-level, more architectural tasks. So yeah, hopefully the future no longer looks as dark as it did before. But of course, AI moves fast, so we'll see what happens in a few years.
With that, here are a couple of QR codes for you to use. On the right are my contact details; if you want to connect on LinkedIn or follow up with any questions, feel free to message me or send a connection request. And on the left you can see the QR code for our website, zencoder.ai, where you can sign up for free and try the agents in your IDE right away. And with that, thank you for your attention.