Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hey everyone.
Thanks for joining me today.
My name is Ali Bar Lan.
I'm chief scientist at Bit Cloud and today I'm going to show you how
teaching AI to work with components can help you build features faster,
reuse more of your own code, and stay in control of your code base.
By the end of this talk, you'll learn how to shift from using
code generation tools that produce isolated code snippets to using AI
with an architecture first mindset.
This will help you not only build faster, but also keep your code base maintainable,
scalable and understandable over time.
If you think about it, software development has never been about writing lines of code.
Writing lines of code is just a means to an end, which is creating
digital products and features.
However, if you look at today's code assistants, like Copilot or Cursor,
which I'm sure a lot of you are using, they don't see this big picture yet.
They do an impressive job creating code snippets and files, but
they don't see the products, the features, and the architecture
we're actually trying to build.
So today I'll show you how teaching AI to think in components lets it
move beyond code snippets and start building full products and features and
platforms with composable architecture principles and reusable components.
I'll show you why components are the ideal building blocks for a
common language between developers, data scientists, non-technical
stakeholders, and of course AI.
And why this language can help keep us humans in the loop
as AI continues to advance.
Let's dig in.
We'll start with a quick journey surveying how our communication with machines has
evolved over time and why speaking in components is just the natural next phase.
In the beginning, there were zeros and ones, right?
In the 1940s, the first programmers had punch cards where each hole in the
card physically controlled a machine.
No abstractions.
No shortcuts.
You were literally speaking the machine's language, one painful bit at a time.
Then in the 1950s came assembly languages with commands like move, add, or jump.
Still close to the hardware, but now you could use symbolic instructions instead
of remembering endless binary codes.
So this was a small step toward human readability.
The real jump, though, came with high-level procedural languages in the
late fifties, like Fortran and COBOL, and in the seventies, when C was invented.
These languages introduced structured logic: loops and conditionals
and functions, and they were the first compiled languages.
The code you wrote was already semi human, and the compiler
translated it to machine code.
Then in the eighties and nineties we started using object-oriented
programming with C++ and Java.
These languages really nailed how we see the world as humans because the code
was finally organized around objects that modeled real world entities,
not just procedures and instructions.
The turn of the century, the mid nineties to early two thousands
saw the rise of scripting languages like JavaScript, Python, and PHP.
So these languages not only added another level of abstraction,
but also shifted programming even closer to natural language.
Not only that, but they encapsulated many tiring technicalities, like memory
allocation and manual compilation.
These freed us to focus more on logic and outcomes rather than infrastructure.
And this is an important shift.
Remember it for later. And in the past couple of years, we've
entered a new frontier, which is AI-assisted coding.
Now we have low-code and no-code platforms, and we have code
assistants for code generation.
These aren't just new tools.
They represent a new kind of language.
It's not exactly natural language, and it's not
code in the traditional sense either.
It's more like a dialect of English with domain specific jargon, let's
call it a semi-natural language.
In just a few decades, we've moved from speaking directly to
the hardware to conveying the abstract concepts of human intent.
So if you think about it, the history of programming is actually the history of
making machines fluent in human language.
So what do components have to do with it?
This evolution that we just saw has two main characteristics.
One, it moves away from the hardware toward human language
and conceptual systems.
Two, the building blocks of communication between humans and machines gradually
shifted to bigger and more complex building blocks, just like you can
see with the Russian dolls here.
So these bigger and more complex building blocks, they encapsulate
more and more of the technical complexity, allowing us to focus on
the architecture and the business and product intent rather than technicalities.
In the earliest days, a single instruction might simply move a value into a
register, a tiny mechanical action.
Later, a building block might represent something bigger like a
mathematical operation or an algorithm.
As abstraction grew, our building blocks expanded.
They became user accounts and shopping carts and payment flows.
They became real-world entities and actions wrapped up neatly in code, and
as abstraction grows, so do the meaning-bearing units we build with.
We no longer move single bytes.
We move products and features, and these are best represented by components.
Let's dig into that a little bit and see what components are.
Essentially, components are independent software entities, each representing a
single business or product functionality.
They're like packages, or repositories of meaning, designed to be
shared and reused across projects.
You probably know components from the frontend.
Most developers are at least familiar with React components like
headers and buttons and webpages.
But we at Bit use components to model our entire code base,
both frontend and backend.
So we also represent each entity, each database handler, each algorithm,
and each microservice as a component that everyone can access and reuse.
But one of the most important things about components is that they encapsulate
all the gory technical complexity.
First of all, the business logic itself, the implementation.
Then things like tests and configurations and version control, all the things
that you don't really need to know in order to use a component.
It's like you don't have to know what Lego bricks look like on the inside in order
to connect them with other Lego bricks.
You just need to know what they look like on the outside.
So I imagine components like an iceberg, where you only see the tip, which in
our case is the component's API and docs.
But that's really all you or the AI need to know in order to use them.
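To make that concrete, here's a minimal sketch in TypeScript. The names here (`formatPrice`, `currencySymbol`) are invented for illustration, not Bit's actual API: the exported function and its doc comment are the visible tip of the iceberg, while the helper below it stays hidden inside the module.

```typescript
// A component as an "iceberg": consumers (humans or AI) only see the
// exported API and its docs; the implementation stays hidden.

/** Public API: format a price, given in cents, for display. */
export function formatPrice(amountInCents: number, currency: string = "USD"): string {
  return `${currencySymbol(currency)}${(amountInCents / 100).toFixed(2)}`;
}

// Hidden implementation detail: not exported, invisible from outside
// the module, free to change without breaking any consumer.
function currencySymbol(currency: string): string {
  const symbols: Record<string, string> = { USD: "$", EUR: "€", GBP: "£" };
  return symbols[currency] ?? `${currency} `;
}
```

A consumer only needs the signature: `formatPrice(1999)` returns `"$19.99"`, and neither the caller nor the model ever needs to see the symbol table inside.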
Having a product meaning, a human-readable interface, and a hidden
implementation makes components the ultimate building blocks for
a common language, a lingua franca, if you will, that can
connect both non-technical stakeholders and AI to what we do as developers.
Moreover, as developers, we can be less and less bothered by implementation and
move more and more towards higher level product and architecture definitions.
So components are the words in that language of product and business meaning.
But like any language, there's also syntax, a set of rules that defines
how those words fit together.
In this case, that syntax is the component dependency graph.
Here's an example.
You can see that each node on the graph is a frontend or backend component,
and each edge is a dependency relation.
This graph represents an architecture of the entire code base.
You can see, for example, that header depends on logo,
avatar, and navigation menu.
And that in turn navigation menu is using search box, which uses
button and search service and so on.
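A graph like the one on the slide can be sketched as a simple adjacency map, where each key is a component and each value is its list of dependencies. This is just an illustrative data structure, not Bit's internal representation:

```typescript
// Component dependency graph: node -> list of components it depends on.
type ComponentGraph = Map<string, string[]>;

const graph: ComponentGraph = new Map([
  ["header", ["logo", "avatar", "navigation-menu"]],
  ["navigation-menu", ["search-box"]],
  ["search-box", ["button", "search-service"]],
  ["logo", []], ["avatar", []], ["button", []], ["search-service", []],
]);

// Before creating new functionality, first search the graph for an
// existing component that already provides it.
function findReusable(g: ComponentGraph, name: string): boolean {
  return g.has(name);
}

// Walk all transitive dependencies of a component.
function dependenciesOf(g: ComponentGraph, name: string, seen = new Set<string>()): Set<string> {
  for (const dep of g.get(name) ?? []) {
    if (!seen.has(dep)) {
      seen.add(dep);
      dependenciesOf(g, dep, seen);
    }
  }
  return seen;
}
```

With this shape, `findReusable(graph, "button")` answers the "does it already exist?" question, and `dependenciesOf(graph, "header")` walks the edges to show that header transitively reaches button and search-service.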
But this graph actually represents something much more.
It's a live map of the entire business and product functionality in the organization.
This has a few crucial implications.
This map, first of all, is visible and clear to everyone in the
organization, including non-technical stakeholders and upper management.
Second, it promotes reuse.
When we want to create new functionality, we first search for it on the
existing graph, and then we create it only if it doesn't already exist.
So if we see, for example, that menu already exists, we'll use
it instead of creating a new one.
This keeps our code base lean and healthy and prevents it from inflating.
But most importantly, this map can be taught to AI and help it become
an amazing software architect.
Think about it.
When a human developer and AI look at code, they both seemingly see
the same thing, lines of code, but there's a significant difference.
When AI looks at our code, it only sees sequences of tokens.
What are tokens?
Operators, names of functions, classes and variables.
Reserved words, all the ifs and elses, and this is comparable
to viewing a topographical map.
You get an exact view of the terrain, but you don't see the functional
meaning and boundaries that we see as humans when we look at the code.
We see it more like a political map.
We see the functional boundaries between products and features.
We see the components that make up our code, and we see their connections.
And this is what we want to teach AI to do.
We wanna teach AI to work with components for a few reasons.
First of all, tokens have only small, syntax-level meaning, while
components have product or business meaning.
Second, components have clear APIs that allow them to be easily composed
with each other, which turns the task we give AI from generation
with tokens into composition with components.
Instead of generating lines of code from scratch, it searches for existing
components that can be reused and composes them together, which is a much easier
task than generating from scratch.
Third, components encapsulate their implementation.
So they both protect the organization's IP because they
don't expose the implementation to the model, and they don't burden
the model with unnecessary details.
They provide only the exact context the model needs, without
burdening it with implementation noise.
Just like what I said about Lego bricks a few slides ago.
In order to put them together, you don't need to know what
they look like on the inside.
Now providing AI with components and their relations through the component
graph means that the AI no longer needs to infer the business or product
functionality from sequences of tokens.
And it also doesn't need to infer the relations between parts of the
code anymore, from import statements scattered across files and repositories.
Both the product and business meaning and the relations are provided to the
model explicitly. But if we wanna be accurate, what we're actually doing is
using a hybrid approach to code generation.
It's both top-down, architecture first, and bottom-up, which is generating tokens.
How exactly does it work?
It all begins with a user prompt.
So the user prompts the model to create a product or feature, build
me a shopping cart or a website for blogging or an authentication service.
So we start top down.
The model creates the architecture of components that make up this new feature.
It then proceeds to search for existing components it can reuse in this
architecture, and it uses whatever relevant components it finds.
Then we move to the bottom up part, which is composed of three elements.
One, making modifications in existing components if necessary.
Second, generating code for using these components.
And third, generating new components for the relevant functionality that
doesn't exist yet in the component graph.
So it also generates tokens, but it generates them into actual
components and not just snippets.
Then finally, after human review and approval, all the changes and new
components are immediately integrated into the component graph and are reusable
for everyone, both humans and AI.
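The hybrid flow above can be sketched roughly like this. Everything here is illustrative, including the hard-coded plan standing in for the model's actual architecture step:

```typescript
// Hybrid flow: top-down planning first, then reuse, then bottom-up
// generation of only what's missing from the component graph.

interface PlannedComponent { name: string; }

// Stand-in for the model's top-down planning step (hard-coded here
// purely for illustration; in reality the model produces this).
function planArchitecture(_prompt: string): PlannedComponent[] {
  return [{ name: "cart" }, { name: "button" }, { name: "checkout-service" }];
}

function buildFeature(prompt: string, existing: Set<string>): { reused: string[]; generated: string[] } {
  // 1. Top down: plan an architecture of components for the feature.
  const plan = planArchitecture(prompt);
  // 2. Reuse whatever already exists in the component graph...
  const reused = plan.filter(c => existing.has(c.name)).map(c => c.name);
  // 3. ...and generate (bottom up) only the components that don't exist yet.
  const generated = plan.filter(c => !existing.has(c.name)).map(c => c.name);
  return { reused, generated };
}
```

So for "build me a shopping cart" against a graph that already contains a button, the button is reused while cart and checkout-service are generated as new components, which then join the graph themselves.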
What does it actually look like on our platform?
So at the top left, you can see the user's prompt: build a news platform.
Then our model, called Hope AI, suggests the architecture. Here, for example,
you can see it's suggesting a few scopes: design, news platform, and articles,
and you can also see the components inside them.
Some are new, but some are reused and some are modified.
Finally, on the right, you can see what it looks like in production, after
the components have been created and approved by the developer and deployed.
So what does this shift actually mean for developers?
Let's start with speed.
Many of us, probably most of us, are already expected to deliver at AI speed.
This shift is happening whether we're ready for it or not.
But working with components makes that speed manageable.
First of all, you spend less time on low level implementation and more
time on high level architecture, which accelerates delivery.
Second, you ship faster because you reuse and compose rather than code from scratch.
Third, since new and modified components are automatically
integrated into the graph, they're instantly reusable.
Over time, you need to generate fewer and fewer components:
the more components you have, the more you can reuse, and the less you need to create.
And finally, components are runnable and testable, so you can validate their
functionality and their impact on the rest of the graph before they're deployed.
This is a big difference from the systems that exist today.
The second implication is having maintainable code bases.
So building with components doesn't just make us faster, it keeps our code bases
lean, understandable, and scalable.
How? First, as I said before, the component graph is a live map
of all the business and product functionality in the organization.
So all the existing components are visible to everyone, and you and the AI can
always know what you can reuse.
Second, representing the entire code base as one graph means that changes
are made at the component level and not scattered
across files and separate repositories.
This makes it easy for both humans and AI to understand and maintain.
Third, by reusing components, we avoid inflating our code bases
with duplicate functionality.
Today, two developers sitting in the same room can ask the model to
generate the exact same functionality.
Let's say a button, and the model will happily create both instances.
Now think about what happens in a big company over time, how much
duplicate code is created and how much chaos it can create at scale.
The third thing I wanna talk about is collaboration.
So that's another interesting aspect in that shift.
It doesn't just affect developers, it transforms collaboration
across the entire organization.
At the center, we find the component graph as the basis for collaboration.
It's visible to everyone, even if not everyone goes into the implementation
details or can understand them.
Then in the circle closest to it, we have developers and AI who can compose features
and guide architecture and review outputs.
The next circle includes non-technical or less technical stakeholders that are
still involved in the development process.
The product managers and designers who can now see, understand, and
iterate on the component graph.
How come?
Because first of all, people don't have to be highly technical to
understand component APIs and docs.
Second, components have version control, so now product and design can
actually make changes, open pull requests, and review and approve components.
This means that they are now part of the development loop,
much more than before, which makes dev cycles a lot faster and more efficient.
Then in the outermost circles, we have marketing, sales, business and
leadership teams who, for the first time gain visibility into system
structure and feature flow so they can now understand how the product
is actually evolving in real time.
Basically, this is what happens when development becomes a shared language
rather than a siloed activity.
And finally, as we assign more and more development tasks to ai, it's
important to think about how to keep humans in the loop in the long run.
This is another advantage of components as a shared language, for a few reasons.
First, using components, we have a shared entity to collaborate on and
improve over time. AI can propose improvements, which humans can
understand and validate or fine-tune.
It's so much easier to read component documentation and tests than going over
all the lines of code in a snippet.
And when human developers change individual components, it's easier for
the AI to understand these changes, and not only that, it can also understand
their impact on the entire graph.
Second, when we build with components, we augment our code base carefully
instead of just letting AI generate endless amounts of duplicate code.
This keeps our code bases lean and understandable to humans in the long run.
Third, human control over AI crucially depends on the permissions we give it:
what it can access, what it can decide.
A modular component based approach ensures granular permissions.
Instead of letting AI act across a monolithic system,
each component is isolated.
So if something behaves unpredictably, it can be reviewed or modified or replaced
without risking the entire system.
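Granular, per-component permissions could be sketched like this. The component names and permission levels are made up for illustration; they're not a real Bit configuration:

```typescript
// Per-component AI permissions, instead of one blanket permission
// over a monolith. Unknown components default to no access.
type Permission = "read" | "modify" | "create";

const aiPermissions: Record<string, Permission[]> = {
  "search-box": ["read", "modify"],  // AI may propose changes here
  "payment-service": ["read"],       // critical: changes stay human-only
};

function aiMay(component: string, action: Permission): boolean {
  return (aiPermissions[component] ?? []).includes(action);
}
```

The point of the design is the default: anything not explicitly granted is off-limits, so a misbehaving component can be fenced in without touching the rest of the graph.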
In short, composable AI means controllable AI: critical
decisions stay human-controlled,
AI can't overstep, and systems stay safe, visible, and scalable.
So through components, humans and AI are finally speaking the same
language, the language of product, functionality, architecture and intent.
This is how we build faster and smarter and how we keep humans in the loop.
Because let's face it, git blame only tells half the story now, doesn't it?
So thank you.
Come check us out at Bit Cloud and connect with me on LinkedIn.
My name is Ali Bar Lan.
I'd love to hear your thoughts and of course if you have any
questions, I'd love to answer.
So hope to hear from you and hope you enjoy the rest of the conference.