Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi, I'm Ajita Bona Chandran, and I'm a principal machine learning
and generative AI engineer at a leading investment firm.
Thank you so much for giving me this opportunity to present my topic.
At this conference, I'll be taking the next 15 to 20 minutes to talk about
agentic search: powering next-generation insights using multi-agent intelligence.
Let's consider a typical enterprise scenario.
Maybe there's a financial organization that manages multiple
terabytes of operational data.
A senior decision maker requires analysis of declining mobile banking
satisfaction metrics, and strategic recommendations for improvement.
Traditional search systems return thousands of potentially relevant
documents requiring extensive manual review to extract actionable insights.
This process typically requires several days and may result
in incomplete analysis.
Agentic search systems offer an alternative approach: comprehensive analysis
delivered within minutes, incorporating trend identification, competitive
benchmarking, root cause analysis, and strategic recommendations through
coordinated multi-agent intelligence.
Today I will demonstrate how JavaScript's architectural characteristics
position it as an optimal platform for implementing these systems.
Let's talk a bit about the search problem.
Before that, let's quickly see what the agenda is.
We will understand what the search problem is, and then we will briefly talk about
evolution of various search paradigms.
We will also talk about the agentic, multi-agent architecture, which is the
topic of the day, and how a JavaScript implementation works in this case.
We'll also talk a little bit about real world impact and future
considerations that you can take away from this particular session.
So what is the search problem that we face today?
Enterprise data volumes have expanded exponentially, with many organizations
managing petabytes of structured and unstructured information.
However, traditional keyword based search systems remain functionally
limited in their ability to transform this data into actionable intelligence.
Current search paradigms operate on lexical matching principles
designed for document retrieval rather than insight generation.
When executives require complex analytical responses, these systems
return document collections that require extensive human interpretation
to derive meaningful conclusions.
Research indicates that knowledge workers spend approximately
half of their day searching for information,
and perhaps only 50 to 60% of those searches yield relevant results.
The financial impact is significant: delayed strategic decisions,
missed market opportunities, and inefficient resource allocation.
The fundamental issue here is architectural.
Traditional search systems are designed for information location,
not intelligence synthesis.
The enterprise requirements have evolved beyond document matching
towards comprehensive analysis, prediction, and strategy recommendation.
Let's talk a little bit briefly about evolution of search paradigms.
The first one is keyword search.
Keyword search operates, as we discussed, on lexical pattern matching
and term frequency analysis.
This approach delivers documents based solely on literal word presence, often
producing high volumes of irrelevant results with very limited precision.
The second generation is semantic search.
Semantic search represents a significant advancement, achieving contextual
comprehension through natural language understanding.
These systems interpret meaning and intent, delivering conceptually
relevant results with substantially reduced irrelevant content.
The third generation is the topic we are discussing today: agentic search.
It constitutes a paradigm shift from passive retrieval to
active intelligence generation.
These systems deploy specialized AI agents that understand the problem structure,
plan investigations, and synthesize findings into decision ready intelligence.
So now let's talk briefly about what agentic search is.
As we discussed, this is fundamentally a paradigm shift from traditional
approaches, enabled by several key capabilities: collaborative processing,
with multiple specialized agents coordinating to handle different aspects of
complex queries; semantic understanding, providing deep comprehension of
query intent beyond keyword matching; synthesis generation, transforming
raw information into actionable insights; and quality assurance, through
systematic validation of findings via multi-agent verification.
Let's maybe demonstrate all these capabilities with some practical examples.
Let's consider a business query.
Why has mobile banking satisfaction declined recently?
Keyword search may return maybe a thousand documents containing the words
in the question: mobile, banking, satisfaction, and decline.
This includes irrelevant content such as mobile phone hardware specifications.
The user would receive quarterly reports, customer service transcripts,
marketing materials, and so on, and would have to manually review the
hundreds of documents that have been returned to extract relevant insights.
This analysis might take maybe a week or even more.
Semantic search understands the query intent and focuses on customer
satisfaction trends in mobile banking.
It returns a smaller set of conceptually relevant documents about customer
experience metrics, filtering out the hardware-related content
and focusing mainly on user experience data.
This provides satisfaction surveys, maybe NPS scores, and customer feedback
analysis, and then some manual synthesis of this
data follows for maybe a day or two.
Agentic search orchestrates a coordinated investigation.
Research agents simultaneously extract customer satisfaction metrics from
CRM systems, app store reviews, support ticket databases, and
social media monitoring tools, across the time period in question.
Analysis agents process this data to discover that satisfaction scores have
actually dropped maybe 20% during the last week of November, and that a
mobile app update shipped at the same time.
They correlate the user complaints received with the navigation complexity
that was introduced by the new UI redesign.
Domain expert agents contribute regulatory and industry context, noting that
similar dissatisfaction patterns have occurred industry-wide
during digital transformation initiatives, while identifying
best practices from competitors who maintained higher satisfaction
during this transition period.
Quality-checking agents cross-validate all these findings to rule out
any sampling bias or seasonal factors.
The system delivers comprehensive intelligence.
It reports the problem and identifies the primary cause.
It also benchmarks against other competitors with similar experiences,
and it provides remediation strategies along with
a projected timeline for recovery.
The complete analysis is delivered within maybe 20 minutes.
So now let's talk about the multi-agent architecture.
We can use the same example to understand the architecture.
Agentic search systems implement collaborative intelligence through
specialized agent roles, each optimized for specific analytical functions.
Research agents decompose complex queries into structured search components and
retrieve relevant information across distributed data systems, including
structured databases, unstructured documents, and even external databases.
Analytical agents process this retrieved information to identify patterns,
correlations, and insights using statistical analysis, machine
learning algorithms, and domain-specific analytical frameworks.
Domain expert agents apply specialized knowledge and industry-specific
context to ensure analytical accuracy and relevance within a
particular operational environment.
Quality-checking agents validate the findings through systematic verification,
including bias detection, accuracy assessment, and completeness evaluation.
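As a rough sketch of these roles, assuming nothing beyond plain JavaScript (the agent objects and their placeholder logic are illustrative, not from any particular framework):

```javascript
// Illustrative sketch: each agent role shares a minimal interface
// (name + run), so an orchestrator can treat them uniformly.
// All logic here is placeholder; real agents would call data stores and LLMs.

const researchAgent = {
  name: "research",
  // Decompose a query into structured search components (naive tokenization).
  run: (query) => query.toLowerCase().split(/\s+/).filter((w) => w.length > 3),
};

const analysisAgent = {
  name: "analysis",
  // Identify simple patterns in retrieved records (placeholder statistics).
  run: (records) => {
    const scores = records.map((r) => r.score);
    const mean = scores.reduce((a, b) => a + b, 0) / scores.length;
    return { mean, min: Math.min(...scores), max: Math.max(...scores) };
  },
};

const qualityAgent = {
  name: "quality",
  // Validate findings before delivery (placeholder completeness check).
  run: (findings) => ({ ...findings, validated: Number.isFinite(findings.mean) }),
};

// Usage: chain the roles over toy data.
const components = researchAgent.run("Why has mobile banking satisfaction declined");
const findings = analysisAgent.run([{ score: 62 }, { score: 48 }, { score: 55 }]);
const report = qualityAgent.run(findings);
```

The common interface is the point: the orchestrator never needs to know what a given agent does internally, only how to invoke it.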
So how do these agents collaborate?
Multi-agent coordination operates through four distinct phases.
In the query interpretation phase, research agents analyze the
user query to identify the underlying requirements and
decompose the complex query into manageable investigative components.
In the distributed execution phase, agents execute specialized tasks
in parallel, with research agents gathering data while analysis agents
begin preliminary pattern identification.
In the collaborative synthesis phase, domain experts and analysis agents
integrate their findings to generate a coherent analytical narrative
that addresses the original query requirements.
In the quality validation phase, the system verifies accuracy, eliminates
potential bias, and ensures analytical integrity before final delivery.
This architecture enables processing capabilities that exceed
single-agent limitations while maintaining analytical rigor
and systematic quality assurance.
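The four phases described above can be sketched with plain async/await; every function name here (interpretQuery, distributedExecution, and so on) is a hypothetical stand-in rather than a real framework API:

```javascript
// Hypothetical four-phase pipeline. Each phase is a stub that resolves
// immediately; in practice these would fan out to real agent services.

async function interpretQuery(query) {
  // Phase 1: decompose the query into investigative components.
  return { query, components: ["metrics", "timeline", "competitors"] };
}

async function distributedExecution(plan) {
  // Phase 2: run specialized tasks in parallel via Promise.all.
  const tasks = plan.components.map(async (c) => ({ component: c, data: `data for ${c}` }));
  return Promise.all(tasks);
}

async function collaborativeSynthesis(results) {
  // Phase 3: integrate findings into a coherent narrative.
  return { narrative: results.map((r) => r.component).join(" + ") };
}

async function qualityValidation(draft) {
  // Phase 4: verify completeness before final delivery.
  return { ...draft, validated: draft.narrative.length > 0 };
}

async function agenticSearch(query) {
  const plan = await interpretQuery(query);
  const results = await distributedExecution(plan);
  const draft = await collaborativeSynthesis(results);
  return qualityValidation(draft);
}
```

Note that only phase 2 is parallel; the phases themselves are sequential, which mirrors the dependency between interpretation, execution, synthesis, and validation.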
Let's now talk about why JavaScript is a natural fit.
JavaScript's event-driven, asynchronous architecture provides optimal
support for multi-agent coordination.
The language's native capabilities for handling concurrent operations,
message passing, and distributed task orchestration directly address the
communication and coordination challenges inherent to multi-agent systems.
Node.js serves as an execution environment for high-performance, non-blocking
coordination among the agents.
The platform's asynchronous I/O model enables efficient handling of multiple
concurrent agent requests while maintaining system responsiveness.
Complementary frameworks within the JavaScript ecosystem provide
enterprise-grade capabilities.
For example, Express.js and Nest.js offer robust microservice
architecture foundations.
Event-driven messaging services enable reliable inter-agent communication,
while load balancing and fault tolerance mechanisms ensure system reliability.
So now, what are the core building blocks?
Distributed orchestration, with LangChain.js functioning as the coordinator,
manages agent activities across microservices using message queues,
event buses, and workflow engines.
Systematic query interpretation transforms incoming natural language queries
into structured intent using embedding models, entity extraction,
and contextual understanding frameworks.
Dynamic knowledge exchange enables agents to share context, findings, and
state through shared memory systems, vector databases, and real-time data
streams.
The quality assurance layer implements validation pipelines that check for
factual accuracy, logical consistency, and bias before returning the results.
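The quality assurance layer, for instance, could be sketched as a pipeline of independent check functions; the specific checks below are illustrative placeholders, not a real validation suite:

```javascript
// Illustrative validation pipeline: each check inspects a draft result
// and returns a { name, passed } record; the result is approved only
// if every check passes. Real checks would involve model-based scoring.

const checks = [
  { name: "factual-accuracy", fn: (draft) => draft.sources.length > 0 },
  { name: "logical-consistency", fn: (draft) => !draft.claims.includes(null) },
  { name: "bias-detection", fn: (draft) => draft.sources.length >= 2 },
];

function validate(draft) {
  const results = checks.map((c) => ({ name: c.name, passed: c.fn(draft) }));
  return { results, approved: results.every((r) => r.passed) };
}

// Usage: a draft backed by two sources passes all three placeholder checks.
const verdict = validate({
  claims: ["satisfaction dropped after the UI redesign"],
  sources: ["crm-export", "app-store-reviews"],
});
```

Keeping the checks as data makes it straightforward to add or remove validators without touching the pipeline itself.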
This slide compares agentic and traditional search across four key
parameters, the KPIs: context understanding, reasoning capabilities,
insight generation, and quality assurance.
In all of them we can see that agentic search performs far better
than both semantic and traditional search methods.
Now let's talk about the implementation pathway.
Organizations can implement agentic search systems through
a structured approach.
First, design the agent personas: define specialized roles,
responsibilities, and communication protocols for each agent
within the organizational context.
Second, architect the orchestration layer: implement event-driven
coordination using JavaScript frameworks and message-broker systems
to enable reliable inter-agent communication.
Third, integrate knowledge systems: connect vector databases, semantic
search engines, and large language model APIs to give the agents
comprehensive data access capabilities.
Fourth, deploy quality pipelines: establish validation mechanisms
including bias detection, accuracy verification, and completeness
assessment before production deployment.
So what tools does the JavaScript ecosystem provide?
It offers immediate access to production-ready tools that
make implementation practical today.
LangChain.js functions as the core engine for large language model
orchestration.
It uses chains and agents to manage complex action sequences and dynamic
reasoning, leveraging its expression language for robust streaming and
parallel execution, essential for advanced RAG and conversational AI
applications.
Vector databases enable RAG, or retrieval-augmented generation,
storing embeddings of proprietary data
for ultra-low-latency semantic searches. RAG injects real-time context into
the prompts, working around LLM context limits and significantly reducing
hallucinations.
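Under the hood, the retrieval step of RAG reduces to nearest-neighbor search over embeddings. A dependency-free sketch with hand-made 3-dimensional vectors standing in for real embedding model output:

```javascript
// Toy RAG retrieval: rank documents by cosine similarity to a query vector.
// The 3-d vectors are hand-made stand-ins for real embedding output;
// a production system would use a vector database for this search.

function cosine(a, b) {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

const docs = [
  { text: "NPS scores fell after the redesign", vec: [0.9, 0.1, 0.0] },
  { text: "Phone hardware specifications", vec: [0.0, 0.2, 0.9] },
  { text: "Customer satisfaction survey results", vec: [0.8, 0.3, 0.1] },
];

function retrieve(queryVec, k = 2) {
  return docs
    .map((d) => ({ ...d, score: cosine(queryVec, d.vec) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

// Usage: a "satisfaction decline" query vector should rank the two
// satisfaction-related documents above the hardware one.
const top = retrieve([1, 0.2, 0]);
```

The retrieved passages are what get injected into the prompt, which is how RAG grounds the model in proprietary data.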
Node.js microservices provide enterprise-grade scalability and resilience.
The asynchronous I/O model handles the high concurrency crucial for
simultaneous agent requests.
Agents can be deployed as modular services managed by API gateways
for robust communication, load balancing, and accelerated
continuous delivery.
This technological foundation enables implementation of agentic
search systems using established JavaScript development practices and
existing organizational expertise.
Now let's talk about the real-world impact.
Implementations demonstrate three business impacts.
Faster decision making: agents synthesize massive data sets
instantly, providing real-time actionable insights that cut critical
analysis time from days to minutes,
empowering rapid strategic responses.
Improved accuracy: by leveraging proprietary data via RAG, agents
deliver highly contextualized and precise answers, drastically reducing
LLM hallucinations and data errors.
User satisfaction: highly personalized, context-aware interactions deliver
relevant, specific information on demand, elevating the user experience
far beyond traditional search methods.
Organizations report measurable benefits, including reduced analyst
time spent on information gathering, accelerated strategic
decision implementation, and improved accuracy of market analysis.
The technology's ability to process vast data volumes while
maintaining analytical rigor provides competitive advantages in
time-sensitive market environments.
So what are the challenges we should consider?
Latency management: multi-agent coordination introduces operational
overhead, so optimize with caching, parallel execution, and progressive
result streaming to maintain responsiveness.
Cost control: LLM calls usually accumulate very quickly, so
implement robust batching, result caching, and intelligent model
selection to balance quality and budget.
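Result caching, for example, can be as simple as memoizing calls by prompt; callModel below is a hypothetical stand-in for a real LLM client:

```javascript
// Simple result cache keyed by prompt. `callModel` is a hypothetical
// stand-in for a real (expensive) LLM API call.

let modelCalls = 0;

async function callModel(prompt) {
  modelCalls += 1; // count how often the expensive path actually runs
  return `answer for: ${prompt}`;
}

function withCache(fn) {
  const cache = new Map();
  return async (prompt) => {
    if (!cache.has(prompt)) {
      // Store the promise itself, so concurrent identical requests
      // coalesce into a single underlying call.
      cache.set(prompt, fn(prompt));
    }
    return cache.get(prompt);
  };
}

const cachedModel = withCache(callModel);
```

Caching the promise rather than the resolved value is the key design choice: it deduplicates in-flight requests as well as repeated ones.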
Debugging distributed agents requires comprehensive logging,
tracing, and monitoring across the entire orchestration pipeline.
Quality check agents must actively detect and mitigate bias in both
the data sources and model outputs to ensure equitable results.
These challenges represent engineering problems with established solutions rather
than fundamental implementation barriers.
So what does the future look like?
Agent search represents the initial implementation of intelligent
problem solving ecosystem that scale with enterprise requirements.
Future developments will enable systems that anticipate organizational
needs, proactively surface the strategic insights and continuously
learn from operational interactions.
The convergence of JavaScript's flexibility with multi-agent
intelligence creates opportunities for innovation across multiple enterprise
domains beyond information retrieval.
Organizations face a strategic decision regarding
agentic architecture adoption.
The technology foundations exist today.
Implementation pathways are established, and the competitive
advantages are measurable.
So with that, I would like to conclude my presentation.
Thank you so much for your attention, and I appreciate the
opportunity to present this.
Thank you.