Conf42 Large Language Models (LLMs) 2024 - Online

Implementing GenAI in a banking context: A hands-on approach

Abstract

Unlock business potential with LLMs! Learn strategic use case selection, navigate proprietary vs. external models, build a dynamic PoC team, deploy, refine with customer feedback, and scale success across cases. Embrace an agile, iterative development approach for impactful results.

Summary

  • Welcome, everybody, to this talk about implementing GenAI in a banking context. I will be speaking from a practical and business perspective, because I am chief product officer of a company that created a GenAI solution for banks. In the end, we'll talk about the team required to create such a solution.
  • The potential market of software for banks is more than $4 billion. We launched a system that talks to the customer in chat: an AI-powered chatbot. It collects documents, extracts data from them, runs all the required checks, and prepares a compliance report. The benefits for banks are cost reduction and faster onboarding.
  • Most people believe that OpenAI uses clients' data to develop new models. Many people still believe that chatbots are not reliable. Our product is a mix of chat and hard-coded logic. We started integrations with large CRM and compliance case-management systems.
  • A few mission-critical roles are necessary for the success of such a product: data scientists, a QA engineer with prompt engineering experience, and a back-end developer. If you need a high-quality environment for banks, your infrastructure should be really great.
  • There is a huge use case for GenAI in banks and great potential for implementation. The main pain for banks is working with a large volume of different documents and processes. Most banks do not fully understand the opportunities.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Welcome, everybody, to this talk about implementing GenAI in a banking context. I will be speaking from a practical and business perspective, because I am currently chief product officer of a company that created a GenAI solution for banks, and we have had a lot of communication with banks around leveraging AI in a banking environment. First, we'll discuss the context and the use case overall. Then we'll give a short description of the product, followed by a deeper dive into the technical aspects of how it was created. Then we'll discuss the frequent concerns banks raise about launching such a product within a banking environment. And in the end, we'll talk about the team required to create this kind of solution. So let's start. The potential market of software for banks is really huge: more than $4 billion, and that is just the serviceable addressable market; the total market is much larger, and we believe it is growing. In this market, our company focuses on two problems. The first is that opening a business account at a bank takes a lot of time: according to our survey, seven days and more than £400 per customer. We also know there is a huge problem with transaction monitoring. According to our research, if we look at the onboarding process for a bank's business customers, some parts of the process are already highly automated, such as verification in registers and identity verification. But there is a part, which I colored green on the slide, that is very difficult to automate, and it relates to two things. The first is back-and-forth clarification. For example, you ask the customer to provide a contract; the customer provides it; your manager reads the contract and realizes it is expired and you need a fresh one.
And you write a message asking for the new contract, and that usually takes two days, which is a significant amount of time. Secondly, actually reading the documents takes a lot of time. Contracts provided to banks can be 50 or 100 pages, and you can imagine how much time managers spend reading such documents. Our product resolves this issue, and it is not yet covered by the current players. What we did is launch a system that talks to the customer in chat: an AI-powered chatbot. It collects documents, extracts data from them, runs all the required checks, and prepares a compliance report. In this report we list all the discrepancies found in the documents, all the issues, and all the related communication with the customer. In the end, the compliance officer, the person who decides whether to open the account, just reads our report and makes the decision. It takes them about ten minutes instead of the seven days it took previously. So this is our use case and the product we offer to banks, and its benefits are, of course, cost reduction and faster onboarding; customers are happy, and there are many other advantages. Let me speak a bit more about how it was built from a technical perspective. If we tried to create such a product based on an LLM alone, the LLM would not hold a structured discussion on a given topic: even with very good prompts and structure, the stability and our ability to control what is going on would be very low. That's why we decided we should have a hard-coded dialogue scenario.
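The compliance report described above can be sketched roughly like this. This is a hypothetical illustration, not the actual product code; the class and field names are assumptions made for the example:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the compliance report the talk describes: the bot
# aggregates discrepancies, issues, and customer clarifications, so the
# compliance officer reads a single document before deciding on the account.

@dataclass
class ComplianceReport:
    customer: str
    discrepancies: list = field(default_factory=list)   # e.g. document vs. register mismatches
    issues: list = field(default_factory=list)          # e.g. expired or missing documents
    clarifications: list = field(default_factory=list)  # customer chat messages on these points

    def summary(self) -> str:
        """One-line verdict for the compliance officer's quick read."""
        verdict = "review needed" if self.discrepancies or self.issues else "clean"
        return (f"{self.customer}: {len(self.discrepancies)} discrepancies, "
                f"{len(self.issues)} issues ({verdict})")
```

A report with any recorded discrepancy or issue is flagged for review; an empty one reads as clean, which is what lets the decision shrink from days to minutes.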
So we ensure the system asks the customer for specific items, for example, company registration documents, contracts, and invoices, and if the customer has questions or would like to discuss anything around these documents, we support that, but within a well-guided dialogue. We split the flow into small steps. For example, when a customer asks about the requirements for an invoice, we give the LLM just that one small task, and it answers that particular question; we never ask it to collect all the data at once. Secondly, we use a separate OCR service rather than an LLM, because the quality is higher, and we see very good stability with it. Thirdly, extraction is also a separate task for the model: having converted images to text with OCR, we use the LLM to extract structured data from that text for document analysis. Finally, we decided the product should rely not only on the LLM and OCR, but also on APIs that enrich each case with additional data. For example, we have integrations with company registers: we take data from there, compare it with the data extracted from the documents, and as a result produce very well-structured reports. As for integration, this is a very important customer concern. A bank has a very complicated architecture with a long history, and integrating seamlessly into that kind of architecture is difficult and takes a lot of time. That's why, for the proof of concept, we created a kind of fast integration approach. It looks like this: the bank wants to open a business account for a customer; the customer fills in forms on the bank's website and is authorized there.
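The extraction and enrichment steps above can be sketched as two small, separately scoped tasks. This is a minimal illustration, assuming a pluggable `llm` callable (any chat-completion client could be wired in) and invented field names; it is not the vendor's actual pipeline:

```python
import json

# Step 3 of the flow: OCR has already turned the document image into text;
# the LLM gets one narrow task, turning that text into structured fields.
EXTRACTION_PROMPT = """Extract the following fields from the contract text
and reply with JSON only: company_name, registration_number, expiry_date.

Contract text:
{text}"""

def extract_fields(ocr_text: str, llm) -> dict:
    """Give the LLM one small extraction task, parse its JSON reply."""
    raw = llm(EXTRACTION_PROMPT.format(text=ocr_text))
    return json.loads(raw)

def check_against_register(extracted: dict, register_record: dict) -> list:
    """Step 4: enrich via a register API and record any discrepancies
    between the document data and the register data."""
    discrepancies = []
    for field_name in ("company_name", "registration_number"):
        if extracted.get(field_name) != register_record.get(field_name):
            discrepancies.append(
                f"{field_name}: document says {extracted.get(field_name)!r}, "
                f"register says {register_record.get(field_name)!r}"
            )
    return discrepancies
```

Keeping OCR, extraction, and register comparison as separate steps is what the talk credits for the pipeline's stability: no single prompt is asked to do everything.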
Rather than integrating via API at this point, we do an iframe integration: we find a suitable screen and embed our chat there, which is very fast. The second part of the approach is that our service, which does all the processing, runs in the cloud: after the bank's potential customer talks to our chat in the iframe, we create the report and send it by email. Very straightforward. We can of course deliver it through an API, but as I mentioned, for banks that can take a long time; rolling out this fast approach takes one week, which makes it much more attractive for banks to try our product. Moving on, I mentioned that we have faced various concerns about LLMs and the overall use of GPT in a banking environment. The first concern is that most people believe OpenAI uses clients' data to develop new models. We prepared special documents confirming that, for the corporate API, OpenAI declares it does not use any client data, and that this is protected by agreements and all the necessary policies. We show this to the customer, and usually it helps. Secondly, many people still believe chatbots are not reliable. For example, I have heard this concern: "Your solution is great, and we like the part where you check the documents and run compliance checks, but I personally don't like chatbots. Can you implement a dynamic form where a customer fills in some information and gets a response, but not in chat?" A very interesting concern, because people still see chatbots as unreliable, and this is actually true. What we can do here, as I mentioned, is firstly make our chat very structured. We also limit answers to an internal database.
So when our chatbot answers a customer, it does not use any information from the Internet; it uses information from a data source that has been checked, so we are sure there will be no hallucinations around it. Also, wherever possible, we add hard-coded logic and quick buttons that skip chat communication where it is not necessary. So I would not say we have a pure chat; it is a mix of chat and hard-coded logic, and this helps when we are trying to convince banks that the product is good enough. Concern number three is quality. People remember from classic machine learning that to read a new type of document with high quality, you need to train a model on thousands and thousands of samples of such documents. They do not realize the situation has changed: you no longer need such a huge training set to achieve good quality. We support our sales materials with testing results and suggest banks try their own documents; after they try, this usually helps. Number four I already mentioned, but let me elaborate a bit more: banks are afraid of hard integration, and for a proof of concept they want something small they can try, so we created the quick integration scheme. We also started integrations with large CRM and compliance case-management systems such as Salesforce. For banks that use Salesforce, when they understand that our product will be part of their Salesforce flow, it is much easier for them to decide to adopt our solution. And now let's move to the team. Different teams could work on such projects, but I would like to mention a few mission-critical roles that I think are necessary for the success of such a product.
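The mix of hard-coded logic, quick buttons, and answers limited to a vetted internal source can be sketched as follows. This is a hypothetical illustration with invented FAQ entries and button names, assuming a pluggable `llm` callable; it is not the actual product:

```python
# Vetted internal knowledge base: the only content allowed to ground answers,
# so the bot never improvises from open-web knowledge.
FAQ = {
    "invoice requirements": "An invoice must show the issuer, date, amount, and VAT number.",
    "accepted documents": "We accept registration certificates, contracts, and invoices.",
}

# Quick buttons: hard-coded paths that skip chat communication entirely.
QUICK_BUTTONS = {
    "upload_contract": "Please attach the contract as a PDF.",
}

def answer(user_input: str, llm=None) -> str:
    """Route input through hard-coded logic first, vetted retrieval second."""
    # Hard-coded path: a quick-button intent returns a fixed reply.
    if user_input in QUICK_BUTTONS:
        return QUICK_BUTTONS[user_input]
    # Retrieval path: only vetted FAQ entries may ground the answer.
    matches = [text for key, text in FAQ.items() if key in user_input.lower()]
    if not matches:
        # No vetted source: refuse rather than risk a hallucination.
        return "Let me connect you with a manager for that question."
    if llm is None:
        # Degrade gracefully to the raw vetted text.
        return matches[0]
    return llm(f"Answer using ONLY this source:\n{matches[0]}\n\nQuestion: {user_input}")
```

The point of the design is the fallthrough order: deterministic logic first, then retrieval from checked data, and an explicit refusal when neither applies.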
First, data scientists. With generative AI, the role of the data scientist has changed: it is now more about prompt engineering and data extraction from text. But what is important is that data scientists know how to structure the work on quality, and I think this is critical: they can create a great dataset, ensure its quality, ensure the labeling process is set up properly, and test everything properly. The role has shifted, but it is still core to this process. The second role is actually new; I had not heard of it before this project, and I think it only appeared last year: a QA engineer with prompt engineering experience. It's a really interesting mix: people who can test the quality of LLM responses at high volume, are very experienced in prompt engineering, have all the required tools, and can write their own tools to automate such QA. This new role really helps us ensure the quality of the chatbot. And number three, a back-end developer, who these days can also do prompt engineering and integration with different models. Importantly, this role should either be combined with DevOps or there should be a separate DevOps person on the team, because if you need a high-quality environment for banks, your infrastructure has to be really good, and I would personally recommend having this role on your team. To wrap up my talk: there is a huge use case for GenAI in banks and great potential for implementation. The main pain for banks is working with a large volume of different documents and processes: not only the compliance process, but also lending, credit, and legal due diligence processes. So there are a lot of them.
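The tooling such a QA-with-prompt-engineering role builds might look something like this. A minimal sketch under assumed conventions (the case schema and field names are invented for the example), not a description of the team's actual tools:

```python
# Run a suite of scripted dialogue cases against the bot and assert on
# required and forbidden phrases, so response-quality regressions surface
# at high volume before a release rather than in front of a bank customer.

def run_qa_suite(bot, cases):
    """Return a list of (case_id, reason) failures; empty list means pass."""
    failures = []
    for case in cases:
        reply = bot(case["prompt"]).lower()
        for must in case.get("must_contain", []):
            if must.lower() not in reply:
                failures.append((case["id"], f"missing {must!r}"))
        for banned in case.get("must_not_contain", []):
            if banned.lower() in reply:
                failures.append((case["id"], f"contains banned {banned!r}"))
    return failures
```

In practice the `bot` argument would wrap the real chatbot endpoint; keeping it as a plain callable lets the same suite run against a stub in CI and the live system in staging.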
Our company created a chatbot that collects and verifies documents for banks, and we have experience implementing it. Most banks do not fully understand the opportunities; banks are risk-averse organizations, and we hear a lot of concerns, so you should be prepared to argue your case and show that the risks are not so high. As a hint for overcoming the concerns: first, prepare responses to frequently asked questions; second, have an instant proof of concept so you can prove your solution works. Well, that's it from my side. Thank you very much, and have a nice day.

Denis Skokov

CPO @ Generative AI startup

Denis Skokov's LinkedIn account

