Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi everyone.
Thank you so much for this opportunity.
My name is Kate Kumar Patel.
I am a senior business application architect at Wind River Systems.
I have 13 Salesforce certifications, including the System Architect and
Application Architect certifications.
I know Salesforce inside and out.
Nowadays, since AI arrived in our world, I spend a lot
of my time thinking about how we can use AI more responsibly.
Today I want to talk about something really exciting: building
ethical AI with Rust, a framework for trust and performance.
Now AI is everywhere.
It writes our emails, it helps sales teams close more deals.
It even suggests what shoes we might want to buy.
Sometimes it's too aggressive, right?
In Salesforce, we see it in Einstein GPT.
It is writing personalized emails, powering forecasting tools, and even
driving chatbots that handle customer service.
But here is the catch.
AI is like a superhero: it is very fast, it is very powerful,
and it saves us a lot of time.
But if you think about it: when I was in 10th standard, I
saw the movie Spider-Man.
I think everybody knows that movie.
In that movie, Uncle Ben tells Spider-Man that with great power
comes great responsibility.
We have to apply the same thing to AI.
AI is super powerful, and that power comes
with great responsibility.
So what is ethical AI in Rust?
Ethical AI in Rust refers to the practice of developing and deploying
artificial intelligence systems with a focus on aligning with moral
principles and societal values, ensuring fairness, transparency,
and accountability, and respecting human values.
So what happens if we don't have this?
What happens if we use AI without any ethical values?
If we don't guide AI with ethics, it can invade our privacy,
treat people unfairly, or even make mistakes at scale.
That's where Rust comes in.
It gives us the safety net to make AI not just powerful,
but also trustworthy.
So why does ethical AI in Rust matter?
Think about this.
AI in Salesforce makes life easier for sales reps and service agents.
It predicts who's likely to buy and helps answer customers' questions.
But if privacy is ignored, suddenly your shoe search follows you across
every website like an obsessed stalker.
If fairness isn't checked, AI might give better service to one
group and worse service to another.
And if reliability fails, then imagine Einstein GPT sending the
wrong email to the wrong person.
Rust is like the seatbelt for AI.
It has memory safety, it has ownership, it has a strong type system.
Those sound like boring developer features, but in reality
they mean fewer data leaks, fewer bugs, and stronger guardrails.
So whether you are working with Salesforce Einstein or building a custom CRM engine,
Rust helps make sure AI doesn't just work; it works responsibly.
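As a tiny, hedged sketch of what those guardrails feel like in code: because Rust moves ownership, a redaction step can consume the raw record, and nothing downstream can touch the sensitive value by accident. None of these types are real Salesforce APIs; they are made up for illustration.

```rust
// Hypothetical types for illustration; not part of any Salesforce SDK.
struct RawCustomerData {
    email: String,
    purchase_history: Vec<String>,
}

struct RedactedData {
    purchase_history: Vec<String>,
}

// Taking `RawCustomerData` by value *moves* it into this function.
// After the call, the caller can no longer touch the raw record.
fn redact(data: RawCustomerData) -> RedactedData {
    // The email is dropped at the end of this scope and never leaks out.
    RedactedData { purchase_history: data.purchase_history }
}

fn main() {
    let raw = RawCustomerData {
        email: "pat@example.com".to_string(),
        purchase_history: vec!["shoes".to_string()],
    };
    let safe = redact(raw);
    // println!("{}", raw.email); // compile error: `raw` was moved
    println!("{} purchases kept", safe.purchase_history.len());
}
```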
And let's not forget the rule makers.
We have GDPR in Europe, we have CCPA in California, and now we have the EU AI Act.
These laws say you must ask customers clearly before collecting their data.
You must give people the right to say no.
You must prove your AI isn't doing anything sketchy.
By 2024, more than 70% of AI systems in CRM had already started complying.
That means if you are running Salesforce Service Cloud in healthcare, you
can't just hope patients are okay with sharing their data;
you have to prove it to them.
Rust helps because we can enforce these rules right in the code.
Think of it like baking privacy into the cake, not just sprinkling
it on top later.
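One hedged way to bake it in, sketched with made-up types (this is the type-state pattern, not a real Salesforce API): data without recorded consent cannot even be passed to a training function, and the compiler enforces it.

```rust
use std::marker::PhantomData;

// Zero-sized marker types for the consent state (type-state pattern).
struct ConsentMissing;
struct ConsentGranted;

struct CustomerData<State> {
    record: String,
    _state: PhantomData<State>,
}

impl CustomerData<ConsentMissing> {
    fn new(record: &str) -> Self {
        CustomerData { record: record.to_string(), _state: PhantomData }
    }

    // The ONLY way to obtain consented data is through this method.
    fn with_consent(self, consent_given: bool) -> Option<CustomerData<ConsentGranted>> {
        consent_given.then(|| CustomerData { record: self.record, _state: PhantomData })
    }
}

// Training only accepts consented data; enforced at compile time.
fn train_model(data: &CustomerData<ConsentGranted>) {
    println!("training on: {}", data.record);
}

fn main() {
    let raw = CustomerData::<ConsentMissing>::new("account 42");
    // train_model(&raw); // compile error: wrong consent state
    if let Some(consented) = raw.with_consent(true) {
        train_model(&consented);
    }
}
```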
Now, let's talk privacy.
Rust naturally protects data with its ownership model and memory safety.
It's like having a digital bouncer who makes sure data doesn't
sneak out to the wrong place.
Now add some cool privacy tricks.
Differential privacy: imagine a Salesforce dashboard showing sales trends,
but with the customers' details scrambled so no one's identity leaks.
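The standard trick is adding calibrated noise to aggregates. Here's a minimal sketch, assuming the rand crate; the query, the epsilon, and the numbers are all illustrative, not a tuned production setting.

```rust
use rand::Rng;

/// Draw one sample from a Laplace(0, scale) distribution.
fn laplace_noise(scale: f64) -> f64 {
    let u: f64 = rand::thread_rng().gen_range(-0.5..0.5);
    -scale * u.signum() * (1.0 - 2.0 * u.abs()).ln()
}

/// Differentially private count: the true count plus calibrated noise.
/// For a counting query the sensitivity is 1, so scale = 1 / epsilon.
fn dp_count(true_count: u64, epsilon: f64) -> f64 {
    true_count as f64 + laplace_noise(1.0 / epsilon)
}

fn main() {
    // e.g. "how many customers bought shoes this quarter?"
    let noisy = dp_count(1042, 0.5);
    println!("dashboard shows roughly {noisy:.0} purchases");
}
```

The dashboard still shows the trend, but no single customer's presence in the data can be confidently inferred from the output.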
Homomorphic encryption.
Sounds funny, right?
It's like being able to calculate your credit card bill without
even looking at the numbers.
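To make that concrete, here is a toy sketch of the Paillier scheme, which is additively homomorphic: multiplying two ciphertexts adds the hidden plaintexts. The primes below are absurdly small and completely insecure; this is for intuition only, and a real system would use an audited cryptography library.

```rust
/// Modular exponentiation: base^exp mod modulus.
fn mod_pow(mut base: u128, mut exp: u128, modulus: u128) -> u128 {
    let mut result = 1;
    base %= modulus;
    while exp > 0 {
        if exp & 1 == 1 {
            result = result * base % modulus;
        }
        base = base * base % modulus;
        exp >>= 1;
    }
    result
}

/// Modular inverse via the extended Euclidean algorithm.
fn mod_inv(a: i128, m: i128) -> i128 {
    let (mut old_r, mut r) = (a, m);
    let (mut old_s, mut s) = (1i128, 0i128);
    while r != 0 {
        let q = old_r / r;
        (old_r, r) = (r, old_r - q * r);
        (old_s, s) = (s, old_s - q * s);
    }
    (old_s % m + m) % m
}

fn main() {
    // Toy Paillier keypair; p and q are far too small to be secure.
    let (p, q) = (101u128, 113u128);
    let n = p * q;
    let n2 = n * n;
    let lambda = 2800u128; // lcm(p - 1, q - 1)
    let g = n + 1;
    let mu = mod_inv((lambda % n) as i128, n as i128) as u128;

    // Enc(m, r) = g^m * r^n mod n^2, with r coprime to n.
    let enc = |m: u128, r: u128| mod_pow(g, m, n2) * mod_pow(r, n, n2) % n2;
    // Dec(c) = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) / n.
    let dec = |c: u128| (mod_pow(c, lambda, n2) - 1) / n * mu % n;

    let (c_a, c_b) = (enc(42, 7), enc(58, 9)); // two "bill" amounts

    // Homomorphic addition: multiply ciphertexts, never see the numbers.
    let c_sum = c_a * c_b % n2;
    println!("decrypted total: {}", dec(c_sum)); // prints 100
}
```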
Secure multi-party computation.
Two companies compare customer overlap without actually
revealing their customer lists.
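Comparing overlap for real takes a private set intersection protocol, but the core building block of secure multi-party computation is secret sharing. Here's a toy sketch, assuming the rand crate, where two companies learn only a combined total and never each other's numbers.

```rust
use rand::Rng;

const Q: u64 = 1_000_003; // public modulus both parties agree on

/// Split a secret into two random shares that sum to it mod Q.
/// Either share alone looks like a uniformly random number.
fn share(secret: u64) -> (u64, u64) {
    let s1 = rand::thread_rng().gen_range(0..Q);
    ((secret + Q - s1) % Q, s1)
}

fn main() {
    let (a1, a2) = share(5_200); // company A keeps a1, sends a2 to B
    let (b1, b2) = share(7_800); // company B keeps b2, sends b1 to A

    // Each side adds the shares it holds; neither sees the other's count.
    let partial_a = (a1 + b1) % Q;
    let partial_b = (a2 + b2) % Q;

    // Only the combined result is ever revealed.
    println!("combined count: {}", (partial_a + partial_b) % Q); // 13000
}
```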
Federated learning.
Instead of moving all the raw data into one big bucket, the AI learns
on each local Salesforce org and just shares the learnings, like
students doing homework separately but comparing answers later.
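A toy sketch of that comparing step, plain federated averaging over made-up weight vectors (a real setup would also protect the updates themselves, for example with the noise tricks above):

```rust
/// Average model weights from several orgs without pooling raw data.
/// Each org trains locally and only ships its weight vector (the
/// "homework answers"), never the underlying customer records.
fn federated_average(local_weights: &[Vec<f64>]) -> Vec<f64> {
    let n = local_weights.len() as f64;
    (0..local_weights[0].len())
        .map(|d| local_weights.iter().map(|w| w[d]).sum::<f64>() / n)
        .collect()
}

fn main() {
    // Hypothetical weight updates from three local Salesforce orgs.
    let org_updates = vec![
        vec![0.10, 0.80, 0.30],
        vec![0.20, 0.70, 0.40],
        vec![0.30, 0.90, 0.20],
    ];
    let global = federated_average(&org_updates);
    println!("global weights: {global:?}"); // approx [0.2, 0.8, 0.3]
}
```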
So you avoid creepy moments like "why is my insurance
data being sold everywhere?"
You don't see moments like that when you build trust.
Now let's look at bias.
Bias is one of the trickiest parts of AI.
AI is not like a human.
It doesn't think like us; it cannot think like us.
It doesn't have an actual brain the way we do.
What it does is copy patterns from the past, and if the past
was unfair, the AI becomes unfair too. For example, a Salesforce lead scoring
model might keep favoring customers from New York or San Francisco just because
historically that's where deals closed.
Sorry, small towns, you are out of luck.
Or an AI chatbot may answer faster in English than in Spanish, not
because it's smarter, but because the training data was lopsided.
Rust helps by letting us build bias detection metrics, enforce fairness
rules, and continuously monitor results.
Think of it like running a spell checker: instead of catching typos,
it catches unfair treatment.
That's why Einstein Forecasting or Einstein GPT treats all customers equally,
not just the ones with the most data.
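As one concrete sketch of such a metric, here is a demographic parity check using the common four-fifths rule; the groups, outcomes, and threshold are all made up for illustration.

```rust
use std::collections::HashMap;

/// Share of positive predictions (e.g. "high-score lead") per group.
fn positive_rates(outcomes: &[(&str, bool)]) -> HashMap<String, f64> {
    let mut totals: HashMap<&str, (u32, u32)> = HashMap::new();
    for &(group, positive) in outcomes {
        let entry = totals.entry(group).or_insert((0, 0));
        entry.0 += 1;
        if positive {
            entry.1 += 1;
        }
    }
    totals
        .into_iter()
        .map(|(g, (n, pos))| (g.to_string(), pos as f64 / n as f64))
        .collect()
}

fn main() {
    // Hypothetical lead-scoring outcomes tagged by region.
    let outcomes = [
        ("big_city", true), ("big_city", true),
        ("big_city", true), ("big_city", false),
        ("small_town", true), ("small_town", false),
        ("small_town", false), ("small_town", false),
    ];
    let rates = positive_rates(&outcomes);
    let best = rates.values().cloned().fold(f64::MIN, f64::max);
    for (group, rate) in &rates {
        // Four-fifths rule: flag any group below 80% of the best rate.
        if *rate < 0.8 * best {
            println!("bias alert: {group} at {rate:.2} vs best {best:.2}");
        }
    }
}
```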
Now, let's be honest, nobody trusts a black box.
Imagine Salesforce AI tells you a customer will churn in two weeks.
You would ask, how do you know?
Rust gives us the speed to add explainability
without slowing things down.
Local explanation: why did this lead get a score of 92?
When we have a question like that, the AI looks at it and says:
because this customer opened five emails and scheduled a demo.
That is why the lead score is 92.
Global explanation: what's generally driving sales?
Maybe it's webinar attendance more than website clicks.
Counterfactual: what if this customer had one more support ticket?
Would that change the churn prediction?
This is like giving ChatGPT not just a voice, but also the ability to explain
itself, and that's how you build trust.
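As a sketch of what local and counterfactual explanations can look like in code, here is a toy linear lead-scoring model. The feature names and weights are invented to reproduce the score-of-92 example; real products use richer attribution methods.

```rust
/// Toy linear model: score = bias + sum(weight * feature value).
/// Features and weights are invented for illustration.
struct LeadModel {
    features: Vec<(&'static str, f64)>, // (name, weight)
    bias: f64,
}

impl LeadModel {
    fn score(&self, values: &[f64]) -> f64 {
        self.bias
            + self.features.iter().zip(values).map(|((_, w), v)| w * v).sum::<f64>()
    }

    /// Local explanation: each feature's contribution to this one score.
    fn explain(&self, values: &[f64]) {
        for ((name, w), v) in self.features.iter().zip(values) {
            println!("  {name}: contributes {:+.1}", w * v);
        }
    }
}

fn main() {
    let model = LeadModel {
        features: vec![("emails_opened", 8.0), ("demo_scheduled", 20.0)],
        bias: 32.0,
    };
    let customer = [5.0, 1.0]; // opened 5 emails, scheduled a demo

    println!("lead score: {}", model.score(&customer)); // 92
    model.explain(&customer);

    // Counterfactual: what if the demo had NOT been scheduled?
    println!("without the demo: {}", model.score(&[5.0, 0.0])); // 72
}
```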
Governance structures in Rust: who watches the AI?
That's governance.
Think of an Avengers-style framework.
Design phase: ethics rules are coded into the blueprint.
Development phase: audit trails that can't be erased, like
Salesforce field history tracking, but on steroids.
Deployment phase: real-time monitoring that doesn't slow down the apps.
Operations phase: continuous testing, because AI should never
be a set-it-and-forget-it system.
Governance means there is always someone holding the AI accountable.
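To show what an audit trail that can't be quietly erased might look like, here is a minimal, dependency-free sketch of a hash-chained log. The std hasher is not cryptographic, so a real deployment would chain entries with a cryptographic hash, but the tamper-evidence idea is the same.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// One audit record, chained to the previous record by hash.
#[derive(Hash)]
struct Entry {
    prev_hash: u64,
    action: String,
}

fn hash_entry(entry: &Entry) -> u64 {
    let mut hasher = DefaultHasher::new();
    entry.hash(&mut hasher);
    hasher.finish()
}

struct AuditLog {
    entries: Vec<Entry>,
}

impl AuditLog {
    fn record(&mut self, action: &str) {
        let prev_hash = self.entries.last().map_or(0, hash_entry);
        self.entries.push(Entry { prev_hash, action: action.to_string() });
    }

    /// Editing any earlier entry breaks every later link in the chain.
    fn verify(&self) -> bool {
        self.entries.windows(2).all(|w| w[1].prev_hash == hash_entry(&w[0]))
    }
}

fn main() {
    let mut log = AuditLog { entries: Vec::new() };
    log.record("model v3 deployed");
    log.record("prediction overridden by rep");
    println!("chain intact: {}", log.verify()); // true

    log.entries[0].action = "nothing to see here".to_string();
    println!("chain intact: {}", log.verify()); // false
}
```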
Let's look at one real story.
A big multinational needed a CRM recommendation engine that was
fair, private, and transparent.
We built it in Rust with federated learning to keep customer data local,
differential privacy to anonymize sensitive information, fairness rules
to stop bias in recommendations, and explainability, so every
suggestion comes with a reason.
So what was the result out of this?
One hundred percent GDPR and CCPA compliance, 93% less demographic bias,
27% higher customer trust scores, and zero privacy leaks for 18 months.
So imagine Salesforce Einstein recommendations working like this: they
wouldn't just suggest the right product, they would also explain why,
and customers would actually trust them.
So how do we put this into action?
Architectural patterns: think Salesforce Flows, but for AI pipelines,
with data, training, and inference all separated.
Development practices: Rust lints and fairness tests baked into your
DevOps pipeline.
Operational safeguards: dashboards and alerts in Salesforce to catch
when the AI drifts off track.
The good news: there are open-source, reusable components, so you don't
have to reinvent the wheel every time.
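For instance, a fairness test baked into the pipeline can be an ordinary Rust test that fails the CI build; run it with cargo test. The fixture data, the helper, and the 0.8 threshold below are invented for illustration.

```rust
/// Share of positive predictions in a labeled fixture set.
/// (Helper invented for this sketch; a real pipeline would score a
/// golden evaluation dataset with the actual model.)
fn positive_rate(predictions: &[bool]) -> f64 {
    predictions.iter().filter(|&&p| p).count() as f64 / predictions.len() as f64
}

#[cfg(test)]
mod fairness_tests {
    use super::positive_rate;

    // CI fails if the model favors one group beyond the 80% threshold.
    #[test]
    fn lead_scoring_respects_demographic_parity() {
        let group_a = [true, true, true, true, false]; // big-city leads
        let group_b = [true, true, false]; // small-town leads

        let (ra, rb) = (positive_rate(&group_a), positive_rate(&group_b));
        let ratio = ra.min(rb) / ra.max(rb);
        assert!(ratio >= 0.8, "disparate impact: ratio {ratio:.2} below 0.8");
    }
}
```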
Let's wrap up with three simple points.
One, Rust's safety features make it the perfect foundation for ethical AI.
Two, privacy, fairness, transparency, and governance must be built in
from the start.
Three, ethical AI isn't just the right thing to do; trust drives
adoption and success.
Think about it.
Would you rather use AI that is fast and sketchy, or one that's
powerful and trustworthy?
Most customers will always say powerful and trustworthy.
Thank you so much for your time today.
Before you leave, just remember: AI is like a superhero, but in every
movie I have seen, the superhero needed a sidekick.
This AI big boy also needs a sidekick, and that sidekick is ethics.
With Rust as the foundation and Salesforce as the platform, we can build
AI that is not just smart, but also fair, transparent, and trustworthy.
Thank you so much for listening to me today.