Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone.
My name is [inaudible]; I'm from Meta.
Today I'm excited to talk about how we can bring together AI, DevSecOps, ethics, and accessibility to create personalized learning pathways that truly support every student, especially those with learning needs.
This talk is built around one core belief: AI in education must be secure, transparent, and equitable if we want it to change lives.
So, where do we stand currently? What does the landscape look like, and what is the challenge?
Over 7.5 million US students receive services under IDEA, the Individuals with Disabilities Education Act.
These students still experience persistent achievement gaps, and traditional instruction simply wasn't built to adapt dynamically to cognitive, sensory, and communication differences.
The opportunity here is to adapt AI so we can personalize learning in real time, which can significantly improve reading outcomes for students, especially those with disabilities. That is what adapting AI for these students means.
Before jumping into how we can use AI in ed tech, there are a few things we want to bake in. How can we protect user data, given that we are dealing with sensitive data about students?
DevSecOps embeds security at each and every stage, enables scalable deployment, enforces compliance with FERPA, GDPR, and accessibility laws, and provides transparency through audit trails.
In education, trust isn't optional; it's essential. That's why DevSecOps needs to be baked into AI ed-tech services.
This is the overall picture of DevSecOps and how we want to do it, but we also need to measure its impact in ed tech.
Using AI, skill acquisition has accelerated by 41%, engagement has grown significantly, by 35%, and reading challenges can be detected as early as seven months.
These are the kinds of results that have become achievable recently, and they can have real long-term economic impact for these students.
Next: how do these AI-enabled systems handle ethics and threat modeling?
Essentially, AI-driven education comes with some unique risks. These include exposure of sensitive student data, model inversion attacks, biased outcomes, spoofing of assistive-technology interfaces, and insider threats.
These risks define the guardrails we must build into the system from day one. We wanted to build guardrails around these risks, and we built a system for the students who are using AI in their ed-tech tools.
Before that, though: how can we use AI in general?
This is an overall picture of how adaptive algorithms can personalize learning. There are three major ways adaptive systems personalize.
The first is modality adaptation. The system identifies whether a student learns best visually or auditorily, improving the student's engagement.
The second is difficulty calibration. Tasks continuously adjust to keep the student in their optimal learning zone, resulting in faster skill acquisition. Keeping students in that optimal zone is one of the most important pieces, so that they can learn as quickly as possible.
The third is scaffolding intelligence. What does this mean? The system provides the right level of support at the right time, through hints, examples, and explanations, and then gradually fades that support as it is no longer needed, so students can master those skills as quickly as possible. As the scaffolding gradually fades away, mastery increases. This reduces learned helplessness and boosts students' confidence.
That, in general, is how you can build adaptive algorithms for personalized learning.
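The difficulty-calibration and scaffolding ideas above can be sketched in a few lines. This is an illustrative sketch only: the 70–85% target band, the 1–10 difficulty scale, and the scaffold thresholds are my assumptions, not values from the talk.

```python
# Minimal sketch of difficulty calibration plus scaffolding fade.
# Band, scale, and thresholds are assumptions for illustration.
from collections import deque

class DifficultyCalibrator:
    def __init__(self, target_low=0.70, target_high=0.85):
        self.level = 5                      # difficulty on an assumed 1-10 scale
        self.recent = deque(maxlen=10)      # last 10 answers, True = correct
        self.target_low = target_low
        self.target_high = target_high

    def record(self, correct: bool) -> int:
        """Record one answer and return the (possibly adjusted) difficulty."""
        self.recent.append(correct)
        rate = sum(self.recent) / len(self.recent)
        if len(self.recent) >= 5:           # wait for a minimal sample
            if rate > self.target_high and self.level < 10:
                self.level += 1             # too easy: raise difficulty
            elif rate < self.target_low and self.level > 1:
                self.level -= 1             # too hard: lower difficulty
        return self.level

    def scaffold_level(self) -> str:
        """Fade support as mastery grows (scaffolding intelligence)."""
        rate = sum(self.recent) / max(len(self.recent), 1)
        if rate < 0.5:
            return "worked-example"         # strongest support
        if rate < 0.8:
            return "hint"
        return "none"                       # support fades at mastery
```

The key design point is the same one from the talk: the system nudges difficulty toward the optimal zone and withdraws support gradually rather than all at once.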
Okay, now coming to the security and privacy design of these systems: they have to be built securely. We must ensure all the compliance requirements are met, for example FERPA and GDPR. We also need to encrypt the data, use zero-trust security models, and continuously monitor access and model behavior, so that students don't have to trade privacy for personalization.
That is the overall design we want to have, not just for the students but for the system itself.
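To make the zero-trust and monitoring ideas concrete, here is an illustrative sketch only: it pseudonymizes student identifiers with a keyed hash and audits every access. The role policy and key handling are my assumptions; a real deployment would use a KMS-managed key and full encryption at rest.

```python
# Sketch: pseudonymized student IDs plus a zero-trust style access check
# that is verified and logged on every call. Policy and keys are assumed.
import hmac, hashlib, secrets
from datetime import datetime, timezone

PSEUDONYM_KEY = secrets.token_bytes(32)    # in production: from a key vault

def pseudonymize(student_id: str) -> str:
    """Replace a real student ID with a keyed, non-reversible token."""
    return hmac.new(PSEUDONYM_KEY, student_id.encode(), hashlib.sha256).hexdigest()

AUDIT_LOG = []

def read_record(actor: str, role: str, student_id: str, records: dict):
    """Check the role on every access (never trust by default), then log it."""
    allowed = role in {"teacher", "guardian"}          # assumed policy
    AUDIT_LOG.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "student": pseudonymize(student_id),           # no raw ID in logs
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{actor} ({role}) may not read student data")
    return records.get(pseudonymize(student_id))
```

Note that even the audit trail never stores the raw student ID, which is the "privacy without trading away personalization" point.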
So that is the overall picture of security and privacy. Coming next: you also need assistive technology along the way when building these systems, so that we are not leaving out core essentials like accessibility.
In particular, we need seamless screen reader support, text-to-speech, and speech-to-text input, so that students can easily understand the content.
With these things set up, students can see nearly a 50% increase in independent task completion and even a 40% increase in engagement.
So accessibility is both a core component and a performance amplifier, and we want to bridge these gaps with AI and assistive-technology integration.
Coming back to what I was just talking about: this increases engagement in general. When students can access content in their preferred modality, which we spoke about previously, they can navigate smoothly, which decreases frustration, and we also see a reduction in task abandonment.
The more intuitive the experience is, the deeper the learning. That is why we have to make sure all the systems are in place, the accessibility features, screen readers, speech-to-text and text-to-speech, so that students have a much more intuitive experience and deeper learning.
That is how we want to define AI in ed-tech systems. As an overall core goal, we want to make sure students don't feel demotivated, and that they have stronger focus and an improved retention rate.
Now, coming to how the algorithm itself should behave: in order to be trusted, the AI must be explainable. We should use interpretable models, audit them, provide dashboards for teachers, and provide clear explanations for families. That transparency has to be there so that no adaptation feels like a black box, especially because we are dealing with student data.
Beyond transparency, the other thing we need to take care of is mitigating bias. Bias in educational AI has real consequences. We need to combat it using diverse training data, fairness metrics, routine audits, and inclusive design practices.
The solution is proactive bias detection and mitigation, embedded into the development lifecycle. It's not a checkbox; it's a continuous responsibility, so that we keep mitigating bias in these systems.
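One of the fairness metrics such a routine audit could compute is a simple group outcome-rate gap. This is an illustrative sketch only: the outcome ("recommended for advanced material"), the group labels, and the 0.10 disparity threshold are my assumptions, not a standard from the talk.

```python
# Sketch of a routine fairness audit: compare a positive-outcome rate
# across student groups and flag the model if the gap is too large.
from collections import defaultdict

def outcome_rates(records):
    """records: list of (group, outcome) pairs, outcome is True/False."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += int(outcome)
    return {g: positives[g] / totals[g] for g in totals}

def fairness_audit(records, max_gap=0.10):
    """Fail the audit if any two groups' outcome rates differ by > max_gap."""
    rates = outcome_rates(records)
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": round(gap, 3), "passed": gap <= max_gap}
```

Running a check like this on every release, rather than once, is what makes bias mitigation continuous instead of a checkbox.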
Now, one more thing about how we look at these systems: this technology is not meant to replace educators or teachers. It is basically there to help teachers. What I mean is that it relieves them of a few things, for example automatic grading; using AI you can do automatic grading and adapt content.
That frees teachers to focus mostly on emotional connection, providing context, and providing individual support to students, so AI can really help in other ways while teachers keep that special connection with their students.
So AI augments teachers in the best way; it does not replace anyone. That is the main motivation we need to take into account, and that is how AI has to be baked into ed-tech systems.
Now we come to the blueprint for implementing this, and how DevSecOps can help us operationalize these things.
We need to look at a few things: infrastructure as code; automated security and accessibility testing through each and every scenario; checking that model signing and tamper detection are in place; continuous monitoring; and iterative, feedback-driven improvement. If anything goes out of line, we need to make sure the platform evolves for the better for students and educators.
That is the core blueprint for this AI ed-tech work.
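The model-signing and tamper-detection item in that blueprint can be sketched very simply. This is an illustrative sketch only: a real pipeline would use asymmetric signatures from a CI secret store (e.g. Sigstore-style tooling), but a keyed HMAC over the artifact bytes keeps the idea visible.

```python
# Sketch: sign a model artifact at build time, verify before serving,
# so a tampered model is rejected. Key handling is simplified on purpose.
import hmac, hashlib

SIGNING_KEY = b"ci-pipeline-secret"        # assumed; in production: CI secret store

def sign_model(model_bytes: bytes) -> str:
    """Produce a signature for the model artifact at build time."""
    return hmac.new(SIGNING_KEY, model_bytes, hashlib.sha256).hexdigest()

def verify_model(model_bytes: bytes, signature: str) -> bool:
    """Verify before deployment; constant-time compare avoids timing leaks."""
    expected = sign_model(model_bytes)
    return hmac.compare_digest(expected, signature)
```

The deployment step would refuse to serve any model whose bytes no longer match the signature recorded at build time.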
And how do we pipeline these adaptive learning models? The things we need to look at are the datasets and the models: making sure scanning, including secret scanning, takes place at each and every step; ensuring the FERPA and GDPR checks are in place; mitigating bias and drift through fairness tests; and continuously monitoring model behavior.
That is what we want for adaptive learning models in this DevSecOps pipeline.
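The drift-monitoring step in that pipeline can be sketched as a crude distribution check. This is an illustrative sketch only: real pipelines would use a proper statistical test (e.g. PSI or Kolmogorov–Smirnov), and the z-score-style threshold of 2.0 is my assumption.

```python
# Sketch of a drift monitor: compare the mean of a model's live inputs
# against the spread of its training baseline. Threshold is assumed.
from statistics import mean, stdev

def drift_check(baseline, live, threshold=2.0):
    """Flag drift if the live mean sits far outside the baseline spread."""
    base_mean, base_sd = mean(baseline), stdev(baseline)
    shift = abs(mean(live) - base_mean) / base_sd
    return {"shift": round(shift, 2), "drifted": shift > threshold}
```

A check like this would run continuously against the model's behavior in production, feeding back into the same pipeline.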
Within this same pipeline, the other thing we want to look at is how we can secure the student data lifecycle.
We need to collect minimally and encrypt thoroughly to make sure the data is secure. We also need to make sure the student's consent has been taken, delete logs and only store data for a certain period of time, and take parental consent and revoke the data if they no longer want it.
Privacy is non-negotiable in this case, so we need to make sure that the student data has a strict lifecycle.
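That strict lifecycle (minimal collection, consent, retention, revocation) can be sketched end to end. This is an illustrative sketch only: the 180-day retention window and the allowed-field list are my assumptions, not values from the talk.

```python
# Sketch of a strict student-data lifecycle: collect only allowed fields,
# require consent, delete on revocation, and purge past retention.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=180)                            # assumed window
ALLOWED_FIELDS = {"reading_level", "preferred_modality"}   # collect minimally

class StudentStore:
    def __init__(self):
        self._rows = {}    # student_id -> (stored_at, data)

    def collect(self, student_id, data, consented: bool):
        if not consented:
            raise PermissionError("no consent on file")
        minimal = {k: v for k, v in data.items() if k in ALLOWED_FIELDS}
        self._rows[student_id] = (datetime.now(timezone.utc), minimal)

    def revoke(self, student_id):
        """Consent revoked: delete immediately."""
        self._rows.pop(student_id, None)

    def purge_expired(self, now=None):
        """Delete anything past the retention window."""
        now = now or datetime.now(timezone.utc)
        self._rows = {sid: (t, d) for sid, (t, d) in self._rows.items()
                      if now - t <= RETENTION}

    def get(self, student_id):
        row = self._rows.get(student_id)
        return row[1] if row else None
```

The point of the allow-list is that over-collected fields are dropped before they are ever stored, so there is nothing extra to leak later.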
Along this process, the fact that each region has its own compliance requirements also comes into the picture, for example in the compliance score. We need to embed multiple compliance regimes: FERPA, GDPR, ADA, which is the Americans with Disabilities Act, and WCAG, the Web Content Accessibility Guidelines. We ensure these are in place by wiring the checks directly into the continuous integration and continuous deployment pipelines, and by making sure we are auditing everything, so if anything comes up, you are protecting the user data.
This makes compliance proactive rather than reactive. Compliance must be continuous.
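Wiring those checks into CI/CD is sometimes called "compliance as code", and a gate for it can be sketched as plain functions. This is an illustrative sketch only: the three rules below (alt text present, retention configured, encryption on) are simplified stand-ins I chose for real WCAG, GDPR, and FERPA controls.

```python
# Sketch of a compliance gate a CI/CD pipeline could run on every deploy.
# Rules are simplified stand-ins for real WCAG/GDPR/FERPA controls.

def check_wcag_alt_text(pages):
    """Every image must carry alt text (a WCAG-style check)."""
    return all(img.get("alt") for page in pages for img in page["images"])

def check_retention_configured(config):
    """GDPR-style storage limitation: a retention period must be set."""
    return config.get("retention_days", 0) > 0

def check_encryption(config):
    """FERPA-style safeguard: data at rest must be encrypted."""
    return config.get("encrypt_at_rest") is True

def compliance_gate(pages, config):
    """Block the deploy unless every check passes."""
    results = {
        "wcag_alt_text": check_wcag_alt_text(pages),
        "retention": check_retention_configured(config),
        "encryption": check_encryption(config),
    }
    return {"passed": all(results.values()), "results": results}
```

Because the gate runs on every deploy, a compliance regression is caught before release, which is exactly the proactive-rather-than-reactive point.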
This is what I was saying: compliance has to be balanced with innovation. DevSecOps transforms the tension between innovation and regulation into a strength. You have automated guardrails allowing teams to innovate quickly while ensuring trust and safety. The balance between those two has to be there, while making sure we stay in compliance.
These are some of the lessons for AI in regulated sectors. The key lessons include: security-first design prevents rework; accessibility issues must be treated as severity one, because we are dealing with diverse students with diverse needs, and accessibility is a core issue, as I said earlier, that also amplifies performance; audit for bias in production; and make privacy tools friendly, making sure everyone, especially the students and their parents, has access to their data.
These are things we need to keep in mind, because these are the crucial stakeholders, and we want to make sure everyone is safe using AI in ed tech and that we are not leaking any private data.
Yeah.
So that is what, like the trust, like building transparency, inclusion, ethical
thinking and like secure engineering.
So when DevSecOps, like guides adaptive learning, the result in systems.
Like safe, scalable, and like human centered.
So we need to build that trust using this AI enabled education.
That brings me to the close and the key takeaways, the practical path forward. Start with security and ethics. Design for diverse learners. Keep educators at the core, and commit to continuous improvement.
This presentation provides a blueprint. The tools are here, and now is the time to build AI in education responsibly. Embrace continuous improvement: DevSecOps enables rapid learning and refinement.
Now it's time to build AI in education responsibly, using some of these core blueprint ideas.
Thank you.
I would like to hear some of your questions; feel free to drop me a message or email me, and I'll try to respond as soon as possible.
Thank you.