Conf42 Machine Learning 2025 - Online

- premiere 5PM GMT

AI’s Hidden History: Unpacking Bias, Power, and the Builders Who Shaped It


Abstract

Ever wondered how the early builders of AI shaped the tech we use today? Whether you’re a developer or researcher, this talk offers valuable insights into how our past influences the future of AI—and how we can learn from it to avoid the same mistakes.


Transcript

This transcript was autogenerated. To make changes, submit a PR.
AI isn't just a tool, it's a cultural force. And despite technological advancements in ai, a constant remains, and that is the human. Now, this talk is about those humans and the cultures they inhabit. So today, I'm gonna take you back into the 1980s when the research in this book was being conducted. This is Diana e Forsyth and anthropologists who for about eight years during the eighties and nineties, there was a fly on the wall in four US labs observing AI research in action. Now these labs specialized in creating medical AI tools like building systems to help migraine sufferers cope with their symptoms. The question will be, has much changed about the way we de design our AI systems 40 years later, I. This is what we're gonna explore through the lens of Diane. So Diana e exposed the power dynamics at play in these labs, and she gave us insights around the perception of what tech work is. Now, this talk isn't a historical C, but it's going to be a warning that if we don't change the way we build this technology soon, we may be doomed to make the same mistakes that our grandparents of AI did. So together today, I hope we can cover the key lessons for building a more inclusive, more responsible ai. A little note about anthropology, just in case you're not familiar with it, it's not what Ross did from friends. He's a paleontologist, and it's not what Indiana Jones did. He's an archeologist and a pretty dubious one at that. Hey, Indy, I hear the British Museum is still hiring. Now anthropology is the study of human experience, so that's culture and communities, including all the work and tools that we use. And then when you add tech mix, this is called digital anthropology or techno anthropology. So digital anthropology is the study of how we use and live with technology from a cultural perspective. Now, most of the work is done. When conducting anthropology is things like field work, so observing and interacting with people to get a better understanding of a person or a group. And this is what Diana did. So she would come into these AI labs and she would observe them and interview with 'em while they worked. Oh, and if you can't cope with not listening to an ai, talk about picture of the Terminator. I present you with an alternative Lady Terminator, a 1980s Indonesia film, where the main character starts off as an a anthropologist. Before turning into the Terminator. It's a fantastic film. Highly recommend it. It's absolutely crazy. Iman, in this day and age, you speak of legends. I'm an anthropologist. Huh? Iman, in this day age, you speak of legend. My ai an anthropologist, huh? In this day and age, you speak of legends. Anyways, I'm an anthrop anthropologist. As Diana's research developed through those eight years in the lab, she became increasingly concerned about the power dynamics influencing the end product of that ai. Namely that there was a lot of voices missing from the creation of it. So she explained that in conducting anthropological research in these labs, she was exposing key absent voices, as she described this as speaking truth to power. It wasn't always well received as speaking truth to power, however, so while her work earned her lots of accolades from her fellow anthropologists and people working in academia, it ruffled a lot of feathers amongst the AI researchers that she was observing in the lab as we'll. See, perhaps if they paid a bit more attention to the observation she was making, maybe we'd have better AI capabilities. 
Now, from one anthropologist to another: I'm Lianne Potter, your time-travelling guide, here to connect Diana's work with the work we're doing today in AI creation. Aside from being an anthropologist, I've also worked in tech for the last 10 years, first as a software developer, then I pivoted into cybersecurity, and I'm currently doing an MSc in AI and data science. Diana was once told that those who can't code shouldn't be in the lab. Unfortunately, I think many people in tech still feel that way today, but the danger is that as soon as you exclude, you delete the social. But what does it mean to delete the social? Humans are intricate beings. We're all layered. We have history and culture. We belong to multiple communities: our family communities, our friendship groups, our local and national groups, and of course work. Going to work is a community and a cultural event. So whenever we interact with anything or anyone, we carry all this cultural tapestry around with us. Sometimes our cultural experiences align, and sometimes they clash, and this also influences how we engage with technology. To delete the social is to remove what makes us human. And those creating the tools are part of the social fabric too; they have a significant impact on this, even if sometimes they don't see it. So Diana explored how the researchers' own assumptions were woven into the tools they created. She asked them how they viewed their work, and their answers were often very telling: there was doing AI and not doing AI. To them, this was doing AI: writing code, problem solving, building systems. But get an anthropologist to observe for a while, and you are presented with this picture. This, in order, is what the AI researchers actually did all day. I'll give you a moment to look through that. You can see it's there: lab meetings, research seminars, more meetings (it looks like my diary at work), going to conferences. And then on the other side, in the black background: teaching courses, taking courses, writing papers, doing admin. This is what they actually did all day. And as you can see, they were pretty selective about what they called work, because even though they themselves claimed that writing code, building systems and solving problems was what they mainly did, right at the very bottom you can see it was the thing they did least. So all the social elements of work were neglected. The meetings, the engaging with other people: they didn't see that as doing real work. So what does this omission mean for the social elements of the systems they built? Typically, when you don't value something like being social, you don't put any effort into it, or you can be really bad at it. And that's no good for the very social tools these technologies were meant to become, because these researchers were often very shy and introverted. Describing an interaction, one expert even said that they have dreadful communication skills and that their technical skills far exceed their need to be social. In my experience, and probably in yours too, we've all met technologists who prefer tech over people. But can you really succeed in delivering good tech without good communication? Diana's insights suggest that better communication could have prevented a lot of shelfware, and we'll go into more of that later. So if these AI researchers didn't see meetings as work, then what actually was work? It was hard science and seeking universal truths.
So anything outside the scientific method wasn't valid for AI, because they were building quantifiable, codifiable, testable systems, believing only technical solutions could solve problems. Coding and building were the only legitimate AI work in their eyes, with one researcher going so far as to suggest that AI research is something you do sitting alone in front of the computer for hours on end. Anything outside that was considered pseudo-work: sorry, meetings and discussions, for example. Disciplines like anthropology, psychology and philosophy were all dismissed as soft science, or even unscientific, with no place in the lab's AI development. This mindset made Diana's work as an anthropologist very challenging. Despite being a highly educated doctor herself, her work was seen as lesser. One lab member even described her as a Dictaphone with legs. It wasn't easy being an anthropologist in a technical space, and from my experience, it still isn't. So how does viewing their work as a purely logical and scientific endeavor delete the social? It's twofold. They failed to recognize that their work is inherently social and takes place in a highly social setting, the institution. And their worldview was decontextualized and idealized: they only saw their work for the aspects they enjoyed most, the ideal, not the reality. We all do this to an extent, but this was skewing their worldview, and in turn it influenced the products they were building, because their approach erased the nuances of the human experience, which is crucial for creating meaningful AI systems that are supposed to replicate the human experience. So the question is: if they don't value the social elements of their own work, what's stopping them from erasing the social aspects of other people's work? Turns out, not much. And that's exactly what happened. Now, this brings us to the section on power dynamics. The researchers were mostly male, white, and from privileged backgrounds. In the 1980s and nineties they were known as knowledge engineers, and their aim was to duplicate human expertise. They saw knowledge as something to extract, like a mineral or a diseased tooth, and make machine-readable. But how did this knowledge acquisition work? Without today's massive amounts of data, these researchers relied on very manual data extraction, often delivered through face-to-face interviews. These sessions would last for about one or two hours, occurring once or twice a fortnight, and they typically involved a single expert coming in, maybe a heart surgeon, for example, often another white man in power whose expertise was then taken at face value. They relied on one specialist to reflect the knowledge base of the entire topic. Going back to that heart surgeon example: getting a heart surgeon to tell you everything they know about heart attacks and building a system from there. As you can imagine, this opens up the potential for knowledge gaps, but also a lot of bias. Researchers treated knowledge as something that could be easily accessed with direct questions. They weren't interested in what really happened during heart surgery; they wanted to know what heart surgery was by the book. Now, Diana observed that even the warmest of personalities would adopt a really cold style during these knowledge elicitation interviews, and she would ask them: why was empathy switched off when you were interviewing people?
And one researcher replied: it just seems like we're doing a technical task, there isn't room for empathy. The knowledge-gathering process for them was transactional, not collaborative. It was medical questioning, really task-orientated, treating the expert as a brain to be mined, not really as someone to be engaged. As I said, they weren't often very good at interviewing, and some of the experts who came in to give their expertise often felt mistreated. One, and this is a direct quote, said that they felt like they'd been treated like a dog during the interview process. Using cold methods to gather data from people was ineffective. The engineers blamed the experts when they didn't get the desired knowledge, and they failed to recognize that a lot of knowledge isn't universal or rule-bound like code. Humans aren't stable entities to be codified in absolutes; we're just not. And today, much of the data is scraped from the internet and sourced from others, and we know how reliable online information can be. And then, obviously, we have to make a judgment: you've seen the thumbs up, are you happy with that response or not? It's very easy to train a model with reinforcement learning to give really rubbish answers, because the whole process of getting a good response, or a suitable response, out of an AI is incredibly subjective. Who am I to say that the information I've heard, or been given, about heart surgery is good enough or not good enough to go into the system? Who am I to click "yes, well done, Gemini, can you make sure you keep doing this"? And then that influences everyone else's experience. These activities, and the ones Diana observed, are again a form of deleting the social, because they're making judgment calls about whose knowledge counts, creating a significant power dynamic. A small group deciding what's important for us to know is a form of power. And these knowledge engineers were very aware of that power, but their end users were not. So the doctor down the line who was supposed to be using these AI tools didn't know that this whole AI software was based on just a handful of sources. And really, a handful of sources was best endeavors; usually it was one person or one textbook. The knowledge engineers' power was invisible to the end user, and that is just like it is today with black-box models. Imagine building a medical diagnosis tool today based on that kind of data set and that lack of transparency. But maybe we're not too far off. We might have a lot more data points, but as the old adage goes: shit in, shit out. We don't always know whether the models we're interacting with today are deleting the social until there's a scandal, like the Amazon recruiting tool or many other cases. I think power is at its most insidious when you can't see it or challenge it, or when we're given a false sense of security that it's working in our best interests. The knowledge engineers that Diana worked with viewed knowledge in absolutes. They were very comfortable with sweeping the perspectives of others under the AI rug. Not only that: actual end users, the clinicians who would eventually be using these tools day in, day out, were not a part of the data collection process.
Nurses, for example, were not considered valuable expert sources for medical informatics. These researchers had absolute power in delineating who counted in the build process, and, spoiler, it usually looked like them, came from the same schools and backgrounds they did. In doing so, they erased various cultures, races, classes and genders, to name a few. Muting these voices caused both ethical and practical issues, introducing challenges we're still contending with as we build AI today. They didn't see their power as a problem, however. The issue for them was a problem with user acceptance. After the 1970s AI winter, labs were thriving with funding and possibilities, building medical AI for clinicians and patients with really few constraints. It sounds perfect, but it's not quite, because when you have no constraints, you have no guardrails. You don't have to adhere to any guidelines. And when you don't have any guardrails or guidelines, you don't care about what you build, because if the money's going to come in anyway, in your head it's all one step towards scientific mastery, right? Just keep breaking things; if the money keeps flowing, one of these things will land. The consequence of blue-sky building was that they didn't care if the end users liked their products or thought they were very useful, because at the end of the day, their main goal was that they were doing AI. As a result, nearly everything Diana witnessed these researchers build in these labs ended up completely untouched. AI promised automation then, as it does today: quicker diagnoses, lower costs. But nobody wanted to use these tools. So when Diana asked about this, why is all this AI tech they'd been building not being used, where is it, she was told: it's all on the shelf. They had literally built shelfware. They were also looking in the wrong places for answers to why this kept happening; as Diana said, they were genuinely baffled by it. She pointed out: if the physicians need these systems so much, why aren't they using them? The problem was that these systems just weren't seen as useful. Either the researchers weren't building the right systems, or they weren't building the systems right. So Diana spent a lot of time trying to figure out why. She found that instead of examining their own mistakes, the AI researchers blamed the user. They would call them things like naive or computer-phobic. It was an end-user problem, not a problem with the tool itself, they believed, because they believed these systems worked fine if you just knew how to use them. Diana pointed out to them, however, that instead of blaming users, maybe they should realize the problem was with the systems themselves, not the people trying to use them. That suggestion was dismissed. So Diana observed that the researchers blamed user acceptance instead of recognizing the unusable systems, and this is because of how they perceived success and what success was, because success for them wasn't tied to real-life usefulness. One scientist even claimed that usefulness is not quantifiable. So seeking feedback from users was rare, and the users weren't observed using the systems. What do you have then? You have a system that's going to remain on the shelf. Now, if you recall, I said earlier that work is a social and cultural event. It really is, and it's very nuanced. So yes, a doctor
may do heart surgery, but there is more to their job than just doing heart surgery. They have meetings, they have consultations, they see patients, they update records. None of this was of any interest to these researchers, despite these very activities creating barriers to making this a successful piece of software. If you speak to a clinician today, they don't want to be bogged down with steps in an IT process; they want to help patients. This is a disconnect between AI designers and real-world users, deleting the social realities of work. Remember, data gathering was often a clinical engagement, pun intended, between the experts and the people questioning them, and they rarely set foot in clinical settings. One time, I read in the book, she asked the designer of one of these AI systems, which was being built for a hospital site: have you actually been to go and see them? And the researcher, in astonishment, said: oh no, I can't stand the sight of blood. So the clinicians' real job processes weren't part of the design either, and the AI teams rarely observed real-life problems. The AI tools were built on the engineers' knowledge or a single expert's input, not on specific data about the individuals who would actually be using the systems and facing these problems. Remember, humans aren't like computer programs. Think about it: if you were to describe your day job, you'd miss so many steps. Would you be able to remember that most of your day is meetings? If you're a coder now, or someone working in AI, would you say that you do more coding than going to meetings, doing admin, little things like that? Remember, the AI researchers in this case, the ones Diana was observing during the eighties and nineties, couldn't even grasp the realities of their own day jobs. So how were they going to imagine someone else's, which they'd not even seen? Work, as we all know, doesn't follow official procedures. I'm sure HR would love it to, but it doesn't. And these systems miss the tacit knowledge and the workarounds, and they lack accommodation for real-life complexities. As a result, knowledge "falls off a cliff", a term the AI researchers used when their systems hit their limits and faced unanticipated situations. For example, one system suggested that a male patient had an infection that can only occur in pregnant women; they forgot to code in that men don't get pregnant. So Diana's conclusion was that these engineers could choose what went into the knowledge base, but focused on an ideal scenario, and this made their systems fragile, just as they only focused on the ideal aspects of their own work. They didn't recognize that humans are not typical, and as such, systems need to be designed knowing that people will deviate from the ideal, will deviate from the script, will deviate from the happy path. Doing AI, in the end, really meant that this powerful group was able to delete the social and, as a result, limit these tools' potential. But what else were they deleting? The labs that Diana researched weren't just made up of men. In the 1980s and nineties, women made up 8% of the academic computer science profession in America. Today, it's 15%. Go progress. Women in the lab were often treated as other in this male-dominated field.
And I experienced something very similar on a podcast recently, where they introduced me as "Lianne Potter, female cybersecurity expert", to which I responded: do you refer to your male guests that way? They quickly changed the introduction. Diana observed that women in these labs were seen as second-class citizens and faced teasing and sexual harassment. In one instance, a file was placed on a female worker's desktop that played the orgasm scene from When Harry Met Sally on a loop and could not be stopped. When Diana asked the woman how she felt about it, she just said: it's just male territory. When Diana highlighted this gender divide, male lab workers often denied it or dismissed their female colleagues as being overly sensitive. After all, these scientists believed that science was objective and culture-free, arguing that science is gender-neutral; ergo, because they're scientists, they obviously can't be sexist. So, surely we've made some progress in recognizing women as significant contributors to tech and the AI space? Here is a snippet from the book that really hit home. This is Diana commenting on gender initiatives at the time, so this is probably the early nineties. She says: public efforts to increase the proportion of women in computer science are often treated as a pipeline issue. The idea is that encouraging more girls and young women to study science will solve the problem. However, this overlooks the fact that there are already women in this field who face unequal treatment. It's not just a pipeline problem of getting girls into tech; the reason women are leaving tech is the culture. How is this still an issue after 40 years? I would argue it is more than disappointing; it's actually dangerous. Nothing much has changed, and with AI technology moving so fast, we don't have the luxury of another 40 years to get this right. We need to act now. Diana, the anthropologist, tragically passed away in 1997 in a hiking accident, and I feel we are definitely poorer for not seeing her work continue in today's AI landscape. I'm sure she would have had so much to say about it. We need anthropologists to help AI researchers who focus solely on theoretical data. By immersing themselves in AI development, anthropologists can provide crucial insights into the social, cultural and ethical dimensions of AI. This bridges the gap between the technical aspects and human experiences, preventing harm, because an outsider's perspective is always valuable. Diana said this about her work: as an outside observer, I have a different perspective on the source of this problem. Medical information systems are not being accepted because they do not meet the needs of the consumers, and that difficulty in turn results from the way in which problem formulation, system design, system building and evaluation are understood and carried out in medical informatics. What I take away as an anthropologist from Diana's work is that the main problem with user acceptance was that those creating the AI systems valued the technology and its potential more than the humans using it. Let's not keep making that mistake, because we stand at a crossroads. We should learn from the AI pioneers of the eighties and nineties and embrace this exciting AI landscape with the human in mind; otherwise we risk not only AI harms and misaligned power, but also a lot of shelfware. Now, I'm hoping that in sharing her insights with you today, you'll be tempted to pick up a copy of this book.
This book is called Studying Those Who Study Us. I was lucky enough to start this project with the blessing of Diana's widower, who has been a great source of support and insight, and this is a picture of me and him; he came to see this talk when we presented it for the first time. And to prove that it's a really small world after all: I was posting about the work I was doing, a deep dive into Diana's research, and her niece got in touch with me. She's a software developer, and she's also doing an AI safety course at the moment, so it's great to see that AI family tradition continuing. I had a call with Diana's niece, who was reading Diana's book at the time, and I asked her what in Diana's research really resonated with her as a person working in tech, but also as someone witnessing this new AI revolution. What she said was that the hypocrisy is that a mostly male tech elite are the ones telling us how dangerous AI is, but they're also the ones in control. The real danger, then as now, isn't AI itself, but that the people who have the power to build it hold great power indeed. Diana said her work was about speaking truth to power. Now it's your turn. Go build better AI. Thank you very much.
...

Lianne Potter

Co-Founder, Podcast Host and Author @ Compromising Positions

Lianne Potter's LinkedIn account


