4 Sources
[1]
AI industry is influencing the world. Mozilla adviser Abeba Birhane is challenging its core values
"Scaling up" is a catchphrase in the artificial intelligence industry as tech companies rush to improve their AI systems with ever-bigger sets of internet data. It's also a red flag for Mozilla's Abeba Birhane, an AI expert who for years has challenged the values and practices of her field and the influence it's having on the world. Her latest research finds that scaling up on online data used to train popular AI image-generator tools is disproportionately resulting in racist outputs, especially against Black men. Birhane is a senior adviser in AI accountability at the Mozilla Foundation, the nonprofit parent organization of the free software company that runs the Firefox web browser. Raised in Ethiopia and living in Ireland, she's also an adjunct assistant professor at Trinity College Dublin. Her interview with The Associated Press has been edited for length and clarity. Q: How did you get started in the AI field? A: I'm a cognitive scientist by training. Cog sci doesn't have its own department wherever you are studying it. So where I studied, it was under computer science. I was placed in a lab full of machine learners. They were doing so much amazing stuff and nobody was paying attention to the data. I found that very amusing and also very interesting because I thought data was one of the most important components to the success of your model. But I found it weird that people don't pay that much attention or time asking, 'What's in my dataset?' That's how I got interested in this space. And then eventually, I started doing audits of large scale datasets. Q: Can you talk about your work on the ethical foundations of AI? A: Everybody has a view about what machine learning is about. So machine learners -- people from the AI community -- tell you that it doesn't have a value. It's just maths, it's objective, it's neutral and so on. Whereas scholars in the social sciences tell you that, just like any technology, machine learning encodes the values of those that are fueling it. So what we did was we systematically studied a hundred of the most influential machine learning papers to actually find out what the field cares about and to do it in a very rigorous way. A: And one of those values was scaling up? Q: Scale is considered the holy grail of success. You have researchers coming from big companies like DeepMind, Google and Meta, claiming that scale beats noise and scale cancels noise. The idea is that as you scale up, everything in your dataset should kind of even out, should kind of balance itself out. And you should end up with something like a normal distribution or something closer to the ground truth. That's the idea. Q: But your research has explored how scaling up can lead to harm. What are some of them? A: At least when it comes to hateful content or toxicity and so on, scaling these datasets also scales the problems that they contain. More specifically, in the context of our study, scaling datasets also scales up hateful content in the dataset. We measured the amount of hateful content in two datasets. Hateful content, targeted content and aggressive content increased as the dataset was scaled from 400 million to 2 billion. That was a very conclusive finding that shows that scaling laws don't really hold up when it comes to training data. (In another paper) we found that darker-skinned women, and men in particular, tend to be allocated the labels of suspicious person or criminal at a much higher rate. 
Q: How hopeful or confident are you that the AI industry will make the changes you've proposed?

A: These are not just pure mathematical, technical outputs. They're also tools that shape society, that influence society. The recommendations are that we also incentivize and pay attention to values such as justice, fairness, privacy and so on. My honest answer is that I have zero confidence that the industry will take our recommendations. They have never taken any recommendations like this that actually encourage them to take these societal issues seriously. They probably never will. Corporations and big companies tend to act when it's legally required. We need very strong, enforceable regulation. They also react to public outrage and public awareness. If it gets to a state where their reputation is damaged, they tend to make change.
[2]
AI industry is influencing the world. Mozilla adviser Abeba Birhane is challenging its core values
"Scaling up" is a catchphrase in the artificial intelligence industry as tech companies rush to improve their AI systems with ever-bigger sets of internet data "Scaling up" is a catchphrase in the artificial intelligence industry as tech companies rush to improve their AI systems with ever-bigger sets of internet data. It's also a red flag for Mozilla's Abeba Birhane, an AI expert who for years has challenged the values and practices of her field and the influence it's having on the world. Her latest research finds that scaling up on online data used to train popular AI image-generator tools is disproportionately resulting in racist outputs, especially against Black men. Birhane is a senior adviser in AI accountability at the Mozilla Foundation, the nonprofit parent organization of the free software company that runs the Firefox web browser. Raised in Ethiopia and living in Ireland, she's also an adjunct assistant professor at Trinity College Dublin. Her interview with The Associated Press has been edited for length and clarity. Q: How did you get started in the AI field? A: I'm a cognitive scientist by training. Cog sci doesn't have its own department wherever you are studying it. So where I studied, it was under computer science. I was placed in a lab full of machine learners. They were doing so much amazing stuff and nobody was paying attention to the data. I found that very amusing and also very interesting because I thought data was one of the most important components to the success of your model. But I found it weird that people don't pay that much attention or time asking, 'What's in my dataset?' That's how I got interested in this space. And then eventually, I started doing audits of large scale datasets. Q: Can you talk about your work on the ethical foundations of AI? A: Everybody has a view about what machine learning is about. So machine learners -- people from the AI community -- tell you that it doesn't have a value. It's just maths, it's objective, it's neutral and so on. Whereas scholars in the social sciences tell you that, just like any technology, machine learning encodes the values of those that are fueling it. So what we did was we systematically studied a hundred of the most influential machine learning papers to actually find out what the field cares about and to do it in a very rigorous way. A: And one of those values was scaling up? Q: Scale is considered the holy grail of success. You have researchers coming from big companies like DeepMind, Google and Meta, claiming that scale beats noise and scale cancels noise. The idea is that as you scale up, everything in your dataset should kind of even out, should kind of balance itself out. And you should end up with something like a normal distribution or something closer to the ground truth. That's the idea. Q: But your research has explored how scaling up can lead to harm. What are some of them? A: At least when it comes to hateful content or toxicity and so on, scaling these datasets also scales the problems that they contain. More specifically, in the context of our study, scaling datasets also scales up hateful content in the dataset. We measured the amount of hateful content in two datasets. Hateful content, targeted content and aggressive content increased as the dataset was scaled from 400 million to 2 billion. That was a very conclusive finding that shows that scaling laws don't really hold up when it comes to training data. 
(In another paper) we found that darker-skinned women, and men in particular, tend to be allocated the labels of suspicious person or criminal at a much higher rate. Q: How hopeful or confident are you that the AI industry will make the changes you've proposed? A: These are not just pure mathematical, technical outputs. They're also tools that shape society, that influence society. The recommendations are that we also incentivize and pay attention to values such as justice, fairness, privacy and so on. My honest answer is that I have zero confidence that the industry will take our recommendations. They have never taken any recommendations like this that actually encourage them to take these societal issues seriously. They probably never will. Corporations and big companies tend to act when it's legally required. We need a very strong, enforceable regulation. They also react to public outrage and public awareness. If it gets to a state where their reputation is damaged, they tend to make change.
[3]
AI industry is influencing the world. Mozilla adviser Abeba Birhane is challenging its core values
"Scaling up" is a catchphrase in the artificial intelligence industry as tech companies rush to improve their AI systems with ever-bigger sets of internet data. It's also a red flag for Mozilla's Abeba Birhane, an AI expert who for years has challenged the values and practices of her field and the influence it's having on the world. Her latest research finds that scaling up on online data used to train popular AI image-generator tools is disproportionately resulting in racist outputs, especially against Black men. Birhane is a senior adviser in AI accountability at the Mozilla Foundation, the nonprofit parent organization of the free software company that runs the Firefox web browser. Raised in Ethiopia and living in Ireland, she's also an adjunct assistant professor at Trinity College Dublin. Her interview with The Associated Press has been edited for length and clarity. Q: How did you get started in the AI field? A: I'm a cognitive scientist by training. Cog sci doesn't have its own department wherever you are studying it. So where I studied, it was under computer science. I was placed in a lab full of machine learners. They were doing so much amazing stuff and nobody was paying attention to the data. I found that very amusing and also very interesting because I thought data was one of the most important components to the success of your model. But I found it weird that people don't pay that much attention or time asking, 'What's in my dataset?' That's how I got interested in this space. And then eventually, I started doing audits of large scale datasets. Q: Can you talk about your work on the ethical foundations of AI? A: Everybody has a view about what machine learning is about. So machine learners -- people from the AI community -- tell you that it doesn't have a value. It's just maths, it's objective, it's neutral and so on. Whereas scholars in the social sciences tell you that, just like any technology, machine learning encodes the values of those that are fueling it. So what we did was we systematically studied a hundred of the most influential machine learning papers to actually find out what the field cares about and to do it in a very rigorous way. A: And one of those values was scaling up? Q: Scale is considered the holy grail of success. You have researchers coming from big companies like DeepMind, Google and Meta, claiming that scale beats noise and scale cancels noise. The idea is that as you scale up, everything in your dataset should kind of even out, should kind of balance itself out. And you should end up with something like a normal distribution or something closer to the ground truth. That's the idea. Q: But your research has explored how scaling up can lead to harm. What are some of them? A: At least when it comes to hateful content or toxicity and so on, scaling these datasets also scales the problems that they contain. More specifically, in the context of our study, scaling datasets also scales up hateful content in the dataset. We measured the amount of hateful content in two datasets. Hateful content, targeted content and aggressive content increased as the dataset was scaled from 400 million to 2 billion. That was a very conclusive finding that shows that scaling laws don't really hold up when it comes to training data. (In another paper) we found that darker-skinned women, and men in particular, tend to be allocated the labels of suspicious person or criminal at a much higher rate. 
Q: How hopeful or confident are you that the AI industry will make the changes you've proposed? A: These are not just pure mathematical, technical outputs. They're also tools that shape society, that influence society. The recommendations are that we also incentivize and pay attention to values such as justice, fairness, privacy and so on. My honest answer is that I have zero confidence that the industry will take our recommendations. They have never taken any recommendations like this that actually encourage them to take these societal issues seriously. They probably never will. Corporations and big companies tend to act when it's legally required. We need a very strong, enforceable regulation. They also react to public outrage and public awareness. If it gets to a state where their reputation is damaged, they tend to make change.
[4]
AI Industry Is Influencing the World. Mozilla Adviser Abeba Birhane Is Challenging Its Core Values
"Scaling up" is a catchphrase in the artificial intelligence industry as tech companies rush to improve their AI systems with ever-bigger sets of internet data. It's also a red flag for Mozilla's Abeba Birhane, an AI expert who for years has challenged the values and practices of her field and the influence it's having on the world. Her latest research finds that scaling up on online data used to train popular AI image-generator tools is disproportionately resulting in racist outputs, especially against Black men. Birhane is a senior adviser in AI accountability at the Mozilla Foundation, the nonprofit parent organization of the free software company that runs the Firefox web browser. Raised in Ethiopia and living in Ireland, she's also an adjunct assistant professor at Trinity College Dublin. Her interview with The Associated Press has been edited for length and clarity. Q: How did you get started in the AI field? A: I'm a cognitive scientist by training. Cog sci doesn't have its own department wherever you are studying it. So where I studied, it was under computer science. I was placed in a lab full of machine learners. They were doing so much amazing stuff and nobody was paying attention to the data. I found that very amusing and also very interesting because I thought data was one of the most important components to the success of your model. But I found it weird that people don't pay that much attention or time asking, 'What's in my dataset?' That's how I got interested in this space. And then eventually, I started doing audits of large scale datasets. Q: Can you talk about your work on the ethical foundations of AI? A: Everybody has a view about what machine learning is about. So machine learners -- people from the AI community -- tell you that it doesn't have a value. It's just maths, it's objective, it's neutral and so on. Whereas scholars in the social sciences tell you that, just like any technology, machine learning encodes the values of those that are fueling it. So what we did was we systematically studied a hundred of the most influential machine learning papers to actually find out what the field cares about and to do it in a very rigorous way. A: And one of those values was scaling up? Q: Scale is considered the holy grail of success. You have researchers coming from big companies like DeepMind, Google and Meta, claiming that scale beats noise and scale cancels noise. The idea is that as you scale up, everything in your dataset should kind of even out, should kind of balance itself out. And you should end up with something like a normal distribution or something closer to the ground truth. That's the idea. Q: But your research has explored how scaling up can lead to harm. What are some of them? A: At least when it comes to hateful content or toxicity and so on, scaling these datasets also scales the problems that they contain. More specifically, in the context of our study, scaling datasets also scales up hateful content in the dataset. We measured the amount of hateful content in two datasets. Hateful content, targeted content and aggressive content increased as the dataset was scaled from 400 million to 2 billion. That was a very conclusive finding that shows that scaling laws don't really hold up when it comes to training data. (In another paper) we found that darker-skinned women, and men in particular, tend to be allocated the labels of suspicious person or criminal at a much higher rate. 
Q: How hopeful or confident are you that the AI industry will make the changes you've proposed? A: These are not just pure mathematical, technical outputs. They're also tools that shape society, that influence society. The recommendations are that we also incentivize and pay attention to values such as justice, fairness, privacy and so on. My honest answer is that I have zero confidence that the industry will take our recommendations. They have never taken any recommendations like this that actually encourage them to take these societal issues seriously. They probably never will. Corporations and big companies tend to act when it's legally required. We need a very strong, enforceable regulation. They also react to public outrage and public awareness. If it gets to a state where their reputation is damaged, they tend to make change. Copyright 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
Abeba Birhane, an adviser to Mozilla, is raising critical questions about the AI industry's impact on society and its core values. Her work focuses on the ethical implications of AI and the need for diverse perspectives in its development.
As artificial intelligence continues to shape our world, voices like Abeba Birhane's are becoming increasingly crucial in the tech landscape. Birhane, a cognitive scientist and adviser to Mozilla, is challenging the core values and practices of the AI industry, urging a more critical and inclusive approach to AI development [1].
Abeba Birhane's journey into the world of AI ethics is as unique as her perspective. Raised in Ethiopia and now based in Ireland, Birhane brings a diverse background to her work. With a PhD in cognitive science from University College Dublin, she has become a prominent voice in the field of AI ethics [2].
Birhane's work focuses on questioning the fundamental assumptions and practices of the AI industry. She argues that the current approach to AI development often overlooks crucial ethical considerations and fails to account for diverse perspectives [3].
One of her primary concerns is the industry's reliance on vast amounts of data, often collected without proper consent or consideration for privacy. Birhane challenges the notion that "more data is always better," pointing out the potential harms of this approach [1].
A key aspect of Birhane's advocacy is the importance of including diverse voices in AI development. She argues that the current AI landscape is dominated by a narrow group of individuals and institutions, primarily from the Global North. This lack of diversity, she contends, leads to AI systems that fail to account for the needs and experiences of marginalized communities [4].
Birhane emphasizes the need for ethical considerations to be at the forefront of AI development. She warns against the potential misuse of AI technologies, particularly in areas such as surveillance and social control. Her work highlights the importance of transparency, accountability, and human rights in the development and deployment of AI systems [2].
Birhane's insights and advocacy are influencing discussions around AI governance and policy. Her work with Mozilla, a nonprofit known for its commitment to an open and accessible internet, is helping to shape approaches to responsible AI development [3].
As AI continues to evolve and impact various aspects of society, voices like Birhane's play a crucial role in ensuring that ethical considerations and diverse perspectives are not overlooked in the rush towards technological advancement.
Summarized by Navi