Curated by THEOUTPOST
On Tue, 23 Jul, 12:02 AM UTC
AI industry is influencing the world. Mozilla adviser Abeba Birhane is challenging its core values
"Scaling up" is a catchphrase in the artificial intelligence industry as tech companies rush to improve their AI systems with ever-bigger sets of internet data. It's also a red flag for Mozilla's Abeba Birhane, an AI expert who for years has challenged the values and practices of her field and the influence it's having on the world. Her latest research finds that scaling up the online data used to train popular AI image-generator tools disproportionately results in racist outputs, especially against Black men.

Birhane is a senior adviser in AI accountability at the Mozilla Foundation, the nonprofit parent organization of the free software company that runs the Firefox web browser. Raised in Ethiopia and living in Ireland, she is also an adjunct assistant professor at Trinity College Dublin. Her interview with The Associated Press has been edited for length and clarity.

Q: How did you get started in the AI field?

A: I'm a cognitive scientist by training. Cognitive science doesn't have its own department wherever you are studying it. So where I studied, it was under computer science. I was placed in a lab full of machine learners. They were doing so much amazing stuff, and nobody was paying attention to the data. I found that very amusing and also very interesting, because I thought data was one of the most important components to the success of your model. But I found it weird that people don't spend much attention or time asking, "What's in my dataset?" That's how I got interested in this space. And then eventually, I started doing audits of large-scale datasets.

Q: Can you talk about your work on the ethical foundations of AI?

A: Everybody has a view about what machine learning is about. So machine learners -- people from the AI community -- tell you that it doesn't have a value. It's just maths, it's objective, it's neutral and so on. Whereas scholars in the social sciences tell you that, just like any technology, machine learning encodes the values of those that are fueling it. So what we did was systematically study a hundred of the most influential machine learning papers to find out what the field actually cares about, and to do it in a very rigorous way.

Q: And one of those values was scaling up?

A: Scale is considered the holy grail of success. You have researchers coming from big companies like DeepMind, Google and Meta claiming that scale beats noise and scale cancels noise. The idea is that as you scale up, everything in your dataset should kind of even out, should kind of balance itself out, and you should end up with something like a normal distribution or something closer to the ground truth. That's the idea.

Q: But your research has explored how scaling up can lead to harm. What are some of those harms?

A: At least when it comes to hateful content or toxicity and so on, scaling these datasets also scales the problems that they contain. More specifically, in the context of our study, scaling datasets also scales up hateful content in the dataset. We measured the amount of hateful content in two datasets. Hateful content, targeted content and aggressive content increased as the dataset was scaled from 400 million to 2 billion. That was a very conclusive finding that shows that scaling laws don't really hold up when it comes to training data. (In another paper) we found that darker-skinned women, and men in particular, tend to be allocated the labels of suspicious person or criminal at a much higher rate.

Q: How hopeful or confident are you that the AI industry will make the changes you've proposed?

A: These are not just pure mathematical, technical outputs. They're also tools that shape society, that influence society. The recommendations are that we also incentivize and pay attention to values such as justice, fairness, privacy and so on. My honest answer is that I have zero confidence that the industry will take our recommendations. They have never taken any recommendations like this that actually encourage them to take these societal issues seriously. They probably never will. Corporations and big companies tend to act when it's legally required. We need very strong, enforceable regulation. They also react to public outrage and public awareness. If it gets to a state where their reputation is damaged, they tend to make change.
Copyright 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
Abeba Birhane, an adviser to Mozilla, is raising critical questions about the AI industry's impact on society and its core values. Her work focuses on the ethical implications of AI and the need for diverse perspectives in its development.
As artificial intelligence continues to shape our world, voices like Abeba Birhane's are becoming increasingly crucial in the tech landscape. Birhane, a cognitive scientist and adviser to Mozilla, is challenging the core values and practices of the AI industry, urging a more critical and inclusive approach to AI development.

Birhane's path into AI ethics reflects her distinctive perspective. Raised in Ethiopia and now based in Ireland, she brings a diverse background to her work. With a PhD in cognitive science from University College Dublin, she has become a prominent voice in the field of AI ethics.

Birhane's work questions the fundamental assumptions and practices of the AI industry. She argues that the current approach to AI development often overlooks crucial ethical considerations and fails to account for diverse perspectives.

One of her primary concerns is the industry's reliance on vast amounts of data, often collected without proper consent or consideration for privacy. Birhane challenges the notion that "more data is always better," pointing out the potential harms of this approach.

A key aspect of Birhane's advocacy is the importance of including diverse voices in AI development. She argues that the current AI landscape is dominated by a narrow group of individuals and institutions, primarily from the Global North. This lack of diversity, she contends, leads to AI systems that fail to account for the needs and experiences of marginalized communities.

Birhane emphasizes the need for ethical considerations to be at the forefront of AI development. She warns against the potential misuse of AI technologies, particularly in areas such as surveillance and social control. Her work highlights the importance of transparency, accountability and human rights in the development and deployment of AI systems.

Birhane's insights and advocacy are influencing discussions around AI governance and policy. Her work with Mozilla, a nonprofit known for its commitment to an open and accessible internet, is helping to shape approaches to responsible AI development.
As AI continues to evolve and impact various aspects of society, voices like Birhane's play a crucial role in ensuring that ethical considerations and diverse perspectives are not overlooked in the rush towards technological advancement.
© 2025 TheOutpost.AI All rights reserved