Curated by THEOUTPOST
On Thu, 14 Nov, 4:06 PM UTC
AI could help scale humanitarian responses. But it could also have big downsides
NEW YORK (AP) -- As the International Rescue Committee copes with dramatic increases in displaced people in recent years, the refugee aid organization has looked for efficiencies wherever it can -- including using artificial intelligence.

Since 2015, the IRC has invested in Signpost -- a portfolio of mobile apps and social media channels that answer questions in different languages for people in dangerous situations. The Signpost project, which includes many other organizations, has reached 18 million people so far, but the IRC wants to significantly increase its reach by using AI tools.

Conflict, climate emergencies and economic hardship have driven up demand for humanitarian assistance, with more than 117 million people forcibly displaced in 2024, according to the United Nations refugee agency. As humanitarian organizations encounter more people in need, they are also facing enormous funding shortfalls. The turn to artificial intelligence technologies is in part driven by this massive gap between needs and resources.

To meet its goal of reaching half of displaced people within three years, the IRC is building a network of AI chatbots to increase the capacity of its humanitarian officers and the local organizations that directly serve people through Signpost. For now, the project operates in El Salvador, Kenya, Greece and Italy and responds in 11 languages. It draws on a combination of large language models from some of the biggest technology companies, including OpenAI, Anthropic and Google. The chatbot response system also uses customer service software from Zendesk and receives other support from Google and Cisco Systems.

Beyond developing these tools, the IRC wants to extend this infrastructure to other nonprofit humanitarian organizations at no cost, creating shared technology resources that less technically focused organizations could use without having to negotiate directly with tech companies or manage the risks of deployment.

"We're trying to really be clear about where the legitimate concerns are but lean into the optimism of the opportunities and not also allow the populations we serve to be left behind in solutions that have the potential to scale in a way that human to human or other technology can't," said Jeannie Annan, the International Rescue Committee's chief research and innovation officer.

The responses and information that Signpost chatbots deliver are vetted by local organizations to be up to date and sensitive to the precarious circumstances people could be in. One example query the IRC shared involves a woman from El Salvador traveling through Mexico to the United States with her son, looking for shelter and for services for her child. The bot provides a list of providers in the area where she is. More complex or sensitive queries are escalated for humans to respond to.

The most important potential downside of these tools would be that they don't work. For example, what if the situation on the ground changes and the chatbot doesn't know? It could provide information that's not just wrong, but dangerous.

A second issue is that these tools can amass a valuable honeypot of data about vulnerable people that hostile actors could target. What if a hacker succeeds in accessing data with personal information, or if that data is accidentally shared with an oppressive government? The IRC said it has agreed with its technology providers that none of their AI models will be trained on the data that the IRC, the local organizations or the people they are serving generate. They've also worked to anonymize the data, including removing personal information and location.

As part of the Signpost.AI project, the IRC is also testing tools like a digital automated tutor and maps that can integrate many different types of data to help prepare for and respond to crises.

Cathy Petrozzino, who works for the not-for-profit research and development company MITRE, said AI tools do have high potential, but also high risks. To use these tools responsibly, she said, organizations should ask themselves: Does the technology work? Is it fair? Are data and privacy protected?

She also emphasized that organizations need to convene a range of people to help govern and design the initiative -- not just technical experts, but people with deep knowledge of the context, legal experts, and representatives from the groups that will use the tools.

"There are many good models sitting in the AI graveyard," she said, "because they weren't worked out in conjunction and collaboration with the user community."

For any system that has potentially life-changing impacts, Petrozzino said, groups should bring in outside experts to independently assess their methodologies. Designers of AI tools need to consider the other systems their tools will interact with, she said, and they need to plan to monitor the model over time.

Consulting with displaced people or others that humanitarian organizations serve may increase the time and effort needed to design these tools, but not having their input raises many safety and ethical problems, said Helen McElhinney, executive director of CDAC Network. It can also unlock local knowledge.

People receiving services from humanitarian organizations should be told if an AI model will analyze any information they hand over, she said, even if the intention is to help the organization respond better. That requires meaningful and informed consent, she said. They should also know if an AI model is making life-changing decisions about resource allocation and where accountability for those decisions lies, she said.

Degan Ali, CEO of Adeso, a nonprofit in Somalia and Kenya, has long been an advocate for changing the power dynamics in international development to give more money and control to local organizations. She asked how the IRC and others pursuing these technologies would overcome access issues, pointing to the week-long power outages caused by Hurricane Helene in the U.S. Chatbots won't help when there's no device, internet or electricity, she said.

Ali also warned that few local organizations have the capacity to attend big humanitarian conferences where the ethics of AI are debated. Few have staff both senior enough and knowledgeable enough to really engage with these discussions, she said, though they understand the potential power and impact these technologies may have.

"We must be extraordinarily careful not to replicate power imbalances and biases through technology," Ali said. "The most complex questions are always going to require local, contextual and lived experience to answer in a meaningful way."

___

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP's text archives.

___

Associated Press coverage of philanthropy and nonprofits receives support through the AP's collaboration with The Conversation US, with funding from Lilly Endowment Inc. The AP is solely responsible for this content. For all of AP's philanthropy coverage, visit https://apnews.com/hub/philanthropy.
The International Rescue Committee is testing AI-powered chatbots to expand its Signpost project, aiming to reach more displaced people. While the technology offers potential benefits, it also raises concerns about data security and ethical implementation.
The International Rescue Committee (IRC) is leveraging artificial intelligence to address the growing demand for humanitarian assistance amid increasing global displacement. With more than 117 million people forcibly displaced in 2024, the IRC is exploring AI-powered solutions to bridge the gap between needs and resources.
Since 2015, the IRC has invested in Signpost, a portfolio of mobile apps and social media channels providing multilingual assistance to people in dangerous situations. With Signpost having reached 18 million people so far, the IRC now aims to significantly expand its reach using AI tools.
The organization is testing a network of AI chatbots to enhance the capacity of humanitarian officers and local organizations. The pilot project, currently operational in El Salvador, Kenya, Greece, and Italy, responds in 11 languages. It uses large language models from tech companies including OpenAI, Anthropic, and Google, along with customer service software from Zendesk.
Jeannie Annan, IRC's Chief Research and Innovation Officer, emphasizes the potential of AI to scale humanitarian responses:
"We're trying to really be clear about where the legitimate concerns are but lean into the optimism of the opportunities and not also allow the populations we serve to be left behind in solutions that have the potential to scale in a way that human to human or other technology can't." 4
The IRC aims to extend this technological infrastructure to other nonprofit humanitarian organizations at no cost, creating shared resources for less technically focused organizations.
While AI offers promising solutions, it also presents significant risks:
- The tools may simply fail: if the situation on the ground changes and the chatbot doesn't know, it could provide information that is not just wrong but dangerous.
- The systems can amass a valuable honeypot of data about vulnerable people that hostile actors could target, whether through a breach or through data accidentally shared with an oppressive government.
To mitigate these risks, the IRC has implemented several measures:
- It has agreed with its technology providers that none of their AI models will be trained on data generated by the IRC, the local organizations or the people they serve.
- It has worked to anonymize the data, including removing personal information and location.
Cathy Petrozzino from MITRE emphasizes the importance of responsible AI implementation, urging organizations to consider technology effectiveness, fairness, and data privacy. She advocates for diverse input in governing and designing AI initiatives, including technical experts, context specialists, legal experts, and user representatives.
Helen McElhinney, executive director of CDAC Network, stresses the importance of consulting with displaced people and obtaining informed consent for AI-driven services. She argues that while this may increase development time, it is crucial for addressing safety and ethical concerns.
As part of the Signpost.AI project, the IRC is also exploring other AI-powered tools, such as digital automated tutors and data-integrating maps for crisis preparation and response. These developments highlight the potential for AI to transform humanitarian aid, while also underscoring the need for careful implementation and ongoing evaluation to ensure ethical and effective deployment in serving vulnerable populations.