Curated by THEOUTPOST
On Tue, 17 Dec, 12:02 AM UTC
2 Sources
[1]
Collaborative power of AI and citizen science can advance Sustainable Development Goals
Citizen science and artificial intelligence (AI) offer immense potential for tackling urgent sustainability challenges, from health to climate change. Combined, they offer innovative solutions to accelerate progress on the UN Sustainable Development Goals (SDGs). IIASA researchers explored the synergies between citizen science and AI, highlighting how integrating citizen science data and approaches into AI can enhance sustainable development monitoring and achievement while mitigating AI risks.

The SDGs were launched in 2015 to guide global efforts toward sustainability by 2030. As this deadline nears, however, many countries still lack the data needed to track SDG progress. Data are missing for nearly half of the 92 environmental indicators, only 15% of targets are on track, and all SDG targets suffer from insufficient data. Other challenges include poor data quality, limited data sharing, infrequent data collection, and a lack of local data, all of which hinder targeted interventions.

The perspective piece, authored by IIASA researchers and published in Nature Sustainability, explores how combining the collaborative strengths of citizen science with AI can enhance both SDG monitoring and achievement. Citizen science is already contributing to the SDGs by helping to address data gaps through public participation in scientific research. Successful applications have been demonstrated for SDGs 3 (good health and well-being), 11 (sustainable cities and communities), 14 (life below water), and 15 (life on land). However, despite increasing interest from the UN, National Statistical Offices (NSOs), and government agencies, challenges around data quality, limited awareness, and legal frameworks continue to restrict the integration of citizen science data into SDG monitoring and reporting, and ultimately its use in informing policy decisions.

In parallel, recent advancements in AI have sparked interest in its potential to support sustainable development and address the data challenges faced by NSOs and international organizations. AI's major contributions to SDG progress include rapid analysis of large datasets, enhanced data accessibility, efficient data collection, task automation, real-time data and insights, and improved data visualization, potentially in a more cost-efficient way. Nonetheless, AI poses challenges and risks, including biases in training data that can produce unreliable results. The authors propose that citizen science approaches can help mitigate AI risks by providing more localized and disaggregated, and thus more representative, data.

"AI algorithms require large amounts of data, yet many parts of the world, especially the Global South, face data shortages. This lack of data, especially local data, can lead to AI models that don't reflect specific local contexts, resulting in inaccurate findings, biases, and widening disparities between the Global North and South, as well as within countries," explains Dilek Fraisl, lead author of the perspective piece and researcher in the Novel Data Ecosystems for Sustainability Research Group of the IIASA Advancing Systems Analysis Program. "Citizen science can help address this gap by providing more local and thus representative data, which can help improve the accuracy of AI results." Fraisl further explains that AI models are only as reliable as the data they are trained on, and any biases in this data can cause misleading results.
So, while AI has great potential, its benefits will only be fully realized if its biases and limitations are carefully addressed. The recent adoption of the Global Digital Compact within the UN's Pact for the Future, a framework outlining principles, objectives, and actions for advancing an open, free, secure and human-centered digital future for all, highlights the need for global cooperation in AI governance. This framework emphasizes AI's role in achieving sustainable development while also warning of its risks, such as potential threats to human rights. Incorporating citizen science approaches into AI can be a crucial step towards addressing these risks and ensuring that AI serves the common good. "The integration of citizen science and AI offers a promising path forward in SDG monitoring and achievement. When used together, AI's analytical power and citizen science's contextual relevance create synergies that can address sustainability challenges more effectively. However, careful attention to inclusivity, representation, and governance is essential to harnessing these tools in a way that genuinely benefits all," concludes Fraisl.
[2]
Leveraging the collaborative power of AI and citizen science for sustainable development - Nature Sustainability
Future developments in AI and citizen science are likely to bring a wealth of new opportunities and varied applications beyond those outlined here. For example, generative AI can change how citizen science applications are developed and transform how citizen scientists interact with them, transitioning these applications from static to conversational interfaces. An earlier study identified 134 SDG targets that AI could help to achieve, but also found that the application of AI could impede 59 SDG targets. For example, the use of AI may result in the need for much higher job qualifications, which would widen already-existing inequalities and prevent the relevant SDGs from being met. Here we discuss some of these AI challenges and the role citizen science approaches can play in helping to address them.

Large amounts of data are needed to train AI algorithms, yet a lack of data is a prevalent issue in many parts of the world, particularly in the Global South. For example, in 2023, 40% of SDG indicators in the Asia and Pacific region had no data available, while a further 10% lacked sufficient data, with only one data point available. Where data are available, they are often poorly disaggregated by location, sex, gender, disability or other factors, making it challenging to understand the disparities between demographic groups in society and to target interventions at those most in need. In the context of AI and SDG monitoring, this lack of data can result in the use of algorithms trained on data that do not reflect specific local circumstances. This can compromise the accuracy of findings, introduce biases, increase hallucinations and widen already-existing disparities between the Global North and the Global South, as well as within and between countries.

To illustrate this, a recent study showed that an AI algorithm developed in Europe can identify marine plastic litter along the coastline of Ghana using drone imagery. This can support the relevant monitoring activities under SDG 14 (life below water) and address key policy gaps in the country. The same study highlights the importance of refining the algorithm with additional local data so that it can identify context-specific litter items, since the results derived from these data could have substantial policy implications. More specifically, while this is uncommon in Europe, drinking water in Ghana is stored and sold in plastic water sachets. An algorithm developed in Europe will not recognize this item unless it is trained with local data; more, and more local, training data are therefore needed so that the algorithm can accurately recognize and classify items specific to the local context.

Citizen science approaches can help to mitigate the lack of local and representative data and increase the accuracy of AI algorithms, provided their own potential limitations and challenges, elaborated later, are addressed. Citizen science can improve both the availability and the quality of data for SDG monitoring in a more cost-efficient way than traditional data sources such as censuses and surveys. Citizen science approaches can be especially beneficial in addressing the risk of growing disparities due to a lack of local data, particularly for the Global South and for marginalized and hard-to-reach individuals and communities.
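To make the refinement step described for the Ghana example more concrete, the sketch below shows one way a pre-trained litter classifier could be fine-tuned on locally collected, citizen-labelled images so that it learns context-specific classes such as water sachets. This is a minimal, hypothetical PyTorch example; the folder names, class labels and model choice are assumptions for illustration and not the actual algorithm or pipeline used in the study.

```python
# Hypothetical sketch: fine-tuning a pre-trained litter classifier with
# citizen-science-labelled local images (e.g. adding a "water_sachet" class).
# Paths, classes and model choice are illustrative assumptions only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Local images labelled by citizen scientists, one folder per class,
# e.g. local_litter/water_sachet/, local_litter/plastic_bottle/, ...
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
local_data = datasets.ImageFolder("local_litter", transform=transform)
loader = DataLoader(local_data, batch_size=32, shuffle=True)

# Start from a generic pre-trained backbone (standing in for a model trained
# on European imagery) and replace the classification head so that it covers
# the locally observed litter classes, including the water sachet.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(local_data.classes))

# Freeze the backbone and train only the new head on the local data.
for param in model.parameters():
    param.requires_grad = False
for param in model.fc.parameters():
    param.requires_grad = True

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The essential point of such a setup is that the new class labels come from locally collected, citizen-classified data, which is what allows the model to recognize items that do not appear in the original training imagery.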
For example, because citizen science initiatives are conducted primarily at local and community levels, they can collect data that take into account the unique circumstances and nuances of the local area. More specifically, in the aforementioned citizen science project in Ghana, participants collect litter from their local beaches, classifying, counting and recording each item they find by litter type. These kinds of data, gathered by citizen scientists in the field, are critical for understanding and enhancing the accuracy of the AI algorithm to be employed in the second stage of the project, which, if funded, will involve producing litter density maps for the entire coastline of Ghana using drone imagery and AI. Participants will contribute by classifying the drone imagery using an application for rapid image classification. This will provide input data to the AI algorithm for recognizing local items, further addressing the lack of local data and enhancing the algorithm's accuracy (Box 1).

AI, in all its forms, can exhibit and emphasize biases that exist in society, such as those related to race, colour, gender, disability and ethnic origin. A variety of factors can cause bias in AI models, such as training AI algorithms on biased data. Such biases are documented in the literature and are known to influence AI outputs. Specific examples include AI products connecting the word 'Africa' with poverty, or 'poor' with dark skin tones, as well as portraying housekeepers as people of colour and flight attendants as women, in proportions that can exceed actual observations.

Citizen science approaches can be leveraged to address such biases by increasing the availability of data that reflect realities rather than prejudices. Citizen science can also offer data disaggregated by location, gender, disability, race and other demographic aspects; such data are currently lacking but are crucial for achieving the 'leaving no one behind' principle of the SDG agenda. To ensure inclusiveness and address potential biases in their methods, many citizen science projects go above and beyond to engage underrepresented and vulnerable individuals and communities. Examples of their practices include organizing community events and interacting with community leaders and other key stakeholders; translating project materials such as mobile phone applications into the multiple languages spoken by their target groups in the same country or neighbourhood; using paper data sheets and pens alongside smartphone applications and other technologies to ensure the participation of individuals with varying degrees of access to such technologies; using voice-recording services or images so that those who are illiterate can report data; and incorporating sign languages for the hearing impaired. Furthermore, many citizen science projects collect demographic data from their participants to determine whether they have created a truly inclusive project and representative results. All of these practices can help to improve the representativeness of the input data used to train AI algorithms for high-accuracy outcomes.

In addition, citizen science approaches can be used to test the accuracy of AI algorithms as a quality assurance and control measure. For example, citizen science projects could be implemented to enable individuals to report instances of bias and discriminatory outcomes that they encounter when using AI tools, similar to the Public Editor project (https://www.publiceditor.io/).
In Public Editor, following some training, citizen scientists identify biased ideas and messaging in the daily news and report on any material they believe to contain biased thinking and messages. A similar approach could be used to encourage the public to report instances of biased behaviour that they encounter in their daily lives, this time in AI applications. This can help to track bias and potential human rights violations in AI at larger scales, hold those who develop such AI applications accountable, and ultimately improve these and future applications (Box 1).

AI development typically lacks public engagement, which is important for ensuring representativeness, reducing biases and enhancing the quality of AI results. Despite AI having an increasing impact on everyone's lives, few people are involved in the major decisions about its growth. This is a social justice issue that requires the involvement of a much larger range of people in the discussion and development of AI, especially people in the Global South and those who are typically marginalized. Global-level efforts to govern AI and ensure its ethical and responsible use explicitly call for multistakeholder partnerships that involve everyone, especially local communities, in discussions related to AI development, use and governance. Such partnerships can be established by integrating citizen science methodologies, especially those that involve co-creation, into AI. This is important to ensure inclusive partnerships that engage everyone, particularly those who are typically marginalized and harder to reach.

By meaningfully engaging the public in AI development and use, citizen science can also be an effective means of fighting misinformation (unintentional) and disinformation (intentional) (https://www.publiceditor.io/). As AI penetrates our daily lives, disinformation will have an increasing impact on fundamental freedoms and human rights by undermining people's privacy and democracy, which are guiding principles of the SDGs. People are already being misled by the widespread dissemination of disinformation created by generative AI. For example, according to the World Health Organization, events during the COVID-19 pandemic resulted in an infodemic, an overabundance of information that spreads misinformation and disinformation, impeding an efficient public health response and fostering misunderstanding and distrust among the population. According to the World Economic Forum, AI-related misinformation and disinformation are considered the second greatest global challenge likely to worsen in the short term.

The potential of AI to address global challenges and help achieve the SDGs can be realized only if AI systems can be trusted. Citizen science approaches, by engaging the public and all relevant actors in AI initiatives and AI-related conversations, can help to increase media, information and AI literacy among members of the public (Box 1).
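As a rough illustration of how a Public Editor-style reporting channel for AI bias might be structured, the sketch below defines a simple report record and tallies submissions by application and bias category, so that recurring patterns across many independent reporters become visible. Every name and field here is a hypothetical assumption for illustration; this is not code from Public Editor or from the perspective piece.

```python
# Hypothetical sketch of a citizen-science channel for reporting biased AI
# outputs, loosely inspired by the Public Editor model described above.
# Record fields and example entries are illustrative assumptions only.
from collections import Counter
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class BiasReport:
    app_name: str          # which AI application produced the output
    category: str          # e.g. "gender", "race", "disability", "other"
    description: str       # what the reporter observed
    country: str           # where the reporter is located
    reported_at: datetime

def summarize(reports: list[BiasReport]) -> Counter:
    """Tally reports by (application, bias category)."""
    return Counter((r.app_name, r.category) for r in reports)

# Example: three citizen-submitted reports echoing the bias patterns
# described in the text (flight attendants, housekeepers).
reports = [
    BiasReport("image-generator-x", "gender",
               "Prompt 'flight attendant' returned only women.",
               "GH", datetime.now(timezone.utc)),
    BiasReport("image-generator-x", "race",
               "Prompt 'housekeeper' returned only people of colour.",
               "KE", datetime.now(timezone.utc)),
    BiasReport("chat-assistant-y", "gender",
               "Assumed the nurse in my question was female.",
               "AT", datetime.now(timezone.utc)),
]

for (app, category), count in summarize(reports).most_common():
    print(f"{app}: {count} report(s) of {category} bias")
```

Aggregated in this way, individual reports could feed the kind of larger-scale bias tracking and accountability described above.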
Researchers explore how combining artificial intelligence with citizen science can accelerate progress on UN Sustainable Development Goals, addressing data gaps and mitigating AI risks.
In a groundbreaking perspective piece published in Nature Sustainability, researchers from the International Institute for Applied Systems Analysis (IIASA) have highlighted the immense potential of combining artificial intelligence (AI) and citizen science to address urgent sustainability challenges [1]. This innovative approach could significantly accelerate progress towards the United Nations Sustainable Development Goals (SDGs), offering solutions to complex issues ranging from health to climate change.
As the 2030 deadline for achieving the SDGs approaches, many countries still struggle with insufficient data to track their progress. Nearly half of the 92 environmental indicators lack data, and only 15% of targets are on track [1]. Citizen science has already demonstrated its ability to contribute to the SDGs by addressing these data gaps through public participation in scientific research.
Recent advancements in AI have sparked interest in its potential to support sustainable development. AI's major contributions include rapid analysis of large datasets, enhanced data accessibility, efficient data collection, task automation, real-time data and insights, and improved data visualization [1].
However, AI also poses challenges, particularly biases in training data that can produce unreliable results [1].
The researchers propose that citizen science approaches can help mitigate AI risks by providing more localized and disaggregated data. This is particularly crucial in addressing the data shortage in many parts of the world, especially the Global South [2].
Dilek Fraisl, lead author of the perspective piece, explains, "Citizen science can help address this gap by providing more local and thus representative data, which can help improve the accuracy of AI results" [1].
A recent study demonstrated how an AI algorithm created in Europe could identify marine plastic litter along Ghana's coastline using drone imagery. However, the study also highlighted the importance of refining the algorithm with local data to effectively identify context-specific litter items [2].
AI can exhibit and emphasize societal biases related to race, color, gender, disability, and ethnic origin. Citizen science approaches can be leveraged to address such biases by increasing the availability of data that reflect realities rather than prejudices [2].
The integration of AI and citizen science is expected to bring a wealth of new opportunities and varied applications. For instance, generative AI could transform how citizen science applications are developed and how citizen scientists interact with them, transitioning from static to conversational interfaces [2].
The recent adoption of the Global Digital Compact within the UN's Pact for the Future emphasizes AI's role in achieving sustainable development while also warning of its risks. Incorporating citizen science approaches into AI can be a crucial step towards addressing these risks and ensuring that AI serves the common good [1].
As Fraisl concludes, "The integration of citizen science and AI offers a promising path forward in SDG monitoring and achievement. When used together, AI's analytical power and citizen science's contextual relevance create synergies that can address sustainability challenges more effectively" [1].