Could NZ's next Christchurch Call be a push for fairer, safer AI?
For New Zealanders, artificial intelligence (AI) is fast becoming as much a part of everyday life as smartphones and social media did before it. According to the recently released 2026 InternetNZ Internet Insights report, nearly eight in ten Kiwis have used AI tools in the past year. More than half are now using them at least weekly.

But as use rises, so too does unease about this transformative technology's impact on society. In a recent survey, half of respondents were extremely or very concerned about AI's implications for misinformation, privacy and potential misuse. Other national surveys tell a similar story. One found only a quarter of respondents believed current safeguards are sufficient to make AI use safe. In another, two thirds of those surveyed said they would stop using a company's products if they had concerns about how it was using AI.

These views are not surprising. Major AI companies are increasingly entangled in everything from "deepfake" images and AI-generated misinformation to geopolitics and military applications. At the same time, this widespread distrust could represent another opportunity for New Zealand to influence big tech - and build our own valuable brand grounded in responsible AI.

Who controls AI - and on whose terms?

The US-Israel war on Iran - where AI has helped identify bombing targets - has raised fresh concerns about the technology. In the lead-up to the conflict, major AI companies were pressured by the US Department of War to allow widespread military uses of their AI systems. Anthropic pushed for limits on applications such as autonomous weapons and surveillance but was sidelined. Rival OpenAI instead agreed to allow broad "lawful" military uses, prompting a backlash and reports of users deleting the company's ChatGPT at triple the usual rate.

China's military is meanwhile leveraging its own AI-powered systems, while companies such as Palantir, chaired by US billionaire and New Zealand citizen Peter Thiel, have reportedly supplied AI tools used by militaries in Ukraine, Gaza and Iran. New Zealand's defence ministry is now mulling its own approach, with parliament divided on the issue.

These developments highlight how closely advanced AI companies are becoming entwined with state power, blurring the line between consumer technology and instruments of war. Aside from military use, these systems are also vulnerable to political pressures in the US, including government influence over how they are deployed and used. Research has shown the products can reflect the values and biases of their creators. As they spread globally, they are also increasingly seen as a form of what has been called "digital colonialism" - where powerful countries and companies export technologies that embed their own values and priorities in other societies.

How NZ can be a leader in AI

For all the concern expressed by New Zealanders, the country has so far taken a "light-touch" regulatory stance on the technology. Rather than create dedicated regulation, as a recent open letter from AI experts to political leaders has called for, the government has chosen to rely on a patchwork of existing rules. As consumers, New Zealanders have little say in how these products are evolving, how they are designed or whose interests they sometimes serve. This reinforces the common feeling that AI is something happening to us, not for us.

It is also sometimes claimed the country is being left behind in the "AI race", particularly by New Zealand business leaders concerned about keeping up with rapid technological change. But there is another way for New Zealand, even with its limited scale and capacity, to make its mark in the AI world. This would involve playing to its global reputation for integrity, human rights and independent thinking.

Initiatives such as the Christchurch Call - launched after the 2019 mosque attacks to curb online extremist content - showed how a small country can convene governments and technology companies around shared standards. In this case, New Zealand could strategically position itself at the forefront of a growing global push for responsible AI, which advocates for values such as fairness, accountability, safety and privacy.

The nation's Māori data sovereignty movement is already an example of responsible data use. Māori values such as kaitiakitanga (guardianship and stewardship) reframe data as taonga (treasured or sacred assets) deserving careful protection.

Just as it did by drawing attention to social media harm with the Christchurch Call, New Zealand could collaborate with like-minded countries to push big tech companies to adopt concrete safeguards. These could include measures such as watermarking and mandatory human oversight by a range of governance groups. This would also involve introducing standards for reporting environmental impact and auditing bias, ensuring AI aligns with New Zealanders' expectations. The government could work with industry to set clearer expectations for responsible AI - building on existing guidance for businesses on safe and ethical use - and invest in the development of local products that meet those standards.

There is also an economic opportunity. Local companies could use this reputation to differentiate themselves in a global market where trust is becoming increasingly important. Research by global consultancy PwC suggests responsible AI can create real value, with more resilient systems and fewer trust-damaging failures.

Advocating for safe, responsible AI with clear economic benefits should be an easy decision - and the recent survey findings provide a clear mandate to do so. But New Zealand won't get there without decisive political leadership and a cohesive strategy. In an election year, politicians should be challenged to commit to AI that serves both the country's economy and its people.

The authors acknowledge the contribution of Dr Andrew Chen to this article.
As AI adoption surges in New Zealand, with nearly eight in ten Kiwis using AI tools, public concern is rising sharply over misinformation, privacy and military applications. The nation could leverage its Christchurch Call legacy to convene governments and tech companies around responsible AI standards, positioning itself as a global leader in AI governance despite mounting geopolitical tensions.
AI has rapidly integrated into daily life for New Zealanders, with the 2026 InternetNZ Internet Insights report revealing that nearly eight in ten Kiwis have used AI tools in the past year. More than half now engage with these technologies at least weekly, marking a significant shift in how citizens interact with digital tools. However, this widespread adoption comes with considerable unease about the societal impact of AI.
Source: The Conversation
Public concern regarding AI has reached notable levels, with half of survey respondents expressing extreme or very serious worries about misinformation, privacy violations and potential misuse of the technology. Trust in current protections is weaker still: only a quarter of respondents believe existing AI safeguards are sufficient to ensure safe use. Two thirds indicated they would stop using a company's products if concerns arose about its AI practices, demonstrating that accountability matters deeply to consumers.

The entanglement of major tech companies with state power has intensified anxieties around AI. During the US-Israel war on Iran, AI helped identify bombing targets, while major AI companies faced pressure from the US Department of War to permit widespread military uses of their systems. Anthropic pushed for limits on applications like autonomous weapons and surveillance but was sidelined, while OpenAI agreed to allow broad "lawful" military uses. This decision triggered a backlash, with reports indicating users deleted ChatGPT at triple the usual rate.

These developments blur the line between consumer technology and instruments of war, highlighting how advanced AI companies are becoming entwined with geopolitics. New Zealand's defence ministry is now considering its own approach, with parliament divided on the issue. Beyond military concerns, these systems remain vulnerable to political pressures, with research showing AI products can reflect the values and biases of their creators. As these technologies spread globally, critics increasingly view them as a form of "digital colonialism", where powerful countries and tech companies export technologies embedding their own priorities into other societies.

Despite widespread concern, New Zealand has maintained a "light-touch" regulatory stance on AI, relying on a patchwork of existing rules rather than creating dedicated AI regulation. This approach persists even as AI experts have issued an open letter to political leaders calling for stronger oversight. As consumers, New Zealanders currently have little say in how these products evolve, how they are designed, or who they serve, reinforcing the feeling that AI is something happening to them rather than for them.
New Zealand could leverage its global reputation for integrity, human rights and independent thinking to position itself at the forefront of responsible AI. The Christchurch Call, launched after the 2019 mosque attacks to curb online extremist content, demonstrated how a small country can convene governments and tech companies around shared standards. A similar initiative focused on fairer, safer AI could advocate for values such as fairness, accountability, safety and privacy.

Concrete measures could include watermarking, mandatory human oversight by AI governance groups, standards for reporting environmental impact, and auditing bias to ensure alignment with citizen expectations. Research by global consultancy PwC suggests responsible AI can create real economic value through more resilient systems and fewer trust-damaging failures. Local companies could use this reputation to differentiate themselves in a global market where trust is becoming increasingly important, turning what some perceive as falling behind in the "AI race" into a strategic advantage built on values rather than speed.