Curated by THEOUTPOST
On Sat, 13 Jul, 12:02 AM UTC
2 Sources
[1]
Shaping Our AI Future: Madhumita Murgia on Agency, Ethics, and Resistance - Times of India
Award-winning journalist Madhumita Murgia discusses her debut book Code Dependent and the far-reaching impacts of AI on society. From data colonialism to eroding human agency, Murgia explores the unintended consequences of AI through human stories, while offering hope in our collective power to shape the future.

Excerpts from an interview:

Q: Can you tell us what Code Dependent is about?

A: I've been writing about tech since 2012. When I started off, it was all about the disruption and innovation that tech entrepreneurs bring to society, and how they have changed the way we live, work and interact with each other over the last decade or so. I've been very optimistic about tech entrepreneurship, so for me the obvious thing would have been to write a book about the opportunities of AI and the hugely positive ways it might influence and change our lives. But in the decade I have spent reporting on the impact of technology on business and society, I started to notice a lot of darker undercurrents. I started writing this more than two years ago, prior to the whole ChatGPT explosion, but I could already see how AI had seeped into our lives in invisible yet very powerful ways: recommendation systems on social media, telling us what to listen to on Spotify, or what to watch on Netflix or Amazon Prime Video. So I wanted to understand how it is changing lives around the world today, and what that tells us about our future. I went looking for human stories across the nine countries featured in my book. Through each of these stories of ordinary people and their interactions with AI, I've tried to explore the ripple effects and consequences of automation on our lives, and to help us grapple with what our world might look like as it becomes increasingly automated.

Q: What is interesting is that you have these case studies across nine countries, whether it is the data-scrubbing teams in Africa or the data mining happening in other countries, but there is a sinister edge to it all, which is very worrying.

A: When I started out, the book was about finding stories of all kinds, not necessarily leaders of companies building AI products. There is already so much focus on them; I wanted the other side of the coin, which is the rest of us, and how we experience technology, how we use it and how it is going to change the way we live. Through my stories, I discovered a lot of unintended harms; many of the consequences I was finding were the opposite of what was intended when these systems were rolled out. I can give you an example. I interviewed a single mother in Amsterdam called Diana, whose two young sons were, at the time of this encounter, 14 and 16, or perhaps 14 and 15. The Amsterdam govt had created an algorithm, a predictive system, to figure out which young people would go on to commit serious crimes, and her two boys had ended up on this list. One list was for children who had already committed crimes and were predicted to commit even worse crimes in future. The other was mostly for brothers and siblings of these children, who in some cases had never even committed a crime. The goal was not to imprison them in advance but to help these families and to prevent those crimes from occurring. But in practice, it really tore these families apart. It made the parents feel that they had done something wrong. Many of these were single mothers who felt that their children would be taken away from them. So it ended up having the opposite effect from what was hoped. And this comes up again and again when we try to implement automation in very sensitive parts of our society and daily life. So yes, it did end up being darker and more sinister than even I had expected. But I think that's the reality of how things are today. For me, it's about sending a message: what are we going to do about it? If this is how things are today, how can we change this story? How can we participate and shape it into a brighter future?

Q: A lot of those who have been born in this digital age think that any tool which comes their way and is accessible can simply be adopted.

A: That's why I chose to tell my story through the lens of people. My hope is that we, and even the generation younger than me, won't sleepwalk into a world where a few companies control the automation of so much of our society, from entertainment to dating, to the way we learn at school, to our relationships with our doctors and our govt. I think it's really important for us to have agency in this discussion, because it's happening now.

Q: You can have agency if you recognise that you have free will and that you have rights. What do you do when people seem to believe that this is simply how the world is?

A: I think we as humans tend to have an automation bias: if a machine hands down a decision or tells you an answer, you tend to trust it, because somehow we are hardwired into believing that computers are more accurate and more efficient than us. And this has been transferred into our relationship with AI. A huge part of what I hope to bring to the awareness of people who read my book, and my reporting more broadly, is that that isn't how AI works. Take self-driving cars as an example, which require AI to run. No matter how many thousands of hours of driving these cameras have racked up gathering data, we still haven't managed to predict every single edge case. This is why these cars end up having accidents. It is very difficult to predict human behaviour in particular, because it is chaotic and doesn't follow definitive patterns. And yet we're trying to bring AI into these domains: into human creativity, like writing and making art and movies, but also into deciding who should get bail or who should be given a govt welfare benefit. We should always have human accountability for these systems, and we shouldn't be treating them like calculators that have the answers.

Q: How is this over-reliance on digital technology going to impact human society in the long run?

A: We forget that behind these systems are just a few companies from what is currently a small corner of the world, the west coast of the US. These are the puppet-masters behind these systems. They collect the data that we give them by talking to them, writing, uploading documents and so on. And ultimately they are profit-making entities. When you are interacting with a system that talks to you, tells you what to do, suggests what you should buy or read, and influences your thinking through what it says, it becomes very easy for advertisers to manipulate you, because they understand the ways that you think and the ways that you speak.

Q: It parallels the imperialistic ways of conquering the physical world in the 19th century, during the first industrial revolution.

A: I look at examples of Western companies coming, say, to India to collect health data, using Asha workers as data collectors on the front lines, but never really reporting back what they are doing with that data. Through this concept of data colonialism, you start to see these companies pop up all over the world, supporting govts and becoming the infrastructure that govts and hospitals rely on. They have moved far past being just consumer product companies, similar to the way the East India Company grew to be much more than just a corporation.

Q: Collective will and energy can push back.

A: The entire final third of my book looks at resistors, people who are fighting back against a faceless algorithm or a boss that is just computer software. You have collectives and unions for gig workers in Latin America, Africa, Europe and many other parts of the world who are fighting back. I talk about Maya Wang, a human rights activist in China who is fighting back against the Communist Party and finding ways to break through the tyranny there, even though she is just one person in the face of it. That, I think, is what will help us come together and decide: what are our lines in the sand? Where is it okay to have automation? And we need to speak up now, because it is being rolled out.
[2]
AI Has Become a Technology of Faith
An important thing to realize about the grandest conversations surrounding AI is that, most of the time, everyone is making things up. This isn't to say that people have no idea what they're talking about or that leaders are lying. But the bulk of the conversation about AI's greatest capabilities is premised on a vision of a theoretical future. It is a sales pitch, one in which the problems of today are brushed aside or softened as issues of now, which surely, leaders in the field insist, will be solved as the technology gets better. What we see today is merely a shadow of what is coming. We just have to trust them.

I had this in mind when I spoke with Sam Altman and Arianna Huffington recently. Through an op-ed in Time, Altman and Huffington had just announced the launch of a new company called Thrive AI Health. That organization promises to bring OpenAI's technology into the most intimate part of our lives, assessing our health data and making relevant recommendations. Thrive AI Health will join an existing field of medical and therapy chatbots, but its ambitions are immense: to improve health outcomes for people, reduce health-care costs, and significantly reduce the effects of chronic disease worldwide. In their op-ed, Altman and Huffington explicitly (and grandiosely) compare their efforts to the New Deal, describing their company as "critical infrastructure" in a remade health-care system. They also say that some future chatbot offered by the company may encourage you to "swap your third afternoon soda with water and lemon."

That chatbot, referred to in the article as "a hyper-personalized AI health coach," is the centerpiece of Thrive AI Health's pitch. What form it will take, or how it will be completed at all, is unclear, but here's the idea: The bot will generate "personalized AI-driven insights" based on a user's biometric and health data, doling out information and reminders to help them improve their behavior. Altman and Huffington give the example of a busy diabetic who might use an AI coach for medication reminders and healthy recipes. You can't actually download the app yet. Altman and Huffington did not provide a launch date.

Normally, I don't write about vaporware -- a term for products that are merely conceptual -- but I was curious about how Altman and Huffington would explain these grand ambitions. Their very proposition struck me as the most difficult of sells: two rich, well-known entrepreneurs asking regular human beings, who may be skeptical of or unfamiliar with generative AI, to hand over their most personal and consequential health data to a nagging robot? Health apps are popular, and people (myself included) allow tech tools to collect all kinds of intensely personal data, such as sleep, heart-rate, and sexual-health information, every day. If Thrive succeeds, the market for a truly intelligent health coach could be massive. But AI offers another complication to this privacy equation, opening the door for companies to train their models on hyper-personal, confidential information. Altman and Huffington are asking the world to believe that generative AI -- a technology that cannot currently reliably cite its own sources -- will one day be able to transform our relationships with our own bodies. I wanted to hear their pitch for myself.
Altman told me that his decision to join Huffington stemmed partly from hearing from people who use ChatGPT to self-diagnose medical problems -- a notion I found potentially alarming, given the technology's propensity to return hallucinated information. (If physicians are frustrated by patients who rely on Google or Reddit, consider how they might feel about patients showing up in their offices stuck on made-up advice from a language model.) "We would hear these stories where people say ... 'I used it to figure out a diagnosis for this condition I had that I just couldn't figure out, and I typed in my symptoms, and it suggested this, and I got a test, and then I got a treatment.'"

I noted that it seemed unlikely to me that anyone besides ChatGPT power users would trust a chatbot in this way, that it was hard to imagine people sharing all their most intimate information with a computer program, potentially to be stored in perpetuity. "I and many others in the field have been positively surprised about how willing people are to share very personal details with an LLM," Altman told me. He said he'd recently been on Reddit reading testimonies of people who'd found success by confessing uncomfortable things to LLMs. "They knew it wasn't a real person," he said, "and they were willing to have this hard conversation that they couldn't even talk to a friend about." Huffington echoed these points, arguing that there are billions of health searches on Google every day.

That willingness is not reassuring. For example, it is not far-fetched to imagine insurers wanting to get their hands on this type of medical information in order to hike premiums. Data brokers of all kinds will be similarly keen to obtain people's real-time health-chat records. Altman made a point to say that this theoretical product would not trick people into sharing information. "It'll be super important to make it clear to people how data privacy works; that you know what we train on, what we don't, like when something is ever-stored versus just exists in one session," he said. "But in our experience, people understand this pretty well." Although savvy users might understand the risks and how chatbots work, I argued that many of the privacy concerns would likely be unexpected -- perhaps even out of Thrive AI Health's hands.

Neither Altman nor Huffington had an answer to my most basic question -- What would the product actually look like? Would it be a smartwatch app, a chatbot? A Siri-like audio assistant? -- but Huffington suggested that Thrive's AI platform would be "available through every possible mode," that "it could be through your workplace, like Microsoft Teams or Slack." This led me to propose a hypothetical scenario in which a company collects this information and stores it inappropriately or uses it against employees. What safeguards might the company apply then? Altman's rebuttal was philosophical. "Maybe society will decide there's some version of AI privilege," he said. "When you talk to a doctor or a lawyer, there's medical privileges, legal privileges. There's no current concept of that when you talk to an AI, but maybe there should be."

Here I was struck by an idea that has occurred to me over and over again since the beginning of the generative-AI wave.
A fundamental question has loomed over the world of AI since the concept cohered in the 1950s: How do you talk about a technology whose most consequential effects are always just on the horizon, never in the present? Whatever is built today is judged partially on its own merits, but also -- perhaps even more importantly -- on what it might presage about what is coming next. AI is always measured against the end goal: the creation of a synthetic, reasoning intelligence that is greater than or equal to that of a human being. That moment is often positioned, reductively, as either a gift to the human race or an existential reckoning.

But you don't have to get apocalyptic to see the way that AI's potential is always muddying people's ability to evaluate its present. For the past two years, shortcomings in generative-AI products -- hallucinations; slow, wonky interfaces; stilted prose; images that showed too many teeth or couldn't render fingers; chatbots going rogue -- have been dismissed by AI companies as kinks that will eventually be worked out. The models will simply get better, they say. (It is true that many of them have, though these problems -- and new ones -- continue to pop up.) Still, AI researchers maintain their rallying cry that the models "just want to learn" -- a quote attributed to the OpenAI co-founder Ilya Sutskever that means, essentially, that if you throw enough money, computing power, and raw data into these networks, the models will become capable of making ever more impressive inferences. True believers argue that this is a path toward creating actual intelligence (many others strongly disagree). In this framework, the AI people become something like evangelists for a technology rooted in faith: Judge us not by what you see, but by what we imagine.

When I asked about hallucinations, Altman and Huffington suggested that the models have gotten much better and that if Thrive's AI health coaches are focused enough on a narrow body of information (habits, not diagnoses) and trained on the latest peer-reviewed science, then they will be able to make good recommendations. (Though there's every reason to believe that hallucination would still be possible.) When I asked about their choice to compare their company to a massive government program like the New Deal, Huffington argued that "our health-care system is broken and that millions of people are suffering as a result." AI health coaches, she said, are "not about replacing anything. It's about offering behavioral solutions that would not have been successfully possible before AI made this hyper-personalization."

I found it outlandish to invoke America's expensive, inequitable, and inarguably broken health-care infrastructure when hyping a for-profit product that is so nonexistent that its founders could not tell me whether it would be an app or not. That very nonexistence also makes it difficult to criticize with specificity. Thrive AI Health coaches might be the Juicero of the generative-AI age -- a shell of a product with a splashy board of directors that is hardly more than a logo. Perhaps it is a catastrophic data breach waiting to happen. Or maybe it ends up being real -- not a revolutionary product, but a widget that integrates into your iPhone or calendar and toots out a little push alert with a gluten-free recipe from Ina Garten. Or perhaps this someday becomes AI's truly great app -- a product that makes it ever easier to keep up with healthy habits. I have my suspicions.
(My gut reaction to the press release was that it reminded me of blockchain-style hype, compiling a list of buzzwords and big names.) Thrive AI Health is profoundly emblematic of this AI moment precisely because it is nothing, yet it demands that we entertain it as something profound. My immediate frustration with the vaporware quality of this announcement turns to trepidation once I consider what happens if they do actually build what they've proposed. Is OpenAI -- a company that's had a slew of governance problems, leaks, and concerns about whether its leader is forthright -- a company we want as part of our health-care infrastructure? If it succeeds, would Thrive AI Health deepen the inequities it aims to address by giving AI health coaches to the less fortunate, while the richest among us get actual help and medical care from real, attentive professionals? Am I reflexively dismissing an earnest attempt to use a fraught technology for good? Or am I rightly criticizing the kind of press-release hype-fest you see near the end of a tech bubble?

Your answer to any of these questions probably depends on what you want to believe about this technological moment. AI has doomsday cultists, atheists, agnostics, and skeptics. Knowing what AI is capable of, sussing out what is opportunistic snake oil and what is genuine, can be difficult. If you want to believe that the models just want to learn, it will be hard to convince you otherwise. So much seems to come down to: How much do you want to believe in a future mediated by intelligent machines that act like humans? And: Do you trust these people?

I put that question -- why should people trust you? -- to the pair at the end of my interview. Huffington said that the difference with this AI health coach is that the technology will be personalized enough to meet the individual, behavioral-change needs that our current health system doesn't. Altman said he believes that people genuinely want technology to make them healthier: "I think there are only a handful of use cases where AI can really transform the world. Making people healthier is certainly one of them," he said. Both answers sounded earnest enough to my ear, but each requires certain beliefs.

Faith is not a bad thing. We need faith as a powerful motivating force for progress and a way to expand our vision of what is possible. But faith, in the wrong context, is dangerous, especially when it is blind. An industry powered by blind faith seems particularly troubling. Blind faith gives those who stand to profit an enormous amount of leverage; it opens up space for delusion and for grifters looking to make a quick buck. The greatest trick of a faith-based industry is that it effortlessly and constantly moves the goal posts, resisting evaluation and sidestepping criticism. The promise of something glorious, just out of reach, continues to string unwitting people along. All while half-baked visions promise salvation that may never come.
Exploring the impact of AI on society and personal well-being, from ethical considerations to potential health benefits, as discussed by experts Madhumita Murgia and Arianna Huffington.
As artificial intelligence continues to evolve at a rapid pace, experts are grappling with the ethical implications and societal impact of this transformative technology. Madhumita Murgia, a prominent voice in the field, emphasizes the importance of human agency in shaping the future of AI. She argues that individuals and communities must actively participate in determining how AI systems are developed and deployed, rather than passively accepting technological determinism [1].
Murgia advocates a form of productive resistance to AI, encouraging people to question and challenge how the technology is implemented. This approach aims to ensure that AI serves human interests and values rather than undermining them. The call for regulation and ethical guidelines in AI development is growing louder, with experts like Murgia stressing the need for frameworks that protect individual rights and societal well-being [1].
While concerns about AI's impact persist, some entrepreneurs see potential benefits in unexpected areas. Arianna Huffington, founder of Thrive Global, is exploring how AI could be harnessed to improve personal health and well-being. In collaboration with OpenAI's Sam Altman, Huffington has announced Thrive AI Health, a company promising a "hyper-personalized AI health coach" [2].
The planned coach is meant to provide personalized health advice and support to users. By analyzing a user's biometric and health data, including sleep patterns and daily habits, it aims to offer tailored recommendations for improving overall well-being. Its founders argue that this application demonstrates AI's potential to positively impact individual lives, moving beyond productivity-focused applications, though the product has no launch date and its final form remains unclear [2].
As AI continues to permeate various aspects of our lives, the contrasting perspectives of Murgia and Huffington highlight the complex nature of this technological revolution. While Murgia calls for vigilance and active participation in shaping AI's future, Huffington's project bets on the technology's potential for enhancing human well-being. This dichotomy underscores the need for a balanced approach that embraces innovation while remaining mindful of ethical considerations and potential risks [1][2].
References
[1] Shaping Our AI Future: Madhumita Murgia on Agency, Ethics, and Resistance - Times of India
[2] AI Has Become a Technology of Faith