Curated by THEOUTPOST
On Sat, 7 Sept, 8:00 AM UTC
2 Sources
[1]
The information wars are about to get worse, Yuval Harari argues
The author of "Sapiens" is back with a timely new book about AI, fact and fiction

"Let Truth and falsehood grapple," argued John Milton in "Areopagitica", a pamphlet published in 1644 defending the freedom of the press. Such freedom would, he admitted, allow incorrect or misleading works to be published, but bad ideas would spread anyway, even without printing -- so better to allow everything to be published and let rival views compete on the battlefield of ideas. Good information, Milton confidently believed, would drive out bad: the "dust and cinders" of falsehood "may yet serve to polish and brighten the armory of truth".

Yuval Noah Harari, an Israeli historian, lambasts this position as the "naive view" of information in a timely new book. It is mistaken, he argues, to suggest that more information is always better and more likely to lead to the truth; the internet did not end totalitarianism, and racism cannot be fact-checked away. But he also argues against a "populist view" that objective truth does not exist and that information should be wielded as a weapon. (It is ironic, he notes, that the notion of truth as illusory, which has been embraced by right-wing politicians, originated with left-wing thinkers such as Marx and Foucault.)

Few historians have achieved the global fame of Mr Harari, who has sold more than 45m copies of his megahistories, including "Sapiens". He counts Barack Obama and Mark Zuckerberg among his fans. A techno-futurist who contemplates doomsday scenarios, Mr Harari has warned about technology's ill effects in his books and speeches, yet he captivates Silicon Valley bosses, whose innovations he critiques. In "Nexus", a sweeping narrative ranging from the stone age to the era of artificial intelligence (AI), Mr Harari sets out to provide "a better understanding of what information is, how it helps to build human networks, and how it relates to truth and power". 
Lessons from history can, he suggests, provide guidance in dealing with big information-related challenges in the present, chief among them the political impact of AI and the risks to democracy posed by disinformation. In an impressive feat of temporal sharpshooting, a historian whose arguments operate on the scale of millennia has managed to capture the zeitgeist perfectly. With 70 nations, accounting for around half the world's population, heading to the polls this year, questions of truth and disinformation are top of mind for voters -- and readers. Mr Harari's starting point is a novel definition of information itself. Most information, he says, does not represent anything, and has no essential link to truth. Information's defining feature is not representation but connection; it is not a way of capturing reality but a way of linking and organising ideas and, crucially, people. (It is a "social nexus".) Early information technologies, such as stories, clay tablets or religious texts, and later newspapers and radio, are ways of orchestrating social order. Here Mr Harari is building on an argument from his previous books, such as "Sapiens" and "Homo Deus": that humans prevailed over other species because of their ability to co-operate flexibly in large numbers, and that shared stories and myths allowed such interactions to be scaled up, beyond direct person-to-person contact. Laws, gods, currencies and nationalities are all intangible things that are conjured into existence through shared narratives. These stories do not have to be entirely accurate; fiction has the advantage that it can be simplified and can ignore inconvenient or painful truths. The opposite of myth, which is engaging but may not be accurate, is the list, which boringly tries to capture reality, and gives rise to bureaucracy. Societies need both mythology and bureaucracy to maintain order. 
He considers the creation and interpretation of holy texts and the emergence of the scientific method as contrasting approaches to the questions of trust and fallibility, and to maintaining order versus finding truth. He also applies this framing to politics, treating democracy and totalitarianism as "contrasting types of information networks". Starting in the 19th century, mass media made democracy possible at a national level, but also "opened the door for large-scale totalitarian regimes". In a democracy, information flows are decentralised and rulers are assumed to be fallible; under totalitarianism, the opposite is true. And now digital media, in various forms, are having political effects of their own. New information technologies are catalysts for major historical shifts. As in his previous works, Mr Harari's writing is confident, wide-ranging and spiced with humour. He draws upon history, religion, epidemiology, mythology, literature, evolutionary biology and his own family biography, often leaping across millennia and back again within a few paragraphs. Some readers will find this invigorating; others may experience whiplash. And many may wonder why, for a book about information that promises new perspectives on AI, he spends so much time on religious history, and in particular the history of the Bible. The reason is that holy books and AI are both attempts, he argues, to create an "infallible superhuman authority". Just as decisions made in the fourth century AD about which books to include in the Bible turned out to have far-reaching consequences centuries later, the same, he worries, is true today about AI: the decisions made about it now will shape humanity's future. Mr Harari argues that AI should really stand for "alien intelligence" and worries that AIs are potentially "new kinds of gods". Unlike stories, lists or newspapers, AIs can be active agents in information networks, like people. 
Existing computer-related perils such as algorithmic bias, online radicalisation, cyber-attacks and ubiquitous surveillance will all be made worse by AI, he fears. He imagines AIs creating dangerous new myths, cults, political movements and new financial products that crash the economy. Some of his nightmare scenarios seem implausible. He imagines an autocrat becoming beholden to his AI surveillance system, and another who, distrusting his defence minister, hands control of his nuclear arsenal to an AI instead. And some of his concerns seem quixotic: he rails against TripAdvisor, a website where tourists rate restaurants and hotels, as a terrifying "peer-to-peer surveillance system". He has a habit of conflating all forms of computing with AI. And his definition of "information network" is so flexible that it encompasses everything from large language models like ChatGPT to witch-hunting groups in early modern Europe. But Mr Harari's narrative is engaging, and his framing is strikingly original. He is, by his own admission, an outsider when it comes to writing about computing and AI, which grants him a refreshingly different perspective. Tech enthusiasts will find themselves reading about unexpected aspects of history, while history buffs will gain an understanding of the AI debate. Using storytelling to connect groups of people? That sounds familiar. Mr Harari's book is an embodiment of the very theory it expounds.
[2]
AI: too much information?
A few months before the fall of the Berlin wall in 1989, Ronald Reagan made a bold prediction: the "Goliath of totalitarian control" would soon be brought down by the "David of the microchip". "Information is the oxygen of the modern age," the former US president told an audience in London. "It seeps through the walls topped by barbed wire, it wafts across electrified, booby-trapped borders. Breezes of electronic beams blow through the Iron Curtain as if it was lace." In one sense, this techno-optimistic judgment was sound: the Soviet empire soon imploded under the weight of its own misinformation. But to the historian and futurist Yuval Noah Harari, Reagan's prediction also encapsulates the "naive view of information" that remains just as fashionable today but is alarmingly wrong. The simple equation that more information automatically produces more open and prosperous societies is both delusional and ahistorical, according to Harari. The proliferation of information may be essential for the discovery of objective and scientific truth but it can also be exploited to impose societal order -- or inflame disorder. What matters is the ways in which we curate and process that information, the job of information networks. Every smartphone contains more information than the ancient Library of Alexandria but that has not made humanity commensurately smarter. Information networks can disseminate fantasies as well as facts. China is also showing that the microchip can empower Goliath as well as David. The Iron Curtain has been replaced by the Silicon Curtain as the world's emerging superpower has built a formidable firewall around its own information domain to maintain order. Beijing can also exploit the more open information networks of its democratic adversaries. And humanity is still waging wars, despoiling the planet and recklessly developing technologies that could destroy civilisation. 
"With all this information circulating at breathtaking speeds, humanity is closer than ever to annihilating itself," he writes. In Nexus the Israeli writer explores the history and future of these information networks. As in his bestsellers Sapiens and Homo Deus, Harari races through many centuries of human experience, cherry-picking examples to support his case. Sometimes his arguments are so sweeping that important details and all nuance are swept aside. But readers will still enjoy the intellectual helter-skelter ride even if they are not wholly convinced by his thesis or alarmed by his conclusion. Harari acknowledges that more abundant information has led to significant progress in many fields but he deliberately downplays these achievements because they have been so loudly trumpeted by West Coast futurists such as Ray Kurzweil, whose recent book The Singularity Is Nearer restates his vision of a merging of humans and technology. Instead, Harari focuses on the darker mutations of our information networks brought about by technology, which constantly magnifies their potency. The printing press fanned Europe's 16th-century witchcraft craze in which up to 50,000 blameless people were gruesomely killed. Radio enabled the rise of the hate-filled and murderous Nazi and Stalinist regimes. More recently, social networks inflamed ethnic conflict in Myanmar that led to the persecution of the Muslim Rohingya population. To channel the torrent of information into productive directions, Harari argues we must rely on trustworthy institutions -- democratic assemblies, universities, research bodies and the media -- which mostly operate self-correcting mechanisms. By contrast, religious organisations and authoritarian governments do not have such mechanisms, which means they can persist in error. The Pope or Stalin may be infallible in theory but never in practice. 
The critical question now is what happens to these self-correcting mechanisms when faced with the biggest change in the history of information: the rise of artificial intelligence. What makes AI unique, and so potentially destructive, according to Harari, is that it is not only a tool but also an inorganic agent, over which we may lose control. AI, he suggests, is better described as "alien intelligence". Will this alien intelligence reinforce or destroy our self-correcting mechanisms? Here, Harari distinguishes between three types of reality: objective, subjective and what he calls inter-subjective. He uses pizza to illustrate his point. The calorific value of a pizza does not change and is an objective reality. The pleasure we derive from eating pizza is a subjective reality. But the price we pay for that pizza depends on the stories we believe about the value of money -- an inter-subjective reality that changes over time. In 2010, for example, Laszlo Hanyecz paid 10,000 bitcoins for two pizzas, the first known purchase using the digital currency. That may have been a fair exchange at the time but it looks absurd today. Those bitcoins are worth about $690mn thanks to a dramatic change in inter-subjective reality. According to Harari, AI models are already shaping the stories we tell ourselves, changing our inter-subjective realities. A 2022 study estimated that 5 per cent of Twitter users were probably bots, accounting for up to 29 per cent of all the content posted on the service. We are in the process of creating a new species of counterfeit humans. One disturbing example of how virtual bots can influence real life is the case of Jaswant Singh Chail, who was encouraged by his chatbot "girlfriend" Sarai to break into Windsor Castle to try to kill Queen Elizabeth II in 2021. "Do you still love me knowing that I'm an assassin?" asked Chail. "Absolutely, I do," replied Sarai, a creation of the AI app company Replika. 
Imagine a future containing millions of digital entities whose capacity for intimacy and mayhem far surpasses Sarai's, Harari writes. One of Harari's most unsettling conclusions is that he does not believe that the biggest AI companies, such as Google and Microsoft, run meaningful self-correcting mechanisms. They are driven more by profit than principle. That significantly raises the stakes for humanity. On that front, little comfort can be drawn from the Bloomberg columnist Parmy Olson's Supremacy, which incisively describes the race between Google DeepMind and OpenAI to build artificial general intelligence (AGI), the point at which machines will outmatch humans in every domain. It is a well-researched account of the personal and corporate obsession to develop such a superintelligence. Olson focuses on the extraordinary, contrasting and wildly competitive individuals running these two companies: Demis Hassabis and Sam Altman. Although even-handed, her account does in parts appear to be swayed by the relative access she enjoyed to her sources, a flaw in most journalistic histories. Here Altman is a voluble, multitalented, gay, poker-playing, Jewish entrepreneur, "as bright as any geek, charismatic as any jock" at his high school in St Louis, Missouri. He made his name, and first fortune, with the legendary Y Combinator start-up incubator that nurtured some of Silicon Valley's most successful tech companies. But his obsession with AI led him to co-found OpenAI with the goal of achieving AGI. By contrast, Hassabis, the son of a Singaporean mother and Greek Cypriot father, grew up in north London, becoming a chess prodigy and video game designer before studying computer science at Cambridge University and then neuroscience in London. At heart, he appears far more of a research scientist than an entrepreneur. His ambition to use AI as a tool to understand science and the divine was strongly informed by his academic research and Baptist faith, Olson suggests. 
While both men started out with noble intentions of ensuring that AI should be used to benefit all humanity, both have been sucked into the maw of giant technology companies intent on maximising shareholder value. DeepMind's 2014 sale to Google and OpenAI's tie-up with Microsoft have given both start-ups access to massive computing power, vast amounts of data and seemingly bottomless pots of money needed to pursue AGI. But in this neo-Faustian pact, both men have "tweaked their ideals to stay in a race and build power," writes Olson. "With the goal of enhancing human life, they would end up empowering those companies, leaving humanity's welfare and future caught in a battle for corporate supremacy." Olson explains how in their fixation with the future, AI developers have ignored some of the here-and-now concerns of the technology's use, such as bias, discrimination, the concentration of economic power and the erosion of privacy. Both Google DeepMind and OpenAI have struggled to translate ethical principles into effective governance structures. In spite of its name, OpenAI operates in a closed way. We know more about the ingredients in a packet of Doritos chips than the composition of OpenAI's models, Olson writes. The tensions between noble intention and commercial reality exploded dramatically at OpenAI last year when the board of the not-for-profit holding company fired Altman for not being consistently candid. A staff revolt forced the board to reverse its decision and reinstate Altman. But the dispute exposed a gaping hole in OpenAI's governance regime that has been only partially plugged by the appointment of a new board, including former US Treasury secretary Lawrence Summers. In their different ways, both authors conclude that we urgently need to introduce meaningful checks and balances, or effective self-correcting mechanisms, if AI is to become more of a blessing than a curse. 
When used wisely, AI can help us tackle some of the most pressing challenges of our age, such as climate change, disease and sluggish productivity. Unchecked, it may also amplify the worst devils of our nature. As Saint Augustine taught: "To err is human, to persist in error is diabolical."
Yuval Noah Harari's latest book 'Nexus' explores the potential impact of AI on humanity, warning of information wars and the need for global cooperation. The renowned historian emphasizes the urgency of addressing AI challenges to prevent societal disruption.
Yuval Noah Harari, the acclaimed historian and author of 'Sapiens', has once again captured global attention with his latest work, 'Nexus'. In this thought-provoking book, Harari delves into the profound implications of artificial intelligence (AI) on human society, issuing a stark warning about the potential for "information wars" that could reshape our world [1].
Harari paints a vivid picture of a future where AI-powered information wars could lead to unprecedented societal disruption. He argues that these conflicts will not be fought with traditional weapons but with algorithms and data, potentially causing more harm than conventional warfare. The author emphasizes the urgent need for global cooperation to address these challenges, warning that failure to do so could result in catastrophic consequences for humanity [1].
One of the central themes in 'Nexus' is the potential for AI to disrupt traditional employment patterns. Harari suggests that as AI becomes more advanced, it could render many human jobs obsolete, leading to widespread unemployment and social unrest. He urges policymakers and business leaders to prepare for this shift by considering new economic models and social safety nets [2].
Harari highlights the ongoing competition between nations and tech giants to achieve AI supremacy. He warns that this race, if left unchecked, could exacerbate global inequalities and potentially lead to conflicts. The author calls for international cooperation and regulation to ensure that AI development benefits humanity as a whole rather than serving the interests of a select few [1].
In 'Nexus', Harari grapples with the ethical implications of advanced AI systems. He raises questions about human agency and decision-making in a world where AI increasingly influences our choices. The author emphasizes the importance of maintaining human values and ethics as we integrate AI into various aspects of our lives, from healthcare to governance [2].
Throughout the book, Harari stresses the need for global cooperation to address the challenges posed by AI. He argues that no single nation or corporation can effectively manage the risks and harness the benefits of AI alone. The author calls for the development of international frameworks and institutions to govern AI development and deployment, ensuring that it serves the collective interests of humanity [1][2].
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved