2 Sources
[1]
Artificial intelligence: what five giants of the past can teach us about handling the risks
The progress of artificial intelligence (AI) has been relentless. With OpenAI's latest model, o3, recently breaking records yet again, the technology raises urgent questions about safety, as well as the future of humanity. One place we can turn for help is to great thinkers from the past. They explored beyond the obvious in their worlds and often looked into the future, foreseeing a time when machines would have AI-like capabilities.

The 19th-century English mathematician and writer Ada Lovelace is sometimes recognised as the first computer programmer for her work with the polymath Charles Babbage on his "analytical engine". This was a general-purpose mechanical computer which was never completed, but its design mirrored that of computers decades later. Her 1842 notes to Babbage, exploring the potential of his proposed device, foresaw something akin to AI in the future.

"It might act upon other things besides number", she said, suggesting that such a machine could one day express relationships between pitched sounds in order to "compose elaborate and scientific pieces of music of any degree of complexity or extent". This requires pattern recognition across a vast array of sound and music data - exactly what large language models are doing today by generating music from text prompts.

All the same, Lovelace was sceptical about the machine's thinking capabilities, arguing it would still be dependent on humans to originate whatever it could come up with. Indeed, AI models today are still not really thinking so much as building sentences based on mathematical probabilities, having been trained on trillions of human words from the internet. Lovelace pointed to such limitations to "guard against the possibility of exaggerated ideas that might arise as to the powers of the analytical engine".

However, she also emphasised the "collateral influences" this machine could have beyond its bare output. Her example is that it could shed new light on science, but the wider implication is that such devices must never be underestimated.

The Turing test

Lovelace's argument also raised another implicit question: what happens if and when the machines do become the originators, once sentience is no longer science fiction? This inspired another English mathematician and thinker a few decades later, Alan Turing.

Turing's 1950 "imitation game", later known as the Turing test, sought to determine whether a computer could think in a way comparable to a human. It remained a key test of AI until it was considered surpassed by OpenAI's ChatGPT in 2022. Turing actually thought this would happen sooner, writing in his famous 1950 paper:

"I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

He wasn't especially pessimistic about what crossing this Rubicon would mean, arguing in the same paper in favour of trying to create a machine that simulated a child's mind rather than an adult's. He thought this could be "easily programmed", implying we had little to fear from such endeavours.

Equally, he wasn't blind to the potential for humans to end up subordinated by thinking machines. In a public lecture in 1951, he remarked: "If a machine can think, it might think more intelligently than we do, and then where should we be?" Turing's biographer, Christof Teuscher, described him as an "Orwell of science".
It's interesting to contrast his views with those of George Orwell himself, who, despite never pondering AI, did talk about the dangers of machines more generally in The Road to Wigan Pier (1937). If you are prepared to indulge swapping out the references to "machines" for "AI", it offers interesting possibilities about what Orwell might have made of today's technological arms race:

"The sensitive person's hostility to [AI] is in one sense unrealistic, because of the obvious fact that [AI] has come to stay. But as an attitude of mind there is a great deal to be said for it ... Verbally, no doubt, we would agree that [AI] is made for man and not man for [AI]; in practice any attempt to check the development of [AI] appears to us an attack on knowledge and therefore a kind of blasphemy. And even if the whole of humanity suddenly revolted against [AI] and decided to escape to a simpler way of life, the escape would still be immensely difficult ... Mechanise the world as fully as it might be mechanised, and whichever way you turn there will be some [AI] cutting you off from the chance of working - that is, of living."

Norbert Wiener's ethics

This brings us to the American scientist and mathematician Norbert Wiener. Wiener is recognised as the founder of computer ethics; his seminal work, The Human Use of Human Beings (1950), aimed to "warn against the dangers" of exploiting machines' potential. He foresaw a time when machines would talk to one another and improve over time by keeping track of their past performances. Comparing such a machine to the old folk tale of a person finding a djinnee (genie) in a bottle and knowing it was better left there, he wrote:

"The machine like the djinnee which can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us."

Decades later, the English physicist Stephen Hawking had similar concerns. He wrote in 2016 that AI could be:

"The biggest event in the history of our civilisation, but it could also be the last - unless we learn how to avoid the risks. Alongside the benefits, AI will also bring dangers like powerful autonomous weapons or new ways for the few to oppress the many."

In his final months, he wrote:

"I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans."

These five giants of the past prompt us to think very carefully about AI. Lovelace talked about a human tendency to first overrate the potential of a new technology, only to later over-correct by underestimating the reality. Wiener warned against the "selfish exploitation" of untested technological potential, which has surely contributed to numerous catastrophic IT failures over the years. Clearly the same thing could now happen with a much more powerful technology. It's likely that these writers would have looked at recent developments and seen fools rushing in where angels fear to tread.
[2]
Artificial intelligence: What five giants of the past can teach us about handling the risks
An exploration of AI's potential and risks through the lens of five influential thinkers from the past, offering valuable insights for our current AI landscape.
Ada Lovelace, often recognized as the first computer programmer, foresaw the potential of AI-like capabilities in her 1842 notes on Charles Babbage's analytical engine. She envisioned machines that could "act upon other things besides number," such as composing music, a capability now realized by large language models [1]. However, Lovelace remained skeptical about machines' independent thinking abilities, arguing they would still rely on human input - a perspective that aligns with current AI models' dependence on training data.
Alan Turing, another English mathematician, proposed the famous "imitation game," or Turing test, in 1950. This test, designed to determine if a computer could think like a human, remained a benchmark for AI capabilities until surpassed by ChatGPT in 2022 [2]. Turing anticipated the rapid advancement of AI, predicting that by the end of the 20th century, machines would be considered capable of thinking. While not overly pessimistic, he did caution about the potential for machines to outthink humans.
Although George Orwell never directly addressed AI, his writings on machines offer relevant insights. In "The Road to Wigan Pier" (1937), Orwell's concerns about mechanization can be interpreted as a warning about AI's potential to dominate human life and work. His observations highlight the tension between technological progress and maintaining human agency [1].
Norbert Wiener, considered the founder of computer ethics, warned about the dangers of exploiting machine potential in his 1950 work "The Human Use of Human Beings." He predicted machines communicating with each other and improving through self-assessment. Wiener cautioned that advanced AI might make decisions incompatible with human values or expectations [2].
In more recent times, physicist Stephen Hawking echoed similar concerns about AI. He viewed AI as potentially "the biggest event in the history of our civilization," but also warned it could be the last if risks are not properly managed. Hawking highlighted specific dangers such as autonomous weapons and new forms of oppression [1]. In his final months, he expressed a stark fear that "AI may replace humans altogether" [2].
These historical perspectives from intellectual giants offer valuable insights as we navigate the rapidly evolving landscape of AI. Their combined wisdom underscores the need to weigh AI's potential benefits and risks carefully, to develop the technology ethically, and to maintain human agency in an increasingly AI-driven world.