Curated by THEOUTPOST
On Wed, 31 Jul, 4:04 PM UTC
2 Sources
[1]
Opinion | A.I. Is Actually Our Friend
A lot of my humanistic and liberal arts-oriented friends are deeply worried about artificial intelligence, while acknowledging the possible benefits. I'm a humanistic and liberal arts type myself, but I'm optimistic, while acknowledging the dangers. I'm optimistic, paradoxically, because I don't think A.I. is going to be as powerful as many of its evangelists think it will be. I don't think A.I. is ever going to be able to replace us -- ultimately I think it will simply be a useful tool. Instead of replacing us, A.I. will complement us; it may even free us to be more human.

Many fears about A.I. are based on an underestimation of the human mind. Some people seem to believe that the mind is like a computer: it's all just information processing, algorithms all the way down, so of course machines are eventually going to overtake us. This is an impoverished view of who we humans are. The Canadian scholar Michael Ignatieff expressed a much more accurate view of the human mind last year in the journal Liberties: "What we do is not processing. It is not computation. It is not data analysis. It is a distinctively, incorrigibly human activity that is a complex combination of conscious and unconscious, rational and intuitive, logical and emotional reflection." The brain is its own universe.

Sometimes I hear tech people saying they are building machines that think like people. Then I report this ambition to neuroscientists, and their response is: That would be a neat trick, because we don't know how people think.

The human mind isn't just predicting the next word in a sentence; it evolved to love and bond with others; to seek the kind of wisdom that is held in the body; to physically navigate within nature and avoid the dangers therein; to pursue goodness; to marvel at and create beauty; to seek and create meaning.

A.I. can impersonate human thought because it can take all the ideas that human beings have produced and synthesize them into strings of words or collages of images that make sense to us. But that doesn't mean the A.I. "mind" is like the human mind. The A.I. "mind" lacks consciousness, understanding, biology, self-awareness, emotions, moral sentiments, agency, a unique worldview based on a lifetime of distinct and never-to-be-repeated experiences. A lot of human knowledge is the kind of knowledge that, say, babies develop. It's unconscious and instinctual. But A.I. only has access to conscious language.

About a year ago, the Ohio State University scholar Angus Fletcher appeared on a podcast and reeled off some differences between human thinking and A.I. "thinking." He argued that A.I. can do correlations but struggles with cause and effect; that it thinks in terms of truth or falsehood but is not a master of narrative; and that it's not good at comprehending time.

Like everybody else, I don't know where this is heading. When air-conditioning was invented, I would not have predicted: "Oh wow. This is going to create modern Phoenix." But I do believe lots of people are getting overly sloppy in attributing all sorts of human characteristics to the bots. And I do agree with the view that A.I. is an ally and not a rival -- a different kind of intelligence, more powerful than us in some ways, but narrower. It's already helping people handle odious tasks, like writing bureaucratic fund-raising requests and marketing pamphlets, or utilitarian emails to people they don't really care about.
It's probably going to be a fantastic tutor that will transform education and help people all around the world learn more. It might make expertise nearly free, so that people in underserved communities will have access to medical, legal and other sorts of advice. It will help us all make more informed decisions.

It may be good for us liberal arts grads. Peter Thiel recently told the podcast host Tyler Cowen that he believes A.I. will be worse for math people than for word people, because the technology is improving at math problems faster than at verbal exercises.

It may also make the world more equal. In coding and other realms, studies so far show that A.I. improves the performance of less accomplished people more than it does that of the more accomplished. If you are an immigrant trying to write in a new language, A.I. lifts your abilities up to average.

It will probably make us vastly more productive and wealthier. A 2023 study led by Harvard Business School professors, in coordination with the Boston Consulting Group, found that consultants who worked with A.I. produced 40 percent higher-quality results on 18 different work tasks. Of course, bad people will use A.I. to do harm, but most people are pretty decent and will use A.I. to learn more, innovate faster and produce advances like medical breakthroughs.

But A.I.'s ultimate accomplishment will be to remind us who we are by revealing what it can't do. It will compel us to double down on all the activities that make us distinctly human: taking care of each other, being a good teammate, reading deeply, exploring daringly, growing spiritually, finding kindred spirits and having a good time. "I am certain of nothing but of the holiness of the Heart's affections and the truth of Imagination," Keats observed. Amid the flux of A.I., we can still be certain of that.
[2]
AI regulation - will machines really destroy us before we do that ourselves?
Debates over the need to regulate artificial intelligence are sometimes dominated by extreme claims. The biggest among these is that AI poses an existential threat to humanity. So, does it? Or are the risks more mundane, practical, and strategic?

One example of the latter is the bewildering rush with which some organizations are adopting cloud-based generative systems, apparently without forethought - and without the requisite skills, or any consideration of their own brand values, business models, and privileged data. In such scenarios, the first casualty is sometimes ethics or good governance, as reports have shown in recent years (see diginomica, passim). The second casualty is professional judgement and responsibility.

An oft-quoted example is the Manhattan lawyer who relied on ChatGPT to find relevant case law to present in litigation. The precedents it presented were hallucinations, and it took a judge's expertise, knowledge, and first-hand experience to spot the error. Yet the unasked question is: why did a successful lawyer abdicate his professional responsibility - barely months after the chatbot had been released? The answer is simple: the generated content seemed plausible and used the same language as expert humans. So he trusted it - just like that. Doubtless many have made the same mistake, and will continue to do so. Let's hope that experienced humans in the loop pick up all the errors.

Other concerns have been well rehearsed for years - long before the AI summer gave us generative apps that promise something for nothing, like some code-based myth of perpetual motion. These include the risk of automating historic biases by training AIs on data from flawed human systems (see diginomica, passim), with the end result being discrimination against ethnic minorities, women, and others. I won't rehash those arguments here, but they are well expressed in the book Hidden in White Sight by Calvin Lawrence, IBM CTO for Responsible and Trustworthy Artificial Intelligence. And in the story of Thomas Siebel, founder, Chair and CEO of C3ai, advising the US military not to use AI in recruitment, as it would always give the answer "white male who went to West Point".

The challenge, of course, is that this issue has become heavily politicized on both sides of the Atlantic. Some wrongly perceive the need for diversity in AI training (and elsewhere in life) as a racist attack on white males, rather than an obligation not to automate and perpetuate discrimination against minorities or women. To those people I can only say this: it is simply about not industrializing systemic unfairness, and not giving it a veneer of computerised neutrality. I say this as a middle-aged, middle-class white male from the Home Counties.

So, what of the rumoured apocalypse? Some commentators, academics, and technologists believe it to be a distinct possibility - once AIs hit the 'singularity' of exceeding human intelligence, and thus become exponentially smarter as they train themselves.

Dr David Krueger is Assistant Professor of Machine Learning and Computer Vision at the University of Cambridge, and one of several academics who take the risk very seriously - despite working within the industry themselves. Speaking at a Westminster policy eForum on AI regulation last week, he explained that this fear is not just of some AI-triggered apocalypse, but also of the abandonment of human agency.
Though he didn't use these words, I would argue that we are witnessing, in real time, the beginnings of AIs becoming a ruling class over us. Organizations, professionals, students, and others already trust relatively primitive tools to know better than human experts. As a result, these things are already embedded in our lives, and even force-fed to us on social platforms (LinkedIn, for example). Yet these AIs are merely recycling human-authored data; they are neither artificial nor intelligent.

Krueger said:

I got into the field because I was worried about the long-term future of AI, and the potential for advanced AI systems to replace humanity as the driving force of our future on Earth. Ten years ago, this was viewed as a fringe, even crazy, issue by people in the field. But in the years since, we've seen more and more people come to agree that it is a serious concern.

Just how serious can be seen in a public statement on the Center for AI Safety's website, signed by hundreds of the world's leading academics, technologists, business leaders, and creatives. The statement simply says:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

It is hard to argue with that, or with the long list of signatories. But Krueger explained:

I don't think we have even got close to making it the priority it needs to be. [...] I like to compare it to climate change, because I think there's this international aspect. AI is not something that I believe we can effectively regulate in a single country. We had scientific consensus on climate change four or five decades ago, yet we are still not taking the most effective actions that we need. On AI, we are approaching a similar scientific consensus that the risk of extinction is serious [so the same problem may apply]. There are ways in which we can make positive steps on regulation [in one country]. But ultimately, we will always be subject to the concern that people will just go elsewhere. And that's really a problem, because I think we may need strict regulations.

Why might they need to be strict, rather than - as we are constantly told - light touch, so as not to impede 'innovation', that most overused and debased of words? He said:

It's something that we will have to worry about, potentially, within the next ten years. And once we get to that point [the threat becoming real], we'll have a really difficult international cooperation problem. We need to be coordinating with other countries and figuring out how we can prevent race-to-the-bottom dynamics on regulation. This is where people will sacrifice safety, ethics, and the common good out of their greater concern for economic competitiveness. Especially when these companies are already so economically powerful.

On that point (rather than the extinction question), I agree. Let's hope the UK government, among others, is listening as it draws up new legislation.

Indeed, we can see the beginnings of these trends all around us. And that is partly because some AI vendors knew exactly what they were doing when they seeded the cloud with free generative apps: they were creating the dependency culture that naturally follows from humans being pleasure-seeking primates: push button, get free candy. It's the same reason that drug dealers are wealthy and Las Vegas hotels are full of people sitting at slot machines.
Except with Gen-AI we supposedly hit the jackpot every time: free art, music, movies, stories, opinions, journalism, speeches, essays, and more. (Now we never need to pay clever people again! We don't even have to make things ourselves!)

He continued:

It is so tempting to ignore the risk of AI, because it's a daunting thing to try and address. It's much easier to imagine the situation is better than it is. Or to think, 'Oh, now that people have started to worry about it, let's get some research done'. Or, 'Let's set some rules to address it.' But unfortunately, none of these things are true. We need fundamental research breakthroughs in order to assure the safety of AI systems [not merely assess their risk]. We don't really understand how they work, so without that research we can't provide meaningful guarantees about AI's behaviour. And experts are pessimistic about those things changing at any point in the near future.

Bear in mind, he is a Professor of Machine Learning. Krueger added:

Plus, some of the misinformation that technology companies' advocates have put out, saying that AIs aren't 'black boxes' anymore: that's just a complete lie. We have really difficult problems, where we need fundamental research breakthroughs. So, I think that a lot of the ideas that people have about how we might regulate AI, while helpful in some ways, are not addressing the fundamental problem. Namely, we don't understand the technology well enough to make a meaningful assessment of risk. And we need to cooperate internationally, therefore, to make sure we are adopting a conservative approach.

But again, we are told - albeit by technology companies, plus their investors and consultants - that this will "impede innovation". He said:

That's a mistake. I'm not representing the rest of the safety community in saying that. Speaking personally, innovation is great, but we're talking about the future of humanity. The potential for millions or billions of people to die or lose their jobs, or for humanity to be replaced and lose control of the future. And that is more important than the economy.

Fair enough. Though we should add the caveat that AI, like every new technology before it, will create new companies, services, industries, and jobs, even as it destroys others.

On the face of it, Krueger's apocalyptic thinking would be easy for cynics to dismiss, given its association with that perennial tabloid obsession, The Terminator: the idea that marauding, embodied intelligences - an increasingly common term for robots - will decide to exterminate us. After all, many researchers now believe that robots are how AIs will learn about the real world, and that AIs are how robots will navigate it. So it becomes a plausible future - if and when AIs are advanced enough to bear a grudge. (Currently, though, they don't understand what 'grudge' means; they can explain it accurately, but that doesn't mean they are consciously aware of the meaning - of that word or any other.)

What is interesting about this meme, however, is that it is centuries old - millennia old, in fact. Fear of technology, science, and even of knowledge itself dates back through all of 20th-century science fiction, into 19th-century tales, and from there into the Renaissance, then Medieval times and the origins of the Faust legend, and thence back to Classical mythology - Prometheus stealing fire from the Gods and gifting us science - and the Biblical Eden. Fear of knowledge, progress, science, and the future is as old as storytelling itself.
But we keep eating the forbidden fruit. It hasn't killed us yet - but that's a survivor talking; the body count is pretty high.

The question then becomes: why might an AI come to regard humans as a problem to solve? One answer is that AI companies already regard artists and professionals as urgent problems to get rid of - so why not everybody else? Another is simple: data. AIs need it and are meaningless without it. But unfortunately for us, the data consistently shows that we are burning the planet. Ironically, one of the major causes of climate change will be... AI. Data centers already use as much energy as the world's fourth-largest economy, Japan - and rising. So, perhaps AIs will self-delete to save us? Flippancy aside, it seems the best way to stop AIs from deleting us would be to stop generating so much data showing that we are destroying the planet ourselves.

But fear of the future isn't necessarily about evil machines and meddling scientists, suggested Krueger - a scientist who is meddling in our favour:

There are many problematic uses of AI right now. Areas where AI might lead to the risk of a real loss of control, or troubling economic indicators that we are handing over too much control over the economy to AIs. There are a lot of huge opportunities with AI. But even if we set aside the existential risk, it's not a foregone conclusion that the development of AI will be a positive thing - even for the economy, let alone for people's day-to-day lives. If we look at the recent history of technology, it hasn't brought nearly the economic growth that has been promised. So, we should be sceptical of this narrative that 'AI is going to do amazing things by default, so we must get out of its way'. Instead, we need to do some deliberate steering of how technology is used and deployed.

However sceptical some may be about apocalyptic threats and malign AIs, Krueger has a valid point about the tech sector. Think about it. Since the 1980s alone, we have seen countless transformative technologies. An incomplete list would include: GUIs, the IBM PC, the Apple Mac, Windows, desktop publishing, packaged applications, the public internet, the Web, client/server computing, ecommerce, dotcoms, email, mobile phones, texting, broadband networking, Wi-Fi, cloud platforms, robotics, automation, the app economy, on-demand services, social networking, chat, smartphones, tablets, smart watches, digital assistants, payment platforms, mobile banking, blockchain, cryptocurrency, tokenization, AI, and more, with quantum to come.

Yet UK economic growth is in the very low single digits - or fractions of a percent - and productivity has been flatlining for 20 years. Meanwhile, many of us find it almost impossible to leave work and just switch off. The same is true in many parts of the world. But economies that are still booming, in relative terms, include the US and China, where countless massive IT companies and their investors are based. Much of this innovation, it seems, is largely benefiting the innovators. An overstatement? Not when some companies are worth trillions of dollars: more than the individual GDPs of nearly every nation on Earth.

Krueger continued:

Here's a very cartoon picture. Compare advanced AI systems to a button you can push that gives you a million dollars, but it has a one percent chance of destroying the world. A lot of people would push that button - and do it a lot of times. So, if a lot of people had access to the same button, the world would be over very quickly. That's the nature of the problem.
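To put rough numbers on that thought experiment (my arithmetic, not Krueger's): if each press carries an independent one percent chance of catastrophe, the probability of surviving $n$ presses decays exponentially:

$$P(\text{survive } n \text{ presses}) = 0.99^{\,n}, \qquad 0.99^{100} \approx 0.37, \qquad 0.99^{1000} \approx 4 \times 10^{-5}.$$

A risk that feels negligible per press compounds into near-certain catastrophe once enough people press often enough - which is precisely the collective-action problem he describes.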
It is easy, in a cynical world, to dismiss the doom merchants, and to observe that fear of the future is a narrative that has passed from generation to generation. As I said above, it is as old as storytelling itself. But I have a suggestion: add one word to the question 'Will AI destroy humanity?' And that word is 'our'. Will it destroy our humanity?

As I have written in these pages before, we have been told throughout this century that AI will automate all the boring jobs and free us to be creative. But right now, it is doing the exact opposite.

So, what makes us human? There would be general agreement that what sets us apart from the animals is our hands, our brains, and - above all - our ability to make things: painting, sculpture, music, photography, stories, plays, poems, dramas, films, games, inventions, machines, and buildings. And to pursue science, chemistry, engineering, biology, physics, astronomy, maths, and more - including software development. And to learn for ourselves and seek out knowledge first hand. All of this is definitively human.

Of course, some unique, talented individuals will make extraordinary human work assisted by AI, just as they always have with new technologies. And perhaps there will be an equal but opposite reaction to Big Tech's AI dominance: a return to the bespoke, the handmade, the local, the analog, and the crafted - human expression in the margins.

But on a global scale, if we hand over all of human creativity and expression to AIs, and just sit back and become passive consumers of machine ideas, machine-authored content, and machine-recycled simulations - reducing creativity to 'show me this!' instead of making it ourselves - haven't we destroyed our humanity ourselves?
As artificial intelligence rapidly advances, experts and policymakers grapple with potential risks and benefits. The debate centers on how to regulate AI development while fostering innovation.
As artificial intelligence (AI) continues to advance at an unprecedented pace, public discourse has increasingly focused on the potential risks and benefits of this transformative technology. Recent surveys indicate that a significant portion of the population harbors concerns about AI's impact on society, with some fearing it could lead to humanity's downfall [1].
While public anxiety about AI is palpable, many experts in the field offer a more nuanced perspective. They acknowledge the need for caution but argue that the immediate threats posed by AI are often exaggerated. Instead, they point to more pressing concerns, such as AI's potential to exacerbate existing societal issues like inequality and job displacement [2].
As the debate intensifies, policymakers face the complex task of developing regulatory frameworks for AI. The challenge lies in striking a balance between fostering innovation and mitigating potential risks. Some argue for preemptive measures to prevent worst-case scenarios, while others caution against overly restrictive policies that could stifle technological progress [1].
While long-term existential risks capture headlines, experts emphasize the importance of addressing more immediate AI-related issues. These include algorithmic bias, data privacy, and the spread of misinformation. Proponents of this view argue that focusing on these tangible problems can help build public trust in AI technologies while paving the way for responsible development [2].
As the AI debate continues, many emphasize the importance of informed public discourse. Experts argue that a better understanding of AI's capabilities and limitations can help alleviate unfounded fears while highlighting genuine areas of concern. This approach could lead to more effective and balanced regulatory measures [1].
Moving forward, many in the field advocate for collaborative approaches to AI development and regulation. This involves bringing together technologists, policymakers, ethicists, and the public to create comprehensive strategies for responsible AI advancement. Such cooperation could help ensure that AI technologies are developed and deployed in ways that benefit society while minimizing potential risks [2].