Curated by THEOUTPOST
On Tue, 19 Nov, 12:02 AM UTC
2 Sources
[1]
AI feels like an unstoppable force. But it is not a panacea for businesses or society
University of Bath provides funding as a member of The Conversation UK.

In Greek mythology, Prometheus is credited with giving humans fire as well as the "spark" that spurred civilisation. One of the unintended consequences of Prometheus's "gift" was that the need for celestial gods diminished. Modern humans have been up to all sorts of things with similar unintended consequences, from using the CFCs that opened a hole in the ozone layer to building systems they do not understand or cannot fully control. In dabbling with artificial intelligence (AI), humans seem to have taken on the role of Prometheus - apparently gifting machines the "fire" that sparked civilisation.

Predicting the future is best left to shamans and futurologists. But we could be better informed about the dangers that follow from how AI operates, and work out how to avoid the pitfalls. First, we must recognise that AI holds immense promise for human society. It is becoming ubiquitous - from mundane tasks such as writing emails to complex settings that require human expertise.

AI systems - by which we mean large language models (LLMs) that appear to "understand" and produce human language - are prediction machines. They are trained on large datasets that enable them to establish statistical associations between a huge number of variables and to predict what comes next. If you have used Google, you might have experienced some version of this through its predictive prompts. For example, you might type "how to drive" and Google will complete it as "how to drive an automatic car". It is unlikely to complete it with "how to drive an aeroplane". Google establishes this by looking at the history of what words come after "how to drive". The larger the dataset a model has been trained on, the more accurate its predictions will be. Variations of this logic are used in all of AI's current applications.

AI's strength, of course, is that it can process untold amounts of data and extrapolate from it to the future. But this strength is also its weakness: it makes AI vulnerable to a phenomenon management scholars refer to as the "confidence trap". This is the tendency to assume that because earlier decisions have led to positive outcomes, continuing in the same way will remain OK.

Consider an example: the intervals between maintenance of critical aeroplane parts. If increasing the intervals has worked out fine in the past (no failures), the longer intervals might be adopted widely, and there might be a push to extend them further. Yet this turned out to be a recipe for disaster. Alaska Airlines flight 261 crashed into the Pacific Ocean, killing all 88 people on board, because - perhaps influenced by previous success - a decision was made to delay the maintenance of a critical part.

AI might just exacerbate this tendency. It can draw attention away from warning signs as its analysis feeds into decision-making. Or it can extrapolate from past results and take decisions without human intervention. Take driverless cars, which have been involved in more than a dozen cases of pedestrians being killed. No dataset, no matter its size, can provide training for every potential action a pedestrian could take. AI cannot yet compete with human discretion in situations like these.
But more worryingly, AI can diminish human capabilities to the extent that the ability to determine when to intervene might be lost. Researchers have found that use of AI leads to skill decay - a particular concern where workplace decisions involve life-or-death consequences. Amazon learned the hard way about letting "prediction machines" make decisions when its internal hiring tool discriminated against women: it had been trained on a decade of applications that skewed heavily towards males. These are, of course, the examples we are aware of. As LLMs get more complex and their inner workings become more opaque, we might not even realise when things go astray.

Looking backwards

Because AI mirrors the past, it might also be limited in its ability to spark radical innovation. By definition, a radical innovation is a break from the past.

Consider photography. Innovative photographers changed the way the business was done - the history of photojournalism shows how something that started as a way of illustrating the news gradually acquired storytelling power and was elevated to the status of an art form. Similarly, fashion designers such as Coco Chanel modernised women's clothing, freeing them from uncomfortable long skirts and corsets that had lost their relevance in the post-war world. The founder of sportswear manufacturer Under Armour, former college football player Kevin Plank, used the discomfort of sweaty cotton undershirts as an opportunity to develop clothing made from microfibres that draw moisture away from the body.

AI can improve on these innovations. But because of how it operates in its current form, it is unlikely to be the source of such novelties. Simply put, AI is unable to see or show us the world in a new way - a shortcoming we have termed the "AI Chris Rock problem", inspired by a joke the comedian cracked about making bullets prohibitively expensive. By suggesting a remedy of "bullet control" rather than gun control to curb violence, Rock got laughs by tapping into the cultural zeitgeist and presenting an innovative solution. In doing so, he also highlighted the absurdity of the situation - something that requires human perception. AI shows its shortcomings when what previously worked loses its relevance or problem-solving power.

AI's past success means it will roll out in ever-widening circles - but this itself constitutes a confidence trap that humans should avoid. Prometheus was ultimately rescued by Hercules. No such god stands in the wings for humans. This means more, rather than less, responsibility rests on our shoulders. Part of this includes ensuring our elected representatives provide regulatory oversight of AI. After all, we cannot let the technocrats play with fire at our expense.
[2]
AI feels like an unstoppable force. But it is not a panacea for businesses or society
An analysis of AI's potential and limitations, highlighting its promise for society while cautioning against overreliance and potential pitfalls in decision-making and innovation.
Artificial intelligence (AI) has emerged as a transformative force in modern society, offering immense potential across various domains. From handling mundane tasks like email composition to tackling complex problems that traditionally required human expertise, AI's presence is increasingly ubiquitous [1][2]. At the core of this AI revolution are large language models (LLMs), sophisticated prediction machines trained on vast datasets to understand and generate human-like language.
AI's functionality is rooted in its ability to process enormous amounts of data and establish statistical associations between variables. This capability allows AI to make predictions based on patterns observed in historical data. A simple example of this predictive power can be seen in Google's search autocomplete feature, where the system suggests likely completions for user queries based on past search patterns [1][2].
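The mechanism can be made concrete in a few lines of code. The sketch below is a minimal, hypothetical illustration of frequency-based next-word prediction, the simplest ancestor of what autocomplete and LLMs do at vastly greater scale; the query corpus and the suggest function are invented for illustration and do not reflect how Google's system is actually implemented.

```python
# Toy autocomplete: count which words historically follow a prefix,
# then suggest the most frequent continuations. The "history" corpus
# below is entirely made up for illustration.
from collections import Counter, defaultdict

history = [
    "how to drive an automatic car",
    "how to drive an automatic car",
    "how to drive a manual car",
    "how to drive safely",
]

# Map each prefix to a frequency count of the word that follows it.
next_word = defaultdict(Counter)
for query in history:
    words = query.split()
    for i in range(len(words) - 1):
        prefix = " ".join(words[: i + 1])
        next_word[prefix][words[i + 1]] += 1

def suggest(prefix: str, k: int = 2) -> list[str]:
    """Return the k continuations most frequently seen after `prefix`."""
    return [w for w, _ in next_word[prefix].most_common(k)]

print(suggest("how to drive"))     # ['an', 'a'] - the common continuations
print(suggest("how to drive an"))  # ['automatic']
```

Real systems replace these raw counts with neural networks trained on billions of examples, but the core move, predicting the next token from statistical patterns in past data, is the same.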
While AI's predictive capabilities are impressive, they also expose a significant vulnerability known as the "confidence trap." This phenomenon occurs when past successes lead to overconfidence in continuing the same approach, potentially overlooking emerging risks or changing circumstances [1][2].
A stark illustration of this danger comes from the aviation industry. The crash of Alaska Airlines flight 261, which resulted in 88 fatalities, was attributed to a decision to delay critical maintenance based on past success with extended maintenance intervals [1][2]. This tragedy underscores the potential consequences of blindly trusting historical data without considering evolving risks.
The increasing reliance on AI systems raises concerns about the potential decay of human skills and decision-making abilities. Research has shown that prolonged use of AI can lead to skill atrophy, a particularly worrying trend in fields where human judgment is crucial for life-or-death decisions [1][2].
Moreover, AI systems can perpetuate and amplify existing biases present in their training data. Amazon's experimental AI hiring tool, which showed bias against women due to historical hiring patterns, serves as a cautionary tale of the unintended consequences of unchecked AI implementation in sensitive areas [1][2].
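How historical skew turns into biased predictions can be shown with a deliberately tiny, hypothetical sketch. The records and the word-scoring scheme below are entirely invented for illustration; they bear no relation to Amazon's actual tool, only to the general mechanism by which words correlated with past hires acquire positive weight.

```python
# Toy illustration of bias learned from skewed historical data.
# NOT Amazon's system: the records and scoring rule are invented.
from collections import Counter

# Historical records: (resume words, was hired). The data is skewed:
# most past hires came from one group, so group-correlated words
# co-occur with hiring far more often.
records = [
    (["captain", "men's", "chess", "club"], True),
    (["men's", "rowing", "team", "lead"], True),
    (["software", "engineer", "men's", "league"], True),
    (["women's", "chess", "club", "captain"], False),
    (["software", "engineer", "women's", "league"], False),
]

hired, rejected = Counter(), Counter()
for words, was_hired in records:
    (hired if was_hired else rejected).update(words)

def score(resume: list[str]) -> int:
    """Naive score: +1 per past co-occurrence with a hire, -1 with a rejection."""
    return sum(hired[w] - rejected[w] for w in resume)

# Identical qualifications, different gendered term -> different scores.
print(score(["software", "engineer", "men's", "league"]))    # 3 (positive)
print(score(["software", "engineer", "women's", "league"]))  # -2 (negative)
```

Note that nothing in the score function mentions gender: the bias enters purely through the co-occurrence statistics of the training data, which is exactly what makes it hard to spot.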
While AI excels at incremental improvements based on existing data, it faces limitations in driving radical innovation. By definition, groundbreaking innovations often represent a departure from historical patterns, something that AI, with its reliance on past data, struggles to achieve [1][2].
This limitation in AI's creative capabilities is exemplified by what the authors term the "AI Chris Rock problem." Named after the comedian's joke proposing "bullet control" as an alternative to gun control, this concept highlights AI's inability to generate truly novel ideas or tap into cultural nuances in the way human creativity can [1][2].
As AI continues to evolve and expand its reach, it is crucial for businesses and society to approach its implementation with a balanced perspective. While embracing the undeniable benefits of AI, we must remain vigilant about its limitations and potential pitfalls. This includes maintaining human oversight, regularly reassessing AI systems for bias or outdated assumptions, and preserving human skills and creativity alongside AI advancements [1][2].
By understanding both the promise and the perils of AI, we can work towards harnessing its power responsibly, ensuring that it serves as a tool for progress rather than a source of unintended consequences.