Curated by THEOUTPOST
On Fri, 21 Feb, 12:04 AM UTC
3 Sources
[1]
Bang goes AI? DeepSeek and the 'Star Trek' future
Just when the AI market seemed on a steady course, DeepSeek entered like a kid setting off fireworks indoors. Its low-cost AI model, bypassing NVIDIA frameworks, led to a historic $600 billion slump for the US tech giant. And the industry's crown prince, OpenAI, has started a war on two fronts - the first is legal, challenging DeepSeek on model training, and the second is competitive, combatting its sudden rival with the launch of o3-mini. With much of Silicon Valley now in a tailspin, Big Tech is still scratching its head about how a much more efficient AI tool could take the world by surprise. Yet, while for many the full capabilities of this new open-source platform are still being understood, it speaks to an age-old truth in business - competition can come from anywhere, and this is simply the latest example of a fresh innovator pushing competitors to be more efficient with their emerging tech investments.

For curious onlookers, what is DeepSeek really doing that is so game-changing? The company is marketing itself in direct contest with OpenAI - "rivalling OpenAI's Model o1". DeepSeek's R1 LLM is priced at a fraction of the cost of vendor alternatives - one of its key draws. Another kicker is that its foundation is based on reinforcement learning rather than labeled data. For AI, this is revolutionary. Labeled data provides a target for the model to predict - it's like training wheels for AI: time-intensive to set up, but helping keep models on the right path. DeepSeek runs without this, making it slower upfront but faster and more scalable in the long run, avoiding data-tagging bottlenecks. Like ChatGPT in many respects, DeepSeek excels in mathematical and computational tasks, and through open-source availability its 'weights' have been disclosed to the public - a huge win for the open-source community compared with the black-box products on the market. Taken together, these elements mean DeepSeek offers attractive scope for large-scale automation, with free availability and the capacity to create chatbots rivalling other models.

But it's not all green flags. Concerns about data protection and information freedom are an issue, as data is housed in China under its own non-EU regulations. This is why it is vital that organizations and individuals alike carefully consider whether the business process, and the regime it sits in, are acceptable when measured against current requirements for data privacy, protection, and creative and political expression. However, throwing data caution to the wind, consumers have flocked to the app in their millions, suggesting there will likely soon come a time when users are split between an 'everyday' AI they can play or interact with comparatively simply, and more expensive, advanced AI, coexisting for different - likely public sector, research, and industry - use cases. Thus, for providers, the best competitive strategy is to innovate, improve UX and functionality, and find the right niche or market to dominate.

LLMs and GenAI are, of course, just one avenue of AI innovation. A hot new buzzword in Big Tech is 'agentic AI'. This will be revolutionary for reimagining workflows and may soon create holistic AI ecosystems that autonomously manage and optimize processes in concert, with little human oversight in some use cases (a simplified sketch of the idea follows below). Agentic AI is tipped by Gartner to revolutionize AI's potential, with scope for it to feature in a third (33%) of enterprise software applications by 2028, up from a minimal 1% today.
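To make that 'agentic' idea a little more concrete, here is a deliberately tiny, hypothetical sketch of the pattern most agent frameworks share: a model-driven planner repeatedly chooses a tool, observes the result, and stops when the goal is met. The planner below is a stub standing in for a real LLM, and the tool names and inventory logic are invented purely for illustration.

```python
# A simplified, hypothetical sketch of an agentic loop: plan -> act -> observe -> repeat.
# The planner is a stub standing in for an LLM; tools and logic are illustrative only.
def stub_planner(goal, history):
    """Stand-in for an LLM: picks the next action from the goal and past observations."""
    if not history:
        return ("check_inventory", None)
    if history[-1] == ("check_inventory", "2 units left"):
        return ("reorder_stock", 50)
    return ("done", None)

TOOLS = {
    "check_inventory": lambda _arg: "2 units left",
    "reorder_stock": lambda qty: f"purchase order placed for {qty} units",
}

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):
        action, arg = stub_planner(goal, history)
        if action == "done":
            break
        observation = TOOLS[action](arg)      # act, then feed the result back into the loop
        history.append((action, observation))
        print(f"{action} -> {observation}")
    return history

run_agent("keep the warehouse stocked")
```

Real agentic systems wrap this loop with memory, permissions, and human checkpoints - which is exactly where the oversight questions discussed below come in.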
While more complex AI workflows are highly anticipated, basic prompt-and-response applications will become available off-the-shelf and, for now, this will serve many public users more than adequately. In our State of AI in Sales report, we found that nearly half (47%) of current AI users have no immediate plans to further integrate AI into their workflows. In fact, AI usage is barely moving past a basic level for many organizations. But those exploring exciting, nuanced applications will find more options with agentic AI. Because it is not the same kind of AI as GenAI, this type of service is less likely to hallucinate, but it will come with its own pros and cons to manage in terms of effectiveness, its ability to execute what is asked of it, and the levels of human oversight required to ensure reliability and accountability for the actions it takes.

A world of supercomputers and starships - with the pace of change in emerging tech, it seems like sci-fi isn't that far off from reality. But is that accurate? We are a long way from technology being leveraged for the sake of flash - what's practical is what's functional. Finding the right level of technology to solve the needs of the user is critical. For environmental and cost reasons, people don't get in a Ferrari to pick up groceries, or tear up a whole field for just one bowl of cornflakes. Right now, a lot of the effort, expense, and resources of AI 'under the hood' are hidden from the public, but society needs to become more knowledgeable about this to make better decisions, and to help direct industry to innovate where it will have the most impact. Yet it's plain to see that, all other issues put to one side, the likelihood of more ubiquitous and embedded AI just took a step towards reality. And, given that DeepSeek comes from outside the US, AI creation may start to come from a greater variety of cultures, potentially reshaping global AI offerings, diversifying centers of expertise, and changing the ways in which different users are catered for.

One of the most notable things that struck me in the vision that Star Trek laid out for a possible utopian future was the level of trust that users had in their computing. They appeared to have cracked issues of data privacy and security such that they didn't hesitate to use their AI for work and leisure, for reasons great or small. That hinges on trust. For current AI users, there must be trade-offs around trust and economics if they want to balance being good stewards of their own or customer data against both the monetary and external costs of their tech use.

There is a logical progression in AI innovation. Users want support with small tasks, see value, and increase their expectation of what AI can deliver. If AI can sort emails, can it review them? If AI can find a fault, can it fix it? Hence the growing interest in agentic AI as a new milestone to reach. Arguably, there are tensions between innovation and trust, between economics and excitement, and between standards, frameworks, and features. Over the next few years, the industry will need this to shake out more clearly if individual suppliers are to gauge their markets well and afford to keep innovating. Additionally, partnerships, ecosystems, and APIs - generally, working together to provide greater customer value - will need very clear international standards for secure and trustworthy interoperability.
This will be key because, barring a massive leap forward in AGI (artificial general intelligence), perhaps led by quantum computing, it is not looking likely that there will be 'one system to rule them all' - an all-encompassing AI that can accurately perform every task a person or organization might want. But siloed AIs, like any siloed software solution, aren't likely to create that Star Trek-like world that society apparently envisions. Consumers and organizations must vote with their wallets, consciences, and needs in mind. But suppliers must look ahead to a longer-term play if they are to answer these major challenges and support the society that they cannot just supply, but hopefully make better, with AI.
[2]
10 key reasons AI went mainstream overnight - and what happens next
This AI thing has taken off really fast, hasn't it? It's almost like we mined some crashed alien spacecraft for advanced technology, and this is what we got. I know, I've been watching too much *Stargate*. But the hyper-speed, crossing-the-chasm effects of generative AI are real. Generative AI, with tools like ChatGPT, hit the world hard in early 2023. All of a sudden, many vendors are incorporating AI features into their products, and our workflow patterns have changed considerably.

Also: The best AI for coding in 2025 (and what not to use - including DeepSeek R1)

How did this happen so quickly, essentially transforming the entire information technology industry overnight? What made this possible, and why is it moving so quickly? In this article, I look at ten key factors that contributed to the overwhelmingly rapid advancement of generative AI and its adoption into our technology stacks and workday practices. As I see it, the rapid rise of AI tools like ChatGPT and their widespread integration came in two main phases. Let's start with Phase I.

Researchers have been working with AI for decades. I did one of my thesis projects on AI more than 20 years ago, launched AI products in the 1990s, and have worked with AI languages for as long as I've been coding.

Also: 15 ways AI saved me time at work in 2024 - and how I plan to use it in 2025

But while all of that was AI, it was incredibly limited compared to what ChatGPT can do. As much as I've worked with AI throughout my educational and professional career, I was rocked back on my heels by ChatGPT and its brethren. That's Phase I. The 2020s marked an era of fundamental AI innovation that took AI from solving specific problems in very narrow domains to being able to work on almost anything. There are three key factors in this phase.

While AI has been researched and used for decades, for most of that time it had some profound limitations. Most AIs had to be pre-trained with specific materials to create expertise. In the early 1990s, for example, I shipped an expert system-based product called *House Plant Clinic* that had been specifically trained on house plant maladies and remedies. It was very helpful as long as the plant and its related malady were in the training data. Any situation that fell outside that data was a blank to the system.

Also: How to run DeepSeek AI locally to protect your privacy - 2 easy ways

AIs also used neural networks that processed words one at a time, which made it hard for an AI to understand the difference between "the bank of a river" and "a bank in the center of town." But in 2017, Google posted a paper called "Attention Is All You Need." In it, the authors proposed a mechanism called "self-attention" that lets AIs focus on the words they identify as important, allowing them to process entire sentences and thoughts at once. This attention mechanism enabled AIs to understand context (like whether the "bank" in a sentence refers to the side of a river or a building that holds money). The transformer approach gave researchers a way to train AIs on broad collections of information and determine context from the information itself. That meant that AIs could scale to train on almost anything, which enabled models like OpenAI's GPT-3.5 and GPT-4 to operate with knowledge bases that encompassed virtually the entire Internet and vast collections of printed books and materials.

Also: What is sparsity? DeepSeek AI's secret, revealed by Apple researchers
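To make the self-attention idea slightly more concrete, here is a minimal NumPy sketch of the scaled dot-product attention at the core of that paper - an illustrative toy only; real transformers add learned query/key/value projections, multiple heads, masking, and positional information on top of this.

```python
# A minimal sketch of scaled dot-product self-attention over a toy sequence.
import numpy as np

def self_attention(x):
    """x: (seq_len, d_model) token embeddings; returns contextualized embeddings."""
    d = x.shape[-1]
    # In a real model, Q, K, and V come from learned linear projections of x.
    q, k, v = x, x, x
    scores = q @ k.T / np.sqrt(d)                      # how much each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the sequence
    return weights @ v                                 # each output token is a weighted mix of all tokens

# Toy example: three 4-dimensional "token" vectors processed all at once.
tokens = np.random.rand(3, 4)
print(self_attention(tokens).shape)  # (3, 4)
```

Because each output row is a weighted mix of every token in the sequence, the model can work out from context whether "bank" belongs with "river" or with "money."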
This makes them almost infinitely adaptable and able to pull on vast arrays of real-world information. It also meant that AIs could be used for nearly any application, not just ones specifically built to solve individual problems. While we spent months training *House Plant Clinic* on plant data, ChatGPT, Google Gemini, and Microsoft Copilot can all diagnose house plant problems (and so much more) without specialized training. The one gotcha has been the question of who owns all that training data. There are numerous lawsuits currently underway against the AI vendors for training on (and using) data from copyrighted sources. This could restrict the data available to large language models and reduce their usefulness. Another issue with this sort of infinitely scaled training data is that much of the information isn't vetted. I know this comes as a surprise to all of you, but information published on the Internet isn't always accurate, appropriate, or even sane. Vendors are working to strengthen guardrails, but we humans aren't even sure what is considered appropriate. Just ask two people with wildly divergent perspectives what the truth is, and you'll see what I mean.

By the early 2020s, a number of companies and research teams had developed software systems based on the transformer model and world-scale training datasets. But all of those sentence-wide attention calculations required enormous computing capability.

Also: AI data centers are becoming 'mind-blowingly large'

It wasn't just the need to perform massively parallel matrix operations at high speed; it was also the need to do so while keeping power and cooling costs at a vaguely practical level. Early on, it turned out that NVIDIA's gaming GPUs were capable of the matrix operations needed by AI (gaming rendering is also heavily matrix-based). But then NVIDIA developed its Ampere and Hopper series chips, which substantially improved both performance and power utilization.

Also: 5 reasons why Google's Trillium could transform AI and cloud computing - and 2 obstacles

Likewise, Google developed its TPUs (Tensor Processing Units), which were specifically designed to handle AI workflows. Microsoft and Amazon also developed custom chips (Maia and Trainium, respectively) to help them build out their AI data centers. There were three major impacts from these huge AI-chip-driven data centers:

Okay, so now we have working technology. What of it? I mean, how many times has an engineering team produced a product or capability it thought was revolutionary, only to have its work die for lack of practicality or market acceptance? But here, now, with generative AI, market forces are driving the real change. Let's dig into seven more key factors.

And then came ChatGPT. It's a funny name, and it took a while for most of us to learn it. ChatGPT literally means a chat program that's generative, pre-trained, and built on transformer technology. But despite a name that only a geek could love, in early 2023 ChatGPT became the fastest-growing app of all time. OpenAI made ChatGPT free for everyone to use. Sure, there were usage limitations in the free version. It was also as easy to use as (or easier than) a Google search. All you had to do was open the site and type in your prompt. That's it. And because of the three innovations we discussed earlier, ChatGPT's quality of response was breathtaking. Everyone who tried it suddenly realized they were touching the future.
Also: Are ChatGPT Plus or Pro worth it? Here's how they compare to the free version

Then, OpenAI opened the ChatGPT models to other programmers through an API. All any programmer needed was a weekend of learning and a credit card number to add world-changing AI into any application. Cost per API call wasn't much more than for any other commercial API, which suddenly meant that AI was a very high-profile, easy addition that could expand a company's product line with a super-hot new income-producing service. Barrier to entry? What barrier to entry?

While vendor-supported APIs like those from OpenAI can reduce time to market considerably, they can also lead to vendor lock-in. To prevent total reliance on proprietary technologies, the open-source community has embraced AI in a big way. Open-source models (Llama, Stable Diffusion, Falcon, BLOOM, T5, etc.) provide non-proprietary and self-hosted AI capabilities without relying on big technology monopolies. Open source also democratizes AI by allowing developers to create AI solutions for areas outside the guardrails the big model providers are required to keep in operation.

Also: The best open-source AI models: All your free-to-use options explained

Platforms like those from Hugging Face provide easy-to-use and easy-to-test tools that allow developers of varying skill levels to integrate AI into their projects quickly. Then, of course, there are the classic benefits of open source: large-scale collaboration, continuous improvements, community-generated and validated optimizations, and the introduction of new features, including some too obscure to be profitable for a big vendor but necessary for certain projects. All of this gives businesses of all sizes, researchers, and even nights-and-weekends developers the opportunity to add AI to their projects, which, in turn, is accelerating AI adoption across a wide range of applications.

The thing was, generative AI wasn't just hype. It worked and provided value. Separate from help with writing (which ZDNET policy prohibits for its writers), I documented 15 different ways AI helped me tangibly in 2024 alone.

Also: The work tasks people use Claude AI for most, according to Anthropic

These uses ranged from programming and debugging help, to fixing photos, to doing that sentiment analysis I mentioned above, to creating album covers, to generating monthly images for my wife's e-commerce store, to creating moving masks in video clips, to cleaning up bad audio, to tracking me during filming, to doing project research, and so much more. And I'm not alone. Small and large businesses alike, as well as students and individual contributors, all noticed that generative AI could help, for real. Not only were the valuations of the AI companies skyrocketing, but consumers actually bought - and really used - the AI tools that suddenly became available.

For years, decades really, AI was far from mainstream. Sure, there were limited AIs in video games. Expert systems were built that helped solve specific problems for some companies. There was a lot of promise and research. But when it came to "Show me the money," there was never the overwhelming return that vulture capitalists and their ilk required from tech investments.

Also: From zero to millions? How regular people are cashing in on AI

Then, all of a sudden, Aunt Marge was talking about ChatGPT during family gatherings. AI was a thing, it was astonishing, and oh-my-gosh, the things it could do. Did you know you could make it talk like a pirate? Did you know you could get it to write a *Star Trek* story? Did you know it could analyze your siloed business data and give you sentiment analysis in minutes without a bit of programming? And did you know it could write code that worked?
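On that last point, the barrier to entry for developers really was just a few lines. As a rough, hypothetical sketch - assuming the official openai Python package (v1 or later) and an API key in the OPENAI_API_KEY environment variable, with an illustrative model name - wiring a chat completion into an application looks roughly like this:

```python
# A minimal, hypothetical sketch of calling a hosted chat-completion API.
# Assumes the `openai` Python package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name, not a recommendation
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this customer review in one sentence: ..."},
    ],
)
print(response.choices[0].message.content)
```

Open-source routes, such as a Hugging Face pipeline or a self-hosted Llama model, follow a similarly small pattern, which is a big part of why AI features spread through product lines so quickly.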
Within a few months, ChatGPT became the fastest-growing app of all time, hitting 100 million active users. A year later, that had doubled to 200 million active users. Suddenly, AI was a headliner rather than the personality quirk of the geeky neighbor you ask over to fix your PC but would really prefer went away once the PC was working again and they'd been paid in fresh-baked cookies. Oddly specific analogies about my geeky past aside, AI was clearly an opportunity. OpenAI was suddenly worth billions, and it seemed like Google, Microsoft, Meta, Amazon, Apple, and all the rest had been left behind. Investment and licensing deals were everywhere, and AI was being baked into mainstream products either as a bonus feature or (far more prevalently) as a very nice upsell to a monthly annuity. Microsoft had Copilot, Google had Gemini, Meta had Meta AI, Amazon had Q, and Apple... eventually had Apple Intelligence (for whatever that's worth).

This new AI boom took on characteristics of the wild, wild west. Governments were just trying to get their heads around what it all was, and whether this was an enormous economic opportunity or an existential threat. Hint: it's both. The US government set up some plans for AI oversight, but they were tepid at best. AI vendors warned of catastrophe if AI wasn't regulated. Lawsuits over copyright issues complicated matters. Then, the new administration changed the game, with a focus on substantially reduced regulation. All this opens the door for AI companies and businesses using AI to innovate and introduce new capabilities. This is great for rapid growth and innovation, but it also means the technology is running without guardrails. It definitely fuels the mainstreaming of AI technology, but it could also be very, very baaaaaad.

So, then we get to the rinse-wash-repeat phase of our discussion. AI isn't going anywhere. All of the self-fulfilling prophecies are fueling new innovation because they actually work. Major companies are continuing not only to make billion-dollar bets on the technology, but also to offer compelling products and services that provide real value to their customers. More and more companies and individuals are investing in AI startups and ongoing services. We're seeing breakthroughs like multimodal AI spanning text, images, video, and audio, autonomous agents, and even AIs used to code AIs.

Also: What is Perplexity Deep Research, and how do you use it?

The closest example I can think of to this virtuous cycle was the app economy of the mid-2000s. Data speeds became fast enough and affordable enough for phones to always be connected to the Internet, startups offered app services that proved to be tangibly valuable, those companies grew huge and continued to offer services, and more and more investment in mobile-first computing paid off for both consumers and producers. It's very likely that a similar virtuous cycle is driving AI innovation and production, pushing generative AI and other AI-based services very much into the mainstream, where they are unlikely ever to go away.

When I went to college in the 1980s and majored in computer science, my mom said that all she wanted from me was a computer that would vacuum her floors. Now, we have a wide range of little robots that go forth and do just that.
This morning, while having coffee, I tapped "Vac and mop bedroom," and Wally the Narwal did just that. My dream is to be able to say, "Alexa, bring me coffee," and have a device actually make me a cup of coffee and bring it to me while I'm sitting here writing. Don't laugh. Whether it's Tesla, Apple, or Meta, real work is being done right now on humanoid robots. Given how many times my Alexa screws up and how many times ChatGPT makes up stuff to save face, I'm not exactly sure that having a romping, stomping robot in my living room or office is a good idea. But I do want my coffee.

Also: What is DeepSeek AI? Is it safe? Here's everything you need to know

Stay tuned. The past two years have been a wild ride, and I suspect we've only just seen the beginning. What do you think has been the most significant factor in AI's rapid adoption? Have you incorporated AI tools like ChatGPT into your daily workflow? If so, how have they changed the way you work or create? Do you see AI as a long-term game-changer, or do you think we're in the midst of a hype cycle that will eventually stabilize? And what about the ethical and regulatory concerns? Do you think AI development is moving too fast for proper oversight? Let us know in the comments below.
[3]
AI risk - as trust in generative AI grows, the Great Forgetting has begun
In the 1960 movie The Time Machine, based on the 1895 novella by H.G. Wells, a Victorian inventor travels to the distant future. There, he finds a civilization of beautiful young men and women called the Eloi, living peacefully in an Eden-like world. At first, he is delighted: war is over, and paradise has seemingly returned. But the inventor discovers that the Eloi have no knowledge of history, literature, science, or technology: all human learning is crumbling to dust in an abandoned library. In fact, the Eloi have become so passive that they are unable even to think for themselves; they have fallen prey to degenerate beings who toil on giant machines underground.

From our contemporary standpoint at the dawn of an AI Age, The Time Machine could be seen as a warning about our growing reliance on technology. The pleasure-seeking Eloi still have access to their vast library of 'talking rings', which contain millennia of human knowledge, but they have become too lazy to listen to them, or even to investigate the ruins in their midst. They have moved past the need to acquire knowledge at first hand. Instead, they have become fodder for a sequestered industrial process.

As we turn to AI to explain the world to us - and perhaps even to write our texts, and make our art, music, and poetry - might we become like the Eloi one day: too lazy to find things out for ourselves, to make things, to learn languages, to question things, and to acquire new skills? Or will the opposite happen: a future in which we become AI-assisted experts in countless subjects? It is hard to predict. But my worry is that the Great Forgetting may already have begun.

In February 2025, an academic report revealed an alarming finding: the more workers trust generative AI to give them answers, the less they think critically and for themselves. This fear that our reliance on generative AI might have a negative impact on human reasoning skills was crystallized by Hao-Ping Lee, a PhD student at Carnegie Mellon University in Pittsburgh, and Advait Sarkar and colleagues, a team from Microsoft Research in Cambridge, UK. Based on a study of 319 knowledge workers who have adopted generative AI into their daily workflows, the researchers' February 2025 paper, 'The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects from a Survey of Knowledge Workers', found that higher confidence in generative AI is associated with reduced critical thinking, as users engage in what the authors call "cognitive offloading". Put simply, they offload the task of thinking onto the machine. The researchers note:

The data shows a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight when using GenAI. Higher confidence in GenAI's ability to perform a task is related to less critical thinking effort. When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration.

This is troubling on a profound level, because humans may increasingly become mere support workers as our confidence in AI's abilities grows. At the same time, the perceived need to think critically about the accuracy of AI's responses - to verify them against external sources - will eventually fall away too as we trust our systems more. This is no minor issue, for a simple reason: many external sources - websites, data archives, even public libraries - will inevitably vanish as AI's impact spreads.
As a result, much data will only exist within the context of AI systems and will therefore be unverifiable by other means - especially as LLMs frequently fail to disclose their sources. Thus, we will have become decoupled from our own data, insights, and expertise - one step removed from the source, and wholly reliant on AI mediators.

But what about 2025's much-vaunted newcomer, OpenAI's Deep Research? Daniel Litt is Assistant Professor of Mathematics at the University of Toronto. This week he published a long thread on X revealing how Deep Research hallucinates when it lacks the relevant data. While conducting research into the ages of people publishing mathematics research, he explained:

I asked [Deep Research] about the age of authors publishing in the Annals of Mathematics, arguably one of the top math journals, from 1950-2025. This is a pretty involved request - Annals has published ~3000 papers in this time, and one has to look up each of the authors and figure out/estimate their birth date. Doing this would be tedious but trivial for a human.

He thought it would be within the purview of Deep Research. The tool produced a beautifully written and argued report - and, contrary to the original suggestion [which Litt was investigating], it seemed to be saying that the average age of publication in the Annals has been *increasing* with time. So far, so good. What could possibly go wrong? Litt explained: The only problem is that it's all made up.

Not only had Deep Research hallucinated its findings, but the professor found that it also lied: Despite claiming to have looked at every paper published in the Annals in this 75-year period, poking around the pages it looked at suggests it only looked at ~5-6 papers. Reconstructing the tool's approach, it seems to have run across an article challenging [the] claim that math research is a 'young man's [sic] game' and backfilled a narrative to support this challenge.

Astonishing. But what if Litt had lacked the specialist knowledge to query the AI's findings himself? Or what if he had simply been too lazy to do the legwork and check the AI's workings? (Do you check what an AI tells you?) Last year I reported on a worrying experience of my own: an Otter AI summary of an interview I had conducted myself had flown in statistics from an unknown external source and credited them to the interviewee. Here was an AI that was not only rewriting history, but also putting words into an expert's mouth.

A related challenge is that humans' first-hand knowledge of different subjects may become shallower through our reliance on AI, including in specialist markets that, until recently, were grounded in human expertise. Take software development, where AI's speedy adoption is driven by the competitive need to rush-release new products. On 14 February 2025, a software developer called Namanyay Goel published an update on his blog (nmn.gl) saying, "New junior developers can't actually code". He explained:

Every junior dev I talk to has Copilot or Claude or GPT running 24/7. They're shipping code faster than ever. But when I dig deeper into their understanding of what they're shipping? That's where things get concerning. Sure, the code works, but ask why it works that way instead of another way? Crickets. Ask about edge cases? Blank stares. The foundational knowledge that used to come from struggling through problems is just... missing.

He continued: AI gives you answers, but the knowledge you gain is shallow.
With [developer community] Stack Overflow, you had to read multiple expert discussions to get the full picture. It was slower, but you came out understanding not just what worked, but why it worked. Think about every great developer you know. Did they get that good by copying solutions? No - they got there by understanding systems deeply and understanding other developers' thought processes. That's exactly what we're losing.

Goel added: We're trading deep understanding for quick fixes, and while it feels great in the moment, we're going to pay for this later.

Now apply that thought to every sector where the use of GPT and other models is fast becoming standardized - which is most of them. Remember the Manhattan lawyer who ceded control of his casework to ChatGPT, but lacked the first-hand knowledge to know that the case law and precedents he presented in court were hallucinations? By January 2025, there were 300 million weekly active users of that system alone, while AI functions are being force-fed to consumers on every cloud platform by the likes of Microsoft and Google - a process that risks infantilizing humanity at scale: AI as baby food, as spoon-fed mulch.

For the record, I am not saying that this negative version of our AI future is inevitable. Time's arrow and technological progress are not always the same thing: we are not moving on a single track inexorably towards the future. Rather, we are on multiple paths simultaneously, and sometimes we reject ideas, or circle back and examine them again. (Are our roads full of autonomous vehicles? No. Are some companies abandoning their self-driving programs? Yes.) However, I am saying that these challenges are real and require critical, strategic thinking. As popular science's great 20th-century communicator, Carl Sagan, said in an interview: "Science is more than a body of knowledge, it's a way of thinking, a way of sceptically interrogating the universe with a fine understanding of human fallibility. If we are not able to ask sceptical questions, to interrogate those that tell us that something is true, to be sceptical of those in authority, then we are up for grabs."

I would argue that this applies more than ever to AI's evangelists, cultists, and mystic salesmen. We must avoid AI evangelism at all costs and engage our critical faculties whenever we encounter it. Namanyay Goel offered some good advice of his own: The future isn't about whether we use AI - it's about how we use it. Use AI with a learning mindset. When it gives you an answer, interrogate it. Ask it why. Sure, it takes longer, but that's literally the point. Do code reviews differently. Instead of just checking if the code works, start a conversation with your team. What other approaches did they consider? Why did they pick this one? Make understanding the process as important as the result.

An excellent strategy. But there is a problem, which can be summed up by the acronym 'TL;DR': our collective consumption of more and more shallow information at speed, and less and less in-depth research done slowly. Every report into enterprise AI adoption in recent years makes the same point: companies primarily want the technology to save them money and make workers more productive, but comparatively few want it to help them be smarter or make better decisions. So the economic imperative is 'do more with less, faster', not 'become expert'.
And there is another problem: the human data pool will begin shrinking, which means that sources of deep human expertise will become harder and harder to find. Remember, between 75% and 99% of users never search beyond Google Page 1 as it is. Now factor in AI and ask yourself: how realistic is a future in which we all check our workings? Then ask yourself: how long before people die, or lives are put at risk, because someone didn't check the output of an AI system? How long before the complex maths that LLMs struggle with causes a plane to fall out of the sky, or a reactor to explode, because no one knows how to check the calculations?

Consider this: since the dawn of the Web as a public resource in the 1990s, millions of webpages have been added, of course, but millions have also vanished - including entire archives of data. And as generative AIs, LLMs, and chatbots proliferate, it stands to reason that many more webpages and archives will disappear as they fall into disuse and become redundant. At the same time, many books will never be written: why bother in the AI age? On that point, it is worth noting that the Web is far from the sum of human knowledge that many people believe it to be. Millions of books have never been digitised and only exist on library shelves. The US National Library of Medicine estimates that only 12% of books have ever been digitised, which means that 88% have not. At best, therefore, the Web has only ever been a partial, vanishing snapshot of human knowledge and expertise. And in recent years it has become polluted by advertising, sponsored links, search engine optimisation (SEO), and more. In this way, any AI that has been trained solely on data scraped from the Web will have vast gaps in its training data - gaps that may never be filled.

And that is not the only challenge facing us in the AI age. In 2024, the European police organization Europol estimated that, as early as 2026, most online content will be synthetic - generated by AI systems of one kind or another: text, fake photos, illustrations, video, music, and sound. There is no reason to believe that prediction is wrong: if a human can, hypothetically, generate 10,000 images in one day using a generative AI system, but only one or two by hand, why would the Web not be swamped with synthetic slop? In roughly 30 years, therefore, the Web will have undergone a complete transformation: from a medium for deep, trusted, academic information exchange (its beginnings at CERN, as proposed by Sir Tim Berners-Lee) to one of superficial machine-to-machine communication. That is a tragedy.

At heart, the problem is this. As 21st-century AIs become iteratively more sophisticated, their responses will become increasingly convincing and erudite. That is part of their unique appeal, of course, and it shifts the focus of the Web and other online platforms and communities towards explicability rather than search. Meanwhile, we lack the language to describe accurately what AIs are doing in these contexts. Without pausing to consider our words, we ascribe human thought processes to machine intelligences: we say things like, "ChatGPT thinks this...", or "Claude believes this...", and by doing so we enable the shared illusion to flourish that a human-like consciousness exists in these machines.
Professor Punya Mishra, Director of Innovative Learning Futures at the Learning Engineering Institute at Arizona State University, notes: We speak of AI 'thinking', 'deciding', or 'wanting', not because these terms are accurate, but because they serve as cognitive shortcuts, allowing us to grapple with complex systems in familiar terms.

In short, we ascribe intelligence to anything that uses language well, and as we ask AIs more and more significant questions and bring them deeper into our work and decision-making, we infer genius from their responses, when they are simply remixing the data produced by brilliant human minds - and, in some cases, hallucinating and lying. Thus, we anthropomorphize AIs while hypnotizing ourselves. And because the outputs appear authoritative, having been expressed in the language of an intelligent human, our instinct is to trust them. That is especially true if users lack the in-depth knowledge to spot when an AI is confecting a plausible-sounding fiction because it lacks sufficient training data. Overall, the result might be a kind of automated Dunning-Kruger effect, in which the user overestimates the intelligence of the machine and so uses its output with misplaced confidence.

And that is not all: to such an AI, any new discovery, any conceptual outlier, any giant leap forward in human thinking - a thought experiment, perhaps, or a brilliant insight, original idea, or theory that unifies irreconcilable models of the universe - might be indistinguishable from an error. So, where would the next Aristarchus, Copernicus, Galileo, Da Vinci, Newton, Darwin, Einstein, or Dirac come from in such a world - one in which we rely on AIs to explain things to us? Who will create the new data for the trainers of deep learning models to scrape? Or, to turn that question on its head, where are all the new discoveries and giant conceptual leaps generated by these supposedly intelligent LLMs? With a world of human data to learn from - at the speed of the world's fastest processors - why do they just summarise what we already know? A gambler would call that a 'tell'.

Of course, an AI trained on millions of medical scans or blood tests would be able to help detect cancers and other diseases early, and so help us work towards prevention and cure. Indeed, an AI that can examine vast amounts of data more quickly than human teams could accelerate research in countless areas, and that would be a wonderful thing - if it can be trusted. But that is not the same as reason, intuition, intelligence, and understanding - unless one uses the word 'intelligence' to mean the information itself, rather than the ability to understand and perhaps debate it. So, welcome to the Eloi future. I wonder what a future traveller will make of it.
DeepSeek's emergence disrupts the AI market, challenging industry giants and raising questions about AI's future development and societal impact.
DeepSeek, a newcomer in the AI industry, has made a significant impact on the market, causing a $600 billion slump for NVIDIA and challenging industry leaders like OpenAI 1. The company's low-cost AI model, which bypasses NVIDIA frameworks, has caught the attention of both investors and competitors.
DeepSeek's R1 LLM offers several advantages over existing models:
- It is priced at a fraction of the cost of vendor alternatives.
- It is built on reinforcement learning rather than labeled data, avoiding data-tagging bottlenecks.
- Its model weights have been openly disclosed, unlike black-box competitors.
- It performs strongly on mathematical and computational tasks.
These features have made DeepSeek an attractive option for large-scale automation and chatbot creation 1.
OpenAI has responded to DeepSeek's challenge on two fronts:
- Legal, by challenging DeepSeek over how its models were trained.
- Competitive, by launching its o3-mini model.
However, concerns about data protection and information freedom have arisen, as DeepSeek's data is housed in China under non-EU regulations 1.
The sudden mainstream adoption of generative AI tools like ChatGPT has transformed the information technology industry. This rapid advancement can be attributed to several factors 2:
- The transformer and self-attention breakthrough, which let models train on web-scale data.
- Specialized AI chips and massive data centers that made such training practical.
- Free, easy access to ChatGPT, followed by low-cost APIs for developers.
- Open-source models and platforms that broadened who could build with AI.
- Tangible real-world usefulness, which drove consumer uptake, business investment, and a light-touch regulatory environment.
As AI tools become more prevalent, there are concerns about their impact on human cognition and critical thinking. A study by researchers from Carnegie Mellon University and Microsoft Research found that higher confidence in generative AI is associated with reduced critical thinking among knowledge workers 3.
Despite the advancements, AI systems still face challenges:
- Hallucination, where models fabricate plausible-sounding but false information.
- Unvetted and incomplete training data, along with copyright disputes over how that data was obtained.
- Data protection and regulatory concerns, and the risk of eroding users' critical thinking.
A recent example involves OpenAI's Deep Research tool, which was found to generate false information when lacking relevant data 3.
As AI continues to evolve, it is crucial to address these challenges and ensure that human critical thinking and verification processes remain an integral part of AI utilization.