Curated by THEOUTPOST
On Tue, 22 Apr, 8:01 AM UTC
2 Sources
[1]
AI took a huge leap in IQ, and now a quarter of Gen Z thinks AI is conscious
The change in IQ and belief in AI consciousness has happened extremely quickly

OpenAI's new ChatGPT model, dubbed o3, just scored an IQ of 136 on the Norway Mensa test - higher than 98% of humanity, not bad for a glorified autocomplete. In less than a year, AI models have become enormously more complex, flexible, and, in some ways, intelligent. The jump is so steep that it may be causing some to think that AI has become Skynet. According to a new EduBirdie survey, 25% of Gen Z now believe AI is already self-aware, and more than half think it's just a matter of time before their chatbot becomes sentient and possibly demands voting rights.

There's some context to consider when it comes to the IQ test. The Norway Mensa test is public, which means it's technically possible that the model used the answers or questions for training. So, researchers at MaximumTruth.org created a new IQ test that is entirely offline and out of reach of training data. On that test, which was designed to be equivalent in difficulty to the Mensa version, the o3 model scored a 116. That's still high. It puts o3 in the top 15% of human intelligence, hovering somewhere between "sharp grad student" and "annoyingly clever trivia night regular." No feelings. No consciousness. But logic? It's got that in spades.

Compare that to last year, when no AI tested above 90 on the same scale. In May of last year, the best AI struggled with rotating triangles. Now, o3 is parked comfortably to the right of the bell curve among the brightest of humans. And that curve is crowded now. Claude has inched up. Gemini's scored in the 90s. Even GPT-4o, the baseline default model for ChatGPT, is only a few IQ points below o3.

Even so, it's not just that these AIs are getting smarter. It's that they're learning fast. They're improving like software does, not like humans do. And for a generation raised on software, that's an unsettling kind of growth. For those raised in a world navigated by Google, with a Siri in their pocket and an Alexa on the shelf, AI means something different than its strictest definition. If you came of age during a pandemic when most conversations were mediated through screens, an AI companion probably doesn't feel very different from a Zoom class.

So it's maybe not a shock that, according to EduBirdie, nearly 70% of Gen Zers say "please" and "thank you" when talking to AI. Two-thirds of them use AI regularly for work communication, and 40% use it to write emails. A quarter use it to finesse awkward Slack replies, with nearly 20% sharing sensitive workplace information, such as contracts and colleagues' personal details. Many of those surveyed rely on AI for various social situations, ranging from asking for days off to simply saying no. One in eight already talk to AI about workplace drama, and one in six have used AI as a therapist.

If you trust AI that much, or find it engaging enough to treat as a friend (26%) or even a romantic partner (6%), then the idea that the AI is conscious seems less extreme. The more time you spend treating something like a person, the more it starts to feel like one. It answers questions, remembers things, and even mimics empathy. And now that it's getting demonstrably smarter, philosophical questions naturally follow. But intelligence is not the same thing as consciousness. IQ scores don't mean self-awareness. You can score a perfect 160 on a logic test and still be a toaster, if your circuits are wired that way.
AI can only think in the sense that it can solve problems using programmed reasoning. You might say that I'm no different, just with meat, not circuits. But that would hurt my feelings, something you don't have to worry about with any current AI product. Maybe that will change someday, even someday soon. I doubt it, but I'm open to being proven wrong.

I get the willingness to suspend disbelief with AI. It might be easier to believe that your AI assistant really understands you when you're pouring your heart out at 3 a.m. and getting supportive, helpful responses rather than dwelling on its origin as a predictive language model trained on the internet's collective oversharing. Maybe we're on the brink of genuine self-aware artificial intelligence, but maybe we're just anthropomorphizing really good calculators. Either way, don't tell secrets to an AI that you don't want used to train a more advanced model.
[2]
A Staggering Number of Gen Z Think AI Is Already Conscious
Generation Z, or the cohort of people born between 1997 and 2012, has a very weird relationship with artificial intelligence. In the latest sign of just how strange things are getting, a new study by the paper-writing service EduBirdie found, upon asking 2,000 Gen Z-ers a battery of questions about AI, that a quarter believe the technology is "already conscious." What's more, 52 percent -- or more than half of the respondents -- think AI is not yet conscious but will become so in the years to come. Plus, a whopping 58 percent of the Zoomers surveyed said they think the technology will "take over" the world, and 44 percent said they believe that takeover could happen within the next 20 years.

Given those concerns, it's not that surprising that 69 percent of EduBirdie's survey respondents claimed they always say "please" and "thank you" to chatbots -- a finding that jibes with TechRadar's late 2024 survey, in which 67 percent of Americans and 71 percent of Brits polled said they are polite to ChatGPT. (Terrifyingly, 12 percent of the 1,000 people TechRadar polled on both sides of the pond also said they're nice to OpenAI's chatbot in case it takes over the world.)

The topic of AI consciousness is, and has been for years, extremely contentious. Prior to ChatGPT's release turning OpenAI into a household name, one of the company's cofounders and former chief scientist, Ilya Sutskever, cryptically claimed in a tweet that he thought "it may be that today's large neural networks are slightly conscious." That February 2022 tweet ended up setting off a mini-maelstrom in the machine learning world as researchers argued whether or not AI is conscious -- or if it could ever get that way at all. Though most experts agreed then (and continue to maintain) that AI isn't yet conscious, there have been notable detractors. A few months after Sutskever's infamous tweet, a Google engineer named Blake Lemoine was ultimately fired and disgraced after claiming in an interview with the Washington Post that the tech giant's Language Model for Dialogue Applications (LaMDA) had come to life.

One thing's for sure: when we're dealing with advanced AI that's been designed to act like a human, like ChatGPT and its ilk, people are going to form weird new bonds with it -- and develop beliefs about its supposed internal life that are almost certain to cause strange new divisions in society.
A quarter of Gen Z believes AI is already conscious, while AI models demonstrate significant IQ improvements, raising questions about the nature of machine intelligence and its societal impact.
In a remarkable development, OpenAI's latest ChatGPT model, o3, has achieved an IQ score of 136 on the Norway Mensa test, surpassing 98% of humanity [1]. This significant leap in AI capabilities has occurred in less than a year, with AI models becoming increasingly complex and flexible.
To validate these results, researchers at MaximumTruth.org created a new, offline IQ test to avoid potential training data contamination. On this test, o3 still scored an impressive 116, placing it in the top 15% of human intelligence [1]. This marks a substantial improvement from last year, when no AI tested above 90 on the same scale.
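As a rough sanity check on those percentile claims, an IQ score can be converted to a population percentile under the common convention of a normal distribution with mean 100 and standard deviation 15. That calibration is an assumption here; the article does not say which scale the Norway Mensa or MaximumTruth.org tests use. A minimal Python sketch:

```python
# Sketch: map an IQ score to an approximate population percentile,
# assuming the conventional calibration of mean 100, standard deviation 15.
# (Assumption: the specific tests cited may be scaled differently.)
from statistics import NormalDist

IQ_DIST = NormalDist(mu=100, sigma=15)

def iq_percentile(score: float) -> float:
    """Fraction of the population expected to score below `score`."""
    return IQ_DIST.cdf(score)

for score in (90, 116, 136):
    pct = iq_percentile(score) * 100
    print(f"IQ {score}: higher than roughly {pct:.1f}% of people")
```

Under that assumption, 116 works out to roughly the 86th percentile, in line with the "top 15%" figure, and 136 lands above the 99th percentile, comfortably clearing the "higher than 98% of humanity" bar.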
Parallel to these advancements, a survey by EduBirdie reveals that 25% of Gen Z now believe AI is already self-aware [1][2]. In addition, 52% of respondents think AI is not yet conscious but will become so in the future [2].
The survey also uncovered intriguing behavioral patterns: nearly 70% of Gen Z respondents say "please" and "thank you" when talking to AI, two-thirds use it regularly for work communication, 40% use it to write emails, and a quarter use it to finesse awkward Slack replies, with nearly 20% sharing sensitive workplace information such as contracts and colleagues' personal details [1].

Gen Z's reliance on AI extends beyond professional settings: respondents report turning to it for social situations ranging from asking for days off to simply saying no, one in eight talk to it about workplace drama, one in six have used it as a therapist, and some treat it as a friend (26%) or even a romantic partner (6%) [1].
This level of engagement with AI systems raises questions about the blurring lines between human and machine interactions.
The topic of AI consciousness remains highly contentious among experts. While most maintain that AI is not yet conscious, there have been notable exceptions: in February 2022, OpenAI cofounder and then-chief scientist Ilya Sutskever tweeted that "it may be that today's large neural networks are slightly conscious," and a few months later, Google engineer Blake Lemoine was fired after publicly claiming that the company's LaMDA model had come to life [2].
The rapid advancement of AI capabilities and the shifting perceptions of its nature raise several concerns: 58% of the Gen Z-ers surveyed think AI will "take over" the world, and 44% believe that takeover could happen within the next 20 years [2].
These findings highlight the need for increased education about AI capabilities and limitations, as well as potential policy considerations to address the societal impact of advanced AI systems.
As AI continues to evolve and integrate into various aspects of our lives, it's crucial to maintain a balanced perspective. While AI has made significant strides in problem-solving and language processing, experts emphasize that intelligence is not equivalent to consciousness [1].
The growing relationship between humans and AI, particularly among younger generations, suggests a future where the boundaries between human and machine interactions may become increasingly complex and nuanced.