Curated by THEOUTPOST
On Mon, 10 Feb, 4:01 PM UTC
7 Sources
[1]
Meta Appears to Have Invented a Device Allowing You to Type With Your Brain
Mark Zuckerberg's Meta says it's created a device that lets you produce text simply by thinking what you want to say. As detailed in a pair of studies released by Meta last week, researchers used a state-of-the-art brain scanner and a deep learning AI model to interpret the neural signals of people while they typed, guessing what keys they were hitting with an accuracy high enough to allow them to reconstruct entire sentences.

"As we've seen time and again, deep neural networks can uncover remarkable insights when paired with robust data," Forest Neurotech founder Sumner Norman, who wasn't involved in the research, told MIT Technology Review of the work.

But, as Tech Review cautions, don't expect the potentially game-changing gizmo to hit the market -- ever. That's because the platform is built on a prohibitively large and expensive magnetoencephalography scanner, which detects the magnetic signals in your brain. The upside is that it can peer into your mind without having to place a device, like a brain-computer interface, inside your skull -- an invasive approach favored by other mind-typing techniques. But the downside is that it's as unwieldy as an MRI machine, weighing about half a ton and costing $2 million. Not only that, but the scanner can only work in a shielded room that dampens the Earth's magnetic field, which would otherwise drown out the brain's far fainter signals. And while it's being used, the subject can't move their head at all, or the signal is kaput. That's a lot of caveats -- too many and too significant to let the device become commercial.

And yet, it's an undeniably impressive achievement, and Meta thinks it can use what it's learned here to give it a leg up in the development of other AI models. "Trying to understand the precise architecture or principles of the human brain could be a way to inform the development of machine intelligence," Jean-Rémi King, leader of Meta's Brain & AI team, told Tech Review. "That's the path."

According to the researchers' findings, the system is able to correctly detect what keys a "skilled" typist hit as much as 80 percent of the time. That's not flawless, but it's accurate enough to reconstruct full sentences from brain signals, the researchers said. This is facilitated through Meta's deep-learning system called Brain2Qwerty, which learns what keys a user is pressing after observing them type several thousand characters. With an average error rate of 32 percent, it's far from perfect. But Meta says this is the most accurate brain-typing system that uses a full keyboard and reads brain signals from outside the skull, per MIT Technology Review. One of the most promising competing approaches boasts an accuracy rate as high as 99 percent, but it relies on placing a neural implant directly on the brain.

Still, Meta's system isn't likely to provide a direct pathway to practical applications of the tech. The researchers are stoked, though, that what they've found seems to confirm the theory that our mind forms language signals hierarchically, which could be a boon for AI research. "Language has become a foundation of AI," King told MIT Technology Review. "So the computational principles that allow the brain, or any system, to acquire such ability is the key motivation behind this work."
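The accuracy figures quoted above (up to 80 percent of characters correct, a 32 percent average error rate) are typically reported as a character error rate: the edit distance between the decoded text and what the participant actually typed, divided by the length of the reference text. As a rough, hypothetical illustration of that arithmetic (this is not Meta's code, and the example strings are invented), a minimal Python sketch:

```python
# Minimal illustration (not Meta's code): character error rate (CER)
# computed as Levenshtein edit distance divided by reference length.
def levenshtein(a: str, b: str) -> int:
    """Edit distance between strings a and b (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def character_error_rate(reference: str, decoded: str) -> float:
    return levenshtein(reference, decoded) / max(len(reference), 1)

# Hypothetical example: a decoded sentence with a few wrong characters.
ref = "the quick brown fox"
hyp = "the quikc briwn fox"
print(f"CER = {character_error_rate(ref, hyp):.2f}")  # ~0.16, i.e. roughly 84% of characters correct
```

On this measure, a 32 percent average error rate means roughly one in three characters would need to be inserted, deleted, or substituted to recover the intended sentence.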
[2]
Meta's breakthrough: AI can decode up to 80% of typed characters from brain signals, & reveal how thoughts become words
TL;DR: Meta announced breakthroughs in AI research, developing a model that converts brain signals to text and exploring how thoughts become words. The model decodes up to 80% of typed characters using noninvasive methods, aiding brain-computer interfaces.

Meta recently announced two new breakthroughs from its global research labs. The first highlights that Meta successfully developed a model that can convert brain signals into text. The second reveals how the brain transforms thoughts into words, offering new insights into language processing and AI development.

The first paper describes an AI model that can decode up to 80% of typed characters solely from brain activity. The model utilizes noninvasive MEG and EEG recordings, sparing the need for surgical procedures. It presents a few potential benefits, for example, the ability to establish brain-computer interfaces for those who have lost speech and clinical use for brain injury patients.

While the advancement is impressive, researchers are still encountering challenges with accuracy. While the AI can decode up to 80% of characters, errors still occur, making full, reliable communication difficult. It also requires stillness and a magnetically shielded room, which poses issues for practicality. So don't worry -- you won't need to build a Magneto-style thought-shielding helmet any time soon. It's also unclear how well this would work for patients with brain injuries and disorders.

The second paper delves into how thoughts turn into words at a neural level. The findings indicate that the brain processes language in a structured, layered sequence and identify a 'dynamic neural code' that links successive thoughts. In other words, the brain doesn't just process one word at a time -- it continuously holds multiple layers of information, seamlessly transitioning from abstract thoughts to structured sentences while maintaining coherence.

The implications of this advancement relate more to cognitive neuroscience, providing new insights into how we think and how we could improve AI-powered speech assistance. Meta has come a long way since it first announced its 'typing-by-brain' project in 2017 (it was still called Facebook back then). The advancements are currently confined to clinical settings, but this is a step towards seeing them in real-world applications.
[3]
Meta might've done something useful, pioneering an AI model that can interpret brain activity into sentences with 80% accuracy
Depending on what areas of the internet you frequent, perhaps you were under the illusion that thoughts-to-text technology already existed; we all have that one mutual or online friend that we gently hope will perhaps one day post slightly less. Well, recently Meta has announced that a number of its research projects are coming together to form something that might even improve real people's lives -- one day. Maybe!

Way back in 2017, Meta (at that time just called 'Facebook') talked a big game about "typing by brain." Fast forward to now, and Meta has shared news of two breakthroughs that make those earlier claims seem more substantial than a big sci-fi thought bubble (via MIT Technology Review).

Firstly, Meta announced research that has created an AI model which "successfully decodes the production of sentences from non-invasive brain recordings, accurately decoding up to 80% of characters, and thus often reconstructing full sentences solely from brain signals." The second study Meta shared then examines how AI can facilitate a better understanding of how our brains slot the Lego bricks of language into place. For people who have lost the ability to speak after traumatic brain injuries, or who otherwise have complex communication needs, all of this scientific research could be genuinely life-changing.

Unfortunately, this is where I burst the bubble: the 'non-invasive' device Meta used to record brain signals so that they could be decoded into text is huge, costs $2 million, and makes you look a bit like Megamind. Dated reference to an animated superhero flick for children aside, Meta has been all about brain-computer interfaces for years. More recently it has even demonstrated a welcome amount of caution when it comes to the intersection of hard and 'wet' ware.

This time, the Meta Fundamental Artificial Intelligence Research (FAIR) lab collaborated with the Basque Center on Cognition, Brain and Language to record the brain signals of 35 healthy volunteers as they typed. Those brain signals were recorded using the aforementioned hefty headgear -- specifically a MEG scanner -- and then interpreted by a specially trained deep neural network. Meta wrote, "On new sentences, our AI model decodes up to 80% of the characters typed by the participants recorded with MEG, at least twice better than what can be obtained with the classic EEG system." This essentially means that recording the magnetic fields produced by the electrical currents within the participants' brains resulted in data the AI could interpret more accurately than a recording of the electrical activity itself via an EEG.

However, by Meta's own admission, this does not leave the research in the most practical of places. For one, MEG scanners are far from helmets you can just pop on and off -- they're specialised pieces of equipment that require patients to sit still in a shielded room. Besides that, this study used a comparatively tiny sample of participants, none of whom had a known traumatic brain injury or speech difficulties. This means it's yet to be seen just how well Meta's AI model can interpret for those who really need it.

Still, as a dropout linguist myself, I'm intrigued by Meta's findings when it comes to how we string sentences together in the first place. Meta begins by explaining, "Studying the brain during speech has always proved extremely challenging for neuroscience, in part because of a simple technical problem: moving the mouth and tongue heavily corrupts neuroimaging signals."

In light of this practical reality, typing instead of speaking is kind of genius. So, what did Meta find? It's exactly like I said before: linguistic Lego bricks, baby. Okay, that's an oversimplification, so I'll quote Meta directly once more: "Our study shows that the brain generates a sequence of representations that start from the most abstract level of representations -- the meaning of a sentence -- and progressively transform them into a myriad of actions, such as the actual finger movement on the keyboard [...] Our results show that the brain uses a 'dynamic neural code' -- a special neural mechanism that chains successive representations while maintaining each of them over long time periods."

To put it another way, your brain starts with vibes, unearths meaning, daisy-chains those Lego bricks together, then transforms the thought into the action of typing... yeah, I would love to see the AI try to interpret the magnetic fields that led to that sentence too.
[4]
Meta develops 'hat' for typing text by thinking -- uses AI to read brain signals for keypresses
Meta CEO Mark Zuckerberg has previously said that the company is working on a system that will allow you to type directly via your brain. According to MIT Technology Review, the tech giant has actually managed to create this technology, with the system capable of accurately determining what key the user was 'pressing' about 80% of the time. This probably isn't an impressive number to skilled typists, but we must remember that this machine reads your brain signals externally -- no implantation or invasive procedure required -- which is a feat in and of itself.

However, don't think this is a comfortable hat that one could just wear anywhere, day to day. Instead, it's a massive and expensive machine that needs to be used in isolation to work effectively. Forest Neurotech founder Sumner Norman likens it to "an MRI machine tipped on its side and suspended above the user's head," with one device estimated to cost $2,000,000. Aside from that, the magnetoencephalography scanner, which reads the magnetic signals that your neurons generate when they fire, can only be used in a shielded room. That's because the Earth's magnetic field, which is several orders of magnitude stronger than the one in your head, will interfere with the reading. The machine also loses the signal when the subject moves their head, making it impractical to use in everyday settings.

Meta Brain & AI Research Team Head Jean-Rémi King says that the research isn't geared towards making a marketable device. "Our effort is not at all toward products," says King. "In fact, my message is always to say I don't think there is a path for products because it's too difficult."

Nevertheless, the research returned meaningful results, as it revealed how the brain produces language information. Meta's team determined that our neurons first generate a signal for a thought or sentence, which then creates subsequent signals for words, syllables, and, lastly, letters. The researchers were then able to see how these different levels interact with each other as a system for written communication. The company could then study how this works and use it to inform the training of artificial intelligence.

"Trying to understand the precise architecture or principles of the human brain could be a way to inform the development of machine intelligence," said King. He adds, "Language has become a foundation of AI. So, the computational principles that allow the brain, or any system, to acquire such ability is the key motivation behind this work."
[5]
Meta and researchers unveil AI models that convert brain activity into text with unmatched accuracy
What just happened? Working with international researchers, Meta has announced major milestones in understanding human intelligence through two groundbreaking studies. Namely, they have created AI models that can read and interpret brain signals to reconstruct typed sentences and map the precise neural processes that transform thoughts into spoken or written words.

The first of the studies, carried out by Meta's Fundamental Artificial Intelligence Research (FAIR) lab in Paris in collaboration with the Basque Center on Cognition, Brain and Language in San Sebastian, Spain, demonstrates the ability to decode the production of sentences from non-invasive brain recordings. Using magnetoencephalography (MEG) and electroencephalography (EEG), researchers recorded brain activity from 35 healthy volunteers as they typed sentences.

The decoding system, called Brain2Qwerty, employs a three-part architecture: a convolutional module that extracts features from the brain recordings, a transformer module that models the sequence of signals, and a pre-trained language model that corrects the characters produced by the transformer.

The results are impressive: the AI model can decode up to 80 percent of characters typed by participants whose brain activity was recorded with MEG, which is at least twice as effective as traditional EEG systems. This research opens up new possibilities for non-invasive brain-computer interfaces that could help restore communication for individuals who have lost the ability to speak.

The second study focuses on understanding how the brain transforms thoughts into language. By using AI to interpret MEG signals while participants typed sentences, researchers were able to pinpoint the precise moments when thoughts are converted into words, syllables, and individual letters. This research reveals that the brain generates a sequence of representations, starting from the most abstract level (the meaning of a sentence) and progressively transforming them into specific actions, such as finger movements on a keyboard. The study also demonstrates that the brain uses a 'dynamic neural code' to chain successive representations while maintaining each of them over extended periods.

While the technology shows promise, several challenges remain before it can be applied in clinical settings. Decoding performance is still imperfect, and MEG requires subjects to remain still inside a magnetically shielded room. The MEG scanner itself is large and expensive, and it must be operated in that shielded room because the Earth's magnetic field is a trillion times stronger than the fields produced by the brain.

Meta plans to address these limitations in future research by improving the accuracy and reliability of the decoding process, exploring alternative non-invasive brain imaging techniques that are more practical for everyday use, and developing more sophisticated AI models that can better interpret complex brain signals. The company also aims to expand its research to include a wider range of cognitive processes and explore potential applications in fields such as healthcare, education, and human-computer interaction. While further research is needed before these developments can help people with brain injuries, they bring us closer to building AI systems that can learn and reason more like humans.
[6]
Meta can turn your thoughts into words typed on a screen if you don't mind lugging a machine the size of a room around
The machine is a half-ton, costs $2 million, needs a shielded room, and even slight head movements disrupt the signal.

Meta is showing off a machine capable of turning your thoughts into words typed on a screen, but don't expect to write your Instagram captions telepathically any time soon. The device weighs about half a ton, costs $2 million, and is about as portable as a refrigerator. So, unless you were planning to lug around a lab-grade magnetoencephalography (MEG) scanner, you won't be sending mind texts anytime soon. And that's before even considering how you can't even slightly move your head when using it.

Still, what Meta has done is impressive. Its AI and neuroscience teams have trained a system that can analyze brain activity and determine what keys someone is pressing -- purely based on thought. There are no implanted electrodes, no sci-fi headbands, just a deep neural network deciphering brainwaves from the outside. The research, detailed in a pair of newly released papers, reveals that the system is up to 80% accurate at identifying letters from brain activity, allowing it to reconstruct complete sentences from a typist's thoughts.

While typing out phrases, a volunteer sits inside a MEG scanner, which looks a bit like a giant hair dryer. The scanner picks up magnetic signals from neurons firing in the brain, and an AI model, aptly named Brain2Qwerty, gets to work learning which signals correspond to which keys. After enough training, it can predict the letters a person is typing. The results weren't perfect, but accuracy could reach levels of up to 80%.

Telepathic typing has some real limits for now. The scanner needs to be in a specially shielded room to block out Earth's magnetic field, which is a trillion times stronger than what's in your head. Plus, the slightest head tilt scrambles the signal.

But there's more to it than just another Meta-branded product. The research could really boost brain science and, eventually, medical care for brain injuries and illnesses. "To explore how the brain transforms thoughts into intricate sequences of motor actions, we used AI to help interpret the MEG signals while participants typed sentences. By taking 1,000 snapshots of the brain every second, we can pinpoint the precise moment where thoughts are turned into words, syllables, and even individual letters," Meta explained in a blog post. "Our study shows that the brain generates a sequence of representations that start from the most abstract level of representations -- the meaning of a sentence -- and progressively transform them into a myriad of actions, such as the actual finger movement on the keyboard."

Despite its limitations, the non-invasive aspect of Meta's research makes for a much less scary approach than cramming a computer chip right into your brain, as companies like Neuralink are testing. Most people wouldn't sign up for elective brain surgery. Even though a product isn't the stated goal of the research, history demonstrates that giant, lab-bound machines don't have to stay that way. A tiny smartphone does what a building-size computer couldn't in the 1950s. Perhaps today's brain scanner is tomorrow's wearable.
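Meta's blog quote above mentions taking 1,000 snapshots of the brain every second while participants typed. For readers curious what the data handling around that could look like, here is a rough, hypothetical sketch (not Meta's pipeline: the sampling rate matches the quote, but the channel count and window length are invented) of slicing a continuous multichannel MEG recording into one fixed-length window per keypress, the kind of labelled example a decoder like Brain2Qwerty could be trained on:

```python
import numpy as np

# Rough sketch, not Meta's pipeline. Assumed numbers (1,000 Hz sampling,
# 300 sensor channels, a 500 ms window around each keypress) are illustrative only.
SAMPLE_RATE_HZ = 1000
WINDOW_MS = 500

def epoch_keypresses(meg: np.ndarray, keypress_times_s: list[float]) -> np.ndarray:
    """Cut a continuous (channels, samples) MEG recording into one
    fixed-length window per keypress, centred on the press time."""
    half = int(SAMPLE_RATE_HZ * WINDOW_MS / 1000) // 2
    epochs = []
    for t in keypress_times_s:
        centre = int(t * SAMPLE_RATE_HZ)
        if centre - half >= 0 and centre + half <= meg.shape[1]:
            epochs.append(meg[:, centre - half:centre + half])
    return np.stack(epochs)  # (n_keypresses, channels, window_samples)

# Toy usage: 10 seconds of fake data from 300 channels, three keypresses.
recording = np.random.randn(300, 10 * SAMPLE_RATE_HZ)
windows = epoch_keypresses(recording, [1.2, 1.9, 2.5])
print(windows.shape)  # (3, 300, 500)
```

Each resulting window can then be paired with the key that was actually pressed, giving the model its training examples.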
[7]
Meta's New Research Begins Decoding Thoughts from Brain Using AI
The study involved 35 healthy volunteers who typed memorised sentences while their brain activity was recorded.

Meta has been making impressive strides in the AI space, recently surpassing its earnings estimates along with its plan to invest $65 billion to build a 2GW+ data centre. Now, it has showcased progress in using AI to decode language from the brain to help people with brain injuries who have lost their ability to communicate.

Meta collaborated with the Basque Center on Cognition, Brain and Language (BCBL), a leading research centre in San Sebastián, Spain, to study how AI can help advance our understanding of human intelligence. The goal is to achieve advanced machine intelligence (AMI). During the announcement of the new research, Meta said, "We're sharing research that successfully decodes the production of sentences from non-invasive brain recordings, accurately decoding up to 80% of characters, and thus often reconstructing full sentences solely from brain signals."

The research was led by Jarod Levy, Mingfang (Lucy) Zhang, Svetlana Pinet, Jérémy Rapin, Hubert Jacob Banville, Stéphane d'Ascoli, and Jean-Rémi King from Meta. The study involved 35 healthy volunteers who typed memorised sentences while their brain activity was recorded. They were seated in front of a screen with a custom keyboard on a stable platform and were asked to type what they saw on the screen without using backspace.

According to the research paper, a new deep learning model, Brain2Qwerty, was designed to decode text from non-invasive brain recordings such as electroencephalography (EEG) and magnetoencephalography (MEG). The model uses a three-stage deep learning architecture: a convolutional module to process brain signals, a transformer module, and a pre-trained language model to correct the transformer's output. While it remains unconfirmed whether this work falls under Meta's Frontier AI Framework, it is possible that future studies could incorporate it.

Even with the advancements in the AI model, invasive methods remain the gold standard for recording brain signals. However, these tests are a significant step towards bridging the gap between non-invasive and invasive techniques. Meanwhile, Jean-Rémi King, brain and AI tech lead, said, "The model achieves down to a ~20% character-error-rate on the best individuals. Not quite a usable product for everyday communication...but it's a huge improvement over current EEG-based approaches."

"We believe that this approach offers a promising path to restore communication in brain-lesioned patients...without requiring them to get electrodes implanted inside," King added. Meta also announced a $2.2 million donation to the Rothschild Foundation Hospital to support the neuroscience community's collaborative work.

While this is not something we can use or benefit from at the moment, the insights from Meta's new research sound promising about how AI can make a difference in the neuroscience field.
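The three-stage design described in the paper (a convolutional module over the brain signals, a transformer module, and a pre-trained language model that corrects the output) can be pictured with a short PyTorch sketch. This is only an illustration of the general shape of such a model: every layer size, kernel width, and channel count below is an assumption rather than a value from Meta's paper, and the language-model correction stage is only indicated in a comment.

```python
import torch
import torch.nn as nn

# Sketch of a Brain2Qwerty-like decoder, based only on the high-level description
# above (convolutional module -> transformer module -> external language-model
# correction). All sizes and hyperparameters are assumptions, not Meta's values.
class BrainToCharModel(nn.Module):
    def __init__(self, n_channels: int = 300, n_chars: int = 30, d_model: int = 256):
        super().__init__()
        # Convolutional module: extract local features from a multichannel MEG/EEG window.
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=7, stride=2, padding=3),
            nn.GELU(),
            nn.Conv1d(d_model, d_model, kernel_size=7, stride=2, padding=3),
            nn.GELU(),
        )
        # Transformer module: model dependencies across the time axis of the window.
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=4)
        # Per-window character prediction head.
        self.head = nn.Linear(d_model, n_chars)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) window of brain signal around one keypress
        feats = self.conv(x).transpose(1, 2)   # (batch, time', d_model)
        feats = self.transformer(feats)
        return self.head(feats.mean(dim=1))    # (batch, n_chars) character logits

# The paper's third stage, a pre-trained language model that corrects the character
# sequence produced by the transformer, would run on top of these per-window logits.
logits = BrainToCharModel()(torch.randn(2, 300, 500))
print(logits.shape)  # torch.Size([2, 30])
```

In a full system, the pre-trained language model would then take the character sequence predicted across windows and correct implausible spellings, as the paper describes.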
Meta researchers have developed an AI model that can convert brain activity into text with unprecedented accuracy for a non-invasive method, potentially advancing brain-computer interfaces and AI development.
Meta, in collaboration with international researchers, has unveiled an AI model capable of decoding brain signals into text with unprecedented accuracy for a non-invasive approach. This breakthrough, announced in two recent studies, marks a significant step forward in brain-computer interfaces and our understanding of human cognition [1][2].
At the heart of this innovation is Meta's deep-learning system called Brain2Qwerty. This AI model can interpret brain signals from individuals as they type, accurately predicting up to 80% of the characters being typed [1][3]. The system utilizes a state-of-the-art magnetoencephalography (MEG) scanner to detect the magnetic signals in the brain, offering a non-invasive approach to brain signal interpretation [1][4].
The AI model employs a three-part architecture: a convolutional module that processes the raw brain recordings, a transformer module that models the sequence of signals, and a pre-trained language model that corrects the transformer's output.
This process allows the system to reconstruct entire sentences solely from brain signals, offering potential applications in assistive technologies for those with communication difficulties [2][3].
Beyond its practical applications, this research has provided valuable insights into how the brain processes language. The studies reveal that the brain generates a sequence of representations, starting from abstract concepts and progressively transforming them into specific actions like typing [4][5]. This "dynamic neural code" chains successive representations while maintaining each over extended periods [3][5].
Despite its impressive capabilities, the current system faces several limitations: the MEG scanner weighs about half a ton, costs roughly $2 million, and only works inside a magnetically shielded room; subjects must keep their heads still while being recorded; decoding remains imperfect, with an average character error rate of around 32%; and the approach has so far only been tested on a small group of healthy volunteers.
Meta researchers emphasize that this technology is not currently aimed at commercial products. Instead, they view it as a stepping stone towards better understanding human cognition and improving AI systems [4].
Jean-Rémi King, leader of Meta's Brain & AI team, suggests that understanding the brain's architecture could inform the development of more advanced machine intelligence [1][4]. This research could potentially lead to AI systems that learn and reason more like humans, with applications spanning healthcare, education, and human-computer interaction [5].
As Meta continues to refine this technology, the future may hold more practical, non-invasive brain-computer interfaces and AI models that more closely mimic human cognitive processes. While challenges remain, this breakthrough represents a significant leap forward in our ability to bridge the gap between human thought and machine interpretation.