Curated by THEOUTPOST
On Thu, 10 Oct, 12:08 AM UTC
5 Sources
[1]
'Electronic tongue' could revolutionize food safety - Earth.com
Ever wondered how an electronic tongue powered by artificial intelligence (AI) could enhance our ability to distinguish tastes? Researchers have recently unveiled this innovative technology, which identifies subtle differences in liquids. A team at Penn State led the research, demonstrating how AI could redefine how we perceive and analyze flavors. The electronic tongue is capable of identifying differences in liquids, such as milk with varying water content, diverse sodas, coffee blends, and even signs of spoilage in fruit juices. "We're trying to make an artificial tongue, but the process of how we experience different foods involves more than just the tongue," said Saptarshi Das, professor of engineering science and mechanics. Professor Das explained how the electronic tongue mimics the biological processes involved in taste, which go beyond the basic five taste categories. To artificially imitate the gustatory cortex, the researchers developed a neural network, a machine learning algorithm that mimics the human brain. Study co-author Harikrishnan Ravichandran is a doctoral student in engineering science and mechanics. "In this work, we're considering several chemicals to see if the sensors can accurately detect them, and furthermore, whether they can detect minute differences between similar foods and discern instances of food safety concerns," said Ravichandran. The experts noted that ion-sensitive field-effect transistors (ISFETs) have emerged as indispensable tools in chemosensing applications. "ISFETs operate by converting changes in the composition of chemical solutions into electrical signals, making them ideal for environmental monitoring, healthcare diagnostics, and industrial process control," wrote the researchers. "Recent advancements in ISFET technology, including functionalized multiplexed arrays and advanced data analytics, have improved their performance." 
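The ISFET sensing principle described above can be sketched numerically. The following is a minimal, idealized model, not the study's graphene device: a textbook ISFET shifts its threshold voltage roughly linearly with pH, up to the Nernst limit of about 59 mV per pH unit at room temperature.

```python
# Idealized ISFET response model (illustrative only): the threshold
# voltage shifts roughly linearly with pH, up to ~59.2 mV per pH unit
# at 25 C (the Nernst limit). Real graphene ISFETs deviate from this.

NERNST_SLOPE_MV = 59.2  # theoretical maximum sensitivity at 25 C

def threshold_shift_mv(ph: float, ph_ref: float = 7.0,
                       sensitivity: float = NERNST_SLOPE_MV) -> float:
    """Threshold-voltage shift (mV) relative to a reference pH."""
    return sensitivity * (ph - ph_ref)

# A solution three pH units more acidic than the reference produces a
# shift of about -178 mV in this sign convention.
print(threshold_shift_mv(4.0))
```

The point of the model is only that a change in solution chemistry maps to a measurable electrical quantity, which is the signal the neural network then interprets.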
The electronic tongue can broadly detect and classify numerous substances, assessing their quality, authenticity, and freshness with remarkable precision. This comprehensive approach not only holds the potential to revolutionize food safety and production but also extends its applications to medical diagnostics and beyond. According to study lead author Andrew Pannone, the AI reached a near-ideal inference accuracy of more than 95% when utilizing the machine-derived figures of merit. To gain deeper insights into the AI's decision-making process, the team applied Shapley additive explanations, an advanced method grounded in game theory. This technique allowed the researchers to analyze how the AI weighed various factors in its assessments, providing a clearer view into the reasoning behind each decision. "We found that the network looked at more subtle characteristics in the data - things we, as humans, struggle to define properly," explained Professor Das. This highlights how the neural network's holistic approach mitigates variations that might occur from day to day. According to Professor Das, the electronic tongue's capabilities are limited only by the data on which it is trained, suggesting future applications in medical diagnostics and other industries. A significant advantage lies in the sensors' robustness: they do not need to be identical, which makes them more practical and cost-effective to deploy broadly. "We figured out that we can live with imperfection. And that's what nature is - it's full of imperfections, but it can still make robust decisions, just like our electronic tongue."
The electronic tongue's potential applications go beyond basic taste detection. This versatile tool is equipped to handle various industry needs, from food and beverage quality control to medical diagnostics. By examining different chemicals and assessing factors like freshness and authenticity, the electronic tongue demonstrates its adaptability. The researchers believe that this technology's robustness, combined with the neural network's ability to adjust to day-to-day variations, makes the electronic tongue a valuable asset for any industry where precise and rapid substance identification is essential.
[2]
An electronic tongue that detects subtle differences in liquids also provides a view into how AI makes decisions
A recently developed electronic tongue is capable of identifying differences in similar liquids, such as milk with varying water content; diverse products, including soda types and coffee blends; signs of spoilage in fruit juices; and instances of food safety concerns. The team, led by researchers at Penn State, also found that results were even more accurate when artificial intelligence (AI) used its own assessment parameters to interpret the data generated by the electronic tongue. The researchers published their results Oct. 9 in Nature. According to the researchers, the electronic tongue can be useful for food safety and production, as well as for medical diagnostics. The sensor and its AI can broadly detect and classify various substances while collectively assessing their respective quality, authenticity and freshness. This assessment has also provided the researchers with a view into how AI makes decisions, which could lead to better AI development and applications, they said. "We're trying to make an artificial tongue, but the process of how we experience different foods involves more than just the tongue," said corresponding author Saptarshi Das, Ackley Professor of Engineering and professor of engineering science and mechanics. "We have the tongue itself, consisting of taste receptors that interact with food species and send their information to the gustatory cortex -- a biological neural network." The gustatory cortex is the region of the brain that perceives and interprets various tastes beyond what can be sensed by taste receptors, which primarily categorize foods via the five broad categories of sweet, sour, bitter, salty and savory. As the brain learns the nuances of the tastes, it can better differentiate the subtlety of flavors. To artificially imitate the gustatory cortex, the researchers developed a neural network, which is a machine learning algorithm that mimics the human brain in assessing and understanding data. 
"Previously, we investigated how the brain reacts to different tastes and mimicked this process by integrating different 2D materials to develop a kind of blueprint as to how AI can process information more like a human being," said co-author Harikrishnan Ravichandran, a doctoral student in engineering science and mechanics advised by Das. "Now, in this work, we're considering several chemicals to see if the sensors can accurately detect them, and furthermore, whether they can detect minute differences between similar foods and discern instances of food safety concerns." The tongue comprises a graphene-based ion-sensitive field-effect transistor, or a conductive device that can detect chemical ions, linked to an artificial neural network, trained on various datasets. Critically, Das noted, the sensors are non-functionalized, meaning that one sensor can detect different types of chemicals, rather than having a specific sensor dedicated to each potential chemical. The researchers provided the neural network with 20 specific parameters to assess, all of which are related to how a sample liquid interacts with the sensor's electrical properties. Based on these researcher-specified parameters, the AI could accurately detect samples -- including watered-down milks, different types of sodas, blends of coffee and multiple fruit juices at several levels of freshness -- and report on their content with greater than 80% accuracy in about a minute. "After achieving a reasonable accuracy with human-selected parameters, we decided to let the neural network define its own figures of merit by providing it with the raw sensor data. We found that the neural network reached a near ideal inference accuracy of more than 95% when utilizing the machine-derived figures of merit rather than the ones provided by humans," said co-author Andrew Pannone, a doctoral student in engineering science and mechanics advised by Das. 
"So, we used a method called Shapley additive explanations, which allows us to ask the neural network what it was thinking after it makes a decision." This approach uses game theory, a decision-making process that considers the choices of others to predict the outcome of a single participant, to assign values to the data under consideration. With these explanations, the researchers could reverse engineer an understanding of how the neural network weighed various components of the sample to make a final determination -- giving the team a glimpse into the neural network's decision-making process, which has remained largely opaque in the field of AI, according to the researchers. They found that, instead of simply assessing individual human-assigned parameters, the neural network considered the data it determined were most important together, with the Shapley additive explanations revealing how important the neural network considered each input data. The researchers explained that this assessment could be compared to two people drinking milk. They can both identify that it is milk, but one person may think it is skim that has gone off while the other thinks it is 2% that is still fresh. The nuances of why are not easily explained even by the individual making the assessment. "We found that the network looked at more subtle characteristics in the data -- things we, as humans, struggle to define properly," Das said. "And because the neural network considers the sensor characteristics holistically, it mitigates variations that might occur day-to-day. In terms of the milk, the neural network can determine the varying water content of the milk and, in that context, determine if any indicators of degradation are meaningful enough to be considered a food safety issue." 
According to Das, the tongue's capabilities are limited only by the data on which it is trained, meaning that while the focus of this study was on food assessment, it could be applied to medical diagnostics, too. And while sensitivity is important no matter where the sensor is applied, their sensors' robustness provides a path forward for broad deployment in different industries, the researchers said. Das explained that the sensors don't need to be precisely identical because machine learning algorithms can look at all information together and still produce the right answer. This makes for a more practical -- and less expensive -- manufacturing process. "We figured out that we can live with imperfection," Das said. "And that's what nature is -- it's full of imperfections, but it can still make robust decisions, just like our electronic tongue."
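One simple way to see why non-identical sensors can still yield the right answer is that device-to-device differences often amount to an offset and a gain, which pooled, normalized data can absorb. The sketch below is an illustration of that idea with invented numbers, not the study's actual calibration procedure.

```python
import statistics

# Two hypothetical sensors report the same four liquids with different
# gains and offsets; per-sensor standardization (z-scoring) aligns
# their readings so one decision rule can serve both devices.

liquids = [0.2, 0.4, 0.6, 0.8]  # "true" underlying signal per sample

def read(gain, offset):
    return [gain * v + offset for v in liquids]

def zscore(xs):
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

a = read(1.0, 0.00)   # sensor A: nominal
b = read(1.7, 0.35)   # sensor B: different gain and offset

# After standardization the two imperfect sensors agree exactly,
# because a linear distortion leaves z-scores unchanged.
print([round(v, 3) for v in zscore(a)])
print([round(v, 3) for v in zscore(b)])
```

A neural network trained on data from many such imperfect sensors can learn this kind of invariance implicitly, which is the "living with imperfection" Das describes.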
[3]
AI tastebuds are better at identifying what's in food than you
Picking out individual ingredients from a dish can be a fun, if difficult, part of a meal. Professional chefs and food scientists can spend years refining their palates. Now, a robot may be able to join in the activity thanks to the researchers behind a robotic taster that combines AI and an electronic tongue capable of detecting tiny differences in flavor. The Penn State research team has published a paper detailing how the AI 'brain' uses the artificial tongue to detect how much water is in a cup of milk, the mix of beans in a coffee blend, and even incipient rot in fruit juice that would be impossible for a human to spot. Using electronics to identify components in a mixture isn't a new idea; that's how machines measure things like acidity and temperature. But what the researchers have done goes beyond that, using AI to mimic the way your tongue, nose, and brain interpret the taste of things beyond a simple detection of pH balance. Using advanced sensors known as graphene-based ion-sensitive field-effect transistors (ISFETs), the electronic tongue can measure many complex chemicals at the same time instead of needing multiple kinds of sensors, like a thermometer and a pH testing stick. The sensors produce a huge amount of data, which standard analysis might take a while to sort out, and that analysis alone wouldn't tell you much about how watered down the milk is or how freshly squeezed your orange juice is. Instead, the researchers used AI in the form of a neural network that can mimic some of how humans process taste. After teaching the AI how different chemicals affect the electronic tongue's sensors, the neural network could accurately identify different types of soda and the freshness of juice more than 80% of the time. That was just the beginning, however. When the scientists took the metaphorical leash off the AI and let it come up with its own way of analyzing the data, its accuracy shot up to more than 95%, barely ever getting a wrong answer.
The combination of measuring subtle aspects of food plus using AI to judge what they mean is an impressive simulation of how humans taste things. It can also do so when a difference is too subtle for human perception, like if milk isn't bad yet but will be soon. Food tests for purity and freshness are only some of what an accurate AI tongue could do for people. Taste is, at its most basic level, a way of identifying chemicals. That means the AI taster could help in more than just the kitchen. It could theoretically help in industrial factories or in medical diagnostics, spotting biomarkers of disease or changes in your health. These concepts are still in the early discussion phase, but the electronic tongue may be a taste of the future.
[4]
A matter of taste: Electronic tongue reveals AI 'inner thoughts'
The researchers published their results today (Oct. 9) in Nature. According to the researchers, the electronic tongue can be useful for food safety and production, as well as for medical diagnostics. The sensor and its AI can broadly detect and classify various substances while collectively assessing their respective quality, authenticity and freshness. This assessment has also provided the researchers with a view into how AI makes decisions, which could lead to better AI development and applications, they said. "We're trying to make an artificial tongue, but the process of how we experience different foods involves more than just the tongue," said corresponding author Saptarshi Das, Ackley Professor of Engineering and professor of engineering science and mechanics. "We have the tongue itself, consisting of taste receptors that interact with food species and send their information to the gustatory cortex -- a biological neural network." The gustatory cortex is the region of the brain that perceives and interprets various tastes beyond what can be sensed by taste receptors, which primarily categorize foods via the five broad categories of sweet, sour, bitter, salty and savory. As the brain learns the nuances of the tastes, it can better differentiate the subtlety of flavors. To artificially imitate the gustatory cortex, the researchers developed a neural network, which is a machine learning algorithm that mimics the human brain in assessing and understanding data. "Previously, we investigated how the brain reacts to different tastes and mimicked this process by integrating different 2D materials to develop a kind of blueprint as to how AI can process information more like a human being," said co-author Harikrishnan Ravichandran, a doctoral student in engineering science and mechanics advised by Das. 
"Now, in this work, we're considering several chemicals to see if the sensors can accurately detect them, and furthermore, whether they can detect minute differences between similar foods and discern instances of food safety concerns." The tongue comprises a graphene-based ion-sensitive field-effect transistor, or a conductive device that can detect chemical ions, linked to an artificial neural network, trained on various datasets. Critically, Das noted, the sensors are non-functionalized, meaning that one sensor can detect different types of chemicals, rather than having a specific sensor dedicated to each potential chemical. The researchers provided the neural network with 20 specific parameters to assess, all of which are related to how a sample liquid interacts with the sensor's electrical properties. Based on these researcher-specified parameters, the AI could accurately detect samples -- including watered-down milks, different types of sodas, blends of coffee and multiple fruit juices at several levels of freshness -- and report on their content with greater than 80% accuracy in about a minute. "After achieving a reasonable accuracy with human-selected parameters, we decided to let the neural network define its own figures of merit by providing it with the raw sensor data. We found that the neural network reached a near ideal inference accuracy of more than 95% when utilizing the machine-derived figures of merit rather than the ones provided by humans," said co-author Andrew Pannone, a doctoral student in engineering science and mechanics advised by Das. "So, we used a method called Shapley additive explanations, which allows us to ask the neural network what it was thinking after it makes a decision." This approach uses game theory, a decision-making process that considers the choices of others to predict the outcome of a single participant, to assign values to the data under consideration. 
With these explanations, the researchers could reverse engineer an understanding of how the neural network weighed various components of the sample to make a final determination -- giving the team a glimpse into the neural network's decision-making process, which has remained largely opaque in the field of AI, according to the researchers. They found that, instead of simply assessing individual human-assigned parameters, the neural network considered the data it determined were most important together, with the Shapley additive explanations revealing how important the neural network considered each input data. The researchers explained that this assessment could be compared to two people drinking milk. They can both identify that it is milk, but one person may think it is skim that has gone off while the other thinks it is 2% that is still fresh. The nuances of why are not easily explained even by the individual making the assessment. "We found that the network looked at more subtle characteristics in the data -- things we, as humans, struggle to define properly," Das said. "And because the neural network considers the sensor characteristics holistically, it mitigates variations that might occur day-to-day. In terms of the milk, the neural network can determine the varying water content of the milk and, in that context, determine if any indicators of degradation are meaningful enough to be considered a food safety issue." According to Das, the tongue's capabilities are limited only by the data on which it is trained, meaning that while the focus of this study was on food assessment, it could be applied to medical diagnostics, too. And while sensitivity is important no matter where the sensor is applied, their sensors' robustness provides a path forward for broad deployment in different industries, the researchers said. 
Das explained that the sensors don't need to be precisely identical because machine learning algorithms can look at all information together and still produce the right answer. This makes for a more practical -- and less expensive -- manufacturing process. "We figured out that we can live with imperfection," Das said. "And that's what nature is -- it's full of imperfections, but it can still make robust decisions, just like our electronic tongue." Das is also affiliated with the Materials Research Institute and the Departments of Electrical Engineering and of Materials Science and Engineering. Other contributors from the Penn State Department of Engineering Science and Mechanics include Aditya Raj, a research technologist at the time of the research; Sarbashis Das, a graduate student at the time of research who earned his doctorate in electrical engineering in May; Ziheng Chen, a graduate student in engineering science and mechanics; and Collin A. Price, a graduate student who earned his bachelor of science in engineering science and mechanics in May. Mahmooda Sultana, with the NASA Goddard Space Flight Center, also contributed. A Space Technology Graduate Research Opportunities grant from NASA supported this work.
[5]
Electronic Tongue Uses AI to Detect Differences in Liquids - Neuroscience News
Summary: Researchers have developed an AI-powered "electronic tongue" capable of distinguishing subtle differences in liquids, such as milk freshness, soda types, and coffee blends. By analyzing sensor data through a neural network, the device achieved over 95% accuracy in identifying liquid quality, authenticity, and potential safety issues. Interestingly, when the AI was allowed to select its own analysis parameters, it outperformed human-defined settings, showing how it holistically assessed subtle data. This technology, which uses graphene-based sensors, could revolutionize food safety assessments and potentially extend to medical diagnostics. The device's AI insights also provide a unique view into the neural network's decision-making process. This innovation promises practical applications across industries where quality and safety are paramount. A recently developed electronic tongue is capable of identifying differences in similar liquids, such as milk with varying water content; diverse products, including soda types and coffee blends; signs of spoilage in fruit juices; and instances of food safety concerns. The team, led by researchers at Penn State, also found that results were even more accurate when artificial intelligence (AI) used its own assessment parameters to interpret the data generated by the electronic tongue. The researchers published their results today (Oct. 9) in Nature. According to the researchers, the electronic tongue can be useful for food safety and production, as well as for medical diagnostics. The sensor and its AI can broadly detect and classify various substances while collectively assessing their respective quality, authenticity and freshness. This assessment has also provided the researchers with a view into how AI makes decisions, which could lead to better AI development and applications, they said. 
"We're trying to make an artificial tongue, but the process of how we experience different foods involves more than just the tongue," said corresponding author Saptarshi Das, Ackley Professor of Engineering and professor of engineering science and mechanics. "We have the tongue itself, consisting of taste receptors that interact with food species and send their information to the gustatory cortex -- a biological neural network." The gustatory cortex is the region of the brain that perceives and interprets various tastes beyond what can be sensed by taste receptors, which primarily categorize foods via the five broad categories of sweet, sour, bitter, salty and savory. As the brain learns the nuances of the tastes, it can better differentiate the subtlety of flavors. To artificially imitate the gustatory cortex, the researchers developed a neural network, which is a machine learning algorithm that mimics the human brain in assessing and understanding data. "Previously, we investigated how the brain reacts to different tastes and mimicked this process by integrating different 2D materials to develop a kind of blueprint as to how AI can process information more like a human being," said co-author Harikrishnan Ravichandran, a doctoral student in engineering science and mechanics advised by Das. "Now, in this work, we're considering several chemicals to see if the sensors can accurately detect them, and furthermore, whether they can detect minute differences between similar foods and discern instances of food safety concerns." The tongue comprises a graphene-based ion-sensitive field-effect transistor, or a conductive device that can detect chemical ions, linked to an artificial neural network, trained on various datasets. Critically, Das noted, the sensors are non-functionalized, meaning that one sensor can detect different types of chemicals, rather than having a specific sensor dedicated to each potential chemical. 
The researchers provided the neural network with 20 specific parameters to assess, all of which are related to how a sample liquid interacts with the sensor's electrical properties. Based on these researcher-specified parameters, the AI could accurately detect samples -- including watered-down milks, different types of sodas, blends of coffee and multiple fruit juices at several levels of freshness -- and report on their content with greater than 80% accuracy in about a minute. "After achieving a reasonable accuracy with human-selected parameters, we decided to let the neural network define its own figures of merit by providing it with the raw sensor data. We found that the neural network reached a near ideal inference accuracy of more than 95% when utilizing the machine-derived figures of merit rather than the ones provided by humans," said co-author Andrew Pannone, a doctoral student in engineering science and mechanics advised by Das. "So, we used a method called Shapley additive explanations, which allows us to ask the neural network what it was thinking after it makes a decision." This approach uses game theory, a decision-making process that considers the choices of others to predict the outcome of a single participant, to assign values to the data under consideration. With these explanations, the researchers could reverse engineer an understanding of how the neural network weighed various components of the sample to make a final determination -- giving the team a glimpse into the neural network's decision-making process, which has remained largely opaque in the field of AI, according to the researchers. They found that, instead of simply assessing individual human-assigned parameters, the neural network considered the data it determined were most important together, with the Shapley additive explanations revealing how important the neural network considered each input data.
The researchers explained that this assessment could be compared to two people drinking milk. They can both identify that it is milk, but one person may think it is skim that has gone off while the other thinks it is 2% that is still fresh. The nuances of why are not easily explained even by the individual making the assessment. "We found that the network looked at more subtle characteristics in the data -- things we, as humans, struggle to define properly," Das said. "And because the neural network considers the sensor characteristics holistically, it mitigates variations that might occur day-to-day. In terms of the milk, the neural network can determine the varying water content of the milk and, in that context, determine if any indicators of degradation are meaningful enough to be considered a food safety issue." According to Das, the tongue's capabilities are limited only by the data on which it is trained, meaning that while the focus of this study was on food assessment, it could be applied to medical diagnostics, too. And while sensitivity is important no matter where the sensor is applied, their sensors' robustness provides a path forward for broad deployment in different industries, the researchers said. Das explained that the sensors don't need to be precisely identical because machine learning algorithms can look at all information together and still produce the right answer. This makes for a more practical -- and less expensive -- manufacturing process. "We figured out that we can live with imperfection," Das said. "And that's what nature is -- it's full of imperfections, but it can still make robust decisions, just like our electronic tongue." Das is also affiliated with the Materials Research Institute and the Departments of Electrical Engineering and of Materials Science and Engineering. 
Other contributors from the Penn State Department of Engineering Science and Mechanics include Aditya Raj, a research technologist at the time of the research; Sarbashis Das, a graduate student at the time of research who earned his doctorate in electrical engineering in May; Ziheng Chen, a graduate student in engineering science and mechanics; and Collin A. Price, a graduate student who earned his bachelor of science in engineering science and mechanics in May. Mahmooda Sultana, with the NASA Goddard Space Flight Center, also contributed. Funding: A Space Technology Graduate Research Opportunities grant from NASA supported this work. Robust chemical analysis with graphene chemosensors and machine learning Ion-sensitive field-effect transistors (ISFETs) have emerged as indispensable tools in chemosensing applications. ISFETs operate by converting changes in the composition of chemical solutions into electrical signals, making them ideal for environmental monitoring, healthcare diagnostics and industrial process control. Recent advancements in ISFET technology, including functionalized multiplexed arrays and advanced data analytics, have improved their performance. Here we illustrate the advantages of incorporating machine learning algorithms to construct predictive models using the extensive datasets generated by ISFET sensors for both classification and quantification tasks. This integration also sheds new light on the working of ISFETs beyond what can be derived solely from human expertise. Furthermore, it mitigates practical challenges associated with cycle-to-cycle, sensor-to-sensor and chip-to-chip variations, paving the way for the broader adoption of ISFETs in commercial applications. Specifically, we use data generated by non-functionalized graphene-based ISFET arrays to train artificial neural networks that possess a remarkable ability to discern instances of food fraud, food spoilage and food safety concerns. 
We anticipate that the fusion of compact, energy-efficient and reusable graphene-based ISFET technology with robust machine learning algorithms holds the potential to revolutionize the detection of subtle chemical and environmental changes, offering swift, data-driven insights applicable across a wide spectrum of applications.
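The abstract distinguishes classification tasks (which liquid is this?) from quantification tasks (how much of something is in it?). The quantification side can be sketched as a simple calibration fit; the readings and dilution levels below are made up for illustration and are not the study's data.

```python
# Toy sketch of the "quantification" task: fit a least-squares line
# mapping a sensor reading to the percentage of added water in milk.
# All numbers are hypothetical calibration points.

readings  = [0.95, 0.86, 0.74, 0.66, 0.55]  # sensor output per sample
water_pct = [0.0, 10.0, 20.0, 30.0, 40.0]   # known dilution levels

n = len(readings)
mx = sum(readings) / n
my = sum(water_pct) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(readings, water_pct))
         / sum((x - mx) ** 2 for x in readings))
intercept = my - slope * mx

def predict(reading):
    """Estimate % added water for a new sensor reading."""
    return slope * reading + intercept

print(round(predict(0.70), 1))  # estimate for an unseen sample
```

The paper's machine-learning approach generalizes this idea across many sensor channels at once, rather than fitting a single calibration curve by hand.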
Researchers at Penn State have developed an AI-driven electronic tongue capable of detecting subtle differences in liquids, potentially transforming food safety, quality control, and medical diagnostics.
Researchers at Penn State have developed an innovative "electronic tongue" powered by artificial intelligence (AI) that can identify subtle differences in liquids. This groundbreaking technology has the potential to revolutionize food safety, quality control, and even extend into medical diagnostics [1][2][3].
The electronic tongue comprises a graphene-based ion-sensitive field-effect transistor (ISFET) linked to an artificial neural network. This device can detect chemical ions and is trained on various datasets [1][4]. Unlike traditional sensors, this non-functionalized sensor can detect different types of chemicals without requiring a specific sensor for each potential chemical [4].
Initially, researchers provided the neural network with 20 specific parameters related to how a sample liquid interacts with the sensor's electrical properties. Using these human-specified parameters, the AI achieved over 80% accuracy in detecting various samples, including watered-down milks, different types of sodas, blends of coffee, and fruit juices at several levels of freshness [1][2][4].
Remarkably, when the researchers allowed the neural network to define its own figures of merit using raw sensor data, the accuracy increased to over 95% [1][2][4].
The electronic tongue aims to mimic the complex process of human taste perception, which involves more than just the tongue. As Professor Saptarshi Das explains, "We're trying to make an artificial tongue, but the process of how we experience different foods involves more than just the tongue" [1][2][4].
To artificially imitate the gustatory cortex (the brain region that interprets tastes), the researchers developed a neural network that mimics the human brain in assessing and understanding data [1][2][4].
Using a method called Shapley additive explanations, the researchers gained insights into the neural network's decision-making process. This approach, based on game theory, allowed them to understand how the AI weighed various components of the sample to make its final determination [1][2][4].
The electronic tongue's capabilities extend beyond basic taste detection to food and beverage quality control, authenticity and freshness assessment, and potentially medical diagnostics [2][4][5].
The robustness of the sensors provides a path for broad deployment across different industries. Importantly, the sensors don't need to be precisely identical, as the machine learning algorithms can process all information collectively to produce accurate results. This makes the manufacturing process more practical and cost-effective [4][5].
As Professor Das notes, the electronic tongue's capabilities are limited only by the data on which it is trained, suggesting vast potential for future applications in various fields [4][5].