3 Sources
[1]
Adobe's New AI Model Can Process Documents On-Device
The AI model was fine-tuned using DocAssist, a specialised dataset for document-assistance tasks

Adobe researchers have published a paper that details a new artificial intelligence (AI) model capable of processing documents locally on a device. Published last week, the paper explains that the researchers experimented with existing large language models (LLMs) and small language models (SLMs) to find how to reduce a model's size while keeping its processing capability and inference speed high. As a result of these experiments, they developed an AI model dubbed SlimLM that can function entirely within a smartphone and process documents.

AI-powered document processing, which allows a chatbot to answer user queries about a document's content, is an important use case of generative AI. Many companies, including Adobe, have tapped into this application and released tools that offer this functionality. However, all such tools share one issue -- the AI processing takes place in the cloud. Server-side processing of data raises concerns about data privacy and makes processing documents containing sensitive information a risky proposition. The risk mainly stems from fears that the company offering the solution might train its AI on the data, or that a data breach could leak the sensitive information.

As a solution, Adobe researchers published a paper on the arXiv preprint server detailing a new AI model that can carry out document processing entirely on the device. Dubbed SlimLM, the model's smallest variant contains just 125 million parameters, which makes it feasible to integrate within a smartphone's operating system. The researchers claim that it can operate locally, without needing Internet connectivity. As a result, users can process even the most sensitive documents without fear, as the data never leaves the device.
In the paper, the researchers highlighted that they conducted several experiments on a Samsung Galaxy S24 to find the right balance between parameter size, inference speed, and processing capability. After optimising the architecture, the team pre-trained the model on the SlimPajama-627B dataset and fine-tuned it using DocAssist for document-processing tasks. Notably, arXiv is a pre-print server where publishing does not require peer review. As such, the validity of the claims made in the research paper cannot be independently ascertained. If they hold up, however, the AI model could be shipped with Adobe's platforms in the future.
[2]
Goodbye cloud, Hello phone: Adobe's SlimLM brings AI to mobile devices
Adobe researchers have created a breakthrough AI system that processes documents directly on smartphones without internet connectivity, potentially transforming how businesses handle sensitive information and how consumers interact with their devices. The system, called SlimLM, represents a major shift in artificial intelligence deployment -- away from massive cloud computing centers and onto the phones in users' pockets. In tests on Samsung's latest Galaxy S24, SlimLM demonstrated it could analyze documents, generate summaries, and answer complex questions while running entirely on the device's hardware.

"While large language models have attracted significant attention, the practical implementation and performance of small language models on real mobile devices remain understudied, despite their growing importance in consumer technology," explained the research team, led by scientists from Adobe Research, Auburn University, and Georgia Tech.

How small language models are disrupting the cloud computing status quo

SlimLM enters the scene at a pivotal moment in the tech industry's shift toward edge computing -- a model in which data is processed where it's created, rather than in distant data centers. Major players like Google, Apple, and Meta have been racing to push AI onto mobile devices, with Google unveiling Gemini Nano for Android and Meta working on LLaMA-3.2, both aimed at bringing advanced language capabilities to smartphones.

What sets SlimLM apart is its precise optimization for real-world use. The research team tested various configurations, finding that their smallest model -- at just 125 million parameters, compared to models like GPT-4o, which contain hundreds of billions -- could efficiently process documents up to 800 words long on a smartphone.
Larger SlimLM variants, scaling up to 1 billion parameters, were also able to approach the performance of more resource-intensive models while still maintaining smooth operation on mobile hardware. This ability to run sophisticated AI models on-device without sacrificing too much performance could be a game-changer. "Our smallest model demonstrates efficient performance on [the Samsung Galaxy S24], while larger variants offer enhanced capabilities within mobile constraints," the researchers wrote.

Why on-device AI could reshape enterprise computing and data privacy

The business implications of SlimLM extend far beyond technical achievement. Enterprises currently spend millions on cloud-based AI solutions, paying for API calls to services like OpenAI or Anthropic to process documents, answer questions, and generate reports. SlimLM suggests a future where much of this work could be done locally on smartphones, significantly reducing costs while improving data privacy. Industries that handle sensitive information -- such as healthcare providers, law firms, and financial institutions -- stand to benefit the most. By processing data directly on the device, companies can avoid the risks associated with sending confidential information to cloud servers. This on-device processing also helps ensure compliance with strict data protection regulations like GDPR and HIPAA. "Our findings provide valuable insights and illuminate the capabilities of running advanced language models on high-end smartphones, potentially reducing server costs and enhancing privacy through on-device processing," the team noted in their paper.

Inside the technology: How researchers made AI work without the cloud

The technical breakthrough behind SlimLM lies in how the researchers rethought language models to meet the hardware limitations of mobile devices.
Instead of merely shrinking existing large models, they conducted a series of experiments to find the "sweet spot" between model size, context length, and inference time, ensuring that the models could deliver real-world performance without overloading mobile processors. Another key innovation was the creation of DocAssist, a specialized dataset designed to train SlimLM for document-related tasks like summarization and question answering. Instead of relying on generic internet data, the team tailored their training to focus on practical business applications, making SlimLM highly efficient for the tasks that matter most in professional settings.

The future of AI: Why your next digital assistant might not need the internet

SlimLM's development points to a future where sophisticated AI doesn't require constant cloud connectivity, a shift that could democratize access to AI tools while addressing growing concerns about data privacy and the high costs of cloud computing. Consider the potential applications: smartphones that can intelligently process emails, analyze documents, and assist with writing -- all without sending sensitive data to external servers. This could transform how professionals in industries like law, healthcare, and finance interact with their mobile devices. It's not just about privacy; it's about creating more resilient and accessible AI systems that work anywhere, regardless of internet connectivity. For the broader tech industry, SlimLM represents a compelling alternative to the "bigger is better" mentality that has dominated AI development. While companies like OpenAI are pushing toward trillion-parameter models, Adobe's research demonstrates that smaller, more efficient models can still deliver impressive results when optimized for specific tasks.

The end of cloud dependence?
The (soon-to-be) public release of SlimLM's code and training dataset could accelerate this shift, empowering developers to build privacy-preserving AI applications for mobile devices. As smartphone processors continue to evolve, the balance between cloud-based and on-device AI processing could tip dramatically toward local computing. What SlimLM offers is more than just another step forward in AI technology; it's a new paradigm for how we think about artificial intelligence. Instead of relying on vast server farms and constant internet connections, the future of AI could be personalized, running directly on the device in your pocket, maintaining privacy, and reducing dependence on cloud computing infrastructure. This development marks the beginning of a new chapter in AI's evolution. As the technology matures, we may soon look back on cloud-based AI as a transitional phase, with the true revolution being the moment AI became small enough to fit in our pockets.
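The 800-word limit reported for SlimLM's smallest variant implies that longer documents would need to be split into windows before on-device processing. A minimal sketch of such windowing -- the chunk size mirrors the reported figure, but the overlap value and function name are illustrative assumptions, not details from the paper:

```python
def chunk_words(text: str, max_words: int = 800, overlap: int = 50):
    """Split text into word windows a small on-device model can handle.

    max_words mirrors the ~800-word limit reported for SlimLM's smallest
    variant; overlap carries some context across window boundaries.
    """
    words = text.split()
    if len(words) <= max_words:
        return [" ".join(words)]
    chunks = []
    step = max_words - overlap
    for start in range(0, len(words), step):
        window = words[start:start + max_words]
        chunks.append(" ".join(window))
        if start + max_words >= len(words):
            break
    return chunks

# A 2,000-word document yields three overlapping windows.
doc = ("word " * 2000).strip()
print([len(c.split()) for c in chunk_words(doc)])
```

Each window could then be summarized independently, with the partial summaries merged in a final pass -- a common pattern for fitting long inputs into a short context.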
[3]
Adobe announces development of SLM that can run locally on a phone with no cloud connection
A small team of AI researchers at Adobe Inc., working with a colleague from Auburn University and another from Georgia Tech, has developed a small language model (SLM) that they claim can run locally on a smartphone with no access to the cloud. The group has written a paper describing their new app, which they call SlimLM, and posted it to the arXiv preprint server.

As LLM technology continues to mature, researchers across the globe continue to find new ways to improve it. In this new effort, the team has found a way to cut the cord for a specific type of AI application -- processing documents locally. As LLMs such as ChatGPT become more popular, users have grown more worried about privacy. And it is not just individuals -- companies large and small have adopted AI applications that assist with a variety of business processes, some of which require a high degree of privacy. The reason LLMs are not private right now is that some of their computation and much of their storage happen on cloud servers, which can be hacked.

The obvious solution, people in the field have noted, is to cut the cord and run small language models (SLMs) locally, with no need for the cloud, putting privacy worries to rest. Some of the biggest players in the field have been working toward that end -- Google, Apple, and Meta have all developed apps that can run without accessing the cloud. But none so far are being used in the real world. That is where SlimLM differs, at least according to the team: they plan to make the app available to users "soon."

The researchers acknowledge that their product can be used locally because of its specificity -- it is not a chatbot or a general-use tool. Instead, it handles specific document tasks, such as creating a summary or answering topical questions. That means the app was trained only on document processing, which reduces the number of parameters -- the smallest version currently runs with just 125 million. It also means it has far less work to do on the smartphone. The researchers suggest their app also represents a move toward more localized AI applications and a much higher degree of privacy for all types of applications.
Adobe researchers have developed SlimLM, a small language model capable of processing documents locally on smartphones without internet connectivity, potentially transforming data privacy and AI accessibility.
Adobe researchers have unveiled SlimLM, a groundbreaking small language model (SLM) designed to process documents locally on smartphones without requiring internet connectivity. This innovation marks a significant shift in AI deployment, moving away from cloud-based solutions towards more privacy-focused, on-device processing 1.
SlimLM's smallest variant contains just 125 million parameters, making it feasible for integration within a smartphone's operating system. The researchers conducted experiments on a Samsung Galaxy S24 to optimize the balance between parameter size, inference speed, and processing capability 2.
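The feasibility of a 125-million-parameter model on a phone can be sanity-checked with back-of-the-envelope arithmetic: weight storage scales linearly with parameter count and numeric precision. A rough sketch -- the precision options shown are common deployment choices in general, not details from the SlimLM paper:

```python
def model_memory_mb(params: int, bytes_per_param: float) -> float:
    """Approximate weight-storage footprint of a language model in MiB."""
    return params * bytes_per_param / (1024 ** 2)

P = 125_000_000  # SlimLM's smallest variant
for label, bpp in [("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{label}: ~{model_memory_mb(P, bpp):.0f} MB")
```

Even at full fp16 precision the weights stay well under half a gigabyte, comfortably within a modern flagship phone's memory budget; quantization shrinks this further (activations and KV-cache add overhead beyond this estimate).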
Key features of SlimLM include:
- A smallest variant of just 125 million parameters, small enough to integrate within a smartphone's operating system
- Fully local operation, with no internet connectivity required
- Larger variants, scaling up to 1 billion parameters, that approach the performance of more resource-intensive models
- Document tasks such as summarization and question answering, on documents up to roughly 800 words long
The Adobe team, in collaboration with researchers from Auburn University and Georgia Tech, employed a strategic approach to create SlimLM:
- Experimenting with existing LLMs and SLMs to find the "sweet spot" between model size, context length, and inference time
- Pre-training the model on the SlimPajama-627B dataset
- Fine-tuning it with DocAssist, a specialized dataset for document-processing tasks
- Testing and optimizing the models directly on a Samsung Galaxy S24
SlimLM addresses growing concerns about data privacy in AI applications. By processing sensitive documents entirely on-device, it eliminates the need to send data to cloud servers, potentially reducing the risk of data breaches and ensuring compliance with regulations like GDPR and HIPAA 2.
Industries that stand to benefit the most include:
- Healthcare providers
- Law firms
- Financial institutions
The development of SlimLM represents a shift in the AI industry's approach:
- Away from the "bigger is better" mentality, toward smaller models optimized for specific tasks
- Away from cloud-based processing, toward edge computing on users' own devices
- Toward reduced server costs and stronger privacy through on-device processing
While the research paper is currently published only on arXiv, a pre-print server that does not require peer review, the potential implications of SlimLM are significant. Adobe researchers plan to make the app available to users "soon," which could lead to widespread adoption of on-device AI processing for document-related tasks 3.
As the tech industry continues to push towards edge computing, SlimLM's development aligns with efforts by major players like Google, Apple, and Meta to bring advanced language capabilities to smartphones. This trend suggests a future where sophisticated AI doesn't require constant cloud connectivity, potentially transforming how professionals interact with their mobile devices and handle sensitive information 2.
Summarized by Navi