Amazon S3 Files gives AI agents native file system access, ending two decades of storage friction


Amazon Web Services launched S3 Files, a native file system interface that lets AI agents access S3 object storage directly using standard file operations. The new capability eliminates the need for duplicate storage layers and data synchronization pipelines that have plagued developers for nearly 20 years, particularly as agentic AI workflows demand seamless access to enterprise data.

Amazon S3 Files Bridges the Object-File Divide

Amazon Web Services (AWS) has introduced Amazon S3 Files, a native file system interface that allows AI agents and applications to access its S3 object storage service as if it were a traditional file system [1]. The new capability addresses a fundamental incompatibility that has frustrated developers and data scientists for nearly two decades: the divide between object stores that serve data through API calls and file systems that use standard file operations [2].

The file system presents S3 objects as files and directories, supporting all Network File System (NFS) v4.1+ operations, such as creating, reading, updating, and deleting files [1]. This means machine learning training jobs can run directly against data in S3 without first copying it to a separate file system, and AI agents can read and write files using the same basic tools they would use on a local hard drive [2].

Source: VentureBeat

Why the Object-File Split Matters for AI-Driven Systems

The rise of agentic AI workflows has made the storage challenge more acute. AI agents run on file systems, using standard tools to navigate directories and read file paths, but enterprise data predominantly resides in object stores like S3 [3]. Bridging that gap previously required a separate file system layer alongside S3, creating data duplication and synchronization pipelines to keep both aligned [3].

Andy Warfield, VP and distinguished engineer at AWS, explained that engineering teams using tools like Kiro and Claude Code kept running into the same problem: agents defaulted to local file tools, but the data lived in S3 buckets [3]. Downloading data locally worked until the agent's context window compacted and the session state was lost. "By making data in S3 immediately available, as if it's part of the local file system, we found that we had a really big acceleration with the ability of things like Kiro and Claude Code to be able to work with that data," Warfield told VentureBeat [3].

A Different Architecture Than FUSE-Based Solutions

Previous attempts to solve this problem relied on FUSE (Filesystem in Userspace), a software layer that lets developers mount custom file systems without modifying the underlying storage [3]. Tools like Google Cloud Storage FUSE and Azure Blob NFS used FUSE-based drivers to make object stores look like file systems, but these solutions either faked file behavior by adding extra metadata or refused file operations that the object store couldn't support [3].

S3 Files takes a different approach entirely. AWS connects its Elastic File System (EFS) technology directly to S3, presenting a full native file system access layer while keeping S3 as the system of record [3]. Both the file system API and the S3 object API remain accessible simultaneously against the same data [3].

Source: GeekWire

In an unusually candid essay, Warfield described how the team initially struggled with the technical challenge. They "locked a bunch of our most senior engineers in a room and not let them out till they had a plan that they all liked," he wrote. "Passionate and contentious discussions ensued. And then finally we gave up" [2]. The breakthrough came when the team stopped trying to hide the boundary between files and objects and instead made it a deliberate part of the design [2].

Accelerating Multi-Agent Pipelines and Log Analysis

For multi-agent pipelines, the capability allows multiple agents to access the same mounted bucket simultaneously. AWS says thousands of compute resources can connect to a single S3 file system at the same time, with aggregate read throughput reaching multiple terabytes per second [3]. Shared state across agents works through standard file operations: subdirectories, notes files, and shared project directories that any agent in the pipeline can read and write [3].

Warfield walked through a practical example involving log analysis. Previously, a developer using an AI agent to work with log data would need to tell the agent where the log files were located and instruct it to download them. With S3 Files, the logs are immediately mountable on the local file system: the developer simply points the agent at a specific path, and it has access right away [3].

The file system can be accessed directly from any AWS compute instance, container, or function, spanning use cases from production applications to machine learning training and agentic AI systems [1]. For teams building Retrieval-Augmented Generation (RAG) pipelines on top of shared agent content, S3 Vectors, launched at AWS re:Invent in December 2024, layers on top for similarity search against the same data [3].

S3 Files is available now in most AWS Regions and has been in customer testing for about nine months [2]. The service is built on Amazon's Elastic File System and offers developers a simpler, more unified, and cost-efficient data architecture [1].

TheOutpost.ai