5 Sources
[1]
AWS unveils Bedrock AgentCore, a new platform for building enterprise AI agents with open source frameworks and tools
Cloud giant Amazon Web Services (AWS) believes AI agents will change how we all work and interact with information, and that enterprises need a platform that lets them build and deploy agents at scale -- all in one place. Today at its New York Summit, AWS unveiled Amazon Bedrock AgentCore, a new enterprise-grade platform designed to build, deploy, and operate AI agents securely and at scale.

Swami Sivasubramanian, AWS Vice President of Agentic AI, said during the keynote that AgentCore "helps organizations move beyond experiments to production-ready agent systems that can be trusted with your most critical business processes."

AgentCore is a modular stack of services -- available in preview -- that gives developers the core infrastructure needed to move AI agents from prototype to production, including runtime, memory, identity, observability, API integration, and tools for web browsing and code execution.

"We believe that agents are going to fundamentally change how we use tools and the internet," said Deepak Singh, AWS Vice President of Databases and AI. "The line between an agent and an application is getting blurrier."

AgentCore builds on the existing Bedrock Agents framework, launched in late 2024, but dramatically expands its capabilities by supporting any agent framework or foundation model -- not just those hosted within Bedrock. That includes compatibility with open-source toolkits such as CrewAI, LangChain, LlamaIndex, LangGraph, and AWS's own Strands Agents SDK.

What AWS Bedrock AgentCore includes

* AgentCore Runtime: A serverless, low-latency execution environment that supports multimodal workloads and long-running sessions with session isolation.
* AgentCore Memory: Long- and short-term memory services that let agents learn from past interactions and persist contextual knowledge across sessions.
* AgentCore Identity: OAuth-based identity and access management, allowing agents to act on behalf of users across systems like GitHub, Slack, or Salesforce.
* AgentCore Observability: Built-in dashboards, debugging, and telemetry tools with support for OpenTelemetry, LangSmith, and Datadog.
* AgentCore Gateway: Converts internal APIs, Lambda functions, and third-party services into agent-compatible tools using the Model Context Protocol (MCP).
* AgentCore Browser: Provides headless browser access so agents can autonomously interact with websites.
* AgentCore Code Interpreter: A secure environment for executing agent-generated code for analysis and visualization.

AgentCore also integrates with the AWS Marketplace, enabling teams to discover and deploy pre-built agents and tools.

According to Singh, AgentCore has been designed with interoperability in mind. It supports emerging industry standards like MCP and Google's Agent2Agent (A2A) protocol. Features such as AgentCore Identity and Gateway ensure agents have clear permissioning and can interact securely with internal systems and third-party APIs.

AWS's launch puts it squarely at the center of what is quickly becoming one of the most competitive segments in enterprise AI. OpenAI's Agents SDK and Google's Gemini-based Agents SDK are both pushing similar visions of end-to-end agent development platforms. Writer's AI HQ and startups like Cognition (maker of Devin) are also building tools for managing autonomous software agents.

"Agents are the most impactful change we've seen in ages," Sivasubramanian said. "With agents comes a shift to service as a software. This is a tectonic change in how software is built, deployed and operated."

Customer adoption and early use cases

Several companies granted early access to AgentCore are already building production-grade agentic applications across industries including finance, healthcare, marketing, and content management.
Cloud document and file storage company Box is exploring ways to extend its content management tools using Strands Agents and Bedrock AgentCore Runtime. CTO Ben Kus said the integration gives Box customers "top tier security and compliance" while scaling AI capabilities across enterprise environments.

Brazil's Itaú Unibanco is using AgentCore to support its development of hyper-personalized, secure digital banking experiences. Chief Technology Officer Carlos Eduardo Mazzei said the new platform "will help us deliver an intuitive banking experience with the efficiency of automation and personalization customers expect."

In the healthcare space, Innovaccer has built a new protocol -- HMCP (Healthcare Model Context Protocol) -- on top of AgentCore Gateway. CEO and co-founder Abhinav Shashank called Gateway a "game-changer" that allows the company to convert existing APIs into agent-compatible tools at scale while maintaining trust, compliance, and operational efficiency.

Marketing firm Epsilon is leveraging AgentCore to accelerate campaign build times and improve engagement. Prashanth Athota, SVP of Software Engineering, said the company expects to reduce build times by up to 30% and enhance customer journey personalization.

Availability and pricing

AgentCore is now available in preview in select AWS regions, including US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Frankfurt). It's free to try until September 16, 2025, with pricing to begin thereafter.

Pricing for AgentCore is entirely consumption-based, with no upfront commitments or minimum fees. Each module -- Runtime, Memory, Identity, Observability, Gateway, Browser, and Code Interpreter -- is billed independently and can be used a la carte or together. Runtime, Browser, and Code Interpreter are priced per second based on CPU and memory usage, at $0.0895 per vCPU-hour and $0.00945 per GB-hour.
Gateway charges $0.005 per 1,000 tool API invocations, $0.025 per 1,000 search queries, and $0.02 per 100 tools indexed per month. Memory costs are based on data volume: $0.25 per 1,000 short-term memory events, $0.75 per 1,000 long-term memories stored (or $0.25 with custom strategies), and $0.50 per 1,000 retrievals. AgentCore Identity costs $0.010 per 1,000 token or API key requests, though it's included at no extra charge when used via Runtime or Gateway. Observability is billed at standard Amazon CloudWatch rates.

To learn more or get started, AWS directs developers to its AgentCore documentation, GitHub samples, and a dedicated Discord server.
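Because each module is metered independently, estimating a bill is straightforward arithmetic. The sketch below applies the published per-unit rates to a hypothetical month of usage; the workload figures are illustrative assumptions, not AWS guidance, and Identity, search-query, and tool-indexing charges are omitted for brevity.

```python
# Back-of-envelope cost estimate for a hypothetical AgentCore workload,
# using the per-unit rates quoted above. Workload numbers are made up.

RATE_VCPU_HOUR = 0.0895      # Runtime/Browser/Code Interpreter, per vCPU-hour
RATE_GB_HOUR = 0.00945       # memory usage, per GB-hour
RATE_GATEWAY_INVOKE = 0.005  # per 1,000 tool API invocations
RATE_STM_EVENT = 0.25        # per 1,000 short-term memory events
RATE_LTM_STORE = 0.75        # per 1,000 long-term memories stored
RATE_RETRIEVAL = 0.50        # per 1,000 memory retrievals

def monthly_cost(vcpu_hours, gb_hours, tool_invocations,
                 stm_events, ltm_stored, retrievals):
    """Sum the independently billed module charges for one month."""
    return (
        vcpu_hours * RATE_VCPU_HOUR
        + gb_hours * RATE_GB_HOUR
        + tool_invocations / 1_000 * RATE_GATEWAY_INVOKE
        + stm_events / 1_000 * RATE_STM_EVENT
        + ltm_stored / 1_000 * RATE_LTM_STORE
        + retrievals / 1_000 * RATE_RETRIEVAL
    )

# Example: 200 vCPU-hours, 400 GB-hours, 1M tool calls, 500k short-term
# events, 50k long-term memories stored, 200k retrievals.
cost = monthly_cost(200, 400, 1_000_000, 500_000, 50_000, 200_000)
print(f"${cost:.2f}")  # -> $289.18
```

Note how, at these rates, memory operations dominate compute: the 500k short-term events alone ($125) cost far more than 200 vCPU-hours of runtime ($17.90).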
[2]
AWS looks to super-charge AI agents with Amazon Bedrock AgentCore
Includes a selection of tools and services to help build and deploy agents

AWS has revealed a new agentic AI development platform as it looks to make building and deploying agents easier than ever. The new Amazon Bedrock AgentCore platform aims to give developers everything they need to create and deploy advanced AI agents.

Speaking at its AWS Summit New York 2025 event, the company said the launch marks a "step change" in helping developers move agents from fun toys into something effective and into production.

Set to be available soon, AgentCore includes the following services:

* AgentCore Runtime: A secure, serverless runtime purpose-built for deploying and scaling AI agents and tools.
* AgentCore Memory: Build context-aware agents by eliminating complex memory infrastructure management, while keeping full control over what the agent remembers.
* AgentCore Identity: Securely access AWS services and third-party tools on behalf of users, or act with pre-authorization.
* AgentCore Gateway: Build, deploy, and discover agents across millions of connections, automatically converting APIs into MCP-compatible tools without managing integrations.
* AgentCore Code Interpreter: Enable AI agents to write and execute code securely -- including JavaScript and Python -- improving accuracy on complex end-to-end tasks.
* AgentCore Browser Tool: A fast, secure, cloud-based browser runtime that lets AI agents interact with websites at scale, including live viewing for troubleshooting and auditing.
* AgentCore Observability: Trace, debug, and monitor AI agents' performance in production environments.
[3]
Amazon Launches AgentCore to Deploy and Operate AI Agents at Scale | AIM
AgentCore addresses the growing demand for infrastructure that supports production-ready AI agents capable of reasoning, planning, acting, and learning with limited human oversight.

Amazon has announced the preview launch of AgentCore, a new suite of services designed to help developers deploy and manage AI agents at enterprise scale. Built on Amazon Bedrock and compatible with any model or framework, AgentCore targets teams moving agents from prototype into production.

The rise of agentic AI has accelerated with the adoption of standardised protocols like Model Context Protocol (MCP) and Agent2Agent (A2A), which simplify how agents interact with tools and systems. While frameworks like CrewAI, LangGraph, LlamaIndex, and Strands Agents have made prototyping easier, moving these agents into production still poses major challenges. Developers often spend months building session management, memory systems, observability layers, and secure identity controls, diverting focus from core functionality.

"AgentCore eliminates tedious infrastructure work and operational complexity so development teams can bring agentic solutions to market faster," Amazon said in a blog post.

AgentCore offers enterprise-grade services that handle key operational components of agent development. These include a serverless runtime environment with session isolation, long- and short-term memory management, execution observability with metadata and debugging tools, and secure identity integration for accessing AWS and third-party services such as GitHub and Slack. The platform also includes managed browser instances for web-based workflows and a code interpreter to run agent-generated code in an isolated environment.

According to Amazon, these services are designed to work either independently or together, and can be integrated with existing agent code through the AgentCore SDK.
"AgentCore can work with open source or custom AI agent frameworks, giving teams the flexibility to maintain their preferred tools while gaining enterprise capabilities," the company said.

Developers can also discover and run pre-built agents and tools via AWS Marketplace, using AgentCore Runtime to deploy them and AgentCore Gateway to connect them to APIs and other services. This unified access model is expected to make it easier for enterprises to scale agent-based applications while maintaining compliance and control.

With AgentCore, Amazon is positioning itself at the centre of the agent infrastructure ecosystem, providing a foundational layer for developers to move beyond experimentation and build AI agents that operate reliably at scale.
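The "convert existing APIs into agent-compatible tools" idea is easiest to see in miniature. The sketch below is plain Python -- not the AgentCore SDK and not the MCP wire protocol -- illustrating the pattern that MCP standardizes and Gateway automates: each tool is published with a name, a description, and a JSON-Schema-style parameter spec so an agent can discover it and invoke it by name. The registry and the `get_order_status` example are hypothetical.

```python
# Illustrative sketch of the API -> agent-compatible-tool pattern.
# Everything here (registry, tool names, schemas) is a toy assumption.

import json

TOOLS = {}

def tool(name, description, parameters):
    """Register a function with the machine-readable metadata an agent needs."""
    def decorator(fn):
        TOOLS[name] = {"fn": fn, "description": description,
                       "parameters": parameters}
        return fn
    return decorator

@tool(
    name="get_order_status",
    description="Look up the fulfillment status of an order by its ID.",
    parameters={"type": "object",
                "properties": {"order_id": {"type": "string"}},
                "required": ["order_id"]},
)
def get_order_status(order_id):
    # Stand-in for a call to an internal API or a Lambda function.
    return {"order_id": order_id, "status": "shipped"}

def list_tools():
    """What an agent sees when it asks the gateway which tools exist."""
    return [{"name": n, "description": t["description"],
             "parameters": t["parameters"]} for n, t in TOOLS.items()]

def invoke(name, arguments):
    """Dispatch a model-issued tool call to the underlying function."""
    return TOOLS[name]["fn"](**arguments)

print(json.dumps(list_tools(), indent=2))
print(invoke("get_order_status", {"order_id": "A-1001"}))
```

The value of a gateway layer is that the wrapping happens once, centrally: the agent never sees the raw API, only the schema-described tool, which is where permissioning and auditing can be enforced.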
[4]
AWS debuts new AI development tools, vector-optimized object store - SiliconANGLE
AWS debuts new AI development tools, vector-optimized object store

Amazon Web Services Inc. is rolling out a new set of tools designed to help customers build artificial intelligence agents. Swami Sivasubramanian (pictured), the cloud giant's vice president of agentic AI, detailed the offerings today at the AWS Summit in New York.

The first new offering that Sivasubramanian detailed during his keynote is called Amazon Bedrock AgentCore. It comprises a half dozen services designed to ease the task of building and maintaining AI agents.

AgentCore's first component, AgentCore Runtime, provides cloud-based sandboxes for hosting AI agents. It allows agents to operate for up to eight hours per run, which makes it possible to automate time-consuming tasks such as analyzing large datasets. Each AgentCore sandbox can be configured with different security settings tailored to the workload it hosts.

If completing a task requires an agent to use an external system, developers can activate a service called AgentCore Gateway. It allows agents to access application programming interfaces, code snippets deployed on AWS Lambda and other external workloads. If some of those workloads require an agent to authenticate itself, a module called AgentCore Identity makes it possible to do so using access management services such as Okta.

A code interpreter built into AgentCore allows AI agents to run the code they generate. Another tool, a cloud-based browser, enables agents to perform tasks that require interacting with websites. Developers can check that their AgentCore workloads run reliably using a service called AgentCore Observability.

"AgentCore provides a secure, serverless runtime with complete session isolation and the longest running workload available today, tools and capabilities to help agents execute workflows with the right permissions and context, and controls to operate trustworthy agents," Sivasubramanian wrote in a blog post.
AgentCore-powered agents and other AI applications can keep their data in Amazon S3 Vectors, a new storage offering that also debuted at AWS Summit today. It's optimized to store vectors, the mathematical structures in which neural networks encode their data. AWS says the offering costs up to 90% less than alternative services.

S3 Vectors stores information in repositories called vector buckets. A vector bucket can hold up to 10,000 data structures called vector indexes. Each vector index, in turn, may contain tens of millions of vectors. Customers can optionally enrich their records with metadata such as the date when a given vector was created. Such contextual information makes it easier for AI models to find relevant records in large datasets. According to AWS, S3 Vectors processes queries with sub-second latency.

"As you write, update, and delete vectors over time, S3 Vectors automatically optimizes the vector data to achieve the best possible price-performance for vector storage, even as the datasets scale and evolve," AWS principal developer advocate Channy Yun explained in a blog post.

S3 Vectors integrates with multiple AWS services, including Amazon Bedrock, which offers access to a set of cloud-hosted foundation models. Some of the models are developed by third-party providers such as Anthropic, while others are built by AWS itself. Companies can use the models to power their AI agents.

Going forward, the cloud giant will enable users to customize the Amazon Nova series of models that it offers through Bedrock. The series comprises more than a half dozen models, including several large language models; the others are geared toward tasks such as image generation. AWS will enable customers to customize Nova models during both the pre- and post-training phases of the development workflow. The pre-training phase produces the base version of an AI model.
Post-training, in turn, is the umbrella term for the optimizations engineers make to an AI model after initial development is complete. AWS will support several customization methods. One of them is RLHF, a widely used technique in which humans rate the quality of an LLM's prompt responses and that feedback is used to refine the model's output. After customizing a model, customers can deploy it on Bedrock.

"Customers can now customize Nova Micro, Nova Lite, and Nova Pro across the model training lifecycle, including pre-training, supervised fine-tuning, and alignment," AWS senior developer advocate Betty Zheng detailed in a blog post.

AWS announced the new offerings alongside a number of other AI-related updates. The AWS Marketplace now has a section dedicated to AI agents, tools, and related offerings from the cloud giant's partners. Nova Act, a Bedrock model that can perform actions in a browser, is receiving an enhanced software development kit with expanded cybersecurity features.

AWS is also releasing two new MCP servers. The first offers access to data about its APIs, while the other contains knowledge from its developer documentation. AI agents can use the MCP servers to incorporate that information into their responses.

Finally, AWS will invest $100 million in its AWS Generative AI Innovation Center to help customers with their AI projects. The business unit, which was formed in 2023, provides customers with access to AI researchers, engineers, and other technical experts. AWS disclosed on the occasion of the investment that the unit has completed AI projects for thousands of customers since launching.
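To make the S3 Vectors data model described above concrete: a vector index is, conceptually, a collection of (key, vector, metadata) records queried by similarity, with optional pre-filtering on metadata. The pure-Python sketch below illustrates that query model only; the record layout and field names are simplifications for illustration, not the S3 Vectors API.

```python
# Toy nearest-neighbor query over a "vector index" with metadata filtering.
# Illustrates the concept only; a real store uses approximate-NN structures.

import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Records as a vector index might hold them: key, embedding, metadata.
index = [
    {"key": "doc-1", "vector": [0.9, 0.1, 0.0], "metadata": {"year": 2024}},
    {"key": "doc-2", "vector": [0.1, 0.9, 0.0], "metadata": {"year": 2025}},
    {"key": "doc-3", "vector": [0.8, 0.2, 0.1], "metadata": {"year": 2025}},
]

def query(vector, top_k=2, metadata_filter=None):
    """Return the keys of the top_k most similar records, optionally
    restricted to records whose metadata matches the filter."""
    candidates = [
        r for r in index
        if metadata_filter is None
        or all(r["metadata"].get(k) == v for k, v in metadata_filter.items())
    ]
    ranked = sorted(candidates,
                    key=lambda r: cosine_similarity(vector, r["vector"]),
                    reverse=True)
    return [r["key"] for r in ranked[:top_k]]

# Filtering first, then ranking: only 2025 documents are considered.
print(query([1.0, 0.0, 0.0], top_k=2, metadata_filter={"year": 2025}))
# -> ['doc-3', 'doc-2']
```

The metadata enrichment AWS describes serves exactly the filtering step here: narrowing the candidate set before similarity ranking is what keeps queries fast and relevant as an index grows to millions of vectors.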
[5]
AgentCore powers scalable AI agent deployment - SiliconANGLE
AWS debuts Bedrock AgentCore to move AI agents from pilot to production

Amazon Web Services Inc. is accelerating the shift from artificial intelligence pilots to production with the launch of AWS Bedrock AgentCore, a new runtime built to deploy AI agents at scale with memory, orchestration and enterprise-grade security.

As enterprises move past proof-of-concept phases, the need for stronger agent infrastructure is becoming urgent. AgentCore is designed to meet that demand, enabling developers to move faster while maintaining control over data, access and trust boundaries. By embedding memory and tool-use capabilities into a managed runtime, AgentCore gives teams the framework they need to operationalize AI agents across complex workflows -- without sacrificing governance or performance, according to Ben Schreiner (pictured), head of AI and modern data strategy at AWS.

"Big news with AgentCore coming out and really demonstrating our innovation ahead of the problems you could foresee that agents could create if they weren't governed, if they weren't observed, if they didn't have a secure runtime," Schreiner said. "You can really see the innovation coming out of our engineering teams to help enterprises not only develop these agents, but also make sure that they're deploying them in a secure, reliable way."

Schreiner spoke with theCUBE's John Furrier at AWS Summit NYC, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed how AWS Bedrock AgentCore is enabling enterprises to move AI agents from pilot projects to scalable, secure production environments, while also reshaping development practices and data strategies to support real-world deployment. AgentCore arrives as companies reimagine how AI agents fit into their business logic and workflows.
Rather than being a one-size-fits-all solution, AWS emphasizes flexibility and alignment with actual customer needs, driven by its "working backwards" development ethos, according to Schreiner. "That's at the root of how we approach all customers; they're all unique and there may be some themes and trends that transcend customer segments or industries," he said. "We want to take each customer's challenge and what they're trying to do and work backwards from that."

That mindset extends to the agent development lifecycle. Early enthusiasm for "vibe coding" -- where non-technical users sketch out intent -- is now meeting enterprise rigor. AWS' Kiro platform helps bridge this gap, enabling better collaboration between business users and technical teams to get agents from prototype to production, Schreiner explained. "Kiro brings those two things together for the first time, where you're seeing the requirements and the documentation and all the things that need to go into creating a production-ready solution," he said. "So many customers got stuck in POC land and it's unfortunate, but if you don't get into production, then you didn't really solve the problem that you had identified ... production and scale is the goal, and we drive toward that with our customers."

Another critical factor in deploying agents at scale is data readiness. Organizations are rediscovering the importance of strong, well-governed datasets as they realize that AI effectiveness is tightly coupled to data quality. That is prompting many to rethink legacy architectures in favor of more modern, flexible designs.

"We need executives to understand that the machines and the agents that you create are only as good as the data they have access to," Schreiner added. "If you want good answers from your agentic workflows or anything you do with AI, then you've got to make sure the data it has access to is strong.
That gets to governance, it gets to security, it gets to all the things that we've been professing for decades now."
Amazon Web Services launches Bedrock AgentCore, a comprehensive platform designed to streamline the development, deployment, and management of AI agents at enterprise scale, addressing key challenges in moving from prototype to production.
Amazon Web Services (AWS) has unveiled Amazon Bedrock AgentCore, a new enterprise-grade platform designed to revolutionize the development, deployment, and operation of AI agents at scale. Announced at the AWS Summit in New York, this comprehensive suite of services aims to address the growing demand for infrastructure that supports production-ready AI agents capable of reasoning, planning, acting, and learning with limited human oversight [1][2].
AgentCore offers a modular stack of services that provide developers with the core infrastructure needed to move AI agents from prototype to production. The platform includes several key components:

* AgentCore Runtime: A serverless, low-latency execution environment supporting multimodal workloads and long-running sessions with isolation [1].
* AgentCore Memory: Long- and short-term memory services enabling agents to learn from past interactions and persist contextual knowledge [1][2].
* AgentCore Identity: OAuth-based identity and access management for secure agent interactions across various systems [1].
* AgentCore Observability: Built-in dashboards, debugging, and telemetry tools with support for OpenTelemetry, LangSmith, and Datadog [1].
* AgentCore Gateway: Converts internal APIs, Lambda functions, and third-party services into agent-compatible tools using the Model Context Protocol (MCP) [1][2].
* AgentCore Browser: Provides headless browser access for agents to autonomously interact with websites [1][4].
* AgentCore Code Interpreter: A secure environment for executing code generated by agents for analysis and visualization [1][2].

AgentCore is designed with interoperability in mind, supporting emerging industry standards like MCP and Google's Agent2Agent (A2A) protocol. It is compatible with open-source toolkits such as CrewAI, LangChain, LlamaIndex, LangGraph, and AWS's own Strands Agents SDK [1][3].
Several companies granted early access to AgentCore -- including Box, Itaú Unibanco, Innovaccer, and Epsilon -- are already building production-grade agentic applications across various industries [1].
AgentCore integrates with the AWS Marketplace, enabling teams to discover and deploy pre-built agents and tools. The platform is now available in preview in select AWS regions, including US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Frankfurt). Pricing is consumption-based, with no upfront commitments or minimum fees, and each module is billed independently [1][4].
The launch of AgentCore positions AWS at the center of the rapidly growing enterprise AI agent development ecosystem. This move puts AWS in direct competition with other major players in the field, such as OpenAI's Agents SDK and Google's Gemini-based Agents SDK, as well as startups like Cognition (maker of Devin) [1][5].
As Swami Sivasubramanian, AWS Vice President of Agentic AI, stated, "Agents are the most impactful change we've seen in ages. With agents comes a shift to service as a software. This is a tectonic change in how software is built, deployed and operated" [1].
With AgentCore, AWS aims to provide a foundational layer for developers to move beyond experimentation and build AI agents that operate reliably at scale, potentially reshaping the landscape of enterprise AI development and deployment.