2 Sources
[1]
AWS Launches Strands Agents Open Source SDK for AI Agent Development | AIM
AWS has released Strands Agents, an open source SDK for building and deploying AI agents with minimal code. The framework adopts a model-driven approach, allowing developers to use prompts and tools directly, without complex orchestration logic. "Strands scales from simple to complex agent use cases, and from local development to deployment in production," the company said in the press release. Teams across AWS, including Amazon Q Developer, AWS Glue, and VPC Reachability Analyzer, are already using Strands in production.

Strands is positioned as a lightweight alternative to existing frameworks that require elaborate workflow definitions. "Compared with frameworks that require developers to define complex workflows for their agents, Strands simplifies agent development by embracing the capabilities of state-of-the-art models to plan, chain thoughts, call tools, and reflect," the announcement noted.

Strands Agents allows developers to define three components in code: the model, tools, and a prompt. It supports a wide range of models, including those from Amazon Bedrock, Anthropic, Meta (via Llama API), Ollama, and others through LiteLLM. Tools can be custom Python functions or pre-built utilities that interact with files, APIs, and AWS services. The agent interacts with the model and tools in a loop until it completes the assigned task. "The Strands agentic loop takes full advantage of how powerful LLMs have become and how well they can natively reason, plan, and select tools," the announcement said.

Strands also includes advanced tools to handle complex use cases. These include a retrieve tool for semantic search, a thinking tool to simulate deep analysis, and multi-agent tools for workflows and collaboration. "By modelling sub-agents and multi-agent collaboration as tools, the model-driven approach enables the model to reason about if and when a task requires a defined workflow, graph, or swarm of sub-agents," the company said.

Strands Agents is available on GitHub. Companies like Accenture, PwC, Meta, and Anthropic are already contributing. "Anthropic has already contributed support in Strands for using models through the Anthropic API, and Meta contributed support for Llama models through Llama API."

The initiative stems from the Amazon Q Developer team's own challenges with early agent frameworks. "Even though LLMs were getting dramatically better, those improvements didn't mean we could build and iterate on agents any faster," Clare Liguori, senior principal software engineer for AWS Agentic AI, said, adding that what once took months now takes "days and weeks" with Strands.
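The agentic loop described in the announcement can be pictured with a short conceptual sketch. The Python below is not the Strands implementation; it is a minimal illustration of how a model-driven agent alternates between model calls and tool execution until the model returns a final answer. The call_model callable, the tools dictionary layout, and the response fields are hypothetical placeholders standing in for a real LLM client.

```python
# Conceptual sketch of a model-driven agentic loop (not the Strands source).
# call_model() and the response fields are hypothetical placeholders.

def run_agent(prompt, tools, call_model, max_turns=10):
    """Alternate between model calls and tool execution until the model answers."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_turns):
        response = call_model(messages, tool_specs=[t["spec"] for t in tools.values()])
        if response["type"] == "tool_call":
            # The model requested a tool: run it and feed the result back.
            tool = tools[response["name"]]
            result = tool["fn"](**response["arguments"])
            messages.append({"role": "tool", "name": response["name"],
                             "content": str(result)})
        else:
            # The model produced a final natural-language answer.
            return response["text"]
    raise RuntimeError("Agent did not finish within max_turns")
```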
[2]
Introducing Strands Agents, an Open Source AI Agents SDK
Today I am happy to announce we are releasing Strands Agents. Strands Agents is an open source SDK that takes a model-driven approach to building and running AI agents in just a few lines of code. Strands scales from simple to complex agent use cases, and from local development to deployment in production. Multiple teams at AWS already use Strands for their AI agents in production, including Amazon Q Developer, AWS Glue, and VPC Reachability Analyzer. Now, I'm thrilled to share Strands with you for building your own AI agents.

Compared with frameworks that require developers to define complex workflows for their agents, Strands simplifies agent development by embracing the capabilities of state-of-the-art models to plan, chain thoughts, call tools, and reflect. With Strands, developers can simply define a prompt and a list of tools in code to build an agent, then test it locally and deploy it to the cloud. Like the two strands of DNA, Strands connects two core pieces of the agent together: the model and the tools. Strands plans the agent's next steps and executes tools using the advanced reasoning capabilities of models.

For more complex agent use cases, developers can customize their agent's behavior in Strands. For example, you can specify how tools are selected, customize how context is managed, choose where session state and memory are stored, and build multi-agent applications. Strands can run anywhere and can support any model with reasoning and tool use capabilities, including models in Amazon Bedrock, Anthropic, Ollama, Meta, and other providers through LiteLLM.

Strands Agents is an open community, and we're excited that several companies are joining us with support and contributions including Accenture, Anthropic, Langfuse, mem0.ai, Meta, PwC, Ragas.io, and Tavily. For instance, Anthropic has already contributed support in Strands for using models through the Anthropic API, and Meta contributed support for Llama models through Llama API. Join us on GitHub to get started with Strands Agents!

Our journey building agents

I primarily work on Amazon Q Developer, a generative AI-powered assistant for software development. My team and I started building AI agents in early 2023, around when the original ReAct (Reasoning and Acting) scientific paper was published. This paper showed that large language models could reason, plan, and take actions in their environment. For example, LLMs could reason that they needed to make an API call to complete a task and then generate the inputs needed for that API call. We then realized that large language models could be used as agents to complete many types of tasks, including complex software development and operational troubleshooting.

At that time, LLMs weren't typically trained to act like agents. They were often trained primarily for natural language conversation. Successfully using an LLM to reason and act required complex prompt instructions on how to use tools, parsers for the model's responses, and orchestration logic. Simply getting LLMs to reliably produce syntactically correct JSON was a challenge at the time! To prototype and deploy agents, my team and I relied on a variety of complex agent framework libraries that handled the scaffolding and orchestration needed for the agents to reliably succeed at their tasks with these earlier models. Even with these frameworks, it would take us months of tuning and tweaking to get an agent ready for production.
Since then, we've seen a dramatic improvement in large language models' abilities to reason and use tools to complete tasks. We realized that we no longer needed such complex orchestration to build agents, because models now have native tool-use and reasoning capabilities. In fact, some of the agent framework libraries we had been using to build our agents started to get in our way of fully leveraging the capabilities of newer LLMs. Even though LLMs were getting dramatically better, those improvements didn't mean we could build and iterate on agents any faster with the frameworks we were using. It still took us months to make an agent production-ready.

We started building Strands Agents to remove this complexity for our teams in Q Developer. We found that relying on the latest models' capabilities to drive agents significantly reduced our time to market and improved the end user experience, compared to building agents with complex orchestration logic. Where it used to take months for Q Developer teams to go from prototype to production with a new agent, we're now able to ship new agents in days and weeks with Strands.

Core concepts of Strands Agents

The simplest definition of an agent is a combination of three things: 1) a model, 2) tools, and 3) a prompt. The agent uses these three components to complete a task, often autonomously. The agent's task could be to answer a question, generate code, plan a vacation, or optimize your financial portfolio. In a model-driven approach, the agent uses the model to dynamically direct its own steps and to use tools in order to accomplish the specified task. To define an agent with the Strands Agents SDK, you define these three components in code:

Model: Strands offers flexible model support. You can use any model in Amazon Bedrock that supports tool use and streaming, a model from Anthropic's Claude model family through the Anthropic API, a model from the Llama model family via Llama API, Ollama for local development, and many other model providers such as OpenAI through LiteLLM. You can additionally define your own custom model provider with Strands.

Tools: You can choose from thousands of published Model Context Protocol (MCP) servers to use as tools for your agent. Strands also provides 20+ pre-built example tools, including tools for manipulating files, making API requests, and interacting with AWS APIs. You can easily use any Python function as a tool, by simply using the Strands @tool decorator.

Prompt: You provide a natural language prompt that defines the task for your agent, such as answering a question from an end user. You can also provide a system prompt that provides general instructions and desired behavior for the agent.

An agent interacts with its model and tools in a loop until it completes the task provided by the prompt. This agentic loop is at the core of Strands' capabilities. The Strands agentic loop takes full advantage of how powerful LLMs have become and how well they can natively reason, plan, and select tools. In each loop, Strands invokes the LLM with the prompt and agent context, along with a description of your agent's tools. The LLM can choose to respond in natural language for the agent's end user, plan out a series of steps, reflect on the agent's previous steps, and/or select one or more tools to use. When the LLM selects a tool, Strands takes care of executing the tool and providing the result back to the LLM. When the LLM completes its task, Strands returns the agent's final result.
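A minimal sketch of these three components in code, following the API as described in this announcement (an Agent object, the @tool decorator, and a prompt); exact import paths, constructor arguments, and defaults should be verified against the Strands documentation.

```python
# Minimal sketch of an agent built from the three components described above:
# a model (the SDK default is assumed here), tools, and a prompt.
# Names follow the announcement's description; verify against the Strands docs.
from strands import Agent, tool

@tool
def word_count(text: str) -> int:
    """Count the number of words in a piece of text."""
    return len(text.split())

agent = Agent(
    tools=[word_count],                                    # a plain Python function as a tool
    system_prompt="You are a concise writing assistant.",  # general instructions and behavior
)

# The prompt: the agentic loop runs until the model returns a final answer.
agent("How many words are in the sentence 'Strands connects models and tools'?")
```

Per the model support described above, pointing the agent at a different provider (Amazon Bedrock, the Anthropic API, Llama API, Ollama for local development, or others through LiteLLM) would amount to passing the corresponding model object to the Agent constructor; the exact provider class names are worth checking in the documentation.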
In Strands' model-driven approach, tools are key to how you customize the behavior of your agents. For example, tools can retrieve relevant documents from a knowledge base, call APIs, run Python logic, or simply return a static string that contains additional model instructions. Tools also help you achieve complex use cases in a model-driven approach, such as with these Strands Agents example pre-built tools:

Retrieve tool: This tool implements semantic search using Amazon Bedrock Knowledge Bases. Beyond retrieving documents, the retrieve tool can also help the model plan and reason by retrieving other tools using semantic search. For example, one internal agent at AWS has over 6,000 tools to select from! Models today aren't capable of accurately selecting from quite that many tools. Instead of describing all 6,000 tools to the model, the agent uses semantic search to find the most relevant tools for the current task and describes only those tools to the model. You can implement this pattern by storing many tool descriptions in a knowledge base and letting the model use the retrieve tool to retrieve a subset of relevant tools for the current task.

Thinking tool: This tool prompts the model to do deep analytical thinking through multiple cycles, enabling sophisticated thought processing and self-reflection as part of the agent. In the model-driven approach, modeling thinking as a tool enables the model to reason about if and when a task needs deep analysis.

Multi-agent tools (workflow, graph, and swarm): For complex tasks, Strands can orchestrate across multiple agents in a variety of multi-agent collaboration patterns. By modeling sub-agents and multi-agent collaboration as tools, the model-driven approach enables the model to reason about if and when a task requires a defined workflow, graph, or swarm of sub-agents. Strands support for the Agent2Agent (A2A) protocol for multi-agent applications is coming soon.
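The tool-retrieval pattern behind the retrieve tool can be illustrated without any particular SDK. The sketch below ranks a catalog of tool descriptions against the current task and keeps only the best matches; naive word overlap stands in for the semantic search that Amazon Bedrock Knowledge Bases would provide, and the catalog entries are invented for illustration.

```python
# Illustration of the "retrieve a subset of tools" pattern described above.
# Word overlap stands in for real semantic (embedding) search; in the pattern
# described here, tool descriptions would live in a knowledge base instead.

def select_tools(task: str, tool_catalog: dict[str, str], top_k: int = 5) -> list[str]:
    """Return the names of the top_k tools whose descriptions best match the task."""
    task_words = set(task.lower().split())
    ranked = sorted(
        tool_catalog.items(),
        key=lambda item: len(task_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in ranked[:top_k]]

# Hypothetical catalog; a production agent might have thousands of entries.
catalog = {
    "create_s3_bucket": "Create an Amazon S3 bucket in a given region",
    "search_logs": "Search application logs for an error pattern",
    "send_email": "Send an email notification to a recipient",
}
print(select_tools("find the error pattern in the application logs", catalog, top_k=1))
# Only the selected tools are then described to the model for this task.
```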
AWS has released Strands Agents, an open-source SDK that streamlines AI agent development using a model-driven approach. This framework allows developers to build and deploy AI agents with minimal code, supporting various models and tools.
Amazon Web Services (AWS) has unveiled Strands Agents, an open-source software development kit (SDK) designed to simplify the creation and deployment of AI agents. The new framework adopts a model-driven approach, enabling developers to build sophisticated AI agents with minimal code [1].
Strands Agents simplifies the development process by leveraging the advanced capabilities of state-of-the-art language models. The framework allows developers to define three core components in code: the model, tools, and a prompt [2]. This streamlined approach eliminates the need for complex orchestration logic, making it a lightweight alternative to existing frameworks that require elaborate workflow definitions.

The SDK supports a wide range of models, including those from Amazon Bedrock, Anthropic, Meta (via Llama API), Ollama, and others through LiteLLM. It also provides flexibility in tool selection, allowing developers to use custom Python functions or pre-built utilities that interact with files, APIs, and AWS services [1].
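Pre-built utilities can be handed to an agent in the same way as custom functions. The sketch below is hedged: the companion tools package and the tool names used here (strands_tools, file_read, http_request) are assumptions based on the 20+ pre-built tools the announcement describes, and should be checked against the project's documentation.

```python
# Hedged sketch: pre-built utility tools passed alongside a prompt.
# The strands_tools package and tool names below are assumptions; consult the
# Strands Agents documentation for the actual pre-built tool catalog.
from strands import Agent
from strands_tools import file_read, http_request  # assumed names

agent = Agent(
    tools=[file_read, http_request],
    system_prompt="Help the user inspect local files and public web APIs.",
)

agent("Fetch https://example.com and summarize the page title.")
```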
Strands Agents incorporates advanced tools to handle complex scenarios, such as a retrieve tool for semantic search, a thinking tool for deep analysis, and multi-agent tools for workflows and collaboration. The framework's model-driven approach enables the model to reason about when a task requires a defined workflow, graph, or swarm of sub-agents [1].
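The multi-agent idea can be sketched in the same style: a specialist agent is wrapped as a tool so that an orchestrating agent can decide when to delegate to it. This is an illustrative sketch reusing the Agent/@tool API assumed earlier, not a prescribed Strands pattern; the agent names and prompts are invented.

```python
# Illustrative sketch of "sub-agent as a tool": an orchestrator delegates to a
# specialist agent exposed as an ordinary tool. Names are invented; the
# Agent/@tool API follows the sketch shown earlier.
from strands import Agent, tool

researcher = Agent(system_prompt="You research topics and return short, sourced summaries.")

@tool
def research(topic: str) -> str:
    """Delegate a research question to the specialist researcher agent."""
    return str(researcher(topic))

orchestrator = Agent(
    tools=[research],
    system_prompt="Answer the user's question, delegating research to your tools when useful.",
)

orchestrator("What problem does the Strands Agents SDK aim to solve?")
```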
This versatility allows Strands to scale from simple to complex agent use cases, and from local development to production deployment. Several AWS teams, including Amazon Q Developer, AWS Glue, and VPC Reachability Analyzer, are already using Strands in production. The open-source nature of the project has attracted contributions from major technology companies, including Accenture, Anthropic, Langfuse, mem0.ai, Meta, PwC, Ragas.io, and Tavily [1][2].
The initiative behind Strands Agents stemmed from the Amazon Q Developer team's challenges with early agent frameworks. Clare Liguori, senior principal software engineer for AWS Agentic AI, noted that despite significant improvements in LLMs, building and iterating on agents remained time-consuming [1].

With Strands, the development process has been dramatically accelerated. What once took months now takes "days and weeks," significantly reducing time to market and improving end-user experience [2].
Strands Agents is now available on GitHub, opening up new possibilities for AI agent development across various industries. The SDK's flexibility and efficiency position it as a potentially game-changing tool in the rapidly evolving field of AI development [1][2].

As the AI landscape continues to advance, Strands Agents represents a significant step forward in making sophisticated AI agent development more accessible and efficient for developers worldwide.