AWS makes guardrails a standalone API as it updates Bedrock

AWS released new capabilities for Amazon Bedrock and beyond, including the new Guardrails API and extended memory for AI agents.



Amazon Web Services (AWS) wants customers to apply its safety tooling to models even outside its own model library, as it spins out a guardrails feature previously available only within Amazon Bedrock. 

The feature will now be a standalone API called the Guardrails API, which AWS announced during its annual New York Summit. It lets users “turn the knobs” on how much content in common guardrail categories, such as hate speech or sexualized language, their AI applications will allow. AWS customers, even those who do not use Amazon Bedrock, can connect any AI model and application to the API and begin setting guardrails.
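For a sense of what wiring an application to the standalone API might look like, here is a minimal sketch in Python. It assumes the request shape of AWS’s Bedrock runtime `ApplyGuardrail` operation; the guardrail ID and version are placeholders, and the boto3 call itself is shown only in comments since it requires AWS credentials:

```python
# Sketch: screening application text through a standalone guardrail.
# Guardrail ID/version below are placeholders, not real identifiers.

def build_apply_guardrail_request(guardrail_id, guardrail_version,
                                  text, source="INPUT"):
    """Build a request payload for a standalone guardrail check.

    `source` is "INPUT" to screen user prompts or "OUTPUT" to screen
    model responses -- the guardrail can sit on either side of any
    model, Bedrock-hosted or not.
    """
    return {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": guardrail_version,
        "source": source,
        "content": [{"text": {"text": text}}],
    }

# With boto3 installed and AWS credentials configured, the call would be:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   result = client.apply_guardrail(**build_apply_guardrail_request(
#       "gr-example123", "1", "User prompt to screen"))
#   # result["action"] indicates whether the guardrail intervened.

request = build_apply_guardrail_request("gr-example123", "1", "hello")
print(request["source"])
```

Because the guardrail sits behind its own API rather than inside Bedrock’s inference path, the same payload can screen traffic to and from any model the application calls.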

Vasi Philomin, vice president of generative AI at AWS, told VentureBeat the goal of a standalone API for guardrails is to open up further model choice for customers, a core tenet of Amazon’s approach to generative AI. 

“Guardrails help you with safety, privacy and truthfulness. Now you can have all of those knobs in a single solution and apply these no matter the organization within the company and any sort of model; it could even be OpenAI’s models,” Philomin said. 




Bedrock, Amazon’s cloud service that houses several AI models, from Amazon’s own Titan family to Meta’s open-weight Llama 3 and Anthropic’s Claude 3 family, launched in April 2023. While it hosts many models, it does not currently support any OpenAI models.

AWS first released Guardrails in Bedrock in April. Most model gardens, like Microsoft’s Azure AI, offer similar features to enterprise customers, since some enterprises may want looser or stricter interpretations of certain terms. 

The Guardrails API is in preview.

Checking for hallucinations with Contextual Grounding

Customers will also be able to test whether models, AI agents or retrieval-augmented generation (RAG) systems are actually grounding their answers in company data rather than making things up. AWS added a new feature called Contextual Grounding to the Guardrails API. 

Contextual Grounding can check for hallucinations before responses reach the user. Philomin said hallucinations continue to be a big stumbling block for many companies deciding whether to tap large language models (LLMs). Developers can control how automatic these contextual grounding checks are. AWS said Contextual Grounding can detect and filter more than 75 percent of hallucinations. 

AWS Vice President for Technology Matt Wood said in an interview with VentureBeat that this allows their customers “to build much more sophisticated systems and start to apply generative AI in areas where hallucinations really have a big impact.”

“This is a check running in real-time, and it means that users can still be confident in responses because sometimes hallucinations happen, but building this sort of safeguard materially reduces the chance that it will end up in front of an end user,” Wood said. 
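AWS has not published the mechanics behind Contextual Grounding, but the core idea it describes — scoring whether a response is supported by the retrieved source text and blocking it below a threshold before it reaches the user — can be illustrated with a deliberately simple token-overlap check. This toy sketch is illustrative only, not AWS’s actual method:

```python
# Toy illustration of a grounding check: score how much of a model's
# response is supported by the source passage, and block responses
# that fall below a threshold. NOT AWS's actual algorithm.

def grounding_score(source: str, response: str) -> float:
    """Fraction of response words that also appear in the source text."""
    source_words = set(source.lower().split())
    response_words = response.lower().split()
    if not response_words:
        return 0.0
    supported = sum(1 for w in response_words if w in source_words)
    return supported / len(response_words)

def check_response(source: str, response: str, threshold: float = 0.7):
    """Gate the response before it reaches the end user."""
    score = grounding_score(source, response)
    return ("PASS", score) if score >= threshold else ("BLOCKED", score)

source = "the refund policy allows returns within 30 days of purchase"
grounded = "returns allowed within 30 days of purchase"
hallucinated = "refunds are issued within 90 days including shipping fees"

print(check_response(source, grounded)[0])      # well-supported answer
print(check_response(source, hallucinated)[0])  # unsupported answer
```

A production system would use far more robust scoring than word overlap, but the control surface is the same: developers tune the threshold to decide how aggressively unsupported responses get filtered.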

AWS also announced new capabilities for building AI agents on Bedrock.

A longer memory

Agents built on Bedrock will now remember more information across interactions over time. Often, the memory of AI agents and chatbots is constrained to a single interaction or conversation; this new capability draws on historical conversations. Philomin used the example of a flight-booking agent: with extended memory retention, it remembers whether the end user prefers an aisle or window seat and which airlines they usually like. 

AWS also added a new code-interpretation capability to Bedrock so that AI models can analyze complex data with code. Wood said one limitation of AI models is that they aren’t great at math, so AWS is offering a workaround: giving models on Bedrock the ability to look at numerical data, say a CSV file, and write code that will analyze the numbers and answer a query. A user could ask an AI agent to find the most profitable zip code for real estate; the agent can read a file containing those figures, write code to analyze them and return its results. 
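The kind of code such an agent might generate for the zip-code example can be sketched in a few lines of Python. The column names and figures below are invented for illustration:

```python
# Sketch of code an agent might write to answer "which zip code is most
# profitable?" from tabular data. Column names and values are invented.
import csv
import io

csv_data = """zip_code,profit
90210,125000
10001,98000
60601,143500
"""

rows = list(csv.DictReader(io.StringIO(csv_data)))
best = max(rows, key=lambda r: float(r["profit"]))
print(best["zip_code"])  # the zip code with the highest profit
```

The point of the feature is that the model delegates the arithmetic to deterministic code like this instead of trying to compute the answer in its own weights.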

Bedrock users can now also fine-tune Claude 3 Haiku, Anthropic’s smallest AI model.

Philomin said these updates aim to provide customers with more choices and customization when building with AI models, keeping with Amazon’s view that customers shouldn’t be locked into one model. 

“We truly believe that generative AI is going to transform pretty much every function in every business, in every industry, and we want to make sure that we give all the right tools and capabilities to customers so they can do it earlier,” he said. “So we’re going to provide capabilities that help them overcome the challenges or things they feel are important.” 
