All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- Support for using Amazon Bedrock Agents (#49).
- Support for new LLMs available through Amazon Bedrock.
- Support for Bedrock cross-region inference profiles.
- Support for using existing Amazon Cognito user pool configuration when deploying the application and its use cases (#129).
- Use LCEL to replace LangChain `Chains` in the solution's implementation
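The LCEL entry above refers to LangChain Expression Language, which composes pipeline steps with the `|` operator instead of legacy `Chain` classes. Below is a conceptual, dependency-free sketch of that composition pattern in plain Python; the `Runnable` class, the prompt text, and the fake model are illustrative stand-ins, not the solution's actual code.

```python
# Conceptual sketch of the pipe-composition pattern behind LCEL
# (LangChain Expression Language); plain Python, no langchain dependency.
# In real LCEL, `prompt | llm | parser` builds a Runnable pipeline.

class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Chaining: the output of this step feeds the next step.
        return Runnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Runnable(lambda q: f"Answer briefly: {q}")
fake_llm = Runnable(lambda p: p.upper())   # stands in for a model call
parser = Runnable(lambda s: s.strip())

chain = prompt | fake_llm | parser
print(chain.invoke("what is LCEL?"))  # ANSWER BRIEFLY: WHAT IS LCEL?
```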
- Fixed issue when removing a score threshold on an existing use case (#154).
- Updated library versions to address security vulnerabilities
- Updated node library versions to address security vulnerabilities
- Resolved an issue where use case deployments would fail when manually disabling anonymous metrics via the provided CloudFormation mapping
- Updated library versions to address security vulnerabilities
- Issue #135, added a new IAM permission for the cognito-idp:GetGroup action to the CloudFormation deployment role (used when deploying use cases). This was required due to a service change.
- With the release of AWS-Solutions-Constructs v2.65.0, the AWS ApiGateway websocket integration with Amazon SQS Queue is available in the library. Hence the implementation has been updated to use this construct.
- Issue #131, which caused incorrect rendering of non-S3 source URLs returned from the vector store in a RAG-based use case.
- Issue #132, where configured Guardrails for Amazon Bedrock had no effect on a use case's text input validations and output responses.
- Wizard validation failure for the SageMaker model selection type, which allowed the user to navigate ahead even when the page had failed validations.
- An AWS WAF rule that blocked larger payloads for HTTP POST requests to the `/deployments` endpoint. This restricted configuring large system prompts (over 8 KB) for use cases.
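For context, the kind of WAF rule described above is a size-constraint statement that blocks request bodies over a byte threshold. The fragment below is a hedged CloudFormation-style sketch of such a rule; the rule name, priority, and metric name are illustrative, not the solution's actual configuration.

```yaml
# Hypothetical AWS::WAFv2::WebACL rule of the kind described:
# blocks request bodies larger than 8 KB (8192 bytes).
- Name: BodySizeLimit            # illustrative name
  Priority: 1
  Action:
    Block: {}
  Statement:
    SizeConstraintStatement:
      FieldToMatch:
        Body: {}
      ComparisonOperator: GT
      Size: 8192
      TextTransformations:
        - Priority: 0
          Type: NONE
  VisibilityConfig:
    SampledRequestsEnabled: true
    CloudWatchMetricsEnabled: true
    MetricName: BodySizeLimit
```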
- Support for Knowledge Bases for Amazon Bedrock as an option for Retrieval Augmented Generation (RAG) based workflows.
- Support for Identity Federation (OpenID Connect or SAML 2.0) through Amazon Cognito.
- Ability to add role-based access control for Amazon Kendra for controlling access over documents that can be retrieved while using RAG based workflows.
- Provisioned Throughput support for Amazon Bedrock models, allowing custom and provisioned base models to be added as the backing LLMs for the text use case.
- Enhanced prompt interface, allowing fine-grained control over prompts (including disambiguation prompts for RAG), message history and input lengths.
- Streamlined upgrade scripts for upgrading from v1.4.x to v2.0.0. For detailed steps, refer to the following section
- Model support for Amazon Titan Text G1 - Premier
- Deprecated direct Anthropic and Hugging Face LLMs in favour of integrating them through Amazon Bedrock and Amazon SageMaker.
- Switch login screens from amplify-ui to Cognito Hosted UI to support Identity Federation.
- Switch from `webpack` to `vite` for building and packaging UI projects.
- Updates to Node and Python package versions.
- Updated library versions to address security vulnerabilities
- Updated library versions to address security vulnerabilities
- Updated package versions to resolve vulnerabilities
- Switched to using the `langchain-aws` library for Bedrock and SageMaker LangChain calls instead of `langchain-community`.
- Updated package versions to resolve vulnerabilities
- Support for newest Bedrock models: Anthropic Claude v3 and Mistral family of models (#79)
- Significantly increased default prompt and chat input character limits. Should now support ~50% of the model's input prompt limit
- UI input validation misaligned with backend limits (#80)
- Missing hyperlink to solution landing page in README (#65)
- Updated package versions to resolve vulnerabilities
- Bug with Bedrock Meta/Cohere deployments in RAG configurations (#83)
- Updated Node and Python packages to resolve vulnerabilities
- Updated langchain package versions to resolve a vulnerability
- Add missing IAM action required to provision users for use cases when deploying through deployment dashboard
- Support for SageMaker as an LLM provider through SageMaker inference endpoints.
- Ability to deploy both the deployment dashboard and use cases within a VPC, including bringing an existing VPC and allowing the solution to deploy one.
- Option to return and display the source documents that were referenced when generating a response in RAG use cases.
- New model-info API in the deployment dashboard stack which can retrieve available providers, models, and model info. Default parameters are now stored for each model and provider combination and are used to pre-populate values in the wizard.
- Refactoring of UI components in the deployment dashboard.
- Switch to poetry for Python package management, replacing requirements.txt files.
- Updates to Node and Python package versions.
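The switch to Poetry mentioned above replaces `requirements.txt` files with a `pyproject.toml` per package. A minimal, hypothetical sketch of such a file is shown below; the package name, versions, and dependencies are illustrative, not the solution's actual manifest.

```toml
# Hypothetical minimal pyproject.toml replacing a requirements.txt;
# names and versions are illustrative only.
[tool.poetry]
name = "example-lambda"
version = "0.1.0"
description = "Illustrative Poetry package layout"
authors = ["Example Author <dev@example.com>"]

[tool.poetry.dependencies]
python = "^3.11"
langchain-core = "^0.2"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```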
- Fix AWS IAM policy that causes use case deployments to fail when creating, updating or deleting from the deployment dashboard.
- Pinned `langchain-core` and `langchain-community` versions, fixing a test failure caused by unpinned versions in the `langchain` package's dependencies
- Removed a race condition causing intermittent failures to deploy the UI infrastructure
- Updated Node package versions to resolve security vulnerabilities
- Unit tests failure due to a change in the underlying anthropic library.
- Support for Amazon Titan Text Lite, Anthropic Claude v2.1, Cohere Command models, and Meta Llama 2 Chat models
- Increase the cap on the max number of docs retrieved in the Amazon Kendra retriever (for RAG use cases) from 5 to 100, to match the API limit
- Fix typo in UI deployment instructions (#26)
- Fix bug causing failures with dictionary type advanced model parameters
- Fixed bug causing erroneous error messages to appear to user in long running conversations
- Updated Python and Node package versions to resolve security vulnerabilities
- Removed Node.js 16 from the list of supported runtimes, as it was no longer supported
- Updated Python and Node package versions to resolve security vulnerabilities
- Markdown rendering in Chat UI LLM responses
- Increased prompt and chat input limits to 2000 and 2500 characters respectively
- Updated package versions to resolve security vulnerabilities
- Updated package versions to resolve security vulnerabilities.
- Initial Release