Redis announced on Monday that it is expanding its integration with Amazon Bedrock to further enhance the performance and reliability of generative AI applications. The goal, the company said, is to reduce errors such as hallucinations and improve the overall effectiveness of AI-driven solutions across various industries.
“Our expanded partnership with Amazon Bedrock gives developers a powerful tool to create more accurate and trustworthy generative AI apps,” Manvinder Singh, VP of AI Product at Redis, said in a statement sent to reporters. “LLMs are still prone to hallucinations, and this integration will help tackle that challenge.”
LLMs are “large language models” – a type of artificial intelligence designed to understand, generate, and interact with human language. These models are trained on vast amounts of text data, allowing them to predict and generate text based on patterns they’ve learned. LLMs are used in applications like chatbots, virtual assistants, and content generation tools. They are generally capable of tasks like answering questions, translating languages, or even drafting original written content based on prompts. The downside to LLMs, however, is hallucinations.
In AI, a “hallucination” refers to a situation where the model presents information as true when it is actually factually incorrect, misleading, or entirely fabricated. This can happen because the AI generates responses based on patterns it has learned from vast amounts of data, but it doesn’t have a true understanding of the information.
One example of a particularly damaging hallucination occurred in early 2023, when Google’s AI, “Bard,” famously flubbed its explanation of the James Webb Space Telescope’s capabilities. The system erroneously said that the telescope captured the first images of a planet outside our solar system – a feat claimed by the European Southern Observatory’s Very Large Telescope in 2004. As a consequence, Alphabet, Google’s parent company, lost roughly $100 billion in market value in the days that followed.
A “data platform” like Redis functions as a fast and powerful system designed to store and organize large amounts of information, which can then be quickly accessed and used by software applications. The system works like a super-efficient filing cabinet where any file can be retrieved instantly, no matter how big or busy the office gets.
Within this system lies the “knowledge base” – a well-organized library of critical information that helps the AI make more accurate decisions. In the case of Redis Cloud, this knowledge base ensures that the AI can quickly access the right data, which is essential for generating correct responses.
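In code, that “filing cabinet” idea boils down to retrieval by key. The sketch below is a toy illustration using a plain Python dictionary – the keys, entries, and `lookup` helper are hypothetical, not Redis APIs – but real Redis commands like SET and GET offer the same fast lookup-by-key promise at scale:

```python
# Toy "knowledge base": facts stored under descriptive keys.
# (Keys and entries are hypothetical examples, not real Redis data.)
knowledge_base = {
    "jwst:first_images": "JWST released its first science images in July 2022.",
    "vlt:exoplanet": "ESO's Very Large Telescope imaged an exoplanet in 2004.",
}

def lookup(key):
    # Constant-time retrieval by key -- the core promise of a store like Redis.
    return knowledge_base.get(key)

print(lookup("vlt:exoplanet"))
```

An AI application can consult a store like this before answering, rather than relying only on what the model “remembers” from training.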
Amazon Bedrock is a service that helps developers access powerful AI tools, like advanced language models. It functions like a factory, producing the AI tools that developers need to power their applications.
When combined in a framework like Retrieval-Augmented Generation (RAG), Redis and Bedrock work together to give the AI both the tools and the accurate, up-to-date information it needs to perform its tasks reliably, the company said.
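The RAG pattern can be sketched in a few lines. This is a deliberately simplified, self-contained illustration: the word-overlap scoring and the `generate` stub are stand-ins for Redis’s vector search and Bedrock’s hosted models, not their actual APIs.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG):
# 1) retrieve relevant facts from a knowledge base (Redis's role),
# 2) pass them to the model along with the question (Bedrock's role).

DOCUMENTS = [
    "The Very Large Telescope captured the first image of an exoplanet in 2004.",
    "Redis is an in-memory data platform used for fast lookups and caching.",
    "Amazon Bedrock provides managed access to foundation models.",
]

def score(query, doc):
    # Crude relevance measure: count shared lowercase words.
    # A real system would compare vector embeddings instead.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, k=1):
    # Step 1: pull the most relevant fact(s) from the knowledge base.
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]

def generate(prompt):
    # Step 2: a real system would invoke an LLM here; this stub
    # just echoes the grounded prompt it was given.
    return "[model answer grounded in]\n" + prompt

question = "Which telescope took the first image of an exoplanet"
context = "\n".join(retrieve(question))
answer = generate("Context:\n" + context + "\n\nQuestion: " + question)
print(answer)
```

Because the model’s answer is grounded in retrieved facts rather than generated from memory alone, the approach reduces the odds of a hallucinated response.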
Historically, evaluating the performance of generative AI systems required costly and time-consuming human evaluations. However, with the new automated RAG evaluation service, Redis said developers can assess and optimize their systems more efficiently.
– – –
Christina Botteri is the Executive Editor and CTO at The Tennessee Star and The Star News Network.