Deploy confidential containers on OpenShift using hardware-backed Trusted Execution Environments (TEEs) on Azure and bare metal with Intel TDX, AMD SEV-SNP, and NVIDIA confidential GPU support.
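As a rough sketch of what a workload targeting such a TEE can look like, the manifest below requests a confidential runtime class. All names here are assumptions for illustration: the actual runtime class name (e.g. `kata-remote` vs. others), pod name, and image depend on the platform and the operator configuration in your cluster.

```yaml
# Sketch only: runtime class, pod name, and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: confidential-workload        # hypothetical name
spec:
  runtimeClassName: kata-remote      # assumed; use the class your operator installs
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
```

Scheduling the pod onto a confidential runtime class is what places it inside the hardware-backed TEE; the rest of the pod spec stays ordinary Kubernetes.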
The goal of this demo is to showcase a chatbot LLM application, running on Red Hat OpenShift, that is augmented with data from Red Hat product documentation. The application connects to multiple LLM providers such as OpenAI, Hugging Face, and NVIDIA NIM, and generates a project proposal for a Red Hat product.
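One common way an application connects to several LLM providers is to target their OpenAI-compatible chat-completion endpoints and swap only the base URL and model. The sketch below illustrates that idea; the provider endpoints and model IDs are assumptions for illustration, not taken from the pattern itself.

```python
# Illustrative sketch: endpoints and model IDs below are assumptions,
# not the pattern's actual configuration.
PROVIDERS = {
    "openai": {
        "base_url": "https://api.openai.com/v1",
        "model": "gpt-4o-mini",
    },
    "huggingface": {
        "base_url": "https://api-inference.huggingface.co/v1",
        "model": "mistralai/Mistral-7B-Instruct-v0.2",
    },
    "nvidia-nim": {
        "base_url": "http://nim.example.svc:8000/v1",  # hypothetical in-cluster NIM service
        "model": "meta/llama-3.1-8b-instruct",
    },
}

def build_chat_request(provider: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat-completion request for the chosen provider."""
    cfg = PROVIDERS[provider]
    return {
        "url": f"{cfg['base_url']}/chat/completions",
        "json": {
            "model": cfg["model"],
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

Because all three providers accept the same request shape, switching providers is a configuration change rather than a code change.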
This pattern helps you deploy a stack enabling the Intel Gaudi accelerator, and it also deploys a RAG application, Chat QnA.
This pattern is a starting point for using Red Hat OpenShift AI.
This is an extension of the Multicloud GitOps pattern with a Red Hat OpenShift AI component to show the value of using Intel AMX.
This is an extension of the Multicloud GitOps pattern with an additional application using Intel SGX.
