NVIDIA RTX Series cards are very useful for Edge AI, basically for running local AI and local LLMs. Instead of running LLMs in the cloud or using a general service like ChatGPT, you can run your own LLMs on-premise, which helps a lot. These cards are not cheap, but NVIDIA's enterprise-level cards, like the H100 and the other Blackwell parts, are far more expensive. We don't use those; we use smaller models, under 32 GB, and they are pretty good. The main use case for the RTX Series here is RAG systems, Retrieval-Augmented Generation. With local RAG, we provide a solution for companies that want to run AI queries over their own information locally, so that it is never shared. If you go to ChatGPT to look something up, you are sending your information to an outside service, so there is a privacy issue. Our main customers are small hospitals and law firms. We expect more soon, especially radiologists and similar practices.
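The local RAG flow described above can be sketched in a few lines. This is a minimal illustration, not our actual pipeline: the sample records, the term-overlap scoring (a stand-in for a real embedding model), and the idea that the final prompt goes to an LLM on the RTX card are all assumptions for the sketch. The point is that retrieval and prompt construction happen entirely on the local machine.

```python
# Minimal local RAG sketch: retrieve the most relevant in-house document,
# then build a prompt for a local LLM. Nothing leaves the machine.

def tokenize(text: str) -> set[str]:
    # Crude normalization; a real system would use an embedding model.
    return {w.strip(".,?!:").lower() for w in text.split()}

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by term overlap with the query (stand-in for
    # vector similarity search) and keep the top k.
    q = tokenize(query)
    scored = sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the model's answer in the retrieved local context.
    context = "\n".join(docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical in-house records that must stay on-premise.
records = [
    "Patient intake policy: referrals require a signed consent form.",
    "Billing codes for radiology are reviewed quarterly.",
]

query = "What do referrals require?"
prompt = build_prompt(query, retrieve(query, records))
# The prompt string would then be sent to the LLM running on the RTX card.
```

In a production setup the overlap scoring would be replaced by an embedding model and a vector store, but the privacy property is the same: both the documents and the query stay on the local GPU box.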