I have been working on AI, mainly on the LLMs hosted on Hugging Face. Hugging Face is one of the premier sites, essentially a repository for AI models. In our AI lab, I'm currently using the NVIDIA RTX 5090, a GPU with 32 GB of VRAM. If you search for the RTX 5090, you will find it; these are the latest Blackwell GPUs from NVIDIA. I have used their previous cards, the RTX 3090 and the 4090, and now it is the 5090. They are pretty good.

Our current work involves data crunching more than visual workloads. We are not doing much on the visual side right now, but once we get into radiology and other areas that use image processing, ray tracing will be very useful. Right now, we do far more inferencing than ray tracing. Comparing the 3090 and the 5090, the 5090 with DLSS delivers much higher frame rates and is much faster, but it also consumes a lot more power. In the winter, I don't need a room heater; the heat generated by the system is enough to keep the room warm.

For local AI, once unified memory systems come into play, they will consume less power. My 5090 system requires at least a 1000-watt power supply and draws 650 to 800 watts, which is quite high for one card. These new unified memory systems will run at a lower wattage and also deliver the required output faster.

I don't run any formal metrics, because we don't do that kind of evaluation. It comes down to the model: it should be compatible, it should run well, and it should respond quickly, meaning fast inferencing. We work with models that are under 20 GB, or between 20 and 30 GB, and the RTX 5090 has 32 GB of VRAM, so they fit. Because we deliver solutions at the lowest possible cost, we get a good return on investment. There are no worries on that front.
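As a rough illustration of the fit check described above, the reviewer's rule of thumb (models of 20 to 30 GB on a 32 GB card) can be sketched as simple sizing arithmetic. This is a minimal sketch, not the reviewer's actual method; the bytes-per-parameter values and the 20% overhead factor for KV cache and activations are my assumptions:

```python
# Estimate whether an LLM fits in GPU VRAM for local inference.
# Assumed (not from the review): bytes per parameter by quantization
# level, and a 1.2x overhead factor for KV cache and activations.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def fits_in_vram(params_billions: float, quant: str,
                 vram_gb: float = 32.0, overhead: float = 1.2) -> bool:
    """Return True if the weights, plus overhead, fit in VRAM.

    1 billion parameters occupy roughly 1 GB per byte-per-parameter.
    """
    weights_gb = params_billions * BYTES_PER_PARAM[quant]
    return weights_gb * overhead <= vram_gb

# A hypothetical 30B model: ~60 GB of weights in fp16 (does not fit
# on a 32 GB card), but only ~15 GB in int4 (fits with room to spare).
print(fits_in_vram(30, "fp16"))  # False
print(fits_in_vram(30, "int4"))  # True
```

This mirrors the review's point that the sweet spot for a 32 GB card is models in the 20 to 30 GB range once quantization is taken into account.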
I communicate with the local vendor, not directly with NVIDIA, and they are very helpful and professional. On the driver side, when there was an issue where the driver was failing and the system was going down, I asked which driver works best with which version of Linux, and they gave me a proper answer. It wasn't for any repair. Since I'm dealing with a local vendor who is a supplier, rather than NVIDIA directly, that is good enough. My overall rating for this product is eight out of ten.