
NVIDIA RTX Series vs NVIDIA Tesla [EOL] comparison

Review summaries and opinions

We asked business professionals to review the solutions they use. Here are some excerpts of what they said:
 

Categories and Ranking

NVIDIA RTX Series
Average Rating: 8.0
Number of Reviews: 1
Ranking in other categories: Enterprise GPU (1st)

NVIDIA Tesla [EOL]
Average Rating: 8.6
Number of Reviews: 2
Ranking in other categories: No ranking in other categories
 

Featured Reviews

Khasim Mirza
Independent IT Security Consultant at Kinetic IT
Local AI has protected sensitive data and enabled private RAG workflows for small clients
Where the NVIDIA RTX Series could provide more is VRAM: any AI LLM is dependent on VRAM, and right now the NVIDIA RTX Series 5090 comes with only 32 GB of VRAM. But luckily, Apple has come up with its Mac Studios, which have unified memory that we can use as VRAM. If unified-memory systems come into the picture, then NVIDIA might lose its value. Nowadays, people are buying Mac Minis, which have unified memory, to run local AI and local LLMs. They are still not as fast as NVIDIA, but there is a chance. It is a horse race; you don't know which horse will win next. It all depends on each piece of hardware's capability, how many Tensor Cores it has, and at what frequency it runs. So I'm not sure how to assess it; it all depends on the architecture and how fast your system is.
reviewer2309676
Team Lead, High-Performance Computing (HPC) at a manufacturing company with 1,001-5,000 employees
Simplifies our processes and helps us handle complex computations effectively
The initial setup process for Tesla was straightforward for me since I had prior experience working with the product. Setting it up requires two people. The deployment process involves a two-step approach: hardware deployment and software deployment. After that, we use Ansible for automatic software installation, which includes getting the operating system in place using Foreman and adding necessary components like the NVIDIA CUDA drivers. Deployment time varies with the number of servers, but for around ten servers it typically takes about two hours; we deploy them in parallel to streamline the process. Maintaining Tesla involves routine tasks like updating drivers and addressing security issues. We handle this by taking about 10% of our servers offline at a time, using a slow scheduler to ensure a controlled process.
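The rolling 10%-at-a-time maintenance described above could be sketched as a minimal Ansible playbook. Everything here is illustrative, not taken from the review: the `gpu_nodes` host group, the driver package names, and the use of a Slurm `scontrol` drain/resume step around the update are all assumptions about how such a setup might look.

```yaml
# Hypothetical rolling NVIDIA driver update across GPU nodes.
# Host group, package names, and the Slurm drain/resume steps are
# illustrative assumptions, not details from the review.
- hosts: gpu_nodes
  serial: "10%"        # take roughly 10% of the servers offline at a time
  become: true
  tasks:
    - name: Drain the node from the scheduler before maintenance
      command: >-
        scontrol update nodename={{ inventory_hostname }}
        state=drain reason=driver-update
      delegate_to: head_node        # hypothetical scheduler head node

    - name: Update NVIDIA driver packages
      ansible.builtin.package:
        name:
          - nvidia-driver           # hypothetical package names
          - cuda-drivers
        state: latest

    - name: Reboot so the new driver module is loaded
      ansible.builtin.reboot:

    - name: Return the node to service
      command: >-
        scontrol update nodename={{ inventory_hostname }} state=resume
      delegate_to: head_node
```

The `serial: "10%"` play keyword is what enforces the controlled rollout: Ansible completes the whole task list on one batch of hosts before touching the next, so most of the cluster stays in service throughout the update.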

Quotes from Members


Pros

"Because we deliver solutions at the lowest possible cost, we get a good return on investment."
"The ease of use is a significant advantage."
"The most valuable aspects of Tesla are its CUDA software framework, which boosts our computing capabilities, and NVIDIA's NGC cloud support."
 

Cons

"Initially, there were teething problems since the drivers had issues. It only runs on Linux platforms or their own platforms."
"I believe there should be an effort to lower costs, especially considering the higher price of the latest update."
"It would be beneficial to see broader application support and compatibility with different workloads."
 

Pricing and Cost Advice

NVIDIA RTX Series: Information not available
NVIDIA Tesla [EOL]: "Generally, the price is affordable, but the most recent update comes with a notable increase in cost."
 

Top Industries

By visitors reading reviews
NVIDIA RTX Series:
Comms Service Provider: 14%
Computer Software Company: 10%
Manufacturing Company: 10%
University: 10%

NVIDIA Tesla [EOL]:
Comms Service Provider: 18%
Computer Software Company: 12%
Financial Services Firm: 8%
Government: 8%
 

Company Size

By reviewers
Large Enterprise / Midsize Enterprise / Small Business
NVIDIA RTX Series: No data available
NVIDIA Tesla [EOL]: No data available
 

Questions from the Community

What is your experience regarding pricing and costs for NVIDIA RTX Series?
I am in Australia, and the NVIDIA RTX Series 5090 will cost you around $4,500 to $5,000 per piece from the dealer. If you're going for the Pro series, NVIDIA RTX Series Pro 6000, those ar...
What needs improvement with NVIDIA RTX Series?
Where the NVIDIA RTX Series could provide more is VRAM: any AI LLM is dependent on VRAM, and right now the NVIDIA RTX Series 5090 comes with only 32 GB of VRAM. But luckily, Apple has come up with its Mac Studios, whi...
What is your primary use case for NVIDIA RTX Series?
NVIDIA RTX Series cards are very useful for Edge AI, basically to run local AI and local LLMs. Instead of running LLMs on the cloud or using the general ChatGPT, you can run your own LLMs on-premis...
 

Also Known As

NVIDIA RTX Series: TITAN V
NVIDIA Tesla [EOL]: No data available
 

Overview