The primary use case for Amazon SageMaker is leveraging its compute power, particularly for running LLM notebooks on hosted notebook instances. Its GPU capabilities are also valuable for executing large language models. Users can create endpoints and access them from anywhere as needed.
We've had experience with unique ML projects using SageMaker. For example, we're developing a platform similar to ChatGPT that requires several models. We utilize Amazon SageMaker to create endpoints for these models, which makes accessing them convenient whenever needed.
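To illustrate that workflow, here is a minimal sketch of calling a deployed SageMaker endpoint from client code. The endpoint name `my-llm-endpoint` and the JSON request shape are assumptions for the example, not details of our actual setup.

```python
import json


def build_invoke_args(endpoint_name: str, prompt: str) -> dict:
    """Assemble the keyword arguments for sagemaker-runtime's invoke_endpoint."""
    return {
        "EndpointName": endpoint_name,
        "ContentType": "application/json",
        "Body": json.dumps({"inputs": prompt}),
    }


def ask_model(prompt: str) -> str:
    """Send a prompt to a hypothetical SageMaker endpoint and return the raw reply."""
    import boto3  # requires AWS credentials; preinstalled in SageMaker environments

    client = boto3.client("sagemaker-runtime")
    response = client.invoke_endpoint(**build_invoke_args("my-llm-endpoint", prompt))
    return response["Body"].read().decode("utf-8")
```

The request-building step is kept separate from the network call so the payload can be inspected or logged before anything is sent to the endpoint.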
The main function I prefer in Amazon SageMaker is the ability to create endpoints for large models. I haven't explored every feature in depth yet, but I've found the tutorials very helpful. The platform is user-friendly, with documentation attached to everything, making it easy to navigate and learn. Overall, I especially like the Studio Lab feature.
In Studio Lab, the tutorials provide ready-made snippets for tasks like connecting to S3 from Amazon SageMaker. These standard snippets make implementation straightforward and simplify the development process for me.
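For context, those S3 snippets boil down to a few lines of boto3. The bucket and key names below are placeholders for illustration, not our real ones.

```python
def s3_uri(bucket: str, key: str) -> str:
    """Build the s3:// URI form that many SageMaker APIs accept as input."""
    return f"s3://{bucket}/{key}"


def fetch_from_s3(bucket: str, key: str, local_path: str) -> None:
    """Download one object from S3 (e.g. a training file) to local disk."""
    import boto3  # preinstalled in SageMaker environments

    s3 = boto3.client("s3")
    s3.download_file(bucket, key, local_path)


# Example usage (placeholder names):
# fetch_from_s3("my-data-bucket", "datasets/train.csv", "/tmp/train.csv")
```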
In my opinion, one improvement for Amazon SageMaker would be to offer serverless GPUs. Currently, we incur GPU costs on an hourly basis. It would be beneficial if the tool offered pay-as-you-go pricing for endpoints.
In the three months I've been using it, I've noticed that larger GPU instances can be quite costly. Serverless GPUs would help mitigate this cost impact.
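To make the cost argument concrete, here is a back-of-the-envelope comparison of always-on hourly billing versus a hypothetical pay-per-use model. The hourly rate and usage figures are made-up illustrative numbers, not actual AWS pricing.

```python
# Hypothetical numbers for illustration only -- not real AWS prices.
hourly_rate = 1.50       # assumed $/hour for a GPU-backed endpoint instance
hours_per_month = 24 * 30

# Always-on hourly billing: pay for every hour the endpoint exists,
# regardless of whether it is serving requests.
always_on_cost = hourly_rate * hours_per_month           # 1080.0 dollars/month

# Pay-as-you-go: pay only for the seconds actually spent on inference.
busy_seconds = 2 * 3600  # assume ~2 hours of real inference traffic per month
pay_per_use_cost = (hourly_rate / 3600) * busy_seconds   # ~3.0 dollars/month
```

Under these assumed numbers, a lightly used endpoint costs orders of magnitude less with per-use billing, which is why serverless GPUs would matter for workloads like ours.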
I have been working with the product for three months.
I rate the solution's stability a nine out of ten.
I rate the tool's scalability an eight out of ten. There are no issues with scalability as long as we ensure the necessary quotas are in place before implementing a scalable process. I needed to request quota increases for certain services beforehand, and once those were granted, I could adjust the min and max node counts based on our planned requirements. My company has 25 users.
We can schedule a direct call with the support team.
Amazon SageMaker's Studio Lab feature differentiates it from products like Azure ML Studio. With Studio Lab, I can directly interact with the environment, making navigating and accessing documentation easier. In contrast, finding documentation and navigating Azure ML Studio was challenging.
However, we also use Azure for the Azure OpenAI service, which operates on a pay-per-token basis. This payment model is not available in Amazon SageMaker.
The initial setup and deployment process for Amazon SageMaker is straightforward. The only complexity I encountered was gaining access to the needed resources, which relied on coordination with the DevOps team. Once access was sorted out, implementing my ideas for large language models and other models was easy.
The tool's pricing is reasonable.
I rate the overall solution an eight out of ten.