Technical Lead at a construction company with 1-10 employees
MSP
Top 5
Apr 7, 2026
While Google Cloud Run does a great job of reducing costs by scaling to zero when idle, that same behavior introduces cold starts. Users can set minimum instances to mitigate them, but then idle instance costs apply, giving up the cost-saving advantage. Whenever a heavy container cold-starts, it hurts latency-sensitive applications: scaling from idle to full capacity is slow, the deployed service becomes sluggish, and sometimes some APIs fail to respond at all. These issues are hard to debug because the cause is often the idleness of the application rather than the code, which can be quite frustrating. Additionally, even simple tasks become more complex for developers who just want to run a few lines of code. I would say optimizing concurrency can improve the experience; currently, managing multiple containers in a single Google Cloud Run service is quite difficult, especially for tasks such as logging and local proxy monitoring. Google provides fully orchestrated alternatives, but Cloud Run is better for scaling. Slowness in applications is a notable issue.
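As a minimal sketch of the trade-off described above, the `gcloud` CLI lets you keep a warm instance and tune per-instance concurrency (the service name `my-service` and region are placeholders; setting `--min-instances` above zero incurs idle charges):

```shell
# Keep one instance warm to avoid cold starts (billed while idle),
# raise per-instance concurrency so fewer instances are needed,
# and enable startup CPU boost to shorten cold-start time.
gcloud run services update my-service \
  --region=us-central1 \
  --min-instances=1 \
  --concurrency=80 \
  --cpu-boost
```

Reverting `--min-instances` to 0 restores scale-to-zero billing at the cost of cold starts on the first request after idle.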