Moving infrequently accessed data to cheaper storage classes like Glacier reduces the cost of long-term storage.
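For illustration, here is a minimal boto3 sketch of the kind of lifecycle rule that performs such a transition; the bucket name and prefix are placeholders, not values from any reviewer's environment:

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rule: objects under the (hypothetical) "logs/" prefix move to
# Glacier after 90 days and to Glacier Deep Archive after 365 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-infrequent-data",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```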
I rate the technical support from Amazon for S3 a ten out of ten.
An engineer is assigned based on the severity of the issue.
If I want to change something on my resources or access them directly via the portal, there is no 100% service level agreement, and access is sometimes quite difficult.
When issues arise, it takes them at least three to four hours to respond.
I have a premium subscription for Oracle Cloud, which ensures a high level of support.
Data placed in an S3 bucket is replicated across availability zones in a region, ensuring durability and availability.
The level of scalability allows storage to automatically scale on demand, without the need for manual intervention.
Amazon S3's automatic scaling has benefited me, as I don't need to plan storage requirements.
The storage is very scalable; capacity can be expanded effortlessly.
Downloading large volumes of data, and scaling to handle them, can run into performance issues.
We can adjust the number of CPUs as per our need and reduce the load.
We experience virtually no latency or downtime.
Transitioning between S3 storage classes, like moving data from the standard class to Glacier or Glacier Deep Archive, has been challenging.
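Part of that friction is that data already in Glacier must be explicitly restored before it is readable again. A minimal boto3 sketch, with placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Objects in Glacier must be restored (made temporarily readable) before
# they can be copied or downloaded; this extra step is often what makes
# class transitions feel cumbersome.
s3.restore_object(
    Bucket="example-bucket",    # placeholder
    Key="logs/2023-01-01.gz",   # placeholder
    RestoreRequest={
        "Days": 7,  # keep the restored copy available for 7 days
        "GlacierJobParameters": {"Tier": "Standard"},  # or Expedited / Bulk
    },
)
```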
Amazon S3 is highly stable.
The system is stable, especially when set up in a single region.
An improvement could be scoping bucket names to individual accounts, allowing more familiar or desired names without conflicting with the global naming convention.
The practice of protecting data could be more streamlined or mandatory.
I would like to see an increase in the data upload limit, similar to DynamoDB, where there is no data limit.
When we use data lakes for analytics and frequently pull data from our source database into Microsoft Azure Blob Storage, we need faster methods to download large datasets from Blob Storage to different locations.
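One partial mitigation available today is raising client-side parallelism when pulling a large blob. A sketch using the azure-storage-blob SDK; the connection string, container, and blob names are placeholders:

```python
from azure.storage.blob import BlobClient

# Placeholder connection details for illustration only.
blob = BlobClient.from_connection_string(
    "<connection-string>",
    container_name="datalake",
    blob_name="exports/full-dump.parquet",
)

# Download with several parallel connections to speed up large transfers.
downloader = blob.download_blob(max_concurrency=8)
with open("full-dump.parquet", "wb") as f:
    downloader.readinto(f)
```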
The improvement needed for Microsoft Azure Object Storage is lower transactional charges, as the fees for read and write operations are high.
The product can be difficult to use compared to how it is marketed.
I could not find a load balancer in OCI similar to the support provided by Microsoft Azure and AWS.
I've used the free tier and haven't been charged yet.
S3 offers multiple classes, allowing you to move data to cheaper classes for cost savings.
It is somewhat justified due to the benefits, but there is room for reconsideration.
It's a pay-per-use solution and a good idea for proof of concept and value.
The licensing cost of Microsoft Azure Object Storage is lower than that of competitors such as Google or third-party solutions, which makes it easy to win customers over.
It is an expensive product.
The pricing for Oracle Cloud is comparable to other cloud providers.
We have to pay for it, and the cost is on the higher side.
Its stability and scalability are also impressive, as it allows for increased storage space according to demand.
I appreciate its capability to create static websites and integrate with services like CloudFront, EC2, and DynamoDB.
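For reference, static website hosting is enabled per bucket; a minimal boto3 sketch with placeholder names (CloudFront would then point at the bucket's website endpoint):

```python
import boto3

s3 = boto3.client("s3")

# Enable static website hosting on a placeholder bucket.
s3.put_bucket_website(
    Bucket="example-site-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```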
Security measures like encryption, access controls, and the block public access feature are also important.
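These controls can also be applied programmatically. A minimal boto3 sketch that blocks public access and turns on default encryption for a placeholder bucket:

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"  # placeholder

# Block all forms of public access at the bucket level.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Turn on default server-side encryption (SSE-S3 / AES-256).
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```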
Since I work mostly with AI/ML, data piping, data integrations, and ETL tools, these features are valuable.
With storage accounts, customers use Azure Files as a replica of their file server: they can access file shares from their own systems while the data syncs automatically with the cloud, which makes it easy to use.
The ability to store everything inside Blob or Object storage and use it for archiving data is beneficial.
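As an illustration of that archiving workflow, a blob can be written directly into the Archive access tier with the azure-storage-blob SDK; all names below are placeholders:

```python
from azure.storage.blob import BlobServiceClient, StandardBlobTier

# Placeholder connection string, container, and blob names.
service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="backups", blob="2023/db-dump.bak")

with open("db-dump.bak", "rb") as data:
    # Land the object directly in the Archive tier for cheap long-term storage.
    blob.upload_blob(
        data, standard_blob_tier=StandardBlobTier.ARCHIVE, overwrite=True
    )
```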
High availability, a large dataset, and stability are our main criteria for using this solution.
I find manageability, scalability, and security of the product most valuable.
It also provides multi-region support, enhancing data accessibility and safety.
Amazon Simple Storage Service is storage for the Internet. It is designed to make web-scale computing easier for developers.
Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers.
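A minimal boto3 example of that store-and-retrieve interface, with placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "example-bucket", "hello.txt"  # placeholders

# Store an object...
s3.put_object(Bucket=bucket, Key=key, Body=b"Hello, S3!")

# ...and retrieve it through the same simple interface.
obj = s3.get_object(Bucket=bucket, Key=key)
print(obj["Body"].read().decode())  # -> Hello, S3!
```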
REST-based object storage for unstructured data in the cloud
Scale with no new hardware - eliminates new capital expenditures, opens up data center space and reduces power and cooling requirements.
Elastic storage - shared infrastructure allows for infinite scalability. Eliminates forecasting and long procurement cycles.
Pay as you go and subscription models - purchase capacity with no commitment or reduce costs with longer-term agreements.
Simple to manage - industry-standard OpenStack and RESTful APIs streamline management integration, freeing resources to accelerate other cloud projects.
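For illustration, the same store-and-retrieve pattern over the OpenStack Swift API might look like the following python-swiftclient sketch; the endpoint and credentials are placeholders, and v1 auth is assumed for brevity:

```python
import swiftclient

# Placeholder endpoint and credentials for a Swift-compatible object store.
conn = swiftclient.Connection(
    authurl="https://objectstore.example.com/auth/v1.0",
    user="account:user",
    key="secret-key",
)

conn.put_container("reports")
conn.put_object("reports", "q1.csv", contents=b"region,revenue\n")

headers, body = conn.get_object("reports", "q1.csv")
print(body.decode())
```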