750 hours of Amazon EC2 Linux t2.micro instance usage (1 GiB of memory, with 32-bit and 64-bit platform support) -- enough hours to run one instance continuously each month.
Linux administrator with 10,001+ employees
We have 750 hours of Amazon EC2 Linux t2.micro instance usage, but it's expensive.
What is most valuable?
How has it helped my organization?
It gives us the time to check things out.
What needs improvement?
Charges are high at the moment.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Owner at a tech consulting company with 51-200 employees
Amazon Web Services: Security Processes in the EC2 Cloud
Customer trust and confidence are at the heart of Amazon’s business. With so many customers using Amazon’s platforms to run their businesses securely and efficiently, Amazon has gone to great lengths to operate and manage a comprehensive control environment. That environment supports Amazon’s secure cloud web offerings by ensuring that all necessary policies and processes are applied in compliance with AWS certifications.
Within the last few years, Amazon Web Services has achieved notable security certifications, including SAS 70 Type II audits, PCI DSS Level 1 (meeting the Payment Card Industry Data Security Standard), ISO 27001 for Information Security Management Systems, and compliance with the Federal Information Security Management Act (FISMA) to serve government agencies’ FedRAMP requirements for AWS GovCloud.
When Amazon introduced Amazon EC2, it started a process rolling for business customers to run their applications in Amazon’s computing environment. EC2, the Elastic Compute Cloud, allows business customers to access Amazon’s secure cloud environment through virtual machines. The platform’s EC2 security controls also support AWS FedRAMP compliance.
Using Amazon EC2, business customers can create an image of their operating system and applications, known as an Amazon Machine Image (AMI). Once created, the image is uploaded to Amazon S3, Amazon’s Simple Storage Service. The AMI is then registered in Amazon EC2, allowing the customer to launch virtual machines as they are needed. The result is an AWS Virtual Private Cloud in which business customers can conduct operations without the exorbitant expense of IT infrastructure. For this reason, Amazon must ensure the environment meets all compliance and security standards, hence the certifications described earlier.
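To make that workflow concrete, here is a minimal sketch of launching an instance from a registered AMI using the boto3 Python SDK; the SDK choice, AMI ID, and instance type are illustrative assumptions, not from the original article:

```python
import boto3

ec2 = boto3.client('ec2')

# Launch one virtual machine from a registered AMI (hypothetical image ID).
response = ec2.run_instances(
    ImageId='ami-0123456789abcdef0',  # the registered AMI
    InstanceType='t2.micro',
    MinCount=1,
    MaxCount=1,
)
print(response['Instances'][0]['InstanceId'])
```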
Amazon EC2 Security Processes
Amazon’s approach to AWS security involves layered security processes which maintain data integrity and provide secure EC2 instances while still maintaining configuration flexibility to meet the individual requirements of EC2 business customers.
- Administration Hosts: For business customers who require access to the management platform, Amazon uses a level of security that accommodates administration hosts without posing a risk to data integrity or other users. Through AWS Identity and Access Management, this is accomplished by auditing all access activity and tracking it in a log. When a user’s authentication privileges for the management platform are terminated, access is automatically discontinued, which keeps AWS applications secure.
- Customer Controlled Instances: Amazon EC2 provides virtual instances that are solely controlled by the customer. Business customers exercise full control, and at no time can Amazon intervene by logging in to the customer’s operating system. For this reason, a set of practices is in place to guide the customer on authentication processes for the AWS VPC used to access the virtual instances. This involves designing an authentication and privilege system that can be enabled and disabled according to the changing needs of virtual machine users.
- Firewall: As part of the AWS Security Center, EC2 business customers have access to a sophisticated firewall solution that can be configured to meet each customer’s individual needs. By default, the firewall for Amazon EC2 blocks all inbound traffic; the customer must open the necessary ports to allow desired inbound traffic while continuing to block everything else. The firewall also provides a host of options for restricting inbound traffic by protocol, IP address, and other identifiers. As added security, the business customer must use their X.509 certificate to change firewall configurations.
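As a hedged illustration of that default-deny model, the sketch below creates a security group and opens only the ports it needs, using boto3; the group name, VPC ID, and CIDR ranges are hypothetical:

```python
import boto3

ec2 = boto3.client('ec2')

# Create a security group; with no ingress rules it blocks all inbound traffic.
sg = ec2.create_security_group(
    GroupName='web-sg',
    Description='Allow SSH and HTTP only',
    VpcId='vpc-0123456789abcdef0',  # hypothetical VPC
)

# Explicitly open only the inbound ports the customer actually needs.
ec2.authorize_security_group_ingress(
    GroupId=sg['GroupId'],
    IpPermissions=[
        {'IpProtocol': 'tcp', 'FromPort': 22, 'ToPort': 22,
         'IpRanges': [{'CidrIp': '203.0.113.0/24'}]},   # SSH from admin network
        {'IpProtocol': 'tcp', 'FromPort': 80, 'ToPort': 80,
         'IpRanges': [{'CidrIp': '0.0.0.0/0'}]},        # HTTP from anywhere
    ],
)
```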
- Xen: Another layer of AWS security for EC2 is the Xen hypervisor, which separates the different instances running on the same physical host. The firewall sits within the Xen hypervisor, so packets destined for instances must pass through the firewall, adding further isolation between instances.
Finally, Amazon Web Services uses a layer of security known as Amazon EBS (Elastic Block Store), which restricts access to data snapshots to the specific AWS account that created them. Business customers can make data snapshots available to other AWS accounts; however, this should be considered carefully, since snapshots may contain files with sensitive information.
Prior to releasing Elastic Block Store volumes to a customer, Amazon wipes old data in accordance with National Industrial Security Program guidelines. EBS also allows business customers to encrypt data on the block device using algorithms that comply with their individual security standards.
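For illustration, sharing a snapshot with another AWS account looks roughly like this with boto3; the snapshot ID and account number are placeholders:

```python
import boto3

ec2 = boto3.client('ec2')

# Grant one specific AWS account permission to create volumes from a snapshot.
# Consider carefully which accounts you add: snapshots may hold sensitive files.
ec2.modify_snapshot_attribute(
    SnapshotId='snap-0123456789abcdef0',   # hypothetical snapshot
    Attribute='createVolumePermission',
    OperationType='add',
    UserIds=['111122223333'],              # hypothetical account ID
)
```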
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Hi Henry,
We'll post something on S3 security as well soon. aws.amazon.com
Practice Manager - Cloud, Automation & DevOps at a tech services company with 501-1,000 employees
It has a massive library of services for you to use in developing cloud-based solutions.
Originally posted at https://vcdx133.com/2015/06/12/tech101-amazon-web-services
As part of my NPX preparation (AWS Certified Solutions Architect – Professional is one of the recommended qualifications) and my RapidMatter GitHub project (will run from AWS), I have been delving into the world of Amazon Web Services. One statement: “Wow!” I can see why they are the world leader in Public Cloud services.
Here is the cool thing, as an Enterprise/Cloud Architect you have a MASSIVE library of services (40+ at time of writing) that you can use to develop Cloud-based solutions for your customers. As you read through the list below, you will see the fundamental building blocks for every solution. By having this service matrix, you do not have to reinvent the wheel; it already exists and is ready to go. Thus, you can focus on making sure your customer requirements are being met with elegant and innovative designs.
Getting Started (takes 5 minutes)
- You have a PC with a responsive and usable Internet connection
- Create an AWS account
- Provide a valid Credit Card
- Provide a valid phone number that must be verified
- Start using AWS immediately – there is a free tier (1 year trial period) for some services in some regions (micro instances)
- The UI is very intuitive and easy to use
- WARNING: You can spin up most of the service catalogue. Do not forget about resources and leave them running; your credit card will be charged (see the clean-up sketch after this list)
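As a small safeguard against that warning, here is a hedged boto3 sketch (not part of the original post; it assumes credentials are already configured) that finds and stops any running instances:

```python
import boto3

ec2 = boto3.client('ec2')

# Find every instance that is currently running.
resp = ec2.describe_instances(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}]
)
instance_ids = [
    inst['InstanceId']
    for reservation in resp['Reservations']
    for inst in reservation['Instances']
]

# Stop them so the free tier (or your credit card) is not drained.
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print('Stopped:', instance_ids)
```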
Core Services of AWS
- EC2 – Elastic Compute Cloud – Virtual Machines (Instances) you can provision from a massive library of templates (AMIs – Amazon Machine Images, free and paid, from the AWS Community/Marketplace)
- EBS – Elastic Block Store – Persistent Virtual Disks for your VMs (Instances)
- S3 – Simple Storage Service – Scalable, Object-based Storage in the Cloud
- Glacier – Archive Storage in the Cloud
Under The Hood: AWS uses a heavily customised version of Xen as its hypervisor.
Pricing Models
- On-Demand – Pay-as-you-go
- Reserved Instances – Pay up front
- Spot Requests – Bid for excess AWS resources against other AWS users
Compute
- EC2 Container Service – Run and Manage Docker Containers
- Lambda – Run Code in Response to Events
Storage & Content Delivery
- Storage Gateway – Integrates On-Premises IT Environments with Cloud Storage
- Elastic File System – Fully Managed File System for EC2
Edge Services (to be close to all of your customers around the world)
- Route53 – Scalable DNS and Domain Name Registration
- CloudFront – Global Content Delivery Network – Caches static content regionally
Simple Micro-services that just work
- SQS – Simple Queue Service
- SES – Simple Email Service
- SWF – Simple Workflow Service
- AppStream – Low Latency Application Streaming
- Elastic Transcoder – Easy-to-use Scalable Media Transcoding
- CloudSearch – Managed Search Service
Databases
- RDS – Relational Database Service – MySQL, Oracle, SQL Server & Amazon Aurora
- DynamoDB – Predictable and Scalable NoSQL Data Store
- ElastiCache – In-Memory Cache
- Redshift – Managed Petabyte-Scale Data Warehouse Service
Networking
- VPC – Virtual Private Cloud – Isolated Cloud Resources
- Direct Connect – Dedicated Network Connection to AWS
Administration & Security
- Directory Service – Managed Directory Services in the cloud
- Identity & Access Management – Access Control and Key Management
- Trusted Advisor – AWS Cloud Optimisation Expert
- CloudTrail – User Activity and Change Tracking
- Config – Resource Configurations and Inventory
- CloudWatch – Resource and Application Monitoring
Deployment & Management
- CloudFormation – Templated AWS Resource Creation (for Sysadmins)
- Elastic Beanstalk – AWS Application Container (for Developers)
- OpsWorks – DevOps Application Management Service
- CodeDeploy – Automated Deployments
Analytics
- EMR – Managed Hadoop Framework
- Kinesis – Real-time Processing of Streaming Big Data
- Data Pipeline – Orchestration for Data-Driven Workflows
- Machine Learning – Build Smart Applications Quickly and Easily
Mobile Services
- Cognito – User Identity and App Data Synchronisation
- Mobile Analytics – Understand App Usage Data at Scale
- SNS – Simple Notification Service – Push Notification Service
Enterprise Applications
- WorkSpaces – Desktops in the Cloud (VDI)
- WorkDocs – Secure Enterprise Storage and Sharing
- WorkMail – Secure Email and Calendaring Service
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Program and Project Manager at a tech services company with 10,001+ employees
As with all public clouds, there is still a dilemma with security, but it provides a rich set of services for IaaS, PaaS, and SaaS.
AWS is a good platform for NoSQL, MySQL, and Hadoop databases, but it’s not as good for RDBMS ones like MS SQL. It is still missing many features -- like replication, backup policy, and the ability to store/attach databases from a local drive -- but it has many good features for big data.
Its new AppStream service will pre-process graphics, including 3D renderings, and blast the results to mobile clients. Its Kinesis service for streaming data sets the stage for building big data apps on AWS, the basic architecture for the internet of things. As for its Hadoop capabilities, AWS launched its Elastic MapReduce (EMR) a long time back. It is the best cloud services provider for open source software for databases, operating systems hosting apps, and many other customized applications.
As discussed with many tech professionals, there is still a dilemma around security, much like a decade ago when online e-commerce started and people weren’t prepared to share their credit card and bank details. Now that online shopping is common, I expect the same trend to come to the public cloud very soon. AWS security is very good, and they follow all the required security regulations, as other public cloud providers do, because they know any security breach could impact their entire business.
Pricing is another key concern when the cloud is used for big business with rapidly growing data. Price is a big debate and requires a lot of analysis for large organizations. But no doubt, AWS is quite good for a small setup, as it is very cost-effective and provides a complete ecosystem.
AWS is quite good for cloud services such as IaaS, PaaS, and SaaS. It provides a rich set of services and integrated monitoring tools alongside a competitive pricing model. AWS offers a full range of compute and storage offerings, including on-demand instances and specialized services such as Amazon EMR and Cluster GPU instances. AWS CloudTrail and Amazon CloudWatch are very good monitoring services, and AWS Identity and Access Management (IAM) is a good administration and security feature for administrators to use.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Thank you for the information on the AWS solution.
Independent Analyst and Advisory Consultant at Server StorageIO - www.storageio.com
Lambda and other AWS enhancements
A few weeks ago I attended Amazon Web Services (AWS) re:Invent 2014 in Las Vegas for a few days. For those of you who have not yet attended this event, I recommend adding it to your agenda. If you have an interest in compute servers, networking, storage, development tools, or the management of cloud (public, private, hybrid), virtualization, and related themes, you should check out AWS re:Invent.
AWS made several announcements at re:Invent, including many around development tools, compute, and data storage services. One to keep an eye on is the cloud-based Aurora relational database service, which complements the existing RDS tools. Aurora is positioned as an alternative to traditional SQL-based transactional databases commonly found in enterprise environments (e.g. SQL Server among others).
Some recent AWS announcements prior to re:Invent include
- AWS Adds EU (Frankfurt) Region
- Amazon Linux AMI Updates
- AWS Systems Manager for Microsoft System Center Virtual Machine Manager
- T2, the New Low-Cost, General Purpose Instance Type for Amazon EC2
- Windows Server 2012 R2 AMI Updates
- Zocalo Enterprise File Sync & Share updates (read more about Zocalo here)
- AWS Management Portal for vCenter Setup Enhancements
AWS vCenter Portal
Using the AWS Management Portal for vCenter adds a plug-in within your VMware vCenter for managing your AWS infrastructure. The plug-in includes support for AWS EC2 and Virtual Machine (VM) import to migrate your VMware VMs to AWS EC2, and for creating VPCs (Virtual Private Clouds) along with subnets. There is no cost for the plug-in; you simply pay for the underlying AWS resources consumed (e.g. EC2, EBS, S3). Learn more about the AWS Management Portal for vCenter here, and download the OVA plug-in for vCenter here.
AWS re:invent content
November 12, 2014 (Day 1) Keynote (highlight video, full keynote). This is the session where AWS SVP Andy Jassy made several announcements, including the Aurora relational database that complements the existing RDS (Relational Database Service). In addition to Andy, the keynote also included various special guests, ranging from AWS customers and partners to internal people, in support of the various initiatives and announcements.
November 13, 2014 (Day 2) Keynote (highlight video, full keynote). In this session, Amazon.com CTO Werner Vogels made announcements about the new Container and Lambda services.
AWS re:Invent announcements
Announcements and enhancements made by AWS during re:Invent include:
- Key Management Service (KMS)
- Amazon RDS for Aurora
- Amazon EC2 Container Service
- AWS Lambda
- Amazon EBS Enhancements
- Application development, deployment and life-cycle management tools
- AWS Service Catalog
- AWS CodeDeploy
- AWS CodeCommit
- AWS CodePipeline
Key Management Service (KMS)
A hardware security module (HSM) based key management service for creating and controlling the encryption keys that protect your digital assets. It integrates with AWS EBS and other services including S3 and Redshift, along with CloudTrail logs for regulatory, compliance, and management purposes. Learn more about AWS KMS here.
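As a rough illustration of the service (not from the original post), creating a key and encrypting a small secret with boto3 looks like this:

```python
import boto3

kms = boto3.client('kms')

# Create a customer master key managed by KMS-backed HSMs.
key = kms.create_key(Description='demo key for digital assets')
key_id = key['KeyMetadata']['KeyId']

# Encrypt and decrypt a small payload; key usage is logged to CloudTrail.
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b'account secret')
plaintext = kms.decrypt(CiphertextBlob=ciphertext['CiphertextBlob'])
assert plaintext['Plaintext'] == b'account secret'
```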
AWS Database
For those who are not familiar, AWS has a suite of database-related services spanning SQL and NoSQL, from simple to transactional to Petabyte (PB) scale data warehouses for big data and analytics. AWS offers the Relational Database Service (RDS), a suite of different database types, instances, and services. RDS instance types include MySQL, PostgreSQL, Oracle, SQL Server, and the new AWS Aurora offering (read more below). Other database and big data repository offerings include SimpleDB and DynamoDB (non-SQL databases), ElastiCache (an in-memory cache repository), and Redshift (a large-scale data warehouse and big data repository).
In addition to the database services offered by AWS, you can also combine various AWS resources, including EC2 compute, EBS, and other storage offerings, to create your own solution. For example, various Amazon Machine Images (AMIs), or pre-built operating systems and database tools, are available with EC2 as well as via the AWS Marketplace, such as MongoDB and Couchbase among others. For those not familiar with MongoDB, Couchbase, Cassandra, Riak, and other NoSQL or alternative databases and key-value repositories, check out Seven Databases in Seven Weeks in my book review of it here.
Amazon RDS for Aurora
Aurora is a new relational database offering that is part of the AWS RDS suite of services. Positioned as an alternative to commercial high-end databases, Aurora is a cost-effective database engine compatible with MySQL. AWS claims 5x better performance than standard MySQL with Aurora, while remaining resilient and durable. Learn more about Aurora, which will be available in early 2015, and its current preview here.
Amazon EC2 C4 instances
AWS will be adding the new C4 instance as a next generation of EC2 compute instance, based on Intel Xeon E5-2666 v3 (Haswell) processors. These processors run at a clock speed of 2.9 GHz, providing the highest level of EC2 performance. AWS is targeting traditional High Performance Computing (HPC) along with other compute-intensive workloads, including analytics, gaming, and transcoding among others. Learn more about AWS EC2 instances here, and view this Server and StorageIO EC2, EBS, and associated AWS primer here.
Amazon EC2 Container Service
Containers such as those from Docker have become popular for helping developers rapidly build and deploy scalable applications. AWS has added a new feature called EC2 Container Service that supports Docker using simple APIs. Beyond supporting Docker, EC2 Container Service is a high-performance, scalable container management service for distributed applications deployed on a cluster of EC2 instances. Similar to other EC2 services, EC2 Container Service leverages security groups, EBS volumes, and Identity and Access Management (IAM) roles, along with scheduling placement of containers to meet your needs. Note that AWS is not alone in adding container and Docker support, with Microsoft Azure also having recently made some announcements; learn more about Azure and Docker here. Learn more about EC2 Container Service here and more about Docker here.
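To give a feel for those simple APIs, here is a hedged boto3 sketch registering a minimal Docker task definition; the task family and image are hypothetical, not from the original post:

```python
import boto3

ecs = boto3.client('ecs')

# Describe one Docker container as an ECS task definition.
ecs.register_task_definition(
    family='hello-docker',                 # hypothetical task family
    containerDefinitions=[{
        'name': 'web',
        'image': 'nginx:latest',           # any Docker image
        'memory': 128,                     # MiB reserved for the container
        'cpu': 128,                        # CPU units
        'portMappings': [{'containerPort': 80, 'hostPort': 80}],
    }],
)
```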
AWS Lambda
In addition to announcing new higher-performance Elastic Compute Cloud (EC2) instances and the container service, another new service is AWS Lambda. Lambda is a service that automatically and quickly runs your application code in response to events, activities, or other triggers. In addition to running your code, the Lambda service is billed in 100 millisecond increments along with corresponding memory use, vs. standard EC2 per-hour billing. This means that instead of paying for an hour of time for your code to run, you can choose the Lambda service for more fine-grained consumption billing.
The Lambda service keeps your code functions staged, ready to execute. AWS Lambda can run your code in response to S3 bucket content (e.g. object) changes, messages arriving via Kinesis streams, or table updates in databases. Examples include responding to an event such as a web-site click, responding to a data upload (photo, image, audio, file, or other object), indexing, streaming, or analyzing data, receiving output from a connected device (think Internet of Things, IoT, or Internet of Devices, IoD), or triggering from an in-app event, among others. The basic idea with Lambda is to pay for only the amount of time needed to perform a particular function, without having to dedicate an AWS EC2 instance to your application. Initially, Lambda supports Node.js (JavaScript) code that runs in its own isolated environment.
Various application code deployment models
The Lambda service is pay-for-what-you-consume; charges are based on the number of requests for your code function (e.g. application), the amount of memory, and execution time. There is a free tier for Lambda that includes 1 million requests and 400,000 GByte-seconds of time per month. A GByte-second is the amount of memory (e.g. DRAM vs. storage) consumed during a second. For example, if your application runs 100,000 times for 1 second each, consuming 128 MB (0.125 GB) of memory, that is 100,000 x 1 s x 0.125 GB = 12,500 GB-seconds. View the various pricing examples here on the AWS Lambda site, which show different memory sizes, invocation counts, and run times.
The amount of memory you select for your application code affects how far it can run within the AWS free tier, which is available to both existing and new customers. Lambda fees are based on the total across all of your functions; note that you could have from one to thousands or more different functions running in the Lambda service. As of this time, AWS shows Lambda pricing as free for the first 1 million requests and, beyond that, $0.20 per 1 million requests ($0.0000002 per request), plus a duration charge. Duration runs from when your code starts until it ends or otherwise terminates, rounded up to the nearest 100 ms, and its price depends on the amount of memory you allocate for your code. Once past the 400,000 GByte-second per month free tier, the fee is $0.00001667 for every GB-second used.
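Putting those published numbers together, here is a small worked example in Python; the monthly invocation count and memory size are illustrative assumptions:

```python
# Lambda billing sketch using the prices quoted above.
requests = 3_000_000            # invocations this month (assumed)
duration_s = 1.0                # average run time, already rounded to 100 ms
memory_gb = 128 / 1024          # 128 MB expressed in GB

gb_seconds = requests * duration_s * memory_gb      # 375,000 GB-seconds

# Free tier: first 1M requests and 400,000 GB-seconds per month.
billable_requests = max(0, requests - 1_000_000)
billable_gb_s = max(0.0, gb_seconds - 400_000)      # zero in this example

cost = (billable_requests / 1_000_000) * 0.20 + billable_gb_s * 0.00001667
print(f"{gb_seconds:,.0f} GB-seconds -> ${cost:.2f} for the month")  # $0.40
```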
Why use AWS Lambda vs. an EC2 instance
Why would you use AWS Lambda vs. provisioning a container, an EC2 instance, or running your application code on a traditional or virtual machine?
If you need control and can leverage an entire physical server with its operating system (OS), applications, and support tools for your piece of code (e.g. JavaScript), that could be an option. If you simply need an isolated image instance (OS, applications, and tools) for your code on a shared virtual on-premises environment, that can be an option too. Likewise, if you need to move your application to an isolated cloud machine that hosts an OS along with your application, paying for those resources on an hourly basis, that could be your option. If you simply need a lighter-weight container to drop your application into, that is where Docker and containers come into play, off-loading some of the traditional application dependency overhead.
However, if all you want to do is add some code logic, for example to process an object, file, or image when it is uploaded to AWS S3, without having to stand up an EC2 instance along with the associated server, OS, and complete application stack, that's where AWS Lambda comes into play. Simply create your code (initially JavaScript), specify how much memory it needs, define what events or activities will trigger or invoke it, and you have a solution.
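As a sketch of that S3-upload pattern: the post notes Lambda initially supported only Node.js, but for consistency with the other sketches here the handler below is written in Python (a runtime Lambda added later); the event shape is the standard S3 notification format:

```python
import urllib.parse

def handler(event, context):
    """Invoked by S3 whenever an object is uploaded to the watched bucket."""
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        # Place your processing logic here (index, transcode, analyze, ...).
        print(f"New object uploaded: s3://{bucket}/{key}")
```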
View AWS Lambda pricing along with free tier information here.
Amazon EBS Enhancements
AWS is increasing the performance and size of General Purpose SSD and Provisioned IOPS SSD volumes. For AWS EBS General Purpose SSD volumes, you can now create volumes up to 16 TB and 10,000 IOPS. For EBS Provisioned IOPS SSD volumes, you can create volumes up to 16 TB and 20,000 IOPS. General Purpose SSD volumes deliver a maximum throughput (bandwidth) of 160 MBps, and Provisioned IOPS SSD volumes are specified by AWS at 320 MBps when attached to EBS-optimized instances. Learn more about EBS capabilities here. Verify your I/O size against AWS sizing information to avoid surprises, as all I/O sizes are not considered the same. Learn more about Provisioned IOPS, optimized instances, EBS, and EC2 fundamentals in this StorageIO AWS primer here.
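For example, creating a Provisioned IOPS volume at those new limits might look like this with boto3; the availability zone is a placeholder, and the io1 volume type is today's API name for Provisioned IOPS SSD:

```python
import boto3

ec2 = boto3.client('ec2')

# A 16 TB Provisioned IOPS SSD volume at the new 20,000 IOPS ceiling.
volume = ec2.create_volume(
    AvailabilityZone='us-east-1a',   # hypothetical AZ
    Size=16 * 1024,                  # size in GiB (16 TB)
    VolumeType='io1',                # Provisioned IOPS SSD
    Iops=20000,
)
print(volume['VolumeId'])
```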
Application development, deployment and life-cycle management tools
In addition to compute and storage resource enhancements, AWS has also announced several tools to support application development, configuration along with deployment (life-cycle management). These include tools that AWS uses themselves as part of building and maintaining the AWS platform services.
AWS Config (Preview e.g. early access prior to full release)
Management, reporting, and monitoring capabilities for your AWS resources, covering configuration (including history), governance, change management, and notifications. AWS Config enables capabilities similar to data center infrastructure management (DCIM) and a Change Management Database (CMDB), supporting troubleshooting and diagnostics, auditing, and resource and configuration analysis, among other activities. Learn more about AWS Config here.
AWS Service Catalog
AWS announced a new service catalog that will be available in early 2015. This new service capability will enable administrators to create and manage catalogs of approved resources for users to use via their personalized portal. Learn more about AWS service catalog here.
AWS CodeDeploy
To support rapid, automated code deployment to EC2 instances, AWS has released CodeDeploy. CodeDeploy masks the complexity associated with deploying new features to your applications while reducing error-prone manual operations. As part of the announcement, AWS mentioned that they use CodeDeploy in their own application development, maintenance, change-management, and deployment operations. While suited for at-scale deployments across many instances, CodeDeploy works with as few as a single EC2 instance. Learn more about AWS CodeDeploy here.
AWS CodeCommit
For application code management, AWS will make available in early 2015 a new service called CodeCommit. CodeCommit is a highly scalable, secure source control service that hosts private Git repositories. Supporting the standard functionality of Git, including collaboration, you can store everything from source code to binaries while working with your existing tools. Learn more about AWS CodeCommit here.
AWS CodePipeline
To support application delivery and release automation along with associated management tools, AWS is making CodePipeline available. CodePipeline is a tool (service) that supports builds, checking workflows, code staging, testing, and release to production, including support for third-party tool integration. CodePipeline will be available in early 2015; learn more here.
What this all means
AWS continues to invest and re-invest in its environment, both adding new features and expanding the extensibility of existing ones. Like other vendors and service providers, AWS adds new check-box features; unlike some, it also increases the depth of those capabilities.
Besides adding new features and increasing the extensibility of existing capabilities, AWS is addressing both the data and information infrastructure, including compute (server), storage and database, and networking, along with the associated management tools, while also adding extra developer tools. The developer tools cover life-cycle management, supporting code creation, testing, tracking, and change management, among other activities.
Another observation: while AWS continues to promote the public cloud services it offers as the present and future, it is also talking hybrid cloud. Granted, you have to listen carefully, as you may not hear "hybrid cloud" used the way some toss the term around; instead, look into AWS Virtual Private Cloud (VPC), along with what you can do using various technologies via the AWS Marketplace.
AWS is also speaking the language of enterprise and traditional IT, from applications and development to data and information infrastructure, while still walking the cloud talk. AWS realizes that to help existing environments evolve and transition to the cloud, it needs to speak their language rather than first converting them to cloud conversations. These steps should make AWS practical for many enterprise environments looking to transition to public and hybrid cloud at their own pace, some faster than others. More on these and related themes in future posts.
The AWS re:Invent event continues to grow year over year. I heard a figure of over 12,000 people, though it was not clear whether that included exhibiting vendors, AWS staff, attendees, analysts, bloggers, and media. A simple validation is that the keynotes and expo occupied the larger rooms that events such as EMCworld and VMworld use when hosted in Las Vegas, noticeably larger than what I saw at last year's re:Invent. Unlike some large events such as VMworld, where at best there is a waiting queue to get into sessions or the hands-on lab (HOL), re:Invent, while becoming more crowded, is still easy to get into, with time to use the HOL, which is of course powered by AWS, meaning you can later resume what you started at re:Invent. Overall, a good event and a nice series of enhancements by AWS; I am looking forward to next year's re:Invent.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Independent Analyst and Advisory Consultant at Server StorageIO - www.storageio.com
I like the ability to move S3 objects within AWS; however, I will continue to use other tools such as S3motion and s3sfs for moving data in and out of AWS.
Cloud Conversations: AWS S3 Cross Region Replication storage enhancements
Amazon Web Services (AWS) recently announced, among other enhancements, new Simple Storage Service (S3) cross-region replication of objects from a bucket (e.g. container) in one region to a bucket in another region. AWS also recently enhanced Elastic Block Storage (EBS), increasing the maximum performance and size of Provisioned IOPS (SSD) and General Purpose (SSD) volumes. The EBS enhancements include the ability to store up to 16 TBytes of data in a single volume and perform 20,000 input/output operations per second (IOPS). Read more about EBS and other recent AWS server, storage I/O, and application enhancements here.
The Problem, Issue, Challenge, Opportunity and Need
The challenge is being able to move data (e.g. objects) stored in AWS buckets in one region to another in a safe, secure, timely, automated, cost-effective way.
Even though AWS has a global name-space, buckets and their objects (e.g. files, data, videos, images, bit and byte streams) are stored in a specific region designated by the customer or user (AWS S3, EBS, EC2, Glacier, Regions and Availability Zone primer can be found here).
Understanding the challenge and designing a strategy
The following diagram shows the challenge and how to copy or replicate objects in an S3 bucket in one region to a destination bucket in a different region. While objects can be copied or replicated without S3 cross-region replication, that essentially involves reading your objects, pulling the data out via the internet, and writing it to another place. The catch is that this can add extra cost, take time, consume network bandwidth, and require extra tools (Cloudberry, Cyberduck, S3fuse, S3motion, S3browser, s3 tools (not AWS), and a long list of others).
What is AWS S3 Cross-region replication
Highlights of AWS S3 Cross-region replication include:
- AWS S3 cross-region replication is, as its name implies, replication of S3 objects from a bucket in one region to a destination bucket in another region.
- S3 replication of new objects added to an existing or new bucket (note: only new objects are replicated)
- Policy based replication tied into S3 versioning and life-cycle rules
- Quick and easy to set up for use in a matter of minutes via S3 dashboard or other interfaces
- Keeps region to region data replication and movement within AWS networks (potential cost advantage)
To activate it, you simply enable versioning on a bucket, enable cross-region replication, indicate the source bucket (or a prefix of objects in the bucket), specify the destination region and target bucket name (or create one), then create or select an IAM (Identity and Access Management) role, and objects should be replicated.
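Those activation steps can also be scripted. Here is a hedged boto3 sketch, assuming hypothetical bucket names and a pre-existing IAM replication role (none of which are from the original post):

```python
import boto3

s3 = boto3.client('s3')

# Cross-region replication requires versioning on both buckets.
for bucket in ('my-source-bucket', 'my-dest-bucket'):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={'Status': 'Enabled'},
    )

# Replicate all new objects from the source bucket to the destination bucket.
s3.put_bucket_replication(
    Bucket='my-source-bucket',
    ReplicationConfiguration={
        'Role': 'arn:aws:iam::111122223333:role/s3-replication',  # hypothetical
        'Rules': [{
            'Prefix': '',            # empty prefix = replicate everything
            'Status': 'Enabled',
            'Destination': {'Bucket': 'arn:aws:s3:::my-dest-bucket'},
        }],
    },
)
```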
Some AWS S3 cross-region replication things to keep in mind (e.g. considerations):
- As with other forms of mirroring and replication, if you add something on one side it gets replicated to the other side
- As with other forms of mirroring and replication, if you delete something on one side it can be deleted on both (be careful and do some testing)
- Keep costs in perspective as you still need to pay for your S3 storage at both locations as well as applicable internal data transfer and GET fees
- Click here to see current AWS S3 fees for various regions
S3 Cross-region replication and alternative approaches
There are several AWS regions around the world, and until today AWS customers could copy, sync, or replicate S3 bucket contents between regions manually (or via automation) using tools such as Cloudberry, Cyberduck, S3browser, and S3motion, to name just a few, as well as via various gateways and other technologies. Some of those tools are open-source or free, some are freemium, and some are premium; they also vary by interface (some with a GUI, others with a CLI or APIs), including the ability to mount an S3 bucket as a local network drive and use standard tools to sync or copy.
However, a catch with the above-mentioned tools (among others) is that replicating your data (e.g. objects in a bucket) can involve other AWS S3 fees. For example, reading data from one AWS region (a GET, which has a fee) and then copying it out over the internet incurs transfer fees. Likewise, when copying data into another AWS S3 region (a PUT, which is free), there is also the cost of storage at the destination.
AWS S3 cross-region hands on experience (first look)
For my first hands-on (first look) experience with AWS cross-region replication today, I enabled a bucket in the US Standard region (e.g. Northern Virginia) and created a new target destination bucket in EU (Ireland). Setup and configuration were very quick, literally just a few minutes, with most of the time spent reading the text on the new AWS S3 dashboard properties configuration displays.
I selected an existing test bucket to replicate and noticed that nothing had replicated over to the other bucket, until I remembered that only new objects are replicated. Once some new objects were added to the source bucket, within a matter of moments (e.g. a few minutes) they appeared across the pond in my EU (Ireland) bucket. When I deleted those replicated objects from my EU (Ireland) bucket and switched back to my view of the source bucket in the US, those new objects were already deleted from the source. So, just like regular mirroring or replication, pay attention to how you have things configured (e.g. synchronized vs. contribute vs. echo of changes, etc.).
While I was not able to do a solid, quantifiable performance test, based simply on some quick copies and my network speed, moving data via S3 cross-region replication was faster than using something like S3motion with my server in the middle.
It also appears from some initial testing today that a benefit of AWS S3 cross-region replication (besides being bundled and part of AWS) is that some fees to pull data out of AWS and transfer out via the internet can be avoided.
Where to learn more
Here are some links to learn more about AWS S3 and related topics
- Cross-Region Replication for Amazon S3
- Cloud conversations: If focused on cost you might miss other cloud storage benefits
- Data Protection Diaries
- Cloud Conversations: AWS overview and primer
- Eight Ways to Avoid Cloud Storage Pricing Surprises
- Cloud and Object Storage Center
- Are more than five nines of availability really possible?
- How do primary storage clouds and cloud for backup differ?
- What’s most important to know about my cloud privacy policy?
What this all means and wrap-up
For those looking for a way to streamline replicating data (e.g. objects) from an AWS bucket in one region to a bucket in a different region, you now have a new option. There are potential cost savings, if that is your goal, along with performance benefits, in addition to whatever might already be working in your environment. Replicating objects provides a way of expanding your business continuance (BC), business resiliency (BR), and disaster recovery (DR) involving S3 across regions, as well as a means for content caching or distribution, among other possible uses.
Overall, I like this ability to move S3 objects within AWS; however, I will continue to use other tools such as S3motion and s3sfs for moving data in and out of AWS as well as among other public cloud services and local resources.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Salesforce/Amazon/AWS Trainer at a tech consulting company with 51-200 employees
The product is simple and can be learned with online documentation.
What is most valuable?
EC2, EBS, Security, and RDS services are all good.
How has it helped my organization?
Currently I'm using the product for learning purposes.
For how long have I used the solution?
I have used it for over a year.
What was my experience with deployment of the solution?
No issues encountered.
What do I think about the stability of the solution?
No issues encountered.
What do I think about the scalability of the solution?
Not yet.
How are customer service and technical support?
Customer Service:
Very nice.
Technical Support:
Very good.
Which solution did I use previously and why did I switch?
Yes, I used Microsoft Azure, but it only provides a free trial for one month. That duration is not sufficient to learn cloud services. Hence, I switched to AWS, as Amazon provides the AWS cloud as a free tier for one year. That is an ample amount of time to grasp cloud concepts and gain hands-on experience.
What about the implementation team?
The product is simple and can be learned with online documentation.
What was our ROI?
Very convenient.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.

Popular Comparisons
Microsoft Azure
Red Hat OpenShift
Oracle Cloud Infrastructure (OCI)
Akamai Connected Cloud (Linode)
Google Cloud
VMware Tanzu Platform
SAP Cloud Platform
Salesforce Platform
Pivotal Cloud Foundry
Alibaba Cloud
Google Firebase
IBM Public Cloud
Nutanix Cloud Clusters (NC2)
SAP S4HANA on AWS
Heroku
Questions:
- Gartner's Magic Quadrant for IaaS maintains Amazon Web Service at the top of the Leaders quadrant. Do you agree?
- PaaS solutions: Areas for improvement?
- Rackspace, Dimension Data, and others that were in last year's Challenger quadrant became Niche Players: Agree/Disagree?
- Does anybody have experience negotiating the terms and conditions with AWS?
- Which would you prefer - Amazon AWS or IBM Public Cloud?
- Do you have an Amazon AWS certification, and do you think it is important to earn one?
- Would you recommend Amazon AWS to cloud computing beginners?
- Which Amazon AWS features and services do you use the most often and why?
- How does Amazon compare to alternative cloud solutions?
- What are some smart ways to streamline AWS data transfer costs?
AWS has Trusted Advisor. Also look at ParkMyCloud. Using Spot Instances will also help you reduce costs. TCO = total cost of ownership.