it_user242517 - PeerSpot reviewer
Information Security Consultant at a tech services company with 51-200 employees
Consultant
The low cost is attractive, but stored procedures don't exist.

When I first had the idea to build https://report-uri.io, the biggest thing that jumped out at me was that there could be potentially huge amounts of inbound data that would need to be logged, stored and queried in an efficient manner. Some quick research made it seem obvious that, most of the time, sites shouldn't really be generating CSP or HPKP violation reports, or so I thought. Once you have set up and refined your policy, you'd expect not to receive any reports at all unless there was a problem, but this turned out not to be the case. Even excluding things like malvertising, ad-injectors and advertisers serving up http adverts on https pages, of which I see a steady stream constantly, things like policy misconfiguration or a genuine XSS attack could also cause reports to be generated and sent, potentially in huge numbers. Every browser that visits a page with a violation sends a report, and there can be, and regularly are, multiple violations on a single page. Multiply that by a few heavily trafficked sites and you could very quickly have hundreds if not thousands of reports flooding in every single minute.

SQL Database

My first thought, as is fairly typical when one thinks 'I need a database', was towards the time-tested SQL Server (or MySQL, depending on your preference). Having had plenty of interactions with SQL Server in the past, I knew that it was more than capable of handling the simple requirements of a site like this. That said, I was also aware that the requirements of running a high-performance and highly available database can be quite demanding. I knew I was going to want someone else to take care of this for me, so I started looking around at different cloud providers. It became apparent pretty quickly that SQL Server in the cloud was fairly pricey for the budget I had in mind for the site!

SQL Azure was coming in at between £46 and £92 a month for a database capable of handling just a few thousand transactions a minute. Relatively cheap to some I have no doubt, but considering that all I'd looked at so far was the cost of the database, it wasn't a great start. Amazon also have their own offering of various flavours of RDBMS hosting but again, for a reasonable level of throughput and performance, I was looking at starting prices in the £40 - £50 a month region just to meet some basic needs.

My biggest concern with a fixed throughput was how easily an attacker could saturate it, given the nature of the site. If the database is only provisioned for 5,000 transactions per minute, the number of inbound reports, queries against the data and my session store (more on that in another blog) could be quite demanding, and if the database becomes unavailable, the whole site stops working. I needed something without the throughput restrictions, and a lot cheaper.

NoSQL Database

Having used MongoDB for one of my previous projects, the next logical step was to look and see what was available in terms of NoSQL databases. Again, the hosted solutions seemed to be fairly pricey and were constrained by the typical CPU/RAM tiers or just a given performance metric. With great database-as-a-service offerings from both Amazon and Microsoft, in the form of DynamoDB and Table Storage respectively, I fired up a small test on both to try them out. One of the first things that cropped up with DynamoDB was provisioned throughput again. You aren't actually billed for the transactions you make; you're billed to have a maximum available throughput, after which transactions will start to fail. If you don't use them, you're still paying for them, but as soon as you go over the limit, you're in trouble. This means that you'd need to provision a good portion above your average requirements to be able to handle bursts in traffic.

Still, it's a little cheaper at ~£30 a month for the equivalent level of throughput as the SQL Server database mentioned above, but we still have that maximum throughput limit. Microsoft do things a little differently with Table Storage in Azure: you're only billed for the transactions you actually use, and there is no concept of provisioning for throughput. Each storage account can use as much or as little of the scalability limits as is required, and you never pay any more or less, just the per-transaction cost.

Microsoft Azure Table Storage

Having been fairly impressed with my initial testing of Table Storage, I decided to throw some numbers on a piece of paper and see what the costs were going to come out at. Each storage account has a performance target of 20,000 transactions per second. Yes, 20,000 per second! That means that my application can perform up to this limit with one restriction: there is a 2,000 transactions per second target on a partition, which is similar to the concept of a table in a traditional relational database. This shouldn't be a problem as long as the data is partitioned properly, a note for later on. Beyond this, though, there aren't any other limitations. If you make 1 transaction in a second you pay the cost of 1 transaction; if you make 1,000 transactions in a second you pay the cost of 1,000 transactions. There are no penalties or additional costs as your throughput increases. The really staggering part is that the cost of a single transaction is £0.000000022, or, to make that a bit easier to get your head around, £0.022 per 1,000,000 transactions. Not only is the incredibly low cost really attractive here, but the requirements of my application don't fit very well with a fixed throughput limit, and Table Storage does away with that.
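
To make the partitioning note concrete, here is a minimal sketch in Python using the azure-data-tables SDK (the site itself was PHP, so this is purely illustrative). The per-site partition key scheme is my own assumption; the idea is simply to spread writes across partitions so that no single partition has to absorb more than the 2,000 transactions per second target.

    from datetime import datetime, timezone
    from uuid import uuid4

    from azure.data.tables import TableServiceClient

    CONNECTION_STRING = "<your storage account connection string>"  # placeholder

    service = TableServiceClient.from_connection_string(CONNECTION_STRING)
    table = service.create_table_if_not_exists("violationreports")

    def store_report(site_id: str, report: dict) -> None:
        # PartitionKey groups one site's reports; RowKey only needs to be
        # unique within the partition, so a timestamp plus a random suffix works.
        entity = {
            "PartitionKey": site_id,
            "RowKey": f"{datetime.now(timezone.utc).isoformat()}-{uuid4().hex[:8]}",
            **report,
        }
        table.create_entity(entity=entity)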

Beyond this, the only additional cost, as with all other providers, is storage space for the database and outbound bandwidth, both of which are again billed based on exactly what you use, without any limits or requirements to provision allowances. Data storage is billed at £0.0581/GB/month, and the first 5GB of outbound bandwidth is free, with a cost of £0.0532/GB after that.

To sum all of this up with a really simple example, I drew up the following.

To store 5GB of data, with 5GB of egress, and to issue 10 million transactions against that data would cost £0.5105 per month. That's less money than I lose down the side of the couch each month!

Even if we get really silly with these numbers and put 100GB in the database with 100GB of egress and issue 200 million transactions against the data, we're still only talking £15.264 per month! That equates to an average of about 4,629 transactions per minute, at a fraction of any quote from other providers, and it proved attractive enough to tip the balance in favour of Azure Table Storage.
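
As a sanity check, the arithmetic above is easy to reproduce; a few lines of Python using the article's prices gives the same figures:

    STORAGE_PER_GB = 0.0581      # pounds per GB per month
    EGRESS_PER_GB = 0.0532       # pounds per GB beyond the free allowance
    FREE_EGRESS_GB = 5
    TX_PER_MILLION = 0.022       # pounds per 1,000,000 transactions

    def monthly_cost(storage_gb, egress_gb, transactions):
        storage = storage_gb * STORAGE_PER_GB
        egress = max(0, egress_gb - FREE_EGRESS_GB) * EGRESS_PER_GB
        tx = transactions / 1_000_000 * TX_PER_MILLION
        return storage + egress + tx

    print(monthly_cost(5, 5, 10_000_000))       # ~0.5105
    print(monthly_cost(100, 100, 200_000_000))  # ~15.264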

What's the catch?

Well, there isn't really a catch as such, but Table Storage does have a very limited feature set when compared to something like SQL Server. That's not to say it's a bad thing, but it can be difficult not having some of the things that you're typically used to. You can read up much more on the differences between the two in Azure Table Storage and Windows Azure SQL Database - Compared and Contrasted. There are no foreign keys, for example, and joins and stored procedures don't exist either, but the biggest thing for me to get my head around was the lack of a row count feature. In Table Storage, if you want to keep track of your row count, you have to keep track of it yourself. If you don't, the only way to obtain it is to query out your entire dataset and count the records in it, an incredibly slow, inefficient and arduous task! In coming blogs I'm going to cover a lot of the problems that I hit whilst trying to adapt to using Table Storage, and how I adapted my implementation of the service to get the best possible performance and scale out of it: keeping track of the count of incoming reports, querying against potentially huge datasets efficiently, offloading my PHP session storage to Azure so that I could have truly ephemeral application servers behind my load balancers, and much, much more.
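
For what it's worth, the usual pattern for "keep track of it yourself" is a small counter entity updated with optimistic concurrency, so concurrent writers don't lose increments. A hedged sketch, again in Python with azure-data-tables; the entity layout and names are my own, not from this article:

    from azure.core import MatchConditions
    from azure.core.exceptions import HttpResponseError
    from azure.data.tables import TableClient, UpdateMode

    def increment_report_count(table: TableClient, site_id: str) -> None:
        while True:
            counter = table.get_entity(partition_key="counters", row_key=site_id)
            counter["ReportCount"] = counter.get("ReportCount", 0) + 1
            try:
                # The ETag check makes this write fail if another writer
                # updated the entity after we read it.
                table.update_entity(
                    entity=counter,
                    mode=UpdateMode.REPLACE,
                    etag=counter.metadata["etag"],
                    match_condition=MatchConditions.IfNotModified,
                )
                return
            except HttpResponseError:
                continue  # lost the race; re-read and try again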

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user7842 - PeerSpot reviewer
Owner with 51-200 employees
Vendor

An article written like this today should not leave out Azure DocumentDB, especially when talking about NoSQL. Table Storage is not real NoSQL; it is just a massive-scale key-value store.

PeerSpot user
Consultant at a tech services company with 51-200 employees
Consultant
Learning curve to get it up and running but it's scalable and flexible.

What is most valuable?

Scalability, affordability and the flexibility of the product.

How has it helped my organization?

  • You can move required services off-site to help mitigate risk
  • You can take advantage of the scalability to run short-term, high-intensity processes on their services
  • You can completely remove the need for any server hardware in your organisation

What needs improvement?

Probably the two main areas where things could be improved are getting direct console access to VMs, and the Azure backup solution adding more backup types (e.g. System State).

For how long have I used the solution?

3 months

What was my experience with deployment of the solution?

We went with the volume licensing of credits and still find it difficult to activate those credits, as there is a particular website you have to go through.

What do I think about the stability of the solution?

There have been documented issues with stability; however, we did not experience them.

What do I think about the scalability of the solution?

No issues encountered.

How are customer service and technical support?

Customer Service:

Good, I can shoot email questions and get responses in a good amount of time.

Technical Support:

I haven't had to raise a technical support case for Azure, though I have for Office 365, and I found it excellent.

Which solution did I use previously and why did I switch?

We looked at AWS, a locally based cloud provider and a datacentre.

How was the initial setup?

It was a learning curve to get it up and running. If I'd had prior training I would have found it straightforward, but the timelines for implementation meant I had to "dive in".

What about the implementation team?

In-house

What was our ROI?

We are able to provide services to clients that allow us to get a good ROI once we have deployed Azure for them.

What's my experience with pricing, setup cost, and licensing?

  • Setup cost with Azure is minimal for what they are supplying. Everything takes less than 10 minutes to deploy.
  • Day-to-day costs are what you use; we can now review those costs and look at the new features (Automation) to make them even more efficient.

Which other solutions did I evaluate?

We looked at AWS, a locally based cloud provider and a datacentre.

What other advice do I have?

Get on board with Microsoft and the Azure team, and listen out for their partner training. They did a big Azure for IT Pros session via their Channel 9 (MSDN) a few weeks ago. There are plenty of webinars and e-books which will teach you what you want to know.

Disclosure: My company has a business relationship with this vendor other than being a customer. Microsoft Partner
PeerSpot user
PeerSpot user
CTO at a healthcare company with 51-200 employees
Vendor
The management console needs work but we would not be building the Healthcare IT application if it were not for Azure.

What is most valuable?

The fully integrated capabilities of a PaaS service.

How has it helped my organization?

Basically we would not be building the Healthcare IT application if it were not for Azure.

What needs improvement?

The management console needs work, and the pricing calculator is truly user-hostile.

For how long have I used the solution?

I have been using the solution for 3 years.

What was my experience with deployment of the solution?

We had some issues with functional differences between Azure Active Directory and how AD is implemented on a standalone server.

What do I think about the stability of the solution?

We have had a few outages with Visual Studio Online, though they have not had a major impact on our work.

What do I think about the scalability of the solution?

None so far.

How are customer service and technical support?

Customer Service:

Mixed. Microsoft still has too many support queues, and if you end up in the wrong one you can spend hours being bounced from one to the other. On the other hand, once you figure out how to get to the Business support queue, response times are fantastic for “business outage” issues.

Technical Support:

When in the right queue, the understanding is unparalleled.

Which solution did I use previously and why did I switch?

I have previously used AWS and Amazon Elastic Beanstalk. Azure is a PaaS, whereas AWS is a server stack environment, and thus AWS does not provide the OpEx cost reductions that Azure does. Elastic Beanstalk is a PaaS, but it is not as fully developed as Azure's .NET platform.

How was the initial setup?

Very straightforward. In fact, almost too much so. It's easy to overthink the setup and waste time looking for things you don't need to be looking for.

What about the implementation team?

Combination of in-house and vendor. Vendor team was new to Azure.

What was our ROI?

Full ROI on the current project is still to be determined as we are not shipping yet. But I would estimate a 100% cost reduction over the traditional way of building a server based app.

What's my experience with pricing, setup cost, and licensing?

Our monthly dev costs for infrastructure are running $300-$500 for the initial work, and we expect full OpEx costs of $1,000-$3,000/mo once we are launched.

Which other solutions did I evaluate?

AWS and Elastic Beanstalk.

What other advice do I have?

Have a team that understands .NET development, and particularly someone who understands Active Directory very well.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Owner with 51-200 employees
Vendor
Implementation is simple but anything to do with referencing other packages is simply frustrating

What is most valuable?

The ability to kick-start a new web application or website and have it up and running in a matter of minutes still amazes me.

How has it helped my organization?

Deploying a new release of an existing web application has now become a matter of right-clicking the name of the project and clicking ‘publish’; the same activity before might have taken hours to days, depending on the complexity of the app.

What needs improvement?

Anything to do with referencing other packages is simply frustrating. This is mostly a problem with the Windows framework and not the hosting service, but it is still the most time-consuming and irritating thing about using ASP.NET.

Which solution did I use previously and why did I switch?

Yes; back in the day we used to install physical servers, then we went to manually managed virtual servers, then to hosted virtual servers, then to Amazon machines, and now we have switched to Windows Azure.

How was the initial setup?

As simple as can be.

What about the implementation team?

We do all technical work in house and with Azure there was really no need for external assistance.

What's my experience with pricing, setup cost, and licensing?

When starting out I became a member of the BizSpark program, so the initial cost was 0 (FREE!). On customers’ projects the cost can go from $50 to $200 per month, which is really cost-effective for them.

Which other solutions did I evaluate?

Yes, for every project we will compare Azure with Amazon, with virtual servers and with a physical server. But in almost every case, choosing Azure will be a no-brainer.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user150756 - PeerSpot reviewer
Director of Engineering at a tech services company
Consultant

How do you go about solving L7 load balancer issues in Azure? Also, when you want to do L7 redirection from one datacenter to another, how do you solve that in Azure?

it_user7839 - PeerSpot reviewer
Consultant at a tech vendor with 51-200 employees
Vendor
Java and Azure

The Announcement

Microsoft recently announced a partnership with Oracle which brings a number of Oracle technologies to the Windows Azure cloud.

In short, they announced:

  • Hyper-V (i.e. the virtualisation technology underpinning Azure) is now certified to run Oracle software.
  • Virtual Machine images will be available with the Oracle Database and WebLogic preconfigured.
  • Properly licensed and supported Java on Azure.

Items 1 & 2 I’m not that excited about; it’s item 3 which is interesting.

The current state of Java on Azure

Microsoft have supported Java on Windows Azure since the start. There’s an SDK and tooling built into Eclipse.

However, Microsoft haven’t been able to install Java for you; you’ve had to do that yourself. This has meant that the package you deploy to Azure has had to contain the Java installer, any frameworks and web servers you need, as well as your application code. This makes the package too large to work with, and in some cases too large to even upload. It certainly slows down application updates.

Utilities such as AzureRunMe have helped bridge this gap, by splitting your package into separate zip files in blob storage which are only downloaded when required. However, ultimately you have to do more work to get Java running, and it takes longer to spin machines up.

Having said that, Java applications work surprisingly well in Azure. Applications are coded against the JVM, rather than the operating system and make relatively few assumptions about the environment. They also tend to use ORMs (like Hibernate) giving you a simple database schema which is easy to port to Windows Azure SQL Database.

I often find that Java applications are quicker to get running in Azure than similar .NET apps.

The JVM

When we talk about Java here, we’re really talking about the Java Virtual Machine (JVM), the runtime which hosts Java applications.

Java is just one of the languages supported by the JVM; there has recently been a small explosion in language options on the JVM, including:

  • Clojure
  • Scala
  • Groovy
  • Jython
  • JRuby
  • Kotlin

…to name just a few.

What does this mean for the future?

I haven’t got any special knowledge here, but there are a few things Microsoft could do now:

  1. Provide a ‘Java’ role in Cloud Services. This would have Java and (optionally) Tomcat pre-installed, making deployment of Java applications faster and easier.
  2. Enable the JVM as a hosting option on Windows Azure Websites (alongside Python, .NET, PHP and Node.js).
  3. JVM support gives you Ruby (using JRuby). There’s already a Ruby SDK for Azure, and JRuby seems to be the fastest Ruby runtime (AFAIK). This is potentially true for this long list of languages too.
  4. This is probably good news for Hadoop on Azure.
  5. Enterprise Java developers should certainly take note. The capabilities of the Azure Service Bus, coupled with competent PaaS and IaaS offerings and the low-cost SQL Database, make Azure an attractive option.

Hold On, Load balancing…

Java web applications (in my experience) often hold large object graphs in memory, as state stored against each user session. This means that sticky sessions are required, and the Azure load balancer in Cloud Services is round-robin (sort of). Sticky sessions aren’t very cloud friendly, but it’s difficult to make a legacy application stateless.

Whilst there are ways to work around this, they all rely on ‘un-balancing’ the load balancer, and will frequently add network hops and overhead to the processing of each request. We need to be able to select a load balancing strategy on endpoints configured in Azure (i.e. a round robin/sticky/performance-based decision).

As a side note, the load balancing strategy for Windows Azure Websites is sticky.

Conclusion

Better support for Java on the Microsoft cloud goes well beyond one language, and unlocks a number of possibilities, such as better support for Ruby and Hadoop.

Load balancing is one pain point, but something we know they can fix in the platform.

Azure remains an exciting place for everyone from the smallest startup, to the largest enterprise.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Owner with 51-200 employees
Vendor
Windows Azure Migration cheat-sheet

I was recently asked whether I have a cheat-sheet for migrating applications to Windows Azure. The truth is that everything is in my head, and I usually go with “it should work”: quickly build, pack and deploy, then troubleshoot the issues. However, there are certain rules that must be obeyed before making any attempt to port to Windows Azure. Here I will try to outline some.

Disclaimer

What I describe here is absolutely my sole opinion, based on my experience. You are free to follow these instructions at your own risk. I describe key points in migrating an application to the Windows Azure Platform-as-a-Service offering – the regular Cloud Services with Web and/or Worker Roles. This article is not intended for migrations to Infrastructure Services (or Windows Azure Virtual Machines).

Database

If you work with Microsoft SQL Server it should be relatively easy going. Just download, install and run the SQL Azure Migration Wizard against your local database. It is the tool that will migrate your database, or will point you to features you are using that are not compatible with SQL Azure. The tool is regularly updated (the latest version is from a week before I wrote this blog entry!).

Migrating schema and data is one side of things. The other side of database migration is in your code, in how you use the database. For instance, SQL Azure does not accept the “USE [DATABASE_NAME]” statement. This means you cannot change database context on the fly; you can only establish a connection to a specific database, and once the connection is established, you can work only in the context of that database. Another limitation, which comes as a consequence of the first one, is that 4-part names are not supported, meaning that all your statements must refer to database objects omitting the database name:

[schema_name].[table_name].[column_name],

instead of

[database_name].[schema_name].[table_name].[column_name].
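
In practice this simply means opening one connection per database instead of switching context. A minimal illustration in Python with pyodbc (the server, driver and credentials are placeholders):

    import pyodbc

    def connect(database: str) -> pyodbc.Connection:
        return pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=tcp:yourserver.database.windows.net,1433;"
            f"DATABASE={database};UID=youruser;PWD=yourpassword"
        )

    # No "USE Reporting" on an existing connection; open a second one instead.
    sales = connect("Sales")
    reporting = connect("Reporting")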

Another issue you might face is the lack of support for SQLCLR. I once worked with a customer who had developed a .NET assembly and installed it in their SQL Server to provide some useful helper functions. Well, this will not work on SQL Azure.

Last but not least: (1) never expect SQL Azure to perform better than, or even equal to, your local database installation, and (2) be prepared for so-called transient errors in SQL Azure and handle them properly. You had better get to know the Performance Guidelines and Limitations for Windows Azure SQL Database.
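
The standard way to handle transient errors is retry with backoff. A hedged sketch of the pattern in Python (the error numbers mentioned in the comment are examples of documented SQL Azure transient codes; production code should check them explicitly):

    import time

    import pyodbc

    def with_retries(operation, attempts=5, base_delay=0.5):
        for attempt in range(attempts):
            try:
                return operation()
            except pyodbc.Error:
                # Real code should inspect the SQL Server error number and
                # only retry known transient codes (e.g. 40501, 40613).
                if attempt == attempts - 1:
                    raise
                time.sleep(base_delay * 2 ** attempt)  # exponential backoff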

Codebase

Logging

When we target our own server (including co-located/virtual/shared/etc.), we usually use the local file system (or a local database?) to write logs. Owning a server makes diagnostics and tracing super easy. This is not really the case when you move to Windows Azure. There is a feature of the Windows Azure Diagnostics Agent to transfer your logs to blob storage, which will let you just move the code without changes. However, I challenge you to rethink your logging techniques. First of all, I would encourage you to log almost everything, of course using different logging levels which you can adjust at runtime. Pay special attention to Windows Azure Diagnostics, and don’t forget: you can still write your own logs, but why not throw some useful log information to System.Diagnostics.Trace?
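
The author's advice is .NET-flavoured (System.Diagnostics.Trace), but the principle, log everything at appropriate levels and adjust the threshold at runtime, translates to any stack. In Python, for example:

    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("myapp")

    logger.debug("cache miss for key %s", "user:42")  # suppressed at INFO
    logger.info("request handled in %d ms", 12)
    logger.warning("blob upload retried")

    # Turn verbosity up at runtime (e.g. from a config setting) instead of
    # redeploying with more log statements.
    logger.setLevel(logging.DEBUG)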

Local file system

This is a tough one, and it almost always requires code changes and even re-architecting some parts of the application. When going into the cloud, especially the Platform-as-a-Service one, do not use the local file system for anything other than temporary storage and static content that is part of your deployment package. Everything else should go to blob storage. And there are many great articles on how to use blob storage here.

Now you will probably say, “Well, yeah, but when I put everything into blob storage, isn’t that vendor lock-in?” And I will reply: it depends on how you implement it! Yes, I already mentioned it will certainly require code changes and, if you want to do it the best way and avoid vendor lock-in, it will probably also require architecture changes to how your code works with files. And by the way, the file system is also “vendor lock-in”, isn’t it?
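
One way to keep that architecture change from becoming lock-in is to hide file access behind a small interface, so blob storage is just one implementation. A sketch in Python (the interface and class names are illustrative, not from this post):

    from abc import ABC, abstractmethod

    class FileStore(ABC):
        @abstractmethod
        def save(self, name: str, data: bytes) -> None: ...

        @abstractmethod
        def load(self, name: str) -> bytes: ...

    class LocalFileStore(FileStore):
        """Local-disk implementation for development or on-premises hosting."""
        def __init__(self, root: str):
            self.root = root

        def save(self, name: str, data: bytes) -> None:
            with open(f"{self.root}/{name}", "wb") as f:
                f.write(data)

        def load(self, name: str) -> bytes:
            with open(f"{self.root}/{name}", "rb") as f:
                return f.read()

    # A BlobFileStore implementing the same interface would wrap the Azure
    # blob SDK; application code depends only on FileStore and never notices.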

Authentication / Authorization

It would not be me if I didn’t plug in here. Your application will typically use Forms Authentication. When you redesign your app anyway, I highly encourage you to rethink your auth/authz system and take a look into claims! I have a number of posts on claims-based authentication and Azure ACS (Introduction to Claims; Securing ASMX web services with SWT and claims; Identity Federation and Sign-out; Federated authentication - mobile login page for Microsoft Account (Live ID); Online Identity Management via Azure ACS; Creating a Custom Login page for federated authentication with Azure ACS; Unified identity for web apps - the easy way). And a couple of blogs I would recommend you follow in this direction:

Other considerations

For the moment I can’t dive deeper into the Azure ocean of knowledge, so I will pull out something really important that fits all types of applications; if anything else comes up, I will update the content. Things like COM/COM+/GDI+/Server Components/Local Reports should all work in a regular WebRole/WorkerRole environment, where you also have full control to manipulate the operating system! Windows Azure Web Sites is far more restrictive (to date) in terms of what you can execute there and what part of the operating system you have access to.

Here is something for you to think on. I worked with a customer who was building a SPA application to run in Windows Azure. They had designed a scaling bottleneck into their core: the system manipulates some files, and it is designed to keep object graphs of those files in memory. It is also designed in a way that the end user may upload as many files as they want during the course of their interaction with the system, and the back-end keeps a single object graph for all the files the user submitted in memory. This object graph cannot be serialized. Here is the situation:

In Windows Azure we (usually, and to comply with the SLA) have at least 2 instances of our server. These instances are load balanced using a round-robin algorithm. The end user comes to our application, logs in and uploads a file. Works, works, works; every request is routed to a different server. Now the user uploads a new file, and again, and again ... each request still goes to a different server.

And here is the question:

What happens when the server side code wants to keep a single object graph of all files uploaded by the end user?

The solution: I leave it to your brains!

Conclusion

With the above key points in mind, I highly encourage you to play around and test. I might update this blog post if something rather important comes out of the deep ocean of Azure knowledge. But for the moment, these are the most important check-points for your app.

If you have questions – you are more than welcome to comment!

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Owner with 51-200 employees
Vendor
Session Affinity and Windows Azure

Everybody is speaking about the recently announced partnership between Microsoft and Oracle on the enterprise cloud. Java has been a first-class citizen on Windows Azure for a while, and was available via tools like AzureRunMe even before that. Most of the customers I've worked with are using Apache Tomcat as a container for Java web applications. The biggest problem they face is that Apache Tomcat relies on Session Affinity.

What is Session Affinity, and why is it so important in Windows Azure? Let's rewind back a little to this post I've written. Take a look at the abstracted network diagram:

So we have 2 (or more) servers that are responsible for handling web requests (Web Roles) and a Load Balancer (LB) in front of them. Developers have no control over the LB, and it uses one and only one load balancing algorithm: round robin. This means that requests are evenly distributed across all the servers behind the LB. Let's go through the following scenario:

  • I am web user X who opens the web application deployed in Azure.
  • The Load Balancer (LB) redirects my web request to Web Role Instance 0.
  • I submit a login form with my user name and password. This is the second request, and it goes to Web Role Instance 1. This server now creates a session for me and knows who I am.
  • Next I click the "my profile" link. The request goes back to Web Role Instance 0. This server knows nothing about me and redirects me to the login page again! Or, even worse, shows some error page.

This is what will happen if there is no Session Affinity. Session Affinity means that if I hit Web Role Instance 0 the first time, I will hit it every time after that. There is no Session Affinity provided by Azure! And in my personal opinion, Session Affinity does not fit well (does not fit at all) in the cloud world. But sometimes we need it, and most of the time (if not all of the time), it is when we run non-.NET code on Azure. For .NET there are things like Session State Providers, which make a developer's life easier! So the issue remains mainly for non-.NET stacks (Apache, Apache Tomcat, etc.).
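
A toy simulation of the scenario above makes the failure obvious: with round robin and in-memory sessions, the session created on one instance is invisible to the other (illustrative Python, nothing Azure-specific):

    instances = [{"sessions": set()} for _ in range(2)]

    def handle(request_number: int, user: str, login: bool = False) -> str:
        instance = instances[request_number % len(instances)]  # round robin
        if login:
            instance["sessions"].add(user)
            return "logged in"
        return "ok" if user in instance["sessions"] else "redirected to login!"

    print(handle(0, "X", login=True))  # instance 0 creates the session
    print(handle(1, "X"))              # instance 1 has never heard of X
    print(handle(2, "X"))              # back on instance 0: fine again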

So what do we do when we want Session Affinity with non-.NET web servers? Use the SessionAffinity or SessionAffinity4 plugin. This is basically the same "product", but the first one is for use with Windows Server 2008 R2 (OS Family = 2) while the second one is for Windows Server 2012 (OS Family = 3).

I will explain in the next post what the architecture of these plugins is and how exactly they work.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user8577 - PeerSpot reviewer
Director of Infrastructure at a tech consulting company with 51-200 employees
Consultant
When to use different Azure IaaS storage types…

I've been using Azure IaaS in a lot of enterprise deployments lately, and I've noticed that there is some confusion regarding the different storage types available and provisioned for the virtual machines. In many ways, the capabilities associated with Azure storage are its greatest strength, but unless you configure it properly, you might be in for a surprise regarding your results. The key message is to understand the difference between the operating system disk, the temporary disk, and data disks, as they have different performance characteristics and will impact your systems in different ways when used correctly or incorrectly.

The operating system disk:

This disk is used for the operating system install, and it will exhibit great read performance. It is not, however, scalable for write performance, so you shouldn't use it for any write-centric or data-centric use. It would NOT be the place to put your Microsoft SQL data or your file server.

The cache / temporary disk

The cache disk is used for temporary data that you don't want to keep. It might seem like the data is retained, but eventually you will find this disk refreshed when the system is booted back up or undergoes a "repair". It is really only appropriate for data you are prepared to lose.

The data disk

The data disk is where you should put any of your important information, especially databases and file stores. The data here can be effectively scaled out by striping several data disks together. A rule of thumb is that each data disk is worth approximately 500 IOPS; if you stripe several together you'll see that number increase. At this point you might find it helpful to run some tests against the disks you've allocated, to ensure you've added the appropriate IOPS for your capacity requirement. I'll note that the disk IOPS will increase as the disk is used, which is a component of the caching engine of the data disk type. The cool thing about data disks is that they are easy to provision, and you can create stripes of up to 16 disks, which will provide excellent scalability for your application.
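
A back-of-the-envelope helper for the rule of thumb above (treating the ~500 IOPS per disk and the 16-disk stripe limit as given):

    IOPS_PER_DATA_DISK = 500
    MAX_STRIPED_DISKS = 16

    def estimated_iops(disks: int) -> int:
        """Rough IOPS estimate for a striped set of Azure data disks."""
        return min(disks, MAX_STRIPED_DISKS) * IOPS_PER_DATA_DISK

    print(estimated_iops(4))   # 2000
    print(estimated_iops(16))  # 8000, the practical ceiling for one volume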

The key point?  Use the right disks for the right things.  If you don’t, then you’ll get a different performance experience than you’re expecting.  Now move some workloads to Azure and take advantage of the scalability!

Want to learn more?  Check out the Azure internals session from TechEd!

Azure Internals

Also, check out Azure Storage Testing, which compared a standard Azure hard disk against a local SSD and a small server. This performance can be improved by striping multiple Azure disks together.

Disclosure: The company I work for is a Microsoft Partner

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user