Testing your disaster recovery plan is crucial to ensure that your organization is prepared to minimize downtime in the event of a disaster. Here are some steps you can take to test your disaster recovery plan:
Define Test Scenarios: Define test scenarios that simulate real-world disaster scenarios. These scenarios should be designed to test specific aspects of your disaster recovery plan, such as data recovery, network failover, and application availability.
Involve All Relevant Parties: Involve all relevant parties in the testing process, including IT staff, business unit leaders, and third-party vendors. This will help to ensure that everyone is on the same page and understands their role in the event of a disaster.
Document Test Results: Document the results of each test scenario, including any issues or areas for improvement. This will help you to refine your disaster recovery plan and ensure that it's as effective as possible.
Test Regularly: Test your disaster recovery plan on a regular basis, such as quarterly or twice a year. This will help ensure that your plan remains up to date and effective in the face of evolving threats and technologies.
Automate Where Possible: Automate as much of the testing process as possible, such as data replication, failover, and recovery. This will help to minimize the risk of human error and improve the overall efficiency of your disaster recovery plan.
By following these steps, you can verify that your disaster recovery plan minimizes downtime and that your organization is prepared to recover quickly from a disaster.
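The automation step above can be sketched as a small verification script. This is a minimal illustration, not tied to any particular backup product: it assumes a backup has already been restored to a staging directory and simply checks the restored files against the source by checksum.

```python
import hashlib
from pathlib import Path


def checksum(path: Path) -> str:
    """SHA-256 of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_restore(source_dir: str, restored_dir: str) -> list:
    """Return a list of files that are missing or corrupt in the restore."""
    source, restored = Path(source_dir), Path(restored_dir)
    failures = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        rel = src_file.relative_to(source)
        dst_file = restored / rel
        if not dst_file.is_file():
            failures.append(f"missing: {rel}")
        elif checksum(src_file) != checksum(dst_file):
            failures.append(f"corrupt: {rel}")
    return failures
```

A scheduled job could run a check like this after each automated test restore and alert on any non-empty result, documenting the outcome as described above.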
SQL Database Administrator at Aurora Mental Health Center
Mar 17, 2023
The key to recovery from a ransomware attack is the Boy Scout motto: "Be Prepared". In our case, not only did we have backups at the DR site, but the production site and the DR site each had a NAS on a different subnet with different admin passwords holding backup copies, so four backups in total. We were also using iSCSI connections to our SAN, which the ransomware was not able to cross when it polluted the connection file. This was an unexpected bonus. We were basically back up and running in 4 hours after wiping and restoring files. The lesson learned was to separate as much as possible, so that if one part of the domain/forest gets corrupted, the infection cannot travel to other areas. We now use Veeam for Hyper-V Windows VMs and Zerto for VMware VMs, another separation of business functions with different admin passwords. Nothing is foolproof, but making an attack as difficult as possible buys more time to catch and stop it sooner.
We’re launching an annual User’s Choice Award to showcase the most popular B2B enterprise technology products and we want your vote!
If there’s a technology solution that’s really impressed you, here’s an opportunity to recognize that. It’s easy: go to the PeerSpot voting site, complete the brief voter registration form, review the list of nominees and vote. Get your colleagues to vote, too!
Welcome back to PeerSpot's Community Spotlight! Below you can find the latest hot topics posted by your fellow PeerSpot Community members. Read articles, answer questions, and contribute to discussions that are relevant to you and your expertise. Or ask your peers for insight on topics that interest you!
Here are some topics that your peers are discussi...
Director of Community at PeerSpot (formerly IT Central Station)
Aug 2, 2022
@Chris Childerhose, @PraveenKambhampati, @Deena Nouril, @Shibu Babuchandran and @reviewer1925439,
Thank you for contributing your articles and sharing your professional knowledge with 618K PeerSpot community members around the globe, as well as with a much larger reader audience!
Every virtualization and system administrator needs the ability to recover servers, files, etc., and a backup solution eases that burden. But how do you know which one is right for you? How would you go about choosing the right solution to help you in your daily tasks?
When choosing a backup solution there are many things t...
Dear PeerSpot community members,
Welcome to the latest PeerSpot Community Spotlight, where we sum up the most relevant recent postings by your peers in the community.
Check out the latest questions, articles and professional discussions contributed by PeerSpot community members!
Here are some topics that your peers are discussing at the moment:
What is your recomme...
Hi PeerSpot community members,
This is a fresh-from-the-oven Community Spotlight for you. Here, we've summarized and selected the latest posts (professional questions, articles and discussions) by PeerSpot community members. Check them out!
Also, please share with us your feedback and suggestions by commenting below!
See what is trending at the moment and chime in to discuss!
I would suggest Veeam with the underlying storage being provided by a Pure FlashArray//C.
The FlashArray will provide the throughput you are after (it's all-flash), encryption (FIPS 140-2 certified, NIST compliant), and data reduction (Veeam's isn't that great), which should bring it to price parity with spinning disk. It also provides immutability, which you may need, and is a certified solution with Veeam.
The other storage platform worth looking at is VAST Storage, which has roughly the same feature set as the Pure arrays but uses a scale-out, disaggregated architecture and wins hands down in the throughput race against the Pure arrays.
I don't think backup appliances with 100Gbps interfaces exist.
That speed is not needed for backups, as the network is hardly ever the bottleneck.
Nowadays Cisco and other vendors are offering 25Gb and 100Gb ports. Your physical servers or ESXi hosts (including backup servers) should be planned so they can connect to these switches and take advantage of a 100Gb pipe. Data Domain, HPE StoreOnce, and Quantum DXi support hardware encryption; identify the right hardware model that supports the right I/O for your disk backups. This will eliminate the bottleneck once you have the 100Gb network. On the software side you can go with NetBackup, Veeam, or Commvault; each has its own option to reduce the data flow through client-side deduplication.
This is not quite the right question: backup performance involves different areas (software, network, and storage), and they are measured by more than simple network bandwidth.
You can buy the best hardware in the world and still get poor performance; for example, streamed sequential data can perform better on rotational drives than on flash drives. In a backup process, you must consider data size and type, storage type, IOPS and throughput, the backup time window, and much more.
The backup speed depends on:
- number of concurrent I/O streams
- data type
- read/write speed of backup repository
- whether data encryption is enabled
- terabytes of front end data to be backed up
The question is not detailed enough to size a highly scalable, high-throughput environment. To achieve 100Gbps throughput, you need to work through the information listed above.
For a very large environment, I strongly recommend using either NetBackup or CommVault.
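To make the sizing concrete, a rough back-of-the-envelope calculation ties the factors above together. This is a simplified sketch that assumes decimal units (1 TB = 8,000 gigabits) and sustained line-rate throughput, ignoring protocol overhead and small-file ramp-up:

```python
def backup_window_hours(front_end_tb: float,
                        throughput_gbps: float,
                        dedupe_ratio: float = 1.0) -> float:
    """Hours needed to move front-end data at a sustained network rate.

    dedupe_ratio models client-side deduplication: at 2.0, only half
    the front-end data crosses the wire.
    """
    gigabits_on_wire = front_end_tb / dedupe_ratio * 8000  # 1 TB = 8000 Gb
    seconds = gigabits_on_wire / throughput_gbps
    return seconds / 3600


# 100 TB of front-end data at a sustained 100 Gbps:
print(round(backup_window_hours(100, 100), 1))                    # 2.2 hours
# The same data at 10 Gbps with 2:1 client-side dedupe:
print(round(backup_window_hours(100, 10, dedupe_ratio=2.0), 1))   # 11.1 hours
```

The comparison shows why several respondents here question the need for 100Gbps: client-side dedupe and a realistic backup window often matter more than raw link speed.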
There is no such thing as the best "anything", let alone backups. There are plenty of enterprise solutions on the market that can handle the load you mentioned, and it all comes down to your needs.
Hardware encryption may be more secure (tougher, though not impossible, to hack) than software encryption; however, it opens the door to vendor lock-in, and in certain situations that can affect the recoverability of your data.
My advice is to focus on a backup solution that can guarantee the recoverability of your data in the event of a disaster, rather than on the best backup at 100Gbps with hardware encryption.
At the end of the day, what's the point of a backup solution if it can do everything you mentioned but fails you in the event of a disaster?
If you can give more environment details, such as what platforms and apps are in use, I may be able to assist further; otherwise, my answer is that there is no such thing as the best backup for 100Gbps with hardware encryption.
We live in a world where everything is software-defined and it's safe to say that that's the way everyone should go.
We use the smallest Cohesity cluster possible, with three nodes, and have 60Gbps of available bandwidth. I assume with more nodes you could get to 100Gbps. They have flash and an unbelievable filesystem. Do you have a use case for 12,500 megabytes per second of backup throughput? I'm having trouble envisioning an admin in charge of a source capable of that coming to a forum like this with your exact question!
It seems an object storage platform with inline dedupe could fit, but it would need to be sized for the performance; backup targets are typically tuned for ingest. Is the data dedupable or compressible? How much data are you looking to back up, and in how much time? How much data do you need to restore, and in how much time?
Your question isn't clear enough to calculate the best scenario, because many factors come into play, such as:
- Backup of what: a physical or virtualized environment?
- Network speed on all devices.
- Storage type: flash or tape.
- What is the read/write speed of your disks/tape, and the bus/controller speed the disk is attached to?
- How many files, and how much data, are you backing up?
- Is your backup application capable of running multiple jobs and sending multiple streams of data simultaneously?
Some potential points for improvement might include:
Upgrading switches and Ethernet adapters to Gigabit Ethernet or greater.
Investing in higher-performing disk arrays or subsystems to improve read and write speeds.
Investing in LTO-8 tape drives, and considering a library if you are not already using one, so that you can leverage multiplexing (multistreaming) to tape.
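The multistreaming point above can be modelled with simple min() arithmetic: aggregate ingest is capped by whichever is slower, the sum of the concurrent streams or the target's write speed. The figures below (e.g. roughly 360 MB/s native for an LTO-8 drive) are illustrative assumptions, not measurements:

```python
def aggregate_throughput_mb_s(streams: int,
                              per_stream_mb_s: float,
                              target_write_mb_s: float) -> float:
    """Effective ingest rate: limited either by the sources or by the target."""
    return min(streams * per_stream_mb_s, target_write_mb_s)


# One 100 MB/s client stream leaves an LTO-8 drive (~360 MB/s) mostly idle:
print(aggregate_throughput_mb_s(1, 100, 360))   # 100
# Multiplexing four such streams lets the drive run at full speed:
print(aggregate_throughput_mb_s(4, 100, 360))   # 360
```

This is also why tape in particular benefits from multiplexing: a drive fed below its native rate can fall out of streaming mode and perform far worse than the arithmetic suggests.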
To reach that read and write speed, other factors also play a role: for example, network topology, NIC speeds, and how fast the backup client can deliver data.
Aside from that, you'll need larger files to reach that speed, since with smaller files there is always a speed ramp-up time.
So there is no straightforward answer.
But what kind of data or machines are they trying to back up? Knowing the OS, database, and types of apps will help give a definite answer.
Solutions that will always deliver are NetBackup (all apps, OSes, and DBs), Backup Exec (MS apps, Windows and Linux, and some DBs), and Veeam.
While we do not sell or offer backup software per se, we do work with a lot of providers like Commvault, Rubrik, Veeam, et al. I can say that a majority of our user base, large global companies with 10+ offices, use Rubrik and implement it with the generic S3 output pointing to s3.customer_name.rstor.io. As for the RStor pricing model, we do not charge for PUTs/GETs, reads/writes, or ingress/egress fees. And with triple geographic replication as a standard offering, customer data moves fast in all regions over a super-fast network with multiple 100G+ connections LAGged together, transferring 1PB in 22.5 hours from SJC to LHR!
There are plenty of tools out there at the moment, many include features like data encryption, e-discovery, and instant restore.
For the current use case, small company/no data center - I would recommend Acronis.
The commercial version of the product even includes a proprietary feature called Active Protection, a ransomware defense tool unlike anything else on the market.
There are a lot of details you would need to provide to properly answer your question. How much and what type of data? Is the data being accessed in production at the same time it is being ingested? Can you use an agent on a client to do things like client-side dedupe? The source (storage) is most likely going to be the bottleneck. Without the answers to these questions and many more, I am not sure I can answer properly.
Having been in the industry for 20+ years and constantly staying up on the technology, I can tell you that the fastest backup and recovery solution I have seen to date is:
Source - Pure Storage FlashArray (encryption is always on)
Data type - VMware 6.x VMDK and VVol (including VMs with DBs and apps)
Backup software - NetBackup 8.x using the CloudPoint snapshot manager
Storage target - Pure Storage FlashArray over 16Gb FC
Backups and Recovery were extremely fast (seconds).
But with everything... the devil is in the details and mileage will vary.
Personally, I would recommend looking at ExaGrid. It is the fastest backup and recovery target, and it can scale from 3TB to 2PB. It also supports 1GbE, 10GbE, or 40GbE connection speeds. Any appliance greater than 7TB can come with encrypted disks. Any backup software works with it except proprietary ones such as Avamar, Rubrik, and Cohesity. Also, it has no forced obsolescence like NetBackup or other such appliances.
There is any number of good backup software options. If the environment covers both physical and virtual machines, I would suggest trying NetBackup 8.1 or EMC NetWorker. Commvault is very good too; however, its license and maintenance costs are higher than Veritas or EMC.
Apart from file-system-level backup, for any Exchange, SQL, Oracle, or NAS-level backup, they are good at managing backup and recovery.
The throughput depends entirely on the media server bandwidth, switch connectivity, FC/FCoE connections, and efficient backup resources for the media servers, with careful planning and design.
For an exact recommendation, as everyone here is suggesting, we need the details and required capacity of the overall environment.
You might approach the backup software vendor for a POC test and demo overview before you finalize any tool.
There isn't a single backup target device/appliance on the market doing 100Gbps throughput. Achieving that number requires multiple appliances, like HPE StoreOnce. It also demands a lot from the primary disk array and infrastructure to deliver 100Gbps, i.e., multiple mid-range/high-end all-flash disk arrays, etc.
In my experience, I generally prefer NetBackup appliances for fast backup and recovery, including encrypted data.
We've gone for Cohesity. It's a clustering solution that distributes backups over the nodes, which gives you more bandwidth as you grow the cluster. So it mostly depends on the amount of data you need to back up in the environment.
Possibly Commvault can do it, but what's the data type? If the files are too small, no backup program will show 100Gbps. And what kind of backup repository will you use? Perhaps an all-flash repository could handle this I/O.
What are your RPO and RTO requirements, and what SLAs do you have for your clients? Backup and recovery aren't normally performance-driven, since this isn't tier 1 storage.
Cohesity is the solution. Parallel ingest provides the best performance and an impenetrable file system, with native FIPS 140-2 encryption.
Look into the backup tools provided by Veeam or Zerto.
It all depends on the server/storage technology being deployed.
NetBackup 8.x with the NetBackup 5340 appliance would be good for backing up any enterprise environment at high speed.
For more detailed suggestions, please share your backup environment: server count, types of backups, volumetrics per month, data retention, etc.
Usually, you need to back up the data, not the hardware. Please share your current design, and then we will be able to share our thoughts.
In general, it can be any backup software you are familiar with; you just need to size the hardware for it properly.