Veeam + NetApp = Availability in the datacenter incl. v9

Since version 8, Veeam’s Availability Suite (Backup & Replication) has delivered a deep integration with NetApp FAS systems. Today there are several ways to back up virtualized workloads. In this post we will look at the available options and the benefits of each of them.

1. Backup from storage snapshot
Looking back a few years, the first available backup solution for VMware was the VMware Consolidated Backup (VCB) framework, which used scripts to create image-level backups of individual VMs. Since then, most backup applications have offered three main transport modes: network-based backup (NBD), SAN-based backup, and hot-add. Depending on your network interfaces, network mode is the worst choice, as it typically uses only around 60% of the ESXi management interface bandwidth. SAN mode can speed up the backup because it reads the data directly out of the datastore over the SAN. But all of these modes share the same problem. We live in an IT world where data is exploding; if you look at VMs, it is common to see VMs with 1 TB or more of disk space. The main problem with the transport modes above is that a VMware snapshot is created at the beginning of the backup and remains open until the job completes. For a 1 TB backup that can be a long time. While the job runs, the VMware snapshot keeps growing and the VM gets slow. And by far the worst point: once the job is complete, committing the VMware snapshot can sometimes take hours.

To avoid that scenario, Veeam introduced Backup from Storage Snapshots (BfSS) in version 8.

BfSS works differently. At the beginning a VMware snapshot is still created, just as before. But right after the VMware snapshot, a NetApp snapshot is created on the volume to preserve the consistent VM state at the NetApp level. The nice thing is that Veeam does not require any complex volume design or raw device mappings to provide application consistency. In a simple design you can keep all your VMs (AD, SQL, Exchange, …) in one volume and still get application consistency. Once the NetApp snapshot has been created successfully, the VMware snapshot is removed and the data is backed up directly from the NetApp snapshot. With this technology the time a VMware snapshot remains open is reduced from hours to a few minutes. After the backup is completed, the NetApp snapshot is removed.
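The BfSS sequence described above can be sketched as a short ordered list of steps. The following is a minimal Python simulation of that order of operations; the function and volume names are hypothetical placeholders, not the Veeam or NetApp API. The point is only to show how briefly the VMware snapshot stays open relative to the rest of the job.

```python
def backup_from_storage_snapshot(vm, volume):
    """Simulate the BfSS steps, recording each action in order."""
    log = []
    # 1. Short-lived VMware snapshot for application consistency
    log.append(f"create VMware snapshot of {vm}")
    # 2. NetApp snapshot freezes the consistent state on the volume
    log.append(f"create NetApp snapshot on {volume}")
    # 3. The VMware snapshot is released after minutes, not hours
    log.append(f"remove VMware snapshot of {vm}")
    # 4. Backup data is read from the NetApp snapshot, not the live datastore
    log.append(f"read backup data from NetApp snapshot on {volume}")
    # 5. Cleanup once the job completes
    log.append(f"remove NetApp snapshot on {volume}")
    return log

for step in backup_from_storage_snapshot("sql01", "vol_vmware"):
    print(step)
```

Note that the VMware snapshot is removed (step 3) before any backup data is moved (step 4), which is exactly why the snapshot-commit problem from the classic transport modes disappears.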

With the current version 8 it is possible to back up data from the primary storage controller.

With the upcoming version 9 you will be able to back up from the secondary NetApp controller as well, so you can provide a fast, efficient backup without putting load on your primary storage system.

2. NetApp snapshot orchestration
The second way to back up with Veeam in combination with NetApp filers is to use Veeam purely as an orchestration and management tool. Veeam creates application-consistent VMware snapshots, followed by the creation of a NetApp snapshot, so that you end up with a fully consistent NetApp snapshot. This can be combined with NetApp SnapMirror and/or SnapVault to replicate the data to a second NetApp system. Everything stays within ONTAP, and Veeam is used only as the orchestration tool. The nice thing here, again, is that Veeam does not require any complex volume design or raw device mappings to provide application-consistent NetApp snapshots.
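This orchestration flow can be sketched the same way as the BfSS sequence. Again this is only a simulation with illustrative names, not a real Veeam or ONTAP API; the `replicate` parameter stands in for an optional SnapMirror or SnapVault update.

```python
def orchestrate_snapshot(volume, vms, replicate=None):
    """Simulate snapshot-only orchestration for all VMs on one volume."""
    log = []
    # Quiesce every VM for application consistency
    for vm in vms:
        log.append(f"create VMware snapshot of {vm}")
    # A single NetApp snapshot covers all VMs on the volume
    log.append(f"create NetApp snapshot on {volume}")
    # The VMware snapshots stay open only briefly
    for vm in vms:
        log.append(f"remove VMware snapshot of {vm}")
    # Optionally push the snapshot to a second NetApp system
    if replicate:  # e.g. "SnapMirror" or "SnapVault"
        log.append(f"update {replicate} for {volume}")
    return log

for step in orchestrate_snapshot("vol_vmware", ["ad01", "sql01"], replicate="SnapVault"):
    print(step)
```

Unlike BfSS, no backup data is moved at all here: the NetApp snapshot (plus its SnapMirror/SnapVault copy) is the protection point.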


Cisco c3x60 – RAID design with Veeam

Over the last couple of weeks I have received a lot of questions on how to configure the RAID groups of Cisco C3x60 servers when used with Veeam.
Veeam needs a repository with good performance and usually a lot of disk space as well as CPU/RAM resources, so the C3x60 servers are a good choice.
If you look at how a C3x60 server is designed, you will see that it internally has 4 rows with 14 disks each, i.e. 56 internal disks.
The following image shows the internal disk design of the server:

The next image shows the additional 4 disks which can be added with an expansion module at the rear side of the server chassis:

As there are “only” 56 disks internally, it is a good idea to base the RAID and capacity design on 56 disks instead of 60. Looking at the C3260, you will see that the 4 rear disks are replaced by the 2nd server node, so in the end you have a maximum of 56 disks for data usage in the system. There are also the 4 SSDs, but they are usually used for OS/applications and do not count toward the 60 disks of the C3x60 system.

Here you can see how the system looks when a C3260 is used:

Regarding expansion, it is highly recommended to work with fully populated 14-disk rows only. That means if you start with the base system and 14 disks and extend it with more disks later, fill it up with another 14 disks. Please also keep in mind that it is currently not possible to expand an existing RAID group with more disks; you will need to create a new one.

When looking into the RAID configuration for use with Veeam, you can choose between a few different designs. First of all you should decide whether you need one large repository or whether a couple of smaller ones will do.

The following shows a table with possible RAID configurations within the CIMC controller:

If you go for several separate RAID6 groups, you will see somewhat better performance, especially during the rebuild after a disk failure: with RAID6, only the repository sitting on that particular RAID group slows down. If you go for a RAID60 over, say, 28 disks, the whole repository becomes slower when one disk in the underlying RAID6 groups fails. Because of that, RAID6 with multiple repositories is the better choice when using the C3x60 with Veeam.
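A quick back-of-the-envelope calculation makes the trade-off above concrete: usable capacity is identical in both layouts, so the decision is purely about failure domains. The 4 TB drive size below is an assumed example value, not a C3x60 spec.

```python
def raid6_usable(disks, disk_tb):
    """RAID6 loses two disks per group to parity."""
    return (disks - 2) * disk_tb

def raid60_usable(groups, disks_per_group, disk_tb):
    """RAID60 stripes over several RAID6 spans; parity cost is per span."""
    return groups * raid6_usable(disks_per_group, disk_tb)

DISK_TB = 4  # assumed drive size, for illustration only

# Option A: four independent 14-disk RAID6 groups, one repository each
per_repo = raid6_usable(14, DISK_TB)
print(f"4x RAID6(14): {4 * per_repo} TB total, {per_repo} TB per repository")

# Option B: one RAID60 over the same 56 disks (4 x 14-disk RAID6 spans)
print(f"RAID60(4x14): {raid60_usable(4, 14, DISK_TB)} TB in one repository")
```

Both options give the same usable capacity from 56 disks; with option A, a rebuild degrades only one quarter of your repositories, while with option B it degrades the single large repository as a whole.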