Microsoft’s Data Protection Manager (DPM) 2012 R2

DPM 2012 R2 Testing Summary

A summary of capabilities, limitations, and real-world data gathered during my testing phase

For this adventure, my primary backup goals were focused on backing up physical SQL clusters for short-term data retention with off-site replication. I am also not using tape and will most likely not be using DPM for Hyper-V VM backups (using Veeam currently), so I have skirted past some areas in this summary.


General DPM Capabilities/Terminology:

  • Synchronization (incremental backups – log backups in SQL)
  • Recovery Point (Express Full backup)
    • Default interval = 8am, 12pm, 6pm
    • Max of 4200 per week for SQL databases
    • Express Full backups do not truncate SQL logs.
  • Write to tape – various vendor libraries
  • Write locally and to the cloud (Azure compliant)

Summary of Features/Limitations important for my testing scenario:

  • DPM short-term disk-to-tape protection only supports Full + Incremental backups. There is no differential backup support.
  • A Synchronization does not create a restore point
  • By default, you get one recovery point per day and 3 synchronizations (8am, 12pm, 6pm). These can be increased but are limited as described in the ‘Problem’ below
  • Most features added to DPM 2012 allow DPM to integrate with Windows Azure and private/public/service-provider clouds
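As a rough sanity check, the default schedule interacts with the 4200-per-week Express Full limit mentioned above. This sketch (my own arithmetic, not a DPM tool; the numbers are the approximate figures quoted in this post) shows how many databases fit at a given frequency:

```python
# Rough capacity check against DPM's 4200 express-full-per-week limit
# for SQL databases (figure quoted in this post; treat as approximate).

WEEKLY_EF_LIMIT = 4200  # max Express Full backups per DPM server per week

def max_databases(ef_per_day: int) -> int:
    """How many SQL databases fit under the weekly limit at a given
    Express Full frequency (backups per database per day)?"""
    return WEEKLY_EF_LIMIT // (ef_per_day * 7)

print(max_databases(1))   # default one recovery point/day -> 600 databases
print(max_databases(24))  # hourly Express Fulls           -> 25 databases
```

At the defaults you have plenty of headroom; at hourly Express Fulls the budget shrinks fast, which is exactly the ‘Problem’ described later.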

High Availability:

  • DPM 2012 can use a clustered SQL Server instance for its database to provide some redundancy
  • Microsoft has no roadmap to cluster DPM itself
  • DPM HA? Run it as a VM on a Hyper-V failover cluster. Dynamic VHDX files can be used to store the DPM backups (they do not need to be fixed size), but the VM pauses if a VHDX fills, so offline resizing is necessary
  • Unsupported scenarios/untrusted domains: running DPM as a VM allowed me to back up clustered resources in an untrusted domain over a non-routed, one-hop VLAN. This bypasses some of the unsupported scenarios but adds a bit of complexity.
  • DPM can back itself up, or another DPM instance can back up the first
    • Accomplished by backing up to the cloud (Azure) or to another DPM instance elsewhere


  • DPM does not support compression if you choose to encrypt data in a protection group.
  • Compression appears to apply only to transfers over WAN links and to tape backups – but not to other destinations (more info needed on this)
  • There is an option where the data is compressed and encrypted at the DPM server before it is sent to Azure. This may also be possible for local backup devices, but I can’t find the option.
  • On-the-wire compression is available. It decreases the size of data transferred during replica creation, synchronization, and consistency-check operations, allowing more data throughput with less impact on network performance. Recovery jobs can also use this type of compression.


  • I’ve seen many references to a deduplication feature in DPM 2012 but cannot find it in the 2012 R2 eval. The DPM developers seem to be avoiding any built-in dedupe capability and expect end users to simply rely on the dedupe feature of the storage platform.
  • A Microsoft DPM development representative said this when questioned about dedupe: “DPM team has been working on multiple fronts to provide DeDupe solution for its customers. As a priority, DPM 2012 is being tested with two hardware Providers”. Some hardware dedupe providers that DPM works well with are BridgeStor and NetApp. They suggest the use of VTL and offsite storage, naming companies such as EMC (Data Domain) and Quantum to fulfill that need.

Integration with SCOM:

  • Publish DPM alerts as Windows events, install the management pack, and point the SCOM servers at this implementation – that’s it.


General Limitations/Notes:

  • Each DPM server supports protection of approximately 300 data sources. This limit derives from the maximum number of volumes Logical Disk Manager allows to coexist on a Windows system. DPM requires 2 volumes to protect a data source: 1 for the replica and 1 for the recovery-point volume. Protecting the supportable 300 data sources therefore creates at least 600 volumes on the system.
  • DPM auto-grows volumes, but a volume can only have 32 extents. When a dataset has been extended 32 times since its initial creation, a new backup volume has to be created, and the old backup and its data must then be deleted manually. There are tools available to move data around to assist with this.
  • Limit of 4200 Express Full backups (see *Problem* below)
  • Requires Windows backup software to be installed on each client
  • In DPM 2012, any workload that comes with a VSS writer can be recognized and protected by DPM – termed ‘generic data source protection’.
  • DPM relies on VSS snapshots as its one and only backup method … and we all know how stable VSS is! LOL
    • VSS supports only 64 snapshots, which limits retention periods and forces archiving for longer terms.
  • Exclusions are odd – you can uncheck portions of the path that lead to areas of the protected data, but there are no wildcards or policies. You can also exclude only certain days of the week. Ultimately, this inflexibility results in ineffective storage utilization and larger-than-necessary data sets.
  • Space reservation – each time you create a protection group, you reserve space for it. This space is about 1.5 times the size of what you are backing up (estimate).
  • You can use only DAS, FC, or iSCSI storage for your backup data
  • DPM works only on 64-bit Windows Server 2008 platforms (Windows Server 2003 is not supported), and DPM doesn’t support Bare Metal Recovery (BMR) of Windows Server 2003.
  • Limit for SQL Server = up to 2000 databases and 80 TB max.
  • Some other random limitations are summarized nicely here:
  • A single DPM server can store up to 9,000 disk-based snapshots, including those retained when you stop protection of a data source. The snapshot limit applies to Express Full backups and file recovery points, but not to incremental synchronizations.
  • A table (image in the original post) lists examples of the number of snapshots that result from different protection policies.
  • Item-level recovery for SharePoint – rather than requiring a separate recovery farm, DPM 2012 attaches the database files from a recovery point to a SQL Server instance remotely and recovers the item. This can also be done for data in SQL FILESTREAM content databases. Another improvement for SharePoint is farm-level protection, where new sites added to a farm are automatically protected.
  • If you use DPM 2012 to back up Hyper-V virtual machines, DPM 2012 speeds up the process and simplifies it a bit. Further, DPM 2012 can recover individual items even when DPM is running inside a virtual machine.
  • DPM 2012 will not directly support backing up ESXi.
  • You can upgrade from DPM 2012 beta to release candidate (RC) to RTM, and from DPM 2010 to DPM 2012.
  • The centralized console manages DPM 2010 servers as well as DPM 2012. Centralized management extends further, letting you perform remote recovery, take corrective actions, and consolidate alerts across backup environments.
  • DPM keeps all data on a raw volume. Raw volumes are more efficient in terms of disk I/O performance, and block-level backups of full system images are much faster than with other products. For example, backing up a domain controller with DPM takes about 15 minutes, compared to an hour or more with software such as Backup Exec.
  • I really like the inline problem identification and Instant Resolution … much nicer than other products.
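Several of the limits above interact: 2 volumes per data source, roughly 1.5× space reservation, and the 9,000-snapshot budget. A small planning sketch (the constants and the `plan` function are mine, using the approximate numbers from this post, not anything built into DPM) can estimate whether a configuration fits:

```python
# Back-of-the-envelope planner for the DPM limits described above.
# All constants are the approximate figures quoted in this post.

VOLUMES_PER_SOURCE = 2   # one replica volume + one recovery-point volume
SPACE_FACTOR = 1.5       # rough space reservation per protection group
SNAPSHOT_BUDGET = 9000   # disk-based snapshots per DPM server

def plan(data_sources: int, source_tb: float,
         recovery_points_per_day: int, retention_days: int) -> dict:
    """Estimate volume count, reserved space, and snapshot usage."""
    snapshots = data_sources * recovery_points_per_day * retention_days
    return {
        "volumes": data_sources * VOLUMES_PER_SOURCE,
        "reserved_tb": round(data_sources * source_tb * SPACE_FACTOR, 1),
        "snapshots": snapshots,
        "fits_snapshot_budget": snapshots <= SNAPSHOT_BUDGET,
    }

# 100 data sources of ~0.2 TB, 3 recovery points/day, 14-day retention
print(plan(100, 0.2, 3, 14))
# -> 200 volumes, 30.0 TB reserved, 4200 snapshots (within budget)
```

Even this modest example consumes two-thirds of the 300-data-source volume budget, which is why the limits bite sooner than you might expect.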


  • Decent reports:



Real World Data:


Current Synchronization and Recovery Points for Protection Groups:

Data Transfer is Minimal for both Synch and Recovery Point


Running DPM in a VM and Disk Use/configuration

One 15 TB dynamic GPT drive gets automatically partitioned by DPM – the other DPM install’s protection group is on the 4.5 TB volume


Resource Usage of DPM VM

Low memory demand for the VM


One *Problem* for my specific set-up – the limitation of 4200 Express Full backups

DPM truncates the SQL DB logs each time a ‘Synchronization’ (incremental backup – log backup in SQL) takes place, and there is no way to turn this off. I need to have a FULL SQL backup at hourly intervals (policy) but don’t want to allow DPM to truncate SQL logs every 60 minutes, because I have SQL maintenance jobs already truncating the logs when they fire off every hour (more policy). So I have 2 backup systems (both the SQL Agent maintenance plan and DPM) truncating logs. The SQL maintenance plan must do the log truncation due to imposed policy.

Partial workaround: currently, I have turned log truncation in DPM off by doing only Recovery Point (Express Full) backups. This is a full backup of each DB every hour.

This workaround is causing problems in DPM because of the hard-coded limitation of 4200 Express Full backups. This could be circumvented in DPM 2010 with a registry hack, but that hack no longer works in 2012. This limitation forces me to lower the backup frequency from 24/day (hourly) to 12/day … or even 6/day on servers with an abundance of databases.
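The arithmetic behind that forced reduction is straightforward. Assuming the hard 4200-per-week limit, the maximum Express Full frequency per database works out like this (the function is my own illustration):

```python
WEEKLY_EF_LIMIT = 4200  # hard-coded Express Full limit per DPM server per week

def max_ef_per_day(num_databases: int) -> int:
    """Max Express Full backups per database per day before hitting the
    weekly limit; equivalent to 600 // num_databases."""
    return (WEEKLY_EF_LIMIT // 7) // num_databases

print(max_ef_per_day(25))   # -> 24/day: hourly just barely fits
print(max_ef_per_day(50))   # -> 12/day
print(max_ef_per_day(100))  # -> 6/day
```

So anything past about 25 protected databases per DPM server makes hourly Express Fulls impossible.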

Additionally, this forces me to back up the SQL databases separately from the remaining elements of the SQL cluster and servers, because I also need a frequent synchronization schedule for those components while keeping my EF backups to a minimum. So I set up another plan to sync the other clustered components frequently and run EF only once daily. This increases the complexity of the overall solution, and restoration is also more complex.

The proper workaround would be to allow only DPM to truncate logs, but policy prevents me from doing this.


  • The image above shows a proactive warning alert raised by DPM to let the user know that they have either 1) protected too many SQL data sources belonging to the same PS per DPM server, or 2) set the Express Full frequency too high, or 3) a combination of both; and that DPM has not been tested to complete this many Express Fulls on time for “typical” databases, so the SLA you have in mind may not be met.
  • So if you take 4200/7 you get 600; divide that by the number of databases on the PS and you get the max number of Express Full backups per day to avoid the alert.
  • If the user ignores this alert, DPM should just continue to work; however, there are two possible outcomes. First, if the individual databases are very small and have little churn, the backups will work just fine. Second, if the databases are larger and have a lot of churn, some Express Full backups will fail with errors such as “another backup is going on at the same time”. If you wait a week and monitor the failure rate of jobs, you will have a good idea whether you should do something about this alert or just ignore it.
  • The Protection Group has to be set this way to avoid this warning limit:


Note – I am not hitting my hourly interval goal with the settings shown in these images due to the number of EF backups I require



             I welcome comments and constructive criticism


Darren Dudgeon

Sr. Systems Administrator at Waterloo Managed Software Services

 – Perpetually testing backup software in search of a final solution –
