Channel: System Center Data Protection Manager

Erasing unsupported tapes & utility automation sample


You may encounter tapes in DPM libraries that were written with an unsupported block size; DPM cannot evaluate such tapes and therefore will not do anything with them, so they first need to be erased. The ZIP file that can be downloaded here contains a script, a script user guide and two utilities to accomplish this on a DPM server that controls a library. The utilities are actually test tools that provide the functions needed to control changers and tape drives; the script automates the various steps and simplifies the syntax for erasing tapes.

Usage: DPMeraseTape.Ps1 <slot number> <unique part of library friendly name>

The script will load media from <slot number> into an empty tape-drive, erase the tape and move media back into the slot again. Further details are described in the user guide. Link: http://cid-b03306b628ab886f.office.live.com/self.aspx/.Public/DPMeraseTape.zip
---
The script can also serve as a sample of automating utilities that are designed for interactive use and maintain their own command shell; in this case "Mytape.exe", but the Windows "Diskshadow" utility would be another good example. Typically a series of meta-commands in a particular order makes up a task that is very hard to implement yourself. Although these utilities can take an input file with commands, that provides no step-by-step interaction or conditional control. This script demonstrates how to run such a utility as a separate process, send it commands, and process the resulting output as asynchronous events.
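That process-driving technique can be sketched in a few lines of PowerShell. The sketch below is a minimal, hedged illustration (using "cmd.exe" purely as a stand-in for a utility like Mytape.exe) of starting a process with redirected streams, writing commands to its input, and catching output lines as asynchronous events:

```powershell
# Minimal sketch; "cmd.exe" stands in for an interactive utility such as Mytape.exe
$psi = New-Object System.Diagnostics.ProcessStartInfo
$psi.FileName = "cmd.exe"
$psi.UseShellExecute = $false
$psi.RedirectStandardInput = $true
$psi.RedirectStandardOutput = $true
$proc = New-Object System.Diagnostics.Process
$proc.StartInfo = $psi

$global:lines = @()
# Each line the utility writes arrives as an asynchronous event
[void](Register-ObjectEvent $proc -EventName OutputDataReceived -SourceIdentifier ProcOut -Action {
    if ($EventArgs.Data) { $global:lines += $EventArgs.Data }
})

[void]$proc.Start()
$proc.BeginOutputReadLine()
$proc.StandardInput.WriteLine("ver")    # send a command to the utility
$proc.StandardInput.WriteLine("exit")   # tell the utility to quit
$proc.WaitForExit()
Unregister-Event ProcOut
```

In a real script you would inspect $global:lines between commands to implement the step-by-step, conditional control that a plain input file cannot provide.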
Have fun…


Why good scripts may start to fail on you for instance with timestamps like “01/01/0001 00:00:00”!


You may have scripts that have been running fine since early DPM 2007 days but start to show unexpected results with DPM 2010. This blog explains what is likely happening and how to resolve it.

Typically, information returned from cmdlets is assumed to be valid unless some error occurred, right? That is no longer a safe assumption for some cmdlets in DPM 2010 (if it ever was) and may surface now where it did not before. Let's pick one sample to explain: the "Get-Datasource" cmdlet, which returns data source objects. Data source objects have properties like 'OldestRecoveryPoint', 'LatestRecoveryPoint' and 'TotalRecoveryPoints' that are useful for SLA and other monitoring purposes. However, these properties are computed asynchronously, and an event is signaled when computation is done, at which point the property values are valid. This means that when the "Get-Datasource" cmdlet returns objects, their property values may not all have been computed yet, which typically shows up as timestamps like the one in the title. Depending on your script flow and efficiency this may or may not occur, with some or all objects. Yes, I know;
“… ‘may or may not’ is hideous, how do I know if and when I can use these…?”

Let’s look into solutions…

From the above we understand that we need to catch the events signaling that property values are valid, so how is that done? Data source objects have event members, one of which is called "DataSourceChangedEvent" and is signaled when value computation completes. PowerShell v2 and later provide cmdlets to work with events. Take a look at the sample script below; we start with "Disconnect-DPMserver" to clear caches, more on that later!
For each protection group we collect all data sources into the "$dss" collection. Then, for each data source, we register on the 'DataSourceChangedEvent' with an action block that only increments the global $RXcount variable by 1. We access the 'LatestRecoveryPoint' property of all data sources to trigger events for all objects. Then we go into a wait loop until the expected number of events has been processed (one for each data source) or 30 seconds have elapsed, so that we do not wait forever. We check that we got all expected events, after which we know all "$dss" objects contain valid property values and we may continue using them. Finally we unregister all events at once.

Disconnect-DPMserver #clear object caches
$dss = @(Get-ProtectionGroup (&hostname) | foreach {Get-Datasource $_})
$dss = $dss | ?{$_} #remove blanks
$global:RXcount=0
for ($i=0; $i -lt $dss.count;$i++) {
    [void](Register-ObjectEvent $dss[$i] -EventName DataSourceChangedEvent -SourceIdentifier "TEV$i" -Action {
        $global:RXcount++})
}
# touch properties to trigger events and wait for arrival
$dss | select latestrecoverypoint  > $null #do not use [void] coz does not trigger
$begin = get-date
$m = Measure-Command {
    while (((Get-Date).Subtract($begin).TotalSeconds -lt 30) -and ($RXcount -lt $dss.count) ) {sleep -Milliseconds 100}
}
if ($RXcount -lt $dss.count) {write-host "Fewer events arrived [$RXcount] than expected [$($dss.count)]"}
Unregister-Event *

This approach is efficient when working with collections where you just need to validate that the computations for all objects in the collection have completed. In other cases you may want to do the processing inside the "-Action { … }" block and unregister only the event that triggered it. See "Get-Help Register-ObjectEvent" for more on this.
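As a minimal hedged sketch of that per-object style (the source identifier "TEV0" and the $ds variable are just illustrative), the action block itself can process the updated object and then remove its own registration:

```powershell
# Sketch: process inside the action block and unregister only the triggering event
[void](Register-ObjectEvent $ds -EventName DataSourceChangedEvent -SourceIdentifier "TEV0" -Action {
    $updated = $Event.Sender      # the data source whose property values are now valid
    Write-Host "Updated: $($updated.Name)"
    Unregister-Event -SourceIdentifier $Event.SourceIdentifier
})
```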
There is a drawback to the above flow: you cannot simply execute it while connected to remote DPM servers, because by default event delivery is local to your PowerShell session. In such cases you must use the "-Forward" parameter of the "Register-ObjectEvent" cmdlet. Then again, remoting all of the above (see "Get-Help Invoke-Command") and consuming just the end result would be more efficient. I will soon post a blog that does this, collecting recovery point status information across one or many DPM servers.

A small variation…

Rather than maintaining a count in the -Action {} block, you can also add the sending object that signaled the event into a separate collection. You then have a collection of only the objects that got updated. This is more suitable if you plan to go ahead with what you have anyway, regardless of whether all objects got signaled or not.

$global:RXobj = @()
for ($i = 0; $i -lt $dss.count;$i++) {
    [void](Register-ObjectEvent $dss[$i] -EventName DataSourceChangedEvent -SourceIdentifier "TEV$i" -Action {
            #Look at our own sourced events only
            if ($Event.SourceIdentifier -match "TEV") {$global:RXobj += $event.Sender}
    })
}
#touch properties to trigger events and wait for arrival
$dss | select latestrecoverypoint > $null #do not use [void] coz does not trigger
$begin = get-date
while (((Get-Date).Subtract($begin).TotalSeconds -lt 10) -and ($RXobj.count -lt $dss.count) ) {sleep -Milliseconds 250}
Unregister-Event *

Alternative to this sample…

I can imagine you do not want to go into 'eventing' just yet. The alternative would be to get all recovery points for each data source and sort them on 'RepresentedPointInTime'; then you also have easy access to oldest, latest and total. Be aware that with a few dozen data sources, a couple of recovery points per day and 14 days retention, you quickly request thousands of objects just to get 3 values.
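A hedged sketch of that alternative, assuming $ds already holds a single data source object obtained from "Get-Datasource":

```powershell
# Sketch: derive oldest/latest/total from the recovery point list instead of eventing
$rps = @(Get-RecoveryPoint $ds | Sort-Object RepresentedPointInTime)
if ($rps.Count -gt 0) {
    $oldest = $rps[0].RepresentedPointInTime     # oldest recovery point
    $latest = $rps[-1].RepresentedPointInTime    # latest recovery point
    $total  = $rps.Count                         # total recovery points
}
```

Note the cost trade-off mentioned above: this pulls every recovery point object across just to compute three values.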

Some more on objects in DPM…

DPM caches objects for a variety of reasons, and not every access of a property produces an event. That is why we start with "Disconnect-DPMserver", which clears the object caches so that events are produced on first access. You can loop through an object collection in which every object has an event registered (like the sample above), but you cannot loop on the same DPM data source object and expect an event each time you access a property, unless the object actually changed.
For those more familiar with using events: this is different from objects that change more regularly, like the process object with redirected standard output and the 'OutputDataReceived' event, where each time the process writes to standard output another event is generated.

About defining DPM protection from CLI


Those with an interest in modifying and creating DPM protection from the CLI will probably appreciate this; otherwise there is little sense in reading further, I guess. Those that seek to save and recreate protection configurations are up for a task that has proved to be a daunting one; you may want to look into a twin script called DPMsaveConfig / DPMcreateConfig.
We will assume a working knowledge of PowerShell and the cmdlets involved, as described here: http://technet.microsoft.com/en-us/library/bb808941.aspx. This article supplements the aforementioned TechNet article for broader purposes than originally intended, but is by no means complete in terms of all combinations and conditions you may encounter. Still, it covers a good deal of what you may want to know.

Required object instances

More complex tasks intended to be performed through the UI sometimes require objects and initialization normally provided by the UI under the hood, and therefore not documented as such. This also applies to parameter values that must fall within a particular range or a fixed list of allowable values. Because we essentially seek an automated form of UI operations, we have to mimic some of the functionality the UI uses under the hood.
How would you know when and how to do that? Well, continue reading… More details later, but for now you need to understand that most object instances come from a preceding "Get-…" cmdlet to be used in a subsequent "Set-…", and some have to be instantiated by yourself, for instance using:
$obj = New-Object -TypeName <class name> [-ArgumentList arg1, arg2,…]

To be aware of

This section discusses some facts, tips & tricks you might appreciate.

One general caution is to not correlate wording used in the UI with names used in the CLI. The UI is designed to be intuitive for common interpretation, whilst the CLI is geared more towards technical implementation accuracy. In other words, a CLI parameter with a name that is also used in the UI may have a very different meaning or scope. Similarly, an option in the UI may not exist as a parameter or property at all, but be implemented through a combination of multiple parameters.

Order of executing protection commandlets

It may not be obvious that the cmdlets involved with protection definitions must be used in a specific order, because they influence each other. If you do not pay attention to this, you may get confused by the same cmdlet and parameter values returning different results or errors. You do not always have to use all of the below; just be aware of the ordering aspect:

(1) Set-ProtectionType: “how” like ‘DiskToDisk’ or ‘DiskToDiskToTape’ and so forth...
(2) Set-PolicyObjective: “goals” like recovery/retention range or in other words “how much”
(3) Set-PolicySchedule: “when” like daily at 08:00 or every 2 weeks and so forth…
(4) <other options>: like storage allocation, tape options and so forth…
(5) Set-ProtectionGroup: commits protection group changes to the DPM configuration

For instance, if the protection type is set to 'DiskToDisk' you cannot use policy parameters that explicitly apply to long term protection only. Or, if you change the protection type from 'DiskToDisk' to include any long term protection, the current objective and schedules may not be properly initialized. Although defaults apply, the default interpretation may not always match your goal and may produce confusing errors when you execute the next cmdlet or attempt to commit the change. This also means that a problem condition may not originate in the cmdlet that reports it but from an earlier step, or an omitted step, and supplied defaults or current values may not fit what you are trying to accomplish.
Note: errors that report "Some prerequisites not met…" in particular should be read carefully, because they often tell you which previous step was not done properly or which parameter does not fit the current context. If there is no additional information on prerequisites, you probably forgot to run "Get-ModifiableProtectionGroup".

About day and time parameter usage

Objectives and schedules deal with day and time specifications in various places, which exist in singular and plural form with slight differences.
There is a '-TimeOfDay' with values like "08:00 AM", but there is also a '-TimesOfDay' which is an array of 1 or more "08:00 AM"-like values. I recommend using the 12-hour AM/PM notation for self-documenting purposes. Similarly there is a '-DayOfWeek' with values like "Monday", but also '-DaysOfWeek' which is an array of short weekday names as in the note below.
Note: where the UI shows "every day", at the CLI level this is specified as an array of all weekdays: "Mo","Tu","We","Th","Fr","Sa","Su".

Long term interval frequency oriented parameter values come in a form like "Daily", "Weekly", "BiWeekly", "Monthly" and so forth.
Short term intervals are specified in minutes, from 1 to 1440 (24 hours).
Period oriented parameter values come in the form of a range/unit value pair like 2,"Weeks", wherein the range is "2" and the unit is "Weeks". Note, however, that for instance 2 years is specified as range "24" and unit "Months" (not 2,"Years" as you might anticipate).
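As a hedged illustration of that range/unit form, using the RetentionRange class that appears later in this article:

```powershell
# 2 weeks is expressed directly as range 2, unit "Weeks"...
$rr2w = New-Object -TypeName Microsoft.Internal.EnterpriseStorage.Dls.UI.ObjectModel.OMCommon.RetentionRange -ArgumentList 2,"Weeks"
# ...but 2 years must be expressed as range 24, unit "Months"
$rr2y = New-Object -TypeName Microsoft.Internal.EnterpriseStorage.Dls.UI.ObjectModel.OMCommon.RetentionRange -ArgumentList 24,"Months"
```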

A remarkable way of finding lacking information

I have no doubt there will be moments where you cannot find the information you need, so I would like to share a bit of trickery for retrieving a required class name and allowable parameter values. Not so spectacular is passing an object of the wrong type to a cmdlet. The error message then tells you something like "cannot convert … of type-A into … of type-B", where type-B literally is the class name you need for a typecast, or for use with "New-Object -TypeName", as you also find in various blogged samples.
There is more fun in specifying, for some parameter whose allowable values you have no clue about, a value that is not null and not a zero length string. This will typically trigger an error message listing the possible values for the current context. This works best if you fully qualify all parameters by name, like '-ProtectionGroup $pg', and use objects of the proper type but a possibly incorrect value. One word of caution: in rare cases the CLI may decide to terminate this escapade. The file DPMCLI0Curr.ERRLOG should provide a hint why, and sometimes even lists the allowable values. Here is a nice little sample excerpt (reformatted for readability):

01/14       19:36:32.386           69            WatsonIntegrator.cs(101)   -------------------
01/14       19:36:32.386           69            WatsonIntegrator.cs(107)  Expanding inner exceptions
01/14       19:36:32.386           69            WatsonIntegrator.cs(114)  --------------------
01/14       19:36:32.386           69            WatsonIntegrator.cs(114)  Interval can only be 1 3, 6 or 12
01/14       19:36:32.386           69            WatsonIntegrator.cs(114)  -----------------
01/14       19:36:32.386           69            WatsonIntegrator.cs(114)                                           
at Microsoft.Internal.EnterpriseStorage.Dls.UI.ObjectModel.OMCommon.ArchiveSchedule.Validate
(DateTime startTime, Int32 interval, IntervalComponent component, Boolean isCustomScheme)

This exception occurred because I was playing with multiple custom schedules without first informing DPM that I wanted to go this way.

Let’s get crackin…

In this section the $mpg variable is assumed to be a modifiable protection group object obtained from New-ProtectionGroup or Get-ModifiableProtectionGroup. There are a great many options, but we limit ourselves to some common topics.

Notes on Set-ProtectionType

The following types can be selected. The '-ShortTerm' and '-LongTerm' parameters are shown separately for clarity but must be combined in 1 command if both protection schemes are desired.

  • Set-ProtectionType -ProtectionGroup $mpg -ShortTerm "Disk"
  • Set-ProtectionType -ProtectionGroup $mpg -ShortTerm "Media" (meaning tape)
  • DPM2007: Set-ProtectionType -ProtectionGroup $mpg -LongTerm
  • DPM2010: Set-ProtectionType -ProtectionGroup $mpg -LongTerm "Tape" (required)
  • <there are some more for future use>

Notes on Set-PolicyObjective

-RetentionRangeInDays/-RetentionRangeInWeeks are short term parameters; note that these take an integer, not a retention range object. You combine this with either '-SynchronizationFrequency' in minutes or the switch '-BeforeRecoveryPoint', but not both.

-RetentionRange is a long term parameter and takes a retention range object, to be instantiated like below for a range of 3 with unit months:

$RR = New-Object -TypeName Microsoft.Internal.EnterpriseStorage.Dls.UI.ObjectModel.OMCommon.RetentionRange -ArgumentList 3,"Months"

You combine this with '-LongtermBackupFrequency', which takes a string, for example:

Set-PolicyObjective -ProtectionGroup $mpg -RetentionRange $RR -LongtermBackupFrequency "Weekly"

This would keep backups for 3 months and run weekly. Okay, but when does that run?
That is part of Set-PolicySchedule, which we talk about later.

-RetentionRangeList / -FrequencyList / -GenerationList
These are used to specify multiple objectives and are arrays referred to as custom schedules. We already talked about the retention range and frequency types. The -GenerationList parameter may contain the values "GreatGrandFather", "GrandFather", "Father" and "Son", although 'Son' is only used with the, for DPM unusual, short term daily tape scenario.

Note: better to tell DPM that we are going to play with custom schemes, otherwise you may hit the exception shown in the ERRLOG excerpt earlier;

$mpg.ArchiveIntent.IsCustomScheme = $true

So we need to populate these lists, say for 1 copy as is most usual. Note: oldest/longest first, in order, and the same order across all three lists.

  • Create the retention range list (GreatGrandFather, GrandFather, Father)

$rrGGF = New-Object -TypeName Microsoft.Internal.EnterpriseStorage.Dls.UI.ObjectModel.OMCommon.RetentionRange -ArgumentList 12,"Months"
$rrGF = New-Object -TypeName Microsoft.Internal.EnterpriseStorage.Dls.UI.ObjectModel.OMCommon.RetentionRange -ArgumentList 3,"Months"
$rrF = New-Object -TypeName Microsoft.Internal.EnterpriseStorage.Dls.UI.ObjectModel.OMCommon.RetentionRange -ArgumentList 1,"Months"
$RRlist = @($rrGGF,$rrGF,$rrF)

  • Create the generation list

$GENlist = @("GreatGrandFather","GrandFather","Father")

  • Create the frequency list

$FRQlist = @(12,3,1) #corresponds to the 'range' property of the retention range objects

We also need to initialize and assign a label information structure when using custom lists;

  • Instantiate the label information object and assign values

$linfo = New-Object -TypeName Microsoft.Internal.EnterpriseStorage.Dls.XsdClasses.MM.Interface.labelinfo
$linfo.Vault = @("Offsite","Offsite","Offsite")               # all 3 long term
$linfo.Label = @("YearLabel","MonthLabel","WeekLabel")        # just a sample format
$linfo.Generation = @("GreatGrandFather","GrandFather","Father")

  • Assign the mandatory protection group property and execute commandlet      

$mpg.ArchiveIntent.LabelInfo=$linfo
Set-PolicyObjective -ProtectionGroup $mpg -RetentionRangeList $RRlist -FrequencyList $FRQlist -GenerationList $GENlist

That was for one copy; for two copies the lists and label information would have 6 elements. The order would be: first both GreatGrandFathers, then both GrandFathers and then both Fathers, with all of the corresponding values in that order. Ensure both copy triplets have unique values for labels and vaults.
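A hedged sketch of that two-copy ordering (the $rr* variables are hypothetical RetentionRange objects, one per copy and generation, and the label/vault strings are just illustrative):

```powershell
# 6 elements: both GreatGrandFathers, both GrandFathers, both Fathers, in that order
$RRlist  = @($rrGGF1,$rrGGF2,$rrGF1,$rrGF2,$rrF1,$rrF2)
$GENlist = @("GreatGrandFather","GreatGrandFather","GrandFather","GrandFather","Father","Father")
$FRQlist = @(12,12,3,3,1,1)
# Label information also gets 6 elements; keep labels and vaults unique per copy
$linfo.Vault = @("Offsite1","Offsite2","Offsite1","Offsite2","Offsite1","Offsite2")
$linfo.Label = @("YearLabel1","YearLabel2","MonthLabel1","MonthLabel2","WeekLabel1","WeekLabel2")
```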

Notes on Set-PolicySchedule

Up to here we talked about how much we keep, but not when; that is what we will talk about now.

Here too, sets of parameters apply to short term or long term scenarios. Using "Get-PolicySchedule", an object of the desired type must be obtained and passed to the "Set-PolicySchedule" cmdlet together with modifying parameters. You typically obtain the object like this (we'll explain $jobtype later):

$sched = Get-PolicySchedule -ProtectionGroup $mpg -ShortTerm  |  where { $_.JobType -eq  $jobtype }
$sched = Get-PolicySchedule -ProtectionGroup $mpg -LongTerm  |  where { $_.JobType -eq  $jobtype }

What types ($jobtype) apply? The ones described below are those that relate to defining protection but there are more job types not shown here.

Short term

  • “ShadowCopy” : this is a recovery point for file/volume protection
  • “OnsiteSonArchive” : this is a disk to tape job
  • “Replication” : synchronization job for volume or application incremental protection
  • “FullReplicationForApplication” : this is an application ‘ExpressFull’ job
  • “DiskAutoValidation” : this is a scheduled consistency check job

Long Term

  • “OffsiteFatherArchive”, “OffsiteGrandFatherArchive”, “OffsiteGreatGrandFatherArchive”
    These are the 3 possible custom schedules for long term to tape protection schedules.

Now we are almost ready to set a policy schedule according to our intentions; almost, because we first have to translate what our intentions are and how they map to possible DPM operations. Think first about the 'type' of operation and when it should happen, rather than in terms of short and long term.

Take "ShadowCopy" and "FullReplicationForApplication", because those are similar. These can occur on 1 or more specific days (or all days) and 1 or more times per day; in other words, these are 'plural'. Let's assume we want to run these every day at 10:00 AM and 04:00 PM; that would be specified as:

$dow = @("Mo","Tu","We","Th","Fr","Sa","Su")    # just omit days you don't want
$tod = @("10:00 AM","04:00 PM")                 # leave out or modify as you desire
Set-PolicySchedule -ProtectionGroup $mpg -Schedule $sched -DaysOfWeek $dow -TimesOfDay $tod

A "Replication" job runs every so often with an offset, say every 30 minutes or every 2 hours, rather than on a specific day and time. What can be a little confusing here is that the synchronization frequency is set as part of the objectives rather than in a schedule, which leaves us to set only the offset of, say, 3 minutes:

Set-PolicySchedule -ProtectionGroup $mpg -OffsetInMinutes 3

Archive jobs such as "OffsiteFatherArchive" run at a particular time and day, like Sundays at 11 PM:

Set-PolicySchedule -ProtectionGroup $mpg -Schedule $sched -DayOfWeek "Sunday" -TimeOfDay "11:00 PM"

These can also have relative specifications, like the 1st day of each month, but I'll let you play with that now you've got the hang of it.

Notes on exclusions

So although we understand pretty much how to define protection objectives and schedules by now, there is the matter of excluding one or more folders or file types. Say you are protecting a volume but want to exclude MP3 and ISO files; this is how it is done:

Set-DatasourceProtectionOptions $mpg -FileType "MP3,ISO" -Add

The online help can be a little confusing with its double negatives, but just remember this: "-Add" creates more exclusions so less is protected; "-Remove" reduces exclusions so more gets protected. So if we have a whole bunch of types excluded and decide that we do want to protect ISO files and no longer exclude them from protection, we would do:

Set-DatasourceProtectionOptions $mpg -FileType "ISO" -Remove

So how do we exclude one or more folders from being protected; something similar? No, that would be too easy, but we do stick to the 'remove' paradigm. We talk about the data source hierarchy later, but the idea is to remove a child from an already defined protection that normally includes that child. Forget about how we get the objects for a while and assume we have a protected "D:\" and a $child object reflecting "D:\TEMP" that we want to exclude. Then you would apply the exclusion as follows:

Remove-ChildDatasource $mpg -ChildDatasource $child

A data source is the top of a hierarchy below which 'children' can exist. For volume protection the children are folders and files, and obviously a child can in turn have children of its own: deeper positioned folders and files. A volume root directory (like "D:\") represents the entire volume. So how do we get objects for these to include or exclude? If you want to protect just one or a few directories, you would prefer to 'add' just those rather than the entire volume and subsequently perform dozens if not hundreds of 'removes', right? To complete the aforementioned exclude sample, we first protect "D:\" and remove the single item we don't want. Note that we start at the containing parent level (Get-Datasource) and query that (Get-ChildDatasource). Subsequently we show how to single out "D:\TEMP" to protect only.

$ps = Get-ProductionServer -DPMServerName "DPMSERVER" | where {$_.ProductionServerName -eq "PRODSERVER"}
$allds = Get-Datasource -ProductionServer $ps -Inquire
$parent = $allds | where {$_.LogicalPath -eq "D:\"}
Add-ChildDatasource -ProtectionGroup $mpg -ChildDatasource $parent
$child = Get-ChildDatasource -ChildDatasource $parent -Inquire | where {$_.LogicalPath -eq "D:\TEMP"}
Remove-ChildDatasource $mpg -ChildDatasource $child

If you only want to protect “D:\TEMP” in this case you would;

$ps = Get-ProductionServer -DPMServerName "DPMSERVER" | where {$_.ProductionServerName -eq "PRODSERVER"}
$allds = Get-Datasource -ProductionServer $ps -Inquire
$parent = $allds | where {$_.LogicalPath -eq "D:\"}
$child = Get-ChildDatasource -ChildDatasource $parent -Inquire | where {$_.LogicalPath -eq "D:\TEMP"}
Add-ChildDatasource -ProtectionGroup $mpg -ChildDatasource $child

Obviously this is not efficient for deep, complex structures, for which we should use the DPM Management Shell search capabilities. That's food for another blog on selectively working with data sources, which also applies to recovery, where it is needed much more.

Notes on “Get-Datasource”

The 'Get-Datasource' cmdlet has many formats, each for a specific purpose, and they are not interchangeable. The '-Inquire' switch queries the protected target and requires the target to be well connected, like servers. The '-ComputerNames' switch is the opposite; it is intended for disconnected clients and just adds the target to the DPM configuration, inheriting settings from the client protection group, such as which directories to include and exclude and which file extensions to exclude (if any). Note, however, that you could add a client data source as if it were a server (and reachable), but that would not render disconnected client protection, as was most likely the intent for a client. Furthermore, if you once added a client data source as server protection, you may have to uninstall and re-install the agent on that client to be able to protect it as a disconnected-client data source.

  • to ‘get’ datasources for server protection you have to use the “-inquire” switch.
  • to ‘get’ datasources for client protection you have to use the following format;
    $ds = Get-Datasource -DpmServerName <dpm name> -ComputerNames <client name>

Notes on disk allocation and ‘–CalculateSize’

The last thing you do before committing protection with "Set-ProtectionGroup" is to 'Get-' and 'Set-DatasourceDiskAllocation'. This section is by no means complete; just a few hints to overcome the most common problems. So, what does this switch do? DPM initializes default disk allocations with "Get-DatasourceDiskAllocation" based on the volume size, and the -CalculateSize switch tells DPM to look closer at what is actually needed, for instance if you only protect one or a few folders that require far less space than the entire volume. The general format is:

Get-DatasourceDiskAllocation -ProtectionGroup $mpg -Datasource $parent -CalculateSize
Set-DatasourceDiskAllocation -ProtectionGroup $mpg -Datasource $parent
Set-ReplicaCreationMethod -ProtectionGroup $mpg -Manual
Set-ProtectionGroup $mpg

The -CalculateSize switch is sometimes optional, sometimes mandatory and sometimes not allowed; nice, isn't it? First of all, note that on DPM2007 this switch only applies to protection of volumes, files, folders and shares. On DPM2010 there can be other scenarios. Whether or not to use this switch depends on the scenario, and there are too many to walk through; instead we clarify some errors you may encounter:

"Set-DatasourceDiskAllocation" may complain that prerequisite steps were not properly executed, in which case first verify that the -Inquire switch was used with "Get-[Child]Datasource" for server protection, and add or remove the -CalculateSize switch. If the switch is not allowed, the error usually states this and the reason why.
Note: on these errors the "Set-ProtectionGroup" commit may still succeed, but possibly using a larger disk allocation than needed.

"Set-ProtectionGroup" may also complain about prerequisites, which could apply to anything discussed in this blog not being done properly, but start with the previous bullet. It may also report that the disk allocation was not calculated, in which case the -CalculateSize switch can be added for server protection.

As a general guideline: if you create a protection group from scratch and add a complete volume or application data source, do not use the -CalculateSize switch. When adding multiple folders, or a folder on a volume part of which is already protected, do use the -CalculateSize switch. Think about whether DPM has to create a new allocation or extend an existing allocation; in the latter case you must use the -CalculateSize switch. As stated earlier, I am aware this does not cover everything, but it is the best I can do without going ballistic, as far as this blog did not already.

If nothing else, you do get an appreciation of what it takes to make a 'simple' backup product easy and intuitive to use from the UI perspective, right?

Have fun!

Protect, Unprotect, Protect, Unprotect – Understanding how DPM 2010 retention works


With special thanks to Fahd Kamal for the backgrounder content.

Imagine that you are experimenting with DPM 2010. You protect some data, and then you remove that protection group. Then, showing DPM to one of your friends, you protect the data again, and later unprotect it. Later that same day, you try to protect the data again to show some other friends, and it breaks. The reason is that if you attempt to [re]protect the same data source for the 3rd time before the retention of the previous protection has expired (or before removing those recovery points), DPM will fail. A similar condition may arise when migrating data sources. We will explain why, and how this can be mitigated.

Whilst not very common, similar scenarios have occurred and hence this blog.

When protection is stopped with data retained and then protected again, DPM re-associates the now inactive protected data (and recovery points) with the new protection; great, exactly what you want. But what really happens under the hood changed with DPM2010, on behalf of collocating data sources for SQL, Hyper-V and Client protection.

Skipping the details of combinations: we can no longer assume that the inactive replica allocation is appropriate or efficient for the new protection. To deal with that, DPM creates a second allocation according to the new protection parameters. DPM2010 is designed to handle only 2 allocations per data source, the "previous" and the "current", say R1 and R2 respectively. Remember the aim is to reduce the number of volumes as much as possible, and maintaining more 'previous' allocations would be working in the wrong direction; hence only the 2 allocations we truly need to accommodate a change.
From a recovery perspective, recovery points relating to both R1 and R2 are available, and this is transparent.

If we repeat a change sequence, or migrate the same data source a 2nd time (= 3rd protection) while retaining formerly protected data, there is no 'slot' to designate the current R2 as R1 and create a new R2 without dropping the current R1. This is not done automatically, because that would remove data you may still need; instead it is reported as a failure. What to do next?

To execute the protection change or data source migration, all recovery points relating to the current R1 need to be removed first. R1 disappears on the next pruning run (midnight) once the latest R1 recovery point expires or all R1 recovery points are removed, for that data source only; not for the entire protection group of non-collocated data sources, and not the R2 recovery points, which hold the most recently protected data!

However, if the current R1 (to be removed) already belongs to a collocated group, removing recovery points affects all collocated data sources! Remember, the goal of collocation is that multiple data sources share a single allocation; if that (R1) allocation needs to go, this applies to all replicas and recovery points associated with it (not with the current R2 allocation, which is to become the new R1 after the change or migration we want to execute).

Shortening the retention period overnight to expire recovery points is not a good idea, because it also applies to all non-collocated data sources in that group, and you would still need to know up to what point in time recovery points are associated with the R1 to be removed. Therefore, the following script can be used to remove recovery points for the data source we want to change or migrate. The script removes all recovery points for the selected data source that are older than the newest replica (R2) creation time. As a result, the oldest replica (R1) is automatically deleted.

#begin script

# Enumerate all data sources, active and inactive, on this DPM server
$dss = @(Get-ProtectionGroup (&hostname) | foreach {Get-Datasource $_})
$dss += Get-Datasource (&hostname) -Inactive
# Present an indexed list and let the user pick the data source to clean up
for ($i=0; $i -lt $dss.Count; $i++) {Write-Host "[$i] $($dss[$i].name) on $($dss[$i].productionservername)"}
$ds = $dss[[int](Read-Host "Select index ")]
# Find the replica folders for this data source; adjust the path if DPM is installed elsewhere
$paths = @(Get-ChildItem "C:\Program Files\Microsoft DPM\dpm\Volumes\Replica" -Recurse -Filter "*$($ds.id.guid)*" | ? {$_.PsIsContainer})
if ($paths.Count -lt 2) {Write-Host "No multiple replicas found, aborting...";exit 0}
# The newest replica folder (R2) creation time is the cutoff; anything older belongs to R1
$cutoff = ($paths | sort creationtime -Descending)[0].creationtime
$rp = @(Get-RecoveryPoint $ds | ? {$_.representedpointintime -lt $cutoff})
$resp = Read-Host "Confirm deleting [$($rp.count)] recovery points from `"$($ds.name) _on_ $($ds.productionservername)`" y/N"
if ($resp -notmatch "^[yY]") {write-host "Aborting..."; exit 0}
$rp | foreach {Remove-RecoveryPoint $_ -ForceDeletion}

#end script

How to use SAN recovery option and mapping data source volumes to Windows disks


Say you want to use the DPM "SAN recovery" option. This requires storage management steps for which it is useful to know which Windows disks (LUNs) hold the associated DPM volumes for a given data source. First, a generic recap of the potential 'DPM SAN recovery' advantages and steps:

  • ‘SAN recovery’ is only beneficial when recovering large amount of replica data relative to effective bandwidth (typically xxxGB or more).
  • ‘SAN recovery’ partially offloads the LAN (replica data only) and may reduce data transfer time. Usually the incremental size is ~10% of the replica size but there are huge exceptions.
  • ‘SAN recovery’ only reduces transfer time if the SAN bandwidth is more than twice the LAN bandwidth. Note that data travels the disk path twice, once to read from snapshot and again to write to target volume.

  1. Stop or wait for completion of any ‘in progress’ synchronization job for the data source to recover.
  2. Create a recovery point without synchronization using the DPM UI or script (some storage vendors describe a script). This is to ensure DPM metadata is current and DPM has the same current ‘view’ on data as the hardware snapshot that is to be created.
  3. Create a hardware snapshot of the involved storage pool LUN’s and expose these to the production server. The snapshot LUNs must appear ‘Online’ on the production server to recover data on. You can use any kind of snapshot that can be exposed to the production server, the DPM agent will only read from it. See below to find which DPM storage pool disks or Windows volume ID’s are involved.
  4. Start recovery as desired using the DPM UI or script with the "SAN Recovery" option. DPM will instruct the agent to locate replica data on the exposed snapshot disks and copy the replica to the recovery target location. If the recovery point is an incremental, DPM will subsequently transfer additional data (log files) across the LAN and 'roll forward' as appropriate for the type of data source and recovery.
  5. On successful completion you can start using the recovered application or files and remove the snapshot LUNs from the production server (mask them again). The hardware snapshots and provisioning are no longer needed by DPM, but be careful how you dispose of them.
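Steps 2 and 4 can also be driven from the DPM Management Shell. The sketch below is a minimal illustration for a file-system data source; "MyPG" and "G:\" are placeholder names, and the New-RecoveryOption parameter combination varies per data source type, so verify with Get-Help New-RecoveryOption -Full for your DPM version.

```powershell
# Pick the data source to recover (placeholder names)
$pg = Get-ProtectionGroup (&hostname) | where { $_.FriendlyName -eq "MyPG" }
$ds = Get-Datasource $pg | where { $_.Name -eq "G:\" }

# Step 2: create a recovery point without synchronization, so that DPM metadata
# matches the hardware snapshot that is about to be created
New-RecoveryPoint -Datasource $ds -Disk -DiskRecoveryPointOption WithoutSynchronize

# Step 4 (after the snapshot LUNs are exposed to the production server) is then
# driven by Recover-RecoverableItem with a recovery option built by
# New-RecoveryOption using its -SANRecovery switch; the remaining parameters
# depend on the data source type and recovery target
$rp = @(Get-RecoveryPoint $ds | sort RepresentedPointInTime)[-1]   # latest recovery point
```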

Note: during this operation DPM likely synchronized other (unrelated) data sources that happen to have protected data on the same DPM storage pool disks. Therefore we need to be careful when removing the hardware snapshot; let's examine that a bit closer.

Suppose we are recovering a data source represented by “V2” in the illustration below which requires a hardware snapshot of “Disk-1”. Say we successfully complete the SAN recovery between T0 and T1 but in that time-span DPM may also execute protection jobs and apply changes to unrelated data sources represented by “V1” and “V3” also on “Disk-1”. For this discussion it does not really matter whether these volumes are replica or recovery point volumes or a combination.

[Figure: storage pool volumes V1, V2 and V3 on "Disk-1", with V3 spanning onto "Disk-2" alongside V4]

Typically we want to release the hardware resources (reserved LUNs) by removing the no longer needed hardware snapshot. This has two possible results:

  • "Disk-1" keeps the current T1 situation and continues normally; this is what you need to do! It matches DPM's 'knowledge' of both content and status information. All blocks that belong to the T1 (current) hardware point in time should remain, such that effectively nothing changed outside DPM awareness and control. Hardware snapshots vary in technical details, abilities and option naming, and you have to select the appropriate actions for your storage implementation such that the intended behavior is realized.
  • "Disk-1" is changed back to the T0 situation; this is not what you want to do.
    This would cause two problems: DPM volumes change outside DPM control, invalidating "V1+V3" protection, and "V3" is corrupted at the NTFS level because it spans onto "Disk-2", which is not part of the snapshot. We did not account for the "V3" volume span because it was not of interest for the "V2" recovery. DPM's built-in integrity check will detect and alert that "V1" needs a consistency check ("V2" anyway, due to recovery), but "V3" likely needs to be recreated from scratch. If the "V3" volume span had been accounted for and "Disk-2" included in the same snapshot, then "V3" but also "V4" would become invalid and need a consistency check. Clearly this causes a much bigger consistency-check impact than necessary, or worse. Also note that in reality this typically involves dozens or more data sources.

DPM allocates replica and recovery point volumes on different disks (if possible) and DPM volumes may span disks like "V3" above. So, which disks do you need to snap? To help with this you can use the script posted below, which reports for each data source which 'NTdisks' (as in Disk Manager) and Windows volume IDs are involved. If a suitable hardware VSS provider is installed and configured on the DPM and target servers, it can be used to automate the entire recovery (particularly steps 3 and 4 above), but that is outside the scope of this blog.

Cut & paste the script text between "#begin script" and "#end script" into a PowerShell script to run in the DPM Management Shell; say "MyScript.ps1" as shown below.
If you run this script like: .\Myscript.Ps1 | format-list

you get output for each data source similar to the screenshot below.
If you run this script like: $mydata = .\Myscript.Ps1
$mydata will be an array each element of which has the following properties;

- Datasource : description of the data source
- NTdisks : comma separated list of disk manager numbers  (3,2)
- Inactive: whether or not it is an inactive data source on disk
- Replica: Replica volume ID you could use with DISKSHADOW
- Diff: Recovery point volume ID you could use with DISKSHADOW
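
For example, the captured array is easy to filter before handing the volume IDs to DISKSHADOW or your storage tooling ("G:" below is a placeholder data source name):

```powershell
$mydata = .\MyScript.ps1
# Disk Manager numbers to snapshot for one data source
($mydata | where { $_.Datasource -match "G:" }).NTdisks
# Replica volume IDs of all inactive data sources
$mydata | where { $_.Inactive } | foreach { $_.Replica }
```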

#begin script

function GetDpmMP {
    param([string]$volumetype = "ShadowCopy" )
    # Get DPM mount points by type: Diffarea or Replica or Shadowcopy, defaults to shadowcopy
    # (parsing mountvol appeared most predictable across Windows versions and conditions)
    # builds $Vols array of $dpmmp structures
    #         <obj>.Target = volume ident in format \\?\Volume{guid} without trailing slash
    #         <obj>.Path = mount point path including trailing slash 
    switch ($volumetype) {
        ShadowCopy {$paths = @(mountvol | ?{($_ -match "Microsoft DPM") -and ($_ -match "ShadowCopy") }) }
        Replica {$paths = @(mountvol | ?{($_ -match "Microsoft DPM") -and ($_ -match "Replica") }) }
        Diffarea {$paths = @(mountvol | ?{($_ -match "Microsoft DPM") -and ($_ -match "Diffarea") }) }
        Default {$paths = @(mountvol | ?{($_ -match "Microsoft DPM") -and ($_ -match "ShadowCopy") }) }
    }
    $paths = $paths | ?{$_} #ensure no blanks
    if ($paths.Count -lt 1) {
        if ($skipdpmbackup) {Throw "No volume paths found, run at least once without -SkipDPMBackup switch"}
        Throw "No $volumetype paths found!"
    }
    #Ensure we have unique paths to query volumes for
    $paths = @($paths | sort -Unique)
    #build array of target volumes and paths
    $Vols = @()
    foreach ($p in $paths) {
        $dpmmp = "" | Select Target, Path
        #retrieve volume for path
        $dpmmp.Target=(mountvol ("`"{0}`" /L" -f $p.trimend("\").trim())).trimend("\").trim()
        $dpmmp.Path=$p.trim()
        $Vols += $dpmmp
    }
    return $Vols
}
$dss = @(Get-ProtectionGroup (&hostname) | foreach {Get-Datasource $_})
$dss += Get-Datasource (&hostname) -Inactive
$disks = Get-DPMDisk (&hostname)
$dpmvols = GetDpmMp "Replica"
$dpmdiffs = GetDpmMp "Diffarea"
$list=@()
foreach ($ds in $dss) {
        $item = "" | select Datasource, NTdisks, Inactive, Replica, Diff
        $item.Replica = ($dpmvols | ? {$_.path -match $ds.AssociatedReplica.PhysicalReplicaId}).Target
        $item.Diff = ($dpmdiffs | ? {$_.path -match $ds.AssociatedReplica.PhysicalReplicaId}).Target
        $item.Inactive = $ds.IsDiskInactive
        $item.Datasource = "$($ds.DisplayPath) on $($ds.ProductionServername)"
        $item.NTdisks = (($disks | ?{ ($_.PgMember | select DatasourceId) -match $ds.Id}) | foreach {$_.NtdiskId}) -join ","
        $list += $item
}
return $list

#end script

CLI script: Create protection groups for Disk based backups


The following script creates a protection group with disk-based protection for a simple folder. It can easily be extended to add more data sources of different kinds, like Microsoft Exchange, SQL, SharePoint, system state or virtual servers. The synchronization frequency, retention ranges, etc. can easily be modified to suit your needs. Also, since we are protecting a sub-folder in a volume, we are using the CalculateSize parameter of Get-DatasourceDiskAllocation, which calculates the exact size needed for all the items in that folder. This is not needed when protecting an application like Exchange/SQL or when protecting the entire file-system volume.

 

---------------------------------------------- Start of Script ---------------------------------------------------

# To create a D2D PG and do the initial replication
# This script creates a Disk to Disk PG for the file system
# For details contact mukuls[at]microsoft[dot]com
# Create a .ps1 file with this script and run it under the DPM Management Shell

# Customize these values as per your environment
$dpmname = "DPMServername.somedomain.com"
$psname = "PSservername.somedomain.com"
$dsname = "G:\"
$poname = "G:\ProtectableFolder"
$pgname = "MyCLIPG"

function CreatePG
{
    param($dpmname, $psname, $dsname, $poname, $pgname)

    write-host "Creating a D->D PG --> $pgname..."

    trap{"Error in execution... ";break}
    &{
        Write-Host "Getting PS: $psname from DPM: $dpmname"
        $ps = Get-ProductionServer -DPMServerName $dpmname | where { ($_.machinename,$_.name) -contains $psname }

        Write-Host "Running Inquiry on PS: $psname for datasource $dsname"
        $ds = Get-Datasource -ProductionServer $ps -Inquire | where { ($_.logicalpath,$_.name) -contains $dsname }

        Write-Host "Getting child datasource $poname from datasource $dsname"
        $po = Get-ChildDatasource -ChildDatasource $ds -Inquire | where { ($_.logicalpath,$_.name) -contains $poname }

        write-host "Creating new PG..."
        $pg = New-ProtectionGroup -DPMServerName $dpmname -Name $pgname

        write-host "Adding child datasource..."
        Add-ChildDatasource -ProtectionGroup $pg -ChildDatasource $po

        write-host "Setting protection type..."
        Set-ProtectionType -ProtectionGroup $pg -ShortTerm disk

        write-host "Setting policy objective... retention range 10 days, synchronization frequency 15 minutes"
        Set-PolicyObjective -ProtectionGroup $pg -RetentionRangeInDays 10 -SynchronizationFrequency 15

        write-host "Setting policy schedules..."
        $ShadowCopysch = Get-PolicySchedule -ProtectionGroup $pg -ShortTerm | where { $_.JobType -eq "ShadowCopy" }
        Set-PolicySchedule -ProtectionGroup $pg -Schedule $ShadowCopysch -DaysOfWeek mo -TimesOfDay 02:00

        write-host "Setting disk allocation, with optimization (will take a few minutes to complete)"
        Get-DatasourceDiskAllocation -Datasource $ds -CalculateSize
        Set-DatasourceDiskAllocation -Datasource $ds -ProtectionGroup $pg

        write-host "Setting replica creation method..."
        Set-ReplicaCreationMethod -ProtectionGroup $pg -NOW

        write-host "Committing PG"
        Set-ProtectionGroup $pg
    }
}

function WaitForIRToComplete
{
    param($waittime)

    write-host "Wait for IR to complete"

    $val = $waittime/30
    while($val -gt 0)
    {
        Write-Host "Wait for IR to complete... $val"
        Start-Sleep 30
        $val--
    }
}

Connect-DPMServer -DPMServerName $dpmname
CreatePG $dpmname $psname $dsname $poname $pgname
WaitForIRToComplete 120

 

---------------------------------------------- End of Script ----------------------------------------------

 

- Mukul Singh Shekhawat, Balaji Hariharan

DPM PowerShell Script -- invoking a Consistency Check


By design, DPM 2007 should be ‘fire and forget’ – meaning that after initial replication, data changes will automatically and routinely replicate. 

However, due to a variety of external factors, the data set may become inconsistent.  Usually, DPM will correct itself within a replication cycle.  Depending on how often you have configured replication, this may not be soon enough.  One resolution to this is to have one’s management solution (e.g. System Center Operations Manager) see the alert that a data set is inconsistent and then automatically run this ‘consistency check’ to revalidate the data in a more timely manner.

Attached is a sample PowerShell script to invoke a consistency check on a DPM data source.

# This script does a consistency check on a file-system data source. Initialize the parameters below first, giving values appropriate for your environment. You can customize this easily to your needs. Save the attached file as a .ps1 file and invoke it through the DPM Management Shell.

$dpmname = "DPM Server Name";
$pgname = "My PG";
$dsname = "G:\";

function StartDatasourceConsistencyCheck
{
    param($dpmname, $pgname, $dsname, $isheavyweight)

    write-host "Start consistency check on $dsname "

    trap{"Error in execution... $_";break}
    &{
        write-host "Getting protection group $pgname in $dpmname..."
        $clipg = Get-ProtectionGroup $dpmname | where { $_.FriendlyName -eq $pgname }

         if($clipg -eq $null)   # no matching protection group was found
          {
              Throw "No PG found"
          }

        write-host "Getting $dsname from PG $pgname..."
        $ds = Get-Datasource $clipg | where { $_.logicalpath -eq $dsname }

        if($ds -eq $null)   # the data source was not found in the group
         {
              Throw "No Data Source found"
         }

        if( $isheavyweight -ne "true")
        {
            write-host "Starting light weight consistency check..."
            $j = Start-DatasourceConsistencyCheck -Datasource $ds
            $jobtype = $j.jobtype
            if(("Validation") -notcontains $jobtype)
                {
                    Throw "Consistency check job not triggered"
                }
            while (! $j.hascompleted ){ write-host "Waiting for $jobtype job to complete..."; start-sleep 5}
            if($j.Status -ne "Succeeded") {write-host "Job $jobtype failed..." }
            Write-host "$jobtype job completed..."
        }
        else
        {
            write-host "Starting Heavy weight consistency check..."
            $j = Start-DatasourceConsistencyCheck -Datasource $ds -HeavyWeight
            $jobtype = $j.jobtype
            if(("Validation") -notcontains $jobtype)
                {
                    Throw "Consistency check job not triggered"
                }
            while (! $j.hascompleted ){ write-host "Waiting for $jobtype job to complete..."; start-sleep 5}
            if($j.Status -ne "Succeeded") {write-host "Job $jobtype failed..." }
            Write-host "$jobtype job completed..."
        }

    }
}

#Example for usage

StartDatasourceConsistencyCheck $dpmname $pgname $dsname "false"
StartDatasourceConsistencyCheck $dpmname $pgname $dsname "true"

-- Mukul

CLI Script: To remove all datasources in inactive protection state


The attached script removes all inactive datasources under a given DPM server. It provides options to remove inactive datasources on disk/tape/both. Save this as a .ps1 file and invoke it from inside the DPM Management Shell. Please contact us if you need any further assistance in running the script or face any issues.
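
The core idea of the script reduces to a loop over the inactive data sources. The minimal sketch below removes only the disk-based inactive protection; the attached script adds the disk/tape/both options and confirmation prompts.

```powershell
# Minimal sketch: remove inactive disk-based protection for every inactive data source
$inactive = @(Get-Datasource (&hostname) -Inactive)
foreach ($ds in $inactive) {
    Write-Host "Removing inactive disk replica for $($ds.Name) on $($ds.ProductionServerName)"
    Remove-DatasourceReplica -Datasource $ds -Disk
}
```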

 

- Madhan S


CLI Script: To recover a DPM replica volume from data stored in tape


When a disaster occurs and you lose your replica volume for any data source, you can re-seed the replica from the backed-up data you have on tape. This little script initializes the replica from a tape recovery point. The parameters have to be customized for your environment first, as given below.

Save the attached file as a .ps1 file and invoke through the DPM Management Shell.
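
In outline, the script picks a tape recovery point for the data source and recovers it onto the DPM replica volume. The fragment below is only a rough sketch with placeholder names; the exact New-RecoveryOption parameters for recovering to a replica from tape vary per data source type and DPM version, so check Get-Help New-RecoveryOption -Full before adapting it.

```powershell
# Outline: locate the latest recovery point for a data source (placeholder names);
# to re-seed the replica, pick one whose data resides on tape
$pg = Get-ProtectionGroup (&hostname) | where { $_.FriendlyName -eq "MyPG" }
$ds = Get-Datasource $pg | where { $_.Name -eq "G:\" }
$rp = @(Get-RecoveryPoint $ds | sort RepresentedPointInTime)[-1]
# Then build a recovery option that targets the replica volume and start recovery:
#   $ro = New-RecoveryOption ...      # see Get-Help for the per-datasource parameters
#   Recover-RecoverableItem -RecoverableItem $rp -RecoveryOption $ro
```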

 

- Madhan S

DPM CLI Tips 'n Tricks: Powershell Basics


The scripts posted in this blog require knowledge of Powershell and DPM cmdlets. So we thought we would present some tips ‘n tricks to become power users!

 

Introduction

The first difference between a normal command line interface (typically the Windows cmd.exe environment) and Powershell is that Powershell is a full-blown .NET environment and is object oriented. In other words, it treats all input/output parameters as objects, and when you pipe one command to another, objects flow instead of plain text as in older shells. Similarly, the DPM Management Shell also takes inputs in the form of objects; for example, DPM, Protection Group, Datasource, Library, Tape, Tape Drive, Disk etc. are all .NET objects. Let's now move on to more interesting stuff.
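
A quick way to see the difference: in the pipeline below, Process objects (not text) flow from one command to the next, so the second stage can filter on a typed property.

```powershell
# Objects flow through the pipe; WorkingSet is a numeric property, not parsed text
Get-Process | Where-Object { $_.WorkingSet -gt 50MB } | Select-Object Name, Id, WorkingSet
```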

 

Finding cmdLets (Get-Command or gcm)

For knowing all the cmdLets present in the Powershell instance, use Get-Command (or gcm in short). When run, the output will be something like this:

 

PS D:\ > Get-command

 

CommandType   Name                  Definition

-----------       ----                   ----------

Cmdlet              Add-Content       Add-Content [-Path] <String[...

Cmdlet              Add-DPMDisk       Add-DPMDisk [-DPMDisk] <Disk...

Cmdlet              Add-History         Add-History [[-InputObject] ...

Cmdlet              Add-Member        Add-Member [-MemberType]

Cmdlet              Add-PSSnapin      Add-PSSnapin [-Name] <String...

Cmdlet              Add-Tape            Add-Tape [-DPMLibrary] <Libr...

Cmdlet              Clear-Content      Clear-Content [-Path] <Strin...

 

Finding DPM cmdLets (Get-DPMCommand)

Similarly for getting all the cmdLets belonging to only DPM, use the Get-DPMCommand.

 

PS D:\ > Get-DPMCommand

 

CommandType  Name                      Definition

-----------      ----                        ----------

Cmdlet            Add-DPMDisk             Add-DPMDisk [-DPMDisk] <Disk...

Cmdlet            Add-Tape                 Add-Tape [-DPMLibrary] <Libr...

Cmdlet            Connect-DPMServer   Connect-DPMServer [-DPMServe...

Cmdlet            Disable-DPMLibrary     Disable-DPMLibrary [-DPMLibr...

Cmdlet            Disable-TapeDrive      Disable-TapeDrive [-TapeDriv...

 

How to use a cmdlet?

 

1. Understanding the cmdlet parameters (Get-Command and Format-List)

There are two parts to understanding a cmdlet. First, to look at the various input parameters and various usages, you can use the Get-Command itself on a specific cmdlet, in the following fashion – Get-Command <cmdlet> | format-list (or gcm <cmdlet> | fl, in short).

 

PS C:\> gcm Set-Alias | fl

 

Name                     : Set-Alias

CommandType         : Cmdlet

Definition                : Set-Alias [-Name] <String> [-Value] <String> [-Description <String>] [-Option <ScopedItemOptions>] [-PassThru] [-Scope <String>] [-Force] [-Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-ErrorVariable <String>] [-OutVariable <String>] [-OutBuffer <Int32>] [-WhatIf] [-Confirm]

 

 

2. Reading Help documentation for a cmdlet (Get-Help or help)

For reading the Help documentation for any cmdlet use the following (Get-Help or help)

 

Get-Help <cmdlet>

 

For example:

 

PS D:\ > Get-Help Add-Tape

 

NAME

    Add-Tape

 

SYNOPSIS

    Adds a tape to a DPM library.

 

 

SYNTAX

    Add-Tape [-DPMLibrary] <Library> [-Async] [-JobStateChangedEventHandler <Jo

    bStateChangedEventHandler>] [<CommonParameters>]

…..

 

3. Getting detailed help, and seeing sample scripts

For seeing additional information on each of the cmdLets, you can use the –Full or –Detailed parameters in Get-Help.

 

e.g.    Get-Help Add-Tape -Detailed

Get-Help Add-Tape -Full

 

DPM Object Properties (Get-Member or gm)

 

The DPM cmdlets are logically divided into three groups: Protection (Backup), Recovery, and Management related (Library & Disk). All the tasks that can be done from the DPM UI in these areas can be done with the cmdlets in these areas. In fact, the CLI provides additional features beyond the UI in some scenarios.

 

You can get the member properties of any object by piping it to Get-Member:

 

For example:

 

$lib = Get-DPMLibrary -DPMServerName "Testing Server Name"

$lib | Get-Member

 

This gives all the members of $lib (a Library object).

 

PS D:\ > $lib | get-member

 

TypeName: Microsoft.Internal.EnterpriseStorage.Dls.UI.ObjectModel.LibraryMan

agement.Library

 

Name              MemberType    Definition

----                ----------       ---------

ClearCache       Method            System.Void ClearCache()

Dispose            Method            System.Void Dispose()

 

We hope this was useful, feel free to add comments as feedback to this post. And in the next version, we can add more tricks!

 

- Mukul Shekhawat, Balaji Hariharan

CLI Script: Auto re-running consistency checks


Some customers hit non-DPM issues, like network problems, because of which consistency check (CC) jobs failed too often. For their benefit, we have added a script that retries CC until it succeeds. Note: in some cases CC can impact the protected computer's performance, so use this script appropriately.
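
Reusing the Start-DatasourceConsistencyCheck pattern from the earlier post, the retry idea reduces to a loop like the sketch below (it assumes $ds was obtained as shown there; consider adding a retry limit and a delay between attempts).

```powershell
# Keep running a consistency check until it reports success (minimal sketch)
do {
    $j = Start-DatasourceConsistencyCheck -Datasource $ds
    while (-not $j.HasCompleted) { Start-Sleep 30 }
    if ($j.Status -ne "Succeeded") { Write-Host "CC failed, retrying..." }
} while ($j.Status -ne "Succeeded")
```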

 

- Vikash Jain

DPM CLI: Quick reference help


While using the DPM Management Shell, one would like to have a list of all cmdlets and their short help for quick reference. We also heard it would be useful to group the various cmdlets by the function they perform, e.g. Library, Disk, Recovery or Protection related. Keeping this in mind, we have published this quick reference based on the DPM 2007 Management Shell. You could print it out and keep it handy while creating scripts. By the way, new cmdlets or parameters may be introduced in future versions or service packs of DPM, so you might want to update this list manually.
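
You can also generate a rough grouping yourself by splitting each cmdlet name at the Verb-Noun hyphen; a simple sketch:

```powershell
# Group DPM cmdlets by their noun part and list the groups by size
Get-DPMCommand | Group-Object { ($_.Name -split "-")[1] } |
    Sort-Object Count -Descending | Format-Table Name, Count -AutoSize
```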

 

- Mukul Shekhawat, Balaji Hariharan

DPM CLI Tips 'n Tricks: Powershell Basics - Part II


Tab Completion

The most fascinating part of DPM Management Shell is Tab Completion of cmdlets. By learning the common verbs in Powershell (like Get, Set, Start etc.), a Windows or an Exchange admin can easily use that knowledge and learn the DPM cmdlets. This is because the same verbs are used in DPM Management Shell too.

 

For example: To get the list of protected servers backed up by a DPM server, one just types Get-P and keeps pressing Tab. Powershell then suggests the various cmdlets, and you can choose the one you need. The ones you would see in this example include Get-Process, Get-ProductionCluster, Get-ProductionServer etc.

 

In addition, you can tab-complete the various parameter names in the same way, by typing a "-" after the cmdlet name and pressing Tab.

 

 

Examples of cmdlet usage (Get-Help <cmdlet> -example)

 

Getting directly to the example usage of a cmdlet can be done easily with optional parameters in Get-Help - Get-help <cmdletname> -example. This will directly print only the example usages of the cmdlet:

 

For example:

 

PS D:\> Get-help Get-ProtectionGroup -example

 

NAME

 Get-ProtectionGroup

 

SYNOPSIS

 Retrieves the list of protection groups on the DPM server.

 

EXAMPLE 1

 Get-ProtectionGroup -DPMServerName TestingServer

 

This command returns the protection group on a DPM server.

 

Getting only the cmdlet syntax (Get-Command <cmdlet> -syntax)

Another quick way to get help on a cmdlet's syntax is by typing:

 

PS D:\> Get-Command Get-Datasource -syntax

 

Get-Datasource [-DPMServerName] <String> [-Inactive] [-Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-ErrorVariable <String>] [-OutVariable <String>] [-OutBuffer <Int32>]

 

Get-Datasource [-ProductionServer] <ProductionServer> [-Async] [-Inquire] [-Replica] [-Tag <Object>] [-Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-ErrorVariable <String>] [-OutVariable <String>] [-OutBuffer <Int32>]

 

Get-Datasource [-ProtectionGroup] <ProtectionGroup> [-Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-ErrorVariable <String>] [-OutVariable <String>] [-OutBuffer <Int32>]

 

Get-Datasource [-DPMServerName] <String> [-SearchPath] <String> [[-ProductionServerName] <String>] [-Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-ErrorVariable <String>] [-OutVariable <String>] [-OutBuffer <Int32>]

 

 

Using the object member properties (Get-Member)

With the help of Get-Member you can discover the properties and methods of an object.

 

PS D:\> $pg = Get-ProtectionGroup "MyDPMServerName"

PS D:\> $pg | get-member

 

TypeName: Microsoft.Internal.EnterpriseStorage.Dls.UI.ObjectModel.OMCommon.ProtectionGroup

 

Name                                  MemberType    Definition

----                                    ----------       ----------

AddProtectableObject             Method           System.Void AddProtecta...

AddProtectionOptions             Method           System.Void AddProtecti...

.

.

FriendlyName                        Property           System.String FriendlyN...

InitializationType                   Property           Microsoft.Internal.Ente...

 

Now these properties can be used to filter and get a specific PG.

 

For Example:

 

$clipg = Get-ProtectionGroup $dpmname | where { $_.FriendlyName -eq $pgname }

 

 

CLI Help Updates and Errata

Any additional help information or errata gets updated per cmdlet and is available at http://go.microsoft.com/fwlink/?LinkId=95130.

 

- Mukul Shekhawat, Balaji Hariharan

CLI Script: To generate status reports

The attached script generates a comprehensive report of the status of all backups and the storage utilization for each Exchange Server that is protected by DPM, on a per-storage-group (SG) basis.

CLI Script: DPM status report


The following script generates a comprehensive report of failures and the storage utilization for each Exchange Server that is protected by DPM on a per SG basis.


CLI Script: Script to generate DPM configuration report


The attached script generates a report of the mapping between each Exchange Server, the backup Protection Groups and its associated SGs.

 

Krishna Mangipudi

Hyper-V Protection with DPM 2010 Beta - How to automatically protect new Virtual Machines


System Center Data Protection Manager 2010

We had a great question come into the DPM Newsgroup recently. How do I automatically protect new VMs added to a Hyper-V host using DPM?

In any virtualized environment, adding new VMs is a frequent operation. While backup administrators can protect an entire Hyper-V host using the DPM Management Console, the protection group has to be modified manually to include the new virtual machines that have come up on the Hyper-V host.

This blog post aims to make this an automated task by adding new virtual machines to protection on the given Hyper-V hosts using simple PowerShell scripts. These scripts are expected to work with DPM 2010 Beta. Using the information in this blog post, you should be able to quickly put together a script that enables auto-protection of your Hyper-V hosts.

Download Scripts :  AddNewClusteredVM.ps1  and AddNewStandAloneVM.ps1

Note: These scripts work on an existing protection group and do not create a fresh protection group.

 

The attached scripts automate the task of adding any new virtual machines recognized in the Hyper-V hosts protected by the DPM server into existing protection groups. There are different scripts for Hyper-V clusters (AddNewClusteredVM.ps1) and standalone Hyper-V hosts (AddNewStandAloneVM.ps1). You would still use the script for standalone servers to automatically protect the non-clustered virtual machines of any Hyper-V host that is part of a cluster.

 

Let us now walk you through the scenario and the scripts...

Walk Through

Protecting standalone Hyper-V hosts

The script for standalone servers (AddNewStandAloneVM.ps1) takes as input the following two values in order:

Variable: Server Name
Explanation: Fully Qualified Domain Name of the Hyper-V host server.
Example: hyperv01.contoso.com

Variable: Protection Group
Explanation: Name of the existing protection group to which we are adding the new virtual machines.
Example: Protection Group 3

 

The script performs the following tasks:

1. Takes FQDN of protected server and name of protection group as input.

2. Searches for the protected server and the protection group.

3. Runs inquiry on the Hyper-V host and obtains the list of unprotected virtual machines.

4. Adds the obtained list of virtual machines to the protection group.

5. Saves the changes to the protection group and exits.
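
The steps above can be sketched roughly as follows. This is a hypothetical reconstruction, not the attached script itself: Get-ModifiableProtectionGroup applies to DPM 2010, and the Protected property filter is an assumption.

```powershell
param($psFqdn, $pgName)
# Steps 1-2: locate the protected server and the protection group
$ps  = Get-ProductionServer (&hostname) | where { ($_.MachineName,$_.Name) -contains $psFqdn }
$pg  = Get-ProtectionGroup  (&hostname) | where { $_.FriendlyName -eq $pgName }
$mpg = Get-ModifiableProtectionGroup $pg
# Step 3: run inquiry on the Hyper-V host and keep only unprotected virtual machines
$newVMs = Get-Datasource -ProductionServer $ps -Inquire | where { -not $_.Protected }
# Steps 4-5: add them to the group and commit the changes
foreach ($vm in $newVMs) { Add-ChildDatasource -ProtectionGroup $mpg -ChildDatasource $vm }
Set-ProtectionGroup $mpg
```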

 

Example usage:

This example takes the following values as inputs:

hyperv01.contoso.com – replace this with the name of your Hyper-V host

dpm-server01.contoso.com – replace this with the name of your DPM server

PS C:\Program Files\Microsoft DPM\DPM\bin> .\AddNewStandAloneVM.ps1 hyperv01.contoso.com "Protection Group 3"

Name                                                     Domain

----                                                         ------

dpm-server01.contoso.com                CONTOSO.COM

Running Inquiry on hyperv01.contoso.com

Adding data source Backup Using Child Partition Snapshot\StandaloneVM to Protection Group 3

Adding new Hyper-V data sources to Protection Group 3

Exiting from script

Protecting Hyper-V clusters

The script for clustered servers (AddNewClusteredVM.ps1) takes as input the following two values in order:

Variable: Cluster Name
Explanation: Fully Qualified Domain Name of the Hyper-V cluster.
Example: csv01.contoso.com

Variable: Protection Group
Explanation: Name of the existing protection group to which we are adding the new virtual machines.
Example: Protection Group 2

 

The script performs the following tasks:

1. Takes FQDN of protected cluster and name of protection group as input.

2. Searches for the protected cluster and the protection group.

3. Runs inquiry on the cluster to get the list of resource groups.

4. Runs parallel inquiry for each resource group and obtains the list of unprotected virtual machines under them.

5. Adds the unprotected virtual machines to the protection group.

6. Saves the changes to the protection group and exits.

The difference in AddNewClusteredVM.ps1 is in steps 3 and 4: we first run inquiry on the cluster to get the list of resource groups, and then run inquiry on the resource groups themselves. Unlike the standalone case, the inquiries on the resource groups run in parallel; querying each resource group of a cluster sequentially would add a noticeable performance overhead that standalone servers do not incur.
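A simplified sketch of that clustered flow follows. For clarity this version runs the per-resource-group inquiries sequentially, whereas the shipped script parallelizes them; names are placeholders:

```powershell
# Minimal sketch of the clustered flow (sequential inquiries for clarity;
# the shipped AddNewClusteredVM.ps1 runs step 4 in parallel).
$dpmServer   = "dpm-server01.contoso.com"
$clusterName = "csv01.contoso.com"
$pgName      = "Protection Group 2"

# 1-2. Locate the protected cluster and the protection group.
$cluster = Get-ProductionServer -DPMServerName $dpmServer |
           Where-Object { $_.ServerName -eq $clusterName }
$pg = Get-ProtectionGroup -DPMServerName $dpmServer |
      Where-Object { $_.FriendlyName -eq $pgName }

# 3. Inquiry on the cluster returns its resource groups.
$resourceGroups = Get-Datasource -ProductionServer $cluster -Inquire

# 4. Inquiry on each resource group yields the virtual machines under it.
$newVMs = foreach ($rg in $resourceGroups) {
    Get-ChildDatasource -ChildDatasource $rg -Inquire |
        Where-Object { -not $_.Protected }
}

# 5-6. Add and commit, exactly as in the standalone case.
$mpg = Get-ModifiableProtectionGroup $pg
foreach ($vm in $newVMs) {
    Add-ChildDatasource -ProtectionGroup $mpg -ChildDatasource $vm
}
Set-ProtectionGroup $mpg
```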

Example usage:

This example takes the following values as input:

csv01.contoso.com – replace this with the name of your Hyper-V cluster

dpm-server01.contoso.com – replace this with the name of your DPM server

PS C:\Program Files\Microsoft DPM\DPM\bin> .\AddNewClusteredVM.ps1 csv01.contoso.com "Protection Group 2"

Name                                                     Domain

----                                                        ------

dpm-server01.contoso.com                CONTOSO.COM

Running Inquiry on csv01.contoso.com

Running Inquiry on Cluster Group

Running Inquiry on Available Storage

Running Inquiry on SQLLoadVM

Running Inquiry on SharepointLoadVM

Running Inquiry on Win7VM

Waiting for inquiry to complete 0 item(s) obtained...

.

Waiting for inquiry to complete 1 item(s) obtained...

.

.

Waiting for inquiry to complete 5 item(s) obtained...

Inquiry listed 5 item(s)...

Adding data source Backup Using Child Partition Snapshot\Win7VM to Protection Group 2

Adding new Hyper-V data sources to Protection Group 2

Exiting from script

You can now write a batch file that calls the above scripts one after the other, and schedule it with the Windows Task Scheduler to run as frequently as needed.
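For example, a small wrapper like the following can launch both scripts inside the DPM Management Shell and be registered as a scheduled task. All paths, the console-file name, and the task settings here are illustrative; adjust them to your installation:

```bat
REM AddNewVMs.cmd - run both auto-protection scripts via the DPM Management Shell.
REM Paths and the dpmshell console-file name are examples; verify them on your server.
set DPMBIN=C:\Program Files\Microsoft DPM\DPM\bin
powershell.exe -PSConsoleFile "%DPMBIN%\dpmshell.psc1" -Command "& '%DPMBIN%\AddNewStandAloneVM.ps1' hyperv01.contoso.com 'Protection Group 3'"
powershell.exe -PSConsoleFile "%DPMBIN%\dpmshell.psc1" -Command "& '%DPMBIN%\AddNewClusteredVM.ps1' csv01.contoso.com 'Protection Group 2'"

REM Register the batch file to run nightly at 01:00 (run once, elevated).
REM schtasks /Create /TN "DPM Auto-Protect VMs" /TR "C:\Scripts\AddNewVMs.cmd" /SC DAILY /ST 01:00
```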

Important:

· Shared disks that may be listed under the resource groups of your Hyper-V cluster are not Hyper-V data sources, and are not considered for automatic addition using this script.

· Any new virtual machines that are added to a protection group are scheduled for immediate replica creation, overriding the existing protection group setting. You may modify the respective script to change this behavior; refer to the help for the relevant cmdlet.

 

-- Angad Pal Singh | DPM Team (Author)

Recover-Recoverable Item


It's 6:00 PM.  You were supposed to leave at 5:00 PM.  Your boss catches you as you walk out the door and says he needs a file restored for a presentation first thing in the morning.  You say, “No problem, I will restore it real quick before I head home.”  You head to the DPM server and try to open the console.  To your dismay, the console will not open.  You inform your boss of the situation.  He asks if there is any way at all to restore the file, because he cannot wait until the morning.  You remember that when you installed DPM, it required PowerShell.  Maybe there is a way to restore files from the DPM Management Shell?  You are exactly right!  But you have never done it.

This blog will walk you through the steps of doing a simple file restore from PowerShell.

From previous blogs you learned some of the fundamentals of PowerShell, specifically arrays and index values.  With this cmdlet you will put that knowledge to the test.

This is the command as listed in Technet:

Recover-RecoverableItem [-RecoverableItem] <RecoverableObject[]> [-RecoveryOption] <RecoveryOptions> [-RecoveryPointLocation <RecoverySourceLocation[]>] [-JobStateChangedEventHandler <JobStateChangedEventHandler>] [-RecoveryNotification <Nullable`1>] [-Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-ErrorVariable <String>] [-OutVariable <String>] [-OutBuffer <Int32>]

 

Looks too difficult to attempt at 6:15 PM, right?  Wife is waiting for you to eat dinner, kids need their baths, etc… Sound familiar?

Let’s simplify this cmdlet; you will need some information to do the recovery.

What you will need:

1.       Recoverable Object

2.       Recovery Options

3.       Recovery Point Location

First we need to get a recoverable object. Easy enough, right?  Wrong: it requires three variables, and those have to be indexed into the Get-RecoveryPoint cmdlet, so technically you get several cmdlets for the price of one in this blog.

You will be creating these variables:

1.       $pg = Get-ProtectionGroup -DPMServerName dpmserver1

a.       This will return an array; the first protection group listed has index 0, the second has index 1, and so on.

2.       $ds = Get-Datasource -ProtectionGroup $pg[<array index from step 1>]

a.       This will also return an array.

3.       $rp = Get-RecoveryPoint -Datasource $ds[<array index from step 2>]

4.       $gr = Get-RecoverableItem -RecoverableItem $rp[<index of the recovery point you want>]

Once you have created the variables above, you can determine which recovery point you want to restore.  No, we are not done yet, but we now have most of the information necessary to perform the recovery.  Just a few more variables and we are done.
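Put together, the chain looks like this. The server name and index values are examples; pick the indexes that match your environment (and note that Get-RecoverableItem may need -BrowseType to enumerate files inside the recovery point):

```powershell
# Walk down from protection group to recoverable item (indexes are examples).
$pg = Get-ProtectionGroup -DPMServerName dpmserver1
$ds = Get-Datasource -ProtectionGroup $pg[0]       # data sources in the first group
$rp = Get-RecoveryPoint -Datasource $ds[0]         # recovery points for the first data source
$gr = Get-RecoverableItem -RecoverableItem $rp[0] -BrowseType Child

$rp | Format-Table RepresentedPointInTime          # inspect timestamps to pick the right point
```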

Let’s look at recover-recoverableitem again.  This time we will look at the bare minimum that you will need to get your bosses file restored so you can get out the door to go home.

Recover-RecoverableItem [-RecoverableItem] <RecoverableObject[]> [-RecoveryOption] <RecoveryOptions> [-RecoveryPointLocation <RecoverySourceLocation[]>]

 

Recover-RecoverableItem requires three pieces of information.  We already have the recoverable object, via the array index value from $rp above.

We also need recovery options ($rop).  This is another variable; it holds the options you specify, and when the item is recovered those options determine whether to overwrite the file, where to restore it, and so on.

Once we have the $rop variable we can finally restore the file.
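A minimal sketch of those last steps follows, assuming the $gr variable from the walkthrough above. The target server and parameter choices are examples only; New-RecoveryOption has many parameter sets, so check its help for your scenario:

```powershell
# Build recovery options: restore a file-system item to its original location,
# overwriting any existing copy (example parameter choices).
$rop = New-RecoveryOption -TargetServer fileserver1.contoso.com `
        -RecoveryLocation OriginalServer -FileSystem `
        -OverwriteType Overwrite -RecoveryType Restore

# Kick off the restore of the chosen item from the chosen recovery point.
Recover-RecoverableItem -RecoverableItem $gr[0] -RecoveryOption $rop
```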

Run Recover-RecoverableItem with those values and the restore job starts, putting the file back where it came from.

Walt Whitman

Support Escalation Engineer

Microsoft System Center Support

How to create and delete recovery points for DPM via PowerShell


Knowing how to manipulate DPM by way of PowerShell can come in handy in many situations. For example, I had a case where I was troubleshooting a crashing DPM console, and there was a concern as to whether recovery points were being made for specific protection groups. Using PowerShell we could not only verify this, but also create a recovery point for any protection group we wished. Another example: you may want to delete some recovery points to reclaim disk space.  Recovery points cannot be deleted from the GUI, so you have to use PowerShell to accomplish that goal.

This video covers how to use PowerShell to both create and delete recovery points:
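As a quick reference, the commands at the heart of it look roughly like this. The indexes are examples, and the exact parameter names for the disk recovery point option may differ slightly by DPM version, so verify against the cmdlet help:

```powershell
# Create a new disk-based recovery point for a data source (indexes are examples).
$pg = Get-ProtectionGroup -DPMServerName dpmserver1
$ds = Get-Datasource -ProtectionGroup $pg[0]
New-RecoveryPoint -Datasource $ds[0] -Disk -DiskRecoveryPointOption WithSynchronize

# Delete the oldest recovery point to reclaim space. This is irreversible,
# so inspect the list of recovery points before removing anything.
$rp = Get-RecoveryPoint -Datasource $ds[0]
Remove-RecoveryPoint -RecoveryPoint $rp[0] -ForceDeletion
```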


Shane Brasher | Senior DPM Support Escalation Engineer

Checking LDM database space usage


When should I be concerned with this?

Usually not at all: the DPM 2010 RTM release checks actual LDM consumption and provides an early warning. Nor will DPM 2010 consume all LDM space; it reserves room for re-consolidation so that extents can be reduced should that be needed.  However, …

…on large-scale storage migrations you may want to check what is still possible before migrating hundreds of volumes. For instance, with 300 data sources and zero extents you can only migrate 193 data sources at a time, because old and new volumes must co-exist for a while. At that point, the old migrated volumes have to be deleted to free up LDM space.
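The 193 figure falls out of simple record arithmetic. The sketch below assumes roughly 2960 usable records in the LDM database, about 3 records per simple (zero-extent) volume, and 2 volumes per data source (replica plus recovery point volume); these counts are approximations, so rely on the companion script for real numbers:

```powershell
# Back-of-the-envelope LDM headroom calculation (record counts are approximations).
$ldmRecords    = 2960   # approximate usable records in the LDM database
$recordsPerVol = 3      # a simple, zero-extent volume consumes about 3 records
$volsPerSource = 2      # replica volume + recovery point volume per data source
$protected     = 300    # data sources already in the storage pool

$used     = $protected * $volsPerSource * $recordsPerVol                       # 1800 records
$headroom = [math]::Floor(($ldmRecords - $used) / ($volsPerSource * $recordsPerVol))
"Can migrate about $headroom data sources at a time"                           # about 193
```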

…upgrading DPM 2007 to DPM 2010 keeps the existing storage pool, which may already be in a less desirable state due to the large number of extents created during the DPM 2007 lifetime. In a bad case you may want to opt for migration rather than an in-place upgrade.

 

How to check what is still available?

I would like to point to a companion script by Sid Ramesh that reports how many volumes and extents are in use and how many more data sources can be added.
