Reinflating stubs on the Celerra from secondary storage


After looking around the web I couldn’t see any obvious way to reinflate stubbed files from secondary storage back to the primary filesystem on the Celerra.

However, the solution is quite simple. When you delete the dhsm connection from a file system you can opt to have the Celerra scan and move all the stubbed data back to the primary storage.

If you’re planning on re-archiving the data to new storage you can do both at the same time.
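
In outline, the recall side of this is just a connection delete with the recall policy switched on (placeholder names here; the full worked example with real values is further down):

[root@mycelerra bin]# /nas/bin/fs_dhsm -connection <filesystem> -delete <cid> -recall_policy yes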

In this setup we have a Rainfinity, a Centera and a CIFS-based archive store. The aim is to reinflate from the Centera and re-archive to the CIFS storage without a) filling up the primary filesystem or b) triggering an auto-extension of the primary filesystem.
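
Before starting it’s worth checking how full the primary filesystem is and whether auto-extension is enabled. A minimal check from the Control Station, assuming the filesystem is mounted on server_2 (adjust the Data Mover name to suit):

[root@mycelerra bin]# /nas/bin/nas_fs -info myfilesystem
[root@mycelerra bin]# /nas/bin/server_df server_2 myfilesystem

nas_fs -info shows the filesystem’s auto-extension settings, and server_df gives the current % used, which is what the Rainfinity trigger below is based on.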

Here’s an example of a filesystem with a single secondary archive storage (a Centera in this case, with traffic looping through the Rainfinity):

[root@mycelerra bin]# /nas/bin/fs_dhsm -connection myfilesystem -info
myfilesystem:
state                = enabled
offline attr         = on
popup timeout        = 0
backup               = offline
read policy override = none
log file             = on
max log size         = 10MB
 cid                 = 0
   type                 = HTTP
   secondary            = http://myrainfinityserver.mydomain.com/fmroot
   state                = enabled
   read policy override = none
   write policy         =        full
   user                 = rainfinityuser
   options              = httpPort=8000 cgi=n

It loops through the Rainfinity because the Celerra is unable to talk to the Centera directly; with CIFS storage it can, cutting the Rainfinity out of the chain.

Now to perform the migration:

On the Rainfinity:

1) Create a new policy with the new secondary storage as the destination

2) Disable the existing Rainfinity schedule that archives to the Centera

3) Create a new Rainfinity schedule that archives to the new secondary storage. Select “Capacity Used” as the trigger to start the archiving. You’ll want to set the percentage about 10% higher than the current filesystem utilization, so if the filesystem is 26% full, set the trigger at about 35-40%.

4) Manually run this new schedule against the filesystem. This should automatically create a new cid (so you’ll have two attached to the same filesystem):

[root@mycelerra bin]# /nas/bin/fs_dhsm -connection myfilesystem -info
myfilesystem:
state                = enabled
offline attr         = on
popup timeout        = 0
backup               = offline
read policy override = none
log file             = on
max log size         = 10MB
 cid                 = 0
   type                 = HTTP
   secondary            = http://myrainfinityserver.mydomain.com/fmroot
   state                = enabled
   read policy override = none
   write policy         =        full
   user                 = rainfinityuser
   options              = httpPort=8000 cgi=n
 cid                 = 1
   type                 = CIFS
   secondary            = \\mycifsshare.mydomain.com\mynewarchive$\
   state                = enabled
   read policy override = none
   write policy         =        full
   local_server         = mycelerra.mydomain.com
   admin                = mydomain.com\mycifsuser
   wins                 =

Notice that cid=0 is the old archive storage and cid=1 is the new storage.

Now we can delete the dhsm connection for cid=0 with the recall policy set to yes, so that the stubbed data is recalled back from the old secondary storage:

[root@mycelerra bin]# /nas/bin/fs_dhsm -connection myfilesystem -delete 0 -recall_policy yes
myfilesystem:
state                = enabled
offline attr         = on
popup timeout        = 0
backup               = offline
read policy override = none
log file             = on
max log size         = 10MB
 cid                 = 0
   type                 = HTTP
   secondary            = http://myrainfinityserver.mydomain.com/fmroot
   state                = recallonly [ Migration: ON_GOING ]
   read policy override = none
   write policy         =        full
   user                 = rainfinityuser
   options              = httpPort=8000 cgi=n
 cid                 = 1
   type                 = CIFS
   secondary            = \\mycifsshare.mydomain.com\mynewarchive$\
   state                = enabled
   read policy override = none
   write policy         =        full
   local_server         = mycelerra.mydomain.com
   admin                = mydomain.com\mycifsuser
   wins                 =
 Done

As you can see, the “state” of the connection has changed from “enabled” to “recallonly”. This means that no more data will be archived to the old secondary storage and that the stubbed data is being recalled back to the primary. You can check on the status by using:

[root@mycelerra bin]# /nas/bin/fs_dhsm -connection myfilesystem -info
myfilesystem:
state                = enabled
offline attr         = on
popup timeout        = 0
backup               = offline
read policy override = none
log file             = on
max log size         = 10MB
 cid                 = 0
   type                 = HTTP
   secondary            = http://myrainfinityserver.mydomain.com/fmroot
   state                = recallonly [ Migration: ON_GOING ]
   read policy override = none
   write policy         =        full
   user                 = rainfinityuser
   options              = httpPort=8000 cgi=n

There are also some log files you can monitor at the root of the filesystem (e.g. \\mycifsshare.mydomain.com\c$\myfilesystem\), named migErr_vdmname_myfilesystem and migLog_vdmname_myfilesystem. The error file will contain any filenames which have failed to be recalled. The log file contains a running log of the recall, including errors.
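
If you want to keep an eye on those files from a Windows box, something like this works against the admin share (just a sketch, using the same example paths and VDM name as above):

C:\> type \\mycifsshare.mydomain.com\c$\myfilesystem\migErr_vdmname_myfilesystem
C:\> findstr /i "error" \\mycifsshare.mydomain.com\c$\myfilesystem\migLog_vdmname_myfilesystem

The first shows any files that failed to recall; the second picks out error lines from the running log.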

Once the files have been recalled the connection (cid) will be removed. If there is an issue recalling any files the migration status will change to ERROR (meaning there was a problem but the migration is continuing) or FAILED (meaning the migration has had at least one error and has stopped).

As the primary filesystem fills up with the recalled data the % used will grow until it hits the threshold set in Rainfinity to trigger an archive (40% in our case). Fortunately the archiving process is considerably faster than the recall process, so the data will be recalled and then re-archived repeatedly until all of it has been moved from one secondary storage to the other.
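
A simple way to watch this recall/re-archive cycle from the Control Station is to poll the filesystem usage and the migration state every few minutes. A rough sketch, assuming bash on the Control Station and that the filesystem is mounted on server_2:

while true; do
    date
    /nas/bin/server_df server_2 myfilesystem                               # primary filesystem % used
    /nas/bin/fs_dhsm -connection myfilesystem -info | grep -i migration    # e.g. "[ Migration: ON_GOING ]"
    sleep 600                                                              # check every 10 minutes
done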

If a user accesses a file that is still stubbed out to the secondary storage being recalled, that access will trigger the file to be recalled back to the primary filesystem.

Obviously how long the process takes will depend on the amount of data and the speed of your disks.
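
For a very rough estimate, divide the amount of archived data by the sustained throughput you’re seeing. A back-of-the-envelope sketch for 8TB at a constant 4MBps (in practice the rate won’t be constant):

[root@mycelerra bin]# echo $(( 8 * 1024 * 1024 / 4 / 86400 ))    # 8TB in MB, at 4MB/s, converted to days
24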

Thoughts on “Reinflating stubs on the Celerra from secondary storage”

  1. Jayson says:

    Hello, we have a similar setup that we are trying to do. We have a Rainfinity, a Centera and a CIFS-based archive storage (Celerra). We are trying to migrate (reinflate the data) to a secondary storage device in a different location. The performance of moving the data to re-inflate is 2-4MB/s. The question is: do you know a way to re-inflate the data from the Rainfinity, Celerra and Centera to secondary storage at the best possible speed (about 8TB of data)?

    • stujordan says:

      When you inflate from the Centera the traffic is passed through the Rainfinity box, as the Celerra has no way of accessing the Centera directly. I guess the throughput you get depends on the file sizes etc. and the performance of your network.

      My setup is something like this:
      Rainfinity FMA Virtual Edition (now cloud) with a Gbic connection
      Celerra with two Gbic links bonded together
      Centera with a single Gbic.

      It did take a while to get all the data back and, to be honest, I expected it to. I also had about 8TB to migrate off the Centera and it took me a few weeks of juggling space etc. to get the data re-migrated to the CIFS archive.

      2-4MB/s does sound quite slow though… 4MBps equates to about 24 days of constant copying to get 8TB across.
