An alternative approach to snaprestore rollbacks for virus outbreaks

overview

Figure 1: an inode reaching its data blocks through multiple levels of indirect blocks

NetApp’s SnapRestore product is a fantastic tool in a storage admin’s arsenal. It works well, it’s fast, and it doesn’t need to “restore” data at all; it simply makes a previous snapshot the active file system. That said, I keep seeing the same scenario put forth by NetApp and various other folks as a use case for SnapRestore, and it’s one where SnapRestore would be my second choice.

The scenario is this: we’re taking hourly snapshots over the course of a day, and a virus breaks out. The files on our CIFS shares have been compromised. We’re sure that our current data is infected, and we know that at least some of the data was infected an hour ago. Two hours ago is uncertain, but we are sure the virus hadn’t broken out three hours ago, so we know the file system at that point is clean. The recommendation you always hear is to use SnapRestore to roll the file system back to three hours ago, and you’re now clean and not infected. Unfortunately, as a side effect, we’ve lost all the data written after that point, which, I should mention, is the intended result: if we hadn’t, our files would still be infected, right? My solution is to instead clone the volume using a really neat tool called FlexClone.
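For reference, the conventional rollback that scenario calls for would look roughly like this from a 7-mode console. This is just a sketch: vol1 is a placeholder volume name, and hourly.2 assumes the default schedule naming in which that is the three-hour-old snapshot.

    filer> snap list vol1                        # confirm which snapshot is the last known-clean copy
    filer> snap restore -t vol -s hourly.2 vol1  # revert vol1 to it; everything written since is gone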

some snapshot basics

I’d like to suggest an alternate method, but before I get into it, it probably makes sense to talk about how snapshots work. The idea is very simple. Each file on a file system has an inode. An inode contains information about a file: the owner, when it was created, and so on. If you’re on a Windows box and you view the “properties” of a file, you’re seeing the information contained in an inode. A disk is made up of blocks of data, and another thing an inode does is point to the blocks that hold a given file’s data. If a file is big, the inode won’t point to the data blocks directly; it will point to indirect blocks, which then point to the data blocks ( inode -> indirect level 1 -> data ). If a file is really big we can have multiple levels of indirection, like in figure 1.

A snapshot is, for all intents and purposes, a copy of just that top-level inode block. The copy points to the same indirect blocks, and those indirect blocks to the same data blocks. (Is this fun yet?) Once we’ve made a copy of the inode (the parent), the indirect and data blocks (the children) are effectively frozen, since a block can’t be overwritten or reclaimed while any parent, the active inode or the snapshot copy, still points to it. The way modifications are handled in the NetApp world is to write the changed data to new blocks and update the active inode to point at them; the old blocks stay frozen under the snapshot, and the space they hold is charged against a reserved slice of the volume aptly called the “snapshot reserve”. It’s pretty slick, really.
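To put some commands to that, here is roughly what snapshots and the reserve look like from a 7-mode console. A sketch only: vol1 and the snapshot name are placeholders.

    filer> snap create vol1 before_change   # copy vol1's root inode; the blocks beneath it are now frozen
    filer> snap list vol1                   # the new snapshot appears alongside the scheduled hourly.N copies
    filer> snap reserve vol1                # shows what percentage of the volume is set aside as snapshot reserve
    filer> df vol1                          # the /vol/vol1/.snapshot line shows how much of that reserve is in use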

flexclone

In order to understand how this alternative solution works, we need to understand how FlexClone works. FlexClone, to oversimplify things, is a writable snapshot. Actually, that’s not much of an oversimplification. Going back to our snapshot basics, where each file has an inode that points to data blocks, I should mention that under the hood everything on a file system is actually just, well, a file… even directories. A directory is just a file whose data blocks contain a list of file names and where the inodes for those files live. The top of any volume (think of this as a drive) is a directory, and that directory, in turn, has an inode. This one we call the “root inode”. When we take a snapshot of a volume, this is the inode we’re basing our snapshot on. So what we end up with is: [ root inode ] [ snapshot copy of root inode ]. What we can do is add one more copy to make a “clone” of the volume, which leaves us with: [ root inode ] [ snapshot copy of root inode ] [ clone copy of root inode ]. We can still write to our normal volume the way we talked about above (new blocks, with the old ones held by the snapshot), and the clone gets its own space to write new blocks into, so the two can diverge independently. The cool thing is that initially the clone takes up no space. Ok, that’s a lie, it takes up a couple of kilobytes, but we’re talking storage systems here; a couple of kilobytes is nothing. This is really useful for things like backing up a database or doing QA. There’s a whole bunch of use cases, such as, oh, avoiding the unnecessary data loss of using SnapRestore in the event of a virus outbreak.
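Here is roughly what creating and inspecting a clone looks like, using the QA example. A sketch only: dbvol, nightly.0, db_qa, and aggr1 are all made-up names.

    filer> vol clone create db_qa -b dbvol nightly.0   # writable copy of dbvol as it stood at the nightly.0 snapshot
    filer> vol status db_qa                            # reports that this volume is a clone and which snapshot backs it
    filer> df -A aggr1                                 # aggregate usage barely moves; the clone shares its parent's blocks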

the actual solution

Ok, now that we’ve brushed up on our NetApp basics, let’s review our problem:

  • we’ve taken hourly snapshots
  • our active file system is infected with a virus; we’ll assume the volume name is vol1
  • so is our previous hourly snapshot, and probably the one before that
  • the normal recommended solution would be to snap restore to hourly.2, the three-hours-previous snapshot taken before the virus broke out (see the snapshot naming sketch just below this list)
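For orientation, the hourly.N names used here follow the default snapshot schedule naming, where hourly.0 is the most recent hourly copy. That numbering is an assumption worth verifying on your own filer:

    filer> snap list vol1    # hourly.0 is the newest hourly copy, hourly.1 the one before it, and so on;
                             # in this scenario hourly.0 and hourly.1 are suspect, and hourly.2 is the last known-clean copy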

Using this method would mean that we would lose three hours’ worth of data. That’s not good. Preventing data loss is the whole job of a storage admin, as far as I’m concerned. So, an alternate method (a consolidated console sketch follows the steps):

  1. take CIFS offline, using the cifs terminate command
  2. on your admin host, copy the file /vol/vol0/etc/cifsconfig_share.cfg to cifsconfig_share.cfg.YYMMDD.pre-virus.bak
  3. make a clone of vol1 called “vol1_clone” with the command: vol clone create vol1_clone -b vol1 hourly.2
  4. open /vol/vol0/etc/cifsconfig_share.cfg and do a find and replace, changing every instance of vol1 to vol1_clone
  5. run the command cifs restart
  6. run the command cifs shares -add oldvol1 /vol/vol1
  7. lock it down with cifs access oldvol1 <your user> “Full Control”
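Pulled together, the sequence looks roughly like this. Treat it as a sketch rather than a copy-and-paste script: the admin-host mount point (/mnt/vol0), the sed invocation, and the backup filename are assumptions layered on the steps above.

    filer> cifs terminate                                   # step 1: quarantine; no CIFS access while we work

    admin$ cd /mnt/vol0/etc                                 # step 2: /vol/vol0 mounted on the admin host (assumed path)
    admin$ cp cifsconfig_share.cfg cifsconfig_share.cfg.YYMMDD.pre-virus.bak

    filer> vol clone create vol1_clone -b vol1 hourly.2     # step 3: writable copy of the last known-clean snapshot

    admin$ sed -i 's|/vol/vol1|/vol/vol1_clone|g' cifsconfig_share.cfg   # step 4: the find and replace

    filer> cifs restart                                     # step 5: shares come back up pointing at vol1_clone
    filer> cifs shares -add oldvol1 /vol/vol1               # step 6: expose the infected volume under a new share
    filer> cifs access oldvol1 <your user> "Full Control"   # step 7: restrict it to the admin
    # a newly added share may still carry the default everyone / Full Control entry;
    # cifs access -delete oldvol1 everyone removes it if so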

So what did this do?

We quarantined the file system by stopping CIFS and removing client access. Then we created a clone of the infected volume from the snapshot we knew was clean. We updated all of our shares with a quick find and replace, giving our users access to their data so they can get back to work. We also exposed our old volume via a new CIFS share, restricted to our own user, who theoretically knows better than to muck with infected files. So why did we do all this work? Later, when new virus definitions come out, we can scan that volume overnight and have our anti-virus program clean up our mucky infected files. Once we do, we can open up the share, send out an email, and give our users the ability to recover any important documents that were created during those three hours, saving extra work and potentially saving compliance headaches.
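Once the old volume has been scanned clean, reopening the quarantine share so users can pull their documents back might look like this. A sketch only: read-only access for everyone is one reasonable choice, not the only one.

    filer> cifs access oldvol1 everyone "Read"   # let users browse and copy out their files without modifying them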

There is a final part of this, in which we split off the clone using the vol clone split command. I would note that you will, at least temporarily, need enough free space on your aggregate to contain both of the volumes without any space savings. Once your split is finished, you can do a vol offline vol1 ; vol destroy vol1 to get rid of the old volume and free up that space.
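That last phase might look something like this from the console; aggr1 is again a placeholder for whatever aggregate holds vol1.

    filer> vol clone split estimate vol1_clone   # how much free space the split will need
    filer> df -A aggr1                           # make sure the aggregate can actually hold both full copies
    filer> vol clone split start vol1_clone      # begin copying the shared blocks so the clone stands on its own
    filer> vol clone split status vol1_clone     # the split runs in the background; check on it here
    filer> vol offline vol1                      # once the split completes, retire the infected original
    filer> vol destroy vol1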

I think this is a better solution to the problem, and one that’s much more elegant than using a snap restore. I’d love to hear any feedback or improvements to this process, so if you found this useful, or can think of a better way, please let me know.


One Response to An alternative approach to snaprestore rollbacks for virus outbreaks

  1. Scott says:

    Love the idea. With three “user” volumes (users, groups, common) totaling roughly 5TB, it might get a bit tight when it comes time to split off the flexclone.

    Of course, the pessimistic storage admin side comes out and says, “It will be hard to delete those old volumes because someone will always want them to stick around another day, week, month, year, decade, century, epoch….”
