Happy New Year, Reader!
Thanks for sticking with me and my erratic schedule through 2012. One of my resolutions for 2013 is to get better about regularly blogging and writing about the new things we are seeing and doing. The new book is in copyedit at the moment (http://www.amazon.com/Computer-Forensics-Infosec-Guide-Beginners/dp/007174245X/ref=sr_1_21?ie=UTF8&qid=1357249339&sr=8-21&keywords=computer+forensics); I'm not quite sure why they changed the title, but it's supposed to be Computer Forensics, A Beginner's Guide. It's meant for those people who are already in IT and moving into a DFIR role, either within their company or on their own. I'm also putting together a series of YouTube videos to go along with it; they'll be found here http://www.youtube.com/learnforensics and I'll be uploading some sample cases to work through that match the book. More on all of that when the book is released though!
Now for why you are (most likely) here: NTFS internals forensics. If you've been reading this blog or watching me speak over the past year or so, you'll know that we've been focused on the $logfile. Since then we've expanded our research into other file systems (we have a working ext3/4 journal parser now that can recover deleted file names and re-associate them with their inodes/metadata), but we are always keeping an eye on NTFS to see what else we can do to expand our knowledge and capabilities. After re-evaluating the USN Journal thanks to Corey Harrell's blog (http://journeyintoir.blogspot.com/2013/01/re-introducing-usnjrnl.html), we've come to recognize that our views of previous file system activity can link up to form a bigger picture: the NTFS Forensics TRIFORCE! It's a dumb name, I know, but it works; see the illustration below.
The $MFT, the Master File Table, is always the primary indicator of the current state of the file system. If a defrag hasn't run recently (something Windows 7 is very aggressive about, defaulting to an automatic defrag once a week; mine is set for every Wednesday, but I don't know if that's the default), you can also see the deleted/inactive NTFS file/directory entries left over from before the last defrag. That's not enough for us as forensic investigators though; we need to know more about the prior states of the file system in order to perform our work. So how can we roll back time and see what happened before? For many, a good answer on Windows Vista/7 systems has been shadow copies, and shadow copies are amazing for the forensic investigator. What the MFT and the contents of the file system within a shadow copy show you, though, is the state of the file system at the time it was captured: a snapshot. To know what actions took place between snapshots or before them, you have to look deeper. That's where the two file system journals, the $logfile and the $USNJrnl, come in (in this post we are focusing only on file system journals, not on forensic artifacts that log actions tied to a specific user's activity).
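To make the active/inactive distinction concrete, here's a minimal sketch in Python of how you might classify raw $MFT records by their in-use flag. The 1024-byte record size and the input file name mft.bin are both assumptions for the example; check your volume's boot sector for the real record size:

import struct

RECORD_SIZE = 1024  # assumption: the common default record size

def classify_mft_records(path):
    """Yield (record_number, signature_ok, in_use, is_directory) per record."""
    with open(path, "rb") as f:
        num = 0
        while True:
            record = f.read(RECORD_SIZE)
            if len(record) < RECORD_SIZE:
                break
            signature_ok = record[0:4] == b"FILE"
            # Flags word at offset 22: bit 0 = record in use, bit 1 = directory
            flags = struct.unpack_from("<H", record, 22)[0] if signature_ok else 0
            yield num, signature_ok, bool(flags & 0x0001), bool(flags & 0x0002)
            num += 1

# Example: count live entries in an extracted $MFT
live = sum(1 for _, ok, in_use, _ in classify_mft_records("mft.bin") if ok and in_use)
print("in-use records:", live)

Records where the signature is valid but the in-use bit is clear are the deleted/inactive entries mentioned above.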
The $logfile, the primary focus of our initial research, contains the befores and afters, or undo/redo operations, for each change to the MFT. It contains the full entry being changed (file records, $STDINFO blocks, etc.) and the metadata contained within them (with the exception of resident files on Windows 7 journals, whose contents appear to be nulled out). The $logfile is great for seeing, at a very granular level, exactly what changes have occurred to a file system.
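For the curious, here's a minimal sketch of how you might survey an extracted $LogFile by its page signatures before doing any deeper parsing. The 4096-byte page size and the file name logfile.bin are assumptions; this only counts the restart ("RSTR") and log record ("RCRD") pages rather than decoding the undo/redo operations themselves:

PAGE_SIZE = 4096  # assumption: the usual $LogFile page size

def survey_logfile(path):
    """Count restart (RSTR) and log record (RCRD) pages in an extracted $LogFile."""
    counts = {b"RSTR": 0, b"RCRD": 0, b"other": 0}
    with open(path, "rb") as f:
        while True:
            page = f.read(PAGE_SIZE)
            if len(page) < PAGE_SIZE:
                break
            magic = page[0:4]
            counts[magic if magic in counts else b"other"] += 1
    return counts

print(survey_logfile("logfile.bin"))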
The $USNJrnl records a summary of the actions taken against a file from the time it's opened to the time it's closed. Like the $logfile, the $USNJrnl is circular, meaning that over time it will be overwritten, but also like the $logfile, if you access the volume shadow copies (using libvshadow, for instance) you can get access to prior versions of it. The $USNJrnl keeps the name of the file being changed, the file ID of the file's entry in the MFT, the parent ID of the MFT entry for the directory that contains the file, and the date the change occurred, as well as the USN, which is the offset into the $USNJrnl where the data regarding the file begins.
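That layout is documented as the USN_RECORD_V2 structure, so a minimal Python sketch of pulling those fields out might look like this. It assumes version 2 records and that you've already extracted the $UsnJrnl:$J stream (with its leading sparse run stripped) to a file named usnjrnl.bin:

import struct
from datetime import datetime, timedelta

def filetime_to_dt(ft):
    """Convert a Windows FILETIME (100ns ticks since 1601-01-01) to a datetime."""
    return datetime(1601, 1, 1) + timedelta(microseconds=ft // 10)

def parse_usn_v2(data, offset=0):
    """Yield a dict for each USN_RECORD_V2 in the buffer."""
    while offset + 60 <= len(data):
        (length, major, minor, file_ref, parent_ref, usn,
         timestamp, reason, source, sec_id, attrs,
         name_len, name_off) = struct.unpack_from("<LHHQQQQLLLLHH", data, offset)
        if length == 0:          # zero fill between records; skip ahead
            offset += 8
            continue
        if major == 2:
            name = data[offset + name_off:offset + name_off + name_len].decode("utf-16-le")
            yield {
                "name": name,
                "file_id": file_ref & 0xFFFFFFFFFFFF,     # low 48 bits = MFT record number
                "parent_id": parent_ref & 0xFFFFFFFFFFFF,
                "usn": usn,
                "time": filetime_to_dt(timestamp),
                "reason": reason,                          # bitmask of change reasons
            }
        offset += length

with open("usnjrnl.bin", "rb") as f:
    for rec in parse_usn_v2(f.read()):
        print(rec["time"], rec["name"], hex(rec["reason"]))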
Each of these data sources by itself provides a wealth of information, but all are incomplete without each other. The MFT does not reflect past states, the $USNJrnl does not contain metadata, and the $logfile does not always reference a file by name and file ID. Taken together, however, they link up, as seen in more detail below, to create a view of historical actions like we've never been able to see before:
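As a rough sketch of what that linkage could look like in code (building on the hypothetical parsers above, so every name here is an assumption, not our actual parser), you might join $USNJrnl records back to MFT entries by file ID and walk parent IDs to rebuild full paths:

# Hypothetical glue: index MFT entries by record number, then use each
# USN record's file/parent IDs to recover the full path of the changed file.
def build_index(mft_records):
    """mft_records: iterable of dicts with 'record_no', 'name', 'parent_id'."""
    return {r["record_no"]: r for r in mft_records}

def resolve_path(index, usn_rec):
    """Walk parent IDs up toward the root (MFT record 5) to rebuild a path."""
    parts = [usn_rec["name"]]
    parent_id = usn_rec["parent_id"]
    while parent_id != 5:              # record 5 is the NTFS root directory
        parent = index.get(parent_id)
        if parent is None:             # parent entry overwritten or missing
            parts.append("<unknown>")
            break
        parts.append(parent["name"])
        parent_id = parent["parent_id"]
    return "\\".join(reversed(parts))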
Ok so what does all that mean?
For any individual file we can determine the following:
a. What changes occurred to the file, and when
b. What metadata the file had before and after each change (modification, creation, access, size, location)
c. What a file was renamed to
d. What files previously existed in a directory
Who is this useful to?
1. Malware analysts - you can now see every file system change as it happened, including anti-forensic attempts like timestamp alteration, deletion, renaming and overwriting
2. Incident responders - you can do the same against attackers who are attempting to hide their activities, and track what files they accessed even if access dates are disabled
3. Forensic investigators - so much more data regarding a suspect's activities on the file system, including the detection of spoliation
4. Everyone!
What are the limitations?
1. Both the $logfile and the $USNJrnl are circular with a maximum size, so there is a finite amount of data each will keep before it begins to overwrite itself
2. If you write to or access an NTFS volume from another operating system, such as OS X or Linux using the NTFS-3G driver, the driver does not update the $logfile or $USNJrnl to reflect its activities. It will update the $logfile to show that the file system was cleanly mounted, though.
3. Because the $USNJrnl keeps less data per change than the $logfile, it's likely that the $USNJrnl will contain more historical data than the $logfile (its smaller records mean the same amount of space covers a longer window of time)
What can you do to make this even better?
1. If you analyze machines in your environment, you can increase the sizes of the $USNJrnl and the $logfile so that they retain much more data (see the sketch after this list).
2. You can get a copy of our forthcoming NTFS-TRIFORCE parser, which will take in data from all three sources to build a complete view of the file system.
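On the resizing point, here's a minimal sketch of the built-in Windows commands involved, wrapped in Python for consistency with the earlier examples. The specific sizes are just example values, not recommendations, and both commands need an elevated prompt:

import subprocess

# fsutil can recreate the change journal with a larger maximum size (m=) and
# allocation delta (a=); 0x20000000 = 512 MB and 0x800000 = 8 MB here.
subprocess.run(
    ["fsutil", "usn", "createjournal", "m=0x20000000", "a=0x800000", "C:"],
    check=True,
)

# chkdsk /L:<size-in-KB> resizes the NTFS $logfile; 131072 KB = 128 MB here.
subprocess.run(["chkdsk", "C:", "/L:131072"], check=True)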
More details, you say? I'll get to those in the next blog post (promise); it's taken long enough just to write this one.