New Epilog Signature files released

Epilog Signature files allow users to add support for new databases they encounter. Although they are designed so that Epilog’s users can create their own signatures when the need arises, CCL-Forensics is committed to updating and releasing sets of signatures, pre-written and ready to use.

In this new release we have had a real focus on smartphones, adding support for:
• iOS6
• Android 4.0 (Ice Cream Sandwich)
• Android 4.1 (Jelly Bean)
• Android 3rd Party Applications
• iOS 3rd Party Applications
• Skype

We always welcome suggestions for signatures that you’d like to see added to the signature collection, so please get in touch at epilog@ccl-forensics.com.

For more information on epilog please visit our website – www.cclgroupltd.com/Buy-Software/

Chrome Session and Tabs Files (and the puzzle of the pickle)

In this blog post Alex Caithness investigates the file format and contents of Chrome’s “Current Tabs”, “Current Session”, “Last Tabs” and “Last Session” files and discovers that, even with the original source code at your side, you can still end up getting yourself into a Pickle.

A link to a Python script for automating the process can be found at the end of the post.

I’ve been on a bit of a browser artefacts kick as of late, digging around both on desktop and mobile platforms for stuff I haven’t tackled before. Taking a peek in my preferred browser’s (Chrome) “AppData” folder revealed that the ubiquity of SQLite as a storage format means that inspecting the data for a lot of artefacts has been made pretty simple. I had also recently tackled the Chromium web-cache format for another project (the format is now also used both on Android and on RIM Playbooks) and, with the pain that caused me still fresh in my mind, I had no desire to revisit it. There were, however, four likely looking candidates for a quick probing in the form of the “Current Tabs”, “Current Session”, “Last Tabs” and “Last Session” files.

Chrome AppData Folder

Hello there…

Broadly speaking, these files store the state of the opened tabs, their back-forward lists and the sites displayed therein. The files can be used by Chrome to restore your previous browsing session when you restart the browser (if that’s how you have it set up) or in the event of a crash. It turns out that these files can contain some really rich data, but first I had to do battle with the file format…

In previous posts I’ve made mention of the usefulness of having access to the source code that governs the format in which the data is to be stored, and as Chrome is open source I was heartened. “This shouldn’t be too tricky,” I thought to myself as I set about finding the ‘few lines of code’ which would unlock the file’s secrets… Let me tell you now: the Chrome source is a sprawling behemoth and my journey across the codebase (and on one occasion, outside of it) was long and arduous. In the end, it all boils down to understanding the ‘Pickle’…

Header of the Session file

The file header

The file header was easy to track down: I headed over to the definition of session_backend (src/chrome/browser/sessions/session_backend.cc), where we can confirm that “SNSS” is simply a file signature, followed by a 32-bit integer giving the version of the file, which, at the time of writing, should always be 1 (all data is stored little-endian). Also in this file we encounter a method named “AppendCommandsToFile” which appears to be responsible for writing the records into the files. The method shows that for each record, a 16-bit integer is written to the file giving the size in bytes of the record (not including this value), followed by an 8-bit “id” (which appears to relate to the ‘type’ of the record) and the contents of the “SessionCommand”.
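To make that concrete, here’s a quick sketch (not the full script linked at the end of this post) of walking the SNSS record stream in Python; the function name is my own:

    import struct

    def iter_snss_records(path):
        """Yield (command_id, payload) pairs from an SNSS file."""
        with open(path, "rb") as f:
            if f.read(4) != b"SNSS":
                raise ValueError("Not an SNSS file")
            version, = struct.unpack("<i", f.read(4))   # should be 1
            while True:
                size_raw = f.read(2)
                if len(size_raw) < 2:
                    break                               # clean end of file
                size, = struct.unpack("<H", size_raw)   # record size, excluding this field
                command_id = f.read(1)[0]               # the 8-bit 'type' of the record
                payload = f.read(size - 1)              # size includes the id byte
                yield command_id, payload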

Record structure overview

So now I had an overview of the structure of the file: a nice simple size, contents, size, contents, size, contents… format, with the records written sequentially, one after another. But I still had no information about the structure of those contents. SessionBackend was operating with a SessionCommand object, so I tracked down the source code describing this object (src/chrome/browser/sessions/session_command.h) but was disappointed to find the following explanation in the source code’s comments:

“SessionCommand contains a command id and arbitrary chunk of data. The id and chunk of data are specific to the service creating them.”

OK, so the information I wanted isn’t going to be here, but the comments go on to say:

“Both TabRestoreService and SessionService use SessionCommands to represent state on disk”

Aha! So although I hadn’t quite found what I was looking for here, I had found a useful signpost pointing in the right direction. Now, neither “TabRestoreService” (src/chrome/browser/sessions/tab_restore_service.h) nor “SessionService” (src/chrome/browser/sessions/session_service.h) themselves give us the information we’re after, but both of them ‘inherit’ from a common base class called “BaseSessionService” (src/chrome/browser/sessions/base_session_service.cc) (I gave a brief overview of object-oriented principles, including inheritance, in a previous blog post) and it is in BaseSessionService where we finally get what we’re after…

BaseSessionService contains a method called “CreateUpdateTabNavigationCommand” which is responsible for writing that “arbitrary chunk of data” into the SessionCommand which eventually gets written to disk. The record starts with a 32 bit integer which gives the length of the data (this is in addition to the length value outside the SessionCommand). The rest of the SessionCommand’s contents structure is described in the table below.

SessionCommand serialisation

SessionCommand structure

Data type | Meaning
32-bit integer | Tab ID
32-bit integer | Index in this tab’s back-forward list
ASCII string (32-bit integer giving the length of the string in characters, followed by an ASCII string of that length) | Page URL
UTF-16 string (32-bit integer giving the length of the string in characters, followed by a UTF-16 string of that length) | Page title
Byte string (32-bit integer giving the length of the string in bytes, followed by a byte string of that length) | “State” (a data structure provided by the WebKit engine describing the current state of the page; we will look at it in detail later)
32-bit integer | Transition type (explained below)
32-bit integer | 1 if the page has POST data, otherwise 0
ASCII string (see above) | Referrer URL
32-bit integer | Referrer’s policy
ASCII string (see above) | Original request URL (for example if a redirect took place)
32-bit integer | 1 if the user-agent was overridden, otherwise 0

As a SessionCommand’s contents can be populated by other means, not every SessionCommand contains data formatted as shown above. During testing it was shown that it is the SessionCommand’s 8-bit ID which identifies whether the record contains this kind of data (when the ID was 1 or 6, this data format was found). Those with other IDs were typically much shorter (usually around 16-32 bytes in length) and did not appear to contain information of as much interest.

There are a few fields in the table above which are worth taking a closer look at; the “State” field we’ll explore in detail later as it’s a complicated one. The “Transition type” is a little easier to explain; this field tells Chrome how the page was arrived at. The field will be an integer number, the meaning of which is described in the tables below. The value is essentially split into two sections: the least significant 8 bits of the integer give a type of transition and the most significant 24 bits form a bit-mask which gives other details. These details are gathered from page_transition_types (content/public/common/page_transition_types.h); a short decoding sketch follows the tables below.

Least significant 8 bits:

Value | Meaning
0 | User arrived at this page by clicking a link on another page
1 | User typed the URL into the Omnibar, or clicked a suggested URL in the Omnibar
2 | User arrived at the page through a bookmark or similar (eg. “most visited” suggestions on a new tab)
3 | Automatic navigation within a sub-frame (eg. an embedded ad)
4 | Manual navigation in a sub-frame
5 | User selected a suggestion from the Omnibar (ie. typed part of an address or search term then selected a suggestion which was not a URL)
6 | Start page (or page specified as a command line argument)
7 | User arrived at this page as a result of submitting a form
8 | Page was reloaded, either by clicking the refresh button, hitting F5 or hitting enter in the address bar. Also given this transition type if the tab was opened as a result of restoring a previous session.
9 | Generated as a result of a keyword search, not using the default search provider (for example using tab-to-search on Wikipedia). Additionally a transition of type 10 (see below) may also be generated for the url: http:// + keyword
10 | See above

Most significant 24 bits (bit mask):

Bit mask | Meaning
0x01000000 | User used the back or forward buttons to arrive at this page
0x02000000 | User used the address bar to trigger this navigation
0x04000000 | User is navigating to the homepage
0x10000000 | The beginning of a navigation chain
0x20000000 | Last transition in a redirect chain
0x40000000 | Transition was a client-side redirect (eg. caused by JavaScript or a meta-tag redirect)
0x80000000 | Transition was a server-side redirect (ie. a redirect specified in the HTTP response header)
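As a rough illustration (the label strings are my own shorthand for the meanings above), the transition value can be decoded like this:

    # Split the 32-bit transition value into its core type (low 8 bits)
    # and its qualifier flags (high 24 bits).
    CORE_TRANSITIONS = {
        0: "link clicked", 1: "typed/selected in Omnibar", 2: "bookmark or similar",
        3: "automatic sub-frame", 4: "manual sub-frame", 5: "generated Omnibar suggestion",
        6: "start page", 7: "form submission", 8: "reload / session restore",
        9: "keyword search", 10: "keyword-generated URL",
    }
    QUALIFIER_FLAGS = {
        0x01000000: "back/forward", 0x02000000: "from address bar",
        0x04000000: "home page", 0x10000000: "start of navigation chain",
        0x20000000: "end of redirect chain",
        0x40000000: "client-side redirect", 0x80000000: "server-side redirect",
    }

    def describe_transition(value):
        core = CORE_TRANSITIONS.get(value & 0xFF, "unknown")
        flags = [label for bit, label in QUALIFIER_FLAGS.items() if value & bit]
        return core, flags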

NB: during testing, although the transition types looked correct in the “Current Session” and “Last Session” files, in the “Current Tabs” and “Last Tabs” files the transition type was always recorded as type 8 (reloaded page).

When it comes to the record structure, there is still a little more to the story, and yes, this is where the Pickles come in.

This data structure is not being written directly to a file, but rather to what Chrome calls a “Pickle” (src/base/pickle.h). A Pickle is a sort of ‘managed buffer’; a way for Chrome to write (and read) a bunch of values, like those in the tables above, into an area of memory in a controlled way. Indeed, the “length-value” structure we see with the strings is down to the way Pickles write strings into memory, as is the, apparently superfluous, extra ‘length’ field at the start of the record structure. One other pickle-related side-effect which isn’t necessarily immediately obvious when you look at the data in a hex editor is that pickles will always write data so it is uint32-aligned. This means that data will always occupy blocks of 4 bytes and if needed (such as in the case of strings) will be padded to ensure that the next data begins at the start of the next 4-byte block.
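As an illustration of how the outer record’s fields come out of a Pickle, here is a rough sketch of an aligned reader applied to the layout in the first table above. The class and helper names are mine rather than Chrome’s, and only the first few fields are decoded:

    import struct

    class PickleReader:
        """Reads values from a Chrome Pickle buffer, honouring uint32 alignment."""
        def __init__(self, data):
            self.data, self.pos = data, 0

        def _align(self):
            self.pos = (self.pos + 3) & ~3          # round up to the next 4-byte boundary

        def read_int32(self):
            self._align()
            value, = struct.unpack_from("<i", self.data, self.pos)
            self.pos += 4
            return value

        def read_string(self, utf16=False):
            length = self.read_int32()              # length in characters
            nbytes = length * 2 if utf16 else length
            raw = self.data[self.pos:self.pos + nbytes]
            self.pos += nbytes                      # padding is skipped by the next _align()
            return raw.decode("utf-16-le" if utf16 else "ascii")

    def parse_update_tab_navigation(payload):
        p = PickleReader(payload)
        p.read_int32()                              # inner payload length (superfluous for us)
        return {
            "tab_id": p.read_int32(),
            "index": p.read_int32(),
            "url": p.read_string(),
            "title": p.read_string(utf16=True),
            # the state byte string, transition type, etc. follow in the same way
        }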

It turns out that the contents of the mysterious “State” field are also governed by a Pickle. This field contains serialised data from the WebKit engine. The data is held in a “NavigationEntry” (content/public/browser/navigation_entry.h) “content state” field, but is originally populated by glue_serialize (webkit/glue/glue_serialize.cc). It duplicates some of the data that we have already described from the outer record, but also contains some more detailed information regarding the state of the page, not least the contents of any forms on the page. The code describing the serialisation process is found in glue_serialize in the WriteHistoryItem method.

The state byte string begins with a 32 bit Integer giving the length of the rest of the record (this is in addition to the length defined in the outer record structure) and then continues with the “WebHistoryItem” structure shown in the table below:

WebHistoryItem structure

Data type | Meaning
32-bit integer | Format version
String (see below) | Page URL
String (see below) | Original URL (for example if a redirect took place)
String (see below) | Page target
String (see below) | Page parent
String (see below) | Page title
String (see below) | Page alternative title
Floating point number (see below) | Last visited time
32-bit integer | X scroll offset
32-bit integer | Y scroll offset
32-bit integer | 1 if this is a target item, otherwise 0
32-bit integer | Visit count
String (see below) | Referrer URL
String vector (see below) | Document state (form data), explained in more detail below
Floating point number (see below) | Page scale factor (only present if the version field is greater than or equal to 11)
64-bit integer | “Item sequence number” (only present if the version field is greater than or equal to 9)
64-bit integer | “Document sequence number” (only present if the version field is greater than or equal to 6)
32-bit integer | 1 if there is a “state object”, otherwise 0 (only present if the version field is greater than or equal to 7)
String (see below) | “State object” (only present if the value above is 1 and the version field is greater than or equal to 7)
Form data (see below) | Form data
String (see below) | HTTP content type
String (see below) | Referrer URL (again, for backwards compatibility apparently)
32-bit integer | Number of sub-items in the field below
WebHistoryItem vector (see below) | A number of sub-items (for example embedded frames); each record has the same structure as this one

That table has a lot of “See below” in it, so let’s get down to explaining some of the subtleties/oddities that this data structure provides.

Strings: strings are actually stored differently to those in the outer record. Despite the fact that the data is still being written into a Pickle, the source code forsakes the Pickle’s built-in string serialisation methods (for reasons best known to the Chrome programmers), instead taking the more direct route of writing the length of the string followed by the in-memory representation of the string. This results in the string fields comprising a 32-bit integer giving the length of the string followed by a UTF-16 string; only, this time, the length refers to the length in bytes, not the length in characters. To further confuse matters, a length of -1 (0xFFFFFFFF) indicates that the string is not present (‘null’ in programming terms) or uninitialised (and therefore empty). There is one exception to this structure: in version 2 the format was, as the comments in the source code put it, “broken” and stored the number of characters; this was fixed in version 3 onwards.
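A sketch of reading one of these strings, assuming the byte-length/null behaviour described above (and assuming little-endian UTF-16 data padded to a 4-byte boundary, which matched my test data):

    import struct

    def read_webkit_string(data, pos):
        """Return (string_or_None, new_position) for a WebHistoryItem string."""
        length, = struct.unpack_from("<i", data, pos)
        pos += 4
        if length == -1:
            return None, pos                    # null / uninitialised string
        raw = data[pos:pos + length]            # length is in bytes (version 3 onwards)
        pos += (length + 3) & ~3                # skip padding to the next 4-byte boundary
        return raw.decode("utf-16-le"), pos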

String vector: “vector” in this case essentially means ‘list’. The vector begins with a 32-bit integer giving the number of entries in the list, which is then followed by that many strings in the format described above. In the data structure above this is used to serialise what is described as the “document state”. In testing this appeared to contain information regarding any form fields present on the page (including hidden fields). The list of strings can be broken up into groups of three, the first of which gives the name of the form field, the second the type of field and the third the current contents of the field.
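Once the vector has been read, regrouping it into those triples is trivial; a hypothetical helper:

    def group_document_state(strings):
        """Group the document-state string vector into (field name, field type, value) triples."""
        return [tuple(strings[i:i + 3]) for i in range(0, len(strings), 3)]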

Floating Point Numbers: IEEE 754 double-precision floating point numbers are used as a representation, but Pickles do not directly support this data type. Because of this, the code uses the Pickle’s “WriteData” method, passing the internal, in-memory representation of the floating point number into the Pickle. The upshot of using the “WriteData” method is that the 64-bit floating point number is prefaced with a 32-bit integer giving the length of the data (which will always be 8 for a double-precision float).
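A corresponding sketch for these values, assuming the in-memory representation is a little-endian IEEE 754 double:

    import struct

    def read_webkit_double(data, pos):
        """Return (value, new_position) for a WriteData-style double."""
        length, = struct.unpack_from("<i", data, pos)   # should always be 8
        pos += 4
        value, = struct.unpack_from("<d", data, pos)
        pos += length
        return value, pos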

Form Data: the (slightly convoluted) format for this data serialisation is detailed in the WriteFormData method in glue_serialize, however across testing this data was never populated so I can’t vouch for its contents.

Sub items: this contains further WebHistoryItems for any embedded pages or resources on the page. During testing I saw it used to store details of adverts, Facebook “like” buttons and so on. The structure for these sub items is identical to the structure described in the table (note, however, that unlike the top-level WebHistoryItem they do not begin with a size value).

So that’s the structure of the file – not the most pleasant file format I’ve ever dealt with and, even with the source code on hand, it was a lengthy task. So was it worth it?

Well, first the case against: a lot of the data is duplicated in other places, not least the History database (which is SQLite, so much nicer to work with), and between the “Current” and “Last” versions of the files you only have information regarding two sessions’ worth of browsing (although, increasingly, in today’s “always-on” culture this could still account for a significant period of browsing). Which brings me to the other significant disappointment for these files: timestamps, or rather the apparent lack of them. Of course, this makes perfect sense when you consider what Chrome needs the files for; timestamps simply aren’t required for restoring sessions. All the same, it would make the files more useful to us if they were there.

But it’s not all doom and gloom (which is lucky, otherwise this blog post would be a bit of a waste of time). Firstly, although we only have two sessions’ worth of browsing live on the system, colleagues have already demonstrated to me that there is plenty of scope for recovering previous examples of the files, especially from volume shadow copies, and the 8-byte static header means that carving files from unallocated space may be possible (no footer though, so some judgement would need to be made regarding the length of the files). Probably more importantly, these files give us access to information which would be tricky to acquire otherwise (or at the very least another opportunity to recover information which may have been deleted). The form contents are obviously a nice additional source of intelligence, both in terms of user credentials, email addresses and possibly message contents (I was able to recover Facebook chat messages from the form data in the “document state”, for example). Also, the presence of the transition types, referrer and requested URL fields means that you can build up detailed browsing-behaviour profiles, tracking the movement between sites and tabs.

This is not a file format that I would want to parse by hand again, so to automate the process I have written a Python script which we’re happy to make available to the forensics community. The script is designed both as a command line tool which generates a simple HTML report and a class library in case anyone wishes to integrate it into other tools (or create a different reporting format). You can download the script from http://code.google.com/p/ccl-ssns/.

As always, if you have any comments or questions you can get in touch in the comments or by emailing research@ccl-forensics.com

Alex Caithness

The Forensic Implications of SQLite’s Write Ahead Log

By Alex Caithness, CCL-Forensics

SQLite is a popular free file-based database format which is used extensively both on desktop and mobile operating systems (it is one of the standard storage formats available on both Android and iOS). This article sets out to examine the forensic implications, both pitfalls and opportunities, of a relatively new feature of the database engine: Write Ahead Log.

Before we begin, it is worth taking a moment to describe the SQLite file format. Briefly, records in the database are stored in a file which, in SQLite parlance, is called the ‘database image’. The database image is broken up into “pages” of a fixed size (the size is specified in the file header). Each page may have one of a number of roles, such as describing the structure of the database or, crucially, holding the record data itself. The pages are numbered internally by SQLite starting from 1.

Historically SQLite used a mechanism called “Rollback Journals” for dealing with errors occurring during use of the database. Whenever any data on a page of the database was to be altered, the entire page was backed up in a separate journal file. At the conclusion of a successful transaction the journal file would be removed; conversely if the transaction was interrupted for any reason (crash, power cut, etc.) the journal remained. This means that if SQLite accesses a database and finds that a journal is still present something must have gone wrong and the engine will restore the database to its previous state using the copies of pages in the journal, avoiding corrupted data.

From version 3.7.0 of the SQLite engine an alternative journal mechanism was introduced called “Write Ahead Log” (ubiquitously shortened to “WAL”). WAL effectively turned the journal mechanism on its head: rather than backing up the original pages then making changes directly to the database file, the database file itself is untouched and the new or altered pages are written to a separate file (the Write Ahead Log). These altered or new pages will remain in the WAL file, the database engine reading data from the WAL in place of the historic version in the main database. This continues until a “Checkpoint” event takes place, finally copying the pages in the WAL file into the main database file. A Checkpoint may take place automatically when the WAL file reaches a certain size (by default this is 1000 pages) or performed manually by issuing an SQL command (“PRAGMA wal_checkpoint;”) or programmatically if an application has access to the SQLite engine’s internal API.

Initial state: No pages in the WAL

Page Altered - new version written to WAL

Page 3 is altered. The new version of the page is written to the WAL and the database engine uses this new version rather than the old version in the database file itself.

Checkpoint

A checkpoint operation takes place and the new version of the page is written into the database file.

It is possible to detect whether a database is in WAL mode in a number of ways. Firstly, this information is found in the database file’s header: examining the file in a hex editor, the bytes at file offsets 18 and 19 will both be 0x01 if the database is using the legacy rollback journal, or 0x02 if the database is in WAL mode. Secondly, you can issue the SQL command “PRAGMA journal_mode;” which will return the value “wal” if the database is in WAL mode (anything else indicates rollback journal). However, probably the most obvious indication of a database in WAL mode is the presence of two files named “<databasefilename>-wal” and “<databasefilename>-shm” in the same logical directory as the database (eg. if the database was called “sms.db” the two additional files would be “sms.db-wal” and “sms.db-shm”).
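The header check is trivial to script; a minimal sketch based on the offsets described above:

    def is_wal_mode(db_path):
        """True if the SQLite header's read/write version bytes indicate WAL mode."""
        with open(db_path, "rb") as f:
            header = f.read(20)
        return header[18] == 0x02 and header[19] == 0x02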

The “-wal” file is the actual Write Ahead Log which contains the new and updated database pages; its structure is actually fairly simple. The “-wal” file is made up of a 32-byte file header followed by zero or more “WAL frames”. The file header contains the following data:

Offset | Size | Description
0 | 4 bytes | File signature (0x377F0682 or 0x377F0683)
4 | 4 bytes | File format version (currently 0x002DE218, which interpreted as a big-endian integer is 3007000)
8 | 4 bytes | Associated database’s page size (32-bit big-endian integer)
12 | 4 bytes | Checkpoint sequence number (32-bit big-endian integer, incremented with every checkpoint, starting at 0)
16 | 4 bytes | Salt-1: random number, incremented with every checkpoint *
20 | 4 bytes | Salt-2: random number, regenerated with every checkpoint
24 | 4 bytes | Checksum part 1 (for the first 24 bytes of the file)
28 | 4 bytes | Checksum part 2 (for the first 24 bytes of the file)

* In testing it was found that although the official (and, at the time of writing, up-to-date) command line version of SQLite, v3.7.11, behaved correctly, when using SQLite Expert v3.2.2.2.2102 this value appeared to be regenerated after each checkpoint (which the author assumes to be incorrect behaviour).
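Reading the header is straightforward; a sketch (field names are the author’s own) assuming the layout in the table above:

    import struct

    def read_wal_header(wal_path):
        """Parse the 32-byte header of an SQLite '-wal' file (all fields big-endian)."""
        with open(wal_path, "rb") as f:
            fields = struct.unpack(">8I", f.read(32))
        magic, version, page_size, checkpoint_seq, salt1, salt2, checksum1, checksum2 = fields
        return {
            "magic": magic,                    # 0x377F0682 or 0x377F0683
            "format_version": version,         # 3007000 at the time of writing
            "page_size": page_size,
            "checkpoint_sequence": checkpoint_seq,
            "salt1": salt1,
            "salt2": salt2,
            "checksum": (checksum1, checksum2),
        }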

The WAL Frames that follow the header consist of a 24 byte header followed by the number of bytes specified in the file header’s “page size” field which is the new or altered database page. The Frame Header takes the following form:

Offset | Size | Description
0 | 4 bytes | Database page number (32-bit big-endian integer)
4 | 4 bytes | For a record that marks the end of a transaction (a commit record), a 32-bit big-endian integer giving the size of the database file in pages; otherwise 0
8 | 4 bytes | Salt-1, as found in the WAL header at the time this frame was written
12 | 4 bytes | Salt-2, as found in the WAL header at the time this frame was written
16 | 4 bytes | Checksum part 1: cumulative checksum up through and including this page
20 | 4 bytes | Checksum part 2: cumulative checksum up through and including this page
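Combining the two tables, a sketch of walking the frames (again, the naming is the author’s own):

    import struct

    def iter_wal_frames(wal_path):
        """Yield (frame_no, page_no, commit_size, salt1, salt2, page_bytes) for each WAL frame."""
        with open(wal_path, "rb") as f:
            header = f.read(32)
            page_size, = struct.unpack_from(">I", header, 8)
            frame_no = 0
            while True:
                frame_header = f.read(24)
                if len(frame_header) < 24:
                    break
                frame_no += 1
                page_no, commit_size, salt1, salt2, _ck1, _ck2 = struct.unpack(">6I", frame_header)
                page_bytes = f.read(page_size)
                yield frame_no, page_no, commit_size, salt1, salt2, page_bytes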

There are a number of potential uses and abuses for the WAL file in the context of digital forensics, but first, the behaviour of SQLite while in WAL mode should be examined. A number of operations were performed on a SQLite database in WAL mode. After each operation the database file, along with its “-shm” and “-wal” files, was copied, audited and hashed so that their states could be examined.

Step 1: Create empty database with a single table:

8a9938bc7252c3ab9cc3da64a0e0e06a *database.db
b5ad3398bf9e32f1fa3cca9036290774 *database.db-shm
da1a0a1519d973f4ab7935cec399ba58 *database.db-wal

1,024       database.db
32,768      database.db-shm
2,128       database.db-wal

WAL Checkpoint Number:  0
WAL Salt-1: 3046154441
WAL Salt-2: 220701676

Viewing the database file using a hex editor we find a single page containing the file header and nothing else. As noted above, as well as creating the database file, a table was also created; however, this change was written to the WAL in the form of a new version of that page. The WAL contains two frames: the new version of the first page, plus a second frame holding an empty table page. When accessing this database through the SQLite engine this information is read from the “-wal” file transparently and we see the empty table, even though the data doesn’t appear in the database file itself.

Step 2: Force a checkpoint using PRAGMA command:

dd376606c00867dc34532a44aeb0edb6 *database.db
1878dbcefc552cb1230fce65df13b8c7 *database.db-shm
da1a0a1519d973f4ab7935cec399ba58 *database.db-wal

2,048       database.db
32,768      database.db-shm
2,128       database.db-wal

WAL Checkpoint Number:  0
WAL Salt-1: 3046154441
WAL Salt-2: 220701676

Using the pragma command mentioned above, the database was “checkpointed”. Accessing the database through SQLite we see no difference to the data, but examining the files involved we can clearly see that the database file has changed (it has a different hash); furthermore, it has grown. Looking inside the database file we can see that the two pages from the “-wal” file have now been written into the database file itself, and SQLite will now read this data from there rather than from the “-wal” file.

The WAL Checkpoint number and salts were not changed at this point, as we will see they are altered the next time that the WAL is written to.

Another interesting observation is that the “-wal” file was left completely unchanged during the checkpoint process – a fact that will become extremely important in the next step.

Step 3: Insert a single row:

dd376606c00867dc34532a44aeb0edb6 *database.db
6dc09958989a6c0094a99a66531f126f *database.db-shm
e9fc939269dbdbfbc157d8c12be720ed *database.db-wal

2,048       database.db
32,768      database.db-shm
2,128       database.db-wal

WAL Checkpoint Number:  1
WAL Salt-1: 3046154442
WAL Salt-2: 534753839

A single row was inserted into the database using a SQL INSERT statement. Once again we arrive at a situation where the database file itself has been left untouched, evidenced by the fact that the database file’s hash hasn’t altered since the last step.

The “-wal” file hasn’t changed size (so still contains two WAL frames) but clearly the contents of the file have changed. Indeed, examining the file in a hex editor we find that the first frame in the file contains a table page holding the newly inserted record, as we would expect. What is interesting is that the second frame in the file is the same second frame found in the file in the previous two steps. After a checkpoint the “-wal” file is not deleted or truncated; it is simply reused, frames being overwritten from the top of the file.

Examining the Frame’s headers we see the following:

Frame | Page number | Commit size | Salt-1 | Salt-2
1 | 2 | 2 | 3046154442 | 534753839
2 | 2 | 2 | 3046154441 | 220701676

Both frames relate to the same page in the database but their salt values differ. As previously noted, these two salt values are copied from the WAL file header as they are at the time of writing. Salt-2 is regenerated upon each checkpoint, but the key here is Salt-1, which is initialised when the WAL is first created and then incremented upon each checkpoint. Using this value we can show that the page held in the second frame of the WAL is a previous version of the page held in the first frame: we can begin to demonstrate a timeline of changes to the database.

Step 4: Force a checkpoint using PRAGMA command:

704c633fdceceb34f215cd7fe17f0e84 *database.db
a98ab9ed82393b728a91aacc90b1d788 *database.db-shm
e9fc939269dbdbfbc157d8c12be720ed *database.db-wal

2,048       database.db
32,768      database.db-shm
2,128       database.db-wal

WAL Checkpoint Number:  1
WAL Salt-1: 3046154442
WAL Salt-2: 534753839

Once again a checkpoint was forced using the PRAGMA command. As before, the updated pages in the WAL were written into the database file and this operation had no effect on the contents of the “-wal” file itself. Viewing the database using the SQLite engine shows the same data as in the previous step.

Step 5: Insert a second row, Update contents of the first row:

704c633fdceceb34f215cd7fe17f0e84 *database.db
d17cf8f25deaa8dbf4811b4d21216506 *database.db-shm
ed5f0336c23aef476c656dd263849dd0 *database.db-wal

2,048       database.db
32,768      database.db-shm
2,128       database.db-wal

WAL Checkpoint Number:  2
WAL Salt-1: 3046154443
WAL Salt-2: 3543470737

A second row was added to the database using a SQL INSERT statement and the previously added row was altered using an UPDATE statement.

Once again, and as is now fully expected, the database file is unchanged; the new data has been written to the WAL. The WAL contains two frames: the first holds a table page containing the original record along with our newly added second record; the second frame holds a table page containing the updated version of our original record along with the new, second record. Examining the frame headers we see the following:

Frame | Page number | Commit size | Salt-1 | Salt-2
1 | 2 | 2 | 3046154443 | 3543470737
2 | 2 | 2 | 3046154443 | 3543470737

In this case both frames contain data belonging to the same page in the database and the same checkpoint (Salt-1 is the same for both frames); in this case the order of events is simply detected by the order in which the frames appear in the file – they are written to the file from the top, down.

Step 6: Insert a third row:

704c633fdceceb34f215cd7fe17f0e84 *database.db
5ac6d9e56e6bbb15981645cc6b4b4d6b *database.db-shm
672a97935722024aff4f1e2cf43d83ad *database.db-wal

2,048       database.db
32,768      database.db-shm
3,176       database.db-wal

WAL Checkpoint Number:  2
WAL Salt-1: 3046154443
WAL Salt-2: 3543470737

Next, a third row was added to the database using an INSERT statement. Viewing the database logically with the SQLite engine we see all three records. While the database file remains unchanged, the “-wal” file now contains three frames: the first two are as in the previous step, with the third and final new frame holding a table page containing all three records. The frame headers contain the following information:

Frame | Page number | Commit size | Salt-1 | Salt-2
1 | 2 | 2 | 3046154443 | 3543470737
2 | 2 | 2 | 3046154443 | 3543470737
3 | 2 | 2 | 3046154443 | 3543470737

We now have three versions of the same page; as before, the sequence of events is denoted by the order in which they occur in the file.

Step 7: Force a checkpoint using PRAGMA command:

04a16e75245601651853fd0457a4975c *database.db
05be4054f8e33505cc2cd7d98c9e7b31 *database.db-shm
672a97935722024aff4f1e2cf43d83ad *database.db-wal

2,048       database.db
32,768      database.db-shm
3,176       database.db-wal

WAL Checkpoint Number:  2
WAL Salt-1: 3046154443
WAL Salt-2: 3543470737

As we have observed before, the checkpoint results in the up-to-date records being written into the database file; the “-wal” file is unaffected.

Step 8: Delete a row:

04a16e75245601651853fd0457a4975c *database.db
dca5c61a689fe73b3c395fd857a9795a *database.db-shm
3b518081a5ab4a7be6449e86bb9c2589 *database.db-wal

2,048       database.db
32,768      database.db-shm
3,176       database.db-wal

WAL Checkpoint Number:  3
WAL Salt-1: 3046154444
WAL Salt-2: 2798791151

Finally in this test, the second record in the table (the record added in Step 5) was deleted using an SQL DELETE statement. Accessing the database using the SQLite engine shows that the record is no longer live in the database.

As expected, the database file is unaffected by this operation, the changes instead being written to the WAL. The “-wal” file contains three frames: the first frame holds a table page with the second record deleted (the data can still be seen, and could be recovered using a tool such as Epilog; however, the metadata on the page shows that the record is not live). The remaining two frames are identical to the final two frames in the previous step. Examining the frame headers we see the following:

Frame | Page number | Commit size | Salt-1 | Salt-2
1 | 2 | 2 | 3046154444 | 2798791151
2 | 2 | 2 | 3046154443 | 3543470737
3 | 2 | 2 | 3046154443 | 3543470737

Here we once again see three frames all containing data from the same database page. This time the most recent version of the page is found in frame 1, as it has the highest Salt-1 value; the other two frames have a lower Salt-1 value and are therefore older revisions. As those two share the same Salt-1 value we apply the “position in file” rule: the later in the file a frame occurs, the newer it is. So, ordered from newest to oldest, the frames are: 1, 3, 2.

Summarising the findings in this experiment:

  • Altered or new pages are written to the WAL a frame at a time, rather than the database file
  • The most up-to-date pages in the WAL are written to the database file on a Checkpoint event – this operation leaves the “-wal” file untouched
  • After a Checkpoint, the “-wal” file is reused rather than deleted or truncated, with new frames overwriting the old ones from the top of the file
  • Multiple frames for the same database page can exist in the WAL. Their relative ages can be derived by first examining the frame header’s Salt-1 value, newer frames having higher values; where multiple frames share the same Salt-1, their age is determined by their order in the WAL, with newer frames occurring later (a short sketch of this ordering rule follows this list)
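As a sketch of that ordering rule (not the code used to generate the output shown further below), frames can be grouped by page and sorted on (Salt-1, position in file):

    def order_frames_by_age(frames):
        """frames: iterable of (frame_no, page_no, salt1) tuples in file order.
        Returns {page_no: [frame_no, ...]} ordered oldest to newest."""
        by_page = {}
        for position, (frame_no, page_no, salt1) in enumerate(frames):
            by_page.setdefault(page_no, []).append((salt1, position, frame_no))
        return {page: [frame for _, _, frame in sorted(entries)]
                for page, entries in by_page.items()}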

Pitfalls and Opportunities

The most obvious opportunity afforded by the Write Ahead Log is the potential for time-lining of activity in a database. To prove the concept, a small Python script was written to automate the analysis of the frames in a WAL file and provide a chronology of the data; a sample output is shown below:

Header Info:
    Page Size: 1024
    Checkpoint Sequence: 3
    Salt-1: 3046154444
    Salt-2: 2798791151

Reading frames...

Frame 1 (offset 32)
    Page Number: 2
    Commit Size: 2
    Salt-1: 3046154444
    Salt-2: 2798791151

Frame 2 (offset 1080)
    Page Number: 2
    Commit Size: 2
    Salt-1: 3046154443
    Salt-2: 3543470737

Frame 3 (offset 2128)
    Page Number: 2
    Commit Size: 2
    Salt-1: 3046154443
    Salt-2: 3543470737

Unique Salt-1 values:
    3046154443
    3046154444

Chronology of frames (oldest first):

Page Number: 2
    Frame 2
    Frame 3
    Frame 1

With further work it should be possible to display a sequence of insertions, updates and deletions of records within a database – a feature which is a top priority for the next update of Epilog. Even without the ability to timeline, it is clear that deleted records can be stored and recovered from the WAL (functionality already present in Epilog).

One behaviour which hasn’t been described in full so far is that a database file in WAL mode isolated from its associated “-wal” file is, in almost all circumstances, a valid database in its own right. For example, consider the test database above as it is at the end of Step 8. If the database file was moved to another directory, as far as the SQLite database engine is concerned this is a complete database file. If this isolated database file was queried, the data returned will be that which was present at the last checkpoint (in our test case, this would be the 3 live records present at the checkpoint performed in step 7).

This raises an important consideration when working with a SQLite database contained in a disk image or other container (eg. a TAR archive): if the database file is extracted from the image or container without its associated WAL files, the data can be out-of-date or incomplete. The other side of the coin is that the “full, up-to-date” version of the data (viewed with the WAL present) may lack records present in the isolated database file because of deletions pending a checkpoint. There is, then, an argument for examining databases both ways, complete with WAL files and isolated, as it may be possible to obtain deleted records “for free”.

Summing Up

The Write Ahead Log introduced in SQLite 3.7 may afford digital forensics practitioners new opportunities to extract extra data and behavioural information from SQLite databases; however, the mechanism should be understood in order to get the most out of these new opportunities and to avoid confusion when working with such databases.

If you have any comments or questions, please leave a comment below or get in touch directly at research@ccl-forensics.com.

References:

SQLite File Format

Write Ahead Log

Alex Caithness, CCL-Forensics

New version of PIP (XML and plist parser) coming soon

Our ever-popular XML and plist parsing tool, PIP, is coming of age with a new, improved version.

Currently in beta-test, and with the updated version available free to current PIP users (following its official release, obviously!), we’ll be announcing more details over the coming weeks.

As a teaser, this is what you can expect from the new version (v1.1):

  • Improved GUI – the layout of the application has been updated to improve work-flow
  • New Tree View – view your data graphically to see, at-a-glance, the structure of your data
  • Automatic XPath Building – Now you can use the Tree View to show PIP the data that you are interested in and PIP will generate the XPath automatically. This even works with Apple’s Property List ‘dictionary’ structures.
  • Import/Export Batch Jobs – Set up a batch job of XPaths for a particular folder structure (iOS Library or Application folders for example) and then export the batch so that you, or anyone else in your lab, can re-use it when you next come across the same data
  • Command line version – version 1.1 of PIP comes with the “pipcmd” command-line utility, allowing you to integrate PIP into tool chains and other automated tasks
For more information about XML and plist parsing, please visit http://www.ccl-forensics.com/pip or email us at pip@ccl-forensics.com.

Epilog version 1.1 to the launch pad

The long-awaited upgrade to epilog has arrived.

It is available as a free upgrade for existing epilog users and can be purchased by new users from our website.

Read on to find out what’s new – and take a look at our explanatory video on our YouTube channel…

Well, first off: epilog 1.1 includes a database rebuilder. For analysts with tools and scripts designed only to operate on live data, this will be a sanity saver. It’s an integrated solution for rebuilding recovered records into a copy of the live database, enabling deleted data to be parsed or processed.

It also allows the user to choose whether to include the current live records, and provides options to disable triggers and remove constraints from the database schema in order to tailor the rebuilding.

We’ve been keeping up with new developments in the world of SQLite. Version 3.7 of the SQLite library introduced a new journal format called “Write Ahead Log” or WAL, which differs from the traditional journal mechanism in that new and changed data is written into a separate file (and only folded back into the database at a checkpoint), rather than the original data being backed up to a rollback journal. The new version of epilog will permit WAL file parsing.

In epilog 1.1 the requirement for an “associated database” when conducting a raw data or disk image search has been removed; instead the user can provide the database page size and text encoding manually (the option to use an associated database is still available for when it’s more convenient). There are also extra options for improving results when reading raw dumps from flash chips.

Epilog 1.1 will now mark in grey records that have been recovered but which are truncated; this allows the user to make more informed decisions about the data. We’ve also improved the signature search algorithm to remove the need for “in the case of multiple concurrent deletion” signatures.

New export modes have been added, allowing users to output to a flat tab separated values (tsv) file. The “INSERT export” has been overhauled to make it more convenient to use.

And finally, what was formerly the “Table Analysis” feature has been upgraded to “Database and Table Details” and now reports further information regarding the database structure and parameters.

The epilog team is always happy to receive comments and suggestions, so please feel free to get in touch either by leaving a comment below, or emailing epilog@ccl-forensics.com.

Digital forensic software – grab it while it’s hot!

CCL-Forensics is offering its software at introductory prices for just one more week, so take a look at what’s on offer and squeeze as much into tight budgets as you can.

The tried-and-tested software, developed by analysts, for analysts, has been used extensively in the field by CCL-Forensics’ own investigators and by many other digital investigators from around the world.

From March 31st, prices will be increasing, so take advantage of the lower rates now.

Leading research and development in digital forensics

CCL-Forensics’ research and development team has produced a series of forensic software tools to aid them in digital investigations.

epilog allows investigators to recover deleted data from the widely-used database format, SQLite. Whatever the type of device – computers, mobile phones, SatNavs or others – epilog can be used to recover information, regardless of the type of data stored.

PIP allows analysts to present often-complex data from XML files quickly and efficiently. The tool also parses data from Apple’s property list (plist) files – both in XML and binary format. It can be used to look at computers, mobile phones and SatNavs.

dunk! can uncover potential new web activity evidence from locally-stored web cookies, putting web evidence into context and adding an extra dimension to investigations. It also parses Google Analytics cookies, showing how often, from where, and how a user arrived at a particular site, as well as presenting any search terms used to find the page.

Find out more

For more information about what CCL-Forensics can offer or to purchase the software tools, please visit our website, call us on 01789 261200 or email info@ccl-forensics.com.

Hidden digital evidence part 4

Welcome to the final instalment of our short blog series on places you may not have thought to look for digital evidence.

  1. Biometric data

The use of fingerprints, retinal scans and facial recognition is no longer the stuff of science fiction. Its use may not be widespread yet, but it is certainly becoming a reality on a number of advanced devices.

Computers, laptops and mobile devices may be protected from unauthorised use by such biometric technology. If the data can be retrieved, it can help significantly with the attribution of that device – especially useful on mobile phones if cell site analysis is being carried out as part of the case.

It’s an emerging technology, so keep an eye on developments over the coming months and years. More and more evidential opportunities will present themselves.

  2. Near-field communications

This refers to technology such as the Oyster Card on London’s transport system. Credit cards are also being equipped with chips which, when waved near a reader device, perform the same function as when the card is inserted into a chip-and-PIN device.

Such technology is also finding its way into mobile phones – the Google Wallet has been in the news recently (for all the wrong reasons – namely security issues). This is an app that enables users to pay for goods with their mobile phone via a near-field communication chip.

Data may be recorded on the phone – including when it was used, where, and for what purpose. This could be a rich vein of potential evidence, and although its use is not widespread at the moment, it’s certainly worth keeping up with developments.

  3. Games consoles

Games consoles such as Xboxes, Playstations and Wiis now have web browsers as standard. They are much more than just a platform for gaming, with many allowing users to stream music over the internet, rent movies and interact with others online.

External hard drives can also be attached to consoles, enabling people to play music and movies from their collections through the console.

Consoles also enable instant messaging, both within games and via the messaging function in the console itself.

Use of all these features leaves a digital trail on the hard drive, making such devices a rich source of potential evidence.

  4. Throwaway culture

The focus has been on modern emerging technology, particularly smartphones. But it’s sometimes sensible to consider the other end of the scale: antique technology can also provide interesting evidential opportunities.

Many people tend to upgrade their handsets – and laptops and PCs – and put the old ones in a drawer or box. There they languish, gathering dust, until they’re needed for a criminal case.

It’s entirely possible to extract all sorts of evidence from these older devices, some of which can prove as tricky as the higher-end handsets and computers. Just because a phone or other device looks simple and cheap, the data it may contain could be crucial.

Here’s an example: CCL-Forensics used its web cache toolkit on a six-year-old Nokia 6230i handset and recovered 600 pages of web history – not necessarily the result you’d expect from a modest old device.

  5. With the network provider

A vast quantity of valuable evidence can be gathered from mobile phone network providers using cell site analysis. Network providers keep a variety of information about a phone’s use for a period of time – including the cell masts it used to make and receive calls.

This type of analysis can help to build up a picture of where the phone has been used, and can help in attributing different phones to their users – especially where criminals make use of “clean” and “dirty” phones. (Clean phones are those used for “normal”, everyday activities; dirty phones are those used in the course of crime, and are often thrown away afterwards.)

If a mobile phone has been used in crime, don’t discount the possibility that a wealth of evidence could be gathered using cell site analysis.

Here endeth our list of possible locations for digital forensic evidence – although it is, of course, not exhaustive. As technology advances, more and more types of digital evidence will become available for investigators.

CCL-Forensics’ R&D team keeps pace with new technology, developing tools and techniques for extracting this new data. If you have any questions, they’re always happy to chat. Just drop us a line at research@ccl-forensics.com.