Reverse Engineering Evernote Penultimate (or: When is a picture not a picture?)

In this post Alex Caithness takes a look at “Penultimate” on the iPad and discovers that a picture paints a thousand words… but only once you work out how that picture is stored.

Recently we came up against an iPad note-taking application by the name of “Penultimate”. As a user it’s actually really quite a nice app to use: you can create multiple “notebooks”, each with as many pages as you require, on which you can jot down notes, draw diagrams or just doodle with a stylus or your finger. For a user it’s nice, simple and intuitive; for an intrepid forensic analyst, it’s not quite as straightforward…

Penultimate Hello World Screenshot

Penultimate Screenshot

After installing the application on a test iPad, creating a couple of notebooks and having a doodle in each of them, I performed an extraction of the device. Poking through the application’s folder, this is the layout of the files which we can acquire with a backup-style acquisition:

com.cocoabox.penultimate
└───Library
    ├───Preferences
    │       com.cocoabox.penultimate.plist
    │
    └───Private Documents
        │   notebookList
        │
        └───notebooks
            │   notebookListBackup
            │
            ├───23FDA4FF-CE5E-4353-8D9A-B0C3E3E8AAE7
            │       100F53F2-BD1F-4046-8104-714E42264DAE
            │
            └───DD47D226-0CB3-460F-A0F9-3A7E2A795B3D
                    BBDC9CD6-2685-4AFF-BCB4-D370F975A3F1
                    BEB76173-1F62-472B-B0DC-F69EF039A59F
                    C1766626-9BD2-44F9-A2DD-D7C4F23367A8

Now, ideally what we’d have liked to see at this point is some image files: a nice friendly JPEG, GIF or PNG or two would have been just lovely. But no, it’s not going to be that easy (and, in fairness, if it had been I wouldn’t be bothering with this blog, would I?).

So what do we actually have? Well, we can see the familiar “Library” folder in which we find the (also familiar) “Preferences” folder which contains a property list, the contents of which are mostly mundane (though we do have application launch counts and dates which may be of interest in some situations). More interesting is the “Private Documents” folder and the files and folders which are contained within it.

In the root of the Private Documents folder is a binary property list file named “notebookList”. It is an ‘NSKeyedArchiver’ plist, which means, as we discussed in a previous blog, it needs some additional processing before we can see the actual structure of the data. Luckily I had PIP and the “ccl_bplist” Python module to hand, both of which can perform that transformation. With the data structure untangled I proceeded to dig around and see what, if anything, we could extract from the “notebookList”.
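For anyone wanting to follow along in code, a minimal sketch of that untangling step using the ccl_bplist module might look like the following (the exact function names may differ between versions of the module):

import ccl_bplist

# Load the binary plist and resolve the NSKeyedArchiver object graph
with open("notebookList", "rb") as f:
    plist = ccl_bplist.load(f)

root = ccl_bplist.deserialise_NsKeyedArchiver(plist)
print(root)   # the untangled structure can now be explored like ordinary Python objects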

Among some less interesting data, the root object contains a list of the notebook objects which are stored by the device, under the “notebooks” key.

notebookList in PIP

Each of the notebook objects (which are represented by dictionaries in the property list) contains the keys: ‘name’, ‘title’, ‘pageNames’, ‘created’, ‘modified’, ‘changeCount’, ‘blankPages’, ‘creatingDeviceId’, ‘editingPageName’, ‘pageDataIsCooked’, ‘versionMajor’, ‘versionMinor’, ‘pagePaperStyles’, ‘paperStyle’, ‘imported’, ‘coverColor’, ‘originalName’.

The first few there are of immediate interest: the value of “title” does indeed contain the title of the notebooks that I created during my testing, and the “created” and “modified” timestamps also ring true. The values of the “name” keys are familiar as well, albeit for a different reason; in my test data we see the “name” values “DD47D226-0CB3-460F-A0F9-3A7E2A795B3D” and “23FDA4FF-CE5E-4353-8D9A-B0C3E3E8AAE7” – matching the directory names that were extracted underneath the “notebooks” folder (see above). Beyond this, the “pageNames” key contains a list of values which also match the names of files in each of the “name” directories.

So, with the “notebookList” file we have some useful metadata and a helpful guide to how the other files are organised, but there’s still no sign of the content of the notes themselves. Delving deeper into the folder structure, our next stop is one of the files named in the “pageNames” list mentioned above.

Opening one of the “page” files we find another “NSKeyedArchiver” property list. After unravelling the structure of the file we find a top-level object containing further metadata (including a “blankDate” which appears to match the “created” timestamp reported in the “notebookList”, and the dimensions of the note) along with a list of “layers”. Each of the “layer” objects (again represented by dictionaries) has keys for the layer’s colour (more on that later), the layer’s dimensions and a list of “layerRects” – sections of the layer where the user has drawn their notes; and that’s where we finally find the image itself.

Sort of.

Structure of the page object

Each of the “layerRects” objects is represented by (and this shouldn’t be a surprise by now) a dictionary. There are two keys: firstly “rect”, which contains a string of two sets of “x,y” co-ordinates: the bounds of this layerRect on the page. The second key is “values”, which requires a little extra explanation. As noted in my previous post on NSKeyedArchiver files, their function is to represent a serialisation of objects; the object under the “values” key is a “CBMutableFloatArray”, which is programmer talk for a list of floating-point numbers. With this in mind we quite reasonably expect to see just that – a list of floating-point numbers; but no, instead we get a data field containing binary data! As nothing else in this data had been straight-forward this was disappointing, but by this point it didn’t surprise me. Floating-point numbers (like any numerical data) can be represented quite happily in binary, so I set about trying to turn the binary data into a list of floating-point numbers. I extracted one of these data fields and, with a nice Python one-liner, did just that:

>>> import struct
>>> struct.unpack("<18f", data)
(282.0, 589.0, 5.506037712097168, 281.8666687011719, 588.2999877929688, 5.506037712097168, 281.73333740234375, 587.5999755859375, 5.506037712097168, 281.5999755859375, 586.9000244140625, 5.506037712097168, 281.4666442871094, 586.2000122070312, 5.506037712097168, 281.33331298828125, 585.5, 5.506037712097168)

So now we can see the numbers, but what did they mean? The meaning becomes clearer if we group them into threes:

282.0, 589.0, 5.506037712097168
281.8666687011719, 588.2999877929688, 5.506037712097168
281.73333740234375, 587.5999755859375, 5.506037712097168
281.5999755859375, 586.9000244140625, 5.506037712097168
281.4666442871094, 586.2000122070312, 5.506037712097168
281.33331298828125, 585.5, 5.506037712097168

What we have here are sets of x,y co-ordinates plus a third value (the meaning of which will become clearer later). So, here, finally, is our drawing; stored as a list of co-ordinates rather than any conventional graphics format – the combination of the points in the “layerRects” in each of the layer objects gives you the full picture.
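As a minimal sketch (the helper name here is mine, not anything from Penultimate), one of those “values” blobs can be decoded into these triplets as follows:

import struct

def decode_point_triplets(data):
    # Unpack a "values" blob of little-endian 32-bit floats into (x, y, extra) triplets
    count = len(data) // 4  # 4 bytes per float
    floats = struct.unpack("<{0}f".format(count), data[:count * 4])
    return [floats[i:i + 3] for i in range(0, count, 3)]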

But how to present this data? A list of co-ordinates like this is of little to no use; we want to see the image itself. So I got to thinking: how does the application treat these co-ordinates? Well, when the user draws in the application they are essentially ‘painting’: moving a circular ‘brush’ around the screen making up the lines in the drawing, so if I was able to plot a circle at each of these co-ordinates perhaps I would see the picture? This thought process also gave me an idea as to what the third value in each of the co-ordinate groupings might represent: part of what makes the drawings look natural is that the “line” that is drawn is not of a uniform width, it grows and shrinks with the speed at which the finger or stylus moves, giving an impression of weight – perhaps the final value related to this?

So how to plot these co-ordinates? First of all I looked at various Python imaging libraries (PIL and PyGame both came on my radar), but there were issues with both (especially with PIL, which still lacks a proper Py3k port), so I turned my attention to an alternative solution: SVG. SVG (Scalable Vector Graphics) is a graphics format specified by the W3C; it uses XML to describe 2D graphics. Because SVG is just XML, no imaging libraries would be required; I could simply generate textual mark-up describing where each of my circular points should be plotted on the screen. Taking the data extracted above as an example, the mark-up would be along the lines of:

<!DOCTYPE svg>
<svg height="865" version="1.1" viewBox="0 0 718 865" 
     width="718" xmlns="http://www.w3.org/2000/svg">
    <circle cx="282.0" cy="589.0" 
            fill="#000000" r="5.506037712097168"/>
    <circle cx="281.8666687011719" cy="588.2999877929688" 
            fill="#000000" r="5.506037712097168"/>
    <circle cx="281.73333740234375" cy="587.5999755859375" 
            fill="#000000" r="5.506037712097168"/>
    <circle cx="281.5999755859375" cy="281.5999755859375" 
            fill="#000000" r="5.506037712097168"/>
    <circle cx="281.4666442871094" cy="586.2000122070312" 
            fill="#000000" r="5.506037712097168"/>
    <circle cx="281.33331298828125" cy="585.5" 
            fill="#000000" r="5.506037712097168"/>
</svg>

Each “circle” tag has an x and y co-ordinate (“cx” and “cy”), a fill colour (here expressed as an HTML colour code – black in this case) and a radius (“r”). Running some tests gave good output, but things weren’t quite right. Firstly, and most importantly, the image was mirrored in the y-axis, presumably caused by a difference of opinion between Penultimate and SVG as to where the y-axis originates – an easy fix (just subtract the ‘y’ value from the height of the image and use that value instead). Also, the lines in the writing all looked very chubby, making the writing very tricky to read. Theorising that this was caused by Penultimate storing diameters where SVG requires radii, I halved the value, which improved things; but comparing the output with what I could see on the screen, things still weren’t quite looking right, so on a whim I halved the value again, which made things look right. (I’m not entirely sure why the value has to be quartered – it may be to do with Penultimate adding a ‘feathered’ fade to the brush which increases its diameter.)
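A minimal sketch of that plotting logic (the function and parameter names are my own, and the quarter-scaling is the empirical fudge described above):

def circle_tag(x, y, weight, page_height, fill="#000000"):
    # Emit one SVG <circle/> for a Penultimate point.
    # The y value is flipped because SVG's origin is at the top of the page,
    # and the stored weight is quartered to arrive at a plausible radius.
    return '<circle cx="{0}" cy="{1}" fill="{2}" r="{3}"/>'.format(
        x, page_height - y, fill, weight / 4.0)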

I created a proof-of-concept Python script to check that my thinking was correct and I was pleased to see that the output now matched what I could see on the iPad’s screen, save for the fact that the iPad was in glorious Technicolor and my script’s output was monochrome. I mentioned previously that the “layer” objects contained colour information under the unsurprisingly named “color” key.

The object stored in the “color” key has values for red, green and blue – each one a value between 0 and 1. Those readers who have had even the briefest dalliance with graphic manipulation programs will be familiar with colour-sliders: combining red, green and blue in different proportions in order to generate different colours, which is what these values represented. My SVG output was working with HTML colour codes, which are made up of 3 bytes, each representing an amount of red, green and blue, this time using values from 0x00 to 0xFF. To get colour into my output, all I had to do was multiply 0xFF by the corresponding value from the “layer” object’s colour fields and recombine the results into an HTML colour code. I modified my script and now the output reflected both the form and the colour displayed by the app on the iPad.
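As a sketch, that conversion is only a couple of lines (again, the function name is mine):

def rgb_to_html(red, green, blue):
    # Convert Penultimate's 0.0-1.0 colour components into an HTML colour code
    return "#{0:02x}{1:02x}{2:02x}".format(
        int(red * 0xFF), int(green * 0xFF), int(blue * 0xFF))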

Comparison between iPad and script output (1)

Comparison between iPad and script output (2)

Working on extracting data from Penultimate was a particularly enjoyable experience: it required combining a number of different concepts and technologies to create a solution that automates the extraction of the data, and that solution, when it all boiled down, was satisfyingly simple.

And as to the question: when is a picture not a picture? Well, quite simply: when it’s a series of serialised floating-point co-ordinate triplets representing points on a page, broken up into rectangles on a layer on a page which is stored in an NSKeyedArchiver property list file (obviously!).

If you have any questions, comments or queries regarding this post, as always you can contact us by dropping an email to research@ccl-forensics.com or by leaving a comment below.

Update: As requested I’ve uploaded the script for research purposes. You can find it here: http://pastebin.com/VYenpXUi 

Alex Caithness, CCL Forensics

New Epilog Signature files released

Epilog Signature files allow users to add specific support for new databases they encounter. Although they are designed so that Epilog’s users can create their own signatures when the need arises, CCL-Forensics is committed to updating and releasing sets of signatures, pre-written and ready to use.

In this new release we have had a real focus on smartphones, adding support for:
• iOS6
• Android 4.0 (Ice Cream Sandwich)
• Android 4.1 (Jelly Bean)
• Android 3rd Party Applications
• iOS 3rd Party Applications
• Skype

We always welcome suggestions for signatures that you’d like to see added to the signature collection so please get in touch on epilog@ccl-forensics.com

For more information on epilog please visit our website – www.cclgroupltd.com/Buy-Software/

July 2012 cell site blog: The top five (potential!) pitfalls in cell site analysis.

By Nicholas Patrick-Gleed, Cell Site Analyst

This month’s cell site blog takes on a slightly different style.  The team here at CCL-Forensics has been discussing the most common potential pitfalls encountered in the world of cell site evidence, and thought it would be a useful exercise to commit some of them to the blogosphere.  So, rather than focusing on a particular topic, we’ll look at the top five (as we see them) issues which need to be at the forefront when planning and, more importantly, carrying out a cell site investigation.

We’ve touched on some of these in previous blogs, but they form a concise summary of some of the ‘issues’ we have seen experts (almost) experience.

This Month’s Topic: Five things to be wary of in cell site analysis

1. Exhibits without interpretation

When working for the defence, we regularly see prosecution evidence which can best be described as “exhibits without interpretation”.  A good example of this is a series of maps plotted by an intelligence analyst, who has carried out a series of instructions based on some call data records, but presented them without any explanation of what they mean.  This not only causes confusion and delay within the criminal justice system (the defence will, no doubt, ask for the explanation at some point – so it may as well be provided at the outset) but also means that an opportunity could be missed as part of the investigation stage.  Simply ‘blindly’ plotting information on a map is hardly investigative – but we have seen it more than once.  What is the point of an exhibit without context?

From the prosecution’s perspective this is an obvious potential pitfall – as it means that the evidence does not include something which could enhance the prosecution’s case.

There have also been occasions where the defence leaves it until the 11th hour before ‘complaining’ that the person who has produced the exhibit is not an expert – and the judge could rule that the prosecution needs to carry out more expert analysis.

It’s simply not worth chancing these situations.  Moral of the story: produce exhibits which mean something; it makes for a smoother investigation.

2. Who’s who on the call data records?

Cell site is full of idiosyncrasies.  It’s what keeps us experts on our toes.  But small variations between networks and circumstances can lead to major confusion.  The best example of this is when you are analysing a call data record, and the person is in contact with someone on the same network.  There are occasions when both parties’ cell IDs appear on the same CDR – which can immediately confuse things.  Furthermore, and let’s use the ‘3’ network as an example here, if an incoming call to the subject phone is unsuccessful, then the cell ID for the person making the call still appears on the CDR.  This is particularly a problem, as the CDR doesn’t differentiate between the A and B phone (in columns), and so this needs to be taken into account.  It’s pretty easy to spot if there’s a day’s worth of cell IDs in London, and one in Edinburgh – but when both parties are geographically close, then vigilance is the watchword.

This is especially the case if the person plotting the calls is not trained in these nuances – as they may easily go unnoticed.

Moral of the story: be thorough.

3. Timely surveys

Networks change and evolve.  Nothing new there, but the sooner the survey is carried out after the incident in question, the better.  It means the results will be more accurate and better reflect what happened.

We previously touched on our use of historic data, which may help to counteract this problem – and this is a benefit of the robust methodology which CCL uses.  But, timeliness is still a big potential pitfall for a number of reasons.

One of the biggest is the evolution of “Everything Everywhere” – or the merger of T-Mobile and Orange, as most people still know it.  This means that “Everything Everywhere” now has many more channels available than each of its competitors – and consolidating cells seems like a sensible thing to do.  If there are two cells covering the same approximate area, it seems only prudent to use just one of them and either deactivate the other, or reallocate it to, say, the new 4G networks, which have been in the news recently.  This clearly impacts on the survey, especially if the cell in question is no longer transmitting.

Moral of the story: Consider the impact of the T-Mobile and Orange merger before surveying.  What are you expecting to see – and what are you expecting NOT to see?

4. Getting the whole picture – not just a small slice

Cell site is all about focusing on a phone’s movements around the time of a crime, right?  Wrong.  Yes, this is often the best place to start, but it can also be vitally important to look at the patterns of usage within the data as a whole, rather than just isolating and concentrating on a small piece of evidence.

There may be no evidence of a phone being in an area of interest at a particular time, but the best advice here is to stop, look around and think.

There may be behaviour patterns, where the time in question shows some deviation from the norm. There may be evidence elsewhere of the use of ‘clean’ and ‘dirty’ phones.  There may be evidence of someone ‘casing the joint’ before the crime, which goes against the usual pattern of usage.

One just doesn’t see these when points are blindly plotted on a map.  The solution is to have as much data available as possible at the outset of a cell site assignment (or as much as can be reasonably requested under RIPA).

At the end of the day, it depends on what question you are trying to answer, but the moral of this story is: Don’t just rely on data from the time of the incident.  More complex investigations need more data.

5. Surveying techniques

Quite honestly, this is something of a bugbear of ours, and a topic which we have covered numerous times.  With that in mind, I won’t go into any major detail, but just summarise something which we think all cell site experts should adopt.  (And we’ve had this published in a peer-reviewed journal, so it’s more than just a passing fad!)

Movement is key to getting an accurate overall picture of how a phone interacts with cells.  The concept of ‘dragging’ a cell can be key to determining if a cell provides coverage at a location.  Driving to a location from a number of directions can result in a different cell providing coverage, depending on which direction you arrive from.  This is because the phone has a tendency to “hold onto” a cell, rather than chopping and changing (to reduce the risk of a dropped call).  Spot samples (i.e. turning up at a location, surveying without moving, and then leaving) are hardly comprehensive.  This is about so much more than simply dotting the i’s and crossing the t’s.

While we’re on the subject, it’s worth touching on tracking frequencies.  Network operators typically use two or three 3G frequencies at their cell sites.  When moving geographically, a phone may use a new cell which uses a different frequency from the original one.  This creates a potential pitfall when surveying, as the expert needs to be mindful of how many frequencies are available, and ensure the most appropriate survey is carried out.  The moral of this part of the story: remember there is more than one available frequency – and be as thorough as the investigation requires.

I hope you’ve enjoyed our whistle-stop tour through the potential pitfalls of cell site analysis – and as, ever, we’re always keen to hear your thoughts on the matter.  If you would like to discuss any aspect of cell site analysis, please don’t hesitate to drop us a line at cellsite@ccl-forensics.com

Next month

Next month, Dr Iain Brodie analyses comments made by a judge during a recent case, and highlights what the criminal justice system REALLY wants from cell site experts.

Cell site blog – Never mind the quality, feel the width

Thoughts and observations on how ‘more’ could mean ‘less’ in the presentation of cell site analysis.

By Matthew Tart, Cell Site Analyst

This month – we look at quality over quantity in cell site analysis – with particular emphasis on a recent example where a pile (literally) of maps could easily have left jurors’ heads spinning.  And cost the prosecution a considerable sum.

This month’s topic: Getting the balance right in cell site analysis 

This blog starts with a case we were involved in recently, involving a high profile crime with a number of defendants.  On this occasion we were working for the defence, but this story acts as a useful pointer for the prosecution by illustrating techniques that experts used by the Crown should – and should not – be using.  We’ll focus on a method used by a large number of cell site analysts (but not ourselves) which is not necessarily robust and may not stand up to close scrutiny.

Q: What were the details of the case?

A: The prosecution were investigating the probability of a suspect being at a crime scene – a pub in an inner city location.  At the time of the crime, one of the suspects (we were working for that suspect’s defence solicitor in this case) made a phone call.  The call data records showed that this phone call was made on what we’ll call ‘cell A’, which was on a mast near the crime scene – but also near to his home address which was about 500m away.

The suspect’s alibi was that he was at home at the time of the crime and the phone call.

The prosecution’s outsourced expert carried out ‘spot samples’ (i.e. turned up at a location with a piece of equipment) at both the crime scene and the alibi location.  Their report showed a different cell serving at each location.  Cell A was shown as best serving at the crime scene – but not at the alibi location.

Q: So what did we do differently?

A: We carried out a much more extensive survey i.e. a drive survey at the home address and the surrounding area.  This was carried out with regard to the cell of interest (Cell A), and we used multiple pieces of equipment and repeatedly moved in and out of the area.  We found that cell A provided coverage north, south, east and west of both locations (crime and alibi scene), and based upon this, could not distinguish between the mobile phone being at either location.  The evidence was simply not strong enough to suggest one or the other.

Q:  So, were both sides saying something different?

A: Yes and no. Before the court date, the prosecution’s outsourced expert asked for a copy of our defence report, which we provided.  We then discussed the contents with the expert over the phone; he claimed that he wouldn’t have expected cell A to provide coverage at the home address.  After looking at our evidence, he admitted that our assertion that the cell served at both addresses was actually the most valid interpretation of the evidence.  This is a worrying admission/U-turn to say the least – especially as his own evidence did not document that cell A also serves at that crucial home address.

Q:  The other side claimed that a different cell provided service at the home address.  Did your survey find that cell as well?

A: Yes, but we found four cells which served at the home address.  Cell A, the one the other side claimed – AND two others.

Q: How was this data presented by the prosecution’s expert?

A: In a rather cumbersome, and lengthy fashion, to say the least.  There were a number of suspects, and their report showed the same maps over and over – and over – again.  It showed the locations of interest, calls for varying time periods, and whether the cells used actually covered the locations.  This came to more than 100 (one hundred) maps.  All printed on A3 paper and bound into a daunting, unwieldy piece of physical evidence, which the jury would have to absorb.

I would defy even the most attentive juror to have easily made sense of this massive tome.  Notwithstanding the threatening size of the document, all the pages were practically the same, or almost identical copies of other similar pages.  You simply wouldn’t be able to take it all in.  Especially as one wouldn’t expect jurors to be familiar with this type of evidence – making it all the more crucial to have it presented in a friendly form.

Q: What would we have done differently?

A: Firstly, not produced a huge weighty un-jury-friendly document. The best way of presenting this evidence (for which we would have had MUCH more survey data, having done more than carry out simple spot samples) would have been a series of two or three detailed maps which can be presented interactively at court with the relevant points being highlighted by the expert in the course of presenting the evidence.  These maps would have covered specifically the period of interest – and would have a secondary, financial, benefit.

By not producing hundreds of maps, we would have saved a considerable amount of time – and therefore cost.  We would estimate that producing this unmanageable number of maps and documents would have potentially cost tens of thousands of pounds. Our approach would almost certainly have been cheaper AND more robust.

Q:  So the lesson here is…

A: …to think about what you need to achieve, and the best way of doing it.  Don’t be held to ransom by an outsourced expert’s ‘way of doing something’.  Hopefully this example has shown two things.  Firstly, that carrying out spot samples (as we’ve mentioned in previous blogs) may not be the most appropriate way of surveying.  And secondly, that the end product, i.e. what the jury see and have to understand, can be something a little more sophisticated than a batch of similar-looking, repetitive – and quite frankly, uninspiring – maps and tables.  Technology has moved on.  So has cell site analysis.  And so has the presentation of evidence in court.

In terms of maps, it is quality, not quantity, that delivers the most compelling conclusions in relation to the possible locations of a mobile phone.

For more information about this – or any aspect of cell site analysis, please contact Matthew Tart (or any of our other cell site analysts) on 01789 261200 or by emailing cellsite@ccl-forensics.com

Benefits of hindsight: Why ‘nurturing’ data can prove valuable for cell site analysts.

It’s at this time of year, when the sun is shining (well, it IS at the very moment these words are being written), that carrying out an intensive cell site survey seems almost like a perk of the job. But, as everyone involved knows, a cell site expert shouldn’t expect each assignment to come with such luxuries.  Surveys aren’t always necessary – and even where they are, there could be a smarter solution.  In this month’s cell site ‘blog’, Matthew Tart looks at the possibility of using data from previous cases.

This month’s topic: Use of historic cell site data

Q: First of all, precisely what do we mean by historic cell site data?

A: Simply, it’s data CCL-Forensics has generated from carrying out previous cases, which we have organised and stored in an ever-growing database.  The reason we have this is that we don’t carry out static surveys, as we have found they have limited repeatability and have failed validation (this is described in our previous blog here), which I’ll go into in a little more detail later.  So if we’re interested in where a particular cell serves, there is a potential that we’ve already surveyed relevant areas.  We’re not (quite!) up to UK-wide coverage yet, but the database is growing rapidly, plus we’re getting additional cell data every time we travel between our offices and the general area of a survey.

Q: So, how does this actually benefit the case? Surely all cases are different, and you may still have to do some surveying?

A: This is not necessarily a replacement for any future survey – it is an enhancement – but it does have some significant advantages.  The first is that it can help us to scope out a case, and therefore produce a more accurate strategy – keeping costs down.  The more CCL-Forensics knows about the network infrastructure of the location in question, the more easily we can produce the most cost-effective forensic solution to the problem at hand. It’s also been useful in court, where very specific questions have been raised about the coverage of a cell; if we have relevant data, our expert in the witness box can easily (and at no cost to anyone involved) use that area information – adding value to proceedings.  In short, it means that we simply have more data with which to work – and scientifically, that’s a very good place to be.

Q: But isn’t there a risk that the data may be out of date?

A: The timeliness of data is always a consideration in cell site analysis.  One of the most common concerns tends to surround how the network may have changed in the (often) months between the incident and the survey.  By using data collected in the past, it could be more relevant to the time of the incident.  Additionally, further surveys can be undertaken to assess whether there have been changes in the network over a period of time – and not even the networks themselves can give us information as reliable as that.

Q:  Why aren’t all cell site analysts doing this?

A: I’m not saying it’s only CCL-Forensics who are doing it, but there are analysts who practice certain surveying techniques where keeping a database would not be appropriate.  Earlier, I mentioned the concept of carrying out static surveys.  CCL-Forensics have gone through this at length before, but this just goes to reinforce why turning up at a scene, carrying out a few measurements and then leaving again, is not the strongest piece of science in the world.

Using static surveys each job stands alone and in isolation from any other examination.

Q: It must take up a serious amount of storage space

A: CCL-Forensics had to buy a huge new server to hold this data, but it’s already been worth it.  We’ve saved so much time by having access to this data, and passed on significant cost savings as a result.  It also means that our clients are getting stronger evidence, effectively for free.  It’s stronger because we can more accurately assess service areas, and also get an educated idea of network changes, which can inform expectations at the outset of an investigation and save time during it.

For us, it’s definitely an investment in the future and, ideally, we’d like to be in a position where we have the whole country mapped, but that’s a little way away at the moment!

To finish with an example: There was an urgent pre case management hearing relating to an incident in a city 150 miles from our base.  We were asked to provide some analysis of call data records on a Wednesday morning, which were needed by the Thursday evening. 

CCL-Forensics checked our database and found extensive surveys in and around the locations of interest, and could make an informed estimate of cell service at those addresses without the disruption, delay and cost of travelling to the area and carrying out a survey.  Basically this would not have been possible, given the time constraint, had we not had the historic data. 

Of course it is not just the survey time that causes delay, it’s the preparation, travelling, data manipulation, analysis and reporting of that data.  But, as we’d already surveyed the area, the client had the report they needed well ahead of the deadline: something which would have been practically impossible otherwise. 

To summarise, every case we do makes CCL-Forensics’ service stronger.

For more information about historical cell site data usage, or any of the other issues highlighted in this month’s blog, please email Matthew Tart at mtart@ccl-forensics.com or call 01789 261200

 

Mystery box reveals digital secrets

Arun Prasannan, member of CCL-Forensics’ R&D team. 

Every now and again, an unusual device arrives for analysis at CCL-Forensics, which proves interesting – but above all, significant to an investigation.

Earlier this month, a UK law enforcement agency submitted what can only be described as a ‘black box’.  It was plastic, no bigger than a packet of cigarettes, and from the outside, it had only a slot for a SIM card and a socket for power.

Working closely with the investigating agency, a member of CCL-Forensics’ R&D team carried out an in-depth analysis of what was inside the device, and what data it was capable of storing.

It was initially suspected that it was some kind of tracking device, and when disassembled, it was found to contain a battery, and two separate circuit boards, to one of which was attached a mercury switch which detected movement.  One board contained all the circuitry one would normally expect on a mobile phone, and had everything it needed to connect to a GSM network.  When examined VERY closely, it was labelled (in very small print) with an IMEI number.  From this, we could identify the board, and then research all the available documents about that piece of hardware.

Interestingly, it was a widely used GSM module found in many mobile devices such as GPS trackers, Fax machines and even some phones.

The SIM card was analysed separately, and it was strongly suspected that there was additional data on the board itself.

Our analysts procured a test module, and carried out a comprehensive technical analysis to validate what data it could store.  It was found to have the capacity to store call data (made, received, missed), SMS and contacts – as well as some call timers.  It was also determined that SMS messages could be extracted without changing their status. 

Following this comprehensive research, it was found that the suspect device DID contain a number of phone numbers and call times – which were presented back to the investigator in the case.  This was a level of potentially vital evidence which would have been missed without this very low-level investigation of the device and the data it contained.

It also highlights the talents of CCL-Forensics’ R&D department, and the value investigators can derive by not simply opting for a ‘plug and play’ forensic examination.

For more information, please contact us at research@ccl-forensics.com

The Forensic Implications of SQLite’s Write Ahead Log

By Alex Caithness, CCL-Forensics

SQLite is a popular free file-based database format which is used extensively both on desktop and mobile operating systems (it is one of the standard storage formats available on both Android and iOS). This article sets out to examine the forensic implications, both pitfalls and opportunities, of a relatively new feature of the database engine: Write Ahead Log.

Before we begin, it is worth taking a moment to describe the SQLite file format. Briefly, records in the database are stored in a file which in SQLite parlance is called the ‘Database Image’. The database image is broken up into “pages” of a fixed size (the size is specified in the file header). Each page may have one of a number of roles, such as describing the structure of the database or, crucially, holding the record data itself. The pages are numbered internally by SQLite starting from 1.

Historically SQLite used a mechanism called “Rollback Journals” for dealing with errors occurring during use of the database. Whenever any data on a page of the database was to be altered, the entire page was backed up in a separate journal file. At the conclusion of a successful transaction the journal file would be removed; conversely, if the transaction was interrupted for any reason (crash, power cut, etc.) the journal remained. This means that if SQLite accesses a database and finds that a journal is still present, something must have gone wrong, and the engine will restore the database to its previous state using the copies of pages in the journal, avoiding corrupted data.

From version 3.7.0 of the SQLite engine an alternative journal mechanism was introduced called “Write Ahead Log” (ubiquitously shortened to “WAL”). WAL effectively turned the journal mechanism on its head: rather than backing up the original pages and then making changes directly to the database file, the database file itself is left untouched and the new or altered pages are written to a separate file (the Write Ahead Log). These altered or new pages remain in the WAL file, the database engine reading data from the WAL in place of the historic version in the main database. This continues until a “Checkpoint” event takes place, finally copying the pages in the WAL file into the main database file. A Checkpoint may take place automatically when the WAL file reaches a certain size (by default this is 1000 pages), may be performed manually by issuing an SQL command (“PRAGMA wal_checkpoint;”), or may be triggered programmatically if an application has access to the SQLite engine’s internal API.
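For readers who want to reproduce this behaviour themselves, here is a quick sketch using Python’s built-in sqlite3 module (the file and table names are arbitrary):

import sqlite3

conn = sqlite3.connect("database.db")
conn.execute("PRAGMA journal_mode=WAL;")    # switch the database into WAL mode
conn.execute("CREATE TABLE IF NOT EXISTS test (id INTEGER PRIMARY KEY, value TEXT);")
conn.execute("INSERT INTO test (value) VALUES ('hello');")
conn.commit()                               # the change lands in database.db-wal, not database.db
conn.execute("PRAGMA wal_checkpoint;")      # force the pages held in the WAL into the main file
conn.close()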

Initial state: No pages in the WAL

Page Altered - new version written to WAL

Page 3 is altered. The new version of the page is written to the WAL and the database engine uses this new version rather than the old version in the database file itself.

Checkpoint

A checkpoint operation takes place and the new version of the page is written into the database file.

It is possible to detect whether a database is in WAL mode in a number of ways: firstly, this information is found in the database file’s header; examining the file in a hex editor, the bytes at file offsets 18 and 19 will both be 0x01 if the database is using the legacy rollback journal or 0x02 if the database is in WAL mode. Secondly, you can issue the SQL command “PRAGMA journal_mode;” which will return the value “wal” if the database is in WAL mode (anything else indicates a rollback journal). However, probably the most obvious indication of a database in WAL mode is the presence of two files named “<databasefilename>-wal” and “<databasefilename>-shm” in the same logical directory as the database (e.g. if the database was called “sms.db” the two additional files would be “sms.db-wal” and “sms.db-shm”).
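A minimal sketch of that header check in Python – handy when the database has been extracted from an image and you would rather not touch it with the SQLite engine at all (the function name is mine):

def is_wal_mode(db_path):
    # Read the first 20 bytes of the SQLite header; offsets 18 and 19 hold the
    # write and read format versions: 1 = rollback journal, 2 = WAL
    with open(db_path, "rb") as f:
        header = f.read(20)
    return header[18] == 2 and header[19] == 2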

The “-wal” file is the actual Write Ahead Log which contains the new and updated database pages; its structure is actually fairly simple. The “-wal” file is made up of a 32 byte file header followed by zero or more “WAL frames”. The file header contains the following data:

Offset  Size     Description
0       4 bytes  File signature (0x377F0682 or 0x377F0683)
4       4 bytes  File format version (currently 0x002DE218, which interpreted as a big endian integer is 3007000)
8       4 bytes  Associated database's page size (32-bit big endian integer)
12      4 bytes  Checkpoint sequence number (32-bit big endian integer, incremented with every checkpoint, starting at 0)
16      4 bytes  Salt-1: random number, incremented with every checkpoint *
20      4 bytes  Salt-2: random number, regenerated with every checkpoint
24      4 bytes  Checksum part 1 (for the first 24 bytes of the file)
28      4 bytes  Checksum part 2 (for the first 24 bytes of the file)

* In testing it was found that although the official (and at the time of writing, up to date) command line version of SQLite v3.7.11 behaved correctly, when using SQLite Expert v3.2.2.2.2102 this value appeared to be regenerated after each checkpoint (which is assumed by the author to be incorrect behaviour)

The WAL frames that follow the header consist of a 24 byte header followed by the number of bytes specified in the file header’s “page size” field – the new or altered database page itself. The frame header takes the following form:

Offset  Size     Description
0       4 bytes  Database page number (32-bit big endian integer)
4       4 bytes  For a record that marks the end of a transaction (a commit record) this is a 32-bit big endian integer giving the size of the database file in pages; otherwise 0
8       4 bytes  Salt-1, as found in the WAL header at the time that this frame was written
12      4 bytes  Salt-2, as found in the WAL header at the time that this frame was written
16      4 bytes  Checksum part 1 – cumulative checksum up through and including this page
20      4 bytes  Checksum part 2 – cumulative checksum up through and including this page
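As a sketch, the two structures above can be walked with nothing more than the struct module (the function and field names are my own):

import struct

def read_wal_frames(wal_path):
    # Parse a "-wal" file into its header fields and a list of frames,
    # following the two layouts described above (all values big-endian)
    with open(wal_path, "rb") as f:
        data = f.read()

    (magic, version, page_size, checkpoint_seq,
     salt1, salt2, cksum1, cksum2) = struct.unpack(">8I", data[:32])

    frames = []
    offset = 32
    frame_size = 24 + page_size
    while offset + frame_size <= len(data):
        page_no, commit_size, f_salt1, f_salt2, c1, c2 = struct.unpack(">6I", data[offset:offset + 24])
        frames.append({"page": page_no, "commit": commit_size,
                       "salt1": f_salt1, "salt2": f_salt2,
                       "data": data[offset + 24:offset + frame_size]})
        offset += frame_size

    return {"page_size": page_size, "checkpoint": checkpoint_seq,
            "salt1": salt1, "salt2": salt2}, frames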

There are a number of potential uses and abuses of the WAL file in the context of digital forensics, but first the behaviour of SQLite while in WAL mode should be examined. A number of operations were performed on a SQLite database in WAL mode. After each operation the database file, along with its “-shm” and “-wal” files, was copied, audited and hashed so that their states could be examined.

Step 1: Create empty database with a single table:

8a9938bc7252c3ab9cc3da64a0e0e06a *database.db
b5ad3398bf9e32f1fa3cca9036290774 *database.db-shm
da1a0a1519d973f4ab7935cec399ba58 *database.db-wal

1,024       database.db
32,768      database.db-shm
2,128       database.db-wal

WAL Checkpoint Number:  0
WAL Salt-1: 3046154441
WAL Salt-2: 220701676

Viewing the database file using a hex editor we find a single page containing the file header and nothing else. As noted, as well as creating a database file, a table was also created; however, this data was written to the WAL in the form of a new version of that first page. The WAL contains two frames: the new version of the first page, plus a second frame holding an empty table page. When accessing this database through the SQLite engine this information is read from the “-wal” file transparently and we see the empty table, even though the data doesn’t appear in the database file itself.

Step 2: Force a checkpoint using PRAGMA command:

dd376606c00867dc34532a44aeb0edb6 *database.db
1878dbcefc552cb1230fce65df13b8c7 *database.db-shm
da1a0a1519d973f4ab7935cec399ba58 *database.db-wal

2,048       database.db
32,768      database.db-shm
2,128       database.db-wal

WAL Checkpoint Number:  0
WAL Salt-1: 3046154441
WAL Salt-2: 220701676

Using the pragma command mentioned above, the database was “checkpointed”. Accessing the database through SQLite we see no difference in the data, but examining the files involved we can clearly see that the database file has changed (it has a different hash) and, furthermore, it has grown. Looking inside the database file we can see that the two pages from the “-wal” file have now been written into the database file itself, and SQLite will be reading this data from here rather than the “-wal” file.

The WAL Checkpoint number and salts were not changed at this point, as we will see they are altered the next time that the WAL is written to.

Another interesting observation is that the “-wal” file was left completely unchanged during the checkpoint process – a fact that will become extremely important in the next step.

Step 3: Insert a single row:

dd376606c00867dc34532a44aeb0edb6 *database.db
6dc09958989a6c0094a99a66531f126f *database.db-shm
e9fc939269dbdbfbc157d8c12be720ed *database.db-wal

2,048       database.db
32,768      database.db-shm
2,128       database.db-wal

WAL Checkpoint Number:  1
WAL Salt-1: 3046154442
WAL Salt-2: 534753839

A single row was inserted into the database using a SQL INSERT statement. Once again we arrive at a situation where the database file itself has been left untouched, evidenced by the fact that the database file’s hash hasn’t altered since the last step.

The “-wal” file hasn’t changed size (so still contains two WAL frames) but clearly the contents of the file have changed. Indeed, examining the file in a hex editor we find that the first frame in the file contains a table page containing the newly inserted record as we would expect. What is interesting is that the second frame in the file is the same second frame found in the file in the previous two steps. After a checkpoint the “-wal” file is not deleted or truncated, it is simply reused, frames being overwritten from the top of the file.

Examining the Frame’s headers we see the following:

Frame  Page Number  Commit Size  Salt-1      Salt-2
1      2            2            3046154442  534753839
2      2            2            3046154441  220701676

Both frames relate to the same page in the database but their salt values differ. As previously noted, these two salt values are copied from the WAL file header as they are at the time of writing. Salt-2 is regenerated upon each checkpoint, but the key here is Salt-1, which is initialised when the WAL is first created and then incremented upon each checkpoint. Using this value we can show that the page held in the second frame of the WAL is a previous version of the page held in the first frame: we can begin to demonstrate a timeline of changes to the database.

Step 4: Force a checkpoint using PRAGMA command:

704c633fdceceb34f215cd7fe17f0e84 *database.db
a98ab9ed82393b728a91aacc90b1d788 *database.db-shm
e9fc939269dbdbfbc157d8c12be720ed *database.db-wal

2,048       database.db
32,768      database.db-shm
2,128       database.db-wal

WAL Checkpoint Number:  1
WAL Salt-1: 3046154442
WAL Salt-2: 534753839

Once again a checkpoint was forced using the PRAGMA command.  As before the updated pages in the WAL were written into the database file and this operation had no effect on the contents of the “-wal” itself. Viewing the database using the SQLite engine shows the same data as in the previous step.

Step 5: Insert a second row, Update contents of the first row:

704c633fdceceb34f215cd7fe17f0e84 *database.db
d17cf8f25deaa8dbf4811b4d21216506 *database.db-shm
ed5f0336c23aef476c656dd263849dd0 *database.db-wal

2,048       database.db
32,768      database.db-shm
2,128       database.db-wal

WAL Checkpoint Number:  2
WAL Salt-1: 3046154443
WAL Salt-2: 3543470737

A second row was added to the database using a SQL INSERT statement and the previously added row was altered using an UPDATE statement.

Once again, and as is now fully expected, the database file is unchanged, the new data has been written to the WAL. The WAL contains two frames: The first holds a table page containing the original record along with our newly added second record; the second frame holds a table page containing the updated version of our original record along with the new, second record. Examining the frame headers we see the following:

Frame  Page Number  Commit Size  Salt-1      Salt-2
1      2            2            3046154443  3543470737
2      2            2            3046154443  3543470737

In this case both frames contain data belonging to the same page in the database and the same checkpoint (Salt-1 is the same for both frames); in this case the order of events is simply detected by the order in which the frames appear in the file – they are written to the file from the top, down.

Step 6: Insert a third row:

704c633fdceceb34f215cd7fe17f0e84 *database.db
5ac6d9e56e6bbb15981645cc6b4b4d6b *database.db-shm
672a97935722024aff4f1e2cf43d83ad *database.db-wal

2,048       database.db
32,768      database.db-shm
3,176       database.db-wal

WAL Checkpoint Number:  2
WAL Salt-1: 3046154443
WAL Salt-2: 3543470737

Next, a third row was added to the database using an INSERT statement. Viewing the database logically with the SQLite engine we see all three records. While the database file remains unchanged, the “-wal” file now contains 3 frames: the first two are as in the previous step, with the third and final new frame holding a table page with all three records. The frame headers contain the following information:

Frame  Page Number  Commit Size  Salt-1      Salt-2
1      2            2            3046154443  3543470737
2      2            2            3046154443  3543470737
3      2            2            3046154443  3543470737

We now have three versions of the same page; as before, the sequence of events is denoted by the order in which they occur in the file.

Step 7: Force a checkpoint using PRAGMA command:

04a16e75245601651853fd0457a4975c *database.db
05be4054f8e33505cc2cd7d98c9e7b31 *database.db-shm
672a97935722024aff4f1e2cf43d83ad *database.db-wal

2,048       database.db
32,768      database.db-shm
3,176       database.db-wal

WAL Checkpoint Number:  2
WAL Salt-1: 3046154443
WAL Salt-2: 3543470737

As we have observed before, the checkpoint results in the up-to-date records being written into the database; the “-wal” file is unaffected.

Step 8: Delete A Row:

04a16e75245601651853fd0457a4975c *database.db
dca5c61a689fe73b3c395fd857a9795a *database.db-shm
3b518081a5ab4a7be6449e86bb9c2589 *database.db-wal

2,048       database.db
32,768      database.db-shm
3,176       database.db-wal

WAL Checkpoint Number:  3
WAL Salt-1: 3046154444
WAL Salt-2: 2798791151

Finally in this test, the second record in the table (the record added in Step 5) was deleted using an SQL DELETE statement. Accessing the database using the SQLite engine shows that the record is no longer live in the database.

As per expectations the database file is unaffected by this operation, the changes instead being written to the WAL. The “-wal” file contains three frames: the first frame holds a table page with the second record deleted (the data can still be seen, and could be recovered using a tool such as Epilog; however, the metadata on the page shows that the record is not live). The remaining two frames are identical to the final two frames in the previous step. Examining the frame headers we see the following:

Frame  Page Number  Commit Size  Salt-1      Salt-2
1      2            2            3046154444  2798791151
2      2            2            3046154443  3543470737
3      2            2            3046154443  3543470737

Here we once again see three frames all containing data from the same database page. This time the most recent version of the page is found in frame 1, as it has the highest Salt-1 value; the other two frames have a lower Salt-1 value and are therefore older revisions. As they both share the same Salt-1 value, we apply the “position in file” rule: the later in the file the frame occurs, the newer it is. So, ordered from newest to oldest, the frames are: 1, 3, 2.
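That ordering rule is straightforward to express in code; a sketch (building on the read_wal_frames helper above, which is my own construction rather than anything from SQLite) might be:

def frames_newest_first(frames):
    # Order WAL frames newest-first: higher Salt-1 wins; for equal Salt-1,
    # the frame that occurs later in the file is the newer one
    indexed = list(enumerate(frames, start=1))    # keep the 1-based frame number
    indexed.sort(key=lambda item: (item[1]["salt1"], item[0]), reverse=True)
    return [frame_number for frame_number, _ in indexed]

For the frames in Step 8 above, this yields the order 1, 3, 2, matching the reasoning in the text.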

Summarising the findings in this experiment:

  • Altered or new pages are written to the WAL a frame at a time, rather than the database file
  • The most up-to-date pages in the WAL are written to the database file on a Checkpoint event – this operation leaves the “-wal” file untouched
  • After a Checkpoint, the “-wal” file is reused rather than deleted or truncated, with new frames overwriting the old ones from the top of the file
  • Multiple frames for the same database page can exist in the WAL, their relative ages can be derived by first examining the frame header’s Salt-1 value with newer frames having higher values. Where multiple frames have the same Salt-1, their age is determined by their order in the WAL, with newer frames occurring later

Pitfalls and Opportunities

The most obvious opportunity afforded by the Write Ahead Log is the potential for time-lining of activity in a database. To prove the concept, a small Python script was written which would automate the analysis of the frames in a WAL file and provide a chronology of the data; a sample output is shown below:

Header Info:
    Page Size: 1024
    Checkpoint Sequence: 3
    Salt-1: 3046154444
    Salt-2: 2798791151

Reading frames...

Frame 1 (offset 32)
    Page Number: 2
    Commit Size: 2
    Salt-1: 3046154444
    Salt-2: 2798791151

Frame 2 (offset 1080)
    Page Number: 2
    Commit Size: 2
    Salt-1: 3046154443
    Salt-2: 3543470737

Frame 3 (offset 2128)
    Page Number: 2
    Commit Size: 2
    Salt-1: 3046154443
    Salt-2: 3543470737

Unique Salt-1 values:
    3046154443
    3046154444

Chronology of frames (oldest first):

Page Number: 2
    Frame 2
    Frame 3
    Frame 1

With further work it should be possible to display a sequence of insertions, updates and deletions of records within a database – a feature which is a top priority for the next update of Epilog. Even without the ability to timeline, it is clear that deleted records can be stored and recovered from the WAL (functionality already present in Epilog).

One behaviour which hasn’t been described in full so far is that a database file in WAL mode isolated from its associated “-wal” file is, in almost all circumstances, a valid database in its own right. For example, consider the test database above as it is at the end of Step 8. If the database file was moved to another directory, as far as the SQLite database engine is concerned this is a complete database file. If this isolated database file was queried, the data returned will be that which was present at the last checkpoint (in our test case, this would be the 3 live records present at the checkpoint performed in step 7).

This raises an important consideration when working with a SQLite database contained in a disk image or other container (e.g. a TAR archive): if the database file is extracted from the image or container without its associated WAL files, the data can be out-of-date or incomplete. The other side of the coin is that the “full, up-to-date” version of the data (viewed with the WAL present) may lack records present in the isolated database file because of deletions pending a checkpoint. There is, then, an argument for examining databases both ways – complete with WAL files and isolated – as it may be possible to obtain deleted records “for free”.
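A minimal sketch of that two-way examination (the function name and approach are mine; always work on copies of the files, since simply opening and closing a live WAL-mode database can itself trigger a checkpoint):

import os
import shutil
import sqlite3
import tempfile

def rows_with_and_without_wal(db_path, query):
    # Run the same query against the database with its "-wal" file present,
    # and against an isolated copy of the database file alone
    conn = sqlite3.connect(db_path)
    with_wal = conn.execute(query).fetchall()
    conn.close()

    isolated = os.path.join(tempfile.mkdtemp(), "isolated.db")
    shutil.copy(db_path, isolated)    # deliberately leave the -wal/-shm files behind
    conn = sqlite3.connect(isolated)
    without_wal = conn.execute(query).fetchall()
    conn.close()

    return with_wal, without_wal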

Summing Up

The Write Ahead Log introduced in SQLite 3.7 may afford digital forensics practitioners new opportunities to extract extra data and behavioural information from SQLite databases; however, the mechanism should be understood in order to get the most out of these opportunities and to avoid confusion when working with the databases.

If you have any comments or questions, please leave a comment below or get in touch directly at research@ccl-forensics.com.

References:

SQLite File Format

Write Ahead Log

Alex Caithness, CCL-Forensics