
Solving Combinatoric Problems with List Comprehensions 10 May 2015

Posted by Oliver Mason in algorithm, erlang.

My daughter had some maths homework the other day:

You have 4 bags, each full of the numbers 1, 3, 5, and 7 respectively. Take 10 of the numbers that when added up make 37. What numbers are they?

So far so good: that sounds easy enough. But a bit of trial and error quickly leads nowhere. Something can’t be right. So, let’s get the computer to work it out.

As I haven’t done much Erlang recently I thought I’d give it a go. And, during a casual glance at Armstrong’s Programming in Erlang I thought I’d finally understood list comprehensions, so I wrote the following program:
-module(comb).
-export([result/0]).

result() ->
    [{A+B+C+D+E+F+G+H+I+J, A,B,C,D,E,F,G,H,I,J} ||
        A <- [1,3,5,7],
        B <- [1,3,5,7],
        C <- [1,3,5,7],
        D <- [1,3,5,7],
        E <- [1,3,5,7],
        F <- [1,3,5,7],
        G <- [1,3,5,7],
        H <- [1,3,5,7],
        I <- [1,3,5,7],
        J <- [1,3,5,7],
        A+B+C+D+E+F+G+H+I+J =:= 37].

I declare a module with one function, result/0. This finds me ten variables that can take any of the four specified values and add up to 37. Simples!

The list comprehension has ten generators, and one filter; it will return a tuple with the sum and the individual variables’ values.
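As an aside, the ten identical generators could be factored out. A hypothetical, more general version (my sketch, not part of the original module) picks N values from a list of choices recursively and then filters on the sum:

%% All N-element picks from Choices (sketch only).
picks(0, _Choices) -> [[]];
picks(N, Choices) ->
    [[C|Rest] || C <- Choices, Rest <- picks(N-1, Choices)].

%% Keep only the picks that add up to Target.
solve(N, Choices, Target) ->
    [P || P <- picks(N, Choices), lists:sum(P) =:= Target].

With that, solve(10, [1,3,5,7], 37) would express the homework question directly. But back to the ten-generator version; running it in the shell gives: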

Erlang R16B01 (erts-5.10.2) [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]
Eshell V5.10.2 (abort with ^G)
1> comb:result().
[]
2>

WTF???! An empty list?! So I try changing the 37 to another value, like 36.
3> comb:result().
[{36,1,1,1,1,1,3,7,7,7,7},
{36,1,1,1,1,1,5,5,7,7,7},
{36,1,1,1,1,1,5,7,5,7,7},
{36,1,1,1,1,1,5,7,7,5,7},
{36,1,1,1,1,1,5,7,7,7,5},
[etc, etc].

So it does work! Only, there doesn’t seem to be an answer to the question. And with a bit of logical reasoning it is obvious: when adding two odd numbers, you get an even number. So adding ten odd numbers also yields an even number, but 37 is odd.

What I learnt from this exercise: thinking about the problem beforehand can save you time, as there was no need to write a program at all. But then, I did get to use list comprehensions, and have learnt how powerful they are. And it neatly shows Erlang’s Prolog roots as well.

More than eye candy 19 February 2013

Posted by Oliver Mason in Apple, iphone, objective-c.

For an undergraduate module in Digital Humanities I am currently coding an iOS app designed by our students (see @iBrumApp for the related twitter feed). This is a tourist attraction app about Birmingham. The students collect data on a number of interesting locations (images, descriptions, coordinates, …) and encode them in XML ready to put into the app.

In the app so far there is a map screen, which shows the locations of the attractions. This can be swapped for a list view, which shows the attractions as a table in text form, possibly with a short description and a category label. A segmented control allows switching between the two views.

Under the hood there are two views on top of each other, with one of them hidden. Pressing the segmented control unhides the hidden one and hides the previously visible one; that works fine. There is a simple method in the view controller which swaps the two views:

-(void)switchViewsIn:(UIView*)toShow 
                 out:(UIView*)toHide {

    [toHide setHidden:YES];
    [toShow setHidden:NO];
}

However, the way the views change feels a bit sudden and ‘in your face’.

So it might be better to soften the transition somewhat, for example with a cross-fade animation. Basically the visible view becomes increasingly transparent, while the hidden view becomes increasingly opaque, until the old one is completely invisible. This is very easy to do with Core Animation:

-(void)switchViewsIn:(UIView*)toShow 
                 out:(UIView*)toHide {

    [toHide setAlpha:1.0];
    [toShow setAlpha:0.0];

    [UIView beginAnimations:nil context:NULL];
    [UIView setAnimationDuration:0.75];
    [UIView setAnimationCurve:
            UIViewAnimationCurveEaseInOut];
    [toHide setAlpha:0.0];
    [toShow setAlpha:1.0];
    [UIView commitAnimations];
}

[Note: there is probably a more elegant way using the new block syntax of Objective-C, but this works just fine].
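For the record, a block-based version might look like this (an untested sketch using UIView's animateWithDuration:delay:options:animations:completion: method, which replaces the begin/commit pair):

-(void)switchViewsIn:(UIView*)toShow 
                 out:(UIView*)toHide {

    [toHide setAlpha:1.0];
    [toShow setAlpha:0.0];

    [UIView animateWithDuration:0.75
                          delay:0.0
                        options:UIViewAnimationOptionCurveEaseInOut
                     animations:^{
                         [toHide setAlpha:0.0];
                         [toShow setAlpha:1.0];
                     }
                     completion:nil];
}

The completion block would also be a natural place to set the faded-out view's ‘hidden’ property, which is relevant to the performance point below.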

This fading animation indeed has the desired effect, making the transition much smoother and less sudden; I feel this is an improvement in the feel of the app. It’s only a small thing, but if there is one thing you pick up as a developer in the Apple ‘eco-system’ it’s that small things matter. Attention to detail is important.

One thing I have not yet explored (as I’m only testing with a small sample data set of two locations) is performance: I suspect that having two superimposed views with transparency might slow the whole thing down, as the iPhone tries to render the hidden view even though it is fully transparent. If that proves to be an issue, I can simply add a line that sets the ‘hidden’ property to take the invisible view out of rendering completely.

Using Neotoma to parse PEG in Erlang 25 February 2011

Posted by Oliver Mason in erlang, programming.

For a project I need some easy and simple way to read structured data from a text file. Initially I considered JSON, and found a JSON parser for Erlang, but then decided that this was just overkill for what I needed. Ideally there would be a better match between the data structures I needed (lists, atoms, strings) and the file format.

I then decided to use Lisp-like S-expressions; at least a simplified version thereof. The data I read from the file is basically a list which can contain other lists, strings (which technically are also just lists), and atoms. A while ago I wrote a simple Erlang module to process something similar, but that had made certain assumptions that didn’t hold anymore, and I felt something more maintainable was required. And what better way to do that than by using a formal grammar to describe the file format and a tool to generate a parser from that?

A simple and straightforward grammar formalism is PEG, the Parsing Expression Grammar, and there is already an Erlang parser generator available for it, Neotoma by Sean Cribbs. Installation was easy, and so was writing a grammar:

list <- open elem* close;
elem <- list / atom / sstring / dstring;
atom <- [a-z0-9_]+ space*;
dstring <- '"' [^"]* '"' space*;
sstring <- "'" [^']* "'" space*;
open <- '(' space* ;
close <- ')' space* ;
space <- ' ' / '\t' / eol;
eol <- '\r\n' / '\n' / '\r';

A list is something (or nothing) enclosed in brackets (with optional spaces). An element is a choice of things; atoms are lower-case letters and digits (at least one), and for strings I allow both double and single quotes. This grammar is saved in a file “terms.peg”:
Eshell V5.7.3 (abort with ^G)
1> neotoma:file("terms.peg").
ok
2> c(terms).
{ok,terms}

and you’re ready to go. I created four short one-line test files, with the following content:

  1. (atom)
  2. ( "string" )
  3. (foo bar)
  4. (())

This is the output:
3> terms:file("test1").
[["(",[]],[["atom",[]]],[")",["\n"]]]
4> terms:file("test2").
[["(",[" "]],[["\"","string","\"",[" "]]],[")",["\n"]]]
5> terms:file("test3").
[["(",[]],[["foo",[" "]],["bar",[]]],[")",["\n"]]]
6> terms:file("test4").
[["(",[]],[[["(",[]],[],[")",[]]]],[")",["\n"]]]

Not all that helpful, as there is a lot of noise in there, such as the spaces in “test2”, and all the line-breaks. So I need to go back to the AST and extract just those bits from the parse tree that I actually want. In Neotoma you can do this by adding bits of Erlang code to the grammar definition, like so:
list <- open elem* close
`[Open, Elem, Close] = Node, Elem`
;
atom <- [a-z0-9_]+ space*
`[Atom, Space] = Node, list_to_atom(Atom)`
;
dstring <- '"' [^"]* '"' space*
`[Quote, Str, Quote, Space] = Node, Str`
;
sstring <- "'" [^']* "'" space*
`[Quote, Str, Quote, Space] = Node, Str`
;

(All other lines are unchanged as in the grammar listed above)

What I do here is to split the Node into its component parts, and then discard the bits I don’t want. In the ‘list’ rule I am only interested in the elements, but not in the enclosing brackets, so I just return ‘Elem’. For the ‘atom’ I ignore the spaces and convert the matched character sequence into an atom. Now the output looks like this:
7> neotoma:file("terms.peg").
ok
8> c(terms).
{ok,terms}
9> terms:file("test1").
[atom]
10> terms:file("test2").
["string"]
11> terms:file("test3").
[foo,bar]
12> terms:file("test4").
[[]]

Much better, and just what I wanted. The ‘terms.erl’ file that neotoma generated is 7kb in size, just over 220 lines, and just under 8kb compiled.

The only issue is speed and memory consumption: on my 8GB MacBook Pro, parsing a file of less than 40kb runs out of memory and crashes after 30+ seconds. If I cut a part off at the end to bring it down to 35kb, the parser succeeds, but needs 35 seconds (hand-timed). So I think I will have to revisit my hand-made parser again after all… :(

UPDATE:
I had an email exchange about this with Sean, who informs me that this is a limitation of the memoisation, which creates multiple duplicates as (unoptimised) lists. So it is not a fault of neotoma, but of the algorithm in general. There are ways around this, but the time available to implement them is, as always, a limiting factor!

Update/Correction to “Elegant IR with Erlang” 14 October 2010

Posted by Oliver Mason in algorithm, erlang, programming.

When I tried to actually use my implementation of tf-idf that I described in the previous post, I realised that it’s not quite what I wanted: as it is, I get a different tf-idf value for each token and each document. So with a collection of 1000 documents I get 1000 dictionaries containing the tokens in each text. However, what I really want is ONE dictionary with all the tokens in, and ONE tf-idf value for each token.

Merging the values is tricky, as it involves relative frequencies, so I needed to make some subtle changes. First, the term_freq/1 function now deals with absolute frequencies, and returns a tuple containing the frequency values and the document size in tokens, so that the relative frequencies can easily be computed if required:

term_freq(Text) ->
    term_freq(Text, 0, dict:new()).

term_freq([], Sum, Dict) ->
    {Dict, Sum};

term_freq([Token|Rest], Sum, Dict) ->
    term_freq(Rest, Sum+1,
           dict:update_counter(Token,1,Dict)).

No change really, only the terminating clause of term_freq/3 has dropped its dict:map to compute the relative values, and instead returns the tuple with the frequency dictionary and the document size.

This also requires a minor change in the inv_doc_freq/3 function, where we need to deal with the tuple and extract the dictionary from it in the second and final clause:

Old:

inv_doc_freq([Doc|Rest], DocNum, Dict) ->

New:

inv_doc_freq([{Doc, _Sum}|Rest], DocNum, Dict) ->

The biggest change, however, is in the combined tf_idf/1 function, as the algorithm has changed somewhat. Originally the function filled a full screen in the editor, but I have extracted two helper functions to make it easier to follow; the gain in clarity will surely outweigh the minute performance penalty…

tf_idf(Docs) ->
    Idfs = inv_doc_freq(Docs),
    DocLen = total_doc_size(Docs),
    DocTotalFreqs = total_token_freqs(Docs),
    dict:map(
        fun(Key, Value) ->
            dict:fetch(Key, Idfs) * Value / DocLen
            end,
        DocTotalFreqs).

I need to calculate the overall size (in tokens) of the full document collection, and then add up each token’s frequency over all documents. These steps have been factored out into separate functions. Then all that is left is a map over all tokens to calculate the tf-idf value from the relative frequency in the document collection multiplied by the idf value as calculated earlier.

Computing the total document size is trivial: we loop over the list of term frequency dictionaries and this time extract the lengths, ignoring the actual dictionaries:

total_doc_size(Docs) ->
    lists:foldl(
        fun({_Doc, DocSum}, Total) -> Total + DocSum end,
        0,
        Docs).

And finally, that leaves computing the total frequencies of all tokens.

total_token_freqs(Docs) ->
    lists:foldl(
        fun({Doc, _Sum}, Current) ->
            dict:fold(
                fun(Key, Value, AccIn) ->
                    dict:update_counter(Key,Value,AccIn)
                    end,
                Current,
                Doc)
            end,
        dict:new(),
        Docs).

Here we process the document list (as there are likely to be fewer documents than tokens) and fold each dictionary, adding the tokens with their respective frequencies to our accumulator dictionary.
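To see how the pieces fit together, here is a hypothetical end-to-end use (my sketch; it assumes the functions above live in one module, with the module prefix omitted, and that the texts are already tokenised):

Texts   = [["the","cat","sat"], ["the","dog","ran"]],
Docs    = [term_freq(T) || T <- Texts],
Weights = tf_idf(Docs),
dict:to_list(Weights).

Each text becomes a {FrequencyDict, Size} tuple via term_freq/1, and tf_idf/1 folds all of them into the single dictionary of weights.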

Apologies for this correction, but sometimes you only really realise that a particular interpretation of an algorithm is not the right one when you actually need to use it. The curse of developing libraries without a proper specification of the requirements…

Elegant IR with Erlang 11 October 2010

Posted by Oliver Mason in erlang, programming.

I am currently working on a project that requires processing documents. As part of that I wanted to use term weighting as used in information retrieval (IR); the individual texts I’m working with are of course of different lengths and contain different sets of words, and I didn’t want that to mess things up as it did when I initially worked with raw token frequencies only.

What I actually wanted is tf-idf, the product of term frequency (tf) and inverted document frequency (idf); essentially you see how often a word/term/token occurs in a text, and multiply that with a measure of how ‘bursty’ it is. The idea being that common words (the, of, and etc) occur in pretty much every document and are thus useless for categorisation of the content. In a way it is a more sophisticated approach to using a stop word list. Sophisticated because you don’t have to create such a list, and it is also not binary include/exclude, but assigns each token a continuous weight depending on its distribution.

Term Frequency

This is simply the relative frequency of occurrence, the number of times a token occurs in the text divided by the text length. As input I assume that the text has already been tokenised and is represented as a list of tokens. The output should be a dictionary (ie a set of key/value tuples) with each token as a key and its tf as the value:

term_freq(Text) ->
    term_freq(Text, 0, dict:new()).

term_freq([], Sum, Dict) ->
    dict:map(
        fun(_Key, Value) -> Value / Sum end,
        Dict);

term_freq([Token|Rest], Sum, Dict) ->
    term_freq(Rest, Sum+1, 
        dict:update_counter(Token,1,Dict)).

If another token is available, I simply update its frequency count by one, add one to the text size, and re-run the function on the rest of the text. If no more tokens are left, I map the dictionary (which at this point contains absolute frequencies) to another dictionary by dividing each value by the text size; this new dictionary is then returned.

Inverted Document Frequency

For the idf I count how many documents each token occurs in, and divide the total number of documents by that number; so the rarer the token, the larger the resulting value. A ubiquitous token like ‘the’ should give a result of just 1.0; however, to make it a bit more complicated, we then take the logarithm (base 10) of it, so that the final value will be greater than or equal to zero.
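For example, in a collection of 1,000 documents a token occurring in only 10 of them gets an idf of log10(1000/10) = 2, while a token occurring in all 1,000 gets log10(1000/1000) = 0, i.e. no weight at all.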

This time the input is a list of dictionaries, one for each document. The dictionary representing each document is the output of our term_freq/1 function, ie the keys are the tokens, and the values the term frequencies. We don’t really care about the frequencies here, as they all will be greater than zero – a word that does not occur in a text will not be a key in the respective dictionary. As output we will have a single dictionary of all tokens that occur in our document collection, with the values being the idf of each token.

inv_doc_freq(Docs) ->
    inv_doc_freq(Docs, 0, dict:new()).

inv_doc_freq([], DocNum, Dict) ->
    dict:map(
        fun(_Key, Value) -> math:log10(DocNum/Value) end,
        Dict);

inv_doc_freq([Doc|Rest], DocNum, Dict) ->
    inv_doc_freq(Rest, DocNum+1,
        dict:fold(
            fun(Key, _Value, AccIn) -> 
               dict:update_counter(Key,1,AccIn) end,
            Dict,
            Doc)
    ).

Again we iterate over all elements of our input list (ie the documents), and this time we iterate over all tokens of the document using a dict:fold/3 function, by adding 1 to the count for each token of the current document that we have already encountered, or entering it with a frequency of 1 if we haven’t yet. We also increment the document count by 1. This time the dict:map/2 function performs the calculation for the idf value as soon as we have reached the end of our document list.

tf-idf

At this stage we have a dictionary for each document containing the term frequencies, and a dictionary for the whole document collection containing the inverted document frequencies for all the tokens. Combining the two we then get the value for the tf-idf, which is different for each document (so the output is a list of dictionaries, one per document).

To make things easier, the call to compute the idf is integrated into the tf_idf/1 function, so the input is the same as for the inv_doc_freq/1 function, a list of term frequency dictionaries:

tf_idf(Docs) ->
    Idfs = inv_doc_freq(Docs),
    lists:map(
        fun(TFs) -> dict:map(
            fun(Key,Value) -> Value *
                dict:fetch(Key, Idfs) end,
            TFs) end,
        Docs).

Here we map the list of term frequency dictionaries (Docs) to a list of dictionaries containing the tf-idf values: each document’s term frequency dictionary becomes the corresponding tf-idf dictionary, with each token’s term frequency multiplied by its idf value as computed by inv_doc_freq/1.

Summary

Calculating a set of values from texts is very concise with Erlang. In languages like C or Java one would have to code various (nested) loops, but this can easily be accomplished by using the map and fold functions that operate on lists and dictionaries in Erlang. It does need a bit of mental acrobatics, but if you are familiar with Prolog, then the basic structure of an Erlang program is not too difficult to follow. It’s those nested mappings that sometimes can be a little confusing.

The beauty of Erlang, of course, is that each map can be done in parallel; if you have a large list of documents and a processor with several cores then it is not hard to make use of its full power by simply using a parallel map function. To do this in other languages where nested loops are used in place of the map function is not trivial.
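For illustration, a commonly used parallel map in Erlang (my sketch, not part of the code above) spawns one process per list element and collects the results in order:

pmap(F, List) ->
    Parent = self(),
    Refs = [begin
                Ref = make_ref(),
                spawn(fun() -> Parent ! {Ref, F(Elem)} end),
                Ref
            end || Elem <- List],
    [receive {Ref, Result} -> Result end || Ref <- Refs].

Swapping lists:map/2 for pmap/2 in tf_idf/1 would then spread the per-document work across all available cores.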

So Erlang is not only very concise, but it can also be future-proof by allowing easy concurrency.

On Planning and Reality 3 June 2010

Posted by Oliver Mason in Apple, iphone, objective-c, programming.

When I got my iPhone a little more than a year ago, and started developing programs for it, I had a clear idea what my first program was going to be. However, as always, things turn out quite different from how you think they are going to be…

First, it did take me a bit to get used to Objective-C. Not because it is very different from Java (after all, I used to program in C before Java came along), but because all the classes in the Cocoa framework need to be learned. There are subtle differences between those and their Java cousins, and after a bit more experience I believe that the Cocoa classes are actually more powerful and easier to use than their Java counterparts.

Some teething troubles, the lack of automatic memory management on the iPhone, and a surfeit of square brackets meant further delays. Finally I had a program written, but it needed more work on the graphics side, artwork and so on. The stuff that really makes a difference, but is very time-consuming and hard if you’re not used to using graphics software. So the easier way out was to write a different program, one that is lighter on the artwork.

This then was a todo-list program, which is also suitable for planning small projects. I wanted a program like that, but didn’t want to fork out the money for Things, which also looked a bit like overkill. On the Lifehack blog I read an article by Dustin Wax on his Moleskine setup, and that seemed like something usable, which I then went about implementing as an iPhone app. With a bit of help from a friend with the icon design, and thanks to freely available sound files and icons, ePlanner was born.

In ePlanner I tried out Core Data, which is really a lot easier than messing about with SQLite directly. It uses both tabs and navigation views, and a lot of tables. I found it rather tedious in that all the classes were almost identical, but only almost, not 100%, and it’s hard to see how that could be changed. The behaviour of those classes is ever so slightly different.

The submission procedure was very easy, thanks to a description I found on the web. My app did get rejected, due to a crash on a 3GS; but I don’t have a 3GS, so I could only test it on a 3G and an iPod touch. Thanks to Instruments I could track down the error, which was of course a memory management issue, but one without consequences on the machines I could test it on. After that was changed, the app went through, and has indeed been bought by people all over the world.

It is really a nice feeling to think that someone in Argentina is using my app, as is someone in Hong Kong, some people in the US, Sweden, etc. I used some free Google advertising at the beginning; keeping it going would have been really expensive, but when I stopped it, sales began to trail off. Then again, that could also have been an effect of the app slipping out of the ‘newly released’ slots.

It is indeed not too hard to come up with a program that sells. The overall process is straightforward enough, though there were some frustrating moments battling with the various code-signing and certificate issues that Apple requires.

I since have bought an iPad, and am thinking of porting ePlanner to this; however, I’ll give it a while so that I get used to how the iPad works. Knowing your way round the platform makes it a lot easier to develop good software, and I am not yet sure how the UI design for the small iPhone screen can best be translated to the iPad’s larger display. But it will come, and I will describe the process on this blog…!

In the meantime, I will re-visit some of my previous program ideas, as it is really not hard to turn them into something that will end up in the App Store, and it is really satisfying to do so.

Go – Went – Gone 30 December 2009

Posted by Oliver Mason in erlang, programming.

I did play around with the unhelpfully named ‘go’ programming language, another output of the don’t-be-evil company. Trying to find any web resources for it is pretty much impossible, at first because it was too new, and now because of the name. I would have expected something more search-friendly from the number 1 web search engine!

There were a few things I liked about go. It’s smallish, C-like, has garbage collection, built-in support for concurrency, and Unicode strings. Hash tables (‘maps’) as a first-class data type. A nice-looking set of libraries for all sorts of purposes. Not that fast yet, but with lots of scope for performance improvements. No header files. First-class support for unit tests.

This was looking attractive compared to Erlang, which is older and more mature/stable, but still not very high-performance, has slightly awkward string handling, and only a handful of data types (essentially lists, tuples, and atoms). And a Prolog-style syntax with a number of inconveniences around the use of commas, semicolons, and full stops. Editing a clause is never straightforward.

I have since abandoned go again. It has its inconsistencies (the use of ‘new’ for some data types and ‘make’ for others), and worst of all, there was so much talk about wanting to add generics to the language that I fear they will become a feature of it. I don’t like generics: they seem to me to be more trouble than they’re worth. They make code really hard to read, and inflexible. They might make some kinds of bugs impossible, but in my view that is a feeble gain for wrecking a language. As Knuth (I think) said, part of writing programs is aesthetics. I cannot bring myself to like Java code full of type annotations. Objective-C is so much cleaner in comparison. And so was go, until now.

Another reason is the concurrency support. Go uses channels for that, which seems awkward. I much prefer Erlang’s mailboxes, which work neatly together with pattern matching to respond to certain messages and ignore others. You do not need to worry as much about the order in which messages arrive, and the whole communication process is a lot easier with only the basic data types.
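To illustrate what I mean (my example, not from the original post): an Erlang process can pick out just the messages it is interested in, and anything else simply stays queued in the mailbox until a later receive matches it.

loop() ->
    receive
        {add, From, A, B} ->
            From ! {result, A + B},
            loop();
        stop ->
            ok
        %% any other message stays in the mailbox
    end.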

So I’m going back to Erlang. I will dig out the string library that I started, and get back into thinking recursively. At least I know where I am with it, and it is not suddenly going to change!

What’s a UITextEffectsWindow? And why is it receiving messages? 17 September 2009

Posted by Oliver Mason in Apple, iphone, objective-c, programming.

I just spent several hours (or at least it felt like several hours!) in frustration, hunting for a trivial bug. I’ve been testing a quick’n’easy prototype screen with a UIImageView and four UIButtons. The buttons are linked via an action to a view controller. And every time I press a button, my app conks out, complaining that -[UITextEffectsWindow buttonPressed:] was an unrecognised selector. I checked the memory address, and just before the exception was thrown it did belong to my view controller.

I was ready to put the blame on some mistake with Interface Builder, until I came across the solution (indirectly) in a blog post: the problem described there related to properties, and the difference between vc = … and self.vc = …. I had another look at my code and quickly found the offending line: I had the view controller as a local variable in the app delegate’s ‘viewDidLoad’ method, and I autoreleased it. In other words, by the time the button was pressed the view controller no longer existed, hence the weird error message.
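In manual-reference-counting terms the difference looks like this (a sketch with hypothetical names, assuming ‘vc’ is declared as a retain property):

MyViewController *controller = [[[MyViewController alloc]
    initWithNibName:@"MyView" bundle:nil] autorelease];

self.vc = controller;  // property setter retains it: survives the pool
// vc = controller;    // plain ivar assignment: the controller is
//                     // deallocated once the autorelease pool drains,
//                     // leaving a dangling pointer behind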

This was not helped by the fact that ‘UITextEffectsWindow’ is not mentioned in the documentation anywhere, as it seems to be an internal UIKit class, but at least it appears to be consistent.

So, if your button presses send messages to ‘UITextEffectsWindow’, make sure to check that your view controller is still alive!

Application Promiscuity 7 September 2009

Posted by Oliver Mason in Apple, iphone, objective-c.

I was getting a bit bored with the slow progress on my Esperanto dictionary app, and over the holidays I started work on a few other ideas I had. One was a maths-drill program for kids, as the ones that are already out there (at least the ones I tried) don’t seem to be ideal (nothing ever is ideal, though!). So I tried writing another app so our kids could play and practise their maths skills.

That app is almost done, just the artwork and sound effects are missing. At the moment it looks pretty rubbish (but looks aren’t that important as long as it works and doesn’t crash!), and the sound effects are nicked from somewhere, so I have to replace them with free ones. Again, the purpose was to try things out.

That app was quite fun, and also easy to do. More on that later…

The next app is one that supports teachers in learning their students’ names. This makes use of a navigation controller, which is slow going. I’m picking up loads of experience with Objective-C quirks along the way. For example: avoid using NSNumbers as the keys in a dictionary if you want to save it using writeToFile: later on…
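For anyone hitting the same problem: writeToFile: only serialises property lists, and property-list dictionaries must have string keys. A sketch of the workaround (names hypothetical):

NSMutableDictionary *plistDict = [NSMutableDictionary dictionary];
for (NSNumber *key in scores) {  // 'scores' uses NSNumber keys
    [plistDict setObject:[scores objectForKey:key]
                  forKey:[key stringValue]];
}
BOOL ok = [plistDict writeToFile:path atomically:YES];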

Overall it’s very exciting, and the iPhone is a fun platform. It’s really great to see your own stuff amongst all those polished apps, and provides great motivation to do better.

NSData Naughtiness 13 July 2009

Posted by Oliver Mason in Apple, objective-c, programming.

Well, this is not exactly NSData’s fault, but I ran into a problem (for the second time; the first time I bypassed it with a short-cut) when reading text data from a file.

Occasionally there was random garbage at the end of a line, which I could not understand. Incidentally, I was reading a number of files, each in full in one go, into an NSData instance, and converting that into an NSString with the correct encoding; this I would then tokenise and add to another file. So the garbage was actually at the end of each file. I then found that I can directly initialise an NSString with the contents of a file, and the problem disappeared.

Now I want to produce concordance lines, and I jump into the middle of the file to read a stretch. First I run into trouble with the encoding: as the data is UTF8-encoded, a random jump can end up in the middle of a multi-byte character. NSString does not like that… but here I can just test for that and skip the initial bytes. The same problem obviously also happens at the end, where the final multi-byte character could be incomplete. Again, truncation seems the easy way out.

But then I had the issue with the occasional random garbage again! NSData seems to be at fault, and this time I can’t bypass it, as NSString can only read a file in full. A quick web search, and the solution crops up (in an aside) on stackoverflow.com: the data that NSData returns from the -bytes method is not zero-terminated, but NSString’s +stringWithUTF8String: expects exactly that, hence the random garbage after the end of the unterminated data. In a way I’m surprised that it actually worked most of the time!
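The fix that follows from this is to avoid the C-string route altogether and let NSString do a length-aware conversion (a sketch):

NSData *data = [NSData dataWithContentsOfFile:path];

// Buggy: +stringWithUTF8String: expects a NUL-terminated C string,
// which -bytes does not provide.
// NSString *text = [NSString stringWithUTF8String:[data bytes]];

// Safe: uses the data's explicit length instead.
NSString *text = [[[NSString alloc] initWithData:data
                                        encoding:NSUTF8StringEncoding]
                  autorelease];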