Update: I've discovered that on many occasions the row representing the test case is duplicated two, three, four or even five times with exactly the same timestamp, and because I also capture the request in the table log I can prove that the request is identical in each case. The request test step also repeats in some cases with exactly the same timestamp (I can't prove these are identical, since the request is linked to the test case rather than the test step, but I suspect they are).
This leaves me in a mess when analysing the stats, as I have to go through each file deleting the duplicate rows before I can make sense of the data and produce meaningful graphs.
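For what it's worth, rather than deleting the duplicate rows by hand, a small script can do it in one pass. This is only a rough sketch, assuming each table log is exported to CSV and has columns named something like `timestamp` and `request` (those names are my assumption; adjust them to whatever the actual export uses):

```python
import pandas as pd

# Load one exported table-log file (assumed to be CSV; adjust the path/format as needed).
df = pd.read_csv("table_log.csv")

# Treat rows with the same timestamp and an identical request as duplicates,
# keeping only the first occurrence of each.
# The column names "timestamp" and "request" are assumptions about the export.
deduped = df.drop_duplicates(subset=["timestamp", "request"], keep="first")

# Write the cleaned data out for graphing.
deduped.to_csv("table_log_deduped.csv", index=False)
```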
What this is doing to the other table, I don't know. Do all of these duplicates get averaged into the results on the average table (thus skewing the figures), or does it generate its results from elsewhere?