Not only that, it writes a single memo's content to a single file. It's meant to work that way, as this kind of dump has no structure for storing many memo values and distinguishing between them. The simplest file format that could do that would be the fpt format itself, or something very similar, because it consists of little more than the concatenated memo values, each stored with its length. Within the dbf, all that's stored for a memo field is an offset from the beginning of the fpt file.
So if you want a more stable file format, that's what I'd recommend. To get all memo fields, you'd add a call for each memo field. To cover all records, it's sufficient to do this in insert and update triggers, as each such table operation will then create the file for that specific memo.
So if you have repeated and reproducible problems with fpt files, you may want to investigate your network hardware and performance. If it's a one-time case that makes you think about fpt files this way, I can't help you other than by saying that in my 9 years of experience with VFP since VFP6 I have never had a corrupt fpt file; I have had corrupt cdx and dbf files, but never an fpt. And even if some records are corrupt, you can read out the intact values from the fpt, since what's stored in such a file is not much more than the concatenation of the memo values.
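To illustrate why intact values can be salvaged even from a damaged fpt, here is a minimal Python sketch (not VFP code) that builds a tiny synthetic fpt image and then walks its blocks, extracting every text memo it finds. The layout used here (512-byte header with a big-endian block size at offset 6, and per-block big-endian type and length fields, type 1 meaning a text memo) follows the documented FoxPro memo file structure, but treat this as an approximation for recovery experiments, not a complete implementation:

```python
import struct

BLOCK_SIZE = 64  # the real block size is stored in the fpt header; 64 is a common default


def build_fpt(memos, block_size=BLOCK_SIZE):
    """Build a minimal synthetic fpt image: 512-byte header + one block chain per memo."""
    blocks = bytearray()
    for text in memos:
        data = text.encode("ascii")
        # Each memo starts with two big-endian uint32s: record type (1 = text) and data length.
        record = struct.pack(">II", 1, len(data)) + data
        # Memos always begin on a block boundary, so pad to a multiple of the block size.
        blocks += record + b"\x00" * ((-len(record)) % block_size)
    next_free = (512 + len(blocks)) // block_size
    # Header: next free block (uint32), 2 unused bytes, block size (uint16), padding to 512.
    header = struct.pack(">IHH", next_free, 0, block_size) + b"\x00" * 504
    return bytes(header + blocks)


def scan_fpt(image):
    """Walk the fpt from the first block and yield (block_number, text) for text memos."""
    block_size = struct.unpack(">H", image[6:8])[0]
    pos = 512  # data blocks start right after the fixed-size header
    while pos + 8 <= len(image):
        rec_type, length = struct.unpack(">II", image[pos:pos + 8])
        if rec_type == 1 and pos + 8 + length <= len(image):
            yield pos // block_size, image[pos + 8:pos + 8 + length].decode("ascii")
        # Skip past this record to the next block boundary.
        pos += 8 + length
        pos += (-pos) % block_size


image = build_fpt(["first memo", "a longer second memo value"])
for block_no, text in scan_fpt(image):
    print(block_no, repr(text))
```

A real recovery tool would also have to resynchronize on the next plausible block header when it hits a corrupt length field, but the scan above shows the basic idea: the dbf's stored block number times the block size is the byte offset of the memo in the fpt.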
Bye, Olaf.