Thanks for all these questions and ideas. What you propose is known as applying a 'language model' to OCR output in order to correct the output so that it conforms to that model. We do have a spellcheck program that uses Levenshtein distances and a large (3-million-entry) dictionary to correct some of the sorts of things you mention.
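To make the idea concrete, here is a minimal Python sketch of dictionary-based correction via Levenshtein distance. It illustrates the general technique only, not our actual spellcheck program; the names `levenshtein`, `correct`, and `dictionary` are all hypothetical, with `dictionary` standing in for the 3-million-form word list.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def correct(token: str, dictionary: set[str], max_dist: int = 2) -> str:
    """Return the nearest dictionary form within max_dist edits,
    or the token unchanged if nothing close enough is found."""
    if token in dictionary:
        return token
    best, best_d = token, max_dist + 1
    for word in dictionary:
        d = levenshtein(token, word)
        if d < best_d:
            best, best_d = word, d
    return best
```

With three million forms, a linear scan like this would be far too slow in practice; a real system would index the dictionary (with a BK-tree or a Levenshtein automaton, for instance) so that only nearby candidates are ever compared.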
It is much better, of course, if these errors are not merely corrected, but never happen in the first place. The best way to make sure of that is to give the OCR engine enough training. Considering that these results were generated with about 8 pages of input, some of these forms, especially unusual combinations of vowels and accents, may have appeared in the training data only once, or not at all. Or maybe they appeared in a slightly different way. A good number of your examples are missed spaces, for instance: in effect, the character 'space' was not recognized. Well, with your training data you've just supplied hundreds more instances of the character 'space', and by applying this training, the engine will get better at recognizing them. Not perfect, of course, and you might have had the experience I did, where I think: "you know, the computer might be right, there really isn't much of a space there!"
So we're at the stage now where we can get better results with training. But once that maxes out, we will dehyphenate the results (so that words are complete) and apply spellcheck.
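As a hedged sketch of what that dehyphenation step might look like (assuming plain OCR text lines and an ordinary '-' at line ends; real page data would also need coordinates carried along, and this is an illustration rather than our actual code):

```python
def dehyphenate(lines: list[str]) -> list[str]:
    """Rejoin words broken at line ends ("philoso-" / "phy")
    so that downstream spellcheck sees complete forms."""
    out = []
    carry = ""                       # word fragment awaiting its other half
    for line in lines:
        line = carry + line          # prepend any fragment from the previous line
        carry = ""
        if line.endswith("-"):
            parts = line[:-1].rsplit(" ", 1)
            if len(parts) == 2:
                line, carry = parts  # hold back the broken final word
            else:
                carry = line[:-1]    # the whole line is one broken word
                continue
        out.append(line)
    if carry:                        # fragment left dangling at page end
        out.append(carry)
    return out
```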
You also asked about column separation. The most important step in that is to discover and remove the inter-column letters that Migne uses for his citation scheme. With those in place, no column-finding algorithm is going to get things right; with them gone, most do. A colleague and I wrote a paper on that problem, a poster for which is here: http://heml.mta.ca/lace/datech14
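The poster describes the actual routine; purely as an illustration of the general shape of the idea, here is a sketch. It assumes per-glyph text and bounding boxes are already available (say, parsed from hOCR output), and that the horizontal extent of the gutter has been estimated elsewhere, e.g. from a vertical whitespace profile; `Glyph`, `gutter_x0`, and `gutter_x1` are all hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class Glyph:
    text: str
    x0: float    # bounding box: left, top, right, bottom
    y0: float
    x1: float
    y1: float

def split_gutter_letters(glyphs: list[Glyph],
                         gutter_x0: float,
                         gutter_x1: float) -> tuple[list[Glyph], list[Glyph]]:
    """Separate isolated single letters inside the gutter band
    (e.g. Migne's A, B, C, D citation markers) from the body text,
    keeping their coordinates so they can be re-introduced later."""
    body, citations = [], []
    for g in glyphs:
        center = (g.x0 + g.x1) / 2
        if len(g.text) == 1 and gutter_x0 <= center <= gutter_x1:
            citations.append(g)
        else:
            body.append(g)
    return body, citations
```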
Removing these letters while storing their coordinates allows us to re-introduce the citation scheme at a later time. It's not a completely solved issue, since Migne sometimes runs the remaining Greek text across the whole page, and that can confuse an OCR page layout analyzer; but at that point, we can hopefully split these bilingual texts by finding the dividing line between the Latin-heavy and Greek-heavy halves of the page.
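One hedged way to find that dividing line, sketched below under the assumption that we have each word's text and horizontal center: score each side of a candidate split by its share of Greek letters (via the Greek Unicode ranges) and pick the split that best separates a Greek-heavy side from a Latin-heavy one. The function names and the scanning step are illustrative, not how our pipeline actually does it.

```python
def greek_ratio(text: str) -> float:
    """Fraction of alphabetic characters that fall in the Greek blocks."""
    greek = other = 0
    for ch in text:
        if '\u0370' <= ch <= '\u03ff' or '\u1f00' <= ch <= '\u1fff':
            greek += 1       # Greek and Coptic / Greek Extended (polytonic)
        elif ch.isalpha():
            other += 1       # roughly: Latin and everything else alphabetic
    total = greek + other
    return greek / total if total else 0.0

def find_dividing_x(spans: list[tuple[str, float]],
                    page_width: int, step: int = 20) -> int:
    """spans: (text, x_center) pairs for each word on the page.
    Returns the x that best separates a Greek-heavy side from a
    Latin-heavy side, whichever order they occur in."""
    best_x, best_gap = 0, -1.0
    for x in range(step, page_width, step):
        left = "".join(t for t, cx in spans if cx < x)
        right = "".join(t for t, cx in spans if cx >= x)
        gap = abs(greek_ratio(left) - greek_ratio(right))
        if gap > best_gap:
            best_x, best_gap = x, gap
    return best_x
```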