Filtering OCR Result

I'm working on OCR, which I have working, but now I'm stuck on how to filter the OCR result so that each string ends up in the correct text field.

For Example, OCR Result :

Name : Jhon

No : 12345

Address : Canada

...but sometimes it assigns "Jhon" to the text field "Address", or "Jhon" to the text field "No".



Solution 1:[1]

For data which have a checksum incorporated (usually bank account numbers), you can validate the checksum, and if you really want very few false positives, you need a video stream input and keep running OCR for some time to accumulate several results. When most of the checksum-valid results are the same, that string is then very likely (99.5+%) correct.
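A minimal sketch of that idea in Python, assuming the number carries a Luhn-style checksum (as payment card numbers do; real bank account numbers typically use IBAN mod-97 or a domestic scheme, so swap in whatever check your data actually uses). The `luhn_valid` helper, the example readings, and the agreement threshold are illustrative assumptions, not part of the original answer:

```python
from collections import Counter

def luhn_valid(number: str) -> bool:
    # Luhn check, used here only as a stand-in for whatever checksum your data has.
    if not number.isdigit():
        return False
    total = 0
    for i, ch in enumerate(reversed(number)):
        digit = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

def best_checksummed_result(readings):
    # Keep only checksum-valid readings; trust the most frequent one if it dominates.
    valid = [r for r in readings if luhn_valid(r)]
    if not valid:
        return None
    value, count = Counter(valid).most_common(1)[0]
    return value if count >= max(3, len(valid) // 2) else None

# Several OCR passes over the same number region:
readings = ["79927398713", "79927398713", "79927398718", "79927398713"]
print(best_checksummed_result(readings))   # -> 79927398713
```

Only the filter-then-vote structure matters here; the checksum function itself is whatever your document type prescribes.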

Without a video stream and cumulative results you can probably get to 97-99.5% with checksummed data.

Without a checksum: well, you can't really tell.

For fields like "No" you can at least reject alphabetical results, and for "Name" you can penalize digits (although I think there are some obscure countries where a digit in a name is valid?). For "Address" you may give bonus confidence to results that mix letters and digits, plus keep a dictionary of all street/city strings, but in the end there's no way to say which result is more correct than another.
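To make those heuristics concrete, here is a rough Python sketch that scores how plausible a string is for each field and then assigns strings greedily. The field names match the question's example; the weights and the `KNOWN_PLACES` dictionary are made-up illustrations of the bonus/penalty idea, not a definitive implementation:

```python
import re

# Hypothetical dictionary of known street/city strings; in practice load it from a file.
KNOWN_PLACES = {"canada", "toronto", "main street"}

def score_for_field(field: str, text: str) -> float:
    # Rough per-field plausibility score in [0, 1]; the weights are arbitrary.
    has_digit = any(c.isdigit() for c in text)
    has_alpha = any(c.isalpha() for c in text)

    if field == "No":
        # Reject anything that is not purely numeric.
        return 1.0 if re.fullmatch(r"\d+", text) else 0.0
    if field == "Name":
        score = 1.0 if has_alpha else 0.0
        if has_digit:                        # digits in a name are suspicious
            score -= 0.5
        return max(score, 0.0)
    if field == "Address":
        score = 0.5
        if has_alpha and has_digit:          # letter+digit mix is a good sign
            score += 0.3
        if text.lower() in KNOWN_PLACES:     # dictionary hit gives a bonus
            score += 0.2
        return min(score, 1.0)
    return 0.5                               # unknown field: neutral

def assign_fields(strings, fields=("Name", "No", "Address")):
    # Greedy assignment: each field takes the highest-scoring remaining string.
    remaining = list(strings)
    result = {}
    for field in fields:
        if not remaining:
            break
        best = max(remaining, key=lambda s: score_for_field(field, s))
        result[field] = best
        remaining.remove(best)
    return result

print(assign_fields(["Jhon", "12345", "Canada"]))
# -> {'Name': 'Jhon', 'No': '12345', 'Address': 'Canada'}
```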

Again, having a video stream input and accumulating several results over a longer period of time (1-5 s) may give you enough samples to run some statistics on them; then, if the same part of the result appears in the OCR output often enough to pass some large threshold, you can consider it "correct".
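A sketch of such an accumulator, assuming each video frame has already been OCR'd and split into fields; the `min_frames` and `min_share` thresholds are arbitrary placeholders for the "large enough threshold" mentioned above:

```python
from collections import Counter, defaultdict

class FieldAccumulator:
    # Accumulates OCR readings per field across frames and reports a value
    # only once one reading clearly dominates.
    def __init__(self, min_frames=10, min_share=0.6):
        self.min_frames = min_frames
        self.min_share = min_share
        self.readings = defaultdict(Counter)

    def add_frame(self, parsed_fields):
        # parsed_fields: dict like {'Name': 'Jhon', 'No': '12345'} from one frame.
        for field, value in parsed_fields.items():
            self.readings[field][value] += 1

    def stable_value(self, field):
        # Return the dominant reading for the field, or None if not confident yet.
        counts = self.readings[field]
        total = sum(counts.values())
        if total < self.min_frames:
            return None
        value, count = counts.most_common(1)[0]
        return value if count / total >= self.min_share else None

acc = FieldAccumulator()
for frame in [{"Name": "Jhon"}, {"Name": "Jhon"}, {"Name": "Jh0n"}] * 4:
    acc.add_frame(frame)
print(acc.stable_value("Name"))   # -> Jhon (8 of 12 frames agree)
```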

Even then, the reliability of such strings will probably be under 98%, more toward 90-95%; for generic text without any hint (digit/letter/size/position) you can get even into the 50-80% reliability range for the whole string, as the OCR itself is only about 95-98% accurate per single character.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
[1] Solution 1: Ped7g