Does the "Key Term Mismatch" QA check support double-byte languages as the source language?

Hi,

I have a translation file with Chinese Traditional as the source language and English as the target. After translation I ran Xbench QA, but the “Key Term Mismatch” check does not seem to detect mismatched terms. I suspect this is because Xbench QA does not yet support Chinese or other Asian double-byte languages as the source language, but I would like to raise the question here for a firm confirmation. (I checked the Xbench User Guide, but it does not mention this.)

Thank you.

Are you using a recent Xbench 3.0 build? If so, it should work: when the source term starts or ends with a CJK character, the algorithm is slightly different. If it still does not work, could you please provide an example so that we can check?

I have a similar problem with Key Term Mismatch checks when the source language is Japanese. I’m using the most recent build (3.0, Build 1410, 32-bit).

Here’s an example.

Key Terms file: Trados MultiTerm Glossary (.sdltb, Japanese/English)
Term: 章 / chapter

The check returns only 1 result, and this is the relevant part of the source message:
…g>章(CAD元図…

But in fact, when another check is run using a tab-delimited text file as the Key Terms file, I get many more mismatch results. Here are the source messages for some of the results that were wrongly omitted from the first check:
…要領は4.1.2章(部品図…
…3.1.2章(図面…
…の文章</g…

Apparently, when a Trados glossary is used as the Key Terms file, Xbench only detects cases where the Japanese term is preceded and followed by non-alphanumeric glyphs. In the example above, 章 is preceded by > and followed by (. But when 章 is preceded by 2 or 文, Xbench doesn’t recognize the Japanese term.

This boundary rule (presumably designed to limit the number of false positives in English and other languages that separate words with spaces) shouldn’t apply to Japanese, which has no word delimiters. Can you check whether this is the cause?
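To make the hypothesis concrete, here is a minimal sketch (an illustration only, not Xbench’s actual code) contrasting a boundary-delimited match, which appears to be what the .sdltb check does, with a plain substring match, which is what Japanese needs. The character ranges used for the “word character” class are an assumption for illustration:

```python
import re

# Assumed "word character" class: ASCII alphanumerics plus kana and kanji.
# This is a guess at the boundary rule, not Xbench's real implementation.
WORD_CLASS = r'[0-9A-Za-z\u3040-\u30ff\u4e00-\u9fff]'

def boundary_match_count(term: str, text: str) -> int:
    """Count occurrences of term only when it is NOT adjacent to a
    word character on either side (the suspected .sdltb behavior)."""
    pattern = (r'(?<!' + WORD_CLASS + r')' + re.escape(term) +
               r'(?!' + WORD_CLASS + r')')
    return len(re.findall(pattern, text))

def substring_match_count(term: str, text: str) -> int:
    """Plain substring count, which is the appropriate behavior for
    Japanese source text, since it has no spaces between words."""
    return text.count(term)
```

With the segments from my report, boundary matching finds 章 in “…g>章(CAD元図…” (bounded by > and () but misses “…要領は4.1.2章(部品図…” (preceded by 2) and “…の文章</g…” (preceded by 文), while the substring count finds all three.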

@ysim, thank you for your issue report. We’ve been able to reproduce this difference in behavior between .txt and .sdltb Key Terms files and have created a task in our database so that it is fixed in the next update.