Handling Inconsistent QA Check Results in Xbench

Hello

I have been experiencing inconsistent QA check results in Xbench: the same set of files sometimes triggers errors and warnings, while at other times it passes without issues. :upside_down_face:

This happens even when no changes have been made to the project settings or translation files. Some errors appear to be skipped randomly, making it difficult to ensure reliable quality control. :innocent:

I’ve tried reloading the files, clearing the cache, and even reinstalling Xbench, but the issue persists. :thinking:

It seems like the problem might be related to how Xbench processes large files or certain language pairs, but I’m not entirely sure. Could there be a setting or workflow adjustment that ensures consistency in QA checks? :thinking: I checked the "Working with QA Features" section of the ApSIC Xbench Documentation related to this and found it quite informative.

Has anyone else encountered similar behavior? What troubleshooting steps or best practices have you found to keep Xbench’s QA results stable and reliable? Any insights would be greatly appreciated!

Thank you! :slightly_smiling_face:

If you check the same set of files with the same list of QA checks, the same options, and the same issue filters, and you still get inconsistent QA results, contact support so that they can investigate the issue.

When contacting them, specify which steps you follow and which QA options are selected, and attach the files along with a pair of screenshots showing the different QA results.