Quality assurance steps for technical translation compliance

A single mistranslated dosage instruction, a flipped negation in a safety clause, or an ambiguous regulatory term can trigger product recalls, failed audits, or legal liability that dwarfs the cost of the translation project itself. For compliance and regulatory managers in life sciences, legal, defense, and finance, this is not a theoretical risk. ISO 17100 mandates a structured QA process covering qualified translators, independent revision, project management, and post-production verification, because the standard’s authors understood what happens when that structure is absent. This article walks you through every step you need.
Key Takeaways
| Point | Details |
| --- | --- |
| ISO 17100 is essential | Following ISO 17100 ensures QA procedures are audit-ready and compliant with regulatory requirements. |
| Independent revision required | Revision by a second qualified linguist is not optional; it is key for compliance. |
| Address AI and edge cases | Modern QA must account for technology risks and regulatory-specific edge cases. |
| Choose the right metrics | Use MQM for compliance-critical projects and DQF for routine fluency checks. |
Understand regulatory frameworks and ISO 17100 requirements
Before you can build a defensible QA process, you need to understand what the governing standards actually require and why those requirements exist in the first place.
ISO 17100 translation standards set the baseline for virtually every regulated sector that commissions technical translation. The standard defines competency requirements for translators, mandates that every translation undergo independent revision by a second qualified linguist, and specifies project management obligations that create an auditable trail. That auditable trail matters enormously. When a regulatory body or a court asks for evidence that your translated labeling or contract was accurately rendered, ISO 17100 certification gives you a documented answer.
As TÜV SÜD notes, ISO 17100 provides the gold standard for verifiable QA in regulated technical translation, specifically because it emphasizes independent revision over single-translator workflows. This is not a bureaucratic preference. It reflects the well-documented finding that a translator reviewing their own work misses error categories that a fresh pair of qualified eyes catches routinely.
The compliance reality: Regulated industries do not treat translation as a communication service. They treat it as a controlled document process. QA failure is not a quality issue; it is a regulatory event.
Key regulatory obligations vary by sector, but they share common structural demands:
Life sciences and medical devices: EU MDR Article 10 requires accurate translation of instructions for use and labeling into all relevant member state languages. Errors are grounds for market withdrawal.
Legal and financial: Contract ambiguity caused by translation errors can invalidate agreements or create enforceable obligations that were never intended.
Defense: NATO AQAP 2110 and related standards require documented quality management across all technical documentation, including translated materials.
Cross-border pharmaceutical: HIPAA and GDPR both create additional data handling obligations on top of accuracy requirements, meaning the translation process itself must be compliant, not just the output.
Organizations that treat translation as a commodity procurement decision typically discover the consequences during an audit or a regulatory inspection, not before.
Step-by-step QA process for technical translation
With the regulatory landscape defined, it’s time to detail each concrete step that should form your QA workflow. This is not a generic checklist. Each step has a specific function and failure mode.
Initial scoping and requirements analysis. Define the regulatory regime, the target audience’s technical level, the document type, and any sector-specific terminology obligations. Skipping this step means your linguist team is making assumptions that should be documented decisions. Confirm which quality standard applies (ISO 17100, ISO 18587 for machine translation post-editing, or sector-specific frameworks like MDR).
Selection of qualified linguists with subject-matter expertise. A general translator working on medical device instructions for use is a liability, not a resource. Your translation partner must assign linguists with verifiable domain credentials. This means engineers for technical manuals, legal scholars for regulatory submissions, and medical professionals for clinical documentation.
Asset integration: Translation Memories and Term Bases. Before a single sentence is translated, ingest client Translation Memories (TM) and Term Bases (TB). This step enforces terminology consistency across the entire document, prevents drift between sections, and ensures that previously approved translations are not contradicted. For regulated content, this is the foundation of terminology governance.
First-pass translation with terminology enforcement. Whether human-led or AI-assisted, the first-pass output must be constrained by the approved terminology framework from step three. Unconstrained translation, especially from general-purpose neural machine translation engines, produces inconsistent term usage that creates compliance risk at the review stage.
Mandatory independent revision. A second qualified linguist who was not involved in the first-pass translation reviews the output for technical accuracy, regulatory compliance, contextual nuance, and terminology adherence. This is required by ISO 17100 and is the single most effective intervention in the QA process. The revision step is documented and becomes part of the audit trail.
Post-production QA: formatting, tags, and sensitive entity preservation. Numbers, dates, product codes, regulatory references, and chemical formulas must be verified after formatting is applied. Layout changes can corrupt numerical data or create ambiguous line breaks in safety-critical text. Tag verification prevents localization markup errors that can break software interfaces or regulated digital content.
Final verification and regulatory compliance audit. A project manager with compliance awareness reviews the completed file against the original scoping requirements, confirms that all QA steps are documented, and signs off the delivery package. This sign-off is the evidence your auditors will request.
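Parts of the post-production entity check can be automated before the human sign-off. The sketch below compares numbers and product codes between source and target segments; the regex patterns and the `check_entities` helper are illustrative assumptions, not a production validator, and real pipelines need locale-aware number and date handling.

```python
import re

# Patterns for entities that must survive translation unchanged.
# These are illustrative; real validators need locale-aware handling.
ENTITY_PATTERNS = {
    "number": re.compile(r"\d+(?:[.,]\d+)?"),
    "code": re.compile(r"\b[A-Z]{2,}-\d+\b"),  # e.g. "REF-1042"
}

def check_entities(source: str, target: str) -> dict:
    """Report entities that appear in the source segment but are
    missing (or less frequent) in the target, per category."""
    report = {}
    for name, pattern in ENTITY_PATTERNS.items():
        src = pattern.findall(source)
        tgt = pattern.findall(target)
        missing = [e for e in src if src.count(e) > tgt.count(e)]
        if missing:
            report[name] = missing
    return report

# A dropped decimal point in a dosage is flagged for human review:
report = check_entities(
    "Administer 2.5 mg twice daily. See REF-1042.",
    "Administrer 25 mg deux fois par jour. Voir REF-1042.",
)
```

A non-empty report does not prove an error; it routes the segment to a human reviewer, which is the point of the post-production stage.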
Pro Tip: Build your QA workflow around the audit documentation first. Every step should generate a record. If your current process cannot produce a revision history, a terminology report, and a sign-off log on demand, it is not audit-ready regardless of how good the translators are.
Here is a comparison of translation workflow approaches and their QA suitability for regulated content:
| Approach | Terminology control | Independent revision | Audit trail | Regulatory suitability |
| --- | --- | --- | --- | --- |
| Legacy MT (machine translation) | None | Not built in | None | Very low |
| Public NMT (SaaS engines) | Limited or manual | Not built in | Variable | Low to medium |
| Human-only traditional | Manual | Optional | Paper-based | Medium to high |
| AI+HUMAN hybrid (ISO-aligned) | Enforced via TM/TB | Mandatory SME review | Full digital trail | High |
For regulated sectors, the bottom row describes the minimum viable standard. Anything below it introduces audit exposure.

Use the QA checklist for compliance and the technical translation step-by-step resources to map your current process against each required stage.
Address technical translation edge cases and AI-specific risks
Alongside standard procedures, it’s crucial to address tricky edge cases and the unique risks that emerge when advanced technology is involved. These are the failure modes that standard QA checklists often underweight.
Text expansion and contraction is a structural risk that many compliance managers overlook until it surfaces in a printed regulatory submission or a software interface. German technical text typically expands 25 to 35 percent relative to English source. Spanish expands 15 to 25 percent. Japanese can contract significantly. If your QA process does not include a layout verification step, expanded text overflows fields, truncates safety warnings, or wraps in ways that alter the reading order of a numbered procedure.
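Because expansion ratios are roughly predictable per language pair, you can flag outliers automatically before layout review. A minimal sketch, assuming the bounds below (which echo the ranges cited above but are illustrative and should be tuned per layout and font):

```python
# Rough length-ratio bounds per target language; illustrative values.
EXPANSION_BOUNDS = {
    "de": (1.0, 1.35),  # German: up to ~35% expansion
    "es": (1.0, 1.25),  # Spanish: up to ~25% expansion
    "ja": (0.5, 1.0),   # Japanese: often contracts
}

def flag_expansion(source: str, target: str, lang: str) -> bool:
    """Return True when the target/source length ratio falls outside
    the expected range, signalling that layout verification is needed."""
    low, high = EXPANSION_BOUNDS.get(lang, (0.5, 1.5))
    ratio = len(target) / max(len(source), 1)
    return not (low <= ratio <= high)
```

A flagged segment is not necessarily wrong; it simply goes to the layout rendering check rather than skipping it.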
Special characters and script-specific formatting create compliance issues that are invisible in plain text review but break rendered documents. Arabic and Hebrew require right-to-left rendering checks. Chinese and Japanese require character encoding verification. In regulated pharmaceutical labeling, a corrupted special character in a chemical name or a measurement unit is not a typo; it is a potential mislabeling event.
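Two cheap automated checks catch most of these failures before rendered-document review: a round-trip test against the delivery encoding, and a scan for corruption markers. The helper names and mojibake heuristics below are illustrative assumptions:

```python
def survives_encoding(text: str, encoding: str) -> bool:
    """Check whether text can be represented in the given encoding
    without loss (e.g. the micro sign in 'µg' under a legacy codepage)."""
    try:
        return text.encode(encoding).decode(encoding) == text
    except UnicodeEncodeError:
        return False

def encoding_issues(text: str) -> list:
    """Detect characters that commonly indicate a corrupted segment."""
    issues = []
    if "\ufffd" in text:  # Unicode replacement character
        issues.append("replacement character present")
    if "Ã" in text or "â€" in text:  # common UTF-8-as-Latin-1 mojibake
        issues.append("possible mojibake")
    return issues
```

Neither check replaces a rendered right-to-left or CJK review, but both are fast enough to run on every segment of every delivery.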
Cultural adaptation and regulatory localization require human judgment that no AI system handles reliably. A dosage unit that is standard in the US may be unfamiliar in a target market. A legal term of art in one jurisdiction may have a different operative meaning in another. These are not translation errors in the traditional sense; they are localization decisions that require a subject-matter expert with regulatory knowledge of the target market.
AI hallucination and lineage tracking represent the most significant new risk category in technical translation QA. As translation quality research confirms, AI-assisted translation introduces the risk of hallucinated content, fabricated regulatory references, and plausible-sounding but incorrect technical specifications. Standard fluency-based QA checks do not catch these errors because hallucinated content reads naturally. Semantic verification, which checks meaning against source, and lineage tracking, which traces every segment back to its source and revision history, are the required controls.
| Risk category | Detection method | QA stage |
| --- | --- | --- |
| Text expansion/layout corruption | Layout rendering check | Post-production |
| Special character errors | Encoding and script review | Post-production |
| Hallucinated content (AI) | Semantic verification vs. source | Revision stage |
| Terminology inconsistency | TM/TB compliance check | First-pass and revision |
| Entity errors (numbers, dates) | Structured entity check | Post-production |
| Regulatory localization gaps | SME regulatory review | Revision stage |
Pro Tip: When working with AI-assisted translation in regulated contexts, request a segment-level lineage report from your translation partner. This document maps every translated segment to its source, the AI output, and the human revision. It is your primary defense in a content accuracy challenge.
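The shape of such a lineage record is simple. A minimal sketch of one row of a segment-level report; the field names are illustrative, and real vendor reports vary:

```python
from dataclasses import dataclass, field

@dataclass
class SegmentLineage:
    """One row of a segment-level lineage report: source text, raw AI
    output, the human revision, and who performed it."""
    segment_id: str
    source: str
    ai_output: str
    human_revision: str
    reviser: str
    notes: list = field(default_factory=list)

    @property
    def revised(self) -> bool:
        """True if the human reviser changed the AI output."""
        return self.ai_output != self.human_revision

seg = SegmentLineage(
    segment_id="IFU-0042",
    source="Do not exceed the stated dose.",
    ai_output="Die angegebene Dosis nicht überschreiten.",
    human_revision="Die angegebene Dosis darf nicht überschritten werden.",
    reviser="Reviewer A",
)
```

Aggregating the `revised` flag across a project also gives you a quantitative view of how much human correction the AI output required.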
For a detailed breakdown of how to manage these risks in an AI-assisted workflow, the AI-human translation compliance guide covers the control architecture in practical terms. For sector-specific legal translation considerations, this overview is also worth reviewing.
Measure quality with robust metrics: MQM vs DQF
Equipped with process steps and edge-case controls, the next stage is systematic quality measurement. Without quantified metrics, QA is opinion, not evidence. Two frameworks dominate the field: MQM and DQF.
MQM (Multidimensional Quality Metrics) provides a granular, hierarchical error taxonomy. It categorizes errors by type (accuracy, fluency, terminology, locale convention, style) and by severity (minor, major, critical). For compliance-critical audits, MQM’s granularity is essential. You can demonstrate to a regulator not only that errors were found and corrected, but exactly what categories of errors were present, at what frequency, and at what severity level. That level of documentation supports continuous improvement programs and justifies process investments to internal stakeholders.
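The arithmetic behind an MQM-style score is straightforward: weight each error by severity, normalize by document length. The weights and the per-100-words normalization below are illustrative; actual MQM implementations vary and thresholds should be agreed per project:

```python
# Severity weights in the spirit of MQM scoring; illustrative values.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def mqm_score(errors: list, word_count: int) -> float:
    """Compute a normalized quality score from (category, severity)
    error records: 100 minus penalty points per 100 words."""
    penalty = sum(SEVERITY_WEIGHTS[sev] for _, sev in errors)
    return 100.0 - (penalty / word_count) * 100

errors = [("terminology", "minor"), ("accuracy", "major")]
score = mqm_score(errors, word_count=1000)  # 6 points over 1000 words
```

Because errors carry their category, the same records feed both the score and the category-level trend reports that regulators can evaluate.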
DQF (Dynamic Quality Framework) operates on simpler rating scales and is designed for faster, less granular assessment. It is well-suited to high-volume, lower-risk content where fluency and adequacy checks are sufficient. For internal communications, marketing materials, or non-regulated documentation, DQF delivers quality information efficiently without the overhead of full MQM scoring.
The practical combination: Use MQM for all regulated, safety-critical, and legally binding content. Apply DQF as a rapid screening tool for supporting documentation that feeds into a larger regulated workflow. The two frameworks complement each other when applied to the right content categories.
As translation quality metric analysis confirms, MQM’s detailed error taxonomy outperforms DQF’s simpler scales when precision is the requirement. DQF serves quick fluency checks where throughput matters more than audit depth.
Key considerations when implementing quality metrics in a regulated workflow:
Define severity thresholds before translation begins. A “critical” error in a medical device instruction should trigger immediate escalation and full re-review.
Store MQM scoring records as part of the project file. These records are the quantitative evidence your audit trail needs.
Review metric trends across projects. Rising error rates in a specific category, such as terminology inconsistency, signal a systemic process gap rather than an isolated failure.
Align your quality thresholds with the applicable regulatory standard. ISO 17100 and ISO 18587 both reference quality targets; your internal metrics should reflect the standard you claim conformance with.
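The first consideration above, pre-defined severity thresholds, can be encoded directly so escalation is automatic rather than discretionary. A sketch under an assumed policy (the content classes and thresholds are illustrative):

```python
def requires_escalation(errors: list, content_class: str) -> bool:
    """Apply pre-agreed severity thresholds to (category, severity)
    error records. Policy here is illustrative: any critical error in
    regulated content triggers escalation and full re-review."""
    critical = sum(1 for _, sev in errors if sev == "critical")
    if content_class == "regulated":
        return critical > 0
    # Non-regulated supporting content tolerates a small number.
    return critical > 2
```

Encoding the policy also means every escalation decision is reproducible from the stored scoring records, which is exactly what the audit trail requires.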
The translation quality standards list provides a practical reference for mapping metric frameworks to sector-specific compliance obligations.
Why single-translator QA falls short: Lessons from compliance audits
Here is something that comes up repeatedly when organizations review their translation-related compliance failures: the process looked adequate on paper. There was a qualified translator. There was a review step. The document was delivered on time. The failure was not in the credentials of the individual involved. It was in the structure of the review itself.
Single-translator workflows, even with self-review steps, have a well-documented limitation. A translator who has spent hours producing a document develops strong expectations about what the text says. Their cognitive model of the document overrides what is actually on the page. They read what they intended to write. This is not a skill gap; it is a human cognitive constant. ISO 17100’s requirement for independent revision by a second qualified linguist exists precisely because the standard’s authors understood this limitation.
What the second reviewer catches is not primarily gross errors. A competent translator does not produce gross errors at high frequency. What independent revision surfaces is the subtler error category: the negation that was dropped in a complex conditional clause, the numerical value that was transposed between two similar parameters, the regulatory term that was rendered correctly in isolation but inconsistently across the document. These are the errors that fail audits and generate regulatory findings.
The second lesson from compliance audits is that QA built in as a project phase, with its own documented workflow, its own sign-off requirements, and its own deliverable, is categorically different from QA treated as a final check before delivery. When QA is an afterthought, it compresses under schedule pressure. When it is a defined phase with scope and accountability, it holds. The step-by-step QA framework we use treats each QA stage as a project milestone with its own completion criteria, not as a variable time buffer at the end.
The third lesson is about evidence. Organizations that have invested in structured QA but failed to document it are in nearly the same position as organizations that skipped QA entirely when a regulator asks for proof. Documentation is not the overhead of QA. Documentation is QA’s regulatory output.
Ready for audit-proof technical translation?
If the steps above describe gaps in your current translation QA process, you are not alone. Most compliance managers inherit workflows that were built for speed and cost efficiency, not regulatory scrutiny. Closing those gaps requires both the right process architecture and the right partner.

AD VERBUM’s AI+HUMAN hybrid translation approach was designed specifically for the requirements described in this article. The workflow begins with asset integration using your existing Translation Memories and Term Bases, moves through proprietary LLM-based generation with terminology enforcement, and then applies certified subject-matter expert review before ISO 17100 and ISO 18587 aligned QA sign-off. Every step is documented and traceable. Data is processed on private, EU-hosted infrastructure with ISO 27001 certification, supporting both GDPR and HIPAA obligations. Explore the full localization solutions portfolio and review the quality assurance features to see how a structured, audit-ready process works in practice.
Frequently asked questions
What is ISO 17100 and why is it critical for technical translation QA?
ISO 17100 is the international standard for translation service providers, requiring rigorous steps including independent revision, qualified linguist selection, and project management documentation that together produce an auditable compliance record.
How do you detect errors unique to AI-assisted translation?
AI-assisted translation requires dedicated semantic verification checks to detect hallucinated content, plus lineage tracking that maps every translated segment back to its source to confirm that no fabricated regulatory or technical information has been introduced.
What makes MQM better than DQF for regulated sectors?
MQM’s granular error taxonomy enables category-level audit evidence, severity classification, and trend analysis that regulators can evaluate directly, whereas DQF’s simpler fluency scales do not produce that level of documented detail.
Why is revision by a second linguist mandatory in technical translation QA?
Independent revision by a second qualified linguist is required by ISO 17100 because self-review consistently misses subtle errors such as dropped negations and inconsistent terminology, and the revision record itself serves as primary compliance documentation.