📌 Key Takeaways
Lock specifications only after you’ve reviewed verifiable evidence—not vendor promises.
- Evidence Before Promises: Request lot-specific Certificates of Analysis (COA) with named ISO/TAPPI test methods, measurement uncertainty values, and lot traceability before finalizing any specification or signing purchase orders.
- Machine Conditions Drive Runnability: Moisture conditioning protocols, reel/core fit, splice types, and tension windows determine whether on-spec material runs cleanly on your converting equipment—document these upfront to prevent line stoppages.
- Variation Bands Reveal Reality: A vendor’s typical production spread (not just contractual tolerance) tells you whether their process consistently holds target values or frequently drifts to specification limits.
- Acceptance Criteria Need Structure: Tier your specifications by impact (critical/major/cosmetic), map AQL levels to each tier per ISO 2859-1, and define triage protocols before material arrives to eliminate acceptance debates.
- Evidence Quality Predicts Performance: Use the 10-point pre-award rubric to score vendor evidence packages—require 8/10 minimum before proceeding to pilot trials, ensuring lab accreditation scope covers your specified test methods.
Verifiable evidence transforms procurement from promise-based negotiations into data-driven decisions.
This framework is written for procurement professionals and quality managers at packaging converters, and it lays the groundwork for the detailed evidence collection and vendor qualification processes that follow.
TL;DR:
- Request test-method-named COAs with uncertainty values before finalizing any specification
- Machine conditions (moisture conditioning, reel fit, tension windows) directly affect runnability—document them upfront
- Variation bands with typical ranges prevent post-award disputes and enable realistic acceptance sampling
At a Glance: Vendor Evidence Checklist
Before you lock specifications and award a contract, request these critical data points:
- Certificate of Analysis (COA) with named test methods (ISO 536, ISO 287, TAPPI T 403) and measurement uncertainty
- Lot/date traceability linking the COA to specific production runs
- Moisture window with conditioning protocol (typically 6.5-8.5%, ISO 287)
- GSM tolerance bands (±1-2% for most grades)
- Cobb60 range appropriate to your application (20-40 g/m² for most packaging grades)
- Burst, Tensile, and SCT/RCT/ECT values with method identifiers
- Machine moisture conditioning requirements and typical storage time
- Reel/core compatibility, splice type, and tension window specifications
- Sampling plan with AQL levels mapped to your critical specifications
- Lab accreditation status (ISO/IEC 17025) and calibration certificates
What Vendor Evidence Means (and Why It Precedes Spec Lock-In)
Picture this: A procurement manager sits across from a mill representative. The quote looks competitive. The technical datasheet lists impressive numbers—180 g/m², 3.2 kPa burst, excellent printability. Everything checks out on paper.
Three months later, the first shipment arrives. The incoming inspection team measures moisture at 9.2%. The corrugator reports waviness issues within the first hour. Production stops. The vendor insists the material meets “industry standards.” The buyer points to the datasheet. Neither party has verifiable evidence tied to the specific lot delivered.
This scenario repeats across the industry because specifications get locked before evidence gets collected. Vendor evidence isn’t about mistrust—it’s about creating a shared foundation of measurable, reproducible data that both parties can verify. When you request evidence before finalizing specs, you’re essentially asking: “Can you prove this mill consistently holds these tolerances using these specific test methods?”
The “micrometer on money” principle applies here. Small variations in moisture content (±0.5-1.0%) can shift material behavior enough to cause line stoppages. A 2% difference in basis weight affects material cost, box strength, and freight efficiency. Without method-named evidence, you’re comparing apples to oranges—one vendor’s “burst strength” might be tested per ISO 2758 while another uses TAPPI T 403, and the results won’t align perfectly.
Evidence before promises means you document what’s typical, what’s achievable, and what’s already been proven in production before you commit commercial terms.
COAs: The Minimum Viable Evidence
A Certificate of Analysis serves as the technical passport for a material lot. But not all COAs provide equal clarity. The difference between a useful COA and a generic datasheet comes down to specificity.
What Makes a COA Verifiable

A robust COA includes six essential elements that transform it from a compliance document into decision-support evidence:
Test method identification with version year. Instead of “Basis Weight: 180 g/m²,” a verifiable COA states “Basis Weight: 180.3 g/m² (ISO 536:2019)” or “Burst: 340 kPa (TAPPI T 403 om-15).” The method name ensures you’re comparing like-for-like measurements across vendors. ISO and TAPPI methods differ slightly in conditioning, sample prep, and calculation—small differences that matter when you’re arbitrating acceptance disputes. TAPPI methods are published and available through the ANSI webstore.
Measurement uncertainty. Every lab measurement carries inherent variability. A complete COA reports this as expanded uncertainty with a coverage factor, following internationally recognized principles for evaluating measurement uncertainty. For instance: “Moisture: 7.2%, U = ±0.3%, k = 2 (95% confidence, ISO 287).” This uncertainty band tells you whether a 7.5% reading at your dock represents normal variation or a genuine drift from specification. Without uncertainty data, you can’t distinguish measurement noise from real quality shifts.
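To make the uncertainty logic concrete, here is a minimal sketch of how an expanded uncertainty turns a point reading into an interval, and how to judge whether a dock reading proves a drift from specification. The function name and numbers are illustrative, following the article’s moisture example:

```python
def conclusively_out_of_spec(value, expanded_u, lo, hi):
    """True only if the entire uncertainty interval (value ± U, k = 2, ~95% confidence)
    lies outside the acceptance window [lo, hi]."""
    return value + expanded_u < lo or value - expanded_u > hi

# A dock reading of 7.5% with U = ±0.3% against a 6.5-8.5% moisture window:
print(conclusively_out_of_spec(7.5, 0.3, 6.5, 8.5))  # False: overlap, not provable drift
print(conclusively_out_of_spec(9.2, 0.3, 6.5, 8.5))  # True: out of spec even at worst case
```

Readings whose uncertainty interval straddles a limit are exactly the cases that devolve into disputes when the COA omits U.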
Lot and date traceability. Generic datasheets describe what a mill can produce. Lot-specific COAs prove what was produced for your order. The COA should reference the production date, reel numbers or pallet IDs, and the specific jumbo rolls in your shipment. This linkage is critical if you need to trace a quality issue back to mill conditions or if you’re running acceptance sampling plans tied to production batches.
Sampling protocol disclosure. How many reels did the lab sample? From which positions in the production run? A COA based on one sample from the first reel of a 40-ton lot provides far less confidence than one based on systematic sampling per ISO 186. Knowing the sampling approach helps you assess whether the reported values represent the entire lot or just a best-case snapshot.
Lab accreditation confirmation. ISO/IEC 17025 accreditation demonstrates that a testing lab follows internationally recognized quality procedures, maintains calibrated equipment, and participates in proficiency testing. While not every mill lab carries formal accreditation, knowing the lab’s quality framework helps you weigh the evidence. If disputes escalate, accredited lab results carry more authority in arbitration. Verify that the accreditation scope explicitly covers the test methods named in your specification.
Signature and contact information. A COA should identify who authorized the test results—typically a lab manager or quality supervisor—and provide contact details for clarification. This isn’t just formality; it establishes accountability and gives you a clear path if questions arise during incoming inspection.
When you interpret lab reports during vendor evaluation, focus on these six elements. If a vendor provides only a generic datasheet, request lot-specific documentation before finalizing your specifications.
| COA Field | Why It Matters | What to Verify |
| --- | --- | --- |
| Test method with ID | Ensures comparable measurements across vendors | Check that method matches your spec (ISO vs. TAPPI) and includes version year |
| Measurement uncertainty | Distinguishes real variation from measurement noise | Confirm expanded uncertainty (U) with coverage factor k = 2 at 95% confidence |
| Lot/date traceability | Links results to specific production batches | Verify reel numbers or production date appear on both COA and shipping docs |
| Sampling protocol | Establishes how representative the data is | Ask how many samples per lot and from which reel positions |
| Lab accreditation | Validates testing quality and calibration | Confirm ISO/IEC 17025 status and verify scope covers your specified methods |
| Authorized signature | Creates accountability for reported values | Note contact details for follow-up questions |
The distinction between a datasheet and a lot-specific COA becomes crucial during acceptance inspection. A datasheet tells you what the mill targets; a COA tells you what your specific shipment delivered. When assembling your vendor qualification package, link both types of documentation as outlined in the ‘Passport’ evidence pack approach.
Machine-Condition Disclosures That Change Runnability
Basis weight and burst strength get most of the attention during specification discussions. But even perfectly on-spec material can fail at the corrugator if machine-condition variables aren’t controlled. These factors govern how paper behaves during handling, unwinding, and converting—not just its static properties on a test bench.
Moisture Conditioning and Storage Requirements

Paper is hygroscopic. It constantly exchanges moisture with its environment until it reaches equilibrium. A reel that left the mill at 7.0% moisture might arrive at your dock at 8.5% after crossing humid coastal regions, or drop to 6.0% in a dry winter climate. This drift creates real problems.
High moisture content (above 8.5% for most kraft grades) increases the risk of core crushing, telescoping during transit, and waviness during unwinding. Low moisture (below 6.5%) makes paper brittle, increases static, and can lead to edge cracking during high-speed converting. The problem isn’t just the absolute value—it’s the moisture gradient across the reel width (cross-direction profile) and through the reel’s depth.
Smart vendors specify conditioning protocols upfront. A typical requirement might state: “Condition reels at 23°C ±3°C and 50% ±5% RH for a minimum 48 hours before unwinding. Target equilibrium moisture: 7.0-7.5% per ISO 287.” This guidance prevents you from running material straight off the truck into a climate-controlled converting area where rapid moisture exchange will cause dimensional instability.
Ask vendors about their recommended acclimatization buffer. Some mills suggest 24 hours; others insist on 72 hours for thick calipers or extreme climate deltas. This isn’t negotiable flexibility—it reflects real physics about moisture diffusion rates through paper. Request specific time, temperature, and relative humidity targets so your receiving team knows exactly how to prepare material for production.
Reel Core and Splice Specifications
Your converting equipment was designed around specific tolerances. If reel cores don’t fit your mandrels precisely, you’ll fight vibration and tension issues from the first meter. If splices don’t match your system’s capabilities, you’ll see breaks and downtime.
Document these machine-condition details before specs are locked:
Core inside diameter (ID) and wall thickness. Standard cores run 76 mm or 152 mm ID, but confirm your equipment’s requirements. A loose fit causes wobble; an interference fit damages cores during mounting. Wall thickness affects crush resistance—critical if you stack loaded reels or if your unwind tension is high.
Splice type and frequency. Will the mill provide butt splices (edges meet with no overlap), lap splices (overlapping layers), or tape splices? Each type behaves differently at your unwind station. Specify the maximum acceptable splice frequency—for instance, “no more than two splices per 10,000 linear meters” for high-speed operations.
Tension window during unwinding. Mills wrap reels at a specific tension. If that tension is too high for your unwind system, you’ll induce stretch and dimensional instability. If it’s too low, the reel might telescope or have soft spots that cause feeding problems. Request the mill’s typical winding tension (often expressed in N/m or lbf/in) and compare it against your converting equipment’s recommended operating range. A mismatch here explains many mysterious runnability issues that surface only after material hits the line.
Edge quality and reel diameter uniformity. Clean, square edges prevent tracking problems. Consistent outer diameter across multiple reels matters if you’re running parallel lines or if your material handling system relies on uniform reel geometry.
| Machine-Condition Variable | Impact on Runnability | Typical Specification Range |
| --- | --- | --- |
| Moisture conditioning | Prevents curl, static, dimensional drift | 48-72 hr at 23°C/50% RH; target 7.0-7.5% moisture |
| Core ID tolerance | Eliminates vibration and mandrel fit issues | 76 mm or 152 mm ±0.5 mm; wall thickness ≥8 mm |
| Splice type and frequency | Reduces breaks during high-speed unwinding | Max 2 splices per 10,000 m; specify lap or butt |
| Winding tension | Matches unwind system capacity | Confirm tension window (e.g., 3-5 kg/cm width or equivalent N/m) |
| Storage and handling | Prevents telescoping and edge damage | Store reels on end or limit horizontal stacking to fewer than 3 high |
These variables rarely appear on a standard datasheet. You have to ask. Once disclosed, fold them into your RFQ Data Pack so that every competing vendor quotes against the same machine readiness baseline.
Variation Bands That Actually Matter (and How to Read Them)
Specifications define targets. Variation bands define reality. No mill holds every property at a single point value—manufacturing processes introduce natural variability. The question isn’t whether variation exists, but whether the vendor can quantify it and keep it within bounds you can convert successfully.
Understanding Typical Ranges vs. Guaranteed Limits

When you ask a vendor “What’s your typical basis weight range for a 180 g/m² grade?” you’re trying to understand process capability, not contractual limits. A competent mill might respond: “Target 180 g/m², typical range 178-182 g/m² (±1.1%), Cpk 1.45 based on last six months’ production data.”
This answer tells you three things. First, the mill centers their process close to the target (180 g/m²). Second, their routine production stays within a ±1.1% band. Third, their process capability index (Cpk) of 1.45 suggests they’re comfortably inside that range—capability indices above 1.33 indicate good process control, while values below 1.0 signal a process that frequently drifts near or beyond specification limits.
Not every vendor will share Cpk data—it reveals production maturity. But the typical range disclosure is reasonable to request, particularly for your critical-to-quality (CTQ) properties. It helps you understand whether accepting ±2% contractual tolerance means you’ll see the full ±2% swing regularly (poor control) or whether actual deliveries cluster much tighter (strong control with safety margin).
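The Cpk arithmetic above can be verified in a few lines. This is a sketch only; the function name and the GSM readings are hypothetical, shown against a 180 g/m² ±2% spec (176.4-183.6 g/m²):

```python
from statistics import mean, stdev

def cpk(samples, lsl, usl):
    """Process capability index: distance from the process mean to the
    nearest spec limit, in units of 3 sample standard deviations."""
    mu, sigma = mean(samples), stdev(samples)
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

# Hypothetical lot-average GSM readings from six months of deliveries:
gsm = [179.8, 180.4, 180.1, 179.6, 180.9, 180.2, 179.9, 180.5]
print(round(cpk(gsm, lsl=176.4, usl=183.6), 2))
```

A result above 1.33 means the typical spread sits comfortably inside the contractual window; below 1.0 means routine production flirts with the limits.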
Critical Properties and Acceptance Mapping
Focus your tightest tolerances on properties that directly affect your process outcomes and finished product performance. For packaging grades, these typically include:
Basis weight (GSM). Material cost scales with weight. A 180 g/m² order that consistently delivers at 182 g/m² costs you 1.1% more than you budgeted—compounded across thousands of tons, this becomes significant. Conversely, light-weight deliveries might fail strength requirements. Request ±1-2% tolerance and ask for the vendor’s actual process spread. The test method should be specified as ISO 536 (for paper and board grammage) or TAPPI T 410 as an alternative.
Moisture window. Discussed earlier under machine conditions, but worth repeating: a 6.5-8.5% window is typical for kraft grades, but the narrower you can negotiate, the better. Moisture affects weight (you’re paying for water), dimensional stability, and converting performance. If your climate control is tight and your line speed is high, consider pushing for 7.0-8.0% with a ±0.5% measurement uncertainty. Specify ISO 287 as the test method. Note that in some pulp-related contexts, suppliers may reference ISO 638 for dry matter determination; confirm that the method specified is appropriate for paper and board testing, not pulp evaluation.
Cobb60 value. Cobb measures water absorption, which governs adhesive pick-up, ink holdout, and moisture resistance. For mailers and wraps, 20-35 g/m² is typical. For corrugating medium, you might accept 25-40 g/m². The key is consistency—wild Cobb swings cause adhesive and printing problems. Ask vendors about their typical Cobb range and whether they track cross-direction (CD) profile variation.
Burst, tensile, and compression properties. Specify which properties are critical for your application. If you’re making boxes, burst strength (ISO 2758 or TAPPI T 403) and edge crush test (ECT per TAPPI T 839) are primary. For wrapping grades, tensile strength (ISO 1924-2) and tear resistance matter more. Don’t specify tight tolerances on properties you won’t test—it adds cost without value.
SCT, RCT, and ECT for containerboard. Short-span compression (SCT per ISO 9895), ring crush (RCT per TAPPI T 822), and edge crush (ECT per TAPPI T 839) predict box performance. If your customers specify a minimum BCT (box compression test), work backward to establish the minimum liner and medium ECT values you need, then request typical ranges from vendors.
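The basis-weight cost arithmetic above scales quickly at volume. As a sketch with hypothetical tonnage and pricing, the annualized cost of consistently overweight deliveries is:

```python
def overweight_cost(annual_tonnes, price_per_tonne, nominal_gsm, delivered_gsm):
    """Annualized cost of paying for grammage above nominal.
    All inputs here are illustrative, not sourced figures."""
    overage = (delivered_gsm - nominal_gsm) / nominal_gsm
    return annual_tonnes * price_per_tonne * overage

# 5,000 t/yr at $800/t, consistently delivered at 182 g/m² against a 180 g/m² nominal:
print(round(overweight_cost(5000, 800, 180, 182)))  # the 1.1% overage, in dollars per year
```

The same function returns a negative number for underweight deliveries, which is where the strength-failure risk lives instead.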
Linking Variation Bands to Acceptance Sampling
Once you know a vendor’s typical ranges, you can design intelligent acceptance criteria. If the vendor’s typical GSM range is 178-182 g/m² (±1.1%) but your contractual spec is 180 g/m² ±2%, you know that most incoming lots should pass easily. This knowledge lets you set appropriate AQL (Acceptable Quality Limit) levels per ISO 2859-1, the international standard for sampling procedures for inspection by attributes:
- Critical properties (those affecting safety, structural integrity, or regulatory compliance): AQL 1.0% or lower (e.g., 0.65%, 1.0%)
- Major properties (those affecting performance or customer satisfaction): AQL 1.5% – 2.5%
- Cosmetic properties (appearance issues with no functional impact): AQL 4.0% – 6.5%
Under ISO 2859-1, the AQL is the worst process average (percent nonconforming) that the sampling plan is designed to accept routinely. An AQL of 1.0 means lots running at about 1% out-of-spec samples will usually pass—appropriate for critical properties where you need near-zero defects. An AQL of 4.0 tolerates roughly 4% nonconforming, suitable for properties with wider tolerance or less severe consequences.
Effective acceptance sampling starts with knowing the vendor’s actual process capability. If their Cpk is strong (≥1.33) and their typical range sits well inside your spec, you can use standard AQL plans. If their process barely fits your specification window, you’ll need tighter sampling or you’ll spend significant time managing borderline lots.
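The capability-to-sampling logic above can be sketched as a simple decision rule. The thresholds mirror the Cpk bands discussed earlier; the function name and wording are illustrative:

```python
def sampling_regime(vendor_cpk):
    """Map a vendor's disclosed process capability to an inspection posture.
    Thresholds follow the common 1.33 / 1.0 capability conventions."""
    if vendor_cpk >= 1.33:
        return "standard AQL plan per ISO 2859-1"
    if vendor_cpk >= 1.0:
        return "tightened sampling; expect borderline lots"
    return "hold: process barely fits the spec window; renegotiate or 100% inspect"

print(sampling_regime(1.45))  # standard AQL plan per ISO 2859-1
```

In practice you would apply this per critical-to-quality property, not once per vendor, since capability usually differs by property.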
| Specification Field | Typical Variation Band | Test Method | Acceptance Mapping |
| --- | --- | --- | --- |
| Basis weight (GSM) | Target ±1-2% | ISO 536 or TAPPI T 410 | AQL 1.0-2.5; affects cost and strength |
| Moisture content | 6.5-8.5% (target 7.0-7.5%) | ISO 287 | AQL 1.0-2.5; critical for runnability |
| Cobb60 | 20-40 g/m² (application dependent) | ISO 535 or TAPPI T 441 | AQL 2.5-4.0; affects adhesion and print |
| Burst strength | Target ±5-8% typical | ISO 2758 or TAPPI T 403 | AQL 1.0-2.5 for structural grades |
| Tensile strength | Target ±5-10% typical | ISO 1924-2 or TAPPI T 494 | AQL 2.5-4.0; varies by application |
| ECT (Edge Crush) | Target ±8-10% typical | TAPPI T 839 | AQL 1.0-2.5 for box performance prediction |
Understanding these bands transforms acceptance inspection from a pass/fail gate into a process control feedback loop. When you see incoming moisture trending toward the high end of the window (8.3%, 8.4%, 8.5% across successive lots), you can alert the vendor before material goes out of spec.
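A minimal trend-alert sketch captures the moisture example above (successive lots at 8.3%, 8.4%, 8.5%). This is an illustrative rule of thumb, not a formal SPC run test; the function name and thresholds are assumptions:

```python
def trending_toward_limit(readings, upper_limit, alert_margin=0.5, run_length=3):
    """Flag when the last `run_length` lot readings rise monotonically while
    sitting inside the alert band just below the upper spec limit."""
    recent = readings[-run_length:]
    rising = all(b > a for a, b in zip(recent, recent[1:]))
    in_band = all(r > upper_limit - alert_margin for r in recent)
    return rising and in_band

# Successive incoming-moisture readings against an 8.5% upper limit:
print(trending_toward_limit([7.4, 7.9, 8.3, 8.4, 8.5], upper_limit=8.5))  # True
```

A `True` here is the cue to alert the vendor before the next lot actually breaches the window.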
Context note: Typical variation bands vary by substrate, caliper, furnish, and climate. Use these as negotiation and acceptance scaffolds, not as universal values. Your plant-specific conditions and the vendor’s actual process capability should guide the final specification windows you establish.
Evidence Sufficiency Rubric (Pre-Award)
Before you finalize a supplier agreement, verify that their evidence package meets minimum standards for credibility and usability. This 10-point rubric provides a structured pre-award assessment:
1. Are test methods named with standard identifiers? Look for ISO or TAPPI codes with version years, not just generic property names.
2. Do reported values include measurement uncertainty? Without expanded uncertainty (U) with coverage factor k, you can’t interpret whether variation is real or just measurement noise.
3. Is lot/date traceability clear? Can you match the COA to specific reels in a shipment?
4. Does the COA cover all properties in your specification? Missing values create gaps in your acceptance criteria.
5. Is the sampling plan documented? Know how many samples were taken and from where in the production run.
6. Is lab accreditation status disclosed? ISO/IEC 17025 or equivalent quality framework, with scope confirmed to cover the named methods.
7. Are typical variation bands provided? Not just the contractual tolerance, but the vendor’s actual process spread over the last 6-12 months.
8. Are machine conditions specified? Moisture conditioning, reel/core details, tension windows, splice frequency.
9. Is there a named contact for technical follow-up? Quality manager, lab supervisor, or technical service representative.
10. Are supporting documents available? Calibration certificates, past audit reports, or process capability studies.
Score each line 0 (not provided) or 1 (provided and satisfactory). Require a minimum score of 8/10 to proceed to pilot trials. A vendor who scores 8-10 on this rubric demonstrates evidence maturity: they’ve thought through what buyers need, and they’re prepared to stand behind their data. Scores of 5-7 suggest gaps you can address through negotiation. Scores below 5 indicate the vendor may not have robust quality systems or may be reluctant to disclose process performance—both red flags.
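The scoring and decision bands above can be expressed as a short sketch. The item keys are shorthand for the ten rubric questions and are purely illustrative:

```python
# Shorthand keys for the 10 rubric questions (illustrative names):
RUBRIC = [
    "methods_named", "uncertainty_reported", "lot_traceability",
    "full_property_coverage", "sampling_plan", "lab_accreditation",
    "variation_bands", "machine_conditions", "named_contact",
    "supporting_documents",
]

def score_vendor(answers):
    """Score an evidence package 0-10 and map to the article's decision bands."""
    total = sum(1 for item in RUBRIC if answers.get(item, False))
    if total >= 8:
        verdict = "proceed to pilot trial"
    elif total >= 5:
        verdict = "negotiate to close gaps"
    else:
        verdict = "red flag: do not proceed"
    return total, verdict

print(score_vendor({k: True for k in RUBRIC}))  # (10, 'proceed to pilot trial')
```

Keeping the rubric as data (rather than prose) also makes quarter-over-quarter re-scoring trivial during ongoing assurance reviews.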
This rubric connects directly to ongoing assurance. Strong upfront evidence correlates with fewer disputes, faster acceptance cycles, and better CAPA (Corrective and Preventive Action) effectiveness if issues arise. For guidance on implementing pilot-first qualification approaches, see sample testing protocols.
Acceptance Alignment: Tie Spec Thresholds to Sampling & Triage
Specifications become operational only when you define exactly how you’ll verify them and what actions you’ll take when material sits near tolerance limits. This is where acceptance sampling plans tied to specification bands prevent endless acceptance debates.
Designing Your Acceptance Criteria

Start by categorizing each specification according to its impact on your process and finished product. Use a three-tier system:
Tier 1 – Critical specifications. These properties directly affect safety, regulatory compliance, or structural integrity. For containerboard, this might include minimum ECT for box strength. For food-contact grades, this includes compliance with FDA or EU regulations. Set tight acceptance limits (±1-2% from target) and low AQL levels (typically 1.0% or lower). Any lot that fails a Tier 1 spec gets immediate hold and vendor notification.
Tier 2 – Major specifications. These affect performance, yield, or customer satisfaction but don’t create immediate safety or compliance risks. Basis weight, moisture window, and primary strength properties typically fall here. Acceptance limits can match your contractual tolerance (±2-3% from target). Use AQL 2.5-4.0. Lots that marginally fail might be accepted with concessions or routed to less demanding applications.
Tier 3 – Cosmetic specifications. Appearance issues, minor surface defects, or properties that don’t directly affect function. Set wider acceptance bands and higher AQL (4.0-6.5). These lots rarely get rejected unless defects are severe enough to affect end-use or customer perception.
When you set these tiers, communicate them clearly to your vendor. They need to know which properties are negotiable under what circumstances and which are absolute gates.
Triage Protocol for Borderline Lots
Not every shipment will land perfectly on target. Your acceptance system needs a defined protocol for handling borderline cases—material that’s technically within specification but close to limits, or material that’s slightly out-of-spec on non-critical properties.
Establish decision rules based on your tier system:
- Tier 1 borderline (within spec but near limit): Accept but flag for monitoring. If the trend continues across multiple lots, trigger a supplier review.
- Tier 1 out-of-spec: Automatic hold. Vendor provides disposition (rework, sort, replace, or engineering review for use-as-is).
- Tier 2 borderline: Accept with notation. Track trends.
- Tier 2 out-of-spec: Conditional acceptance possible if deviation is minor and application isn’t critical. Require vendor CAPA.
- Tier 3 borderline or out-of-spec: Accept unless defect is obvious to end customer or affects downstream processing.
This triage approach requires you to define thresholds in advance. For instance, if your moisture specification is 7.5% ±1.0% (so 6.5-8.5% is the acceptance window), you might set internal alert limits at 7.0% and 8.0%. Material between 7.0-8.0% passes without comment. Material between 6.5-7.0% or 8.0-8.5% triggers a yellow flag—acceptable but worth noting. Material outside 6.5-8.5% triggers a red flag and follows the Tier 1 disposition protocol.
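The moisture example above reduces to a three-band flag. Here is a minimal sketch using the article’s numbers (spec 7.5% ±1.0%, internal alert limits ±0.5%); the function name and flag wording are assumptions:

```python
def triage_flag(moisture, target=7.5, tol=1.0, alert=0.5):
    """Green/yellow/red triage: acceptance window target ± tol (6.5-8.5%),
    internal quiet zone target ± alert (7.0-8.0%)."""
    lo, hi = target - tol, target + tol
    alert_lo, alert_hi = target - alert, target + alert
    if moisture < lo or moisture > hi:
        return "red: hold lot, Tier 1 disposition"
    if moisture < alert_lo or moisture > alert_hi:
        return "yellow: accept with notation, track trend"
    return "green: accept"

print(triage_flag(7.5))  # green: accept
print(triage_flag(8.2))  # yellow: accept with notation, track trend
print(triage_flag(8.7))  # red: hold lot, Tier 1 disposition
```

Encoding the thresholds before material arrives is what removes the judgment call at the dock.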
When labs disagree on test results, the path forward is methodical: compare method identifiers to ensure both labs used the same standard, check measurement uncertainty values, confirm calibration status and operator training, then run a re-test from sealed retained samples under agreed conditioning protocols. This systematic approach, rooted in test-method-named tolerances, prevents disputes from devolving into subjective arguments.
By designing this system before material arrives, you eliminate the “let’s see how the line runs” guesswork that causes acceptance delays and vendor disputes.
Tie-Ins to RFQ Clarity (What to Name & How)
Evidence collection starts during the RFQ phase. If your Request for Quotation doesn’t specify exactly which evidence you expect, you’ll receive inconsistent responses that are impossible to compare.
Structuring Evidence Requirements in Your RFQ
When you issue an RFQ, include a dedicated evidence section that names specific requirements:
Test methods and units. State “Basis Weight per ISO 536 (g/m²)” rather than generic “grammage.” Specify “Burst per TAPPI T 403 (kPa)” instead of just “burst strength.” This precision eliminates confusion about which test standard applies and ensures all vendors quote against the same measurement framework.
Tolerance format. Require vendors to provide tolerances in a consistent format: “180 g/m² ±2%” rather than “178-182 g/m²” or “180 g/m² nominal.” The ± format makes it immediately clear how much variation you’re allowing.
Evidence attachments. Explicitly request COAs from recent production, capability study summaries if available, and lab accreditation certificates. State “Attach representative COAs dated within the last 90 days” so vendors understand you want current evidence, not historical best-case data.
Machine-condition disclosure. Include fields for moisture conditioning requirements, recommended storage time with specific temperature and relative humidity targets, core specifications, and splice details. Make these required fields, not optional notes.
Structuring RFQs this way creates a spec-true RFQ blueprint that forces comparable responses. When all vendors respond with the same level of detail using the same measurement standards, your evaluation becomes straightforward. Use concise, measurable fields that map directly to the evidence you need: Property → Method ID → Unit → Target/Tolerance → Moisture window → Reel/core & splice → Tension window → Sampling plan & AQL → Accreditation/Calibration evidence → Contacts/Attachments.
Linking Evidence to Quote Evaluation Criteria
Don’t treat the technical evidence section as separate from commercial evaluation. Build evidence quality into your scoring matrix:
- 30% weight: Commercial terms (price, payment terms, lead time)
- 40% weight: Technical evidence quality (completeness of COAs, clarity of variation bands, machine-condition disclosure, test method alignment)
- 20% weight: Vendor capability (production capacity, certifications, references)
- 10% weight: Service factors (responsiveness, technical support, willingness to pilot)
This weighting ensures that the lowest-price vendor with poor evidence doesn’t automatically win. A vendor who provides complete method-named tolerances and realistic variation bands might cost 2-3% more but save you that amount (and more) by eliminating acceptance disputes and quality-related downtime.
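The weighting arithmetic can be sketched in a few lines. The vendor ratings below are hypothetical 0-10 scores, contrasting a cheap vendor with weak evidence against a pricier vendor with a complete package:

```python
# Weights per the article's 30/40/20/10 split:
WEIGHTS = {"commercial": 0.30, "evidence": 0.40, "capability": 0.20, "service": 0.10}

def quote_score(ratings):
    """Weighted total from 0-10 ratings per criterion."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 100%
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

vendor_a = {"commercial": 9, "evidence": 4, "capability": 7, "service": 6}  # cheapest, thin COAs
vendor_b = {"commercial": 7, "evidence": 9, "capability": 7, "service": 7}  # pricier, full package
print(round(quote_score(vendor_a), 1), round(quote_score(vendor_b), 1))  # vendor B wins
```

With evidence quality weighted at 40%, the vendor with method-named tolerances and disclosed variation bands outscores the lowest bid, which is exactly the behavior the matrix is meant to enforce.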
When your RFQ explicitly asks for the 12 RFQ fields that change the quote, you’re telling vendors that evidence matters. Vendors who respond thoroughly signal their commitment to quality transparency.
Before You Award: Audit & Ongoing Assurance Hooks
Evidence collection doesn’t stop at the purchase order. The pre-award evidence establishes baseline expectations; ongoing verification ensures those standards hold across time and production lots.
Pre-Award Verification Steps
Before you sign a supply agreement, perform these final checks:
Cross-reference COA values against your specifications. Lay the vendor’s most recent COAs side-by-side with your specification document. Verify that every critical property has a corresponding test result using the correct method. Flag any discrepancies for clarification before committing.
Request references and audit reports. Ask whether the vendor has recent third-party audit reports (ISO 9001, FSC/PEFC, or customer audits). Contact one or two existing customers to ask about their experience with consistency and dispute resolution.
Conduct a pilot or trial order. For new vendors or high-value contracts, run a small trial before committing to volume. Test the material through your full converting process under the exact machine conditions the supplier disclosed—validate moisture stability, tension window compatibility, and reel handling on your actual equipment. This pilot-first approach proves whether the vendor’s evidence package translates to real-world runnability.
Establish CAPA expectations. Define upfront how you’ll handle quality issues. Will you use a formal CAPA system? What’s the expected response time? Who’s the point of contact? These details prevent confusion when issues arise.
Building Ongoing Assurance into the Relationship
Once production begins, maintain evidence discipline through regular audits and performance reviews:
Quarterly COA reviews. Compare COAs across multiple shipments to spot trends. Is moisture content creeping upward? Is basis weight drifting? Early trend detection prevents out-of-spec surprises. Request quarterly variation summaries that show minimum, maximum, and (if available) Cp/Cpk values for your critical-to-quality properties.
Annual capability updates. Request updated process capability data annually. If the vendor’s Cpk declines from 1.5 to 1.1, that’s a warning sign that their process control is weakening.
Periodic on-site or remote audits. Schedule audits based on risk. New vendors or those with recent quality issues might need annual on-site visits to confirm method alignment, verify operator competency and training, and check instrument calibration traceability. Established vendors with strong track records might only need remote document reviews every two years. When auditing, verify that the ISO/IEC 17025 accreditation scope explicitly covers your critical test methods.
Documented corrective actions. When issues occur, require formal CAPA reports that identify root cause, corrective action, timeline, and verification evidence. Close the loop by verifying effectiveness after the vendor implements changes.
Change control protocol. If the vendor changes production plants, furnish composition, or test methods, require proactive notification and side-by-side COAs demonstrating equivalence before switching over your supply.
This ongoing assurance framework transforms a one-time evidence collection into a continuous quality partnership. The vendor knows you’re monitoring, which encourages consistent performance. You build a historical record that supports both supplier development and risk management. Keep all artifacts organized in the ‘Passport’ evidence pack for each material grade.
Frequently Asked Questions
What’s the difference between a datasheet and a COA?
A datasheet describes what a mill typically produces or what a grade is designed to achieve. It provides target values and general specifications but isn’t tied to a specific production lot. A Certificate of Analysis (COA) documents actual test results from a specific lot or production run, including lot traceability, method IDs with version years, units, tolerances, and measurement uncertainty. For procurement decisions, always request lot-specific COAs rather than relying solely on generic datasheets.
Do I need ISO/IEC 17025 accreditation for the lab issuing the COA?
For high-consequence properties—those affecting safety, structural integrity, or regulatory compliance—yes. ISO/IEC 17025 accreditation provides strong assurance that a lab follows recognized quality management practices, maintains calibrated equipment, and participates in proficiency testing. The accreditation scope must explicitly cover the test methods named in your specification to provide credibility and comparability across labs. While it’s not an absolute requirement for all testing—many reputable mill labs operate without formal accreditation—it does add credibility to test results, especially if disputes escalate to third-party arbitration. At minimum, ask vendors about their lab’s quality procedures, calibration schedules, and participation in any quality assurance programs.
What if a supplier won’t share Cp/Cpk data—what’s a reasonable fallback?
Process capability indices (Cp/Cpk) reveal production maturity, so some vendors are reluctant to share them. If capability data isn’t available, request “typical range” or “typical band” information for the last 6-12 months instead—ask for minimum and maximum values or ±% spread. Ask: “For your 180 g/m² grade, what’s the typical spread you see across a month’s production?” This gives you practical insight into process consistency without requiring the vendor to disclose formal capability metrics. You can also request COAs from multiple recent lots to calculate your own rough capability estimate. Until the vendor demonstrates consistent performance through actual delivery history, apply a tighter AQL (a lower AQL value per ISO 2859-1, e.g. 1.0 rather than 2.5) for those critical-to-quality properties to compensate for the uncertainty.
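The “typical band” fallback described above can be summarized directly from a stack of recent COAs. This is a minimal sketch assuming you have one lot-level result per COA; the moisture values and the 7.5% target are hypothetical examples.

```python
def typical_band(coa_values, target):
    """Summarize the observed 'typical range' from recent lot COAs,
    as a fallback when a vendor will not share formal Cp/Cpk data."""
    lo, hi = min(coa_values), max(coa_values)
    # Express the worst-case deviation from target as a ± percentage.
    spread_pct = max(hi - target, target - lo) / target * 100
    return lo, hi, spread_pct

# Hypothetical moisture results (%) from eight recent COAs, target 7.5%
moisture = [7.2, 7.6, 7.4, 7.8, 7.3, 7.5, 7.7, 7.4]
lo, hi, spread = typical_band(moisture, target=7.5)
print(f"typical band: {lo}-{hi}%  (about ±{spread:.1f}% of target)")
```

Comparing this observed band against your contractual tolerance tells you quickly whether the vendor’s real-world spread leaves any safety margin inside the specification.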
How tight should my moisture window be for converting stability?
Most kraft and containerboard grades perform well within a 6.5-8.5% moisture window (ISO 287). However, moisture windows vary by furnish and climate—what works in one plant may need adjustment in another. If you operate high-speed converting lines with tight climate control, consider negotiating a narrower band—7.0-8.0% or even 7.0-7.5% if the vendor’s process capability supports it. Set a plant-verified window based on your actual converting conditions and enforce the conditioning and storage guidance in both your RFQ and purchase order. Tighter windows reduce curl, dimensional variation, and static issues. Balance this against the vendor’s realistic process spread; demanding a 1% window from a mill with ±2% typical variation creates constant disputes.
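The mismatch warned about above—a 1% window against ±2% typical variation—can be quantified with a quick back-of-the-envelope estimate. The sketch below assumes moisture is roughly normally distributed and treats the vendor’s ±2% band as about three standard deviations; the 7.5% center and 7.0-8.0% window are illustrative.

```python
from statistics import NormalDist

def out_of_window_fraction(mean, sigma, low, high):
    """Estimate the share of production falling outside a moisture window,
    assuming moisture is approximately normally distributed."""
    d = NormalDist(mean, sigma)
    return d.cdf(low) + (1 - d.cdf(high))

# Mill centered at 7.5% moisture with +/-2% typical variation,
# read here as ~3 sigma, i.e. sigma of about 0.67 percentage points,
# against a demanded 7.0-8.0% window.
frac = out_of_window_fraction(mean=7.5, sigma=2.0 / 3, low=7.0, high=8.0)
print(f"expected out-of-window share: {frac:.1%}")
```

Under these assumptions, nearly half of the lots would fall outside the demanded window even with the process perfectly centered—exactly the recipe for constant acceptance disputes.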
When should I require an on-site audit versus a remote review?
Use on-site audits for new vendors, vendors with recent quality issues, marginal pilot performance, or high-risk contracts (large volume, critical application, long-term commitment). On-site visits let you observe production firsthand, verify equipment maintenance and calibration, assess operator training and competency, and evaluate quality culture. Trigger on-site audits if method changes occur or if accreditation or calibration documentation is incomplete. Remote audits—document reviews, virtual facility tours, and interview-based assessments—work well for established vendors with strong performance history or for lower-risk categories. A hybrid approach is common: remote annual reviews with on-site audits every 2-3 years or triggered by performance concerns.
For further guidance on assembling complete evidence packages, explore how to build a ‘Passport’ evidence pack and learn which TAPPI/ISO methods to require for comparable vendor responses.
Our Editorial Process
Our expert team uses AI tools to help organize and structure our initial drafts. Every piece is then extensively rewritten, fact-checked, and enriched with first-hand insights and experiences by expert humans on our Insights Team to ensure accuracy and clarity.
About the PaperIndex Insights Team
The PaperIndex Insights Team is our dedicated engine for synthesizing complex topics into clear, helpful guides. While our content is thoroughly reviewed for clarity and accuracy, it is for informational purposes and should not replace professional advice.
