Acceptance Thresholds for Link Performance: What Good Looks Like

Category: Link Engineering and Performance

Published by Inuvik Web Services on January 30, 2026

“Does the link work?” is usually the wrong question. Operators need to know how well it works, how consistently, and under what conditions. That’s what acceptance thresholds are for: measurable criteria you can use to validate a satellite link during commissioning, ground station acceptance testing, and ongoing operations.

This guide explains the most common link performance metrics (Eb/No, C/N0, BER/FER, throughput, availability, and more), how teams set realistic pass/fail thresholds, and what to include in an acceptance plan so “good” is unambiguous.

Table of contents

  1. What Acceptance Thresholds Mean
  2. Define “Good” for Your Service Type
  3. Core Metrics Used for Acceptance
  4. Thresholds vs Targets vs Margins
  5. How to Set Thresholds From the Link Budget
  6. Test Conditions: Clear-Sky, Worst-Case, and Operational
  7. Measuring Throughput the Right Way
  8. Accounting for ACM and Dynamic Links
  9. Common Failure Modes That Look Like Link Problems
  10. An Example Acceptance Checklist
  11. Acceptance Thresholds FAQ
  12. Glossary

What Acceptance Thresholds Mean

Acceptance thresholds are the pass/fail criteria that determine whether a link meets the required performance level. They prevent two common problems:

Vague expectations: “It looks good” is not testable.
Moving goalposts: without thresholds, disagreements show up late—during handover, commissioning, or outages.

A good acceptance definition is specific about: the metric, where it is measured, how it is measured, what conditions apply, and what duration/sample size is required.
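As an illustration, one way to capture all five of those elements so a criterion is testable on paper is a small structured record. The field names and values below are hypothetical, not a standard:

from dataclasses import dataclass

@dataclass
class AcceptanceCriterion:
    # Hypothetical structure for writing one criterion down unambiguously.
    metric: str             # what is measured
    measurement_point: str  # where it is measured
    method: str             # how it is measured
    conditions: str         # conditions that apply
    duration_s: int         # required duration / sample window
    threshold: float        # pass/fail value
    units: str

example = AcceptanceCriterion(
    metric="post-FEC frame error rate",
    measurement_point="modem decoder statistics",
    method="counter delta over the test window",
    conditions="clear sky, elevation above 10 degrees",
    duration_s=300,
    threshold=1e-6,
    units="FER",
)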

Define “Good” for Your Service Type

Different services define “good” differently:

TT&C: reliability and command success matter more than peak throughput. You care about robust demod lock, low command error, and predictable margins.
Payload downlink: you care about how much data you can reliably deliver per pass and whether you can hit data delivery timelines.
Broadband / gateway: you care about sustained throughput, latency/jitter behavior, and availability under real weather and interference conditions.

The first step is to write down the user-visible outcome you need (e.g., “deliver X GB per day” or “maintain service at Y Mbps with Z% availability”) and then map that to RF and modem metrics.
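For example, turning a "deliver X GB per day" goal into a sustained-rate target is simple arithmetic once you estimate usable pass time. All numbers in this sketch are invented:

# Hypothetical worked example: map a daily data-volume goal to a
# sustained-throughput target. Numbers are illustrative only.
gb_per_day = 50
passes_per_day = 6
usable_seconds_per_pass = 420        # time above the minimum elevation with lock

bits_per_day = gb_per_day * 8e9
usable_seconds_per_day = passes_per_day * usable_seconds_per_pass
required_mbps = bits_per_day / usable_seconds_per_day / 1e6

print(f"Required sustained payload throughput: {required_mbps:.1f} Mbps")
# roughly 159 Mbps: this is the outcome the RF and modem metrics must support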

Core Metrics Used for Acceptance

Acceptance plans typically combine a few core measurements:

Eb/No: modem-reported energy-per-bit to noise density; closely tied to demod/decoding performance.
C/N0: carrier-to-noise density; useful for comparing performance across different rates.
BER/FER: bit error rate / frame error rate (often measured pre- and post-FEC).
Lock stability: percentage of time the modem maintains lock during a pass or session.
Throughput: user payload throughput measured at a defined network point (not just modem raw rate).
Packet loss: especially relevant for IP-based services and higher-layer experience.
Latency and jitter: critical for interactive or real-time applications.
Availability: the percent of time service meets a defined performance floor.

The best acceptance plans use a small set of metrics that directly connect to mission outcomes, rather than measuring everything possible.
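Eb/No and C/N0 are related: Eb/No follows from C/N0 and the information bit rate, which is why C/N0 is handy when comparing different rates. A minimal sketch with illustrative numbers:

import math

def ebno_from_cn0(cn0_dbhz: float, bit_rate_bps: float) -> float:
    """Eb/No (dB) = C/N0 (dB-Hz) - 10*log10(bit rate in bps)."""
    return cn0_dbhz - 10 * math.log10(bit_rate_bps)

# Same C/N0, two different information rates:
print(ebno_from_cn0(80.0, 10e6))   # ~10.0 dB at 10 Mbps
print(ebno_from_cn0(80.0, 100e6))  # ~0.0 dB at 100 Mbps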

Thresholds vs Targets vs Margins

It helps to separate three ideas:

Threshold: the minimum acceptable value (pass/fail).
Target: the expected typical value (what you aim to achieve in normal conditions).
Margin: how far you are above the threshold under defined conditions.

For example, you might set an Eb/No threshold that guarantees the selected modulation/coding works, a higher Eb/No target for normal operation, and require a minimum margin at a specified elevation angle.
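A minimal sketch of how those three numbers interact during a test; the dB values are placeholders, not recommendations:

# Hypothetical Eb/No figures in dB; replace with your waveform's numbers.
threshold_db = 6.0        # minimum acceptable (pass/fail)
target_db = 9.0           # expected typical value in normal conditions
required_margin_db = 2.0  # must hold at the specified elevation angle

measured_db = 8.3         # modem-reported Eb/No during the test window

margin_db = measured_db - threshold_db
passed = measured_db >= threshold_db and margin_db >= required_margin_db
print(f"margin = {margin_db:.1f} dB, pass = {passed}, "
      f"gap to target = {target_db - measured_db:.1f} dB")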

How to Set Thresholds From the Link Budget

A practical way to set acceptance thresholds is to build them from two sources:

1) Modem/waveform requirements: the required Eb/No (or C/N) for your modulation/coding to hit a specific post-FEC error rate.
2) Link budget predictions: expected Eb/No or C/N0 across elevation angles and conditions (clear sky, typical, fade).

Then define thresholds that account for known implementation losses and measurement variability. In other words, don’t set the pass line exactly at the theoretical requirement—set it at a level that reflects how your real system behaves.
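A sketch of that construction, with placeholder values for the implementation loss and measurement allowance:

# Hypothetical construction of an acceptance threshold from the two sources
# described above. The specific dB values are illustrative only.
modem_required_ebno_db = 4.5      # waveform spec for the target post-FEC error rate
implementation_loss_db = 1.0      # known losses in the real chain
measurement_allowance_db = 0.5    # instrument and run-to-run variability

acceptance_threshold_db = (modem_required_ebno_db
                           + implementation_loss_db
                           + measurement_allowance_db)

link_budget_prediction_db = 8.0   # predicted Eb/No at the reference elevation
expected_margin_db = link_budget_prediction_db - acceptance_threshold_db
print(f"threshold = {acceptance_threshold_db:.1f} dB, "
      f"expected margin = {expected_margin_db:.1f} dB")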

Test Conditions: Clear-Sky, Worst-Case, and Operational

“Good” must be tied to conditions, or it becomes meaningless. Common acceptance condition sets include:

Clear-sky acceptance: validates the RF chain, pointing, and configuration when weather is not the dominant factor.
Operational acceptance: validates performance over multiple passes/days with typical variability (tracking, temperature, normal atmospheric effects).
Availability acceptance: validates that, over time, the link meets a defined performance floor at a required percentage (often most relevant for comms).

For LEO, acceptance may specify a minimum elevation angle (e.g., evaluate performance above X degrees) because low elevations can be dominated by geometry and atmospheric path length.
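One way to make the condition explicit in post-processing is to evaluate only the samples above the agreed elevation and then check the performance floor. A minimal sketch with invented sample data:

# Hypothetical per-second samples: (elevation_deg, ebno_db)
samples = [(4, 3.1), (12, 6.8), (25, 8.2), (40, 9.1), (18, 7.0), (7, 4.9)]

min_elevation_deg = 10.0
floor_ebno_db = 6.0

eligible = [ebno for el, ebno in samples if el >= min_elevation_deg]
meeting_floor = [e for e in eligible if e >= floor_ebno_db]

availability = len(meeting_floor) / len(eligible) if eligible else 0.0
print(f"Availability above {min_elevation_deg} deg: {availability:.0%}")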

Measuring Throughput the Right Way

Throughput is often the most argued metric because people measure it at different layers. Acceptance should specify:

Measurement point: modem output, ground station router, mission network boundary, or cloud ingest endpoint.
Payload vs raw: is it user payload throughput or PHY rate?
Protocol overhead: how do you account for framing, encryption, retransmissions, and buffering?
Time window: peak rate, sustained rate over N seconds, or average over the full pass?

A good acceptance plan avoids “speed test ambiguity” by defining the exact method and where the numbers come from.
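For example, computing a sustained rate over sliding N-second windows from byte counters at the agreed measurement point removes most of that ambiguity. A sketch with made-up counter values:

# Hypothetical per-second payload byte counts sampled at the agreed
# measurement point (e.g., the mission network boundary).
bytes_per_second = [12e6, 11e6, 13e6, 0, 0, 12e6, 12e6, 13e6, 11e6, 12e6]
window_s = 5  # "sustained over N seconds" as defined in the acceptance plan

def sustained_mbps(samples, window):
    """Worst-case average throughput over any sliding window of `window` seconds."""
    rates = []
    for i in range(len(samples) - window + 1):
        rates.append(sum(samples[i:i + window]) * 8 / window / 1e6)
    return min(rates)

print(f"Sustained rate over {window_s}s windows: "
      f"{sustained_mbps(bytes_per_second, window_s):.1f} Mbps")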

Accounting for ACM and Dynamic Links

Many modern links use adaptive coding and modulation (ACM). That means performance is intentionally variable: the link trades throughput for robustness as conditions change.

In ACM systems, acceptance often includes:

Mode distribution: how often the link uses each modulation/coding mode under defined conditions.
Minimum service floor: a minimum throughput or Eb/No margin that must be met with a specified probability.
Recovery behavior: how quickly the link returns to higher efficiency after a fade ends.

This makes acceptance realistic: you validate outcomes (“service stays above X”) instead of demanding one fixed rate.
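A sketch of how a mode distribution and a service-floor probability might be computed from logged ACM samples; the mode names and numbers are invented:

from collections import Counter

# Hypothetical log of (modcod, payload_mbps) samples over a test period.
log = [("QPSK 1/2", 20), ("8PSK 3/4", 55), ("8PSK 3/4", 54),
       ("QPSK 1/2", 21), ("16APSK 2/3", 70), ("8PSK 3/4", 56)]

service_floor_mbps = 25
required_probability = 0.9

mode_distribution = Counter(modcod for modcod, _ in log)
above_floor = sum(1 for _, mbps in log if mbps >= service_floor_mbps) / len(log)

print(dict(mode_distribution))
print(f"P(throughput >= {service_floor_mbps} Mbps) = {above_floor:.0%}, "
      f"pass = {above_floor >= required_probability}")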

Common Failure Modes That Look Like Link Problems

Links sometimes “fail” for reasons that aren’t the core RF budget:

Pointing and tracking errors: especially during fast LEO passes or high-frequency operations.
Polarization mistakes: incorrect feed alignment or switching.
Frequency offset or Doppler compensation issues: can reduce lock quality or cause dropouts.
Nonlinear distortion: from driving amplifiers too hard, harming higher-order modulation.
Interference: external emitters or adjacent-channel conflicts.
Networking bottlenecks: backhaul congestion, misconfigured QoS, or packet loss that looks like RF loss at the application layer.

Acceptance should include sanity checks and logs that help isolate where degradation is happening (RF, baseband, network, or application).
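One simple sanity check along those lines is to compare post-FEC error statistics against application-layer packet loss from the same window, which suggests where to look first. The thresholds here are placeholders:

# Hypothetical counters from the same test window.
post_fec_fer = 1e-7        # from the modem decoder
ip_packet_loss = 0.02      # measured at the mission network boundary

if post_fec_fer > 1e-5:
    print("Degradation at the RF/baseband layer: check margin, pointing, interference.")
elif ip_packet_loss > 0.001:
    print("Link layer looks clean but packets are lost: check backhaul, QoS, buffering.")
else:
    print("Both layers within expectations for this window.")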

An Example Acceptance Checklist

A practical acceptance checklist often includes:

RF chain verification: calibrated receive levels, correct filtering, correct LO/frequency plan.
Tracking validation: pointing error bounds and lock stability across a pass.
Eb/No or C/N0 threshold: minimum values above a defined elevation angle.
Error performance: maximum post-FEC FER (or equivalent) over defined windows.
Data delivery: minimum payload throughput sustained over N seconds (or minimum GB per pass).
Operational alarms: monitoring thresholds, alerting, and fault response procedures validated.
Repeatability: pass criteria met across multiple passes/days, not a single “golden” pass.

The exact numbers come from your waveform requirements and your link budget, but the structure stays consistent across missions.
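The repeatability item in particular is easy to automate: require the criteria to hold on several independent passes rather than one. A tiny sketch with invented results:

# Hypothetical per-pass results: True means all acceptance criteria were met.
pass_results = {"pass_01": True, "pass_02": True, "pass_03": False,
                "pass_04": True, "pass_05": True}

required_passing = 4  # e.g., at least 4 of 5 passes must meet all criteria
accepted = sum(pass_results.values()) >= required_passing
print(f"{sum(pass_results.values())}/{len(pass_results)} passes met criteria; "
      f"accepted = {accepted}")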

Acceptance Thresholds FAQ

Should acceptance thresholds be the same as link budget predictions?

Not exactly. Budgets are predictions under modeled assumptions. Acceptance thresholds should reflect what you can reliably measure, include implementation losses, and account for normal variability so the test is fair and repeatable.

What metric matters most: Eb/No, C/N0, or throughput?

They answer different questions. Eb/No and C/N0 explain whether the RF/baseband is healthy. Throughput shows the user outcome. Strong acceptance plans include at least one RF-quality metric and one outcome metric.

How do we accept a link if the weather is bad?

Many teams separate acceptance into clear-sky commissioning tests and longer-term operational/availability validation. If you can’t control weather, define acceptance windows, use multiple passes, or use service-floor metrics appropriate for ACM systems.

What does “stable lock” mean in measurable terms?

Define it explicitly: percentage of time locked during a pass, maximum allowable dropouts per pass, and minimum lock acquisition time—tied to specific elevation angles and operating modes.
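A sketch of turning a per-second lock log into those numbers; the sample data is invented:

# Hypothetical per-second lock flags for one pass (True = demod locked).
lock_log = [False, False, True, True, True, False, True, True, True, True]

lock_percent = sum(lock_log) / len(lock_log) * 100
# Count dropouts: locked -> unlocked transitions after first acquisition.
first_lock = lock_log.index(True)
dropouts = sum(1 for a, b in zip(lock_log[first_lock:], lock_log[first_lock + 1:])
               if a and not b)
acquisition_time_s = first_lock  # seconds from start of window to first lock

print(f"locked {lock_percent:.0f}% of pass, {dropouts} dropout(s), "
      f"acquired in {acquisition_time_s} s")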

Glossary

Acceptance threshold: A defined pass/fail criterion for a performance metric under specified conditions.

Eb/No: Energy per bit to noise density—maps link quality to digital decoding performance.

C/N0: Carrier-to-noise density (dB-Hz)—useful for comparing performance across bit rates.

BER/FER: Bit error rate / frame error rate—error performance before or after FEC.

Post-FEC: Error performance after forward error correction decoding.

Margin: Headroom above the minimum required performance.

Availability: Percent of time service meets a defined performance level.

ACM: Adaptive Coding and Modulation—link adapts waveform settings to maintain service under changing conditions.

Service floor: A minimum defined performance level (throughput, error rate, or lock stability) that must be maintained with a given probability.