Category: Testing, Commissioning and Acceptance
Published by Inuvik Web Services on February 02, 2026
Modem and baseband acceptance is how you prove a ground station can turn a real RF signal into usable data reliably, not just once, but across the conditions you will see in routine operations. A good acceptance plan focuses on measurable outcomes: how fast the receiver locks, how stable it stays, what throughput is sustained, and what error rates are achieved. This guide describes practical acceptance criteria and the evidence you should collect to make compatibility clear.
People sometimes use “modem” and “baseband” interchangeably, but acceptance is easier when you separate responsibilities. The modem is often the demodulation and decoding engine that converts a waveform into bits. The baseband layer may also include framing, packet extraction, de-randomization, de-interleaving, and forward error correction handling, depending on the architecture.
Practical acceptance typically covers lock acquisition and stability, sustained throughput, data completeness, and error rates at defined measurement points.
The key is to define the boundary: what signal comes in, what data comes out, and how you will prove the output is correct.
Acceptance results are only meaningful if they cover the modes you intend to operate. A single “happy path” test often proves only one configuration. Before measuring anything, define a test matrix that includes the relevant combinations of mode, rate, and pass geometry.
Typical test dimensions include modulation and coding mode, data rate, and pass geometry such as elevation range.
Keep the matrix realistic. Cover what will actually run in production and include at least one stress case that approaches your performance boundaries.
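As a concrete sketch, a matrix can be generated as the cross product of a few dimensions and then reviewed against what will actually run. The dimensions, values, and the stress-case rule below are illustrative assumptions, not a recommended set.

```python
from itertools import product

# Illustrative dimensions and values; replace with the modes, rates, and
# pass geometries you actually intend to operate.
dimensions = {
    "mode": ["QPSK r1/2", "QPSK r3/4"],
    "symbol_rate_msps": [5, 25],
    "pass_geometry": ["low-elevation", "high-elevation"],
}

# Full cross product of the dimensions, one test case per combination.
matrix = [dict(zip(dimensions, combo)) for combo in product(*dimensions.values())]

# Flag the combination closest to the performance boundary as a stress case.
for case in matrix:
    case["stress_case"] = (case["symbol_rate_msps"] == 25
                           and case["pass_geometry"] == "low-elevation")

for case in matrix:
    print(case)
```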
Lock is the foundation. If the receiver cannot acquire quickly or maintain stability, everything downstream suffers. Acceptance criteria should include both how fast lock is achieved and how stable it remains during the pass.
Acquisition time is often measured from AOS (or from a “receiver enable” trigger) to a defined lock state. Be explicit about what “lock” means.
Stability criteria should reflect real operations. A receiver that locks but drops repeatedly may not be acceptable even if average throughput looks fine.
Define thresholds based on mission tolerance. For some missions, brief dropouts are acceptable if data can be reconstructed. For others, continuity is critical.
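A minimal sketch of how acquisition time and dropout statistics can be derived, assuming the receiver exposes timestamped lock-state samples; the sample format, the AOS reference, and the example values are assumptions for illustration.

```python
# Sketch: derive time-to-lock and dropout statistics from timestamped
# lock-state samples. Dropout duration is approximated by the gap between
# the sample that lost lock and the next locked sample.
def lock_metrics(samples, aos_time):
    """samples: list of (time_seconds, locked_bool), sorted by time."""
    first_lock = next((t for t, locked in samples if locked and t >= aos_time), None)
    acquisition_time = None if first_lock is None else first_lock - aos_time

    dropouts = []
    drop_start = None
    for t, locked in samples:
        if first_lock is None or t < first_lock:
            continue
        if not locked and drop_start is None:
            drop_start = t
        elif locked and drop_start is not None:
            dropouts.append(t - drop_start)
            drop_start = None

    return {
        "acquisition_time_s": acquisition_time,
        "dropout_count": len(dropouts),
        "longest_dropout_s": max(dropouts, default=0.0),
    }

# Example: lock 12 s after AOS, one 3 s dropout mid-pass.
samples = [(0, False), (12, True), (300, False), (303, True), (600, True)]
print(lock_metrics(samples, aos_time=0))
```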
Throughput is not only “how many bits per second.” In acceptance, you want to prove that the system delivers usable data at a sustained rate, and that you can account for what was expected versus what was delivered.
Be clear about what you are measuring. A link can have a high raw symbol rate while net payload throughput is reduced by coding overhead, framing, and retransmission or dropouts.
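A short worked example with assumed numbers makes the distinction concrete: the on-air channel rate shrinks to a lower net payload rate once code rate and framing overhead are applied.

```python
# Worked example (assumed numbers): raw symbol rate vs. net payload rate.
symbol_rate = 10e6          # symbols per second
bits_per_symbol = 2         # QPSK
code_rate = 1 / 2           # FEC code rate
framing_efficiency = 0.96   # payload bytes / total frame bytes (assumed)

channel_bit_rate = symbol_rate * bits_per_symbol          # 20 Mbit/s on air
information_rate = channel_bit_rate * code_rate           # 10 Mbit/s after coding overhead
net_payload_rate = information_rate * framing_efficiency  # ~9.6 Mbit/s of usable payload

print(f"{net_payload_rate / 1e6:.1f} Mbit/s net payload")
```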
Throughput acceptance should also include completeness. Even if average throughput is good, missing segments can make data unusable.
A practical approach is to set a “minimum sustained throughput” target and a “minimum completeness” target per pass, then require success across a defined number of passes.
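One way to express that as a per-pass check; the thresholds, input fields, and example values are illustrative assumptions.

```python
# Sketch: per-pass check against sustained-throughput and completeness targets.
def evaluate_pass(delivered_bytes, expected_bytes, pass_duration_s,
                  min_sustained_bps, min_completeness):
    # Averaged over the whole pass for simplicity; a sliding-window minimum
    # would be a stricter reading of "sustained".
    sustained_bps = delivered_bytes * 8 / pass_duration_s
    completeness = delivered_bytes / expected_bytes
    return {
        "sustained_bps": sustained_bps,
        "completeness": completeness,
        "pass": sustained_bps >= min_sustained_bps and completeness >= min_completeness,
    }

result = evaluate_pass(
    delivered_bytes=5.2e9,      # delivered over the pass
    expected_bytes=5.5e9,       # expected from the pass plan
    pass_duration_s=480,
    min_sustained_bps=80e6,     # minimum sustained throughput target
    min_completeness=0.93,      # minimum completeness target
)
print(result)
```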
Error rates are how you quantify data quality. Different missions report different metrics, but the idea is the same: measure errors at meaningful points and set thresholds that align with mission needs.
Many systems report both pre-FEC and post-FEC indicators. Pre-FEC quality helps diagnose margin and predict performance under worse conditions. Post-FEC quality is closer to “is the output usable.”
Acceptance should specify which metric is used for pass/fail and how it is measured. If post-FEC is the acceptance gate, pre-FEC can still be recorded to build operational insight and troubleshooting capability.
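A sketch of that split, assuming post-FEC frame error rate is the gate and a pre-FEC bit error estimate is simply recorded alongside it; the field names and threshold are assumptions.

```python
# Sketch: post-FEC frame error rate as the acceptance gate, with a pre-FEC
# indicator recorded for margin insight rather than gated on.
def error_rate_report(frames_received, frames_failed_crc, pre_fec_ber_estimate,
                      max_post_fec_fer=1e-5):
    post_fec_fer = frames_failed_crc / frames_received if frames_received else 1.0
    return {
        "post_fec_fer": post_fec_fer,
        "pre_fec_ber": pre_fec_ber_estimate,   # recorded, not gated on
        "pass": post_fec_fer <= max_post_fec_fer,
    }

print(error_rate_report(frames_received=2_000_000,
                        frames_failed_crc=4,
                        pre_fec_ber_estimate=2.3e-4))
```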
Thresholds should be realistic and tied to outcomes. If thresholds are too strict, you may reject an integration that is operationally fine. If they are too loose, you accept a system that fails under routine variability.
Practical threshold-setting approaches include:
When possible, define separate thresholds for different elevation ranges. Low elevation passes often have lower margin and may have different expected outcomes.
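A simple way to express elevation-banded thresholds is a lookup keyed by the maximum elevation of the pass; the bands and numbers below are placeholders, not recommendations.

```python
# Sketch: thresholds keyed by maximum pass elevation.
THRESHOLDS_BY_ELEVATION = [
    # (min_max_elevation_deg, min_completeness, max_post_fec_fer)
    (60.0, 0.98, 1e-6),
    (20.0, 0.95, 1e-5),
    (5.0,  0.90, 1e-4),
]

def thresholds_for(max_elevation_deg):
    # Bands are checked highest first; the first match applies.
    for min_elev, min_completeness, max_fer in THRESHOLDS_BY_ELEVATION:
        if max_elevation_deg >= min_elev:
            return {"min_completeness": min_completeness, "max_post_fec_fer": max_fer}
    raise ValueError("pass below the lowest accepted elevation band")

print(thresholds_for(34.0))   # falls in the 20-60 degree band
```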
Acceptance is much easier when evidence is consistent. Collect a standard set of artifacts for each pass and store them in a structured way so results can be reviewed later.
A practical evidence set includes lock and signal quality telemetry, throughput and completeness summaries, error rate reports, and the configuration used for the pass.
Evidence should be tied to a specific pass identifier and a specific configuration version. Without that, results become hard to reproduce after changes.
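One lightweight way to enforce that link is a per-pass manifest written next to the artifacts; the paths, field names, and example values below are assumptions, not a required format.

```python
# Sketch: write a per-pass evidence manifest so every artifact is tied to a
# pass identifier and a configuration version.
import json
from pathlib import Path

def write_manifest(evidence_dir, pass_id, config_version, artifacts, results):
    manifest = {
        "pass_id": pass_id,                  # e.g. scheduler or planning ID
        "config_version": config_version,    # version of the modem/baseband config
        "artifacts": artifacts,              # filenames stored alongside the manifest
        "results": results,                  # summary metrics for quick review
    }
    out = Path(evidence_dir) / pass_id / "manifest.json"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(manifest, indent=2))
    return out

write_manifest(
    evidence_dir="acceptance_evidence",
    pass_id="2026-02-02T1015Z_sat42_pass117",
    config_version="modem-cfg-1.4.2",
    artifacts=["lock_telemetry.csv", "throughput_summary.json", "error_rates.json"],
    results={"acquisition_time_s": 12.0, "completeness": 0.97, "post_fec_fer": 2e-6},
)
```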
Many modems look great in nominal conditions. Acceptance improves dramatically when you include a few targeted edge cases that are likely to occur in operations.
These tests do not need to be complicated. The goal is to prove that the system’s behavior is predictable and that operators will have clear signals when performance is degrading.
Even well-written acceptance criteria can fail if test execution is inconsistent. Small discipline improvements during testing can save a lot of time later.
The most credible campaigns are those where results can be reproduced later with the same setup and the same measurements.
Modem and baseband acceptance often goes wrong in predictable ways. Avoiding a few common mistakes improves both speed and trust.
A simple acceptance plan that is consistent is usually better than a complex plan that is hard to run.
Acquisition time
The time from a defined start point to a defined lock state, such as frame lock or consistent data output.
Lock
A stable synchronization state, such as carrier lock, symbol lock, or frame lock, that enables correct decoding.
Throughput
The rate of usable data delivered at the output boundary, often measured per second and summarized per pass.
BER (Bit Error Rate)
The fraction of bits that are incorrect, measured at a defined point such as before or after FEC.
FER (Frame Error Rate)
The fraction of frames that fail checks or cannot be decoded successfully.
PER (Packet Error Rate)
The fraction of packets that are lost or corrupted at the packet output boundary.
FEC (Forward Error Correction)
Encoding that allows the receiver to correct errors without requesting retransmission.
Pre-FEC and post-FEC
Measurements taken before error correction (quality and margin indicators) and after correction (output usability indicators).