Modem and Baseband Acceptance: Lock, Throughput, and Error Rates

Category: Testing, Commissioning, and Acceptance

Published by Inuvik Web Services on February 02, 2026

Modem and baseband acceptance is how you prove a ground station can turn a real RF signal into usable data reliably, not just once, but across the conditions you will see in routine operations. A good acceptance plan focuses on measurable outcomes: how fast the receiver locks, how stable it stays, what throughput is sustained, and what error rates are achieved. This guide describes practical acceptance criteria and the evidence you should collect to make compatibility clear.

Table of contents

  1. What You Are Accepting: Modem vs Baseband
  2. Define the Operating Modes and Test Matrix
  3. Lock Acceptance Criteria: Acquisition and Stability
  4. Throughput Acceptance Criteria: Net Data Rate and Completeness
  5. Error Rate Acceptance Criteria: BER, FER, PER, and FEC Performance
  6. Quality Metrics and Thresholds: How to Set Pass/Fail
  7. Evidence to Collect to Prove Results
  8. Edge Cases and Failure Modes Worth Testing
  9. Test Execution Practices That Improve Confidence
  10. Common Acceptance Mistakes and How to Avoid Them
  11. Glossary: Modem and Baseband Terms

What You Are Accepting: Modem vs Baseband

People sometimes use “modem” and “baseband” interchangeably, but acceptance is easier when you separate responsibilities. In most architectures, the modem is the demodulation and decoding engine that converts a waveform into bits. The baseband layer adds framing, packet extraction, de-randomization, de-interleaving, and forward error correction (FEC) handling, depending on the design.

Practical acceptance typically covers:

  • Synchronization and lock: carrier, symbol, frame, and code lock where applicable.
  • Demodulation and decoding: correct interpretation of modulation and coding under expected conditions.
  • Output correctness: frames and packets reconstructed accurately, with expected counters and metadata.
  • Performance: throughput sustained and error rates within limits.

The key is to define the boundary: what signal comes in, what data comes out, and how you will prove the output is correct.
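
One way to make that boundary concrete is to record it in machine-readable form alongside the test plan. The sketch below shows one option in Python; the field names and example values are illustrative assumptions, not a standard schema.

    from dataclasses import dataclass

    @dataclass
    class AcceptanceBoundary:
        """What enters the modem, what must come out, and how it is proven."""
        # Input side: the signal handed to the modem.
        modulation: str            # e.g. "QPSK"
        symbol_rate_sps: float
        coding: str                # e.g. "conv r=1/2 + RS(255,223)"
        # Output side: the data product that is checked.
        output_format: str         # e.g. "CCSDS transfer frames"
        integrity_check: str       # e.g. "CRC-16 per frame"
        # How output correctness is demonstrated.
        verification: str          # e.g. "compare against a known test pattern"

    boundary = AcceptanceBoundary(
        modulation="QPSK",
        symbol_rate_sps=1.0e6,
        coding="conv r=1/2 + RS(255,223)",
        output_format="CCSDS transfer frames",
        integrity_check="CRC-16 per frame",
        verification="compare against a known test pattern",
    )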

Define the Operating Modes and Test Matrix

Acceptance results are only meaningful if they cover the modes you intend to operate. A single “happy path” test often proves only one configuration. Before measuring anything, define a test matrix that includes the relevant combinations of mode, rate, and pass geometry.

Typical test dimensions include:

  • Modulation type: each modulation scheme that will be used operationally.
  • FEC and coding rate: all coding options and interleaving modes in scope.
  • Symbol rate and occupied bandwidth: representative low, nominal, and high-rate cases.
  • Pass elevation range: include low and high elevation passes if the mission will use them.
  • Operational profiles: any different station configurations, filters, or gain settings.

Keep the matrix realistic. Cover what will actually run in production and include at least one stress case that approaches your performance boundaries.
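
A small script can enumerate the matrix and prune it to what production will actually run, which keeps the case count honest. This is a minimal sketch; every dimension and value below is a placeholder to be replaced with the modes in scope.

    from itertools import product

    # Placeholder dimensions; substitute the modes actually in scope.
    modulations = ["BPSK", "QPSK"]
    coding_rates = ["1/2", "7/8"]
    symbol_rates_sps = [256_000, 1_000_000, 4_000_000]   # low / nominal / high
    elevation_bands = ["low", "mid", "high"]

    matrix = [
        {"modulation": m, "coding": c, "symbol_rate": r, "elevation": e}
        for m, c, r, e in product(modulations, coding_rates,
                                  symbol_rates_sps, elevation_bands)
    ]

    # Prune combinations that will never run in production, keeping at
    # least one stress case near the performance boundary.
    matrix = [case for case in matrix
              if not (case["symbol_rate"] == 4_000_000 and case["coding"] == "1/2")]

    print(f"{len(matrix)} test cases")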

Lock Acceptance Criteria: Acquisition and Stability

Lock is the foundation. If the receiver cannot acquire quickly or maintain stability, everything downstream suffers. Acceptance criteria should include both how fast lock is achieved and how stable it remains during the pass.

Acquisition criteria

Acquisition time is often measured from AOS (acquisition of signal), or from a “receiver enable” trigger, to a defined lock state. Be explicit about what “lock” means.

  • Time-to-carrier lock: time until the receiver tracks carrier frequency and phase.
  • Time-to-symbol lock: time until symbol timing is stable.
  • Time-to-frame lock: time until valid frame sync is achieved.
  • Time-to-data: time until output frames or packets are produced consistently.

Stability criteria

Stability criteria should reflect real operations. A receiver that locks but drops repeatedly may not be acceptable even if average throughput looks fine.

  • Lock continuity: lock maintained for a defined percentage of the pass.
  • Dropout limits: maximum allowed number of lock losses or maximum total dropout time per pass.
  • Reacquisition behavior: maximum time to re-lock after a dropout.

Define thresholds based on mission tolerance. For some missions, brief dropouts are acceptable if data can be reconstructed. For others, continuity is critical.
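
Most of these criteria can be computed mechanically from a timestamped event log. The following sketch assumes the receiver emits simple named events; the event names and timings are illustrative.

    # Derive lock metrics from a timestamped event log (times in seconds).
    events = [
        (0.0,   "aos"),
        (2.1,   "carrier_lock"),
        (2.4,   "frame_lock"),
        (310.0, "lock_lost"),
        (311.5, "frame_lock"),   # reacquisition
        (600.0, "los"),
    ]

    def first_time(name: str) -> float:
        return next(t for t, e in events if e == name)

    aos = first_time("aos")
    time_to_carrier = first_time("carrier_lock") - aos
    time_to_frame = first_time("frame_lock") - aos

    # Pair each lock loss with the next reacquisition to get dropout durations.
    dropouts, lost_at = [], None
    for t, e in events:
        if e == "lock_lost":
            lost_at = t
        elif e == "frame_lock" and lost_at is not None:
            dropouts.append(t - lost_at)
            lost_at = None

    pass_length = first_time("los") - aos
    locked_time = pass_length - time_to_frame - sum(dropouts)

    print(f"time to carrier lock: {time_to_carrier:.1f} s")
    print(f"time to frame lock:   {time_to_frame:.1f} s")
    print(f"lock continuity:      {locked_time / pass_length:.1%}")
    print(f"worst reacquisition:  {max(dropouts):.1f} s")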

Throughput Acceptance Criteria: Net Data Rate and Completeness

Throughput is not only “how many bits per second.” In acceptance, you want to prove that the system delivers usable data at a sustained rate, and that you can account for what was expected versus what was delivered.

Net data rate vs raw rate

Be clear about what you are measuring. A link can have a high raw symbol rate while net payload throughput is reduced by coding overhead, framing, and retransmission or dropouts.

  • Configured rate: the planned raw rate for the mode.
  • Measured output rate: actual delivered frame or packet rate at the output boundary.
  • Useful payload rate: the part of the output that is mission-relevant after overhead.

Completeness criteria

Throughput acceptance should also include completeness. Even if average throughput is good, missing segments can make data unusable.

  • Frame completeness: percentage of expected frames received within the pass window.
  • Sequence continuity: acceptable gaps in frame counters or packet sequences.
  • Replay or buffering behavior: whether late delivery is allowed and how it is indicated.

A practical approach is to set a “minimum sustained throughput” target and a “minimum completeness” target per pass, then require success across a defined number of passes.
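
Both targets are easy to check from output frame counters. The sketch below assumes a wrapping frame counter and a fixed frame size; both values are placeholders.

    # Completeness and payload rate from (timestamp_s, frame_counter) pairs
    # observed at the output boundary.
    FRAME_BITS = 8920        # assumed frame size; adjust to your format
    COUNTER_MOD = 2**16      # assumed counter width

    received = [(0.5, 0), (0.6, 1), (0.7, 2), (0.9, 4), (1.0, 5)]  # frame 3 missing

    # Expected count over the span, handling counter wraparound.
    expected = (received[-1][1] - received[0][1]) % COUNTER_MOD + 1
    completeness = len(received) / expected

    duration = received[-1][0] - received[0][0]
    payload_rate_bps = len(received) * FRAME_BITS / duration

    print(f"completeness: {completeness:.1%}")          # 5 of 6 expected frames
    print(f"payload rate: {payload_rate_bps/1e3:.0f} kbit/s")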

Error Rate Acceptance Criteria: BER, FER, PER, and FEC Performance

Error rates are how you quantify data quality. Different missions report different metrics, but the idea is the same: measure errors at meaningful points and set thresholds that align with mission needs.

Common error metrics

  • BER (Bit Error Rate): fraction of bits that are wrong, measured at a stated point either before or after FEC.
  • FER (Frame Error Rate): fraction of frames that fail checks or cannot be decoded.
  • PER (Packet Error Rate): fraction of packets that are lost or corrupted at the packet boundary.
  • CRC failure rate: how often integrity checks fail for frames or packets.

Pre-FEC vs post-FEC

Many systems report both pre-FEC and post-FEC indicators. Pre-FEC quality helps diagnose margin and predict performance under worse conditions. Post-FEC quality is closer to “is the output usable.”

  • Pre-FEC indicators: show how hard the decoder is working and how close you are to loss of service.
  • Post-FEC indicators: show whether data is clean enough for downstream processing.

Acceptance should specify which metric is used for pass/fail and how it is measured. If post-FEC is the acceptance gate, pre-FEC can still be recorded to build operational insight and troubleshooting capability.
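
When the acceptance gate is post-FEC quality, the cleanest measurement is against a known test pattern, so errored bits can be counted directly. The sketch below takes that approach; the frame data and CRC results are made-up examples.

    # Post-FEC BER and FER against a known reference pattern.
    def bit_errors(received: bytes, expected: bytes) -> int:
        return sum(bin(r ^ e).count("1") for r, e in zip(received, expected))

    # Per-frame results: (payload, expected_payload, crc_ok)
    frames = [
        (b"\x55" * 128, b"\x55" * 128, True),
        (b"\x55" * 127 + b"\x54", b"\x55" * 128, True),   # one bit error
        (b"\x00" * 128, b"\x55" * 128, False),            # failed frame
    ]

    total_bits = sum(len(p) * 8 for p, _, _ in frames)
    errored_bits = sum(bit_errors(p, x) for p, x, _ in frames)
    failed = sum(1 for _, _, ok in frames if not ok)

    post_fec_ber = errored_bits / total_bits
    fer = failed / len(frames)

    print(f"post-FEC BER: {post_fec_ber:.2e}")
    print(f"FER:          {fer:.1%}")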

Quality Metrics and Thresholds: How to Set Pass/Fail

Thresholds should be realistic and tied to outcomes. If thresholds are too strict, you may reject an integration that is operationally fine. If they are too loose, you accept a system that fails under routine variability.

Practical threshold-setting approaches include:

  • Mission-driven thresholds: base requirements on what downstream systems need to succeed.
  • Percentile thresholds: require performance for a high percentage of samples, not the absolute worst second.
  • Per-pass thresholds: set pass-level acceptance rather than only long-term averages.
  • Success-rate thresholds: require success across multiple passes, such as 9 out of 10.

When possible, define separate thresholds for different elevation ranges. Low elevation passes often have lower margin and may have different expected outcomes.
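
A percentile rule and a campaign-level success rule are both simple to express in code. The floor, percentile, and sample data below are placeholders chosen only to make the sketch run.

    import random

    def pass_ok(samples_bps, floor_bps=800_000, fraction=0.95):
        """True if at least `fraction` of one-second samples meet the floor."""
        meeting = sum(1 for s in samples_bps if s >= floor_bps)
        return meeting / len(samples_bps) >= fraction

    # Synthetic campaign: 10 passes of 300 one-second throughput samples.
    random.seed(1)
    campaign = [[random.gauss(1_000_000, 80_000) for _ in range(300)]
                for _ in range(10)]

    passed = sum(pass_ok(p) for p in campaign)
    print(f"{passed}/10 passes met the throughput rule; campaign "
          f"{'accepted' if passed >= 9 else 'rejected'}")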

Evidence to Collect to Prove Results

Acceptance is much easier when evidence is consistent. Collect a standard set of artifacts for each pass and store them in a structured way so results can be reviewed later.

A practical evidence set includes:

  • Lock timeline: timestamps for carrier lock, frame lock, and any lock loss events.
  • Mode snapshot: modulation, coding, symbol rate, and relevant receiver settings used.
  • Throughput series: output rate over time and pass-level totals.
  • Error series: pre-FEC indicators, post-FEC error rate, CRC failures, and drop counters.
  • Completeness report: expected vs received counts for frames or packets, plus gap analysis.
  • Delivery artifacts: output files or packets with integrity checks and an identifiable pass ID.

Evidence should be tied to a specific pass identifier and a specific configuration version. Without that, results become hard to reproduce after changes.
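
One lightweight way to enforce that linkage is a per-pass manifest that names every artifact and hashes it. The sketch below is illustrative; the file names, keys, and identifiers are assumptions, not a standard.

    import hashlib, json, pathlib

    manifest = {
        "pass_id": "2026-02-02T1431Z-SAT42-GS1",
        "config_version": "modem-cfg-v1.7.3",
        "mode": {"modulation": "QPSK", "coding": "1/2",
                 "symbol_rate_sps": 1_000_000},
        "completeness": {"expected_frames": 41250, "received_frames": 41188},
        "artifacts": {},
    }

    # Hash each evidence file so later reviews can detect mix-ups or edits.
    for name in ["lock_events.csv", "throughput.csv", "errors.csv"]:
        path = pathlib.Path(name)
        if path.exists():
            manifest["artifacts"][name] = hashlib.sha256(path.read_bytes()).hexdigest()

    pathlib.Path("manifest.json").write_text(json.dumps(manifest, indent=2))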

Edge Cases and Failure Modes Worth Testing

Many modems look great in nominal conditions. Acceptance improves dramatically when you include a few targeted edge cases that are likely to occur in operations.

  • Low elevation passes: validate acquisition and stability near minimum elevation angles.
  • Mode switching: verify correct behavior if the mission changes rate or coding between passes.
  • Doppler stress: confirm lock holds when Doppler rate is highest and when frequency offset is near limits.
  • Short passes: verify time-to-data and completeness for brief contacts.
  • Restart events: confirm services restart cleanly and resume correct decoding without manual repair.
  • Backhaul interruption: confirm output buffering and delivery behavior if transfer paths fail.

These tests do not need to be complicated. The goal is to prove that the system’s behavior is predictable and that operators will have clear signals when performance is degrading.
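
For the Doppler stress case in particular, a back-of-envelope calculation tells you what “highest” actually means before you schedule the pass. The sketch below assumes a circular low Earth orbit; the altitude and carrier frequency are placeholders. At closest approach on an overhead pass, the range acceleration is roughly v²/d, which drives the peak Doppler rate.

    # Worst-case Doppler shift and Doppler rate for an overhead LEO pass.
    MU = 3.986e14   # Earth's gravitational parameter, m^3/s^2
    RE = 6.371e6    # mean Earth radius, m
    C  = 2.998e8    # speed of light, m/s

    altitude_m = 550e3
    f_c_hz = 2.25e9          # example S-band carrier

    v = (MU / (RE + altitude_m)) ** 0.5               # orbital speed
    max_shift_hz = f_c_hz * v / C                     # upper bound, near horizon
    max_rate_hz_s = f_c_hz * v**2 / (C * altitude_m)  # at closest approach

    print(f"orbital speed:    {v/1e3:.2f} km/s")
    print(f"max Doppler:     ±{max_shift_hz/1e3:.0f} kHz")
    print(f"max Doppler rate: {max_rate_hz_s:.0f} Hz/s")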

Test Execution Practices That Improve Confidence

Even well-written acceptance criteria can fail if test execution is inconsistent. Small discipline improvements during testing can save a lot of time later.

  • Freeze configurations per test set: avoid changing parameters mid-campaign without documenting it.
  • Record assumptions: spacecraft mode, expected rates, and constraints for each pass.
  • Use a consistent clock: align all logs and timestamps to a stable time source.
  • Include a baseline pass: repeat one known configuration to detect drift in measurement methods.
  • Separate diagnosis from acceptance: acceptance should be pass/fail; diagnosis can happen in parallel.

The most credible campaigns are those where results can be reproduced later with the same setup and the same measurements.
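
The baseline pass is also the easiest of these practices to automate. A minimal drift check, assuming one summary metric per baseline pass (the values below are made up), might look like this:

    # Flag measurement drift using the repeated baseline configuration.
    baseline_esno_db = [8.4, 8.3, 8.5, 8.4, 7.6]   # one Es/N0 sample per baseline pass
    reference_db = baseline_esno_db[0]
    tolerance_db = 0.5

    for i, value in enumerate(baseline_esno_db):
        if abs(value - reference_db) > tolerance_db:
            print(f"baseline pass {i}: {value} dB vs reference {reference_db} dB; "
                  "investigate before trusting results from this period")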

Common Acceptance Mistakes and How to Avoid Them

Modem and baseband acceptance often goes wrong in predictable ways. Avoiding a few common mistakes improves both speed and trust.

  • Accepting “carrier present” as success: you need decoded output and integrity evidence.
  • Using only averages: a good average can hide short but important dropouts.
  • Not defining lock states: “locked” can mean different things; define it precisely.
  • Skipping low elevation tests: the first operational failure often happens near the edges.
  • Ignoring completeness: throughput without completeness can still produce unusable data.
  • Weak evidence: without consistent artifacts, acceptance becomes hard to defend after changes.

A simple acceptance plan that is consistent is usually better than a complex plan that is hard to run.

Glossary: Modem and Baseband Terms

Acquisition time

The time from a defined start point to a defined lock state, such as frame lock or consistent data output.

Lock

A stable synchronization state, such as carrier lock, symbol lock, or frame lock, that enables correct decoding.

Throughput

The rate of usable data delivered at the output boundary, often measured per second and summarized per pass.

BER (Bit Error Rate)

The fraction of bits that are incorrect, measured at a defined point such as before or after FEC.

FER (Frame Error Rate)

The fraction of frames that fail checks or cannot be decoded successfully.

PER (Packet Error Rate)

The fraction of packets that are lost or corrupted at the packet output boundary.

FEC (Forward Error Correction)

Encoding that allows the receiver to correct errors without requesting retransmission.

Pre-FEC and post-FEC

Measurements taken before error correction (quality and margin indicators) and after correction (output usability indicators).