End-to-End Pass Test Criteria and Required Logs

Category: Testing, Commissioning, and Acceptance

Published by Inuvik Web Services on February 02, 2026

An end-to-end pass test proves that a ground station can execute a contact and deliver usable results to the mission team. It is not just “we saw a signal.” A good test confirms the full chain: scheduling, pre-pass configuration, antenna pointing, acquisition, demodulation, recording, delivery, and evidence that each step happened as expected. This guide lays out practical pass test criteria and the logs that operators and engineers should capture to make the result believable and repeatable.

Table of contents

  1. What an End-to-End Pass Test Proves
  2. Defining Scope: What Is In and Out of the Test
  3. Success Criteria: The Minimum You Should Pass
  4. Pre-Pass Checks and Ready-State Evidence
  5. Acquisition and Tracking Criteria: AOS to LOS
  6. RF and Baseband Criteria: Lock, Quality, and Stability
  7. Recording and Data Capture Criteria
  8. Data Delivery and Post-Pass Criteria
  9. Required Logs: What to Capture Every Time
  10. Minimum Metadata Package: What Should Travel With the Data
  11. Common Failure Patterns and How the Logs Help
  12. Pass Test Checklist (Operator-Friendly)
  13. Glossary: Pass Test Terms

What an End-to-End Pass Test Proves

A pass test is a controlled exercise that demonstrates operational readiness. It should produce evidence that the station can:

  • Prepare correctly: load the right configuration, timing, and frequency plan before the satellite arrives.
  • Acquire reliably: point and track the spacecraft, and achieve signal lock in a predictable time.
  • Capture data: record or decode the downlink without gaps that matter for the mission.
  • Deliver results: move data and metadata to the next system with integrity checks.
  • Prove what happened: produce logs that make the outcome auditable and debuggable.

The “end-to-end” part matters because most integration problems happen between systems. A pass test forces those seams to show up while there is still time to fix them.

Defining Scope: What Is In and Out of the Test

Before you run a test, define scope so the result is meaningful. A scope statement should say what you are proving and what you are not proving. This prevents confusion later when someone assumes the test covered a capability it did not.

Common scope choices include:

  • Downlink-only test: prove receive chain, decoding/recording, and delivery.
  • TT&C test: prove command path safety and telemetry receipt, often with stricter controls.
  • Automation level: manual pass, assisted pass, or fully scheduled “lights-out” pass.
  • Site coverage: single station test or multi-site handoff test.

If uplink or command is involved, define additional safety steps and explicit approval points. If it is not involved, state clearly that the test does not prove transmit readiness.

Success Criteria: The Minimum You Should Pass

Success criteria should be observable and testable. Avoid vague goals like “good signal.” Instead, define measurable outcomes that can be confirmed from logs.

A practical minimum set of criteria:

  • Schedule alignment: pass start/end matches plan within an agreed tolerance.
  • Pre-pass config applied: correct profile loaded before AOS with confirmation in logs.
  • Acquisition: signal detected and lock achieved within an agreed window after AOS.
  • Tracking: antenna follows predicted track without limit alarms or repeated loss of track.
  • RF quality: signal level and quality metrics remain within expected bounds for most of the contact.
  • Recording/decoding: data capture starts and stops at expected times and produces an output artifact.
  • Integrity: output artifacts pass checksum and completeness checks.
  • Delivery: data and metadata arrive at the receiving system and are acknowledged.
  • Evidence package: required logs and a pass summary are produced and stored.

If your mission cares about throughput, add a criterion for delivered volume or sustained data rate. If your mission cares about latency, add a criterion for time from LOS to delivery completion.
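
To make these criteria enforceable, it helps to encode them as explicit checks against the pass record rather than prose. The sketch below is illustrative only: the field names, the 5-second schedule tolerance, and the 30-second lock window are assumptions to be replaced with mission-agreed values.

    # Sketch: evaluate pass success criteria from a pass record.
    # Field names and thresholds are illustrative assumptions.
    from datetime import timedelta

    def evaluate_pass(record):
        """Return criterion name -> bool for one pass record."""
        results = {
            # Observed start within 5 s of schedule (example tolerance).
            "schedule_alignment": abs(
                (record["observed_aos"] - record["scheduled_aos"]).total_seconds()
            ) <= 5.0,
            # Correct profile confirmed in logs before scheduled AOS.
            "config_before_aos": record["config_applied_at"] < record["scheduled_aos"],
            # Lock achieved within an agreed window after observed AOS.
            "time_to_lock": record["lock_at"] - record["observed_aos"]
                            <= timedelta(seconds=30),
            # Output artifact delivered and passed its integrity check.
            "artifact_integrity": record["checksum_ok"] and record["delivered"],
        }
        results["overall"] = all(results.values())
        return results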

Pre-Pass Checks and Ready-State Evidence

Many pass failures happen before AOS. A pre-pass check set should prove the station is in a known-good “ready” state. The key is to record evidence that checks were performed, not just that they exist on a checklist.

  • Time sync healthy: reference lock state and recent drift/holdover status recorded.
  • Backhaul available: delivery path reachable and storage space adequate.
  • Station mode correct: not in maintenance, not blocked by safety interlocks.
  • Frequency plan loaded: expected frequencies, offsets, and polarization settings confirmed.
  • Equipment warm-up complete: amplifiers, converters, and receivers in stable state.
  • Automation armed: schedule and orchestration services running and not paused.

Capture these as timestamped entries in a pass record. The goal is to remove ambiguity later when someone asks whether the station was prepared correctly.
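
One lightweight way to capture this evidence is to append each check to the pass record as a timestamped entry the moment it is performed. The sketch below assumes a simple JSON-lines record file; the file layout and check names are illustrative, not a prescribed format.

    # Sketch: append timestamped pre-pass check results to a pass record.
    # The file layout and check names are illustrative assumptions.
    import json
    from datetime import datetime, timezone

    def log_check(pass_id, check, ok, detail=""):
        entry = {
            "pass_id": pass_id,
            "check": check,    # e.g. "time_sync", "storage_free"
            "ok": ok,
            "detail": detail,  # e.g. "GNSS locked, drift < 1 us"
            "at": datetime.now(timezone.utc).isoformat(),
        }
        # One JSON object per line keeps the record append-only and greppable.
        with open(f"pass_{pass_id}_record.jsonl", "a") as f:
            f.write(json.dumps(entry) + "\n")

    log_check("2026-033-0142", "time_sync", True, "GNSS locked, drift < 1 us")
    log_check("2026-033-0142", "storage_free", True, "412 GB free")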

Acquisition and Tracking Criteria: AOS to LOS

Acquisition is the transition from “satellite is in view” to “we have a usable link.” Your pass test should define what counts as acquisition and how it is measured.

Acquisition criteria

  • AOS observed: when the station first detects energy or expected signal signature.
  • Lock achieved: demodulator lock, frame lock, or equivalent operational lock state.
  • Time to lock: time from AOS to lock within a defined threshold.
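
Time to lock in particular is easy to measure mechanically from two log timestamps. In this sketch the timestamps and the 30-second threshold are example values, not standards.

    # Sketch: check time-to-lock against an agreed threshold.
    # Timestamps and threshold are example values.
    from datetime import datetime

    aos = datetime.fromisoformat("2026-02-02T14:03:11+00:00")   # observed AOS
    lock = datetime.fromisoformat("2026-02-02T14:03:27+00:00")  # demod lock
    time_to_lock = (lock - aos).total_seconds()

    THRESHOLD_S = 30.0  # agree on this per mission
    verdict = "PASS" if time_to_lock <= THRESHOLD_S else "FAIL"
    print(f"time to lock: {time_to_lock:.1f} s ({verdict})")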

Tracking criteria

Tracking confirms the antenna stayed on target. Even if a link works, poor tracking can reduce margin and cause intermittent issues.

  • No axis limit events: antenna stays within mechanical limits.
  • Pointing error bounded: if available, encoder or tracking error stays within expected range.
  • Stable acquisition: no repeated cycle of lock/loss beyond an agreed tolerance.

For GEO contacts, the focus may shift from “tracking” to “steady pointing” and long-duration stability, including monitoring for outages due to predictable geometry events such as sun transits.

RF and Baseband Criteria: Lock, Quality, and Stability

A pass test should include baseband evidence that the received signal was not just present but usable. The specific metrics vary by modem and waveform, but the concept is consistent: confirm quality over time.

Common RF and demod criteria

  • Signal level: received power or AGC value within expected bounds.
  • Quality metric: a consistent measure such as carrier-to-noise proxy, demod quality indicator, or lock confidence.
  • Error performance: packet errors, frame errors, or corrected error rate staying within acceptable limits.
  • Doppler handling: frequency offset within tracking range and no loss of lock due to offset.

It helps to define a “good portion of pass” rule, such as a minimum percentage of contact time with lock held and error rates below a threshold. This allows a pass with brief fades to still be judged fairly.
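
Such a rule is straightforward to compute from a sampled lock-state log. The sketch below assumes a uniformly sampled series of lock flags; the sample data and the 95% threshold are illustrative.

    # Sketch: fraction of contact time with lock held, from a uniformly
    # sampled lock-state series (True = locked). Data and threshold are
    # example values.
    def lock_fraction(samples):
        return sum(samples) / len(samples) if samples else 0.0

    # One sample per second over a 420 s contact, with a brief mid-pass fade.
    lock_series = [True] * 120 + [False] * 4 + [True] * 296
    frac = lock_fraction(lock_series)

    MIN_LOCK_FRACTION = 0.95  # example rule: lock held >= 95% of contact
    verdict = "PASS" if frac >= MIN_LOCK_FRACTION else "FAIL"
    print(f"lock held {frac:.1%} of contact ({verdict})")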

Recording and Data Capture Criteria

Recording criteria should make it clear what was captured and whether it is complete. This includes start/stop triggers, file boundaries, and expected size or duration.

  • Recording start: begins at the defined trigger (AOS, lock, or scheduled time) and is logged.
  • Recording stop: ends at the defined trigger (LOS, lock loss, or scheduled time) and is logged.
  • Artifact created: output file(s) exist with expected naming and metadata association.
  • Completeness check: expected duration, expected segments, or expected packet counts are met.
  • Storage behavior: no disk-full events, write errors, or unexpected truncation.

If the station produces both raw and decoded outputs, define which is authoritative for mission use and what is kept for troubleshooting.
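
A completeness check can be automated by comparing the artifact against what the capture settings predict. The sketch below uses a raw IQ recording as the example artifact; the sample rate, sample size, filename, and tolerance are all assumptions.

    # Sketch: completeness check for a raw IQ recording. All parameters
    # are assumptions; derive the expected size from actual capture settings.
    import os

    SAMPLE_RATE = 2_000_000   # complex samples per second (assumed)
    BYTES_PER_SAMPLE = 4      # 16-bit I + 16-bit Q (assumed)
    DURATION_S = 420          # lock start to lock stop, from the pass record

    expected_bytes = SAMPLE_RATE * BYTES_PER_SAMPLE * DURATION_S
    actual_bytes = os.path.getsize("pass_2026-033-0142_raw.iq")

    # Allow a small tolerance for start/stop trigger jitter (example: 1%).
    ok = abs(actual_bytes - expected_bytes) <= 0.01 * expected_bytes
    status = "complete" if ok else "INCOMPLETE"
    print(f"recording {status}: {actual_bytes} of ~{expected_bytes} bytes")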

Data Delivery and Post-Pass Criteria

The pass is not complete when LOS occurs. It is complete when data and context arrive where they are needed and the receiving system can use them. Delivery criteria should cover both the transfer and what the receiver confirms.

Delivery criteria

  • Transfer started: delivery pipeline starts within a defined time after pass end.
  • Transfer completed: all expected artifacts delivered within a defined window.
  • Integrity confirmed: checksum matches or equivalent verification passes.
  • Receiver acknowledgement: the receiving side indicates the delivery is complete and acceptable.
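
Of the criteria above, integrity confirmation is the easiest to automate on the receiving side. The sketch below uses SHA-256 as one common choice; the manifest value shown is a placeholder, not real data.

    # Sketch: verify a delivered artifact against the checksum recorded
    # in the sender's manifest. SHA-256 is one common choice.
    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # 'expected' would come from the delivery manifest; this placeholder
    # happens to be the SHA-256 of empty input, shown only for shape.
    expected = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    actual = sha256_of("pass_2026-033-0142_decoded.bin")
    print("integrity", "OK" if actual == expected else "MISMATCH")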

Post-pass closure criteria

  • Pass summary created: a concise summary record is generated with key outcomes.
  • Alerts reviewed: any alarms during the pass are attached to the pass record or ticketed.
  • Station reset: station returns to a clean ready state for the next scheduled activity.

Required Logs: What to Capture Every Time

Logs are the evidence that makes a pass test credible. They allow others to validate results and diagnose issues without repeating the test. The best practice is to define a standard log package that is collected for every pass test, even when the pass is successful.

Core operational logs

  • Schedule and orchestration log: planned times, triggered actions, and automation decisions.
  • Antenna/ACU log: pointing mode, commanded angles, encoder feedback, and alarms.
  • Receiver chain log: lock states, quality indicators, frequency offsets, and gain behavior.
  • Recording log: start/stop times, filenames, write status, and storage warnings.
  • Delivery log: transfer attempts, retries, completion, and integrity verification results.

System and infrastructure logs

  • Time reference status: lock/holdover state and recent stability indicators.
  • Network path logs: interface state changes, notable drops, and bandwidth constraints during delivery.
  • Host health logs: CPU, memory, disk utilization, and critical service restarts.
  • Security-relevant events: privileged actions related to configuration changes during the test window.

Keep log retention rules clear. Pass tests are often used later as baselines, so losing the evidence defeats the purpose.
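
Correlation across these logs is far easier when every subsystem emits entries that share a pass identifier and a UTC timestamp in a consistent structure. The lines below illustrate one such JSON-lines convention; the field names and values are an assumption, not a standard.

    {"at": "2026-02-02T14:03:11.204Z", "pass_id": "2026-033-0142",
     "source": "acu", "event": "track_start",
     "detail": {"mode": "program_track"}}
    {"at": "2026-02-02T14:03:27.691Z", "pass_id": "2026-033-0142",
     "source": "receiver", "event": "demod_lock",
     "detail": {"freq_offset_hz": -1240}}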

Minimum Metadata Package: What Should Travel With the Data

Data without context is hard to trust. A minimum metadata package should accompany every delivery so downstream systems and operators can interpret it correctly. The exact fields vary, but the concepts are consistent.

  • Pass identifiers: pass ID, spacecraft ID, station ID, and a consistent contact name.
  • Observed times: observed AOS/LOS, lock start/stop, and recording start/stop.
  • Predicted times: scheduled AOS/LOS and planned pass duration for comparison.
  • RF configuration: frequency, polarization, bandwidth, symbol rate, and applied offsets.
  • Quality summary: lock percentage, major alarms, and basic error statistics.
  • Artifact list: names, sizes, and checksums for each delivered item.

The mission team should be able to answer, from metadata alone, whether the pass was successful and whether data is likely complete.
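
As a concrete illustration, a minimum metadata package might look like the following. Every identifier and value here is invented for the example; treat it as a starting point, not a required schema.

    {
      "pass_id": "2026-033-0142",
      "spacecraft_id": "DEMO-SAT-1",
      "station_id": "INV-01",
      "times": {
        "scheduled_aos": "2026-02-02T14:03:05Z",
        "observed_aos": "2026-02-02T14:03:11Z",
        "scheduled_los": "2026-02-02T14:11:40Z",
        "observed_los": "2026-02-02T14:11:32Z",
        "lock_start": "2026-02-02T14:03:27Z",
        "lock_stop": "2026-02-02T14:11:20Z"
      },
      "rf": {
        "frequency_hz": 2245000000,
        "polarization": "RHCP",
        "symbol_rate_baud": 1000000,
        "applied_offset_hz": -1240
      },
      "quality": {
        "lock_fraction": 0.97,
        "frame_error_rate": 0.0003,
        "major_alarms": []
      },
      "artifacts": [
        {
          "name": "pass_2026-033-0142_decoded.bin",
          "size_bytes": 52428800,
          "sha256": "<checksum of the delivered file>"
        }
      ]
    }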

Common Failure Patterns and How the Logs Help

Pass tests often fail in predictable ways. Good logs shorten recovery time because they allow teams to separate RF issues from configuration issues from delivery issues.

  • No acquisition: antenna log shows pointing mismatch, or receiver log shows no expected signature near predicted frequency.
  • Late acquisition: pre-pass configuration applied late, or Doppler plan incorrect near AOS.
  • Intermittent lock: signal quality fluctuates with elevation, or tracking error increases in wind.
  • Recording gaps: storage constraints, service restarts, or incorrect start/stop triggers.
  • Delivery failure: backhaul drops, exhausted retries, or failed integrity checks.

The goal of pass testing is not to avoid every failure. It is to ensure failures are diagnosable and that the station can be improved quickly.

Pass Test Checklist (Operator-Friendly)

This checklist is designed to be used during the test without turning it into a paperwork exercise. Each step should produce a timestamped entry in the pass record or be reflected in logs.

  1. Confirm scope and success criteria: downlink-only vs command, automation level, required outputs.
  2. Verify time sync and station readiness: time reference healthy, systems running, storage available.
  3. Confirm schedule and configuration: pass loaded, correct profile selected, pre-pass actions armed.
  4. Start pre-pass logging: mark test start time and capture key system states.
  5. Monitor acquisition: record observed AOS, time-to-lock, and lock state transitions.
  6. Monitor link quality: watch for alarms and note any sustained degradations.
  7. Confirm recording behavior: start/stop events match triggers and artifacts are created.
  8. Confirm post-pass delivery: transfers start, complete, and pass integrity checks.
  9. Generate pass summary: include outcomes, issues, and references to the log bundle.
  10. Archive evidence: store logs and metadata package with the test record.

Glossary: Pass Test Terms

End-to-end pass test

A test that validates the full chain from scheduling and acquisition through recording and delivery.

AOS / LOS

Acquisition of Signal and Loss of Signal, marking when the satellite comes into view and leaves view.

Lock

A receiver or modem state indicating the signal can be demodulated or framed reliably enough to extract data.

Integrity check

A method to verify delivered data is complete and uncorrupted, often using checksums and expected counts.

Pass summary

A concise record of what happened during a contact, including timing, quality, and delivery outcomes.

Artifact

A produced output such as a recording file, decoded data product, or metadata package.

Automation

The scheduling and orchestration logic that executes pre-pass actions, tracking, capture, and delivery with minimal manual intervention.

Evidence package

The set of logs and metadata collected to prove test outcomes and support troubleshooting.