FAT vs SAT: What to Test and Why
Ground stations are built from many moving parts, and the easiest time to find problems is before the system is in daily operations. Two test stages are commonly used to reduce risk: Factory Acceptance Testing (FAT) and Site Acceptance Testing (SAT). They sound similar, but they serve different purposes. This guide explains what each test phase should cover, why it matters, and how to write tests that catch real integration issues instead of producing a checklist that looks good on paper.
Table of contents
- FAT and SAT in Plain English
- Why Both Phases Matter
- How to Choose What to Test
- FAT: What to Test at the Factory
- SAT: What to Test at the Site
- End-to-End Tests That Prove Real Readiness
- Measuring Results: Pass/Fail Criteria and Evidence
- Common Gaps and Mistakes in Acceptance Testing
- Handover and Baselining: What You Freeze After SAT
- Glossary
FAT and SAT in Plain English
Factory Acceptance Testing (FAT) is the set of tests performed before equipment leaves the vendor or integration facility. The goal is to prove that the delivered system matches the specification and works in a controlled environment where issues can be fixed quickly.
Site Acceptance Testing (SAT) is the set of tests performed after installation at the real site. The goal is to prove the system works in the environment where it will actually operate: with the real power system, the real backhaul, the real RF environment, and the real operational workflows.
A helpful way to remember the difference is that FAT is about build correctness, while SAT is about operational readiness.
Why Both Phases Matter
FAT and SAT exist because the factory and the site are not the same world. A ground station that looks perfect in a lab can fail in the field due to cable loss, grounding, interference, weather exposure, backhaul behavior, or subtle timing issues. Doing both phases reduces risk in a structured way:
- FAT reduces shipment risk: you avoid sending incomplete or misconfigured systems to a remote location.
- SAT reduces operational risk: you prove performance and reliability in real conditions before routine use.
Without FAT, you often discover basic integration problems at the site, where the cost of fixing them is higher. Without SAT, you can end up with a system that technically “works,” but fails during real passes and high-pressure operations.
How to Choose What to Test
The best acceptance tests are driven by mission outcomes, not by vendor brochures. A simple approach is to test from three angles: requirements, interfaces, and failure modes.
Requirements: prove the system meets what was purchased
- Does the station support the required bands, modulation types, and data rates?
- Does it meet key performance targets like sensitivity, stability, and availability?
- Does it produce the required logs, metadata, and delivery outputs?
Interfaces: prove boundaries are stable
- RF handoffs: levels, frequencies, converters, filtering, and routing.
- Timing handoffs: frequency reference inputs, time sync, and holdover behavior.
- Control handoffs: ACU, modem control, automation triggers, and state feedback.
- Data handoffs: recordings, integrity checks, and delivery pipelines.
Failure modes: prove recovery is possible
- What happens if backhaul drops during a pass?
- How does the system behave after a power cycle?
- Can operators recover quickly and safely using runbooks?
If a test cannot be tied to a requirement, an interface, or a known failure mode, it is often safe to remove it.
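One lightweight way to enforce this rule is to make traceability a required part of the test catalog itself. The sketch below shows the idea; the `AcceptanceTest` structure, the test IDs, and the tag values are illustrative, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class AcceptanceTest:
    test_id: str
    description: str
    requirements: list = field(default_factory=list)   # e.g. ["REQ-RF-012"]
    interfaces: list = field(default_factory=list)     # e.g. ["LNA -> downconverter"]
    failure_modes: list = field(default_factory=list)  # e.g. ["backhaul drop mid-pass"]

    def is_justified(self) -> bool:
        """A test that traces to nothing is a candidate for removal."""
        return bool(self.requirements or self.interfaces or self.failure_modes)

catalog = [
    AcceptanceTest("FAT-014", "Verify downconverter output level at test point TP3",
                   interfaces=["downconverter -> modem"]),
    AcceptanceTest("FAT-099", "Confirm front-panel LED colors"),  # traces to nothing
]

for test in catalog:
    if not test.is_justified():
        print(f"{test.test_id}: no requirement, interface, or failure mode -- consider removing")
```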
FAT: What to Test at the Factory
FAT is where you validate the system build before it becomes hard to change. It is the right place to catch missing parts, incorrect wiring, wrong firmware, and integration issues that do not require a real sky view.
Hardware and build verification
- Bill of materials check: confirm the right devices, spares, cables, and adapters are present (a scripted check is sketched after this list).
- Power and grounding: verify correct voltage ranges, protective earth continuity, and safe shutdown behavior.
- Rack layout and labeling: confirm labeling is consistent and supports troubleshooting.
- Environmental checks: confirm fans, temperature sensors, and thermal behavior under normal load.
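The bill-of-materials check can be a simple inventory diff that catches both shortages and unexpected substitutions. The part numbers and quantities below are placeholders.

```python
from collections import Counter

# Contracted vs. as-shipped inventories; part numbers are placeholders.
expected = Counter({"LNA-KU-001": 2, "COAX-LMR400-30M": 8, "GPSDO-10MHZ": 1})
shipped  = Counter({"LNA-KU-001": 2, "COAX-LMR400-30M": 7, "GPSDO-10MHZ": 1})

missing = expected - shipped  # quantities short of contract
extra   = shipped - expected  # items not on the contracted list

if missing:
    print("Missing from shipment:", dict(missing))
if extra:
    print("Unexpected items:", dict(extra))
if not missing and not extra:
    print("BOM check passed")
```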
Functional tests in a controlled environment
- Antenna/drive simulation: verify ACU behaviors and tracking modes where simulation is possible.
- RF chain bench tests: inject known test signals and verify levels and frequency translation.
- Modem acquisition and decoding: lock to known waveforms and validate bit error performance.
- Automation workflows: run scripted “pass” sequences using test inputs and simulated triggers.
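A minimal sketch of a scripted pass rehearsal follows. The state names and the simulated controller are assumptions standing in for a real vendor automation API, which will differ per system.

```python
# Expected automation states for one contact, in order (assumed names).
EXPECTED_SEQUENCE = ["IDLE", "PRE_PASS", "TRACKING", "RECORDING", "POST_PASS", "IDLE"]

class SimulatedStation:
    """Stands in for the real automation stack during bench testing."""
    def __init__(self):
        self._states = iter(EXPECTED_SEQUENCE)

    def next_state(self) -> str:
        # A real harness would read this from the automation system's API.
        return next(self._states)

station = SimulatedStation()
for expected in EXPECTED_SEQUENCE:
    observed = station.next_state()
    assert observed == expected, f"expected {expected}, got {observed}"
    print(f"state OK: {observed}")
print("simulated pass sequence completed")
```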
Security and access baseline
- Account setup: ensure unique accounts, correct roles, and privileged access separation.
- Remote access design: confirm the intended access path works and is logged.
- Logging: verify key device logs are captured and time-aligned.
FAT should end with a build baseline: firmware versions, configuration snapshots, and a known-good reference that can be compared after shipment and installation.
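One way to make that baseline comparable after shipment is to hash a canonical snapshot of versions and settings. This sketch assumes the versions have already been collected into a dict; the collection step is vendor-specific and omitted here.

```python
import hashlib
import json

def snapshot(versions: dict, path: str) -> str:
    """Write a canonical snapshot and return its hash for later comparison."""
    blob = json.dumps(versions, sort_keys=True, indent=2).encode()
    with open(path, "w") as f:
        f.write(blob.decode())
    return hashlib.sha256(blob).hexdigest()

# Version strings are placeholders; collecting them is vendor-specific.
fat_baseline = {"modem_fw": "4.2.1", "acu_fw": "1.9.0", "sdr_fpga": "0xA31F"}
print("FAT baseline hash:", snapshot(fat_baseline, "fat_baseline.json"))
```

After installation, the same collection and snapshot steps produce a second hash; matching hashes mean the build survived shipment unchanged.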
SAT: What to Test at the Site
SAT is where you prove the station works in the real world. Even if the hardware is perfect, the environment can create failures: RF interference, poor grounding, cable loss, local weather exposure, and backhaul characteristics. SAT should focus on performance, stability, and operational realism.
Site infrastructure and installation validation
- Power quality: verify UPS behavior, generator switchover, and safe shutdown sequences.
- Backhaul: confirm throughput, latency, and stability under load, including failover paths if present (a latency probe is sketched after this list).
- Grounding and bonding: validate that the site grounding design matches expectations and reduces noise risk.
- Physical security and access: confirm access procedures and emergency entry steps are workable.
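The latency probe mentioned above can be as simple as wrapping the system ping utility and checking the median against a SAT threshold. The host name, the 80 ms limit, and the Linux/macOS ping flags are all assumptions; a real SAT would also load-test throughput.

```python
import statistics
import subprocess

def probe_latency(host: str, count: int = 20) -> list:
    """Collect round-trip times in ms via the system ping utility (Linux/macOS flags)."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    return [float(line.split("time=")[1].split()[0])
            for line in out.splitlines() if "time=" in line]

rtts = probe_latency("ops-gateway.example.net")  # placeholder host
if not rtts:
    print("no replies: backhaul probe failed outright")
else:
    median = statistics.median(rtts)
    print(f"median RTT: {median:.1f} ms over {len(rtts)} samples")
    assert median < 80, "median latency exceeds the SAT threshold (assumed 80 ms)"
```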
RF and pointing performance
- Pointing calibration: verify pointing models and confirm repeatable acquisition across a range of azimuth and elevation angles.
- Noise environment: baseline the spectrum and identify local interference risks.
- End-to-end link performance: validate sensitivity and margin under normal and degraded conditions.
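Link margin checks reduce to simple arithmetic once the demod threshold and the contractual margin are agreed in advance. The numbers below are purely illustrative.

```python
# All values in dB; the threshold and margin depend on the chosen modulation,
# coding, and the contract -- these numbers are purely illustrative.
required_ebn0_db = 4.5    # demod threshold for the assumed mod/cod point
required_margin_db = 3.0  # contractual margin (assumed)
measured_ebn0_db = 9.2    # reported by the modem during the test pass

margin_db = measured_ebn0_db - required_ebn0_db
print(f"link margin: {margin_db:.1f} dB")
assert margin_db >= required_margin_db, "insufficient margin at this geometry"
```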
Operations and monitoring
- Pass execution: prove scheduling, AOS/LOS (acquisition and loss of signal) behavior, and operator workflows for real contacts.
- Alarm quality: confirm alerts are meaningful, not noisy, and that operators know what actions to take.
- Data delivery: prove deliveries occur with correct metadata, integrity checks, and expected latency.
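One delivery check worth automating is checksum verification against the manifest. The manifest format here is an assumption for illustration; substitute whatever the real delivery pipeline produces.

```python
import hashlib
import pathlib

def sha256_of(path: pathlib.Path) -> str:
    """Hash a delivered product in chunks so large recordings fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# File name and digest are placeholders; read these from the real manifest.
manifest = {"pass_20250301T1142_payload.bin": "<sha256 from the delivery manifest>"}
for name, expected in manifest.items():
    product = pathlib.Path("deliveries") / name
    if not product.exists():
        print(f"{name}: MISSING")
    elif sha256_of(product) != expected:
        print(f"{name}: CHECKSUM MISMATCH")
    else:
        print(f"{name}: OK")
```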
SAT should also validate that the station can survive normal maintenance and recover cleanly after planned downtime.
End-to-End Tests That Prove Real Readiness
End-to-end tests are the most valuable acceptance tests because they exercise the whole chain: scheduling, pointing, RF, decoding, recording, processing, and delivery. They are also the easiest tests to “fake” unless you define clear evidence requirements.
Examples of high-value end-to-end tests:
- Full pass capture and delivery: execute a real contact and deliver a verified product with complete metadata.
- Degraded geometry pass: prove the station still performs within defined limits at low elevations or in shorter contact windows.
- Repeatability test: run multiple passes over multiple days and confirm consistent acquisition and performance.
- Recovery test: intentionally restart a non-critical service and confirm the next pass executes normally.
End-to-end tests should include checks for “silent failures,” where the station appears to run but produces incomplete or unusable outputs.
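A simple screen for silent failures compares what the pass should have produced with what it actually produced. The metadata field names and the 90% duration cutoff below are illustrative assumptions.

```python
def screen_product(meta: dict) -> list:
    """Return a list of problems; an empty list means the product looks usable."""
    problems = []
    if meta.get("frames_received", 0) == 0:
        problems.append("no frames received")
    scheduled = meta.get("scheduled_duration_s", 0)
    recorded = meta.get("recorded_duration_s", 0)
    if scheduled and recorded < 0.9 * scheduled:  # 90% cutoff is an assumption
        problems.append(f"recording short: {recorded}s of {scheduled}s")
    for key in ("start_time_utc", "norad_id", "ground_station_id"):
        if key not in meta:
            problems.append(f"missing metadata: {key}")
    return problems

meta = {"frames_received": 18423, "scheduled_duration_s": 540,
        "recorded_duration_s": 538, "start_time_utc": "2025-03-01T11:42:00Z",
        "norad_id": 43013, "ground_station_id": "GS-NORTH-1"}
issues = screen_product(meta)
print("clean product" if not issues else issues)
```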
Measuring Results: Pass/Fail Criteria and Evidence
Acceptance tests only protect you if the results are measurable. “Works as expected” is not a pass criterion. Each test should define what success looks like and what evidence is required.
Good pass/fail criteria are specific
- Performance thresholds: minimum receive performance, maximum error rates, required lock time, or maximum delivery latency.
- Operational correctness: correct transitions at AOS/LOS, correct transmit safety behavior, correct alarm triggers.
- Data integrity: complete files, consistent time tags, correct metadata fields, and integrity checks that pass.
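One way to keep criteria honest is to express each as a named, machine-checkable threshold, so a test report cannot pass without numbers behind it. The criterion names and limits below are examples, not recommendations.

```python
# Each criterion: (comparison, limit). Names and limits are examples only.
CRITERIA = {
    "modem_lock_time_s":   ("<=", 5.0),
    "post_fec_error_rate": ("<=", 1e-7),
    "delivery_latency_s":  ("<=", 120.0),
}

def evaluate(measured: dict) -> dict:
    """Map each criterion name to True/False given the measured values."""
    ops = {"<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b}
    return {name: ops[op](measured[name], limit)
            for name, (op, limit) in CRITERIA.items() if name in measured}

results = evaluate({"modem_lock_time_s": 3.2,
                    "post_fec_error_rate": 4e-8,
                    "delivery_latency_s": 95.0})
for name, passed in results.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```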
Evidence you should capture
- Configuration snapshots: key settings and versions at the time of the test.
- Logs and timelines: event logs that show what happened and when.
- Measurements: recorded levels and key RF observations at defined test points.
- Products: the outputs that downstream teams will actually use.
Evidence is not just for disputes. It becomes the baseline you compare against when performance changes months later.
Common Gaps and Mistakes in Acceptance Testing
Many acceptance plans fail for predictable reasons. Avoiding these traps usually produces a bigger improvement than adding more tests.
- Testing only best-case passes: if you never test marginal conditions, you will meet them for the first time during operations.
- Ignoring boundaries: RF levels, timing references, and state feedback are where multi-vendor systems break.
- Unclear ownership: no one is accountable for integration defects, so problems get deferred.
- Skipping recovery testing: systems often fail during restarts, updates, and partial outages.
- No baselining: without a baseline, “it got worse” becomes hard to prove or troubleshoot.
- Checklist inflation: too many low-value checks hide the few tests that actually matter.
If you must simplify, keep the tests that prove end-to-end delivery and the tests that validate interface boundaries.
Handover and Baselining: What You Freeze After SAT
SAT should end with a clear handover state. This is the moment when the station becomes an operational asset and changes should become controlled. A strong handover package reduces long-term instability.
Items worth freezing and recording after SAT:
- Approved configurations: modem profiles, ACU settings, frequency plans, automation policies.
- Firmware and software versions: including any vendor patches applied during commissioning.
- Network diagrams and access paths: how operators and vendors connect and what is allowed.
- Runbooks: startup, shutdown, failover, and common fault isolation procedures.
- Performance baselines: what “normal” looks like for lock time, margins, and delivery latency.
The practical outcome is that future changes can be compared to a known-good state, and recovery steps can be executed without guesswork.
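A drift check against the frozen baseline can then be a one-screen script. The keys and values below are illustrative; the frozen side would normally be loaded from the handover package.

```python
import json

# The frozen baseline would normally come from the handover package;
# keys and values here are illustrative.
frozen = json.loads('{"modem_profile": "QPSK-R12-v3", '
                    '"acu_fw": "1.9.0", "freq_plan": "KU-2025A"}')
live = {"modem_profile": "QPSK-R12-v3", "acu_fw": "2.0.1", "freq_plan": "KU-2025A"}

drift = {key: (frozen[key], live.get(key))
         for key in frozen if live.get(key) != frozen[key]}
print("no drift from SAT baseline" if not drift else f"drifted from baseline: {drift}")
```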
Glossary
Factory Acceptance Testing (FAT)
Testing performed before shipment to confirm the system meets specifications in a controlled environment.
Site Acceptance Testing (SAT)
Testing performed after installation to confirm the system works under real site conditions and operational workflows.
RTO (Recovery Time Objective)
The target maximum time to restore service after an outage or failure.
Baseline
A recorded known-good configuration and performance reference used for comparison during troubleshooting and upgrades.
End-to-end test
A test that exercises the full chain from scheduling and acquisition through decoding, recording, and data delivery.
Interface boundary
The handoff point between components, such as RF level transitions, timing references, control protocols, or data pipelines.
Pass/fail criteria
Explicit measurable conditions that determine whether a test succeeded.