Privacy-Preserving Aggregation for National Security

The PPA Module Project: Enabling secure distributed evaluation in defense manufacturing under zero-trust constraints. Moving from privacy-by-intuition to privacy-by-proof.

Doctoral Research · Purdue University · Dept. of Technology Leadership & Innovation

Technical Pillars

Homomorphic Encryption

Encrypted inference on operator-supplied measurements that returns authorized outputs while keeping test thresholds and model parameters confidential.
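As a deliberately simplified illustration of additively homomorphic evaluation, the toy Paillier sketch below combines an encrypted measurement with a confidential offset without revealing either plaintext to the evaluating node. The primes, parameters, and variable names are illustrative only, not the module's actual scheme; a real deployment would use a vetted library and moduli of at least 2048 bits.

```python
import math
import secrets

# Toy Paillier cryptosystem (additively homomorphic), with tiny fixed
# primes purely for illustration.
p, q = 293, 433
n = p * q                       # public modulus
n2 = n * n
lam = math.lcm(p - 1, q - 1)    # private key
mu = pow(lam, -1, n)            # valid because we fix g = n + 1

def encrypt(m: int) -> int:
    # Fresh randomness per ciphertext; r must be coprime with n.
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# An operator-supplied measurement and a confidential offset are combined
# under encryption: multiplying ciphertexts adds the plaintexts.
measurement, offset = 1042, 37
combined = (encrypt(measurement) * encrypt(offset)) % n2
assert decrypt(combined) == measurement + offset
```

The key property exercised here is that the party holding only ciphertexts can still compute the authorized output (a sum), which is the building block behind encrypted acceptance-test evaluation.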

Differential Privacy

Differential-privacy mechanisms to protect aggregated production telemetry and maintain anonymity bounds within the network.
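A minimal sketch of one such mechanism, the Laplace mechanism applied to a bounded mean of telemetry readings. The function names, clipping bounds, readings, and privacy budget below are hypothetical stand-ins, not the module's calibrated parameters.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    # Inverse-CDF sampling of the Laplace distribution.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, lower, upper, epsilon, rng):
    """Differentially private mean of bounded telemetry readings.

    Clipping each reading to [lower, upper] bounds the sensitivity of
    the mean over n readings at (upper - lower) / n, which calibrates
    the Laplace noise for the given epsilon budget.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / len(clipped)
    return sum(clipped) / len(clipped) + laplace_noise(sensitivity / epsilon, rng)

# Hypothetical station telemetry, released with an epsilon = 1.0 budget.
readings = [98.2, 101.5, 99.7, 100.3, 97.9]
noisy = private_mean(readings, lower=90.0, upper=110.0, epsilon=1.0,
                     rng=random.Random(42))
```

Smaller epsilon values mean stronger privacy and noisier releases; the dashboard planned below is intended to make exactly this trade-off visible.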

Probabilistic Indexing

Probabilistic indexing to limit cross-run linkability across heterogeneous manufacturing nodes.
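One simple way to realize this property is a keyed per-run pseudonym, sketched below as a stand-in for the module's indexing scheme. The record identifiers and key values are hypothetical; a real deployment would derive run keys from a managed key hierarchy.

```python
import hashlib
import hmac

def run_index(record_id: str, run_key: bytes, bits: int = 64) -> str:
    """Pseudonym that is stable within one run but, without the run key,
    cannot be linked to the same record's pseudonym in another run."""
    digest = hmac.new(run_key, record_id.encode(), hashlib.sha256).hexdigest()
    return digest[: bits // 4]   # truncate to bound per-run information

idx_run1 = run_index("unit-7741", b"key-for-run-1")
idx_run1_again = run_index("unit-7741", b"key-for-run-1")
idx_run2 = run_index("unit-7741", b"key-for-run-2")
assert idx_run1 == idx_run1_again   # consistent within a run
assert idx_run1 != idx_run2         # unlinkable across runs without keys
```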

Risk-Based Access Control

Formally verified state-machine access control that enforces workflow safety under realistic adversarial models.
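The enforcement idea can be sketched as an explicit transition table that rejects anything not listed. The stage names and transitions below are hypothetical illustrations, not the dissertation's verified model; the point is that safety follows from the table being exhaustive and every other transition being refused.

```python
from enum import Enum, auto

class Stage(Enum):
    ENROLLED = auto()
    CALIBRATED = auto()
    TESTING = auto()
    ACCEPTED = auto()
    QUARANTINED = auto()

# Allowed workflow transitions; anything not listed is rejected.
ALLOWED = {
    (Stage.ENROLLED, Stage.CALIBRATED),
    (Stage.CALIBRATED, Stage.TESTING),
    (Stage.TESTING, Stage.ACCEPTED),
    (Stage.TESTING, Stage.QUARANTINED),
    (Stage.QUARANTINED, Stage.CALIBRATED),  # re-calibrate after quarantine
}

class Workstation:
    def __init__(self) -> None:
        self.stage = Stage.ENROLLED

    def transition(self, target: Stage) -> None:
        # Safety: an adversary cannot skip calibration or testing,
        # because only tabulated transitions are permitted.
        if (self.stage, target) not in ALLOWED:
            raise PermissionError(
                f"illegal transition {self.stage.name} -> {target.name}")
        self.stage = target

station = Workstation()
station.transition(Stage.CALIBRATED)
station.transition(Stage.TESTING)
station.transition(Stage.ACCEPTED)
```

Expressing the policy as data rather than scattered conditionals is also what makes it amenable to formal verification: safety and liveness properties can be checked against the table itself.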

The Challenge

From siloed intelligence to secure collaboration.

Siloed Agencies

  • The defense industrial base relies on geographically distributed sites with uncleared personnel.
  • Manufacturing design tolerances, diagnostic logic, and model parameters remain classified.
  • Sensitive evaluation logic must be executed across heterogeneous nodes without revealing protected information.

Federated Collaboration

  • Models are trained across agencies without centralizing underlying data.
  • Cryptography and formal privacy guarantees bound what can be learned.
  • Aggregation is hardened against compromise in zero-trust environments.
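The hardened-aggregation idea can be illustrated with pairwise masking in the style of Bonawitz et al.'s secure aggregation protocol. This is a simplified sketch with illustrative client names, seeds, and modulus: each pair of clients derives a shared mask, and because the masks cancel in the sum, the aggregator learns only the total, never an individual update.

```python
import random

MOD = 2**32  # updates are aggregated in a fixed modular ring

def masked_update(client: str, clients: list, updates: dict,
                  pair_seeds: dict) -> int:
    """Return this client's update with all pairwise masks applied.
    The lexicographically smaller client adds the shared mask and the
    larger one subtracts it, so masks cancel in the aggregate."""
    value = updates[client]
    for other in clients:
        if other == client:
            continue
        mask = random.Random(pair_seeds[frozenset((client, other))]).randrange(MOD)
        value = (value + mask) % MOD if client < other else (value - mask) % MOD
    return value

clients = ["alpha", "bravo", "charlie"]
updates = {"alpha": 10, "bravo": 20, "charlie": 33}
pair_seeds = {frozenset(pair): seed for seed, pair in enumerate(
    [("alpha", "bravo"), ("alpha", "charlie"), ("bravo", "charlie")])}

# The aggregator sums masked values; individual updates stay hidden.
total = sum(masked_update(c, clients, updates, pair_seeds) for c in clients) % MOD
assert total == sum(updates.values())
```

The full protocol also handles dropouts and malicious servers via secret-shared masks; this sketch shows only the cancellation core.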

National security organizations need collaborative intelligence capabilities but cannot afford uncontrolled information disclosure. This work asks how far we can push privacy-preserving aggregation while still delivering actionable, high-fidelity models.

The Solution: The PPA Module

The PPA module is a secure, distributed computation architecture that enables encrypted execution of model-based acceptance tests.

Zero-Trust Architecture

Each manufacturing station functions as an edge device operating under asymmetric trust: it contributes measurements without being trusted with the protected evaluation logic.

Formal Verification

Cryptographic components are analyzed with mathematical proofs for correctness and security (e.g., IND-CPA under defined assumptions).

Multi-Tenant Operation

Expanding workforce participation and production capacity without expanding the trust boundary.

Research Progress

A living abstract of the dissertation lifecycle.

  1. Literature Review: Completed
  2. Theoretical Foundations: Completed
  3. Formal Analysis & Proofs: Completed
  4. Implementation & Simulation: In progress
  5. Defense (August 2027): Pending

The Framework: A technical breakdown of the PPA module, its threat models, and its integration into federated learning pipelines.

System Overview

High-level diagrams and protocol flows describing how agencies enroll, contribute encrypted updates, and obtain aggregated models.

Future content: full architecture diagrams and protocol specifications.

Threat Models

Precise characterization of adversaries (honest-but-curious, colluding agencies, malicious servers) and the guarantees the module provides.

Future content: formal threat model tables and proof sketches.

Simulation Dashboard

A planned, browser-based view into the implementation and simulation testbed for the PPA module.

Placeholder: this tab will eventually host an interactive dashboard showing accuracy/utility trade-offs under different privacy budgets.

Convergence & Utility Analysis

Theoretical bounds on model convergence rates (on the order of O(1/T)) that account for noise introduced by the quantization required for encrypted computation and by differential-privacy mechanisms.
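A representative shape such a bound might take (notation and terms here are assumed for illustration, not the dissertation's exact statement): for a strongly convex objective F with minimum F*, after T aggregation rounds over n participants,

```latex
% Illustrative bound: optimization error decays as O(1/T), while
% quantization and differential-privacy noise contribute additive floors.
\[
  \mathbb{E}\!\left[F(\bar{w}_T)\right] - F^{\ast}
  \;\le\;
  \mathcal{O}\!\left(\frac{1}{T}\right)
  \;+\;
  \underbrace{\mathcal{O}\!\left(\Delta_q^{2}\right)}_{\text{quantization}}
  \;+\;
  \underbrace{\mathcal{O}\!\left(\frac{\sigma_{\mathrm{DP}}^{2}}{n}\right)}_{\text{DP noise}}
\]
```

where \(\Delta_q\) denotes the quantization step and \(\sigma_{\mathrm{DP}}\) the standard deviation of the injected privacy noise; the full derivations and assumptions are deferred to the future content noted below.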

Future content: full derivations, assumptions, and empirical validation of theoretical bounds.

Publications: Peer-reviewed papers and conference proceedings that inform or derive from the dissertation.

This space will list six key papers and proceedings, along with short, practice-oriented summaries of their contributions to secure federated intelligence.

  • Paper 1 — summary and venue placeholder.
  • Paper 2 — summary and venue placeholder.
  • Paper 3 — summary and venue placeholder.
  • Paper 4 — summary and venue placeholder.
  • Paper 5 — summary and venue placeholder.
  • Paper 6 — summary and venue placeholder.

Dissertation Draft: Controlled access to the evolving dissertation manuscript.

The full dissertation draft will be made available here as a password-protected PDF, or via request-based access for committee members and collaborators.

For now, please contact Ryan directly to request access to the latest draft.

About Ryan

Ryan Miles is a doctoral candidate at Purdue University and a United States Air Force veteran with approximately 30 years of experience supporting the U.S. Intelligence Community. His doctoral research moves beyond "privacy-by-intuition" to establish "privacy-by-proof" in national security applications. Specifically, he is developing a Privacy-Preserving Aggregation (PPA) module that unifies homomorphic encryption, probabilistic indexing, and compartmentalized differential privacy to enable secure, multi-tenant federated learning in zero-trust environments.

Ryan’s academic inquiry is deeply rooted in his operational experience. He currently serves as a Technical Program Manager at Anduril Industries (Intelligence Systems Division), where he leads the company’s largest product effort, focused on next-generation networking devices, continuing a career-long commitment to delivering resilient infrastructure to the tactical edge. Before Anduril, Ryan held senior technical leadership roles across the defense sector, including Chief Data Officer and Director of Solutions Architecture positions at Raytheon. Earlier, in the U.S. Air Force, he managed large-scale tactical communications supporting joint expeditionary forces.

Research Focus

  • Provable Security: Formal verification of cryptographic protocols (e.g., IND-CPA security) under clearly stated assumptions.
  • Zero-Trust Collaboration: Risk-based access control specified as explicit state machines and verified for safety and liveness properties.
  • Anonymity: Probabilistic indexing to bound cross-tenant linkability and preserve anonymity guarantees in federated settings.