The PPA Module Project: Enabling secure distributed evaluation in defense manufacturing under zero-trust constraints. Moving from privacy-by-intuition to privacy-by-proof.

Homomorphic Encryption
Encrypted inference on operator-supplied measurements that returns authorized outputs while keeping test thresholds and model parameters confidential (first sketch below).
Differential Privacy
Differential-privacy mechanisms to protect aggregated production telemetry and maintain anonymity bounds within the network (second sketch below).
Probabilistic Indexing
Probabilistic indexing to limit cross-run linkability across heterogeneous manufacturing nodes (third sketch below).
Verified Access Control
Formally verified state-machine access control that enforces workflow safety under realistic adversarial models (fourth sketch below).
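To make the four mechanisms above concrete, the sketches that follow are illustrative toys, not project code. First, homomorphic encryption: a deliberately undersized Paillier-style additive scheme evaluating a linear acceptance score over encrypted measurements, so the station never sees the weights or threshold and the evaluator never sees the plaintext readings. The primes, weights, threshold, and measurement values are all assumptions made for illustration; a real deployment would use a vetted library with production parameter sizes.

    # Toy additively homomorphic (Paillier-style) scheme -- insecure parameter
    # sizes, for illustration only.
    import math, random

    def keygen(p=1_000_003, q=1_000_033):    # toy primes, far too small for real use
        n = p * q
        lam = math.lcm(p - 1, q - 1)
        mu = pow(lam, -1, n)                 # valid because we take g = n + 1
        return (n,), (lam, mu, n)            # public key, secret key

    def encrypt(pk, m):
        (n,) = pk
        r = random.randrange(1, n)           # assumed coprime to n (fine for a toy)
        return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

    def decrypt(sk, c):
        lam, mu, n = sk
        return (pow(c, lam, n * n) - 1) // n * mu % n

    def add(pk, c1, c2):                     # E(a) * E(b) = E(a + b)
        return (c1 * c2) % (pk[0] ** 2)

    def scalar_mul(pk, c, k):                # E(a) ** k = E(k * a)
        return pow(c, k, pk[0] ** 2)

    pk, sk = keygen()
    weights, threshold = [3, 5, 2], 500      # confidential test model (illustrative)
    measurements = [40, 31, 77]              # operator-supplied readings (illustrative)

    station_ct = [encrypt(pk, m) for m in measurements]   # station encrypts and sends
    score = encrypt(pk, 0)
    for c, w in zip(station_ct, weights):                 # evaluator works on ciphertexts
        score = add(pk, score, scalar_mul(pk, c, w))

    # Only the authorized keyholder decrypts, and releases just the verdict.
    print("PASS" if decrypt(sk, score) <= threshold else "FAIL")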
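Second, differential privacy: a minimal Laplace-mechanism sketch for one aggregated telemetry statistic. The clipping bound, epsilon, and per-station counts are illustrative assumptions; the module's actual mechanisms, sensitivities, and privacy budgets may differ.

    # Laplace mechanism over clipped per-station contributions.
    import numpy as np

    def dp_sum(values, clip, epsilon):
        # Clipping bounds any one station's influence on the sum by `clip`
        # (the L1 sensitivity), so Laplace(clip / epsilon) noise yields
        # epsilon-differential privacy for this single release.
        clipped = np.clip(values, 0.0, clip)
        return float(clipped.sum() + np.random.laplace(0.0, clip / epsilon))

    defect_counts = [3, 0, 7, 2, 4]          # per-station telemetry (illustrative)
    print(dp_sum(defect_counts, clip=10.0, epsilon=1.0))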
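Third, probabilistic indexing: one plausible construction, sketched under assumptions, in which index tokens are derived with a keyed hash salted by the run identifier and deliberately truncated. The same record then maps to unlinkable tokens across runs, and short tokens collide by design, so lookups are probabilistic rather than exact. The key handling and token width are illustrative.

    # Per-run keyed tokenization with deliberate truncation.
    import hmac, hashlib

    def run_token(index_key: bytes, run_id: str, record_id: str, bits: int = 16) -> int:
        msg = f"{run_id}|{record_id}".encode()
        digest = hmac.new(index_key, msg, hashlib.sha256).digest()
        return int.from_bytes(digest, "big") % (1 << bits)   # truncation => collisions

    key = b"per-enclave index key (illustrative)"
    print(run_token(key, "run-001", "serial-42"))   # differs from...
    print(run_token(key, "run-002", "serial-42"))   # ...the same serial in another run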
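Fourth, access control: an executable toy model of an allow-listed transition table, not the formally verified artifact itself (that would live in a proof assistant or model checker). The states, roles, and actions are invented for illustration; the point is that any transition outside the table is denied by construction.

    # Allow-list state machine: undeclared (state, role, action) triples raise.
    from enum import Enum, auto

    class State(Enum):
        ENROLLED = auto()
        MEASURING = auto()
        AWAITING_VERDICT = auto()
        RELEASED = auto()

    TRANSITIONS = {
        (State.ENROLLED, "operator", "start_test"): State.MEASURING,
        (State.MEASURING, "operator", "submit_encrypted"): State.AWAITING_VERDICT,
        (State.AWAITING_VERDICT, "keyholder", "issue_verdict"): State.RELEASED,
    }

    class Workflow:
        def __init__(self):
            self.state = State.ENROLLED

        def step(self, role, action):
            nxt = TRANSITIONS.get((self.state, role, action))
            if nxt is None:
                raise PermissionError(f"{role} may not {action} in {self.state}")
            self.state = nxt

    wf = Workflow()
    wf.step("operator", "start_test")
    wf.step("operator", "submit_encrypted")
    wf.step("keyholder", "issue_verdict")     # reaches RELEASED
    # wf.step("operator", "issue_verdict")    # any unlisted triple raises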
From siloed intelligence to secure collaboration.
National security organizations need collaborative intelligence capabilities but cannot afford uncontrolled information disclosure. This work asks how far we can push privacy-preserving aggregation while still delivering actionable, high-fidelity models.
The PPA module is a secure, distributed computation architecture that enables encrypted execution of model-based acceptance tests.
Each manufacturing station functions as an edge device with asymmetric trust.
Cryptographic components are accompanied by mathematical proofs of correctness and security (e.g., IND-CPA security under stated assumptions).
Expanding workforce participation and production capacity without expanding the trust boundary.
A living abstract of the dissertation lifecycle.
High-level diagrams and protocol flows describing how agencies enroll, contribute encrypted updates, and obtain aggregated models (a toy sketch of one such flow follows this list).
Precise characterization of adversaries (honest-but-curious, colluding agencies, malicious servers) and the guarantees the module provides.
A planned, browser-based view into the implementation and simulation testbed for the PPA module.
Theoretical bounds on model convergence rates (O(1/T)) that account for the noise introduced by quantization for encryption and by differential privacy (an illustrative form of such a bound also follows this list).
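The protocol-flow deliverable above can be previewed with one standard building block from the secure-aggregation literature (pairwise additive masking, in the style of Bonawitz et al., 2017): every pair of agencies shares a mask that cancels in the sum, so the aggregation server sees only masked updates yet recovers the exact aggregate. Key agreement, dropout recovery, and authentication are omitted here, and all names and values are illustrative assumptions.

    # Pairwise additive masking: per-agency updates are hidden, the sum is exact.
    import random

    MOD = 2**32
    agencies = ["A", "B", "C"]
    updates = {"A": 5, "B": 11, "C": 2}      # toy scalar model updates

    # Shared pairwise masks; in practice each pair derives these via key agreement.
    masks = {pair: random.randrange(MOD)
             for pair in [("A", "B"), ("A", "C"), ("B", "C")]}

    def masked_update(name):
        x = updates[name]
        for pair, m in masks.items():
            if name == pair[0]:
                x += m                       # the lower-named party adds the mask
            elif name == pair[1]:
                x -= m                       # the higher-named party subtracts it
        return x % MOD

    server_view = [masked_update(a) for a in agencies]   # individually meaningless
    print(sum(server_view) % MOD)                        # 18: the true aggregate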
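And for the convergence deliverable, the following is one standard shape such a bound can take (an illustration, not the dissertation's claimed theorem): for a mu-strongly convex objective f minimized by averaged SGD over T rounds, with unbiased gradient estimates whose total variance sigma^2 combines sampling, encryption-quantization, and DP noise,

    \mathbb{E}\left[f(\bar{x}_T)\right] - f(x^\star)
        \;\le\; \frac{C\,\sigma^2}{\mu\,T} \;=\; \mathcal{O}\!\left(\frac{1}{T}\right),
    \qquad
    \sigma^2 \;=\; \sigma_{\mathrm{grad}}^2 + \sigma_{\mathrm{quant}}^2 + \sigma_{\mathrm{DP}}^2.

The added quantization and DP terms inflate the constant but leave the O(1/T) rate intact, which is precisely why the bounds must account for that noise explicitly.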
This space will list six key papers and proceedings, along with short, practice-oriented summaries of their contributions to secure federated intelligence.
The full dissertation draft will be made available here as a password-protected PDF, or via request-based access for committee members and collaborators.
Ryan Miles is a doctoral candidate at Purdue University and a retired United States Air Force veteran with approximately 30 years of experience supporting the U.S. Intelligence Community. His doctoral research moves beyond "privacy-by-intuition" to establish "privacy-by-proof" in national security applications. Specifically, he is developing a Privacy-Preserving Aggregation (PPA) module that unifies homomorphic encryption, probabilistic indexing, and compartmentalized differential privacy to enable secure, multi-tenant federated learning in zero-trust environments.
Ryan’s academic inquiry is deeply rooted in his operational reality. He currently serves as a Technical Program Manager at Anduril Industries (Intelligence Systems Division), where he leads the company’s largest product effort focused on next-generation networking devices—continuing a career-long focus on delivering resilient infrastructure to the tactical edge. Prior to Anduril, Ryan held senior technical leadership roles across the defense sector, including serving as a Chief Data Officer and as a Director of Solutions Architecture at Raytheon, following a career in the U.S. Air Force managing large-scale tactical communications supporting joint expeditionary forces.
Research Focus