IT Operations & Project Management

Jevon Butler — building systems that scale.

10+ years leading enterprise technology operations, platform migrations, and digital transformation across federal and non-profit organizations. I turn operational complexity into repeatable, scalable outcomes.

Jevon Butler
10+ Years of IT leadership
100+ Production deployments
30K+ Professionals served at TRAILS
Platforms & Tools: Salesforce · Azure DevOps · JAMF · Jira · AWS · Entra ID · Zendesk · PostgreSQL · Microsoft 365 · Google Workspace
$8.2B Infrastructure Supported · Air Force Satellite Control Network
41K+ Records Migrated · Salesforce implementation · TRAILS
100% Launch Uptime Maintained · Zero critical failures across all events
40K+ Athletes on Platforms Delivered · DCO Mobile & Athlete Connect · USADA

Where operations meet delivery

I bridge the gap between technical teams and business outcomes — owning complex programs from first principles through to scalable, documented systems.

🔧

IT Operations Leadership

ITIL-based service management, incident and escalation ownership, SLA governance, and 24/7 operational continuity across distributed teams and platforms.

Enterprise Application Delivery

End-to-end product and platform delivery — from requirements and vendor management through UAT, phased rollout, and post-launch support operations.

📋

Project & Program Management

Cross-functional program coordination, stakeholder alignment, release planning, and Agile/DevOps delivery across complex, multi-system environments.

Want to see the work behind the outcomes?

Every project in my portfolio tells the full story — problem, decisions, execution, and impact.

I build the systems behind the mission.

I'm Jevon Butler — an IT Operations and Project Manager based in Colorado Springs with over a decade of experience leading technology programs that matter. From maintaining 24/7 communications infrastructure for the Air Force Satellite Control Network, to building USADA's digital testing ecosystem for Olympic athletes, to scaling TRAILS' technology operations to serve 30,000+ mental health professionals nationwide — my career has been defined by operational ownership in high-stakes environments.

I'm at my best when a program is complex, the stakes are real, and there isn't a pre-built playbook to follow. I thrive in the space between technical teams and business leadership — translating requirements into systems, and systems into outcomes.

Jevon Butler

My background

I started my career in the United States Air Force, where I led IT operations for global satellite control missions — managing 18,000+ assets, directing Tier 1–3 incident response, and owning compliance reporting for 627+ regulatory requirements. That environment taught me operational discipline, documentation rigor, and how to lead technical teams under real pressure.

After the Air Force, I brought that foundation into the non-profit and mission-driven technology space. At USADA, I owned the full enterprise application ecosystem for the U.S. Anti-Doping program — leading platform migrations, building integrations with WADA's global systems, and maintaining 99.9% on-time delivery across 100+ production deployments. At TRAILS, I've led the technology operations scaling effort for a national mental health platform, implementing Salesforce Nonprofit Cloud, building hardware lifecycle programs from scratch, and establishing the data governance infrastructure the organization needed to grow responsibly.

How I work

  • I document everything — SOPs, runbooks, and knowledge bases aren't an afterthought, they're how I make sure programs outlast any single person.
  • I sequence deliberately — when resources are constrained, getting the order right matters more than moving fast. I tackle the highest-risk items first.
  • I work with stakeholders, not around them — whether it's HR, legal, field teams, or executive leadership, I build processes that create shared accountability rather than isolated IT workflows.
  • I measure what matters — impact claims in my work are backed by actual numbers, not directional language.

Outside of work

I'm based in Colorado Springs, CO. I'm currently pursuing my PMP certification and continuing to develop my SQL skills for data-informed operations work. I'm passionate about mission-driven technology — the idea that the right systems, built well, can meaningfully amplify the impact of organizations doing important work.

Projects & Case Studies

Each project tells the full story — the problem, the decisions, the execution, and the measurable impact. Click any card to read the full case study.

💻
TRAILS · 2024
IT Operations · Program Design
Building TRAILS' First Hardware Lifecycle Program

Designed and implemented an end-to-end device management framework — replacing fragmented manual processes with a secure, repeatable system that cut provisioning effort in half.

MDM / JAMF · Entra ID · Apple Business Manager · Asset Management
USADA · 2022–23
Enterprise Applications · Platform Migration
Migrating USADA to MODOC for Global Anti-Doping Operations

Led the end-to-end platform transition from a proprietary testing app to a globally standardized system — integrating with WADA's ADAMS while keeping live athlete testing operations running without interruption.

MODOC · ADAMS (WADA) · PostgreSQL · UAT
☁️
TRAILS · 2024
CRM & Data · Platform Implementation
Salesforce Nonprofit Cloud Implementation at TRAILS

Migrated 41,354 records from a fragmented Airtable system to Salesforce — establishing a unified system of record for a national network of 12,000+ schools and 30,000+ mental health professionals.

Salesforce · ETL Migration · Data Governance · RBAC
🏃
USADA · 2021–22
Product Delivery · Web Application
Delivering Athlete Connect — Compliance Platform for 40K+ Athletes

Led cross-functional delivery of a modern athlete compliance platform — improving Whereabouts reporting accuracy by 30% while replacing a low-adoption legacy system under strict WADA regulatory requirements.

Azure DevOps · SIMON · UAT · WADA Compliance
Product Delivery · Mobile Platform
Delivering DCO Mobile — Digitizing Field Anti-Doping Operations

Led delivery of a custom mobile platform that replaced paper-based doping control workflows — eliminating manual data entry, accelerating lab coordination, and establishing legally defensible digital records.

Azure DevOps · CI/CD · SFTP Pipelines · Legal Compliance
🛰️
U.S. Air Force · 2013–2019
IT Operations · Mission-Critical Infrastructure
IT Operations for Live Satellite Launch Events — AFSCN

Served as central IT coordination point for live satellite launch operations — maintaining 100% system availability across 10+ command and control networks with zero critical failures during launch windows.

Mission-Critical Ops · Incident Response · Contingency Planning · Distributed Teams

Experience & Background

A visual summary of my professional history. Download the full resume for the complete picture.

Professional Experience

Technology Operations & Project Manager

TRAILS — Remote
January 2024 – Present

Senior operational leader for enterprise applications and IT service delivery at a fast-scaling national mental health non-profit. Responsible for platform modernization, vendor coordination, data governance, and building the IT infrastructure from the ground up to support a national workforce and 12,000+ partner schools. Full project case studies are in the portfolio.

50% provisioning reduction
41K records migrated
30K+ professionals served

Enterprise Applications & Operations Lead

United States Anti-Doping Agency (USADA) — Colorado Springs, CO
April 2021 – January 2024

Owned the full enterprise application ecosystem for the U.S. Anti-Doping program — including a proprietary digital testing platform supporting Olympic, Paralympic, and 45+ National Governing Bodies. Led multiple platform builds and migrations, maintained 99.9% on-time delivery across 100+ production deployments, and managed multi-vendor technology relationships and SLA oversight.

99.9% on-time delivery
40K+ athletes served
100% field digitization

IT Systems Administrator

United States Air Force — Various Locations
October 2013 – October 2019

Led IT operations and systems sustainment for global Air Force missions, overseeing infrastructure supporting an $8.2B Satellite Control Network across 10+ geographically separated units. Directed lifecycle management of 18,000+ IT assets and led incident response across Tier 1–3 support. Program recognized as Best Practice by Air Force Space Command Inspector General.

18K+ assets managed
100% launch uptime
5× recognized as Best Reporting Activity

Education

Master of Science — Management Information Systems

Bellevue University
2025

Bachelor of Science — Information Technology Operations Management

Bellevue University
2021

Associate of Applied Science — Transportation Management

Community College of the Air Force
2019

Certifications

Project Management Professional (PMP)
PMI
In Progress

Core Skills

Service Management
ITIL · Incident Mgmt · Change Mgmt · SLA Oversight · Escalation Mgmt · Outage Response
Delivery & PM
Agile · DevOps · Vendor Mgmt · Release Planning · UAT
Platforms
Salesforce · Azure DevOps · JAMF · Jira · AWS · Zendesk
Data & Querying
SQL (intermediate) · PostgreSQL · Airtable · SAS Enterprise Miner

Security & Compliance

RBAC · IAM · MDM / Endpoint · Data Governance · Compliance Reporting

Let's connect.

Whether you're exploring a role, a collaboration, or just want to talk about IT operations and project delivery — I'd love to hear from you.

💼 Connect with me on LinkedIn

Send a message

Fill out the form and I'll get back to you — I typically respond within one business day.

IT Operations · Program Design

Building TRAILS' First Hardware Lifecycle Program from the Ground Up

How I designed and implemented an end-to-end device management framework at a fast-scaling national non-profit — replacing fragmented, manual processes with a secure, repeatable system that cut provisioning effort in half.

Organization: TRAILS
Duration: ~6 months (2024)
Role: Program Designer & Lead
Scope: 30,000+ users · 12,000+ partner orgs
MDM / JAMF · Entra ID · Apple Business Manager · Google Workspace · macOS & Windows · Asset Management · RBAC · Vendor Coordination

The context

TRAILS was growing fast — from a small regional initiative into a national mental health platform serving over 12,000 schools and 30,000+ professionals. With that growth came a problem that quietly compounded every week: the organization had no formal system for managing the devices its employees used.

Devices were tracked inconsistently, if at all. When someone joined, IT provisioning was improvised. When someone left, device recovery was informal — creating real data security exposure. As a remote-first, distributed workforce, the stakes of getting this wrong were higher than they would be in a traditional office environment.

The underlying risk: Without standardized offboarding and device recovery, departing employees could retain access to sensitive program data, partner information, and organizational systems — a compliance and security liability that grew with every new hire.

What we were working with

When I assessed the landscape, the gaps broke down into five distinct problem areas:

  • No reliable asset tracking — device ownership was informal and inventory records were fragmented across spreadsheets and memory
  • Inconsistent onboarding — new employee device setup varied by who handled it, leading to configuration gaps and security inconsistencies
  • Weak offboarding controls — no defined process for data wiping, device recovery, or access revocation at separation
  • Manual provisioning — setup took hours per device with no standardized tooling, causing delays and errors
  • No visibility into lifecycle status — leadership had no reliable picture of device age, condition, or reissuance readiness

Individually, each gap was manageable. Together, they represented a meaningful operational and security risk for an organization scaling nationally with sensitive student mental health data in scope.

Key decisions and tradeoffs

This wasn't a case of applying a standard playbook. TRAILS was an early-stage non-profit with lean resources, a distributed workforce, and no existing IT infrastructure to build on. Every design decision had to balance security rigor against operational simplicity.

MDM-first provisioning over manual configuration

Rather than documenting a manual setup checklist, I prioritized implementing MDM (JAMF for macOS, Intune via Entra ID for Windows) so that device configuration, security policy enforcement, and software deployment happened automatically at enrollment. The upfront implementation cost was higher, but it eliminated human error from every future provisioning event and made the program scalable without adding headcount.

Integrating with Apple Business Manager early

TRAILS was primarily a macOS environment. Connecting procurement through Apple Business Manager meant new devices could be automatically enrolled in MDM out of the box — no physical access required. This decision directly enabled remote onboarding at scale, which was non-negotiable for a distributed workforce.

Sequencing: security controls before full lifecycle scope

I made the deliberate choice to tackle offboarding and access revocation workflows first, before completing the full procurement-to-reissuance cycle. The security exposure from weak offboarding was the highest-urgency risk. Getting that right before optimizing the rest of the lifecycle was the right call for a scaling non-profit handling sensitive data.

Aligning with HR rather than working around them

Rather than building a parallel IT-owned process, I partnered with HR to embed device workflows directly into their onboarding and offboarding checklists. This created shared accountability, improved follow-through on device returns, and reduced the coordination overhead that had previously caused delays and missed steps.

How it came together

1. Assessment and gap mapping

Audited existing device inventory, interviewed stakeholders across HR, operations, and leadership, and documented the full scope of risk and operational gaps before designing anything.

2. Offboarding and security workflows (priority 1)

Designed and implemented secure device recovery, data wiping, and access revocation processes integrated with Entra ID and Google Workspace. Partnered with HR to embed these steps into the formal offboarding checklist.

3. MDM deployment and provisioning automation

Implemented JAMF for macOS and Intune for Windows. Configured enrollment profiles, baseline security policies, and automated software deployment. Connected Apple Business Manager for zero-touch macOS provisioning.

4. Asset tracking and inventory standards

Established a centralized asset register with ownership records, lifecycle status, and assignment history. Defined standards for intake, tagging, and record-keeping that any team member could maintain consistently.

5. Documentation and handoff

Produced SOPs, onboarding/offboarding runbooks, and vendor coordination guidelines to ensure the program would run consistently without depending on any single individual.

The biggest organizational challenge wasn't technical — it was getting consistent follow-through on device returns during offboarding. Solving that required embedding IT steps into HR's process rather than maintaining a separate IT checklist. Once that accountability was shared, completion rates improved significantly.

Outcomes and impact

The program launched on schedule and delivered measurable improvements across every dimension we tracked.

50% Reduction in device provisioning effort through automation
98% Asset tracking coverage — up from fragmented records
12K+ Partner organizations supported by the scaled infrastructure
30K+ Professionals on the platform the program was built to support

Beyond the numbers, this program eliminated a category of security risk that was genuinely concerning for an organization handling sensitive student mental health data. It also gave leadership reliable visibility into the device fleet for the first time — which directly informed budgeting and procurement planning going forward.

Enterprise Applications · Platform Migration

Migrating USADA from DCO Mobile to MODOC for Global Anti-Doping Operations

How I led the end-to-end platform transition from a proprietary testing application to a globally standardized system — integrating with WADA's international data infrastructure while keeping live athlete testing operations running without interruption.

Organization: USADA
Duration: ~12 months (2022–2023)
Role: Project Lead & Integration Owner
Scope: 40,000+ athletes · 45+ governing bodies
MODOC · ADAMS (WADA) · SIMON · PostgreSQL · SFTP Pipelines · Zendesk · UAT · Vendor Management

The context

USADA's in-house testing application, DCO Mobile, was purpose-built for domestic doping control operations — and it worked well within those boundaries. But as anti-doping operations became increasingly international, the platform's limitations became a real operational liability. DCO Mobile wasn't designed for direct data exchange with ADAMS, the centralized anti-doping management system governed by the World Anti-Doping Agency (WADA), meaning every cross-border coordination required manual reconciliation to bridge the gap.

MODOC — a globally adopted testing platform used by anti-doping organizations worldwide — offered a path to standardization. Making the switch meant gaining native ADAMS integration and alignment with international agency workflows, but it also meant migrating mission-critical operations for a program where disruption to live athlete testing wasn't just an inconvenience — it was a compliance risk with real consequences for athletes and governing bodies.

The operational constraint: USADA tests athletes year-round, including during major international competitions. The migration could not introduce gaps in testing continuity, data integrity, or chain-of-custody documentation — standards that are legally and reputationally binding under the World Anti-Doping Code.

What we were working with

The core challenge wasn't just replacing software — it was replacing software that sat at the center of a tightly coupled ecosystem of internal systems, external agencies, and field operations.

  • No native ADAMS integration in DCO Mobile — every cross-border data exchange required manual effort to reformat and reconcile testing records
  • Misalignment with international agency workflows — other anti-doping organizations operated in MODOC, creating friction in joint testing programs
  • Complex internal dependencies — MODOC needed to integrate with SIMON, PostgreSQL, and SFTP pipelines without disrupting existing workflows
  • Distributed field operations — DCOs worked across geographically dispersed locations, making a simultaneous cutover operationally dangerous
  • Decommissioning risk — retiring DCO Mobile meant ensuring no historical data was lost and no active testing workflows were left mid-cycle

Key decisions and tradeoffs

Phased regional rollout over a single cutover

A simultaneous switch across all field teams would have introduced unacceptable risk. I designed a phased rollout by region, starting with lower-volume testing areas before moving to high-activity programs. This allowed us to identify and resolve integration issues in a controlled environment before they could affect major competition testing cycles.

Parallel systems during transition, not a hard cutover

Rather than decommissioning DCO Mobile at migration start, I maintained both systems in parallel during the rollout window. This preserved a fallback for active testing workflows and allowed us to validate MODOC data output against known DCO Mobile records before fully committing.
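A parallel run like this ultimately comes down to reconciling the two systems' outputs record by record. The sketch below is a hypothetical illustration of that comparison; the key and field names are invented for the example:

```python
# Illustrative reconciliation between a legacy export and a candidate
# system's export, keyed on a shared identifier. Field names are
# hypothetical, not the actual DCO Mobile / MODOC schemas.

def reconcile(legacy, candidate, key="sample_id"):
    """Return keys missing from either side and keys whose records differ."""
    a = {r[key]: r for r in legacy}
    b = {r[key]: r for r in candidate}
    missing_in_candidate = sorted(a.keys() - b.keys())
    missing_in_legacy = sorted(b.keys() - a.keys())
    mismatched = sorted(k for k in a.keys() & b.keys() if a[k] != b[k])
    return missing_in_candidate, missing_in_legacy, mismatched

legacy = [
    {"sample_id": "A1", "status": "complete"},
    {"sample_id": "A2", "status": "complete"},
]
candidate = [
    {"sample_id": "A1", "status": "complete"},
    {"sample_id": "A3", "status": "complete"},
]
gaps_in_new, gaps_in_old, diffs = reconcile(legacy, candidate)
```

A clean run is three empty lists; anything else is a concrete, named discrepancy to resolve before committing to the cutover.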

Structured UAT over informal testing

I ran formal UAT cycles designed around real-world testing scenarios and edge cases — including ADAMS sync behavior under data conflicts, SFTP pipeline failures, and concurrent multi-region submissions. The investment surfaced several integration issues that would have been significantly more disruptive to resolve post-launch.

Field enablement before go-live, not after

Doping Control Officers are often working in the field with limited connectivity and no time to troubleshoot software during a test. I prioritized training, documentation, and hands-on readiness sessions before each regional go-live rather than providing support reactively.

How it came together

1. Requirements gathering and vendor alignment

Led detailed requirements sessions with the MODOC vendor to define USADA-specific configurations. Translated existing DCO Mobile workflows and internal system dependencies into formal integration requirements.

2. Integration design across four systems

Coordinated integration work across SIMON, PostgreSQL, SFTP pipelines (laboratory data exchange), and ADAMS. Served as the operational bridge between the vendor's engineering team and USADA's internal systems.

3. Structured UAT across testing scenarios

Designed and executed UAT cycles covering standard workflows, edge cases, and integration failure scenarios. Validated ADAMS sync behavior, chain-of-custody documentation, and multi-region concurrent operations.

4. Phased regional rollout with parallel systems

Executed go-lives by region in sequenced waves, maintaining DCO Mobile in parallel during each transition window. Used learnings from early regions to refine training materials ahead of higher-volume rollouts.

5. Field training and DCO enablement

Delivered hands-on training sessions and operational debriefs for DCOs across regions. Developed SOPs and quick-reference guides tailored to field conditions.

6. DCO Mobile decommissioning and post-launch support

Managed the structured sunset of DCO Mobile once all regions had transitioned. Established Zendesk-based support workflows to track and resolve post-launch issues through the stabilization period.

The most complex moment in the project was coordinating the final DCO Mobile decommission during an active international testing period. Timing the sunset required aligning with competition schedules, lab processing windows, and ADAMS submission deadlines across multiple governing bodies simultaneously.

Outcomes and impact

The migration completed on schedule across all regions with no disruption to live athlete testing operations.

40% Reduction in administrative overhead from eliminating manual reconciliation
0 Testing operation disruptions during the full migration period
45+ National Governing Bodies on a unified, globally standardized platform
40K+ Athletes whose data now flows directly into WADA's global compliance system

CRM & Data · Platform Implementation

Implementing Salesforce Nonprofit Cloud at TRAILS — From Fragmented Data to a National System of Record

How I led the end-to-end Salesforce implementation that transformed TRAILS' program operations — migrating 41,354 records, onboarding 120 users, and establishing a unified platform capable of supporting a national network of 12,000+ schools and 30,000+ mental health professionals.

Organization: TRAILS
Duration: ~8 months (2024)
Role: Implementation Lead & Data Migration Owner
Scope: 41,354 records · 120 users · 12K+ partner orgs
Salesforce Nonprofit Cloud · Salesforce Data Loader · ETL / Data Migration · Airtable · RBAC · Data Governance · QA & Validation

The context

TRAILS had grown from a regional pilot into a national mental health program serving thousands of schools across the country. That growth had exposed a structural problem with how the organization managed its operational data. Program delivery, training operations, and stakeholder engagement each ran through their own tools and processes, with Airtable serving as the closest thing to a shared system of record. In practice, data lived in disconnected places, reporting was unreliable, and leadership was making expansion decisions without a clear view of what was happening on the ground.

The scale constraint: TRAILS was actively expanding during the implementation. The data model and workflows I designed had to accommodate not just current operations, but a program footprint that was growing week over week throughout the project.

What we were working with

  • Disconnected systems — program data, training records, and stakeholder information lived in separate tools with no reliable linkage
  • Limited reporting visibility — leadership had no reliable way to understand program delivery status or training outcomes at a national level
  • Manual process overhead — routine operations required significant manual coordination across tools, slowing teams down
  • No scalable access controls — the absence of structured data governance created real security and compliance exposure as the organization grew

Key decisions and tradeoffs

Build the data model around business processes, not Salesforce defaults

Salesforce Nonprofit Cloud ships with a standard model optimized for donor management. TRAILS' primary use cases were program delivery and training operations. Rather than adapting our processes to fit the standard model, I defined custom object relationships grounded in how TRAILS actually operated.

Data migration quality over migration speed

Migrating 41,354 records required extensive pre-migration cleaning, deduplication, and validation. I invested heavily in data quality before migration rather than moving data quickly and cleaning post-launch. A Salesforce system loaded with dirty data would have undermined adoption immediately.
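The cleaning pass described here can be illustrated with a small sketch: normalize the fields used for matching, then collapse duplicates before anything is loaded. The field names and matching key are hypothetical, not the actual Airtable schema:

```python
# Hypothetical pre-migration cleaning pass: normalize matching fields,
# then keep the first occurrence of each key and set duplicates aside
# for manual review. Field names ("email", "name") are illustrative.

def normalize(record):
    """Trim and lowercase the fields used for duplicate matching."""
    return {
        **record,
        "email": record.get("email", "").strip().lower(),
        "name": " ".join(record.get("name", "").split()),
    }

def dedupe(records, key="email"):
    """Return (survivors, duplicates) after normalizing every record."""
    seen, survivors, duplicates = set(), [], []
    for rec in map(normalize, records):
        k = rec[key]
        if k and k not in seen:
            seen.add(k)
            survivors.append(rec)
        else:
            duplicates.append(rec)
    return survivors, duplicates

raw = [
    {"name": "Ada  Lovelace", "email": "Ada@example.org "},
    {"name": "Ada Lovelace", "email": "ada@example.org"},
    {"name": "Grace Hopper", "email": "grace@example.org"},
]
clean, dupes = dedupe(raw)
```

Setting duplicates aside rather than silently dropping them matters: each one is a judgment call a human should make before the record of truth is established.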

RBAC as a design constraint, not a post-launch task

Access controls were designed into the data model from the beginning rather than added at the end. With sensitive program data and student-adjacent records in scope, the security architecture had to be deliberate. Retrofitting RBAC onto a live system is significantly harder and riskier than building it in from day one.

Post-launch stabilization as a planned phase

I scoped a dedicated post-launch stabilization period into the project plan — a window for monitoring adoption, resolving workflow friction, and refining configurations based on real-world use. Treating go-live as a transition point rather than the finish line produced a smoother landing and higher long-term adoption.

How it came together

1. Requirements gathering and data modeling

Conducted cross-functional requirements sessions with program delivery, training operations, and stakeholder engagement teams. Designed the Salesforce data model — object relationships, fields, and workflows — to reflect actual business operations.

2. Data audit and pre-migration cleaning

Audited existing Airtable data for duplicates, inconsistencies, and structural mismatches with the target Salesforce model. Cleaned and transformed records to meet migration quality standards before any data was moved.

3. ETL migration via Salesforce Data Loader

Executed the migration of 41,354 records in staged batches with validation checks between each load. Confirmed record integrity, relationship linkages, and field mapping accuracy before proceeding to subsequent stages.
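In spirit, the staged loading looked something like the sketch below. `load_batch` stands in for an actual Data Loader run, and the required-field list is a hypothetical example, not the real object model:

```python
# Illustrative staged batch loading with a validation gate between stages.
# load_batch() is a stand-in for a real Data Loader run; REQUIRED_FIELDS
# is a hypothetical example of the integrity checks applied per batch.

REQUIRED_FIELDS = ("Id", "Name", "AccountId")

def batches(records, size):
    """Yield records in fixed-size stages."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

def validate(batch):
    """Fail fast if any record in the batch is missing a required field."""
    for rec in batch:
        missing = [f for f in REQUIRED_FIELDS if not rec.get(f)]
        if missing:
            raise ValueError(f"record {rec.get('Id')!r} missing {missing}")

def migrate(records, size=200, load_batch=lambda b: len(b)):
    """Load in stages; no batch moves until the previous one validates."""
    loaded = 0
    for batch in batches(records, size):
        validate(batch)          # gate: a dirty batch stops the migration
        loaded += load_batch(batch)
    return loaded
```

The design choice is the gate itself: a failed check halts the run at a known stage boundary, so there is never a half-loaded, half-validated state to untangle.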

4. RBAC design and implementation

Designed and implemented role-based access controls aligned to organizational hierarchy and data sensitivity levels. Ensured staff access was appropriately scoped by role across program data, partner information, and reporting.
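A role-based model of this kind reduces to a mapping from roles to object-level permission sets, checked at every access. The roles and object types below are invented for illustration, not TRAILS' actual hierarchy:

```python
# Minimal sketch of role-scoped access checks. Role names, object types,
# and the permission matrix are hypothetical.

ROLE_PERMISSIONS = {
    "program_staff":   {"program_data": {"read", "write"}},
    "partner_success": {"program_data": {"read"},
                        "partner_info": {"read", "write"}},
    "leadership":      {"program_data": {"read"},
                        "partner_info": {"read"},
                        "reports": {"read"}},
}

def can(role, action, obj):
    """True only if the role's permission set grants the action."""
    return action in ROLE_PERMISSIONS.get(role, {}).get(obj, set())
```

The default-deny shape is the point: an unknown role, action, or object resolves to an empty set, so access must be granted explicitly rather than revoked after the fact.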

5. QA validation across user scenarios

Conducted comprehensive QA cycles covering standard user workflows for each team, cross-functional data flows, and edge cases including bulk operations and reporting accuracy.

6. Go-live, onboarding, and stabilization

Managed the go-live transition for 120 internal users. Supported onboarding through training and documentation. Maintained a stabilization period post-launch to monitor adoption and address workflow friction.

The hardest part of this implementation wasn't technical — it was managing the workflow changes for staff who had built daily habits around Airtable. Solving that required ensuring the new system immediately felt more helpful than the old one, not just more complex. That demanded careful onboarding quality and rapid response to post-launch friction.

Outcomes and impact

41,354 Records successfully migrated with validated data integrity
35% Improvement in data accuracy and consistency across program records
120 Internal users onboarded to the new platform at go-live
12K+ Partner schools now managed through a single unified system of record

For the first time, TRAILS' executive team had reliable, real-time reporting on program delivery, training completion, and stakeholder engagement at a national scale — directly informing expansion decisions in a way that fragmented Airtable data never could.

Product Delivery · Web Application

Delivering Athlete Connect — A Compliance Platform for 40,000+ Athletes

How I led the cross-functional delivery of a modern athlete compliance platform that replaced a low-adoption legacy system — improving Whereabouts reporting accuracy by 30% while maintaining full alignment with World Anti-Doping Agency standards.

Organization: USADA
Duration: ~9 months (2021–2022)
Role: Delivery Lead & UAT Owner
Scope: 40,000+ athletes · 45+ governing bodies
Agile / Sprint Planning · Azure DevOps · SIMON · PostgreSQL · Global DRO · UAT · WADA Compliance

The context

USADA's athlete compliance operations ran on a legacy platform that had aged out of step with the demands placed on it. Whereabouts reporting — the process by which athletes in the Registered Testing Pool must file their daily location availability for no-advance-notice testing — requires precision and consistency. A system that's hard to use produces errors. Errors produce compliance risk. And in anti-doping, compliance risk has real consequences for athletes.

Athlete Connect was USADA's answer: a modern, purpose-built web application designed around how athletes actually work. My role was to lead the delivery of that platform across engineering, QA, and compliance teams — from requirements definition through production launch.

The regulatory constraint: Every workflow had to satisfy WADA's technical standards and the platform's usability goals at the same time. Building for athletes and building for regulators weren't competing priorities to trade off — both were non-negotiable.

What we were working with

  • Low adoption driven by poor usability — athletes avoided or minimized their use of the platform, increasing the likelihood of missed filings and compliance gaps
  • High error rates in Whereabouts reporting — a system that made errors easy was a systemic compliance risk under WADA rules
  • Limited operational visibility — staff had insufficient real-time insight into athlete compliance status for proactive outreach
  • Fragmented data — athlete testing records, drug reference data, and compliance history lived in separate systems with no unified view

Key decisions and tradeoffs

Regulatory translation before UX design

Before any workflow was designed, I mapped WADA's technical standards directly to system behaviors and validation rules. Getting the compliance boundaries established first meant usability improvements happened within a well-defined regulatory envelope, rather than creating gains that required painful rework later.

Scenario-based UAT over functional coverage testing

Rather than validating individual features in isolation, I designed UAT cycles around complete athlete workflows. Testing across full workflows surfaced integration gaps and edge cases that feature-level testing would have missed.

RTP athletes as a separate validation cohort

Athletes in the Registered Testing Pool have stricter compliance obligations. I carved out RTP-specific UAT scenarios to validate the platform correctly enforced additional requirements for this group — treating them as a distinct cohort rather than a generic athlete profile.

Integration validation as a delivery gate

I made integration validation across SIMON, PostgreSQL, and Global DRO a hard gate in the release process. No sprint was considered complete without confirmed data integrity across all three systems — slowing some cycles but preventing data consistency issues from reaching production.
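A delivery gate like this can be sketched as a simple cross-system reconciliation check. This is an illustrative sketch only, not USADA's actual implementation: the function names, system labels, and record IDs are all hypothetical, and a real gate would pull IDs from each system's API or database rather than in-memory sets.

```python
# Hypothetical sketch of a cross-system data-integrity gate: a release
# proceeds only when athlete record keys reconcile across every system.
# All names here (systems, IDs, functions) are illustrative.

def reconcile(system_ids: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per system, the record IDs missing relative to the union."""
    union = set().union(*system_ids.values())
    return {name: union - ids for name, ids in system_ids.items()}

def gate_passes(system_ids: dict[str, set[str]]) -> bool:
    """The gate passes only when no system is missing any record."""
    return all(not missing for missing in reconcile(system_ids).values())

# Example: one record has not yet synced to the third system.
ids = {
    "system_a": {"A1", "A2", "A3"},
    "system_b": {"A1", "A2", "A3"},
    "system_c": {"A1", "A2"},
}
assert not gate_passes(ids)                      # gate blocks the release
assert reconcile(ids)["system_c"] == {"A3"}      # and names the gap
```

The point of the pattern is that the gate fails loudly and specifically, so a blocked sprint comes with an actionable list of divergent records rather than a vague "data mismatch."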

How it came together

1. Regulatory requirements mapping

Translated WADA's technical standards into concrete system workflows, validation rules, and acceptance criteria — the foundation all delivery decisions were measured against.

2. Agile delivery — sprint planning and backlog ownership

Drove sprint planning and backlog refinement across engineering, QA, and compliance teams. Prioritized features based on regulatory criticality and athlete workflow impact.

3. Integration build and validation

Coordinated integration across SIMON, PostgreSQL, and Global DRO. Validated data consistency, real-time sync behavior, and edge case handling across all three systems before each sprint release.

4. Scenario-based UAT across athlete cohorts

Designed and executed structured UAT cycles covering standard workflows, RTP-specific compliance scenarios, and edge cases. Tracked defects through to resolution before production sign-off.

5. Friction point resolution and launch

Identified and resolved high-frequency workflow friction points during UAT — particularly in the Whereabouts filing flow — before launch. Coordinated the production launch and retirement of the legacy platform.

The most nuanced delivery challenge was the tension between the compliance team's requirement for strict validation rules and the product goal of reducing user friction. Resolving that required sitting in the middle — understanding both the regulatory intent behind each requirement and the usability cost of implementing it literally.

Outcomes and impact

  • 30% improvement in Whereabouts reporting compliance accuracy
  • 25% increase in successful compliance workflow completion rates
  • 40K+ athletes on the platform, replacing a low-adoption legacy system
  • 3 systems unified into a single source of truth for athlete compliance data

Athlete Connect established a centralized, integrated compliance platform that gave USADA teams real-time visibility into athlete status for the first time — shifting from reactive case management to proactive outreach, which is a fundamentally more effective operating model.

Product Delivery · Mobile Platform

Delivering DCO Mobile — Digitizing Field Operations for Anti-Doping Testing

How I led the delivery of a custom mobile platform that replaced paper-based doping control workflows — eliminating manual data entry errors, accelerating lab coordination, and establishing legally defensible digital records for 40,000+ athletes.

Organization: USADA
Duration: ~12 months (2021)
Role: Delivery Lead & UAT Owner
Scope: 40,000+ athletes · Field operations nationwide
Azure DevOps · CI/CD · SIMON · PostgreSQL · SFTP Pipelines · UAT · Vendor Management · Legal Compliance

The context

When a Doping Control Officer conducts an athlete test in the field, every step — from notification to sample collection to chain-of-custody documentation — must be recorded with precision. The data generated is legally significant: it forms the evidentiary basis for anti-doping rule violation proceedings and must be defensible under international arbitration standards.

For years, USADA managed this process on paper. Officers completed forms by hand, scanned them, emailed them to headquarters, and then staff re-entered the data into internal systems. As testing operations scaled, this workflow became an operational liability — delays accumulated, transcription errors introduced data integrity risk, and the paper trail was slower and harder to audit than a digital record.

The legal constraint: DCO Mobile wasn't just a productivity tool — it was producing records that could be scrutinized in international arbitration. Every design decision about data capture, chain-of-custody documentation, and audit trails had to meet the evidentiary standards of the World Anti-Doping Code.

What we were working with

  • Manual data entry at two points — in the field on paper, then again at headquarters during transcription — doubled the opportunity for error
  • Transmission delays — physical forms moved through scan-email-entry workflows that added hours between field collection and lab receipt
  • Limited traceability — reconstructing chain of custody for a given sample required piecing together paper records, scanned documents, and email threads
  • No pre-arrival data to labs — laboratories received physical samples before any accompanying data arrived, increasing processing time

Key decisions and tradeoffs

Field workflow mapping before system design

Before any technical requirements were written, I worked with field teams to map the actual end-to-end testing workflow from officer notification through to sample handoff at the lab. This grounded the system design in operational reality, surfacing field-specific constraints that wouldn't have been obvious from a headquarters perspective.

Legal standards as hard acceptance criteria

I translated USADA's legal and compliance requirements into explicit UAT acceptance criteria that could not be waived. A feature that improved usability but compromised legal defensibility was not acceptable. This created productive tension with the vendor but produced a more defensible system.

CI/CD with controlled release gates

I implemented a CI/CD process with deliberate release gates rather than continuous deployment. Each release required confirmed UAT sign-off and integration validation before promotion to production. The small cost in deployment velocity was worth the risk reduction in a legally sensitive system.
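The gate logic here can be sketched as explicit, non-waivable promotion criteria. A minimal sketch, assuming a hypothetical `Release` record; the field names are illustrative and the real gates lived in Azure DevOps release approvals, not application code.

```python
# Hedged sketch of a controlled release gate: promotion to production
# requires explicit, recorded sign-offs rather than automatic deployment.
# The Release type and its fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Release:
    version: str
    uat_signed_off: bool = False
    integration_validated: bool = False

def can_promote(release: Release) -> bool:
    """Both gates must be explicitly confirmed before production."""
    return release.uat_signed_off and release.integration_validated

r = Release("2.4.0", uat_signed_off=True)
assert not can_promote(r)        # integration validation still pending
r.integration_validated = True
assert can_promote(r)            # both gates confirmed; safe to promote
```

The design choice is that both flags default to False: a release is blocked unless someone affirmatively signs off, which is the right failure mode for a legally sensitive system.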

Pre-arrival lab data as a core feature

The ability to transmit testing data to laboratories before the sample physically arrived was scoped as a core delivery requirement, not a phase-two enhancement. Building it in from the start meant labs could begin preparation workflows in parallel with sample transit — a meaningful operational gain.

How it came together

1. Field workflow mapping and requirements definition

Mapped the end-to-end testing workflow with DCOs, translating operational steps into system requirements and legal constraints into formal acceptance criteria.

2. Vendor partnership and delivery coordination

Served as the primary interface between USADA and the external software vendor. Managed sprint planning, backlog prioritization, and release coordination via Azure DevOps.

3. Integration build and validation

Coordinated integration across SIMON, PostgreSQL, and SFTP pipelines for secure lab data transmission. Validated data consistency and lab format compatibility before each sprint sign-off.

4. Structured UAT across real-world field scenarios

Designed UAT cycles based on actual DCO field scenarios — standard tests, no-advance-notice events, and edge cases including connectivity loss and sample exceptions. Validated chain-of-custody documentation and audit log completeness against legal criteria.

5. Launch and paper process retirement

Coordinated the production launch and retirement of paper-based workflows. Developed training materials and SOPs for DCOs. Monitored post-launch data quality through the stabilization period.

The most complex part wasn't the technology — it was the handoff from paper to digital in a legally charged context. Convincing field teams and compliance stakeholders that a digital record was as defensible as a signed paper form required demonstrating through structured UAT that the digital audit trail was actually more complete and traceable than paper had ever been.

Outcomes and impact

  • 100% of athlete testing data collection digitized — paper workflows fully retired
  • 30% reduction in manual data entry errors
  • 20% improvement in turnaround time for testing and results processing
  • 1 unified source of truth for athlete testing data across all connected systems

IT Operations · Mission-Critical Infrastructure

Leading IT Operations for Live Satellite Launch Events — Air Force Satellite Control Network

How I served as the central IT coordination point for live satellite launch operations — owning readiness across infrastructure, applications, and distributed support teams to maintain 100% system availability during zero-tolerance launch windows.

Organization: United States Air Force
Duration: 2013–2019 (multiple events)
Role: IT Operations Lead & Coordination Point
Scope: 10+ C2 networks · $8.2B network infrastructure
Mission-Critical Operations · Incident Response · Contingency Planning · Cross-Unit Coordination · Pre-Launch Validation · Real-Time Escalation

The context

The Air Force Satellite Control Network (AFSCN) is the global ground-based infrastructure that commands and controls U.S. military satellites. It operates across geographically distributed sites, with each location dependent on communications infrastructure, command and control systems, and IT support that must be available continuously — but especially during launch windows, when new satellites are being brought into operational status.

Launch events are operationally compressed and unforgiving: the entire sequence runs on a precise timeline. Any gap in IT infrastructure, communications availability, or systems support during that window doesn't produce a minor inconvenience; it produces mission impact. My role was to ensure that never happened.

The operational reality: There is no "we'll fix it after the launch" in satellite operations. Problems during a launch window have to be resolved in real time, with no ability to pause or reschedule. The entire value of pre-launch preparation is reducing the probability of that scenario to as close to zero as possible.

What we were working with

  • Zero tolerance for downtime — any failure in command and control systems or communications during a launch window had direct mission consequences
  • Geographically distributed dependencies — multiple sites across different time zones had to be confirmed ready before a launch could proceed
  • Compressed resolution windows — issues that could be addressed over hours in normal operations had to be resolved in minutes during a live event
  • High consequence of escalation gaps — unclear escalation paths could delay issue resolution past the point where recovery was possible
  • Limited rehearsal opportunity — launches happen on fixed schedules determined by orbital mechanics, not IT readiness

Key decisions and tradeoffs

Structured pre-launch validation over informal readiness checks

Rather than relying on experienced operators to informally confirm system status, I implemented a structured validation checklist that systematically stepped through every critical system, communication channel, and escalation path before the window opened. This took longer but caught issues early enough to resolve them — rather than discovering them at T-minus-zero.
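A checklist like this can be sketched as a runner that executes every check and reports exactly which ones failed, so nothing proceeds on a partial green. This is a hypothetical illustration; the check names and callables are invented, and the real process was a documented operational checklist, not a script.

```python
# Illustrative sketch of a structured readiness checklist: every check
# runs, and any failure is surfaced by name early enough to act on it.
# All check names and their callables are hypothetical.
from typing import Callable

def run_readiness_checks(checks: dict[str, Callable[[], bool]]) -> list[str]:
    """Run every check in order; return the names of those that failed."""
    return [name for name, check in checks.items() if not check()]

checks = {
    "comms_link_primary": lambda: True,
    "comms_link_backup": lambda: True,
    "c2_application_status": lambda: False,   # simulated failure
    "escalation_roster_confirmed": lambda: True,
}
failed = run_readiness_checks(checks)
assert failed == ["c2_application_status"]    # window does not open
```

Note that the runner deliberately executes every check even after one fails: during a pre-launch window you want the complete picture of what's broken, not just the first problem found.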

Pre-defined escalation paths rather than ad-hoc coordination

In a geographically distributed operation, ad-hoc escalation during a live event is a recipe for confusion and delay. Before each launch, I established and communicated explicit escalation paths: who gets contacted for which category of issue, in what sequence, through which channel.
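Pre-defined paths like these amount to a routing table: issue category in, ordered contact sequence out. A minimal sketch under invented names; the categories, roles, and sequences below are illustrative, not the actual AFSCN escalation structure.

```python
# Hypothetical sketch of a pre-defined escalation table: each issue
# category maps to an ordered contact sequence, so routing during a
# live event is a lookup, not a judgment call. All entries illustrative.
from typing import Optional

ESCALATION_PATHS = {
    "comms_degradation": ["site_noc", "network_lead", "ops_director"],
    "application_outage": ["app_support", "systems_lead", "ops_director"],
}

def next_contact(category: str, attempts: int) -> Optional[str]:
    """Return the next contact in the sequence, or None when exhausted."""
    path = ESCALATION_PATHS.get(category, [])
    return path[attempts] if attempts < len(path) else None

assert next_contact("comms_degradation", 0) == "site_noc"
assert next_contact("comms_degradation", 3) is None   # path exhausted
```

The value is in what the table removes: during a live event nobody has to decide who to call, only to follow the next entry.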

Contingency planning for the highest-probability failure modes

I developed contingency plans specifically for the failure modes that history and operational analysis indicated were most likely — communications link degradation, application availability issues, and coordination gaps between distributed sites. Pre-thought responses meant faster reaction times when they occurred.

Post-launch reviews as operational investment

After each launch event, I conducted structured reviews to identify what had worked, what had required improvisation, and what should change. Treating these as genuine operational inputs — not administrative paperwork — produced improvements that made each subsequent launch more reliable.

How it came together

1. Pre-launch systems readiness validation

Executed structured validation of all critical systems, applications, and communication channels before each launch window. Confirmed readiness across all geographically distributed sites and documented status before proceeding.

2. Escalation path and contingency plan establishment

Defined and communicated explicit escalation paths for each category of potential issue. Briefed distributed support teams on contingency plans before each event so responses were pre-understood, not improvised.

3. Real-time coordination during launch windows

Served as the central IT coordination point during live events — monitoring system status across sites, directing real-time troubleshooting, managing communications between distributed teams, and making rapid escalation decisions.

4. Post-launch review and process improvement

Conducted structured post-event reviews after each launch to capture what had worked, what had required improvisation, and what process changes would improve the next event.

The most valuable thing I learned wasn't a technical skill — it was the discipline of pre-thinking. Every scenario thought through before the launch window is a scenario that can be responded to calmly and correctly during it. The operators who perform best under pressure aren't the best improvisers; they're the ones who've done the most preparation. That principle has shaped how I approach every high-stakes delivery since.

Outcomes and impact

  • 100% system availability maintained across all launch windows
  • 0 critical IT failures during live launch events
  • 10+ command and control networks supported simultaneously
  • $8.2B network infrastructure value supported by these operations

The operational processes developed through these launch events — structured pre-launch validation, explicit escalation paths, and disciplined post-event review — became repeatable standards that improved consistency across subsequent events. The broader IT operations program was recognized as a Best Practice by the Air Force Space Command Inspector General.