

ENGINEERING
My approach is holistic: I design, build, test, and iterate systems and products with attention to every aspect of applicability, from UX to DFM to customer engagement to long-term business strategy, applying sound engineering principles every step of the way.
Here are some of my recent projects, including both mechanical design and systems engineering work.
CASE STUDY
1
XR Visor Systems Engineering
2015 - 2022
Project Overview
- As mentioned in the Design section, this Mixed Reality headset is the flagship product of my first startup, FYR Inc. I originally conceived of this device and its operating principles during my time with the Skunk Works® at Lockheed Martin as a means to provide fast, retinal-quality MR to pilots of high-performance aircraft, improving situational awareness by spanning their entire field of view while minimizing eye strain and fatigue.
- In 2019, through cooperation with Lockheed Martin and following years of internal R&D, we launched FYR to commercialize the technology, a rare emergence of Skunk Works® innovation into the commercial market.

My Role & Responsibilities
- Inventor of the underlying "Facet" technology, a tileable, high-resolution, electro-optical light field chip
- Overall project vision, technological strategy, and general approach
- I wore many hats on this project, beginning with the very first proof-of-concept sketches in 2015. Throughout the project I handled all industrial design, optomechanical design, electromechanical design, mechanical assembly, prototyping, and manufacturing. Below is more information on my systems engineering and mechanical design efforts.
The Challenges
- For each eye, implement what's been referred to in many circles as the "Visual Turing Test" for AR/VR, providing seamless integration of real-world and virtual imagery
- Deliver crisp, ultra-high-resolution details that match the limits of the user's visual acuity, properly managing the required data bandwidth
- Provide a method for distributing the view across as much of the user's FOV as possible for maximum situational awareness
- Create a way for virtual objects to occlude the real-world scene that performs well even in extremely bright outdoor environments





[Diagram labels: Power / Data IO · Proprietary Image Sensor · Proprietary Optical Layer · Rigid-Flex PCB · Proprietary Optical Layer · Proprietary Micro Display]
[Figure: A rough approximation of the human visual field, my design target for spanning the full field of view]
My Solution
- To address all the main needs, I first divided the view into individual sections akin to the planar faces on a polygonal surface, where each face represents an isolated view frustum as well as a discrete camera and display combination. I then searched for ways to simultaneously reduce the thickness required for the optics while providing the resolution necessary to perceive arcminute details for maximum visual fidelity. After considering approaches such as metamaterial optics, pancake lenses, HOEs, and various other focusing systems, I chose microlens arrays for integral imaging for the initial proof-of-concept tests, given their relative manufacturing simplicity, tiny focal lengths, and resulting light field benefits, as shown by Douglas Lanman and David Luebke in their 2013 research on near-eye light field displays. I realized the same principles could be applied to an adjacent layer of image sensors with corresponding optics, and that achieving visual acuity required sensor and display pixels small enough to deliver adequate spatial and angular resolution.
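To illustrate why such small pixels are needed, here is a rough estimate of the display pixel pitch required for a lenslet to resolve arcminute detail. The focal length used is an assumed placeholder, not our actual optics:

```python
import math

# Illustrative only: pixel pitch small enough that one pixel subtends
# no more than 1 arcminute behind a lenslet of the given focal length.
ARCMIN_RAD = math.radians(1 / 60)  # 1 arcminute in radians

def max_pixel_pitch_um(focal_length_mm):
    """Largest pixel pitch (in µm) that still resolves 1-arcminute detail."""
    return focal_length_mm * math.tan(ARCMIN_RAD) * 1000  # mm -> µm

print(round(max_pixel_pitch_um(3.0), 2))  # ~0.87 µm for an assumed 3 mm focal length
```

With short focal lengths in the low millimeters, the required pitch lands well below a micron, which is why sub-arcminute fidelity pushes toward very small sensor and display pixels.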
- From this idea was born the concept of our "Facet": a tileable electro-optical stacked chip capable of both capturing and displaying a light field. Essentially, it's a CMOS image sensor directly connected to an OLED-on-silicon microdisplay. Through much research and experimentation, my team and I found that with the right electrical architecture, carefully balanced optics, and customized software, the Facet architecture can indeed deliver a passthrough volumetric light field experience with sufficient performance to achieve what we refer to as "emulated transparency".
- The real benefits of dividing the view into these Facets emerge when thinking through the other profound challenges of encompassing the user's entire FOV with maximum-resolution content. This requires managing enormous data throughput: arcminute details (60 pixels per degree) spread over the entire human visual field (180° x 135°) at an appropriate framerate (minimum 60 Hz) with a convincingly wide gamut (at least 10-bit color) requires more than 52 Gbps. With this approach, however, each individual Facet only has to handle its small subset of that data, and injection of virtual or augmented content can be foveated to match the necessary resolution of the user's current view direction, easily manageable with today's hardware.
- My latest Visor and Facet designs span almost the entire visual field through a tiled array of 88 Facets (44 per eye), following a curved structure similar to wrap-around sunglasses. To ease the complex optical design, I used Rhino's Grasshopper toolset to build a procedural optical system in harmony with the CAD. The system algorithmically maps 1500 individual "plenoptic cells" (roughly akin to the holographic hogel) from the Facets onto a 12 mm eye box by redirecting the optical axis of each individual cell to a given IPD, with enough variation possible to span a few millimeters. Because this is only an optical layer and software change, we can make several Visor sizes to account for any IPD. And by controlling both the directly connected sensor and display axes, we can effectively align incoming light to outgoing light digitally with imperceptible latency, generating a convincing perception of the real world.
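The per-cell redirection can be sketched as simple aiming geometry. This is a minimal illustration with assumed coordinates and eye relief, not the actual Grasshopper implementation:

```python
import math

# Illustrative sketch: steer each plenoptic cell's optical axis toward the
# pupil position implied by a given IPD. Coordinates are measured in mm from
# the nose centerline; the eye relief value is an assumed placeholder.
def cell_aim_deg(cell_x_mm, cell_y_mm, ipd_mm, eye_relief_mm=18.0):
    """Return (yaw, pitch) in degrees aiming a right-eye cell at the pupil."""
    pupil_x = ipd_mm / 2.0  # right pupil sits at +IPD/2 from the centerline
    yaw = math.degrees(math.atan2(pupil_x - cell_x_mm, eye_relief_mm))
    pitch = math.degrees(math.atan2(-cell_y_mm, eye_relief_mm))
    return yaw, pitch

print(cell_aim_deg(32.0, 0.0, 64.0))  # cell directly over the pupil -> (0.0, 0.0)
```

Running this over every cell for a target IPD yields the per-cell axis offsets; small IPD changes only perturb these angles, which is consistent with handling a few millimeters of variation in the optical layer and software.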
- Another benefit of the Facet approach is that each of these chips operates in parallel as a tiled array. This presents an interesting opportunity for extending the Facet's functionality to include AI for computer vision or real-time image recognition applications. Because we have 88 total Facets, we raise the total parallel processing capability by nearly two orders of magnitude over a single dedicated AI chip.

CASE STUDY
2
XR Loupes Systems Engineering
2022 - 2025
Project Overview
- As mentioned in the Design section, this light field passthrough Mixed Reality headset is the main product of my most recent startup, FYR Medical. The device connects to surgical navigation platforms to provide spine surgeons with a virtual view into the patient's anatomy even before an incision is made.

My Role & Responsibilities
- All industrial design, optomechanical design, electromechanical design, mechanical assembly, prototyping, and manufacturing
- Coordinating with surgeon advisors to capture and represent the voice of the customer, ensuring the design adequately meets all user needs for ergonomics, handling, storage, and performance
- During prototype development I was promoted to CTO, assuming oversight and management of all software and electrical design and fabrication in addition to my existing industrial design, mechanical, optomechanical, and electromechanical responsibilities
The Challenges
- For each eye, find a way to fit four different sensors, an OLED microdisplay, a ring of LED emitters, and all supporting electronic circuitry and optics into a volume similar to the ocular of conventional surgical loupes (roughly the size of a golf ball)
- Provide a means to dissipate over 4.5 W of thermal energy from the combined electrical components under max load, with only passive cooling
- Ensure the 2 image sensors, the microdisplay, and all associated optics remain rigidly fixed and precisely aligned during handling
- Keep the optical axis of the RGB image sensor and its lens assembly aligned with the optical axis of the microdisplay and its associated lens assembly
- Create a connection/disconnection mechanism between the Eyepiece assembly and the Bridge that remains stable when locked in place
- Keep weight and bulk to a minimum, using the lightest and strongest materials that meet the design objectives while minimizing manufacturing costs
My Solution
- After working through various design possibilities using raw image sensors with our own housings and lens mounts, I instead opted for tiny board-level camera modules for both SLAM and RGB input. This facilitated the design of a rigid chassis to support them, along with the remaining IMU and ToF sensors, the OLED microdisplay, and our custom PCB assemblies.
- To save space and weight and avoid potential connection issues, I elected for an accordion-style folding rigid-flex PCB footprint. This required a thorough understanding of the electromechanical limitations and rigid-flex design constraints, such as the minimum distance from SMD components to the board edge and the minimum bend radius of the flexible board regions, both of which influence the internal enclosure and chassis design.
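Constraints like these can be captured as a simple design-rule check. The numeric limits below are illustrative placeholders (real values come from the fab house and IPC-2223 guidance), not our actual rules:

```python
# Hypothetical rigid-flex design-rule check; limits are placeholders.
MIN_EDGE_CLEARANCE_MM = 0.5    # assumed SMD-pad-to-rigid-edge clearance
MIN_BEND_RADIUS_FACTOR = 10.0  # assumed: bend radius >= 10x flex thickness

def rigid_flex_ok(edge_clearance_mm, bend_radius_mm, flex_thickness_mm):
    """True if a layout satisfies both assumed rigid-flex rules."""
    return (edge_clearance_mm >= MIN_EDGE_CLEARANCE_MM
            and bend_radius_mm >= MIN_BEND_RADIUS_FACTOR * flex_thickness_mm)

print(rigid_flex_ok(0.6, 2.0, 0.15))  # True: 2.0 mm radius clears the 1.5 mm minimum
```

Checks like this let the enclosure and chassis geometry be validated against the electrical constraints early, before committing to a fold pattern.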
- Once we knew the individual components necessary for our electronic architecture, I was able to use the assembly's max power draw data to better understand how to dissipate the heat. Because we're pushing extreme amounts of data (our initial prototypes operate at 20 Gbps) through such a compact Eyepiece (internal volume under 20 cubic centimeters), we were dealing with an enormous power density at this tiny scale.
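A quick back-of-envelope using the figures quoted in this case study shows the power density involved; since the volume is a stated upper bound, the true density is somewhat higher:

```python
# Rough power-density estimate from the numbers cited in this case study.
total_power_w = 4.714        # combined eyepiece electrical load at max draw
internal_volume_cc = 20.0    # stated upper bound on internal Eyepiece volume
density_w_per_cc = total_power_w / internal_volume_cc
print(round(density_w_per_cc, 3))  # ~0.236 W/cm^3, higher in practice since volume is "<20 cc"
```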
- By selecting aluminum as the chassis material, I was able to design parts for 5-axis CNC milling that also align to individual ICs and components to act as heat sinks. I created 6 individual parts, interconnected with the folding rigid-flex board, that conduct heat through the chassis plates to the exterior enclosure. I also created custom vents and fins on each of these plates for convective heat transfer.
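To get a feel for the convective side, here is a rough natural-convection sizing from Q = h·A·ΔT. The convection coefficient and temperature rise are assumed values for illustration, not our actual thermal analysis:

```python
# Illustrative passive-cooling sizing: required convective surface area
# from Q = h * A * dT, with assumed coefficient and temperature rise.
Q_W = 4.714      # heat to reject (max combined load)
H_W_M2K = 10.0   # assumed natural-convection coefficient for still air
DT_K = 20.0      # assumed enclosure-to-ambient temperature rise
area_cm2 = Q_W / (H_W_M2K * DT_K) * 1e4  # m^2 -> cm^2
print(round(area_cm2, 1))  # ~235.7 cm^2 of effective convective surface
```

Numbers in this range explain why fins and vents are needed: the bare enclosure alone would not provide enough effective surface area at a modest temperature rise.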

Eyepiece power budget (max load):
- Infrared LED Array: 380 mW
- SLAM Image Sensor: 156 mW
- Hi-Res RGB Image Sensor: 558 mW
- 8-Layer PCBA with 7 major ICs: 2020 mW
- 1.03" OLED Microdisplay: 1600 mW
- TOTAL: 4714 mW

Heat sink chassis plates:
- Plate 0: OLED Microdisplay
- Plate 1: USB Hub and Interface ICs
- Plate 2: Image Sensor ICs
- Plate 3: RGB Camera & Display Driver IC
- Plate 4: SLAM Camera & LED Array
- Plate 5: Infrared LED Array
- Each Eyepiece is attached to a Bridge with a strong mechanical arm-and-socket joint fastened by screws. These joints are strong enough to maintain alignment of all optical axes in relation to the user's eyes regardless of headset handling, while also providing a convenient and stable mechanism for customizing the XR Loupes for individual surgeons.





- The integration of the chassis / heat sink parts with the folding rigid-flex PCBA and associated electrical components requires a carefully orchestrated assembly sequence.
- The resulting electromechanical assembly is lightweight and compact, dissipating enough heat to protect the electronics while keeping the polyamide enclosure only slightly warm to the touch.
- All power and high-speed data transmission is routed up to the Bridge through a flex cable and gold-finger connectors that allow us to swap different Bridges to accommodate different IPDs.