As part of the Convolutional Neural Network (CNN) team, I apply machine learning techniques toward the Eye Multimodal Imaging in Neurodegenerative Disease (iMIND) research group's goal of harnessing multimodal retinal imaging to diagnose neurodegenerative diseases. My work largely involves developing model scripts in Python and PyTorch and running jobs on the Duke Compute Cluster through a remote terminal.
Current Work: Fused CNN Models
My team is testing fused convolutional neural networks for binary and multiclass classification of Parkinson's Disease, Alzheimer's Disease, and Mild Cognitive Impairment from retinal imagery. Features extracted from each imaging modality are concatenated before the final classification layers. This approach can yield more robust conclusions, as the model uses all relevant patient data together, rather than separately, to reach a diagnosis.
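To make the fusion step concrete, below is a minimal PyTorch sketch of late fusion by feature concatenation. The backbone choice (ResNet-18), the two-modality setup, and the layer sizes are illustrative assumptions, not the team's actual architecture.

```python
# Illustrative sketch only: backbones, modalities, and sizes are assumptions.
import torch
import torch.nn as nn
from torchvision import models


class FusedCNN(nn.Module):
    """Two-branch CNN that concatenates per-modality features before classifying."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # One backbone per imaging modality (e.g., OCT and fundus photography).
        self.branch_a = models.resnet18(weights=None)
        self.branch_b = models.resnet18(weights=None)
        feat_dim = self.branch_a.fc.in_features  # 512 for ResNet-18
        # Strip the classification heads so each branch outputs a feature vector.
        self.branch_a.fc = nn.Identity()
        self.branch_b.fc = nn.Identity()
        # Classifier operates on the concatenated features from both branches.
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
        feat_a = self.branch_a(img_a)               # (N, 512)
        feat_b = self.branch_b(img_b)               # (N, 512)
        fused = torch.cat([feat_a, feat_b], dim=1)  # (N, 1024)
        return self.classifier(fused)


if __name__ == "__main__":
    model = FusedCNN(num_classes=2)
    oct_batch = torch.randn(4, 3, 224, 224)
    fundus_batch = torch.randn(4, 3, 224, 224)
    print(model(oct_batch, fundus_batch).shape)  # torch.Size([4, 2])
```

Because the concatenated vector carries features from every modality, the classifier can weigh evidence across data types jointly rather than averaging separate per-modality predictions.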
Ophthalmic AI Summit
I presented my work on fused CNNs at the 5th annual Ophthalmic Artificial Intelligence Summit in May 2025. The summit showcases outcomes of applying artificial intelligence and machine learning to disease progression and ophthalmic clinical research.
My presentation begins at 5:17:50.
Past Work: Data Pipeline
During my first semester with iMIND, I helped develop a data pipeline that will allow us to combine the Parkinson's Disease, Alzheimer's Disease, and Mild Cognitive Impairment neural networks into a single comprehensive diagnostic algorithm. My independent study report details my week-to-week activities on the team, which included data preprocessing and DataLoader development.
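As a rough illustration of the DataLoader side of that pipeline, here is a minimal sketch of a paired-modality PyTorch Dataset. The manifest file, column names, and transforms are hypothetical placeholders, not the actual pipeline code.

```python
# Illustrative sketch only: file layout, CSV columns, and transforms are assumptions.
from pathlib import Path

import pandas as pd
import torch
from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms


class PairedRetinalDataset(Dataset):
    """Loads matched image pairs plus a diagnosis label for each patient."""

    def __init__(self, manifest_csv: str, root: str):
        # Manifest is assumed to have columns: oct_path, fundus_path, label.
        self.manifest = pd.read_csv(manifest_csv)
        self.root = Path(root)
        self.transform = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

    def __len__(self) -> int:
        return len(self.manifest)

    def __getitem__(self, idx: int):
        row = self.manifest.iloc[idx]
        oct_img = self.transform(Image.open(self.root / row["oct_path"]).convert("RGB"))
        fundus_img = self.transform(Image.open(self.root / row["fundus_path"]).convert("RGB"))
        label = torch.tensor(int(row["label"]))
        return oct_img, fundus_img, label


# Batches of paired images can then feed both branches of a fused model:
# loader = DataLoader(PairedRetinalDataset("manifest.csv", "data/"),
#                     batch_size=16, shuffle=True)
```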