Nikhil Vytla, SM ’25, is developing AI software focused on reliable outcomes in healthcare.
Vytla’s journey in software development began in high school, when he created a mobile app with friends that aggregated news articles from various websites, translated them into multiple languages, and used text-to-speech technology to read them aloud. The inspiration came from his grandfather, who lost his sight due to age-related macular degeneration and could no longer read his favorite newspapers.
“My grandfather loved reading the news, but over time, he lost his vision and couldn’t read anymore,” Vytla recalled. “This motivated me to develop technology that helps people in my community — those I care about.”
Since then, Vytla has maintained a passion for technology, dedicating his education and career to applying artificial intelligence (AI) in healthcare. He was drawn to this field after witnessing how technology could address real-world challenges, especially for vulnerable groups who stand to benefit most from precise and accessible medical tools. In March 2025, he earned a Master’s degree in Health Data Science from Harvard T.H. Chan School of Public Health.
Creating Technology with Social Impact
Vytla earned his undergraduate degrees in Computer Science and Statistics from the University of North Carolina at Chapel Hill (UNC). He is deeply committed to leveraging his technical skills for social good, particularly in healthcare. “Healthcare is an incredible interdisciplinary field where technological innovation can create immediate, life-changing impacts,” he said. “Building software that helps doctors save lives or improve patient outcomes makes the work profoundly meaningful.” He also helped found UNC’s Computer Science + Social Good chapter, where student teams collaborate with local nonprofits and startups to develop mobile apps and other technologies.
One notable project involved creating virtual reality (VR) software for immunocompromised children at UNC Health. Confined to isolated hospital rooms due to infection risks, these children used VR headsets to explore virtual museums, underwater environments, and interactive games. “Seeing kids virtually swim with dolphins from their hospital beds, their faces lighting up with joy — it makes you realize technology is more than algorithms and code,” Vytla said. “It can restore happiness and possibility when people need it most.”
After graduation, Vytla worked as a software engineer at the startup TruEra, developing methods to better explain how AI models operate. “AI is like a black box,” he explained. “How do you know which features or factors are most important for a model’s decisions — for example, when predicting whether a patient has a certain disease?”
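One common way to probe a black-box model’s feature importance — a sketch of the general technique, not of TruEra’s actual product — is permutation importance: shuffle one input across patients and measure how much the predictions move. The risk model and patient features below are entirely made up for illustration.

```python
import random

# Hypothetical stand-in for a trained clinical model: a risk score driven
# mostly by age and blood pressure, and barely by heart rate.
def predict_risk(age, systolic_bp, heart_rate):
    return 0.6 * age + 0.4 * systolic_bp + 0.01 * heart_rate

# Small synthetic patient cohort.
random.seed(0)
patients = [(random.uniform(30, 80), random.uniform(100, 180), random.uniform(55, 100))
            for _ in range(200)]
baseline = [predict_risk(*p) for p in patients]

def permutation_importance(feature_idx):
    """Shuffle one feature across patients; the average change in the
    model's output is that feature's importance."""
    shuffled = [p[feature_idx] for p in patients]
    random.shuffle(shuffled)
    total = 0.0
    for p, value, base in zip(patients, shuffled, baseline):
        perturbed = list(p)
        perturbed[feature_idx] = value
        total += abs(predict_risk(*perturbed) - base)
    return total / len(patients)

scores = {name: permutation_importance(i)
          for i, name in enumerate(["age", "systolic_bp", "heart_rate"])}
print(scores)  # age and systolic_bp dominate; heart_rate is near zero
```

The same idea underlies fairness audits: if shuffling a protected attribute such as race or gender meaningfully changes a model’s outputs, that attribute is influencing decisions it legally should not.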
Vytla emphasized the importance of AI interpretability for fairness, especially regarding protected characteristics such as race and gender, which legally must not influence decisions on loans, credit, or health diagnoses. “My goal is not just to make AI smarter, but to make it fairly serve everyone,” he said. “I want to bridge cutting-edge AI research and practical tools to truly improve patient care in clinical settings.” He noted a pressing need for trustworthy AI in healthcare, citing research showing racial bias in some medical AI systems — such as algorithms underestimating pain in Black patients or diagnostic tools trained primarily on White populations, resulting in lower accuracy for people of color. These challenges motivated him to pursue further study at Harvard.
Advancing Trauma Diagnosis
During his Health Data Science program, Vytla joined a close-knit cohort of students. “We built a strong support network through demanding courses and complex problems,” he said. “That camaraderie was essential; without it, I’m not sure I would have completed the program.”
Alongside coursework, he completed a thesis project at Harvard Medical School and Beth Israel Deaconess Medical Center’s Surgical Informatics Lab, focusing on diagnosing trauma patients arriving at emergency rooms due to accidents or falls. He developed an AI model to assist surgeons in making more accurate diagnoses.
“Clinical decisions in trauma care are highly subjective,” he explained. “Different surgeons treat patients differently, which can lead to missed diagnoses, delayed treatment, or inconsistent outcomes.”
His model integrates multiple data sources, chiefly medical imaging results (X-rays, CT scans) described in clinical reports. Since different clinicians rarely use identical terms for the same observations, the model first analyzes text to standardize diagnostic terminology. It then incorporates patient demographics and physical exam findings. Based on this comprehensive data, the model generates a list of potential missed injuries and suggests follow-up tests.
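The pipeline’s two text-handling steps — normalizing clinicians’ varied terminology, then flagging potentially missed injuries — can be sketched roughly as follows. The synonym table, the injury associations, and the function names here are all hypothetical illustrations, not the lab’s actual model.

```python
# Hypothetical synonym table mapping varied report phrasing to canonical terms.
SYNONYMS = {
    "rib fx": "rib fracture",
    "fracture of rib": "rib fracture",
    "ptx": "pneumothorax",
    "collapsed lung": "pneumothorax",
}

def standardize(report_text):
    """Lower-case a clinical report and replace known synonyms with
    canonical diagnostic terms."""
    text = report_text.lower()
    for phrase, canonical in SYNONYMS.items():
        text = text.replace(phrase, canonical)
    return text

# Hypothetical association table: injuries that frequently co-occur with a
# documented finding, used to suggest follow-up checks, not diagnoses.
OFTEN_MISSED_WITH = {
    "rib fracture": ["pneumothorax", "pulmonary contusion"],
}

def suggest_followups(report_text, documented):
    """Return associated injuries not yet documented for this patient."""
    findings = standardize(report_text)
    suggestions = []
    for finding, associated in OFTEN_MISSED_WITH.items():
        if finding in findings:
            suggestions += [a for a in associated if a not in documented]
    return suggestions

print(suggest_followups("CT shows rib fx on the left",
                        documented={"rib fracture"}))
# → ['pneumothorax', 'pulmonary contusion']
```

A real system would learn these associations from data and also fold in demographics and exam findings, but the shape — standardize text first, then reason over the standardized terms — matches the pipeline described above.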
“This tool doesn’t provide diagnoses but advises surgeons or residents on what to check or test,” Vytla said. “It’s meant to supplement, not replace, clinical expertise.”
Evaluated on test datasets, the model tends to over-predict injuries, favoring safety. “In trauma care, false positives are preferable to false negatives. Extra CT scans may be inconvenient, but missing internal injuries can be fatal. We deliberately designed the system to err on the side of caution,” he added. The lab continues refining the model for future clinical use.
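That safety preference is typically implemented by tuning the model’s decision threshold for high sensitivity (recall) rather than overall accuracy. The sketch below, with made-up scores and labels, picks the highest threshold that still catches a target fraction of true injuries; everything here is illustrative, not the lab’s code.

```python
def pick_threshold(probs, labels, target_sensitivity=0.95):
    """Return the highest decision threshold whose sensitivity
    (true positives / all positives) meets the target."""
    positives = sum(labels)
    for t in sorted(set(probs), reverse=True):
        true_pos = sum(1 for p, y in zip(probs, labels) if p >= t and y == 1)
        if true_pos / positives >= target_sensitivity:
            return t
    return 0.0  # flag everyone if no threshold reaches the target

# Made-up model scores; label 1 = injury present, 0 = absent.
probs  = [0.95, 0.80, 0.62, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    1,    0,    1,    0,    0,    0]

t = pick_threshold(probs, labels, target_sensitivity=1.0)
print(t)  # → 0.4: catches every true injury, at the cost of one false alarm
```

Lowering the threshold trades extra follow-up scans (false positives) for fewer missed injuries (false negatives), which is exactly the trade-off described above.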
Building Trustworthy AI
Since graduating, Vytla has worked at Snowflake (which acquired TruEra last year) as a software engineer, focusing on improving the trustworthiness of AI models — particularly large language models like ChatGPT and Claude. His work includes developing ways to trace how models reach conclusions and encouraging models to cite sources and express uncertainty, making AI responses more transparent and verifiable.
He is also investigating whether AI outputs align with the models’ “thought processes.” “Are models truly expressing what they ‘think’? What does AI thinking really mean? These are questions I’m passionate about,” Vytla said.