Harry is currently a Researcher at MIT CSAIL. He has an educational background in Mathematics, Physics, and Statistics, and he completed his undergraduate studies at the University of California, Santa Barbara. After graduating, he worked as a Senior Data Analyst at Ipsos, a leading global market research and consulting firm, where he applied statistical, machine learning, and data mining techniques to maintain large databases and build proprietary analytical models. These models extracted insights from complex market data, enabling clients to optimize their business decisions.
During this time, Harry also served as a Research Assistant in Yale's Computer Science and Internal Medicine departments, where he collaborated with researchers across disciplines to develop an innovative Graph Neural Network framework. The framework leverages pre-trained non-textual foundation models for graph-based tasks, demonstrating that the self-attention layers of foundation models can be effectively repurposed on graphs to perform cross-node attention-based message passing.
At MIT, Harry has worked on a diverse range of projects spanning Computer Vision, Large Language Models, Beyond-CMOS Technologies, firm-level algorithmic progress, and Quantum Computing. His primary focus is developing cost-effective strategies for building AI systems that accurately reflect real-world use cases and scale reliably across different scenarios. He leads projects that explore efficient training methods and develop Scaling Laws for various applications, using techniques such as model distillation, fine-tuning, and prompt engineering.
Harry's research interests center on understanding the fundamental properties and limitations of modern AI systems and on developing efficient, scalable, and robust training strategies from both technical and economic perspectives. He is also interested in how current AI technologies drive innovation and sustain productivity growth. His current research areas include the Fundamental Laws of LLMs, Model Behavior and Performance Evaluation, and Foundation Models as Tools.