About me

I am a Senior Software Engineer at Optum (part of UnitedHealth Group), specializing in Large Language Models (LLMs) and their applications in healthcare and enterprise AI. My background is in Natural Language Processing (NLP), where I have over ten years of experience working with text analysis, machine learning, and more recently, large-scale language model development.

Previously, I was a Lecturer in Computer Science at the University of North Carolina at Greensboro (UNCG), where I taught courses in systems programming, advanced data structures, and data science, and advised students on academic and research projects.

I completed my M.Sc. in Computer Science with a concentration in Big Data and Data Science at UNCG, working as a graduate research student in the IFFS-ML Lab under the supervision of Dr. Shan Suthaharan. My dissertation was titled LDEB: Label Digitization with Emotion Binarization and Machine Learning for Emotion Recognition in Conversational Dialogues.

I earned my B.Sc. in Computer Science from BRAC University, where I worked under the supervision of Dr. Amitabha Chakrabarty. My undergraduate thesis was titled Fake News Pattern Recognition using Linguistic Analysis.

My work and research

I specialize in large-scale language modeling for healthcare, with a focus on building, fine-tuning, and deploying domain-adapted LLMs under clinical, enterprise, and regulatory constraints.

I study approaches to advancing healthcare-focused LLMs, with particular emphasis on long-sequence processing and domain-specific adaptation for tasks such as medical coding and bridging communication between patients and healthcare providers. A core part of my work involves building domain adaptation pipelines that transform general-purpose LLMs into clinically specialized models.

In the past, my work focused on developing and training computational models to understand and generate human language with high precision. My research centered on two domains: detecting and countering misinformation, and improving conversational AI.

My earlier research explored three related threads: multimodal analysis spanning text, images, video, and audio; context-aware models that account for the circumstances surrounding a piece of content, including its historical context, user demographics, and network dynamics; and the cross-lingual and cross-cultural dimensions of misinformation, ensuring that detection models can be applied globally and adapted to diverse linguistic and cultural contexts.