developed performant Python-based ML code for GPU architectures
used cuNumeric and FlexFlow with the Legate runtime for acceleration (see sketch below)
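A minimal sketch of the cuNumeric/Legate usage pattern, assuming a simple stencil kernel as a stand-in for the actual project code; cuNumeric is imported in place of NumPy so unmodified array code is accelerated by the Legate runtime.

```python
# Sketch only: cuNumeric as a drop-in NumPy replacement under Legate.
import cunumeric as np   # on a CPU-only machine, "import numpy as np" behaves the same

def jacobi_step(u):
    """One Jacobi relaxation sweep; the sliced array arithmetic is offloaded by Legate."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
    return v

u = np.zeros((4096, 4096))
u[0, :] = 1.0                      # hot top boundary
for _ in range(100):
    u = jacobi_step(u)
print(float(u.mean()))
# Launched through the Legate driver, e.g.: legate --gpus 1 jacobi.py
```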
used large language models to aid translation of Fortran simulation codes to C++
used a mixture of LLMs with prompting (zero-/few-shot learning, see sketch below) and fine-tuning
created workflows and GUIs, and performed uncertainty analysis and validation.
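A hedged sketch of the few-shot prompting step for Fortran-to-C++ translation; the example pair, prompt wording, and the `query_llm` callable are placeholders for whichever models and client libraries were actually used.

```python
# Sketch of few-shot prompt construction for Fortran -> C++ translation.
# `query_llm` is any callable that sends a prompt to an LLM and returns text;
# it is a hypothetical stand-in, not a specific vendor API.
FEW_SHOT_EXAMPLES = [
    ("do i = 1, n\n  y(i) = a * x(i) + y(i)\nend do",
     "for (int i = 0; i < n; ++i) {\n  y[i] = a * x[i] + y[i];\n}"),
]

def build_prompt(fortran_src: str, examples=FEW_SHOT_EXAMPLES) -> str:
    """Assemble an instruction, a few worked examples, then the snippet to translate."""
    parts = ["Translate the following Fortran code to equivalent modern C++."]
    for f_src, cpp_src in examples:
        parts.append(f"Fortran:\n{f_src}\nC++:\n{cpp_src}")
    parts.append(f"Fortran:\n{fortran_src}\nC++:")
    return "\n\n".join(parts)

def translate(fortran_src: str, query_llm) -> str:
    """Zero-shot if FEW_SHOT_EXAMPLES is empty, few-shot otherwise."""
    return query_llm(build_prompt(fortran_src))
```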
created predictive machine learning models for in-depth analysis and visualization of ensemble wildfire data
created explainable models for such large-scale ensembles
created in situ ML algorithms for modeling extreme events and performing statistical inference
used neural networks for Gaussian process parameter estimation and unsupervised clustering for feature tracking and identification (see sketch below)
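A minimal, self-contained illustration of neural-network-based Gaussian process parameter estimation: a small network is trained on synthetic GP realizations with known RBF length scales and then predicts the length scale of new realizations. The grid, kernel, and network size are assumptions for the sketch, not the project's configuration.

```python
# Sketch: regress a GP kernel hyperparameter (RBF length scale) directly from realizations.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64)

def sample_gp(length_scale, n=1):
    """Draw n realizations of a zero-mean GP with an RBF kernel on a fixed 1D grid."""
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (d / length_scale) ** 2) + 1e-8 * np.eye(x.size)
    return rng.multivariate_normal(np.zeros(x.size), K, size=n)

# Training set: realizations labeled by the length scale that generated them.
scales = rng.uniform(0.05, 0.5, size=2000)
X = np.vstack([sample_gp(s) for s in scales])
net = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
net.fit(X, scales)

# Amortized estimate for a new realization, with no per-sample likelihood optimization.
print(net.predict(sample_gp(0.2)))
```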
coupled the Julia runtime with Fortran simulation codes, developed new streaming and parallel distributed inference techniques (see sketch below), performed scalability analysis, and handled extreme-scale simulation data
delivered to the open-source PRISM repository.
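A hedged example of the streaming-inference idea, assuming single-pass (Welford) moment accumulation over simulation timesteps; this illustrates the technique only and is not code from PRISM, and the field shape and random data are placeholders.

```python
# Streaming (single-pass) statistics over simulation output: each timestep is folded
# into running moments and then discarded, so extreme-scale data never sits in memory.
import numpy as np

class StreamingMoments:
    def __init__(self, shape):
        self.n = 0
        self.mean = np.zeros(shape)
        self.m2 = np.zeros(shape)          # running sum of squared deviations

    def update(self, field):
        """Welford update: fold one timestep into the running mean/variance."""
        self.n += 1
        delta = field - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (field - self.mean)

    @property
    def variance(self):
        return self.m2 / max(self.n - 1, 1)

acc = StreamingMoments((256, 256))
for t in range(1000):
    acc.update(np.random.rand(256, 256))   # stand-in for one simulation timestep
print(float(acc.mean.mean()), float(acc.variance.mean()))
```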
under the Exascale Computing Project, developed an automatic feature-based sampling method for compression of extreme-scale spatio-temporal datasets (see sketch below)
contributed the HistSampling filter to the VTK-m/VTK-h code base
used the Summit supercomputer for scaling and testing
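An illustrative Python sketch of histogram-based, feature-preserving sub-sampling, the general idea behind the sampling-for-compression work; the delivered filter itself lives in VTK-m/VTK-h, and the bin count, target fraction, and synthetic field here are assumptions.

```python
# Sketch: accept each sample with probability inversely proportional to its value-histogram
# bin count, so rare values (features) are kept while dense background is thinned.
import numpy as np

def hist_sample(values, target_fraction=0.01, bins=64, rng=None):
    rng = rng or np.random.default_rng(0)
    counts, edges = np.histogram(values, bins=bins)
    bin_idx = np.clip(np.digitize(values, edges[1:-1]), 0, bins - 1)
    weights = 1.0 / np.maximum(counts[bin_idx], 1)            # rarer bin -> larger weight
    accept = weights / weights.sum() * (target_fraction * values.size)
    keep = rng.random(values.size) < np.minimum(accept, 1.0)
    return np.flatnonzero(keep)                               # indices of retained samples

field = np.random.default_rng(1).normal(size=1_000_000)       # stand-in scalar field
kept = hist_sample(field)
print(f"kept {kept.size} of {field.size} samples")
```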
automated LabVIEW-based tools via machine learning and feature detection algorithms
accelerated existing ML regression code on GPUs and multi-core CPUs
performed uncertainty quantification for high-dimensional data with PCA-based reduction (see sketch below), and developed interactive open-source tools for data exploration
conducted a sensitivity study to determine input parameter saliency.
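A short, hedged sketch combining the last two items: PCA-based reduction of high-dimensional outputs followed by a simple correlation-based saliency score per input parameter. The synthetic ensemble, component count, and saliency metric are illustrative choices rather than the study's actual method.

```python
# Sketch: reduce high-dimensional outputs with PCA, then rank input-parameter saliency
# by correlation of each parameter with the retained principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
params = rng.uniform(size=(500, 6))                          # 500 runs x 6 input parameters
outputs = np.hstack([params @ rng.normal(size=(6, 50)),      # structured response
                     0.1 * rng.normal(size=(500, 950))])     # high-dimensional noise

pca = PCA(n_components=3).fit(outputs)                       # 1000 output dims -> 3 components
scores = pca.transform(outputs)

# Saliency per input parameter: max |correlation| with any retained component.
saliency = np.abs(np.corrcoef(params.T, scores.T)[:6, 6:]).max(axis=1)
for i, s in enumerate(saliency):
    print(f"parameter {i}: saliency {s:.2f}")
```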