My primary research interests include Uncertainty Quantification (UQ) and eXplainable AI (XAI), with a focus on developing methods that are theoretically grounded and practically reliable. I’m especially interested in how UQ and XAI can strengthen each other—using uncertainty-aware reasoning to make explanations more reliable, and using explanation-driven structure to improve how uncertainty is represented, validated, and communicated.
Beyond this, I also study incentive and strategic dynamics in federated learning, and I investigate machine learning fairness as a question of distribution and legitimacy—how to define, measure, and justify equitable outcomes in real systems. Ultimately, my goal is to develop trustworthy methods that remain reliable under uncertainty and genuinely support decision-making.
I’m drawn to fresh, underexplored research questions, and I enjoy approaching the same problem from multiple angles: formal theory, empirical testing, and practical constraints. To do that well, I actively pursue interdisciplinary work, drawing on ideas from other fields and connecting theory, evidence, and real-world constraints into coherent insights.
Overall, my goal is not to be confined to one corner of AI. I want to bridge disciplines and contribute new paradigms—methods and frameworks that are both rigorous and practical, and that others can extend across a wide range of domains.
Dongseok Kim, Hyoungsun Choi, Mohamed Jismy Aashik Rasool, Gisung Oh
Theoretical Foundations of Prompt Engineering: From Heuristics to Expressivity
arXiv Preprint, 2025
Dongseok Kim, Hyoungsun Choi, Mohamed Jismy Aashik Rasool, Gisung Oh
CLAPS: Posterior-Aware Conformal Intervals via Last-Layer Laplace
arXiv Preprint, 2025
Dongseok Kim, Hyoungsun Choi, Mohamed Jismy Aashik Rasool, Gisung Oh
ORACLE: Explaining Feature Interactions in Neural Networks with ANOVA
arXiv Preprint, 2025