Jinsook (Jennie) Lee
I’m a Ph.D. candidate in Information Science at Cornell University, advised by René F. Kizilcec in the Future of Learning Lab and Thorsten Joachims, with Nikhil Garg on my committee. I’m also fortunate to collaborate with the National Tutoring Observatory and AJ Alvero.
My research examines sociotechnical systems in education. My goal is to develop and evaluate responsible AI that supports an equitable society where future generations can thrive safely. My main research interests are:
Socio-technical alignment
I study how AI systems can be evaluated and designed to align with the social contexts and institutional norms in which they operate, examining the gap between technical capabilities and societal needs.
Societal impact of AI systems
I study how AI technologies reshape decision-making processes in high-stakes settings. I have been examining how college application essay writing is changing and how those changes relate to admissions outcomes across levels of socioeconomic status.
AI agent evaluation and responsible deployment
I explore what current agents do well and where they fall short, and how we can improve them responsibly. I examine the capabilities of state-of-the-art AI models and develop frameworks for incorporating them into practice.
Prior to Cornell, I spent several years as a data scientist at Korea University, where I developed course and major recommender systems to support college students’ decision making.
I have a love-hate relationship with tennis — you’ll often find me attempting to upgrade my skills from the ‘absolute beginner’ category. I also love listening to music and curating songs!
news
| Date | News |
|---|---|
| Jan 2026 | New paper is out with the NTO team! Codebook-Injected Dialogue Segmentation for Multi-Utterance Constructs Annotation: LLM-Assisted and Gold-Label-Free Evaluation |
| Dec 2025 | Our first NTO paper AI Annotation Orchestration: Evaluating LLM Verifiers to Improve the Quality of LLM Annotations in Learning Analytics has been accepted to Learning Analytics and Knowledge (LAK26)! |
| Sep 2025 | Check out my recent paper featured in the Cornell Chronicle! |
| Jul 2025 | Our work Poor Alignment and Steerability of Large Language Models: Evidence from College Admission Essays has been accepted to COLM Social Simulation with LLMs Workshop and Socially Responsible Language Modelling Research (SoLaR) |
| May 2025 | I became a PhD candidate! |
| Apr 2025 | Our work has been presented at ICLR-HAIC 2025 workshop and Georgetown University |
| Apr 2025 | Relocating to NYC this summer. Excited to be a PiTech Fellow! |
| Mar 2025 | New paper is out! Poor Alignment and Steerability of Large Language Models: Evidence from College Admission Essays |
| Aug 2024 | “Large Language Models, Social Demography, and Hegemony: Comparing Authorship in Human and Synthetic Text” has been accepted for publication in Journal of Big Data |
| Jul 2024 | “Ending Affirmative Action Harms Diversity Without Improving Academic Merit” has been accepted to EAAMO’24 See you in San Luis Potosí, Mexico! |
| Jun 2024 | “The Life Cycle of Large Language Models in Education: A Framework for Understanding Sources of Bias” has been accepted to the British Journal of Educational Technology. |
| Apr 2024 | Our project “Evaluating the Impact of Different Application Ranking Policies on College Admission Outcomes” has been awarded a $12,000 grant from the Cornell Center for Social Sciences! |
| Jan 2024 | Our work “Comparing Authorship in Human and Synthetic Text” has been accepted to Generative AI and Sociology workshop at Yale University! |
| Dec 2023 | Our workshop paper “When Bias Meets Personalization: Challenges and Perspectives in LLM-Based Educational Technology” has been accepted to LAK24! |
| Dec 2023 | Our proposal “Application Essays and Characters in Higher Education Admissions” has been accepted to NCME 2024! |
| Oct 2023 | Gave a talk about our ongoing literature review “Bias in Large Language Models in Education: Sources, Measures, and Mitigation Strategies” at NCME-AIMC (National Council on Measurement in Education, AI in Measurement and Education) |
| Jul 2023 | Our workshop paper “Augmenting Holistic Review in University Admission using Natural Language Processing for Essays and Recommendation Letters” has been accepted to AIED Tokyo 2023! |
selected publications
- AI Annotation Orchestration: Evaluating LLM Verifiers to Improve the Quality of LLM Annotations in Learning Analytics. In Proceedings of the Learning Analytics and Knowledge Conference (LAK26), 2026
- Poor Alignment and Steerability of Large Language Models: Evidence from College Admission Essays. In Conference on Language Modeling (COLM25) SoLaR Workshop / Social Simulation Workshop, 2025
- Ending Affirmative Action Harms Diversity Without Improving Academic Merit. In AAAI/ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO’24), 2024
- The Life Cycle of Large Language Models in Education: A Framework for Understanding Sources of Bias. British Journal of Educational Technology, 2024
- Large Language Models, Social Demography, and Hegemony: Comparing Authorship in Human and Synthetic Text. Journal of Big Data, 2024
- Augmenting Holistic Review in University Admission using Natural Language Processing for Essays and Recommendation Letters. In Artificial Intelligence in Education (AIED23) EDI in EdTech R&D Workshop, 2023
- Artificial Communication and Media Realism in College Admissions. In The Digitized Campus: Artificial Intelligence and Big Data in Higher Education, 2025