Jinsook (Jennie) Lee
I’m a Ph.D. candidate in Information Science at Cornell University, advised by René F. Kizilcec (Future of Learning Lab) and Thorsten Joachims. My research studies socio-technical systems in education through the lens of computational social science, focusing on how AI technologies influence evaluation, decision-making, and equity in high-stakes contexts.
At Cornell, I study how language models influence college admissions both directly, through applicants’ use of LLMs in essay writing, and indirectly, through how institutions interpret and act on these AI-mediated signals. My recent work analyzes (1) how policy shifts and the rise of LLM-assisted writing reconfigure the allocation of educational opportunities; (2) how LLM-written college essays differ in lexical diversity, semantic space, and stylistic homogenization; and (3) how uncertainty and arbitrariness manifest in algorithmic predictions within admissions.
Beyond admissions, I am developing AI evaluation pipelines for tutoring data in collaboration with the National Tutoring Observatory, spanning multi-agent orchestration and dialogue segmentation to enhance tutoring-move annotation.
I’m also fortunate to collaborate with Nikhil Garg and AJ Alvero.
Prior to Cornell, I spent several years as a data scientist at Korea University, developing course and major recommender systems to support college students’ decision-making.
I have a love-hate relationship with tennis: you’ll often find me attempting to upgrade my skills from the ‘absolute beginner’ category. I also love listening to music and curating songs!
news
| Date | News |
|---|---|
| Dec 2025 | Our first NTO paper, “AI Annotation Orchestration: Evaluating LLM Verifiers to Improve the Quality of LLM Annotations in Learning Analytics,” has been accepted to Learning Analytics and Knowledge (LAK26)! |
| Sep 2025 | Check out my recent paper featured in Cornell Chronicle! |
| Jul 2025 | Our work “Poor Alignment and Steerability of Large Language Models: Evidence from College Admission Essays” has been accepted to the COLM Social Simulation with LLMs Workshop and Socially Responsible Language Modelling Research (SoLaR)! |
| May 2025 | I became a PhD candidate! |
| Apr 2025 | Our work has been presented at the ICLR-HAIC 2025 workshop and at Georgetown University |
| Apr 2025 | Relocating to NYC this summer. Excited to be a PiTech Fellow! |
| Mar 2025 | New paper is out! “Poor Alignment and Steerability of Large Language Models: Evidence from College Admission Essays” |
| Aug 2024 | “Large Language Models, Social Demography, and Hegemony: Comparing Authorship in Human and Synthetic Text” has been accepted for publication in Journal of Big Data |
| Jul 2024 | “Ending Affirmative Action Harms Diversity Without Improving Academic Merit” has been accepted to EAAMO’24. See you in San Luis Potosí, Mexico! |
| Jun 2024 | “The Life Cycle of Large Language Models in Education: A Framework for Understanding Sources of Bias” has been accepted to the British Journal of Educational Technology. |
| Apr 2024 | Our project “Evaluating the Impact of Different Application Ranking Policies on College Admission Outcomes” has been awarded a grant from the Cornell Center for Social Sciences! ($12,000) |
| Jan 2024 | Our work “Comparing Authorship in Human and Synthetic Text” has been accepted to Generative AI and Sociology workshop at Yale University! |
| Dec 2023 | Our workshop paper “When Bias Meets Personalization: Challenges and Perspectives in LLM-Based Educational Technology” has been accepted to LAK24! |
| Dec 2023 | Our proposal “Application Essays and Characters in Higher Education Admissions” has been accepted to NCME 2024! |
| Oct 2023 | Gave a talk about our ongoing literature review, “Bias in Large Language Models in Education: Sources, Measures, and Mitigation Strategies,” at NCME-AIMC (National Council on Measurement in Education, AI in Measurement and Education) |
| Jul 2023 | Our workshop paper “Augmenting Holistic Review in University Admission using Natural Language Processing for Essays and Recommendation Letters” has been accepted to AIED Tokyo 2023! |
selected publications
- “AI Annotation Orchestration: Evaluating LLM Verifiers to Improve the Quality of LLM Annotations in Learning Analytics.” In Proceedings of the Learning Analytics and Knowledge Conference (LAK26), 2026.
- “Poor Alignment and Steerability of Large Language Models: Evidence from College Admission Essays.” In Conference on Language Modeling (COLM25), SoLaR Workshop / Social Simulation Workshop, 2025.
- “Ending Affirmative Action Harms Diversity Without Improving Academic Merit.” In AAAI/ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO’24), 2024.
- “The Life Cycle of Large Language Models in Education: A Framework for Understanding Sources of Bias.” British Journal of Educational Technology, 2024.
- “Large Language Models, Social Demography, and Hegemony: Comparing Authorship in Human and Synthetic Text.” Journal of Big Data, 2024.
- “Augmenting Holistic Review in University Admission using Natural Language Processing for Essays and Recommendation Letters.” In Artificial Intelligence in Education (AIED23), EDI in EdTech R&D Workshop, 2023.
- “Artificial Communication and Media Realism in College Admissions.” In The Digitized Campus: Artificial Intelligence and Big Data in Higher Education, 2025.