Ph.D. Candidate in Computer Science at NYU
hazem.ibrahim [at] nyu.edu
I'm a PhD candidate in Computer Science at NYU, where I study how algorithms, data, and institutions create systematic inequalities in who is seen, heard, and valued. My work treats bias in socio-technical systems—social-media platforms, large language models, academic publishing—not as a bug in a single model but as an emergent property of tightly coupled computational and social processes.
My research is organized around three mechanisms: visibility bias (who sees what information), representational bias (how groups are portrayed), and institutional bias (how credit, access, and evaluation are allocated). I study these through an end-to-end approach that moves from measurement—building large-scale datasets and quantifying inequalities—to causation—designing field experiments and algorithmic audits that isolate how platform rules and human decisions jointly produce bias—to mitigation—proposing algorithmic and policy interventions. This work spans two empirical domains: social media and LLMs (auditing recommendation algorithms on YouTube and TikTok, measuring the political behavior of LLMs) and academia and bibliographic systems (uncovering citation manipulation and demonstrating racial and institutional biases in access to scientific knowledge).
My findings have been published in Nature, PNAS Nexus, Scientific Reports, and IEEE journals, and covered by outlets including Nature, Science, Scientific American, and The Times. I was named to MIT Technology Review's Innovators Under 35 list in 2023. Prior to my PhD, I earned an M.Sc. from the University of Toronto and a B.Sc. from NYU Abu Dhabi.
Forthcoming in Nature (2026)
PoliticalNLP Workshop at EACL (2026)
Scientific Reports 15, 5480 (2025)
Interaction Design and Architecture(s) Journal (IxD&A) (2025)
IEEE Transactions on Computational Social Systems 11, 3741–3752 (2024)
PNAS Nexus 2, pgad264 (2023)
IEEE Intelligent Systems 38, 24–27 (2023)
Scientific Reports 13, 12187 (2023)
ACM IMC (2023)
6th International Conference on Higher Education Advances (HEAd'20) (2020)
IEEE Transactions on Reliability 68, 514–525 (2018)
Revise and Resubmit at Science
Under review at American Political Science Review
Under review at PNAS Nexus
Under review at Journal of Informetrics
Revise and Resubmit at ICWSM 2026
Under review at Engineering Applications of Artificial Intelligence
Under review at IMC 2026
In preparation
In preparation
In preparation
In preparation
I was awarded the Best Poster Award for my poster investigating racial and institutional biases in access to paywalled articles and scientific data.
Our paper "Perception, Performance, and Detectability of Conversational Artificial Intelligence Across 32 University Courses" evaluated ChatGPT's ability to solve homework assignments. It was covered by news outlets worldwide, including Scientific American, The Times, The Independent, Nature Asia, Government Tech, Daily Mail, The Daily Beast, New Scientist, EurekAlert!, Phys.org, The National, Neuroscience News, and Nature Middle East.
Using 323 independent bot-driven audits, we tracked changes in TikTok's recommendation algorithm in the six months leading up to the 2024 US presidential election. Our findings were covered by PsyPost, Der Standard, and NextShark.
We went undercover, contacted a "citation boosting service," and managed to buy citations that appeared in a Scopus-indexed journal. Our sting operation provided conclusive evidence that citations can be bought in bulk. The findings were covered by Nature and Science.
Our paper "YouTube's recommendation algorithm is left-leaning in the United States" revealed a political bias in YouTube's algorithm. The paper was published in PNAS Nexus and received media coverage from Daily Caller, American Council on Science and Health, The College Fix, and PsyPost.
I was named to MIT Technology Review's Innovators Under 35 list in 2023 for my work on large language models and their impact on university education.