Ph.D. Candidate in Computer Science at NYU
hazem.ibrahim [at] nyu.edu
I'm a third-year PhD candidate in Computer Science at NYU Abu Dhabi, where I study how socio-technical systems shape attention, persuasion, and inequality. My work sits at the intersection of artificial intelligence, computational social science, and social and traditional media studies: I build large-scale measurements of platform behavior, probe where algorithms and institutions introduce bias, and design empirical tests that separate noisy anecdotes from durable, causal patterns. Prior to my PhD, I earned a Master's in Computer Science from the University of Toronto and a Bachelor's in Computer Engineering from New York University Abu Dhabi.
My research agenda focuses on three broad themes: (1) auditing recommendation and ranking systems to understand how political information and viewpoints are amplified or suppressed; (2) mapping representation and visibility in media—who appears, how they're framed, and with what downstream consequences; and (3) examining gatekeeping and access in knowledge ecosystems, from scholarly publishing to data availability. I also study ideological behavior in modern large language models, connecting model outputs to real-world persuasion and stance-taking.
Methodologically, I use large-scale data collection and engineering, network analysis, mixed-effects modeling and causal inference, survey and behavioral experiments, and contemporary NLP/LLM evaluation techniques to investigate these research themes. I aim for reproducible pipelines and policy-relevant findings, pairing observational evidence with experiments whenever possible, releasing tools and code, and collaborating across computer science, information science, and the social sciences. Ultimately, my goal is to develop general, testable frameworks for auditing algorithmic systems and for measuring representation and influence across digital media, work that advances science while informing governance and public debate.
Under review at American Political Science Review
Under review at Nature
Under review at Science
Scientific Reports (2025)
Under review at IEEE Transactions on Mobile Computing
Scientific Reports (2023)
IEEE Transactions on Computational Social Systems (2024)
PNAS Nexus (2023)
IEEE Intelligent Systems (2023)
ACM IMC (2023)
IEEE Transactions on Reliability (2018)
Our paper "Perception, Performance, and Detectability of Conversational Artificial Intelligence Across 32 University Courses" evaluated ChatGPT's ability to solve homework assignment. It was covered by news outlets worldwide: Scientific American, The Times, The Independent, Nature Asia, Government Tech, Daily Mail, The Daily Beast, New Scientist, EurekAlert!, Phys.org, The National, Neuroscience News, Nature Middle East.
Using 323 independent bot-driven audits, we tracked changes in TikTok's recommendation algorithm in the six months leading up to the 2024 US presidential election. Our findings were covered by PsyPost, Der Standard, and NextShark.
We went undercover, contacted a "citation boosting service," and managed to buy citations that appeared in a Scopus-indexed journal. Our sting operation provided conclusive evidence that citations can be bought in bulk. The findings were covered by Nature and Science.
Our paper "YouTube’s recommendation algorithm is left-leaning in the United States" revealed a political bias in YouTube's algorithm. The paper was published in PNAS Nexus, and received media coverage from Daily Caller, American Council on Science and Health, The College Fix, PsyPost.
I was awarded the MIT Innovator Under 35 Award in 2023 for my work on large language models and their impact on university education.
Working paper
Working paper
Working paper
Working paper
Working paper
Working paper