Hazem Ibrahim

Ph.D. Candidate in Computer Science at NYU

I'm a third-year PhD candidate in Computer Science at NYU Abu Dhabi, where I study how socio-technical systems shape attention, persuasion, and inequality. My work sits at the intersection of artificial intelligence, computational social science, and media studies (both social and traditional): I conduct large-scale measurements of platform behavior, probe where algorithms and institutions introduce bias, and design empirical tests that separate noisy anecdotes from durable, causal patterns. Prior to my PhD, I earned a Master's in Computer Science from the University of Toronto and a Bachelor's in Computer Engineering from New York University Abu Dhabi.

My research agenda focuses on three broad themes: (1) auditing recommendation and ranking systems to understand how political information and viewpoints are amplified or suppressed; (2) mapping representation and visibility in media—who appears, how they're framed, and with what downstream consequences; and (3) examining gatekeeping and access in knowledge ecosystems, from scholarly publishing to data availability. I also study ideological behavior in modern large language models, connecting model outputs to real-world persuasion and stance-taking.

Methodologically, I draw on large-scale data collection and engineering, network analysis, mixed-effects modeling and causal inference, survey and behavioral experiments, and contemporary NLP/LLM evaluation techniques to investigate these themes. I aim for reproducible pipelines and policy-relevant findings: pairing observational evidence with experiments whenever possible, releasing tools and code, and collaborating across computer science, information science, and the social sciences. Ultimately, my goal is to develop general, testable frameworks for auditing algorithmic systems and for measuring representation and influence across digital media; such work advances science while informing governance and public debate.

Research interests

Computational Social Science, AI Ethics, Large Language Models, Algorithmic Bias, Diversity and Equity, Ideological Behavior, Media Studies, Representation and Visibility

Publications

Large Language Models are often politically extreme, usually ideologically inconsistent, and persuasive even in informational contexts

AlDahoul, N., Hazem Ibrahim, Varvello, M., Kaufman, A., Rahwan, T., and Zaki, Y.

Under review at American Political Science Review

TikTok's recommendations skewed towards Republican content during the 2024 US presidential race

Hazem Ibrahim, Jang, H. D., AlDahoul, N., Kaufman, A. R., Rahwan, T., and Zaki, Y.

Under review at Nature

Causal evidence of racial and institutional biases in accessing paywalled articles and scientific data

Hazem Ibrahim, Liu, F., Mengal, K., Kaufman, A., Zaki, Y., and Rahwan, T.

Under review at Science

Citation manipulation through citation mills and pre-print servers

Hazem Ibrahim, Liu, F., Zaki, Y., and Rahwan, T.

Scientific Reports (2025)

A Tale of Three Location Trackers: AirTag, SmartTag, and Tile

Jang, H. D., Hazem Ibrahim, Asim, R., Varvello, M., and Zaki, Y.

Under review at IEEE Transactions on Mobile Computing

Perception, performance, and detectability of conversational artificial intelligence across 32 university courses

Hazem Ibrahim, Liu, F., Asim, R., Battu, B., Benabderrahmane, S., Alhafni, B., Adnan, W., Alhanai, T., AlShebli, B., Baghdadi, R., et al.

Scientific Reports (2023)

Big tech dominance despite global mistrust

Hazem Ibrahim, Debicki, M., Rahwan, T., and Zaki, Y.

IEEE Transactions on Computational Social Systems (2024)

YouTube’s recommendation algorithm is left-leaning in the United States

Hazem Ibrahim, AlDahoul, N., Lee, S., Rahwan, T., and Zaki, Y.

PNAS Nexus (2023)

Rethinking homework in the age of artificial intelligence

Hazem Ibrahim, Asim, R., Zaffar, F., Rahwan, T., and Zaki, Y.

IEEE Intelligent Systems (2023)

I tag, you tag, everybody tags!

Hazem Ibrahim, Asim, R., Varvello, M., and Zaki, Y.

ACM IMC (2023)

Multithreaded and reconvergent aware algorithms for accurate digital circuits reliability estimation

Ibrahim, W., and Hazem Ibrahim

IEEE Transactions on Reliability (2018)

Media Coverage and Awards

ChatGPT and Homework

Our paper "Perception, Performance, and Detectability of Conversational Artificial Intelligence Across 32 University Courses" evaluated ChatGPT's ability to solve homework assignment. It was covered by news outlets worldwide: Scientific American, The Times, The Independent, Nature Asia, Government Tech, Daily Mail, The Daily Beast, New Scientist, EurekAlert!, Phys.org, The National, Neuroscience News, Nature Middle East.

TikTok's recommendations skewed towards Republican content during the 2024 US presidential race

Using 323 independent bot-driven audits, we tracked changes in TikTok's recommendation algorithm in the six months prior to the 2024 US presidential race. Our findings were covered by PsyPost, Der Standard, and NextShark.

Citation manipulation

We went undercover, contacted a "citation boosting service", and managed to buy citations that appeared in a Scopus-indexed journal. Our sting operation provided conclusive evidence that citations can be bought in bulk. The findings were covered by Nature and Science.

YouTube's recommendation algorithm is left-leaning in the United States

Our paper "YouTube’s recommendation algorithm is left-leaning in the United States" revealed a political bias in YouTube's algorithm. The paper was published in PNAS Nexus, and received media coverage from Daily Caller, American Council on Science and Health, The College Fix, PsyPost.

MIT Innovator Under 35 Award

I was awarded the MIT Innovator Under 35 Award in 2023 for my work on large language models and their impact on university education.

Best Parallel Talk and Best Poster Awards at IC2S2 2024

I was awarded the Best Parallel Talk and Best Poster Awards at IC2S2 2024.

Works in progress

Inclusive content reduces racial and gender biases, yet non-inclusive content dominates popular media outlets

AlDahoul, N., Hazem Ibrahim, Park, M., Rahwan, T., and Zaki, Y.

Working paper

Who Gets Seen in the Age of AI? Adoption Patterns of Large Language Models in Scholarly Writing and Citation Outcomes

Farhan, K., Hazem Ibrahim, Rahwan, T., and Zaki, Y.

Working paper

A longitudinal analysis of racial and gender bias in New York Times and Fox News images and articles

Hazem Ibrahim, AlDahoul, N., Abbasi, S. M. A., Zaffar, F., Rahwan, T., and Zaki, Y.

Working paper

Structural Inequalities in Hollywood Representation Across a Century of Film

Hazem Ibrahim, AlDahoul, N., Rahwan, T., Zaki, Y., and Park, M.

Working paper

Analyzing political stances on Twitter in the lead-up to the 2024 US election

Hazem Ibrahim, Khan, F., Alabdouli, H., Almatrooshi, M., Nguyen, T., Rahwan, T., and Zaki, Y.

Working paper

Neutralizing the Narrative: AI-Powered Debiasing of Online News Articles

Kuo, C. W., Chu, K., AlDahoul, N., Hazem Ibrahim, Rahwan, T., and Zaki, Y.

Working paper

Teaching & Service

Teaching

Academic Advising

Service