Hazem Ibrahim

Ph.D. Candidate in Computer Science at NYU

I'm a PhD candidate in Computer Science at NYU, where I study how algorithms, data, and institutions create systematic inequalities in who is seen, heard, and valued. My work treats bias in socio-technical systems—social-media platforms, large language models, academic publishing—not as a bug in a single model but as an emergent property of tightly coupled computational and social processes.

My research is organized around three mechanisms: visibility bias (who sees what information), representational bias (how groups are portrayed), and institutional bias (how credit, access, and evaluation are allocated). I study these through an end-to-end approach that moves from measurement (building large-scale datasets and quantifying inequalities), to causation (designing field experiments and algorithmic audits that isolate how platform rules and human decisions jointly produce bias), to mitigation (proposing algorithmic and policy interventions). This work spans two empirical domains: social media and LLMs, where I audit recommendation algorithms on YouTube and TikTok and measure the political behavior of LLMs; and academia and bibliographic systems, where I uncover citation manipulation and demonstrate racial and institutional biases in access to scientific knowledge.

My findings have been published in Nature, PNAS Nexus, Scientific Reports, and IEEE journals, and covered by outlets including Nature, Science, Scientific American, and The Times. I was named to MIT Technology Review's Innovators Under 35 list in 2023. Prior to my PhD, I earned an M.Sc. from the University of Toronto and a B.Sc. from NYU Abu Dhabi.

Publications


Published

1. Systematic partisan content skews in TikTok during the 2024 U.S. elections

Hazem Ibrahim, Jang, H. D., AlDahoul, N., Kaufman, A. R., Rahwan, T., and Zaki, Y.

Forthcoming in Nature (2026)

2. Analyzing political stances on Twitter/X in the lead-up to the 2024 U.S. election

Hazem Ibrahim, Khan, F. T., Rahwan, T., and Zaki, Y.

PoliticalNLP Workshop at EACL (2026)

3. Citation manipulation through citation mills and pre-print servers

Hazem Ibrahim, Liu, F., Zaki, Y., and Rahwan, T.

Scientific Reports 15, 5480 (2025)

4. Heritage Language Maintenance: The Case of Bangladeshi Immigrants in Canada

Hazem Ibrahim, Sabie, D., Roy, P., Bhattacharjee, A., Alam, S. M. R., Mim, N. J., and Ahmed, S. I.

Interaction Design and Architecture(s) Journal (IxD&A) (2025)

5. Big tech dominance despite global mistrust

Hazem Ibrahim, Debicki, M., Rahwan, T., and Zaki, Y.

IEEE Transactions on Computational Social Systems 11, 3741–3752 (2024)

6. YouTube's recommendation algorithm is left-leaning in the United States

Hazem Ibrahim, AlDahoul, N., Lee, S., Rahwan, T., and Zaki, Y.

PNAS Nexus 2, pgad264 (2023)

7. Rethinking homework in the age of artificial intelligence

Hazem Ibrahim, Asim, R., Zaffar, F., Rahwan, T., and Zaki, Y.

IEEE Intelligent Systems 38, 24–27 (2023)

8. Perception, performance, and detectability of conversational artificial intelligence across 32 university courses

Hazem Ibrahim, Liu, F., Asim, R., Battu, B., Benabderrahmane, S., Alhafni, B., Adnan, W., Alhanai, T., AlShebli, B., Baghdadi, R., et al.

Scientific Reports 13, 12187 (2023)

9. I tag, you tag, everybody tags!

Hazem Ibrahim, Asim, R., Varvello, M., and Zaki, Y.

ACM IMC (2023)

10. Gamification in online educational systems

Hazem Ibrahim and Ibrahim, W.

6th International Conference on Higher Education Advances (HEAd'20) (2020)

11. Multithreaded and reconvergent aware algorithms for accurate digital circuits reliability estimation

Ibrahim, W., and Hazem Ibrahim

IEEE Transactions on Reliability 68, 514–525 (2018)


Under Review

12. Causal evidence of racial and institutional biases in accessing paywalled articles and scientific data

Hazem Ibrahim, Liu, F., Mengal, K., Kaufman, A., Zaki, Y., and Rahwan, T.

Revise and Resubmit at Science

13. Large language models are often politically extreme, usually ideologically inconsistent, and persuasive even in informational contexts

AlDahoul, N., Hazem Ibrahim, Kaufman, A., Rahwan, T., and Zaki, Y.

Under review at American Political Science Review

14. Inclusive content reduces racial and gender biases, yet non-inclusive content dominates popular media outlets

AlDahoul, N., Hazem Ibrahim, Park, M., Rahwan, T., and Zaki, Y.

Under review at PNAS Nexus

15. Who Gets Seen in the Age of AI? Adoption Patterns of Large Language Models in Scholarly Writing and Citation Outcomes

Farhan, K., Hazem Ibrahim, Rahwan, T., and Zaki, Y.

Under review at Journal of Informetrics

16. A longitudinal analysis of racial and gender bias in New York Times and Fox News images and articles

Hazem Ibrahim, AlDahoul, N., Abbasi, S. M. A., Zaffar, F., Rahwan, T., and Zaki, Y.

Revise and Resubmit at ICWSM 2026

17. Neutralizing the Narrative: AI-Powered Debiasing of Online News Articles

Kuo, C. W., Chu, K., AlDahoul, N., Hazem Ibrahim, Rahwan, T., and Zaki, Y.

Under review at Engineering Applications of Artificial Intelligence

18. A Tale of Three Location Trackers: AirTag, SmartTag, and Tile

Jang, H. D., Hazem Ibrahim, Asim, R., Rahwan, T., and Zaki, Y.

Under review at IMC 2026


In Preparation

19. Structural inequalities in Hollywood representation across a century of film

Hazem Ibrahim, AlDahoul, N., Rahwan, T., Zaki, Y., and Park, M.

In preparation

20. Two-thirds of citations to review papers belong to original research

Hazem Ibrahim, Liu, F., Zaki, Y., and Rahwan, T.

In preparation

21. Measuring the Political Ideology of LLMs Across 90 Countries

Omari, A., Hazem Ibrahim, AlDahoul, N., Zaki, Y., Rahwan, T., and Kaufman, A.

In preparation

22. Examining propaganda on Telegram during the Russia/Ukraine War

Hazem Ibrahim, Holovatska, Y., AlDahoul, N., Zaki, Y., and Rahwan, T.

In preparation

Teaching Experience & Service

Teaching and Guest Lectures

Academic Advising

Service

Media Coverage and Awards

Best Poster Award at AI4GS 2025

I was awarded the Best Poster Award for my poster investigating racial and institutional biases in accessing paywalled articles and scientific data.

ChatGPT and Homework

Our paper "Perception, Performance, and Detectability of Conversational Artificial Intelligence Across 32 University Courses" evaluated ChatGPT's ability to solve homework assignments. It was covered by news outlets worldwide, including Scientific American, The Times, The Independent, Nature Asia, Government Tech, Daily Mail, The Daily Beast, New Scientist, EurekAlert!, Phys.org, The National, Neuroscience News, and Nature Middle East.

TikTok's recommendations skewed towards Republican content during the 2024 US presidential race

Using 323 independent bot-driven audits, we tracked changes in TikTok's recommendation algorithm in the six months prior to the 2024 US presidential race. Our findings were covered by PsyPost, Der Standard, and NextShark.

Citation manipulation

We went undercover, contacted a "citation boosting service", and managed to buy citations that appeared in a Scopus-indexed journal. Our sting operation provided conclusive evidence that citations can be bought in bulk. The findings were covered by Nature and Science.

YouTube's recommendation algorithm is left-leaning in the United States

Our paper "YouTube's recommendation algorithm is left-leaning in the United States" revealed a political bias in YouTube's algorithm. The paper was published in PNAS Nexus and received media coverage from the Daily Caller, the American Council on Science and Health, The College Fix, and PsyPost.

MIT Innovator Under 35 Award

I was named to MIT Technology Review's Innovators Under 35 list in 2023 for my work on large language models and their impact on university education.

Best Parallel Talk and Best Poster Awards at IC2S2 2024

I was awarded the Best Parallel Talk and Best Poster Awards at IC2S2 2024.
