The Augmented Science Journalist: A Human-in-the-Loop Framework for AI Integration
Abstract
Algorithmic bias in healthcare systems has emerged as a critical threat to equitable patient care, with growing evidence that machine learning models perpetuate racial and ethnic disparities in clinical decision-making. This study investigated the extent, evolution, and real-world consequences of bias in healthcare algorithms through an innovative human-in-the-loop (HITL) investigative journalism framework. The methodology integrated AI-driven discovery, automated code repository auditing, and in-depth human investigation across three phases. AI tools analyzed temporal bias trends from 2015 to 2023, audited over 50 public GitHub repositories, and quantified disparities, while human journalists conducted expert interviews, impact assessments, and narrative synthesis to ensure contextual accuracy and ethical framing. Key findings revealed persistent and severe biases: Black and Native American patients experienced 2–3 times higher bias scores than White patients, with diagnostic and risk-prediction algorithms showing the greatest disparities. Only 33% of analyzed repositories included explicit bias testing, despite high adoption rates. Consequences included false negative rates of up to 73.7% for Black patients needing care, elevated treatment disparities, poorer health outcomes, and substantial economic costs from excess hospitalizations. The novelty lies in the scalable HITL synergy that enabled longitudinal, multi-source analysis previously infeasible manually, translating technical artifacts into actionable public knowledge. In conclusion, unchecked algorithmic bias systematically harms marginalized communities. We recommend mandatory bias audits, regulatory oversight of proprietary systems, and participatory governance involving affected patients.
References
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). https://doi.org/10.1145/3442188.3445922
Berkeley AI Research. (2022). Rethinking human-in-the-loop for artificial augmented intelligence. https://bair.berkeley.edu/blog/2022/05/03/human-in-the-loop/
Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in neural information processing systems, 29.
Brennen, J. S., Simon, F., Howard, P. N., & Nielsen, R. K. (2020). Types, sources, and claims of COVID-19 misinformation. Reuters Institute for the Study of Journalism.
Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. MIT Press.
Brumfiel, G. (2009). Science journalism: Supplanting the old media? Nature, 458(7236), 274–277. https://doi.org/10.1038/458274a
Bulletin of the Atomic Scientists. (2025). AI is polluting truth in journalism.
Callaghan, M. W., et al. (2021). Machine-learning-based evidence and attribution mapping of 100,000 climate impact studies. Nature Climate Change, 11(11), 966–972.
Chamuah, A. (2023). AI, climate adaptation, and epistemic injustice. Platypus.
Chen, I. Y., Pierson, E., Rose, S., Joshi, S., Ferryman, K., & Ghassemi, M. (2023). Algorithmic fairness in artificial intelligence for medicine and healthcare. Nature Biomedical Engineering, 7(6), 719-742.
Clerwall, C. (2014). Enter the robot journalist: Users’ perceptions of automated content. Journalism Practice, 8(5), 519-531. https://doi.org/10.1080/17512786.2014.883116
Daugherty, P. R., & Wilson, H. J. (2018). Human + machine: Reimagining work in the age of AI. Harvard Business Review Press.
Debnath, R., et al. (2025). Enabling people-centric climate action using human-in-the-loop artificial intelligence: A review. Current Opinion in Behavioral Sciences.
Diakopoulos, N. (2019). Automating the news: How algorithms are rewriting the media. Harvard University Press.
Diakopoulos, N., & Koliska, M. (2017). Algorithmic transparency in the news media. Digital Journalism, 5(7), 809-828. https://doi.org/10.1080/21670811.2016.1208053
Dwyer, L. (2024). Is the human still in the loop? Digital Journal.
Fink, K., & Anderson, C. W. (2015). Data journalism in the United States: Beyond the “usual suspects”. Journalism Studies, 16(4), 467-481. https://doi.org/10.1080/1461670X.2014.939852
Gianfrancesco, M. A., et al. (2023). Fairness of artificial intelligence in healthcare: review and recommendations. Japanese Journal of Radiology.
Graefe, A. (2016). Guide to automated journalism. Tow Center for Digital Journalism, Columbia University. https://doi.org/10.7916/D8Q532W4
Horbach, S. P. (2020). Predicting novelty and efficiency in science and technology. Journal of Informetrics, 14(4), 101090. https://doi.org/10.1016/j.joi.2020.101090
Lewis, S. C., & Westlund, O. (2015). Actors, actants, audiences, and activities in cross-media news work. Digital Journalism, 3(1), 19–37. https://doi.org/10.1080/21670811.2014.927986
Maskey, M. (2025). Personal communication on climate data models.
Metzler, H. (2025). Personal communication on AI constraints in misinformation detection.
Milojević, S. (2020). Practical method to reclassify Web of Science articles into unique subject categories and broad disciplines. Quantitative Science Studies, 1(1), 183-206. https://doi.org/10.1162/qss_a_00014
National Academies of Sciences, Engineering, and Medicine (NASEM). (2017). Communicating science effectively: A research agenda. The National Academies Press. https://doi.org/10.17226/23674
Nisbet, E. (2024). Climate misinformation challenges.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group.
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.
Ohno-Machado, L., et al. (2023). Guiding principles to address the impact of algorithm bias on racial and ethnic disparities in health and health care. JAMA Network Open, 6(12), e2345050.
Painter, J. (2025). Climate journalism in flux: Navigating crisis, innovation, and misinformation in the age of AI. Environmental Change Institute.
Parasie, S. (2015). Data-driven revelation? Epistemological tensions in investigative journalism in the age of "big data". Digital Journalism, 3(3), 364–380. https://doi.org/10.1080/21670811.2014.976408
Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., ... & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 33–44). https://doi.org/10.1145/3351095.3372873
Rajkomar, A., et al. (2024). AI in medicine needs to counter bias. Nature Medicine.
Rolnick, D., et al. (2022). Tackling climate change with machine learning.
Siddique, S. M., et al. (2024). Health care algorithms can improve or worsen disparities. Penn LDI.
Simon, F. M. (2025). Neither humans-in-the-loop nor transparency labels will save the news media when it comes to AI. Reuters Institute for the Study of Journalism.
Thurman, N., Dörr, K., & Kunert, J. (2017). When reporters get hands-on with robo-writing: Professionals consider automated journalism’s capabilities and consequences. Digital Journalism, 5(10), 1240-1259. https://doi.org/10.1080/21670811.2017.1289819
van der Hel, S., & Biermann, F. (2025). The role of artificial intelligence in climate change scientific assessments. PLOS Climate.
Zamith, R. (2019). Algorithms and journalism. In The Oxford encyclopedia of journalism studies. Oxford University Press. https://doi.org/10.1093/acrefore/9780190228613.013.823
Copyright (c) 2026 Konfrontasi: Jurnal Kultural, Ekonomi dan Perubahan Sosial

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under Creative Commons Attribution 4.0 International License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (Refer to The Effect of Open Access).