Balancing progress and principles: AI in neonatal care requires ethical safeguards



As artificial intelligence (AI) rapidly enters neonatal care, it brings both promise and concern. New technologies offer the chance to better predict and monitor health outcomes for newborns, especially those born preterm. However, this progress also raises deep ethical questions about fairness, transparency, and informed decision-making. A recent review explored how AI affects four core principles of medical ethics – beneficence, non-maleficence, autonomy, and justice – in the context of neonatology. The review emphasises that without proper safeguards, AI may unintentionally reinforce existing health disparities or hinder parental involvement in decisions. To ensure ethical and equitable care for all newborns, clear guidelines and collaboration across disciplines are essential.

 

Across the world, there is a growing need to ensure every newborn receives fair, high-quality care. In neonatal units, where the smallest and most vulnerable lives are supported, artificial intelligence is beginning to influence decision-making. From monitoring vital signs to predicting complications like bronchopulmonary dysplasia (BPD), AI can help identify health risks earlier and support faster intervention. But because these decisions affect infants who cannot speak for themselves, the stakes are especially high.

The review examines how AI must align with four ethical principles to support – not compromise – neonatal health: doing good (beneficence), avoiding harm (non-maleficence), ensuring fairness (justice), and respecting families’ rights to make decisions (autonomy). When implemented thoughtfully, AI can improve care. But if systems are not transparent, or are built on biased data, they risk doing harm instead of good.

 

Ethics cannot be an afterthought in neonatal AI

Beneficence demands that AI systems bring real benefit to newborns. When trained on accurate, diverse data, AI can help personalise care and reduce medical errors. For instance, some algorithms use early-life data to predict a child’s risk of developing BPD, enabling earlier intervention. However, non-maleficence highlights the risks when algorithms are not fully understood or validated. Many AI systems are “black boxes”, meaning even clinicians cannot explain how their recommendations are produced. This lack of clarity can lead to confusion or incorrect care, especially if a model trained in one setting is deployed in another without revalidation.

Adding to this, AI can unintentionally reflect, and even worsen, existing healthcare inequalities. This is especially concerning in neonatology, where disparities in care have historically affected children from different backgrounds. The review stresses that AI tools must be trained on data from varied populations to ensure they work fairly. Otherwise, decisions might benefit some groups more than others – a direct violation of justice in healthcare.

 

Putting families first in a digital era

Autonomy, or the right to make informed decisions, is particularly complex in neonatal care. Since infants cannot speak for themselves, parents and clinicians must decide together what’s best. When AI systems are too complex to explain, or when recommendations come from algorithms instead of people, this shared decision-making can suffer. Families deserve to know when AI is influencing care and to fully understand its role.

Ultimately, AI must support – not replace – the human relationships that are vital in neonatal care. The review urges ongoing collaboration among healthcare providers, parents, ethicists, and data scientists to ensure AI tools are safe, transparent, and fair. By holding new technology to clear ethical standards, we can harness AI’s potential while protecting the rights and futures of every newborn.

 

Paper available at: Preserving medical ethics in the era of artificial intelligence: Challenges and opportunities in neonatology

Full list of authors: Arora, T.; Muhammad-Kamal, H.; Beam, K.

DOI: https://doi.org/10.1016/j.semperi.2025.152100