Could Algorithms Do Justice? The Heartbreaking Case of Child Abuse

Progresa · Mar 12, 2021

Child abuse and neglect are dreadful events that occur throughout the world. Indonesia's National Commission for Child Protection reported 2,700 child abuse cases across the country in 2020 (Mantalean & Gatra, 2021). The figure is said to represent a thirty percent increase over the previous year, endangering babies and infants. Moreover, in other countries such as China, France, India, and South Africa, the COVID-19 lockdowns substantially increased helpline calls for child abuse assistance (UNICEF, 2020). Romanou and Belton (2020) argue that the risk of child maltreatment surged during the pandemic because of increased stressors on parents and caregivers, increased vulnerability of children and young people, and the reduction of normal protective services. Surging violence against children must be followed up properly, as the well-being and lives of more and more children are under threat.

The Netflix documentary series The Trials of Gabriel Fernandez, released on February 26th, details a sickening case of violence against a child that ended in the death of an eight-year-old boy in Los Angeles in 2013. According to a BBC News report (Diez, 2020), Gabriel died from a series of beatings by his own mother, Pearl Fernandez, and her boyfriend, Isauro Aguirre, which left him with extensive bruising, broken ribs, burn marks in several places, and swollen hands. Horrifyingly, his siblings witnessed Pearl and Isauro force Gabriel to eat cat faeces before his death, which was later confirmed by medical testing. One of his school teachers also said that, months before his death, Gabriel had started coming to school with pulled-out hair, bruises on his head, swollen lips, and a wound caused by an air gun. Isauro was sentenced to death, while Pearl was sentenced to life without parole.

It is worth highlighting how the system failed to protect Gabriel and to remove him from a dangerous situation when necessary. Social services reportedly struggled with heavy workloads, with each staff member handling 25 to 30 cases simultaneously (Diez, 2020). Because of the volume of calls, child welfare authorities had to decide which cases would be screened in for further investigation and which would be screened out (Sederstrom, 2020). In addition, the sheriff's deputies simply believed what his mother said without checking on the child's condition. It is now clear that the system failed Gabriel, and this must be addressed to prevent similar cases in the future.

Rapid technological innovation now plays a vital role in human work. The steep proliferation of machine learning and artificial intelligence, for example, has opened up substantial new ways of helping, influencing, and solving problems: an algorithm helps businesses earn high engagement on Instagram (Chacon, 2017), and another lets your favourite food-delivery apps customize offers to match your taste (Pokhriyal, 2020). One caveat is that technology should not be utilized only in the interest of business; it should also be deployed for social issues, particularly the long-standing concern of protecting children. Thus, the question raised is: to what extent should technology be applied?

To that end, it has been argued that artificial intelligence could have saved Gabriel's life and could save other at-risk children in the future. Such innovation uses algorithms to predict the risk of violence against children, so that children can be separated from their parents as soon as necessary. Marc Cherna, director of the Allegheny County Department of Human Services, explained that factors such as child welfare and parental history, family mental illness, and jail and conviction records are considered in the decision-making (Sederstrom, 2020). Such socio-technical innovation in decision-making should complement the work of multiple professionals (Jenny & Isaac, 2006), since it provides adequate tools to diagnose and investigate cases properly. In the following, we discuss how this life-saving algorithmic justice works, its role in addressing human bias in decision-making, and how it nevertheless remains flawed.

How Algorithmic Justice Works: The Allegheny Family Screening Tool

Interviewees in The Trials of Gabriel Fernandez emphasized the algorithms behind the Allegheny Family Screening Tool (AFST), which has been used since August 2016 to improve call-screening decisions that protect children ("The Allegheny," n.d.). In cases like Gabriel's, the fatal failure begins at the screening stage, where social workers have limited time for research and rely heavily on the phone call itself, leaving room for human bias; the algorithm therefore produces its assessment before screeners make their decision, as illustrated in the figure below.

Figure 1. Referral progression process. The decision point that relies on the AFST is highlighted in yellow. (Chouldechova et al., 2018)

Through Predictive Risk Modelling (PRM), the AFST integrates data elements for every person involved in a child maltreatment case, drawing on the DHS Data Warehouse. Records from child protective services, mental health services, drug and alcohol services, and homeless services are just a few of the many sources included in the PRM analysis. The resulting screening score, on a scale of 1 to 20, combined with the information elicited in the traditional way, better predicts the likelihood that a child should be removed from the home. The highest level triggers a mandatory screen-in, meaning the allegation must be investigated; otherwise the score serves as supporting data for the screening decision. In a nutshell, PRM uses collected data to predict the likelihood of adverse outcomes, so that services can be targeted at the highest-risk cases (Chouldechova et al., 2018).
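The actual AFST model, its features, and its cut-offs are not public at this level of detail, so the snippet below is only a minimal, hypothetical sketch of the general idea: train a simple risk model on administrative records, convert the predicted probability for a new referral into a 1 to 20 score, and flag the top band for mandatory screen-in. All feature names, data values, and thresholds here are invented for illustration.

```python
# Minimal, hypothetical sketch of a PRM-style screening score.
# Feature names, data, and the cut-off below are illustrative assumptions,
# not the actual AFST model, data, or thresholds.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative administrative features per referral (one row per case):
# [prior_referrals, parent_jail_history, mental_health_contacts, housing_instability]
X_train = np.array([
    [0, 0, 0, 0],
    [1, 0, 1, 0],
    [3, 1, 2, 1],
    [5, 1, 4, 1],
    [2, 0, 0, 1],
    [4, 1, 3, 0],
])
# 1 = an adverse outcome (e.g. later out-of-home placement) was observed
y_train = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

def screening_score(features, reference_probs):
    """Map a predicted probability onto a 1-20 scale by ranking it against
    a reference distribution of historical predicted risks (an assumption
    about how such scores could be bucketed)."""
    p = model.predict_proba([features])[0, 1]
    rank = np.searchsorted(np.sort(reference_probs), p) / len(reference_probs)
    return int(np.clip(np.ceil(rank * 20), 1, 20))

# Reference distribution of risks; here simply the toy training set.
reference = model.predict_proba(X_train)[:, 1]

new_referral = [4, 1, 2, 1]
score = screening_score(new_referral, reference)
print(f"Screening score: {score}/20")
if score >= 18:  # illustrative mandatory screen-in band
    print("Mandatory screen-in: allegation must be investigated")
else:
    print("Score provided as supporting information to the screener")
```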

Its Role in Addressing Human Bias

PRM was eventually translated into a child protection setting as the AFST, after first being suggested by Vaithianathan et al. (2013), as cited in Chouldechova et al. (2018). The system is built to overcome the biases that are very likely to occur in screening processes where officers have limited time to research the victim and their environment. Such cognitive biases are addressed by a system that prevents decisions from depending on an officer's personal experience or being affected by the other cases the officer happens to be handling. Without the AFST, officers must instead absorb lengthy information over the call, which is inefficient. Under tight pressure to make sure investigations reach the children at highest risk, PRM can therefore complement officers and yield a more effective assessment of each referral, in line with Chouldechova et al. (2018).

To be precise, officers might discriminate against certain geographic contexts, races, or ethnicities based on their experiences. Dettlaff et al. (2010), as cited in Chouldechova et al. (2018), revealed that Black children are more likely than White children to be screened in, even when the Black children are at lower risk. This is likely to happen when officers apply different thresholds to different racial groups. With a more accurate risk assessment tool such as the AFST, finer control over this bias prevents mis-calibration. The caveat is to make sure thresholds are applied uniformly in every case, as the sketch below illustrates.
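Here is a toy comparison, with made-up scores and thresholds rather than AFST values, of what uniform versus group-specific thresholds mean in practice:

```python
# Toy illustration (made-up scores and thresholds, not AFST values) of why
# a uniform threshold matters: with group-specific thresholds, two cases
# with identical predicted risk can receive different screening decisions.
risk_scores = {
    "case_A": 11,   # illustrative 1-20 screening scores
    "case_B": 11,   # same risk as case_A
    "case_C": 16,
}

# Biased practice: different implicit thresholds for different groups
biased_thresholds = {"case_A": 9, "case_B": 13, "case_C": 13}
biased_decisions = {c: s >= biased_thresholds[c] for c, s in risk_scores.items()}
print(biased_decisions)   # case_A screened in, case_B not, despite equal risk

# Calibrated practice: one threshold applied uniformly to every referral
UNIFORM_THRESHOLD = 13
uniform_decisions = {c: s >= UNIFORM_THRESHOLD for c, s in risk_scores.items()}
print(uniform_decisions)  # equal-risk cases now receive the same decision
```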

Flaws: The Black Box and Data Reliability

As the popular saying goes, "nobody is perfect", and neither is PRM. As with other algorithmic programs, it is difficult to extract how such an algorithm really works and reaches a decision. There is no way to peek inside the algorithm and inspect it exactly, which has been labelled the "black box" problem, illustrated in Figure 2 below.

Figure 2. The Black Box ("ECKERD RAPID," n.d.)

This leads to trust issues at a time when AI needs to be trusted. For a model applied in the public interest, its inputs, processes, and outputs need to be opened up and published (LaGrone, 2019), following the principle of algorithmic transparency. Moreover, algorithmic accountability should be maintained throughout operation to manage the harsh consequences of any future crisis. Organizations need to be accountable for the outcomes their models produce (LaGrone, 2019) by revealing, in the AFST's case, all resulting measures of risk to the public. While Allegheny County systematically publishes its processes and outcomes, other software providers, for instance Eckerd Connects ("ECKERD RAPID," n.d.), run similar child-protection systems for profit. The activities involved in the algorithmic function are thus partly owned by the providers, and accountability is not guaranteed.

Another flaw of the system may stem from data unreliability. Within the AFST, the PRM tool analyzes routinely collected data from the DHS Data Warehouse ("The Allegheny," n.d.; Chouldechova et al., 2018). Because hundreds of data elements are used for each allegation of violence against a child, a single wrong entry can make the risk a child faces appear lower or higher than it really is. Children might then be separated from their parents unnecessarily, or at-risk children might be left at home with parents who act awfully. When the data reporting system of a country or institution is not reliable enough, the algorithm may fail to do justice, as the toy example below shows.
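To make the point concrete, here is a minimal sketch with made-up weights and an assumed decision threshold (not the real AFST configuration) showing how one erroneous data element, such as a jail record wrongly attached to a parent, can flip a screening decision:

```python
# Minimal sketch with made-up weights and an assumed threshold (not the real
# AFST configuration) showing how one erroneous data element can flip a
# screening decision.
import math

WEIGHTS = {"prior_referrals": 0.6, "parent_jail_history": 1.2,
           "mental_health_contacts": 0.4, "housing_instability": 0.8}
BIAS = -3.0
SCREEN_IN_THRESHOLD = 0.5

def predicted_risk(record):
    """Logistic model: weighted sum of features squashed to a probability."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in record.items())
    return 1 / (1 + math.exp(-z))

correct_record = {"prior_referrals": 2, "parent_jail_history": 0,
                  "mental_health_contacts": 1, "housing_instability": 1}
# Same family, but a jail record has been wrongly attached to the parent.
erroneous_record = dict(correct_record, parent_jail_history=1)

for label, record in [("correct data", correct_record),
                      ("erroneous data", erroneous_record)]:
    p = predicted_risk(record)
    decision = "screen in" if p >= SCREEN_IN_THRESHOLD else "screen out"
    print(f"{label}: risk={p:.2f} -> {decision}")
```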

Then, Could It Save Children's Lives?

The development of technology, specifically artificial intelligence and machine learning, has helped many fields, such as health care. Through high-tech machinery used in treatment and in pharmaceutical and drug development, it has already saved thousands of lives. Whether it could have saved Gabriel's remains a big question mark. Even so, it is very likely that such predictive systems will be used in other socio-technical decision-making processes. Their success depends on whether the technologies can be governed to be transparent and accountable, and on whether the data used is reliable.

In addition,

“Decades of research and several large scale meta-analyses have largely upheld the original conclusions: When it comes to prediction tasks, statistical models are generally significantly more accurate than human experts.” (Dawes et al., 1989; Grove et al., 2000; Kleinberg et al., 2017, as cited in Chouldechova et al., 2018)

… and the lives saved by the algorithm might be those of your loved ones.

Written by: Sainsna Demizike

Reviewed by: Sendy Jasmine K. Hadi, Rosalia Marcha Violeta

REFERENCES

Chacon, B. (2017, July 16). 5 Things to Know About The Instagram Algorithm. https://later.com/blog/instagram-algorithm/

Chouldechova, A., Benavides-Prado, D., Fialko, O., & Vaithianathan, R. (2018). A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions. Proceedings of Machine Learning Research, 81, 1–15.

Cress, M. (2019, January 10). THE BLACK BOX PROBLEM. http://artificialintelligencemania.com/2019/01/10/the-black-box-problem/

Diez, B. (2020, March 4). Kisah anak delapan tahun yang meninggal setelah disiksa oleh ibu dan ayah tirinya: Ke sekolah dengan bibir bengkak, luka bakar [The story of the eight-year-old boy who died after being tortured by his mother and stepfather: Going to school with swollen lips and burns]. https://www.bbc.com/indonesia/majalah-51725636

ECKERD RAPID SAFETY FEEDBACK. (n.d.). https://eckerd.org/family-children-services/ersf/

Jenny, C., & Isaac, R. (2006). The relation between child death and child maltreatment. Archives of Disease in Childhood, 91(3), 265–269.

LaGrone, N. (2019, December 10). Ethical Machine Learning for Disaster Relief: Rage for Machine Learning. https://www.azavea.com/blog/2019/12/10/ethical-machine-learning-for-disaster-relief-rage-for-machine-learning/

Mantalean, V., & Gatra, S. (2021, January 4). Komnas PA: Ada 2.700 Kasus Kekerasan 2020, Mayoritas Kejahatan Seksual [Komnas PA: There were 2,700 cases of violence in 2020, the majority sexual crimes]. https://megapolitan.kompas.com/read/2021/01/04/15361151/komnas-pa-ada-2700-kasus-kekerasan-terhadap-anak-selama-2020-mayoritas

Pokhriyal, R. (2020, September 24). Enhance the Customer Experience of Food Delivery Application. https://customerthink.com/enhance-the-customer-experience-of-food-delivery-applications/

Romanou, E., & Belton, E. (2020). Isolated and struggling: social isolation and the risk of child maltreatment, in lockdown and beyond. London: NSPCC.

Sederstrom, J. (2020, March 2). Can Digital Algorithms Help Protect Children like Gabriel Fernandez From Abuse? https://www.oxygen.com/true-crime-buzz/trials-of-gabriel-fernandez-risk-assessment-algorithms

The Allegheny Family Screening Tool. (n.d.). https://www.alleghenycounty.us/Human-Services/News-Events/Accomplishments/Allegheny-Family-Screening-Tool.aspx

UNICEF. (2020). Global status report on preventing violence against children 2020. ISBN 978-92-4-000419-1.
