Google: Father takes pictures of sick toddler's genitals – and the trouble begins


Google Office in New York City: The company works with the National Center for Missing and Exploited Children (NCMEC)

Photo: ANDREW KELLY / Reuters

Private cell phone photos showing his young son's genitals have caused a San Francisco man a great deal of trouble, The New York Times reports in an article that explains how Google tries to identify and report instances of child abuse, and which cases push the company's partially AI-based detection system to its limits.

The case from the USA that the newspaper describes concerns a man named Mark, whose last name was not published to protect his identity. In February 2021, during the corona pandemic, Mark used his Android smartphone to document a swelling in the genital area of his son, a toddler of undisclosed age. One of the photos is said to have included Mark's hand, which, according to the report, helped make the swelling easier to see. His wife forwarded some of the images to a doctor via an online form, and the doctor used them to prescribe antibiotics for the child.

Google had concerns

At Google, where Mark stored his photos and videos in the cloud, the images were evidently not interpreted as having been taken for medical purposes. Soon after the photos were taken, the company decided to suspend the father's Google account entirely. From then on, he no longer had access to a decade of contacts, emails, and photos and videos stored in the cloud. He was also unable to easily log in to various third-party services that were connected to his email account.

In the course of the account deactivation, according to the newspaper report, Google referred to »harmful content« that constituted »a serious violation of Google's policies and may be illegal«. A linked info page, it says, lists a number of possible reasons, including »sexual abuse and exploitation of children«.
From this point on, Mark could no longer even use his cell phone number, writes the »New York Times«, as he was a customer of Google Fi, a mobile service provided by Google. Quite apart from the issue of child abuse, the case shows how badly it can disrupt people's everyday lives when digital companies whose services they have relied on for years suddenly drop them.
An appeal did not help

Mark fought back against the lockout. Using a form, he asked Google to review its decision, pointing to his son's infection. But Google would not be persuaded: his account remained blocked, and recently it was even deleted entirely. Google justified this with another, somewhat older image that the company noticed when examining Mark's data further. According to Google, it showed a naked woman next to Mark's young son.

Mark himself told the »New York Times« that he no longer has this photo. However, he assumes he simply wanted to capture a »private moment« with his wife and son in the morning.

But the father didn't just get in trouble with Google. Under US federal law, the company is obliged to report suspicious material it discovers to the CyberTipline of the National Center for Missing and Exploited Children (NCMEC). The NCMEC is a non-governmental organization that, among other things, maintains a database of the hash values of known child abuse images. It cooperates with several digital companies besides Google, including Facebook. And it also notified the police about the alleged abuse.
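For illustration, here is a minimal Python sketch of how such hash matching works in principle. The database entry and all names are hypothetical; real-world systems typically rely on perceptual hashes (such as Microsoft's PhotoDNA), which still match after resizing or re-encoding, whereas a plain cryptographic hash like the SHA-256 used below only matches byte-identical files.

```python
import hashlib
from pathlib import Path

# Hypothetical stand-in for the kind of hash database NCMEC maintains.
KNOWN_HASHES: set[str] = {
    "0" * 64,  # placeholder entry, not a hash of any real image
}

def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_image(path: Path) -> bool:
    """True if the file's hash appears in the database of known material."""
    return sha256_of_file(path) in KNOWN_HASHES
```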

Google needs to report suspicious detections

The San Francisco Police Department took over the investigation. A search warrant gave an investigator access to Mark's internet searches, his location history, his messages and all documents, photos and videos he had stored with Google services, writes the »New York Times«. All of this happened within a week of Mark taking the photos of the genital area.

What's remarkable about Mark's case is that it concerned photos he had taken himself, not already known problematic material. Providers often only look for the signatures of known images showing child abuse. According to the »New York Times«, Google, on the other hand, used a tool the company introduced in 2018 that is intended to help discover images of still unknown abuse victims. An artificial intelligence initially concluded that the material Mark had created could be problematic. If he hadn't activated automatic photo backup, Mark would probably have been spared his troubles.

This is what happens when the AI flags a photo

It wasn't just software that brought Mark's case to the police. According to Google, once its AI system has flagged content as potentially problematic, a human moderator also checks whether the content meets the US definition of child sexual abuse material. If this check is also positive, the account is blocked and a search is made for further problematic material. As Google told the »New York Times«, the responsible moderator did not notice any rash or redness when reviewing the photos of the genital area; the person was therefore not aware of the context in which the photos were taken.
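Purely to illustrate the two-stage process described here, the following Python sketch shows one way such a pipeline could be wired up. Google has not published the internals of its system, so the threshold, function names and return values below are all assumptions.

```python
from dataclasses import dataclass

# Illustrative only: the threshold and all names are assumptions.
FLAG_THRESHOLD = 0.8

@dataclass
class Upload:
    user_id: str
    image_id: str
    classifier_score: float  # AI model output between 0 and 1

def human_moderator_confirms(upload: Upload) -> bool:
    """Stub for the human review step: a trained moderator checks
    whether the flagged image meets the US legal definition of
    child sexual abuse material. Always clears in this sketch."""
    return False

def block_account(user_id: str) -> None:
    print(f"account {user_id} suspended")  # stand-in for a real suspension

def report_to_cybertipline(upload: Upload) -> None:
    print(f"image {upload.image_id} reported")  # legally required report

def handle_upload(upload: Upload) -> str:
    """Two-stage check as described in the article: the AI flags,
    a human confirms, and only then is the account blocked and the
    case reported to NCMEC's CyberTipline."""
    if upload.classifier_score < FLAG_THRESHOLD:
        return "ok"       # nothing suspicious detected
    if not human_moderator_confirms(upload):
        return "cleared"  # the human reviewer overrides the AI
    block_account(upload.user_id)
    report_to_cybertipline(upload)
    return "reported"
```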


How often the verification systems of companies like Google wrongly associate people like Mark with child abuse is unknown. The »New York Times« came across another, similar case from Houston online. It also involved photos a father had taken to document an infection in his son's intimate area, according to the article at the request of a pediatrician. The pictures were automatically uploaded to the Google cloud, the report says, and the man also sent them to his wife via the Google chat service Hangouts.

To put what happened into context, it is also noted that Google reported a total of 621,583 cases to NCMEC's CyberTipline in 2021. The NCMEC, in turn, alerted the authorities to potential victims in around 4,260 cases. In the end, Mark was a »false positive«, a result that politicians, the authorities and the digital companies apparently accept in view of their goal of curbing child abuse.

The police couldn’t reach Mark

In any case, IT experts have long warned of the danger of false alarms when AI is used on sensitive topics and human moderators often have too little time to examine flagged content closely.

Suzanne Haney of the American Academy of Pediatrics, to which many US pediatricians belong, advises parents not to take photos of their children's genitals, even when told to do so by a doctor. »The last thing you want is for a kid to feel comfortable with someone photographing their genitals,« says Haney. »If you absolutely must, avoid uploading the images to the cloud and delete them as soon as possible.«

Incidentally, the San Francisco Police Department quickly closed its investigation into Mark's case. According to the »New York Times«, it concluded that no crime had taken place, and the Houston case ended the same way. However, Mark only learned that the investigation was over because he contacted the responsible investigator himself. The attempt to reach him the other way around had failed, according to the article, because Mark's previous email address and telephone number no longer worked.
