How do social networks use artificial intelligence?

Social media: youth protection advocates are pushing for the use of AI to protect minors

Operators of social networks such as Facebook (including Instagram) or YouTube already use artificial intelligence (AI) techniques to curb the spread of depictions of child sexual abuse, revenge porn or copyrighted material, or to block such content from the outset with upload filters. Youth protection advocates now want providers to deploy such automated detection mechanisms more broadly so that children and adolescents do not see content that is unsuitable for them.

The technology is there

Service operators and programmers "have the technology in their hands, so they should develop solutions," said Wolfgang Kreißig, chairman of the Commission for Youth Media Protection (KJM), on Thursday in Berlin. Many online companies regard traditional approaches such as time limits, age verification systems or ID checks as off-putting hurdles. Child protection software, he said, runs only on Windows and therefore no longer reflects the reality of a smartphone-centric young generation. It therefore makes sense to rely on AI for the general protection of minors on the Internet and, if necessary, to hold providers "accountable".

Jugendschutz.net has already put this to the test and evaluated three machine-learning systems on individual types of content such as text, images and video. Experts at the joint federal and state competence center examined Facebook's fastText library, which is freely available as open source. The neural network was trained on around 6,000 individual pages from the news section of the organization's own test corpus, explained Andreas Marx, advisor for technical youth media protection at jugendschutz.net.
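Because fastText is freely available, the kind of supervised classifier described here can be reproduced in a few lines. The following is a minimal sketch, assuming illustrative file names, labels and parameters rather than the jugendschutz.net setup:

```python
# Minimal sketch of supervised text classification with fastText.
# File names, labels and parameters are illustrative assumptions,
# not the configuration used by jugendschutz.net.
import fasttext

# train.txt holds one page per line, prefixed with its label, e.g.
#   __label__pornography <extracted page text>
#   __label__extremism   <extracted page text>
#   __label__harmless    <extracted page text>
model = fasttext.train_supervised(input="train.txt", epoch=25, lr=0.5, wordNgrams=2)

# Classify a new page and print the most likely label with its confidence.
labels, probabilities = model.predict("text extracted from a web page")
print(labels[0], probabilities[0])
```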

Marx said he was surprised by the "high recognition rates achieved ad hoc": 83 to 90 percent in the pornography category, 77 percent for violence and 78 percent for extremism. An extremist propaganda site, for example, was immediately classified as "hate", he noted. Google's pre-trained image-recognition model Inception, built on TensorFlow, performed even better: the hit rate was 95 percent for self-injury, 94 percent for tattoos, which are difficult to distinguish from it, and 88 percent for everyday violence. In a demo with 23 pictures, the images were processed within seconds, with only a single error in which a body decoration was classified as cuts in the skin.
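A pre-trained Inception model is typically reused by keeping the network as a fixed feature extractor and attaching a small classifier for the custom categories. The sketch below shows this with TensorFlow/Keras; the folder layout and category count are assumptions for illustration, not the actual test material:

```python
# Transfer-learning sketch: reuse the pre-trained InceptionV3 network and
# train only a small classification head for custom categories such as
# self-injury, tattoos and everyday violence (illustrative assumptions).
import tensorflow as tf

base = tf.keras.applications.InceptionV3(weights="imagenet",
                                          include_top=False, pooling="avg")
base.trainable = False  # keep the pre-trained feature extractor fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(3, activation="softmax"),  # one output per category
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# "images/" is assumed to contain one subfolder per category; images are
# resized to Inception's 299x299 input and preprocessed accordingly.
train = tf.keras.utils.image_dataset_from_directory(
    "images/", image_size=(299, 299), label_mode="categorical")
preprocess = tf.keras.applications.inception_v3.preprocess_input
model.fit(train.map(lambda x, y: (preprocess(x), y)), epochs=5)
```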

Sometimes very high recognition rates

For photos and similar material, the youth protection experts also tried out Google's paid Vision API web service, which the search engine company uses for its own "SafeSearch" function. This ready-made cloud application recognized pornography with almost 100 percent accuracy and tattoos with 93 percent, but violence only with 76 percent and self-harm with 61 percent. Despite this weakness, Marx said, the interface allows a good contextual search, which can then recognize self-harm in a picture of a scratched-up arm in the first place.
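The Vision API exposes SafeSearch as a ready-made endpoint. A minimal query with Google's Python client could look as follows; valid Google Cloud credentials are assumed and the file path is only an example:

```python
# Minimal SafeSearch request against the Google Cloud Vision API.
# Requires the google-cloud-vision package and valid credentials.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("example.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.safe_search_detection(image=image)
annotation = response.safe_search_annotation

# The API returns a likelihood (UNKNOWN ... VERY_LIKELY) per category;
# its own categories are adult, spoof, medical, violence and racy.
print("adult:   ", annotation.adult.name)
print("violence:", annotation.violence.name)
print("medical: ", annotation.medical.name)
print("racy:    ", annotation.racy.name)
```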

"We see great potential here," said the practitioner, summarizing the results, which were also included in a situation report published by jugendschutz.net. The existing models actually only have to be merged in order to identify impairing content. The limits of AI are that the recognition performance depends on the choice of training material and that the algorithms used are non-transparent black box processes.

A uniform system is necessary

Stefan Glaser, head of the joint federal and state office, envisions a central control point where parents can activate the relevant functions for their adolescents at the push of a button by specifying their age. At the same time, content should be classified and labeled either by the service with the help of AI or by users through their own assessment, which requires no expertise. To create an incentive for the latter, it would be conceivable to set all content to an age rating of 12 by default. The operator would then automatically switch on safe mode as soon as it receives the information that a child or adolescent is using the service.
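A rough sketch of this gating logic, with the default rating of 12 and purely illustrative names and thresholds, might look like this:

```python
# Sketch of the described default age rating and safe mode: unlabeled
# content falls back to a rating of 12, and once the service knows a
# minor is using it, anything above the user's age is filtered out.
# Names and thresholds are illustrative assumptions.
DEFAULT_AGE_RATING = 12

def visible_to(user_age, content_rating=None):
    """Return True if a user of the given age may see the content."""
    rating = content_rating if content_rating is not None else DEFAULT_AGE_RATING
    return user_age >= rating

print(visible_to(10))      # False: unlabeled content defaults to rating 12
print(visible_to(14))      # True
print(visible_to(10, 6))   # True: explicitly labeled as suitable from age 6
```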

When using the "exciting interface" there are still "great challenges", admitted KJM boss Kreißig. But it is time to have a dialogue with everyone involved. Providers are already obliged by the State Treaty on Youth Media Protection (JMStV) to withhold self-harm, for example, from those in need of protection, as these have a negative impact on development. The federal states therefore wondered whether they could also hold other players such as manufacturers of operating systems accountable. The federal government should also have a draft for a youth protection reform by the end of the year. (mho)
