Editor’s note: Kara Alaimo, associate professor of communications at Fairleigh Dickinson University, writes about issues affecting women and social media. Her book, “Beyond Influence: Why Social Media Is Harmful to Women and Girls, and How We Can Take It Back,” was published by Alcove Press on March 5, 2024. Follow her on Instagram, Facebook and X. The opinions expressed in this commentary are her own. Read more opinion on CNN.
CNN
—
On Wednesday, the chief executives of Meta, TikTok, X, Snap and Discord testified before the Senate about what they are doing to protect children from online harm. Meta’s Mark Zuckerberg and TikTok’s Shou Chew appeared voluntarily, while Illinois Democratic Sen. Dick Durbin criticized Snap’s Evan Spiegel, X’s Linda Yaccarino and Discord’s Jason Citron for having to be subpoenaed, noting that U.S. Marshals had to be sent to Citron’s office to serve his subpoena.
Durbin blamed social networks for harming children’s mental health, saying platforms are “not only contributing to this crisis, but also responsible for many of the dangers children face online.”
Senators also expressed concern about how children are being exploited online. For example, “sextortionists” persuade children to share racy images, sometimes under the guise of forming romantic relationships, and then threaten to publish the images unless they are given more explicit content or money (and in some cases, both). This can destroy victims’ lives and put them at risk of depression and suicide. “You have blood on your hands,” South Carolina Republican Sen. Lindsey Graham told the executives.
Ahead of the testimony, tech companies announced new efforts to protect children. Apps like Instagram and TikTok now encourage kids to take breaks or limit screen time, and have changed their algorithms to show kids less harmful content, such as posts about eating disorders. Meta says it hides this type of inappropriate content from children and has changed default privacy settings for teens so strangers can’t send them messages. Snapchat has given parents more monitoring options, including information about their children’s safety settings.
Chew testified that TikTok accounts for children under the age of 16 are set to private and are not recommended to strangers, and Yaccarino said that users between the ages of 13 and 17 cannot receive messages on X from strangers they have not approved. Spiegel pointed out that there are no public likes or comments on Snapchat. These are all important features.
Zuckerberg also stood up and apologized to the families of children harmed by social media who attended the hearing, saying: “No one should have to go through what your families have gone through. That’s why we have invested so much, and we will continue to work across the industry to ensure that no one has to go through what your families have had to go through.”
But that’s not enough. Lawmakers and technology companies need to do more to protect children.
The Stop CSAM (Child Sexual Abuse Material) Act of 2023 would allow technology companies to be held civilly liable for hosting child sexual abuse material. This would be an important way to push technology companies to step up their efforts to protect children from sextortion and other forms of online exploitation.
The SHIELD Act would criminalize sharing or threatening to share intimate images without a person’s consent (commonly known as “revenge porn”). This is also an essential way to protect both children and adults.
Technology companies also have a lot of work to do. They may claim to be reducing posts that are harmful to children, but it’s still unclear how they decide what counts as harmful. Tech companies need to work with youth health experts to develop and publish standards for the content kids can see, so that potentially harmful posts about things like body image and mental health don’t appear in their feeds, and so everyone can know what the rules are and hold companies to them. Then they need to hire more human moderators to determine whether content meets those standards before showing it to children.
Artificial intelligence tools can’t be trusted with this scrutiny. For example, according to one internal document, Meta’s automated systems removed less than 1% of content that violated the company’s rules against violence and incitement. (Zuckerberg said at the hearing that “something like 99%” of the content Meta removes is identified automatically using AI.) But finding ways for humans to vet content isn’t as difficult as you might think. Snapchat, for example, doesn’t promote videos from creators in its program to more than 25 people until they have been reviewed by human moderators.
We also need to know what big trends kids are exposed to online. At the moment, none of these platforms provide good indicators of what is trending on them. For example, TikTok recently disabled the search function of a tool that let users look up popular hashtags on the app, following reports that topics censored by the Chinese government were far less prevalent on TikTok than on Instagram. All social apps should offer tools that show what’s trending. They should also break out what’s trending among users under 18, so parents know what to talk to their kids about.
As I write in my recent book, “Beyond Influence: Why Social Media Is Harmful to Women and Girls, and How We Can Take It Back,” when children search for content related to things like mental health, technology companies should show them a box with links to organizations that can help. It’s troubling to think that children searching for this content could instead be served videos that make the problem worse.
I also argue in the book that these apps need to indicate when a photo has been filtered or otherwise manipulated, so children are constantly reminded that the images they see of other people’s bodies are often not real. This can help children develop a realistic body image.
Finally, these tech companies should do a better job of leveraging their platforms to provide helpful and empowering information to children, including lessons on the healthy use of social media. For example, following the nude deepfakes of Taylor Swift that circulated online in recent days, social apps should show children content that educates them about how to spot fake images and why they should never create or share them. As I write in my book, when nude images of women circulate online, they put women at risk of sexual assault, job loss, dating difficulties, depression and even suicide.
Tech executives promised senators they would protect children. But they haven’t promised to do what it will actually take to protect kids’ physical and mental health. Keeping children safe on these apps will require more human moderators, mental health resources, lessons for kids, more disclosure when content has been manipulated, and the creation and enforcement of better standards for the content kids see. And lawmakers need to pass legislation to crack down on online sexual exploitation. Those are solutions parents would actually like.