Morning Briefing
Summaries of health policy coverage from major news organizations
For Those Who Raised Alarm On Social Media Harms, Verdicts Are Validation
For years, parents, teenagers, pediatricians, educators and whistleblowers have pushed the idea that social media is detrimental to young people's mental health and can lead to addiction, eating disorders, sexual exploitation and suicide. For the first time, juries in two states took their side. In Los Angeles on Wednesday, a jury found both Meta and YouTube liable for harms to children using their services. In New Mexico, a jury determined that Meta knowingly harmed children's mental health and concealed what it knew about child sexual exploitation on its platforms. Tech watchdog groups, families and children's advocates cheered the jury decisions. (Ortutay, 3/26)
A Colorado woman whose son died from a fentanyl-laced pill he bought through social media celebrated a pair of verdicts this week against Meta and YouTube that she said opened the door for companies to be held responsible for harms to children using their platforms. "The truth is out, and it's time that they are held accountable for the design of the platforms," said Kimberly Osterman, whose son Max died in 2021 at age 18. "They put profits over safety." (Peipert and Schoenbaum, 3/27)
A bipartisan proposal to set guardrails around social media sites for children advanced in the Minnesota House on Thursday, one day after a landmark case in which tech companies were found liable for creating products that led to harmful behavior. The Minnesota bill would require parental consent for someone under 15 to make an account and would limit features bill authors say are addictive: infinite scrolling, autoplay of videos and push notifications for those users. Paid ads would also be prohibited, and the strongest privacy settings would need to be the default. (Cummings and Lisignoli, 3/26)
Also —
Artificial intelligence chatbots are so prone to flattering and validating their human users that they are giving bad advice that can damage relationships and reinforce harmful behaviors, according to a new study that explores the dangers of AI telling people what they want to hear. The study, published Thursday in the journal Science, tested 11 leading AI systems and found they all showed varying degrees of sycophancy — behavior that was overly agreeable and affirming. The problem is not just that they dispense inappropriate advice but that people trust and prefer AI more when the chatbots are justifying their convictions. (O'Brien, 3/26)