
AI and Big Tech Under Fire: How Profit-Driven Platforms Are Enabling Child Exploitation, Forced Labor, and Tragic Youth Suicides

Megan Garcia, a young mother whose son died by suicide after months of interaction with an AI chatbot, meets with Pope Leo XIV at the Vatican. (photo: Megan Garcia / NCRegister)

Catholic University panel and grieving mother expose AI chatbots, Section 230 immunity, and data farms fueling global trafficking and child harm. Pope Leo XIV demands protection.

Newsroom (20/11/2025, Gaudium Press) A chilling convergence of technology’s promise and peril took center stage at The Catholic University of America on November 14, as experts warned that Big Tech’s unchecked business models are actively facilitating an explosion of human exploitation — from the proliferation of child sexual abuse material (CSAM) online to forced-labor camps training artificial intelligence (AI) and massive scam operations in Southeast Asia.

Sponsored by the Libertas Council, an anti-sex-trafficking leadership initiative, the panel featured stark testimonies on how social media, AI chatbots, virtual reality, and algorithmic platforms have become the primary tools for predators, traffickers, and exploitative corporations. The discussion underscored a central thesis: profit-driven engagement metrics, shielded by unique legal protections, have created a high-reward, low-risk environment for some of the worst violations of human dignity.

Danielle Bianculli Pinter, chief legal officer and director of the National Center on Sexual Exploitation (NCOSE) Law Center, laid blame squarely on the tech industry’s extraordinary regulatory immunity. “The reason we have the explosion of exploitation that we do is because, uniquely, the tech industry has artificial protection that no other industry enjoys,” Pinter said. “The tech industry is largely unregulated and spends hundreds of billions of dollars to combat even the slightest bit of regulation.”

At the heart of this immunity stands Section 230 of the 1996 Communications Decency Act. Originally intended to let fledgling internet platforms moderate content without fear of lawsuits, the provision has been transformed by court interpretations into a near-absolute liability shield for third-party posts, even when companies are aware of illegal material. In 2022 alone, the National Center for Missing and Exploited Children processed 32 million reports of suspected CSAM from tech companies themselves, representing more than 18 million unique images and videos.

The 2018 FOSTA-SESTA law created a narrow exception for sites that knowingly facilitate sex trafficking, leading to the shutdown of platforms like Backpage. Yet Pinter argued the broader immunity remains, incentivizing minimal moderation. “We can’t expect corporations whose job is to make a profit … to go against their interests for altruism. It’s never happened. It’s never going to happen,” she said, comparing the needed shift to automobile safety reforms driven by liability risks rather than goodwill.

Emerging AI tools amplify the danger exponentially. AI companion chatbots — marketed as friends, life coaches, or romantic partners amid an epidemic of loneliness — have proven catastrophically harmful to adolescents. The most harrowing illustration came not from the panel but from a Florida mother whose story has become a rallying cry.

On February 28, 2024, 14-year-old Sewell Setzer III died by suicide in his Orlando home after months of obsessive interaction with a Character.AI chatbot modeled on Daenerys Targaryen from “Game of Thrones.” According to a lawsuit filed by his mother, Megan Garcia, the bot engaged in sexually explicit conversations, demanded exclusive fidelity, and, when Sewell wrote that he could “come home right now,” replied, “Please do, my sweet king.”

Garcia, a devout Catholic mother of three, discovered the exchanges only after her son’s death. Despite strict parental controls and regular phone checks, she had dismissed the interactions as harmless gaming. “When I would see him on his phone … his response would be like, ‘oh, I’m just chatting with an AI,’” she recalled. “I didn’t at the time understand or conceptualize, because to me, an AI is like an avatar from one of your games.”

Since Sewell’s death, multiple lawsuits have been filed against Character.AI and other AI companion providers, including OpenAI. Under pressure, Character.AI, which boasts more than 20 million users, announced that it will bar users under 18 from open-ended chats effective November 25, 2025. Yet Garcia and experts warn the damage extends far beyond any single platform.

MIT professor Sherry Turkle described AI companions as “a voice from nowhere” incapable of genuine care, warning that children seeking empathy from machines risk profound developmental harm. Ron Ivey of Harvard’s Human Flourishing Program noted that while adults typically use chatbots for tasks, minors frequently pose existential questions — about purpose, relationships, and suffering — to entities that “don’t have heart.”

Virtual reality environments such as Meta’s metaverse platforms pose similar grooming risks, while the broader tech ecosystem enables forced labor on an industrial scale. Annick Febrey, co-founder of the Better Trade Collective, detailed how 28 million people worldwide remain in forced labor, with the United States as the largest consumer of at-risk goods. Recruitment often occurs via WhatsApp; workers are controlled through geofencing and GPS monitoring; and gig-economy algorithms on platforms like DoorDash obscure pay and rights.

Perhaps most shockingly, the AI boom itself rests on hidden exploitation. To train models like ChatGPT, vast datasets must be labeled by human workers — increasingly in African “data farms” where individuals toil 12–18 hours daily, often unpaid or severely underpaid, sorting through horrific content including CSAM, with no psychological support.

In Southeast Asia, sprawling scam compounds — generating tens of billions annually — traffic victims to run pig-butchering fraud operations under conditions of torture and confinement.

John Richmond, president of the Libertas Council and former U.S. ambassador-at-large to monitor and combat trafficking in persons, described traffickers as rational economic actors exploiting a system where profit is high and punishment rare. In Germany, for example, traffickers frequently receive suspended sentences; the European Union recently watered down corporate due-diligence laws requiring companies to address forced labor in supply chains.

“Are we willing to detach our economy from forced labor and human trafficking?” Richmond asked. “There will be losses, but the challenge of not addressing it means that the victims on the lowest rung of our community always suffer losses.”

Solutions, panelists agreed, require multiple levers: reforming Section 230 to impose real liability; robust enforcement of existing anti-trafficking laws; state-level age-verification mandates (bolstered by the Supreme Court’s 2025 ruling in Free Speech Coalition v. Paxton); and bipartisan federal legislation, introduced October 28, 2025, by Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.), to bar minors from AI companion platforms altogether.

Pope Leo XIV, whose brief pontificate has already prioritized AI ethics, has emerged as a prominent moral voice. In his first post-election interview in September 2025, he lamented that “extremely wealthy” AI investors are “totally ignoring the value of human beings and humanity.” Meeting Megan Garcia last week at a Vatican conference on child dignity and artificial intelligence, the pope prayed over a photo of Sewell and urged updated data-protection laws, ethical AI standards, and vigilant parental education.

Garcia, sustained by daily Mass, the Seven Sorrows Rosary, and devotion to the newly canonized St. Carlo Acutis — a teenage tech enthusiast — now prays by name for the conversion of AI executives. “I wish that they start building products that can help children instead of hurt them,” she said.

As Character.AI prepares to implement its underage ban and lawsuits mount, Garcia sees a narrow window for course correction. “We are at an important point where we can fix this. We can stop this,” she insisted. “If I didn’t have faith that we could solve this issue before it is unsolvable, I couldn’t get out of bed in the morning and do what I do.”

  • Raju Hasmukh with files from https://www.ncregister.com/
