Tag Archives: social media addiction

How Meta and TikTok Turn User Rage into Revenue, While Pretending to Keep You Safe

Whistleblowers from Meta and TikTok revealed that both companies knowingly allowed more harmful content, including violence, extremism, and exploitation of minors, on their platforms to win the algorithm-driven engagement race, prioritizing stock prices and political relationships over user safety.

Disclaimer: According to Kate Miller at The Fastest Media, the original source for this story, Cybernews, has been caught in significant inaccuracies.

Cyberbullying Enabled

These platforms also prioritize resolving complaints from politicians over those from vulnerable people, such as minors experiencing cyberbullying. 

“While platforms and lawmakers take their sweet time debating what borderline content is, people are left to deal with the psychological fallout of social media addiction. From the inability to tell right from wrong or fake from real, loss of concentration, sleep, and even sense of self, to radicalization, depression, and self harm – the consequences of companies toying with their algorithms to meet business goals are dire for humanity,” writes Jurgita Lapienytė, Editor-in-Chief at Cybernews. 

Profit Over Safety?

A new BBC report revealed what we suspected all along – big tech platforms turn a blind eye to harmful content for the sake of profit. Platforms allow so-called borderline content – misogynistic, sexist, racist, conspiracy-driven – that is harmful yet legal.

According to the report, based on accounts from a dozen whistleblowers and insiders, Meta engineers were instructed to allow more borderline content to compete with TikTok. Meanwhile, TikTok is said to have prioritized several user complaints involving politicians to “avoid threats of regulation or bans.”

Unsurprisingly, big tech platforms denied any wrongdoing, insisting that they do not amplify harmful content.

Algorithms are allegedly designed to better understand user interests and needs, and cater to them accordingly. Unfortunately, most of what a user “wants” turns out to be conspiracy theories, AI slop, deepfakes, and pro-Nazi content. Or at least the algorithm seems to think so – because most of this is so-called ragebait content, designed to provoke a strong response from the user.

And since users engage with it, the algorithm is tricked into “thinking” this is what people want. The humans behind the algorithm surely understand this is not the case, but clicks translate to cash. So why would Big Tech saw off the branch it’s sitting on?

In 2024, Meta earned $16 billion, or 10% of its annual revenue, from scam ads and banned goods. The information comes not from a third-party analytics firm but from Meta’s own documents, proving that the tech giant is well aware of how much harm it can spread – and how much money it can make along the way.


It’s not only our mental health that’s at stake. Adversaries, well aware of algorithmic logic, abuse it to spread misinformation and outright lies, sowing division to influence elections all over the world – making us wonder just how much harm performative compliance has already done to democracy.

Cybernews is a globally recognized independent media outlet where journalists and security experts debunk cyber threats through research, testing, and data.

Cybernews has earned worldwide attention for its high-impact research and discoveries, which have uncovered some of the internet’s most significant security exposures and data leaks. Notable ones include:

  • Cybernews researchers found that Android AI apps leak Google secrets the most, with 700TB of files already exposed.
  • Cybernews researchers discovered multiple open datasets comprising 16 billion login credentials from infostealer malware, social media, developer portals, and corporate networks – highlighting the unprecedented risks of account takeovers, phishing, and business email compromise.
  • The research team also studied over 19 billion newly exposed passwords and found that most people use 8–10 character passwords (42%).
  • Cybernews researchers analyzed 156,080 randomly selected iOS apps – around 8% of the apps present on the App Store – and uncovered a massive oversight: 71% of them expose sensitive data.
  • Recently, Bob Dyachenko, a cybersecurity researcher and owner of SecurityDiscovery.com, and the Cybernews security research team discovered an unprotected Elasticsearch index, which contained a wide range of sensitive personal details related to the entire population of Georgia. 
  • The team analyzed the new Pixel 9 Pro XL smartphone’s web traffic, and found that Google’s latest flagship smartphone frequently transmits private user data to the tech giant before any app is installed.
  • The team revealed that a massive data leak at MC2 Data, a background check firm, affects one-third of the US population.
  • The Cybernews security research team discovered that the 50 most popular Android apps require 11 dangerous permissions on average.
  • An analysis by Cybernews researchers uncovered over a million publicly exposed secrets from more than 58,000 websites’ exposed environment (.env) files.
  • The team revealed that Australia’s football governing body, Football Australia, has leaked secret keys potentially opening access to 127 buckets of data, including ticket buyers’ personal data and players’ contracts and documents.
  • The Cybernews research team, in collaboration with cybersecurity researcher Bob Dyachenko, discovered a massive data leak containing information from numerous past breaches, comprising 12 terabytes of data and spanning over 26 billion records.
  • The team discovered an open redirect vulnerability plaguing NASA’s Astrobiology website.

For the Silo, Živilė Kasparavičiūtė.

Featured image via Cybernews: Elon Musk’s artificial intelligence (AI) firm xAI has said it is working to remove posts by its chatbot Grok that praised Adolf Hitler as the best person to deal with “vile anti-white hate.”

Messenger Kids Facebook App Creeps Technology Into Family Life

On December 4, the New York Times ran an article about how Facebook just introduced a new app called Messenger Kids. According to Facebook, this app makes it easier for kids to safely video chat and message family and friends. Per its privacy policy, the app collects registration details from parents, such as a child’s full name. It also collects the texts, audio, and videos children send, as well as information about whom the child interacts with on the service, what features they use, and how long they use them. In launching this new app, Facebook has ignited a fierce debate about how young is too young for children to use mobile apps, and how parents should deal with the creep of technology into family life.

One mother has stepped into the debate with an alternative. Janice Taylor created a website and application called Mazu, which teaches children and families how to use digital media responsibly and become positive digital citizens. She cautions parents, saying they need to ask themselves: “Do you trust Facebook as a medium to protect your children?”

Bing search engine results for “Facebook messenger kids”

“Facebook’s only goal is to monetize a new user base and beat SnapChat at it. Children should never be used as ammunition in the social media war for dominance,” Taylor explains.

Based on the concept that “It takes a village to raise a child,” Taylor takes the position that every adult has a role to play in the well-being of the child and society. Taylor created Mazu to build a healthy digital village for families that is founded in love and core values. “Traditional social media preys on our desires to be liked, to be validated, and to be rewarded. That’s why the ‘Like’ button is so addicting and why we at Mazu don’t have one.”

Since its inception in 2010, Mazu, with over 250,000 users and growing, has evolved and, through partnerships with professional sports teams, has brought the ‘it takes a village’ mentality online. With a suite of family-friendly apps, Mazu connects kids to their family, friends, and the teams and brands they love in a way that is safe, healthy, and fun. To date, it has raised more than $6 million from non-Silicon Valley companies.

A recent Reuters Facebook post about Facebook Messenger Kids

“We believe that parents matter in the digital lives of their children, that’s why our COPPA-certified apps are created with parents in mind. We believe in the power of family and staying connected,” says Taylor. “By building our products around a set of values and using the community to build each other up, we believe we can create better digital citizens.”

Janice Taylor is a social entrepreneur, mother, inspirational speaker, author, and online safety advocate. She has a Bachelor of Arts in Psychology with an honors thesis that focused on self-esteem and self-efficacy among women. It was from this research that she sought to create a solution to the issue of social media addiction and how it was affecting women, children, and families. For the Silo, Trina Kaye.
