How Meta and TikTok Turn User Rage into Revenue, While Pretending to Keep You Safe

Whistleblowers from Meta and TikTok revealed that both companies knowingly allowed more harmful content, including violence, extremism, and exploitation of minors, on their platforms to win the algorithm-driven engagement race, prioritizing stock prices and political relationships over user safety.

Cyberbullying Enabled

These platforms also prioritize resolving complaints from politicians over those from vulnerable people, such as minors experiencing cyberbullying. 

“While platforms and lawmakers take their sweet time debating what borderline content is, people are left to deal with the psychological fallout of social media addiction. From the inability to tell right from wrong or fake from real, loss of concentration, sleep, and even sense of self, to radicalization, depression, and self harm – the consequences of companies toying with their algorithms to meet business goals are dire for humanity,” writes Jurgita Lapienytė, Editor-in-Chief at Cybernews. 

Profit Over Safety?

A new BBC report revealed what we suspected all along – big tech platforms turn a blind eye to harmful content for the sake of profit. Platforms allow so-called borderline content – misogynistic, sexist, racist, conspiracy-driven – that is harmful yet legal.

According to the report, based on accounts from a dozen whistleblowers and insiders, Meta engineers were instructed to allow more borderline content to compete with TikTok. Meanwhile, TikTok is said to have prioritized several user complaints involving politicians to “avoid threats of regulation or bans.”

Unsurprisingly, big tech platforms denied any wrongdoing, insisting that they do not amplify harmful content.

Algorithms are allegedly designed to better understand user interests and needs, and cater to them accordingly. Unfortunately, most of what a user “wants” turns out to be conspiracy theories, AI slop, deepfakes, and pro-Nazi content. Or at least the algorithm seems to think so – because most of this is so-called ragebait content, designed to provoke a strong response from the user.

And since users engage with it, the algorithm is tricked into “thinking” this is what people want. Humans behind the algorithm must clearly understand this is not the case, but clicks translate to cash. So why would Big Tech cut the branch it’s sitting on?

In 2024, Meta earned $16 billion, or 10% of its annual revenue, from scam ads and banned goods. The information comes not from a third-party analytics firm but from Meta’s own documents, proving that the tech giant is well aware of how much harm it can spread – and how much money it can make along the way.

It’s not only our mental health that’s at stake. Adversaries, well aware of algorithmic logic, abuse it to spread misinformation and straightforward lies, sowing division to influence elections all over the world – making us wonder just how much harm performative compliance has already done to democracy.

Cybernews is a globally recognized independent media outlet where journalists and security experts debunk cyber myths through research, testing, and data.

Cybernews has earned worldwide attention for its high-impact research and discoveries, which have uncovered some of the internet’s most significant security exposures and data leaks. Notable ones include:

  • Cybernews researchers found that Android AI apps leak Google secrets the most, with 700TB of files already exposed.
  • The team found that an unprotected database owned by IDMerit, likely containing KYC data, had spilled vast amounts of personal records across multiple countries.
  • Cybernews researchers discovered multiple open datasets comprising 16 billion login credentials from infostealer malware, social media, developer portals, and corporate networks – highlighting the unprecedented risks of account takeovers, phishing, and business email compromise.
  • The research team also studied over 19 billion newly exposed passwords and found that most people (42%) use 8–10 character passwords.
  • Cybernews researchers analyzed 156,080 randomly selected iOS apps – around 8% of the apps present on the App Store – and uncovered a massive oversight: 71% of them expose sensitive data.
  • Recently, Bob Dyachenko, a cybersecurity researcher and owner of SecurityDiscovery.com, and the Cybernews security research team discovered an unprotected Elasticsearch index, which contained a wide range of sensitive personal details related to the entire population of Georgia. 
  • The team analyzed the new Pixel 9 Pro XL smartphone’s web traffic, and found that Google’s latest flagship smartphone frequently transmits private user data to the tech giant before any app is installed.
  • The team revealed that a massive data leak at MC2 Data, a background check firm, affects one-third of the US population.
  • The Cybernews security research team discovered that the 50 most popular Android apps require 11 dangerous permissions on average.
  • An analysis by Cybernews researchers uncovered over a million publicly exposed secrets from the exposed environment (.env) files of over 58 thousand websites.
  • The team revealed that Australia’s football governing body, Football Australia, has leaked secret keys potentially opening access to 127 buckets of data, including ticket buyers’ personal data and players’ contracts and documents.
  • The Cybernews research team, in collaboration with cybersecurity researcher Bob Dyachenko, discovered a massive data leak containing information from numerous past breaches, comprising 12 terabytes of data and spanning over 26 billion records.
  • The team analyzed NASA’s website, and discovered an open redirect vulnerability plaguing NASA’s Astrobiology website.

For the Silo, Živilė Kasparavičiūtė.

Featured image via Cybernews.