Tag Archives: deepfake

How Meta and TikTok Turn User Rage into Revenue, While Pretending to Keep You Safe

Whistleblowers from Meta and TikTok revealed that both companies knowingly allowed more harmful content, including violence, extremism, and exploitation of minors, on their platforms to win the algorithm-driven engagement race, prioritizing stock prices and political relationships over user safety.

Disclaimer: According to Kate Miller at The Fastest Media, the original source for this story, Cybernews, has been caught in significant inaccuracies.

Cyberbullying Enabled

These platforms also prioritize resolving complaints from politicians over those from vulnerable people, such as minors experiencing cyberbullying. 

“While platforms and lawmakers take their sweet time debating what borderline content is, people are left to deal with the psychological fallout of social media addiction. From the inability to tell right from wrong or fake from real, loss of concentration, sleep, and even sense of self, to radicalization, depression, and self harm – the consequences of companies toying with their algorithms to meet business goals are dire for humanity,” writes Jurgita Lapienytė, Editor-in-Chief at Cybernews. 

Profit Over Safety?

A new BBC report revealed what we suspected all along – big tech platforms turn a blind eye to harmful content for the sake of profit. Platforms allow so-called borderline content – misogynistic, sexist, racist, conspiracy-driven – that is harmful yet legal.

According to the report, based on accounts from a dozen whistleblowers and insiders, Meta engineers were instructed to allow more borderline content to compete with TikTok. Meanwhile, TikTok is said to have prioritized several user complaints involving politicians to “avoid threats of regulation or bans.”

Unsurprisingly, big tech platforms denied any wrongdoing, insisting that they do not amplify harmful content.

Algorithms are allegedly designed to better understand user interests and needs, and cater to them accordingly. Unfortunately, most of what a user “wants” turns out to be conspiracy theories, AI slop, deepfakes, and pro-Nazi content. Or at least the algorithm seems to think so – because most of this is so-called ragebait content, designed to provoke a strong response from the user.

And since users engage with it, the algorithm is tricked into “thinking” this is what people want. The humans behind the algorithm surely understand this is not the case, but clicks translate to cash. So why would Big Tech saw off the branch it’s sitting on?

In 2024, Meta earned $16 billion, or 10% of its annual revenue, from scam ads and banned goods. The information comes not from a third-party analytics firm but from Meta’s own documents, proving that the tech giant is well aware of how much harm it can spread – and how much money it can make along the way.

It’s not only our mental health that’s at stake. Adversaries, well aware of algorithmic logic, abuse it to spread misinformation and straightforward lies, sowing division to influence elections all over the world – making us wonder just how much harm performative compliance has already done to democracy.

Cybernews is a globally recognized independent media outlet where journalists and security experts debunk cyber myths and expose threats through research, testing, and data.

Cybernews has earned worldwide attention for its high-impact research and discoveries, which have uncovered some of the internet’s most significant security exposures and data leaks. Notable ones include:

  • Cybernews researchers found that Android AI apps leak Google secrets the most, with 700TB of files already exposed.
  • Cybernews researchers discovered multiple open datasets comprising 16 billion login credentials from infostealer malware, social media, developer portals, and corporate networks – highlighting the unprecedented risks of account takeovers, phishing, and business email compromise.
  • The research team also studied over 19 billion newly exposed passwords and found that the most common choice is an 8–10 character password (42%).
  • Cybernews researchers analyzed 156,080 randomly selected iOS apps – around 8% of the apps present on the App Store – and uncovered a massive oversight: 71% of them expose sensitive data.
  • Recently, Bob Dyachenko, a cybersecurity researcher and owner of SecurityDiscovery.com, and the Cybernews security research team discovered an unprotected Elasticsearch index, which contained a wide range of sensitive personal details related to the entire population of Georgia. 
  • The team analyzed the new Pixel 9 Pro XL smartphone’s web traffic, and found that Google’s latest flagship smartphone frequently transmits private user data to the tech giant before any app is installed.
  • The team revealed that a massive data leak at MC2 Data, a background check firm, affects one-third of the US population.
  • The Cybernews security research team discovered that the 50 most popular Android apps require 11 dangerous permissions on average.
  • An analysis by Cybernews researchers uncovered over a million publicly exposed secrets from the exposed environment (.env) files of more than 58,000 websites.
  • The team revealed that Australia’s football governing body, Football Australia, has leaked secret keys potentially opening access to 127 buckets of data, including ticket buyers’ personal data and players’ contracts and documents.
  • The Cybernews research team, in collaboration with cybersecurity researcher Bob Dyachenko, discovered a massive data leak containing information from numerous past breaches, comprising 12 terabytes of data and spanning over 26 billion records.
  • The team analyzed NASA’s website, and discovered an open redirect vulnerability plaguing NASA’s Astrobiology website.

For the Silo, Živilė Kasparavičiūtė.

Featured image via Cybernews: Elon Musk’s artificial intelligence (AI) firm xAI has said it is working to remove posts by its chatbot Grok that praised Adolf Hitler as the best person to deal with “vile anti-white hate.”

5 Free AI-Identifying Tools

Fake Photo? Manipulated Video? How to Spot Sham AI

These tools exist to preserve the credibility of digital media and safeguard users from falling victim to scams. As synthetic media becomes more sophisticated, identifying AI-generated manipulations presents a unique challenge. Fortunately, numerous free apps and tools are readily available that let users validate photo and video authenticity with ease – a major step forward in safeguarding trust in a world increasingly influenced by AI-generated visuals. More below.

How AI Drives Misinformation

Amid the onslaught of highly concerning news headlines spotlighting how deepfake AI-generated photo and video scams are driving rampant misinformation and wreaking havoc across digital, cultural, workplace, political, and other societal frameworks, solutions are emerging to combat AI-driven misinformation and fraud before people fall victim to scams.

One AI disruptor transforming the fight against AI fraud is BitMind—an AI deepfake detection authority that offers a suite of free apps and tools that instantly identify and flag AI-generated images before you fall victim.

Built by AI Engineers

Built by a team of AI engineers hailing from leading tech companies like Amazon, Poshmark, NEAR, and Ledgersafe, BitMind’s instant detection of deepfakes helps uphold the credibility of the media, guaranteeing the authenticity of the information we use. Strong deepfake detection enhances digital interactions, supports better decision-making, and strengthens the integrity of the modern digital world—serving to protect reputations, shield finances, and maintain trust for celebrities, politicians, public figures … and everyone else.

For both B2C and B2B use, these 5 BitMind tools are free and accessible to anyone: 

  • AI Detector App: A simple web page where users can drag-and-drop suspicious images for fast deepfake detection results.
  • Chrome Extension: Flags AI-created content in real time, while browsing.
  • X Bot: Verifies whether images on X/Twitter are real or AI-generated.
  • Discord Bot: Verifies whether images are real or AI-generated via its Discord integration.
  • AI or Not Game: A fun Telegram bot that tests your ability to distinguish between AI-generated and human-created images.

“Recognizing the need to integrate deepfake detection into everyday technology use, our applications fit seamlessly into users’ lives,” notes Ken Miyachi, BitMind CEO. “For example, the BitMind Detection App is a user-friendly application that allows individuals to upload images and quickly assess the likelihood of them being real or synthetic. Additionally, the Browser Extension enhances online security by analyzing images on web pages in real time and providing immediate feedback on their authenticity through our subnet validators. These tools are designed to empower users, enabling them to navigate digital spaces with confidence and security.”

As the world’s first decentralized Deepfake Detection System, BitMind is an open-source technology that developers can easily integrate into their existing platforms to provide accurate, real-time detection of deepfakes.

“Deepfake technology has emerged as both a marvel and a menace,” continued Miyachi. “With the capacity to create synthetic media that closely mimics reality, deepfakes present unprecedented challenges in privacy, security, and information integrity. Responding to these challenges, we introduced the BitMind Subnet, a breakthrough on the Bittensor network, dedicated to the detection and mitigation of deepfakes.”

According to Miyachi, here are key reasons why BitMind technology is a game changer:

  • The BitMind Subnet represents a pivotal advancement in the fight against AI-generated misinformation. Operating on a decentralized AI platform, this deepfake detection system employs sophisticated AI models to accurately distinguish between real and manipulated content. This not only enhances the security of digital media but also preserves the essential trust in digital interactions.
  • The BitMind Subnet is equipped with advanced detection algorithms that utilize both generative and discriminative AI technologies to provide a robust mechanism for identifying deepfakes.
  • BitMind employs cutting-edge techniques, including Neighborhood Pixel Relationships, ensuring competitive accuracy in detection. The operation of the subnet is decentralized, with miners across the network running binary classifiers. This setup ensures that the detection processes are widespread and not confined to any centralized repository, enhancing both the reliability and integrity of the detection results.
  • Community collaboration is a cornerstone of the BitMind Subnet, which actively encourages the community to contribute to its evolving codebase; by engaging with developers and researchers, the subnet is continuously improved and updated with the latest advancements in AI.
  • BitMind combines its extensive industry expertise, cutting-edge academic research, and a deep passion for technology. The team has a proven track record in AI, blockchain, and systems architecture, successfully leading tech projects and founding innovative companies.
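
To make the idea of miner-run binary classifiers and pixel-relationship features more concrete, here is a purely hypothetical toy sketch (not BitMind’s actual code or API): it computes one naive local-texture statistic and thresholds it, standing in for the trained models that would do this work in a real detection system.

```python
# Hypothetical illustration only - NOT BitMind's implementation.
# Synthetic images can exhibit local pixel statistics (e.g., unusually
# smooth neighbor differences) that differ from camera output. A real
# detector would feed many such features into a trained binary classifier.

def neighbor_diff_score(image):
    """Mean absolute difference between horizontally adjacent pixels.

    `image` is a 2D list of grayscale values in [0, 255].
    Lower scores mean smoother local texture.
    """
    total, count = 0, 0
    for row in image:
        for left, right in zip(row, row[1:]):
            total += abs(left - right)
            count += 1
    return total / count if count else 0.0

def classify(image, threshold=2.0):
    """Toy binary decision: 'synthetic' if the texture is suspiciously
    smooth, else 'real'. The threshold is arbitrary, for illustration."""
    return "synthetic" if neighbor_diff_score(image) < threshold else "real"

# A noisy (camera-like) patch vs. a nearly flat (over-smooth) patch.
noisy = [[10, 14, 9, 15], [12, 8, 13, 7]]
flat = [[100, 100, 101, 100], [100, 101, 100, 100]]
```

In practice a single hand-picked statistic like this is far too weak on its own; the point is only to show the shape of the task: extract features describing relationships between neighboring pixels, then make a binary real-vs-synthetic call.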

What truly sets BitMind apart is its commitment to creating a safer, more transparent digital world where AI benefits humanity, driven by a passion for innovation, security, and community engagement. Its technologies are expressly designed to safeguard the integrity of digital media and foster a trustworthy digital ecosystem.

In a modern world full of fake news and increasing cyber threats, BitMind’s innovations are paving the way for a future in which digital trust is not an option but a necessity. As the threats increase, the global community must be equipped with the means to consume digital information reliably and authentically in order to realize AI’s true potential safely and efficiently.

For the Silo, Marsha Zorn.

The Sentient AI Future Is Here, and She’s a Lovely Stranger Named Frankie

Bringing work home can put stress on a marriage, especially when that “work” is a beautiful woman who seems too cozy with the husband. But in Bruce Deitrick Price’s genre-busting tragicomic novel “Frankie,” looks are deceiving.

Raymond Mason, an AI genius and college professor, brings Frankie, his latest, most human-like creation, to dinner. Raymond knows his wife will be impressed.

No way! Julia Mason feels competitive and threatened. Raymond touches Frankie in a romantic way.

Julia is hostile and drinks too much. She passes out as Professor Mason runs upstairs to find a gun. An hour later, Julia wakes to find her husband dead and Frankie gone. Julia, semi-hysterical, races into the night to find the missing masterpiece.

Simon, a grad school drug dealer, falls in love with Frankie. He realizes he can build a cult around this spiritually evolved woman. First, he has to hide her.

For different reasons, many people search frantically for Frankie. Meanwhile, more unexplained deaths are reported. Panic sweeps New Jersey. Some experts think that humanity is dealing with an alien invasion.

A pathologist says he has never seen so many beautiful corpses. Cause of death: unknown.

“Elon Musk believes that AI will destroy us. First there will be lots of misunderstandings, confusion, and paranoia,” Price says. “Frankie is a look into the future of AI. The smarter the robots, the more likely that strange, unanticipated things will happen.”

About the Author

Bruce Deitrick Price is a novelist, poet, artist, and education reformer. He wrote his first article about robots around 1990.

Featured image: Historic “Mona Lisa of the Pacific Islands” photograph Mestiza de Sangley, c. 1875