Tag Archives: social engineering

Amazon Outage Created Perfect Hacker Conditions

AWS Outage Created “Perfect Storm” for Social Engineering Attacks 

Last week Amazon Web Services (AWS) went down worldwide, including here in Canada, causing a ripple effect from governments and local municipalities to enterprises, small businesses, and the individuals who rely on these services daily. 

AWS is a cloud platform that thousands of major companies use not only to store their data, but also to run the apps and software behind many critical business services.  

From basic communications apps such as Snapchat, Signal, and Reddit, to airlines such as Delta and United reporting disruptions to their customer-facing operations, when these services go down it highlights our reliance on just a few cloud providers (AWS, Microsoft Azure, and Google Cloud) to ‘run the country,’ so to speak. 

The AWS outage further impacted shopping websites, banking apps, and even streaming services and smart home devices.

And while organizations scramble to ensure business operations continue to run, it’s also an opportunity for individuals to do a quick check-in on their own cyber hygiene. 

Cybercriminals and hackers can easily take advantage of these types of outages to deploy an array of social engineering attacks. 

Whether in the office or at home, nothing is more frustrating than losing the ability to access files and documents, and communicate with business associates or loved ones, especially in an emergency or crisis.  

Hackers who rely on mass urgency and panic will see this as an opportunity to take advantage of people’s heightened emotions with phishing emails offering to “fix” the issue and get you back online and into your accounts or apps.  

But in reality, these scammers are looking to steal your personal information, such as login credentials, by tricking you into updating your software or resetting your password.   

During major outages, users should avoid clicking on any links in emails, texts and pop-ups claiming to be able to fix the outage. 

Additionally, double check that any alerts or update messages from organizations, such as your bank or payment apps, come through the official website or app.   

This is the time to make sure you are using a strong password and multifactor authentication to prevent any unauthorized access to your accounts. 

Delay Things

Individuals should also delay sensitive actions, such as major financial transactions, password resets, or critical software updates, until the service in question has been officially announced as restored. 

Furthermore, when the service disruption has ended, users should also monitor any affected accounts for unusual activity, discrepancies, and duplicate or fraudulent transactions. 

Finally, this is an excellent reminder for individuals to make sure they have a back-up system in place to access important documents and for communications.  

This can be as easy as keeping a secondary email account or even a back-up mobile phone. For the Silo, Stefanie Schappert.

ABOUT THE AUTHOR

Stefanie Schappert, MSCY, CC, Senior Journalist at Cybernews, is an accomplished writer with an M.S. in cybersecurity who has been immersed in the security world since 2019. She has more than a decade of experience in America’s #1 news market, working for Fox News, Gannett, Blaze Media, Verizon Fios1, and NY1 News. With a strong focus on national security, data breaches, trending threats, hacker groups, global issues, and women in tech, she is also a commentator for live panels, podcasts, radio, and TV. She earned the ISC2 Certified in Cybersecurity (CC) certification as part of the initial CC pilot program, has participated in numerous Capture-the-Flag (CTF) competitions, and took third place in Temple University’s International Social Engineering Pen Testing Competition, sponsored by Google. She is a member of the Women’s Society of Cyberjutsu (WSC) and of Upsilon Pi Epsilon (UPE), the International Honor Society for Computing and Information Disciplines. 

ABOUT CYBERNEWS

Friends of The Silo, Cybernews is a globally recognized independent media outlet where journalists and security experts demystify the cyber world through research, testing, and data. Founded in 2019 in response to rising concerns about online security, the site covers breaking news, conducts original investigations, and offers unique perspectives on the evolving digital security landscape. Through white-hat investigative techniques, the Cybernews research team identifies and responsibly discloses cybersecurity threats and vulnerabilities, while the editorial team provides cybersecurity-related news, analysis, and opinion from industry insiders with complete independence. 

Cybernews has earned worldwide attention for its high-impact research and discoveries, which have uncovered some of the internet’s most significant security exposures and data leaks. Notable ones include:

  • Cybernews researchers discovered multiple open datasets comprising 16 billion login credentials from infostealer malware, social media, developer portals, and corporate networks – highlighting the unprecedented risks of account takeovers, phishing, and business email compromise.
  • Cybernews researchers analyzed 156,080 randomly selected iOS apps – around 8% of the apps present on the App Store – and uncovered a massive oversight: 71% of them expose sensitive data.
  • Recently, Bob Dyachenko, a cybersecurity researcher and owner of SecurityDiscovery.com, and the Cybernews security research team discovered an unprotected Elasticsearch index, which contained a wide range of sensitive personal details related to the entire population of Georgia. 

Feds’ False News Checker Tool To Use AI, At Risk Of Language & Political Bias

Ottawa-Funded Misinformation Detection Tool to Rely on Artificial Intelligence

Canadian Heritage Minister Pascale St-Onge speaks to reporters on Parliament Hill after Bell Media announces job cuts, in Ottawa on Feb. 8, 2024. (The Canadian Press/Patrick Doyle)

A new federally funded tool being developed with the aim of helping Canadians detect online misinformation will rely on artificial intelligence (AI), Ottawa has announced.

Heritage Minister Pascale St-Onge said on July 29 that Ottawa is providing almost $300,000 CAD to researchers at Université de Montréal (UdeM) to develop the tool.

“Polls confirm that most Canadians are very concerned about the rise of mis- and disinformation,” St-Onge wrote on social media. “We’re fighting for Canadians to get the facts” by supporting the university’s independent project, she added.

Canadian Heritage says the project will develop a website and web browser extension dedicated to detecting misinformation.

The department says the project will use large AI language models capable of detecting misinformation across different languages in various formats such as text or video, and contained within different sources of information.

“This technology will help implement effective behavioral nudges to mitigate the proliferation of ‘fake news’ stories in online communities,” says Canadian Heritage.

Related:

OpenAI, Google DeepMind Employees Warn of ‘Serious Risks’ Posed by AI Technology


With the browser extension, users will be notified if they come across potential misinformation, which the department says will reduce the likelihood of the content being shared.

Project lead and UdeM professor Jean-François Godbout said in an email that the tool will rely mostly on AI-based systems such as OpenAI’s ChatGPT.

“The system uses mostly a large language model, such as ChatGPT, to verify the validity of a proposition or a statement by relying on its corpus (the data which served for its training),” Godbout wrote in French.

The political science professor added the system will also be able to consult “distinct and reliable external sources.” After considering all the information, the system will produce an evaluation to determine whether the content is true or false, he said, while qualifying its degree of certainty.

Godbout said the reasoning for the decision will be provided to the user, along with the references that were relied upon, and that in some cases the system could say there’s insufficient information to make a judgment.
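The pipeline Godbout describes — a claim goes in, and a verdict with a degree of certainty, reasoning, and supporting references comes out — can be sketched in a few lines. This is a hypothetical illustration only, not UdeM's actual system: the lookup step below stands in for the language-model and external-source checks the professor describes.

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    claim: str
    label: str                                  # "true", "false", or "insufficient"
    confidence: float                           # degree of certainty, 0.0-1.0
    reasoning: str                              # explanation shown to the user
    references: list[str] = field(default_factory=list)

def check_claim(claim: str, sources: dict[str, str]) -> Verdict:
    # Stand-in for the LLM step: look for the claim among trusted sources.
    # A real system would query a language model and cross-check external references.
    matches = [url for url, text in sources.items() if claim.lower() in text.lower()]
    if matches:
        return Verdict(claim, "true", confidence=0.9,
                       reasoning="Claim is confirmed by trusted sources.",
                       references=matches)
    return Verdict(claim, "insufficient", confidence=0.3,
                   reasoning="No trusted source confirms or refutes the claim.")

sources = {"https://example.org/fact": "Water boils at 100 C at sea level."}
verdict = check_claim("water boils at 100 c at sea level", sources)
```

Note how the "insufficient" label falls out naturally: when no source speaks to the claim, the system declines to rule rather than guessing, which matches the behavior Godbout describes.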

Asked about concerns that the detection model could be tainted by AI shortcomings such as bias, Godbout said his previous research has demonstrated his sources are “not significantly ideologically biased.”

“That said, our system should rely on a variety of sources, and we continue to explore working with diversified and balanced sources,” he said. “We realize that generative AI models have their limits, but we believe they can be used to help Canadians obtain better information.”

The professor said that the fundamental research behind the project was conducted before receiving the federal grant, which only supports the development of a web application.

Bias Concerns

The reliance on AI to determine what is true or false could have some pitfalls, with large language models being criticized for having political biases.

Such concerns about the neutrality of AI have been raised by billionaire Elon Musk, who owns X and its AI chatbot Grok.

British and Brazilian researchers from the University of East Anglia published a study in January that sought to measure ChatGPT’s political bias.

“We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK,” they wrote. Researchers said there are real concerns that ChatGPT and other large language models in general can “extend or even amplify the existing challenges involving political processes posed by the Internet and social media.”

OpenAI says ChatGPT is “not free from biases and stereotypes, so users and educators should carefully review its content.”

Misinformation and Disinformation

The federal government’s initiatives to tackle misinformation and disinformation have been multifaceted.

The funds provided to the Université de Montréal are part of a larger program to shape online information, the Digital Citizen Initiative. The program supports researchers and civil society organizations that promote a “healthy information ecosystem,” according to Canadian Heritage.

The Liberal government has also passed major bills, such as C-11 and C-18, which impact the information environment.

Bill C-11 has revamped the Broadcasting Act, creating rules for the production and discoverability of Canadian content and giving increased regulatory powers to the CRTC over online content.

Bill C-18 created the obligation for large online platforms to share revenues with news organizations for the display of links. This legislation was promoted by then-Heritage Minister Pablo Rodriguez as a tool to strengthen news media in a “time of greater mistrust and disinformation.”

These two pieces of legislation were followed by Bill C-63 in February to enact the Online Harms Act. Along with seeking to better protect children online, it would create steep penalties for saying things deemed hateful on the web.

There is some confusion about what the latest initiative with UdeM specifically targets. Canadian Heritage says the project aims to counter misinformation, whereas the university says it’s aimed at disinformation. The two concepts are often used in the same sentence when officials signal an intent to crack down on content they deem inappropriate, but a key characteristic distinguishes the two.

The Canadian Centre for Cyber Security defines misinformation as “false information that is not intended to cause harm”—which means it could have been posted inadvertently.

Meanwhile, the Centre defines disinformation as being “intended to manipulate, cause damage and guide people, organizations and countries in the wrong direction.” It can be crafted by sophisticated foreign state actors seeking to gain politically.

Minister St-Onge’s office had not responded to a request for clarification as of this post’s publication.

In describing its project to counter disinformation, UdeM said events like the Jan. 6 Capitol breach, the Brexit referendum, and the COVID-19 pandemic have “demonstrated the limits of current methods to detect fake news which have trouble following the volume and rapid evolution of disinformation.” For the Silo, Noe Chartier/ The Epoch Times.

The Canadian Press contributed to this report.

Video Art & Culture- The Distinctive Features Of The Medium

VIDEO ART. The name is equivocal. A good name. It leaves open all the questions and asks them anyway. Is this an art form, a new genre? An anthology of valued activity conducted in a particular arena defined by display on a cathode ray tube? The kind of video made by a special class of people—artists—whose works are exhibited primarily in what is called ‘the art world’—ARTIST’S VIDEO?

An inspection of the names in the catalogue gives the easy and not quite sufficient answer that it is this last we are considering, ARTIST’S VIDEO. But is this a class apart?
Read the full original and historic essay [in appropriately PDF electronic form] by David Antin by clicking here.

Special thanks to http://pzacad.pitzer.edu/ for archiving the original essay.