Google Cloud Warns of 2024 Surge in Cyber Threats Fueled by Generative AI

In a comprehensive report compiled by Google Cloud’s security experts, concerns have surfaced regarding an anticipated rise in cyber threats in 2024. The report, which draws on insights from Mandiant Intelligence, Chronicle Security Operations, and other prominent entities, points to the escalating use of generative AI and large language models (LLMs) by cybercriminals to bolster social engineering attacks, particularly phishing, SMS scams, and other deceptive operations.

According to Google Cloud, the use of generative AI will make fraudulent content appear more authentic, eliminating the misspellings, grammar errors, and cultural inconsistencies that have traditionally made phishing emails and messages easier to spot. The report highlights the growing difficulty users will face in judging the legitimacy of content, given the AI’s capacity to refine translations and craft convincing, personalized emails at scale.

Moreover, the report suggests that malicious actors could leverage generative AI to fabricate fake news, interactive phone calls, and deepfake media, further amplifying the impact of misinformation campaigns and intrusions. Google Cloud emphasizes that these AI capabilities are likely to be commodified in underground forums, offered as paid services for a range of nefarious purposes, including phishing campaigns and the dissemination of disinformation.

In parallel, the forecast underscores the continued exploitation of zero-day vulnerabilities by both nation-state attackers and cybercriminal groups. According to Google Cloud, these vulnerabilities appeal to attackers because they enable prolonged access to compromised environments, with edge devices and virtualization software as prime targets given how difficult they are for defenders to monitor and inspect.

The report explains that this shift toward zero-day exploits is driven by the growing sophistication of security solutions in detecting conventional phishing and malware, pushing threat actors toward alternative avenues for evasion and persistent access. Based on recent cybercrime trends, Google Cloud warns that these tactics are likely to leave a growing number of individuals and organizations exposed to ransomware and extortion demands.

In response to these escalating concerns, the Biden-Harris Administration plans to establish the U.S. Artificial Intelligence Safety Institute (USAISI) to spearhead AI safety and trust evaluations, with a particular focus on the most advanced models, including generative AI. The institute is poised to set standards for authenticating AI-generated content and to provide testing environments for evaluating emerging AI risks and mitigating their impact.
