Google adds cybersecurity to its AI plans

Google plans to apply AI to cybersecurity and make threat reports easier to understand, as customers look for applications of generative AI that go beyond creating fake photos.

In a blog post, Google says its new cybersecurity product, Google Threat Intelligence, will combine the work of its Mandiant cybersecurity unit and the VirusTotal threat intelligence platform with the Gemini AI model.

Google claims the new tool uses the Gemini 1.5 Pro large language model to cut the time needed to reverse engineer malware attacks. The company says Gemini 1.5 Pro, released in February, took only 34 seconds to find the kill switch in the code of WannaCry, the ransomware attack that crippled hospitals, businesses, and other organizations around the world in 2017. That’s impressive but not surprising, given how well-versed LLMs are in code.
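For context, WannaCry’s kill switch was famously simple: before doing anything destructive, the worm tried to reach a hardcoded, unregistered web domain and shut itself down if the request succeeded. The sketch below is a toy Python illustration of that pattern, not WannaCry’s actual code, and the domain here is a made-up placeholder:

```python
import urllib.request

# Placeholder standing in for the long, gibberish, unregistered .com
# domain the real WannaCry sample checked.
KILL_SWITCH_DOMAIN = "http://example-unregistered-domain.com"

def should_run() -> bool:
    """Return True only if the kill-switch domain is unreachable."""
    try:
        urllib.request.urlopen(KILL_SWITCH_DOMAIN, timeout=5)
    except OSError:
        # Domain doesn't resolve or respond: the switch is "off",
        # so the malware would proceed with its payload.
        return True
    # Domain responded: registering it flips the switch and halts execution.
    return False

if __name__ == "__main__":
    print("proceed" if should_run() else "kill switch active, exiting")
```

Spotting a check like that buried in a stripped, obfuscated binary is exactly the needle-in-a-haystack work Google says Gemini speeds up.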

Gemini can also be put to work within Threat Intelligence summarizing threat reports in plain language, helping companies gauge how severely potential attacks might affect them and avoid overreacting or underreacting to threats.

According to Google, Threat Intelligence also has a vast network of information for monitoring potential threats before an attack happens. It lets users see a bigger picture of the cybersecurity landscape and prioritize what to focus on. Mandiant contributes human experts who monitor potentially malicious groups, plus consultants who work with companies to block attacks, while the VirusTotal community regularly posts threat indicators.

The cybersecurity firm Mandiant, which discovered the 2020 SolarWinds cyberattack against the US federal government, was acquired by Google in 2022.

The company also plans to use Mandiant’s expertise to assess security vulnerabilities around AI projects. Through Google’s Secure AI Framework, Mandiant will test the defenses of AI models and help with red-teaming efforts. While AI models are useful for summarizing threats and reverse engineering malware attacks, the models themselves can sometimes fall prey to malicious actors. Those threats include “data poisoning,” in which attackers slip bad data into the material AI models scrape for training so the models can’t respond to specific prompts.
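To make the idea concrete, here is a toy sketch of label-flipping data poisoning, using scikit-learn on synthetic data; it assumes nothing about Gemini or Google’s systems and is only meant to show how corrupting a slice of training data degrades the resulting model:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification task standing in for scraped training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Train on the given training labels; score on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

print("clean accuracy:   ", train_and_score(y_train))

# "Poison" 20% of the training set by flipping its labels, mimicking an
# attacker who slips mislabeled samples into the data a model ingests.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 5, replace=False)
poisoned[idx] = 1 - poisoned[idx]

print("poisoned accuracy:", train_and_score(poisoned))
```

Real attacks can be far subtler than this, poisoning just enough examples to change a model’s behavior on specific prompts while overall metrics still look healthy.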

Of course, Google is not the only company combining AI and cybersecurity. Microsoft launched Copilot for Security, powered by GPT-4 and a Microsoft AI model built for cybersecurity, which lets security professionals ask questions about threats. It remains to be seen whether either of these is genuinely a good use case for generative AI, but it’s encouraging to see it applied to something other than pictures of a swaggy Pope.