On the surface, machine learning appears promising for IT teams that want to boost their cybersecurity. After all, the tech could potentially help your team improve endpoint security, automate repetitive tasks, and even reduce the likelihood of attacks resulting in data exfiltration.
But—sorry to be the bearers of bad news here—hackers aren’t sleeping on the benefits it could lend to their cybercriminal strategy. According to industry analysts, they’re already swiftly capitalizing on this intelligent tech to supercharge their exploits. Here’s what you need to know.
Keep an eye on smart, sneaky malware
Machine learning is a form of AI that allows systems to automatically learn and improve without being explicitly programmed to change their behavior. This technology could prove massively beneficial for IT security teams that are already juggling multiple priorities and could use an extra hand—maybe it could even help them stop the next WannaCry in its tracks. But hackers have grokked its potential, too, and that's a dangerous development.
Exhibit A is malware, something that's notoriously labor-intensive to create and, therefore, ripe for acceleration. In 2017, researchers created a generative adversarial network (GAN)—an algorithm that can generate malware capable of slipping past machine learning-based malware detection systems. Industry analysts speculate this clever form of AI could adapt its malware on the fly based on what defenses it detects, quickly refining it to boost its potency.
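To make that adapt-on-the-fly idea concrete, here's a deliberately toy sketch—not a real GAN, and not tied to any actual malware research. A hypothetical score-based detector flags samples whose suspicious features outweigh their benign-looking ones, and an "adversary" routine keeps adding benign-looking features (it can't remove the malicious ones, since the malware must stay functional) until the detector stops flagging the sample. All feature names and weights are invented for illustration.

```python
# Toy illustration of adaptive detector evasion. The detector is a
# hypothetical linear scorer; positive weights mark suspicious traits,
# negative weights mark benign-looking ones. Everything here is invented.
WEIGHTS = {
    "packs_executable": 2.0,
    "writes_registry_run_key": 2.0,
    "opens_network_socket": 1.0,
    "has_valid_version_info": -1.5,   # benign-looking camouflage
    "imports_common_ui_libs": -1.0,   # benign-looking camouflage
    "contains_help_strings": -0.5,    # benign-looking camouflage
}
THRESHOLD = 2.5

def detector(active):
    """Flag the sample as malicious if its score crosses the threshold."""
    return sum(WEIGHTS[f] for f in active) >= THRESHOLD

def evade(active):
    """Add benign-looking features until the detector stops flagging."""
    active = set(active)  # never remove features: the payload must keep working
    for feat in sorted(WEIGHTS, key=WEIGHTS.get):  # most benign-looking first
        if not detector(active):
            break
        if WEIGHTS[feat] < 0:
            active.add(feat)
    return active

malware = {"packs_executable", "writes_registry_run_key", "opens_network_socket"}
print(detector(malware))        # True: the raw sample is caught
adapted = evade(malware)
print(detector(adapted))        # False: the camouflaged sample slips past
```

A real adversarial system would learn which perturbations work by querying the defense repeatedly—the greedy loop above just shows why a static, score-based defense is brittle against an attacker that can probe it.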
Don’t stand by as advanced threats evolve
Advanced IoT threats require economies of scale to wreak maximum damage—and that's where botnets come in. According to Fortinet, 2018 will be the year of self-learning "hivenets" and "swarmbots" that can independently check in with one another on their progress and coordinate their strategies based on local intelligence. Individual zombie devices will be gifted with this form of AI, too, becoming able to strike out on their own and cause havoc without receiving express orders from the botnet. Hivenets could soon metastasize into swarms, simultaneously attacking multiple victims in an unpredictable, harder-to-thwart fashion.
Spear phishing was already devious, with hackers even creeping into office workers’ email conversations to deliver malicious payloads. But—wait for it—spear phishing is about to get even worse. It takes considerable skill and manual research to pull off a big exploit, like CEO fraud, but natural language processing and other related technologies could help hackers automate this process, so it scales better and proves more effective. Adversarial AI could also be enlisted to analyze stolen records and identify targets. From there, it could start dropping perfectly crafted and deadly malicious spear phishing emails into an inbox near you.
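The reason automation changes the economics here is the same reason mail merge changed marketing: once the research is encoded as data, personalization costs nothing per victim. The sketch below is a hypothetical illustration—the template, field names, and records are all invented—showing how a pile of stolen records plus one lure template yields unlimited personalized CEO-fraud emails with zero manual effort.

```python
# Hypothetical sketch of why automated spear phishing scales: one lure
# template plus stolen records produces a personalized email per victim.
# Template wording, field names, and records are invented for illustration.
TEMPLATE = ("Hi {first_name}, this is {ceo}. I need you to process an urgent "
            "wire for the {project} deal before {deadline}. Keep this quiet.")

def craft_lures(records):
    """Fill the template once per stolen record—no manual research needed."""
    return [TEMPLATE.format(**r) for r in records]

stolen = [
    {"first_name": "Dana", "ceo": "Pat Lee", "project": "Northwind", "deadline": "5 p.m."},
    {"first_name": "Sam", "ceo": "Pat Lee", "project": "Atlas", "deadline": "noon"},
]
for lure in craft_lures(stolen):
    print(lure)
```

Swap the static template for a language model trained on a target's real email threads—the scenario the paragraph above describes—and each lure also matches the sender's tone and ongoing conversations, which is what makes the prospect so alarming.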
Hackers are trying to overpower your AI
It’s tricky to sift through a sea of security alerts and understand which ones require urgent action. That’s why automated threat intelligence is so appealing—it conducts complex analysis on your behalf and surfaces the vulnerabilities that pose the greatest threat to your business. But any technology can be used for good or evil, and hackers have latched onto this area, too. There’s speculation hackers could use machine learning to “raise the noise floor,” overwhelming your threat intelligence with a blitz of false positives and keeping it from helping you spot a dangerous threat in time.
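A back-of-the-envelope sketch shows why raising the noise floor works. Assume (the numbers here are purely illustrative, not from any real security operations center) that a triage team can review a fixed number of alerts per hour and picks them roughly uniformly from the queue. Flooding the queue with decoys directly shrinks the fraction of genuine alerts that ever get human eyes.

```python
# Illustrative model of "raising the noise floor": a triage team reviews
# at most CAPACITY alerts per hour, drawn uniformly from the queue, so a
# flood of decoy alerts starves real ones of attention. Numbers are invented.
CAPACITY = 100  # alerts the team can review per hour (hypothetical)

def review_rate(real_alerts, false_positives):
    """Fraction of queued alerts that actually get reviewed."""
    total = real_alerts + false_positives
    reviewed = min(CAPACITY, total)
    return reviewed / total if total else 1.0

print(review_rate(5, 45))     # quiet day: 1.0, every alert gets reviewed
print(review_rate(5, 9995))   # flooded: 0.01, only 1% get reviewed
```

Under these assumptions, an attacker who can cheaply generate 10,000 plausible-looking decoys drops the odds of any single real alert being reviewed to one in a hundred—which is exactly the window a slow-moving intrusion needs.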
This tech will outsmart human-verification techniques, like CAPTCHA, in short order, too. Although CAPTCHA has done a serviceable job distinguishing humans from bots, researchers have already used deep learning to break it. Even the newer form of CAPTCHA that uses semantic images was cracked with an astonishing 92 percent success rate, which means hackers will be able to gain unauthorized access to systems they shouldn't, waltzing right in and setting up shop to see what valuable digital goodies they can find.
Hackers have also identified a scary vulnerability in machine learning itself: the data it relies on. This form of AI depends on high-quality, relevant data to power its learning, but if that pool of training data becomes diluted or contaminated with bad information, the model will set off on the wrong path. Although this form of attack—known as data poisoning—is largely theoretical for now, researchers have already demonstrated how convolutional neural networks, like those behind image-recognition services from Google, Microsoft, and AWS, could be compromised in this way.
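To see how poisoned training data leads a model astray, here's a minimal sketch using a nearest-centroid classifier as a stand-in for the neural networks discussed above (the coordinates and labels are invented for illustration). Slipping a few mislabeled points into the training pool drags the "benign" centroid toward the attack region, and a sample the clean model caught now sails through as benign.

```python
# Minimal data-poisoning illustration on a nearest-centroid classifier.
# Points are 2-D feature vectors; all data here is invented.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def train(samples):
    """samples: list of ((x, y), label) -> one centroid per label."""
    return {label: centroid([p for p, l in samples if l == label])
            for label in ("benign", "malicious")}

def classify(cents, point):
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(cents, key=lambda label: dist2(cents[label], point))

clean = [((0, 0), "benign"), ((1, 0), "benign"),
         ((9, 9), "malicious"), ((10, 9), "malicious")]
probe = (6, 6)  # attack-like sample, closer to the malicious cluster

print(classify(train(clean), probe))      # malicious: the clean model catches it

# Poisoning: the attacker sneaks mislabeled points into the training pool,
# pulling the "benign" centroid toward the attack region.
poisoned = clean + [((5, 6), "benign"), ((6, 5), "benign"), ((7, 6), "benign")]
print(classify(train(poisoned), probe))   # benign: the poisoned model waves it through
```

The same dynamic applies to any model that retrains on data an attacker can influence—which is why guarding the provenance of training data matters as much as guarding the model itself.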
Rage against the machine and fight back
If hackers are using AI, what can the average IT pro do to fight back against the evolution of cybercrime? Well, there are a few ways you can prepare for this new cybercriminal strategy coming your way. One place to start, of course, is by making sure you have at least as much advanced firepower in your cybersecurity arsenal as the criminals, tapping the capabilities of automation and AI to bring your defenses up to spec. If you don’t, you’re essentially bringing a knife to a gunfight.
That said, hackers are also opportunists. For some, the strategy of choice is to find the path of least resistance and target an organization’s weakest link. For far too many businesses today, that’s still the endpoint—especially IoT devices and the print environment. If you haven’t bolstered your defenses with secure printing solutions yet, you’ll want to do so ASAP. Today’s intelligent printers, for instance, come with built-in security features that proactively identify and self-heal from attacks. That’s an added layer of protection most IT pros would love to have on their team.
Lastly, forewarned is forearmed. With cyberthreats rapidly evolving as they integrate cognitive technologies, like AI, it’s best to stay up to date and keep informed on the latest threats as they develop. That way, no matter what exploits hackers devise next, you’ll have the best possible chance of protecting your business. Click “subscribe” at the top of the page to stay tuned for more insights from Tektonika.