Are AI Chatbots a Cybersecurity Threat?

ChatGPT and Other Tools Could Be Used Maliciously

Artificial intelligence-powered language processing tools are capable of writing computer code, composing essays, and carrying on real-time conversations. But AI applications like ChatGPT and chatbots from companies like Google and Microsoft have also generated controversy by misinterpreting prompts, producing factual errors, and even reciting inflammatory content.

ChatGPT went live only four months ago, while Bing’s AI chatbot debuted just last week to scathing reviews from beta testers. But many digital ethicists and cybersecurity experts say it’s important to highlight these issues now, while the technology is new and can still be refined.

The terms of service for ChatGPT expressly forbid the creation of malware, ransomware, and “other software intended to impose some level of harm.” But hackers have already tried to exploit the tool for nefarious purposes.

Forums used by cybercriminals are rife with speculation about state-of-the-art AI language models being used to write realistic email copy, fake web ads, and spam listings on websites like Craigslist. Naturally, cybercrooks have already figured out how to use AI tools to write computer code that exploits IT vulnerabilities and executes malware infections. And a lack of encryption during chatbot conversations could put identifying information like birthdays and passwords at risk.

But the biggest threat posed by tools like ChatGPT is their ability to scan and synthesize millions of pages of content in real time. That can empower bad actors to closely mimic a specific person’s writing style in spearphishing emails, which trick recipients into thinking they’re communicating with a real contact.

These types of emails are the common denominator linking ransomware attacks, financial fraud, and hacking attempts. For years, phishing emails have been relatively easy to spot because of poor grammar, strange spelling, and awkward translations in the email copy. Many of the biggest online gangs are based in Eastern European countries, where a natural command of English is a scarce, in-demand skill.

But an efficient AI-powered copywriter can help hackers easily overcome the need to have English speakers on staff. Instead, ChatGPT and other chatbots could be used to instantly write realistic text in natural language—which can fool even the most discerning contact.

How Can You Protect Yourself from Increasingly Sophisticated Phishing Emails?

• Spot commonly used email templates. In the past, these typically included messages that urged a user to claim an annual bonus, download an important software update, or review an attached document. Outright appeals for financial assistance are always easy to spot, too. But with ChatGPT, these could start to become sharper and more focused—arriving, say, as an invitation to register for an upcoming conference or a request to provide feedback about a particular area of expertise. That means that users across the board will need updated training and education about how phishing emails will change and what kinds of subjects will become more common.

• Understand how social engineering could evolve. Ever receive an invitation from a LinkedIn contact you don’t quite know? This could become increasingly common with ChatGPT. Hackers have been known to exploit LinkedIn, Facebook, and other social media platforms to conduct cyber surveillance of potential targets. But creating legitimate-looking online profiles with bios, posts, and other activity is time-intensive. With AI tools, however, hackers could quickly fill out multiple profiles and send hundreds of messages a day encouraging someone to connect—and, ultimately, click a malicious link, or share private information. This will require social media users to be more cautious and discerning as unfamiliar requests potentially increase.

• Defend your entire network with multi-layered security protection. The best defenses respond with flexibility to evolving threats—and AI represents one of the biggest potential problems in a generation. That’s why CMIT Solutions surrounds its clients with dynamic tiers of different cybersecurity protections:

    • Anti-spam filters that quarantine many phishing attempts before they land in your inbox
    • Advanced firewalls that provide robust perimeter security
    • Endpoint detection services that protect every device and isolate any specific threats
    • Remote monitoring and IP traffic analysis that identify anomalies and block broad web-based attacks
    • Security information and event management (SIEM) and security operations center (SOC) tools that automate mitigation steps in case of an attack

Stitched together, these services construct a defensive wall around any company’s IT ecosystem, offering comprehensive protection from a variety of issues.

• Back up your data regularly, remotely, and redundantly. After being hit by a ransomware attack and having their data stolen or encrypted, many businesses choose to deal with cybercriminals and fork over a pricey Bitcoin ransom that may or may not result in the return of their information. But the same lesson is demonstrated over and over: if a backup version of that data is stored in an off-site location, it can be recovered relatively easily. A trusted IT provider like CMIT Solutions can help remove ransomware from infected machines, reset all affected systems, retrieve data from its latest backup point, and reinstall everything you thought you had lost.

• Recognize the flip side of AI-powered content. Many cybersecurity experts believe that ChatGPT and other chatbots could contain a silver lining. Back-end developers might start to use the tool to quickly scan computer code looking for hidden flaws, while public-facing information security officers could rapidly produce security incident reports that share information about hacks, breaches, and vulnerabilities. As tech blog ZDNet emphasized, even ChatGPT itself has good advice about the future of AI safety. When prompted to detail existing rules that prevent it from being abused for phishing, the tool responded:

“It’s important to note that while AI language models like ChatGPT can generate text that is similar to phishing emails, they cannot perform malicious actions on their own and require the intent and actions of a user to cause harm. As such, it is important for users to exercise caution and good judgment when using AI technology, and to be vigilant in protecting against phishing and other malicious activities.”
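To make the red flags described above concrete, here is a minimal, hypothetical sketch of the kind of simple heuristic checks an anti-spam filter might run against an incoming message. The function name, phrase list, and message fields are invented for illustration; production filters combine hundreds of weighted signals, not a handful of string checks:

```python
import re

# Illustrative phrases common to urgent-sounding phishing templates.
URGENT_PHRASES = ["act now", "verify your account", "annual bonus",
                  "payment overdue", "password expires"]

def phishing_signals(sender: str, reply_to: str, subject: str, body: str) -> list[str]:
    """Return a list of human-readable red flags found in a message."""
    flags = []

    # Red flag 1: Reply-To points at a different domain than the visible sender.
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    reply_domain = reply_to.rsplit("@", 1)[-1].lower()
    if reply_domain != sender_domain:
        flags.append(f"Reply-To domain ({reply_domain}) differs from sender ({sender_domain})")

    # Red flag 2: urgency language typical of phishing templates.
    text = f"{subject} {body}".lower()
    for phrase in URGENT_PHRASES:
        if phrase in text:
            flags.append(f"urgent phrase: '{phrase}'")

    # Red flag 3: link text that displays one URL but points somewhere else
    # (checked here against markdown-style [text](url) links for simplicity).
    for visible, target in re.findall(r'\[([^\]]+)\]\((https?://[^)]+)\)', body):
        if visible.startswith("http") and visible not in target:
            flags.append(f"link text '{visible}' hides destination {target}")

    return flags
```

For example, a message claiming to come from `hr@example.com` but replying to `claims@evil.example`, with an “annual bonus” subject line and a disguised link, would trip all three checks. As AI-written phishing copy eliminates the grammar and spelling tells, structural signals like these become more important than the wording itself.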

At CMIT Solutions, we’re constantly refining our cybersecurity knowledge to understand the impact of new tools like AI, ChatGPT, and chatbots. For 25 years, we’ve helped thousands of companies across North America defend their data, secure their devices, and respond to changing threats. We also empower employees at every level with training and education so they can serve as the first line of defense, spotting advanced phishing emails and stopping cyberattacks before they start.

If you need help understanding AI-powered tools or beefing up cybersecurity protection for your company, contact CMIT Solutions today. We’re here to help with knowledgeable insight and expert advice.
