AI and Cybersecurity
It’s likely that you’ve heard a lot about generative artificial intelligence (AI) lately. Since ChatGPT, an AI chatbot, launched in November 2022, more than 100 million people have signed up to use it, and stories about it and similar tools like DALL-E and VALL-E have made buzzwords such as generative AI, natural language processing and large language models a fixture of technology and economic news. For some, AI is the next step in human evolution; for others, it is a threat to global security. At Allied Healthcare Federal Credit Union, we’re interested in how this new technology can help our members keep their money safe, and in the role it may play in the ongoing threat of cybercrime.
What Generative AI Is
Let’s start with a primer. With so much information out there (and new stories appearing daily), it’s hard to keep up with the emerging technology, but let’s be clear: generative AI tools like ChatGPT are just that, tools. They don’t think for themselves or create ideas any more than a hammer or your cell phone does. In fact, the descriptor “generative AI” can be a little misleading, because a large language model like ChatGPT doesn’t dream up ideas on its own. Here’s how it works: the model is trained on a vast amount of text from the internet, and when given a request such as “Write a blog about the threat of AI to cybersecurity,” it predicts, word by word, what a plausible response would look like, mirroring human writing with great sophistication. It does roughly what a college freshman might do when writing a paper on an unfamiliar topic, research and summarization, but in seconds, not hours. Also, we should note that AI was not used to create this blog; we did it the old-fashioned way.
As a tool, AI is neither “good” nor “bad.” It is all a matter of how it is applied. Ideally, this technology will automate certain tasks, freeing humans to focus on the work only they are qualified to do. It may save lives by making healthcare more accurate or risky professions less perilous. In the wrong hands, however, AI can be used for nefarious purposes, including cyberattacks.
What to Watch For
The most immediate AI threat most people face is phishing. A phishing attack is an email that pretends to come from a legitimate source with an above-board request but is actually designed to trick you into revealing sensitive information. One of the easiest ways to spot a phishing email has always been its unnatural or awkward language, but generative AI can now produce polished, convincing messages without those telltale errors. Some AI tools can even create videos that convincingly impersonate real people to carry out fraud. Always check links before you click, and confirm with the sender before acting on any email that asks you to share personal information.
Generative AI can also write code, and a bad actor can use it to write malware (malicious software). Many companies are developing tools to combat these types of attacks.
AI: Both a Threat and a Protector
The irony is that businesses are using AI to fight the very threats AI is making more sophisticated. By recognizing patterns across enormous amounts of data, AI can flag spam and phishing emails based on both their content and context. The same approach can be used to detect malware and to bolster multi-factor authentication.
AI is firmly part of the cyber landscape. What that means for the future is anyone’s guess, but as with any advance in technology, there will be benefits and threats. The key is to stay vigilant and keep learning so you can protect what’s yours.