The Threats of AI in CyberSecurity

Is AI the great security threat that everyone thinks it is? Let’s find out…

I’ve been watching AI in security develop over the years and have been a supporter since the very beginning, telling everyone that AI is the future of security, because it is. But is it the big, scary threat that many think?

In the foreseeable future, the handover of cyber defense to AI seems inevitable. We stand on the precipice of a digital battleground where Offensive AI will engage Defensive AI at speeds surpassing human capability. This reality is not a mere fantasy; it’s actively under development and testing. Nations lacking in this technological prowess risk swift obsolescence in the face of emerging threats.

Yet, amidst this potential upheaval, concerns linger regarding the impact of publicly available Generative AI, exemplified by platforms like ChatGPT. While they hold promise in code generation, their security implications are nuanced. Yes, they may aid in crafting attack vectors, but their output often necessitates human intervention due to inaccuracies or inadequacies.

At this point, they are not as big a security threat as is commonly thought. Will they help generate code and be used in phishing and similar attacks? Yes! But ask any coder or security professional who has actually used one, and they will tell you that it is frequently wrong or produces code that needs modification.

AI engines like ChatGPT aren’t foolproof. For example, I was using it to generate some attack code for a project, and it flatly told me that I couldn’t use PowerShell in that particular type of attack. I had to tell it, “Yes, I can,” because I have done it many times. ChatGPT came back with an apology, admitted that I was correct, and offered the code I needed.

Two things about this should be concerning: first, ChatGPT was wrong and, after being corrected, admitted it. Second, it was helping me generate attack code!

Think of those who are trying to use Generative AI like ChatGPT to create code for critical systems. It frequently needs to be corrected or modified. AI isn’t the know-it-all magic genie that many think it is. What if the human programmers don’t check or edit the code? That thought is very concerning.

Yes, there are safety measures in place that try to stop people from using ChatGPT to generate malware. But there are ways around them, and no, I will not tell you how to do it. But know this: the code I used for my latest project, a long-range Raspberry Pi hacking device using LoRa (up to 20 km over RF!), was 100% coded by ChatGPT. With much editing, of course, because it didn’t work on the first try; see the previous point.
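To make the LoRa project concrete: this is not the actual code from that project, just a minimal, hypothetical sketch of what a Pi-side LoRa sender might look like, assuming a UART-attached LoRa module and the pyserial library. The `frame_payload` / `send_over_lora` functions, the port name, and the length-prefix-plus-XOR-checksum framing are all illustrative inventions, not a real protocol.

```python
import struct


def frame_payload(payload: bytes) -> bytes:
    """Prefix the payload with a 2-byte big-endian length and append a
    one-byte XOR checksum so the receiver can detect truncation or corruption.
    (Hypothetical framing scheme, for illustration only.)"""
    if len(payload) > 0xFFFF:
        raise ValueError("payload too large for 2-byte length prefix")
    checksum = 0
    for b in payload:
        checksum ^= b
    return struct.pack(">H", len(payload)) + payload + bytes([checksum])


def send_over_lora(port: str, payload: bytes, baudrate: int = 9600) -> None:
    """Write a framed payload to a UART-attached LoRa module.
    Requires real hardware on `port` and pyserial (pip install pyserial)."""
    import serial  # imported here so the framing logic works without hardware

    with serial.Serial(port, baudrate, timeout=1) as radio:
        radio.write(frame_payload(payload))


# Example (on a Pi with a module wired to the UART, port name is an assumption):
# send_over_lora("/dev/ttyS0", b"ping")
```

Even in a toy sketch like this, the point of the article holds: AI-generated versions of code like this often need human review and several rounds of correction before they actually work on the hardware.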

Is AI the future of cybersecurity? Oh, absolutely. It already does many things very well. Check out deepfake videos and AI audio capabilities. I attended a government conference on AI where they demonstrated using an AI to impersonate a corporate executive and call the company help desk for a password reset. It was unreal, it was so believable, and it worked.

It will revolutionize everything from Red Team operations to military drones, and it will take away many people’s jobs. Programmers, for example: ChatGPT can generate code and convert it between multiple languages faster than any human being.

But is AI going to end the world today? No, just ask it to generate 3D art…

It tried,

Again, and again.

And AGAIN!

After I told it that it had a bug, it finally admitted it couldn’t do it.

Is it impressive? Oh yes. AI is and will change everything. It will touch every area of our lives. Make no mistake about it, AI is the future of CyberSecurity.

Is this Skynet? Will it take over the world and turn all humans into slaves? Absolutely not.

Not yet…

Stay tuned!
