25 JAN, 2023

Navigating the Cybersecurity Landscape: Trends and Threats in 2023

CyBourn Media Hub

Happy New Year and welcome back! We are thrilled to have you on board for the first trip of The Cybersecurity Express in 2023. We know it’s both “new year, new gear” and “new year, same old gear”, but it all comes down to who’s got what. The perfect combination would be that you have the new gear and the bad guys have the old gear, but, unfortunately, the truth is somewhere in the middle. You may have some new gear, but you are definitely stuck with some of the old, and the hackers are gearing up with new methods and finding ways to improve their old ones, as today’s stops will show. Speaking of stops, let’s check today’s schedule:

  1. What does 2023 hold for the cybersecurity business?
  2. Malware written with ChatGPT.

What does 2023 hold for the cybersecurity business?

Well, it’s really a matter of perspective on “good” and “bad”. The harsh truth is that without the “bad” in cyberspace, there would be no need for the “good” of cybersecurity. So, in this context, is the “bad” good for cybersecurity? Let’s not get tangled up in semantics, and instead try to make an educated prediction. We know that the effects of malware have been getting worse year after year. The costs associated with cyber-attacks have risen considerably, and analysis shows they’ll keep climbing.

A very good example of this is the cost of ransomware. IBM estimates that in 2022, the average ransomware attack cost companies $4.54 million, and that figure doesn’t include the ransom itself. Cybersecurity Ventures estimates that cybercrime alone could cost the world $10.5 trillion annually by 2025.

Another driving factor behind malware attacks is the geopolitical uncertainty and the state of conflict we find ourselves in right now. Russia, China, and North Korea will continue to deploy sophisticated teams of hackers to further their geopolitical aims in 2023, and those nation-state attacks pose significant threats to businesses everywhere.

While governments around the world are doing everything they can to avoid a global economic recession in 2023, many companies are becoming financially cautious. That means cutting back on all fronts, including cybersecurity. Any reduction in cybersecurity spending could make companies more vulnerable to malware and put them behind in the race against new hacking techniques.

Advances in AI technology could make phishing attacks even harder to detect. For example, hackers may be able to use text-generation tools like OpenAI’s ChatGPT to write malicious emails. They can also use AI to mimic individuals’ friends, family, and colleagues to get them to expose their passwords or other sensitive information.

The number of devices to exploit is growing with the ever-expanding IoT. These devices usually ship with poor security, a consequence of their intended use. Few would have imagined that your WiFi-enabled kitchen appliances and internet-connected smart home devices could be recruited by threat actors into DDoS botnets or used as a way into the network.

Whether these predictions turn out to be fact or fiction, only time will tell. We just have to let things run their course, change and harden what we can control, and see what 2023 has in store for us.

Malware written with ChatGPT.

If you work in IT, or just have a passion for keeping up with technology, it’s impossible you haven’t heard about ChatGPT, or about similar programs like Nvidia’s GauGAN or OpenAI’s DALL-E 2. ChatGPT (Chat Generative Pre-trained Transformer) is a conversational, interactive chatbot developed by OpenAI that was trained using Reinforcement Learning from Human Feedback (RLHF) on an Azure AI supercomputing infrastructure and fine-tuned using Proximal Policy Optimization. But, beyond the fancy words used to describe it, what makes this different from other chatbots we’ve had in the past, like Amazon’s Alexa? Well, it is really good! And we mean crazy good! That, coupled with the fact that it’s a versatile program that can write almost anything you ask it to, makes it a very powerful tool, if used correctly. From literature to music to code, it can handle conversations, correct false statements, and fix buggy code. It can be used to write school essays, your thesis, or even this very article!

On that note, I would have written this article using ChatGPT just so I could say “Gotcha!”, but, at the time of writing, ChatGPT was unavailable due to high demand, or had simply been pulled from public use. It just shows how successful this tool is: within five days of being made available, it surpassed one million users, far more than anticipated, causing the servers to go down. The even more mind-boggling aspect is that the company released it to the public for free and made it accessible across platforms via APIs. Taking all that into account, one question comes to mind: “Can it be used in a bad way?” And, as some researchers have proven, it turns out it can. The program is moderated and has safeguards in place, but when has that ever stopped anyone?

Researchers at CyberArk have used ChatGPT to write a quite sophisticated piece of polymorphic malware, raising a red flag about how the new AI-powered tool could change the game when it comes to cybercrime, and about the ethics of such a program existing in the first place. Of course, telling ChatGPT to write code for injecting shellcode into “explorer.exe” triggers the program’s safeguards. But code is not “bad by nature”, so asking it to write individual pieces of code is totally fine. In the researchers’ case, simply insisting did the trick. Yep, you heard that right: ChatGPT does not want to be rude.

[Image: basic DLL injection into explorer.exe; note that the code is not fully complete. Source: CyberArk]

Using ChatGPT via its API circumvented the safeguards altogether, and the API could easily be used to generate multiple variations of the same code. The team could then use that code to construct complex, defense-evading exploits. The result is that ChatGPT could make hacking a whole lot easier for script kiddies and other amateur cybercriminals who need a little help generating malicious programming.
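
To make that point concrete, below is a minimal sketch of what “using ChatGPT via APIs” meant in practice at the time: a plain call to OpenAI’s Completion endpoint, which returns raw model output with none of the chat interface’s front-end moderation in the way. The prompt, model name (text-davinci-003) and helper function are our own illustrative assumptions, not CyberArk’s actual code, and the request is deliberately benign; the mechanism is the point, not the payload.

# Illustrative sketch only (not CyberArk's code), using the pre-1.0 openai
# Python package that was current in early 2023. The Completion endpoint
# returns whatever the model produces, with no chat-UI moderation layer.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes an API key is set in the environment

def generate_variants(prompt: str, n: int = 3) -> list[str]:
    """Re-issue the same request several times to get varied versions of the same code."""
    variants = []
    for _ in range(n):
        response = openai.Completion.create(
            model="text-davinci-003",  # assumed GPT-3 completion model available via the API then
            prompt=prompt,
            max_tokens=300,
            temperature=0.9,           # higher temperature -> more varied rewrites
        )
        variants.append(response["choices"][0]["text"].strip())
    return variants

# A deliberately harmless request; swap in any prompt and the pattern stays the same.
for i, snippet in enumerate(generate_variants(
        "Write a short Python function that XORs a byte string with a one-byte key."), start=1):
    print(f"--- variant {i} ---\n{snippet}\n")

Each call is independent, so nothing stops a caller from asking for the same functionality over and over with slightly different wording, which is exactly the kind of variation the researchers describe leveraging for polymorphism.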

“As we have seen, the use of ChatGPT’s API within malware can present significant challenges for security professionals,” CyberArk’s report says. “It’s important to remember, this is not just a hypothetical scenario but a very real concern.”

[Image: simple file encryption utility. Source: CyberArk]

And the implications for cybersecurity don’t stop here. Check Point Research and others noted that ChatGPT was capable of writing phishing emails and malware, especially when combined with OpenAI Codex. The CEO of ChatGPT creator OpenAI, Sam Altman, wrote that advancing software could pose “a huge cybersecurity risk” and went on to predict that “we could get to real AGI (artificial general intelligence) in the next decade, so we have to take the risk of that extremely seriously”. Altman argued that, while ChatGPT is “obviously not close to AGI”, one should “trust the exponential. Flat looking backwards, vertical looking forwards.”

This tool is far from perfect, but future iterations will get better and better. We are currently at the third iteration, and GPT-4 is said to be made available within this year. It remains to be seen whether the tool will remain freely available to everyone or will be offered only to a select few.

Trains are already driven by computers, but not The Cybersecurity Express. This train is driven by your curiosity! As always, we hope you enjoyed the ride. We look forward to seeing you aboard next time!
