What You Need to Know Now About the Changing Cybersecurity Landscape
At HITEC 2023, Melina Scotto, VP, CISO, Hilton; Daemon Behr, Sr. Presales Systems Engineer, Arctic Wolf; and Lynn Goodendorf, CISSP, CIPP, Board Advisor & Consultant, Metro Atlanta ISSA Chapter, spent an hour talking about cybersecurity to a packed room of hotel technologists. Their session, titled "Cybersecurity 201: Protection from the Unknown," was meant to tackle meatier topics than poor password management and phishing emails. During their time on stage, they covered a variety of topics, including the massive changes taking place in cybercriminal operations, the good and bad of generative AI, end-user security awareness and training, and much more. Here is a brief recap of some of the thoughts the panelists shared on stage.
How the Cybersecurity Landscape Is Changing
The accelerated use of cloud platforms and cloud service providers, along with the increase in remote work, has caused cybercriminals to change who they target, and we're seeing them use the same techniques in new ways.
- Goodendorf
In the past, a group would create a ransomware virus and use it to hack whomever they could. Now a global marketplace has emerged, a very complex ecosystem where threat actors can buy offerings like ransomware-as-a-service. These 'enterprises' are incredibly sophisticated, and they even offer their clients access to customer service!
- Behr
There is a lot of money to be made working for these malware-as-a-service subcontractors, but the FBI encourages organizations to come to it and ask for help rather than pay the bad actors. Why? Because payment comes with no guarantees. These criminals will often demand payment from an organization multiple times to keep its data off the dark web, or they'll post it there anyway just to embarrass the organization.
- Scotto
On Generative AI
There is a difference between the good old AI we've all been using for a long time, where we own the tech stack, the data, the testing and the models, which means we can secure it. With generative AI, your questions, your logs and your code become part of an enormous data set where the basics of cybersecurity (privacy, integrity and confidentiality) aren't promised. The providers are very open and transparent about the fact that protecting your data isn't what they're about. And there are some very real questions about data ownership and who this data belongs to once it has been handed over to that environment.
Consider this scenario: A SOC analyst wants to do the job efficiently and quickly, so they drop a log into a generative AI environment and ask: 'Let me know if this is an indicator of compromise.' But the next time a malware-as-a-service bad actor goes into that generative AI environment and asks about that log, they're going to have access to a lot of information they shouldn't, such as your operating system, your naming conventions, or something else that would be statistically useful for their next attack.
- Scotto
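The leakage scenario above suggests a practical mitigation: scrub identifying details from a log before it ever reaches a third-party model. Below is a minimal sketch, with the caveat that every pattern is hypothetical and illustrative; the hostname, username and OS formats are made up for this example, and a real redaction pass would need far more coverage.

```python
import re

# Illustrative redaction pass run on a log line before it is pasted into a
# third-party generative AI service. Patterns are hypothetical examples,
# not an exhaustive or production-ready set.
REDACTIONS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),       # IPv4 addresses
    (re.compile(r"\b[\w-]+\.corp\.example\.com\b"), "<HOST>"),  # internal hostnames (assumed domain)
    (re.compile(r"(user=)\S+"), r"\1<USER>"),                   # usernames in key=value form
    (re.compile(r"Windows Server \d+|Ubuntu [\d.]+"), "<OS>"),  # OS fingerprints
]

def redact(line: str) -> str:
    """Apply each redaction pattern in order and return the scrubbed line."""
    for pattern, replacement in REDACTIONS:
        line = pattern.sub(replacement, line)
    return line

if __name__ == "__main__":
    log = ("2023-06-27 auth failure user=jsmith on web01.corp.example.com "
           "(Ubuntu 22.04) from 203.0.113.45")
    print(redact(log))
```

Even a simple pass like this removes exactly the details Scotto warns about: the operating system, the naming convention, and anything else a later prompt could harvest.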
Once data is ingested into a generative AI model, you're no longer the owner of that data. As an alternative, some organizations are training their own large-scale data sets - foundational models - offline in their own data centers. When they're ready to make use of one, they may expose a chat interface to the public so the public can benefit from the AI's knowledge, but that interface is the vector for attack. If the entire information set is accessible one way or another, that data can be exfiltrated by crafting custom prompts - prompt engineering - to extract data that was never meant to be extracted. So the question becomes: how do you implement private models and share their benefits with clients while managing all these other security concerns?
Additionally, we're seeing AI canaries. This is when threat actors put specific content out into the wild to reverse-prompt-engineer the AIs that are scraping the internet for information. This allows the bad actors to exfiltrate some data or even send malicious code to the AI itself.
- Behr
Years ago, in the early days of cloud computing, people were really concerned about or even scared of the technology. But then an organization called the Cloud Security Alliance was founded by security practitioners with the very intention of proclaiming: 'We can't live in fear of new technology. We have to learn to overcome, manage and control it.' We're not there yet with generative AI, but that's not a reason to be overly fearful.
- Goodendorf
On Third-Party Risk
We're seeing a trend where third parties are spending more money on lawyers than on cybersecurity experts to actually secure their products. We've been part of some contractual MSA battles where our third parties don't want to take responsibility for securing their downstream and are pushing back on our cybersecurity policies. It's really tough right now to do business around bigger pieces of software in such a high-risk environment.
- Scotto
On End User Security Awareness and Training
Traditionally, training has used a hammer instead of a carrot: when employees do something wrong they're chastised, but no real learning comes from that approach. Building a culture that is accepting and non-punitive, and that emphasizes "See something? Say something," works better. Having a program in place for reporting possible scams, along with continuous microlearning, is also valuable. And rewarding people for doing things correctly - with money or prizes - tends to work well.
- Behr
We offer a weekly cyber talk that is 30 minutes long and open to anybody, from front desk employees to cybersecurity experts. We talk about what's going on in the world and what threats are on the horizon, and then we have a topic of the week and even a day in the life of one of our cybersecurity experts. We want our employees to be cybersecurity-curious, and we try to foster that.
- Scotto
On Zero Trust
Zero Trust is a strategy where you have multiple layers of security, and in every layer of security you assume that there has been a breach. So, validation is required at every layer. There are many different avenues one can follow with the Zero Trust model, and what that journey looks like for different organizations depends on what is most critical in the short term.
- Behr
Zero Trust means there is no inheritance of authentication. Just because a device or a user was authenticated previously doesn't mean they're automatically given access. It's every user, every device, every session, every time. Pairing this with micro-segmentation means those same users are given access only to a very narrow set of resources or a very small piece of the network - whatever aligns with their job responsibilities and no more. But a lot of time has to go into those rules, and you need strong identity management going into the process.
- Scotto
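The "every user, every device, every session, every time" rule paired with micro-segmentation can be sketched as a per-request check. This is a minimal illustration under stated assumptions, not any vendor's implementation: the roles, resources and `Request` fields here are hypothetical names invented for the example.

```python
from dataclasses import dataclass

# Hypothetical micro-segmentation policy: each role maps to the narrow set
# of resources it may reach, and nothing more.
SEGMENT_POLICY = {
    "front-desk": {"booking-app"},
    "soc-analyst": {"siem", "ticketing"},
}

@dataclass
class Request:
    user: str
    role: str
    device_compliant: bool  # e.g. a patched, managed endpoint (assumed signal)
    session_valid: bool     # e.g. fresh MFA for this session (assumed signal)
    resource: str

def authorize(req: Request) -> bool:
    """Re-validate every layer on every request; prior approvals carry no weight."""
    if not (req.device_compliant and req.session_valid):
        return False
    # Micro-segmentation: allow only resources explicitly mapped to the role.
    return req.resource in SEGMENT_POLICY.get(req.role, set())
```

The key property is that nothing is cached between calls: a user who was approved a moment ago is still denied the instant their session lapses or they reach for a resource outside their segment.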
On Cyber Insurance
Some people think that cyber insurance used to be worthwhile but isn't anymore, because premiums have increased exponentially while coverage has decreased just as dramatically. Ransomware isn't covered; you aren't covered if a nation state is the one attacking you; and if the threat actor finds out you have insurance, you aren't covered either. There are also no standards across different insurers. Unlike other types of insurance, which are heavily regulated, cyber insurance is not regulated, so insurers can change their terms without notifying anyone.
But the one benefit is that insurers won't give you any coverage unless you follow a minimum set of security controls. So if you're vetting third-party organizations, you can check whether they carry cyber insurance, and that will give you some insight into their cybersecurity measures.
- Behr