Businesses reinventing themselves today are connecting ever more digital experiences with the latest technology, and cybersecurity should sit at the epicentre of that transformation. That is the theme of our 2024 survey, which offers a C-suite playbook for those who dare to break with cyber-as-usual.
Austrian companies are stepping up their cybersecurity efforts and investing significantly in IT protection programmes. The results of our Global Digital Trust Insights 2024 clearly show this: 60% of Austrian companies will increase their investment in cybersecurity in 2024. Generative AI plays a crucial role, with more than half (53%) planning to use GenAI tools for cyber defence.
A further rise in cyber threats could be on the horizon as generative AI can help create advanced business email fraud on a large scale.
CISOs and CIOs should pay attention to a prevailing sentiment: 50% expect GenAI to lead to devastating cyberattacks in the next 12 months. Organisations need to establish sound AI governance and mitigate the risks that could arise from GenAI.
More than half say they’ll use GenAI for cyber defence in the next 12 months.
Nearly half are already using it for cyber risk detection and mitigation.
One-fifth are already seeing benefits to their cyber programmes because of GenAI — mere months after its public debut.
Source: PwC, 2024 Global Digital Trust Insights.
For defence. Organisations have long been overwhelmed by the sheer number and complexity of human-led cyberattacks, both of which continually increase. And GenAI is making it easier to conduct complex cyber attacks at scale. Researchers found a 135% increase in novel social engineering attacks in just one month, from January to February 2023. Services like WormGPT and FraudGPT are enabling credential phishing and highly personalised business email compromise.
To secure innovation. Businesses eager to reap GenAI’s many potential benefits to develop new lines of business and increase employee productivity invite serious risks to privacy, cybersecurity, regulatory compliance, third-party relationships, legal obligations and intellectual property. So to get the most benefit from this groundbreaking technology, organisations should manage the wide array of risks it poses in a way that considers the business as a whole.
Generative AI will shape our everyday lives at an unprecedented pace. Companies that neglect this development risk falling behind the competition.
From reconnaissance to action, GenAI can be useful for defence all along the cyber kill chain. Here are the three most promising areas.
Threat detection and analysis. GenAI can be invaluable for proactively detecting vulnerability exploits, rapidly assessing their extent — what’s at risk, what’s already compromised and what the damages are — and presenting tried-and-true options for defence and remediation. GenAI can identify patterns, anomalies and indicators of compromise that elude traditional signature-based detection systems.
GenAI is strong at synthesising voluminous data on a cyber incident from multiple systems and sources to help teams understand what has happened. It can present complex threats in easy-to-understand language, advise on mitigation strategies and help with searches and investigations.
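To make "detection beyond signatures" concrete, here is a minimal sketch of the kind of statistical baselining that anomaly-based detection builds on. The signal (daily failed-login counts) and the threshold are illustrative assumptions, not part of the survey; production systems use far richer models, which GenAI then helps analysts interpret.

```python
import statistics

def flag_anomalous_logins(daily_counts, today, threshold=3.0):
    """Flag today's failed-login count as anomalous if it deviates
    more than `threshold` standard deviations from the historical
    baseline. Illustrative only; real detectors use many signals."""
    mean = statistics.mean(daily_counts)
    stdev = statistics.pstdev(daily_counts)
    if stdev == 0:
        return today != mean
    z = abs(today - mean) / stdev
    return z > threshold

# Historical failed-login counts per day for one account
history = [3, 5, 4, 6, 5, 4, 5]
print(flag_anomalous_logins(history, 4))    # a typical day: not flagged
print(flag_anomalous_logins(history, 240))  # a burst of failures: flagged
```

A signature-based system would miss the burst unless it matched a known pattern; a baseline like this flags it because it is statistically unusual for that account.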
Cyber risk and incident reporting. GenAI also promises to make cyber risk and incident reporting much simpler. Vendors already are working on this capability. With the help of natural language processing (NLP), GenAI can turn technical data into concise content that non-technical people can understand. It can help with incident response reporting, threat intelligence, risk assessments, audits and regulatory compliance. And it can present its recommendations in terms that anyone can understand, even translating confounding graphs into simple text. GenAI could also be trained to create templates for comparisons to industry standards and leading practices.
The European Union’s Digital Operational Resilience Act calls for timely and consistent reporting of incidents that affect financial entities’ information and communication technologies. Imagine having a tool that makes preparing these reports much easier.
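As a sketch of how such a reporting tool might work, the snippet below assembles a prompt that asks a GenAI model to turn structured incident data into a plain-language summary. The field names and prompt wording are assumptions for illustration; the actual model call is omitted.

```python
def build_incident_report_prompt(incident: dict) -> str:
    """Assemble a prompt asking a GenAI model to summarise structured
    incident data for non-technical stakeholders. Field names here
    are illustrative, not a reporting standard."""
    lines = [
        "Summarise the following security incident for a non-technical",
        "audience in under 150 words. Note affected systems, business",
        "impact, and remediation status.",
        "",
    ]
    for key in ("id", "detected_at", "affected_systems", "severity", "status"):
        lines.append(f"{key}: {incident.get(key, 'unknown')}")
    return "\n".join(lines)

prompt = build_incident_report_prompt({
    "id": "INC-0042",
    "detected_at": "2024-01-15T03:20Z",
    "affected_systems": "payment gateway",
    "severity": "high",
    "status": "contained",
})
print(prompt)
```

The same structured record could feed several templates, for example one tuned to a regulator's required fields and one for an executive briefing.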
Adaptive controls. Securing the cloud and software supply chain requires constant updates in security policies and controls — a daunting task today. Machine learning algorithms and GenAI tools could soon recommend, assess and draft security policies that are tailored to an organisation's threat profile, technologies and business objectives. These tools could test and confirm that policies are holistic throughout the IT environment.
Within a zero trust environment, GenAI can automate and continually assess and assign risk scores for endpoints, and review access requests and permissions. An adaptive approach, powered by GenAI tools, can help organisations better respond to evolving threats and stay secure.
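The idea of continually scored, adaptive access decisions can be sketched as follows. The signals, weights and thresholds are assumptions chosen for illustration; in the adaptive approach described above, a GenAI-assisted system would tune them continuously rather than hard-coding them.

```python
def access_risk_score(request: dict) -> int:
    """Compute an illustrative 0-100 risk score for an access request
    from weighted contextual signals. Weights are assumptions, not
    a standard; an adaptive system would learn and revise them."""
    weights = {
        "unmanaged_device": 35,
        "unusual_location": 25,
        "off_hours": 15,
        "sensitive_resource": 25,
    }
    return sum(w for signal, w in weights.items() if request.get(signal))

def decide(request: dict) -> str:
    """Map the score to a zero-trust style decision."""
    score = access_risk_score(request)
    if score >= 60:
        return "deny"
    if score >= 30:
        return "step-up-auth"  # e.g. require MFA before granting access
    return "allow"

req = {"unmanaged_device": True, "sensitive_resource": True}
print(access_risk_score(req), decide(req))
```

The point is the shape of the decision, not the numbers: each request is re-scored from current context, so the same user can be allowed in the morning and challenged at night.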
And more. Many vendors are pushing the limits of GenAI, testing what’s possible. As the technology improves and matures, we’ll see many more uses for it in cyber defence. It could be some time, however, before we see a “defenceGPT” in broad-scale use.
So far, managing directors and board members have often paid too little attention to cybersecurity. The combination of hard-to-foresee risk and the associated liability demands action, and the growing vigilance and rising investment in resilience measures show that boards are beginning to respond.
The use of GenAI for cyber defence — just like the use of GenAI across the business — will be affected by AI regulations, particularly concerning bias, discrimination, misinformation and unethical uses.
Political and business leaders around the world are working hard to set AI boundaries and strengthen accountability, recognising the urgent need to address the potential far-reaching and rapid impact of GenAI on society. The legal framework is evolving rapidly in response.
Against the backdrop of recent legislative tightening, such as the NIS2 Directive and the EU AI Act, forward-looking companies recognise the importance of staying ahead of AI regulatory requirements. Our survey participants are aware of upcoming regulations and know that these could have a significant impact on their future revenue growth.
Among the 37% of respondents worldwide who anticipate AI regulation, three-quarters think the costs of compliance will also be significant. About two-fifths say they’ll need to make major changes in the business to comply.
Enthusiasm for AI is so high that 47% of Austrian executive respondents said they’d personally feel comfortable launching GenAI tools in the workplace without any internal controls for data quality and governance.
However, without governance, adoption of GenAI tools opens organisations to privacy risks and more. What if someone includes proprietary information in a GenAI prompt? And without training in how to properly evaluate outputs, people might base recommendations on invented data or biased prompts.
The place to start with GenAI — as with almost any technology — is by laying the foundation for trust in its design, its function and its outputs. This foundation begins with governance, with data governance and security deserving particular attention. A majority of respondents in Austria (60%) say they intend to use GenAI in an ethical and responsible way.
GenAI tools will be able to quickly synthesise information from multiple sources to aid in human decision-making. And, given that 74% of global breaches reportedly involve humans, governance of AI for defence ought to include a human element as well.
Enterprises would do well to adopt a responsible AI toolkit to guide the organisation’s trusted, ethical use of AI. Although responsible AI is often treated as a purely technological function, human supervision and intervention are essential to using AI well.