Large Language Models in Cybersecurity
Friday 05 July 2024 12:00

The emergence of new cyber technologies brings new threats and impacts with it. The Cyber-Defence Campus of the Federal Office for Defence Procurement armasuisse is publishing an Open Access study on this topic for the second time. It highlights the risks and challenges that arise from the use of generative artificial intelligence (AI) in cyber security. The study helps experts and decision-makers from public administration and industry to assess the risks of use and to develop security measures.



The development and spread of artificial intelligence present major challenges for security in cyberspace. In particular, machine learning models for generating text, images and videos, better known as “generative artificial intelligence (AI)”, are becoming increasingly powerful and are widely used among the population. However, generative AI poses considerable risks of misuse, such as deepfakes, fake news and fraud attempts. At the same time, the conscious use of AI can also have positive effects.

Influence of Large Language Models on cyber security

The study shows that the manipulation and misuse of learning algorithms can jeopardise the security of the AI applications that rely on them. For example, machine learning models can generate text or software that contains subtle errors which are difficult to detect. Large Language Models (LLMs) have revolutionised language understanding and are already used today in security-relevant products and applications. On the one hand, LLMs enable cyber attacks to be combated more efficiently; at the same time, malicious actors can use LLMs to create malware, phishing messages and malicious chatbots inexpensively.

Strengthening cyber security

The majority of today's LLMs are developed abroad. It is therefore important for state actors such as Switzerland to understand which dependencies on these foreign manufacturers exist and which risks they entail. This Open Access study offers valuable insights for experts and decision-makers in the area of cyber security. With this jointly developed study, the CYD Campus and the University of Applied Sciences and Arts of Western Switzerland (HES-SO) underline the need to confront rapid technological developments and take proactive measures.

The most important findings of the study are:

• generative artificial intelligence, and in particular Large Language Models (LLMs), poses substantial new threats to cyber security;

• the use of generative artificial intelligence in government, the economy and society must be approached with caution;

• security checks must be deployed along the data processing chain for the secure development and use of generative artificial intelligence.

The Cyber-Defence Campus

The Cyber-Defence Campus was founded in January 2019 to anticipate cyber developments more rapidly. It forms the link between the DDPS, industry and academia in research, development and training for cyber defence. It is part of the Federal Office for Defence Procurement armasuisse within the DDPS.