
Artificial Intelligence and IT Security – More Security, More Threats - Prof. Dr. Norbert Pohlmann

Prof. Norbert Pohlmann from eco investigates the dual role of AI as both a powerful security asset and a potential attack vector. He highlights the importance of securing AI systems while underscoring the need for human oversight in critical areas to ensure safety and trust.

IT security and Artificial Intelligence (AI) are two interconnected technologies that profoundly influence each other. This article begins by outlining key classifications, definitions, and principles of AI. We then explore how AI can enhance IT security, while also addressing how attackers can leverage AI to compromise systems. Lastly, the article examines the critical issue of securing AI itself, detailing the measures needed to protect AI systems from manipulation by attackers.

Classification of Artificial Intelligence

“Data Science” is a field of computer science concerned with extracting knowledge from data. As data proliferates, ever more knowledge can be derived from the information it contains.

A distinction is made between weak and strong AI. Strong AI, also known as Artificial General Intelligence (AGI), refers to the hypothetical point at which AI systems surpass human-level intelligence. At this “singularity,” AI could rapidly improve itself in unpredictable ways, and humanity could lose control over it, making the future unpredictable, with potentially negative consequences. Our common task must therefore be to ensure that AI systems that surpass human intelligence act in harmony with human values and goals.

In contrast, weak AI, exemplified by Machine Learning (ML), is currently successful thanks to innovations such as Deep Learning and Large Language Models (LLMs). In machine learning, the term “stochastic parrot” is a metaphor for the theory that a large language model can generate plausible text without understanding its meaning. An LLM therefore cannot identify its own errors, and humans must be able to verify its results themselves. Large Language Models are the basis for generative AI.

Generative AI (GenAI), such as ChatGPT, creates various types of content and has revolutionized digitalization.

The Garbage In, Garbage Out (GIGO) Paradigm

The quality of data is paramount in AI. The GIGO paradigm (see Figure 2) emphasizes that the quality of AI results depends directly on the standard of input data. This means that poor data leads to poor outcomes, while high-quality data is essential for trustworthy AI results.

Several factors influence the quality of input data:

  • Completeness: AI systems must have access to full, relevant datasets to make accurate decisions. In cybersecurity, for instance, a system should have data on various types of cyberattacks to detect them effectively.
  • Representativeness: The data must reflect real-world scenarios, ensuring that AI can generalize across different contexts.
  • Traceability: Knowing the source and transformation process of the data helps ensure its reliability.
  • Timeliness: AI needs up-to-date data to predict current threats. Old data can lead to missed vulnerabilities.
  • Correctness: Accurate data labels and categories are crucial to prevent misleading results, especially in sensitive applications like malware detection.

Overall, ensuring high-quality data is essential to leveraging AI effectively in IT security.
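The data-quality factors above can be turned into automated checks before data reaches an AI model. The following minimal sketch illustrates this for security-event records; all field names, thresholds, and sample records are illustrative assumptions, not part of any real dataset schema.

```python
from datetime import datetime, timedelta

# Hypothetical security-event records; field names are illustrative only.
events = [
    {"src_ip": "10.0.0.5", "attack_type": "phishing", "label": "malicious",
     "source": "ids-sensor-1", "timestamp": datetime(2024, 5, 1)},
    {"src_ip": None, "attack_type": "ddos", "label": "malicious",
     "source": "ids-sensor-2", "timestamp": datetime(2020, 1, 1)},
]

REQUIRED_FIELDS = {"src_ip", "attack_type", "label", "source", "timestamp"}
VALID_LABELS = {"malicious", "benign"}
MAX_AGE = timedelta(days=365)

def check_quality(event, now):
    """Return a list of data-quality issues for one record."""
    issues = []
    # Completeness: every required field must be present and non-empty.
    for field in sorted(REQUIRED_FIELDS):
        if event.get(field) in (None, ""):
            issues.append(f"incomplete: missing {field}")
    # Correctness: labels must come from the known vocabulary.
    if event.get("label") not in VALID_LABELS:
        issues.append("incorrect: unknown label")
    # Timeliness: stale data can cause the model to miss current threats.
    ts = event.get("timestamp")
    if ts is not None and now - ts > MAX_AGE:
        issues.append("outdated: record older than one year")
    # Traceability: the origin of the data must be documented.
    if not event.get("source"):
        issues.append("untraceable: no source recorded")
    return issues

now = datetime(2024, 6, 1)
for e in events:
    print(e.get("src_ip"), check_quality(e, now))
```

A clean record yields an empty issue list; the second record is flagged as both incomplete and outdated, so it would be excluded or repaired before training.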

Handling AI results

Under the principle of “keeping the human in the loop,” AI results are treated as recommendations or suggestions for action for the user (see Figure 3). AI is a powerful tool, but it should not operate autonomously in all situations. Keeping the human in the loop is critical in IT security to avoid potential misjudgments by AI. While AI can detect patterns or anomalies very quickly in vast data sets, human expertise is needed to validate its findings. False positives, such as unusual but harmless network activity, require human judgment for accurate interpretation.

However, in time-sensitive contexts, autonomous decision-making by AI can be valuable. AI systems can automatically adjust firewalls or isolate compromised systems during an active cyberattack. Although this reduces response time, human oversight is still necessary to ensure that these automatic actions don’t lead to service disruptions or unintended consequences.
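One way to reconcile human oversight with time-critical autonomy is to route AI findings by confidence and urgency. The sketch below is a hypothetical decision flow, assuming illustrative thresholds and field names; it is not a description of any specific product.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    description: str
    score: float          # model confidence that this is an attack, 0..1
    time_critical: bool   # e.g. an active, spreading compromise

def decide(finding, analyst_confirms):
    """Return the action taken for one AI finding."""
    if finding.time_critical and finding.score > 0.95:
        # Partial autonomy: act immediately, but queue for human review
        # so unintended consequences are caught after the fact.
        return "auto-isolate (pending human review)"
    if finding.score > 0.5:
        # Default path: the AI output is a recommendation, not a verdict.
        return "isolate" if analyst_confirms(finding) else "dismiss as false positive"
    return "log only"

# Unusual but harmless activity: the human analyst overrules the AI.
benign = Finding("off-hours backup traffic", score=0.7, time_critical=False)
print(decide(benign, analyst_confirms=lambda f: False))

# Active attack: the autonomous fast path fires, review happens afterwards.
attack = Finding("ransomware beaconing", score=0.99, time_critical=True)
print(decide(attack, analyst_confirms=lambda f: True))
```

The design choice here is that autonomy is the exception, gated by both urgency and very high confidence, while the ordinary path always ends with a human decision.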

AI for IT Security

The use of AI for IT security creates significant added value for the protection of companies and organizations. Two application areas are presented below as examples:

1. Increasing the detection rate of attacks: Adaptive AI models gather and analyze security-relevant data from networks and IT systems to identify threats early across devices, servers, IoT, and cloud applications.

2. Supporting and relieving IT security experts, of whom there are too few: In detecting IT security incidents, AI can analyze and prioritize large volumes of security-relevant data, helping IT security experts focus on the most critical threats. Furthermore, with (partial) autonomy in reactions, AI can automatically adjust firewall and email rules during an attack, minimizing the attack surface while keeping essential business processes running.
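The prioritization step in the second application area can be sketched as a simple scoring and ranking of alerts, so scarce human experts see the most critical incidents first. The weights, field names, and sample alerts below are illustrative assumptions only.

```python
# Hypothetical alert records from a security monitoring pipeline.
alerts = [
    {"id": 1, "severity": 3, "asset_critical": False, "anomaly_score": 0.40},
    {"id": 2, "severity": 9, "asset_critical": True,  "anomaly_score": 0.95},
    {"id": 3, "severity": 6, "asset_critical": True,  "anomaly_score": 0.55},
]

def priority(alert):
    """Combine rule severity, asset criticality, and the model's anomaly score."""
    score = alert["severity"] / 10 + alert["anomaly_score"]
    if alert["asset_critical"]:
        score *= 1.5  # incidents on business-critical assets jump the queue
    return score

# Highest-priority alerts first; this ordered list is what analysts work through.
triaged = sorted(alerts, key=priority, reverse=True)
for a in triaged:
    print(a["id"], round(priority(a), 2))
```

In this toy data the alert on the critical asset with the highest anomaly score is ranked first, even though several alerts arrived at once.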

The full article is available at: https://www.dotmagazine.online/issues/digital-security-trust-consumer-protection/artificial-intelligence-it-security

