In our seminars, you will learn everything you need to develop and deploy generative AI - from working with large language models to programming your own AI assistants. Learn how to use LLMs and integrate them via API, build RAG systems on your own data, and bring generative AI to the cloud with AWS or Azure. Our training courses combine technical know-how with practical application.

Generative artificial intelligence is revolutionizing software development by automating processes and increasing productivity. Our training courses provide you with comprehensive knowledge of AI tools such as GitHub Copilot and ChatGPT to make the entire software development process more efficient.
The training courses cover topics across the entire development workflow. Our trainers convey this knowledge with direct reference to real project scenarios, so you are ideally prepared to use generative AI effectively in your development processes.
AI agents are fundamentally changing how companies tackle complex tasks. These intelligent systems work independently, make informed decisions, and optimize business processes without constant human supervision. In this comprehensive guide, you will learn everything you need to know about AI agents, their diverse applications, and proven strategies for successful implementation.
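To make the idea concrete, here is a minimal sketch of the observe-decide-act loop behind such agents. Everything in it is illustrative: call_llm stands in for whatever LLM client you use, and the weather tool is a stubbed example, not part of any specific framework.

```python
# Minimal agent loop sketch: the model decides which tool to call,
# the program executes it, and the observation is fed back until
# the model produces a final answer. call_llm() is a hypothetical
# placeholder for any LLM API client.

def get_weather(city: str) -> str:
    """Example tool: a real agent would call an actual weather API here."""
    return f"Sunny, 22 °C in {city}"

TOOLS = {"get_weather": get_weather}

def run_agent(task: str, call_llm, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_llm(history, tools=list(TOOLS))  # model picks an action
        if decision["type"] == "final_answer":
            return decision["content"]
        # Execute the requested tool and append the result as an observation.
        result = TOOLS[decision["tool"]](**decision["arguments"])
        history.append({"role": "tool", "content": result})
    return "Step limit reached without a final answer."
```

The step limit is the usual safeguard against the model looping indefinitely; production frameworks add the same guard in more elaborate forms.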

This article provides a clear overview of large language models (LLMs), i.e., AI models that have been trained on huge amounts of text to analyze and generate language. You will learn how LLMs work technically—from tokenization to parameters to the training process—and get practical examples of their use in business. In addition, the article highlights opportunities, challenges, and future developments of this technology in a professional context.
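As a small illustration of the tokenization step, the snippet below uses the openly available tiktoken library (the tokenizer behind OpenAI's GPT models). The sample sentence is arbitrary, and the exact token IDs depend on the chosen encoding.

```python
# Tokenization example with the tiktoken library (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4 class models

text = "Large language models process text as tokens."
token_ids = enc.encode(text)

print(token_ids)                              # token IDs, e.g. [34253, 4221, ...]
print(len(token_ids), "tokens")               # how many tokens the model "sees"
print([enc.decode([t]) for t in token_ids])   # each token as a readable substring
```

This also shows why token counts, not word counts, determine context-window usage and API costs.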

In practice, AI models quickly reach their limits: they know a lot from their training, but without a connection to current data, tools or systems, their answers often remain incomplete or outdated. Anthropic's Model Context Protocol (MCP) fundamentally changes this. As an open standard, it enables AI systems like Claude to directly and securely access relevant information sources and applications, from databases to cloud services. Instead of manually gathering information and copying it into prompts, AI assistants can now access these sources independently. This makes them much more useful for real use cases.
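As a rough sketch of what this looks like on the server side, the official MCP Python SDK provides a FastMCP helper for registering tools that an AI assistant can discover and call. The server name and the inventory tool below are illustrative assumptions, not a real integration.

```python
# Minimal MCP server sketch using the official Python SDK (pip install mcp).
# The tool below is a stubbed, illustrative example.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-demo")  # hypothetical server name

@mcp.tool()
def get_stock_level(product_id: str) -> str:
    """Return the stock level for a product (stubbed for this example)."""
    return f"Product {product_id}: 42 units in stock"

if __name__ == "__main__":
    # Serves the tool over stdio so an MCP client (e.g. Claude Desktop)
    # can discover and invoke it without custom glue code.
    mcp.run()
```

The point of the standard is exactly this symmetry: any MCP-capable client can use any such server without bespoke integration work.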

Learn what retrieval-augmented generation (RAG) means and how this approach supplements large language models (LLMs) by retrieving relevant information from documents or knowledge bases before the generative response is created. You will see how RAG overcomes the typical limitations of static training data, increases accuracy, and enables AI systems to leverage up-to-date, company-specific knowledge. This improves the quality of responses and makes AI applications more relevant for practical tasks, such as internal knowledge management or technical queries.
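The retrieve-then-generate flow can be sketched in a few lines: rank documents by similarity to the query, then pass the best matches to the model as context. In this sketch, embed() and call_llm() are hypothetical stand-ins for whatever embedding model and LLM client you use.

```python
# Minimal RAG sketch: retrieve the most relevant snippets via cosine
# similarity, then hand them to the model as grounding context.
# embed() and call_llm() are hypothetical placeholders.
import numpy as np

def retrieve(query: str, docs: list[str], embed, top_k: int = 3) -> list[str]:
    """Rank documents by cosine similarity to the query embedding."""
    q = embed(query)
    scores = []
    for doc in docs:
        d = embed(doc)
        scores.append(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)))
    ranked = sorted(zip(scores, docs), reverse=True)
    return [doc for _, doc in ranked[:top_k]]

def answer(query: str, docs: list[str], embed, call_llm) -> str:
    context = "\n\n".join(retrieve(query, docs, embed))
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)
```

Production systems replace the linear scan with a vector database and add chunking and re-ranking, but the core flow stays the same: retrieve first, then generate.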

