certified representative
Your qualification for the safe and sustainable use of artificial intelligence
Certified by the Hochschule der Wirtschaft für Management (HdWM), Mannheim
This further training course was designed in cooperation with the Mannheim University of Applied Management Sciences and meets the university's quality standards in terms of overall concept, content, trainers, and examination. As a result, participants benefit from a high-quality, up-to-date qualification with high practical relevance and excellent trainers.
Fundamentals and technical basis of AI, with the following focal points:
- Fundamentals and technical basis: Differentiation of AI from other technologies such as analytics and automation, as well as the functioning of modern AI systems (machine learning, neural networks) and the learning mechanisms of generative AI.
- Risks associated with the use of AI: Technical risks such as bias in data models, erroneous decisions, security risks (e.g., adversarial attacks), and awareness of shadow AI and misinformation.
- Data management: The importance of data quality, data protection impact assessments, and structured data governance to ensure secure and compliant data usage.
- Ethics as a link between technology and law: Establishing organizational structures, educating employees, promoting transparency and traceability of technical decisions, and preventing discrimination.
Regulatory framework and practical implementation of AI in companies, focusing on the following areas:
- EU AI Regulation (AI Act) for the safe use of AI systems, especially high-risk applications, through clear requirements for transparency and monitoring.
- Other legal frameworks, such as data protection and liability law, which are key when processing personal data and dealing with erroneous AI results.
- AI management system as a company-wide system that integrates compliance and security measures and defines responsibilities for AI deployment.
- AI system registry: a central register that creates transparency, documents risk categories, and facilitates monitoring.
- Organizational measures such as clear processes, training, and control mechanisms promote the safe and ethical use of AI in the company.
- Recommendations for action and key findings to facilitate the introduction of responsible AI use.
The content of the two compulsory modules of the certified representative training course will be examined.
You can take the e-exam as soon as you have completed both modules.
Contents
Module 1
Fundamentals and technical basis
- Introduction to AI and how it differs from other technologies.
- Differences between AI, analytics, and automation.
- How modern AI systems such as machine learning and neural networks work.
- Understanding generative AI learning processes.
Risks associated with the use of AI
- Technical risks such as bias in data models, incorrect decisions or results, and security risks due to attacks on systems (adversarial attacks).
- Dealing with uncertainties surrounding new technologies.
- Raising awareness of shadow AI, bias, and misinformation in AI tools.
- Establishment of technical security measures to protect against attacks and system failures.
- Definition of clear escalation mechanisms in the event of malfunctions or misuse of AI systems.
Data management as the basis for AI deployment
- Clean data as the basis for successful AI models.
- Protect the rights of data subjects (e.g., right of access, erasure obligations).
- Conduct data protection impact assessments for AI projects.
- Data governance for structured and secure data usage.
Ethics as a link between technology and law
- Create organizational structures to operationalize ethical principles.
- Educate and train employees to minimize risks such as bias or misinformation.
- Responsible technology development and application: avoid discrimination.
- Document and implement transparency requirements.
- Ensure the traceability of technical decisions.
Module 2
Regulatory framework – focus on the EU AI Regulation (AI Act)
- Safe use of high-risk AI systems.
- Risk classification: prohibited applications, high-risk applications, minimal-risk systems.
- Transparency and monitoring obligations for high-risk systems (e.g., through audits, conformity assessments).
Further legal framework conditions in the context of AI
- GDPR specifically in the context of AI projects: processing of personal data, privacy by design/default.
- Software liability for incorrect results from automated systems.
Development of a company-wide "AI management system"
- Connection to existing compliance, risk, or quality management systems (e.g., ISO 9001, ISO/IEC 27001).
- Clearly define roles and responsibilities, including the role of the AI officer.
- Ensure cooperation with data protection officers and the IT department.
- Moderate interdisciplinary teams and promote cross-functional skills.
Establishment and maintenance of a central AI system registry
- Transparency and risk monitoring through a central register.
- System description: Functionality, area of application, risk category.
- Document responsibilities and evidence related to conformity assessments or audits.
- Use automated documentation tools or platforms for governance data.
Organizational measures & practical implementation
- Establish processes for approving new AI applications: Define decision criteria and review steps.
- Develop policies and guidelines for dealing with AI in the company.
- Introduce internal control mechanisms such as audits or self-assessments.
- Define warning notices and escalation processes (e.g., threshold values, yellow status).
- Train employees: How to use AI systems, ethical guidelines, transparency requirements, liability issues.
- Develop communication and change management strategies to promote acceptance of AI and overcome resistance.
Final discussion: The path to successful AI implementation
- Bring together key findings.
- Recommended actions for your first steps as a certified representative.
Learning environment
In your online learning environment, you will find useful information, downloads and extra services for this training course once you have registered.
Your benefit
In this practical and interdisciplinary continuing education program, you will develop the technical, legal, and ethical skills necessary to use artificial intelligence (AI) safely and responsibly. You will learn to systematically assess risks, develop a company-wide management system for AI use, and implement the requirements of the EU AI Regulation and the GDPR in your company. At the same time, you will strengthen your personal position as an AI expert and actively contribute to the future-proof orientation of your company.
Building your skills
- Expertise: Understand the key technical, legal, and ethical aspects of AI.
- Legal certainty: Confidently implement all relevant requirements, such as the EU AI Regulation and the GDPR.
- Practical focus: Apply what you have learned directly in practice, e.g., by introducing a central AI system registry.
- Sustainability: Promote your company's innovative strength and competitiveness through the responsible use of AI.
- Personal development: Position yourself as a key figure in the safe use of AI and improve your career prospects.
Methods
Expert input, group work, discussion, practical exercises, case studies, lessons learned from failed projects, and practical examples from various industries, specifically tailored to compliance, data protection, and IT management.
Why this training?
- Holistic: Technical, legal, and ethical aspects are linked together.
- Practical: Learn through case studies, practical exercises, and best-practice examples.
- Recognized: Certification by the Mannheim University of Applied Management Sciences gives you a seal of approval for your expertise.
- Networking: Exchange ideas with experts and participants from various industries. Benefit from interdisciplinary perspectives to gain long-term advantages from best practices and new ideas.
Recommended for
- Compliance officers, data protection managers, and IT managers who are responsible for implementing regulatory requirements.
- Specialists in interdisciplinary teams who are responsible for AI projects.
- Anyone who faces AI-related challenges such as compliance issues, data management, or technical implementation.
Required prior knowledge
Basic knowledge of artificial intelligence is required. Regardless of professional background, candidates should have an interest in interdisciplinary topics, especially at the intersection of technology, law, and ethics.
Final examination
Examination requirements
The prerequisite is completion of the two compulsory modules.
Form of examination
Written final exam in the form of an e-exam. To save you travel costs and time, you can take the e-exam on your computer at work or at home.
Exam contents
The training content of both modules is tested in written form (time required: approx. 45 minutes). The e-exam can be taken as soon as both compulsory modules have been completed.
After successfully passing the final exam, you will receive the recognized "certified representative" certificate from Haufe Akademie and the Mannheim University of Applied Management Sciences. This certifies your in-depth knowledge as a foundation for your further professional development.
42489
42490
Start dates and details
