Campus Days – Workshop: Modern Employee Training – What Does Technological Advancement Offer?

In our workshop "Modern Employee Training – What Does Technological Advancement Offer?", we received questions on topics such as Artificial Intelligence (AI), data protection, and compliance. In this post, we address the questions raised by our participants during the workshop.

How can AI-related risks be detected? How can AI be recognised?

Artificial Intelligence offers various advantages but also presents risks, especially in data protection and information security. If AI systems are incorrectly configured or misused, they can lead to the processing of personal data without adequate security measures.

Here are several methods to detect AI-related risks:

  • AI systems should be continuously monitored for anomalies. Tools for monitoring data usage can uncover deviations and unusual behaviour; a minimal example of such monitoring follows this list.
  • Regular auditing of AI models can help identify data protection gaps, which is particularly important in regulated industries.
  • AI systems should be designed so that their decision-making processes are understandable and transparent, an approach known as "Explainable AI." Understanding how an AI reaches its results makes it easier to spot potential risks.
  • To ensure employees recognise potential risks, it is important to train them regularly on AI security, data protection, and information security.
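
To make the first point above more concrete, here is a minimal sketch of how unusual data-access patterns could be flagged with an off-the-shelf anomaly detector. It is only an illustration on synthetic data; the feature names (requests per hour, records accessed, share of off-hours activity) are assumptions and not tied to any specific monitoring tool.

```python
# Minimal sketch: flagging unusual data-access patterns with an Isolation Forest.
# The features and values below are synthetic placeholders for real usage logs.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# One row per user and day: [requests_per_hour, records_accessed, off_hours_ratio]
normal_usage = rng.normal(loc=[50, 200, 0.1], scale=[10, 40, 0.05], size=(500, 3))
unusual_usage = np.array([[400, 5000, 0.9]])        # e.g. a bulk export at night
usage = np.vstack([normal_usage, unusual_usage])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(usage)

labels = model.predict(usage)                       # -1 = anomaly, 1 = normal
print("Flagged rows:", np.where(labels == -1)[0])
```

In practice, such a detector would run on real access logs, and its alerts would feed into the regular audits mentioned above rather than trigger automatic decisions.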

Detecting AI-generated content can sometimes be challenging, especially as AI becomes more advanced. However, there are signs, both in and out of the workplace, that can help you recognise such content:

  • AI-generated images or videos may sometimes look odd. For example, facial or object proportions may be off, or the lighting may appear unnatural. Such errors may indicate that the image or video is AI-generated.
  • When texts are created by AI, they may sound correct but repeat themselves, lack depth, or contain contradictions. AI-generated text doesn’t always have the same coherence as human-written content.
  • AI can be used to create fake news or content that looks or sounds deceivingly real. Therefore, it’s crucial to always verify the sources of information and critically question their authenticity.
  • Another risk is that AI could be used for purposes that threaten privacy. Be mindful of which data you share so that it cannot be misused.

How does AI generate images and videos?

AI generates images and videos using special computer programs that are trained on numerous examples. These programs learn how things look or move by analysing large amounts of data, such as photos, videos, or drawings. The underlying algorithms are known as generative models. A well-known example is Generative Adversarial Networks (GANs), which consist of two components: a generator and a discriminator. The generator creates images or videos from random input data, while the discriminator tries to distinguish them from real examples. Through this constant feedback, the generator learns to produce increasingly realistic images and videos.
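
To make the interplay between generator and discriminator more tangible, here is a minimal, purely illustrative GAN training loop in Python (PyTorch). It uses a toy two-dimensional distribution instead of images, so it only demonstrates the feedback mechanism, not a production image generator.

```python
# Minimal GAN sketch: the generator learns to fool the discriminator,
# the discriminator learns to separate real samples from generated ones.
import torch
from torch import nn, optim

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_g = optim.Adam(generator.parameters(), lr=1e-3)
opt_d = optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + 2.0           # toy "real" data instead of images
    fake = generator(torch.randn(64, latent_dim))   # generated from random noise

    # Discriminator step: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: produce samples the discriminator classifies as real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```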

Another method, known as diffusion models, starts with random noise and refines it step by step until an image is created. Tools like DALL-E and Midjourney use variations of this technique to generate visual content that looks human-made. However, when creating AI-generated visual content, it is important to consider intellectual property and copyright laws, as AI outputs may resemble protected works.
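
As a rough illustration of this step-by-step refinement, the following sketch shows the structure of a diffusion-style sampling loop. The noise predictor here is an untrained placeholder (real systems use large image networks that are also conditioned on the timestep), so the output is meaningless; only the loop structure corresponds to how these models work.

```python
# Conceptual diffusion-style sampling: start from pure noise and repeatedly
# remove a little predicted noise (DDPM-style update with a dummy predictor).
import torch
from torch import nn

steps = 50
betas = torch.linspace(1e-4, 0.02, steps)            # noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

# Placeholder for a trained noise-prediction network (a U-Net in real systems).
noise_predictor = nn.Linear(2, 2)

x = torch.randn(1, 2)                                # start from pure noise (toy 2-D "image")
for t in reversed(range(steps)):
    predicted_noise = noise_predictor(x)             # real models also receive the timestep t
    x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * predicted_noise) / torch.sqrt(alphas[t])
    if t > 0:
        x = x + torch.sqrt(betas[t]) * torch.randn_like(x)   # re-inject a little noise

print(x)  # with a trained predictor, this would be a sample from the learned distribution
```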

Will everything be controlled by AI in the future?

While AI is making great strides in many areas, it will not replace everything. Although AI systems are becoming increasingly powerful and support more and more processes, the human factor remains irreplaceable. Fields that rely heavily on creativity, emotional intelligence, or complex ethical judgment still require human expertise.

Moreover, the implementation of AI involves significant ethical and legal considerations. In data protection and compliance training, for example, human oversight is crucial to ensure that legal frameworks are followed and ethical concerns are addressed.

How do I write appropriate prompts?

Creating input commands, known as prompts, for AI systems is a key skill for achieving optimal results. Here are some tips:

  • Be specific. A common mistake is being too vague. The clearer and more detailed your prompt, the better the AI understands what you expect.
  • Provide the AI with information about the desired style, format, and audience. For example, a prompt could be: “Create a blog post on the benefits of eLearning in the compliance sector, targeting data protection officers in medium-sized companies.” Always ensure that no personal data is used, and check how AI usage is regulated within your company. A small sketch of such a structured prompt follows this list.
  • Often, you’ll need to try different versions of a prompt to achieve the desired results. AI systems respond to language nuances, so a small change can make a big difference.
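
To show how these tips can come together, here is a small sketch of a reusable prompt template. The field names and example values are only illustrative; the same structure works just as well when writing a prompt by hand.

```python
# Minimal sketch: assembling a structured prompt from task, topic, audience,
# format, and style. The fields and wording are illustrative assumptions.
def build_prompt(task: str, topic: str, audience: str, output_format: str, style: str) -> str:
    return (
        f"{task} on the topic '{topic}'.\n"
        f"Target audience: {audience}.\n"
        f"Format: {output_format}.\n"
        f"Tone and style: {style}.\n"
        "Do not include any personal data."
    )

prompt = build_prompt(
    task="Create a blog post",
    topic="the benefits of eLearning in the compliance sector",
    audience="data protection officers in medium-sized companies",
    output_format="roughly 600 words with short subheadings",
    style="professional but approachable",
)
print(prompt)
```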

Can AI generate texts that comply with all laws?

While AI systems are capable of generating legally relevant texts, the responsibility ultimately lies with humans to ensure that the texts comply with all applicable laws and regulations.

The challenge is that AI systems often cannot fully account for the legal intricacies of individual countries or industries. Even if AI models are trained on large volumes of legal documents, this does not guarantee that they will produce reliable, legally sound texts.

Closing Remarks

We would like to extend our sincere thanks for your participation in this year's Campus Days. It was wonderful to welcome you on-site and to see you actively engage in the lectures and workshops. The lively exchange and your valuable contributions enriched the day for all of us.

If you have any further questions or suggestions, please feel free to contact us at any time. On the CAMPUSTAGE website, you will once again find all the important information and the presentation slides.

We look forward to continued collaboration and vibrant discussions with you.

Best regards,
Your DSN train team