AI is a driving topic and has long been a fundamental part of our portfolio of services, including the integration of artificial intelligence into our customers' content management systems. A topic everyone is talking about.
But we will spare you the thousandth AI article with basic explanations and a verdict on whether it is an enrichment or a threat. We have become accustomed to ChatGPT & Co. and want to learn something new or deepen our knowledge.
Topics often develop so quickly that we adopt individual terms into our own language and use them confidently without fully questioning them. And artificial intelligence (AI) comes with a wealth of terms relating to technologies, models and approaches. The vocabulary is as varied as the content. So let's look at the key terms we currently encounter across various media and channels, and perhaps demystify AI a little.
Let's take a look at the specialist terminology that has become established in the field of AI:
What is a Large Language Model?
A Large Language Model (LLM) is a powerful artificial neural network designed to understand and generate natural language and to interact with people in natural language. Trained on large amounts of text from the internet, its output is based on the patterns it has learnt. In terms of how it works, it essentially resembles an autocomplete system for whole sentences and paragraphs.
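The "autocomplete" analogy can be made concrete with a deliberately tiny toy predictor based on word-pair counts. A real LLM learns vastly richer patterns with billions of parameters, but the basic principle of predicting the next token from what came before is the same:

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def autocomplete(counts, start, length=5):
    """Greedily extend `start` with the most frequent follower word."""
    words = [start]
    for _ in range(length):
        followers = counts.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

corpus = "the model reads text the model learns patterns the model writes text"
model = train_bigrams(corpus)
print(autocomplete(model, "the", length=3))  # continues with the likeliest words
```

Where an LLM scores every possible next token with a neural network, this sketch only counts adjacent word pairs, but both complete text by repeatedly predicting "what comes next".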
Examples of large language models:
- BERT, Bard and T5 from Google
- Databricks Dolly
- Young and ambitious: Heidelberg start-up Aleph Alpha with LLM Luminous
- Probably the best-known example of an LLM with a focus on conversational applications: ChatGPT from OpenAI
The differences between these models lie in their architecture, their area of application and their objective. What they have in common is that they drive progress in natural language and text processing and offer users a wide range of benefits.
Powerful and easily accessible models like OpenAI's have many potential applications, from supporting content production and translation to data analytics and various virtual assistants. Of course, it must always be remembered that, however usable their output has become, they have no genuine understanding or awareness comparable to a human's.
Neurolinguistic programming
Neuro-linguistic programming is a method concerned with analysing communication and human behaviour. It aims to identify patterns in human language and behaviour and to influence or change them in a targeted manner. Its techniques have historical links to psychology and therapy, but they are also used in areas such as communication training, coaching and sales. Note that although it shares the abbreviation NLP with natural language processing, the field of computers understanding human language, the two are distinct disciplines.
The link to artificial intelligence and large language models lies in the shared focus on natural language: analysing and generating it in a human-like way. While neuro-linguistic programming techniques are traditionally aimed at human interaction, AI and LLMs can process large amounts of text data in real time, identify patterns in communication and generate targeted responses based on those patterns. The benefits for applications such as chatbots, automated customer interactions and personalised communication are evident.
Text-to-image generators
Various AI models can create images from text and help improve image quality. In terms of the underlying technology, a distinction is made between diffusion and transformer models. In a transformer model, an encoded representation of the input text is fed into a transformer, which decodes it to generate an image reflecting the described content. A diffusion model, on the other hand, gradually refines a noisy, low-quality image to give it more clarity and detail, comparable to an artist adding finer strokes to a rough sketch to make it more defined and detailed.
The models use complex algorithms to iteratively enhance images, uncover hidden details, give them aesthetic quality and improve accuracy and precision. Text-to-image generators take the interpretation and use of images to a new level. They make it possible to clarify important visual information and maximise the usefulness of images in medical, industrial, cultural, administrative, educational and entertainment applications.
Well-known examples include:
- Stable Diffusion
- DALL-E
- Midjourney
These three models differ in their underlying technology, the training data used and the ways of interacting with them. Stable Diffusion is based on a latent diffusion model and was trained on the publicly available LAION-5B dataset. The rights to the generated images are not held by Stable Diffusion or the companies behind the model; they can largely be used freely by the user. DALL-E, on the other hand, is based on a transformer model. Unlike Stable Diffusion, DALL-E cannot be installed locally; it is available as a cloud service and as an integration in ChatGPT (for Plus users only), so that images can be developed and refined in dialogue with ChatGPT. The technology behind Midjourney is also categorised as diffusion, but is less transparently documented because, like DALL-E, it is proprietary, i.e. manufacturer-specific, software.
The models also vary in how they respond to requests, in the image-editing tools available and in the accessibility or free availability of their platforms.
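The stepwise-refinement idea behind diffusion can be illustrated with a deliberately simplified toy loop. Real diffusion models use a trained neural network to predict and remove noise; here the "target" image is known in advance, which it never is in practice, so this is purely an illustration of iterative refinement:

```python
import random

def refine_step(image, target, strength=0.5):
    """Move each pixel a fraction of the way toward the target."""
    return [p + strength * (t - p) for p, t in zip(image, target)]

def generate(target, steps=10, seed=0):
    """Start from pure noise and refine it step by step."""
    rng = random.Random(seed)
    image = [rng.uniform(0, 1) for _ in target]  # random noise
    for _ in range(steps):
        image = refine_step(image, target)
    return image

target = [0.0, 0.5, 1.0, 0.5]        # stands in for the "described content"
result = generate(target, steps=10)  # noise converges toward the target
print([round(p, 3) for p in result])
```

After ten halving steps the remaining noise is below a thousandth of its starting level, which is the intuition behind "a rough sketch gaining finer strokes" with every iteration.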
Artificial neural network
An artificial neural network (ANN) is a computer system inspired by the way our brain works. It forms the architectural foundation of many large language models. The network consists of interconnected nodes, or neurons, that work together to process information. Each neuron takes input, processes it and passes the result on to the next layer of neurons. Imagine a relay team passing a message along, with each member making its own contribution. The ANN learns to recognise patterns in data, such as the difference between cats and dogs in pictures. Once trained, it can make predictions or decisions based on new, unseen data, making it a significant asset for tasks in image and pattern recognition, language processing, game strategies, autonomous vehicles and many other applications. It is a powerful tool for finding complex relationships in information.
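The layered flow of information described above can be sketched in a few lines of plain Python. The weights and biases here are arbitrary illustration values, not trained ones; training would adjust them until the outputs match known examples:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs, passed through a sigmoid activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    """Each neuron in the layer processes the same inputs in parallel."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two layers: 2 inputs -> 2 hidden neurons -> 1 output neuron.
hidden = layer([0.5, 0.8], [[0.4, -0.2], [0.3, 0.9]], [0.0, 0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])
print(round(output[0], 3))  # a single value between 0 and 1
```

Each layer hands its results to the next, exactly like the relay team in the analogy; the final value could be read, for instance, as the probability that a picture shows a cat rather than a dog.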
Prompt Engineering
Prompt engineering can be loosely translated as "instruction design" and, owing to its importance, has already developed into a professional field of its own. It deals with the targeted design of text input to optimise the performance and controllability of text-based AI models such as chatbots. This involves developing clear and precise instructions, or prompts, that are sent to the model to obtain specific output in natural language. The technique makes it possible to steer the model's output and adapt it to a wide variety of applications.
Effective prompt engineering can make the difference between high-quality output and inaccurate or inadequate output. It often takes some experimentation and repeated refinement to craft the instructions and prompts that achieve the desired results.
A simple and generic input such as "What are databases?" will yield a correspondingly superficial answer. A good prompt, on the other hand, contains clear instructions and specific context: "Explain the importance of databases, e.g. NoSQL databases, and their use cases in big data applications. Please emphasise the advantages over relational databases. Please summarise in a maximum of 10 sentences and consider an experienced specialist readership as the target group." A well-founded and comprehensive answer can then be expected from the model. By targeting the prompt, in this case by adding context and a clear task, the model understands exactly what is expected of it and can produce a high-quality response. Follow-up questions on the first result can sharpen the output further as the dialogue continues. There is a lot of potential here; after all, we are talking to chatbots that you can address by name and challenge in conversation. Repeated, targeted questioning adds further value.
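The ingredients of a good prompt named above, a clear task, context, target group and a length constraint, can also be assembled programmatically. The `build_prompt` helper below is purely hypothetical, not part of any chatbot API, and simply formats the database example from the text:

```python
def build_prompt(task, context, audience, max_sentences):
    """Assemble a structured prompt from explicit components."""
    return (
        f"{task}\n"
        f"{context}\n"
        f"Target group: {audience}.\n"
        f"Please summarise in a maximum of {max_sentences} sentences."
    )

prompt = build_prompt(
    task="Explain the importance of databases, e.g. NoSQL databases, "
         "and their use cases in big data applications.",
    context="Please emphasise the advantages over relational databases.",
    audience="an experienced specialist readership",
    max_sentences=10,
)
print(prompt)
```

Making each component explicit like this also makes it easy to experiment: swap the audience or the length limit and compare how the model's answers change.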
We recommend trying it out!
Few-Shot Prompting
Few-shot prompting is a concept from machine learning, used in particular with AI models based on the Generative Pre-trained Transformer (GPT) architecture. It refers to getting a model to perform a specific task using only a very small number of examples, known as "shots", which are supplied directly in the prompt rather than through additional training.
Once again, a crucial aspect of few-shot prompting is the design of the prompt. The problem or task the model is supposed to solve is presented step by step in a structured and clear manner, supported by a few selected, high-quality examples of the desired outcome. The examples serve as context and guide the model towards the kind of answer that is expected. Importantly, the model's internal weights are not changed; instead, it conditions on the examples in its input to recognise the underlying pattern and apply it to new, similar tasks. It therefore generalises from the given examples to new situations.
Few-shot prompting is used in scenarios where little data is available or when time is of the essence. It also demonstrates the advanced ability of AI models to learn from a minimal amount of data and perform complex tasks.
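A few-shot prompt is ultimately just structured text. The `few_shot_prompt` helper below is a hypothetical illustration of the pattern described above: an instruction, a handful of worked examples, and then the new query whose answer the model should complete:

```python
def few_shot_prompt(instruction, examples, query):
    """Prepend a handful of worked examples ("shots") to the actual query."""
    lines = [instruction, ""]
    for example_input, example_output in examples:
        lines.append(f"Input: {example_input}")
        lines.append(f"Output: {example_output}")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model is expected to continue from here
    return "\n".join(lines)

prompt = few_shot_prompt(
    instruction="Classify the sentiment of each review as positive or negative.",
    examples=[
        ("The product arrived quickly and works perfectly.", "positive"),
        ("Broke after two days, very disappointing.", "negative"),
    ],
    query="Great value for the money.",
)
print(prompt)
```

The two examples act as the "shots": without changing any weights, they show the model the expected format and task so it can complete the final `Output:` line in the same style.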
Have we shed some light on the AI terminology jungle?
We are always happy to exchange ideas on exciting tech topics. Contact us via the form and let's see together how the symbiosis of human and artificial intelligence can also inspire your project.
We look forward to your prompt!
Technical terms and their abbreviations
AI = Artificial Intelligence
LLM = Large Language Model
GPT = Generative Pre-trained Transformer
ANN = Artificial Neural Network