Chatbot of the GWDG

The GWDG has offered its own chatbot Chat AI since February 22; it runs on in-house hardware. You can log in via AcademicCloud or SSO and start chatting with the generative AI. We have collected possible use cases here.

Interface

The intuitive layout enables easy navigation and efficient usage. You can chat with the AI by sending messages or by uploading a data file (see image).

gwdg-llm-screenshot

Furthermore, using the button on the lower right, you can choose between different AI models (so-called Large Language Models, LLMs): the internal LLMs Mixtral 8x7B Instruct, Qwen2 72B Instruct, Meta LLaMA 3 SauerkrautLM 70B Instruct, Meta LLaMA 3.1 70B Instruct, and Meta LLaMA 3.1 8B Instruct run on in-house hardware within the GWDG, while the external LLMs OpenAI GPT-3.5 Turbo and OpenAI GPT-4 use third-party resources.

Using the advanced options, you can adjust your workflow by setting a system prompt, which provides context and instructions for the model. Additionally, you can set the temperature, which influences the creativity and randomness of the model's output (see image).
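To make the two options concrete, here is a minimal sketch of how a system prompt and a temperature value are typically combined with the user's message in a chat-style request. This is purely illustrative: the endpoint URL, model name, and default values below are assumptions, not documented properties of Chat AI.

```python
import json

# Hypothetical endpoint -- NOT a documented Chat AI URL.
API_URL = "https://example.gwdg.de/v1/chat/completions"

def build_request(user_message: str,
                  system_prompt: str = "You are a helpful assistant.",
                  temperature: float = 0.7,
                  model: str = "meta-llama-3.1-8b-instruct") -> str:
    """Assemble a chat request body with a system prompt and temperature.

    The system prompt supplies context and instructions for the model;
    the temperature controls sampling randomness (lower values give more
    deterministic answers, higher values more creative ones).
    """
    payload = {
        "model": model,            # assumed model identifier
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload, indent=2)

print(build_request("Summarize the Chat AI service in one sentence."))
```

In this common request shape, the system prompt travels as the first message with the role "system", separate from the user's own message, which is why changing it alters the model's behaviour without editing your chat input.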

gwdg-llm-screenshot-optionen

Data handling

During the development of this service, the GWDG paid special attention to data security. To protect all entered data, prompts and answers are stored only locally on the user's device and only for the duration of one session. If you close the website, all prompts and answers are deleted. Only the number of requests per user is saved, in order to estimate demand for this new service.

Attention! This only applies to the internal LLMs (Mixtral, Qwen, and Meta). Using the OpenAI models does not transfer any information about the users themselves, but all requests are passed along unfiltered: any personal information included in the prompts will be transmitted to the external service provider. You can find more information in the privacy policy.

If you have any questions or feedback, do not hesitate to contact us at support@gwdg.de.