Updates and improvements

Changelog

March 28, 2024

GPT-4 Speed Increase

Over the last few weeks, we have worked on improving the speed of GPT-4. We were able to increase it by 2 to 2.5 times and rolled the update out to all workspaces last Monday. Let us know if you are using your own API keys for GPT-4 so we can increase the speed for those as well.

Improved Import of Integrations

Over the last few months, some users had problems with the Slack, Google Drive, and Confluence integrations. The issues occurred primarily with large numbers of files and folders. We took several weeks to identify and resolve the problems. Today, we can announce that integrations work reliably, even with large amounts of data.

Updated documentation

We updated our documentation and added all major features from the last few months. If you have questions or need prompting tips, you can find answers here. We constantly update it and will add more articles and guides over time. If anything is missing, let us know!

March 20, 2024

Website Crawling

We have released the much-requested Web Crawling feature! Input a link, and our platform automatically crawls all sub-links in the background to find the exact information you need. This saves time and makes research and information retrieval faster.
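To give a rough idea of what a crawler does behind the scenes, here is a minimal Python sketch of a breadth-first crawl that follows sub-links within the same domain. It is purely illustrative and not Langdock's actual implementation; the `crawl` function, the page limit, and the same-domain rule are assumptions made for the example.

```python
# Illustrative only: a minimal breadth-first crawler that collects sub-links
# of a starting URL within the same domain. This is not Langdock's
# implementation, just a sketch of the general idea behind web crawling.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkParser(HTMLParser):
    """Collects href attributes from anchor tags on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url: str, max_pages: int = 20) -> list[str]:
    """Breadth-first crawl of sub-links on the same domain as start_url."""
    domain = urlparse(start_url).netloc
    queue, seen = deque([start_url]), {start_url}
    visited = []

    while queue and len(visited) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
        except Exception:
            continue  # skip unreachable or non-HTML pages
        visited.append(url)

        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            # Only follow links that stay on the original domain
            if urlparse(absolute).netloc == domain and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)

    return visited


if __name__ == "__main__":
    for page in crawl("https://example.com"):
        print(page)
```

In practice, the crawled pages are then searched so the chat can surface exactly the information you asked for.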

Claude 3 Sonnet & Haiku (hosted in the US)

Since Anthropic announced their new Claude 3 models, many users have asked us when Claude 3 will be available in Langdock. We are still waiting for AWS to release the models in Europe. According to AWS, we expect the Claude 3 family (Haiku, Sonnet, and Opus) to arrive in Europe in the coming weeks.

Claude 3 Sonnet and Haiku are already available in the US, so we added them for you to try. Admins can turn them on here.

Improvements & Bug Fixes

Streamlined the user experience and interface when generating responses, including the buttons that appear below each response (copy, like/dislike, regenerate response).

Removed prompt suggestions: Prompt suggestions are now hidden in the chat by default. If you would still like to use them, don’t worry: you can turn them on again here.

We also fixed some bugs regarding integrations and other features and cleaned up our entire interface.

March 14, 2024

DALL-E (Beta)

This week, we released image generation with DALL-E. In addition to the existing chat modes (Web, Auto, Custom, Plain), you can now select a new mode called “Image”. From there, type in a prompt, and we’ll create an image based on it. We’ll launch additional image models very soon.

GPT Vision (Beta)

You can also analyze images and use them as context in your chats. To do this, select the GPT-4 Turbo model, and a third button will appear in the chat field. There, you can upload images and ask the AI model to describe or analyze them, or to extract text from them.

Improvements & Bug Fixes

Add multiple domains to allowed email domain settings: Admins can now add more than one domain with access to the workspace. Previously, only a single domain could be added.

Web mode in Assistants: Assistants can now access the web, just like a normal chat.

Sharing web chats with team members: You can now share a chat in web mode with colleagues. This feature was previously only available for chats in plain mode.

Analytics screen: Admins can now see how many messages were sent in the workspace in total and filter by the last seven days or the last month.

We've made various background improvements, such as restoring the ability for admins to share folders and their files with the entire workspace.

March 7, 2024

SSO & SCIM

Multiple customers requested SSO, and we finally shipped full support last week! With SSO, a single set of credentials unlocks multiple applications, enhancing security while making the login process smoother. We also added support for SCIM, which helps customers automate how users are provisioned. This means consistent data across platforms and significantly less time spent by IT on user management!

February 27, 2024

Edit Prompt & Regenerate Response

If you are unsatisfied with a response, you can edit your prompt and give the model different instructions or context. You can do this by clicking on the pen icon below the prompt.

Similar to the edit prompt functionality, the "regenerate response" feature in Langdock lets you request a new response from the model if you are unsatisfied with the initial one. Click the circular arrow below the reaction buttons, and a second answer will be generated. The chat will then split, similar to a path that forks into two directions.

Custom Instructions

From now on, you can add custom instructions in the settings to provide additional context to the AI model each time a prompt is sent. This helps tailor responses to your needs. There are two sections: one about yourself and one about how you would like responses to be written. You can add any information you like - for example, which area you work in or to whom responses should be addressed. In the response field, you can define things like the length of responses or whether they should always be written in bullet points. We collected some ideas of what to add here in our documentation.
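Conceptually, custom instructions act like extra context that is sent along with every prompt. The short Python sketch below illustrates the idea; the `build_messages` helper and the two example instruction fields are hypothetical and not Langdock's actual code.

```python
# Illustrative sketch: custom instructions are combined with every prompt as
# additional context. The helper and field contents are hypothetical examples,
# not Langdock's actual implementation.

ABOUT_ME = "I work in B2B marketing and write for a non-technical audience."
RESPONSE_STYLE = "Keep answers short and always use bullet points."


def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the custom instructions as system context to each prompt."""
    system_context = (
        "About the user: " + ABOUT_ME + "\n"
        "How to respond: " + RESPONSE_STYLE
    )
    return [
        {"role": "system", "content": system_context},
        {"role": "user", "content": user_prompt},
    ]


print(build_messages("Summarize our Q1 campaign results."))
```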