New features of Mindbreeze InSpire 25.5 Release
Want to check out the highlights of the Mindbreeze InSpire 25.5 Release? Learn more in the following blog post.
AI-based Neural Reranking – Optimized answer quality through deeper semantic matching
With the Mindbreeze InSpire 25.5 Release, Mindbreeze customers have access to AI-based Neural Reranking via configurable Transformer language models, improving answer quality. These configurable AI models re-evaluate the relevance of answers to a query through deep semantic matching. Using cross-attention, Mindbreeze retrieves more candidate answers than originally requested and re-scores each one for relevance; the best-matching candidates then form the basis for the generated answer. In addition to this re-scoring, Mindbreeze customers can configure boosting rules to tailor the new evaluation and ranking to their use cases.
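The oversample-then-rerank pattern described above can be sketched as follows. This is a minimal illustration, not Mindbreeze's implementation: the `overlap_score` function is a trivial stand-in for a real cross-encoder that would score each (query, passage) pair jointly with cross-attention.

```python
import re
from typing import Callable, List, Tuple

def rerank(query: str,
           candidates: List[str],
           score: Callable[[str, str], float],
           top_k: int = 3) -> List[Tuple[str, float]]:
    """Score every (query, candidate) pair jointly, then keep the best top_k.

    The retriever supplies more candidates than requested; the reranker
    re-scores them and truncates to the final answer set.
    """
    scored = [(passage, score(query, passage)) for passage in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Stand-in scorer: token overlap instead of a real cross-encoder model.
def overlap_score(query: str, passage: str) -> float:
    q = set(re.findall(r"\w+", query.lower()))
    p = set(re.findall(r"\w+", passage.lower()))
    return len(q & p) / max(len(q), 1)

passages = [
    "Neural reranking improves answer relevance.",
    "The cafeteria menu changes weekly.",
    "Reranking reorders retrieved passages by semantic relevance.",
]
best = rerank("how does reranking improve relevance", passages, overlap_score, top_k=2)
```

In a production pipeline, `overlap_score` would be replaced by a Transformer cross-encoder; the surrounding retrieve-more-then-truncate logic stays the same.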

Neural Reranking can be used not only for AI Answers in the Mindbreeze InSpire client, but also for Retrieval Augmented Generation (RAG) and agentic RAG use cases. This means that Mindbreeze customers not only receive high-quality content in the LLM context, but also benefit from reduced token consumption and faster answer generation. For flexible use of this optimization, it is also possible to dynamically activate Neural Reranking per request.
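The per-request activation mentioned above could take a shape like the following sketch. The field names (`retrieval`, `neural_reranking`, `max_passages`) are illustrative assumptions for this example, not the documented Mindbreeze API.

```python
import json

def build_rag_request(question: str,
                      use_neural_reranking: bool,
                      max_passages: int = 5) -> str:
    """Build a hypothetical RAG request body that toggles reranking per request."""
    request = {
        "query": question,
        "retrieval": {
            "max_passages": max_passages,
            # Enable or disable Neural Reranking for this request only,
            # instead of switching it globally in the pipeline configuration.
            "neural_reranking": use_neural_reranking,
        },
    }
    return json.dumps(request)

body = build_rag_request("What changed in release 25.5?", use_neural_reranking=True)
```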
Mindbreeze InSpire AI chat functionalities with AI Answers available in Insight Apps
With the Mindbreeze InSpire 25.5 Release, it is possible to use the entire functionality of Insight Services for RAG pipelines with AI Answers in Insight Apps. To this end, improvements have been made in particular with regard to consistency with other chat applications when using retrieval within the RAG pipeline. The AI Answers component has been further developed by adjusting the progress bar and optimizing the formatting of answers.
Optimized multi-turn chat conversations
With the Mindbreeze InSpire 25.5 Release, multi-turn chat conversations have been optimized. Depending on the configuration and the productive RAG pipeline, the AI assistant can now draw on the chat history and previous conversations more comprehensively and generate an even more context-relevant answer.
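Conceptually, using chat history in a multi-turn RAG pipeline means prepending recent turns to the model's context so that follow-up questions can be resolved. The sketch below is a generic illustration under assumed role/content message structures, not Mindbreeze's internal mechanism.

```python
from typing import Dict, List

def build_prompt(history: List[Dict[str, str]],
                 question: str,
                 max_turns: int = 4) -> str:
    """Prepend the most recent conversation turns so the model can resolve
    follow-up references such as 'it' or 'that release'."""
    recent = history[-max_turns:]  # bound context size to control token use
    lines = [f"{turn['role']}: {turn['content']}" for turn in recent]
    lines.append(f"user: {question}")
    return "\n".join(lines)

history = [
    {"role": "user", "content": "What is Neural Reranking?"},
    {"role": "assistant",
     "content": "It reorders retrieved passages by semantic relevance."},
]
prompt = build_prompt(history, "Is it enabled by default?")
```

Without the history, the follow-up "Is it enabled by default?" would be ambiguous; with it, the model can tell that "it" refers to Neural Reranking.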
Detailed information on all innovations can be found in our release notes.
Don't miss your opportunity to see a live demo of the innovations in our comprehensive "What's New" webinar, which covers the last three releases. Sign up here.
Contact our experts for further information.