Hybrid Search combines the speed of generative answers with the precision of guided steps in a single conversation. It uses large language models (LLMs) to respond conversationally, drawing on knowledge stored in the Mavenoid asset library as well as any guided content marked as searchable.
Hybrid Search presents a mix of generative and guided responses in the same conversation, with all context preserved.
The assistant automatically picks the best type of response for each question the user submits: guided, generative, suggestions, or clarifying questions.
To get the most out of Hybrid Search, build flows using a mixture of files uploaded to Smart Assets and dedicated node paths.
This maximizes content coverage and reduces the risk of users not being able to find what they're looking for.
You can add the following types of content to Smart Assets:
PDF files
Website knowledge
Custom data (in JSON format)
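For custom data, the upload is a JSON file. Here is a minimal sketch of what such a file might look like; the field names (product, question, answer) and the sample entries are illustrative assumptions, not a required Mavenoid schema:

```json
[
  {
    "product": "X200 Pressure Washer",
    "question": "How do I descale the pump?",
    "answer": "Run a 1:1 vinegar and water solution through the intake for two minutes, then flush with clean water."
  },
  {
    "product": "X200 Pressure Washer",
    "question": "What oil does the motor take?",
    "answer": "Use SAE 30 non-detergent pump oil and check the level before each use."
  }
]
```

Whatever structure you choose, keeping each entry self-contained (one product, one question, one answer) gives the assistant clean, retrievable units of knowledge to draw from.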
Each Smart Asset node should contain only one file. To keep things clear, rename the node to match the name of the file you uploaded.
To make sure users only see information relevant to their product, add conditions directly to the files, not to the Smart Asset node itself.
When adding conditions to assets, make sure each value exactly matches the value of a condition set in a write node at another point in the flow. If the values do not match, the condition will never be met.
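As a sketch of the matching rule, assume a write node earlier in the flow stores the user's model under a hypothetical product_model key. The condition on the file must then carry the identical value:

```json
{
  "writeNode": { "product_model": "X200" },
  "fileCondition": { "product_model": "X200" }
}
```

Because both sides hold the exact string "X200", the condition is met; any other value on either side, such as "X200 Pro", would not match, and the file would stay hidden for that user.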
You can customize the assistant's tone, length, depth, and style of response to align with your brand. You can also set escalation rules, such as trigger phrases and turn limits.
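As an illustration only, not an actual Mavenoid configuration file, the kinds of values involved might look like this (every key here is hypothetical; the real settings live in the assistant's configuration UI):

```json
{
  "tone": "friendly",
  "responseLength": "concise",
  "escalation": {
    "triggerPhrases": ["talk to a human", "live agent"],
    "maxTurns": 10
  }
}
```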
A node must be searchable to appear in search results. To make a node searchable, edit it and toggle “Include in Assistant search.”
Searchable nodes show a small magnifying-glass jump target, which means they can be reached directly from search.
Only symptom, solution, choice list, question, Smart Asset, and message nodes are searchable.
Symptom and Smart Asset nodes are searchable by default; all other node types are not.
Symptom nodes appear in results using their Symptom description, not their “Question to user” text.