NICE CXone Expert
Expert Success Center

Expert Kernels Beta Release


What are Expert Kernels?

For teams building AI applications, Expert Kernels is an LLM-ready knowledge delivery solution. Unlike other knowledge retrieval options on the market, our product provides exactly the knowledge you need to build a fast, knowledge-powered LLM solution.

Without Kernels, teams building LLM-powered applications have limited options for bringing your knowledge into the ecosystem. With Kernels, Expert provides semantically relevant pieces of pages via API, enabling your team to target the parts of your knowledge base that are highly relevant and useful to the query. Your application can use Kernels as the only knowledge source or in combination with other data to construct prompts for an LLM. Ultimately, Kernels enables your generative AI applications to leverage the knowledge already available on an Expert site.

How does Expert Kernels work?

Published, public content is available via an API request as kernels. Kernels contain pieces of text from the pages along with metadata like the page ID, title, etc. When a natural language search query is sent to the Kernels API, it returns the kernels most relevant to that query, regardless of where that content was originally located in the site hierarchy. Kernels automatically reflect the most recently published information. All this helps your team engineer prompts for LLMs that contain the relevant information to build an exceptional solution.

What are some benefits of Expert Kernels?

  • Faster integration - Use Kernels instead of scraping or parsing a full Expert page in order to build an integration more quickly.
  • Improved relevancy - By returning only the most relevant kernels instead of full pages, users are more likely to get pertinent answers to their queries.
  • Faster results - Kernels provide answers quickly by eliminating the need to search through full page content.
  • Better for AI uses - The ability to retrieve relevant text kernels makes it well-suited for seeding generative AI systems compared to page links.
  • Always up-to-date - Kernels auto-update based on source content changes, so the information is always current.
  • Context preservation - Kernels contain metadata like page title/URL so the source context of the information is preserved.

What does Beta Release mean?

Until further notice,

  • Structures of Kernel responses are subject to change with no warning
  • Structures of Kernel requests are subject to change with no warning
  • Content of Kernel responses for the same query are subject to change with no warning
  • There is no SLA on response time

Use Case: Pulling Knowledge into an LLM application

Use kernels to build prompts out of Expert knowledge in your own LLM solution. For example, an application could take a user's question, retrieve the most relevant kernels, and pass them to an LLM as context for its answer.
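As a minimal sketch of that flow: `fetch_kernels` and `call_llm` below are stand-ins for the application's own HTTP and LLM layers (they are not part of the Expert API), and `build_prompt` is a hypothetical helper.

```python
# Sketch of a retrieval-augmented generation flow over the Kernels API.
# fetch_kernels() and call_llm() are stand-ins for the application's own
# HTTP and LLM layers -- they are not part of the Expert API.

def fetch_kernels(question: str, limit: int = 5) -> list[str]:
    # Stand-in: a real implementation would GET
    # {SITE_URL}/@api/deki/llm/kernels?q=...&limit=... and return the
    # <chunk> text of each kernel in the response.
    return ["(kernel chunk text)"]

def call_llm(prompt: str) -> str:
    # Stand-in for whatever LLM the application uses.
    return "(generated answer)"

def build_prompt(question: str, chunks: list[str]) -> str:
    """Combine retrieved kernel chunks into grounded LLM context."""
    context = "\n\n".join(chunks)
    return ("Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")

def answer(question: str) -> str:
    chunks = fetch_kernels(question, limit=5)   # 1. retrieve kernels
    prompt = build_prompt(question, chunks)     # 2. engineer the prompt
    return call_llm(prompt)                     # 3. generate the answer
```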


Compare to other content retrieval methods

  • Search Endpoint: query with user-generated keywords; the response is links to Expert pages. Use when users are looking for content that exists on the site but its location is unknown.
  • Page Object Endpoint: query with a page ID; the response is the full content of the page. Use when the content location is known, all the content on the page is desired, and you want to display the content elsewhere.
  • Kernels Endpoint: query with a user-generated natural language question; the response is parts of pages from around an Expert site. Use when building AI applications that can parse unstructured data, constructing LLM prompts that need factual, up-to-date information, or building chatbot solutions.


Working with Kernels

Kernels is an API and follows the authorization protocols of other Expert APIs; see our Getting Started Guide. Attached to this page is a Postman collection with almost everything you need to begin testing the Kernels API in Postman. To test the Kernels API in Postman: 

  • Generate a server token on the Expert site
  • Import the collection to Postman. In Postman: 
    • Go to the collection settings in Postman (click on the name of the collection)
    • Update the key and secret in the Pre-Request Script tab
    • Update the baseURL variable in the Variables tab
      • Replace SITE_URL_HERE with the actual site URL. Keep the /@api/deki/ portion
  • Update the "query" query parameter in the endpoint settings
  • Hit send

Kernel Request Example: 

  • [SITE_URL]/@api/deki/llm/kernels?q=hello&limit=5
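
A hedged sketch of building that request URL and pulling the useful fields out of a response, using only the standard library. The element and attribute names follow the single-kernel response example on this page; check a live response for the exact envelope.

```python
import urllib.parse
import xml.etree.ElementTree as ET

def kernels_url(site_url: str, query: str, limit: int = 5) -> str:
    """Build a Kernels request URL like the example above."""
    params = urllib.parse.urlencode({"q": query, "limit": limit})
    return f"{site_url}/@api/deki/llm/kernels?{params}"

def parse_kernels(xml_text: str) -> list[dict]:
    """Extract chunk text and source-page metadata from a response.

    Names follow the single-kernel example on this page; a live
    response may wrap multiple <kernel> elements in an envelope.
    """
    root = ET.fromstring(xml_text)
    records = []
    for kernel in root.iter("kernel"):
        page = kernel.find("page")
        records.append({
            "id": kernel.get("id"),
            "chunk": kernel.findtext("chunk", default=""),
            "page_id": page.get("id") if page is not None else None,
            "title": page.findtext("title") if page is not None else None,
        })
    return records

print(kernels_url("https://example.expert.com", "hello"))
# https://example.expert.com/@api/deki/llm/kernels?q=hello&limit=5
```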

Kernel responses contain a number of vector records. Each vector record contains:

  • ID: The unique ID of the record
  • Chunk: The text from the Expert page -- note, this is meant to be consumed by an LLM application and is not meant to be human readable or consumable. 
  • Page 
    • ID: Page ID of the source information 
    • GUID: Page GUID of the source page
    • Draft.State: Whether or not the page has a draft
    • href: The href of the source page
    • uri.ui: The URL of the source page
    • title: The title of the page at the time it was most recently indexed 
    • path: The path of the page at the time it was most recently indexed 
    • namespace: Whether the page was in the main or media namespace (only relevant for customers with Media Manager)
    • date created: The date the page was created
    • language: The language of the source content at the time it was most recently indexed 

Single Kernel Response Example:  

  <kernel id="b74d5331-55a6-4728-a6f8-cd4d483f0c28">
    <chunk>Every deployment needs clear goals and a focused plan for success. Use the five starting points below to understand your objectives and design a plan that helps you reach intended outcomes. Talk to MindTouch Professional Services for a fine-tuned methodology and unique approach that guides customers through successful deployments.</chunk>
    <page id="141" guid="40dedebe7b8644798d0796c1517533a3" draft.state="inactive" href="" deleted="false">
      <title>Define Success</title>
      <path seo="true">Define_Success</path>
    </page>
  </kernel>


Known Limitations: 

  • Content that is restricted in any way is not available in Kernels, regardless of the active user's permissions. 
    • Only pages with the page restriction set to public are eligible to be returned in a Kernel. 
    • In the case where a page restriction is set to public and there are content blocks with stricter limitations, the content that is not restricted will be eligible to be returned in a Kernel. 
    • In the case where the page restriction is set to public and there are content reuse blocks sourced from a private page, only the content that is not restricted will be eligible to be returned in a Kernel.
  • Non-text content (including but not limited to images, tables, and links) is not included in Kernels
  • Content formatting is not included in Kernels
  • Queries are plain text only; advanced search syntax is not respected.

Additional Helpful API Recipes:

Other APIs have been used to build demos across the NICE organization. They are production-ready APIs that Expert customers have been using for years. Below are suggestions for how additional Expert features may be leveraged in your LLM application. They are not intended as requirements, nor as the end-all be-all of what is possible with an Expert/LLM integration. Attached to this page is a Postman collection containing an example pre-request script. The user in the pre-request script has been set to 'llm-bot,' which has been manually created on the kernels-testing site. The llm-bot user has been provisioned with the appropriate permissions to align with the content that can be returned by Kernels (public content only).  

Images and Attachments 
  • Images are not automatically included in Kernels results. To retrieve images to use as a part of a generative response, you can use a separate API endpoint to get all the attached or embedded images from a given page ID.
  • Page IDs can be retrieved from the Kernels or from the search results as outlined above. 
  • To retrieve an image URL, use the following endpoint and the page ID of your choosing:
    • {base_URL}/@api/deki/pages/{pageID}/files
  • Files response example: 

    • <files count="2" offset="0" totalcount="2" href="">
          <file id="16348" revision="1" res-id="34473" href="" res-is-head="true" res-is-deleted="false" res-rev-is-deleted="false" res-contents-id="188911">
              <filename>Expert Kernels.postman_collection.json</filename>
              <contents type="application/json" size="2340" href=""/>
              <user.createdby anonymous="false" virtual="false" id="795" wikiid="site_11706" href="" guid="ecf1b6be968b4d0abad9bac994b3cd85">
                  <fullname>Whitney Meer</fullname>
                  <password exists="false"/>
              </user.createdby>
              <revisions count="1" totalcount="1" href=""/>
              <page.parent id="18153" guid="358df04b59b143bc9c8aaef0ba99be1a" draft.state="inactive" href="" deleted="false">
                  <title>Expert Kernels Beta Release</title>
                  <path seo="true">Internal/Product/Documentation/Expert_Kernels_Beta_Release</path>
              </page.parent>
          </file>
      </files>
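
As a sketch of consuming such a files response, the helper below keeps only attachments whose `<contents>` type looks like an image. The element names follow the files response example above; the MIME-type list is our own assumption about which attachments count as images.

```python
import xml.etree.ElementTree as ET

# Assumed set of image MIME types worth returning to a generative app.
IMAGE_TYPES = ("image/png", "image/jpeg", "image/gif", "image/svg+xml")

def image_urls(files_xml: str) -> list[tuple[str, str]]:
    """Return (filename, href) pairs for image attachments.

    Element names follow the files response example above; a live
    response carries real href values.
    """
    root = ET.fromstring(files_xml)
    images = []
    for f in root.iter("file"):
        contents = f.find("contents")
        if contents is not None and contents.get("type", "") in IMAGE_TYPES:
            images.append((f.findtext("filename", default=""),
                           contents.get("href", "")))
    return images
```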
Linking to Search Results
  • To generate links to pages, you'll need to make a separate API call to the search endpoint.
  • The co-pilot demo uses an LLM to generate a search query from the user input, then hits Expert's search endpoint to return a list of pages and uses the search results to generate buttons. 
  • Information on how to do that is available in our Recipe for Use Case 1 and 2 Documentation.
  • Search results API Response for one page: 
    • <page id="3461" guid="59edada9c67a4b0a80c9d3461712d9a7" draft.state="inactive" href="" deleted="false" unpublish="true" revision="1" score="1">
      <title>OAWash High Efficiency Laundy Detergent</title>
      <path seo="true">Sales_Offers/OAWash_High_Efficiency_Laundy_Detergent</path>
      <security href="">
      <restriction id="1">Public</restriction>
      <user.createdby anonymous="false" virtual="false" id="1" wikiid="site_15087" href="">
      <password exists="true"/>
      <description>Page created, 109 words added</description>
      <page.parent id="3258" guid="c493ec624e784d7bb4328ed20f8a3d93" draft.state="inactive" href="" deleted="false">
      <rating score="" count="0" seated.score="" seated.count="0" unseated.score="" unseated.count="0" anonymous.score="" anonymous.count="0"/>
      <subpages href=""/>
      <aliases href=""/>
      <revisions count="1" href=""/>
      <revisions.archive count="0" deprecated="true" href=""/>
      <comments count="0" href=""/>
      <properties count="0" href=""/>
      <tags count="1" href="">
      <tag value="article:topic-guide" id="2" href="">
      <files count="0" href=""/>
      <contents type="application/vnd.deki1410+xml" href="" etag="b37a35f0f90a3bcbd84bb5502c87043e"/>
      <contents.alt type="application/pdf" href=""/>
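
The search-results-to-buttons pattern could be sketched as below. The `<title>` and `<path>` element names follow the sample response above; building the link from the site URL plus the page path is our own assumption (a live response's href/uri fields may already carry full URLs).

```python
import xml.etree.ElementTree as ET

def search_links(search_xml: str, site_url: str) -> list[dict]:
    """Turn search-result <page> elements into link data for buttons.

    Builds each URL from site_url plus the page's <path>; element
    names follow the sample search response above.
    """
    root = ET.fromstring(search_xml)
    pages = [root] if root.tag == "page" else list(root.iter("page"))
    links = []
    for page in pages:
        title = page.findtext("title", default="(untitled)")
        path = page.findtext("path", default="")
        links.append({"label": title, "url": f"{site_url}/{path}"})
    return links
```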

What kind of content can be returned as kernels?

  • Only public pages. Confidential content, conditional blocks, reused blocks, and non-text content won't be returned.

How often are kernels updated?

  • Kernels are automatically updated based on page changes.

Is there a limit to how many kernels I can request?

  • The API supports requesting the specific number of kernels per request. The default is 10 and the maximum is 100. 

How should kernels be used in my application?

  • Kernels is optimized for retrieving relevant text for seeding LLMs and building chatbots. Kernels are not formatted for end user display.