Intro

Domo Workflows integrates with the Domo AI Service Layer, specifically the AI Playground services. Standard and advanced options are available. To enable these services for your instance, contact ai@domo.com. Learn more about the AI Service Layer and AI Playground, and learn more about Domo.AI.

Access AI Service Layer in Workflows

To access the AI Service Layer (both standard and advanced options) after they have been enabled in your instance, follow these steps:
  1. In the navigation header, select More > Workflows to display the Workflows landing page.
  2. Either select an existing workflow or select + New Workflow.
  3. In this article, go to either the instructions for Standard AI Service Layer integration or Advanced AI Service Layer integration and follow the steps.

Standard AI Service Layer Integration

After opening your new or existing workflow, follow these steps to add a standard AI Service Layer function:
  1. (Conditional) For a new workflow, first choose a Start type.
  2. Select Add > Service Task.
  3. In the configuration panel at the right of the canvas, under the Mapping tab, select Explore Functions to display the available packages. The Packages modal displays.
  4. In the modal, use the search bar to locate the Domo AI Services Layer package.
  5. Select Domo AI Services Layer to view the available functions and choose which one to add to your flow:
    • Ask for SQL Query
    • Ask for Text
    • Ask for Beast Mode
    • Ask for Summary

Ask for SQL Query

The Ask for SQL Query function allows you to transform your natural language questions into precise SQL queries. As with the AI Playground Text-to-SQL feature, you can input and map parameters for your output SQL query and test the input and output.
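The mapping can be pictured as a question going in and a query coming out. The sketch below is purely illustrative: the DataSet ID, prompt, and generated SQL are invented, and the parameter names follow this article rather than a formal API reference.

```python
# Hypothetical illustration of an Ask for SQL Query mapping.
# The values below are invented; they only show the kind of
# transformation the function performs.
request = {
    "input prompt": "What were total sales by region last quarter?",
    "dataset": "3f1c2a9e-0000-0000-0000-000000000000",  # hypothetical DataSet ID
}

# A plausible generated query for the prompt above:
generated_sql = (
    "SELECT region, SUM(sales) AS total_sales "
    "FROM table GROUP BY region"
)
print(generated_sql)
```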

Ask for Text

The Ask for Text function allows you to generate a prompt-specific response to meet your needs. You can input and map parameters for your output text query and test the input and output.

Ask for Beast Mode

The Ask for Beast Mode function allows you to transform your natural language questions into precise Beast Mode queries. You can input and map parameters for your output Beast Mode query and test the input and output.

Ask for Summary

The Ask for Summary function allows you to summarize a prompt. You can input and map parameters for your output summary query and test the input and output.

Advanced AI Service Layer Integration

After opening your new or existing workflow, follow these steps to add an advanced AI Service Layer function:
  1. Add a new action and select Explore Functions.
  2. In the search bar, locate the Domo AI Services Layer – Advanced option.
  3. Select Domo AI Services Layer – Advanced to view the available functions and choose which one to add:
The advanced options allow more configuration than the standard options.

Ask for SQL Payload

The Ask for SQL Payload function transforms your natural language questions into precise SQL queries, returns the payload, and allows for more input and output options. You can define the following input parameters:
  • dataset
  • input prompt
  • model
  • modelConfiguration
  • parameters
  • promptTemplate
You can also define output parameters and the mapping for the output.
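As a sketch of the input parameters listed above, the mapped inputs for Ask for SQL Payload might be assembled as follows. All values are invented, and the nested shapes of modelConfiguration and parameters are assumptions for illustration.

```python
# Hypothetical Ask for SQL Payload inputs, using the parameter names
# documented above. Values and nested shapes are illustrative only.
sql_payload_inputs = {
    "dataset": "3f1c2a9e-0000-0000-0000-000000000000",  # DataSet ID (invented)
    "input prompt": "List the ten customers with the highest revenue.",
    "model": "GPT-4",
    "modelConfiguration": {"temperature": 0.2},   # map of custom model settings
    "parameters": {"max_word_count": 50},         # variables for the promptTemplate
    "promptTemplate": "Answer in at most ${max_word_count} words: ${input}",
}
for name in sorted(sql_payload_inputs):
    print(name)
```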

Ask for Text Payload

The Ask for Text Payload function generates a prompt-specific response to meet your needs, returns the payload, and allows for increased input and output options. This function allows you to set up mapping for the following:
  • input prompt
  • model
  • modelConfiguration
  • parameters
  • promptTemplate
The following output parameters can also be mapped:
  • generatedText
  • isCustomerModel
  • modelId
  • prompt
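A hypothetical response illustrating the output parameters listed above; every field value here is invented, and the flat-dictionary shape is an assumption for illustration.

```python
# Hypothetical Ask for Text Payload response using the documented
# output parameter names. Values are invented for illustration.
text_payload_output = {
    "generatedText": "Q3 revenue grew 12% over Q2, driven by the EMEA region.",
    "isCustomerModel": False,
    "modelId": "example-model-id",  # hypothetical identifier
    "prompt": "Summarize the quarterly revenue trend in one sentence.",
}

# A downstream workflow step would typically map generatedText onward:
next_step_input = text_payload_output["generatedText"]
print(next_step_input)
```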

Ask for Beast Mode Payload

The Ask for Beast Mode Payload function transforms your natural language questions into precise Beast Mode calculations, returns the payload, and allows for increased input and output options. This function allows you to map the following parameters:
  • chatContext
  • dataSet
  • input prompt
  • modelConfiguration
  • parameters
  • promptTemplate
  • system
The following output parameters can also be mapped:
  • beastMode
  • isCustomerModel
  • modelId
  • prompt
  • choices
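The output parameters above can be pictured together as one result object. The sketch below is an assumption for illustration: the Beast Mode expression, identifiers, and the idea of taking the first entry of choices are all invented, not Domo's documented behavior.

```python
# Hypothetical shape of the Ask for Beast Mode Payload outputs named above.
# Every value is invented for illustration.
beast_mode_output = {
    "beastMode": "SUM(CASE WHEN `region` = 'EMEA' THEN `sales` ELSE 0 END)",
    "isCustomerModel": False,
    "modelId": "example-model-id",
    "prompt": "Total EMEA sales as a Beast Mode calculation.",
    "choices": [
        "SUM(CASE WHEN `region` = 'EMEA' THEN `sales` ELSE 0 END)",
    ],
}

# choices is documented as "a list of the outputs that the service creates";
# taking the first entry is one way a workflow might select a result.
selected = beast_mode_output["choices"][0]
print(selected)
```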


Ask for Summary Payload

The Ask for Summary Payload function summarizes a prompt and returns the payload. This function allows you to map the following input parameters. Required parameters are starred (*).
  • chatContext
  • chunkingConfiguration
  • model
  • modelConfiguration
  • parameters
  • prompt*
  • promptTemplate
  • sizeBoundary
  • summarizationOutputStyle
  • summarizationStrategy
  • system
For Ask for Summary Payload, you can map the following output parameters:
  • choices
  • isCustomerModel
  • modelId
  • prompt
  • summary
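Combining the required prompt with the optional chunking and sizing parameters, a mapped input for Ask for Summary Payload might look like the sketch below. All values are invented, and the min/max shape of sizeBoundary is a hypothetical reading of "a boundary between min and max length".

```python
# Hypothetical Ask for Summary Payload inputs using the parameter names
# documented above. Values and nested shapes are illustrative only.
summary_payload_inputs = {
    "prompt": "Summarize the attached meeting notes.",  # required (*)
    "model": "Claude-2.1",
    "chunkingConfiguration": {
        "maxChunkSize": 2000,                  # maximum size of a chunk
        "chunkOverlap": 200,                   # overlap between successive chunks
        "separators": ["\n\n", "\n", ".", ""], # most to least desired
        "separatorType": "Text",
        "disallowIntermediateChunks": True,
    },
    "sizeBoundary": {"min": 50, "max": 150},   # hypothetical min/max shape
    "summarizationOutputStyle": "bulleted",
    "summarizationStrategy": "MAP_REDUCE",
}
print(sorted(summary_payload_inputs))
```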

Input Parameter Reference

Each entry lists the parameter, its data type where one is specified, and its definition.
  • chatContext: A list of historical messages between the AI and the user to be considered in the current request.
  • chunkingConfiguration: With large documents, not all text can be included in every request because the document may be too long to fit in the context window. The chunking configuration helps guide how the document is split into smaller parts, or chunks. It contains the following fields:
    • chunkOverlap: Text can be overlapped to help give context to how the chunks in a document are grouped. An integer value defines the size of the overlap between successive chunks.
    • disallowIntermediateChunks: Sometimes when trying to summarize very long texts, chunks are summarized and put together only to find that the text is still too long. In these cases, you would get summaries of the summaries unless disallowIntermediateChunks is set to false.
    • maxChunkSize: An integer value that specifies the maximum size for a chunk.
    • separators: A list of String values used to separate chunks, in order of most to least desired. For example, ["\n\n", "\n", ".", ""] separates on double new lines first (within the max chunk size), followed by a single new line, then a period, followed by any character.
    • separatorType: Out-of-the-box separator types. The default is Text; other options include HTML, JavaScript, and Python.
  • dataSet: The ID for the DataSet used in the request.
  • generatedText: The object that contains the prompt.
  • isCustomerModel
  • model (text): The AI model you want to send your request to. Examples: GPT-4, Claude-2.1.
  • modelConfiguration: A map with custom configuration parameters for a selected language model.
  • modelId
  • sizeBoundary: The desired length of the output text, represented as a boundary between a minimum and maximum length.
  • parameters: The variables that can be inserted into the promptTemplate.
  • prompt (text): The field for any instructions needed to run the function.
  • promptTemplate (text): A text template that supports placeholder variables, allowing users to easily organize input to the Large Language Model. Example: Summarize the following text to be no longer than ${max_word_count} words: ${input}
  • SQL: The output of the request in SQL.
  • summarizationOutputStyle: Bulleted, numbered, or paragraph.
  • summarizationStrategy: The strategy to follow while summarizing the given input. Currently available strategies include STUFFING and MAP_REDUCE.
  • system: Allows you to set a system prompt for the model.
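The promptTemplate example above fills ${...} placeholders from the parameters map. As an illustration of the substitution behavior only (not of how Domo performs it internally), Python's string.Template happens to use the same placeholder syntax:

```python
from string import Template

# The template and variable syntax mirror the promptTemplate example above;
# this only illustrates the substitution, not Domo's implementation.
template = Template(
    "Summarize the following text to be no longer than ${max_word_count} words: ${input}"
)
parameters = {"max_word_count": 25, "input": "Quarterly sales rose in every region."}

prompt = template.substitute(parameters)
print(prompt)
# → Summarize the following text to be no longer than 25 words: Quarterly sales rose in every region.
```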

Output Parameters

Required parameters have an *.
  • beastMode: The output object of the Ask for Beast Mode Payload service. It contains the following parameters: isCustomerModel, modelId, prompt.
  • chatContext: A list of historical messages between the AI and the user to be considered in the current request.
  • choices: A list of the outputs that the service creates.
  • chunkingConfiguration: With large documents, not all text can be included in every request because the document may be too long to fit in the context window. The chunking configuration helps guide how the document is split into smaller parts, or chunks. It contains the following fields:
    • chunkOverlap: Text can be overlapped to help give context to how the chunks in a document are grouped. An integer value defines the size of the overlap between successive chunks.
    • disallowIntermediateChunks: Sometimes when trying to summarize very long texts, chunks are summarized and put together only to find that the text is still too long. In these cases, you would get summaries of the summaries unless disallowIntermediateChunks is set to false.
    • maxChunkSize: An integer value that specifies the maximum size for a chunk.
    • separators: A list of String values used to separate chunks, in order of most to least desired. For example, ["\n\n", "\n", ".", ""] separates on double new lines first (within the max chunk size), followed by a single new line, then a period, followed by any character.
    • separatorType: Out-of-the-box separator types. The default is Text; other options include HTML, JavaScript, and Python.
  • dataSet: The ID for the DataSet used in the request.
  • generatedText
  • isCustomerModel
  • model: The AI model you want to send your request to. Examples: GPT-4, Claude-2.1.
  • modelConfiguration: A map with custom configuration parameters for a selected language model.
  • modelId
  • outputWordLength (sizeBoundary): The desired length of the output text, represented as a boundary between a minimum and maximum length.
  • parameters
  • prompt: The field for the user to input any instructions that should be included in the prompt needed to run the function.
  • promptTemplate: A text template that supports placeholder variables, allowing users to easily organize input to the Large Language Model. Example: Summarize the following text to be no longer than ${max_word_count} words: ${input}
  • SQL: The output of the request in SQL.
  • summarizationOutputStyle: Bulleted, numbered, or paragraph.
  • summarizationStrategy: The strategy to follow while summarizing the given input. Currently available strategies include STUFFING and MAP_REDUCE.
  • summary: The output of the function.
  • system
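The chunkingConfiguration behavior described above can be sketched in code. This is a deliberately simplified illustration of the concept (pick the most-preferred separator that keeps pieces within maxChunkSize, then overlap successive chunks by chunkOverlap characters), not Domo's actual chunking algorithm.

```python
# Simplified sketch of separator-based chunking with overlap, as described
# in the chunkingConfiguration entry above. Illustration only.
def chunk_text(text, max_chunk_size=100, chunk_overlap=10,
               separators=("\n\n", "\n", ".", "")):
    # Prefer the first separator whose pieces all fit within max_chunk_size;
    # the empty string falls back to splitting on every character.
    for sep in separators:
        pieces = text.split(sep) if sep else list(text)
        if all(len(p) <= max_chunk_size for p in pieces):
            break

    # Pack pieces back into chunks no longer than max_chunk_size.
    chunks, current = [], ""
    for piece in pieces:
        candidate = (current + sep + piece) if current else piece
        if len(candidate) <= max_chunk_size:
            current = candidate
        else:
            chunks.append(current)
            # Start the next chunk with a trailing slice of the previous one,
            # giving the overlap context described above.
            current = (current[-chunk_overlap:] + sep + piece) if chunk_overlap else piece
    if current:
        chunks.append(current)
    return chunks

notes = "First point.\nSecond point.\nThird point that is a bit longer."
for c in chunk_text(notes, max_chunk_size=25, chunk_overlap=5):
    print(c)
```

A production splitter would also recurse into oversized pieces with the next separator; this sketch only falls through to the next separator globally.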