Cache Mode

Debugging a Pro Config can be time-consuming, since the workflow may contain many AI widgets. Besides, debugging may cost a lot of battery when some of the widgets are called repeatedly.

To alleviate these issues, we release cache mode, which lets the creator flexibly choose which widgets to skip during the workflow. When a widget is set to cache mode, it is called only once and its outputs are stored in our database. As long as the module_config is unchanged, further calls to the widget simply return the previously stored outputs and cost zero battery. Cache mode is very useful while you are building the workflow.

The cache flag can be set in the automata (when you want to use cache across the whole Pro Config), in a state (when you want to use cache in a specific state), or in a module_config (when you want to debug a specific module). Note that the priority of the cache flag is module_config > state > automata: a value set at a more specific level overrides one set at a more general level.
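
For instance, a minimal sketch of all three levels might look like the following (widget details are omitted for brevity, and placing the state-level flag under the state's properties is an assumption made by analogy with the automata-level syntax shown in the full example below):

{
  "type": "automata",
  "id": "cache_priority_demo",
  "initial": "chat_page_state",
  "properties": {
    "cache": true // automata level: cache every widget by default
  },
  "states": {
    "chat_page_state": {
      "properties": {
        "cache": false // state level: overrides the automata value for this state
      },
      "tasks": [
        {
          "name": "generate_reply",
          "module_type": "AnyWidgetModule",
          "module_config": {
            "cache": true // module level: highest priority, so this widget is still cached
          }
        }
      ]
    }
  }
}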

Taking the simple demo in Hello World with Pro Config as an example:

{
  "type": "automata",
  "id": "chat_demo",
  "initial": "chat_page_state",
  "properties": {
    "cache": true,
  }
  "states": {
    "chat_page_state": {
      "inputs": {
        "user_message": {
          "type": "IM",
          "user_input": true
        }
      },
      "tasks": [
        {
          "name": "generate_reply",
          "module_type": "AnyWidgetModule",
          "module_config": {
            "widget_id": "1744214024104448000", // GPT 3.5
            "system_prompt": "You are a teacher teaching Pro Config.",
            "user_prompt": "{{user_message}}",
            "output_name": "reply",
            "cache": true
          }
        },
        {
          "name": "generate_voice",
          "module_type": "AnyWidgetModule",
          "module_config": {
            "widget_id": "1743159010695057408", // TTS widget (Samantha)
            "content": "{{reply}}",
            "output_name": "reply_voice"
            "cache": true
          }
        }
      ],
      "render": {
        "text": "{{reply}}",
        "audio": "{{reply_voice}}"
      },
      "transitions": {
        "CHAT": "chat_page_state"
      }
    }
  }
}

Here the two modules are set to cache mode. After chatting twice, the results look like this:

We can see that both the LLM and TTS results stay the same after the second chat. If we want the LLM to actually run again, we can simply set its cache flag to false:

{
  "name": "generate_reply",
  "module_type": "AnyWidgetModule",
  "module_config": {
    "widget_id": "1744214024104448000", // GPT 3.5
    "system_prompt": "You are a teacher teaching Pro Config.",
    "user_prompt": "{{user_message}}",
    "output_name": "reply",
    "cache": false
  }
},

The results are as follows:

Now the LLM's response varies with the user's input, while the TTS output stays the same because the TTS widget is still in cache mode.
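
If we also want fresh audio on every chat, the TTS widget's cache flag can be disabled in the same way (a sketch mirroring the generate_voice task from the full example above):

{
  "name": "generate_voice",
  "module_type": "AnyWidgetModule",
  "module_config": {
    "widget_id": "1743159010695057408", // TTS widget (Samantha)
    "content": "{{reply}}",
    "output_name": "reply_voice",
    "cache": false // re-run the TTS widget on every call
  }
}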
