Function Calling Example
Function calling can be a very useful technique for obtaining structured output from an LLM. You can find detailed documentation about function calling in the following two references.
In this section, we provide an example of advanced usage of function calling, which we hope can serve as a good starting point for writing your own Pro Config.
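Before walking through the full app, here is a minimal sketch of what a function-calling task looks like in a Pro Config `tasks` list: an `AnyWidgetModule` config extended with `function_name`, `function_description`, and `function_parameters`. The widget id, function name, and field names below are placeholders for illustration, not part of the example that follows.

```json
{
  "name": "extract_city",
  "module_type": "AnyWidgetModule",
  "module_config": {
    "widget_id": "<your LLM widget id>",
    "user_prompt": "{{user_message}}",
    "function_name": "ExtractCity",
    "function_description": "Extract the city mentioned in the user message.",
    "function_parameters": [
      {
        "name": "city",
        "type": "string",
        "description": "the city name"
      }
    ],
    "output_name": "extracted"
  }
}
```

Instead of free-form text, the module output (`extracted` here) is a parsed object whose fields follow the declared parameters.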
{
"type": "automata",
"id": "studymate_bot",
"initial": "home_page_state",
"inputs": {},
"outputs": {},
"context": {
"general_purpose_widget_id":"",
"advanced_task_widget_id":"",
"memory":"{{[]}}",
"note_topic":"",
"user_given_note_topic":null,
"number_of_questions":"",
"difficulty_level":"",
"questions_string_sample": "{\"question\": \"Which of the following statements is not correct? \\n A. The execution of an Automata starts from the `initial` state. \\n B. An Automata can contain multiple AtomicStates. \\n C. Each AtomicState must define both inputs and outputs. \\n D. We can define transitions in either Automata or AtomicState.\", \"answer\": \"C\", \"explanation\": \"Both inputs and outputs in an AtomicState are optional.\"}",
"questions_string_sample_another": "{\"question\": \"You are building an AutomicState, please choose the correct order of execution: \\n A. inputs -> tasks -> outputs -> render \\n B. render -> inputs -> tasks -> outputs. \\n C. tasks -> inputs -> outputs -> render. \\n D. render -> tasks -> inputs -> outputs\", \"answer\": \"A\", \"explanation\": \"The correct order is `inputs -> tasks -> outputs -> render`. Please refer to `Expressions and Variables`\"}, {\"question\": \"Which of the following expressions is not correct (assume all the variables exist)? \\n A. context.variable \\n B. variable \\n C. variable1 + variable2 \\n D. np.array(variable)\", \"answer\": \"D\", \"explanation\": \"Our expression supports JavaScript grammar.\"}",
"questions": "",
"question_idx": "",
"chosen_answer": "",
"correct_answer": "",
"correct_count": "",
"google_search_raw_result":"",
"json_regex":"{{\\[.*?\\]}}",
"json_flags":"s",
"json_match":""
},
"transitions": {
"go_home": "home_page_state",
"submit_note":"submit_note_state",
"prequiz":"pre_quiz_page",
"get_quiz":"quiz_page_state",
"continue": "continue_state"
},
"states": {
"home_page_state": {
"inputs": {
"intro_message": {
"type": "text",
"user_input": false,
"default_value": "Hi, this is your studymate chatbot"
},
"general_purpose_widget_id": {
"type": "text",
"user_input": false,
"default_value": "1744214024104448000",
"description": "widget id for the main widget(LLM)"
},
"advanced_task_widget_id": {
"type": "text",
"user_input": false,
"default_value": "1744214047475109888",
"description": "widget id for the advanced tasks widget(LLM)"
},
"topic":{
"type": "text",
"user_input": true,
"default_value": "{{context.note_topic}}"
}
},
"outputs": {
"context.general_purpose_widget_id": "{{general_purpose_widget_id}}",
"context.advanced_task_widget_id": "{{advanced_task_widget_id}}",
"context.user_given_note_topic":"{{topic}}",
"context.note_topic":"{{topic}}",
"context.question_idx": "{{0}}",
"context.correct_count": "{{0}}",
"context.json_match":"{{new RegExp(context.json_regex, context.json_flags)}}"
},
"render": {
"text": "Welcome to the Study Mate Chatbot, continue by pasting your note",
"buttons": [
{
"content": "🏠Home",
"description": "Go to back to 🏠Home",
"on_click": "go_home"
},
{
"content": "Take Quiz",
"description": "Skip submitting note and take quiz on {{topic}}",
"on_click": "prequiz"
}
]
},
"transitions": {
"CHAT": "submit_note_state"
}
},
"submit_note_state":{
"inputs": {
"user_note":{
"type": "IM",
"user_input": true
}
},
"tasks": [
{
"name": "generate_reply",
"module_type": "AnyWidgetModule",
"module_config": {
"widget_id": "{{context.general_purpose_widget_id}}",
"system_prompt": "You are a summary-giving machine, summarising notes given and also giving main key points.",
"user_prompt": "{{user_note}}",
"output_name": "reply"
}
},
{
"name": "generate_topic",
"module_type": "AnyWidgetModule",
"module_config": {
"widget_id": "{{context.general_purpose_widget_id}}",
"system_prompt": "You are a summary-giving encyclopedia, identify the topic of the given note.",
"user_prompt": "{{user_note}}",
"output_name": "note_topic"
}
},
{
"name": "generate_voice",
"module_type": "AnyWidgetModule",
"module_config": {
"content": "{{reply}}",
"widget_id": "1743159010695057408",
"output_name": "reply_voice"
}
}
],
"outputs": {
"context.note_topic":"{{note_topic}}",
"context.google_search_raw_result":"{{google_search_raw_result}}"
},
"render": {
"text": "{{reply}}",
"audio": "{{reply_voice}}",
"buttons": [
{
"content": "Quiz",
"description": "Attempt quiz on {{context.note_topic??note_topic}}",
"on_click": "prequiz"
}
]
}
},
"pre_quiz_page":{
"inputs": {
"difficulty_level":{
"type": "text",
"choices": ["easy","medium","hard"],
"default_value": "medium",
"description": "Choose the difficulty level for your quiz",
"user_input": true
},
"number_of_questions":{
"type": "text",
"default_value": "10",
"description": "Enter the number of questions you want to attempt in this quiz",
"user_input": true
}
},
"tasks": [
{
"name": "generate_multiple_choice_question",
"module_type": "AnyWidgetModule",
"module_config": {
"widget_id": "{{context.advanced_task_widget_id}}",
"system_prompt": "You are my question bank, generating dynamic {{difficulty_level}} level questions based on the topic provided, you will generate a question with several choices (as mcq test) and output it in JSON format. the format is `{'question': 'here is the question', 'choices': ['A. xxxxxxxxxxxx', 'B. xxxxxxxxx', 'C. xxxxxxxxxxx', 'D. xxxxxxxxxx'], 'answer': '? (from ABCD)', 'explanation': 'why we choose ? (from ABCD) as answer, explain it.'}`",
"user_prompt": "generate a four choice questions under the topic:{{context.user_given_note_topic??context.note_topic}}. The output should be a JSON format.",
"function_description": "Generate a multiple-choice question. Output: JSON format question. containing question, choices, answer and explanation",
"function_name": "GenerateMCQ",
"function_parameters": [
{
"name": "mcqtest",
"type": "object",
"properties": {
"question": {
"type": "string",
"description": "the question"
},
"choices": {
"type": "array",
"items": {
"type": "string"
},
"description": "the choices, must contain 4 choices, format should be A. xxx, B. xxx, C. xxx, D.xxx"
},
"answer": {
"type": "string",
"description": "the answer, must chosen from A/B/C/D"
},
"explanation": {
"type": "string",
"description": "the explanation"
}
},
"required": ["question", "choices", "answer", "explanation"],
"description": "The generated question. The format is JSON, contains question, choices, answer and explanation."
}
],
"output_name": "llm_generated_question"
}
}
],
"outputs": {
"context.number_of_questions":"{{number_of_questions}}",
"context.difficulty_level":"{{difficulty_level}}",
"context.questions":"{{llm_generated_question.mcqtest}}"
},
"render": {
"text": "```json\n{{JSON.stringify(context.questions, null, 2)}}\n```",
"buttons": [
{
"content": "Start Quiz",
"description": "Start untimed quiz {{context.questions}}",
"on_click":"get_quiz"
}
]
}
},
"quiz_page_state": {
"outputs": {
"context.correct_answer": "{{context.questions[context.question_idx]['answer']}}"
},
"render": {
"text": "{{context.question_idx + 1}}. {{context.questions[context.question_idx]['question']}}",
"buttons": [
{
"content": "A.",
"description": "Choose A.",
"on_click": "check_answer",
"UNSTABLE_button_id": "A"
},
{
"content": "B.",
"description": "Choose B.",
"on_click": "check_answer",
"UNSTABLE_button_id": "B"
},
{
"content": "C.",
"description": "Choose C.",
"on_click": "check_answer",
"UNSTABLE_button_id": "C"
},
{
"content": "D.",
"description": "Choose D.",
"on_click": "check_answer",
"UNSTABLE_button_id": "D"
}
]
},
"transitions": {
"check_answer": "analyze_answer_state"
}
},
"analyze_answer_state": {
"inputs": {
"button_id": {
"type": "text",
"user_input": false,
"value": "UNSTABLE_button_id"
}
},
"outputs": {
"context.chosen_answer": "{{button_id}}",
"context.is_correct": "{{button_id == context.correct_answer}}"
},
"render": {
"text": "Check answer state."
},
"transitions": {
"ALWAYS": [
{
"target": "correct_answer_state",
"condition": "{{context.is_correct}}"
},
{
"target": "wrong_answer_state",
"condition": "{{true}}"
}
]
}
},
"correct_answer_state": {
"outputs": {
"context.question_idx": "{{(context.question_idx + 1) % context.questions.length}}",
"context.correct_count": "{{context.correct_count + 1}}"
},
"render": {
"text": "Congratulations! You have chosen the correct answer {{context.correct_answer}}",
"buttons": [
{
"content": "Continue",
"description": "continue",
"on_click": "continue"
}
]
}
},
"wrong_answer_state": {
"outputs": {
"context.question_idx": "{{(context.question_idx + 1) % context.questions.length}}"
},
"render": {
"text": "Oh No! The chosen answer is {{context.chosen_answer}}, while the correct one is {{context.correct_answer}}.",
"buttons": [
{
"content": "Continue",
"description": "continue",
"on_click": "continue"
}
]
}
},
"continue_state": {
"render": {
"text": "Click to Next Question"
},
"transitions": {
"ALWAYS": [
{
"target": "quiz_page_state",
"condition": "{{context.question_idx > 0}}"
},
{
"target": "finish_state",
"condition": "{{context.correct_count == context.questions.length}}"
},
{
"target": "review_state",
"condition": "{{true}}"
}
]
}
},
"finish_state": {
"render": {
"text": "Congratulations you scored {{context.correct_count}}/{{context.questions.length}}",
"buttons": [
{
"content": "🏠Home",
"description": "Back to Home",
"on_click": "go_home"
}
]
}
},
"review_state": {
"outputs": {
"context.memory": "{{[]}}"
},
"render": {
"text": "{{context.intro_message}}"
}
}
}
}
The above example is from a bot called StudyMate, made by a participant of the Pro Config Learning Lab. In this app, the user can input any topic and choose a difficulty level, and quizzes are dynamically generated accordingly. The core logic that parses the results into structured quizzes is implemented with function calling:
"tasks": [
{
"name": "generate_multiple_choice_question",
"module_type": "AnyWidgetModule",
"module_config": {
"widget_id": "{{context.advanced_task_widget_id}}",
"system_prompt": "You are my question bank, generating dynamic {{difficulty_level}} level questions based on the topic provided, you will generate a question with several choices (as mcq test) and output it in JSON format. the format is `{'question': 'here is the question', 'choices': ['A. xxxxxxxxxxxx', 'B. xxxxxxxxx', 'C. xxxxxxxxxxx', 'D. xxxxxxxxxx'], 'answer': '? (from ABCD)', 'explanation': 'why we choose ? (from ABCD) as answer, explain it.'}`",
"user_prompt": "generate a four choice questions under the topic:{{context.user_given_note_topic??context.note_topic}}. The output should be a JSON format.",
"function_description": "Generate a multiple-choice question. Output: JSON format question. containing question, choices, answer and explanation",
"function_name": "GenerateMCQ",
"function_parameters": [
{
"name": "mcqtest",
"type": "object",
"properties": {
"question": {
"type": "string",
"description": "the question"
},
"choices": {
"type": "array",
"items": {
"type": "string"
},
"description": "the choices, must contain 4 choices, format should be A. xxx, B. xxx, C. xxx, D.xxx"
},
"answer": {
"type": "string",
"description": "the answer, must chosen from A/B/C/D"
},
"explanation": {
"type": "string",
"description": "the explanation"
}
},
"required": ["question", "choices", "answer", "explanation"],
"description": "The generated question. The format is JSON, contains question, choices, answer and explanation."
}
],
"output_name": "llm_generated_question"
}
}
],
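Because the schema declares a single object parameter named `mcqtest`, the module output `llm_generated_question` is a parsed object rather than raw text, which is why the state can assign `{{llm_generated_question.mcqtest}}` to `context.questions` directly. A hypothetical result (illustrative values, not actual model output) might look like:

```json
{
  "mcqtest": {
    "question": "Which keyword determines where the execution of an Automata starts?",
    "choices": ["A. initial", "B. context", "C. render", "D. tasks"],
    "answer": "A",
    "explanation": "Execution starts from the state named in `initial`."
  }
}
```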
In the above example, we aim to generate the question, choices, answer, and explanation, whose types are string, array, string, and string respectively. We can adopt the OpenAI function-calling schema as above to define the desired structure recursively.
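Because `properties` can itself contain objects and arrays of objects, the schema composes recursively. For example, a hypothetical variant (not part of StudyMate) that asks the model for a whole quiz in a single call could declare an array of question objects as one parameter:

```json
{
  "name": "quiz",
  "type": "object",
  "properties": {
    "questions": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "question": { "type": "string", "description": "the question" },
          "choices": {
            "type": "array",
            "items": { "type": "string" },
            "description": "exactly 4 choices, formatted A. xxx through D. xxx"
          },
          "answer": { "type": "string", "description": "one of A/B/C/D" },
          "explanation": { "type": "string", "description": "why the answer is correct" }
        },
        "required": ["question", "choices", "answer", "explanation"]
      },
      "description": "the list of generated questions"
    }
  },
  "required": ["questions"]
}
```

With such a schema, the output would arrive as a ready-to-use array, removing the need to loop the generation task once per question.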