# AI Prompt (Multi-LLM)

This action executes LLM prompts with MCP support against multiple AI providers, including on-prem ones based on Ollama/vLLM. [Full list of supported LLMs](#supported-llm-providers-and-models).

### How to configure and use the action <a href="#how-to-configure-and-use-the-action" id="how-to-configure-and-use-the-action"></a>

Drag and drop the action from the Actions palette on the left onto the stage:

<figure><img src="https://1328774413-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FVguk6y5OMn3pAaHZSiwU%2Fuploads%2FctlKhjG0wD8O4fAwYuGQ%2F378uv9-image.png?alt=media&#x26;token=11b45a83-4ab8-427a-b8eb-7c1e64a752f1" alt=""><figcaption></figcaption></figure>

Once the action is on the stage, you can configure your LLM prompt using the action detail panel.

<figure><img src="https://1328774413-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FVguk6y5OMn3pAaHZSiwU%2Fuploads%2FfAITjHO05nKorJ2hBIms%2F378ut2-image.png?alt=media&#x26;token=23580dad-2d6a-4115-8415-af149adeed08" alt=""><figcaption></figcaption></figure>

To use the action, you first need to configure your LLM models in the Settings > Integrations section, providing the corresponding LLM API key.

E.g. for Google Gemini:

<figure><img src="https://1328774413-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FVguk6y5OMn3pAaHZSiwU%2Fuploads%2Fc1dc1YXNOdnJMULWA5XG%2F378v09-image.png?alt=media&#x26;token=d508ce71-0e55-4110-91de-0004a32f3a25" alt=""><figcaption></figcaption></figure>

### Using Ollama with Action PromptAI <a href="#using-ollama-with-action-promptai" id="using-ollama-with-action-promptai"></a>

As with other LLM providers, to use Ollama with the PromptAI action you need to configure the Ollama integration in Tiledesk by going to Settings → Integrations and entering:

* The URL of the machine where Ollama is running
* (Optional) Your favorite models to use for a faster action configuration

<figure><img src="https://1328774413-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FVguk6y5OMn3pAaHZSiwU%2Fuploads%2F0gvq0a1CFbpB5wRAh2pR%2F3925nm-Screenshot%202025-03-25%20alle%2016.28.18.png?alt=media&#x26;token=04365735-2e68-478b-a6fe-0be5f0dd318a" alt=""><figcaption></figcaption></figure>

To add a model to your favorites list, type the exact model name and press Enter. Finally, save the settings by clicking the **Save** button.

In the PromptAI action, select Ollama as the LLM and choose one of your predefined favorite models.

<figure><img src="https://1328774413-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FVguk6y5OMn3pAaHZSiwU%2Fuploads%2FtVHAxMvd24i0y9nP9AuW%2F39261u-Screenshot%202025-03-25%20alle%2016.36.52.png?alt=media&#x26;token=9d406f59-14de-4cbb-9db4-32979295c3f7" alt=""><figcaption></figcaption></figure>

### MCP support

MCP tools are fully supported in your AI Prompt action.

<figure><img src="https://1328774413-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FVguk6y5OMn3pAaHZSiwU%2Fuploads%2Fn9AjV56oSiNT1uMC7vyN%2Fimage.png?alt=media&#x26;token=b7889f87-6cd1-4865-9cb5-e901c57466c7" alt=""><figcaption></figcaption></figure>

You can add as many tools as your currently selected model supports using the "+ Add MCP tools" button in the AI Prompt detail panel, just below the prompt section.

Once you press the button, a popup appears where you can add tools. Simply choose a name for your tool and fill in the corresponding MCP endpoint URL. Note that currently only MCP servers using the streamable HTTP transport are supported.

<figure><img src="https://1328774413-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FVguk6y5OMn3pAaHZSiwU%2Fuploads%2FSpoGBCd8f4ErUDKFk3SK%2Fimage.png?alt=media&#x26;token=656d6d15-8d81-4d5f-a4e8-f41741c52196" alt=""><figcaption></figcaption></figure>
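At the wire level, a streamable-HTTP MCP server accepts JSON-RPC messages over HTTP POST at a single endpoint URL, which is what you paste into the form. As a rough illustration only (the protocol version and field values below are assumptions based on the MCP specification, not Tiledesk requirements), the first message a client sends looks like this:

```python
import json

# Illustrative JSON-RPC "initialize" message an MCP client POSTs to a
# streamable-HTTP server endpoint; all field values are example placeholders.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # assumed MCP spec revision
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

body = json.dumps(initialize_request)
# A client would POST `body` to the MCP endpoint URL configured in Tiledesk,
# with headers "Content-Type: application/json" and
# "Accept: application/json, text/event-stream".
```

Any server that speaks this transport — including one you host yourself — can be registered as a tool.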

Add your own tools by pressing the "Add MCP Server" button at the bottom of the MCP popup, then fill out the MCP server form to set up your tool.

<figure><img src="https://1328774413-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FVguk6y5OMn3pAaHZSiwU%2Fuploads%2FL1k64VZy7yfyqbssqqET%2Fimage.png?alt=media&#x26;token=c15bc130-a19c-414f-af9b-20e87acb86fe" alt=""><figcaption></figcaption></figure>

We hope you enjoy this new action, which lets you use your favourite LLM providers and models!

If you have questions about the AI Prompt action or other Tiledesk features, feel free to email us at [support@tiledesk.com](mailto:support@tiledesk.com) or leave us [feedback](https://feedback.tiledesk.com/feedback).


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://guide.tiledesk.com/ai-chatbots-and-automation/actions-explained/ai-prompt-multi-llm.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.
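For example, the request can be built with Python's standard library; the question string below is purely illustrative, and the encoding step ensures spaces and punctuation survive the query string:

```python
from urllib.parse import urlencode

PAGE_URL = ("https://guide.tiledesk.com/ai-chatbots-and-automation/"
            "actions-explained/ai-prompt-multi-llm.md")

def build_ask_url(question: str) -> str:
    # URL-encode the question as the `ask` query parameter
    return f"{PAGE_URL}?{urlencode({'ask': question})}"

url = build_ask_url("Which MCP transport types are supported?")
# To actually perform the request (needs network access):
#   from urllib.request import urlopen
#   answer = urlopen(url).read().decode()
```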

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
