# Configuration reference

The following tables describe all configuration options available for the Siren AI plugin. Required options are shown in bold; options in a provider-specific ModelConfig are required only when that provider is in use.

| Option | Description | Type | Default |
|---|---|---|---|
| `siren-ai.enabled` | Whether the plugin is enabled. | boolean | `true` |
| `siren-ai.defaultModel` | The label of the model to use, as defined in the `siren-ai.models` config. If not provided, the first model in the models list is used. | string | |
| `siren-ai.showReasoning` | Make reasoning content that the LLM produces visible in the UI. | boolean | `false` |
| `siren-ai.models` | A list of the model configurations, identified by their label. Each model has a provider and settings associated with that provider, described below. | ModelConfig[] | |
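To make the shape of these options concrete, the following is a minimal YAML sketch. The `label` field on each model entry and all values are illustrative assumptions based on the descriptions above, not a verbatim sample from the plugin.

```yaml
# Minimal sketch of the top-level Siren AI options in YAML.
# The "label" key on each model entry and all values are illustrative assumptions.
siren-ai.enabled: true
siren-ai.showReasoning: false
siren-ai.defaultModel: "default-gpt"   # must match a label defined under siren-ai.models
siren-ai.models:
  - label: "default-gpt"               # assumed label field referenced by defaultModel
    provider: openai
    connection:
      apiKey: "<OPENAI_API_KEY>"
    parameters:
      model: "gpt-4o"
```

If `siren-ai.defaultModel` were omitted here, the first (and only) entry in `siren-ai.models` would be used.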

## ModelConfig types

### OpenAI ModelConfig

| Option | Description | Type | Default |
|---|---|---|---|
| `provider` | Model provider. Must be `openai` for models hosted by OpenAI. | `openai` | |
| `connection.apiKey` | OpenAI API key. This can be found on the API keys page. | string | |
| `connection.orgId` | OpenAI organization ID. | string | |
| `connection.timeout` | LLM timeout in milliseconds. | integer (>0) | 600000 (10 mins) |
| `parameters.model` | The OpenAI model to use. For a full list of options, see the OpenAI documentation. | string | |
| `parameters.temperature` | See Temperature. | float (0.0-2.0) | |
| `parameters.topP` | See TopP. | float (0.0-1.0) | |
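A hypothetical entry for an OpenAI-hosted model, combining the connection and parameters options above, might look like the following sketch; the label, model name, and all values are placeholders.

```yaml
# Hypothetical OpenAI ModelConfig entry (all values are placeholders).
siren-ai.models:
  - label: "openai-example"
    provider: openai
    connection:
      apiKey: "<OPENAI_API_KEY>"
      orgId: "<OPENAI_ORG_ID>"   # optional organization ID
      timeout: 600000            # 10 minutes, the default
    parameters:
      model: "gpt-4o"
      temperature: 0.2
```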

### Azure OpenAI ModelConfig

| Option | Description | Type | Default |
|---|---|---|---|
| `provider` | Model provider. Must be `azure` for models hosted by Azure. | `azure` | |
| `connection.endpoint` | Azure OpenAI endpoint. This can be found on the deployed Azure resource's Keys and Endpoint page. | string | |
| `connection.deploymentName` | Azure OpenAI deployment name. This deployment determines the model used. | string | |
| `connection.apiKey` | Azure OpenAI API key. This can be found on the deployed Azure resource's Keys and Endpoint page. | string | |
| `connection.timeout` | LLM timeout in milliseconds. | integer (>0) | 600000 (10 mins) |
| `parameters.temperature` | See Temperature. | float (0.0-2.0) | |
| `parameters.topP` | See TopP. | float (0.0-1.0) | |
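As a sketch under the same assumptions, an Azure OpenAI entry could look like this; the endpoint, deployment name, and label are placeholders derived from the option descriptions above.

```yaml
# Hypothetical Azure OpenAI ModelConfig entry (all values are placeholders).
siren-ai.models:
  - label: "azure-example"
    provider: azure
    connection:
      endpoint: "https://<resource-name>.openai.azure.com"
      deploymentName: "<deployment-name>"   # the deployment determines the model used
      apiKey: "<AZURE_OPENAI_API_KEY>"
      timeout: 600000
    parameters:
      temperature: 0.2
```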

### OpenAI-compatible provider ModelConfig

| Option | Description | Type | Default |
|---|---|---|---|
| `provider` | Model provider. Must be `openai-compat` for models hosted by providers that support the OpenAI Chat Completions API. | `openai-compat` | |
| `connection.baseUrl` | The URL to access the model provider. Typically ends in `/v1`. | string | |
| `connection.apiKey` | API key required by the provider. | string | |
| `connection.timeout` | LLM timeout in milliseconds. | integer (>0) | |
| `parameters.model` | Model to use. | string | |
| `parameters.temperature` | See Temperature. | float (0.0-2.0) | |
| `parameters.topP` | See TopP. | float (0.0-1.0) | |
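For a self-hosted or third-party provider that exposes the OpenAI Chat Completions API, a hypothetical entry might look like the following; the base URL and model name are placeholders (a local Ollama-style endpoint is used purely as an example).

```yaml
# Hypothetical OpenAI-compatible ModelConfig entry (all values are placeholders).
siren-ai.models:
  - label: "local-example"
    provider: openai-compat
    connection:
      baseUrl: "http://localhost:11434/v1"   # typically ends in /v1
      apiKey: "<PROVIDER_API_KEY>"           # only if the provider requires one
      timeout: 600000
    parameters:
      model: "llama3"
      temperature: 0.2
```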

### AWS Bedrock ModelConfig

| Option | Description | Type | Default |
|---|---|---|---|
| `provider` | Model provider. Must be `aws` for models hosted by AWS Bedrock. | `aws` | |
| `connection.region` | AWS region. | string | |
| `connection.profile` | AWS profile created locally. | string | |
| `connection.credentials.accessKeyId` | AWS access key ID. Can also be specified using `AWS_ACCESS_KEY_ID`. | string | |
| `connection.credentials.secretAccessKey` | AWS secret access key. Can also be specified using `AWS_SECRET_ACCESS_KEY`. | string | |
| `connection.credentials.sessionToken` | A security or session token to use with these credentials. Usually present for temporary credentials. Can also be specified using `AWS_SESSION_TOKEN`. | string | |
| `connection.credentials.credentialScope` | AWS credential scope for this set of credentials. | string | |
| `connection.credentials.accountId` | AWS account ID. | string | |
| `connection.timeout` | LLM timeout in milliseconds. | integer (>0) | undefined (no timeout) |
| `parameters.model` | The model to use. See the AWS Bedrock documentation for a full list of supported models. | string | |
| `parameters.temperature` | See Temperature. | float (0.0-2.0) | |
| `parameters.topP` | See TopP. | float (0.0-1.0) | |
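A hypothetical AWS Bedrock entry is sketched below; it assumes authentication through a local AWS profile, with the static-credentials alternative shown commented out. The region, profile, and model ID are placeholders.

```yaml
# Hypothetical AWS Bedrock ModelConfig entry (all values are placeholders).
siren-ai.models:
  - label: "bedrock-example"
    provider: aws
    connection:
      region: "us-east-1"
      profile: "default"            # or provide connection.credentials instead:
      # credentials:
      #   accessKeyId: "<AWS_ACCESS_KEY_ID>"
      #   secretAccessKey: "<AWS_SECRET_ACCESS_KEY>"
      #   sessionToken: "<AWS_SESSION_TOKEN>"   # for temporary credentials
    parameters:
      model: "anthropic.claude-3-5-sonnet-20240620-v1:0"
      temperature: 0.2
```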

## LLM parameters

### Temperature

The temperature parameter controls the randomness and creativity of the model’s output by adjusting the probability distribution used when selecting the next token.

A higher temperature value makes the model’s output more diverse and creative by giving less probable words a higher chance of being selected. Conversely, a lower temperature value makes the output more focused and predictable by favoring the most probable words. This parameter allows users to fine-tune the balance between creativity and coherence in the model’s responses, depending on the desired application.

Note: The configuration may accept a range of 0 to 2, but the valid range for temperature depends on the provider or model you are using. Some providers accept values between 0 and 1, while others support a wider range, typically 0 to 2. Always choose a temperature value that falls within the range supported by your selected provider. If this parameter is not defined, the provider's default is used.

### TopP

The topP parameter, also known as nucleus sampling, is used to control the diversity of the output generated by an LLM. It works by considering only the smallest set of top probable tokens whose cumulative probability exceeds the value of topP.

For example, if topP is set to 0.9, the model considers only the most probable tokens whose cumulative probability adds up to 90%, effectively filtering out the less likely options. This results in more diverse and creative outputs when topP is set closer to 1, as the model has a wider range of tokens to choose from. Conversely, setting topP closer to 0 makes the output more predictable and focused, as it limits the model to a smaller set of highly probable tokens.

If this parameter is not defined, the provider's default is used.
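As an illustrative sketch, both parameters are set inside a model's parameters block below; the values are arbitrary and shown together only to illustrate the keys. Some providers, OpenAI among them, generally recommend adjusting temperature or topP but not both.

```yaml
# Illustrative parameters block (values are examples, not recommendations).
parameters:
  temperature: 0.7   # higher values favor more diverse, creative output
  topP: 0.9          # sample only from tokens covering the top 90% of probability mass
```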