Unverified commit b8592ad4, authored by Bowen Liang, committed by GitHub

fix: indentation violations in YAML files (#1972)

parent e696b72f
name: "🕷️ Bug report" name: "🕷️ Bug report"
description: Report errors or unexpected behavior description: Report errors or unexpected behavior
labels: labels:
- bug - bug
body: body:
- type: checkboxes - type: checkboxes
attributes: attributes:
label: Self Checks label: Self Checks
description: "To make sure we get to you in time, please check the following :)" description: "To make sure we get to you in time, please check the following :)"
...@@ -13,7 +13,7 @@ body: ...@@ -13,7 +13,7 @@ body:
- label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)). - label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true required: true
- type: input - type: input
attributes: attributes:
label: Dify version label: Dify version
placeholder: 0.3.21 placeholder: 0.3.21
...@@ -21,7 +21,7 @@ body: ...@@ -21,7 +21,7 @@ body:
validations: validations:
required: true required: true
- type: dropdown - type: dropdown
attributes: attributes:
label: Cloud or Self Hosted label: Cloud or Self Hosted
description: How / Where was Dify installed from? description: How / Where was Dify installed from?
...@@ -33,7 +33,7 @@ body: ...@@ -33,7 +33,7 @@ body:
validations: validations:
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: Steps to reproduce label: Steps to reproduce
description: We highly suggest including screenshots and a bug report log. description: We highly suggest including screenshots and a bug report log.
...@@ -41,14 +41,14 @@ body: ...@@ -41,14 +41,14 @@ body:
validations: validations:
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: ✔️ Expected Behavior label: ✔️ Expected Behavior
placeholder: What were you expecting? placeholder: What were you expecting?
validations: validations:
required: false required: false
- type: textarea - type: textarea
attributes: attributes:
label: ❌ Actual Behavior label: ❌ Actual Behavior
placeholder: What happened instead? placeholder: What happened instead?
......
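The change repeated through these issue templates is the one yamllint's indentation rule enforces: block-sequence items indented beneath their parent key rather than flush with it. A minimal before/after sketch, assuming yamllint's default indent-sequences: true setting:

# Flagged by the indentation rule ("wrong indentation: expected 2 but found 0"):
labels:
- bug

# Accepted after the fix — sequence items indented two spaces under their key:
labels:
  - bug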
name: "📚 Documentation Issue" name: "📚 Documentation Issue"
description: Report issues in our documentation description: Report issues in our documentation
labels: labels:
- ducumentation - ducumentation
body: body:
- type: checkboxes - type: checkboxes
attributes: attributes:
label: Self Checks label: Self Checks
description: "To make sure we get to you in time, please check the following :)" description: "To make sure we get to you in time, please check the following :)"
...@@ -12,7 +12,7 @@ body: ...@@ -12,7 +12,7 @@ body:
required: true required: true
- label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)). - label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: Provide a description of requested docs changes label: Provide a description of requested docs changes
placeholder: Briefly describe which document needs to be corrected and why. placeholder: Briefly describe which document needs to be corrected and why.
......
name: " Feature or enhancement request" name: " Feature or enhancement request"
description: Propose something new. description: Propose something new.
labels: labels:
- enhancement - enhancement
body: body:
- type: checkboxes - type: checkboxes
attributes: attributes:
label: Self Checks label: Self Checks
description: "To make sure we get to you in time, please check the following :)" description: "To make sure we get to you in time, please check the following :)"
...@@ -12,24 +12,24 @@ body: ...@@ -12,24 +12,24 @@ body:
required: true required: true
- label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)). - label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: Description of the new feature / enhancement label: Description of the new feature / enhancement
placeholder: What is the expected behavior of the proposed feature? placeholder: What is the expected behavior of the proposed feature?
validations: validations:
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: Scenario when this would be used? label: Scenario when this would be used?
placeholder: What is the scenario this would be used? Why is this important to your workflow as a dify user? placeholder: What is the scenario this would be used? Why is this important to your workflow as a dify user?
validations: validations:
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: Supporting information label: Supporting information
placeholder: "Having additional evidence, data, tweets, blog posts, research, ... anything is extremely helpful. This information provides context to the scenario that may otherwise be lost." placeholder: "Having additional evidence, data, tweets, blog posts, research, ... anything is extremely helpful. This information provides context to the scenario that may otherwise be lost."
validations: validations:
required: false required: false
- type: markdown - type: markdown
attributes: attributes:
value: Please limit one request per issue. value: Please limit one request per issue.
name: "🤝 Help Wanted" name: "🤝 Help Wanted"
description: "Request help from the community [please use English :)]" description: "Request help from the community [please use English :)]"
labels: labels:
- help-wanted - help-wanted
body: body:
- type: checkboxes - type: checkboxes
attributes: attributes:
label: Self Checks label: Self Checks
description: "To make sure we get to you in time, please check the following :)" description: "To make sure we get to you in time, please check the following :)"
...@@ -12,7 +12,7 @@ body: ...@@ -12,7 +12,7 @@ body:
required: true required: true
- label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)). - label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: Provide a description of the help you need label: Provide a description of the help you need
placeholder: Briefly describe what you need help with. placeholder: Briefly describe what you need help with.
......
name: "🌐 Localization/Translation issue" name: "🌐 Localization/Translation issue"
description: Report incorrect translations. [please use English :)] description: Report incorrect translations. [please use English :)]
labels: labels:
- translation - translation
body: body:
- type: checkboxes - type: checkboxes
attributes: attributes:
label: Self Checks label: Self Checks
description: "To make sure we get to you in time, please check the following :)" description: "To make sure we get to you in time, please check the following :)"
...@@ -12,39 +12,39 @@ body: ...@@ -12,39 +12,39 @@ body:
required: true required: true
- label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)). - label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true required: true
- type: input - type: input
attributes: attributes:
label: Dify version label: Dify version
placeholder: 0.3.21 placeholder: 0.3.21
description: Hover over system tray icon or look at Settings description: Hover over system tray icon or look at Settings
validations: validations:
required: true required: true
- type: input - type: input
attributes: attributes:
label: Utility with translation issue label: Utility with translation issue
placeholder: Some area placeholder: Some area
description: Please input here the utility with the translation issue description: Please input here the utility with the translation issue
validations: validations:
required: true required: true
- type: input - type: input
attributes: attributes:
label: 🌐 Language affected label: 🌐 Language affected
placeholder: "German" placeholder: "German"
validations: validations:
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: ❌ Actual phrase(s) label: ❌ Actual phrase(s)
placeholder: What is there? Please include a screenshot as that is extremely helpful. placeholder: What is there? Please include a screenshot as that is extremely helpful.
validations: validations:
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: ✔️ Expected phrase(s) label: ✔️ Expected phrase(s)
placeholder: What was expected? placeholder: What was expected?
validations: validations:
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: ℹ Why is the current translation wrong label: ℹ Why is the current translation wrong
placeholder: Why do you feel this is incorrect? placeholder: Why do you feel this is incorrect?
......
@@ -5,11 +5,7 @@ extends: default
rules:
  brackets:
    max-spaces-inside: 1
+  comments-indentation: disable
  document-start: disable
-  indentation:
-    level: warning
  line-length: disable
-  new-line-at-end-of-file:
-    level: warning
-  trailing-spaces:
-    level: warning
+  truthy: disable
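The config above is what the repository's YAML checks consume. A sketch of how it might be run in CI, assuming the config file lives at .github/linters/.yaml-lint.yml (the path and the actual workflow wiring are not visible in this diff):

# Hypothetical lint step — not part of this commit.
- name: Lint YAML files
  run: |
    pip install yamllint
    yamllint -c .github/linters/.yaml-lint.yml .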
@@ -6,7 +6,7 @@ on:
      - 'main'
      - 'deploy/dev'
  release:
-    types: [published]
+    types: [ published ]
jobs:
  build-and-push:
...
@@ -6,7 +6,7 @@ on:
      - 'main'
      - 'deploy/dev'
  release:
-    types: [published]
+    types: [ published ]
jobs:
  build-and-push:
...
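The [published] → [ published ] edits in both workflow files line up with brackets: max-spaces-inside: 1 in the config above: yamllint's default allows no padding inside flow-sequence brackets, and this setting relaxes that to at most one space per side. A small sketch of what the rule then accepts, with illustrative keys:

# Under "brackets: max-spaces-inside: 1":
tight: [published]       # accepted — zero inner spaces always pass
padded: [ published ]    # accepted — one space inside each bracket
over: [  published  ]    # rejected — too many spaces inside brackets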
@@ -16,9 +16,9 @@ help:
  url:
    en_US: https://console.anthropic.com/account/keys
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: anthropic_api_key
...
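The provider manifests below all repeat the shape visible here. An annotated sketch of the recurring fields, with meanings inferred from the files in this diff rather than from any schema documentation:

supported_model_types:        # model categories the provider exposes
  - llm
  - text-embedding
configurate_methods:          # how models are set up:
  - predefined-model          #   fixed catalog shipped with the provider
  - customizable-model        #   user-defined model endpoints
provider_credential_schema:   # credential form rendered for the provider
  credential_form_schemas:
    - variable: api_key       # credential field name (example from this diff)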
@@ -3,16 +3,16 @@ label:
  en_US: claude-2.1
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 200000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
@@ -21,7 +21,7 @@ parameter_rules:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 4096
...
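Each model card in this commit carries the same parameter_rules list, and the indentation fixes land on exactly these entries. An annotated sketch of one rule, with field meanings inferred from the entries shown here (the values are illustrative):

parameter_rules:
  - name: max_tokens          # parameter name exposed to callers
    use_template: max_tokens  # inherit defaults from a built-in rule template
    required: true            # whether a value must be supplied
    default: 4096             # used when the caller omits the parameter
    min: 1                    # enforced numeric bounds
    max: 8192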
@@ -3,16 +3,16 @@ label:
  en_US: claude-2
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 100000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
@@ -21,7 +21,7 @@ parameter_rules:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 4096
...
@@ -2,16 +2,16 @@ model: claude-instant-1
label:
  en_US: claude-instant-1
model_type: llm
-features: []
+features: [ ]
model_properties:
  mode: chat
  context_size: 100000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
@@ -20,7 +20,7 @@ parameter_rules:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 4096
...
@@ -13,10 +13,10 @@ help:
  url:
    en_US: https://azure.microsoft.com/en-us/products/ai-services/openai-service
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - customizable-model
model_credential_schema:
  model:
    label:
...
@@ -13,10 +13,10 @@ help:
  url:
    en_US: https://www.baichuan-ai.com
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: api_key
...
@@ -3,16 +3,16 @@ label:
  en_US: Baichuan2-53B
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 4000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
@@ -21,17 +21,17 @@ parameter_rules:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 1000
    min: 1
    max: 4000
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: with_search_enhance
    label:
      zh_Hans: 搜索增强
      en_US: Search Enhance
...
@@ -3,16 +3,16 @@ label:
  en_US: Baichuan2-Turbo-192K
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 192000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
@@ -21,17 +21,17 @@ parameter_rules:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 8000
    min: 1
    max: 192000
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: with_search_enhance
    label:
      zh_Hans: 搜索增强
      en_US: Search Enhance
...
@@ -3,16 +3,16 @@ label:
  en_US: Baichuan2-Turbo
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 192000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
@@ -21,17 +21,17 @@ parameter_rules:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 8000
    min: 1
    max: 192000
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: with_search_enhance
    label:
      zh_Hans: 搜索增强
      en_US: Search Enhance
...
@@ -13,9 +13,9 @@ help:
  url:
    en_US: https://github.com/THUDM/ChatGLM3
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: api_base
...
@@ -3,17 +3,17 @@ label:
  en_US: ChatGLM2-6B-32K
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 32000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
    required: false
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 2000
...
@@ -3,17 +3,17 @@ label:
  en_US: ChatGLM2-6B
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 2000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
    required: false
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 256
...
@@ -3,18 +3,18 @@ label:
  en_US: ChatGLM3-6B-32K
model_type: llm
features:
  - tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 32000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
    required: false
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 8000
...
@@ -3,18 +3,18 @@ label:
  en_US: ChatGLM3-6B
model_type: llm
features:
  - tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 8000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
    required: false
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 256
...
@@ -14,9 +14,9 @@ help:
  url:
    en_US: https://dashboard.cohere.com/api-keys
supported_model_types:
  - rerank
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: api_key
@@ -28,4 +28,4 @@ provider_credential_schema:
      placeholder:
        zh_Hans: 请填写 API Key
        en_US: Please fill in API Key
-      show_on: []
+      show_on: [ ]
\ No newline at end of file
@@ -16,9 +16,9 @@ help:
  url:
    en_US: https://ai.google.dev/
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: google_api_key
@@ -29,4 +29,3 @@ provider_credential_schema:
      placeholder:
        zh_Hans: 在此输入您的 API Key
        en_US: Enter your API Key
\ No newline at end of file
@@ -3,16 +3,16 @@ label:
  en_US: Gemini Pro Vision
model_type: llm
features:
  - vision
model_properties:
  mode: chat
  context_size: 12288
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
@@ -21,7 +21,7 @@ parameter_rules:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 4096
...
@@ -3,16 +3,16 @@ label:
  en_US: Gemini Pro
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 30720
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
@@ -21,7 +21,7 @@ parameter_rules:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 2048
...
@@ -13,10 +13,10 @@ help:
  url:
    en_US: https://huggingface.co/settings/tokens
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - customizable-model
model_credential_schema:
  model:
    label:
...
@@ -15,9 +15,9 @@ help:
  url:
    en_US: https://jina.ai/embeddings/
supported_model_types:
  - text-embedding
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: api_key
...
@@ -13,10 +13,10 @@ help:
  url:
    en_US: https://github.com/go-skynet/LocalAI
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - customizable-model
model_credential_schema:
  model:
    label:
...
@@ -3,24 +3,24 @@ label:
  en_US: Abab5-Chat
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 6144
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 6144
    min: 1
    max: 6144
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
pricing:
  input: '0.00'
...
@@ -3,26 +3,26 @@ label:
  en_US: Abab5.5-Chat
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 16384
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 6144
    min: 1
    max: 16384
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: plugin_web_search
    required: false
    default: false
    type: boolean
...
@@ -13,10 +13,10 @@ help:
  url:
    en_US: https://api.minimax.chat/user-center/basic-information/interface-key
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: minimax_api_key
...
@@ -4,21 +4,21 @@ label:
  en_US: gpt-3.5-turbo-0613
model_type: llm
features:
  - multi-tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 4096
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
...
@@ -4,21 +4,21 @@ label:
  en_US: gpt-3.5-turbo-1106
model_type: llm
features:
  - multi-tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 16385
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
...
@@ -4,21 +4,21 @@ label:
  en_US: gpt-3.5-turbo-16k-0613
model_type: llm
features:
  - multi-tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 16385
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
...
@@ -4,21 +4,21 @@ label:
  en_US: gpt-3.5-turbo-16k
model_type: llm
features:
  - multi-tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 16385
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
...
@@ -3,20 +3,20 @@ label:
  zh_Hans: gpt-3.5-turbo-instruct
  en_US: gpt-3.5-turbo-instruct
model_type: llm
-features: []
+features: [ ]
model_properties:
  mode: completion
  context_size: 4096
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
...
@@ -4,21 +4,21 @@ label:
  en_US: gpt-3.5-turbo
model_type: llm
features:
  - multi-tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 4096
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
...
@@ -4,26 +4,26 @@ label:
  en_US: gpt-4-1106-preview
model_type: llm
features:
  - multi-tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 128000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 128000
  - name: seed
    label:
      zh_Hans: 种子
      en_US: Seed
@@ -39,7 +39,7 @@ parameter_rules:
    precision: 2
    min: 0
    max: 1
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: response_format
...
@@ -4,26 +4,26 @@ label:
  en_US: gpt-4-32k
model_type: llm
features:
  - multi-tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 32768
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 32768
  - name: seed
    label:
      zh_Hans: 种子
      en_US: Seed
@@ -39,7 +39,7 @@ parameter_rules:
    precision: 2
    min: 0
    max: 1
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: response_format
...
@@ -4,25 +4,25 @@ label:
  en_US: gpt-4-vision-preview
model_type: llm
features:
  - vision
model_properties:
  mode: chat
  context_size: 128000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 128000
  - name: seed
    label:
      zh_Hans: 种子
      en_US: Seed
@@ -38,7 +38,7 @@ parameter_rules:
    precision: 2
    min: 0
    max: 1
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: response_format
...
@@ -4,26 +4,26 @@ label:
  en_US: gpt-4
model_type: llm
features:
  - multi-tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 8192
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 8192
  - name: seed
    label:
      zh_Hans: 种子
      en_US: Seed
@@ -39,7 +39,7 @@ parameter_rules:
    precision: 2
    min: 0
    max: 1
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: response_format
...
@@ -3,20 +3,20 @@ label:
  zh_Hans: text-davinci-003
  en_US: text-davinci-003
model_type: llm
-features: []
+features: [ ]
model_properties:
  mode: completion
  context_size: 4096
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
...
@@ -16,13 +16,13 @@ help:
  url:
    en_US: https://platform.openai.com/account/api-keys
supported_model_types:
  - llm
  - text-embedding
  - speech2text
  - moderation
configurate_methods:
  - predefined-model
  - customizable-model
model_credential_schema:
  model:
    label:
...
@@ -5,10 +5,10 @@ description:
  en_US: Model providers compatible with OpenAI's API standard, such as LM Studio.
  zh_Hans: 兼容 OpenAI API 的模型供应商,例如 LM Studio 。
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - customizable-model
model_credential_schema:
  model:
    label:
...
@@ -13,10 +13,10 @@ help:
  url:
    en_US: https://github.com/bentoml/OpenLLM
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - customizable-model
model_credential_schema:
  model:
    label:
...
@@ -13,10 +13,10 @@ help:
  url:
    en_US: https://replicate.com/account/api-tokens
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - customizable-model
model_credential_schema:
  model:
    label:
...
@@ -5,13 +5,13 @@ model_type: llm
model_properties:
  mode: chat
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 0.5
    help:
      zh_Hans: 核采样阈值。用于决定结果随机性,取值越高随机性越强即相同的问题得到的不同答案的可能性越高。
      en_US: Kernel sampling threshold. Used to determine the randomness of the results. The higher the value, the stronger the randomness, that is, the higher the possibility of getting different answers to the same question.
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
@@ -19,7 +19,7 @@ parameter_rules:
    help:
      zh_Hans: 模型回答的tokens的最大长度。
      en_US: 模型回答的tokens的最大长度。
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
...
@@ -6,13 +6,13 @@ model_type: llm
model_properties:
  mode: chat
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 0.5
    help:
      zh_Hans: 核采样阈值。用于决定结果随机性,取值越高随机性越强即相同的问题得到的不同答案的可能性越高。
      en_US: Kernel sampling threshold. Used to determine the randomness of the results. The higher the value, the stronger the randomness, that is, the higher the possibility of getting different answers to the same question.
  - name: max_tokens
    use_template: max_tokens
    default: 2048
    min: 1
@@ -20,7 +20,7 @@ parameter_rules:
    help:
      zh_Hans: 模型回答的tokens的最大长度。
      en_US: 模型回答的tokens的最大长度。
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
...
@@ -5,13 +5,13 @@ model_type: llm
model_properties:
  mode: chat
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 0.5
    help:
      zh_Hans: 核采样阈值。用于决定结果随机性,取值越高随机性越强即相同的问题得到的不同答案的可能性越高。
      en_US: Kernel sampling threshold. Used to determine the randomness of the results. The higher the value, the stronger the randomness, that is, the higher the possibility of getting different answers to the same question.
  - name: max_tokens
    use_template: max_tokens
    default: 2048
    min: 1
@@ -19,7 +19,7 @@ parameter_rules:
    help:
      zh_Hans: 模型回答的tokens的最大长度。
      en_US: 模型回答的tokens的最大长度。
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
...
@@ -15,9 +15,9 @@ help:
  url:
    en_US: https://www.xfyun.cn/solutions/xinghuoAPI
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: app_id
...
@@ -13,9 +13,9 @@ help:
  url:
    en_US: https://api.together.xyz/
supported_model_types:
  - llm
configurate_methods:
  - customizable-model
model_credential_schema:
  model:
    label:
...
@@ -6,7 +6,7 @@ model_properties:
  mode: completion
  context_size: 32000
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 1.0
    min: 0.0
@@ -14,13 +14,13 @@ parameter_rules:
    help:
      zh_Hans: 用于控制随机性和多样性的程度。具体来说,temperature值控制了生成文本时对每个候选词的概率分布进行平滑的程度。较高的temperature值会降低概率分布的峰值,使得更多的低概率词被选择,生成结果更加多样化;而较低的temperature值则会增强概率分布的峰值,使得高概率词更容易被选择,生成结果更加确定。
      en_US: Used to control the degree of randomness and diversity. Specifically, the temperature value controls the degree to which the probability distribution of each candidate word is smoothed when generating text. A higher temperature value will reduce the peak value of the probability distribution, allowing more low-probability words to be selected, and the generated results will be more diverse; while a lower temperature value will enhance the peak value of the probability distribution, making it easier for high-probability words to be selected. , the generated results are more certain.
  - name: top_p
    use_template: top_p
    default: 0.8
    help:
      zh_Hans: 生成过程中核采样方法概率阈值,例如,取值为0.8时,仅保留概率加起来大于等于0.8的最可能token的最小集合作为候选集。取值范围为(0,1.0),取值越大,生成的随机性越高;取值越低,生成的确定性越高。
      en_US: The probability threshold of the kernel sampling method during the generation process. For example, when the value is 0.8, only the smallest set of the most likely tokens with a sum of probabilities greater than or equal to 0.8 is retained as the candidate set. The value range is (0,1.0). The larger the value, the higher the randomness generated; the lower the value, the higher the certainty generated.
  - name: max_tokens
    use_template: max_tokens
    default: 2000
    min: 1
@@ -28,7 +28,7 @@ parameter_rules:
    help:
      zh_Hans: 用于限制模型生成token的数量,max_tokens设置的是生成上限,并不表示一定会生成这么多的token数量。
      en_US: It is used to limit the number of tokens generated by the model. max_tokens sets the upper limit of generation, which does not mean that so many tokens will be generated.
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
@@ -37,7 +37,7 @@ parameter_rules:
      zh_Hans: 生成时,采样候选集的大小。例如,取值为50时,仅将单次生成中得分最高的50个token组成随机采样的候选集。取值越大,生成的随机性越高;取值越小,生成的确定性越高。默认不传递该参数,取值为None或当top_k大于100时,表示不启用top_k策略,此时,仅有top_p策略生效。
      en_US: The size of the sample candidate set when generated. For example, when the value is 50, only the 50 highest-scoring tokens in a single generation form a randomly sampled candidate set. The larger the value, the higher the randomness generated; the smaller the value, the higher the certainty generated. This parameter is not passed by default. The value is None or when top_k is greater than 100, it means that the top_k policy is not enabled. At this time, only the top_p policy takes effect.
    required: false
  - name: seed
    label:
      zh_Hans: 随机种子
      en_US: Random seed
@@ -47,7 +47,7 @@ parameter_rules:
      zh_Hans: 生成时,随机数的种子,用于控制模型生成的随机性。如果使用相同的种子,每次运行生成的结果都将相同;当需要复现模型的生成结果时,可以使用相同的种子。seed参数支持无符号64位整数类型。默认值 1234。
      en_US: When generating, the random number seed is used to control the randomness of model generation. If you use the same seed, the results generated by each run will be the same; when you need to reproduce the results of the model, you can use the same seed. The seed parameter supports unsigned 64-bit integer types. Default value 1234.
    required: false
  - name: repetition_penalty
    label:
      en_US: Repetition penalty
    type: float
...
@@ -6,7 +6,7 @@ model_properties:
  mode: completion
  context_size: 8192
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 1.0
    min: 0.0
@@ -14,13 +14,13 @@ parameter_rules:
    help:
      zh_Hans: 用于控制随机性和多样性的程度。具体来说,temperature值控制了生成文本时对每个候选词的概率分布进行平滑的程度。较高的temperature值会降低概率分布的峰值,使得更多的低概率词被选择,生成结果更加多样化;而较低的temperature值则会增强概率分布的峰值,使得高概率词更容易被选择,生成结果更加确定。
      en_US: Controls the degree of randomness and diversity. Specifically, the temperature value controls how much the probability distribution over candidate words is smoothed during generation. A higher temperature flattens the distribution, allowing more low-probability words to be selected and making the output more diverse; a lower temperature sharpens the distribution, making high-probability words more likely to be selected and the output more deterministic.
  - name: top_p
    use_template: top_p
    default: 0.8
    help:
      zh_Hans: 生成过程中核采样方法概率阈值,例如,取值为0.8时,仅保留概率加起来大于等于0.8的最可能token的最小集合作为候选集。取值范围为(0,1.0),取值越大,生成的随机性越高;取值越低,生成的确定性越高。
      en_US: The probability threshold for nucleus sampling during generation. For example, at a value of 0.8, only the smallest set of the most likely tokens whose cumulative probability is at least 0.8 is retained as the candidate set. The value range is (0, 1.0); larger values produce more random output, and smaller values produce more deterministic output.
  - name: max_tokens
    use_template: max_tokens
    default: 1500
    min: 1
@@ -28,7 +28,7 @@ parameter_rules:
    help:
      zh_Hans: 用于限制模型生成token的数量,max_tokens设置的是生成上限,并不表示一定会生成这么多的token数量。
      en_US: Limits the number of tokens the model generates. max_tokens sets an upper bound on generation; it does not mean that this many tokens will necessarily be produced.
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
@@ -37,7 +37,7 @@ parameter_rules:
      zh_Hans: 生成时,采样候选集的大小。例如,取值为50时,仅将单次生成中得分最高的50个token组成随机采样的候选集。取值越大,生成的随机性越高;取值越小,生成的确定性越高。默认不传递该参数,取值为None或当top_k大于100时,表示不启用top_k策略,此时,仅有top_p策略生效。
      en_US: The size of the sampling candidate set during generation. For example, at a value of 50, only the 50 highest-scoring tokens in a single generation step form the candidate set for random sampling. Larger values produce more random output; smaller values produce more deterministic output. This parameter is not passed by default; when it is None or top_k is greater than 100, the top_k strategy is disabled and only the top_p strategy takes effect.
    required: false
  - name: seed
    label:
      zh_Hans: 随机种子
      en_US: Random seed
@@ -47,7 +47,7 @@ parameter_rules:
      zh_Hans: 生成时,随机数的种子,用于控制模型生成的随机性。如果使用相同的种子,每次运行生成的结果都将相同;当需要复现模型的生成结果时,可以使用相同的种子。seed参数支持无符号64位整数类型。默认值 1234。
      en_US: The random number seed used during generation to control the randomness of the model's output. Runs that use the same seed generate the same results, so reuse a seed when you need to reproduce the model's output. The seed parameter supports unsigned 64-bit integers. Default value 1234.
    required: false
  - name: repetition_penalty
    label:
      en_US: Repetition penalty
    type: float
...
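A style fix like this is usually kept in place by a linter. As a hedged sketch only (this diff does not show the repository's actual lint setup), a minimal .yamllint configuration that would flag the indentation violations corrected here:

# .yamllint -- hypothetical configuration; the repo's real lint rules are not shown in this diff.
extends: default
rules:
  indentation:
    spaces: 2                # two-space steps, matching the files above
    indent-sequences: true   # require "  - item" indented under the parent key

Running `yamllint .` against the repository would then report any sequence item sitting flush with its parent key.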
@@ -15,9 +15,9 @@ help:
  url:
    en_US: https://dashscope.console.aliyun.com/api-key_management
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: dashscope_api_key
...
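The provider manifests get the same treatment: their top-level enumerations become properly indented block sequences. The hunk above truncates the credential entry at its variable name; the keys below `variable` in this sketch are assumptions based on the credential-form pattern, not content shown in the diff:

provider_credential_schema:
  credential_form_schemas:
    - variable: dashscope_api_key
      label:                  # assumed; the diff cuts off after "variable"
        en_US: API Key
      type: secret-input      # assumed
      required: true          # assumed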
@@ -3,29 +3,29 @@ label:
  en_US: Ernie Bot 4
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 4800
parameter_rules:
  - name: temperature
    use_template: temperature
    min: 0.1
    max: 1.0
    default: 0.8
  - name: top_p
    use_template: top_p
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 256
    min: 1
    max: 4800
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: disable_search
    label:
      zh_Hans: 禁用搜索
      en_US: Disable Search
...
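The Ernie rule entries above illustrate the use_template pattern: a rule inherits a predefined parameter template by name and overrides only the bounds it needs. Restated from the hunk, with values exactly as shown and only the comments added:

parameter_rules:
  - name: temperature
    use_template: temperature   # inherit the shared temperature template
    min: 0.1                    # then override its bounds and default
    max: 1.0
    default: 0.8
  - name: top_p
    use_template: top_p         # no overrides: the template's defaults apply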
@@ -3,29 +3,29 @@ label:
  en_US: Ernie Bot 8k
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 8000
parameter_rules:
  - name: temperature
    use_template: temperature
    min: 0.1
    max: 1.0
    default: 0.8
  - name: top_p
    use_template: top_p
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 1024
    min: 1
    max: 8000
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: disable_search
    label:
      zh_Hans: 禁用搜索
      en_US: Disable Search
...
@@ -3,25 +3,25 @@ label:
  en_US: Ernie Bot Turbo
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 11200
parameter_rules:
  - name: temperature
    use_template: temperature
    min: 0.1
    max: 1.0
    default: 0.8
  - name: top_p
    use_template: top_p
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 1024
    min: 1
    max: 11200
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
@@ -3,29 +3,29 @@ label:
  en_US: Ernie Bot
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 4800
parameter_rules:
  - name: temperature
    use_template: temperature
    min: 0.1
    max: 1.0
    default: 0.8
  - name: top_p
    use_template: top_p
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 256
    min: 1
    max: 4800
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: disable_search
    label:
      zh_Hans: 禁用搜索
      en_US: Disable Search
...
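Three of the chat-mode Ernie files also expose a provider-specific disable_search switch. The hunks only show its bilingual label, so the sketch below assumes the usual boolean shape for the rest of the entry:

  - name: disable_search
    label:
      zh_Hans: 禁用搜索        # "Disable search"
      en_US: Disable Search
    type: boolean              # assumed; the diff truncates before the type
    required: false            # assumed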
@@ -16,9 +16,9 @@ help:
  url:
    en_US: https://cloud.baidu.com/wenxin.html
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: api_key
...
@@ -13,11 +13,11 @@ help:
  url:
    en_US: https://github.com/xorbitsai/inference
supported_model_types:
  - llm
  - text-embedding
  - rerank
configurate_methods:
  - customizable-model
model_credential_schema:
  model:
    label:
...
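Unlike the predefined-model providers above, xinference is a customizable-model provider: users declare models themselves, so credentials are gathered per model under model_credential_schema rather than per provider. The hunk cuts off at the model label; everything below `label:` in this sketch is an assumption about how such an entry typically continues:

model_credential_schema:
  model:
    label:
      en_US: Model Name              # assumed; the diff truncates here
    placeholder:
      en_US: Enter your model name   # assumed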
@@ -5,7 +5,7 @@ model_type: llm
model_properties:
  mode: chat
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 0.9
    min: 0.0
@@ -13,7 +13,7 @@ parameter_rules:
    help:
      zh_Hans: 采样温度,控制输出的随机性,必须为正数取值范围是:(0.0,1.0],不能等于 0,默认值为 0.95 值越大,会使输出更随机,更具创造性;值越小,输出会更加稳定或确定建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
      en_US: Sampling temperature; controls the randomness of the output and must be a positive number. The value range is (0.0, 1.0], and the value cannot equal 0. The default value is 0.95. Larger values make the output more random and creative; smaller values make it more stable and deterministic. It is recommended to adjust either top_p or temperature depending on the application scenario, but not both at the same time.
  - name: top_p
    use_template: top_p
    default: 0.7
    help:
...
@@ -5,7 +5,7 @@ model_type: llm
model_properties:
  mode: chat
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 0.9
    min: 0.0
@@ -13,7 +13,7 @@ parameter_rules:
    help:
      zh_Hans: 采样温度,控制输出的随机性,必须为正数取值范围是:(0.0,1.0],不能等于 0,默认值为 0.95 值越大,会使输出更随机,更具创造性;值越小,输出会更加稳定或确定建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
      en_US: Sampling temperature; controls the randomness of the output and must be a positive number. The value range is (0.0, 1.0], and the value cannot equal 0. The default value is 0.95. Larger values make the output more random and creative; smaller values make it more stable and deterministic. It is recommended to adjust either top_p or temperature depending on the application scenario, but not both at the same time.
  - name: top_p
    use_template: top_p
    default: 0.7
    help:
...
@@ -5,7 +5,7 @@ model_type: llm
model_properties:
  mode: chat
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 0.9
    min: 0.0
@@ -13,7 +13,7 @@ parameter_rules:
    help:
      zh_Hans: 采样温度,控制输出的随机性,必须为正数取值范围是:(0.0,1.0],不能等于 0,默认值为 0.95 值越大,会使输出更随机,更具创造性;值越小,输出会更加稳定或确定建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
      en_US: Sampling temperature; controls the randomness of the output and must be a positive number. The value range is (0.0, 1.0], and the value cannot equal 0. The default value is 0.95. Larger values make the output more random and creative; smaller values make it more stable and deterministic. It is recommended to adjust either top_p or temperature depending on the application scenario, but not both at the same time.
  - name: top_p
    use_template: top_p
    default: 0.7
    help:
...
@@ -5,7 +5,7 @@ model_type: llm
model_properties:
  mode: chat
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 0.9
    min: 0.0
@@ -13,7 +13,7 @@ parameter_rules:
    help:
      zh_Hans: 采样温度,控制输出的随机性,必须为正数取值范围是:(0.0,1.0],不能等于 0,默认值为 0.95 值越大,会使输出更随机,更具创造性;值越小,输出会更加稳定或确定建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
      en_US: Sampling temperature; controls the randomness of the output and must be a positive number. The value range is (0.0, 1.0], and the value cannot equal 0. The default value is 0.95. Larger values make the output more random and creative; smaller values make it more stable and deterministic. It is recommended to adjust either top_p or temperature depending on the application scenario, but not both at the same time.
  - name: top_p
    use_template: top_p
    default: 0.7
    help:
...
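The four chatglm model files above carry identical temperature and top_p rules, so the fix repeats across them. Condensed into one annotated sketch, using only values stated in the hunks (note that the rules set default: 0.9 while their help text cites a default of 0.95; that inconsistency is in the source files, not introduced here):

  - name: temperature
    use_template: temperature
    default: 0.9    # help text: range (0.0, 1.0], cannot equal 0
    min: 0.0
  - name: top_p
    use_template: top_p
    default: 0.7    # help text: adjust either top_p or temperature, not both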
@@ -5,7 +5,7 @@ model_type: llm
model_properties:
  mode: chat
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 0.95
    min: 0.0
@@ -13,13 +13,13 @@ parameter_rules:
    help:
      zh_Hans: 采样温度,控制输出的随机性,必须为正数取值范围是:(0.0,1.0],不能等于 0,默认值为 0.95 值越大,会使输出更随机,更具创造性;值越小,输出会更加稳定或确定建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
      en_US: Sampling temperature; controls the randomness of the output and must be a positive number. The value range is (0.0, 1.0], and the value cannot equal 0. The default value is 0.95. Larger values make the output more random and creative; smaller values make it more stable and deterministic. It is recommended to adjust either top_p or temperature depending on the application scenario, but not both at the same time.
  - name: top_p
    use_template: top_p
    default: 0.7
    help:
      zh_Hans: 用温度取样的另一种方法,称为核取样取值范围是:(0.0, 1.0) 开区间,不能等于 0 或 1,默认值为 0.7 模型考虑具有 top_p 概率质量tokens的结果例如:0.1 意味着模型解码器只考虑从前 10% 的概率的候选集中取 tokens 建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
      en_US: An alternative to temperature sampling, called nucleus sampling. The value range is the open interval (0.0, 1.0); the value cannot equal 0 or 1, and the default is 0.7. The model considers the results of the tokens within the top_p probability mass. For example, 0.1 means the model decoder only considers tokens from the candidate set comprising the top 10% of probability mass. It is recommended to adjust either top_p or temperature depending on the application scenario, but not both at the same time.
  - name: incremental
    label:
      zh_Hans: 增量返回
      en_US: Incremental
@@ -28,7 +28,7 @@ parameter_rules:
      zh_Hans: SSE接口调用时,用于控制每次返回内容方式是增量还是全量,不提供此参数时默认为增量返回,true 为增量返回,false 为全量返回。
      en_US: When calling the SSE interface, controls whether content is returned incrementally or in full each time. If this parameter is not provided, the default is incremental return; true means incremental return, false means full return.
    required: false
  - name: return_type
    label:
      zh_Hans: 回复类型
      en_US: Return Type
...
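This last zhipuai file adds two response-shaping parameters beyond the sampling rules. From its help text, incremental is a boolean controlling SSE delivery; the sketch below restates it, with the type and default inferred from the help text rather than shown in the hunk:

  - name: incremental
    label:
      zh_Hans: 增量返回        # "incremental return"
      en_US: Incremental
    type: boolean              # inferred: the help text describes true/false values
    default: true              # inferred: help text says incremental is the default when omitted
    required: false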
@@ -15,10 +15,10 @@ help:
  url:
    en_US: https://open.bigmodel.cn/usercenter/apikeys
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: api_key
...
@@ -236,7 +236,7 @@ services:
    # ports:
    #   - "5432:5432"
    healthcheck:
-     test: ["CMD", "pg_isready"]
+     test: [ "CMD", "pg_isready" ]
      interval: 1s
      timeout: 3s
      retries: 30
...
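The docker-compose hunk is the one change here that is visible without indentation: the flow sequence gains spaces inside its brackets. The resulting healthcheck block in full, with the service name assumed to be the Postgres database service given the pg_isready probe and the commented-out 5432 port:

services:
  db:                                 # assumed service name; not shown in the hunk
    healthcheck:
      test: [ "CMD", "pg_isready" ]   # spaces inside brackets, per the repo's style
      interval: 1s                    # probe every second
      timeout: 3s                     # fail a probe after 3 seconds
      retries: 30                     # mark unhealthy after 30 consecutive failures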