Unverified commit b8592ad4, authored by Bowen Liang, committed by GitHub

fix: indentation violations in YAML files (#1972)

parent e696b72f
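The change itself is mechanical throughout: YAML files are re-indented so nested mappings and block sequences satisfy yamllint's indentation rule. A minimal before/after sketch of the pattern, illustrative rather than copied from the diff:

# before: one common violation, sequence entries flush with the parent key
supported_model_types:
- llm

# after: entries indented two spaces under the key, the style the
# re-indented files below follow
supported_model_types:
  - llm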
name: "🕷️ Bug report" name: "🕷️ Bug report"
description: Report errors or unexpected behavior description: Report errors or unexpected behavior
labels: labels:
- bug - bug
body: body:
- type: checkboxes - type: checkboxes
attributes: attributes:
label: Self Checks label: Self Checks
description: "To make sure we get to you in time, please check the following :)" description: "To make sure we get to you in time, please check the following :)"
options: options:
- label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones. - label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
required: true required: true
- label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)). - label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true required: true
- type: input - type: input
attributes: attributes:
label: Dify version label: Dify version
placeholder: 0.3.21 placeholder: 0.3.21
description: See about section in Dify console description: See about section in Dify console
validations: validations:
required: true required: true
- type: dropdown - type: dropdown
attributes: attributes:
label: Cloud or Self Hosted label: Cloud or Self Hosted
description: How / Where was Dify installed from? description: How / Where was Dify installed from?
multiple: true multiple: true
options: options:
- Cloud - Cloud
- Self Hosted (Docker) - Self Hosted (Docker)
- Self Hosted (Source) - Self Hosted (Source)
validations: validations:
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: Steps to reproduce label: Steps to reproduce
description: We highly suggest including screenshots and a bug report log. description: We highly suggest including screenshots and a bug report log.
placeholder: Having detailed steps helps us reproduce the bug. placeholder: Having detailed steps helps us reproduce the bug.
validations: validations:
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: ✔️ Expected Behavior label: ✔️ Expected Behavior
placeholder: What were you expecting? placeholder: What were you expecting?
validations: validations:
required: false required: false
- type: textarea - type: textarea
attributes: attributes:
label: ❌ Actual Behavior label: ❌ Actual Behavior
placeholder: What happened instead? placeholder: What happened instead?
validations: validations:
required: false required: false
@@ -5,4 +5,4 @@ contact_links:
    about: Documentation for users of Dify
  - name: "\U0001F4DA Dify dev documentation"
    url: https://docs.dify.ai/getting-started/install-self-hosted
    about: Documentation for people interested in developing and contributing to Dify
\ No newline at end of file
name: "📚 Documentation Issue" name: "📚 Documentation Issue"
description: Report issues in our documentation description: Report issues in our documentation
labels: labels:
- ducumentation - ducumentation
body: body:
- type: checkboxes - type: checkboxes
attributes: attributes:
label: Self Checks label: Self Checks
description: "To make sure we get to you in time, please check the following :)" description: "To make sure we get to you in time, please check the following :)"
options: options:
- label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones. - label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
required: true required: true
- label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)). - label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: Provide a description of requested docs changes label: Provide a description of requested docs changes
placeholder: Briefly describe which document needs to be corrected and why. placeholder: Briefly describe which document needs to be corrected and why.
validations: validations:
required: true required: true
\ No newline at end of file
name: " Feature or enhancement request" name: " Feature or enhancement request"
description: Propose something new. description: Propose something new.
labels: labels:
- enhancement - enhancement
body: body:
- type: checkboxes - type: checkboxes
attributes: attributes:
label: Self Checks label: Self Checks
description: "To make sure we get to you in time, please check the following :)" description: "To make sure we get to you in time, please check the following :)"
options: options:
- label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones. - label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
required: true required: true
- label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)). - label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: Description of the new feature / enhancement label: Description of the new feature / enhancement
placeholder: What is the expected behavior of the proposed feature? placeholder: What is the expected behavior of the proposed feature?
validations: validations:
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: Scenario when this would be used? label: Scenario when this would be used?
placeholder: What is the scenario this would be used? Why is this important to your workflow as a dify user? placeholder: What is the scenario this would be used? Why is this important to your workflow as a dify user?
validations: validations:
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: Supporting information label: Supporting information
placeholder: "Having additional evidence, data, tweets, blog posts, research, ... anything is extremely helpful. This information provides context to the scenario that may otherwise be lost." placeholder: "Having additional evidence, data, tweets, blog posts, research, ... anything is extremely helpful. This information provides context to the scenario that may otherwise be lost."
validations: validations:
required: false required: false
- type: markdown - type: markdown
attributes: attributes:
value: Please limit one request per issue. value: Please limit one request per issue.
\ No newline at end of file
name: "🤝 Help Wanted" name: "🤝 Help Wanted"
description: "Request help from the community [please use English :)]" description: "Request help from the community [please use English :)]"
labels: labels:
- help-wanted - help-wanted
body: body:
- type: checkboxes - type: checkboxes
attributes: attributes:
label: Self Checks label: Self Checks
description: "To make sure we get to you in time, please check the following :)" description: "To make sure we get to you in time, please check the following :)"
options: options:
- label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones. - label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
required: true required: true
- label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)). - label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: Provide a description of the help you need label: Provide a description of the help you need
placeholder: Briefly describe what you need help with. placeholder: Briefly describe what you need help with.
validations: validations:
required: true required: true
\ No newline at end of file
name: "🌐 Localization/Translation issue" name: "🌐 Localization/Translation issue"
description: Report incorrect translations. [please use English :)] description: Report incorrect translations. [please use English :)]
labels: labels:
- translation - translation
body: body:
- type: checkboxes - type: checkboxes
attributes: attributes:
label: Self Checks label: Self Checks
description: "To make sure we get to you in time, please check the following :)" description: "To make sure we get to you in time, please check the following :)"
options: options:
- label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones. - label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
required: true required: true
- label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)). - label: I confirm that I am using English to file this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true required: true
- type: input - type: input
attributes: attributes:
label: Dify version label: Dify version
placeholder: 0.3.21 placeholder: 0.3.21
description: Hover over system tray icon or look at Settings description: Hover over system tray icon or look at Settings
validations: validations:
required: true required: true
- type: input - type: input
attributes: attributes:
label: Utility with translation issue label: Utility with translation issue
placeholder: Some area placeholder: Some area
description: Please input here the utility with the translation issue description: Please input here the utility with the translation issue
validations: validations:
required: true required: true
- type: input - type: input
attributes: attributes:
label: 🌐 Language affected label: 🌐 Language affected
placeholder: "German" placeholder: "German"
validations: validations:
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: ❌ Actual phrase(s) label: ❌ Actual phrase(s)
placeholder: What is there? Please include a screenshot as that is extremely helpful. placeholder: What is there? Please include a screenshot as that is extremely helpful.
validations: validations:
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: ✔️ Expected phrase(s) label: ✔️ Expected phrase(s)
placeholder: What was expected? placeholder: What was expected?
validations: validations:
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: ℹ Why is the current translation wrong label: ℹ Why is the current translation wrong
placeholder: Why do you feel this is incorrect? placeholder: Why do you feel this is incorrect?
validations: validations:
required: true required: true
\ No newline at end of file
failure-threshold: "error"
\ No newline at end of file
@@ -5,11 +5,7 @@ extends: default
 rules:
   brackets:
     max-spaces-inside: 1
-  comments-indentation: disable
   document-start: disable
-  indentation:
-    level: warning
   line-length: disable
-  new-line-at-end-of-file:
-    level: warning
-  trailing-spaces:
-    level: warning
+  truthy: disable
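The trimmed config above still extends yamllint's default rule set; only a few rules are relaxed. Running yamllint -c .yamllint . locally should reproduce what CI enforces. A short annotated sketch, my reading of the config rather than part of the commit, of what the surviving rules mean in practice:

on:                       # not flagged: "truthy" is disabled, so the GitHub
  release:                # Actions "on:" key avoids the usual truthy warning
    types: [ published ]  # allowed: "brackets" permits one space inside
supported_model_types:
  - llm                   # with the old "indentation: level: warning" override
                          # removed, indentation violations now fail as errors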
@@ -32,18 +32,18 @@ jobs:
      MOCK_SWITCH: true
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'
          cache: 'pip'
          cache-dependency-path: ./api/requirements.txt

      - name: Install dependencies
        run: pip install -r ./api/requirements.txt

      - name: Run pytest
        run: pytest api/tests/integration_tests/model_runtime/anthropic api/tests/integration_tests/model_runtime/azure_openai api/tests/integration_tests/model_runtime/openai api/tests/integration_tests/model_runtime/chatglm api/tests/integration_tests/model_runtime/google api/tests/integration_tests/model_runtime/xinference api/tests/integration_tests/model_runtime/huggingface_hub/test_llm.py
@@ -6,55 +6,55 @@ on:
      - 'main'
      - 'deploy/dev'
  release:
    types: [ published ]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    if: github.event.pull_request.draft == false
    steps:
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USER }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: langgenius/dify-api
          tags: |
            type=raw,value=latest,enable=${{ startsWith(github.ref, 'refs/tags/') }}
            type=ref,event=branch
            type=sha,enable=true,priority=100,prefix=,suffix=,format=long
            type=raw,value=${{ github.ref_name }},enable=${{ startsWith(github.ref, 'refs/tags/') }}

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: "{{defaultContext}}:api"
          platforms: ${{ startsWith(github.ref, 'refs/tags/') && 'linux/amd64,linux/arm64' || 'linux/amd64' }}
          build-args: |
            COMMIT_SHA=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.revision'] }}
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Deploy to server
        if: github.ref == 'refs/heads/deploy/dev'
        uses: appleboy/ssh-action@v0.1.8
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            ${{ secrets.SSH_SCRIPT }}
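For orientation, the four tags: rules in the metadata step combine roughly as follows. This is a hypothetical expansion based on docker/metadata-action's documented tag types, not actual output from this workflow:

# push to branch deploy/dev (no tag):
#   langgenius/dify-api:deploy-dev          (type=ref,event=branch; "/" sanitized to "-")
#   langgenius/dify-api:<full commit sha>   (type=sha, format=long, empty prefix)
# publishing a release tag such as 0.3.21:
#   langgenius/dify-api:latest              (type=raw,value=latest, enabled on refs/tags/*)
#   langgenius/dify-api:<full commit sha>
#   langgenius/dify-api:0.3.21              (type=raw,value=${{ github.ref_name }})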
@@ -6,55 +6,55 @@ on:
      - 'main'
      - 'deploy/dev'
  release:
    types: [ published ]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    if: github.event.pull_request.draft == false
    steps:
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USER }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: langgenius/dify-web
          tags: |
            type=raw,value=latest,enable=${{ startsWith(github.ref, 'refs/tags/') }}
            type=ref,event=branch
            type=sha,enable=true,priority=100,prefix=,suffix=,format=long
            type=raw,value=${{ github.ref_name }},enable=${{ startsWith(github.ref, 'refs/tags/') }}

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: "{{defaultContext}}:web"
          platforms: ${{ startsWith(github.ref, 'refs/tags/') && 'linux/amd64,linux/arm64' || 'linux/amd64' }}
          build-args: |
            COMMIT_SHA=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.revision'] }}
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Deploy to server
        if: github.ref == 'refs/heads/deploy/dev'
        uses: appleboy/ssh-action@v0.1.8
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            ${{ secrets.SSH_SCRIPT }}
@@ -7,7 +7,7 @@ name: Mark stale issues and pull requests
on:
  schedule:
    - cron: '0 3 * * *'

jobs:
  stale:
@@ -18,13 +18,13 @@ jobs:
      pull-requests: write
    steps:
      - uses: actions/stale@v5
        with:
          days-before-issue-stale: 15
          days-before-issue-close: 3
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          stale-issue-message: "Closed due to inactivity; if you have any questions, you can reopen it."
          stale-pr-message: "Closed due to inactivity; if you have any questions, you can reopen it."
          stale-issue-label: 'no-issue-activity'
          stale-pr-label: 'no-pr-activity'
          any-of-labels: 'duplicate,question,invalid,wontfix,no-issue-activity,no-pr-activity,enhancement,cant-reproduce,help-wanted'
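With these numbers, an issue or PR carrying one of the any-of-labels values is marked stale after 15 days without activity and closed 3 days after that, so roughly 18 days from its last activity in total (my arithmetic from the two settings above, not a documented guarantee of the action).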
@@ -18,37 +18,37 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup NodeJS
        uses: actions/setup-node@v4
        with:
          node-version: 18
          cache: yarn
          cache-dependency-path: ./web/package.json

      - name: Web dependencies
        run: |
          cd ./web
          yarn install --frozen-lockfile

      - name: Web style check
        run: |
          cd ./web
          yarn run lint

      - name: Super-linter
        uses: super-linter/super-linter/slim@v5
        env:
          BASH_SEVERITY: warning
          DEFAULT_BRANCH: main
          ERROR_ON_MISSING_EXEC_BIT: true
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          IGNORE_GENERATED_FILES: true
          IGNORE_GITIGNORED_FILES: true
          VALIDATE_BASH: true
          VALIDATE_BASH_EXEC: true
          VALIDATE_GITHUB_ACTIONS: true
          VALIDATE_DOCKERFILE_HADOLINT: true
          VALIDATE_YAML: true
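VALIDATE_YAML: true has super-linter run yamllint over the repository's YAML, which is presumably why the .yamllint rules earlier in this commit were tuned at the same time as the files were re-indented.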
@@ -17,4 +17,4 @@
- xinference
- openllm
- localai
- openai_api_compatible
\ No newline at end of file
@@ -16,24 +16,24 @@ help:
  url:
    en_US: https://console.anthropic.com/account/keys
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: anthropic_api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 API Key
        en_US: Enter your API Key
    - variable: anthropic_api_url
      label:
        en_US: API URL
      type: text-input
      required: false
      placeholder:
        zh_Hans: 在此输入您的 API URL
        en_US: Enter your API URL
@@ -3,32 +3,32 @@ label:
  en_US: claude-2.1
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 200000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 4096
    min: 1
    max: 4096
pricing:
  input: '8.00'
  output: '24.00'
  unit: '0.000001'
  currency: USD
\ No newline at end of file
@@ -3,32 +3,32 @@ label:
  en_US: claude-2
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 100000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 4096
    min: 1
    max: 4096
pricing:
  input: '8.00'
  output: '24.00'
  unit: '0.000001'
  currency: USD
\ No newline at end of file
@@ -2,32 +2,32 @@ model: claude-instant-1
label:
  en_US: claude-instant-1
model_type: llm
features: [ ]
model_properties:
  mode: chat
  context_size: 100000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 4096
    min: 1
    max: 4096
pricing:
  input: '1.63'
  output: '5.51'
  unit: '0.000001'
  currency: USD
\ No newline at end of file
@@ -13,10 +13,10 @@ help:
  url:
    en_US: https://azure.microsoft.com/en-us/products/ai-services/openai-service
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - customizable-model
model_credential_schema:
  model:
    label:
@@ -26,79 +26,79 @@ model_credential_schema:
      en_US: Enter your Deployment Name here, matching the Azure deployment name.
      zh_Hans: 在此输入您的部署名称,与 Azure 部署名称匹配。
  credential_form_schemas:
    - variable: openai_api_base
      label:
        en_US: API Endpoint URL
        zh_Hans: API 域名
      type: text-input
      required: true
      placeholder:
        zh_Hans: '在此输入您的 API 域名,如:https://example.com/xxx'
        en_US: 'Enter your API Endpoint, eg: https://example.com/xxx'
    - variable: openai_api_key
      label:
        en_US: API Key
        zh_Hans: API Key
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 API Key
        en_US: Enter your API key here
    - variable: base_model_name
      label:
        en_US: Base Model
        zh_Hans: 基础模型
      type: select
      required: true
      options:
        - label:
            en_US: gpt-35-turbo
          value: gpt-35-turbo
          show_on:
            - variable: __model_type
              value: llm
        - label:
            en_US: gpt-35-turbo-16k
          value: gpt-35-turbo-16k
          show_on:
            - variable: __model_type
              value: llm
        - label:
            en_US: gpt-4
          value: gpt-4
          show_on:
            - variable: __model_type
              value: llm
        - label:
            en_US: gpt-4-32k
          value: gpt-4-32k
          show_on:
            - variable: __model_type
              value: llm
        - label:
            en_US: gpt-4-1106-preview
          value: gpt-4-1106-preview
          show_on:
            - variable: __model_type
              value: llm
        - label:
            en_US: gpt-4-vision-preview
          value: gpt-4-vision-preview
          show_on:
            - variable: __model_type
              value: llm
        - label:
            en_US: gpt-35-turbo-instruct
          value: gpt-35-turbo-instruct
          show_on:
            - variable: __model_type
              value: llm
        - label:
            en_US: text-embedding-ada-002
          value: text-embedding-ada-002
          show_on:
            - variable: __model_type
              value: text-embedding
      placeholder:
        zh_Hans: 在此输入您的模型版本
        en_US: Enter your model version
\ No newline at end of file
@@ -8,30 +8,30 @@ icon_large:
background: "#FFF6F2"
help:
  title:
    en_US: Get your API Key from BAICHUAN AI
    zh_Hans: 从百川智能获取您的 API Key
  url:
    en_US: https://www.baichuan-ai.com
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 API Key
        en_US: Enter your API Key
    - variable: secret_key
      label:
        en_US: Secret Key
      type: secret-input
      required: false
      placeholder:
        zh_Hans: 在此输入您的 Secret Key
        en_US: Enter your Secret Key
@@ -3,40 +3,40 @@ label:
  en_US: Baichuan2-53B
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 4000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 1000
    min: 1
    max: 4000
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: with_search_enhance
    label:
      zh_Hans: 搜索增强
      en_US: Search Enhance
    type: boolean
    help:
      zh_Hans: 允许模型自行进行外部搜索,以增强生成结果。
      en_US: Allow the model to perform external search to enhance the generation results.
    required: false
\ No newline at end of file
@@ -3,40 +3,40 @@ label:
  en_US: Baichuan2-Turbo-192K
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 192000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 8000
    min: 1
    max: 192000
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: with_search_enhance
    label:
      zh_Hans: 搜索增强
      en_US: Search Enhance
    type: boolean
    help:
      zh_Hans: 允许模型自行进行外部搜索,以增强生成结果。
      en_US: Allow the model to perform external search to enhance the generation results.
    required: false
\ No newline at end of file
@@ -3,40 +3,40 @@ label:
  en_US: Baichuan2-Turbo
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 192000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 8000
    min: 1
    max: 192000
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: with_search_enhance
    label:
      zh_Hans: 搜索增强
      en_US: Search Enhance
    type: boolean
    help:
      zh_Hans: 允许模型自行进行外部搜索,以增强生成结果。
      en_US: Allow the model to perform external search to enhance the generation results.
    required: false
\ No newline at end of file
@@ -2,4 +2,4 @@ model: baichuan-text-embedding
model_type: text-embedding
model_properties:
  context_size: 512
  max_chunks: 16
\ No newline at end of file
@@ -13,16 +13,16 @@ help:
  url:
    en_US: https://github.com/THUDM/ChatGLM3
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: api_base
      label:
        en_US: API URL
      type: text-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 API URL
        en_US: Enter your API URL
@@ -3,19 +3,19 @@ label:
  en_US: ChatGLM2-6B-32K
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 32000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
    required: false
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 2000
    min: 1
    max: 32000
\ No newline at end of file
@@ -3,19 +3,19 @@ label:
  en_US: ChatGLM2-6B
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 2000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
    required: false
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 256
    min: 1
    max: 2000
\ No newline at end of file
@@ -3,20 +3,20 @@ label:
  en_US: ChatGLM3-6B-32K
model_type: llm
features:
  - tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 32000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
    required: false
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 8000
    min: 1
    max: 32000
\ No newline at end of file
@@ -3,20 +3,20 @@ label:
  en_US: ChatGLM3-6B
model_type: llm
features:
  - tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 8000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
    required: false
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 256
    min: 1
    max: 8000
\ No newline at end of file
@@ -14,18 +14,18 @@ help:
  url:
    en_US: https://dashboard.cohere.com/api-keys
supported_model_types:
  - rerank
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: api_key
      label:
        zh_Hans: API Key
        en_US: API Key
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 请填写 API Key
        en_US: Please fill in API Key
      show_on: [ ]
\ No newline at end of file
model: rerank-multilingual-v2.0
model_type: rerank
model_properties:
  context_size: 5120
\ No newline at end of file
@@ -16,17 +16,16 @@ help:
  url:
    en_US: https://ai.google.dev/
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: google_api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 API Key
        en_US: Enter your API Key
\ No newline at end of file
@@ -3,32 +3,32 @@ label:
  en_US: Gemini Pro Vision
model_type: llm
features:
  - vision
model_properties:
  mode: chat
  context_size: 12288
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 4096
    min: 1
    max: 4096
pricing:
  input: '0.00'
  output: '0.00'
  unit: '0.000001'
  currency: USD
\ No newline at end of file
@@ -3,32 +3,32 @@ label:
  en_US: Gemini Pro
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 30720
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 2048
    min: 1
    max: 2048
pricing:
  input: '0.00'
  output: '0.00'
  unit: '0.000001'
  currency: USD
\ No newline at end of file
@@ -2,9 +2,9 @@ provider: huggingface_hub
label:
  en_US: Hugging Face Model
icon_small:
  en_US: icon_s_en.svg
icon_large:
  en_US: icon_l_en.svg
background: "#FFF8DC"
help:
  title:
@@ -13,90 +13,90 @@ help:
  url:
    en_US: https://huggingface.co/settings/tokens
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - customizable-model
model_credential_schema:
  model:
    label:
      en_US: Model Name
      zh_Hans: 模型名称
  credential_form_schemas:
    - variable: huggingfacehub_api_type
      label:
        en_US: Endpoint Type
        zh_Hans: 端点类型
      type: radio
      required: true
      default: hosted_inference_api
      options:
        - value: hosted_inference_api
          label:
            en_US: Hosted Inference API
        - value: inference_endpoints
          label:
            en_US: Inference Endpoints
    - variable: huggingfacehub_api_token
      label:
        en_US: API Token
        zh_Hans: API Token
      type: secret-input
      required: true
      placeholder:
        en_US: Enter your Hugging Face Hub API Token here
        zh_Hans: 在此输入您的 Hugging Face Hub API Token
    - variable: huggingface_namespace
      label:
        en_US: 'User Name / Organization Name'
        zh_Hans: '用户名 / 组织名称'
      type: text-input
      required: true
      placeholder:
        en_US: 'Enter your User Name / Organization Name here'
        zh_Hans: '在此输入您的用户名 / 组织名称'
      show_on:
        - variable: __model_type
          value: text-embedding
        - variable: huggingfacehub_api_type
          value: inference_endpoints
    - variable: huggingfacehub_endpoint_url
      label:
        en_US: Endpoint URL
        zh_Hans: 端点 URL
      type: text-input
      required: true
      placeholder:
        en_US: Enter your Endpoint URL here
        zh_Hans: 在此输入您的端点 URL
      show_on:
        - variable: huggingfacehub_api_type
          value: inference_endpoints
    - variable: task_type
      label:
        en_US: Task
        zh_Hans: Task
      type: select
      options:
        - value: text2text-generation
          label:
            en_US: Text-to-Text Generation
          show_on:
            - variable: __model_type
              value: llm
        - value: text-generation
          label:
            en_US: Text Generation
            zh_Hans: 文本生成
          show_on:
            - variable: __model_type
              value: llm
        - value: feature-extraction
          label:
            en_US: Feature Extraction
          show_on:
            - variable: __model_type
              value: text-embedding
      show_on:
        - variable: huggingfacehub_api_type
          value: inference_endpoints
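The show_on lists in this schema appear to gate when a field or option is displayed: an entry is shown only while every referenced variable holds the listed value. A minimal sketch of the pattern with hypothetical field names, matching the shape of the schema above rather than copied from it:

- variable: example_endpoint_url   # hypothetical credential field
  label:
    en_US: Endpoint URL
  type: text-input
  required: true
  show_on:                         # rendered only when both conditions hold:
    - variable: __model_type       # the model type being configured is llm
      value: llm
    - variable: example_api_type   # and this other credential's value is
      value: inference_endpoints   # inference_endpoints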
@@ -2,7 +2,7 @@ provider: jina
label:
  en_US: Jina
description:
  en_US: Embedding Model Supported
icon_small:
  en_US: icon_s_en.svg
icon_large:
@@ -15,16 +15,16 @@ help:
  url:
    en_US: https://jina.ai/embeddings/
supported_model_types:
  - text-embedding
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 API Key
        en_US: Enter your API Key
\ No newline at end of file
@@ -6,4 +6,4 @@ model_properties:
pricing:
  input: '0.001'
  unit: '0.001'
  currency: USD
\ No newline at end of file
@@ -6,4 +6,4 @@ model_properties:
pricing:
  input: '0.001'
  unit: '0.001'
  currency: USD
\ No newline at end of file
@@ -13,10 +13,10 @@ help:
  url:
    en_US: https://github.com/go-skynet/LocalAI
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - customizable-model
model_credential_schema:
  model:
    label:
@@ -26,33 +26,33 @@ model_credential_schema:
      en_US: Enter your model name
      zh_Hans: 输入模型名称
  credential_form_schemas:
    - variable: completion_type
      show_on:
        - variable: __model_type
          value: llm
      label:
        en_US: Completion type
      type: select
      required: false
      default: chat_completion
      placeholder:
        zh_Hans: 选择对话类型
        en_US: Select completion type
      options:
        - value: completion
          label:
            en_US: Completion
            zh_Hans: 补全
        - value: chat_completion
          label:
            en_US: ChatCompletion
            zh_Hans: 对话
    - variable: server_url
      label:
        zh_Hans: 服务器URL
        en_US: Server url
      type: text-input
      required: true
      placeholder:
        zh_Hans: 在此输入LocalAI的服务器地址,如 https://example.com/xxx
        en_US: Enter the url of your LocalAI, for example https://example.com/xxx
\ No newline at end of file
@@ -3,27 +3,27 @@ label:
  en_US: Abab5-Chat
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 6144
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 6144
    min: 1
    max: 6144
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
pricing:
  input: '0.00'
  output: '0.015'
  unit: '0.001'
  currency: RMB
\ No newline at end of file
@@ -3,34 +3,34 @@ label:
  en_US: Abab5.5-Chat
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 16384
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 6144
    min: 1
    max: 16384
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: plugin_web_search
    required: false
    default: false
    type: boolean
    label:
      en_US: Enable Web Search
      zh_Hans: 开启网页搜索
pricing:
  input: '0.00'
  output: '0.015'
  unit: '0.001'
  currency: RMB
\ No newline at end of file
@@ -13,25 +13,25 @@ help:
  url:
    en_US: https://api.minimax.chat/user-center/basic-information/interface-key
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: minimax_api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 API Key
        en_US: Enter your API Key
    - variable: minimax_group_id
      label:
        en_US: Group ID
      type: text-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 Group ID
        en_US: Enter your group ID
\ No newline at end of file
@@ -6,4 +6,4 @@ model_properties:
pricing:
  input: '0.0005'
  unit: '0.001'
  currency: RMB
\ No newline at end of file
@@ -8,4 +8,4 @@
- gpt-3.5-turbo-1106
- gpt-3.5-turbo-0613
- gpt-3.5-turbo-instruct
- text-davinci-003
\ No newline at end of file
@@ -4,27 +4,27 @@ label:
  en_US: gpt-3.5-turbo-0613
model_type: llm
features:
  - multi-tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 4096
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 4096
pricing:
  input: '0.0015'
  output: '0.002'
  unit: '0.001'
  currency: USD
\ No newline at end of file
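A pattern worth noting in the rules above: an entry that only names a use_template appears to inherit the shared template's definition wholesale, while sibling keys (default, min, max, required) narrow it per model; that reading is suggested by the max_tokens entry, which keeps the template but pins its own bounds. A condensed sketch of the two forms, with illustrative numbers not taken from any one file:

parameter_rules:
  # bare reference: take the shared template's definition as-is
  - name: temperature
    use_template: temperature
  # reference plus overrides: keep the template, narrow the range per model
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 4096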
@@ -4,27 +4,27 @@ label:
  en_US: gpt-3.5-turbo-1106
model_type: llm
features:
  - multi-tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 16385
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 16385
pricing:
  input: '0.001'
  output: '0.002'
  unit: '0.001'
  currency: USD
\ No newline at end of file
@@ -4,27 +4,27 @@ label:
  en_US: gpt-3.5-turbo-16k-0613
model_type: llm
features:
  - multi-tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 16385
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 16385
pricing:
  input: '0.003'
  output: '0.004'
  unit: '0.001'
  currency: USD
\ No newline at end of file
@@ -4,27 +4,27 @@ label:
  en_US: gpt-3.5-turbo-16k
model_type: llm
features:
  - multi-tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 16385
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 16385
pricing:
  input: '0.003'
  output: '0.004'
  unit: '0.001'
  currency: USD
\ No newline at end of file
@@ -3,26 +3,26 @@ label:
  zh_Hans: gpt-3.5-turbo-instruct
  en_US: gpt-3.5-turbo-instruct
model_type: llm
features: [ ]
model_properties:
  mode: completion
  context_size: 4096
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 4096
pricing:
  input: '0.0015'
  output: '0.002'
  unit: '0.001'
  currency: USD
\ No newline at end of file
@@ -4,27 +4,27 @@ label:
  en_US: gpt-3.5-turbo
model_type: llm
features:
  - multi-tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 4096
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 4096
pricing:
  input: '0.001'
  output: '0.002'
  unit: '0.001'
  currency: USD
\ No newline at end of file
@@ -4,55 +4,55 @@ label:
  en_US: gpt-4-1106-preview
model_type: llm
features:
  - multi-tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 128000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 128000
  - name: seed
    label:
      zh_Hans: 种子
      en_US: Seed
    type: int
    help:
      zh_Hans: 如果指定,模型将尽最大努力进行确定性采样,使得重复的具有相同种子和参数的请求应该返回相同的结果。不能保证确定性,您应该参考 system_fingerprint
        响应参数来监视变化。
      en_US: If specified, model will make a best effort to sample deterministically,
        such that repeated requests with the same seed and parameters should return
        the same result. Determinism is not guaranteed, and you should refer to the
        system_fingerprint response parameter to monitor changes in the backend.
    required: false
    precision: 2
    min: 0
    max: 1
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: response_format
    type: string
    help:
      zh_Hans: 指定模型必须输出的格式
      en_US: specifying the format that the model must output
    required: false
    options:
      - text
      - json_object
pricing:
  input: '0.01'
  output: '0.03'
  unit: '0.001'
  currency: USD
\ No newline at end of file
@@ -4,55 +4,55 @@ label:
  en_US: gpt-4-32k
model_type: llm
features:
  - multi-tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 32768
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 32768
  - name: seed
    label:
      zh_Hans: 种子
      en_US: Seed
    type: int
    help:
      zh_Hans: 如果指定,模型将尽最大努力进行确定性采样,使得重复的具有相同种子和参数的请求应该返回相同的结果。不能保证确定性,您应该参考 system_fingerprint
        响应参数来监视变化。
      en_US: If specified, model will make a best effort to sample deterministically,
        such that repeated requests with the same seed and parameters should return
        the same result. Determinism is not guaranteed, and you should refer to the
        system_fingerprint response parameter to monitor changes in the backend.
    required: false
    precision: 2
    min: 0
    max: 1
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: response_format
    type: string
    help:
      zh_Hans: 指定模型必须输出的格式
      en_US: specifying the format that the model must output
    required: false
    options:
      - text
      - json_object
pricing:
  input: '0.06'
  output: '0.12'
  unit: '0.001'
  currency: USD
\ No newline at end of file
@@ -4,54 +4,54 @@ label:
  en_US: gpt-4-vision-preview
model_type: llm
features:
  - vision
model_properties:
  mode: chat
  context_size: 128000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 128000
  - name: seed
    label:
      zh_Hans: 种子
      en_US: Seed
    type: int
    help:
      zh_Hans: 如果指定,模型将尽最大努力进行确定性采样,使得重复的具有相同种子和参数的请求应该返回相同的结果。不能保证确定性,您应该参考 system_fingerprint
        响应参数来监视变化。
      en_US: If specified, model will make a best effort to sample deterministically,
        such that repeated requests with the same seed and parameters should return
        the same result. Determinism is not guaranteed, and you should refer to the
        system_fingerprint response parameter to monitor changes in the backend.
    required: false
    precision: 2
    min: 0
    max: 1
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: response_format
    type: string
    help:
      zh_Hans: 指定模型必须输出的格式
      en_US: specifying the format that the model must output
    required: false
    options:
      - text
      - json_object
pricing:
  input: '0.01'
  output: '0.03'
  unit: '0.001'
  currency: USD
\ No newline at end of file
@@ -4,55 +4,55 @@ label:
  en_US: gpt-4
model_type: llm
features:
  - multi-tool-call
  - agent-thought
model_properties:
  mode: chat
  context_size: 8192
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 8192
  - name: seed
    label:
      zh_Hans: 种子
      en_US: Seed
    type: int
    help:
      zh_Hans: 如果指定,模型将尽最大努力进行确定性采样,使得重复的具有相同种子和参数的请求应该返回相同的结果。不能保证确定性,您应该参考 system_fingerprint
        响应参数来监视变化。
      en_US: If specified, model will make a best effort to sample deterministically,
        such that repeated requests with the same seed and parameters should return
        the same result. Determinism is not guaranteed, and you should refer to the
        system_fingerprint response parameter to monitor changes in the backend.
    required: false
    precision: 2
    min: 0
    max: 1
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: response_format
    type: string
    help:
      zh_Hans: 指定模型必须输出的格式
      en_US: specifying the format that the model must output
    required: false
    options:
      - text
      - json_object
pricing:
  input: '0.03'
  output: '0.06'
  unit: '0.001'
  currency: USD
\ No newline at end of file
@@ -3,26 +3,26 @@ label:
  zh_Hans: text-davinci-003
  en_US: text-davinci-003
model_type: llm
features: [ ]
model_properties:
  mode: completion
  context_size: 4096
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 4096
pricing:
  input: '0.001'
  output: '0.002'
  unit: '0.001'
  currency: USD
\ No newline at end of file
@@ -2,4 +2,4 @@ model: text-moderation-stable
model_type: moderation
model_properties:
  max_chunks: 32
  max_characters_per_chunk: 2000
\ No newline at end of file
@@ -2,8 +2,8 @@ provider: openai
label:
  en_US: OpenAI
description:
  en_US: Models provided by OpenAI, such as GPT-3.5-Turbo and GPT-4.
  zh_Hans: OpenAI 提供的模型,例如 GPT-3.5-Turbo 和 GPT-4。
icon_small:
  en_US: icon_s_en.svg
icon_large:
@@ -16,13 +16,13 @@ help:
  url:
    en_US: https://platform.openai.com/account/api-keys
supported_model_types:
  - llm
  - text-embedding
  - speech2text
  - moderation
configurate_methods:
  - predefined-model
  - customizable-model
model_credential_schema:
  model:
    label:
@@ -32,57 +32,57 @@ model_credential_schema:
      en_US: Enter your model name
      zh_Hans: 输入模型名称
  credential_form_schemas:
    - variable: openai_api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 API Key
        en_US: Enter your API Key
    - variable: openai_organization
      label:
        zh_Hans: 组织 ID
        en_US: Organization
      type: text-input
      required: false
      placeholder:
        zh_Hans: 在此输入您的组织 ID
        en_US: Enter your Organization ID
    - variable: openai_api_base
      label:
        zh_Hans: API Base
        en_US: API Base
      type: text-input
      required: false
      placeholder:
        zh_Hans: 在此输入您的 API Base
        en_US: Enter your API Base
provider_credential_schema:
  credential_form_schemas:
    - variable: openai_api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 API Key
        en_US: Enter your API Key
    - variable: openai_organization
      label:
        zh_Hans: 组织 ID
        en_US: Organization
      type: text-input
      required: false
      placeholder:
        zh_Hans: 在此输入您的组织 ID
        en_US: Enter your Organization ID
    - variable: openai_api_base
      label:
        zh_Hans: API Base
        en_US: API Base
      type: text-input
      required: false
      placeholder:
        zh_Hans: 在此输入您的 API Base
        en_US: Enter your API Base
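For reference, a credential payload satisfying the provider schema above would carry these three variables; only openai_api_key is marked required, and the values shown are placeholders:

# hypothetical credentials matching provider_credential_schema above
openai_api_key: <your-secret-key>
openai_organization: <optional-organization-id>
openai_api_base: <optional-custom-base-url>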
@@ -2,4 +2,4 @@ model: whisper-1
model_type: speech2text
model_properties:
  file_upload_limit: 25
  supported_file_extensions: mp3,mp4,mpeg,mpga,m4a,wav,webm
\ No newline at end of file
@@ -6,4 +6,4 @@ model_properties:
pricing:
  input: '0.0001'
  unit: '0.001'
  currency: USD
\ No newline at end of file
@@ -5,73 +5,73 @@ description:
  en_US: Model providers compatible with OpenAI's API standard, such as LM Studio.
  zh_Hans: 兼容 OpenAI API 的模型供应商,例如 LM Studio 。
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - customizable-model
model_credential_schema:
  model:
    label:
      en_US: Model Name
      zh_Hans: 模型名称
    placeholder:
      en_US: Enter full model name
      zh_Hans: 输入模型全称
  credential_form_schemas:
    - variable: api_key
      label:
        en_US: API Key
      type: secret-input
      required: false
      placeholder:
        zh_Hans: 在此输入您的 API Key
        en_US: Enter your API Key
    - variable: endpoint_url
      label:
        zh_Hans: API endpoint URL
        en_US: API endpoint URL
      type: text-input
      required: true
      placeholder:
        zh_Hans: Base URL, e.g. https://api.openai.com/v1
        en_US: Base URL, e.g. https://api.openai.com/v1
    - variable: mode
      show_on:
        - variable: __model_type
          value: llm
      label:
        en_US: Completion mode
      type: select
      required: false
      default: chat
      placeholder:
        zh_Hans: 选择对话类型
        en_US: Select completion mode
      options:
        - value: completion
          label:
            en_US: Completion
            zh_Hans: 补全
        - value: chat
          label:
            en_US: Chat
            zh_Hans: 对话
    - variable: context_size
      label:
        zh_Hans: 模型上下文长度
        en_US: Model context size
      required: true
      type: text-input
      default: '4096'
      placeholder:
        zh_Hans: 在此输入您的模型上下文长度
        en_US: Enter your Model context size
    - variable: max_tokens_to_sample
      label:
        zh_Hans: 最大 token 上限
        en_US: Upper bound for max tokens
      show_on:
        - variable: __model_type
          value: llm
      default: '4096'
      type: text-input
\ No newline at end of file
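Putting the schema above together, registering a custom OpenAI-compatible model would supply something like the following; the values are illustrative, endpoint_url and context_size are the required fields, and mode (shown only for LLM-type models via show_on) defaults to chat:

# hypothetical custom-model credentials for an OpenAI-compatible server
model: my-local-model
api_key: <optional-key>
endpoint_url: http://localhost:1234/v1
mode: chat
context_size: '4096'
max_tokens_to_sample: '4096'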
@@ -13,10 +13,10 @@ help:
  url:
    en_US: https://github.com/bentoml/OpenLLM
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - customizable-model
model_credential_schema:
  model:
    label:
@@ -26,12 +26,12 @@ model_credential_schema:
      en_US: Enter your model name
      zh_Hans: 输入模型名称
  credential_form_schemas:
    - variable: server_url
      label:
        zh_Hans: 服务器URL
        en_US: Server url
      type: text-input
      required: true
      placeholder:
        zh_Hans: 在此输入OpenLLM的服务器地址,如 https://example.com/xxx
        en_US: Enter the url of your OpenLLM, for example https://example.com/xxx
\ No newline at end of file
@@ -13,29 +13,29 @@ help:
  url:
    en_US: https://replicate.com/account/api-tokens
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - customizable-model
model_credential_schema:
  model:
    label:
      en_US: Model Name
      zh_Hans: 模型名称
  credential_form_schemas:
    - variable: replicate_api_token
      label:
        en_US: API Key
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 Replicate API Key
        en_US: Enter your Replicate API Key
    - variable: model_version
      label:
        en_US: Model Version
      type: text-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的模型版本
        en_US: Enter your model version
\ No newline at end of file
@@ -5,29 +5,29 @@ model_type: llm
model_properties:
  mode: chat
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 0.5
    help:
      zh_Hans: 核采样阈值。用于决定结果随机性,取值越高随机性越强即相同的问题得到的不同答案的可能性越高。
      en_US: Kernel sampling threshold. Used to determine the randomness of the results. The higher the value, the stronger the randomness, that is, the higher the possibility of getting different answers to the same question.
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 4096
    help:
      zh_Hans: 模型回答的tokens的最大长度。
      en_US: The maximum number of tokens in the model's reply.
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    default: 4
    min: 1
    max: 6
    help:
      zh_Hans: 从 k 个候选中随机选择一个(非等概率)。
      en_US: Randomly select one from k candidates (non-equal probability).
    required: false
\ No newline at end of file
@@ -6,29 +6,29 @@ model_type: llm
model_properties:
  mode: chat
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 0.5
    help:
      zh_Hans: 核采样阈值。用于决定结果随机性,取值越高随机性越强即相同的问题得到的不同答案的可能性越高。
      en_US: Kernel sampling threshold. Used to determine the randomness of the results. The higher the value, the stronger the randomness, that is, the higher the possibility of getting different answers to the same question.
  - name: max_tokens
    use_template: max_tokens
    default: 2048
    min: 1
    max: 8192
    help:
      zh_Hans: 模型回答的tokens的最大长度。
      en_US: The maximum number of tokens in the model's reply.
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    default: 4
    min: 1
    max: 6
    help:
      zh_Hans: 从 k 个候选中随机选择一个(非等概率)。
      en_US: Randomly select one from k candidates (non-equal probability).
    required: false
\ No newline at end of file
@@ -5,29 +5,29 @@ model_type: llm
model_properties:
  mode: chat
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 0.5
    help:
      zh_Hans: 核采样阈值。用于决定结果随机性,取值越高随机性越强即相同的问题得到的不同答案的可能性越高。
      en_US: Kernel sampling threshold. Used to determine the randomness of the results. The higher the value, the stronger the randomness, that is, the higher the possibility of getting different answers to the same question.
  - name: max_tokens
    use_template: max_tokens
    default: 2048
    min: 1
    max: 8192
    help:
      zh_Hans: 模型回答的tokens的最大长度。
      en_US: The maximum number of tokens in the model's reply.
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    default: 4
    min: 1
    max: 6
    help:
      zh_Hans: 从 k 个候选中随机选择一个(非等概率)。
      en_US: Randomly select one from k candidates (non-equal probability).
    required: false
\ No newline at end of file
@@ -15,32 +15,32 @@ help:
  url:
    en_US: https://www.xfyun.cn/solutions/xinghuoAPI
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: app_id
      label:
        en_US: APPID
      type: text-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 APPID
        en_US: Enter your APPID
    - variable: api_secret
      label:
        en_US: APISecret
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 APISecret
        en_US: Enter your APISecret
    - variable: api_key
      label:
        en_US: APIKey
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 APIKey
        en_US: Enter your APIKey
@@ -2,9 +2,9 @@ provider: togetherai
label:
  en_US: together.ai
icon_small:
  en_US: togetherai_square.svg
icon_large:
  en_US: togetherai.svg
background: "#F1EFED"
help:
  title:
@@ -13,63 +13,63 @@ help:
  url:
    en_US: https://api.together.xyz/
supported_model_types:
  - llm
configurate_methods:
  - customizable-model
model_credential_schema:
  model:
    label:
      en_US: Model Name
      zh_Hans: 模型名称
    placeholder:
      en_US: Enter full model name
      zh_Hans: 输入模型全称
  credential_form_schemas:
    - variable: api_key
      required: true
      label:
        en_US: API Key
      type: secret-input
      placeholder:
        zh_Hans: 在此输入您的 API Key
        en_US: Enter your API Key
    - variable: mode
      show_on:
        - variable: __model_type
          value: llm
      label:
        en_US: Completion mode
      type: select
      required: false
      default: chat
      placeholder:
        zh_Hans: 选择对话类型
        en_US: Select completion mode
      options:
        - value: completion
          label:
            en_US: Completion
            zh_Hans: 补全
        - value: chat
          label:
            en_US: Chat
            zh_Hans: 对话
    - variable: context_size
      label:
        zh_Hans: 模型上下文长度
        en_US: Model context size
      required: true
      type: text-input
      default: '4096'
      placeholder:
        zh_Hans: 在此输入您的模型上下文长度
        en_US: Enter your Model context size
    - variable: max_tokens_to_sample
      label:
        zh_Hans: 最大 token 上限
        en_US: Upper bound for max tokens
      show_on:
        - variable: __model_type
          value: llm
      default: '4096'
      type: text-input
\ No newline at end of file
@@ -6,52 +6,52 @@ model_properties:
  mode: completion
  context_size: 32000
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 1.0
    min: 0.0
    max: 2.0
    help:
      zh_Hans: 用于控制随机性和多样性的程度。具体来说,temperature值控制了生成文本时对每个候选词的概率分布进行平滑的程度。较高的temperature值会降低概率分布的峰值,使得更多的低概率词被选择,生成结果更加多样化;而较低的temperature值则会增强概率分布的峰值,使得高概率词更容易被选择,生成结果更加确定。
      en_US: Used to control the degree of randomness and diversity. Specifically, the temperature value controls the degree to which the probability distribution of each candidate word is smoothed when generating text. A higher temperature value will reduce the peak value of the probability distribution, allowing more low-probability words to be selected, and the generated results will be more diverse; while a lower temperature value will enhance the peak value of the probability distribution, making it easier for high-probability words to be selected, and the generated results are more certain.
  - name: top_p
    use_template: top_p
    default: 0.8
    help:
      zh_Hans: 生成过程中核采样方法概率阈值,例如,取值为0.8时,仅保留概率加起来大于等于0.8的最可能token的最小集合作为候选集。取值范围为(0,1.0),取值越大,生成的随机性越高;取值越低,生成的确定性越高。
      en_US: The probability threshold of the kernel sampling method during the generation process. For example, when the value is 0.8, only the smallest set of the most likely tokens with a sum of probabilities greater than or equal to 0.8 is retained as the candidate set. The value range is (0,1.0). The larger the value, the higher the randomness generated; the lower the value, the higher the certainty generated.
  - name: max_tokens
    use_template: max_tokens
    default: 2000
    min: 1
    max: 2000
    help:
      zh_Hans: 用于限制模型生成token的数量,max_tokens设置的是生成上限,并不表示一定会生成这么多的token数量。
      en_US: It is used to limit the number of tokens generated by the model. max_tokens sets the upper limit of generation, which does not mean that so many tokens will be generated.
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 生成时,采样候选集的大小。例如,取值为50时,仅将单次生成中得分最高的50个token组成随机采样的候选集。取值越大,生成的随机性越高;取值越小,生成的确定性越高。默认不传递该参数,取值为None或当top_k大于100时,表示不启用top_k策略,此时,仅有top_p策略生效。
      en_US: The size of the sample candidate set when generated. For example, when the value is 50, only the 50 highest-scoring tokens in a single generation form a randomly sampled candidate set. The larger the value, the higher the randomness generated; the smaller the value, the higher the certainty generated. This parameter is not passed by default; when the value is None or top_k is greater than 100, the top_k policy is not enabled and only the top_p policy takes effect.
    required: false
  - name: seed
    label:
      zh_Hans: 随机种子
      en_US: Random seed
    type: int
    default: 1234
    help:
      zh_Hans: 生成时,随机数的种子,用于控制模型生成的随机性。如果使用相同的种子,每次运行生成的结果都将相同;当需要复现模型的生成结果时,可以使用相同的种子。seed参数支持无符号64位整数类型。默认值 1234。
      en_US: When generating, the random number seed is used to control the randomness of model generation. If you use the same seed, the results generated by each run will be the same; when you need to reproduce the results of the model, you can use the same seed. The seed parameter supports unsigned 64-bit integer types. The default value is 1234.
    required: false
  - name: repetition_penalty
    label:
      en_US: Repetition penalty
    type: float
    default: 1.1
    help:
      zh_Hans: 用于控制模型生成时的重复度。提高repetition_penalty时可以降低模型生成的重复度。1.0表示不做惩罚。
      en_US: Used to control the repetition of model generation. Increasing the repetition_penalty can reduce the repetition of model generation. 1.0 means no penalty.
\ No newline at end of file
@@ -6,53 +6,53 @@
  mode: completion
  context_size: 8192
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 1.0
    min: 0.0
    max: 2.0
    help:
      zh_Hans: 用于控制随机性和多样性的程度。具体来说,temperature值控制了生成文本时对每个候选词的概率分布进行平滑的程度。较高的temperature值会降低概率分布的峰值,使得更多的低概率词被选择,生成结果更加多样化;而较低的temperature值则会增强概率分布的峰值,使得高概率词更容易被选择,生成结果更加确定。
      en_US: Used to control the degree of randomness and diversity. Specifically, the temperature value controls the degree to which the probability distribution of each candidate word is smoothed when generating text. A higher temperature value will reduce the peak value of the probability distribution, allowing more low-probability words to be selected, and the generated results will be more diverse; while a lower temperature value will enhance the peak value of the probability distribution, making it easier for high-probability words to be selected, and the generated results are more certain.
  - name: top_p
    use_template: top_p
    default: 0.8
    help:
      zh_Hans: 生成过程中核采样方法概率阈值,例如,取值为0.8时,仅保留概率加起来大于等于0.8的最可能token的最小集合作为候选集。取值范围为(0,1.0),取值越大,生成的随机性越高;取值越低,生成的确定性越高。
      en_US: The probability threshold of the kernel sampling method during the generation process. For example, when the value is 0.8, only the smallest set of the most likely tokens with a sum of probabilities greater than or equal to 0.8 is retained as the candidate set. The value range is (0,1.0). The larger the value, the higher the randomness generated; the lower the value, the higher the certainty generated.
  - name: max_tokens
    use_template: max_tokens
    default: 1500
    min: 1
    max: 1500
    help:
      zh_Hans: 用于限制模型生成token的数量,max_tokens设置的是生成上限,并不表示一定会生成这么多的token数量。
      en_US: It is used to limit the number of tokens generated by the model. max_tokens sets the upper limit of generation, which does not mean that so many tokens will be generated.
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 生成时,采样候选集的大小。例如,取值为50时,仅将单次生成中得分最高的50个token组成随机采样的候选集。取值越大,生成的随机性越高;取值越小,生成的确定性越高。默认不传递该参数,取值为None或当top_k大于100时,表示不启用top_k策略,此时,仅有top_p策略生效。
      en_US: The size of the sample candidate set when generated. For example, when the value is 50, only the 50 highest-scoring tokens in a single generation form a randomly sampled candidate set. The larger the value, the higher the randomness generated; the smaller the value, the higher the certainty generated. This parameter is not passed by default; when the value is None or top_k is greater than 100, the top_k policy is not enabled and only the top_p policy takes effect.
    required: false
  - name: seed
    label:
      zh_Hans: 随机种子
      en_US: Random seed
    type: int
    default: 1234
    help:
      zh_Hans: 生成时,随机数的种子,用于控制模型生成的随机性。如果使用相同的种子,每次运行生成的结果都将相同;当需要复现模型的生成结果时,可以使用相同的种子。seed参数支持无符号64位整数类型。默认值 1234。
      en_US: When generating, the random number seed is used to control the randomness of model generation. If you use the same seed, the results generated by each run will be the same; when you need to reproduce the results of the model, you can use the same seed. The seed parameter supports unsigned 64-bit integer types. The default value is 1234.
    required: false
  - name: repetition_penalty
    label:
      en_US: Repetition penalty
    type: float
    default: 1.1
    help:
      zh_Hans: 用于控制模型生成时的重复度。提高repetition_penalty时可以降低模型生成的重复度。1.0表示不做惩罚。
      en_US: Used to control the repetition of model generation. Increasing the repetition_penalty can reduce the repetition of model generation. 1.0 means no penalty.
    required: false
\ No newline at end of file
@@ -15,16 +15,16 @@ help:
  url:
    en_US: https://dashscope.console.aliyun.com/api-key_management
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: dashscope_api_key
      label:
        en_US: APIKey
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 APIKey
        en_US: Enter your APIKey
@@ -3,34 +3,34 @@ label:
  en_US: Ernie Bot 4
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 4800
parameter_rules:
  - name: temperature
    use_template: temperature
    min: 0.1
    max: 1.0
    default: 0.8
  - name: top_p
    use_template: top_p
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 256
    min: 1
    max: 4800
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: disable_search
    label:
      zh_Hans: 禁用搜索
      en_US: Disable Search
    type: boolean
    help:
      zh_Hans: 禁用模型自行进行外部搜索。
      en_US: Disable the model from performing external searches.
    required: false
\ No newline at end of file
@@ -3,34 +3,34 @@ label:
  en_US: Ernie Bot 8k
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 8000
parameter_rules:
  - name: temperature
    use_template: temperature
    min: 0.1
    max: 1.0
    default: 0.8
  - name: top_p
    use_template: top_p
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 1024
    min: 1
    max: 8000
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: disable_search
    label:
      zh_Hans: 禁用搜索
      en_US: Disable Search
    type: boolean
    help:
      zh_Hans: 禁用模型自行进行外部搜索。
      en_US: Disable the model from performing external searches.
    required: false
\ No newline at end of file
@@ -3,25 +3,25 @@ label:
  en_US: Ernie Bot Turbo
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 11200
parameter_rules:
  - name: temperature
    use_template: temperature
    min: 0.1
    max: 1.0
    default: 0.8
  - name: top_p
    use_template: top_p
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 1024
    min: 1
    max: 11200
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
\ No newline at end of file
@@ -3,34 +3,34 @@ label:
  en_US: Ernie Bot
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 4800
parameter_rules:
  - name: temperature
    use_template: temperature
    min: 0.1
    max: 1.0
    default: 0.8
  - name: top_p
    use_template: top_p
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 256
    min: 1
    max: 4800
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: disable_search
    label:
      zh_Hans: 禁用搜索
      en_US: Disable Search
    type: boolean
    help:
      zh_Hans: 禁用模型自行进行外部搜索。
      en_US: Disable the model from performing external searches.
    required: false
\ No newline at end of file
@@ -16,24 +16,24 @@ help:
  url:
    en_US: https://cloud.baidu.com/wenxin.html
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 API Key
        en_US: Enter your API Key
    - variable: secret_key
      label:
        en_US: Secret Key
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 Secret Key
        en_US: Enter your Secret Key
\ No newline at end of file
@@ -13,11 +13,11 @@ help:
  url:
    en_US: https://github.com/xorbitsai/inference
supported_model_types:
  - llm
  - text-embedding
  - rerank
configurate_methods:
  - customizable-model
model_credential_schema:
  model:
    label:
@@ -27,21 +27,21 @@ model_credential_schema:
      en_US: Enter your model name
      zh_Hans: 输入模型名称
  credential_form_schemas:
    - variable: server_url
      label:
        zh_Hans: 服务器URL
        en_US: Server url
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 在此输入Xinference的服务器地址,如 https://example.com/xxx
        en_US: Enter the url of your Xinference, for example https://example.com/xxx
    - variable: model_uid
      label:
        zh_Hans: 模型UID
        en_US: Model uid
      type: text-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的Model UID
        en_US: Enter the model uid
\ No newline at end of file
@@ -5,18 +5,18 @@ model_type: llm
model_properties:
  mode: chat
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 0.9
    min: 0.0
    max: 1.0
    help:
      zh_Hans: 采样温度,控制输出的随机性,必须为正数取值范围是:(0.0,1.0],不能等于 0,默认值为 0.95 值越大,会使输出更随机,更具创造性;值越小,输出会更加稳定或确定建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
      en_US: Sampling temperature, controls the randomness of the output, must be a positive number. The value range is (0.0,1.0], which cannot be equal to 0. The default value is 0.95. The larger the value, the more random and creative the output will be; the smaller the value, The output will be more stable or certain. It is recommended that you adjust the top_p or temperature parameters according to the application scenario, but do not adjust both parameters at the same time.
  - name: top_p
    use_template: top_p
    default: 0.7
    help:
      zh_Hans: 用温度取样的另一种方法,称为核取样取值范围是:(0.0, 1.0) 开区间,不能等于 0 或 1,默认值为 0.7 模型考虑具有 top_p 概率质量tokens的结果例如:0.1 意味着模型解码器只考虑从前 10% 的概率的候选集中取 tokens 建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
      en_US: Another method of temperature sampling is called kernel sampling. The value range is (0.0, 1.0) open interval, which cannot be equal to 0 or 1. The default value is 0.7. The model considers the results with top_p probability mass tokens. For example 0.1 means The model decoder only considers tokens from the candidate set with the top 10% probability. It is recommended that you adjust the top_p or temperature parameters according to the application scenario, but do not adjust both parameters at the same time.
deprecated: true
\ No newline at end of file
@@ -5,18 +5,18 @@ model_type: llm
model_properties:
  mode: chat
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 0.9
    min: 0.0
    max: 1.0
    help:
      zh_Hans: 采样温度,控制输出的随机性,必须为正数取值范围是:(0.0,1.0],不能等于 0,默认值为 0.95 值越大,会使输出更随机,更具创造性;值越小,输出会更加稳定或确定建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
      en_US: Sampling temperature, controls the randomness of the output, must be a positive number. The value range is (0.0,1.0], which cannot be equal to 0. The default value is 0.95. The larger the value, the more random and creative the output will be; the smaller the value, The output will be more stable or certain. It is recommended that you adjust the top_p or temperature parameters according to the application scenario, but do not adjust both parameters at the same time.
  - name: top_p
    use_template: top_p
    default: 0.7
    help:
      zh_Hans: 用温度取样的另一种方法,称为核取样取值范围是:(0.0, 1.0) 开区间,不能等于 0 或 1,默认值为 0.7 模型考虑具有 top_p 概率质量tokens的结果例如:0.1 意味着模型解码器只考虑从前 10% 的概率的候选集中取 tokens 建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
      en_US: Another method of temperature sampling is called kernel sampling. The value range is (0.0, 1.0) open interval, which cannot be equal to 0 or 1. The default value is 0.7. The model considers the results with top_p probability mass tokens. For example 0.1 means The model decoder only considers tokens from the candidate set with the top 10% probability. It is recommended that you adjust the top_p or temperature parameters according to the application scenario, but do not adjust both parameters at the same time.
deprecated: true
\ No newline at end of file
@@ -5,18 +5,18 @@ model_type: llm
model_properties:
  mode: chat
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 0.9
    min: 0.0
    max: 1.0
    help:
      zh_Hans: 采样温度,控制输出的随机性,必须为正数取值范围是:(0.0,1.0],不能等于 0,默认值为 0.95 值越大,会使输出更随机,更具创造性;值越小,输出会更加稳定或确定建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
      en_US: Sampling temperature, controls the randomness of the output, must be a positive number. The value range is (0.0,1.0], which cannot be equal to 0. The default value is 0.95. The larger the value, the more random and creative the output will be; the smaller the value, The output will be more stable or certain. It is recommended that you adjust the top_p or temperature parameters according to the application scenario, but do not adjust both parameters at the same time.
  - name: top_p
    use_template: top_p
    default: 0.7
    help:
      zh_Hans: 用温度取样的另一种方法,称为核取样取值范围是:(0.0, 1.0) 开区间,不能等于 0 或 1,默认值为 0.7 模型考虑具有 top_p 概率质量tokens的结果例如:0.1 意味着模型解码器只考虑从前 10% 的概率的候选集中取 tokens 建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
      en_US: Another method of temperature sampling is called kernel sampling. The value range is (0.0, 1.0) open interval, which cannot be equal to 0 or 1. The default value is 0.7. The model considers the results with top_p probability mass tokens. For example 0.1 means The model decoder only considers tokens from the candidate set with the top 10% probability. It is recommended that you adjust the top_p or temperature parameters according to the application scenario, but do not adjust both parameters at the same time.
deprecated: true
\ No newline at end of file
@@ -5,18 +5,18 @@ model_type: llm
model_properties:
  mode: chat
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 0.9
    min: 0.0
    max: 1.0
    help:
      zh_Hans: 采样温度,控制输出的随机性,必须为正数取值范围是:(0.0,1.0],不能等于 0,默认值为 0.95 值越大,会使输出更随机,更具创造性;值越小,输出会更加稳定或确定建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
      en_US: Sampling temperature, controls the randomness of the output, must be a positive number. The value range is (0.0,1.0], which cannot be equal to 0. The default value is 0.95. The larger the value, the more random and creative the output will be; the smaller the value, The output will be more stable or certain. It is recommended that you adjust the top_p or temperature parameters according to the application scenario, but do not adjust both parameters at the same time.
  - name: top_p
    use_template: top_p
    default: 0.7
    help:
      zh_Hans: 用温度取样的另一种方法,称为核取样取值范围是:(0.0, 1.0) 开区间,不能等于 0 或 1,默认值为 0.7 模型考虑具有 top_p 概率质量tokens的结果例如:0.1 意味着模型解码器只考虑从前 10% 的概率的候选集中取 tokens 建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
      en_US: Another method of temperature sampling is called kernel sampling. The value range is (0.0, 1.0) open interval, which cannot be equal to 0 or 1. The default value is 0.7. The model considers the results with top_p probability mass tokens. For example 0.1 means The model decoder only considers tokens from the candidate set with the top 10% probability. It is recommended that you adjust the top_p or temperature parameters according to the application scenario, but do not adjust both parameters at the same time.
deprecated: true
\ No newline at end of file
@@ -5,38 +5,38 @@ model_type: llm
model_properties:
  mode: chat
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 0.95
    min: 0.0
    max: 1.0
    help:
      zh_Hans: 采样温度,控制输出的随机性,必须为正数取值范围是:(0.0,1.0],不能等于 0,默认值为 0.95 值越大,会使输出更随机,更具创造性;值越小,输出会更加稳定或确定建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
      en_US: Sampling temperature, controls the randomness of the output, must be a positive number. The value range is (0.0,1.0], which cannot be equal to 0. The default value is 0.95. The larger the value, the more random and creative the output will be; the smaller the value, The output will be more stable or certain. It is recommended that you adjust the top_p or temperature parameters according to the application scenario, but do not adjust both parameters at the same time.
  - name: top_p
    use_template: top_p
    default: 0.7
    help:
      zh_Hans: 用温度取样的另一种方法,称为核取样取值范围是:(0.0, 1.0) 开区间,不能等于 0 或 1,默认值为 0.7 模型考虑具有 top_p 概率质量tokens的结果例如:0.1 意味着模型解码器只考虑从前 10% 的概率的候选集中取 tokens 建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
      en_US: Another method of temperature sampling is called kernel sampling. The value range is (0.0, 1.0) open interval, which cannot be equal to 0 or 1. The default value is 0.7. The model considers the results with top_p probability mass tokens. For example 0.1 means The model decoder only considers tokens from the candidate set with the top 10% probability. It is recommended that you adjust the top_p or temperature parameters according to the application scenario, but do not adjust both parameters at the same time.
  - name: incremental
    label:
      zh_Hans: 增量返回
      en_US: Incremental
    type: boolean
    help:
      zh_Hans: SSE接口调用时,用于控制每次返回内容方式是增量还是全量,不提供此参数时默认为增量返回,true 为增量返回,false 为全量返回。
      en_US: When the SSE interface is called, it is used to control whether the content is returned incrementally or in full. If this parameter is not provided, the default is incremental return. true means incremental return, false means full return.
    required: false
  - name: return_type
    label:
      zh_Hans: 回复类型
      en_US: Return Type
    type: string
    help:
      zh_Hans: 用于控制每次返回内容的类型,空或者没有此字段时默认按照 json_string 返回,json_string 返回标准的 JSON 字符串,text 返回原始的文本内容。
      en_US: Used to control the type of content returned each time. When it is empty or does not have this field, it will be returned as json_string by default. json_string returns a standard JSON string, and text returns the original text content.
    required: false
    options:
      - text
      - json_string
\ No newline at end of file
model: text_embedding
model_type: text-embedding
model_properties:
  context_size: 512
\ No newline at end of file
@@ -15,17 +15,17 @@ help:
  url:
    en_US: https://open.bigmodel.cn/usercenter/apikeys
supported_model_types:
  - llm
  - text-embedding
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: api_key
      label:
        en_US: APIKey
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 APIKey
        en_US: Enter your APIKey
@@ -236,7 +236,7 @@ services:
     # ports:
     # - "5432:5432"
     healthcheck:
-      test: ["CMD", "pg_isready"]
+      test: [ "CMD", "pg_isready" ]
       interval: 1s
       timeout: 3s
       retries: 30
...
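The recurring fix across these hunks is one convention: block sequences indented two spaces under their parent key, and a single space inside flow-sequence brackets. For reference, such a convention is typically enforced with yamllint; the following is a hypothetical minimal .yamllint sketch illustrating rules that would match these changes, not a config taken from this PR or repository:

# hypothetical .yamllint — illustrative only, not part of this PR
extends: default
rules:
  indentation:
    spaces: 2            # two-space indents, as in the corrected files above
    indent-sequences: true   # "- item" must be indented under its parent key
  brackets:
    min-spaces-inside: 1  # matches the new "[ "CMD", "pg_isready" ]" style
    max-spaces-inside: 1

With a config like this in the repository root, running yamllint . would flag any file that regresses to the old indentation.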