ai-tech / dify / Commits / d38eac95

Commit d38eac95 (Unverified)
Authored Sep 27, 2023 by takatost, committed by GitHub on Sep 27, 2023
fix: wenxin model name invalid when llm call (#1248)
parent 9dbb8acd

Showing 2 changed files with 7 additions and 1 deletion
api/core/model_providers/models/llm/wenxin_model.py  (+1, -0)
api/core/model_providers/providers/wenxin_provider.py  (+6, -1)
api/core/model_providers/models/llm/wenxin_model.py

@@ -18,6 +18,7 @@ class WenxinModel(BaseLLM):
         provider_model_kwargs = self._to_model_kwargs_input(self.model_rules, self.model_kwargs)
         # TODO load price_config from configs(db)
         return Wenxin(
+            model=self.name,
             streaming=self.streaming,
             callbacks=self.callbacks,
             **self.credentials,
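The one added keyword, model=self.name, is the entire fix in this file: before the change the Wenxin client was constructed without an explicit model, so LLM calls did not carry the model name the user had selected. A minimal sketch of that failure mode follows; the stand-in Wenxin class, its constructor signature, and the VALID_MODELS set are assumptions for illustration, not dify's actual implementation.

# Illustrative only: a stand-in for the Wenxin client. We only assume it
# rejects a missing or unknown model name, which matches the
# "model name invalid" symptom in the commit title.
VALID_MODELS = {"ernie-bot", "ernie-bot-turbo"}

class Wenxin:
    def __init__(self, model: str = "", streaming: bool = False, **kwargs):
        if model not in VALID_MODELS:
            raise ValueError(f"invalid model name: {model!r}")
        self.model = model
        self.streaming = streaming

# Before the fix: the model kwarg was never passed, so the call failed here.
try:
    Wenxin(streaming=True)
except ValueError as e:
    print(e)  # invalid model name: ''

# After the fix: the selected model name is forwarded explicitly, as in the diff.
client = Wenxin(model="ernie-bot-turbo", streaming=True)
print(client.model)  # ernie-bot-turbo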
api/core/model_providers/providers/wenxin_provider.py

@@ -61,13 +61,18 @@ class WenxinProvider(BaseModelProvider):
         :param model_type:
         :return:
         """
+        model_max_tokens = {
+            'ernie-bot': 4800,
+            'ernie-bot-turbo': 11200,
+        }
+
         if model_name in ['ernie-bot', 'ernie-bot-turbo']:
             return ModelKwargsRules(
                 temperature=KwargRule[float](min=0.01, max=1, default=0.95, precision=2),
                 top_p=KwargRule[float](min=0.01, max=1, default=0.8, precision=2),
                 presence_penalty=KwargRule[float](enabled=False),
                 frequency_penalty=KwargRule[float](enabled=False),
-                max_tokens=KwargRule[int](enabled=False),
+                max_tokens=KwargRule[int](enabled=False, max=model_max_tokens.get(model_name)),
             )
         else:
             return ModelKwargsRules(
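The provider change adds a per-model ceiling for max_tokens: the model_max_tokens dict maps each Wenxin model name to its token limit, and the rule's max is looked up with dict.get, so names outside the dict simply yield None. A rough standalone sketch of this lookup pattern is below; the simplified KwargRule dataclass and the max_tokens_rule helper are stand-ins for dify's own KwargRule/ModelKwargsRules types, not the project's real classes.

# Standalone sketch of the per-model max_tokens rule from the diff above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class KwargRule:
    enabled: bool = True
    max: Optional[int] = None

MODEL_MAX_TOKENS = {
    'ernie-bot': 4800,
    'ernie-bot-turbo': 11200,
}

def max_tokens_rule(model_name: str) -> KwargRule:
    # Unknown model names fall back to max=None, mirroring dict.get()'s default.
    return KwargRule(enabled=False, max=MODEL_MAX_TOKENS.get(model_name))

print(max_tokens_rule('ernie-bot'))        # KwargRule(enabled=False, max=4800)
print(max_tokens_rule('ernie-bot-turbo'))  # KwargRule(enabled=False, max=11200)
print(max_tokens_rule('other-model'))      # KwargRule(enabled=False, max=None)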