ai-tech / dify — Commits

Commit f0c9bb7c (unverified)
authored Feb 01, 2024 by Yeuoly, committed by GitHub on Feb 01, 2024

fix: typo (#2318)

parent d8672796

Showing 1 changed file with 3 additions and 3 deletions.

api/core/model_runtime/model_providers/tongyi/llm/llm.py (+3 −3)
@@ -168,7 +168,7 @@ class TongyiLargeLanguageModel(LargeLanguageModel):
         return result

-    def _handle_generate_stream_response(self, model: str, credentials: dict, responses: list[Generator],
+    def _handle_generate_stream_response(self, model: str, credentials: dict, responses: Generator,
                                          prompt_messages: list[PromptMessage]) -> Generator:
         """
         Handle llm stream response
@@ -182,7 +182,7 @@ class TongyiLargeLanguageModel(LargeLanguageModel):
         for index, response in enumerate(responses):
             resp_finish_reason = response.output.finish_reason
             resp_content = response.output.text
-            useage = response.usage
+            usage = response.usage

             if resp_finish_reason is None and (resp_content is None or resp_content == ''):
                 continue
@@ -194,7 +194,7 @@ class TongyiLargeLanguageModel(LargeLanguageModel):
             if resp_finish_reason is not None:
                 # transform usage
-                usage = self._calc_response_usage(model, credentials, useage.input_tokens, useage.output_tokens)
+                usage = self._calc_response_usage(model, credentials, usage.input_tokens, usage.output_tokens)

                 yield LLMResultChunk(
                     model=model,