ai-tech / dify · Commits · 18d38771

Commit 18d38771 (Unverified), authored Aug 24, 2023 by takatost, committed by GitHub on Aug 24, 2023
feat: optimize xinference stream (#989)
Parent: 53e83d86

1 changed file with 3 additions and 3 deletions:
api/core/third_party/langchain/llms/xinference_llm.py (+3 −3)
```diff
@@ -108,12 +108,12 @@ class XinferenceLLM(Xinference):
         Yields:
             A string token.
         """
-        if isinstance(model, RESTfulGenerateModelHandle):
-            streaming_response = model.generate(
+        if isinstance(model, (RESTfulChatModelHandle, RESTfulChatglmCppChatModelHandle)):
+            streaming_response = model.chat(
                 prompt=prompt,
                 generate_config=generate_config
             )
         else:
-            streaming_response = model.chat(
+            streaming_response = model.generate(
                 prompt=prompt,
                 generate_config=generate_config
             )
```
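The commit inverts the `isinstance` dispatch: instead of matching the generate-only handle explicitly and routing everything else to `chat()`, it now matches the chat-capable handles explicitly and routes everything else (including the plain generate handle) to `generate()`. A minimal sketch of that dispatch, using stub classes that stand in for xinference's real `RESTful*Handle` classes (the actual ones live in the xinference client library and take more arguments):

```python
# Stub handles standing in for xinference's RESTfulGenerateModelHandle,
# RESTfulChatModelHandle, and RESTfulChatglmCppChatModelHandle.
class RESTfulGenerateModelHandle:
    def generate(self, prompt, generate_config=None):
        return {"method": "generate", "prompt": prompt}

class RESTfulChatModelHandle:
    def chat(self, prompt, generate_config=None):
        return {"method": "chat", "prompt": prompt}

class RESTfulChatglmCppChatModelHandle(RESTfulChatModelHandle):
    pass

def start_stream(model, prompt, generate_config=None):
    # After this commit: chat-capable handles are matched explicitly;
    # anything else falls through to generate().
    if isinstance(model, (RESTfulChatModelHandle, RESTfulChatglmCppChatModelHandle)):
        return model.chat(prompt=prompt, generate_config=generate_config)
    return model.generate(prompt=prompt, generate_config=generate_config)
```

With this shape, a newly added handle type that lacks `chat()` defaults safely to `generate()` rather than crashing in the `else` branch, which is the practical effect of the 3-line change.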