Commit 201d9943 authored by Jyong

Merge branch 'main' into feat/dataset-notion-import

parents 3a98c636 380b4b3d
---
name: "\U0001F41B Bug report"
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''
---
<!--
Please provide a clear and concise description of what the bug is. Include
screenshots if needed. Please test using the latest version of the relevant
Dify packages to make sure your issue has not already been fixed.
-->
Dify version: Cloud | Self-hosted
## Steps To Reproduce
<!--
Your bug will get fixed much faster if we can run your code and it doesn't
have dependencies other than Dify. Issues without reproduction steps or
code examples may be immediately closed as not actionable.
-->
1.
2.
## The current behavior
## The expected behavior
---
name: "\U0001F680 Feature request"
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
---
name: "\U0001F914 Questions and Help"
about: Ask a usage or consultation question
title: ''
labels: ''
assignees: ''
---
# Contributing
Thank you for your interest in [Dify](https://dify.ai) and for wanting to contribute! Before you get started,
please read the [Code of Conduct](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md) and
check the [existing issues](https://github.com/langgenius/langgenius-gateway/issues).
This document explains how to set up a development environment to build and test [Dify](https://dify.ai).
### Installing dependencies
To build [Dify](https://dify.ai), you need to install and configure the following dependencies on your machine:
- [Git](http://git-scm.com/)
- [Docker](https://www.docker.com/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Node.js v18.x (LTS)](http://nodejs.org)
- [npm](https://www.npmjs.com/) バージョン 8.x.x もしくは [Yarn](https://yarnpkg.com/)
- [Python](https://www.python.org/) バージョン 3.10.x
## Local development
To set up a development environment, fork the project's git repository, install the backend and frontend dependencies with the appropriate package managers, and run the docker-compose stack.
### Fork the repository
You need to fork the [repository](https://github.com/langgenius/dify).
### Clone the repository
Clone your forked repository from GitHub:
```
git clone git@github.com:<github_username>/dify.git
```
### Install the backend
See the [Backend README](api/README.md) for instructions on installing the backend application.
### Install the frontend
See the [Frontend README](web/README.md) for instructions on installing the frontend application.
### Visit Dify in your browser
You can now view [Dify](https://dify.ai) in your local environment at [http://localhost:3000](http://localhost:3000).
## Creating a pull request
After making your changes, open a pull request (PR). Once you submit it, others from the Dify team/community will review it with you.
Run into merge conflicts, or not sure how to open a pull request? Check out [GitHub's pull request tutorial](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests) on how to resolve merge conflicts and other issues. Once your PR is merged, you will be proudly listed as a contributor on the [contributor chart](https://github.com/langgenius/langgenius-gateway/graphs/contributors).
## Community channels
Stuck? Have questions? Join the [Discord Community server](https://discord.gg/AhzKf7dNgk); we are here to help!
![](./images/describe-en.png)
<p align="center">
  <a href="./README.md">English</a> |
  <a href="./README_CN.md">简体中文</a> |
  <a href="./README_JA.md">日本語</a>
</p>
[Website](https://dify.ai) | [Docs](https://docs.dify.ai) | [Twitter](https://twitter.com/dify_ai) | [Discord](https://discord.gg/FngNHpbcY7)
Vote for us on Product Hunt ↓
<a href="https://www.producthunt.com/posts/dify-ai"><img src="https://api.producthunt.com/widgets/embed-image/v1/featured.svg?sanitize=true&post_id=dify-ai&theme=light" alt="Product Hunt Badge" width="250" height="54"></a>
**Dify** is an easy-to-use LLMOps platform designed to empower more people to create sustainable, AI-native applications. With visual orchestration for various application types, Dify offers out-of-the-box, ready-to-use applications that can also serve as Backend-as-a-Service APIs. Unify your development process with one API for plugins and datasets integration, and streamline your operations using a single interface for prompt engineering, visual analytics, and continuous improvement.
Applications created with Dify include:
......
![](./images/describe-cn.jpg)
<p align="center">
  <a href="./README.md">English</a> |
  <a href="./README_CN.md">简体中文</a> |
  <a href="./README_JA.md">日本語</a>
</p>
[Website](https://dify.ai) | [Docs](https://docs.dify.ai/v/zh-hans) | [Twitter](https://twitter.com/dify_ai) | [Discord](https://discord.gg/FngNHpbcY7)
Vote for us on Product Hunt ↓
<a href="https://www.producthunt.com/posts/dify-ai"><img src="https://api.producthunt.com/widgets/embed-image/v1/featured.svg?sanitize=true&post_id=dify-ai&theme=light" alt="Product Hunt Badge" width="250" height="54"></a>
**Dify** is an easy-to-use LLMOps platform designed to let more people create sustainable, AI-native applications. Dify offers visual orchestration for many application types; applications are ready to use out of the box and can also be served as Backend-as-a-Service APIs.
Applications created with Dify include:
......
![](./images/describe-en.png)
<p align="center">
<a href="./README.md">English</a> |
<a href="./README_CN.md">简体中文</a> |
<a href="./README_JA.md">日本語</a>
</p>
[Website](https://dify.ai) | [Docs](https://docs.dify.ai) | [Twitter](https://twitter.com/dify_ai) | [Discord](https://discord.gg/FngNHpbcY7)
Vote for us on Product Hunt ↓
<a href="https://www.producthunt.com/posts/dify-ai"><img src="https://api.producthunt.com/widgets/embed-image/v1/featured.svg?sanitize=true&post_id=dify-ai&theme=light" alt="Product Hunt Badge" width="250" height="54"></a>
**Dify** is an easy-to-use LLMOps platform designed to empower more people to create sustainable, AI-native applications. With visual orchestration for various application types, Dify offers out-of-the-box, ready-to-use applications that can also serve as Backend-as-a-Service APIs. Unify your development process with one API for plugins and datasets integration, and streamline your operations using a single interface for prompt engineering, visual analytics, and continuous improvement.
Applications created with Dify include:
- Out-of-the-box web sites supporting both form mode and chat conversation mode
- A single API covering plugin capabilities, context enhancement, and more, saving you backend coding effort
- Visual data analysis, log review, and annotation for applications
Dify is compatible with LangChain and will gradually support multiple LLMs, currently:
- GPT 3 (text-davinci-003)
- GPT 3.5 Turbo(ChatGPT)
- GPT-4
## Use the cloud service
Visit [Dify.ai](https://dify.ai)
## Install the Community Edition
### System requirements
Before installing Dify, make sure your machine meets the following minimum system requirements:
- CPU >= 1 Core
- RAM >= 4GB
### Quick start
The easiest way to start the Dify server is to run our [docker-compose.yml](docker/docker-compose.yaml) file. Before running the installation commands, make sure [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/) are installed on your machine:
```bash
cd docker
docker-compose up -d
```
After running, you can access [http://localhost/install](http://localhost/install) in your browser and start the initialization and installation process.
### Configuration
If you need to customize the configuration, please refer to the comments in the [docker-compose.yml](docker/docker-compose.yaml) file and set the environment variables manually. After making the changes, run `docker-compose up -d` again.
## Roadmap
Features under development:
- **Datasets**: supporting more datasets, e.g. syncing content from Notion or web pages.
We will support more datasets, including text, web pages, and even Notion content. Users will be able to build AI applications based on their own data sources.
- **Plugins**: introducing ChatGPT-standard plugins for applications, or using Dify-produced plugins.
We will release plugins complying with the ChatGPT standard, as well as Dify's own plugins, to enable more capabilities in applications.
- **Open-source models**: e.g. adopting Llama as a model provider, or for further fine-tuning.
We will work with excellent open-source models like Llama, by providing them as model options on our platform or using them for further fine-tuning.
## Q&A
**Q: What can I do with Dify?**
A: Dify is a simple yet powerful LLM development and operations tool. You can use it to build commercial-grade applications and personal assistants. If you want to develop your own applications, Dify saves you the backend work of integrating with OpenAI, offers visual operation capabilities, and lets you continuously improve and train your GPT model.
**Q: How do I use Dify to "train" my own model?**
A: A valuable application consists of prompt engineering, context enhancement, and fine-tuning. We use a hybrid programming approach that combines prompts with programming languages (like a template engine), making it easy to embed long texts or capture subtitles from a user-supplied YouTube video, all of which are submitted as context for the LLM to process. We place great emphasis on application operability: the data users generate while using your application can be analyzed, annotated, and used for continuous training. Without the right tools, these steps can be time-consuming.
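To illustrate the hybrid "template engine" approach described above, here is a minimal sketch in plain Python. The template text, variable names, and `build_prompt` helper are illustrative assumptions, not Dify's actual API:
```python
# Minimal sketch of prompt/code hybrid templating (illustrative only).
def build_prompt(template: str, **inputs: str) -> str:
    # Substitute user-provided variables into the prompt template before
    # the combined text is submitted to the LLM as context.
    return template.format(**inputs)

template = (
    "Use the following subtitles as context:\n{subtitles}\n\n"
    "Answer in {language}: {question}"
)

prompt = build_prompt(
    template,
    subtitles="(captions fetched from a user-supplied video)",
    language="English",
    question="Summarize the key points.",
)
print(prompt)
```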
**Q: What do I need to prepare if I want to create my own application?**
A: We assume you already have an OpenAI API key; if not, please register for one. If you already have content that can serve as your application's training context, that's great!
**Q: What interface languages are available?**
A: English and Chinese are currently supported, and you can contribute language packs to us.
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
## Contact us
If you have any questions, suggestions, or partnership inquiries, feel free to contact us through the following channels:
- Submit an Issue or PR via our GitHub repo
- Join the discussion in our [Discord](https://discord.gg/FngNHpbcY7) community
- Send an email to hello@dify.ai
We are here to help, and we would love to build more fun and useful AI applications with you!
## Contributing
To ensure proper review, all code contributions, including those from contributors with direct commit access, must be submitted as pull requests and approved by the core development team before being merged.
We welcome all pull requests! If you would like to help, check out the [Contribution Guide](CONTRIBUTING.md) for more information on how to get started.
## Security
To protect your privacy, please avoid posting security issues on GitHub. Instead, send your questions to security@dify.ai and we will provide you with a more detailed answer.
## Citation
This software uses the following open-source software:
- Chase, H. (2022). LangChain [Computer software]. https://github.com/hwchase17/langchain
- Liu, J. (2022). LlamaIndex [Computer software]. doi: 10.5281/zenodo.1234.
For more information, please refer to the official website or license text of the respective software.
## License
This repository is available under the [Dify Open Source License](LICENSE).
@@ -14,7 +14,7 @@ CONSOLE_URL=http://127.0.0.1:5001
 API_URL=http://127.0.0.1:5001
 # Web APP base URL
-APP_URL=http://127.0.0.1:5001
+APP_URL=http://127.0.0.1:3000
 # celery configuration
 CELERY_BROKER_URL=redis://:difyai123456@localhost:6379/1
......
@@ -33,3 +33,4 @@
 flask run --host 0.0.0.0 --port=5001 --debug
 ```
 7. Setup your application by visiting http://localhost:5001/console/api/setup or other apis...
+8. If you need to debug local async processing, you can run `celery -A app.celery worker`, celery can do dataset importing and other async tasks.
\ No newline at end of file
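For context on the new step 8, here is a minimal sketch of what an async task handled by that worker might look like. The Celery app name and task body are hypothetical, not Dify's actual task code; the broker URL reuses the default from the .env diff above:
```python
# Hypothetical sketch: long-running dataset work runs on the worker started
# with `celery -A app.celery worker`, not inside the web request.
from celery import Celery

celery_app = Celery('app', broker='redis://:difyai123456@localhost:6379/1')

@celery_app.task
def import_dataset(dataset_id: str) -> None:
    # parse, split, and index documents here (illustrative placeholder)
    ...
```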
@@ -21,9 +21,11 @@ DEFAULTS = {
     'REDIS_HOST': 'localhost',
     'REDIS_PORT': '6379',
     'REDIS_DB': '0',
+    'REDIS_USE_SSL': 'False',
     'SESSION_REDIS_HOST': 'localhost',
     'SESSION_REDIS_PORT': '6379',
     'SESSION_REDIS_DB': '2',
+    'SESSION_REDIS_USE_SSL': 'False',
     'OAUTH_REDIRECT_PATH': '/console/api/oauth/authorize',
     'OAUTH_REDIRECT_INDEX_PATH': '/',
     'CONSOLE_URL': 'https://cloud.dify.ai',
@@ -44,6 +46,8 @@ DEFAULTS = {
     'CELERY_BACKEND': 'database',
     'PDF_PREVIEW': 'True',
     'LOG_LEVEL': 'INFO',
+    'DISABLE_PROVIDER_CONFIG_VALIDATION': 'False',
+    'DEFAULT_LLM_PROVIDER': 'openai'
 }
@@ -105,14 +109,18 @@ class Config:
         # redis settings
         self.REDIS_HOST = get_env('REDIS_HOST')
         self.REDIS_PORT = get_env('REDIS_PORT')
+        self.REDIS_USERNAME = get_env('REDIS_USERNAME')
         self.REDIS_PASSWORD = get_env('REDIS_PASSWORD')
         self.REDIS_DB = get_env('REDIS_DB')
+        self.REDIS_USE_SSL = get_bool_env('REDIS_USE_SSL')

         # session redis settings
         self.SESSION_REDIS_HOST = get_env('SESSION_REDIS_HOST')
         self.SESSION_REDIS_PORT = get_env('SESSION_REDIS_PORT')
+        self.SESSION_REDIS_USERNAME = get_env('SESSION_REDIS_USERNAME')
         self.SESSION_REDIS_PASSWORD = get_env('SESSION_REDIS_PASSWORD')
         self.SESSION_REDIS_DB = get_env('SESSION_REDIS_DB')
+        self.SESSION_REDIS_USE_SSL = get_bool_env('SESSION_REDIS_USE_SSL')

         # storage settings
         self.STORAGE_TYPE = get_env('STORAGE_TYPE')
@@ -165,10 +173,18 @@ class Config:
         self.CELERY_BACKEND = get_env('CELERY_BACKEND')
         self.CELERY_RESULT_BACKEND = 'db+{}'.format(self.SQLALCHEMY_DATABASE_URI) \
             if self.CELERY_BACKEND == 'database' else self.CELERY_BROKER_URL
+        self.BROKER_USE_SSL = self.CELERY_BROKER_URL.startswith('rediss://')

         # hosted provider credentials
         self.OPENAI_API_KEY = get_env('OPENAI_API_KEY')

+        # By default it is False
+        # You could disable it for compatibility with certain OpenAPI providers
+        self.DISABLE_PROVIDER_CONFIG_VALIDATION = get_bool_env('DISABLE_PROVIDER_CONFIG_VALIDATION')
+
+        # For temp use only
+        # set default LLM provider, default is 'openai', support `azure_openai`
+        self.DEFAULT_LLM_PROVIDER = get_env('DEFAULT_LLM_PROVIDER')

 class CloudEditionConfig(Config):
......
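The new `REDIS_USE_SSL` and `REDIS_USERNAME` settings would typically be passed straight into the Redis client constructor. A minimal sketch with redis-py, assuming that wiring; Dify's actual extension code may differ:
```python
# Sketch only: consuming the new Redis settings with redis-py.
import redis

def make_redis_client(config) -> redis.Redis:
    return redis.Redis(
        host=config.REDIS_HOST,
        port=int(config.REDIS_PORT),
        username=config.REDIS_USERNAME,  # added in this commit
        password=config.REDIS_PASSWORD,
        db=int(config.REDIS_DB),
        ssl=config.REDIS_USE_SSL,        # added in this commit
    )
```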
@@ -17,6 +17,6 @@ def _get_app(app_id, mode=None):
         raise NotFound("App not found")

     if mode and app.mode != mode:
-        raise AppUnavailableError()
+        raise NotFound("The {} app not found".format(mode))

     return app
@@ -9,31 +9,33 @@ class AppNotFoundError(BaseHTTPException):

 class ProviderNotInitializeError(BaseHTTPException):
     error_code = 'provider_not_initialize'
-    description = "Provider Token not initialize."
+    description = "No valid model provider credentials found. " \
+                  "Please go to Settings -> Model Provider to complete your provider credentials."
     code = 400


 class ProviderQuotaExceededError(BaseHTTPException):
     error_code = 'provider_quota_exceeded'
-    description = "Provider quota exceeded."
+    description = "Your quota for Dify Hosted OpenAI has been exhausted. " \
+                  "Please go to Settings -> Model Provider to complete your own provider credentials."
     code = 400


 class ProviderModelCurrentlyNotSupportError(BaseHTTPException):
     error_code = 'model_currently_not_support'
-    description = "GPT-4 currently not support."
+    description = "Dify Hosted OpenAI trial currently not support the GPT-4 model."
     code = 400


 class ConversationCompletedError(BaseHTTPException):
     error_code = 'conversation_completed'
-    description = "Conversation was completed."
+    description = "The conversation has ended. Please start a new conversation."
     code = 400


 class AppUnavailableError(BaseHTTPException):
     error_code = 'app_unavailable'
-    description = "App unavailable."
+    description = "App unavailable, please check your app configurations."
     code = 400
@@ -45,5 +47,5 @@ class CompletionRequestError(BaseHTTPException):

 class AppMoreLikeThisDisabledError(BaseHTTPException):
     error_code = 'app_more_like_this_disabled'
-    description = "More like this disabled."
+    description = "The 'More like this' feature is disabled. Please refresh your page."
     code = 403
@@ -10,13 +10,14 @@ from werkzeug.exceptions import NotFound, Forbidden

 import services
 from controllers.console import api
-from controllers.console.app.error import ProviderNotInitializeError
+from controllers.console.app.error import ProviderNotInitializeError, ProviderQuotaExceededError, \
+    ProviderModelCurrentlyNotSupportError
 from controllers.console.datasets.error import DocumentAlreadyFinishedError, InvalidActionError, DocumentIndexingError, \
     InvalidMetadataError, ArchivedDocumentImmutableError
 from controllers.console.setup import setup_required
 from controllers.console.wraps import account_initialization_required
 from core.indexing_runner import IndexingRunner
-from core.llm.error import ProviderTokenNotInitError
+from core.llm.error import ProviderTokenNotInitError, QuotaExceededError, ModelCurrentlyNotSupportError
 from extensions.ext_redis import redis_client
 from libs.helper import TimestampField
 from extensions.ext_database import db
@@ -222,6 +223,10 @@ class DatasetDocumentListApi(Resource):
             document = DocumentService.save_document_with_dataset_id(dataset, args, current_user)
         except ProviderTokenNotInitError:
             raise ProviderNotInitializeError()
+        except QuotaExceededError:
+            raise ProviderQuotaExceededError()
+        except ModelCurrentlyNotSupportError:
+            raise ProviderModelCurrentlyNotSupportError()

         return document
@@ -259,6 +264,10 @@ class DatasetInitApi(Resource):
             )
         except ProviderTokenNotInitError:
             raise ProviderNotInitializeError()
+        except QuotaExceededError:
+            raise ProviderQuotaExceededError()
+        except ModelCurrentlyNotSupportError:
+            raise ProviderModelCurrentlyNotSupportError()

         response = {
             'dataset': dataset,
......
@@ -3,7 +3,7 @@ from libs.exception import BaseHTTPException

 class NoFileUploadedError(BaseHTTPException):
     error_code = 'no_file_uploaded'
-    description = "No file uploaded."
+    description = "Please upload your file."
     code = 400
@@ -27,25 +27,25 @@ class UnsupportedFileTypeError(BaseHTTPException):

 class HighQualityDatasetOnlyError(BaseHTTPException):
     error_code = 'high_quality_dataset_only'
-    description = "High quality dataset only."
+    description = "Current operation only supports 'high-quality' datasets."
     code = 400


 class DatasetNotInitializedError(BaseHTTPException):
     error_code = 'dataset_not_initialized'
-    description = "Dataset not initialized."
+    description = "The dataset is still being initialized or indexing. Please wait a moment."
     code = 400


 class ArchivedDocumentImmutableError(BaseHTTPException):
     error_code = 'archived_document_immutable'
-    description = "Cannot process an archived document."
+    description = "The archived document is not editable."
     code = 403


 class DatasetNameDuplicateError(BaseHTTPException):
     error_code = 'dataset_name_duplicate'
-    description = "Dataset name already exists."
+    description = "The dataset name already exists. Please modify your dataset name."
     code = 409
@@ -57,17 +57,17 @@ class InvalidActionError(BaseHTTPException):

 class DocumentAlreadyFinishedError(BaseHTTPException):
     error_code = 'document_already_finished'
-    description = "Document already finished."
+    description = "The document has been processed. Please refresh the page or go to the document details."
     code = 400


 class DocumentIndexingError(BaseHTTPException):
     error_code = 'document_indexing'
-    description = "Document indexing."
+    description = "The document is being processed and cannot be edited."
     code = 400


 class InvalidMetadataError(BaseHTTPException):
     error_code = 'invalid_metadata'
-    description = "Invalid metadata."
+    description = "The metadata content is incorrect. Please check and verify."
     code = 400
@@ -6,9 +6,12 @@ from werkzeug.exceptions import InternalServerError, NotFound, Forbidden

 import services
 from controllers.console import api
+from controllers.console.app.error import ProviderNotInitializeError, ProviderQuotaExceededError, \
+    ProviderModelCurrentlyNotSupportError
 from controllers.console.datasets.error import HighQualityDatasetOnlyError, DatasetNotInitializedError
 from controllers.console.setup import setup_required
 from controllers.console.wraps import account_initialization_required
+from core.llm.error import ProviderTokenNotInitError, QuotaExceededError, ModelCurrentlyNotSupportError
 from libs.helper import TimestampField
 from services.dataset_service import DatasetService
 from services.hit_testing_service import HitTestingService
@@ -92,6 +95,12 @@ class HitTestingApi(Resource):
             return {"query": response['query'], 'records': marshal(response['records'], hit_testing_record_fields)}
         except services.errors.index.IndexNotInitializedError:
             raise DatasetNotInitializedError()
+        except ProviderTokenNotInitError:
+            raise ProviderNotInitializeError()
+        except QuotaExceededError:
+            raise ProviderQuotaExceededError()
+        except ModelCurrentlyNotSupportError:
+            raise ProviderModelCurrentlyNotSupportError()
         except Exception as e:
             logging.exception("Hit testing failed.")
             raise InternalServerError(str(e))
......
@@ -3,13 +3,14 @@ from libs.exception import BaseHTTPException

 class AlreadySetupError(BaseHTTPException):
     error_code = 'already_setup'
-    description = "Application already setup."
+    description = "Dify has been successfully installed. Please refresh the page or return to the dashboard homepage."
     code = 403


 class NotSetupError(BaseHTTPException):
     error_code = 'not_setup'
-    description = "Application not setup."
+    description = "Dify has not been initialized and installed yet. " \
+                  "Please proceed with the initialization and installation process first."
     code = 401
......
@@ -19,6 +19,14 @@ class VersionApi(Resource):
         args = parser.parse_args()
         check_update_url = current_app.config['CHECK_UPDATE_URL']

+        if not check_update_url:
+            return {
+                'version': '0.0.0',
+                'release_date': '',
+                'release_notes': '',
+                'can_auto_update': False
+            }
+
         try:
             response = requests.get(check_update_url, {
                 'current_version': args.get('current_version')
......
@@ -21,11 +21,11 @@ class InvalidInvitationCodeError(BaseHTTPException):

 class AccountAlreadyInitedError(BaseHTTPException):
     error_code = 'account_already_inited'
-    description = "Account already inited."
+    description = "The account has been initialized. Please refresh the page."
     code = 400


 class AccountNotInitializedError(BaseHTTPException):
     error_code = 'account_not_initialized'
-    description = "Account not initialized."
+    description = "The account has not been initialized yet. Please proceed with the initialization process first."
     code = 400
@@ -82,29 +82,33 @@ class ProviderTokenApi(Resource):
         args = parser.parse_args()

-        if not args['token']:
-            raise ValueError('Token is empty')
-
-        try:
-            ProviderService.validate_provider_configs(
-                tenant=current_user.current_tenant,
-                provider_name=ProviderName(provider),
-                configs=args['token']
-            )
-            token_is_valid = True
-        except ValidateFailedError:
-            token_is_valid = False
-
-        tenant = current_user.current_tenant
-        base64_encrypted_token = ProviderService.get_encrypted_token(
-            tenant=current_user.current_tenant,
-            provider_name=ProviderName(provider),
-            configs=args['token']
-        )
-
-        provider_model = Provider.query.filter_by(tenant_id=tenant.id, provider_name=provider,
-                                                  provider_type=ProviderType.CUSTOM.value).first()
+        if args['token']:
+            try:
+                ProviderService.validate_provider_configs(
+                    tenant=current_user.current_tenant,
+                    provider_name=ProviderName(provider),
+                    configs=args['token']
+                )
+                token_is_valid = True
+            except ValidateFailedError as ex:
+                raise ValueError(str(ex))
+
+            base64_encrypted_token = ProviderService.get_encrypted_token(
+                tenant=current_user.current_tenant,
+                provider_name=ProviderName(provider),
+                configs=args['token']
+            )
+        else:
+            base64_encrypted_token = None
+            token_is_valid = False
+
+        tenant = current_user.current_tenant
+
+        provider_model = db.session.query(Provider).filter(
+            Provider.tenant_id == tenant.id,
+            Provider.provider_name == provider,
+            Provider.provider_type == ProviderType.CUSTOM.value
+        ).first()

         # Only allow updating token for CUSTOM provider type
         if provider_model:
@@ -117,6 +121,16 @@ class ProviderTokenApi(Resource):
                                       is_valid=token_is_valid)
             db.session.add(provider_model)

+        if provider_model.is_valid:
+            other_providers = db.session.query(Provider).filter(
+                Provider.tenant_id == tenant.id,
+                Provider.provider_name != provider,
+                Provider.provider_type == ProviderType.CUSTOM.value
+            ).all()
+
+            for other_provider in other_providers:
+                other_provider.is_valid = False
+
         db.session.commit()

         if provider in [ProviderName.ANTHROPIC.value, ProviderName.AZURE_OPENAI.value, ProviderName.COHERE.value,
@@ -143,7 +157,7 @@ class ProviderTokenValidateApi(Resource):
         args = parser.parse_args()

         # todo: remove this when the provider is supported
-        if provider in [ProviderName.ANTHROPIC.value, ProviderName.AZURE_OPENAI.value, ProviderName.COHERE.value,
+        if provider in [ProviderName.ANTHROPIC.value, ProviderName.COHERE.value,
                         ProviderName.HUGGINGFACEHUB.value]:
             return {'result': 'success', 'warning': 'MOCK: This provider is not supported yet.'}
......
@@ -4,43 +4,45 @@ from libs.exception import BaseHTTPException

 class AppUnavailableError(BaseHTTPException):
     error_code = 'app_unavailable'
-    description = "App unavailable."
+    description = "App unavailable, please check your app configurations."
     code = 400


 class NotCompletionAppError(BaseHTTPException):
     error_code = 'not_completion_app'
-    description = "Not Completion App"
+    description = "Please check if your Completion app mode matches the right API route."
     code = 400


 class NotChatAppError(BaseHTTPException):
     error_code = 'not_chat_app'
-    description = "Not Chat App"
+    description = "Please check if your Chat app mode matches the right API route."
     code = 400


 class ConversationCompletedError(BaseHTTPException):
     error_code = 'conversation_completed'
-    description = "Conversation Completed."
+    description = "The conversation has ended. Please start a new conversation."
     code = 400


 class ProviderNotInitializeError(BaseHTTPException):
     error_code = 'provider_not_initialize'
-    description = "Provider Token not initialize."
+    description = "No valid model provider credentials found. " \
+                  "Please go to Settings -> Model Provider to complete your provider credentials."
     code = 400


 class ProviderQuotaExceededError(BaseHTTPException):
     error_code = 'provider_quota_exceeded'
-    description = "Provider quota exceeded."
+    description = "Your quota for Dify Hosted OpenAI has been exhausted. " \
+                  "Please go to Settings -> Model Provider to complete your own provider credentials."
     code = 400


 class ProviderModelCurrentlyNotSupportError(BaseHTTPException):
     error_code = 'model_currently_not_support'
-    description = "GPT-4 currently not support."
+    description = "Dify Hosted OpenAI trial currently not support the GPT-4 model."
     code = 400
......
@@ -16,5 +16,5 @@ class DocumentIndexingError(BaseHTTPException):

 class DatasetNotInitedError(BaseHTTPException):
     error_code = 'dataset_not_inited'
-    description = "Dataset not inited."
+    description = "The dataset is still being initialized or indexing. Please wait a moment."
     code = 403
@@ -4,43 +4,45 @@ from libs.exception import BaseHTTPException

 class AppUnavailableError(BaseHTTPException):
     error_code = 'app_unavailable'
-    description = "App unavailable."
+    description = "App unavailable, please check your app configurations."
     code = 400


 class NotCompletionAppError(BaseHTTPException):
     error_code = 'not_completion_app'
-    description = "Not Completion App"
+    description = "Please check if your Completion app mode matches the right API route."
     code = 400


 class NotChatAppError(BaseHTTPException):
     error_code = 'not_chat_app'
-    description = "Not Chat App"
+    description = "Please check if your Chat app mode matches the right API route."
     code = 400


 class ConversationCompletedError(BaseHTTPException):
     error_code = 'conversation_completed'
-    description = "Conversation Completed."
+    description = "The conversation has ended. Please start a new conversation."
     code = 400


 class ProviderNotInitializeError(BaseHTTPException):
     error_code = 'provider_not_initialize'
-    description = "Provider Token not initialize."
+    description = "No valid model provider credentials found. " \
+                  "Please go to Settings -> Model Provider to complete your provider credentials."
     code = 400


 class ProviderQuotaExceededError(BaseHTTPException):
     error_code = 'provider_quota_exceeded'
-    description = "Provider quota exceeded."
+    description = "Your quota for Dify Hosted OpenAI has been exhausted. " \
+                  "Please go to Settings -> Model Provider to complete your own provider credentials."
     code = 400


 class ProviderModelCurrentlyNotSupportError(BaseHTTPException):
     error_code = 'model_currently_not_support'
-    description = "GPT-4 currently not support."
+    description = "Dify Hosted OpenAI trial currently not support the GPT-4 model."
     code = 400
@@ -52,11 +54,11 @@ class CompletionRequestError(BaseHTTPException):

 class AppMoreLikeThisDisabledError(BaseHTTPException):
     error_code = 'app_more_like_this_disabled'
-    description = "More like this disabled."
+    description = "The 'More like this' feature is disabled. Please refresh your page."
     code = 403


 class AppSuggestedQuestionsAfterAnswerDisabledError(BaseHTTPException):
     error_code = 'app_suggested_questions_after_answer_disabled'
-    description = "Function Suggested questions after answer disabled."
+    description = "The 'Suggested Questions After Answer' feature is disabled. Please refresh your page."
     code = 403
-from typing import Optional, List, Union
+from typing import Optional, List, Union, Tuple

 from langchain.callbacks import CallbackManager
 from langchain.chat_models.base import BaseChatModel
@@ -39,7 +39,8 @@ class Completion:
             memory = cls.get_memory_from_conversation(
                 tenant_id=app.tenant_id,
                 app_model_config=app_model_config,
-                conversation=conversation
+                conversation=conversation,
+                return_messages=False
             )

             inputs = conversation.inputs
@@ -96,7 +97,7 @@ class Completion:
         )

         # get llm prompt
-        prompt = cls.get_main_llm_prompt(
+        prompt, stop_words = cls.get_main_llm_prompt(
             mode=mode,
             llm=final_llm,
             pre_prompt=app_model_config.pre_prompt,
@@ -114,30 +115,47 @@ class Completion:
             mode=mode
         )

-        response = final_llm.generate([prompt])
+        response = final_llm.generate([prompt], stop_words)

         return response

     @classmethod
-    def get_main_llm_prompt(cls, mode: str, llm: BaseLanguageModel, pre_prompt: str, query: str, inputs: dict, chain_output: Optional[str],
+    def get_main_llm_prompt(cls, mode: str, llm: BaseLanguageModel, pre_prompt: str, query: str, inputs: dict,
+                            chain_output: Optional[str],
                             memory: Optional[ReadOnlyConversationTokenDBBufferSharedMemory]) -> \
-            Union[str | List[BaseMessage]]:
+            Tuple[Union[str | List[BaseMessage]], Optional[List[str]]]:
+        # disable template string in query
+        query_params = OutLinePromptTemplate.from_template(template=query).input_variables
+        if query_params:
+            for query_param in query_params:
+                if query_param not in inputs:
+                    inputs[query_param] = '{' + query_param + '}'
+
         pre_prompt = PromptBuilder.process_template(pre_prompt) if pre_prompt else pre_prompt
         if mode == 'completion':
             prompt_template = OutLinePromptTemplate.from_template(
-                template=("Use the following pieces of [CONTEXT] to answer the question at the end. "
-                          "If you don't know the answer, "
-                          "just say that you don't know, don't try to make up an answer. \n"
-                          "```\n"
-                          "[CONTEXT]\n"
-                          "{context}\n"
-                          "```\n" if chain_output else "")
+                template=("""Use the following CONTEXT as your learned knowledge:
+[CONTEXT]
+{context}
+[END CONTEXT]
+
+When answer to user:
+- If you don't know, just say that you don't know.
+- If you don't know when you are not sure, ask for clarification.
+Avoid mentioning that you obtained the information from the context.
+And answer according to the language of the user's question.
+""" if chain_output else "")
                          + (pre_prompt + "\n" if pre_prompt else "")
                          + "{query}\n"
             )

             if chain_output:
                 inputs['context'] = chain_output
+                context_params = OutLinePromptTemplate.from_template(template=chain_output).input_variables
+                if context_params:
+                    for context_param in context_params:
+                        if context_param not in inputs:
+                            inputs[context_param] = '{' + context_param + '}'

             prompt_inputs = {k: inputs[k] for k in prompt_template.input_variables if k in inputs}
             prompt_content = prompt_template.format(
@@ -147,64 +165,83 @@ class Completion:

             if isinstance(llm, BaseChatModel):
                 # use chat llm as completion model
-                return [HumanMessage(content=prompt_content)]
+                return [HumanMessage(content=prompt_content)], None
             else:
-                return prompt_content
+                return prompt_content, None
         else:
             messages: List[BaseMessage] = []

-            system_message = None
+            human_inputs = {
+                "query": query
+            }
+
+            human_message_prompt = ""
+
             if pre_prompt:
-                # append pre prompt as system message
-                system_message = PromptBuilder.to_system_message(pre_prompt, inputs)
+                pre_prompt_inputs = {k: inputs[k] for k in
+                                     OutLinePromptTemplate.from_template(template=pre_prompt).input_variables
+                                     if k in inputs}
+
+                if pre_prompt_inputs:
+                    human_inputs.update(pre_prompt_inputs)

             if chain_output:
-                # append context as system message, currently only use simple stuff prompt
-                context_message = PromptBuilder.to_system_message(
-                    """Use the following pieces of [CONTEXT] to answer the users question.
-If you don't know the answer, just say that you don't know, don't try to make up an answer.
-```
+                human_inputs['context'] = chain_output
+                human_message_prompt += """Use the following CONTEXT as your learned knowledge.
 [CONTEXT]
 {context}
-```""",
-                    {'context': chain_output}
-                )
+[END CONTEXT]
+
+When answer to user:
+- If you don't know, just say that you don't know.
+- If you don't know when you are not sure, ask for clarification.
+Avoid mentioning that you obtained the information from the context.
+And answer according to the language of the user's question.
+"""

-                if not system_message:
-                    system_message = context_message
-                else:
-                    system_message.content = context_message.content + "\n\n" + system_message.content
+            if pre_prompt:
+                human_message_prompt += pre_prompt

-            if system_message:
-                messages.append(system_message)
+            query_prompt = "\nHuman: {query}\nAI: "
+
+            if memory:
+                # append chat histories
+                tmp_human_message = PromptBuilder.to_human_message(
+                    prompt_content=human_message_prompt + query_prompt,
+                    inputs=human_inputs
+                )

-            human_inputs = {
-                "query": query
-            }
+                curr_message_tokens = memory.llm.get_messages_tokens([tmp_human_message])
+                rest_tokens = llm_constant.max_context_token_length[memory.llm.model_name] \
+                              - memory.llm.max_tokens - curr_message_tokens
+                rest_tokens = max(rest_tokens, 0)
+                histories = cls.get_history_messages_from_memory(memory, rest_tokens)
+
+                # disable template string in query
+                histories_params = OutLinePromptTemplate.from_template(template=histories).input_variables
+                if histories_params:
+                    for histories_param in histories_params:
+                        if histories_param not in human_inputs:
+                            human_inputs[histories_param] = '{' + histories_param + '}'
+
+                human_message_prompt += "\n\n" + histories
+
+            human_message_prompt += query_prompt

             # construct main prompt
             human_message = PromptBuilder.to_human_message(
-                prompt_content="{query}",
+                prompt_content=human_message_prompt,
                 inputs=human_inputs
             )

-            if memory:
-                # append chat histories
-                tmp_messages = messages.copy() + [human_message]
-                curr_message_tokens = memory.llm.get_messages_tokens(tmp_messages)
-                rest_tokens = llm_constant.max_context_token_length[
-                                  memory.llm.model_name] - memory.llm.max_tokens - curr_message_tokens
-                rest_tokens = max(rest_tokens, 0)
-                history_messages = cls.get_history_messages_from_memory(memory, rest_tokens)
-
-                messages += history_messages
-
             messages.append(human_message)

-            return messages
+            return messages, ['\nHuman:']

     @classmethod
     def get_llm_callback_manager(cls, llm: Union[StreamableOpenAI, StreamableChatOpenAI],
-                                 streaming: bool, conversation_message_task: ConversationMessageTask) -> CallbackManager:
+                                 streaming: bool,
+                                 conversation_message_task: ConversationMessageTask) -> CallbackManager:
         llm_callback_handler = LLMCallbackHandler(llm, conversation_message_task)
         if streaming:
             callback_handlers = [llm_callback_handler, DifyStreamingStdOutCallbackHandler()]
@@ -216,7 +253,7 @@ If you don't know the answer, just say that you don't know, don't try to make up
     @classmethod
     def get_history_messages_from_memory(cls, memory: ReadOnlyConversationTokenDBBufferSharedMemory,
                                          max_token_limit: int) -> \
-            List[BaseMessage]:
+            str:
         """Get memory messages."""
         memory.max_token_limit = max_token_limit
         memory_key = memory.memory_variables[0]
@@ -286,7 +323,7 @@ If you don't know the answer, just say that you don't know, don't try to make up
         )

         # get llm prompt
-        original_prompt = cls.get_main_llm_prompt(
+        original_prompt, _ = cls.get_main_llm_prompt(
             mode="completion",
             llm=llm,
             pre_prompt=pre_prompt,
......
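A notable behavior change above: chat-mode history is now rendered as a "Human:/AI:" transcript inside a single human message, and `get_main_llm_prompt` returns `['\nHuman:']` as a stop sequence. A small illustration of why the stop string matters (values are illustrative, not Dify code):
```python
# Without a stop sequence, a completion-style model given a "Human:/AI:"
# transcript may keep going and write the user's next turn itself.
prompt = (
    "Human: Hi\n"
    "AI: Hello! How can I help?\n"
    "Human: What is Dify?\n"
    "AI: "
)
# Stopping at '\nHuman:' halts generation before the model invents another
# user turn, e.g.: response = llm.generate([prompt], stop=['\nHuman:'])
```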
@@ -56,6 +56,9 @@ class ConversationMessageTask:
         )

     def init(self):
+        provider_name = LLMBuilder.get_default_provider(self.app.tenant_id)
+        self.model_dict['provider'] = provider_name
+
         override_model_configs = None
         if self.is_override:
             override_model_configs = {
@@ -281,6 +284,9 @@ class PubHandler:
     @classmethod
     def generate_channel_name(cls, user: Union[Account | EndUser], task_id: str):
+        if not user:
+            raise ValueError("user is required")
+
         user_str = 'account-' + user.id if isinstance(user, Account) else 'end-user-' + user.id
         return "generate_result:{}-{}".format(user_str, task_id)
......
@@ -11,9 +11,10 @@ from core.llm.error_handle_wraps import handle_llm_exceptions, handle_llm_except

 @retry(reraise=True, wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
 def get_embedding(
         text: str,
         engine: Optional[str] = None,
-        openai_api_key: Optional[str] = None,
+        api_key: Optional[str] = None,
+        **kwargs
 ) -> List[float]:
     """Get embedding.
@@ -25,11 +26,12 @@ def get_embedding(
     """
     text = text.replace("\n", " ")

-    return openai.Embedding.create(input=[text], engine=engine, api_key=openai_api_key)["data"][0]["embedding"]
+    return openai.Embedding.create(input=[text], engine=engine, api_key=api_key, **kwargs)["data"][0]["embedding"]


 @retry(reraise=True, wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
-async def aget_embedding(text: str, engine: Optional[str] = None, openai_api_key: Optional[str] = None) -> List[float]:
+async def aget_embedding(text: str, engine: Optional[str] = None, api_key: Optional[str] = None, **kwargs) -> List[
+    float]:
     """Asynchronously get embedding.

     NOTE: Copied from OpenAI's embedding utils:
@@ -42,16 +44,17 @@ async def aget_embedding(text: str, engine: Optional[str] = None, openai_api_key
     # replace newlines, which can negatively affect performance.
     text = text.replace("\n", " ")

-    return (await openai.Embedding.acreate(input=[text], engine=engine, api_key=openai_api_key))["data"][0][
+    return (await openai.Embedding.acreate(input=[text], engine=engine, api_key=api_key, **kwargs))["data"][0][
         "embedding"
     ]


 @retry(reraise=True, wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
 def get_embeddings(
         list_of_text: List[str],
         engine: Optional[str] = None,
-        openai_api_key: Optional[str] = None
+        api_key: Optional[str] = None,
+        **kwargs
 ) -> List[List[float]]:
     """Get embeddings.
@@ -67,14 +70,14 @@ def get_embeddings(
     # replace newlines, which can negatively affect performance.
     list_of_text = [text.replace("\n", " ") for text in list_of_text]

-    data = openai.Embedding.create(input=list_of_text, engine=engine, api_key=openai_api_key).data
+    data = openai.Embedding.create(input=list_of_text, engine=engine, api_key=api_key, **kwargs).data
     data = sorted(data, key=lambda x: x["index"])  # maintain the same order as input.
     return [d["embedding"] for d in data]


 @retry(reraise=True, wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
 async def aget_embeddings(
-    list_of_text: List[str], engine: Optional[str] = None, openai_api_key: Optional[str] = None
+    list_of_text: List[str], engine: Optional[str] = None, api_key: Optional[str] = None, **kwargs
 ) -> List[List[float]]:
     """Asynchronously get embeddings.
@@ -90,7 +93,7 @@ async def aget_embeddings(
     # replace newlines, which can negatively affect performance.
     list_of_text = [text.replace("\n", " ") for text in list_of_text]

-    data = (await openai.Embedding.acreate(input=list_of_text, engine=engine, api_key=openai_api_key)).data
+    data = (await openai.Embedding.acreate(input=list_of_text, engine=engine, api_key=api_key, **kwargs)).data
     data = sorted(data, key=lambda x: x["index"])  # maintain the same order as input.
     return [d["embedding"] for d in data]
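The rename from `openai_api_key` to `api_key` plus the new `**kwargs` pass-through lets Azure-specific parameters reach `openai.Embedding.create`. A hedged usage sketch; the `api_version` and `api_base` values below are placeholders, not taken from the commit:
```python
# Illustrative call only; replace the placeholders with real Azure values.
embedding = get_embedding(
    "hello world",
    engine="text-embedding-ada-002",
    api_key="<your-key>",
    api_type="azure",
    api_version="2023-03-15-preview",
    api_base="https://<your-resource>.openai.azure.com",
)
```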
...@@ -98,19 +101,30 @@ async def aget_embeddings( ...@@ -98,19 +101,30 @@ async def aget_embeddings(
class OpenAIEmbedding(BaseEmbedding): class OpenAIEmbedding(BaseEmbedding):
def __init__( def __init__(
self, self,
mode: str = OpenAIEmbeddingMode.TEXT_SEARCH_MODE, mode: str = OpenAIEmbeddingMode.TEXT_SEARCH_MODE,
model: str = OpenAIEmbeddingModelType.TEXT_EMBED_ADA_002, model: str = OpenAIEmbeddingModelType.TEXT_EMBED_ADA_002,
deployment_name: Optional[str] = None, deployment_name: Optional[str] = None,
openai_api_key: Optional[str] = None, openai_api_key: Optional[str] = None,
**kwargs: Any, **kwargs: Any,
) -> None: ) -> None:
"""Init params.""" """Init params."""
super().__init__(**kwargs) new_kwargs = {}
if 'embed_batch_size' in kwargs:
new_kwargs['embed_batch_size'] = kwargs['embed_batch_size']
if 'tokenizer' in kwargs:
new_kwargs['tokenizer'] = kwargs['tokenizer']
super().__init__(**new_kwargs)
self.mode = OpenAIEmbeddingMode(mode) self.mode = OpenAIEmbeddingMode(mode)
self.model = OpenAIEmbeddingModelType(model) self.model = OpenAIEmbeddingModelType(model)
self.deployment_name = deployment_name self.deployment_name = deployment_name
self.openai_api_key = openai_api_key self.openai_api_key = openai_api_key
self.openai_api_type = kwargs.get('openai_api_type')
self.openai_api_version = kwargs.get('openai_api_version')
self.openai_api_base = kwargs.get('openai_api_base')
@handle_llm_exceptions @handle_llm_exceptions
    def _get_query_embedding(self, query: str) -> List[float]:
@@ -122,7 +136,9 @@ class OpenAIEmbedding(BaseEmbedding):
        if key not in _QUERY_MODE_MODEL_DICT:
            raise ValueError(f"Invalid mode, model combination: {key}")
        engine = _QUERY_MODE_MODEL_DICT[key]
-       return get_embedding(query, engine=engine, openai_api_key=self.openai_api_key)
+       return get_embedding(query, engine=engine, api_key=self.openai_api_key,
+                            api_type=self.openai_api_type, api_version=self.openai_api_version,
+                            api_base=self.openai_api_base)

    def _get_text_embedding(self, text: str) -> List[float]:
        """Get text embedding."""
@@ -133,7 +149,9 @@ class OpenAIEmbedding(BaseEmbedding):
        if key not in _TEXT_MODE_MODEL_DICT:
            raise ValueError(f"Invalid mode, model combination: {key}")
        engine = _TEXT_MODE_MODEL_DICT[key]
-       return get_embedding(text, engine=engine, openai_api_key=self.openai_api_key)
+       return get_embedding(text, engine=engine, api_key=self.openai_api_key,
+                            api_type=self.openai_api_type, api_version=self.openai_api_version,
+                            api_base=self.openai_api_base)

    async def _aget_text_embedding(self, text: str) -> List[float]:
        """Asynchronously get text embedding."""
@@ -144,7 +162,9 @@ class OpenAIEmbedding(BaseEmbedding):
        if key not in _TEXT_MODE_MODEL_DICT:
            raise ValueError(f"Invalid mode, model combination: {key}")
        engine = _TEXT_MODE_MODEL_DICT[key]
-       return await aget_embedding(text, engine=engine, openai_api_key=self.openai_api_key)
+       return await aget_embedding(text, engine=engine, api_key=self.openai_api_key,
+                                   api_type=self.openai_api_type, api_version=self.openai_api_version,
+                                   api_base=self.openai_api_base)

    def _get_text_embeddings(self, texts: List[str]) -> List[List[float]]:
        """Get text embeddings.
@@ -160,7 +180,9 @@ class OpenAIEmbedding(BaseEmbedding):
        if key not in _TEXT_MODE_MODEL_DICT:
            raise ValueError(f"Invalid mode, model combination: {key}")
        engine = _TEXT_MODE_MODEL_DICT[key]
-       embeddings = get_embeddings(texts, engine=engine, openai_api_key=self.openai_api_key)
+       embeddings = get_embeddings(texts, engine=engine, api_key=self.openai_api_key,
+                                   api_type=self.openai_api_type, api_version=self.openai_api_version,
+                                   api_base=self.openai_api_base)
        return embeddings

    async def _aget_text_embeddings(self, texts: List[str]) -> List[List[float]]:
@@ -172,5 +194,7 @@ class OpenAIEmbedding(BaseEmbedding):
        if key not in _TEXT_MODE_MODEL_DICT:
            raise ValueError(f"Invalid mode, model combination: {key}")
        engine = _TEXT_MODE_MODEL_DICT[key]
-       embeddings = await aget_embeddings(texts, engine=engine, openai_api_key=self.openai_api_key)
+       embeddings = await aget_embeddings(texts, engine=engine, api_key=self.openai_api_key,
+                                          api_type=self.openai_api_type, api_version=self.openai_api_version,
+                                          api_base=self.openai_api_base)
        return embeddings
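All five call sites above now forward the same four provider fields instead of only an API key. As a quick illustration, here is a minimal, self-contained sketch (all values hypothetical placeholders) of the credentials dict that OpenAIEmbedding(**model_credentials) is fed, and how its openai_* attributes map onto the per-request api_* keyword arguments:

# A minimal sketch with hypothetical placeholder values; the credential
# keys mirror the openai_* attributes used above.
azure_credentials = {
    'openai_api_type': 'azure',
    'openai_api_version': '2023-03-15-preview',
    'openai_api_base': 'https://example-resource.openai.azure.com',
    'openai_api_key': 'placeholder-key',  # not a real key
}

def to_request_kwargs(credentials: dict) -> dict:
    """Translate stored provider credentials into per-request kwargs."""
    return {
        'api_key': credentials['openai_api_key'],
        'api_type': credentials['openai_api_type'],
        'api_version': credentials['openai_api_version'],
        'api_base': credentials['openai_api_base'],
    }

print(to_request_kwargs(azure_credentials))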
@@ -33,8 +33,11 @@ class IndexBuilder:
            max_chunk_overlap=20
        )

+       provider = LLMBuilder.get_default_provider(tenant_id)
+
        model_credentials = LLMBuilder.get_model_credentials(
            tenant_id=tenant_id,
+           model_provider=provider,
            model_name='text-embedding-ada-002'
        )
@@ -43,3 +46,15 @@ class IndexBuilder:
            prompt_helper=prompt_helper,
            embed_model=OpenAIEmbedding(**model_credentials),
        )

+   @classmethod
+   def get_fake_llm_service_context(cls, tenant_id: str) -> ServiceContext:
+       llm = LLMBuilder.to_llm(
+           tenant_id=tenant_id,
+           model_name='fake'
+       )
+
+       return ServiceContext.from_defaults(
+           llm_predictor=LLMPredictor(llm=llm),
+           embed_model=OpenAIEmbedding()
+       )
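A fake-LLM service context is useful wherever the index is only loaded to mutate or delete vectors: those paths never generate completions, so they should not require live provider credentials. A minimal sketch of the idea, using the same FakeListLLM that this commit already imports in LLMBuilder:

from langchain.llms.fake import FakeListLLM

# The fake LLM is pure wiring: it satisfies the service-context interface
# but is never actually invoked on delete/cleanup code paths, so an empty
# response list is safe and no provider token is needed.
fake_llm = FakeListLLM(responses=[])
print(type(fake_llm).__name__)  # FakeListLLM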
@@ -83,7 +83,7 @@ class VectorIndex:
        if not self._dataset.index_struct_dict:
            return

-       service_context = IndexBuilder.get_default_service_context(tenant_id=self._dataset.tenant_id)
+       service_context = IndexBuilder.get_fake_llm_service_context(tenant_id=self._dataset.tenant_id)

        index = vector_store.get_index(
            service_context=service_context,
@@ -101,7 +101,7 @@ class VectorIndex:
        if not self._dataset.index_struct_dict:
            return

-       service_context = IndexBuilder.get_default_service_context(tenant_id=self._dataset.tenant_id)
+       service_context = IndexBuilder.get_fake_llm_service_context(tenant_id=self._dataset.tenant_id)

        index = vector_store.get_index(
            service_context=service_context,
......
@@ -400,7 +400,7 @@ class IndexingRunner:
            # parse document to nodes
            nodes = node_parser.get_nodes_from_documents([text_doc])
+           nodes = [node for node in nodes if node.text is not None and node.text.strip()]
            all_nodes.extend(nodes)

        return all_nodes
......
@@ -4,9 +4,14 @@ from langchain.callbacks import CallbackManager
from langchain.llms.fake import FakeListLLM

from core.constant import llm_constant
+from core.llm.error import ProviderTokenNotInitError
+from core.llm.provider.base import BaseProvider
from core.llm.provider.llm_provider_service import LLMProviderService
+from core.llm.streamable_azure_chat_open_ai import StreamableAzureChatOpenAI
+from core.llm.streamable_azure_open_ai import StreamableAzureOpenAI
from core.llm.streamable_chat_open_ai import StreamableChatOpenAI
from core.llm.streamable_open_ai import StreamableOpenAI
+from models.provider import ProviderType


class LLMBuilder:
@@ -31,16 +36,23 @@ class LLMBuilder:
        if model_name == 'fake':
            return FakeListLLM(responses=[])

+       provider = cls.get_default_provider(tenant_id)
+
        mode = cls.get_mode_by_model(model_name)
        if mode == 'chat':
-           # llm_cls = StreamableAzureChatOpenAI
-           llm_cls = StreamableChatOpenAI
+           if provider == 'openai':
+               llm_cls = StreamableChatOpenAI
+           else:
+               llm_cls = StreamableAzureChatOpenAI
        elif mode == 'completion':
-           llm_cls = StreamableOpenAI
+           if provider == 'openai':
+               llm_cls = StreamableOpenAI
+           else:
+               llm_cls = StreamableAzureOpenAI
        else:
            raise ValueError(f"model name {model_name} is not supported.")

-       model_credentials = cls.get_model_credentials(tenant_id, model_name)
+       model_credentials = cls.get_model_credentials(tenant_id, provider, model_name)

        return llm_cls(
            model_name=model_name,
@@ -86,18 +98,31 @@ class LLMBuilder:
            raise ValueError(f"model name {model_name} is not supported.")

    @classmethod
-   def get_model_credentials(cls, tenant_id: str, model_name: str) -> dict:
+   def get_model_credentials(cls, tenant_id: str, model_provider: str, model_name: str) -> dict:
        """
        Returns the API credentials for the given tenant_id and model_name, based on the model's provider.
        Raises an exception if the model_name is not found or if the provider is not found.
        """
        if not model_name:
            raise Exception('model name not found')

-       if model_name not in llm_constant.models:
-           raise Exception('model {} not found'.format(model_name))
-
-       model_provider = llm_constant.models[model_name]
+       # if model_name not in llm_constant.models:
+       #     raise Exception('model {} not found'.format(model_name))
+       #
+       # model_provider = llm_constant.models[model_name]

        provider_service = LLMProviderService(tenant_id=tenant_id, provider_name=model_provider)
        return provider_service.get_credentials(model_name)
+
+   @classmethod
+   def get_default_provider(cls, tenant_id: str) -> str:
+       provider = BaseProvider.get_valid_provider(tenant_id)
+       if not provider:
+           raise ProviderTokenNotInitError()
+
+       if provider.provider_type == ProviderType.SYSTEM.value:
+           provider_name = 'openai'
+       else:
+           provider_name = provider.provider_name
+
+       return provider_name
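The provider fallback above reduces to a small decision rule: a hosted SYSTEM provider is always routed as 'openai', while a tenant-configured CUSTOM provider keeps its own name, which later selects the Azure classes. A pure-logic sketch, with the 'system'/'custom' strings standing in for the ProviderType enum values:

# Pure-logic sketch of get_default_provider's name resolution.
def resolve_provider_name(provider_type: str, provider_name: str) -> str:
    if provider_type == 'system':
        return 'openai'
    return provider_name

assert resolve_provider_name('system', 'azure_openai') == 'openai'
assert resolve_provider_name('custom', 'azure_openai') == 'azure_openai'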
import json
+import logging
from typing import Optional, Union

import requests

from core.llm.provider.base import BaseProvider
+from core.llm.provider.errors import ValidateFailedError
from models.provider import ProviderName


class AzureProvider(BaseProvider):
-   def get_models(self, model_id: Optional[str] = None) -> list[dict]:
-       credentials = self.get_credentials(model_id)
+   def get_models(self, model_id: Optional[str] = None, credentials: Optional[dict] = None) -> list[dict]:
+       credentials = self.get_credentials(model_id) if not credentials else credentials
        url = "{}/openai/deployments?api-version={}".format(
-           credentials.get('openai_api_base'),
-           credentials.get('openai_api_version')
+           str(credentials.get('openai_api_base')),
+           str(credentials.get('openai_api_version'))
        )

        headers = {
-           "api-key": credentials.get('openai_api_key'),
+           "api-key": str(credentials.get('openai_api_key')),
            "content-type": "application/json; charset=utf-8"
        }
@@ -29,17 +31,18 @@ class AzureProvider(BaseProvider):
                'name': '{} ({})'.format(deployment['id'], deployment['model'])
            } for deployment in result['data'] if deployment['status'] == 'succeeded']
        else:
-           # TODO: optimize in future
-           raise Exception('Failed to get deployments from Azure OpenAI. Status code: {}'.format(response.status_code))
+           if response.status_code == 401:
+               raise AzureAuthenticationError()
+           else:
+               raise AzureRequestFailedError('Failed to request Azure OpenAI. Status code: {}'.format(response.status_code))

    def get_credentials(self, model_id: Optional[str] = None) -> dict:
        """
        Returns the API credentials for Azure OpenAI as a dictionary.
        """
-       encrypted_config = self.get_provider_api_key(model_id=model_id)
-       config = json.loads(encrypted_config)
+       config = self.get_provider_api_key(model_id=model_id)
        config['openai_api_type'] = 'azure'
-       config['deployment_name'] = model_id
+       config['deployment_name'] = model_id.replace('.', '') if model_id else None
        return config

    def get_provider_name(self):
@@ -51,12 +54,11 @@ class AzureProvider(BaseProvider):
        """
        try:
            config = self.get_provider_api_key()
-           config = json.loads(config)
        except:
            config = {
                'openai_api_type': 'azure',
                'openai_api_version': '2023-03-15-preview',
-               'openai_api_base': 'https://foo.microsoft.com/bar',
+               'openai_api_base': '',
                'openai_api_key': ''
            }
@@ -65,7 +67,7 @@ class AzureProvider(BaseProvider):
            config = {
                'openai_api_type': 'azure',
                'openai_api_version': '2023-03-15-preview',
-               'openai_api_base': 'https://foo.microsoft.com/bar',
+               'openai_api_base': '',
                'openai_api_key': ''
            }
@@ -76,14 +78,47 @@ class AzureProvider(BaseProvider):
    def get_token_type(self):
        # TODO: change to dict when implemented
-       return lambda value: value
+       return dict

    def config_validate(self, config: Union[dict | str]):
        """
        Validates the given config.
        """
-       # TODO: implement
-       pass
+       try:
+           if not isinstance(config, dict):
+               raise ValueError('Config must be an object.')
+
+           if 'openai_api_version' not in config:
+               config['openai_api_version'] = '2023-03-15-preview'
+
+           models = self.get_models(credentials=config)
+
+           if not models:
+               raise ValidateFailedError("Please add deployments for 'text-davinci-003', "
+                                         "'gpt-35-turbo', 'text-embedding-ada-002'.")
+
+           fixed_model_ids = [
+               'text-davinci-003',
+               'gpt-35-turbo',
+               'text-embedding-ada-002'
+           ]
+
+           current_model_ids = [model['id'] for model in models]
+
+           missing_model_ids = [fixed_model_id for fixed_model_id in fixed_model_ids if
+                                fixed_model_id not in current_model_ids]
+
+           if missing_model_ids:
+               raise ValidateFailedError("Please add deployments for '{}'.".format(", ".join(missing_model_ids)))
+       except ValidateFailedError:
+           # surface deployment-check failures without re-wrapping the message
+           raise
+       except AzureAuthenticationError:
+           raise ValidateFailedError('Validation failed, please check your API Key.')
+       except (requests.ConnectionError, requests.RequestException):
+           raise ValidateFailedError('Validation failed, please check your API Base Endpoint.')
+       except AzureRequestFailedError as ex:
+           raise ValidateFailedError('Validation failed, error: {}.'.format(str(ex)))
+       except Exception as ex:
+           logging.exception('Azure OpenAI Credentials validation failed')
+           raise ValidateFailedError('Validation failed, error: {}.'.format(str(ex)))

    def get_encrypted_token(self, config: Union[dict | str]):
        """
@@ -103,3 +138,11 @@ class AzureProvider(BaseProvider):
        config = json.loads(token)
        config['openai_api_key'] = self.decrypt_token(config['openai_api_key'])
        return config
+
+
+class AzureAuthenticationError(Exception):
+   pass
+
+
+class AzureRequestFailedError(Exception):
+   pass
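The heart of config_validate is a set difference between the required Azure deployment names and the ones returned by get_models. A pure-logic sketch with hypothetical deployment ids:

# Pure-logic sketch of the missing-deployment check above; the
# current_model_ids list is a hypothetical get_models() result.
fixed_model_ids = ['text-davinci-003', 'gpt-35-turbo', 'text-embedding-ada-002']
current_model_ids = ['gpt-35-turbo']
missing_model_ids = [m for m in fixed_model_ids if m not in current_model_ids]
if missing_model_ids:
    print("Please add deployments for '{}'.".format(", ".join(missing_model_ids)))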
@@ -14,7 +14,7 @@ class BaseProvider(ABC):
    def __init__(self, tenant_id: str):
        self.tenant_id = tenant_id

-   def get_provider_api_key(self, model_id: Optional[str] = None, prefer_custom: bool = True) -> str:
+   def get_provider_api_key(self, model_id: Optional[str] = None, prefer_custom: bool = True) -> Union[str | dict]:
        """
        Returns the decrypted API key for the given tenant_id and provider_name.
        If the provider is of type SYSTEM and the quota is exceeded, raises a QuotaExceededError.
@@ -43,23 +43,35 @@ class BaseProvider(ABC):
        Returns the Provider instance for the given tenant_id and provider_name.
        If both CUSTOM and System providers exist, the preferred provider will be returned based on the prefer_custom flag.
        """
-       providers = db.session.query(Provider).filter(
-           Provider.tenant_id == self.tenant_id,
-           Provider.provider_name == self.get_provider_name().value
-       ).order_by(Provider.provider_type.desc() if prefer_custom else Provider.provider_type).all()
+       return BaseProvider.get_valid_provider(self.tenant_id, self.get_provider_name().value, prefer_custom)
+
+   @classmethod
+   def get_valid_provider(cls, tenant_id: str, provider_name: str = None, prefer_custom: bool = False) -> Optional[Provider]:
+       """
+       Returns the Provider instance for the given tenant_id and provider_name.
+       If both CUSTOM and System providers exist, the preferred provider will be returned based on the prefer_custom flag.
+       """
+       query = db.session.query(Provider).filter(
+           Provider.tenant_id == tenant_id
+       )
+
+       if provider_name:
+           query = query.filter(Provider.provider_name == provider_name)
+
+       providers = query.order_by(Provider.provider_type.desc() if prefer_custom else Provider.provider_type).all()

        custom_provider = None
        system_provider = None

        for provider in providers:
-           if provider.provider_type == ProviderType.CUSTOM.value:
+           if provider.provider_type == ProviderType.CUSTOM.value and provider.is_valid and provider.encrypted_config:
                custom_provider = provider
-           elif provider.provider_type == ProviderType.SYSTEM.value:
+           elif provider.provider_type == ProviderType.SYSTEM.value and provider.is_valid:
                system_provider = provider

-       if custom_provider and custom_provider.is_valid and custom_provider.encrypted_config:
+       if custom_provider:
            return custom_provider
-       elif system_provider and system_provider.is_valid:
+       elif system_provider:
            return system_provider
        else:
            return None
@@ -80,7 +92,7 @@ class BaseProvider(ABC):
        try:
            config = self.get_provider_api_key()
        except:
-           config = 'THIS-IS-A-MOCK-TOKEN'
+           config = ''

        if obfuscated:
            return self.obfuscated_token(config)
......
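The provider selection above is a two-slot preference scan: a valid CUSTOM provider with stored credentials beats a valid SYSTEM provider. A sketch of the same logic with plain dicts standing in for Provider rows:

# Plain-dict sketch of get_valid_provider's preference order.
def pick_provider(providers):
    custom = system = None
    for p in providers:
        if p['type'] == 'custom' and p['valid'] and p['config']:
            custom = p
        elif p['type'] == 'system' and p['valid']:
            system = p
    return custom or system

rows = [
    {'type': 'system', 'valid': True, 'config': None},
    {'type': 'custom', 'valid': True, 'config': '{"openai_api_key": "..."}'},
]
print(pick_provider(rows)['type'])  # custom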
+import requests
from langchain.schema import BaseMessage, ChatResult, LLMResult
from langchain.chat_models import AzureChatOpenAI
-from typing import Optional, List
+from typing import Optional, List, Dict, Any
+from pydantic import root_validator

from core.llm.error_handle_wraps import handle_llm_exceptions, handle_llm_exceptions_async


class StreamableAzureChatOpenAI(AzureChatOpenAI):
+   @root_validator()
+   def validate_environment(cls, values: Dict) -> Dict:
+       """Validate that api key and python package exists in environment."""
+       try:
+           import openai
+       except ImportError:
+           raise ValueError(
+               "Could not import openai python package. "
+               "Please install it with `pip install openai`."
+           )
+       try:
+           values["client"] = openai.ChatCompletion
+       except AttributeError:
+           raise ValueError(
+               "`openai` has no `ChatCompletion` attribute, this is likely "
+               "due to an old version of the openai package. Try upgrading it "
+               "with `pip install --upgrade openai`."
+           )
+       if values["n"] < 1:
+           raise ValueError("n must be at least 1.")
+       if values["n"] > 1 and values["streaming"]:
+           raise ValueError("n must be 1 when streaming.")
+       return values
+
+   @property
+   def _default_params(self) -> Dict[str, Any]:
+       """Get the default parameters for calling OpenAI API."""
+       return {
+           **super()._default_params,
+           "engine": self.deployment_name,
+           "api_type": self.openai_api_type,
+           "api_base": self.openai_api_base,
+           "api_version": self.openai_api_version,
+           "api_key": self.openai_api_key,
+           "organization": self.openai_organization if self.openai_organization else None,
+       }
+
    def get_messages_tokens(self, messages: List[BaseMessage]) -> int:
        """Get the number of tokens in a list of messages.
......
import os

from langchain.llms import AzureOpenAI
from langchain.schema import LLMResult
from typing import Optional, List, Dict, Mapping, Any
from pydantic import root_validator

from core.llm.error_handle_wraps import handle_llm_exceptions, handle_llm_exceptions_async


class StreamableAzureOpenAI(AzureOpenAI):
    openai_api_type: str = "azure"
    openai_api_version: str = ""

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exists in environment."""
        try:
            import openai

            values["client"] = openai.Completion
        except ImportError:
            raise ValueError(
                "Could not import openai python package. "
                "Please install it with `pip install openai`."
            )
        if values["streaming"] and values["n"] > 1:
            raise ValueError("Cannot stream results when n > 1.")
        if values["streaming"] and values["best_of"] > 1:
            raise ValueError("Cannot stream results when best_of > 1.")
        return values

    @property
    def _invocation_params(self) -> Dict[str, Any]:
        return {**super()._invocation_params, **{
            "api_type": self.openai_api_type,
            "api_base": self.openai_api_base,
            "api_version": self.openai_api_version,
            "api_key": self.openai_api_key,
            "organization": self.openai_organization if self.openai_organization else None,
        }}

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {**super()._identifying_params, **{
            "api_type": self.openai_api_type,
            "api_base": self.openai_api_base,
            "api_version": self.openai_api_version,
            "api_key": self.openai_api_key,
            "organization": self.openai_organization if self.openai_organization else None,
        }}

    @handle_llm_exceptions
    def generate(
            self, prompts: List[str], stop: Optional[List[str]] = None
    ) -> LLMResult:
        return super().generate(prompts, stop)

    @handle_llm_exceptions_async
    async def agenerate(
            self, prompts: List[str], stop: Optional[List[str]] = None
    ) -> LLMResult:
        return await super().agenerate(prompts, stop)
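Both _invocation_params and _identifying_params layer the Azure connection fields over the base parameters via dict unpacking, where later keys win on conflict. A standalone sketch of that merge, with placeholder values:

# Dict-merge sketch: the right-hand overrides replace any same-named keys
# from the base params, which is how the Azure fields are injected.
base_params = {'model': 'text-davinci-003', 'temperature': 0.7}
azure_overrides = {
    'api_type': 'azure',
    'api_base': 'https://example-resource.openai.azure.com',  # placeholder
    'api_version': '2023-03-15-preview',
    'api_key': 'placeholder-key',
    'organization': None,
}
print({**base_params, **azure_overrides})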
+import os
from langchain.schema import BaseMessage, ChatResult, LLMResult
from langchain.chat_models import ChatOpenAI
-from typing import Optional, List
+from typing import Optional, List, Dict, Any
+from pydantic import root_validator

from core.llm.error_handle_wraps import handle_llm_exceptions, handle_llm_exceptions_async


class StreamableChatOpenAI(ChatOpenAI):
+   @root_validator()
+   def validate_environment(cls, values: Dict) -> Dict:
+       """Validate that api key and python package exists in environment."""
+       try:
+           import openai
+       except ImportError:
+           raise ValueError(
+               "Could not import openai python package. "
+               "Please install it with `pip install openai`."
+           )
+       try:
+           values["client"] = openai.ChatCompletion
+       except AttributeError:
+           raise ValueError(
+               "`openai` has no `ChatCompletion` attribute, this is likely "
+               "due to an old version of the openai package. Try upgrading it "
+               "with `pip install --upgrade openai`."
+           )
+       if values["n"] < 1:
+           raise ValueError("n must be at least 1.")
+       if values["n"] > 1 and values["streaming"]:
+           raise ValueError("n must be 1 when streaming.")
+       return values
+
+   @property
+   def _default_params(self) -> Dict[str, Any]:
+       """Get the default parameters for calling OpenAI API."""
+       return {
+           **super()._default_params,
+           "api_type": 'openai',
+           "api_base": os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1"),
+           "api_version": None,
+           "api_key": self.openai_api_key,
+           "organization": self.openai_organization if self.openai_organization else None,
+       }
+
    def get_messages_tokens(self, messages: List[BaseMessage]) -> int:
        """Get the number of tokens in a list of messages.
......
+import os
from langchain.schema import LLMResult
-from typing import Optional, List
+from typing import Optional, List, Dict, Any, Mapping
from langchain import OpenAI
+from pydantic import root_validator

from core.llm.error_handle_wraps import handle_llm_exceptions, handle_llm_exceptions_async


class StreamableOpenAI(OpenAI):
+   @root_validator()
+   def validate_environment(cls, values: Dict) -> Dict:
+       """Validate that api key and python package exists in environment."""
+       try:
+           import openai
+
+           values["client"] = openai.Completion
+       except ImportError:
+           raise ValueError(
+               "Could not import openai python package. "
+               "Please install it with `pip install openai`."
+           )
+       if values["streaming"] and values["n"] > 1:
+           raise ValueError("Cannot stream results when n > 1.")
+       if values["streaming"] and values["best_of"] > 1:
+           raise ValueError("Cannot stream results when best_of > 1.")
+       return values
+
+   @property
+   def _invocation_params(self) -> Dict[str, Any]:
+       return {**super()._invocation_params, **{
+           "api_type": 'openai',
+           "api_base": os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1"),
+           "api_version": None,
+           "api_key": self.openai_api_key,
+           "organization": self.openai_organization if self.openai_organization else None,
+       }}
+
+   @property
+   def _identifying_params(self) -> Mapping[str, Any]:
+       return {**super()._identifying_params, **{
+           "api_type": 'openai',
+           "api_base": os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1"),
+           "api_version": None,
+           "api_key": self.openai_api_key,
+           "organization": self.openai_organization if self.openai_organization else None,
+       }}
+
    @handle_llm_exceptions
    def generate(
            self, prompts: List[str], stop: Optional[List[str]] = None
......
@@ -29,7 +29,7 @@ class WeaviateVectorStoreClient(BaseVectorStoreClient):
        return weaviate.Client(
            url=endpoint,
            auth_client_secret=auth_config,
-           timeout_config=(5, 15),
+           timeout_config=(5, 60),
            startup_period=None
        )
......
@@ -15,9 +15,24 @@ def init_app(app: Flask) -> Celery:
        backend=app.config["CELERY_BACKEND"],
        task_ignore_result=True,
    )

+   # Add SSL options to the Celery configuration
+   ssl_options = {
+       "ssl_cert_reqs": None,
+       "ssl_ca_certs": None,
+       "ssl_certfile": None,
+       "ssl_keyfile": None,
+   }
+
    celery_app.conf.update(
        result_backend=app.config["CELERY_RESULT_BACKEND"],
    )

+   if app.config["BROKER_USE_SSL"]:
+       celery_app.conf.update(
+           broker_use_ssl=ssl_options,  # Add the SSL options to the broker configuration
+       )
+
    celery_app.set_default()
    app.extensions["celery"] = celery_app

    return celery_app
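Setting every ssl_* field to None leaves certificate verification at the transport defaults; with a managed Redis that presents a proper certificate chain you would typically pin CERT_REQUIRED and a CA bundle instead. A minimal sketch, assuming a rediss:// broker URL and placeholder credentials:

import ssl
from celery import Celery

# Minimal sketch; ssl.CERT_REQUIRED plus a CA bundle is the verified-TLS
# variant of the permissive config shown in the diff above.
celery_app = Celery('dify', broker='rediss://:difyai123456@redis:6379/1')
celery_app.conf.update(
    broker_use_ssl={
        'ssl_cert_reqs': ssl.CERT_REQUIRED,
        'ssl_ca_certs': '/etc/ssl/certs/ca-certificates.crt',  # assumed path
        'ssl_certfile': None,
        'ssl_keyfile': None,
    },
)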
import redis
+from redis.connection import SSLConnection, Connection

redis_client = redis.Redis()


def init_app(app):
+   connection_class = Connection
+   if app.config.get('REDIS_USE_SSL', False):
+       connection_class = SSLConnection
+
    redis_client.connection_pool = redis.ConnectionPool(**{
        'host': app.config.get('REDIS_HOST', 'localhost'),
        'port': app.config.get('REDIS_PORT', 6379),
+       'username': app.config.get('REDIS_USERNAME', None),
        'password': app.config.get('REDIS_PASSWORD', None),
        'db': app.config.get('REDIS_DB', 0),
        'encoding': 'utf-8',
        'encoding_errors': 'strict',
        'decode_responses': False
-   })
+   }, connection_class=connection_class)

    app.extensions['redis'] = redis_client
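The same connection_class switch works outside Flask, since only the pool construction matters. A self-contained sketch with placeholder connection details:

import redis
from redis.connection import Connection, SSLConnection

# Sketch of the SSL toggle above: the pool swaps its connection class,
# everything else about the client stays identical.
use_ssl = False  # flip to True for a TLS-terminated Redis
pool = redis.ConnectionPool(
    host='localhost',
    port=6379,
    db=0,
    connection_class=SSLConnection if use_ssl else Connection,
)
client = redis.Redis(connection_pool=pool)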
import redis
+from redis.connection import SSLConnection, Connection
from flask import request
from flask_session import Session, SqlAlchemySessionInterface, RedisSessionInterface
from flask_session.sessions import total_seconds
@@ -23,16 +24,21 @@ def init_app(app):
    if session_type == 'sqlalchemy':
        app.session_interface = sqlalchemy_session_interface
    elif session_type == 'redis':
+       connection_class = Connection
+       if app.config.get('SESSION_REDIS_USE_SSL', False):
+           connection_class = SSLConnection
+
        sess_redis_client = redis.Redis()
        sess_redis_client.connection_pool = redis.ConnectionPool(**{
            'host': app.config.get('SESSION_REDIS_HOST', 'localhost'),
            'port': app.config.get('SESSION_REDIS_PORT', 6379),
+           'username': app.config.get('SESSION_REDIS_USERNAME', None),
            'password': app.config.get('SESSION_REDIS_PASSWORD', None),
            'db': app.config.get('SESSION_REDIS_DB', 2),
            'encoding': 'utf-8',
            'encoding_errors': 'strict',
            'decode_responses': False
-       })
+       }, connection_class=connection_class)

        app.extensions['session_redis'] = sess_redis_client
......
@@ -21,7 +21,7 @@ class TimestampField(fields.Raw):
def email(email):
    # Define a regex pattern for email addresses
-   pattern = r"^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$"
+   pattern = r"^[\w\.-]+@([\w-]+\.)+[\w-]{2,}$"
    # Check if the email matches the pattern
    if re.match(pattern, email) is not None:
        return email
......
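The tightened pattern requires at least one dotted label and a final label of two or more word characters, so single-letter TLDs are rejected. Note it also drops '+' from the allowed local-part characters, which disallows plus-addressing. A quick check:

import re

# The new pattern from the diff, exercised against a few candidates.
pattern = r"^[\w\.-]+@([\w-]+\.)+[\w-]{2,}$"
for candidate in ['user@example.com', 'first.last@sub.example.org',
                  'user@example.c', 'user+tag@example.com']:
    print(candidate, bool(re.match(pattern, candidate)))
# -> True, True, False, False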
@@ -18,6 +18,7 @@ from services.errors.account import NoPermissionError
from services.errors.dataset import DatasetNameDuplicateError
from services.errors.document import DocumentIndexingError
from services.errors.file import FileNotExistsError
+from tasks.deal_dataset_vector_index_task import deal_dataset_vector_index_task
from tasks.document_indexing_task import document_indexing_task
@@ -97,7 +98,12 @@ class DatasetService:
    def update_dataset(dataset_id, data, user):
        dataset = DatasetService.get_dataset(dataset_id)
        DatasetService.check_dataset_permission(dataset, user)
+       if dataset.indexing_technique != data['indexing_technique']:
+           # if update indexing_technique
+           if data['indexing_technique'] == 'economy':
+               deal_dataset_vector_index_task.delay(dataset_id, 'remove')
+           elif data['indexing_technique'] == 'high_quality':
+               deal_dataset_vector_index_task.delay(dataset_id, 'add')

        filtered_data = {k: v for k, v in data.items() if v is not None or k == 'description'}
        filtered_data['updated_by'] = user.id
......
@@ -62,6 +62,8 @@ class ProviderService:
    @staticmethod
    def validate_provider_configs(tenant, provider_name: ProviderName, configs: Union[dict | str]):
+       if current_app.config['DISABLE_PROVIDER_CONFIG_VALIDATION']:
+           return
+
        llm_provider_service = LLMProviderService(tenant.id, provider_name.value)
        return llm_provider_service.config_validate(configs)
......
import logging
import time

import click
from celery import shared_task
from llama_index.data_structs.node_v2 import DocumentRelationship, Node

from core.index.vector_index import VectorIndex
from extensions.ext_database import db
from models.dataset import DocumentSegment, Document, Dataset


@shared_task
def deal_dataset_vector_index_task(dataset_id: str, action: str):
    """
    Async deal dataset from index
    :param dataset_id: dataset_id
    :param action: action
    Usage: deal_dataset_vector_index_task.delay(dataset_id, action)
    """
    logging.info(click.style('Start deal dataset vector index: {}'.format(dataset_id), fg='green'))
    start_at = time.perf_counter()

    try:
        dataset = Dataset.query.filter_by(
            id=dataset_id
        ).first()

        if not dataset:
            raise Exception('Dataset not found')

        documents = Document.query.filter_by(dataset_id=dataset_id).all()
        if documents:
            vector_index = VectorIndex(dataset=dataset)
            for document in documents:
                # delete from vector index
                if action == "remove":
                    vector_index.del_doc(document.id)
                elif action == "add":
                    segments = db.session.query(DocumentSegment).filter(
                        DocumentSegment.document_id == document.id,
                        DocumentSegment.enabled == True
                    ).order_by(DocumentSegment.position.asc()).all()

                    nodes = []
                    previous_node = None
                    for segment in segments:
                        relationships = {
                            DocumentRelationship.SOURCE: document.id
                        }

                        if previous_node:
                            relationships[DocumentRelationship.PREVIOUS] = previous_node.doc_id
                            previous_node.relationships[DocumentRelationship.NEXT] = segment.index_node_id

                        node = Node(
                            doc_id=segment.index_node_id,
                            doc_hash=segment.index_node_hash,
                            text=segment.content,
                            extra_info=None,
                            node_info=None,
                            relationships=relationships
                        )

                        previous_node = node
                        nodes.append(node)

                    # save vector index
                    vector_index.add_nodes(
                        nodes=nodes,
                        duplicate_check=True
                    )

        end_at = time.perf_counter()
        logging.info(
            click.style('Deal dataset vector index: {} latency: {}'.format(dataset_id, end_at - start_at), fg='green'))
    except Exception:
        logging.exception("Deal dataset vector index failed")
@@ -36,14 +36,18 @@ services:
      # It is consistent with the configuration in the 'redis' service below.
      REDIS_HOST: redis
      REDIS_PORT: 6379
+     REDIS_USERNAME: ''
      REDIS_PASSWORD: difyai123456
+     REDIS_USE_SSL: 'false'
      # use redis db 0 for redis cache
      REDIS_DB: 0
      # The configurations of session. Supported values are `sqlalchemy`, `redis`
      SESSION_TYPE: redis
      SESSION_REDIS_HOST: redis
      SESSION_REDIS_PORT: 6379
+     SESSION_REDIS_USERNAME: ''
      SESSION_REDIS_PASSWORD: difyai123456
+     SESSION_REDIS_USE_SSL: 'false'
      # use redis db 2 for session store
      SESSION_REDIS_DB: 2
      # The configurations of celery broker.
@@ -129,8 +133,10 @@ services:
      # The configurations of redis cache connection.
      REDIS_HOST: redis
      REDIS_PORT: 6379
+     REDIS_USERNAME: ''
      REDIS_PASSWORD: difyai123456
      REDIS_DB: 0
+     REDIS_USE_SSL: 'false'
      # The configurations of celery broker.
      CELERY_BROKER_URL: redis://:difyai123456@redis:6379/1
      # The type of storage to use for storing user files. Supported values are `local` and `s3`, Default: `local`
......
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
lerna-debug.log*
# Diagnostic reports (https://nodejs.org/api/report.html)
report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
# Runtime data
pids
*.pid
*.seed
*.pid.lock
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
# Coverage directory used by tools like istanbul
coverage
*.lcov
# nyc test coverage
.nyc_output
# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
.grunt
# Bower dependency directory (https://bower.io/)
bower_components
# node-waf configuration
.lock-wscript
# Compiled binary addons (https://nodejs.org/api/addons.html)
build/Release
# Dependency directories
node_modules/
jspm_packages/
# TypeScript v1 declaration files
typings/
# TypeScript cache
*.tsbuildinfo
# Optional npm cache directory
.npm
# Optional eslint cache
.eslintcache
# Microbundle cache
.rpt2_cache/
.rts2_cache_cjs/
.rts2_cache_es/
.rts2_cache_umd/
# Optional REPL history
.node_repl_history
# Output of 'npm pack'
*.tgz
# Yarn Integrity file
.yarn-integrity
# dotenv environment variables file
.env
.env.test
# parcel-bundler cache (https://parceljs.org/)
.cache
# Next.js build output
.next
# Nuxt.js build / generate output
.nuxt
dist
# Gatsby files
.cache/
# Comment in the public line if your project uses Gatsby and *not* Next.js
# https://nextjs.org/blog/next-9-1#public-directory-support
# public
# vuepress build output
.vuepress/dist
# Serverless directories
.serverless/
# FuseBox cache
.fusebox/
# DynamoDB Local files
.dynamodb/
# TernJS port file
.tern-port
# npm
package-lock.json
# yarn
.pnp.cjs
.pnp.loader.mjs
.yarn/
yarn.lock
.yarnrc.yml
# pnpm
pnpm-lock.yaml
\ No newline at end of file
This diff is collapsed.
const registerAPI = function (app) {
  app.post('/login', async (req, res) => {
    res.send({
      result: 'success'
    })
  })

  // get user info
  app.get('/account/profile', async (req, res) => {
    res.send({
      id: '11122222',
      name: 'Joel',
      email: 'iamjoel007@gmail.com'
    })
  })

  // logout
  app.get('/logout', async (req, res) => {
    res.send({
      result: 'success'
    })
  })

  // Langgenius version
  app.get('/version', async (req, res) => {
    res.send({
      current_version: 'v1.0.0',
      latest_version: 'v1.0.0',
      upgradeable: true,
      compatible_upgrade: true
    })
  })
}

module.exports = registerAPI
const registerAPI = function (app) {
  app.get("/datasets/:id/documents", async (req, res) => {
    if (req.params.id === "0") res.send({ data: [] });
    else {
      res.send({
        data: [
          {
            id: 1,
            name: "Steve Jobs' life",
            words: "70k",
            word_count: 100,
            updated_at: 1681801029,
            indexing_status: "completed",
            archived: true,
            enabled: false,
            data_source_info: {
              upload_file: {
                // id: string
                // name: string
                // size: number
                // mime_type: string
                // created_at: number
                // created_by: string
                extension: "pdf",
              },
            },
          },
          {
            id: 2,
            name: "Steve Jobs' life",
            word_count: "10k",
            hit_count: 10,
            updated_at: 1681801029,
            indexing_status: "waiting",
            archived: true,
            enabled: false,
            data_source_info: {
              upload_file: {
                extension: "json",
              },
            },
          },
          {
            id: 3,
            name: "Steve Jobs' life xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
            word_count: "100k",
            hit_count: 0,
            updated_at: 1681801029,
            indexing_status: "indexing",
            archived: false,
            enabled: true,
            data_source_info: {
              upload_file: {
                extension: "txt",
              },
            },
          },
          {
            id: 4,
            name: "Steve Jobs' life xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
            word_count: "100k",
            hit_count: 0,
            updated_at: 1681801029,
            indexing_status: "splitting",
            archived: false,
            enabled: true,
            data_source_info: {
              upload_file: {
                extension: "md",
              },
            },
          },
          {
            id: 5,
            name: "Steve Jobs' life",
            word_count: "100k",
            hit_count: 0,
            updated_at: 1681801029,
            indexing_status: "error",
            archived: false,
            enabled: false,
            data_source_info: {
              upload_file: {
                extension: "html",
              },
            },
          },
        ],
        total: 100,
        id: req.params.id,
      });
    }
  });

  app.get("/datasets/:id/documents/:did/segments", async (req, res) => {
    if (req.params.id === "0") res.send({ data: [] });
    else {
      res.send({
        data: new Array(100).fill({
          id: 1234,
          content: `他的坚持让我很为难。众所周知他非常注意保护自己的隐私,而我想他应该从来没有看过我写的书。也许将来的某个时候吧,我还是这么说。但是,到了2009年,他的妻子劳伦·鲍威尔(Laurene Powell)直言不讳地对我说:“如果你真的打算写一本关于史蒂夫的书,最好现在就开始。”他当时刚刚第二次因病休假。我向劳伦坦承,当乔布斯第一次提出这个想法时,我并不知道他病了。几乎没有人知道,她说。他是在接受癌症手术之前给我打的电话,直到今天他还将此事作为一个秘密,她这么解释道。\n
他的坚持让我很为难。众所周知他非常注意保护自己的隐私,而我想他应该从来没有看过我写的书。也许将来的某个时候吧,我还是这么说。但是,到了2009年,他的妻子劳伦·鲍威尔(Laurene Powell)直言不讳地对我说:“如果你真的打算写一本关于史蒂夫的书,最好现在就开始。”他当时刚刚第二次因病休假。我向劳伦坦承,当乔布斯第一次提出这个想法时,我并不知道他病了。几乎没有人知道,她说。他是在接受癌症手术之前给我打的电话,直到今天他还将此事作为一个秘密,她这么解释道。`,
          enabled: true,
          keyWords: [
            "劳伦·鲍威尔",
            "劳伦·鲍威尔",
            "手术",
            "秘密",
            "癌症",
            "乔布斯",
            "史蒂夫",
            "书",
            "休假",
            "坚持",
            "隐私",
          ],
          word_count: 120,
          hit_count: 100,
          status: "ok",
          index_node_hash: "index_node_hash value",
        }),
        limit: 100,
        has_more: true,
      });
    }
  });

  // get doc detail
  app.get("/datasets/:id/documents/:did", async (req, res) => {
    const fixedParams = {
      // originInfo: {
      originalFilename: "Original filename",
      originalFileSize: "16mb",
      uploadDate: "2023-01-01",
      lastUpdateDate: "2023-01-05",
      source: "Source",
      // },
      // technicalParameters: {
      segmentSpecification: "909090",
      segmentLength: 100,
      avgParagraphLength: 130,
    };
    const bookData = {
      doc_type: "book",
      doc_metadata: {
        title: "机器学习实战",
        language: "zh",
        author: "Peter Harrington",
        publisher: "人民邮电出版社",
        publicationDate: "2013-01-01",
        ISBN: "9787115335500",
        category: "技术",
      },
    };
    const webData = {
      doc_type: "webPage",
      doc_metadata: {
        title: "深度学习入门教程",
        url: "https://www.example.com/deep-learning-tutorial",
        language: "zh",
        publishDate: "2020-05-01",
        authorPublisher: "张三",
        topicsKeywords: "深度学习, 人工智能, 教程",
        description:
          "这是一篇详细的深度学习入门教程,适用于对人工智能和深度学习感兴趣的初学者。",
      },
    };
    const postData = {
      doc_type: "socialMediaPost",
      doc_metadata: {
        platform: "Twitter",
        authorUsername: "example_user",
        publishDate: "2021-08-15",
        postURL: "https://twitter.com/example_user/status/1234567890",
        topicsTags:
          "AI, DeepLearning, Tutorial, Example, Example2, Example3, AI, DeepLearning, Tutorial, Example, Example2, Example3, AI, DeepLearning, Tutorial, Example, Example2, Example3,",
      },
    };
    res.send({
      id: "550e8400-e29b-41d4-a716-446655440000",
      position: 1,
      dataset_id: "550e8400-e29b-41d4-a716-446655440002",
      data_source_type: "upload_file",
      data_source_info: {
        upload_file: {
          extension: "html",
          id: "550e8400-e29b-41d4-a716-446655440003",
        },
      },
      dataset_process_rule_id: "550e8400-e29b-41d4-a716-446655440004",
      batch: "20230410123456123456",
      name: "example_document",
      created_from: "web",
      created_by: "550e8400-e29b-41d4-a716-446655440005",
      created_api_request_id: "550e8400-e29b-41d4-a716-446655440006",
      created_at: 1671269696,
      processing_started_at: 1671269700,
      word_count: 11,
      parsing_completed_at: 1671269710,
      cleaning_completed_at: 1671269720,
      splitting_completed_at: 1671269730,
      tokens: 10,
      indexing_latency: 5.0,
      completed_at: 1671269740,
      paused_by: null,
      paused_at: null,
      error: null,
      stopped_at: null,
      indexing_status: "completed",
      enabled: true,
      disabled_at: null,
      disabled_by: null,
      archived: false,
      archived_reason: null,
      archived_by: null,
      archived_at: null,
      updated_at: 1671269740,
      ...(req.params.did === "book"
        ? bookData
        : req.params.did === "web"
          ? webData
          : req.params.did === "post"
            ? postData
            : {}),
      segment_count: 10,
      hit_count: 9,
      status: "ok",
    });
  });

  // // logout
  // app.get("/logout", async (req, res) => {
  //   res.send({
  //     result: "success",
  //   });
  // });

  // // Langgenius version
  // app.get("/version", async (req, res) => {
  //   res.send({
  //     current_version: "v1.0.0",
  //     latest_version: "v1.0.0",
  //     upgradeable: true,
  //     compatible_upgrade: true,
  //   });
  // });
};

module.exports = registerAPI;
const registerAPI = function (app) {
  const conversationList = [
    {
      id: '1',
      name: '梦的解析',
      inputs: {
        book: '《梦的解析》',
        callMe: '大师',
      },
      chats: []
    },
    {
      id: '2',
      name: '生命的起源',
      inputs: {
        book: '《x x x》',
      }
    },
  ]

  // site info
  app.get('/apps/site/info', async (req, res) => {
    // const id = req.params.id
    res.send({
      enable_site: true,
      appId: '1',
      site: {
        title: 'Story Bot',
        description: '这是一款解梦聊天机器人,你可以选择你喜欢的解梦人进行解梦,这句话是客户端应用说明',
      },
      prompt_public: true, // id === '1',
      prompt_template: '你是我的解梦小助手,请参考 {{book}} 回答我有关梦境的问题。在回答前请称呼我为 {{myName}}。',
    })
  })

  app.post('/apps/:id/chat-messages', async (req, res) => {
    const conversationId = req.body.conversation_id ? req.body.conversation_id : Date.now() + ''
    res.send({
      id: Date.now() + '',
      conversation_id: conversationId,
      answer: 'balabababab'
    })
  })

  app.post('/apps/:id/completion-messages', async (req, res) => {
    res.send({
      id: Date.now() + '',
      answer: `做为一个AI助手,我可以为你提供随机生成的段落,这些段落可以用于测试、占位符、或者其他目的。以下是一个随机生成的段落:
“随着科技的不断发展,越来越多的人开始意识到人工智能的重要性。人工智能已经成为我们生活中不可或缺的一部分,它可以帮助我们完成很多繁琐的工作,也可以为我们提供更智能、更便捷的服务。虽然人工智能带来了很多好处,但它也面临着很多挑战。例如,人工智能的算法可能会出现偏见,导致对某些人群不公平。此外,人工智能的发展也可能会导致一些工作的失业。因此,我们需要不断地研究人工智能的发展,以确保它能够为人类带来更多的好处。”`
    })
  })

  // share api
  // chat list
  app.get('/apps/:id/coversations', async (req, res) => {
    res.send({
      data: conversationList
    })
  })

  app.get('/apps/:id/variables', async (req, res) => {
    res.send({
      variables: [
        {
          key: 'book',
          name: '书',
          value: '《梦境解析》',
          type: 'string'
        },
        {
          key: 'myName',
          name: '称呼',
          value: '',
          type: 'string'
        }
      ],
    })
  })
}

module.exports = registerAPI
// const chatList = [
// {
// id: 1,
// content: 'AI 开场白',
// isAnswer: true,
// },
// {
// id: 2,
// content: '梦见在山上手撕鬼子,大师解解梦',
// more: { time: '5.6 秒' },
// },
// {
// id: 3,
// content: '梦境通常是个人内心深处的反映,很难确定每个人梦境的确切含义,因为它们可能会受到梦境者的文化背景、生活经验和情感状态等多种因素的影响。',
// isAnswer: true,
// more: { time: '99 秒' },
// },
// {
// id: 4,
// content: '梦见在山上手撕鬼子,大师解解梦',
// more: { time: '5.6 秒' },
// },
// {
// id: 5,
// content: '梦见在山上手撕鬼子,大师解解梦',
// more: { time: '5.6 秒' },
// },
// {
// id: 6,
// content: '梦见在山上手撕鬼子,大师解解梦',
// more: { time: '5.6 秒' },
// },
// ]
\ No newline at end of file
const registerAPI = function (app) {
  app.get('/demo', async (req, res) => {
    res.send({
      des: 'get res'
    })
  })

  app.post('/demo', async (req, res) => {
    res.send({
      des: 'post res'
    })
  })
}

module.exports = registerAPI
\ No newline at end of file
const express = require('express')
const app = express()
const bodyParser = require('body-parser')
const cors = require('cors')

const commonAPI = require('./api/common')
const demoAPI = require('./api/demo')
const appsApi = require('./api/apps')
const debugAPI = require('./api/debug')
const datasetsAPI = require('./api/datasets')

const port = 3001

app.use(bodyParser.json()) // for parsing application/json
app.use(bodyParser.urlencoded({ extended: true })) // for parsing application/x-www-form-urlencoded

const corsOptions = {
  origin: true,
  credentials: true,
}
app.use(cors(corsOptions)) // for cross origin
app.options('*', cors(corsOptions)) // include before other routes

demoAPI(app)
commonAPI(app)
appsApi(app)
debugAPI(app)
datasetsAPI(app)

app.get('/', (req, res) => {
  res.send('rootpath')
})

app.listen(port, () => {
  console.log(`Mock run on port ${port}`)
})

const sleep = (ms) => {
  return new Promise(resolve => setTimeout(resolve, ms))
}
{
  "name": "server",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "dev": "nodemon app.js",
    "start": "node app.js",
    "tcp": "node tcp.js"
  },
  "keywords": [],
  "author": "",
  "license": "MIT",
  "engines": {
    "node": ">=16.0.0"
  },
  "dependencies": {
    "body-parser": "^1.20.2",
    "cors": "^2.8.5",
    "express": "4.18.2",
    "express-jwt": "8.4.1"
  },
  "devDependencies": {
    "nodemon": "2.0.21"
  }
}
# For production release, change this to PRODUCTION
NEXT_PUBLIC_DEPLOY_ENV=DEVELOPMENT
# The deployment edition, SELF_HOSTED or CLOUD
NEXT_PUBLIC_EDITION=SELF_HOSTED
# The base URL of console application, refers to the Console base URL of WEB service if console domain is
# different from api or web app domain.
# example: http://cloud.dify.ai/console/api
NEXT_PUBLIC_API_PREFIX=http://localhost:5001/console/api
# The URL for Web APP, refers to the Web App base URL of WEB service if web app domain is different from
# console or api domain.
# example: http://udify.app/api
NEXT_PUBLIC_PUBLIC_API_PREFIX=http://localhost:5001/api
\ No newline at end of file
'use client'
-import type { FC } from 'react'
+import { FC, useRef } from 'react'
import React, { useEffect, useState } from 'react'
import { usePathname, useRouter, useSelectedLayoutSegments } from 'next/navigation'
import useSWR, { SWRConfig } from 'swr'
@@ -8,7 +8,7 @@ import { fetchAppList } from '@/service/apps'
import { fetchDatasets } from '@/service/datasets'
import { fetchLanggeniusVersion, fetchUserProfile, logout } from '@/service/common'
import Loading from '@/app/components/base/loading'
-import AppContext from '@/context/app-context'
+import { AppContextProvider } from '@/context/app-context'
import DatasetsContext from '@/context/datasets-context'
import type { LangGeniusVersionResponse, UserProfileResponse } from '@/models/common'
@@ -23,6 +23,7 @@ const CommonLayout: FC<ICommonLayoutProps> = ({ children }) => {
  const pattern = pathname.replace(/.*\/app\//, '')
  const [idOrMethod] = pattern.split('/')
  const isNotDetailPage = idOrMethod === 'list'
+ const pageContainerRef = useRef<HTMLDivElement>(null)

  const appId = isNotDetailPage ? '' : idOrMethod
@@ -71,14 +72,14 @@ const CommonLayout: FC<ICommonLayoutProps> = ({ children }) => {
    <SWRConfig value={{
      shouldRetryOnError: false
    }}>
-     <AppContext.Provider value={{ apps: appList.data, mutateApps, userProfile, mutateUserProfile }}>
+     <AppContextProvider value={{ apps: appList.data, mutateApps, userProfile, mutateUserProfile, pageContainerRef }}>
        <DatasetsContext.Provider value={{ datasets: datasetList?.data || [], mutateDatasets, currentDataset }}>
-         <div className='relative flex flex-col h-full overflow-scroll bg-gray-100'>
+         <div ref={pageContainerRef} className='relative flex flex-col h-full overflow-auto bg-gray-100'>
            <Header isBordered={['/apps', '/datasets'].includes(pathname)} curApp={curApp as any} appItems={appList.data} userProfile={userProfile} onLogout={onLogout} langeniusVersionInfo={langeniusVersionInfo} />
            {children}
          </div>
        </DatasetsContext.Provider>
-     </AppContext.Provider>
+     </AppContextProvider>
    </SWRConfig>
  )
}
......
@@ -49,7 +49,7 @@ const AppDetailLayout: FC<IAppDetailLayoutProps> = (props) => {
    return null

  return (
    <div className={cn(s.app, 'flex', 'overflow-hidden')}>
-     <AppSideBar title={response.name} desc={appModeName} navigation={navigation} />
+     <AppSideBar title={response.name} icon={response.icon} icon_background={response.icon_background} desc={appModeName} navigation={navigation} />
      <div className="bg-white grow">{children}</div>
    </div>
  )
......
@@ -16,10 +16,12 @@ import AppsContext from '@/context/app-context'
export type AppCardProps = {
  app: App
+ onDelete?: () => void
}

const AppCard = ({
  app,
+ onDelete
}: AppCardProps) => {
  const { t } = useTranslation()
  const { notify } = useContext(ToastContext)
@@ -35,6 +37,8 @@ const AppCard = ({
    try {
      await deleteApp(app.id)
      notify({ type: 'success', message: t('app.appDeleted') })
+     if (onDelete)
+       onDelete()
      mutateApps()
    }
    catch (e: any) {
@@ -47,7 +51,7 @@ const AppCard = ({
    <>
      <Link href={`/app/${app.id}/overview`} className={style.listItem}>
        <div className={style.listItemTitle}>
-         <AppIcon size='small' />
+         <AppIcon size='small' icon={app.icon} background={app.icon_background} />
          <div className={style.listItemHeading}>
            <div className={style.listItemHeadingContent}>{app.name}</div>
          </div>
......
'use client'
-import { useEffect } from 'react'
+import { useEffect, useRef } from 'react'
+import useSWRInfinite from 'swr/infinite'
+import { debounce } from 'lodash-es'
import AppCard from './AppCard'
import NewAppCard from './NewAppCard'
-import { useAppContext } from '@/context/app-context'
+import { AppListResponse } from '@/models/app'
+import { fetchAppList } from '@/service/apps'
+import { useSelector } from '@/context/app-context'

+const getKey = (pageIndex: number, previousPageData: AppListResponse) => {
+  if (!pageIndex || previousPageData.has_more)
+    return { url: 'apps', params: { page: pageIndex + 1, limit: 30 } }
+  return null
+}

const Apps = () => {
- const { apps, mutateApps } = useAppContext()
+ const { data, isLoading, setSize, mutate } = useSWRInfinite(getKey, fetchAppList, { revalidateFirstPage: false })
+ const loadingStateRef = useRef(false)
+ const pageContainerRef = useSelector(state => state.pageContainerRef)
+ const anchorRef = useRef<HTMLAnchorElement>(null)

  useEffect(() => {
-   mutateApps()
+   loadingStateRef.current = isLoading
+ }, [isLoading])

+ useEffect(() => {
+   const onScroll = debounce(() => {
+     if (!loadingStateRef.current) {
+       const { scrollTop, clientHeight } = pageContainerRef.current!
+       const anchorOffset = anchorRef.current!.offsetTop
+       if (anchorOffset - scrollTop - clientHeight < 100) {
+         setSize(size => size + 1)
+       }
+     }
+   }, 50)
+
+   pageContainerRef.current?.addEventListener('scroll', onScroll)
+   return () => pageContainerRef.current?.removeEventListener('scroll', onScroll)
  }, [])

  return (
    <nav className='grid content-start grid-cols-1 gap-4 px-12 pt-8 sm:grid-cols-2 lg:grid-cols-4 grow shrink-0'>
-     {apps.map(app => (<AppCard key={app.id} app={app} />))}
-     <NewAppCard />
+     {data?.map(({ data: apps }) => apps.map(app => (
+       <AppCard key={app.id} app={app} onDelete={mutate} />
+     )))}
+     <NewAppCard ref={anchorRef} onSuccess={mutate} />
    </nav>
  )
}
......
'use client'
-import { useState } from 'react'
+import { forwardRef, useState } from 'react'
import classNames from 'classnames'
import { useTranslation } from 'react-i18next'
import style from '../list.module.css'
import NewAppDialog from './NewAppDialog'

-const CreateAppCard = () => {
+export type CreateAppCardProps = {
+  onSuccess?: () => void
+}
+
+const CreateAppCard = forwardRef<HTMLAnchorElement, CreateAppCardProps>(({ onSuccess }, ref) => {
  const { t } = useTranslation()
  const [showNewAppDialog, setShowNewAppDialog] = useState(false)

  return (
-   <a className={classNames(style.listItem, style.newItemCard)} onClick={() => setShowNewAppDialog(true)}>
+   <a ref={ref} className={classNames(style.listItem, style.newItemCard)} onClick={() => setShowNewAppDialog(true)}>
      <div className={style.listItemTitle}>
        <span className={style.newItemIcon}>
          <span className={classNames(style.newItemIconImage, style.newItemIconAdd)} />
@@ -21,9 +24,9 @@ const CreateAppCard = () => {
        </div>
      </div>
      {/* <div className='text-xs text-gray-500'>{t('app.createFromConfigFile')}</div> */}
-     <NewAppDialog show={showNewAppDialog} onClose={() => setShowNewAppDialog(false)} />
+     <NewAppDialog show={showNewAppDialog} onSuccess={onSuccess} onClose={() => setShowNewAppDialog(false)} />
    </a>
  )
-}
+})

export default CreateAppCard
@@ -17,12 +17,15 @@ import { createApp, fetchAppTemplates } from '@/service/apps'
import AppIcon from '@/app/components/base/app-icon'
import AppsContext from '@/context/app-context'
import EmojiPicker from '@/app/components/base/emoji-picker'

type NewAppDialogProps = {
  show: boolean
  onSuccess?: () => void
  onClose?: () => void
}

const NewAppDialog = ({ show, onSuccess, onClose }: NewAppDialogProps) => {
  const router = useRouter()
  const { notify } = useContext(ToastContext)
  const { t } = useTranslation()
@@ -31,6 +34,11 @@ const NewAppDialog = ({ show, onSuccess, onClose }: NewAppDialogProps) => {
  const [newAppMode, setNewAppMode] = useState<AppMode>()
  const [isWithTemplate, setIsWithTemplate] = useState(false)
  const [selectedTemplateIndex, setSelectedTemplateIndex] = useState<number>(-1)

  // Emoji Picker
  const [showEmojiPicker, setShowEmojiPicker] = useState(false)
  const [emoji, setEmoji] = useState({ icon: '🍌', icon_background: '#FFEAD5' })

  const mutateApps = useContextSelector(AppsContext, state => state.mutateApps)
  const { data: templates, mutate } = useSWR({ url: '/app-templates' }, fetchAppTemplates)
@@ -67,9 +75,13 @@ const NewAppDialog = ({ show, onSuccess, onClose }: NewAppDialogProps) => {
    try {
      const app = await createApp({
        name,
        icon: emoji.icon,
        icon_background: emoji.icon_background,
        mode: isWithTemplate ? templates.data[selectedTemplateIndex].mode : newAppMode!,
        config: isWithTemplate ? templates.data[selectedTemplateIndex].model_config : undefined,
      })
      if (onSuccess)
        onSuccess()
      if (onClose)
        onClose()
      notify({ type: 'success', message: t('app.newApp.appCreated') })
@@ -80,9 +92,20 @@ const NewAppDialog = ({ show, onSuccess, onClose }: NewAppDialogProps) => {
      notify({ type: 'error', message: t('app.newApp.appCreateFailed') })
    }
    isCreatingRef.current = false
  }, [isWithTemplate, newAppMode, notify, router, templates, selectedTemplateIndex, emoji])

  return <>
    {showEmojiPicker && <EmojiPicker
      onSelect={(icon, icon_background) => {
        setEmoji({ icon, icon_background })
        setShowEmojiPicker(false)
      }}
      onClose={() => {
        setEmoji({ icon: '🍌', icon_background: '#FFEAD5' })
        setShowEmojiPicker(false)
      }}
    />}
    <Dialog
      show={show}
      title={t('app.newApp.startToCreate')}
@@ -96,7 +119,7 @@ const NewAppDialog = ({ show, onSuccess, onClose }: NewAppDialogProps) => {
      <h3 className={style.newItemCaption}>{t('app.newApp.captionName')}</h3>
      <div className='flex items-center justify-between gap-3 mb-8'>
        <AppIcon size='large' onClick={() => { setShowEmojiPicker(true) }} className='cursor-pointer' icon={emoji.icon} background={emoji.icon_background} />
        <input ref={nameInputRef} className='h-10 px-3 text-sm font-normal bg-gray-100 rounded-lg grow' />
      </div>
@@ -187,7 +210,7 @@ const NewAppDialog = ({ show, onSuccess, onClose }: NewAppDialogProps) => {
        )}
      </div>
    </Dialog>
  </>
}

export default NewAppDialog
@@ -155,6 +155,8 @@ const DatasetDetailLayout: FC<IAppDetailLayoutProps> = (props) => {
    <div className='flex' style={{ height: 'calc(100vh - 56px)' }}>
      {!hideSideBar && <AppSideBar
        title={datasetRes?.name || '--'}
        icon={datasetRes?.icon || 'https://static.dify.ai/images/dataset-default-icon.png'}
        icon_background={datasetRes?.icon_background || '#F5F5F5'}
        desc={datasetRes?.description || '--'}
        navigation={navigation}
        extraInfo={<ExtraInfo />}
...
@@ -18,16 +18,16 @@ import classNames from 'classnames'

export type DatasetCardProps = {
  dataset: DataSet
  onDelete?: () => void
}

const DatasetCard = ({
  dataset,
  onDelete,
}: DatasetCardProps) => {
  const { t } = useTranslation()
  const { notify } = useContext(ToastContext)
  const [showConfirmDelete, setShowConfirmDelete] = useState(false)
  const onDeleteClick: MouseEventHandler = useCallback((e) => {
    e.preventDefault()
@@ -37,7 +37,8 @@ const DatasetCard = ({
    try {
      await deleteDataset(dataset.id)
      notify({ type: 'success', message: t('dataset.datasetDeleted') })
      if (onDelete)
        onDelete()
    }
    catch (e: any) {
      notify({ type: 'error', message: `${t('dataset.datasetDeleteFailed')}${'message' in e ? `: ${e.message}` : ''}` })
...
'use client'

import { useEffect, useRef } from 'react'
import useSWRInfinite from 'swr/infinite'
import { debounce } from 'lodash-es'
import { DataSetListResponse } from '@/models/datasets'
import NewDatasetCard from './NewDatasetCard'
import DatasetCard from './DatasetCard'
import { fetchDatasets } from '@/service/datasets'
import { useSelector } from '@/context/app-context'

const getKey = (pageIndex: number, previousPageData: DataSetListResponse) => {
  if (!pageIndex || previousPageData.has_more)
    return { url: 'datasets', params: { page: pageIndex + 1, limit: 30 } }
  return null
}

const Datasets = () => {
  const { data, isLoading, setSize, mutate } = useSWRInfinite(getKey, fetchDatasets, { revalidateFirstPage: false })
  const loadingStateRef = useRef(false)
  const pageContainerRef = useSelector(state => state.pageContainerRef)
  const anchorRef = useRef<HTMLAnchorElement>(null)

  useEffect(() => {
    loadingStateRef.current = isLoading
  }, [isLoading])

  useEffect(() => {
    const onScroll = debounce(() => {
      if (!loadingStateRef.current) {
        const { scrollTop, clientHeight } = pageContainerRef.current!
        const anchorOffset = anchorRef.current!.offsetTop
        if (anchorOffset - scrollTop - clientHeight < 100) {
          setSize(size => size + 1)
        }
      }
    }, 50)
    pageContainerRef.current?.addEventListener('scroll', onScroll)
    return () => pageContainerRef.current?.removeEventListener('scroll', onScroll)
  }, [])

  return (
    <nav className='grid content-start grid-cols-1 gap-4 px-12 pt-8 sm:grid-cols-2 lg:grid-cols-4 grow shrink-0'>
      {data?.map(({ data: datasets }) => datasets.map(dataset => (
        <DatasetCard key={dataset.id} dataset={dataset} onDelete={mutate} />
      )))}
      <NewDatasetCard ref={anchorRef} />
    </nav>
  )
}
...
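The `getKey` above drives useSWRInfinite's pagination: page index 0 is always requested, each later page is requested only while the previous response reported `has_more`, and returning `null` tells SWR to stop fetching. A hedged sketch of how the per-page responses flatten into one list, assuming `DataSetListResponse` is roughly `{ data: DataSet[]; has_more: boolean }` as the rendering code implies:

```ts
// data is DataSetListResponse[] | undefined, one entry per fetched page;
// this is equivalent to the nested data?.map(...) used in the component.
const allDatasets = (data ?? []).flatMap(page => page.data)
```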
'use client'

import { forwardRef, useState } from 'react'
import classNames from 'classnames'
import { useTranslation } from 'react-i18next'
import style from '../list.module.css'

const CreateAppCard = forwardRef<HTMLAnchorElement>((_, ref) => {
  const { t } = useTranslation()
  const [showNewAppDialog, setShowNewAppDialog] = useState(false)
  return (
    <a ref={ref} className={classNames(style.listItem, style.newItemCard)} href='/datasets/create'>
      <div className={style.listItemTitle}>
        <span className={style.newItemIcon}>
          <span className={classNames(style.newItemIconImage, style.newItemIconAdd)} />
@@ -23,6 +23,6 @@ const CreateAppCard = () => {
      {/* <div className='text-xs text-gray-500'>{t('app.createFromConfigFile')}</div> */}
    </a>
  )
})

export default CreateAppCard
export async function GET(_request: Request) {
  return new Response('Hello, Next.js!')
}
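This is a Next.js App Router route handler; a GET request to whatever route this file is mounted at returns plain text. A hypothetical client call (the actual URL depends on the handler's location under `app/`, which the diff does not show):

```ts
const res = await fetch('/hello') // hypothetical path for illustration
console.log(await res.text()) // "Hello, Next.js!"
```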
@@ -15,7 +15,8 @@ export function randomString(length: number) {

export type IAppBasicProps = {
  iconType?: 'app' | 'api' | 'dataset'
  icon?: string
  icon_background?: string
  name: string
  type: string | React.ReactNode
  hoverTip?: string
@@ -41,15 +42,20 @@ const ICON_MAP = {
  'dataset': <AppIcon innerIcon={DatasetSvg} className='!border-[0.5px] !border-indigo-100 !bg-indigo-25' />
}

export default function AppBasic({ icon, icon_background, name, type, hoverTip, textStyle, iconType = 'app' }: IAppBasicProps) {
  return (
    <div className="flex items-start">
      {icon && icon_background && iconType === 'app' && (
        <div className='flex-shrink-0 mr-3'>
          <AppIcon icon={icon} background={icon_background} />
        </div>
      )}
      {iconType !== 'app' &&
        <div className='flex-shrink-0 mr-3'>
          {ICON_MAP[iconType]}
        </div>
      }
      <div className="group">
        <div className={`flex flex-row items-center text-sm font-semibold text-gray-700 group-hover:text-gray-900 ${textStyle?.main}`}>
          {name}
...
@@ -7,6 +7,8 @@ export type IAppDetailNavProps = {
  iconType?: 'app' | 'dataset'
  title: string
  desc: string
  icon: string
  icon_background: string
  navigation: Array<{
    name: string
    href: string
@@ -16,13 +18,12 @@ export type IAppDetailNavProps = {
  extraInfo?: React.ReactNode
}

const AppDetailNav: FC<IAppDetailNavProps> = ({ title, desc, icon, icon_background, navigation, extraInfo, iconType = 'app' }) => {
  return (
    <div className="flex flex-col w-56 overflow-y-auto bg-white border-r border-gray-200 shrink-0">
      <div className="flex flex-shrink-0 p-4">
        <AppBasic iconType={iconType} icon={icon} icon_background={icon_background} name={title} type={desc} />
      </div>
      <nav className="flex-1 p-4 space-y-1 bg-white">
        {navigation.map((item, index) => {
...
'use client'
import React from 'react'
import { t } from 'i18next'
import copy from 'copy-to-clipboard'
import Tooltip from '@/app/components/base/tooltip'
import s from './style.module.css'

type ICopyBtnProps = {
  value: string
  className?: string
}

const CopyBtn = ({
  value,
  className,
}: ICopyBtnProps) => {
  const [isCopied, setIsCopied] = React.useState(false)

  return (
    <div className={`${className}`}>
      <Tooltip
        selector="copy-btn-tooltip"
        content={(isCopied ? t('appApi.copied') : t('appApi.copy')) as string}
        className='z-10'
      >
        <div
          className={'box-border p-0.5 flex items-center justify-center rounded-md bg-white cursor-pointer'}
          style={{
            boxShadow: '0px 4px 8px -2px rgba(16, 24, 40, 0.1), 0px 2px 4px -2px rgba(16, 24, 40, 0.06)',
          }}
          onClick={() => {
            copy(value)
            setIsCopied(true)
          }}
        >
          <div className={`w-6 h-6 hover:bg-gray-50 ${s.copyIcon} ${isCopied ? s.copied : ''}`}></div>
        </div>
      </Tooltip>
    </div>
  )
}

export default CopyBtn
.copyIcon {
  background-image: url(~@/app/components/develop/secret-key/assets/copy.svg);
  background-position: center;
  background-repeat: no-repeat;
}

.copyIcon:hover {
  background-image: url(~@/app/components/develop/secret-key/assets/copy-hover.svg);
  background-position: center;
  background-repeat: no-repeat;
}

.copyIcon.copied {
  background-image: url(~@/app/components/develop/secret-key/assets/copied.svg);
}
\ No newline at end of file
@@ -17,6 +17,7 @@ import AppContext from '@/context/app-context'
import { Markdown } from '@/app/components/base/markdown'
import LoadingAnim from './loading-anim'
import { formatNumber } from '@/utils/format'
import CopyBtn from './copy-btn'

const stopIcon = (
  <svg width="14" height="14" viewBox="0 0 14 14" fill="none" xmlns="http://www.w3.org/2000/svg">
@@ -290,70 +291,76 @@ const Answer: FC<IAnswerProps> = ({ item, feedbackDisabled = false, isHideFeedba
          </div>
        }
      </div>
      <div className={s.answerWrapWrap}>
        <div className={`${s.answerWrap} ${showEdit ? 'w-full' : ''}`}>
          <div className={`${s.answer} relative text-sm text-gray-900`}>
            <div className={'ml-2 py-3 px-4 bg-gray-100 rounded-tr-2xl rounded-b-2xl'}>
              {item.isOpeningStatement && (
                <div className='flex items-center mb-1 gap-1'>
                  <OpeningStatementIcon />
                  <div className='text-xs text-gray-500'>{t('appDebug.openingStatement.title')}</div>
                </div>
              )}
              {(isResponsing && !content) ? (
                <div className='flex items-center justify-center w-6 h-5'>
                  <LoadingAnim type='text' />
                </div>
              ) : (
                <Markdown content={content} />
              )}
              {!showEdit
                ? (annotation?.content
                  && <>
                    <Divider name={annotation?.account?.name || userProfile?.name} />
                    {annotation.content}
                  </>)
                : <>
                  <Divider name={annotation?.account?.name || userProfile?.name} />
                  <AutoHeightTextarea
                    placeholder={t('appLog.detail.operation.annotationPlaceholder') as string}
                    value={inputValue}
                    onChange={e => setInputValue(e.target.value)}
                    minHeight={58}
                    className={`${cn(s.textArea)} !py-2 resize-none block w-full !px-3 bg-gray-50 border border-gray-200 rounded-md shadow-sm focus:outline-none focus:ring-blue-500 focus:border-blue-500 sm:text-sm text-gray-700 tracking-[0.2px]`}
                  />
                  <div className="mt-2 flex flex-row">
                    <Button
                      type='primary'
                      className='mr-2'
                      loading={loading}
                      onClick={async () => {
                        if (!inputValue)
                          return
                        setLoading(true)
                        const res = await onSubmitAnnotation?.(id, inputValue)
                        if (res)
                          setAnnotation({ ...annotation, content: inputValue } as any)
                        setLoading(false)
                        setShowEdit(false)
                      }}>{t('common.operation.confirm')}</Button>
                    <Button
                      onClick={() => {
                        setInputValue(annotation?.content ?? '')
                        setShowEdit(false)
                      }}>{t('common.operation.cancel')}</Button>
                  </div>
                </>
              }
            </div>
            <div className='absolute top-[-14px] right-[-14px] flex flex-row justify-end gap-1'>
              <CopyBtn
                value={content}
                className={cn(s.copyBtn, 'mr-1')}
              />
              {!feedbackDisabled && !item.feedbackDisabled && renderItemOperation(displayScene !== 'console')}
              {/* Admin feedback is displayed only in the background. */}
              {!feedbackDisabled && renderFeedbackRating(localAdminFeedback?.rating, false, false)}
              {/* User feedback must be displayed */}
              {!feedbackDisabled && renderFeedbackRating(feedback?.rating, !isHideFeedbackEdit, displayScene !== 'console')}
            </div>
          </div>
          {more && <MoreInfo more={more} isQuestion={false} />}
        </div>
      </div>
    </div>
@@ -367,7 +374,7 @@ const Question: FC<IQuestionProps> = ({ id, content, more, useCurrentUserAvatar
  const userName = userProfile?.name
  return (
    <div className='flex items-start justify-end' key={id}>
      <div className={s.questionWrapWrap}>
        <div className={`${s.question} relative text-sm text-gray-900`}>
          <div
            className={'mr-2 py-3 px-4 bg-blue-500 rounded-tl-2xl rounded-b-2xl'}
...
@@ -38,6 +38,31 @@
  background: url(./icons/answer.svg) no-repeat;
}

.copyBtn {
  display: none;
}

.answerWrapWrap,
.questionWrapWrap {
  width: 0;
  flex-grow: 1;
}

.questionWrapWrap {
  display: flex;
  justify-content: flex-end;
}

.answerWrap,
.question {
  display: inline-block;
  max-width: 100%;
}

.answerWrap:hover .copyBtn {
  display: block;
}

.answerWrap .itemOperation {
  display: none;
}
...
@@ -11,6 +11,7 @@ import type { CompletionParams } from '@/models/debug'
import { Cog8ToothIcon, InformationCircleIcon, ChevronDownIcon } from '@heroicons/react/24/outline'
import { AppType } from '@/types/app'
import { TONE_LIST } from '@/config'
import Toast from '@/app/components/base/toast'

export type IConifgModelProps = {
  mode: string
@@ -93,7 +94,7 @@ const ConifgModel: FC<IConifgModelProps> = ({
      key: 'max_tokens',
      tip: t('common.model.params.maxTokenTip'),
      step: 100,
      max: modelId === 'gpt-4' ? 8000 : 4000,
    },
  ]
@@ -114,6 +115,16 @@
      onShowUseGPT4Confirm()
      return
    }
    if (id !== 'gpt-4' && completionParams.max_tokens > 4000) {
      Toast.notify({
        type: 'warning',
        message: t('common.model.params.setToCurrentModelMaxTokenTip'),
      })
      onCompletionParamsChange({
        ...completionParams,
        max_tokens: 4000,
      })
    }
    setModelId(id)
  }
}
...
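The change above both raises the `max_tokens` ceiling to 8000 when gpt-4 is selected and, when switching away from gpt-4, clamps any stored value above 4000 back down with a warning toast. The rule reduces to a one-liner; a sketch, assuming only these two tiers exist as in the diff:

```ts
// Illustrative helper mirroring the logic above; not part of the diff itself.
const maxTokensFor = (modelId: string) => (modelId === 'gpt-4' ? 8000 : 4000)
```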
@@ -73,7 +73,7 @@ const PromptValuePanel: FC<IPromptValuePanelProps> = ({
  {
    (promptTemplate && promptTemplate?.trim()) ? (
      <div
        className="max-h-48 overflow-y-auto text-sm text-gray-700 break-all"
        dangerouslySetInnerHTML={{
          __html: format(replaceStringWithValuesWithFormat(promptTemplate, promptVariables, inputs)),
        }}
...
@@ -166,7 +166,7 @@ function DetailPanel<T extends ChatConversationFullDetailResponse | CompletionCo
    return res
  })?.name ?? 'custom'

  return (<div className='rounded-xl border-[0.5px] border-gray-200 h-full flex flex-col overflow-auto'>
    {/* Panel Header */}
    <div className='border-b border-gray-100 py-4 px-6 flex items-center justify-between'>
      <div className='flex-1'>
@@ -207,7 +207,7 @@
      <div className='text-gray-700 font-medium text-sm mt-2'>{detail.model_config?.pre_prompt || emptyText}</div>
    </div>
    {!isChatMode
      ? <div className="px-2.5 py-4">
        <Chat
          chatList={getFormattedChatList([detail.message])}
          isHideSendInput={true}
@@ -217,7 +217,7 @@
        />
      </div>
      : items.length < 8
        ? <div className="px-2.5 pt-4 mb-4">
          <Chat
            chatList={items}
            isHideSendInput={true}
...
@@ -29,9 +29,6 @@ export type IAppCardProps = {
  onGenerateCode?: () => Promise<any>
}

function AppCard({
  appInfo,
  cardType = 'app',
@@ -104,7 +101,8 @@ function AppCard({
  <div className="mb-2.5 flex flex-row items-start justify-between">
    <AppBasic
      iconType={isApp ? 'app' : 'api'}
      icon={appInfo.icon}
      icon_background={appInfo.icon_background}
      name={basicName}
      type={
        isApp
...
@@ -71,7 +71,7 @@ const CustomizeModal: FC<IShareLinkProps> = ({
  <div className='flex flex-col w-full'>
    <div className='text-gray-900'>{t(`${prefixCustomize}.way1.step2`)}</div>
    <div className='text-gray-500 text-xs mt-1 mb-2'>{t(`${prefixCustomize}.way1.step2Tip`)}</div>
    <pre className='box-border py-3 px-4 bg-gray-100 text-xs font-medium rounded-lg select-text'>
      export const APP_ID = '{appId}'<br />
      export const API_KEY = {`'<Web API Key From Dify>'`}
    </pre>
...
@@ -2,6 +2,11 @@ import type { FC } from 'react'
import classNames from 'classnames'
import style from './style.module.css'

import data from '@emoji-mart/data'
import { init } from 'emoji-mart'

init({ data })

export type AppIconProps = {
  size?: 'tiny' | 'small' | 'medium' | 'large'
  rounded?: boolean
@@ -9,14 +14,17 @@ export type AppIconProps = {
  background?: string
  className?: string
  innerIcon?: React.ReactNode
  onClick?: () => void
}

const AppIcon: FC<AppIconProps> = ({
  size = 'medium',
  rounded = false,
  icon,
  background,
  className,
  innerIcon,
  onClick,
}) => {
  return (
    <span
@@ -29,8 +37,9 @@ const AppIcon: FC<AppIconProps> = ({
      style={{
        background,
      }}
      onClick={onClick}
    >
      {innerIcon ? innerIcon : icon && icon !== '' ? <em-emoji id={icon} /> : <em-emoji id={'banana'} />}
    </span>
  )
}
...
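With this change, `AppIcon` renders an emoji-mart `<em-emoji>` by id on a configurable background (falling back to the banana emoji) instead of a hard-coded 🤖. A minimal usage sketch matching the props added above; the icon id and color here are examples, not values from the source:

```tsx
// 'grinning' is a standard emoji-mart id; the background matches the
// default used elsewhere in this merge.
<AppIcon size='large' icon='grinning' background='#FFEAD5' onClick={() => setShowEmojiPicker(true)} />
```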
@@ -63,7 +63,7 @@ const BlockInput: FC<IBlockInputProps> = ({
  }, [isEditing])

  const style = classNames({
    'block px-4 py-1 w-full h-full text-sm text-gray-900 outline-0 border-0 break-all': true,
    'block-input--editing': isEditing,
  })
...
@@ -33,7 +33,7 @@ const CustomDialog = ({
  const close = useCallback(() => onClose?.(), [onClose])
  return (
    <Transition appear show={show} as={Fragment}>
      <Dialog as="div" className="relative z-40" onClose={close}>
        <Transition.Child
          as={Fragment}
          enter="ease-out duration-300"
...
'use client'
import React from 'react'
import { useState, FC, ChangeEvent } from 'react'
import data from '@emoji-mart/data'
import { init, SearchIndex } from 'emoji-mart'
import cn from 'classnames'
import Divider from '@/app/components/base/divider'
import Button from '@/app/components/base/button'
import s from './style.module.css'
import { MagnifyingGlassIcon } from '@heroicons/react/24/outline'
import Modal from '@/app/components/base/modal'
import { useTranslation } from 'react-i18next'

declare global {
  namespace JSX {
    interface IntrinsicElements {
      'em-emoji': React.DetailedHTMLProps<React.HTMLAttributes<HTMLElement>, HTMLElement>
    }
  }
}

init({ data })

async function search(value: string) {
  const emojis = await SearchIndex.search(value) || []
  const results = emojis.map((emoji: any) => {
    return emoji.skins[0].native
  })
  return results
}

const backgroundColors = [
  '#FFEAD5',
  '#E4FBCC',
  '#D3F8DF',
  '#E0F2FE',
  '#E0EAFF',
  '#EFF1F5',
  '#FBE8FF',
  '#FCE7F6',
  '#FEF7C3',
  '#E6F4D7',
  '#D5F5F6',
  '#D1E9FF',
  '#D1E0FF',
  '#D5D9EB',
  '#ECE9FE',
  '#FFE4E8',
]

interface IEmojiPickerProps {
  isModal?: boolean
  onSelect?: (emoji: string, background: string) => void
  onClose?: () => void
}

const EmojiPicker: FC<IEmojiPickerProps> = ({
  isModal = true,
  onSelect,
  onClose,
}) => {
  const { t } = useTranslation()
  const { categories } = data as any
  const [selectedEmoji, setSelectedEmoji] = useState('')
  const [selectedBackground, setSelectedBackground] = useState(backgroundColors[0])
  const [searchedEmojis, setSearchedEmojis] = useState([])
  const [isSearching, setIsSearching] = useState(false)

  return isModal ? <Modal
    onClose={() => { }}
    isShow
    closable={false}
    wrapperClassName='!z-40'
    className={cn(s.container, '!w-[362px] !p-0')}
  >
    <div className='flex flex-col items-center w-full p-3'>
      <div className="relative w-full">
        <div className="absolute inset-y-0 left-0 flex items-center pl-3 pointer-events-none">
          <MagnifyingGlassIcon className="w-5 h-5 text-gray-400" aria-hidden="true" />
        </div>
        <input
          type="search"
          id="search"
          className='block w-full h-10 px-3 pl-10 text-sm font-normal bg-gray-100 rounded-lg'
          placeholder="Search emojis..."
          onChange={async (e: ChangeEvent<HTMLInputElement>) => {
            if (e.target.value === '') {
              setIsSearching(false)
            }
            else {
              setIsSearching(true)
              const emojis = await search(e.target.value)
              setSearchedEmojis(emojis)
            }
          }}
        />
      </div>
    </div>
    <Divider className='m-0 mb-3' />
    <div className="w-full max-h-[200px] overflow-x-hidden overflow-y-auto px-3">
      {isSearching && <>
        <div key='category-search' className='flex flex-col'>
          <p className='font-medium uppercase text-xs text-[#101828] mb-1'>Search</p>
          <div className='w-full h-full grid grid-cols-8 gap-1'>
            {searchedEmojis.map((emoji: string, index: number) => {
              return <div
                key={`emoji-search-${index}`}
                className='inline-flex w-10 h-10 rounded-lg items-center justify-center'
                onClick={() => {
                  setSelectedEmoji(emoji)
                }}
              >
                <div className='cursor-pointer w-8 h-8 p-1 flex items-center justify-center rounded-lg hover:ring-1 ring-offset-1 ring-gray-300'>
                  <em-emoji id={emoji} />
                </div>
              </div>
            })}
          </div>
        </div>
      </>}
      {categories.map((category: any, index: number) => {
        return <div key={`category-${index}`} className='flex flex-col'>
          <p className='font-medium uppercase text-xs text-[#101828] mb-1'>{category.id}</p>
          <div className='w-full h-full grid grid-cols-8 gap-1'>
            {category.emojis.map((emoji: string, index: number) => {
              return <div
                key={`emoji-${index}`}
                className='inline-flex w-10 h-10 rounded-lg items-center justify-center'
                onClick={() => {
                  setSelectedEmoji(emoji)
                }}
              >
                <div className='cursor-pointer w-8 h-8 p-1 flex items-center justify-center rounded-lg hover:ring-1 ring-offset-1 ring-gray-300'>
                  <em-emoji id={emoji} />
                </div>
              </div>
            })}
          </div>
        </div>
      })}
    </div>

    {/* Color Select */}
    <div className={cn('flex flex-col p-3', selectedEmoji === '' ? 'opacity-25' : '')}>
      <p className='font-medium uppercase text-xs text-[#101828] mb-2'>Choose Style</p>
      <div className='w-full h-full grid grid-cols-8 gap-1'>
        {backgroundColors.map((color) => {
          return <div
            key={color}
            className={cn(
              'cursor-pointer',
              'hover:ring-1 ring-offset-1',
              'inline-flex w-10 h-10 rounded-lg items-center justify-center',
              color === selectedBackground ? 'ring-1 ring-gray-300' : '',
            )}
            onClick={() => {
              setSelectedBackground(color)
            }}
          >
            <div className='w-8 h-8 p-1 flex items-center justify-center rounded-lg' style={{ background: color }}>
              {selectedEmoji !== '' && <em-emoji id={selectedEmoji} />}
            </div>
          </div>
        })}
      </div>
    </div>
    <Divider className='m-0' />
    <div className='w-full flex items-center justify-center p-3 gap-2'>
      <Button type="default" className='w-full' onClick={() => {
        onClose && onClose()
      }}>
        {t('app.emoji.cancel')}
      </Button>
      <Button
        disabled={selectedEmoji === ''}
        type="primary"
        className='w-full'
        onClick={() => {
          onSelect && onSelect(selectedEmoji, selectedBackground)
        }}>
        {t('app.emoji.ok')}
      </Button>
    </div>
  </Modal> : <></>
}

export default EmojiPicker
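A minimal usage sketch of the picker above: the parent owns visibility and receives the chosen emoji id plus background color. The state name and save handler here are illustrative, not from the source:

```tsx
const [show, setShow] = useState(false)
// ...
{show && <EmojiPicker
  onSelect={(icon, background) => {
    saveIcon(icon, background) // hypothetical handler in the parent
    setShow(false)
  }}
  onClose={() => setShow(false)}
/>}
```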
.container {
display: flex;
flex-direction: column;
align-items: flex-start;
width: 362px;
max-height: 552px;
border: 0.5px solid #EAECF0;
box-shadow: 0px 12px 16px -4px rgba(16, 24, 40, 0.08), 0px 4px 6px -2px rgba(16, 24, 40, 0.03);
border-radius: 12px;
background: #fff;
}
@@ -5,6 +5,7 @@ import { XMarkIcon } from '@heroicons/react/24/outline'

type IModal = {
  className?: string
  wrapperClassName?: string
  isShow: boolean
  onClose: () => void
  title?: React.ReactNode
@@ -15,6 +16,7 @@ type IModal = {
export default function Modal({
  className,
  wrapperClassName,
  isShow,
  onClose,
  title,
@@ -23,51 +25,51 @@ export default function Modal({
  closable = false,
}: IModal) {
  return (
    <Transition appear show={isShow} as={Fragment}>
      <Dialog as="div" className={`relative z-10 ${wrapperClassName}`} onClose={onClose}>
        <Transition.Child
          as={Fragment}
          enter="ease-out duration-300"
          enterFrom="opacity-0"
          enterTo="opacity-100"
          leave="ease-in duration-200"
          leaveFrom="opacity-100"
          leaveTo="opacity-0"
        >
          <div className="fixed inset-0 bg-black bg-opacity-25" />
        </Transition.Child>
        <div className="fixed inset-0 overflow-y-auto">
          <div className="flex min-h-full items-center justify-center p-4 text-center">
            <Transition.Child
              as={Fragment}
              enter="ease-out duration-300"
              enterFrom="opacity-0 scale-95"
              enterTo="opacity-100 scale-100"
              leave="ease-in duration-200"
              leaveFrom="opacity-100 scale-100"
              leaveTo="opacity-0 scale-95"
            >
              <Dialog.Panel className={`w-full max-w-md transform overflow-hidden rounded-2xl bg-white p-6 text-left align-middle shadow-xl transition-all ${className}`}>
                {title && <Dialog.Title
                  as="h3"
                  className="text-lg font-medium leading-6 text-gray-900"
                >
                  {title}
                </Dialog.Title>}
                {description && <Dialog.Description className='text-gray-500 text-xs font-normal mt-2'>
                  {description}
                </Dialog.Description>}
                {closable
                  && <div className='absolute top-6 right-6 w-5 h-5 rounded-2xl flex items-center justify-center hover:cursor-pointer hover:bg-gray-100'>
                    <XMarkIcon className='w-4 h-4 text-gray-500' onClick={onClose} />
                  </div>}
                {children}
              </Dialog.Panel>
            </Transition.Child>
          </div>
        </div>
      </Dialog>
    </Transition>
  )
}
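The new `wrapperClassName` lands on the outer Headless UI `Dialog`, so callers can adjust the stacking context or offset of the whole overlay without touching the panel styles. A usage sketch, mirroring the `!z-40` override the emoji picker passes above:

```tsx
<Modal isShow onClose={onClose} wrapperClassName='!z-40' className='!w-[362px] !p-0'>
  {/* modal body */}
</Modal>
```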
@@ -43,4 +43,7 @@
  background: #f9fafb center no-repeat url(../assets/Loading.svg);
  background-size: contain;
}

.fileContent {
  white-space: pre-line;
}
\ No newline at end of file
@@ -190,13 +190,15 @@ const FileUploader = ({ file, onFileUpdate }: IFileUploaderProps) => {
  onChange={fileChangeHandle}
/>
<div className={s.title}>{t('datasetCreation.stepOne.uploader.title')}</div>
<div ref={dropRef}>
  {!currentFile && !file && (
    <div className={cn(s.uploader, dragging && s.dragging)}>
      <span>{t('datasetCreation.stepOne.uploader.button')}</span>
      <label className={s.browse} onClick={selectHandle}>{t('datasetCreation.stepOne.uploader.browse')}</label>
      {dragging && <div ref={dragRef} className={s.draggingCover} />}
    </div>
  )}
</div>
{currentFile && (
  <div className={cn(s.file, uploading && s.uploading)}>
    {uploading && (
...
@@ -41,7 +41,7 @@ const PreviewItem: FC<IPreviewItemProps> = ({
      </div>
    </div>
    <div className='mt-2 max-h-[120px] line-clamp-6 overflow-hidden text-sm text-gray-800'>
      <div style={{ whiteSpace: 'pre-line' }}>{content}</div>
    </div>
  </div>
)
...
@@ -44,7 +44,8 @@
  @apply h-8 py-0 bg-gray-50 hover:bg-gray-100 rounded-lg shadow-none !important;
}
.segModalContent {
  @apply h-96 text-gray-800 text-base break-all overflow-y-scroll;
  white-space: pre-line;
}
.footer {
  @apply flex items-center justify-between box-border border-t-gray-200 border-t-[0.5px] pt-3 mt-4;
...
@@ -69,7 +69,7 @@ type IDocumentsProps = {
  datasetId: string
}

export const fetcher = (url: string) => get(url, {}, {})

const Documents: FC<IDocumentsProps> = ({ datasetId }) => {
  const { t } = useTranslation()
...
@@ -75,6 +75,7 @@ export default function AccountSetting({
  isShow
  onClose={() => { }}
  className={s.modal}
  wrapperClassName='pt-[60px]'
>
  <div className='flex'>
    <div className='w-[200px] p-4 border border-gray-100'>
...
import type { Provider, ProviderAzureToken } from '@/models/common'
import { ProviderName } from '@/models/common'
import { useTranslation } from 'react-i18next'
import Link from 'next/link'
import { ArrowTopRightOnSquareIcon } from '@heroicons/react/24/outline'
import { useState, useEffect } from 'react'
import ProviderInput from '../provider-input'
import useValidateToken, { ValidatedStatus } from '../provider-input/useValidateToken'
import {
  ValidatedErrorIcon,
  ValidatedSuccessIcon,
  ValidatingTip,
  ValidatedErrorOnAzureOpenaiTip
} from '../provider-input/Validate'

interface IAzureProviderProps {
  provider: Provider
@@ -17,52 +24,72 @@ const AzureProvider = ({
  onValidatedStatus
}: IAzureProviderProps) => {
  const { t } = useTranslation()
  const [token, setToken] = useState<ProviderAzureToken>(provider.provider_name === ProviderName.AZURE_OPENAI ? { ...provider.token } : {})
  const [validating, validatedStatus, setValidatedStatus, validate] = useValidateToken(provider.provider_name)
  const handleFocus = (type: keyof ProviderAzureToken) => {
    if (token[type] === (provider?.token as ProviderAzureToken)[type]) {
      token[type] = ''
      setToken({ ...token })
      onTokenChange({ ...token })
      setValidatedStatus(undefined)
    }
  }
  const handleChange = (type: keyof ProviderAzureToken, v: string, validate: any) => {
    token[type] = v
    setToken({ ...token })
    onTokenChange({ ...token })
    validate({ ...token }, {
      beforeValidating: () => {
        if (!token.openai_api_base || !token.openai_api_key) {
          setValidatedStatus(undefined)
          return false
        }
        return true
      }
    })
  }
  const getValidatedIcon = () => {
    if (validatedStatus === ValidatedStatus.Error || validatedStatus === ValidatedStatus.Exceed) {
      return <ValidatedErrorIcon />
    }
    if (validatedStatus === ValidatedStatus.Success) {
      return <ValidatedSuccessIcon />
    }
  }
  const getValidatedTip = () => {
    if (validating) {
      return <ValidatingTip />
    }
    if (validatedStatus === ValidatedStatus.Error) {
      return <ValidatedErrorOnAzureOpenaiTip />
    }
  }
  useEffect(() => {
    if (typeof onValidatedStatus === 'function') {
      onValidatedStatus(validatedStatus)
    }
  }, [validatedStatus])

  return (
    <div className='px-4 py-3'>
      <ProviderInput
        className='mb-4'
        name={t('common.provider.azure.apiBase')}
        placeholder={t('common.provider.azure.apiBasePlaceholder')}
        value={token.openai_api_base}
        onChange={(v) => handleChange('openai_api_base', v, validate)}
        onFocus={() => handleFocus('openai_api_base')}
        validatedIcon={getValidatedIcon()}
      />
      <ProviderInput
        className='mb-4'
        name={t('common.provider.azure.apiKey')}
        placeholder={t('common.provider.azure.apiKeyPlaceholder')}
        value={token.openai_api_key}
        onChange={(v) => handleChange('openai_api_key', v, validate)}
        onFocus={() => handleFocus('openai_api_key')}
        validatedIcon={getValidatedIcon()}
        validatedTip={getValidatedTip()}
      />
      <Link className="flex items-center text-xs cursor-pointer text-primary-600" href="https://platform.openai.com/account/api-keys" target={'_blank'}>
        {t('common.provider.azure.helpTip')}
@@ -72,4 +99,4 @@
  )
}

export default AzureProvider
\ No newline at end of file
@@ -67,7 +67,7 @@ const ProviderPage = () => {
  const providerHosted = data?.filter(provider => provider.provider_name === 'openai' && provider.provider_type === 'system')?.[0]

  return (
    <div className='pb-7'>
      {
        providerHosted && !IS_CE_EDITION && (
          <>
...
import type { Provider } from '@/models/common'
import { useState } from 'react'
import { useTranslation } from 'react-i18next'
import Link from 'next/link'
import { ArrowTopRightOnSquareIcon } from '@heroicons/react/24/outline'
import { ProviderValidateTokenInput } from '../provider-input'
import { ValidatedStatus } from '../provider-input/useValidateToken'

interface IOpenaiProviderProps {
  provider: Provider
  onValidatedStatus: (status?: ValidatedStatus) => void
  onTokenChange: (token: string) => void
}

const OpenaiProvider = ({
  provider,
  onValidatedStatus,
  onTokenChange,
}: IOpenaiProviderProps) => {
  const { t } = useTranslation()
  const [token, setToken] = useState(provider.token as string || '')
  const handleFocus = () => {
    if (token === provider.token) {
      setToken('')
      onTokenChange('')
    }
  }
  const handleChange = (v: string) => {
    setToken(v)
    onTokenChange(v)
  }

  return (
    <div className='px-4 pt-3 pb-4'>
      <ProviderValidateTokenInput
        value={token}
        name={t('common.provider.apiKey')}
        placeholder={t('common.provider.enterYourKey')}
        onChange={handleChange}
        onFocus={handleFocus}
        onValidatedStatus={onValidatedStatus}
        providerName={provider.provider_name}
      />
      <Link className="inline-flex items-center mt-3 text-xs font-normal cursor-pointer text-primary-600 w-fit" href="https://platform.openai.com/account/api-keys" target={'_blank'}>
        {t('appOverview.welcome.getKeyTip')}
        <ArrowTopRightOnSquareIcon className='w-3 h-3 ml-1 text-primary-600' aria-hidden="true" />
      </Link>
    </div>
  )
}

export default OpenaiProvider
\ No newline at end of file
import Link from 'next/link'
import { CheckCircleIcon, ExclamationCircleIcon } from '@heroicons/react/24/solid'
import { useTranslation } from 'react-i18next'
import { useContext } from 'use-context-selector'
import I18n from '@/context/i18n'

export const ValidatedErrorIcon = () => {
  return <ExclamationCircleIcon className='w-4 h-4 text-[#D92D20]' />
}

export const ValidatedSuccessIcon = () => {
  return <CheckCircleIcon className='w-4 h-4 text-[#039855]' />
}

export const ValidatingTip = () => {
  const { t } = useTranslation()
  return (
    <div className='mt-2 text-primary-600 text-xs font-normal'>
      {t('common.provider.validating')}
    </div>
  )
}

export const ValidatedExceedOnOpenaiTip = () => {
  const { t } = useTranslation()
  const { locale } = useContext(I18n)
  return (
    <div className='mt-2 text-[#D92D20] text-xs font-normal'>
      {t('common.provider.apiKeyExceedBill')}&nbsp;
      <Link
        className='underline'
        href="https://platform.openai.com/account/api-keys"
        target={'_blank'}>
        {locale === 'en' ? 'this link' : '这篇文档'}
      </Link>
    </div>
  )
}

export const ValidatedErrorOnOpenaiTip = () => {
  const { t } = useTranslation()
  return (
    <div className='mt-2 text-[#D92D20] text-xs font-normal'>
      {t('common.provider.invalidKey')}
    </div>
  )
}

export const ValidatedErrorOnAzureOpenaiTip = () => {
  const { t } = useTranslation()
  return (
    <div className='mt-2 text-[#D92D20] text-xs font-normal'>
      {t('common.provider.invalidApiKey')}
    </div>
  )
}
\ No newline at end of file
import { ChangeEvent } from 'react'
import { ReactElement } from 'react-markdown/lib/react-markdown'

interface IProviderInputProps {
  value?: string
@@ -13,6 +8,8 @@ interface IProviderInputProps {
  className?: string
  onChange: (v: string) => void
  onFocus?: () => void
  validatedIcon?: ReactElement
  validatedTip?: ReactElement
}

const ProviderInput = ({
@@ -22,6 +19,8 @@ const ProviderInput = ({
  className,
  onChange,
  onFocus,
  validatedIcon,
  validatedTip,
}: IProviderInputProps) => {
  const handleChange = (e: ChangeEvent<HTMLInputElement>) => {
@@ -47,95 +46,9 @@ const ProviderInput = ({
        onChange={handleChange}
        onFocus={onFocus}
      />
      {validatedIcon}
    </div>
    {validatedTip}
  </div>
  )
}
...
import { useState, useCallback } from 'react' import { useState, useCallback, SetStateAction, Dispatch } from 'react'
import debounce from 'lodash-es/debounce' import debounce from 'lodash-es/debounce'
import { DebouncedFunc } from 'lodash-es' import { DebouncedFunc } from 'lodash-es'
import { validateProviderKey } from '@/service/common' import { validateProviderKey } from '@/service/common'
...@@ -8,14 +8,24 @@ export enum ValidatedStatus { ...@@ -8,14 +8,24 @@ export enum ValidatedStatus {
Error = 'error', Error = 'error',
Exceed = 'exceed' Exceed = 'exceed'
} }
export type SetValidatedStatus = Dispatch<SetStateAction<ValidatedStatus | undefined>>
export type ValidateFn = DebouncedFunc<(token: any, config: ValidateFnConfig) => void>
type ValidateTokenReturn = [
boolean,
ValidatedStatus | undefined,
SetValidatedStatus,
ValidateFn
]
export type ValidateFnConfig = {
beforeValidating: (token: any) => boolean
}
const useValidateToken = (providerName: string): [boolean, ValidatedStatus | undefined, DebouncedFunc<(token: string) => Promise<void>>] => { const useValidateToken = (providerName: string): ValidateTokenReturn => {
const [validating, setValidating] = useState(false) const [validating, setValidating] = useState(false)
const [validatedStatus, setValidatedStatus] = useState<ValidatedStatus | undefined>() const [validatedStatus, setValidatedStatus] = useState<ValidatedStatus | undefined>()
const validate = useCallback(debounce(async (token: string) => { const validate = useCallback(debounce(async (token: string, config: ValidateFnConfig) => {
if (!token) { if (!config.beforeValidating(token)) {
setValidatedStatus(undefined) return false
return
} }
setValidating(true) setValidating(true)
try { try {
...@@ -24,8 +34,10 @@ const useValidateToken = (providerName: string): [boolean, ValidatedStatus | und ...@@ -24,8 +34,10 @@ const useValidateToken = (providerName: string): [boolean, ValidatedStatus | und
} catch (e: any) { } catch (e: any) {
if (e.status === 400) { if (e.status === 400) {
e.json().then(({ code }: any) => { e.json().then(({ code }: any) => {
if (code === 'provider_request_failed') { if (code === 'provider_request_failed' && providerName === 'openai') {
setValidatedStatus(ValidatedStatus.Exceed) setValidatedStatus(ValidatedStatus.Exceed)
} else {
setValidatedStatus(ValidatedStatus.Error)
} }
}) })
} else { } else {
...@@ -39,7 +51,8 @@ const useValidateToken = (providerName: string): [boolean, ValidatedStatus | und ...@@ -39,7 +51,8 @@ const useValidateToken = (providerName: string): [boolean, ValidatedStatus | und
return [ return [
validating, validating,
validatedStatus, validatedStatus,
validate, setValidatedStatus,
validate
] ]
} }
......
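The reworked hook now also exposes its status setter, and validation is gated by a caller-supplied `beforeValidating` check instead of a hard-coded truthiness test. A hedged usage sketch (the surrounding component and handler names are illustrative):

```ts
const [validating, status, setStatus, validate] = useValidateToken('openai')

// Reset the indicator when the field is cleared; otherwise run the
// debounced validation with a simple non-empty guard.
const onChange = (v: string) => {
  if (!v)
    setStatus(undefined)
  validate(v, { beforeValidating: token => Boolean(token) })
}
```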
@@ -5,7 +5,7 @@ import Welcome from '../welcome'

const ConfigSence: FC<IWelcomeProps> = (props) => {
  return (
    <div className='mb-5 antialiased font-sans shrink-0'>
      <Welcome {...props} />
    </div>
  )
...