diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index b62934ade..cd3dbd898 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,162 +1,159 @@ # Contributing Guidelines -*Pull requests, bug reports, and all other forms of contribution are welcomed and highly encouraged!* :octocat: +*Pull requests, bug reports, and all other forms of contribution are welcome and highly encouraged!* :octocat: ### Contents - [Code of Conduct](#book-code-of-conduct) - [Overview](#mag-overview) -- [Contribution Guide of Different Services](#love_letter-contribution-guide-of-different-services) +- [Contribution Guide for Different Services](#love_letter-contribution-guide-for-different-services) - [Creating a New Data Source Connector](#electric_plug-creating-a-new-data-source-connector) -> **This guide serves to set clear expectations for everyone involved with the project so that we can improve it together while also creating a welcoming space for everyone to participate. Following these guidelines will help ensure a positive experience for contributors and maintainers.** +> **This guide serves to set clear expectations for everyone involved with the project so that we can improve it together while creating a welcoming space for all participants. Following these guidelines will help ensure a positive experience for contributors and maintainers.** ## :book: Code of Conduct -Please review our [Code of Conduct](https://github.com/Canner/WrenAI/blob/main/CODE_OF_CONDUCT.md). It is in effect at all times. We expect it to be honored by everyone who contributes to this project. Acting like an asshole will not be tolerated. - +Please review our [Code of Conduct](https://github.com/Canner/WrenAI/blob/main/CODE_OF_CONDUCT.md). It is in effect at all times, and we expect everyone contributing to this project to follow it. Acting disrespectfully will not be tolerated. ## :rocket: Get Started -1. 
Visit [How Wren AI works?](https://docs.getwren.ai/oss/overview/how_wrenai_works) to understand the architecture of Wren AI -1. After you understand the architecture of Wren AI, understand the scope of the services you want to contribute to. - Check each service's section under [Contribution Guide of Different Services](#love_letter-contribution-guide-of-different-services) to learn how to contribute to each service. - 1. If you are dealing with UI-related tasks, such as adding a dark mode, you only need to contribute to the [Wren UI Service](#wren-ui-service). - 2. If you are dealing with LLM-related tasks, such as enhancing the prompts used in the LLM pipelines, you only need to contribute to the [Wren AI Service](#wren-ai-service). - 3. If you are working on data-source-related tasks, such as fixing a bug in SQL server connector, you will need to contribute to the [Wren Engine Service](#wren-engine-service). -1. If you are not sure which service to contribute to, please reach out to us in [Discord](https://discord.gg/canner) or [GitHub Issues](https://github.com/Canner/WrenAI/issues). -1. It's possible that you need to contribute to multiple services. For example, if you are adding a new data source, you will need to contribute to the [Wren UI Service](#wren-ui-service) and [Wren Engine Service](#wren-engine-service). Follow [Guide for Contributing to Multiple Services](#guide-for-contributing-to-multiple-services) to learn how to contribute to multiple services. +1. Visit [How Wren AI works](https://docs.getwren.ai/oss/overview/how_wrenai_works) to understand the architecture of Wren AI. +2. Once you understand the architecture, focus on the scope of the services you want to contribute to. + Check each service's section under [Contribution Guide for Different Services](#love_letter-contribution-guide-for-different-services) to learn how to contribute to specific services: + 1. 
If you're working on UI-related tasks (e.g., adding a dark mode), contribute to the [Wren UI Service](#wren-ui-service).
+   2. If you're working on LLM-related tasks (e.g., enhancing prompts for LLM pipelines), contribute to the [Wren AI Service](#wren-ai-service).
+   3. If you're working on data-source-related tasks (e.g., fixing a bug in the SQL Server connector), contribute to the [Wren Engine Service](#wren-engine-service).
+3. If you're unsure which service to contribute to, please reach out to us on [Discord](https://discord.gg/canner) or [GitHub Issues](https://github.com/Canner/WrenAI/issues).
+4. In some cases, you may need to contribute to multiple services. For example, if you're adding a new data source, you'll need to contribute to both the [Wren UI Service](#wren-ui-service) and [Wren Engine Service](#wren-engine-service). Follow the [Guide for Contributing to Multiple Services](#guide-for-contributing-to-multiple-services) for instructions.

-## :love_letter: Contribution Guide of Different Services
+## :love_letter: Contribution Guide for Different Services

### Wren AI Service

-Wren AI Service is responsible for LLM-related tasks like converting natural language questions into SQL queries and providing step-by-step SQL breakdowns.
-
-To contribute to Wren AI Service, please refer to the [Wren AI Service Contributing Guide](https://github.com/Canner/WrenAI/blob/main/wren-ai-service/CONTRIBUTING.md)
+The Wren AI Service handles LLM-related tasks, such as converting natural language questions into SQL queries and providing step-by-step SQL breakdowns.
+
+To contribute, refer to the [Wren AI Service Contributing Guide](https://github.com/Canner/WrenAI/blob/main/wren-ai-service/CONTRIBUTING.md).

### Wren UI Service

-Wren UI is the client service of WrenAI. It is built with Next.js and TypeScript. 
-To contribute to Wren UI, you can refer to the [WrenAI/wren-ui/README.md](https://github.com/Canner/WrenAI/blob/main/wren-ui/README.md) file for instructions on how to set up the development environment and run the development server.
-
+The Wren UI Service is the client-side service for WrenAI, built with Next.js and TypeScript.
+
+To contribute, refer to the [WrenAI/wren-ui/README.md](https://github.com/Canner/WrenAI/blob/main/wren-ui/README.md) for instructions on setting up the development environment and running the development server.

### Wren Engine Service

-Wren Engine is the backbone of the Wren AI project. The semantic engine for LLMs, bringing business context to AI agents.
-To contribute, please refer to [Wren Engine Contributing Guide](https://github.com/Canner/wren-engine/blob/main/ibis-server/docs/CONTRIBUTING.md)
+The Wren Engine is the core of the Wren AI project, serving as the semantic engine for LLMs, adding business context to AI agents.
+
+To contribute, refer to the [Wren Engine Contributing Guide](https://github.com/Canner/wren-engine/blob/main/ibis-server/docs/CONTRIBUTING.md).

## Guide for Contributing to Multiple Services

-We rely on docker-compose to start all services. If you are contributing to multiple services, you could just comment out the services you'd like to start from the source code and change the `env` variables to point to the services you started by yourself.
+
+We use Docker Compose to start all services. If you're contributing to multiple services, comment out the services you plan to run from source in the Compose file, then update the `env` variables to point to the instances you're running locally. 
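To make this concrete, the overrides in a `.env.local` might look like the sketch below. The variable names come from `docker/.env.example` in this repository; the values are only illustrative and should match the ports your locally started services actually listen on:

```sh
# Sketch of .env.local overrides when services run from source.
# Variable names are taken from docker/.env.example; values are
# illustrative -- adjust them to the ports your local services use.
WREN_UI_ENDPOINT=http://docker.for.mac.localhost:3000  # UI dev server run from source
WREN_ENGINE_PORT=8080                                  # Engine run from source
WREN_ENGINE_SQL_PORT=7432                              # Engine SQL interface
IBIS_SERVER_PORT=8000                                  # Ibis server
```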
### Example: Contributing to the [Wren UI Service](#wren-ui-service) and [Wren Engine Service](#wren-engine-service)

-If you are contributing to both the [Wren UI Service](#wren-ui-service) and [Wren Engine Service](#wren-engine-service), you should comment out the `wren-engine` service in the `docker/docker-compose-dev.yml` file (note that the UI service is already excluded from `docker/docker-compose-dev.yml`). Then, adjust the environment variables in your `.env` file to point to the services you have started manually. This will ensure that your local development environment correctly interfaces with the services you are working on.
-1. Prepare your `.env` file: In the `WrenAI/docker` folder, use the `.env.example` file as a template. Copy this file to create a `.env.local` file.
+If you're contributing to both the [Wren UI Service](#wren-ui-service) and [Wren Engine Service](#wren-engine-service), comment out the `wren-engine` service in the `docker/docker-compose-dev.yaml` file (the UI service is already excluded from this file). Then, adjust the environment variables in your `.env.local` file to point to the services you've started manually.
+
+1. Prepare your `.env` file: In the `WrenAI/docker` folder, use the `.env.example` file as a template to create a `.env.local` file:
    ```sh
-   # assuming the current directory is wren-ui
+   # Assuming the current directory is wren-ui
    cd ../docker
    cp .env.example .env.local
    ```
-1. Modify your `.env.local` file: Fill in the `LLM_OPENAI_API_KEY` and `EMBEDDER_OPENAI_API_KEY` with your OpenAI API keys before starting.
-1. Start the UI and engine services from the source code.
-1. Update the `env` variables in the `.env.local` file to point to the services you started manually.
-1. Start the other services using docker-compose:
+2. Modify your `.env.local` file: Add your OpenAI API keys by filling in `LLM_OPENAI_API_KEY` and `EMBEDDER_OPENAI_API_KEY` before starting.
+3. Start the UI and Engine services from the source code.
+4. Update the `env` variables in your `.env.local` file to point to the services you started manually.
+5. Start the remaining services using Docker Compose:
    ```sh
-   # current directory is WrenAI/docker
-   docker-compose -f docker-compose-dev.yaml --env-file .env.example up
-   # you can add the -d flag to run the services in the background
-   docker-compose -f docker-compose-dev.yaml --env-file .env.example up -d
-   # to stop the services, use
-   docker-compose -f docker-compose-dev.yaml --env-file .env.example down
+   # Current directory is WrenAI/docker
+   docker-compose -f docker-compose-dev.yaml --env-file .env.local up
+   # You can add the -d flag to run the services in the background
+   docker-compose -f docker-compose-dev.yaml --env-file .env.local up -d
+
+   # To stop the services, use:
+   docker-compose -f docker-compose-dev.yaml --env-file .env.local down
    ```
-1. Happy coding!
+6. Happy coding!

## :electric_plug: Creating a New Data Source Connector

-To develop a new data source connector, you'll need to modify both the front-end and back-end of the Wren UI, in addition to the Wren Engine.
+To develop a new data source connector, you'll need to make changes to both the front-end and back-end of the Wren UI, as well as the Wren Engine.

-Below is a brief overview of a data source connector:
+Below is a brief overview of how a data source connector works:

-The UI is primarily responsible for storing database connection settings, providing an interface for users to input these settings, and submitting them to the Engine, which then connects to the database.
-
-The UI must be aware of the connection details it needs to retain, as specified by the Engine. Therefore, the implementation sequence would be as follows:
+The UI stores database connection settings, provides an interface for users to input these settings, and sends them to the Engine, which connects to the database.
+
+The UI must know the connection details it needs to store, as specified by the Engine. The general implementation flow is:

-- Engine:
-   - Implement the new data source (you’ll determine what connection information is needed and how it should be passed from the UI). 
+- **Engine**:
+   - Implement the new data source (determine the required connection information and how it should be passed from the UI).
+   - Implement the metadata API for the UI to access.
+- **UI**:
+   - **Back-End**:
+       - Store the connection information securely.
+       - Provide the connection information to the Engine.
+   - **Front-End**:
+       - Add an icon for the data source.
+       - Create a form template for users to input the connection information.
+       - Update the data source list.

### Wren Engine

-- To implement a new data source, please refer to [How to Add a New Data Source](https://github.com/Canner/wren-engine/blob/main/ibis-server/docs/how-to-add-data-source.md).
-- After adding a new data source, you can proceed with implementing the metadata API for the UI.
+To implement a new data source, refer to [How to Add a New Data Source](https://github.com/Canner/wren-engine/blob/main/ibis-server/docs/how-to-add-data-source.md).
+
+After adding a new data source, proceed with implementing the metadata API for the UI.

-  Here are some previous PRs that introduced new data sources:
-  - [Add MSSQL data source](https://github.com/Canner/wren-engine/pull/631)
-  - [Add MySQL data source](https://github.com/Canner/wren-engine/pull/618)
-  - [Add ClickHouse data source](https://github.com/Canner/wren-engine/pull/648)
+Here are some previous PRs that introduced new data sources:
+- [Add MSSQL data source](https://github.com/Canner/wren-engine/pull/631)
+- [Add MySQL data source](https://github.com/Canner/wren-engine/pull/618)
+- [Add ClickHouse data source](https://github.com/Canner/wren-engine/pull/648)

### Wren UI Guide

-We'll describe what should be done in the UI for each new data source. 
-
-If you prefer to learn by example, you can refer to this Trino [issue](https://github.com/Canner/WrenAI/issues/492) and [PR](https://github.com/Canner/WrenAI/pull/535).
+For each new data source, here's what needs to be done in the UI.
+
+If you'd prefer to learn by example, refer to this Trino [issue](https://github.com/Canner/WrenAI/issues/492) and [PR](https://github.com/Canner/WrenAI/pull/535).

-#### BE
-1. Define the data source in `wren-ui/src/apollo/server/dataSource.ts`
-   - define the `toIbisConnectionInfo` and `sensitiveProps` methods
+#### Back-End
+1. Define the data source in `wren-ui/src/apollo/server/dataSource.ts`:
+   - Define the `toIbisConnectionInfo` and `sensitiveProps` methods.

-2. Modify the ibis adaptor in `wren-ui/src/apollo/server/adaptors/ibisAdaptor.ts`
-   - define an ibis connection info type for the new data source
-   - set up the `dataSourceUrlMap` for the new data source
+2. Modify the Ibis adaptor in `wren-ui/src/apollo/server/adaptors/ibisAdaptor.ts`:
+   - Define an Ibis connection info type for the new data source.
+   - Set up the `dataSourceUrlMap` for the new data source.

-3. Modify the repository in `wren-ui/src/apollo/server/repositories/projectRepository.ts`
-   - define the wren ui connection info type for the new data source
+3. Modify the repository in `wren-ui/src/apollo/server/repositories/projectRepository.ts`:
+   - Define the Wren UI connection info type for the new data source.

-4. Update the graphql schema in `wren-ui/src/apollo/server/schema.ts` so that the new data source can be used in the UI
-   - add the new data source to the `DataSource` enum
+4. Update the GraphQL schema in `wren-ui/src/apollo/server/schema.ts` to enable the new data source in the UI:
+   - Add the new data source to the `DataSource` enum.

-5. Update the type definition in `wren-ui/src/apollo/server/types/dataSource.ts`
-   - add the new data source to the `DataSourceName` enum
-
-#### FE
-1. 
Prepare the data source's logo:
-   - Image size should be `40 x 40` px
-   - Preferably use SVG format
-   - Ensure the logo is centered within a `30px` container for consistent formatting
+5. Update the type definition in `wren-ui/src/apollo/server/types/dataSource.ts`:
+   - Add the new data source to the `DataSourceName` enum.

+#### Front-End
+1. Prepare the data source's logo:
+   - Image size should be `40 x 40` px.
+   - Preferably use SVG format.
+   - Ensure the logo is centered within a `30px` container for consistency.

 2. Create the data source form template:
-   - In `wren-ui/src/components/pages/setup/dataSources`, add a new file named `${dataSource}Properties.tsx`
-   - Implement the data source form template in this file
+   - In `wren-ui/src/components/pages/setup/dataSources`, add a new file named `${dataSource}Properties.tsx`.
+   - Implement the form template for the new data source.

 3. Set up the data source template:
-   - Navigate to `wren-ui/src/components/pages/setup/utils` > `DATA_SOURCE_FORM`
-   - Update the necessary files to include the new data source template settings
+   - Navigate to `wren-ui/src/components/pages/setup/utils` and update the `DATA_SOURCE_FORM` settings for the new data source.

 4. Update the data source list:
-   - Add the new data source to the `DATA_SOURCES` enum in `wren-ui/src/utils/enum/dataSources.ts`
-   - Update relevant files in `wren-ui/src/components/pages/setup/` to include the new data source
-   - Ensure `wren-ui/src/apollo/server/adaptors/ibisAdaptor.ts` handle the new data source
+   - Add the new data source to the `DATA_SOURCES` enum in `wren-ui/src/utils/enum/dataSources.ts`.
+   - Update relevant files in `wren-ui/src/components/pages/setup/` to include the new data source.
+   - Ensure `wren-ui/src/apollo/server/adaptors/ibisAdaptor.ts` supports the new data source.

 5. 
Test the new connector:
-   - Ensure the new data source appears in the UI
-   - Verify that the form works correctly
-   - Test the connection to the new data source
-
+   - Ensure the new data source appears in the UI.
+   - Verify that the form works correctly.
+   - Test the connection to the new data source.
diff --git a/docker/.env.ai.example b/docker/.env.ai.example
index 96725223c..f207ed016 100644
--- a/docker/.env.ai.example
+++ b/docker/.env.ai.example
@@ -1,57 +1,63 @@
-## LLM
-# openai_llm, azure_openai_llm, ollama_llm
-LLM_PROVIDER=openai_llm
-LLM_TIMEOUT=120
-GENERATION_MODEL=gpt-4o-mini
-GENERATION_MODEL_KWARGS={"temperature": 0, "n": 1, "max_tokens": 4096, "response_format": {"type": "json_object"}}
-COLUMN_INDEXING_BATCH_SIZE=50
-TABLE_RETRIEVAL_SIZE=10
-TABLE_COLUMN_RETRIEVAL_SIZE=1000
-QUERY_CACHE_TTL=3600
-
-# openai or openai-api-compatible
-LLM_OPENAI_API_KEY=sk-xxxx
-LLM_OPENAI_API_BASE=https://api.openai.com/v1
-
-# azure_openai
-LLM_AZURE_OPENAI_API_KEY=
-LLM_AZURE_OPENAI_API_BASE=
-LLM_AZURE_OPENAI_VERSION=
-
-# ollama
-LLM_OLLAMA_URL=http://host.docker.internal:11434
-
-
-## EMBEDDER
-# openai_embedder, azure_openai_embedder, ollama_embedder
-EMBEDDER_PROVIDER=openai_embedder
-EMBEDDER_TIMEOUT=120
-# supported embedding models providers by qdrant: https://qdrant.tech/documentation/embeddings/
-EMBEDDING_MODEL=text-embedding-3-large
-EMBEDDING_MODEL_DIMENSION=3072
-
-# openai or openai-api-compatible
-EMBEDDER_OPENAI_API_KEY=sk-xxxx
-EMBEDDER_OPENAI_API_BASE=https://api.openai.com/v1
-
-# azure_openai
-EMBEDDER_AZURE_OPENAI_API_KEY=
-EMBEDDER_AZURE_OPENAI_API_BASE=
-EMBEDDER_AZURE_OPENAI_VERSION=
-
-# ollama
-EMBEDDER_OLLAMA_URL=http://host.docker.internal:11434
-
-
-## DOCUMENT_STORE
-DOCUMENT_STORE_PROVIDER=qdrant
-QDRANT_HOST=qdrant
-QDRANT_TIMEOUT=120
-
-
-## Langfuse: https://langfuse.com/
-# empty means disabled
-LANGFUSE_ENABLE=
-LANGFUSE_SECRET_KEY=
-LANGFUSE_PUBLIC_KEY=
-LANGFUSE_HOST=https://cloud.langfuse.com
+## LLM (Large Language 
Model) +# Available providers: openai_llm, azure_openai_llm, ollama_llm +LLM_PROVIDER=openai_llm # The LLM provider to use +LLM_TIMEOUT=120 # Timeout for LLM requests (in seconds) +GENERATION_MODEL=gpt-4o-mini # Model used for text generation tasks +GENERATION_MODEL_KWARGS={"temperature": 0, "n": 1, "max_tokens": 4096, "response_format": {"type": "json_object"}} # Model configuration + +# Indexing and retrieval configuration +COLUMN_INDEXING_BATCH_SIZE=50 # Number of columns indexed per batch +TABLE_RETRIEVAL_SIZE=10 # Number of tables retrieved per query +TABLE_COLUMN_RETRIEVAL_SIZE=1000 # Maximum number of columns retrieved per table +QUERY_CACHE_TTL=3600 # Query cache time-to-live (in seconds) + +# OpenAI or OpenAI-compatible API +LLM_OPENAI_API_KEY=sk-xxxx # Your OpenAI API Key (required) +LLM_OPENAI_API_BASE=https://api.openai.com/v1 # OpenAI API base URL + +# Azure OpenAI +LLM_AZURE_OPENAI_API_KEY= # Azure OpenAI API Key +LLM_AZURE_OPENAI_API_BASE= # Azure OpenAI base URL +LLM_AZURE_OPENAI_VERSION= # Azure OpenAI API version + +# Ollama +LLM_OLLAMA_URL=http://host.docker.internal:11434 # Ollama API base URL + + +## EMBEDDER (Embeddings Service) +# Available providers: openai_embedder, azure_openai_embedder, ollama_embedder +EMBEDDER_PROVIDER=openai_embedder # The embedding provider to use +EMBEDDER_TIMEOUT=120 # Timeout for embedding requests (in seconds) + +# Embedding model configuration (supported by Qdrant) +# Refer to: https://qdrant.tech/documentation/embeddings/ for supported models +EMBEDDING_MODEL=text-embedding-3-large # Embedding model used +EMBEDDING_MODEL_DIMENSION=3072 # Dimensionality of the embedding model + +# OpenAI or OpenAI-compatible API +EMBEDDER_OPENAI_API_KEY=sk-xxxx # Your OpenAI Embedding API Key (required) +EMBEDDER_OPENAI_API_BASE=https://api.openai.com/v1 # OpenAI API base URL + +# Azure OpenAI +EMBEDDER_AZURE_OPENAI_API_KEY= # Azure OpenAI Embedding API Key +EMBEDDER_AZURE_OPENAI_API_BASE= # Azure OpenAI base URL 
+EMBEDDER_AZURE_OPENAI_VERSION= # Azure OpenAI API version
+
+# Ollama
+EMBEDDER_OLLAMA_URL=http://host.docker.internal:11434 # Ollama API base URL
+
+
+## DOCUMENT_STORE (Vector Database)
+# Available providers: qdrant (default)
+DOCUMENT_STORE_PROVIDER=qdrant # Document store provider
+QDRANT_HOST=qdrant # Qdrant host URL
+QDRANT_TIMEOUT=120 # Timeout for Qdrant requests (in seconds)
+
+
+## Langfuse Integration (Optional)
+# Langfuse URL: https://langfuse.com/
+# Leave empty to disable Langfuse integration
+LANGFUSE_ENABLE= # Enable or disable Langfuse (true/false)
+LANGFUSE_SECRET_KEY= # Langfuse secret API key
+LANGFUSE_PUBLIC_KEY= # Langfuse public API key
+LANGFUSE_HOST=https://cloud.langfuse.com # Langfuse API base URL
diff --git a/docker/.env.example b/docker/.env.example
index cb242ea44..6d95b090e 100644
--- a/docker/.env.example
+++ b/docker/.env.example
@@ -1,26 +1,26 @@
+# Project Configuration
 COMPOSE_PROJECT_NAME=wren
 PLATFORM=linux/amd64
-
 PROJECT_DIR=.

-# service port
-WREN_ENGINE_PORT=8080
-WREN_ENGINE_SQL_PORT=7432
-WREN_AI_SERVICE_PORT=5555
-WREN_UI_PORT=3000
-IBIS_SERVER_PORT=8000
+# Service Ports
+WREN_ENGINE_PORT=8080 # Wren Engine API port
+WREN_ENGINE_SQL_PORT=7432 # Wren Engine SQL service port
+WREN_AI_SERVICE_PORT=5555 # Wren AI service port
+WREN_UI_PORT=3000 # Wren UI port
+IBIS_SERVER_PORT=8000 # Ibis Server port

-# service endpoint (for docker-compose-dev.yaml file)
-WREN_UI_ENDPOINT=http://docker.for.mac.localhost:3000
+# Service Endpoints (for docker-compose-dev.yaml)
+WREN_UI_ENDPOINT=http://docker.for.mac.localhost:3000 # Wren UI service endpoint

-# LLM
-LLM_OPENAI_API_KEY=
-EMBEDDER_OPENAI_API_KEY=
-# gpt-4o-mini, gpt-4o, gpt-4-turbo, gpt-3.5-turbo
-GENERATION_MODEL=gpt-4o-mini
+# LLM (Large Language Model) Settings
+LLM_OPENAI_API_KEY= # Your OpenAI API Key (required)
+EMBEDDER_OPENAI_API_KEY= # Your OpenAI Embedding API Key (required)
+# Supported models: gpt-4o-mini, gpt-4o, gpt-4-turbo, gpt-3.5-turbo 
+GENERATION_MODEL=gpt-4o-mini # Default LLM model used for generation tasks -# version -# CHANGE THIS TO THE LATEST VERSION +# Version Information +# Update these to the latest versions as needed WREN_PRODUCT_VERSION=0.9.0-rc.5 WREN_ENGINE_VERSION=0.10.4 WREN_AI_SERVICE_VERSION=0.8.19 @@ -28,21 +28,21 @@ IBIS_SERVER_VERSION=sha-d9f5ebf WREN_UI_VERSION=0.13.2 WREN_BOOTSTRAP_VERSION=0.1.5 -# AI service related env variables -AI_SERVICE_ENABLE_TIMER= -AI_SERVICE_LOGGING_LEVEL=INFO -SHOULD_FORCE_DEPLOY=1 -QDRANT_HOST=qdrant +# AI Service Related Configuration +AI_SERVICE_ENABLE_TIMER= # Enable or disable AI service timer (true/false) +AI_SERVICE_LOGGING_LEVEL=INFO # Logging level for AI service (DEBUG, INFO, WARN, ERROR) +SHOULD_FORCE_DEPLOY=1 # Force deploy AI service (1 for yes, 0 for no) +QDRANT_HOST=qdrant # Host for Qdrant (vector search engine) -# user id (uuid v4) -USER_UUID= +# User Configuration (UUID v4) +USER_UUID= # User's unique identifier (optional, auto-generated if empty) -# for other services -POSTHOG_API_KEY=phc_nhF32aj4xHXOZb0oqr2cn4Oy9uiWzz6CCP4KZmRq9aE -POSTHOG_HOST=https://app.posthog.com -TELEMETRY_ENABLED=true +# Telemetry and Analytics +POSTHOG_API_KEY=phc_nhF32aj4xHXOZb0oqr2cn4Oy9uiWzz6CCP4KZmRq9aE # PostHog API Key for tracking +POSTHOG_HOST=https://app.posthog.com # PostHog service endpoint +TELEMETRY_ENABLED=true # Enable telemetry data collection (true/false) -# the port exposes to the host -# OPTIONAL: change the port if you have a conflict -HOST_PORT=3000 -AI_SERVICE_FORWARD_PORT=5555 +# Host Port Configuration (optional) +# Change these if you have port conflicts on your host machine +HOST_PORT=3000 # Port exposed to the host for Wren UI +AI_SERVICE_FORWARD_PORT=5555 # Port exposed to the host for Wren AI service
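Since both `.env` templates above mark the OpenAI keys as required, a small pre-flight check can catch an empty key before `docker-compose` brings the stack up. The snippet below is only a sketch, not a script that ships with the project; the file and key names follow the templates above:

```sh
#!/bin/sh
# Sketch: verify that required keys have non-empty values in an env file
# before running docker-compose. Not part of the WrenAI repo.
check_env_keys() {
    # usage: check_env_keys FILE KEY...
    file=$1
    shift
    rc=0
    for key in "$@"; do
        if [ -f "$file" ] && grep -q "^${key}=..*" "$file"; then
            echo "ok: ${key} is set"
        else
            echo "missing: ${key} has no value in ${file}" >&2
            rc=1
        fi
    done
    return $rc
}

# Example usage (from WrenAI/docker):
# check_env_keys .env.local LLM_OPENAI_API_KEY EMBEDDER_OPENAI_API_KEY \
#   && docker-compose -f docker-compose-dev.yaml --env-file .env.local up -d
```

Because the function returns non-zero when any key is empty or absent, chaining it with `&&` stops the stack from starting with an unusable configuration.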