diff --git a/docs/cody/capabilities/chat.mdx b/docs/cody/capabilities/chat.mdx index 57ae53a8d..9d219bf36 100644 --- a/docs/cody/capabilities/chat.mdx +++ b/docs/cody/capabilities/chat.mdx @@ -1,23 +1,16 @@ # Chat -

Use Cody's chat to get contextually-aware answers to your questions.

+

Chat directly with AI to ask questions about your code, generate code, and edit code.

-Cody **chat** allows you to ask coding-related questions about any part of your codebase or specific code snippets. You can do it from the **Chat** panel of the supported editor extensions (VS Code, JetBrains) or in the web app. +You can **chat** with Cody to ask questions about any part of your codebase or specific code snippets. You can do it from the chat panel of the supported editor extensions (VS Code, JetBrains) or from the web app. -Key functionalities in the VS Code extension include support for multiple simultaneous chats, enhanced chat context configurability through @-mentions, detailed visibility into the code that Cody read before providing a response, and more. +Cody has context of your open file and repository by default, and you can use `@-mention` to add context on specific files, symbols, remote repositories, or other non-code artifacts. You can learn more about the IDE support for these functionalities in the [feature parity reference](/cody/clients/feature-reference#chat). -## Prerequisites - -To use Cody's chat, you'll need to have the following: - -- A Free or Pro account via Sourcegraph.com or a Sourcegraph Enterprise account -- A supported editor extension (VS Code, JetBrains) installed - ## How does chat work? -Cody answers questions by searching your codebase and retrieving context relevant to your questions. Cody uses several methods to search for context, including keyword search and embeddings search. Finding and using context allows Cody to make informed responses based on your code rather than being limited to general knowledge. When Cody retrieves context to answer a question, it will tell you which code files it read to generate its response. +Cody answers questions by searching your codebase and retrieving context relevant to your questions. Cody uses several methods to search for context like Sourcegraph's native search API and keyword search. 
Finding and using context allows Cody to make informed responses based on your code rather than being limited to general code knowledge. When Cody retrieves context to answer a question, it will tell you which code files it read to generate its response. Cody can assist you with various use cases such as: @@ -25,16 +18,321 @@ Cody can assist you with various use cases such as: - Locating a specific component in your codebase: Cody can identify and describe the files where a particular component is defined - Handling questions that involve multiple files, like understanding data population in a React app: Cody can locate React component definitions, helping you understand how data is passed and where it originates +## Chat features + +Let's learn how you can leverage some of the features of Cody chat with the different IDE extensions (VS Code, JetBrains) and the Sourcegraph web app. + + + + +## Prerequisites + +To use Cody's chat in VS Code, you'll need the following: + +- A Free Sourcegraph.com account or a Sourcegraph Enterprise account +- The Cody for VS Code editor extension, installed and activated + +## Chat interface + +Cody chat in VS Code is available in a unified interface opened right next to your code. Once connected to Sourcegraph, a new chat input field is opened with default `@-mention` [context chips](#context-retrieval). + +All your previous and existing chats are stored for later use and can be accessed via the **History** icon in the top menu. You can download them as a `.json` file to share or reuse later, or delete them altogether. + +The chat interface is designed to be intuitive. Your very first chat input lives at the top of the panel, and the first message in any chat log will stay pinned to the top of the chat. After your first message, the chat input window moves to the bottom of the sidebar. 
+ +Since your first message to Cody anchors the conversation, you can return to the top chat box anytime, edit your prompt, or re-run it using a different LLM model. + + + +## Chat history + +A chat history icon at the top of your chat input window allows you to navigate between chats (and search them) without opening the Cody sidebar. + +## Context retrieval + +When you start a new Cody chat, the chat input window opens with default `@-mention` context chips for all the context it intends to use. This context is based on your current repository and current file (or a file selection if you have code highlighted). + +![context-retrieval](https://storage.googleapis.com/sourcegraph-assets/Docs/context-retrieval-0724.jpg) + +At any point, you can edit these context chips or remove them completely if you do not want to use them as context. Any chat without a context chip will instruct Cody to use no codebase context. However, you can always provide an alternate `@-mention` file or symbol for Cody to use as a new source of context. + +When you have both a repository and files @-mentioned, Cody will search the repository for context while prioritizing the mentioned files. + +### Selecting context with @-mentions + +Cody's chat allows you to add files and symbols as context in your messages. + +- Type `@-file` and then a filename to include a file as context +- Type `@#` and then a symbol name to include the symbol's definition as context. Functions, methods, classes, types, etc., are all symbols + +The `@-file` mention also supports line ranges, which is useful for querying large files. You can add a range of a large file to your context by @-mentioning the file and appending a line range to the filename, for example, `@filepath/filename:1-10`. + +When you `@-mention` files to add to Cody's context window, the file lookup takes `files.exclude`, `search.exclude`, and `.gitignore` files into account. As a result, file search is faster (by up to 100ms). 
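To make the file-lookup behavior above concrete, here is a hedged sketch of how exclusion settings can filter `@-mention` candidates. The patterns, file names, and helper function are hypothetical illustrations (and real VS Code exclude globs are richer than Python's `fnmatch`):

```python
from fnmatch import fnmatch

# Hypothetical exclusion patterns in the spirit of files.exclude / .gitignore
EXCLUDES = ["node_modules/*", "*.min.js", ".git/*"]

def mentionable(path: str, excludes=EXCLUDES) -> bool:
    """Return True if a file may be offered as an @-mention candidate."""
    return not any(fnmatch(path, pattern) for pattern in excludes)

candidates = ["src/app.ts", "node_modules/react/index.js", "dist/app.min.js"]
print([p for p in candidates if mentionable(p)])  # ['src/app.ts']
```

Skipping excluded trees up front is what keeps the lookup fast: the search never has to scan dependency or build-output directories at all.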
+ +Moreover, when you `@-mention` files, Cody will track the number of characters in those files against the context window limit of the selected chat model. As you `@-mention` multiple files, Cody will calculate how many tokens of the context window remain. When the remaining context window size becomes too small, you get **File too large** errors for any additional `@-mention`ed files. + +Cody defaults to showing @-mention context chips for all the context it intends to use. When you open a new chat, Cody will show context chips for your current repository and current file (or file selection if you have code highlighted). + +### @-mention context providers with OpenCtx + +OpenCtx context providers are in the Experimental stage for all Cody users. Enterprise users can also use them, but with limited support. If you have feedback or questions, please visit our [support forum](https://community.sourcegraph.com/c/openctx/10). + +[OpenCtx](https://openctx.org/) is an open standard for bringing contextual information about code into your dev tools. Cody Free and Pro users can use OpenCtx providers to fetch and use context from the following sources: + +- [Webpages](https://openctx.org/docs/providers/web) (via URL) +- [Jira tickets](https://openctx.org/docs/providers/jira) +- [Linear issues](https://openctx.org/docs/providers/linear-issues) +- [Notion pages](https://openctx.org/docs/providers/notion) +- [Google Docs](https://openctx.org/docs/providers/google-docs) +- [Sourcegraph code search](https://openctx.org/docs/providers/sourcegraph-search) + +To try it out, add context providers to your VS Code settings. 
For example, to use the [DevDocs provider](https://openctx.org/docs/providers/devdocs), add the following to your `settings.json`: + +```json +"openctx.providers": { + "https://openctx.org/npm/@openctx/provider-devdocs": { + "urls": ["https://devdocs.io/go/", "https://devdocs.io/angular~16/"] + } +}, +``` + +You don't need the OpenCtx VS Code extension to use context fetching with OpenCtx. We recommend uninstalling the extension before using this feature in Cody. + +## Context filters + +Context Filters is available for all Cody Enterprise users running Cody VS Code extension version `>=1.20.0`. + +Admins on the Sourcegraph Enterprise instance can use the Cody Context Filters to determine which repositories Cody can use as the context in its requests to third-party LLMs. Inside your site configuration, you can define a set of `include` and `exclude` rules that will be used to filter the list of repositories Cody can access. + +For repos mentioned in the `exclude` field, Cody's commands are disabled, and you cannot use them for context fetching. If you try running any of these, you'll be prompted with an error message. However, Cody chat will still work, and you can use it to ask questions. + +[Read more about the Cody Context Filters here →](/cody/capabilities/ignore-context) + +## Prompts and Commands + +Cody offers quick, ready-to-use [prompts and commands](/cody/capabilities/commands) for common actions to write, describe, fix, and smell code. 
These allow you to run predefined actions with smart context-fetching anywhere in the editor, like: + +- **New Chat**: Ask Cody a question +- **Document Code**: Add code documentation +- **Edit Code**: Edit code with instructions +- **Explain Code**: Describe your code in more detail +- **Find Code Smells**: Identify bad code practices and bugs +- **Generate Unit Tests**: Write tests for your code +- **Custom Commands**: Write and define your own commands + +Let's look at how the `Document Code` command generates code documentation for a function. + + + +### Custom Commands + +Custom Commands are currently available in Beta for all users and are supported in the Cody VS Code extension version 0.8 and above. + +For customization and advanced use cases, you can create **Custom Commands** tailored to your requirements. You can also bind keyboard shortcuts to run your custom commands quickly. To bind a keyboard shortcut, open the Keyboard Shortcuts editor and search for `cody.command.custom.` to see the list of your custom commands. + +Learn more about Custom Commands [here](/cody/capabilities/commands#custom-commands). + +## Supported LLM models + +Claude 3.5 Sonnet is the default LLM model for inline edits and commands. If you've used Claude 3 Sonnet for inline edits or commands before, remember to manually update the model. The default model change only affects new users. + +Users on Cody **Free** and **Pro** can choose from a list of supported LLM models for Chat and Commands. + +![LLM-models-for-cody-free](https://storage.googleapis.com/sourcegraph-assets/Docs/LLM-Select-Free.png) + +Enterprise users get Claude 3 (Opus and Sonnet) as the default LLM models at no extra cost. Moreover, Enterprise users can use Claude 3.5 models through Cody Gateway, Anthropic BYOK, AWS Bedrock (limited availability), and GCP Vertex. + +For enterprise users on AWS Bedrock: 3.5 Sonnet is unavailable in `us-west-2` but available in `us-east-1`. 
Check the current model availability on AWS and your instance's location before switching. Provisioned throughput via AWS is not supported for 3.5 Sonnet. + +You also get additional capabilities like BYOLLM (Bring Your Own LLM), supporting Single-Tenant and Self-Hosted setups for flexible coding environments. Your site administrator determines the LLM, which cannot be changed within the editor. However, Cody Enterprise users using Cody Gateway can [configure custom models](/cody/core-concepts/cody-gateway#configuring-custom-models) from Anthropic (like Claude 2.0 and Claude Instant), OpenAI (GPT 3.5 and GPT 4), and Google Gemini 1.5 (Flash and Pro). + +Read more about all the supported LLM models [here](/cody/capabilities/supported-models). + +## LLM selection + + You need to be a Cody Free or Pro user to have multi-model selection capability. Enterprise users with the new [model configuration](/cody/clients/model-configuration) can use the LLM selection dropdown to choose a chat model. + +LLM selection for chat: + +- Open chat or toggle between the editor and chat (Opt+L/Alt+L) +- Click on the model selector (which by default indicates Claude 3.5 Sonnet) +- Browse the list of models and click the one you want. This model becomes the default for any new chats + +LLM selection for inline edits: + +- In any file, select some code and right-click +- Select **Cody > Edit Code** (or press Opt+K/Alt+K) +- Click the default model shown (this is Claude 3 Opus) +- Browse the list of models and click the one you want. This model becomes the default for any new edits + +## Supported local Ollama models with Cody + +Ollama support is currently in the Experimental stage and is available on Cody Free and Pro plans. 
+ +### Cody Autocomplete with Ollama + +To get autocomplete suggestions from Ollama locally, follow these steps: + +- Install and run [Ollama](https://ollama.ai/) +- Download one of the supported local models: + - `ollama pull deepseek-coder:6.7b-base-q4_K_M` for [deepseek-coder](https://ollama.ai/library/deepseek-coder) + - `ollama pull codellama:7b-code` for [codellama](https://ollama.ai/library/codellama) + - `ollama pull starcoder2:7b` for [starcoder2](https://ollama.ai/library/starcoder2) +- Update Cody's VS Code settings to use the `experimental-ollama` autocomplete provider and configure the right model: + +```json +{ + "cody.autocomplete.advanced.provider": "experimental-ollama", + "cody.autocomplete.experimental.ollamaOptions": { + "url": "http://localhost:11434", + "model": "deepseek-coder:6.7b-base-q4_K_M" + } +} +``` + +- Confirm Cody uses Ollama by looking at the Cody output channel or the autocomplete trace view (in the command palette) + +#### Cody Chat and Commands with Ollama + +To generate chat and commands with Ollama locally, follow these steps: + +- Download [Ollama](https://ollama.com/download) +- Start Ollama (make sure the Ollama logo is showing up in your menu bar) +- Select a chat model (a model whose name includes `instruct` or `chat`, for example, [gemma:7b-instruct-q4_K_M](https://ollama.com/library/gemma:7b-instruct-q4_K_M)) from the [Ollama Library](https://ollama.com/library) +- Pull the chat model locally (for example, `ollama pull gemma:7b-instruct-q4_K_M`) +- Once the chat model is downloaded successfully, open Cody in VS Code +- Open a new Cody chat +- In the new chat panel, you should see the chat model you've pulled in the dropdown list +- Currently, you will need to restart VS Code to see the new models + +You can run `ollama list` in your terminal to see what models are currently available on your machine. 
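The model-selection rule of thumb above (chat-capable Ollama models usually carry `instruct` or `chat` in their tag) can be sketched as a tiny filter over `ollama list`-style names. The helper and the model names below are illustrative only, not part of Cody or Ollama:

```python
def is_chat_capable(model_name: str) -> bool:
    """Heuristic from the docs: chat models include 'instruct' or 'chat' in the tag."""
    return any(marker in model_name for marker in ("instruct", "chat"))

local_models = [
    "gemma:7b-instruct-q4_K_M",         # chat-capable
    "codellama:7b-code",                # completion-only
    "deepseek-coder:6.7b-base-q4_K_M",  # completion-only
]
print([m for m in local_models if is_chat_capable(m)])  # ['gemma:7b-instruct-q4_K_M']
```

This is only a naming convention, not a guarantee; check a model's page in the Ollama Library to confirm it is tuned for chat.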
+ +#### Run Cody offline with local Ollama models + +You can use Cody with or without an internet connection. The offline mode does not require you to sign in with your Sourcegraph account to use Ollama. Click the button below the Ollama logo and you'll be ready to go. + +![offline-cody-with-ollama](https://storage.googleapis.com/sourcegraph-assets/Docs/cody-offline-ollama.jpg) + +You still have the option to switch to your Sourcegraph account whenever you want to use Claude, OpenAI, Gemini, Mixtral, etc. + +## Experimental models + +Support for the following models is currently in the Experimental stage and available on Cody Free and Pro plans. + +The following experimental model providers can be configured in Cody's extension settings JSON: + +- Google (requires [Google AI Studio API key](https://aistudio.google.com/app/apikey)) +- Groq (requires [GroqCloud API key](https://console.groq.com/docs/api-keys)) +- OpenAI & OpenAI-Compatible API (requires [OpenAI API key](https://platform.openai.com/account/api-keys)) +- Ollama (remote) + +Once configured, and VS Code has been restarted, you can select the configured model from the dropdown both for chat and for edits. + +Example VS Code user settings JSON configuration: + +```json +{ + "cody.dev.models": [ + // Google (e.g. Gemini 1.5 Pro) + { + "provider": "google", + "model": "gemini-1.5-pro-latest", + "tokens": 1000000, + "apiKey": "xyz" + }, + // Groq (e.g. llama2 70b) + { + "provider": "groq", + "model": "llama2-70b-4096", + "tokens": 4096, + "apiKey": "xyz" + }, + // OpenAI & OpenAI-compatible APIs + { + "provider": "groq", // keep "groq" as the provider for OpenAI-compatible APIs + "model": "some-model-id", + "apiKey": "xyz", + "apiEndpoint": "https://host.domain/path" + }, + // Ollama (remote) + { + "provider": "ollama", + "model": "some-model-id", + "apiEndpoint": "https://host.domain/path" + } + ] +} +``` + +### Provider configuration options + +- `provider`: `"google"`, `"groq"`, or `"ollama"` + - The LLM provider type. 
+- `model`: `string` + - The ID of the model, e.g. `"gemini-1.5-pro-latest"` +- `tokens`: `number` - optional + - The context window size of the model. Default: `7000`. +- `apiKey`: `string` - optional + - The API key for the endpoint. Required if the provider is `"google"` or `"groq"`. +- `apiEndpoint`: `string` - optional + - The endpoint URL, if you don't want to use the provider's default endpoint. + +## Smart Apply code suggestions + +Cody lets you dynamically insert code from chat into your files with **Smart Apply**. Every time Cody provides you with a code suggestion, you can click the **Apply** button. Cody will then analyze your open code file, find where that relevant code should live, and add a diff. + +For chat messages where Cody provides multiple code suggestions, you can apply each in sequence to go from chat suggestions to written code. + +## Inline Edits + +Highlight code, hit the edit hotkey, and describe a change. Cody will generate a diff for the change in seconds. + +## Ask Cody to fix + +For errors in your code, hit **Ask Cody to Fix** and Cody will propose a diff based on the error. + + + + ## Ask Cody your first question -Let's use Cody VS Code extension's chat interface to answer your first question. +Let's use the Cody chat interface to answer your first question. -- Click the Cody icon in the sidebar to view the detailed panel -- Next, click the icon for **New Chat** to open a new chat window +- Click the Cody icon from the sidebar in VS Code to open the chat panel +- The chat input field will show pre-added context about the files and repository it reads +- If you want to add or remove context, you can use the `@-mention` syntax +- Select your LLM via the drop-down. By default, Cody uses Claude 3.5 Sonnet for chat - Write your question or instruction to Cody and then press **Enter**. -For example, ask Cody "What does this file do?" 
- Cody will take a few seconds to process your question, providing contextual information about the files it reads and generating the answer. +Cody will take a few seconds to process your question, providing contextual information about the files it reads and generating the answer. + + + +## Ask Cody to write code + +The chat feature can also write code for your questions. For example, in VS Code, ask Cody to "write a function that sorts an array in ascending order". + +You are provided with code suggestions in the chat window, along with the following options for using the code. + +- Use the **Copy Code** icon to copy the code suggestion to your clipboard and paste it into your code editor +- Use the **Insert Code at Cursor** icon to insert the code suggestion at the current cursor location +- Use the **Save Code to New File** icon to save the code suggestion to a new file in your project + +During the chat, if Cody needs additional context, it can ask you to provide more information with a follow-up question. If your question is beyond the scope of the context, Cody will ask you to provide an alternate question aligned with the context of your codebase. + +## Keyboard shortcuts + +Cody provides a set of powerful keyboard shortcuts to streamline your workflow and boost productivity. These shortcuts allow you to quickly access Cody's features without leaving your keyboard. + +* `Opt+L` (macOS) or `Alt+L` (Windows/Linux): Toggles between the chat view and the last active text editor. 
If a chat view doesn't exist, it opens a new one. When used with an active selection in a text editor, it adds the selected code to the chat for context. + +* `Shift+Opt+L` (macOS) or `Shift+Alt+L` (Windows/Linux): Instantly starts a new chat session, perfect for when you want to begin a fresh conversation with Cody. + +* `Opt+K` (macOS) or `Alt+K` (Windows/Linux): Opens the Edit Code instruction box. This works with either selected code or the code at the cursor position, allowing you to quickly request edits or improvements. + +* `Opt+C` (macOS) or `Alt+C` (Windows/Linux): Opens the Cody Commands Menu, giving you quick access to a range of Cody's powerful features. + +* `Cmd+.` (macOS) or `Ctrl+.` (Windows/Linux): Opens the Quick Fix menu, which includes options for Cody to edit or generate code based on your current context. + +## Updating the extension + +VS Code will typically notify you when updates are available for installed extensions. Follow the prompts to update the Cody AI extension to the latest version. + +## Authenticating Cody with VS Code forks + +Cody also works with Cursor, Gitpod, IDX, and other similar VS Code forks. To access VS Code forks like Cursor, select **Sign in with URL and access token** and generate an access token. Next, copy and paste the token into the allocated field, using `https://sourcegraph.com` as the URL. + +## Add/remove account + +To add or remove an account, you can do the following: + +1. Open Cody by clicking the Cody icon on the left navbar +1. In the open sidebar, select the Account icon +1. Select **Sign Out** to remove your account or **Switch Account** to log in to a different account 
diff --git a/docs/cody/core-concepts/enterprise-architecture.mdx b/docs/cody/core-concepts/enterprise-architecture.mdx index c67583176..d249ff870 100644 --- a/docs/cody/core-concepts/enterprise-architecture.mdx +++ b/docs/cody/core-concepts/enterprise-architecture.mdx @@ -1,4 +1,6 @@ -# Enterprise architecture +# Cody Enterprise Architecture + +

The diagrams on this page explain the core architecture for Cody Enterprise users.

[Cody Enterprise](/cody/clients/enable-cody-enterprise) can run in the Sourcegraph Cloud environment or on your own infrastructure. Here are a few possible deployment architectures. @@ -15,4 +17,4 @@ ## Data flow - \ No newline at end of file + diff --git a/docs/cody/enterprise/configure-cody.mdx b/docs/cody/enterprise/configure-cody.mdx new file mode 100644 index 000000000..3030c96fd --- /dev/null +++ b/docs/cody/enterprise/configure-cody.mdx @@ -0,0 +1,74 @@ +# Advanced Features + +

Learn about the advanced features available to users on Cody Enterprise.

+ + + + Site administrators can customize the time duration of the access token that users on the Sourcegraph Enterprise instance use to connect to Cody from their IDEs via the **Site admin** page. Administrators can choose from various options, including 7, 14, 30, 60, and 90 days. + + ![ide-token-expiry](https://storage.googleapis.com/sourcegraph-assets/Docs/token-expiry.png) + + + Guardrails for public code are currently in the Beta stage and are supported with the VS Code and JetBrains IDE extensions. + + Open source attribution guardrails for public code, commonly called copyright guardrails, reduce the exposure to copyrighted code. This involves a verification mechanism within Cody to ensure that any code generated by the platform does not replicate open source code. + + Guardrails for public code are available to all Sourcegraph Enterprise instances and are **disabled** by default. You can enable them from the **Site configuration** by setting `"attribution.enabled": true`. + + Guardrails only match code snippets that are at least **10 lines** long, and the search corpus includes **290,000** open source repositories. + + Guardrails don't differentiate between license types. They match any code snippet that is at least 10 lines long from the 290,000 indexed open source repositories. + + + Admin Controls are supported with the VS Code and JetBrains IDE extensions. + + Sourcegraph account admins have selective control over users' access to Cody Enterprise, which is now managed via the Sourcegraph role-based access control system. This provides a more intuitive user interface for assigning permission to use Cody. + + + Cody Analytics are supported with the VS Code extension and the latest versions of the JetBrains IDEs. + + Cody Enterprise users get a clear view of usage analytics for their instance on a self-service basis. 
A separately managed cloud service for Cody analytics handles user authentication, retrieves metrics data from Sourcegraph's BigQuery instance, and visualizes it. + + The following metrics are available for Cody Enterprise users: + + | **Metric Type** | **What is measured?** | + | --------------- | --------------------- | + | Active users | - Total active users
- Average daily users
- Average no. of days each user used Cody (of last 30 days)
- Cody users by day (last 30 days)
- Cody users by month (last two months)
- Cody users by number of days used | + | Completions | - Total accepted completions
- Minutes saved per completion
- Hours saved by completions
- Cody completions by day
- Completions acceptance rate
- Weighted completions acceptance rate
- Average completion latency
- Acceptance rate by language
| + | Chat | - Total chat events
- Minutes saved per chat
- Hours saved by chats
- Cody chats by day | + | Commands | - Total command events
- Minutes saved per command
- Hours saved by commands
- Cody commands by day
- Most used commands | + + To enable Cody Analytics: + + - Create an account on [Sourcegraph Accounts](https://accounts.sourcegraph.com/) + - Users who already have a Sourcegraph.com account are automatically migrated to Sourcegraph Accounts and can sign in to Cody Analytics using their email and password + - If you don't have a Sourcegraph.com account, please get in touch with one of our teammates; they can help with both the account setup and assigning instances to specific users + - Map your user account to a Sourcegraph instance; this gives you access to Cody's analytics + +
+ + + Multi-repo context for Sourcegraph Enterprise is supported with the VS Code and JetBrains editor extensions. + + Cody Enterprise supports searching up to 10 repositories to find relevant context in chat. + + * In VS Code, open a new Cody chat, type `@`, and select `Remote Repositories` to search other repositories for context + * In JetBrains, use the enhanced context selector + + ### @-mention directory + + @-mentioning directories is available for Enterprise users on VS Code, JetBrains, and Cody Web. + + To better support teams working with large monorepos, Enterprise users can `@-mention` directories when chatting with Cody. This helps you define more specific directories and sub-directories within that monorepo to give more precise context. + + To do this, type `@` in the chat, and then select **Directories** to search specific directories in your codebase for context. + + + + Please note that you can only `@-mention` remote directories (i.e., directories in your Sourcegraph instance) but not local directories. This means any recent changes to your directories can't be utilized as context until your Sourcegraph instance re-indexes them. + + If you want to include recent changes that haven't been indexed in your Sourcegraph instance, you can `@-mention` specific files, lines of code, or symbols. + 
diff --git a/docs/cody/enterprise/index.mdx b/docs/cody/enterprise/index.mdx new file mode 100644 index 000000000..38ba4b59e --- /dev/null +++ b/docs/cody/enterprise/index.mdx @@ -0,0 +1 @@ +# Cody for Admins diff --git a/docs/cody/enterprise/llm-models.mdx b/docs/cody/enterprise/llm-models.mdx new file mode 100644 index 000000000..c9b783ea7 --- /dev/null +++ b/docs/cody/enterprise/llm-models.mdx @@ -0,0 +1,32 @@ +# Supported LLM Models and Configuration + +

This page will help you explore and learn about all the LLM models and their configurations that are supported on Cody Enterprise.

+ +Sourcegraph Enterprise supports many different LLM providers and models. You can use state-of-the-art code completion models such as Anthropic's Claude or OpenAI's ChatGPT by adjusting your Sourcegraph instance's configuration. + +Refer to the [Model Configuration](/cody/clients/model-configuration) or [Supported Models](/cody/capabilities/supported-models) sections for more information. +Use the drop-down menu to make your desired selection and get a detailed breakdown of the supported LLM models for each provider on Cody Enterprise. + + +For the supported LLM models listed above, refer to the following notes: + +1. Microsoft Azure plans to deprecate the APIs used in Sourcegraph versions below `5.3.3` on July 1, 2024 [Source](https://learn.microsoft.com/en-us/azure/ai-services/openai/api-version-deprecation) +2. Claude 2.1 is not recommended +3. Sourcegraph doesn't recommend GPT-4 non-turbo, Claude 1, or Claude 2 models +4. Only supported through the legacy completions API +5. BYOK with managed services are only supported for Self-hosted Sourcegraph instances +6. GPT-4 and GPT-4o for completions have a bug that results in many failed completions + +### Supported model configuration + +Use the drop-down menu to make your desired selection and get a detailed breakdown of the supported model configuration for each provider on Cody Enterprise. This is set in the site configuration. Admins should pick a value from the table for `chatModel` to configure their chat model. + + + +For the supported LLM model configuration listed above, refer to the following notes: + +1. Microsoft Azure plans to deprecate the APIs used in Sourcegraph versions below `5.3.3` on July 1, 2024 [Source](https://learn.microsoft.com/en-us/azure/ai-services/openai/api-version-deprecation) +2. Claude 2.1 is not recommended +3. Sourcegraph doesn't recommend GPT-4 non-turbo, Claude 1, or Claude 2 models +4. Only supported through the legacy completions API +5. 
BYOK with managed services are only supported for Self-hosted Sourcegraph instances diff --git a/src/components/Logo.tsx b/src/components/Logo.tsx index acaff9d34..1e11b6b7c 100644 --- a/src/components/Logo.tsx +++ b/src/components/Logo.tsx @@ -5,12 +5,12 @@ export function Logo(props: React.ComponentPropsWithoutRef<'svg'>) { <> Sourcegraph Docs Sourcegraph Docs diff --git a/src/data/navigation.ts b/src/data/navigation.ts index 2e6e3b256..457e36ba8 100644 --- a/src/data/navigation.ts +++ b/src/data/navigation.ts @@ -24,62 +24,123 @@ export type VersionNavigations = Record; export const navigation: NavigationItem[] = [ { - separator: "Code Intelligence", + separator: "Cody", topics: [ { - title: "Cody", + title: "Introduction", href: "/cody", sections: [ { title: "Quickstart", href: "/cody/quickstart" }, - { - title: "Installation", href: "/cody/clients", - subsections: [ - { title: "Cody for VS Code", href: "/cody/clients/install-vscode", }, - { title: "Cody for JetBrains", href: "/cody/clients/install-jetbrains", }, - { title: "Cody for Web", href: "/cody/clients/cody-with-sourcegraph", }, - { title: "Cody for CLI", href: "/cody/clients/install-cli", }, - { title: "Cody for Enterprise", href: "/cody/clients/enable-cody-enterprise", }, - { title: "Model Configuration", href: "/cody/clients/model-configuration", }, - ] - }, - { - title: "Capabilities", href: "/cody/capabilities", - subsections: [ - { title: "Chat", href: "/cody/capabilities/chat", }, - { title: "Autocomplete", href: "/cody/capabilities/autocomplete", }, - { title: "Prompts & Commands", href: "/cody/capabilities/commands", }, - { title: "OpenCtx", href: "/cody/capabilities/openctx", }, - { title: "Debug Code", href: "/cody/capabilities/debug-code", }, - { title: "Context Filters", href: "/cody/capabilities/ignore-context", }, - { title: "Proxy Setup", href: "/cody/capabilities/proxy-setup", }, - { title: "Supported Models", href: "/cody/capabilities/supported-models", }, - { title: "Feature Parity 
Reference", href: "/cody/clients/feature-reference", }, - ] - }, - { - title: "Core Concepts", href: "/cody/core-concepts/context", - subsections: [ - { title: "Context", href: "/cody/core-concepts/context", }, - { title: "Token Limits", href: "/cody/core-concepts/token-limits", }, - // { title: "Embeddings", href: "/cody/core-concepts/embeddings", }, - { title: "Keyword Search", href: "/cody/core-concepts/keyword-search", }, - // { title: "Code Graph", href: "/cody/core-concepts/code-graph", }, - { title: "Cody Gateway", href: "/cody/core-concepts/cody-gateway", }, - { title: "Enterprise Architecture", href: "/cody/core-concepts/enterprise-architecture", }, - ] - }, - // { - // title: "Use Cases", href: "/cody/use-cases/generate-unit-tests", - // subsections: [ - // { title: "Generate Unit Tests", href: "/cody/use-cases/generate-unit-tests", }, - // // { title: "Build UI", href: "/cody/use-cases/build-ui", }, - // ] - // }, - { title: "Usage and Pricing", href: "/cody/usage-and-pricing" }, - { title: "Troubleshooting", href: "/cody/troubleshooting" }, - { title: "FAQs", href: "/cody/faq" }, ], }, + { + title: "Cody for Admins", + href: "/cody/enterprise", + sections: [ + { title: "Get Started", href: "/cody/clients/enable-cody-enterprise" }, + { title: "Configuring Cody", href: "/cody/enterprise/configure-cody" }, + { title: "LLM Models & Configurations", href: "/cody/enterprise/llm-models" }, + { title: "Enterprise Architecture", href: "/cody/core-concepts/enterprise-architecture" }, + + ], + }, + { + title: "Cody for Users", + href: "/cody/clients", + sections: [ + { title: "VS Code", href: "/cody/clients/install-vscode" }, + { title: "JetBrains", href: "/cody/clients/install-jetbrains" }, + { title: "Web", href: "/cody/clients/cody-with-sourcegraph" }, + ], + }, + { + title: "Capabilities", + href: "/cody/capabilities", + sections: [ + { title: "Chat", href: "/cody/capabilities/chat" }, + { title: "Autocomplete", href: "/cody/capabilities/autocomplete" }, + { 
title: "Prompts & Commands", href: "/cody/capabilities/commands" }, + { title: "Debug Code", href: "/cody/capabilities/debug-code" }, + { title: "Context", href: "/cody/core-concepts/context" }, + { title: "Cody CLI", href: "/cody/clients/install-cli" }, + { title: "OpenCtx", href: "/cody/capabilities/openctx" }, + ], + }, + { + title: "Cody at a Glance", + href: "/cody", + sections: [ + { title: "Feature Parity Matrix", href: "/cody/clients/feature-reference" }, + { title: "Supported LLM Models", href: "/cody/capabilities/supported-models" }, + { title: "Token Limits", href: "/cody/core-concepts/token-limits" }, + ], + }, + { title: "Usage and Pricing", href: "/cody/usage-and-pricing" }, + { title: "Troubleshooting", href: "/cody/troubleshooting" }, + { title: "FAQs", href: "/cody/faq" }, + ], + }, + + // { + // separator: "Cody", + // topics: [ + // { + // title: "Introduction", + // href: "/cody", + // sections: [ + // { title: "QuickStart", href: "/code-search/features" }, + // { + // title: "Cody for Admins", href: "/code-search/queries", + // subsections: [ + // { title: "Get Started", href: "/code-search/working/search_filters", }, + // { title: "Configuring Cody", href: "/code-search/queries/examples", }, + // { title: "LLM Models & Configurations", href: "/code-search/types/symbol", }, + // { title: "Enterprise Architecture", href: "/code-search/queries/language", }, + // ] + // }, + // { + // title: "Cody for Users", href: "/code-search/queries", + // subsections: [ + // { title: "Cody Clients", href: "/code-search/working/search_filters", }, + // ] + // }, + // { + // title: "Features", href: "/code-search/code-navigation", + // subsections: [ + // { title: "Chat", href: "/code-search/code-navigation/features", }, + // { title: "Autocomplete", href: "/code-search/code-navigation/search_based_code_navigation", }, + // { title: "Prompts & Commands", href: "/code-search/code-navigation/precise_code_navigation", }, + // { title: "Debug Code", href: 
"/code-search/code-navigation/writing_an_indexer", }, + // { title: "Context", href: "/code-search/code-navigation/auto_indexing", }, + // { title: "OpenCtx", href: "/code-search/code-navigation/envvars", }, + // { title: "Cody CLI", href: "/code-search/code-navigation/troubleshooting", }, + // ] + // }, + // { + // title: "Resources", href: "/code-search/working/saved_searches", + // subsections: [ + // { title: "Feature Parity Matrix", href: "/code-search/types/fuzzy", }, + // { title: "Token Limits", href: "/code-search/working/search_contexts", }, + // { title: "Supported LLM Models", href: "/code-search/types/search-jobs", }, + // ] + // }, + // { + // title: "Usage & Pricing", href: "/code-search/faq", + // }, + // { + // title: "Troubleshooting", href: "/code-search/faq", + // }, + // { + // title: "FAQs", href: "/code-search/faq", + // }, + // ], + // }, + // ], + // }, + + { + separator: "Code Search", + topics: [ { title: "Code Search", href: "/code-search",