From f456ca46702ba8d8fc16d447c4e245d15fa70b0d Mon Sep 17 00:00:00 2001
From: Marc LeBlanc
Date: Tue, 17 Dec 2024 13:27:20 -0700
Subject: [PATCH] Fixing issues for this PR, remaining issues to be resolved in
 next PR

---
 docs/admin/deploy/docker-compose/operations.mdx | 2 +-
 docs/admin/deploy/kubernetes/index.mdx          | 8 +++++---
 docs/admin/observability/opentelemetry.mdx      | 2 +-
 3 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/docs/admin/deploy/docker-compose/operations.mdx b/docs/admin/deploy/docker-compose/operations.mdx
index 2b16e509..54ba1aa0 100644
--- a/docs/admin/deploy/docker-compose/operations.mdx
+++ b/docs/admin/deploy/docker-compose/operations.mdx
@@ -27,7 +27,7 @@ docker exec -it codeinsights-db psql -U postgres #access codeinsights-db contain
 
 The `frontend` container in the `docker-compose.yaml` file will automatically run on startup and migrate the databases if any changes are required, however administrators may wish to migrate their databases before upgrading the rest of the system when working with large databases. Sourcegraph guarantees database backward compatibility to the most recent minor point release so the database can safely be upgraded before the application code.
 
-To execute the database migrations independently, follow the [docker-compose instructions on how to manually run database migrations](/admin/updates/migrator/migrator-operations#docker-compose). Running the `up` (default) command on the `migrator` of the *version you are upgrading to* will apply all migrations required by that version of Sourcegraph.
+To execute the database migrations independently, follow the [docker-compose instructions on how to manually run database migrations](/admin/updates/migrator/migrator-operations#docker-compose). Running the `up` (default) command on the `migrator` of the *version you are upgrading to* will apply all migrations required by the next version of Sourcegraph.
 
 ## Backup and restore
 
diff --git a/docs/admin/deploy/kubernetes/index.mdx b/docs/admin/deploy/kubernetes/index.mdx
index b5f36417..f7eb3c91 100644
--- a/docs/admin/deploy/kubernetes/index.mdx
+++ b/docs/admin/deploy/kubernetes/index.mdx
@@ -360,7 +360,9 @@ jaeger:
 
 #### Configure OpenTelemetry Collector to use an external tracing backend
 
-To configure the bundled otel-collector to export traces to an external OTel-compatible backend, you you can customize the otel-collector's config file directly in your Helm values `override.yaml` file:
+To configure the bundled otel-collector to export traces to an external OTel-compatible backend, you can customize the otel-collector's config file directly in your Helm values `override.yaml` file.
+
+For the specific configurations to set, see our [OpenTelemetry](/admin/observability/opentelemetry) page.
 
 ```yaml
 openTelemetry:
@@ -368,9 +370,9 @@ openTelemetry:
   gateway:
     config:
       traces:
         exporters:
-          ...
+          # Your exporter configuration here
         processors:
-          ...
+          # Your processor configuration here
 ```
 To use an external Jaeger instance, copy and customize the configs from the [opentelemetry-exporter/override.yaml](https://github.com/sourcegraph/deploy-sourcegraph-helm/tree/main/charts/sourcegraph/examples/opentelemetry-exporter/override.yaml) file, and add them to your Helm values override file:
diff --git a/docs/admin/observability/opentelemetry.mdx b/docs/admin/observability/opentelemetry.mdx
index 5d475087..ca4db075 100644
--- a/docs/admin/observability/opentelemetry.mdx
+++ b/docs/admin/observability/opentelemetry.mdx
@@ -6,7 +6,7 @@
 
 To handle this data, Sourcegraph deployments include a bundled [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) (otel-collector) container, which can be configured to ingest, process, and export observability data to a backend of your choice. This approach offers great flexibility.
 
-> NOTE: Sourcegraph currently uses OTel for HTTP Traces, and plans to use it for metrics and logs in the future.
+> NOTE: Sourcegraph currently uses OTel for HTTP Traces, and may use it for metrics and logs in the future.
 
 For an in-depth explanation of the parts that compose a full collector pipeline, see OpenTelemetry's [documentation](https://opentelemetry.io/docs/collector/configuration/).
 
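For reviewers: the placeholder `exporters:`/`processors:` comments introduced in the kubernetes hunk could be filled in by an admin along these lines. This is only a sketch under stated assumptions: `otlp` and `batch` are standard otel-collector component names, but the endpoint value is a hypothetical placeholder, not a Sourcegraph default, and the exact keys the Helm chart merges under `openTelemetry.gateway.config.traces` should be checked against the chart's own examples.

```yaml
openTelemetry:
  gateway:
    config:
      traces:
        exporters:
          # OTLP/gRPC exporter pointed at a hypothetical external backend
          otlp:
            endpoint: otel-backend.example.com:4317
            tls:
              insecure: false
        processors:
          # Batch spans before export to reduce outbound request volume
          batch:
            timeout: 5s
```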