diff --git a/docs/getting-started/index.md b/docs/getting-started/index.md
index 65cffa87971..adecb3302a8 100644
--- a/docs/getting-started/index.md
+++ b/docs/getting-started/index.md
@@ -21,3 +21,33 @@ functions in ClickHouse. The sample datasets include:
+| Page | Description |
+|-----|-----|
+| [New York Taxi Data](/getting-started/example-datasets/nyc-taxi) | Data for billions of taxi and for-hire vehicle (Uber, Lyft, etc.) trips originating in New York City since 2009 |
+| [Terabyte Click Logs from Criteo](/getting-started/example-datasets/criteo) | A terabyte of Click Logs from Criteo |
+| [WikiStat](/getting-started/example-datasets/wikistat) | Explore the WikiStat dataset containing 0.5 trillion records. |
+| [TPC-DS (2012)](/getting-started/example-datasets/tpcds) | The TPC-DS benchmark data set and queries. |
+| [Recipes Dataset](/getting-started/example-datasets/recipes) | The RecipeNLG dataset, containing 2.2 million recipes |
+| [COVID-19 Open-Data](/getting-started/example-datasets/covid19) | COVID-19 Open-Data is a large, open-source database of COVID-19 epidemiological data and related factors like demographics, economics, and government responses |
+| [NOAA Global Historical Climatology Network](/getting-started/example-datasets/noaa) | 2.5 billion rows of climate data for the last 120 years |
+| [GitHub Events Dataset](/getting-started/example-datasets/github-events) | Dataset containing all events on GitHub from 2011 to Dec 6 2020, with a size of 3.1 billion records. |
+| [Amazon Customer Review](/getting-started/example-datasets/amazon-reviews) | Over 150M customer reviews of Amazon products |
+| [Brown University Benchmark](/getting-started/example-datasets/brown-benchmark) | A new analytical benchmark for machine-generated log data |
+| [Writing Queries in ClickHouse using GitHub Data](/getting-started/example-datasets/github) | Dataset containing all of the commits and changes for the ClickHouse repository |
+| [Analyzing Stack Overflow data with ClickHouse](/getting-started/example-datasets/stackoverflow) | Analyzing Stack Overflow data with ClickHouse |
+| [AMPLab Big Data Benchmark](/getting-started/example-datasets/amplab-benchmark) | A benchmark dataset used for comparing the performance of data warehousing solutions. |
+| [New York Public Library "What's on the Menu?" Dataset](/getting-started/example-datasets/menus) | Dataset containing 1.3 million records of historical data on the menus of hotels, restaurants, and cafes, with the dishes and their prices. |
+| [Laion-400M dataset](/getting-started/example-datasets/laion-400m-dataset) | Dataset containing 400 million images with English image captions |
+| [Star Schema Benchmark (SSB, 2009)](/getting-started/example-datasets/star-schema) | The Star Schema Benchmark (SSB) data set and queries |
+| [The UK property prices dataset](/getting-started/example-datasets/uk-price-paid) | Learn how to use projections to improve the performance of queries that you run frequently using the UK property dataset, which contains data about prices paid for real-estate property in England and Wales |
+| [Reddit comments dataset](/getting-started/example-datasets/reddit-comments) | Dataset containing publicly available comments on Reddit from December 2005 to March 2023 with over 14B rows of data in JSON format |
+| [OnTime](/getting-started/example-datasets/ontime) | Dataset containing the on-time performance of airline flights |
+| [Taiwan Historical Weather Datasets](/getting-started/example-datasets/tw-weather) | 131 million rows of weather observation data for the last 128 years |
+| [Crowdsourced air traffic data from The OpenSky Network 2020](/getting-started/example-datasets/opensky) | The data in this dataset is derived and cleaned from the full OpenSky dataset to illustrate the development of air traffic during the COVID-19 pandemic. |
+| [NYPD Complaint Data](/getting-started/example-datasets/nypd_complaint_data) | Ingest and query Tab Separated Value data in 5 steps |
+| [TPC-H (1999)](/getting-started/example-datasets/tpch) | The TPC-H benchmark data set and queries. |
+| [Foursquare places](/getting-started/example-datasets/foursquare-places) | Dataset with over 100 million records containing information about places on a map, such as shops, restaurants, parks, playgrounds, and monuments. |
+| [YouTube dataset of dislikes](/getting-started/example-datasets/youtube-dislikes) | A collection of dislikes on YouTube videos. |
+| [Geo Data using the Cell Tower Dataset](/getting-started/example-datasets/cell-towers) | Learn how to load OpenCelliD data into ClickHouse, connect Apache Superset to ClickHouse and build a dashboard based on data |
+| [Environmental Sensors Data](/getting-started/example-datasets/environmental-sensors) | Over 20 billion records of data from Sensor.Community, a contributor-driven global sensor network that creates Open Environmental Data. |
+| [Anonymized Web Analytics](/getting-started/example-datasets/metrica) | Dataset consisting of two tables containing anonymized web analytics data with hits and visits |
diff --git a/docs/use-cases/observability/build-your-own/demo-application.md b/docs/use-cases/observability/build-your-own/demo-application.md
new file mode 100644
index 00000000000..acf870ef5de
--- /dev/null
+++ b/docs/use-cases/observability/build-your-own/demo-application.md
@@ -0,0 +1,8 @@
+---
+title: 'Demo Application'
+description: 'Demo application for observability'
+slug: /observability/demo-application
+keywords: ['observability', 'logs', 'traces', 'metrics', 'OpenTelemetry', 'Grafana', 'OTel']
+---
+
+The OpenTelemetry project includes a [demo application](https://opentelemetry.io/docs/demo/). A maintained fork of this application with ClickHouse as a data source for logs and traces can be found [here](https://github.com/ClickHouse/opentelemetry-demo). The [official demo instructions](https://opentelemetry.io/docs/demo/docker-deployment/) can be followed to deploy this demo with Docker. In addition to the [existing components](https://opentelemetry.io/docs/demo/collector-data-flow-dashboard/), an instance of ClickHouse will be deployed and used for the storage of logs and traces.
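+
+For example, a minimal sketch of such a deployment, assuming Docker and Docker Compose are installed (the compose flags mirror those in the official demo instructions):
+
+```bash
+# clone the ClickHouse-backed fork of the OpenTelemetry demo
+git clone https://github.com/ClickHouse/opentelemetry-demo.git
+cd opentelemetry-demo
+
+# start the demo, including the bundled ClickHouse instance
+docker compose up --force-recreate --remove-orphans --detach
+```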
diff --git a/docs/use-cases/observability/grafana.md b/docs/use-cases/observability/build-your-own/grafana.md
similarity index 92%
rename from docs/use-cases/observability/grafana.md
rename to docs/use-cases/observability/build-your-own/grafana.md
index c5c592e3647..970ed6eee11 100644
--- a/docs/use-cases/observability/grafana.md
+++ b/docs/use-cases/observability/build-your-own/grafana.md
@@ -23,11 +23,11 @@ import Image from '@theme/IdealImage';
Grafana represents the preferred visualization tool for Observability data in ClickHouse. This is achieved using the official ClickHouse plugin for Grafana. Users can follow the installation instructions found [here](/integrations/grafana).
V4 of the plugin makes logs and traces a first-class citizen in a new query builder experience. This minimizes the need for SREs to write SQL queries and simplifies SQL-based Observability, moving the needle forward for this emerging paradigm.
-Part of this has been placing Open Telemetry (OTel) at the core of the plugin, as we believe this will be the foundation of SQL-based Observability over the coming years and how data will be collected.
+Part of this has been placing OpenTelemetry (OTel) at the core of the plugin, as we believe this will be the foundation of SQL-based Observability over the coming years and how data will be collected.
-## Open Telemetry Integration {#open-telemetry-integration}
+## OpenTelemetry Integration {#open-telemetry-integration}
-On configuring a Clickhouse datasource in Grafana, the plugin allows the users to specify a default database and table for logs and traces and whether these tables conform to the OTel schema. This allows the plugin to return the columns required for correct log and trace rendering in Grafana. If you've made changes to the default OTel schema and prefer to use your own column names, these can be specified. Usage of the default OTel column names for columns such as time (Timestamp), log level (SeverityText), or message body (Body) means no changes need to be made.
+On configuring a ClickHouse datasource in Grafana, the plugin allows the user to specify a default database and table for logs and traces and whether these tables conform to the OTel schema. This allows the plugin to return the columns required for correct log and trace rendering in Grafana. If you've made changes to the default OTel schema and prefer to use your own column names, these can be specified. Usage of the default OTel column names for columns such as time (`Timestamp`), log level (`SeverityText`), or message body (`Body`) means no changes need to be made.
:::note HTTP or Native
Users can connect Grafana to ClickHouse over either the HTTP or Native protocol. The latter offers marginal performance advantages which are unlikely to be appreciable in the aggregation queries issued by Grafana users. Conversely, the HTTP protocol is typically simpler for users to proxy and introspect.
diff --git a/docs/use-cases/observability/build-your-own/index.md b/docs/use-cases/observability/build-your-own/index.md
new file mode 100644
index 00000000000..1ec79f79ba7
--- /dev/null
+++ b/docs/use-cases/observability/build-your-own/index.md
@@ -0,0 +1,18 @@
+---
+slug: /use-cases/observability/build-your-own
+title: 'Build Your Own Observability Stack'
+pagination_prev: null
+pagination_next: null
+description: 'Landing page for building your own observability stack'
+---
+
+This guide helps you build a custom observability stack using ClickHouse as the foundation. Learn how to design, implement, and optimize your observability solution for logs, metrics, and traces, with practical examples and best practices.
+
+| Page | Description |
+|-------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [Introduction](/use-cases/observability/introduction) | This guide is designed for users looking to build their own observability solution using ClickHouse, focusing on logs and traces. |
+| [Schema design](/use-cases/observability/schema-design) | Learn why users are recommended to create their own schema for logs and traces, along with some best practices for doing so. |
+| [Managing data](/observability/managing-data) | Deployments of ClickHouse for observability invariably involve large datasets, which need to be managed. ClickHouse offers features to assist with data management. |
+| [Integrating OpenTelemetry](/observability/integrating-opentelemetry) | Collecting and exporting logs and traces using OpenTelemetry with ClickHouse. |
+| [Using Visualization Tools](/observability/grafana) | Learn how to use observability visualization tools for ClickHouse, including HyperDX and Grafana. |
+| [Demo Application](/observability/demo-application) | Explore the OpenTelemetry demo application forked to work with ClickHouse for logs and traces. |
diff --git a/docs/use-cases/observability/integrating-opentelemetry.md b/docs/use-cases/observability/build-your-own/integrating-opentelemetry.md
similarity index 95%
rename from docs/use-cases/observability/integrating-opentelemetry.md
rename to docs/use-cases/observability/build-your-own/integrating-opentelemetry.md
index c80137ad5c6..ea8bce63995 100644
--- a/docs/use-cases/observability/integrating-opentelemetry.md
+++ b/docs/use-cases/observability/build-your-own/integrating-opentelemetry.md
@@ -25,7 +25,7 @@ Unlike ClickHouse or Prometheus, OpenTelemetry is not an observability backend a
## ClickHouse relevant components {#clickhouse-relevant-components}
-Open Telemetry consists of a number of components. As well as providing a data and API specification, standardized protocol, and naming conventions for fields/columns, OTel provides two capabilities which are fundamental to building an Observability solution with ClickHouse:
+OpenTelemetry consists of a number of components. As well as providing a data and API specification, standardized protocol, and naming conventions for fields/columns, OTel provides two capabilities which are fundamental to building an Observability solution with ClickHouse:
- The [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) is a proxy that receives, processes, and exports telemetry data. A ClickHouse-powered solution uses this component for both log collection and event processing prior to batching and inserting.
- [Language SDKs](https://opentelemetry.io/docs/languages/) that implement the specification, APIs, and export of telemetry data. These SDKs effectively ensure traces are correctly recorded within an application's code, generating constituent spans and ensuring context is propagated across services through metadata - thus formulating distributed traces and ensuring spans can be correlated. These SDKs are complemented by an ecosystem that automatically implements common libraries and frameworks, thus meaning the user is not required to change their code and obtains out-of-the-box instrumentation.
@@ -204,7 +204,7 @@ For users needing to collect local or Kubernetes log files, we recommend users b
## Collecting Kubernetes Logs {#collecting-kubernetes-logs}
-For the collection of Kubernetes logs, we recommend the [Open Telemetry documentation guide](https://opentelemetry.io/docs/kubernetes/). The [Kubernetes Attributes Processor](https://opentelemetry.io/docs/kubernetes/collector/components/#kubernetes-attributes-processor) is recommended for enriching logs and metrics with pod metadata. This can potentially produce dynamic metadata e.g. labels, stored in the column `ResourceAttributes`. ClickHouse currently uses the type `Map(String, String)` for this column. See [Using Maps](/use-cases/observability/schema-design#using-maps) and [Extracting from maps](/use-cases/observability/schema-design#extracting-from-maps) for further details on handling and optimizing this type.
+For the collection of Kubernetes logs, we recommend the [OpenTelemetry documentation guide](https://opentelemetry.io/docs/kubernetes/). The [Kubernetes Attributes Processor](https://opentelemetry.io/docs/kubernetes/collector/components/#kubernetes-attributes-processor) is recommended for enriching logs and metrics with pod metadata. This can potentially produce dynamic metadata e.g. labels, stored in the column `ResourceAttributes`. ClickHouse currently uses the type `Map(String, String)` for this column. See [Using Maps](/use-cases/observability/schema-design#using-maps) and [Extracting from maps](/use-cases/observability/schema-design#extracting-from-maps) for further details on handling and optimizing this type.
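+
+As a hedged sketch of querying such pod metadata, assuming the default `otel_logs` table and the standard OTel Kubernetes attribute names:
+
+```sql
+-- count log rows per Kubernetes pod by reading a key from the Map column
+SELECT
+    ResourceAttributes['k8s.pod.name'] AS pod,
+    count() AS c
+FROM otel_logs
+GROUP BY pod
+ORDER BY c DESC
+LIMIT 10
+```
+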
## Collecting traces {#collecting-traces}
@@ -276,7 +276,7 @@ The full schema of trace messages is maintained [here](https://opentelemetry.io/
## Processing - filtering, transforming and enriching {#processing---filtering-transforming-and-enriching}
-As demonstrated in the earlier example of setting the timestamp for a log event, users will invariably want to filter, transform, and enrich event messages. This can be achieved using a number of capabilities in Open Telemetry:
+As demonstrated in the earlier example of setting the timestamp for a log event, users will invariably want to filter, transform, and enrich event messages. This can be achieved using a number of capabilities in OpenTelemetry:
- **Processors** - Processors take the data collected by [receivers and modify or transform](https://opentelemetry.io/docs/collector/transforming-telemetry/) it before sending it to the exporters. Processors are applied in the order as configured in the `processors` section of the collector configuration. These are optional, but the minimal set is [typically recommended](https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor#recommended-processors). When using an OTel collector with ClickHouse, we recommend limiting processors to:
@@ -498,29 +498,29 @@ The default schema for logs is shown below (`otelcol-contrib v0.102.1`):
```sql
CREATE TABLE default.otel_logs
(
- `Timestamp` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
- `TraceId` String CODEC(ZSTD(1)),
- `SpanId` String CODEC(ZSTD(1)),
- `TraceFlags` UInt32 CODEC(ZSTD(1)),
- `SeverityText` LowCardinality(String) CODEC(ZSTD(1)),
- `SeverityNumber` Int32 CODEC(ZSTD(1)),
- `ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
- `Body` String CODEC(ZSTD(1)),
- `ResourceSchemaUrl` String CODEC(ZSTD(1)),
- `ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
- `ScopeSchemaUrl` String CODEC(ZSTD(1)),
- `ScopeName` String CODEC(ZSTD(1)),
- `ScopeVersion` String CODEC(ZSTD(1)),
- `ScopeAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
- `LogAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
- INDEX idx_trace_id TraceId TYPE bloom_filter(0.001) GRANULARITY 1,
- INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
- INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
- INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
- INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
- INDEX idx_log_attr_key mapKeys(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
- INDEX idx_log_attr_value mapValues(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
- INDEX idx_body Body TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 1
+ `Timestamp` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `TraceId` String CODEC(ZSTD(1)),
+ `SpanId` String CODEC(ZSTD(1)),
+ `TraceFlags` UInt32 CODEC(ZSTD(1)),
+ `SeverityText` LowCardinality(String) CODEC(ZSTD(1)),
+ `SeverityNumber` Int32 CODEC(ZSTD(1)),
+ `ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
+ `Body` String CODEC(ZSTD(1)),
+ `ResourceSchemaUrl` String CODEC(ZSTD(1)),
+ `ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `ScopeSchemaUrl` String CODEC(ZSTD(1)),
+ `ScopeName` String CODEC(ZSTD(1)),
+ `ScopeVersion` String CODEC(ZSTD(1)),
+ `ScopeAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `LogAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ INDEX idx_trace_id TraceId TYPE bloom_filter(0.001) GRANULARITY 1,
+ INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_log_attr_key mapKeys(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_log_attr_value mapValues(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_body Body TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY toDate(Timestamp)
@@ -538,9 +538,9 @@ A few important notes on this schema:
- The table uses the classic [`MergeTree` engine](/engines/table-engines/mergetree-family/mergetree). This is recommended for logs and traces and should not need to be changed.
- The table is ordered by `ORDER BY (ServiceName, SeverityText, toUnixTimestamp(Timestamp), TraceId)`. This means queries will be optimized for filters on `ServiceName`, `SeverityText`, `Timestamp` and `TraceId` - earlier columns in the list will filter faster than later ones e.g. filtering by `ServiceName` will be significantly faster than filtering by `TraceId`. Users should modify this ordering according to their expected access patterns - see [Choosing a primary key](/use-cases/observability/schema-design#choosing-a-primary-ordering-key).
- The above schema applies `ZSTD(1)` to columns. This offers the best compression for logs. Users can increase the ZSTD compression level (above the default of 1) for better compression, although this is rarely beneficial. Increasing this value will incur greater CPU overhead at insert time (during compression), although decompression (and thus queries) should remain comparable. See [here](https://clickhouse.com/blog/optimize-clickhouse-codecs-compression-schema) for further details. Additional [delta encoding](/sql-reference/statements/create/table#delta) is applied to the Timestamp with the aim of reducing its size on disk.
-- Note how [`ResourceAttributes`](https://opentelemetry.io/docs/specs/otel/resource/sdk/), [`LogAttributes`](https://opentelemetry.io/docs/specs/otel/logs/data-model/#field-attributes) and [`ScopeAttributes`](https://opentelemetry.io/docs/specs/otel/logs/data-model/#field-instrumentationscope) are maps. Users should familiarize themselves with the difference between these. For how to access these maps and optimize accessing keys within them, see [Using maps](/use-cases/observability/integrating-opentelemetry.md).
-- Most other types here e.g. `ServiceName` as LowCardinality, are optimized. Note the Body, although JSON in our example logs, is stored as a String.
-- Bloom filters are applied to map keys and values, as well as the Body column. These aim to improve query times on accessing these columns but are typically not required. See [Secondary/Data skipping indices](/use-cases/observability/schema-design#secondarydata-skipping-indices).
+- Note how [`ResourceAttributes`](https://opentelemetry.io/docs/specs/otel/resource/sdk/), [`LogAttributes`](https://opentelemetry.io/docs/specs/otel/logs/data-model/#field-attributes) and [`ScopeAttributes`](https://opentelemetry.io/docs/specs/otel/logs/data-model/#field-instrumentationscope) are maps. Users should familiarize themselves with the difference between these. For how to access these maps and optimize accessing keys within them, see [Using maps](/use-cases/observability/schema-design#using-maps).
+- Most other types here e.g. `ServiceName` as LowCardinality, are optimized. Note that `Body`, which is JSON in our example logs, is stored as a String.
+- Bloom filters are applied to map keys and values, as well as the `Body` column. These aim to improve the performance of queries accessing these columns but are typically not required. See [Secondary/Data skipping indices](/use-cases/observability/schema-design#secondarydata-skipping-indices). A sketch of customizing this schema follows this list.
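+
+The following is a sketch of such a customization, not the shipped schema: a trimmed set of columns with a higher ZSTD level and an ordering key chosen purely for service-plus-time filtering. The table name and column subset are illustrative assumptions:
+
+```sql
+CREATE TABLE default.otel_logs_custom
+(
+    `Timestamp` DateTime64(9) CODEC(Delta(8), ZSTD(3)),
+    `ServiceName` LowCardinality(String) CODEC(ZSTD(3)),
+    `SeverityText` LowCardinality(String) CODEC(ZSTD(3)),
+    `TraceId` String CODEC(ZSTD(3)),
+    `Body` String CODEC(ZSTD(3))
+)
+ENGINE = MergeTree
+PARTITION BY toDate(Timestamp)
+ORDER BY (ServiceName, toUnixTimestamp(Timestamp), TraceId)
+```
+
+For reference, the corresponding default schema for traces is shown below:
+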
```sql
CREATE TABLE default.otel_traces
diff --git a/docs/use-cases/observability/introduction.md b/docs/use-cases/observability/build-your-own/introduction.md
similarity index 96%
rename from docs/use-cases/observability/introduction.md
rename to docs/use-cases/observability/build-your-own/introduction.md
index 0f755d840d2..6862cf3da06 100644
--- a/docs/use-cases/observability/introduction.md
+++ b/docs/use-cases/observability/build-your-own/introduction.md
@@ -40,9 +40,9 @@ More specifically, the following means ClickHouse is ideally suited for the stor
- **Fast Aggregations** - Observability solutions typically heavily involve the visualization of data through charts e.g. lines showing error rates or bar charts showing traffic sources. Aggregations, or GROUP BYs, are fundamental to powering these charts which must also be fast and responsive when applying filters in workflows for issue diagnosis. ClickHouse's column-oriented format combined with a vectorized query execution engine is ideal for fast aggregations, with sparse indexing allowing rapid filtering of data in response to users' actions.
- **Fast Linear scans** - While alternative technologies rely on inverted indices for fast querying of logs, these invariably result in high disk and resource utilization. While ClickHouse provides inverted indices as an additional optional index type, linear scans are highly parallelized and use all of the available cores on a machine (unless configured otherwise). This potentially allows 10s of GB/s per second (compressed) to be scanned for matches with [highly optimized text-matching operators](/sql-reference/functions/string-search-functions).
- **Familiarity of SQL** - SQL is the ubiquitous language with which all engineers are familiar. With over 50 years of development, it has proven itself as the de facto language for data analytics and remains the [3rd most popular programming language](https://clickhouse.com/blog/the-state-of-sql-based-observability#lingua-franca). Observability is just another data problem for which SQL is ideal.
-- **Analytical functions** - ClickHouse extends ANSI SQL with analytical functions designed to make SQL queries simple and easier to write. These are essential for users performing root cause analysis where data needs to be sliced and diced.
+- **Analytical functions** - ClickHouse extends ANSI SQL with analytical functions designed to make SQL queries simpler and easier to write. These are essential for users performing root cause analysis where data needs to be sliced and diced.
- **Secondary indices** - ClickHouse supports secondary indexes, such as bloom filters, to accelerate specific query profiles. These can be optionally enabled at a column level, giving the user granular control and allowing them to assess the cost-performance benefit.
-- **Open-source & Open standards** - As an open-source database, ClickHouse embraces open standards such as Open Telemetry. The ability to contribute and actively participate in projects is appealing while avoiding the challenges of vendor lock-in.
+- **Open-source & Open standards** - As an open-source database, ClickHouse embraces open standards such as OpenTelemetry. The ability to contribute and actively participate in projects is appealing while avoiding the challenges of vendor lock-in.
## When should you use ClickHouse for Observability {#when-should-you-use-clickhouse-for-observability}
diff --git a/docs/use-cases/observability/managing-data.md b/docs/use-cases/observability/build-your-own/managing-data.md
similarity index 100%
rename from docs/use-cases/observability/managing-data.md
rename to docs/use-cases/observability/build-your-own/managing-data.md
diff --git a/docs/use-cases/observability/schema-design.md b/docs/use-cases/observability/build-your-own/schema-design.md
similarity index 100%
rename from docs/use-cases/observability/schema-design.md
rename to docs/use-cases/observability/build-your-own/schema-design.md
diff --git a/docs/use-cases/observability/clickstack/alerts.md b/docs/use-cases/observability/clickstack/alerts.md
new file mode 100644
index 00000000000..493fd78da28
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/alerts.md
@@ -0,0 +1,42 @@
+---
+slug: /use-cases/observability/clickstack/alerts
+title: 'Alerts with ClickStack'
+sidebar_label: 'Alerts'
+pagination_prev: null
+pagination_next: null
+description: 'Alerts with ClickStack'
+---
+
+import Image from '@theme/IdealImage';
+import search_alert from '@site/static/images/use-cases/observability/search_alert.png';
+
+
+## Search alerts {#search-alerts}
+
+After entering a [search](/use-cases/observability/clickstack/search), you can create an alert to be
+notified when the number of events (logs or spans) matching the search exceeds or falls below a threshold.
+
+### Creating an Alert {#creating-an-alert}
+
+You can create an alert by clicking the `Alerts` button on the top right of the `Search` page.
+
+From here, you can name the alert, as well as set the threshold, duration, and notification method for the alert (Slack, Email, PagerDuty or Slack webhook).
+
+The `grouped by` value allows the search to be subject to an aggregation, e.g. by `ServiceName`, allowing multiple alerts to potentially be triggered from the same search.
+
+
+
+### Common Alert Scenarios {#common-alert-scenarios}
+
+Here are a few common alert scenarios that you can use HyperDX for:
+
+**Errors:** We first recommend setting up alerts for the default
+`All Error Events` and `HTTP Status >= 400` saved searches to be notified when
+excess errors occur.
+
+**Slow Operations:** You can set up a search for slow operations (e.g.
+`duration:>5000`) and then alert when there are too many slow operations
+occurring.
+
+**User Events:** You can also set up alerts for customer-facing teams to be
+notified when new users sign up, or a critical user action is performed.
diff --git a/docs/use-cases/observability/clickstack/architecture.md b/docs/use-cases/observability/clickstack/architecture.md
new file mode 100644
index 00000000000..861b556e5c3
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/architecture.md
@@ -0,0 +1,64 @@
+---
+slug: /use-cases/observability/clickstack/architecture
+pagination_prev: null
+pagination_next: null
+description: 'Architecture of ClickStack - The ClickHouse Observability Stack'
+title: 'Architecture'
+---
+
+import Image from '@theme/IdealImage';
+import architecture from '@site/static/images/use-cases/observability/clickstack-architecture.png';
+
+
+The ClickStack architecture is built around three core components: **ClickHouse**, **HyperDX**, and an **OpenTelemetry (OTel) collector**. A **MongoDB** instance provides storage for the application state. Together, they provide a high-performance, open-source observability stack optimized for logs, metrics, and traces.
+
+## Architecture Overview {#architecture-overview}
+
+
+
+## ClickHouse: The database engine {#clickhouse}
+
+At the heart of ClickStack is ClickHouse, a column-oriented database designed for real-time analytics at scale. It powers the ingestion and querying of observability data, enabling:
+
+- Sub-second search across terabytes of events
+- Ingestion of billions of high-cardinality records per day
+- High compression rates of at least 10x on observability data
+- Native support for semi-structured JSON data, allowing dynamic schema evolution
+- A powerful SQL engine with hundreds of built-in analytical functions
+
+ClickHouse handles observability data as wide events, allowing for deep correlation across logs, metrics, and traces in a single unified structure.
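+
+As a hedged illustration of this correlation, assuming the default `otel_logs` and `otel_traces` tables produced by the OTel collector, the logs belonging to the slowest recent traces can be fetched in a single query:
+
+```sql
+SELECT Timestamp, ServiceName, SeverityText, Body
+FROM otel_logs
+WHERE TraceId IN
+(
+    -- the ten slowest traces of the last hour
+    SELECT TraceId
+    FROM otel_traces
+    WHERE Timestamp > now() - INTERVAL 1 HOUR
+    ORDER BY Duration DESC
+    LIMIT 10
+)
+```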
+
+## OpenTelemetry collector: Data ingestion {#open-telemetry-collector}
+
+ClickStack includes a pre-configured OpenTelemetry (OTel) collector to ingest telemetry in an open, standardized way. Users can send data using the OTLP protocol via:
+
+- gRPC (port `4317`)
+- HTTP (port `4318`)
+
+The collector exports telemetry to ClickHouse in efficient batches. It supports optimized table schemas per data source, ensuring scalable performance across all signal types.
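+
+As a minimal sketch of sending a log record over OTLP/HTTP (the endpoint and payload shape follow the OTLP specification, while the service name and message are illustrative):
+
+```bash
+curl -X POST http://localhost:4318/v1/logs \
+  -H 'Content-Type: application/json' \
+  -d '{
+    "resourceLogs": [{
+      "resource": {
+        "attributes": [{"key": "service.name", "value": {"stringValue": "my-app"}}]
+      },
+      "scopeLogs": [{
+        "logRecords": [{
+          "severityText": "INFO",
+          "body": {"stringValue": "hello from ClickStack"}
+        }]
+      }]
+    }]
+  }'
+```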
+
+## HyperDX: The interface {#hyperdx}
+
+HyperDX is the user interface for ClickStack. It offers:
+
+- Natural language and Lucene-style search
+- Live tailing for real-time debugging
+- Unified views of logs, metrics, and traces
+- Session replay for frontend observability
+- Dashboard creation and alert configuration
+- SQL query interface for advanced analysis
+
+Designed specifically for ClickHouse, HyperDX combines powerful search with intuitive workflows, enabling users to spot anomalies, investigate issues, and gain insights fast.
+
+## MongoDB: Application state {#mongo}
+
+ClickStack uses MongoDB to store application-level state, including:
+
+- Dashboards
+- Alerts
+- User profiles
+- Saved visualizations
+
+This separation of state from event data ensures performance and scalability while simplifying backup and configuration.
+
+This modular architecture enables ClickStack to deliver an out-of-the-box observability platform that is fast, flexible, and open-source.
diff --git a/docs/use-cases/observability/clickstack/config.md b/docs/use-cases/observability/clickstack/config.md
new file mode 100644
index 00000000000..35558c80b85
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/config.md
@@ -0,0 +1,396 @@
+---
+slug: /use-cases/observability/clickstack/config
+title: 'Configuration Options'
+pagination_prev: null
+pagination_next: null
+description: 'Configuration options for ClickStack - The ClickHouse Observability Stack'
+---
+
+import Image from '@theme/IdealImage';
+import hyperdx_25 from '@site/static/images/use-cases/observability/hyperdx-25.png';
+import hyperdx_26 from '@site/static/images/use-cases/observability/hyperdx-26.png';
+
+The following configuration options are available for each component of ClickStack:
+
+## Modifying settings {#modifying-settings}
+
+### Docker {#docker}
+
+If using the [All in One](/use-cases/observability/clickstack/deployment/all-in-one), [HyperDX Only](/use-cases/observability/clickstack/deployment/hyperdx-only) or [Local Mode](/use-cases/observability/clickstack/deployment/local-mode-only) deployments, simply pass the desired setting via an environment variable, e.g.
+
+```bash
+docker run -e HYPERDX_LOG_LEVEL='debug' -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one
+```
+
+### Docker compose {#docker-compose}
+
+If using the [Docker Compose](/use-cases/observability/clickstack/deployment/docker-compose) deployment guide, the [`.env`](https://github.com/hyperdxio/hyperdx/blob/main/.env) file can be used to modify settings.
+
+Alternatively, explicitly overwrite settings in the [`docker-compose.yaml`](https://github.com/hyperdxio/hyperdx/blob/main/docker-compose.yml) file.
+
+Example:
+```yaml
+services:
+ app:
+ environment:
+ HYPERDX_API_KEY: ${HYPERDX_API_KEY}
+ HYPERDX_LOG_LEVEL: ${HYPERDX_LOG_LEVEL}
+ # ... other settings
+```
+
+### Helm {#helm}
+
+#### Customizing values (Optional) {#customizing-values}
+
+You can customize settings by using `--set` flags, e.g.
+
+```bash
+helm install my-hyperdx hyperdx/hdx-oss-v2 \
+ --set replicaCount=2 \
+ --set resources.limits.cpu=500m \
+ --set resources.limits.memory=512Mi \
+ --set resources.requests.cpu=250m \
+ --set resources.requests.memory=256Mi \
+ --set ingress.enabled=true \
+ --set ingress.annotations."kubernetes\.io/ingress\.class"=nginx \
+ --set ingress.hosts[0].host=hyperdx.example.com \
+ --set ingress.hosts[0].paths[0].path=/ \
+ --set ingress.hosts[0].paths[0].pathType=ImplementationSpecific \
+ --set env[0].name=CLICKHOUSE_USER \
+ --set env[0].value=abc
+```
+
+Alternatively, edit the `values.yaml` file. To retrieve the default values:
+
+```sh
+helm show values hyperdx/hdx-oss-v2 > values.yaml
+```
+
+Example config:
+
+```yaml
+replicaCount: 2
+resources:
+ limits:
+ cpu: 500m
+ memory: 512Mi
+ requests:
+ cpu: 250m
+ memory: 256Mi
+ingress:
+ enabled: true
+ annotations:
+ kubernetes.io/ingress.class: nginx
+ hosts:
+ - host: hyperdx.example.com
+ paths:
+ - path: /
+ pathType: ImplementationSpecific
+ env:
+ - name: CLICKHOUSE_USER
+ value: abc
+```
+
+## HyperDX {#hyperdx}
+
+### Data source settings {#datasource-settings}
+
+HyperDX relies on the user defining a source for each of the Observability data types/pillars:
+
+- `Logs`
+- `Traces`
+- `Metrics`
+- `Sessions`
+
+This configuration can be performed inside the application from `Team Settings -> Sources`, as shown below for logs:
+
+
+
+Each of these sources requires at least one table to be specified on creation, as well as a set of columns that allow HyperDX to query the data.
+
+If using the [default OpenTelemetry (OTel) schema](/observability/integrating-opentelemetry#out-of-the-box-schema) distributed with ClickStack, these columns can be automatically inferred for each of the sources. If [modifying the schema](#clickhouse) or using a custom schema, users are required to specify and update these mappings.
+
+:::note
+The default schema for ClickHouse distributed with ClickStack is the schema created by the [ClickHouse exporter for the OTel collector](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/clickhouseexporter). These column names correlate with the OTel official specification documented [here](https://opentelemetry.io/docs/specs/otel/logs/data-model/).
+:::
+
+The following settings are available for each source:
+
+#### Logs {#logs}
+
+| Setting | Description | Required | Inferred in Default Schema | Inferred Value |
+|-------------------------------|-------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------|-----------------------------------------------------|
+| `Name` | Source name. | Yes | No | – |
+| `Server Connection` | Server connection name. | Yes | No | `Default` |
+| `Database` | ClickHouse database name. | Yes | Yes | `default` |
+| `Table` | Target table name. Set to `otel_logs` if default schema is used. | Yes | No | |
+| `Timestamp Column` | Datetime column or expression that is part of your primary key. | Yes | Yes | `TimestampTime` |
+| `Default Select` | Columns shown in default search results. | Yes | Yes | `Timestamp`, `ServiceName`, `SeverityText`, `Body` |
+| `Service Name Expression` | Expression or column for the service name. | Yes | Yes | `ServiceName` |
+| `Log Level Expression` | Expression or column for the log level. | Yes | Yes | `SeverityText` |
+| `Body Expression` | Expression or column for the log message. | Yes | Yes | `Body` |
+| `Log Attributes Expression` | Expression or column for custom log attributes. | Yes | Yes | `LogAttributes` |
+| `Resource Attributes Expression` | Expression or column for resource-level attributes. | Yes | Yes | `ResourceAttributes` |
+| `Displayed Timestamp Column` | Timestamp column used in UI display. | Yes | Yes | `Timestamp` |
+| `Correlated Metric Source` | Linked metric source (e.g. HyperDX metrics). | No | No | – |
+| `Correlated Trace Source` | Linked trace source (e.g. HyperDX traces). | No | No | – |
+| `Trace Id Expression` | Expression or column used to extract trace ID. | Yes | Yes | `TraceId` |
+| `Span Id Expression` | Expression or column used to extract span ID. | Yes | Yes | `SpanId` |
+| `Implicit Column Expression` | Column used for full-text search if no field is specified (Lucene-style). Typically the log body. | Yes | Yes | `Body` |
+
+#### Traces {#traces}
+
+| Setting | Description | Required | Inferred in Default Schema | Inferred Value |
+|----------------------------------|-------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------|------------------------|
+| `Name` | Source name. | Yes | No | – |
+| `Server Connection` | Server connection name. | Yes | No | `Default` |
+| `Database` | ClickHouse database name. | Yes | Yes | `default` |
+| `Table` | Target table name. Set to `otel_traces` if using the default schema. | Yes | Yes | - |
+| `Timestamp Column` | Datetime column or expression that is part of your primary key. | Yes | Yes | `Timestamp` |
+| `Timestamp` | Alias for `Timestamp Column`. | Yes | Yes | `Timestamp` |
+| `Default Select` | Columns shown in default search results. | Yes | Yes | `Timestamp, ServiceName as service, StatusCode as level, round(Duration / 1e6) as duration, SpanName` |
+| `Duration Expression` | Expression for calculating span duration. | Yes | Yes | `Duration` |
+| `Duration Precision` | Precision for the duration expression (e.g. nanoseconds, microseconds). | Yes | Yes | `ns` |
+| `Trace Id Expression` | Expression or column for trace IDs. | Yes | Yes | `TraceId` |
+| `Span Id Expression` | Expression or column for span IDs. | Yes | Yes | `SpanId` |
+| `Parent Span Id Expression` | Expression or column for parent span IDs. | Yes | Yes | `ParentSpanId` |
+| `Span Name Expression` | Expression or column for span names. | Yes | Yes | `SpanName` |
+| `Span Kind Expression` | Expression or column for span kind (e.g. client, server). | Yes | Yes | `SpanKind` |
+| `Correlated Log Source` | Optional. Linked log source (e.g. HyperDX logs). | No | No | – |
+| `Correlated Session Source` | Optional. Linked session source. | No | No | – |
+| `Correlated Metric Source` | Optional. Linked metric source (e.g. HyperDX metrics). | No | No | – |
+| `Status Code Expression` | Expression for the span status code. | Yes | Yes | `StatusCode` |
+| `Status Message Expression` | Expression for the span status message. | Yes | Yes | `StatusMessage` |
+| `Service Name Expression` | Expression or column for the service name. | Yes | Yes | `ServiceName` |
+| `Resource Attributes Expression`| Expression or column for resource-level attributes. | Yes | Yes | `ResourceAttributes` |
+| `Event Attributes Expression` | Expression or column for event attributes. | Yes | Yes | `SpanAttributes` |
+| `Span Events Expression` | Expression to extract span events. Typically a `Nested` type column. This allows rendering of exception stack traces with supported language SDKs. | Yes | Yes | `Events` |
+| `Implicit Column Expression` | Column used for full-text search if no field is specified (Lucene-style). Typically the span name. | Yes | Yes | `SpanName` |
+
+
+
+#### Metrics {#metrics}
+
+| Setting | Description | Required | Inferred in Default Schema | Inferred Value |
+|------------------------|-----------------------------------------------------------------------------------------------|----------|-----------------------------|-----------------------------|
+| `Name` | Source name. | Yes | No | – |
+| `Server Connection` | Server connection name. | Yes | No | `Default` |
+| `Database` | ClickHouse database name. | Yes | Yes | `default` |
+| `Gauge Table` | Table storing gauge-type metrics. | Yes | No | `otel_metrics_gauge` |
+| `Histogram Table` | Table storing histogram-type metrics. | Yes | No | `otel_metrics_histogram` |
+| `Sum Table` | Table storing sum-type (counter) metrics. | Yes | No | `otel_metrics_sum` |
+| `Correlated Log Source`| Optional. Linked log source (e.g. HyperDX logs). | No | No | – |
+
+#### Sessions {#sessions}
+
+| Setting | Description | Required | Inferred in Default Schema | Inferred Value |
+|-------------------------------|-----------------------------------------------------------------------------------------------------|----------|-----------------------------|------------------------|
+| `Name` | Source name. | Yes | No | – |
+| `Server Connection` | Server connection name. | Yes | No | `Default` |
+| `Database` | ClickHouse database name. | Yes | Yes | `default` |
+| `Table` | Target table for session data. Set to `hyperdx_sessions` if using the default schema. | Yes | Yes | - |
+| `Timestamp Column` | Datetime column or expression that is part of your primary key. | Yes | Yes | `TimestampTime` |
+| `Log Attributes Expression` | Expression for extracting log-level attributes from session data. | Yes | Yes | `LogAttributes` |
+| `LogAttributes` | Alias or field reference used to store log attributes. | Yes | Yes | `LogAttributes` |
+| `Resource Attributes Expression` | Expression for extracting resource-level metadata. | Yes | Yes | `ResourceAttributes` |
+| `Correlated Trace Source` | Optional. Linked trace source for session correlation. | No | No | – |
+| `Implicit Column Expression` | Column used for full-text search when no field is specified (e.g. Lucene-style query parsing). | Yes | Yes | `Body` |
+
+### Correlated sources {#correlated-sources}
+
+To enable full cross-source correlation in ClickStack, users must configure correlated sources for logs, traces, metrics, and sessions. This allows HyperDX to associate related data and provide rich context when rendering events.
+
+- `Logs`: Can be correlated with traces and metrics.
+- `Traces`: Can be correlated with logs, sessions, and metrics.
+- `Metrics`: Can be correlated with logs.
+- `Sessions`: Can be correlated with traces.
+
+By setting these correlations, HyperDX can, for example, render relevant logs alongside a trace or surface metric anomalies linked to a session. Proper configuration ensures a unified and contextual observability experience.
+
+For example, below is the Logs source configured with correlated sources:
+
+
+
+### Application configuration settings {#application-configuration-settings}
+
+- `HYPERDX_API_KEY`
+ - **Default:** None (required)
+ - **Description:** Authentication key for the HyperDX API.
+ - **Guidance:**
+ - Required for telemetry and logging
+ - In local development, can be any non-empty value
+ - For production, use a secure, unique key
+ - Can be obtained from the team settings page after account creation
+
+- `HYPERDX_LOG_LEVEL`
+ - **Default:** `info`
+ - **Description:** Sets the logging verbosity level.
+ - **Options:** `debug`, `info`, `warn`, `error`
+ - **Guidance:**
+ - Use `debug` for detailed troubleshooting
+ - Use `info` for normal operation
+ - Use `warn` or `error` in production to reduce log volume
+
+- `HYPERDX_API_PORT`
+ - **Default:** `8000`
+ - **Description:** Port for the HyperDX API server.
+ - **Guidance:**
+ - Ensure this port is available on your host
+ - Change if you have port conflicts
+ - Must match the port in your API client configurations
+
+- `HYPERDX_APP_PORT`
+ - **Default:** `8000`
+ - **Description:** Port for the HyperDX frontend app.
+ - **Guidance:**
+ - Ensure this port is available on your host
+ - Change if you have port conflicts
+ - Must be accessible from your browser
+
+- `HYPERDX_APP_URL`
+ - **Default:** `http://localhost`
+ - **Description:** Base URL for the frontend app.
+ - **Guidance:**
+ - Set to your domain in production
+ - Include protocol (http/https)
+ - Don't include trailing slash
+
+- `MONGO_URI`
+ - **Default:** `mongodb://db:27017/hyperdx`
+ - **Description:** MongoDB connection string.
+ - **Guidance:**
+ - Use default for local development with Docker
+ - For production, use a secure connection string
+ - Include authentication if required
+ - Example: `mongodb://user:pass@host:port/db`
+
+- `MINER_API_URL`
+ - **Default:** `http://miner:5123`
+ - **Description:** URL for the log pattern mining service.
+ - **Guidance:**
+ - Use default for local development with Docker
+ - Set to your miner service URL in production
+ - Must be accessible from the API service
+
+- `FRONTEND_URL`
+ - **Default:** `http://localhost:3000`
+ - **Description:** URL for the frontend app.
+ - **Guidance:**
+ - Use default for local development
+ - Set to your domain in production
+ - Must be accessible from the API service
+
+- `OTEL_SERVICE_NAME`
+ - **Default:** `hdx-oss-api`
+ - **Description:** Service name for OpenTelemetry instrumentation.
+ - **Guidance:**
+    - Use a descriptive name for your HyperDX service. Applicable if HyperDX self-instruments.
+ - Helps identify the HyperDX service in telemetry data
+
+- `NEXT_PUBLIC_OTEL_EXPORTER_OTLP_ENDPOINT`
+ - **Default:** `http://localhost:4318`
+ - **Description:** OpenTelemetry collector endpoint.
+ - **Guidance:**
+    - Relevant if self-instrumenting HyperDX.
+ - Use default for local development
+ - Set to your collector URL in production
+ - Must be accessible from your HyperDX service
+
+- `USAGE_STATS_ENABLED`
+ - **Default:** `true`
+ - **Description:** Toggles usage statistics collection.
+ - **Guidance:**
+ - Set to `false` to disable usage tracking
+ - Useful for privacy-sensitive deployments
+    - Defaults to `true` to help improve the product
+
+- `IS_OSS`
+ - **Default:** `true`
+ - **Description:** Indicates if running in OSS mode.
+ - **Guidance:**
+ - Keep as `true` for open-source deployments
+ - Set to `false` for enterprise deployments
+ - Affects feature availability
+
+- `IS_LOCAL_MODE`
+ - **Default:** `false`
+ - **Description:** Indicates if running in local mode.
+ - **Guidance:**
+ - Set to `true` for local development
+ - Disables certain production features
+ - Useful for testing and development
+
+- `EXPRESS_SESSION_SECRET`
+ - **Default:** `hyperdx is cool 👋`
+ - **Description:** Secret for Express session management.
+ - **Guidance:**
+ - Change in production
+ - Use a strong, random string
+ - Keep secret and secure
+
+- `ENABLE_SWAGGER`
+ - **Default:** `false`
+ - **Description:** Toggles Swagger API documentation.
+ - **Guidance:**
+ - Set to `true` to enable API documentation
+ - Useful for development and testing
+ - Disable in production
+
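+A hedged example combining several of these settings for the all-in-one image (the domain is a placeholder and the session secret is generated on the fly):
+
+```bash
+docker run \
+  -e HYPERDX_APP_URL='https://hyperdx.example.com' \
+  -e HYPERDX_LOG_LEVEL='warn' \
+  -e USAGE_STATS_ENABLED='false' \
+  -e EXPRESS_SESSION_SECRET="$(openssl rand -hex 32)" \
+  -p 8080:8080 -p 4317:4317 -p 4318:4318 \
+  docker.hyperdx.io/hyperdx/hyperdx-all-in-one
+```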
+
+## OpenTelemetry collector {#otel-collector}
+
+See ["ClickStack OpenTelemetry Collector"](/use-cases/observability/clickstack/ingesting-data/otel-collector) for more details.
+
+- `CLICKHOUSE_ENDPOINT`
+  - **Default:** *None (required)* for the standalone image. In the All-in-One and Docker Compose distributions, this is set to the integrated ClickHouse instance.
+ - **Description:** The HTTPS URL of the ClickHouse instance to export telemetry data to.
+ - **Guidance:**
+ - Must be a full HTTPS endpoint including port (e.g., `https://clickhouse.example.com:8443`)
+ - Required for the collector to send data to ClickHouse
+
+- `CLICKHOUSE_USER`
+ - **Default:** `default`
+ - **Description:** Username used to authenticate with the ClickHouse instance.
+ - **Guidance:**
+ - Ensure the user has `INSERT` and `CREATE TABLE` permissions
+ - Recommended to create a dedicated user for ingestion
+
+- `CLICKHOUSE_PASSWORD`
+ - **Default:** *None (required if authentication is enabled)*
+ - **Description:** Password for the specified ClickHouse user.
+ - **Guidance:**
+ - Required if the user account has a password set
+ - Store securely via secrets in production deployments
+
+- `HYPERDX_LOG_LEVEL`
+ - **Default:** `info`
+ - **Description:** Log verbosity level for the collector.
+ - **Guidance:**
+ - Accepts values like `debug`, `info`, `warn`, `error`
+ - Use `debug` during troubleshooting
+
+- `OPAMP_SERVER_URL`
+  - **Default:** *None (required)* for the standalone image. In the All-in-One and Docker Compose distributions, this points to the deployed HyperDX instance.
+ - **Description:** URL of the OpAMP server used to manage the collector (e.g., HyperDX instance). This is port `4320` by default.
+ - **Guidance:**
+ - Must point to your HyperDX instance
+ - Enables dynamic configuration and secure ingestion
+
+- `HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE`
+ - **Default:** `default`
+ - **Description:** ClickHouse database the collector writes telemetry data to.
+ - **Guidance:**
+ - Set if using a custom database name
+ - Ensure the specified user has access to this database
+
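+As a sketch of supplying these settings to a standalone collector container (the image name, hostnames, and user are assumptions to adapt to your deployment):
+
+```bash
+docker run \
+  -e CLICKHOUSE_ENDPOINT='https://clickhouse.example.com:8443' \
+  -e CLICKHOUSE_USER='otel_ingest' \
+  -e CLICKHOUSE_PASSWORD="${CLICKHOUSE_PASSWORD}" \
+  -e OPAMP_SERVER_URL='http://hyperdx.example.com:4320' \
+  -p 4317:4317 -p 4318:4318 \
+  docker.hyperdx.io/hyperdx/hyperdx-otel-collector
+```
+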
+## ClickHouse {#clickhouse}
+
+ClickStack ships with a default ClickHouse configuration designed for multi-terabyte scale, but users are free to modify and optimize it to suit their workload.
+
+To tune ClickHouse effectively, users should understand key storage concepts such as [parts](/parts), [partitions](/partitions), [shards and replicas](/shards), as well as how [merges](/merges) occur at insert time. We recommend reviewing the fundamentals of [sparse primary indices](/primary-indexes) and [data skipping (secondary) indices](/optimize/skipping-indexes), along with techniques for [managing the data lifecycle](/observability/managing-data), e.g. using TTL.
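+
+For instance, a hedged sketch of a TTL-based retention policy on the default logs table (the 30-day window is an illustrative choice):
+
+```sql
+-- expire log rows 30 days after their event timestamp
+ALTER TABLE otel_logs
+    MODIFY TTL toDateTime(Timestamp) + INTERVAL 30 DAY
+```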
+
+ClickStack supports [schema customization](/use-cases/observability/schema-design) - users may modify column types, extract new fields (e.g. from logs), apply codecs and dictionaries, and accelerate queries using projections.
+
+Additionally, materialized views can be used to [transform or filter data during ingestion](/use-cases/observability/schema-design#materialized-columns), provided that data is written to the source table of the view and the application reads from the target table.
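+
+A minimal sketch of this pattern, assuming the default `otel_logs` schema (the target table name and filter are illustrative):
+
+```sql
+-- target table with the same schema and engine as the source
+CREATE TABLE otel_logs_errors AS otel_logs;
+
+-- materialized view that forwards only error events at insert time
+CREATE MATERIALIZED VIEW otel_logs_errors_mv TO otel_logs_errors AS
+SELECT *
+FROM otel_logs
+WHERE SeverityText = 'ERROR';
+```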
+
+For more details, refer to ClickHouse documentation on schema design, indexing strategies, and data management best practices - most of which apply directly to ClickStack deployments.
diff --git a/docs/use-cases/observability/clickstack/deployment/all-in-one.md b/docs/use-cases/observability/clickstack/deployment/all-in-one.md
new file mode 100644
index 00000000000..04bd513f6cd
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/deployment/all-in-one.md
@@ -0,0 +1,114 @@
+---
+slug: /use-cases/observability/clickstack/deployment/all-in-one
+title: 'All in one'
+pagination_prev: null
+pagination_next: null
+sidebar_position: 0
+description: 'Deploying ClickStack with All In One - The ClickHouse Observability Stack'
+---
+
+import Image from '@theme/IdealImage';
+import hyperdx_login from '@site/static/images/use-cases/observability/hyperdx-login.png';
+import hyperdx_logs from '@site/static/images/use-cases/observability/hyperdx-logs.png';
+
+This comprehensive Docker image bundles all ClickStack components:
+
+* **ClickHouse**
+* **HyperDX**
+* **OpenTelemetry (OTel) collector** (exposing OTLP on ports `4317` and `4318`)
+* **MongoDB** (for persistent application state)
+
+This option includes authentication, enabling the persistence of dashboards, alerts, and saved searches across sessions and users.
+
+### Suitable for {#suitable-for}
+
+* Demos
+* Local testing of the full stack
+
+## Deployment steps {#deployment-steps}
+
+
+
+
+### Deploy with Docker {#deploy-with-docker}
+
+The following will run an OpenTelemetry collector (on ports `4317` and `4318`) and the HyperDX UI (on port `8080`).
+
+```bash
+docker run -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one
+```
+
+### Navigate to the HyperDX UI {#navigate-to-hyperdx-ui}
+
+Visit [http://localhost:8080](http://localhost:8080) to access the HyperDX UI.
+
+Create a user, providing a username and password that meet the requirements.
+
+On clicking `Create`, data sources will be created for the integrated ClickHouse instance.
+
+
+
+For an example of using an alternative ClickHouse instance, see ["Create a ClickHouse Cloud connection"](/use-cases/observability/clickstack/getting-started#create-a-cloud-connection).
+
+### Ingest data {#ingest-data}
+
+To ingest data see ["Ingesting data"](/use-cases/observability/clickstack/ingesting-data).
+
+
+
+## Persisting data and settings {#persisting-data-and-settings}
+
+To persist data and settings across restarts of the container, users can modify the above docker command to mount the paths `/data/db`, `/var/lib/clickhouse` and `/var/log/clickhouse-server`. For example:
+
+```bash
+# ensure directories exist
+mkdir -p .volumes/db .volumes/ch_data .volumes/ch_logs
+# modify command to mount paths
+docker run \
+ -p 8080:8080 \
+ -p 4317:4317 \
+ -p 4318:4318 \
+ -v "$(pwd)/.volumes/db:/data/db" \
+ -v "$(pwd)/.volumes/ch_data:/var/lib/clickhouse" \
+ -v "$(pwd)/.volumes/ch_logs:/var/log/clickhouse-server" \
+ docker.hyperdx.io/hyperdx/hyperdx-all-in-one
+```
+
+## Deploying to production {#deploying-to-production}
+
+This option should not be deployed to production for the following reasons:
+
+- **Non-persistent storage:** All data is stored using the Docker native overlay filesystem. This setup does not perform at scale, and data will be lost if the container is removed or restarted - unless users [mount the required file paths](#persisting-data-and-settings).
+- **Lack of component isolation:** All components run within a single Docker container. This prevents independent scaling and monitoring and applies any `cgroup` limits globally to all processes. As a result, components may compete for CPU and memory.
+
+## Customizing ports {#customizing-ports-deploy}
+
+If you need to customize the application (8080) or API (8000) ports that HyperDX Local runs on, you'll need to modify the `docker run` command to forward the appropriate ports and set a few environment variables.
+
+The OpenTelemetry ports can be customized simply by modifying the port-forwarding flags, e.g. replacing `-p 4318:4318` with `-p 4999:4318` to change the OpenTelemetry HTTP port to `4999`.
+
+```bash
+docker run -p 8080:8080 -p 4317:4317 -p 4999:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one
+```
+
+## Using ClickHouse Cloud {#using-clickhouse-cloud}
+
+This distribution can be used with ClickHouse Cloud. While the local ClickHouse instance will still be deployed (and ignored), the OTel collector can be configured to use a ClickHouse Cloud instance by setting the environment variables `CLICKHOUSE_ENDPOINT`, `CLICKHOUSE_USER` and `CLICKHOUSE_PASSWORD`.
+
+For example:
+
+```bash
+export CLICKHOUSE_ENDPOINT=
+export CLICKHOUSE_USER=
+export CLICKHOUSE_PASSWORD=
+
+docker run -e CLICKHOUSE_ENDPOINT=${CLICKHOUSE_ENDPOINT} -e CLICKHOUSE_USER=${CLICKHOUSE_USER} -e CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD} -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one
+```
+
+The `CLICKHOUSE_ENDPOINT` should be the ClickHouse Cloud HTTPS endpoint, including the port `8443`, e.g. `https://mxl4k3ul6a.us-east-2.aws.clickhouse.com:8443`.
+
+On connecting to the HyperDX UI, navigate to [`Team Settings`](http://localhost:8080/team) and create a connection to your ClickHouse Cloud service - followed by the required sources. For an example flow, see [here](/use-cases/observability/clickstack/getting-started#create-a-cloud-connection).
+
+## Configuring the OpenTelemetry collector {#configuring-collector}
+
+The OTel collector configuration can be modified if required - see ["Modifying configuration"](/use-cases/observability/clickstack/ingesting-data/otel-collector#modifying-otel-collector-configuration).
diff --git a/docs/use-cases/observability/clickstack/deployment/docker-compose.md b/docs/use-cases/observability/clickstack/deployment/docker-compose.md
new file mode 100644
index 00000000000..e4a3e98b7ed
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/deployment/docker-compose.md
@@ -0,0 +1,152 @@
+---
+slug: /use-cases/observability/clickstack/deployment/docker-compose
+title: 'Docker Compose'
+pagination_prev: null
+pagination_next: null
+sidebar_position: 2
+description: 'Deploying ClickStack with Docker Compose - The ClickHouse Observability Stack'
+---
+
+import Image from '@theme/IdealImage';
+import hyperdx_login from '@site/static/images/use-cases/observability/hyperdx-login.png';
+import hyperdx_logs from '@site/static/images/use-cases/observability/hyperdx-logs.png';
+
+All ClickStack components are distributed separately as individual Docker images:
+
+* **ClickHouse**
+* **HyperDX**
+* **OpenTelemetry (OTel) collector**
+* **MongoDB**
+
+These images can be combined and deployed locally using Docker Compose.
+
+Docker Compose exposes additional ports for observability and ingestion based on the default `otel-collector` setup:
+
+- `13133`: Health check endpoint for the `health_check` extension
+- `24225`: Fluentd receiver for log ingestion
+- `4317`: OTLP gRPC receiver (standard for traces, logs, and metrics)
+- `4318`: OTLP HTTP receiver (alternative to gRPC)
+- `8888`: Prometheus metrics endpoint for monitoring the collector itself
+
+These ports enable integrations with a variety of telemetry sources and make the OpenTelemetry collector production-ready for diverse ingestion needs.
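+
+As a quick sanity check once the stack is up, two of these ports can be probed directly - this assumes the default paths for the `health_check` extension and the collector's Prometheus telemetry:
+
+```bash
+# health check endpoint - should return a JSON status payload
+curl -s http://localhost:13133
+# Prometheus metrics describing the collector itself
+curl -s http://localhost:8888/metrics | head
+```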
+
+### Suitable for {#suitable-for}
+
+* Local testing
+* Proof of concepts
+* Production deployments where fault tolerance is not required and a single server is sufficient to host all ClickHouse data
+* When deploying ClickStack but hosting ClickHouse separately e.g. using ClickHouse Cloud.
+
+## Deployment steps {#deployment-steps}
+
+
+
+
+### Clone the repo {#clone-the-repo}
+
+To deploy with Docker Compose, clone the HyperDX repo, change into the directory, and run `docker compose up`:
+
+```bash
+git clone git@github.com:hyperdxio/hyperdx.git
+cd hyperdx
+# switch to the v2 branch
+git checkout v2
+docker compose up
+```
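+
+To confirm all services have started successfully, list the running containers:
+
+```bash
+docker compose ps
+```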
+
+### Navigate to the HyperDX UI {#navigate-to-hyperdx-ui}
+
+Visit [http://localhost:8080](http://localhost:8080) to access the HyperDX UI.
+
+Create a user, providing a username and password which meets the requirements.
+
+On clicking `Create`, data sources will be created for the ClickHouse instance deployed via Docker Compose.
+
+:::note Overriding default connection
+You can override the default connection to the integrated ClickHouse instance. For details, see ["Using ClickHouse Cloud"](#using-clickhouse-cloud).
+:::
+
+
+
+For an example of using an alternative ClickHouse instance, see ["Create a ClickHouse Cloud connection"](/use-cases/observability/clickstack/getting-started#create-a-cloud-connection).
+
+### Complete connection details {#complete-connection-details}
+
+To connect to the deployed ClickHouse instance, simply click **Create** and accept the default settings.
+
+If you prefer to connect to your own **external ClickHouse cluster** e.g. ClickHouse Cloud, you can manually enter your connection credentials.
+
+If prompted to create a source, retain all default values and complete the `Table` field with the value `otel_logs`. All other settings should be auto-detected, allowing you to click `Save New Source`.
+
+
+
+
+
+## Modifying compose settings {#modifying-settings}
+
+Users can modify settings for the stack, such as the version used, through the environment variable file:
+
+```bash
+user@example-host hyperdx % cat .env
+# Used by docker-compose.yml
+HDX_IMAGE_REPO=docker.hyperdx.io
+IMAGE_NAME=ghcr.io/hyperdxio/hyperdx
+IMAGE_NAME_DOCKERHUB=hyperdx/hyperdx
+LOCAL_IMAGE_NAME=ghcr.io/hyperdxio/hyperdx-local
+LOCAL_IMAGE_NAME_DOCKERHUB=hyperdx/hyperdx-local
+ALL_IN_ONE_IMAGE_NAME=ghcr.io/hyperdxio/hyperdx-all-in-one
+ALL_IN_ONE_IMAGE_NAME_DOCKERHUB=hyperdx/hyperdx-all-in-one
+OTEL_COLLECTOR_IMAGE_NAME=ghcr.io/hyperdxio/hyperdx-otel-collector
+OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB=hyperdx/hyperdx-otel-collector
+CODE_VERSION=2.0.0-beta.16
+IMAGE_VERSION_SUB_TAG=.16
+IMAGE_VERSION=2-beta
+IMAGE_NIGHTLY_TAG=2-nightly
+
+# Set up domain URLs
+HYPERDX_API_PORT=8000 #optional (should not be taken by other services)
+HYPERDX_APP_PORT=8080
+HYPERDX_APP_URL=http://localhost
+HYPERDX_LOG_LEVEL=debug
+HYPERDX_OPAMP_PORT=4320
+
+# Otel/Clickhouse config
+HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE=default
+```
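+
+After editing `.env` - for example, to pin a different `IMAGE_VERSION` - recreate the stack so the change takes effect:
+
+```bash
+docker compose up -d --force-recreate
+```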
+
+### Configuring the OpenTelemetry collector {#configuring-collector}
+
+The OTel collector configuration can be modified if required - see ["Modifying configuration"](/use-cases/observability/clickstack/ingesting-data/otel-collector#modifying-otel-collector-configuration).
+
+## Using ClickHouse Cloud {#using-clickhouse-cloud}
+
+This distribution can be used with ClickHouse Cloud. Users should:
+
+- Remove the ClickHouse service from the [`docker-compose.yaml`](https://github.com/hyperdxio/hyperdx/blob/86465a20270b895320eb21dca13560b65be31e68/docker-compose.yml#L89) file. This is optional if testing, as the deployed ClickHouse instance will simply be ignored - although it will waste local resources. If removing the service, ensure [any references](https://github.com/hyperdxio/hyperdx/blob/86465a20270b895320eb21dca13560b65be31e68/docker-compose.yml#L65) to it, such as `depends_on`, are removed.
+- Modify the OTel collector to use a ClickHouse Cloud instance by setting the environment variables `CLICKHOUSE_ENDPOINT`, `CLICKHOUSE_USER` and `CLICKHOUSE_PASSWORD` in the compose file. Specifically, add the environment variables to the OTel collector service:
+
+  ```yaml
+ otel-collector:
+ image: ${OTEL_COLLECTOR_IMAGE_NAME}:${IMAGE_VERSION}
+ environment:
+ CLICKHOUSE_ENDPOINT: '' # https endpoint here
+ CLICKHOUSE_USER: ''
+ CLICKHOUSE_PASSWORD: ''
+ HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE: ${HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE}
+ HYPERDX_LOG_LEVEL: ${HYPERDX_LOG_LEVEL}
+ OPAMP_SERVER_URL: 'http://app:${HYPERDX_OPAMP_PORT}'
+ ports:
+ - '13133:13133' # health_check extension
+ - '24225:24225' # fluentd receiver
+ - '4317:4317' # OTLP gRPC receiver
+ - '4318:4318' # OTLP http receiver
+ - '8888:8888' # metrics extension
+ restart: always
+ networks:
+ - internal
+ ```
+
+  The `CLICKHOUSE_ENDPOINT` should be the ClickHouse Cloud HTTPS endpoint, including the port `8443`, e.g. `https://mxl4k3ul6a.us-east-2.aws.clickhouse.com:8443`.
+
+- On connecting to the HyperDX UI and creating a connection to ClickHouse, use your Cloud credentials.
diff --git a/docs/use-cases/observability/clickstack/deployment/helm.md b/docs/use-cases/observability/clickstack/deployment/helm.md
new file mode 100644
index 00000000000..fe0218e2cb7
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/deployment/helm.md
@@ -0,0 +1,293 @@
+---
+slug: /use-cases/observability/clickstack/deployment/helm
+title: 'Helm'
+pagination_prev: null
+pagination_next: null
+sidebar_position: 1
+description: 'Deploying ClickStack with Helm - The ClickHouse Observability Stack'
+---
+
+import Image from '@theme/IdealImage';
+import hyperdx_24 from '@site/static/images/use-cases/observability/hyperdx-24.png';
+import hyperdx_login from '@site/static/images/use-cases/observability/hyperdx-login.png';
+
+The Helm chart for HyperDX can be found [here](https://github.com/hyperdxio/helm-charts) and is the **recommended** method for production deployments.
+
+By default, the Helm chart provisions all core components, including:
+
+* **ClickHouse**
+* **HyperDX**
+* **OpenTelemetry (OTel) collector**
+* **MongoDB** (for persistent application state)
+
+However, it can be easily customized to integrate with an existing ClickHouse deployment - for example, one hosted in **ClickHouse Cloud**.
+
+The chart supports standard Kubernetes best practices, including:
+
+- Environment-specific configuration via `values.yaml`
+- Resource limits and pod-level scaling
+- TLS and ingress configuration
+- Secrets management and authentication setup
+
+### Suitable for {#suitable-for}
+
+* Proof of concepts
+* Production
+
+## Deployment steps {#deployment-steps}
+
+
+
+
+### Prerequisites {#prerequisites}
+
+- [Helm](https://helm.sh/) v3+
+- Kubernetes cluster (v1.20+ recommended)
+- `kubectl` configured to interact with your cluster
+
+### Add the HyperDX Helm Repository {#add-the-hyperdx-helm-repository}
+
+Add the HyperDX Helm repository:
+
+```sh
+helm repo add hyperdx https://hyperdxio.github.io/helm-charts
+helm repo update
+```
+
+### Installing HyperDX {#installing-hyperdx}
+
+To install the HyperDX chart with default values:
+
+```sh
+helm install my-hyperdx hyperdx/hdx-oss-v2
+```
+
+### Verify the installation {#verify-the-installation}
+
+Verify the installation:
+
+```bash
+kubectl get pods -l "app.kubernetes.io/name=hdx-oss-v2"
+```
+
+When all pods are ready, proceed.
+
+### Forward ports {#forward-ports}
+
+Port forwarding allows us to access and set up HyperDX. Users deploying to production should instead expose the service via an ingress or load balancer to ensure proper network access, TLS termination, and scalability. Port forwarding is best suited for local development or one-off administrative tasks, not long-term or high-availability environments.
+
+```bash
+kubectl port-forward \
+ pod/$(kubectl get pod -l app.kubernetes.io/name=hdx-oss-v2 -o jsonpath='{.items[0].metadata.name}') \
+ 8080:3000
+```
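+
+Alternatively, you can forward via the service rather than an individual pod. The service name below is an assumption based on the release name `my-hyperdx` - check `kubectl get svc` for the actual name:
+
+```bash
+kubectl port-forward service/my-hyperdx-hdx-oss-v2-app 8080:3000
+```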
+
+### Navigate to the UI {#navigate-to-the-ui}
+
+Visit [http://localhost:8080](http://localhost:8080) to access the HyperDX UI.
+
+Create a user, providing a username and password which meets the requirements.
+
+
+
+
+On clicking `Create`, data sources will be created for the ClickHouse instance deployed with the Helm chart.
+
+:::note Overriding default connection
+You can override the default connection to the integrated ClickHouse instance. For details, see ["Using ClickHouse Cloud"](#using-clickhouse-cloud).
+:::
+
+For an example of using an alternative ClickHouse instance, see ["Create a ClickHouse Cloud connection"](/use-cases/observability/clickstack/getting-started#create-a-cloud-connection).
+
+### Customizing values (Optional) {#customizing-values}
+
+You can customize settings by using `--set` flags. For example:
+
+```bash
+helm install my-hyperdx hyperdx/hdx-oss-v2 --set key=value
+```
+
+Alternatively, edit the `values.yaml`. To retrieve the default values:
+
+```sh
+helm show values hyperdx/hdx-oss-v2 > values.yaml
+```
+
+Example config:
+
+```yaml
+replicaCount: 2
+resources:
+ limits:
+ cpu: 500m
+ memory: 512Mi
+ requests:
+ cpu: 250m
+ memory: 256Mi
+ingress:
+ enabled: true
+ annotations:
+ kubernetes.io/ingress.class: nginx
+ hosts:
+ - host: hyperdx.example.com
+ paths:
+ - path: /
+ pathType: ImplementationSpecific
+```
+
+```bash
+helm install my-hyperdx hyperdx/hdx-oss-v2 -f values.yaml
+```
+
+### Using Secrets (Optional) {#using-secrets}
+
+For handling sensitive data such as API keys or database credentials, use Kubernetes secrets. The HyperDX Helm charts provide default secret files that you can modify and apply to your cluster.
+
+#### Using Pre-Configured Secrets {#using-pre-configured-secrets}
+
+The Helm chart includes a default secret template located at [`charts/hdx-oss-v2/templates/secrets.yaml`](https://github.com/hyperdxio/helm-charts/blob/main/charts/hdx-oss-v2/templates/secrets.yaml). This file provides a base structure for managing secrets.
+
+
+If you need to manually apply a secret, modify and apply the provided `secrets.yaml` template:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: hyperdx-secret
+ annotations:
+ "helm.sh/resource-policy": keep
+type: Opaque
+data:
+  API_KEY: # base64-encoded value
+```
+
+Apply the secret to your cluster:
+
+```sh
+kubectl apply -f secrets.yaml
+```
+
+#### Creating a Custom Secret {#creating-a-custom-secret}
+
+If you prefer, you can create a custom Kubernetes secret manually:
+
+```sh
+kubectl create secret generic hyperdx-secret \
+ --from-literal=API_KEY=my-secret-api-key
+```
+
+#### Referencing a Secret {#referencing-a-secret}
+
+To reference a secret in `values.yaml`:
+
+```yaml
+hyperdx:
+ apiKey:
+ valueFrom:
+ secretKeyRef:
+ name: hyperdx-secret
+ key: API_KEY
+```
+
+
+
+## Using ClickHouse Cloud {#using-clickhouse-cloud}
+
+If using ClickHouse Cloud, users should disable the ClickHouse instance deployed by the Helm chart and specify their ClickHouse Cloud credentials:
+
+```bash
+# specify ClickHouse Cloud credentials
+export CLICKHOUSE_URL= # full https url
+export CLICKHOUSE_USER=
+export CLICKHOUSE_PASSWORD=
+
+# how to overwrite default connection
+helm install my-hyperdx hyperdx/hdx-oss-v2 --set clickhouse.enabled=false --set clickhouse.persistence.enabled=false --set otel.clickhouseEndpoint=${CLICKHOUSE_URL} --set clickhouse.config.users.otelUser=${CLICKHOUSE_USER} --set clickhouse.config.users.otelUserPassword=${CLICKHOUSE_PASSWORD}
+```
+
+Alternatively, use a `values.yaml` file:
+
+```yaml
+clickhouse:
+ enabled: false
+ persistence:
+ enabled: false
+ config:
+ users:
+ otelUser: ${CLICKHOUSE_USER}
+ otelUserPassword: ${CLICKHOUSE_PASSWORD}
+
+otel:
+ clickhouseEndpoint: ${CLICKHOUSE_URL}
+```
+
+```bash
+helm install my-hyperdx hyperdx/hdx-oss-v2 -f values.yaml
+# or if installed...
+# helm upgrade my-hyperdx hyperdx/hdx-oss-v2 -f values.yaml
+```
+
+
+## Production notes {#production-notes}
+
+By default, this chart also installs ClickHouse and the OTel collector. However, for production, it is recommended that you manage ClickHouse and the OTel collector separately.
+
+To disable ClickHouse and the OTel collector, set the following values:
+
+```bash
+helm install my-hyperdx hyperdx/hdx-oss-v2 --set clickhouse.enabled=false --set clickhouse.persistence.enabled=false --set otel.enabled=false
+```
+
+## Task Configuration {#task-configuration}
+
+By default, there is one task in the chart set up as a cronjob, responsible for checking whether alerts should fire. Here are its configuration options:
+
+| Parameter | Description | Default |
+|-----------|-------------|---------|
+| `tasks.enabled` | Enable/Disable cron tasks in the cluster. By default, the HyperDX image will run cron tasks in the process. Change to true if you'd rather use a separate cron task in the cluster. | `false` |
+| `tasks.checkAlerts.schedule` | Cron schedule for the check-alerts task | `*/1 * * * *` |
+| `tasks.checkAlerts.resources` | Resource requests and limits for the check-alerts task | See `values.yaml` |
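+
+For example, to run the check-alerts task as a separate cronjob in the cluster, you might set the following in `values.yaml` - a sketch based on the parameters above:
+
+```yaml
+tasks:
+  enabled: true
+  checkAlerts:
+    schedule: '*/1 * * * *' # run every minute
+```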
+
+## Upgrading the Chart {#upgrading-the-chart}
+
+To upgrade to a newer version:
+
+```sh
+helm upgrade my-hyperdx hyperdx/hdx-oss-v2 -f values.yaml
+```
+
+To check available chart versions:
+
+```sh
+helm search repo hyperdx
+```
+
+## Uninstalling HyperDX {#uninstalling-hyperdx}
+
+To remove the deployment:
+
+```sh
+helm uninstall my-hyperdx
+```
+
+This will remove all resources associated with the release, but persistent data (if any) may remain.
+
+## Troubleshooting {#troubleshooting}
+
+### Checking Logs {#checking-logs}
+
+```sh
+kubectl logs -l app.kubernetes.io/name=hdx-oss-v2
+```
+
+### Debugging a Failed Install {#debugging-a-failed-instance}
+
+```sh
+helm install my-hyperdx hyperdx/hdx-oss-v2 --debug --dry-run
+```
+
+### Verifying Deployment {#verifying-deployment}
+
+```sh
+kubectl get pods -l app.kubernetes.io/name=hdx-oss-v2
+```
diff --git a/docs/use-cases/observability/clickstack/deployment/hyperdx-only.md b/docs/use-cases/observability/clickstack/deployment/hyperdx-only.md
new file mode 100644
index 00000000000..7cef48dce9f
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/deployment/hyperdx-only.md
@@ -0,0 +1,73 @@
+---
+slug: /use-cases/observability/clickstack/deployment/hyperdx-only
+title: 'HyperDX Only'
+pagination_prev: null
+pagination_next: null
+sidebar_position: 4
+description: 'Deploying HyperDX only'
+---
+
+import Image from '@theme/IdealImage';
+import hyperdx_login from '@site/static/images/use-cases/observability/hyperdx-login.png';
+import hyperdx_logs from '@site/static/images/use-cases/observability/hyperdx-logs.png';
+import hyperdx_2 from '@site/static/images/use-cases/observability/hyperdx-2.png';
+
+This option is designed for users who already have a running ClickHouse instance populated with observability or event data.
+
+HyperDX can be used independently of the rest of the stack and is compatible with any data schema - not just OpenTelemetry (OTel). This makes it suitable for custom observability pipelines already built on ClickHouse.
+
+To enable full functionality, you must provide a MongoDB instance for storing application state, including dashboards, saved searches, user settings, and alerts.
+
+In this mode, data ingestion is left entirely to the user. You can ingest data into ClickHouse using your own hosted OpenTelemetry collector, direct ingestion from client libraries, ClickHouse-native table engines (such as Kafka or S3), ETL pipelines, or managed ingestion services like ClickPipes. This approach offers maximum flexibility and is suitable for teams that already operate ClickHouse and want to layer HyperDX on top for visualization, search, and alerting.
+
+### Suitable for {#suitable-for}
+
+- Existing ClickHouse users
+- Custom event pipelines
+
+## Deployment steps {#deployment-steps}
+
+
+
+
+### Deploy with Docker {#deploy-hyperdx-with-docker}
+
+Run the following command, modifying `YOUR_MONGODB_URI` as required.
+
+```bash
+docker run -e MONGO_URI=mongodb://YOUR_MONGODB_URI -p 8080:8080 docker.hyperdx.io/hyperdx/hyperdx
+```
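+
+If you don't already have a MongoDB instance available, one can be started quickly with the official image. This is a sketch for local testing only - no persistence or authentication is configured, and the `hyperdx` database name is illustrative:
+
+```bash
+# throwaway MongoDB instance for testing
+docker run -d --name hyperdx-mongo -p 27017:27017 mongo
+
+# point HyperDX at it; on Docker Desktop, host.docker.internal resolves to the host
+docker run -e MONGO_URI=mongodb://host.docker.internal:27017/hyperdx \
+  -p 8080:8080 docker.hyperdx.io/hyperdx/hyperdx
+```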
+
+### Navigate to the HyperDX UI {#navigate-to-hyperdx-ui}
+
+Visit [http://localhost:8080](http://localhost:8080) to access the HyperDX UI.
+
+Create a user, providing a username and password which meets the requirements.
+
+On clicking `Create` you'll be prompted for connection details.
+
+
+
+### Complete connection details {#complete-connection-details}
+
+Connect to your own external ClickHouse cluster e.g. ClickHouse Cloud.
+
+
+
+If prompted to create a source, retain all default values and complete the `Table` field with the value `otel_logs`. All other settings should be auto-detected, allowing you to click `Save New Source`.
+
+:::note Creating a source
+Creating a source requires tables to exist in ClickHouse. If you don't have data, we recommend deploying the ClickStack OpenTelemetry collector to create tables.
+:::
+
+
+
+## Using Docker Compose {#using-docker-compose}
+
+Users can modify the [Docker Compose configuration](/use-cases/observability/clickstack/deployment/docker-compose) to achieve the same effect as this guide, removing the OTel collector and ClickHouse instance from the manifest.
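+
+A minimal sketch of such a stripped-down manifest is shown below. The service and image names are assumptions based on the conventions in the HyperDX repository - adapt them to the actual `docker-compose.yml`:
+
+```yaml
+# hypothetical minimal manifest: HyperDX plus MongoDB only
+services:
+  db:
+    image: mongo
+    volumes:
+      - ./.volumes/db:/data/db
+  app:
+    image: docker.hyperdx.io/hyperdx/hyperdx
+    ports:
+      - '8080:8080'
+    environment:
+      MONGO_URI: 'mongodb://db:27017/hyperdx'
+    depends_on:
+      - db
+```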
+
+## ClickStack OpenTelemetry collector {#otel-collector}
+
+Even if you are managing your own OpenTelemetry collector, independent of the other components in the stack, we still recommend using the ClickStack distribution of the collector. This ensures the default schema is used and best practices for ingestion are applied.
+
+For details on deploying and configuring a standalone collector, see ["Ingesting with OpenTelemetry"](/use-cases/observability/clickstack/ingesting-data/otel-collector#modifying-otel-collector-configuration).
diff --git a/docs/use-cases/observability/clickstack/deployment/index.md b/docs/use-cases/observability/clickstack/deployment/index.md
new file mode 100644
index 00000000000..915f069c696
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/deployment/index.md
@@ -0,0 +1,19 @@
+---
+slug: /use-cases/observability/clickstack/deployment
+title: 'Deployment Options'
+pagination_prev: null
+pagination_next: null
+description: 'Deploying ClickStack - The ClickHouse Observability Stack'
+---
+
+ClickStack provides multiple deployment options to suit various use cases.
+
+Each of the deployment options is summarized below. The [Getting Started Guide](/use-cases/observability/clickstack/getting-started) specifically demonstrates options 1 and 2. They are included here for completeness.
+
+| Name | Description | Suitable For | Limitations | Example Link |
+|------------------|----------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------|
+| All-in-One | Single Docker container with all ClickStack components bundled. | Demos, local full-stack testing | Not recommended for production | [All-in-One](/use-cases/observability/clickstack/deployment/all-in-one) |
+| Helm | Official Helm chart for Kubernetes-based deployments. Supports ClickHouse Cloud and production scaling. | Production deployments on Kubernetes | Kubernetes knowledge required, customization via Helm | [Helm](/use-cases/observability/clickstack/deployment/helm) |
+| Docker Compose | Deploy each ClickStack component individually via Docker Compose. | Local testing, proof of concepts, production on single server, BYO ClickHouse | No fault tolerance, requires managing multiple containers | [Docker Compose](/use-cases/observability/clickstack/deployment/docker-compose) |
+| HyperDX Only | Use HyperDX independently with your own ClickHouse and schema. | Existing ClickHouse users, custom event pipelines | No ClickHouse included, user must manage ingestion and schema | [HyperDX Only](/use-cases/observability/clickstack/deployment/hyperdx-only) |
+| Local Mode Only | Runs entirely in the browser with local storage. No backend or persistence. | Demos, debugging, dev with HyperDX | No auth, no persistence, no alerting, single-user only | [Local Mode Only](/use-cases/observability/clickstack/deployment/local-mode-only) |
\ No newline at end of file
diff --git a/docs/use-cases/observability/clickstack/deployment/local-mode-only.md b/docs/use-cases/observability/clickstack/deployment/local-mode-only.md
new file mode 100644
index 00000000000..f4cf2ab6c72
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/deployment/local-mode-only.md
@@ -0,0 +1,53 @@
+---
+slug: /use-cases/observability/clickstack/deployment/local-mode-only
+title: 'Local Mode Only'
+pagination_prev: null
+pagination_next: null
+sidebar_position: 5
+description: 'Deploying ClickStack with Local Mode Only - The ClickHouse Observability Stack'
+---
+
+import Image from '@theme/IdealImage';
+import hyperdx_logs from '@site/static/images/use-cases/observability/hyperdx-logs.png';
+import hyperdx_2 from '@site/static/images/use-cases/observability/hyperdx-2.png';
+
+This mode includes the UI with all application state stored locally in the browser.
+
+**User authentication is disabled for this distribution of HyperDX**
+
+It does not include a MongoDB instance, meaning dashboards, saved searches, and alerts are stored only in the local browser and are not shared across users.
+
+### Suitable for {#suitable-for}
+
+* Demos
+* Debugging
+* Development using HyperDX
+
+## Deployment steps {#deployment-steps}
+
+
+
+
+### Deploy with Docker {#deploy-with-docker}
+
+Local mode deploys the HyperDX UI only, accessible on port 8080.
+
+```bash
+docker run -p 8080:8080 docker.hyperdx.io/hyperdx/hyperdx-local
+```
+
+### Navigate to the HyperDX UI {#navigate-to-hyperdx-ui}
+
+Visit [http://localhost:8080](http://localhost:8080) to access the HyperDX UI.
+
+**You will not be prompted to create a user, as authentication is not enabled in this deployment mode.**
+
+Connect to your own external ClickHouse cluster e.g. ClickHouse Cloud.
+
+
+
+Create a source, retain all default values, and complete the `Table` field with the value `otel_logs`. All other settings should be auto-detected, allowing you to click `Save New Source`.
+
+
+
+
diff --git a/docs/use-cases/observability/clickstack/example-datasets/index.md b/docs/use-cases/observability/clickstack/example-datasets/index.md
new file mode 100644
index 00000000000..0cd8cb18670
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/example-datasets/index.md
@@ -0,0 +1,15 @@
+---
+slug: /use-cases/observability/clickstack/sample-datasets
+title: 'Sample Datasets'
+pagination_prev: null
+pagination_next: null
+description: 'Getting started with ClickStack and sample datasets'
+---
+
+This section provides various sample datasets and examples to help you get started with ClickStack. These examples demonstrate different ways to work with observability data in ClickStack, from local development to production scenarios.
+
+| Dataset | Description |
+|---------|-------------|
+| [Sample Data](sample-data.md) | Load a sample dataset containing logs, traces and metrics from our demo environment |
+| [Local Data](local-data.md) | Collect local system metrics and logs sending them to ClickStack for analysis |
+| [Remote Demo Data](remote-demo-data.md) | Connect to our remote demo cluster and explore an issue |
diff --git a/docs/use-cases/observability/clickstack/example-datasets/local-data.md b/docs/use-cases/observability/clickstack/example-datasets/local-data.md
new file mode 100644
index 00000000000..52b8d515cd6
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/example-datasets/local-data.md
@@ -0,0 +1,168 @@
+---
+slug: /use-cases/observability/clickstack/getting-started/local-data
+title: 'Local Logs & Metrics'
+sidebar_position: 1
+pagination_prev: null
+pagination_next: null
+description: 'Getting started with ClickStack local and system data and metrics'
+---
+
+import Image from '@theme/IdealImage';
+import hyperdx from '@site/static/images/use-cases/observability/hyperdx-1.png';
+import hyperdx_20 from '@site/static/images/use-cases/observability/hyperdx-20.png';
+import hyperdx_3 from '@site/static/images/use-cases/observability/hyperdx-3.png';
+import hyperdx_4 from '@site/static/images/use-cases/observability/hyperdx-4.png';
+import hyperdx_21 from '@site/static/images/use-cases/observability/hyperdx-21.png';
+import hyperdx_22 from '@site/static/images/use-cases/observability/hyperdx-22.png';
+import hyperdx_23 from '@site/static/images/use-cases/observability/hyperdx-23.png';
+import copy_api_key from '@site/static/images/use-cases/observability/copy_api_key.png';
+
+This getting started guide allows you to collect local logs and metrics from your system, sending them to ClickStack for visualization and analysis.
+
+**This example works on macOS and Linux systems only**
+
+The following example assumes you have started ClickStack using the [instructions for the all-in-one image](/use-cases/observability/clickstack/getting-started) and connected to the [local ClickHouse instance](/use-cases/observability/clickstack/getting-started#complete-connection-credentials) or a [ClickHouse Cloud instance](/use-cases/observability/clickstack/getting-started#create-a-cloud-connection).
+
+
+
+## Navigate to the HyperDX UI {#navigate-to-the-hyperdx-ui}
+
+Visit [http://localhost:8080](http://localhost:8080) to access the HyperDX UI.
+
+## Copy ingestion API key {#copy-ingestion-api-key}
+
+Navigate to [`Team Settings`](http://localhost:8080/team) and copy the `Ingestion API Key` from the `API Keys` section. This API key ensures data ingestion through the OpenTelemetry collector is secure.
+
+
+
+## Create a local OpenTelemetry configuration {#create-otel-configuration}
+
+Create an `otel-file-collector.yaml` file with the following content.
+
+**Important**: Populate the `authorization` value in the `otlp` exporter with the ingestion API key copied above.
+
+```yml
+receivers:
+ filelog:
+ include:
+ - /var/log/**/*.log # Linux
+ - /var/log/syslog
+ - /var/log/messages
+ - /private/var/log/*.log # macOS
+ start_at: beginning # modify to collect new files only
+
+ hostmetrics:
+ collection_interval: 1s
+ scrapers:
+ cpu:
+ metrics:
+ system.cpu.time:
+ enabled: true
+ system.cpu.utilization:
+ enabled: true
+ memory:
+ metrics:
+ system.memory.usage:
+ enabled: true
+ system.memory.utilization:
+ enabled: true
+ filesystem:
+ metrics:
+ system.filesystem.usage:
+ enabled: true
+ system.filesystem.utilization:
+ enabled: true
+ paging:
+ metrics:
+ system.paging.usage:
+ enabled: true
+ system.paging.utilization:
+ enabled: true
+ system.paging.faults:
+ enabled: true
+ disk:
+ load:
+ network:
+ processes:
+
+exporters:
+ otlp:
+ endpoint: localhost:4317
+ headers:
+ authorization:
+ tls:
+ insecure: true
+ sending_queue:
+ enabled: true
+ num_consumers: 10
+ queue_size: 262144 # 262,144 items × ~8 KB per item ≈ 2 GB
+
+service:
+ pipelines:
+ logs:
+ receivers: [filelog]
+ exporters: [otlp]
+ metrics:
+ receivers: [hostmetrics]
+ exporters: [otlp]
+```
+
+This configuration collects system logs and metrics for macOS and Linux systems, sending the results to ClickStack via the OTLP endpoint on port 4317.
+
+:::note Ingestion timestamps
+This configuration adjusts timestamps at ingest, assigning an updated time value to each event. Users should ideally [preprocess or parse timestamps](/use-cases/observability/clickstack/ingesting-data/otel-collector#processing-filtering-transforming-enriching) using OTel processors or operators in their log files to ensure accurate event time is retained.
+
+With this example setup, if the receiver or file processor is configured to start at the beginning of the file, all existing log entries will be assigned the same adjusted timestamp - the time of processing rather than the original event time. Any new events appended to the file will receive timestamps approximating their actual generation time.
+
+To avoid this behavior, you can set the start position to `end` in the receiver configuration. This ensures only new entries are ingested and timestamped near their true arrival time.
+:::
+
+For more details on the OpenTelemetry (OTel) configuration structure, we recommend [the official guide](https://opentelemetry.io/docs/collector/configuration/).
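+
+Before starting the collector, you can optionally check the file for syntax and configuration errors using the collector's `validate` subcommand:
+
+```bash
+docker run --rm -it \
+  -v "$(pwd)/otel-file-collector.yaml":/etc/otel/config.yaml \
+  otel/opentelemetry-collector-contrib:latest \
+  validate --config /etc/otel/config.yaml
+```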
+
+## Start the collector {#start-the-collector}
+
+Run the following docker command to start an instance of the OTel collector.
+
+```bash
+docker run --network=host --rm -it \
+ --user 0:0 \
+ -v "$(pwd)/otel-file-collector.yaml":/etc/otel/config.yaml \
+ -v /var/log:/var/log:ro \
+ -v /private/var/log:/private/var/log:ro \
+ otel/opentelemetry-collector-contrib:latest \
+ --config /etc/otel/config.yaml
+```
+
+:::note Root user
+We run the collector as the root user to access all system logs - this is necessary to capture logs from protected paths on Linux-based systems. However, this approach is not recommended for production. In production environments, the OpenTelemetry Collector should be deployed as a local agent with only the minimal permissions required to access the intended log sources.
+:::
+
+The collector will immediately begin collecting local system logs and metrics.
+
+## Explore system logs {#explore-system-logs}
+
+Navigate to the HyperDX UI. The search UI should be populated with local system logs. Expand the filters to select the `system.log`:
+
+
+
+## Explore system metrics {#explore-system-metrics}
+
+We can explore our metrics using charts.
+
+Navigate to the Chart Explorer via the left menu. Select the source `Metrics` and `Maximum` as the aggregation type.
+
+In the `Select a Metric` menu, simply type `memory` before selecting `system.memory.utilization (Gauge)`.
+
+Press the run button to visualize your memory utilization over time.
+
+
+
+Note the value is returned as a raw floating point number. To render it more clearly as a percentage, select `Set number format`.
+
+
+
+From the subsequent menu you can select `Percentage` from the `Output format` drop down before clicking `Apply`.
+
+
+
+
diff --git a/docs/use-cases/observability/clickstack/example-datasets/remote-demo-data.md b/docs/use-cases/observability/clickstack/example-datasets/remote-demo-data.md
new file mode 100644
index 00000000000..b1d47f3ddbf
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/example-datasets/remote-demo-data.md
@@ -0,0 +1,314 @@
+---
+slug: /use-cases/observability/clickstack/getting-started/remote-demo-data
+title: 'Remote Demo Dataset'
+sidebar_position: 2
+pagination_prev: null
+pagination_next: null
+description: 'Getting started with ClickStack and a remote demo dataset'
+---
+
+import Image from '@theme/IdealImage';
+import demo_connection from '@site/static/images/use-cases/observability/hyperdx-demo/demo_connection.png';
+import edit_demo_connection from '@site/static/images/use-cases/observability/hyperdx-demo/edit_demo_connection.png';
+import edit_demo_source from '@site/static/images/use-cases/observability/hyperdx-demo/edit_demo_source.png';
+import step_2 from '@site/static/images/use-cases/observability/hyperdx-demo/step_2.png';
+import step_3 from '@site/static/images/use-cases/observability/hyperdx-demo/step_3.png';
+import step_4 from '@site/static/images/use-cases/observability/hyperdx-demo/step_4.png';
+import step_5 from '@site/static/images/use-cases/observability/hyperdx-demo/step_5.png';
+import step_6 from '@site/static/images/use-cases/observability/hyperdx-demo/step_6.png';
+import step_7 from '@site/static/images/use-cases/observability/hyperdx-demo/step_7.png';
+import step_8 from '@site/static/images/use-cases/observability/hyperdx-demo/step_8.png';
+import step_9 from '@site/static/images/use-cases/observability/hyperdx-demo/step_9.png';
+import step_10 from '@site/static/images/use-cases/observability/hyperdx-demo/step_10.png';
+import step_11 from '@site/static/images/use-cases/observability/hyperdx-demo/step_11.png';
+import step_12 from '@site/static/images/use-cases/observability/hyperdx-demo/step_12.png';
+import step_13 from '@site/static/images/use-cases/observability/hyperdx-demo/step_13.png';
+import step_14 from '@site/static/images/use-cases/observability/hyperdx-demo/step_14.png';
+import step_15 from '@site/static/images/use-cases/observability/hyperdx-demo/step_15.png';
+import step_16 from '@site/static/images/use-cases/observability/hyperdx-demo/step_16.png';
+import step_17 from '@site/static/images/use-cases/observability/hyperdx-demo/step_17.png';
+import step_18 from '@site/static/images/use-cases/observability/hyperdx-demo/step_18.png';
+import step_19 from '@site/static/images/use-cases/observability/hyperdx-demo/step_19.png';
+import step_20 from '@site/static/images/use-cases/observability/hyperdx-demo/step_20.png';
+import step_21 from '@site/static/images/use-cases/observability/hyperdx-demo/step_21.png';
+import step_22 from '@site/static/images/use-cases/observability/hyperdx-demo/step_22.png';
+import step_23 from '@site/static/images/use-cases/observability/hyperdx-demo/step_23.png';
+import step_24 from '@site/static/images/use-cases/observability/hyperdx-demo/step_24.png';
+import architecture from '@site/static/images/use-cases/observability/hyperdx-demo/architecture.png';
+import demo_sources from '@site/static/images/use-cases/observability/hyperdx-demo//demo_sources.png';
+import edit_connection from '@site/static/images/use-cases/observability/edit_connection.png';
+
+
+**The following guide assumes you have deployed ClickStack using the [instructions for the all-in-one image](/use-cases/observability/clickstack/getting-started), or [Local Mode Only](/use-cases/observability/clickstack/deployment/local-mode-only) and completed initial user creation.**
+
+This getting started guide uses a dataset available on the demo server that users can access when first deploying HyperDX. The dataset is hosted on the public ClickHouse instance at [sql.clickhouse.com](https://sql.clickhouse.com).
+
+It contains approximately 40 hours of data captured from the ClickHouse version of the official OpenTelemetry (OTel) demo. The data is replayed nightly with timestamps adjusted to the current time window, allowing users to explore system behavior using HyperDX's integrated logs, traces, and metrics.
+
+:::note Data variations
+Because the dataset is replayed from midnight each day, the exact visualizations may vary depending on when you explore the demo.
+:::
+
+## Demo scenario {#demo-scenario}
+
+In this demo, we investigate an incident involving an e-commerce website that sells telescopes and related accessories.
+
+The customer support team has reported that users are experiencing issues completing payments at checkout. The issue has been escalated to the Site Reliability Engineering (SRE) team for investigation.
+
+Using HyperDX, the SRE team will analyze logs, traces, and metrics to diagnose and resolve the issue - then review session data to confirm whether their conclusions align with actual user behavior.
+
+## Demo architecture {#demo-architecture}
+
+This demo reuses the official OpenTelemetry demo, which is composed of microservices written in different programming languages that communicate over gRPC and HTTP, plus a load generator that uses Locust to simulate user traffic.
+
+
+
+_Credit: https://opentelemetry.io/docs/demo/architecture/_
+
+Further details on the demo can be found in the [official OpenTelemetry documentation](https://opentelemetry.io/docs/demo/).
+
+## Demo steps {#demo-steps}
+
+**We have instrumented this demo with [ClickStack SDKs](/use-cases/observability/clickstack/sdks), deploying the services in Kubernetes, from which metrics and logs have also been collected.**
+
+
+
+### Connect to the demo server {#connect-to-the-demo-server}
+
+:::note Local-Only mode
+This step can be skipped if you clicked `Connect to Demo Server` when deploying in Local Mode. If using this mode, sources will be prefixed with `Demo_` e.g. `Demo_Logs`
+:::
+
+Navigate to `Team Settings` and click `Edit` for the `Local Connection`:
+
+
+
+Rename the connection to `Demo` and complete the subsequent form with the following connection details for the demo server:
+
+- `Connection Name`: `Demo`
+- `Host`: `https://sql-clickhouse.clickhouse.com`
+- `Username`: `otel_demo`
+- `Password`: Leave empty
+
+
+
+### Modify the sources {#modify-sources}
+
+:::note Local-Only mode
+This step can be skipped if you clicked `Connect to Demo Server` when deploying in Local Mode. If using this mode, sources will be prefixed with `Demo_` e.g. `Demo_Logs`
+:::
+
+Scroll up to `Sources` and modify each of the sources - `Logs`, `Traces`, `Metrics`, and `Sessions` - to use the `otel_v2` database.
+
+
+
+:::note
+You may need to reload the page to ensure the full list of databases is listed in each source.
+:::
+
+### Adjust the time frame {#adjust-the-timeframe}
+
+Adjust the time to show all data from the previous `1 day` using the time picker in the top right.
+
+
+
+You may notice a small difference in the number of errors in the overview bar chart, with a small increase in red in several consecutive bars.
+
+:::note
+The location of the bars will differ depending on when you query the dataset.
+:::
+
+### Filter to errors {#filter-to-errors}
+
+To highlight occurrences of errors, use the `SeverityText` filter and select `error` to display only error-level entries.
+
+The error should be more apparent:
+
+
+
+### Identify the error patterns {#identify-error-patterns}
+
+With HyperDX's Clustering feature, you can automatically identify errors and group them into meaningful patterns. This accelerates analysis when dealing with large volumes of logs and traces. To use it, select `Event Patterns` from the `Analysis Mode` menu on the left panel.
+
+The error clusters reveal issues related to failed payments, including a named pattern `Failed to place order`. Additional clusters also indicate problems charging cards and caches being full.
+
+
+
+Note that these error clusters likely originate from different services.
+
+### Explore an error pattern {#explore-error-pattern}
+
+Click the error cluster that most clearly correlates with our reported issue of users being unable to complete payments: `Failed to place order`.
+
+This will display a list of all occurrences of this error which are associated with the `frontend` service:
+
+
+
+Select any of the resulting errors. The log's metadata will be shown in detail. Scrolling through both the `Overview` and `Column Values` suggests an issue with charging cards due to a cache:
+
+`failed to charge card: could not charge the card: rpc error: code = Unknown desc = Visa cache full: cannot add new item.`
+
+
+
+### Explore the infrastructure {#explore-the-infrastructure}
+
+We've identified a cache-related error that's likely causing payment failures. We still need to identify where this issue is originating from in our microservice architecture.
+
+Given the cache issue, it makes sense to investigate the underlying infrastructure - perhaps we have a memory problem in the associated pods? In ClickStack, logs and metrics are unified and displayed in context, making it easier to uncover the root cause quickly.
+
+Select the `Infrastructure` tab to view the metrics associated with the underlying pods for the `frontend` service and widen the timespan to `1d`:
+
+
+
+The issue does not seem to be infrastructure related - no metrics changed appreciably over the time period, either before or after the error. Close the infrastructure tab.
+
+### Explore a trace {#explore-a-trace}
+
+In ClickStack, traces are also automatically correlated with both logs and metrics. Let's explore the trace linked to our selected log to identify the service responsible.
+
+Select `Trace` to visualize the associated trace. Scrolling down through the subsequent view, we can see how HyperDX is able to visualize the distributed trace across the microservices, connecting the spans in each service. A payment clearly involves multiple microservices, including those that perform checkout and currency conversions.
+
+
+
+By scrolling to the bottom of the view we can see that the `payment` service is causing the error, which in turn propagates back up the call chain.
+
+
+
+### Searching traces {#searching-traces}
+
+We have established users are failing to complete purchases due to a cache issue in the payment service. Let's explore the traces for this service in more detail to see if we can learn more about the root cause.
+
+Switch to the main Search view by selecting `Search`. Switch the data source to `Traces` and select the `Results table` view. **Ensure the timespan is still over the last day.**
+
+
+
+This view shows all traces in the last day. We know the issue originates in our payment service, so apply the filter `ServiceName: payment`.
+
+
+
+If we apply event clustering to the traces by selecting `Event Patterns`, we can immediately see our cache issue with the `payment` service.
+
+
+
+### Explore infrastructure for a trace {#explore-infrastructure-for-a-trace}
+
+Switch to the results view by clicking on `Results table`. Filter to errors using the `StatusCode` filter and `Error` value.
+
+
+
+Select an `Error: Visa cache full: cannot add new item.` error, switch to the `Infrastructure` tab, and widen the timespan to `1d`.
+
+
+
+By correlating traces with metrics, we can see that memory and CPU increased for the `payment` service before collapsing to `0` (which we can attribute to a pod restart) - suggesting the cache issue caused resource pressure. We can expect this to have impacted payment completion times.
+
+### Event deltas for faster resolution {#event-deltas-for-faster-resolution}
+
+Event Deltas help surface anomalies by attributing changes in performance or error rates to specific subsets of data - making it easier to quickly pinpoint the root cause.
+
+While we know that the `payment` service has a cache issue, causing an increase in resource consumption, we haven't fully identified the root cause.
+
+Return to the result table view and select the time period containing the errors to limit the data. Ensure you select several hours to the left of the errors and, if possible, after them (the issue may still be occurring):
+
+
+
+Remove the errors filter and select `Event Deltas` from the left `Analysis Mode` menu.
+
+
+
+The top panel shows the distribution of timings, with colors indicating event density (number of spans). The subset of events outside the main concentration is typically the one worth investigating.
+
+If we select the events with a duration greater than `200ms` and apply `Filter by selection`, we can limit our analysis to slower events:
+
+
+
+With analysis performed on the subset of data, we can see most performance spikes are associated with `visa` transactions.
+
+### Using charts for more context {#using-charts-for-more-context}
+
+In ClickStack, we can chart any numeric value from logs, traces, or metrics for greater context.
+
+We have established:
+
+- Our issue resides with the payment service
+- A cache is full
+- This caused increases in resource consumption
+- The issue prevented Visa payments from completing - or at least caused them to take a long time to complete.
+
+
+
+Select `Chart Explorer` from the left menu. Complete the following values to chart the time taken for payments to complete:
+
+- `Data Source`: `Traces`
+- `Metric`: `Maximum`
+- `SQL Column`: `Duration`
+- `Where`: `ServiceName: payment`
+- `Timespan`: `Last 1 day`
+
+
+
+Clicking `▶️` will show how the performance of payments degraded over time.
+
+
+
+If we set `Group By` to `SpanAttributes['app.payment.card_type']` (just type `card` for autocomplete) we can see how the performance of the service degraded for Visa transactions relative to Mastercard:
+
+
+
+Note that once the error occurs, responses return in `0s`.
+
+### Exploring metrics for more context {#exploring-metrics-for-more-context}
+
+Finally, let's plot the cache size as a metric to see how it behaved over time, thus giving us more context.
+
+Complete the following values:
+
+- `Data Source`: `Metrics`
+- `Metric`: `Maximum`
+- `SQL Column`: `visa_validation_cache.size (gauge)` (just type `cache` for autocomplete)
+- `Where`: `ServiceName: payment`
+- `Group By`: `` (leave empty)
+
+We can see how the cache size increased over a 4-5 hr period (likely after a software deployment) before reaching a maximum size of `100,000`. From the `Sample Matched Events` we can see our errors correlate with the cache reaching this limit, after which it is recorded as having a size of `0`, with responses also returning in `0s`.
+
+
+
+In summary, by exploring logs, traces and finally metrics we have concluded:
+
+- Our issue resides with the payment service
+- A change in service behavior, likely due to a deployment, resulted in a slow increase of a Visa cache over a 4-5 hr period - reaching a maximum size of `100,000`.
+- This caused increases in resource consumption as the cache grew in size - likely due to a poor implementation
+- As the cache grew, the performance of Visa payments degraded
+- On reaching the maximum size, the cache rejected payments and reported itself as size `0`.
+
+### Using sessions {#using-sessions}
+
+Sessions allow us to replay the user experience, offering a visual account of how an error occurred from the user's perspective. While not typically used to diagnose root causes, they are valuable for confirming issues reported to customer support and can serve as a starting point for deeper investigation.
+
+In HyperDX, sessions are linked to traces and logs, providing a complete view of the underlying cause.
+
+For example, if the support team provides the email of a user who encountered a payment issue - `Braulio.Roberts23@hotmail.com` - it's often more effective to begin with their session rather than directly searching logs or traces.
+
+Navigate to the `Client Sessions` tab from the left menu, ensuring the data source is set to `Sessions` and the time period to `Last 1 day`:
+
+
+
+Search for `SpanAttributes.userEmail: Braulio` to find our customer's session. Selecting the session will show the browser events and associated spans for the customer's session on the left, with the user's browser experience re-rendered to the right:
+
+
+
+### Replaying sessions {#replaying-sessions}
+
+Sessions can be replayed by pressing the ▶️ button. Switching between `Highlighted` and `All Events` allows varying degrees of span granularity, with the former highlighting key events and errors.
+
+If we scroll to the bottom of the spans, we can see a `500` error associated with `/api/checkout`. Selecting the ▶️ button for this specific span moves the replay to this point in the session, allowing us to confirm the customer's experience - the payment simply appears not to work, with no error rendered.
+
+
+
+Selecting the span, we can confirm this was caused by an internal error. By clicking the `Trace` tab and scrolling through the connected spans, we are able to confirm the customer was indeed affected by our cache issue.
+
+
+
+
+
+This demo walks through a real-world incident involving failed payments in an e-commerce app, showing how ClickStack helps uncover root causes through unified logs, traces, metrics, and session replays - explore our [other getting started guides](/use-cases/observability/clickstack/sample-datasets) to dive deeper into specific features.
diff --git a/docs/use-cases/observability/clickstack/example-datasets/sample-data.md b/docs/use-cases/observability/clickstack/example-datasets/sample-data.md
new file mode 100644
index 00000000000..906ac05e515
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/example-datasets/sample-data.md
@@ -0,0 +1,165 @@
+---
+slug: /use-cases/observability/clickstack/getting-started/sample-data
+title: 'Sample Logs, Traces and Metrics'
+sidebar_position: 0
+pagination_prev: null
+pagination_next: null
+description: 'Getting started with ClickStack and a sample dataset with logs, sessions, traces and metrics'
+---
+
+import Image from '@theme/IdealImage';
+import hyperdx from '@site/static/images/use-cases/observability/hyperdx.png';
+import hyperdx_2 from '@site/static/images/use-cases/observability/hyperdx-2.png';
+import hyperdx_3 from '@site/static/images/use-cases/observability/hyperdx-3.png';
+import hyperdx_4 from '@site/static/images/use-cases/observability/hyperdx-4.png';
+import hyperdx_5 from '@site/static/images/use-cases/observability/hyperdx-5.png';
+import hyperdx_6 from '@site/static/images/use-cases/observability/hyperdx-6.png';
+import hyperdx_7 from '@site/static/images/use-cases/observability/hyperdx-7.png';
+import hyperdx_8 from '@site/static/images/use-cases/observability/hyperdx-8.png';
+import hyperdx_9 from '@site/static/images/use-cases/observability/hyperdx-9.png';
+import hyperdx_10 from '@site/static/images/use-cases/observability/hyperdx-10.png';
+import hyperdx_11 from '@site/static/images/use-cases/observability/hyperdx-11.png';
+import hyperdx_12 from '@site/static/images/use-cases/observability/hyperdx-12.png';
+import hyperdx_13 from '@site/static/images/use-cases/observability/hyperdx-13.png';
+import hyperdx_14 from '@site/static/images/use-cases/observability/hyperdx-14.png';
+import hyperdx_15 from '@site/static/images/use-cases/observability/hyperdx-15.png';
+import hyperdx_16 from '@site/static/images/use-cases/observability/hyperdx-16.png';
+import hyperdx_17 from '@site/static/images/use-cases/observability/hyperdx-17.png';
+import hyperdx_18 from '@site/static/images/use-cases/observability/hyperdx-18.png';
+import hyperdx_19 from '@site/static/images/use-cases/observability/hyperdx-19.png';
+import copy_api_key from '@site/static/images/use-cases/observability/copy_api_key.png';
+
+# ClickStack - Sample logs, traces and metrics {#clickstack-sample-dataset}
+
+The following example assumes you have started ClickStack using the [instructions for the all-in-one image](/use-cases/observability/clickstack/getting-started) and connected to the [local ClickHouse instance](/use-cases/observability/clickstack/getting-started#complete-connection-credentials) or a [ClickHouse Cloud instance](/use-cases/observability/clickstack/getting-started#create-a-cloud-connection).
+
+
+
+## Navigate to the HyperDX UI {#navigate-to-the-hyperdx-ui}
+
+Visit [http://localhost:8080](http://localhost:8080) to access the HyperDX UI.
+
+
+
+## Copy ingestion API key {#copy-ingestion-api-key}
+
+Navigate to [`Team Settings`](http://localhost:8080/team) and copy the `Ingestion API Key` from the `API Keys` section. This API key ensures data ingestion through the OpenTelemetry collector is secure.
+
+
+
+## Download sample data {#download-sample-data}
+
+In order to populate the UI with sample data, download the following file:
+
+[Sample data](https://storage.googleapis.com/hyperdx/sample.tar.gz)
+
+```bash
+# curl
+curl -O https://storage.googleapis.com/hyperdx/sample.tar.gz
+# or
+# wget https://storage.googleapis.com/hyperdx/sample.tar.gz
+```
+
+This file contains example logs, metrics, and traces from our public [OpenTelemetry demo](http://example.com) - a simple e-commerce store with microservices. Copy this file to a directory of your choosing.
+
+## Load sample data {#load-sample-data}
+
+To load this data, we simply send it to the HTTP endpoint of the deployed OpenTelemetry (OTel) collector.
+
+First, export the API key copied above.
+
+```bash
+# export API key
+export CLICKSTACK_API_KEY=
+```
+
+Run the following command to send the data to the OTel collector:
+
+```bash
+for filename in $(tar -tf sample.tar.gz); do
+ endpoint="http://localhost:4318/v1/${filename%.json}"
+ echo "loading ${filename%.json}"
+ tar -xOf sample.tar.gz "$filename" | while read -r line; do
+ echo "$line" | curl -s -o /dev/null -X POST "$endpoint" \
+ -H "Content-Type: application/json" \
+ -H "authorization: ${CLICKSTACK_API_KEY}" \
+ --data-binary @-
+ done
+done
+```
+
+This simulates OTLP log, trace, and metric sources sending data to the OTel collector. In production, these sources may be language clients or even other OTel collectors.
+
+Returning to the `Search` view, you should see that data has started to load:
+
+
+
+Data loading will take a few minutes. Allow the load to complete before progressing to the next steps.
+
+## Explore sessions {#explore-sessions}
+
+Suppose we have reports that our users are experiencing issues paying for goods. We can view their experience using HyperDX's session replay capabilities.
+
+Select [`Client Sessions`](http://localhost:8080/sessions?from=1747312320000&to=1747312920000&sessionSource=l1324572572) from the left menu.
+
+
+
+This view allows us to see front-end sessions for our e-commerce store. Sessions remain Anonymous until users check out and try to complete a purchase.
+
+Note that some sessions with emails have an associated error, potentially confirming reports of failed transactions.
+
+Select a session with a failure and an associated email. The subsequent view allows us to replay the user's session and review their issue. Press play to watch the session.
+
+
+
+The replay shows the user navigating the site, adding items to their cart. Feel free to skip to later in the session where they attempt to complete a payment.
+
+:::tip
+Any errors are annotated on the timeline in red.
+:::
+
+The user was unable to place the order, with no obvious error. Scroll to the bottom of the left panel, containing the network and console events from the user's browser. You will notice a 500 error was thrown on making a `/api/checkout` call.
+
+
+
+Select this `500` error. Neither the `Overview` nor `Column Values` indicate the source of the issue, other than the fact the error is unexpected, causing an `Internal Error`.
+
+## Explore traces {#explore-traces}
+
+Navigate to the `Trace` tab to see the full distributed trace.
+
+
+
+Scroll down the trace to see the origin of the error - the `checkout` service span. Select the `Payment` service span.
+
+
+
+Select the tab `Column Values` and scroll down. We can see the issue is associated with a cache being full.
+
+
+
+Scrolling up and returning to the trace, we can see logs are correlated with the span, thanks to our earlier configuration. These provide further context.
+
+
+
+We've established that a cache is getting filled in the payment service, which is preventing payments from completing.
+
+## Explore logs {#explore-logs}
+
+For further details, we can return to the [`Search` view](http://localhost:8080/search):
+
+Select `Logs` from the sources and apply a filter to the `payment` service.
+
+
+
+We can see that while the issue is recent, the number of impacted payments is high. Furthermore, a cache related to Visa payments appears to be causing issues.
+
+## Chart metrics {#chart-metrics}
+
+While an error has clearly been introduced in the code, we can use metrics to confirm the cache size. Navigate to the `Chart Explorer` view.
+
+Select `Metrics` as the data source. Complete the chart builder to plot the `Maximum` of `visa_validation_cache.size (Gauge)` and press the play button. The cache was clearly increasing before reaching a maximum size, after which errors were generated.
+
+
+
+
diff --git a/docs/use-cases/observability/clickstack/getting-started.md b/docs/use-cases/observability/clickstack/getting-started.md
new file mode 100644
index 00000000000..9a1f0ecb9a8
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/getting-started.md
@@ -0,0 +1,194 @@
+---
+slug: /use-cases/observability/clickstack/getting-started
+title: 'Getting Started with ClickStack'
+sidebar_label: 'Getting Started'
+pagination_prev: null
+pagination_next: use-cases/observability/clickstack/example-datasets/index
+description: 'Getting started with ClickStack - The ClickHouse Observability Stack'
+---
+
+import Image from '@theme/IdealImage';
+import hyperdx_login from '@site/static/images/use-cases/observability/hyperdx-login.png';
+import hyperdx_logs from '@site/static/images/use-cases/observability/hyperdx-logs.png';
+import hyperdx from '@site/static/images/use-cases/observability/hyperdx-1.png';
+import hyperdx_2 from '@site/static/images/use-cases/observability/hyperdx-2.png';
+import connect_cloud from '@site/static/images/use-cases/observability/connect-cloud-creds.png';
+import add_connection from '@site/static/images/use-cases/observability/add_connection.png';
+import hyperdx_cloud from '@site/static/images/use-cases/observability/hyperdx-cloud.png';
+import edit_cloud_connection from '@site/static/images/use-cases/observability/edit_cloud_connection.png';
+import delete_source from '@site/static/images/use-cases/observability/delete_source.png';
+import delete_connection from '@site/static/images/use-cases/observability/delete_connection.png';
+import created_sources from '@site/static/images/use-cases/observability/created_sources.png';
+import edit_connection from '@site/static/images/use-cases/observability/edit_connection.png';
+
+
+Getting started with **ClickStack** is straightforward thanks to the availability of prebuilt Docker images. These images are based on the official ClickHouse Debian package and are available in multiple distributions to suit different use cases.
+
+## Local deployment {#local-deployment}
+
+The simplest option is a **single-image distribution** that includes all core components of the stack bundled together:
+
+- **HyperDX UI**
+- **OpenTelemetry (OTel) collector**
+- **ClickHouse**
+
+This all-in-one image allows you to launch the full stack with a single command, making it ideal for testing, experimentation, or quick local deployments.
+
+
+
+### Deploy stack with docker {#deploy-stack-with-docker}
+
+The following command runs an OpenTelemetry collector (on ports 4317 and 4318) and the HyperDX UI (on port 8080).
+
+```bash
+docker run -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one
+```
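+
+Once the container is up, you can optionally verify that the published ports respond before moving on. This is just a quick sanity check, not part of the official setup, using the ports from the command above:
+
+```bash
+# expect an HTTP status code from the HyperDX UI
+curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080
+# the OTLP/HTTP port should answer (a GET typically returns 405, rather than a refused connection)
+curl -s -o /dev/null -w "%{http_code}\n" http://localhost:4318/v1/logs
+```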
+
+:::note Persisting data and settings
+To persist data and settings across restarts of the container, users can modify the above docker command to mount the paths `/data/db`, `/var/lib/clickhouse` and `/var/log/clickhouse-server`.
+
+For example:
+
+```bash
+# modify command to mount paths
+docker run \
+ -p 8080:8080 \
+ -p 4317:4317 \
+ -p 4318:4318 \
+ -v "$(pwd)/.volumes/db:/data/db" \
+ -v "$(pwd)/.volumes/ch_data:/var/lib/clickhouse" \
+ -v "$(pwd)/.volumes/ch_logs:/var/log/clickhouse-server" \
+ docker.hyperdx.io/hyperdx/hyperdx-all-in-one
+```
+:::
+
+### Navigate to the HyperDX UI {#navigate-to-hyperdx-ui}
+
+Visit [http://localhost:8080](http://localhost:8080) to access the HyperDX UI.
+
+Create a user, providing a username and password that meets the complexity requirements.
+
+
+
+HyperDX will automatically connect to the local cluster and create data sources for the logs, traces, metrics, and sessions - allowing you to explore the product immediately.
+
+### Explore the product {#explore-the-product}
+
+With the stack deployed, try one of our sample datasets.
+
+To continue using the local cluster:
+
+- [Example dataset](/use-cases/observability/clickstack/getting-started/sample-data) - Load an example dataset from our public demo. Diagnose a simple issue.
+- [Local files and metrics](/use-cases/observability/clickstack/getting-started/local-data) - Load local files and monitor the system on OSX or Linux using a local OTel collector.
+
+
+Alternatively, you can connect to a demo cluster where you can explore a larger dataset:
+
+- [Remote demo dataset](/use-cases/observability/clickstack/getting-started/remote-demo-data) - Explore a demo dataset in our demo ClickHouse service.
+
+
+
+## Deploy with ClickHouse Cloud {#deploy-with-clickhouse-cloud}
+
+Users can deploy ClickStack against ClickHouse Cloud, benefiting from a fully managed, secure backend while retaining complete control over ingestion, schema, and observability workflows.
+
+
+
+### Create a ClickHouse Cloud service {#create-a-service}
+
+Follow the [getting started guide for ClickHouse Cloud](/cloud/get-started/cloud-quick-start#1-create-a-clickhouse-service) to create a service.
+
+### Copy connection details {#copy-cloud-connection-details}
+
+To find the connection details for HyperDX, navigate to the ClickHouse Cloud console and click the Connect button on the sidebar.
+
+Copy the HTTP connection details, specifically the HTTPS endpoint (`endpoint`) and password.
+
+
+
+:::note Deploying to production
+While we will use the `default` user to connect HyperDX, we recommend creating a dedicated user when [going to production](/use-cases/observability/clickstack/production#create-a-user).
+:::
+
+### Deploy with docker {#deploy-with-docker}
+
+Open a terminal and export the credentials copied above:
+
+```bash
+export CLICKHOUSE_USER=default
+export CLICKHOUSE_ENDPOINT=
+export CLICKHOUSE_PASSWORD=
+```
+
+Run the following docker command:
+
+```bash
+docker run -e CLICKHOUSE_ENDPOINT=${CLICKHOUSE_ENDPOINT} -e CLICKHOUSE_USER=default -e CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD} -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one
+```
+
+This will expose an OpenTelemetry collector (on ports 4317 and 4318) and the HyperDX UI (on port 8080).
+
+### Navigate to the HyperDX UI {#navigate-to-hyperdx-ui-cloud}
+
+Visit [http://localhost:8080](http://localhost:8080) to access the HyperDX UI.
+
+Create a user, providing a username and password that meets the complexity requirements.
+
+
+
+### Create a ClickHouse Cloud connection {#create-a-cloud-connection}
+
+Navigate to `Team Settings` and click `Edit` for the `Local Connection`:
+
+
+
+Rename the connection to `Cloud` and complete the subsequent form with your ClickHouse Cloud service credentials before clicking `Save`:
+
+
+
+### Explore the product {#explore-the-product-cloud}
+
+With the stack deployed, try one of our sample datasets.
+
+- [Example dataset](/use-cases/observability/clickstack/getting-started/sample-data) - Load an example dataset from our public demo. Diagnose a simple issue.
+- [Local files and metrics](/use-cases/observability/clickstack/getting-started/local-data) - Load local files and monitor the system on OSX or Linux using a local OTel collector.
+
+
+
+## Local mode {#local-mode}
+
+Local mode is a way to deploy HyperDX without a database or OTel collector. You can connect directly to a ClickHouse server from your browser, with configuration stored in your browser's local or session storage. This image **only** includes the HyperDX UI.
+
+Authentication is not supported.
+
+This mode is intended to be used for quick testing, development, demos and debugging use cases where deploying a full HyperDX instance is not necessary.
+
+### Hosted Version {#hosted-version}
+
+You can use a hosted version of HyperDX in local mode available at [play.hyperdx.io](https://play.hyperdx.io).
+
+### Self-Hosted Version {#self-hosted-version}
+
+
+
+### Run with docker {#run-local-with-docker}
+
+The self-hosted local mode image comes with an OpenTelemetry collector and a ClickHouse server pre-configured as well. This makes it easy to consume telemetry data from your applications and visualize it in HyperDX with minimal external setup. To get started with the self-hosted version, simply run the Docker container with the appropriate ports forwarded:
+
+```bash
+docker run -p 8080:8080 docker.hyperdx.io/hyperdx/hyperdx-local
+```
+
+You will not be prompted to create a user, as local mode does not include authentication.
+
+### Complete connection credentials {#complete-connection-credentials}
+
+To connect to your own **external ClickHouse cluster**, you can manually enter your connection credentials.
+
+Alternatively, for a quick exploration of the product, you can also click **Connect to Demo Server** to access preloaded datasets and try ClickStack with no setup required.
+
+
+
+If connecting to the demo server, users can explore the dataset with the [demo dataset instructions](/use-cases/observability/clickstack/getting-started/remote-demo-data).
+
+
diff --git a/docs/use-cases/observability/clickstack/index.md b/docs/use-cases/observability/clickstack/index.md
new file mode 100644
index 00000000000..0d8edef60d2
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/index.md
@@ -0,0 +1,22 @@
+---
+slug: /use-cases/observability/clickstack
+title: 'ClickStack - The ClickHouse Observability Stack'
+pagination_prev: null
+pagination_next: null
+description: 'Landing page for the ClickHouse Observability Stack'
+---
+
+**ClickStack** is a production-grade observability platform built on ClickHouse and OpenTelemetry (OTel), unifying logs, traces, metrics, and sessions in a single high-performance solution. Designed for monitoring and debugging complex systems, ClickStack enables developers and SREs to trace issues end-to-end without switching between tools or manually stitching together data using timestamps or correlation IDs.
+
+| Section | Description |
+|---------|-------------|
+| [Overview](/use-cases/observability/clickstack/overview) | Introduction to ClickStack and its key features |
+| [Getting Started](/use-cases/observability/clickstack/getting-started) | Quick start guide and basic setup instructions |
+| [Sample Datasets](/use-cases/observability/clickstack/sample-datasets) | Sample datasets and use cases |
+| [Architecture](/use-cases/observability/clickstack/architecture) | System architecture and components overview |
+| [Deployment](/use-cases/observability/clickstack/deployment) | Deployment guides and options |
+| [Configuration](/use-cases/observability/clickstack/config) | Detailed configuration options and settings |
+| [Ingesting Data](/use-cases/observability/clickstack/ingesting-data) | Guidelines for ingesting data to ClickStack |
+| [Search](/use-cases/observability/clickstack/search) | How to search and query your observability data |
+| [Production](/use-cases/observability/clickstack/production) | Best practices for production deployment |
+
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/collector.md b/docs/use-cases/observability/clickstack/ingesting-data/collector.md
new file mode 100644
index 00000000000..5bd61776145
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/collector.md
@@ -0,0 +1,293 @@
+---
+slug: /use-cases/observability/clickstack/ingesting-data/otel-collector
+pagination_prev: null
+pagination_next: null
+description: 'OpenTelemetry collector for ClickStack - The ClickHouse Observability Stack'
+sidebar_label: 'OpenTelemetry Collector'
+title: 'ClickStack OpenTelemetry Collector'
+---
+
+import Image from '@theme/IdealImage';
+import observability_6 from '@site/static/images/use-cases/observability/observability-6.png';
+import observability_8 from '@site/static/images/use-cases/observability/observability-8.png';
+import clickstack_with_gateways from '@site/static/images/use-cases/observability/clickstack-with-gateways.png';
+import clickstack_with_kafka from '@site/static/images/use-cases/observability/clickstack-with-kafka.png';
+import ingestion_key from '@site/static/images/use-cases/observability/ingestion-keys.png';
+
+This page includes details on configuring the official ClickStack OpenTelemetry (OTel) collector.
+
+## Collector roles {#collector-roles}
+
+OpenTelemetry collectors can be deployed in two principal roles:
+
+- **Agent** - Agent instances collect data at the edge, e.g. on servers or Kubernetes nodes, or receive events directly from applications instrumented with an OpenTelemetry SDK. In the latter case, the agent instance runs with the application or on the same host as the application (such as a sidecar or a DaemonSet). Agents can either send their data directly to ClickHouse or to a gateway instance. In the former case, this is referred to as the [Agent deployment pattern](https://opentelemetry.io/docs/collector/deployment/agent/).
+
+- **Gateway** - Gateway instances provide a standalone service (for example, a deployment in Kubernetes), typically per cluster, per data center, or per region. These receive events from applications (or other collectors as agents) via a single OTLP endpoint. Typically, a set of gateway instances are deployed, with an out-of-the-box load balancer used to distribute the load amongst them. If all agents and applications send their signals to this single endpoint, it is often referred to as a [Gateway deployment pattern](https://opentelemetry.io/docs/collector/deployment/gateway/).
+
+**Important: The collector included in default distributions of ClickStack assumes the [gateway role described above](#collector-roles), receiving data from agents or SDKs.**
+
+Users deploying OTel collectors in the agent role will typically use the [default contrib distribution of the collector](https://github.com/open-telemetry/opentelemetry-collector-contrib) rather than the ClickStack version, but are free to use other OTLP-compatible technologies such as [Fluentd](https://www.fluentd.org/) and [Vector](https://vector.dev/).
+
+## Deploying the collector {#configuring-the-collector}
+
+If you are managing your own OpenTelemetry collector in a standalone deployment - such as when using the HyperDX-only distribution - we [recommend still using the official ClickStack distribution of the collector](/use-cases/observability/clickstack/deployment/hyperdx-only#otel-collector) for the gateway role where possible, but if you choose to bring your own, ensure it includes the [ClickHouse exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/clickhouseexporter).
+
+### Standalone {#standalone}
+
+To deploy the ClickStack distribution of the OTel collector in standalone mode, run the following docker command:
+
+```bash
+docker run -e OPAMP_SERVER_URL=${OPAMP_SERVER_URL} -e CLICKHOUSE_ENDPOINT=${CLICKHOUSE_ENDPOINT} -e CLICKHOUSE_USER=default -e CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD} -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-otel-collector
+```
+
+Note that we can overwrite the target ClickHouse instance with the environment variables `CLICKHOUSE_ENDPOINT`, `CLICKHOUSE_USER`, and `CLICKHOUSE_PASSWORD`. The `CLICKHOUSE_ENDPOINT` should be the full ClickHouse HTTP endpoint, including the protocol and port - for example, `http://localhost:8123`.
+
+**These environment variables can be used with any of the docker distributions which include the collector.**
+
+The `OPAMP_SERVER_URL` should point to your HyperDX deployment - for example, `http://localhost:4320`. HyperDX exposes an OpAMP (Open Agent Management Protocol) server at `/v1/opamp` on port `4320` by default. Make sure to expose this port from the container running HyperDX (e.g., using `-p 4320:4320`).
+
+:::note Exposing and connecting to the OpAMP port
+For the collector to connect to the OpAMP port it must be exposed by the HyperDX container e.g. `-p 4320:4320`. For local testing, OSX users can then set `OPAMP_SERVER_URL=http://host.docker.internal:4320`. Linux users can start the collector container with `--network=host`.
+:::
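+
+For example, a minimal local test might publish the OpAMP port alongside the UI port when starting HyperDX, so that a standalone collector can reach it. This is a sketch using the all-in-one image shown elsewhere in this guide:
+
+```bash
+# expose the HyperDX UI (8080) and the OpAMP server (4320)
+docker run -p 8080:8080 -p 4320:4320 docker.hyperdx.io/hyperdx/hyperdx-all-in-one
+```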
+
+In production, use a dedicated user with the [appropriate credentials](/use-cases/observability/clickstack/ingesting-data/otel-collector#creating-an-ingestion-user).
+
+### Modifying configuration {#modifying-otel-collector-configuration}
+
+#### Using docker {#using-docker}
+
+All docker images which include the OpenTelemetry collector can be configured to use a ClickHouse instance via the environment variables `OPAMP_SERVER_URL`, `CLICKHOUSE_ENDPOINT`, `CLICKHOUSE_USER` and `CLICKHOUSE_PASSWORD`:
+
+For example, with the all-in-one image:
+
+```bash
+export OPAMP_SERVER_URL=
+export CLICKHOUSE_ENDPOINT=
+export CLICKHOUSE_USER=
+export CLICKHOUSE_PASSWORD=
+```
+
+```bash
+docker run -e OPAMP_SERVER_URL=${OPAMP_SERVER_URL} -e CLICKHOUSE_ENDPOINT=${CLICKHOUSE_ENDPOINT} -e CLICKHOUSE_USER=${CLICKHOUSE_USER} -e CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD} -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one
+```
+
+#### Docker Compose {#docker-compose-otel}
+
+With Docker Compose, modify the collector configuration using the same environment variables as above:
+
+```yaml
+ otel-collector:
+ image: hyperdx/hyperdx-otel-collector
+ environment:
+ CLICKHOUSE_ENDPOINT: 'https://mxl4k3ul6a.us-east-2.aws.clickhouse-staging.com:8443'
+ HYPERDX_LOG_LEVEL: ${HYPERDX_LOG_LEVEL}
+ CLICKHOUSE_USER: 'default'
+ CLICKHOUSE_PASSWORD: 'password'
+ OPAMP_SERVER_URL: 'http://app:${HYPERDX_OPAMP_PORT}'
+ ports:
+ - '13133:13133' # health_check extension
+ - '24225:24225' # fluentd receiver
+ - '4317:4317' # OTLP gRPC receiver
+ - '4318:4318' # OTLP http receiver
+ - '8888:8888' # metrics extension
+ restart: always
+ networks:
+ - internal
+```
+
+### Advanced configuration {#advanced-configuration}
+
+Currently, the ClickStack distribution of the OTel collector does not support modification of its configuration file. If you need a more complex configuration e.g. [configuring TLS](#securing-the-collector), or modifying the batch size, we recommend copying and modifying the [default configuration](https://github.com/hyperdxio/hyperdx/blob/main/docker/otel-collector/config.yaml) and deploying your own version of the OTel collector using the ClickHouse exporter documented [here](/observability/integrating-opentelemetry#exporting-to-clickhouse) and [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/clickhouseexporter/README.md#configuration-options).
+
+The default ClickStack configuration for the OpenTelemetry (OTel) collector can be found [here](https://github.com/hyperdxio/hyperdx/blob/main/docker/otel-collector/config.yaml).
+
+#### Configuration structure {#configuration-structure}
+
+For details on configuring OTel collectors, including [`receivers`](https://opentelemetry.io/docs/collector/configuration/#receivers), [`operators`](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/operators/README.md), and [`processors`](https://opentelemetry.io/docs/collector/configuration/#processors), we recommend the [official OpenTelemetry collector documentation](https://opentelemetry.io/docs/collector/configuration).
+
+
+## Securing the collector {#securing-the-collector}
+
+The ClickStack distribution of the OpenTelemetry collector includes built-in support for OpAMP (Open Agent Management Protocol), which it uses to securely configure and manage the OTLP endpoint. On startup, users must provide an `OPAMP_SERVER_URL` environment variable — this should point to the HyperDX app, which hosts the OpAMP API at `/v1/opamp`.
+
+This integration ensures that the OTLP endpoint is secured using an auto-generated ingestion API key, created when the HyperDX app is deployed. All telemetry data sent to the collector must include this API key for authentication. You can find the key in the HyperDX app under `Team Settings → API Keys`.
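+
+As an illustration, a minimal OTLP/HTTP request with the key attached might look as follows. This is a sketch: `CLICKSTACK_API_KEY` is assumed to hold the ingestion API key copied from `Team Settings → API Keys`, and the payload is an empty but valid OTLP JSON body:
+
+```bash
+# send an (empty) OTLP logs payload with the ingestion API key attached
+curl -s -X POST http://localhost:4318/v1/logs \
+  -H 'Content-Type: application/json' \
+  -H "authorization: ${CLICKSTACK_API_KEY}" \
+  --data-binary '{"resourceLogs":[]}'
+```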
+
+
+
+To further secure your deployment, we recommend:
+
+- Configuring the collector to communicate with ClickHouse over HTTPS.
+- Create a dedicated user for ingestion with limited permissions - see below.
+- Enabling TLS for the OTLP endpoint, ensuring encrypted communication between SDKs/agents and the collector. **Currently, this requires users to deploy a default distribution of the collector and manage the configuration themselves**.
+
+### Creating an ingestion user {#creating-an-ingestion-user}
+
+We recommend creating a dedicated database and user for the OTel collector for ingestion into ClickHouse. This should have the ability to create and insert into the [tables created and used by ClickStack](/use-cases/observability/clickstack/ingesting-data/schemas).
+
+```sql
+CREATE DATABASE otel;
+CREATE USER hyperdx_ingest IDENTIFIED WITH sha256_password BY 'ClickH0u3eRocks123!';
+GRANT SELECT, INSERT, CREATE TABLE, CREATE VIEW ON otel.* TO hyperdx_ingest;
+```
+
+This assumes the collector has been configured to use the database `otel`. This can be controlled through the environment variable `HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE`. Pass this to the image hosting the collector [similar to other environment variables](#modifying-otel-collector-configuration).
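+
+For example, a standalone collector using the ingestion user and database created above might be started as follows. This is a sketch; `CLICKHOUSE_ENDPOINT` and `OPAMP_SERVER_URL` are assumed to be exported as in the earlier examples:
+
+```bash
+docker run \
+  -e CLICKHOUSE_ENDPOINT=${CLICKHOUSE_ENDPOINT} \
+  -e CLICKHOUSE_USER=hyperdx_ingest \
+  -e CLICKHOUSE_PASSWORD='ClickH0u3eRocks123!' \
+  -e HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE=otel \
+  -e OPAMP_SERVER_URL=${OPAMP_SERVER_URL} \
+  -p 4317:4317 -p 4318:4318 \
+  docker.hyperdx.io/hyperdx/hyperdx-otel-collector
+```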
+
+## Processing - filtering, transforming, and enriching {#processing-filtering-transforming-enriching}
+
+Users will invariably want to filter, transform, and enrich event messages during ingestion. Since the configuration for the ClickStack collector cannot be modified, we recommend that users who need further event filtering and processing either:
+
+- Deploy their own version of the OTel collector performing filtering and processing, sending events to the ClickStack collector via OTLP for ingestion into ClickHouse.
+- Deploy their own version of the OTel collector and send events directly to ClickHouse using the ClickHouse exporter.
+
+If processing is done using the OTel collector, we recommend doing transformations at gateway instances and minimizing any work done at agent instances. This ensures the resources required by agents at the edge, running on servers, are as minimal as possible. Typically, we see users only performing filtering (to minimize unnecessary network usage), timestamp setting (via operators), and enrichment that requires context available at the agent. For example, if gateway instances reside in a different Kubernetes cluster, k8s enrichment will need to occur in the agent.
+
+OpenTelemetry supports the following processing and filtering features users can exploit:
+
+- **Processors** - Processors take the data collected by receivers and [modify or transform](https://opentelemetry.io/docs/collector/transforming-telemetry/) it before sending it to the exporters. Processors are applied in the order configured in the `processors` section of the collector configuration. These are optional, but the minimal set is [typically recommended](https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor#recommended-processors). When using an OTel collector with ClickHouse, we recommend limiting processors to:
+
+  - A [memory_limiter](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/memorylimiterprocessor/README.md) to prevent out-of-memory situations on the collector. See [Estimating resources](#estimating-resources) for recommendations.
+  - Any processor that does enrichment based on context. For example, the [Kubernetes Attributes Processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/k8sattributesprocessor) allows the automatic setting of span, metric, and log resource attributes with k8s metadata, e.g. enriching events with their source pod id.
+  - [Tail or head sampling](https://opentelemetry.io/docs/concepts/sampling/) if required for traces.
+  - [Basic filtering](https://opentelemetry.io/docs/collector/transforming-telemetry/) - dropping events that are not required, if this cannot be done via an operator (see below).
+  - [Batching](https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor/batchprocessor) - essential when working with ClickHouse to ensure data is sent in batches. See ["Optimizing inserts"](#optimizing-inserts).
+
+- **Operators** - [Operators](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/operators/README.md) provide the most basic unit of processing available at the receiver. Basic parsing is supported, allowing fields such as the Severity and Timestamp to be set. JSON and regex parsing are supported here along with event filtering and basic transformations. We recommend performing event filtering here.
+
+We recommend users avoid doing excessive event processing using operators or [transform processors](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/transformprocessor/README.md). These can incur considerable memory and CPU overhead, especially JSON parsing. It is possible to do all processing in ClickHouse at insert time with materialized views and columns with some exceptions - specifically, context-aware enrichment e.g. adding of k8s metadata. For more details, see [Extracting structure with SQL](/use-cases/observability/schema-design#extracting-structure-with-sql).
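+
+As a sketch of the ClickHouse-side approach, the following creates a hypothetical derived table fed by a materialized view at insert time, extracting a `status` key from the `LogAttributes` map of the default `otel_logs` table. The table and column names follow the ClickStack schemas, while the `status` attribute and the target table are illustrative:
+
+```bash
+clickhouse-client --multiquery --query "
+CREATE TABLE otel_logs_status
+(
+    Timestamp DateTime64(9),
+    ServiceName LowCardinality(String),
+    Status String
+)
+ENGINE = MergeTree
+ORDER BY (ServiceName, Timestamp);
+
+-- populated automatically on every insert into otel_logs
+CREATE MATERIALIZED VIEW otel_logs_status_mv TO otel_logs_status AS
+SELECT Timestamp, ServiceName, LogAttributes['status'] AS Status
+FROM otel_logs;"
+```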
+
+### Example {#example-processing}
+
+The following configuration shows collection of this [unstructured log file](https://datasets-documentation.s3.eu-west-3.amazonaws.com/http_logs/access-unstructured.log.gz). This configuration could be used by a collector in the agent role sending data to the ClickStack gateway.
+
+Note the use of operators to extract structure from the log lines (`regex_parser`) and filter events, along with a processor to batch events and limit memory usage.
+
+
+```yaml
+# config-unstructured-logs-with-processor.yaml
+receivers:
+ filelog:
+ include:
+ - /opt/data/logs/access-unstructured.log
+ start_at: beginning
+ operators:
+ - type: regex_parser
+        regex: '^(?P<ip>[\d.]+)\s+-\s+-\s+\[(?P<timestamp>[^\]]+)\]\s+"(?P<method>[A-Z]+)\s+(?P<url>[^\s]+)\s+HTTP/[^\s]+"\s+(?P<status>\d+)\s+(?P<size>\d+)\s+"(?P<referrer>[^"]*)"\s+"(?P<user_agent>[^"]*)"'
+ timestamp:
+ parse_from: attributes.timestamp
+ layout: '%d/%b/%Y:%H:%M:%S %z'
+ #22/Jan/2019:03:56:14 +0330
+processors:
+ batch:
+ timeout: 1s
+ send_batch_size: 100
+ memory_limiter:
+ check_interval: 1s
+ limit_mib: 2048
+ spike_limit_mib: 256
+exporters:
+ # HTTP setup
+ otlphttp/hdx:
+ endpoint: 'http://localhost:4318'
+ headers:
+      authorization: <YOUR_INGESTION_API_KEY>
+ compression: gzip
+
+ # gRPC setup (alternative)
+ otlp/hdx:
+ endpoint: 'localhost:4317'
+ headers:
+      authorization: <YOUR_INGESTION_API_KEY>
+ compression: gzip
+service:
+ telemetry:
+ metrics:
+ address: 0.0.0.0:9888 # Modified as 2 collectors running on same host
+ pipelines:
+ logs:
+ receivers: [filelog]
+ processors: [batch]
+ exporters: [otlphttp/hdx]
+```
+
+Note the need to include an [authorization header containing your ingestion API key](#securing-the-collector) in any OTLP communication.
+
+For more advanced configuration, we suggest the [OpenTelemetry collector documentation](https://opentelemetry.io/docs/collector/).
+
+## Optimizing inserts {#optimizing-inserts}
+
+In order to achieve high insert performance while obtaining strong consistency guarantees, users should adhere to simple rules when inserting Observability data into ClickHouse via the ClickStack collector. With the correct configuration of the OTel collector, the following rules should be straightforward to follow. This also avoids [common issues](https://clickhouse.com/blog/common-getting-started-issues-with-clickhouse) users encounter when using ClickHouse for the first time.
+
+### Batching {#batching}
+
+By default, each insert sent to ClickHouse causes ClickHouse to immediately create a storage part containing the insert's data together with other metadata that needs to be stored. Sending fewer inserts that each contain more data therefore reduces the number of writes required, compared to sending many inserts that each contain less data. We recommend inserting data in fairly large batches of at least 1,000 rows at a time. Further details [here](https://clickhouse.com/blog/asynchronous-data-inserts-in-clickhouse#data-needs-to-be-batched-for-optimal-performance).
+
+By default, inserts into ClickHouse are synchronous and idempotent if identical. For tables of the merge tree engine family, ClickHouse will, by default, automatically [deduplicate inserts](https://clickhouse.com/blog/common-getting-started-issues-with-clickhouse#5-deduplication-at-insert-time). This means inserts are tolerant in cases like the following:
+
+- (1) If the node receiving the data has issues, the insert query will time out (or get a more specific error) and not receive an acknowledgment.
+- (2) If the data got written by the node, but the acknowledgement can't be returned to the sender of the query because of network interruptions, the sender will either get a timeout or a network error.
+
+From the collector's perspective, (1) and (2) can be hard to distinguish. However, in both cases, the unacknowledged insert can just be retried immediately. As long as the retried insert query contains the same data in the same order, ClickHouse will automatically ignore the retried insert if the original (unacknowledged) insert succeeded.
+
+For this reason, the ClickStack distribution of the OTel collector uses the [batch processor](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md). This ensures inserts are sent as consistent batches of rows satisfying the above requirements. If a collector is expected to have high throughput (events per second), and at least 5000 events can be sent in each insert, this is usually the only batching required in the pipeline. In this case, the collector will flush batches before the batch processor's `timeout` is reached, ensuring the end-to-end latency of the pipeline remains low and batches are of a consistent size.
+
+### Use Asynchronous inserts {#use-asynchronous-inserts}
+
+Typically, users are forced to send smaller batches when the throughput of a collector is low, and yet they still expect data to reach ClickHouse within a minimum end-to-end latency. In this case, small batches are sent when the `timeout` of the batch processor expires. This can cause problems and is when asynchronous inserts are required. This issue is rare if users are sending data to the ClickStack collector acting as a Gateway - by acting as aggregators, they alleviate this problem - see [Collector roles](#collector-roles).
+
+If large batches cannot be guaranteed, users can delegate batching to ClickHouse using [Asynchronous Inserts](/best-practices/selecting-an-insert-strategy#asynchronous-inserts). With asynchronous inserts, data is first inserted into a buffer and then written to database storage later, asynchronously.
+
+
+
+With [asynchronous inserts enabled](/optimize/asynchronous-inserts#enabling-asynchronous-inserts), when ClickHouse ① receives an insert query, the query's data is ② immediately written into an in-memory buffer first. When ③ the next buffer flush takes place, the buffer's data is [sorted](/guides/best-practices/sparse-primary-indexes#data-is-stored-on-disk-ordered-by-primary-key-columns) and written as a part to the database storage. Note, that the data is not searchable by queries before being flushed to the database storage; the buffer flush is [configurable](/optimize/asynchronous-inserts).
+
+To enable asynchronous inserts for the collector, add `async_insert=1` to the connection string. We recommend users use `wait_for_async_insert=1` (the default) to get delivery guarantees - see [here](https://clickhouse.com/blog/asynchronous-data-inserts-in-clickhouse) for further details.
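+
+For example, when pointing the collector at ClickHouse, the settings can be appended as query parameters on the endpoint. This is a sketch: the host is a placeholder, and whether settings are accepted as endpoint query parameters depends on the exporter version in use:
+
+```bash
+# enable async inserts (with delivery guarantees) via the connection string
+export CLICKHOUSE_ENDPOINT='https://<host>:8443?async_insert=1&wait_for_async_insert=1'
+```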
+
+Data from an async insert is inserted once the ClickHouse buffer is flushed. This occurs either after [`async_insert_max_data_size`](/operations/settings/settings#async_insert_max_data_size) is exceeded or after [`async_insert_busy_timeout_ms`](/operations/settings/settings#async_insert_busy_timeout_ms) milliseconds since the first INSERT query. If `async_insert_stale_timeout_ms` is set to a non-zero value, the data is inserted `async_insert_stale_timeout_ms` milliseconds after the last query. Users can tune these settings to control the end-to-end latency of their pipeline. Further settings that can be used to tune buffer flushing are documented [here](/operations/settings/settings#async_insert). Generally, defaults are appropriate.
+
+:::note Consider Adaptive Asynchronous Inserts
+In cases where a low number of agents are in use, with low throughput but strict end-to-end latency requirements, [adaptive asynchronous inserts](https://clickhouse.com/blog/clickhouse-release-24-02#adaptive-asynchronous-inserts) may be useful. Generally, these are not applicable to high throughput Observability use cases, as seen with ClickHouse.
+:::
+
+Finally, the previous deduplication behavior associated with synchronous inserts into ClickHouse is not enabled by default when using asynchronous inserts. If required, see the setting [`async_insert_deduplicate`](/operations/settings/settings#async_insert_deduplicate).
+
+Full details on configuring this feature can be found on this [docs page](/optimize/asynchronous-inserts#enabling-asynchronous-inserts), or with a deep dive [blog post](https://clickhouse.com/blog/asynchronous-data-inserts-in-clickhouse).
+
+## Scaling {#scaling}
+
+The ClickStack OTel collector acts as a Gateway instance - see [Collector roles](#collector-roles). These provide a standalone service, typically per data center or per region. They receive events from applications (or other collectors in the agent role) via a single OTLP endpoint. Typically, a set of collector instances is deployed, with an out-of-the-box load balancer used to distribute the load amongst them.
+
+
+
+The objective of this architecture is to offload computationally intensive processing from the agents, thereby minimizing their resource usage. These ClickStack gateways can perform transformation tasks that would otherwise need to be done by agents. Furthermore, by aggregating events from many agents, the gateways can ensure large batches are sent to ClickHouse - allowing efficient insertion. These gateway collectors can easily be scaled as more agents and SDK sources are added and event throughput increases.
+
+### Adding Kafka {#adding-kafka}
+
+Readers may notice the above architectures do not use Kafka as a message queue.
+
+Using a Kafka queue as a message buffer is a popular design pattern seen in logging architectures and was popularized by the ELK stack. It provides a few benefits: principally, it helps provide stronger message delivery guarantees and helps deal with backpressure. Messages are sent from collection agents to Kafka and written to disk. In theory, a clustered Kafka instance should provide a high-throughput message buffer, since it incurs less computational overhead to write data linearly to disk than to parse and process a message. In Elastic, for example, tokenization and indexing incur significant overhead. By moving data away from the agents, you also incur less risk of losing messages as a result of log rotation at the source. Finally, it offers some message replay and cross-region replication capabilities, which might be attractive for some use cases.
+
+However, ClickHouse can handle inserting data very quickly - millions of rows per second on moderate hardware. Backpressure from ClickHouse is rare. Often, leveraging a Kafka queue means more architectural complexity and cost. If you can embrace the principle that logs do not need the same delivery guarantees as bank transactions and other mission-critical data, we recommend avoiding the complexity of Kafka.
+
+However, if you require high delivery guarantees or the ability to replay data (potentially to multiple sources), Kafka can be a useful architectural addition.
+
+
+
+In this case, OTel agents can be configured to send data to Kafka via the [Kafka exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/kafkaexporter/README.md). Gateway instances, in turn, consume messages using the [Kafka receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/kafkareceiver/README.md). We recommend the Confluent and OTel documentation for further details.
+
+:::note OTel collector configuration
+The ClickStack OpenTelemetry collector distribution cannot be used with Kafka as it requires a configuration modification. Users will need to deploy a default OTel collector using the ClickHouse exporter.
+:::
+
+## Estimating resources {#estimating-resources}
+
+Resource requirements for the OTel collector will depend on the event throughput, the size of messages, and the amount of processing performed. The OpenTelemetry project maintains [benchmarks](https://opentelemetry.io/docs/collector/benchmarks/) that users can use to estimate resource requirements.
+
+[In our experience](https://clickhouse.com/blog/building-a-logging-platform-with-clickhouse-and-saving-millions-over-datadog#architectural-overview), a ClickStack gateway instance with 3 cores and 12GB of RAM can handle around 60k events per second. This assumes a minimal processing pipeline responsible for renaming fields and no regular expressions.
+
+For agent instances responsible for shipping events to a gateway, and only setting the timestamp on the event, we recommend users size based on the anticipated logs per second. The following represent approximate numbers users can use as a starting point:
+
+| Logging rate | Resources to collector agent |
+|--------------|------------------------------|
+| 1k/second | 0.2 CPU, 0.2GiB |
+| 5k/second | 0.5 CPU, 0.5GiB |
+| 10k/second | 1 CPU, 1GiB |
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/index.md b/docs/use-cases/observability/clickstack/ingesting-data/index.md
new file mode 100644
index 00000000000..20196403bf7
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/index.md
@@ -0,0 +1,18 @@
+---
+slug: /use-cases/observability/clickstack/ingesting-data
+pagination_prev: null
+pagination_next: null
+description: 'Data ingestion for ClickStack - The ClickHouse Observability Stack'
+title: 'Ingesting data'
+---
+
+ClickStack provides multiple ways to ingest observability data into your ClickHouse instance. Whether you're collecting logs, metrics, traces, or session data, you can use the OpenTelemetry (OTel) collector as a unified ingestion point or leverage platform-specific integrations for specialized use cases.
+
+| Section | Description |
+|------|-------------|
+| [Overview](/use-cases/observability/clickstack/ingesting-data/overview) | Introduction to data ingestion methods and architecture |
+| [Ingesting data with OpenTelemetry](/use-cases/observability/clickstack/ingesting-data/opentelemetry) | For users using OpenTelemetry and looking to quickly integrate with ClickStack |
+| [OpenTelemetry collector](/use-cases/observability/clickstack/ingesting-data/otel-collector) | Advanced details for the ClickStack OpenTelemetry collector |
+| [Kubernetes](/use-cases/observability/clickstack/ingesting-data/kubernetes) | Guide on collecting observability data from Kubernetes clusters |
+| [Tables and Schemas](/use-cases/observability/clickstack/ingesting-data/schemas) | Overview of the ClickHouse tables and their schemas used by ClickStack |
+| [Language SDKs](/use-cases/observability/clickstack/sdks) | ClickStack SDKs for instrumenting programming languages and collecting telemetry data |
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/kubernetes.md b/docs/use-cases/observability/clickstack/ingesting-data/kubernetes.md
new file mode 100644
index 00000000000..07dbaa4c8be
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/kubernetes.md
@@ -0,0 +1,246 @@
+---
+slug: /use-cases/observability/clickstack/ingesting-data/kubernetes
+pagination_prev: null
+pagination_next: null
+description: 'Kubernetes integration for ClickStack - The ClickHouse Observability Stack'
+title: 'Kubernetes'
+---
+
+ClickStack uses the OpenTelemetry (OTel) collector to collect logs, metrics, and Kubernetes events from Kubernetes clusters and forward them to ClickStack. We support the native OTel log format and require no additional vendor-specific configuration.
+
+This guide integrates the following:
+
+- **Logs**
+- **Infra Metrics**
+
+:::note
+To send over application-level metrics or APM/traces, you'll need to add the corresponding language integration to your application as well.
+:::
+
+## Creating the OTel Helm Chart configuration files {#creating-the-otel-helm-chart-config-files}
+
+To collect logs and metrics from each node and from the cluster itself, we'll need to deploy two separate OpenTelemetry collectors. One will be deployed as a DaemonSet to collect logs and metrics from each node, and the other will be deployed as a deployment to collect logs and metrics from the cluster itself.
+
+### Creating the DaemonSet configuration {#creating-the-daemonset-configuration}
+
+The DaemonSet will collect logs and metrics from each node in the cluster but will not collect Kubernetes events or cluster-wide metrics.
+
+Create a file called `daemonset.yaml` with the following contents:
+
+```yaml
+# daemonset.yaml
+mode: daemonset
+
+# Required to use the kubeletstats cpu/memory utilization metrics
+clusterRole:
+ create: true
+ rules:
+ - apiGroups:
+ - ''
+ resources:
+ - nodes/proxy
+ verbs:
+ - get
+
+presets:
+ logsCollection:
+ enabled: true
+ hostMetrics:
+ enabled: true
+ # Configures the Kubernetes Processor to add Kubernetes metadata.
+ # Adds the k8sattributes processor to all the pipelines and adds the necessary rules to ClusterRole.
+ # More info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubernetes-attributes-processor
+ kubernetesAttributes:
+ enabled: true
+    # When enabled the processor will extract all labels for an associated pod and add them as resource attributes.
+    # The label's exact name will be the key.
+    extractAllPodLabels: true
+    # When enabled the processor will extract all annotations for an associated pod and add them as resource attributes.
+    # The annotation's exact name will be the key.
+    extractAllPodAnnotations: true
+  # Configures the collector to collect node, pod, and container metrics from the API server on a kubelet.
+ # Adds the kubeletstats receiver to the metrics pipeline and adds the necessary rules to ClusterRole.
+ # More Info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubeletstats-receiver
+ kubeletMetrics:
+ enabled: true
+
+config:
+ receivers:
+ # Configures additional kubelet metrics
+ kubeletstats:
+ collection_interval: 20s
+ auth_type: 'serviceAccount'
+ endpoint: '${env:K8S_NODE_NAME}:10250'
+ insecure_skip_verify: true
+ metrics:
+ k8s.pod.cpu_limit_utilization:
+ enabled: true
+ k8s.pod.cpu_request_utilization:
+ enabled: true
+ k8s.pod.memory_limit_utilization:
+ enabled: true
+ k8s.pod.memory_request_utilization:
+ enabled: true
+ k8s.pod.uptime:
+ enabled: true
+ k8s.node.uptime:
+ enabled: true
+ k8s.container.cpu_limit_utilization:
+ enabled: true
+ k8s.container.cpu_request_utilization:
+ enabled: true
+ k8s.container.memory_limit_utilization:
+ enabled: true
+ k8s.container.memory_request_utilization:
+ enabled: true
+ container.uptime:
+ enabled: true
+
+ exporters:
+ otlphttp:
+ endpoint: 'https://in-otel.hyperdx.io'
+ headers:
+ authorization: ''
+ compression: gzip
+
+ service:
+ pipelines:
+ logs:
+ exporters:
+ - otlphttp
+ metrics:
+ exporters:
+ - otlphttp
+```
+
+### Creating the Deployment Configuration {#creating-the-deployment-configuration}
+
+To collect Kubernetes events and cluster-wide metrics, we'll need to deploy a separate OpenTelemetry collector as a deployment.
+
+Create a file called `deployment.yaml` with the following contents:
+
+```yaml copy
+# deployment.yaml
+mode: deployment
+
+# We only want one of these collectors - any more, and we'd produce duplicate data
+replicaCount: 1
+
+presets:
+ kubernetesAttributes:
+ enabled: true
+    # When enabled the processor will extract all labels for an associated pod and add them as resource attributes.
+    # The label's exact name will be the key.
+    extractAllPodLabels: true
+    # When enabled the processor will extract all annotations for an associated pod and add them as resource attributes.
+ # The annotation's exact name will be the key.
+ extractAllPodAnnotations: true
+ # Configures the collector to collect Kubernetes events.
+ # Adds the k8sobject receiver to the logs pipeline and collects Kubernetes events by default.
+ # More info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubernetes-objects-receiver
+ kubernetesEvents:
+ enabled: true
+ # Configures the Kubernetes Cluster Receiver to collect cluster-level metrics.
+  # Adds the k8s_cluster receiver to the metrics pipeline and adds the necessary rules to ClusterRole.
+ # More Info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubernetes-cluster-receiver
+ clusterMetrics:
+ enabled: true
+
+config:
+ exporters:
+ otlphttp:
+ endpoint: 'https://in-otel.hyperdx.io'
+ headers:
+ authorization: ''
+ compression: gzip
+
+ service:
+ pipelines:
+ logs:
+ exporters:
+ - otlphttp
+ metrics:
+ exporters:
+ - otlphttp
+```
+
+## Deploying the OpenTelemetry collector {#deploying-the-otel-collector}
+
+The OpenTelemetry collector can now be deployed in your Kubernetes cluster using
+the [OpenTelemetry Helm Chart](https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-collector).
+
+Add the OpenTelemetry Helm repo:
+
+```bash copy
+helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts # Add OTel Helm repo
+```
+
+Install the chart with the above config:
+
+```bash copy
+helm install my-opentelemetry-collector-deployment open-telemetry/opentelemetry-collector -f deployment.yaml
+helm install my-opentelemetry-collector-daemonset open-telemetry/opentelemetry-collector -f daemonset.yaml
+```
+
+The metrics, logs, and Kubernetes events from your Kubernetes cluster should
+now appear inside HyperDX.
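+
+You can optionally confirm the collectors are running before checking HyperDX. This is a sketch; the instance labels match the Helm release names used in the install commands above:
+
+```bash
+# list the pods created by the two Helm releases
+kubectl get pods -l "app.kubernetes.io/instance in (my-opentelemetry-collector-daemonset,my-opentelemetry-collector-deployment)"
+```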
+
+## Forwarding resource tags to pods (Recommended) {#forwarding-resouce-tags-to-pods}
+
+To correlate application-level logs, metrics, and traces with Kubernetes metadata
+(ex. pod name, namespace, etc.), you'll want to forward the Kubernetes metadata
+to your application using the `OTEL_RESOURCE_ATTRIBUTES` environment variable.
+
+Here's an example deployment that forwards the Kubernetes metadata to the
+application using environment variables:
+
+```yaml
+# my_app_deployment.yaml
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: app-deployment
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: app
+ template:
+ metadata:
+ labels:
+ app: app
+ # Combined with the Kubernetes Attribute Processor, this will ensure
+ # the pod's logs and metrics will be associated with a service name.
+        service.name: <app-name>
+ spec:
+ containers:
+ - name: app-container
+ image: my-image
+ env:
+ # ... other environment variables
+ # Collect K8s metadata from the downward API to forward to the app
+ - name: POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ - name: POD_UID
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.uid
+ - name: POD_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ - name: NODE_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+ - name: DEPLOYMENT_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.labels['deployment']
+ # Forward the K8s metadata to the app via OTEL_RESOURCE_ATTRIBUTES
+ - name: OTEL_RESOURCE_ATTRIBUTES
+ value: k8s.pod.name=$(POD_NAME),k8s.pod.uid=$(POD_UID),k8s.namespace.name=$(POD_NAMESPACE),k8s.node.name=$(NODE_NAME),k8s.deployment.name=$(DEPLOYMENT_NAME)
+```
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/opentelemetry.md b/docs/use-cases/observability/clickstack/ingesting-data/opentelemetry.md
new file mode 100644
index 00000000000..a8a418875d0
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/opentelemetry.md
@@ -0,0 +1,105 @@
+---
+slug: /use-cases/observability/clickstack/ingesting-data/opentelemetry
+pagination_prev: null
+pagination_next: null
+description: 'Data ingestion with OpenTelemetry for ClickStack - The ClickHouse Observability Stack'
+title: 'Ingesting with OpenTelemetry'
+---
+
+import Image from '@theme/IdealImage';
+import ingestion_key from '@site/static/images/use-cases/observability/ingestion-keys.png';
+
+All data is ingested into ClickStack via an **OpenTelemetry (OTel) collector** instance, which acts as the primary entry point for logs, metrics, traces, and session data. We recommend using the official [ClickStack distribution](#installing-otel-collector) of the collector for this instance.
+
+Users send data to this collector from [language SDKs](/use-cases/observability/clickstack/sdks) or through data collection agents collecting infrastructure metrics and logs (such as OTel collectors in an [agent](/use-cases/observability/clickstack/ingesting-data/otel-collector#collector-roles) role, or other technologies e.g. [Fluentd](https://www.fluentd.org/) or [Vector](https://vector.dev/)).
+
+## Installing ClickStack OpenTelemetry collector {#installing-otel-collector}
+
+The ClickStack OpenTelemetry collector is included in most ClickStack distributions, including:
+
+- [All-in-One](/use-cases/observability/clickstack/deployment/all-in-one)
+- [Docker Compose](/use-cases/observability/clickstack/deployment/docker-compose)
+- [Helm](/use-cases/observability/clickstack/deployment/helm)
+
+### Standalone {#standalone}
+
+The ClickStack OTel collector can also be deployed standalone, independent of other components of the stack.
+
+If you're using the [HyperDX-only](/use-cases/observability/clickstack/deployment/hyperdx-only) distribution, you are responsible for delivering data into ClickHouse yourself. This can be done by:
+
+- Running your own OpenTelemetry collector and pointing it at ClickHouse - see below.
+- Sending directly to ClickHouse using alternative tooling, such as [Vector](https://vector.dev/), [Fluentd](https://www.fluentd.org/) etc, or even the default [OTel contrib collector distribution](https://github.com/open-telemetry/opentelemetry-collector-contrib).
+
+:::note We recommend using the ClickStack OpenTelemetry collector
+This allows users to benefit from standardized ingestion, enforced schemas, and out-of-the-box compatibility with the HyperDX UI. Using the default schema enables automatic source detection and preconfigured column mappings.
+:::
+
+For further details see ["Deploying the collector"](/use-cases/observability/clickstack/ingesting-data/otel-collector).
+
+## Sending OpenTelemetry data {#sending-otel-data}
+
+To send data to ClickStack, point your OpenTelemetry instrumentation to the following endpoints made available by the OpenTelemetry collector:
+
+- **HTTP (OTLP):** `http://localhost:4318`
+- **gRPC (OTLP):** `localhost:4317`
+
+For most [language SDKs](/use-cases/observability/clickstack/sdks) and telemetry libraries that support OpenTelemetry, you can simply set the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable in your application:
+
+```bash
+export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
+```
+
+In addition, an authorization header containing the API ingestion key is required. You can find the key in the HyperDX app under `Team Settings → API Keys`.
+
+
+
+
+For language SDKs, this can either be set by an `init` function or via an `OTEL_EXPORTER_OTLP_HEADERS` environment variable, e.g.:
+
+```bash
+OTEL_EXPORTER_OTLP_HEADERS='authorization=<YOUR_INGESTION_API_KEY>'
+```
+
+Agents should likewise include this authorization header in any OTLP communication. For example, if deploying a [contrib distribution of the OTel collector](https://github.com/open-telemetry/opentelemetry-collector-contrib) in the agent role, they can use the OTLP exporter. An example agent config consuming this [structured log file](https://datasets-documentation.s3.eu-west-3.amazonaws.com/http_logs/access-structured.log.gz) is shown below. Note the need to specify an authorization key, shown below as the placeholder `<YOUR_INGESTION_API_KEY>`.
+
+
+```yaml
+# clickhouse-agent-config.yaml
+receivers:
+ filelog:
+ include:
+ - /opt/data/logs/access-structured.log
+ start_at: beginning
+ operators:
+ - type: json_parser
+ timestamp:
+ parse_from: attributes.time_local
+ layout: '%Y-%m-%d %H:%M:%S'
+exporters:
+ # HTTP setup
+ otlphttp/hdx:
+ endpoint: 'http://localhost:4318'
+ headers:
+      authorization: <YOUR_INGESTION_API_KEY>
+ compression: gzip
+
+ # gRPC setup (alternative)
+ otlp/hdx:
+ endpoint: 'localhost:4317'
+ headers:
+      authorization: <YOUR_INGESTION_API_KEY>
+ compression: gzip
+processors:
+ batch:
+ timeout: 5s
+ send_batch_size: 1000
+service:
+ telemetry:
+ metrics:
+ address: 0.0.0.0:9888 # Modified as 2 collectors running on same host
+ pipelines:
+ logs:
+ receivers: [filelog]
+ processors: [batch]
+ exporters: [otlphttp/hdx]
+```
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/overview.md b/docs/use-cases/observability/clickstack/ingesting-data/overview.md
new file mode 100644
index 00000000000..e3b7089c5a2
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/overview.md
@@ -0,0 +1,35 @@
+---
+slug: /use-cases/observability/clickstack/ingesting-data/overview
+title: 'Ingesting data into ClickStack'
+sidebar_label: 'Overview'
+sidebar_position: 0
+pagination_prev: null
+pagination_next: use-cases/observability/clickstack/ingesting-data/opentelemetry
+description: 'Overview for ingesting data to ClickStack'
+---
+
+import Image from '@theme/IdealImage';
+import architecture_with_flow from '@site/static/images/use-cases/observability/simple-architecture-with-flow.png';
+
+All data is ingested into ClickStack via an **OpenTelemetry (OTel) collector**, which acts as the primary entry point for logs, metrics, traces, and session data.
+
+
+
+This collector exposes two OTLP endpoints:
+
+- **HTTP** - port `4318`
+- **gRPC** - port `4317`
+
+Users can send data to these endpoints either directly from [language SDKs](/use-cases/observability/clickstack/sdks) or OTel-compatible data collection agents e.g. other OTel collectors collecting infrastructure metrics and logs.
+
+More specifically:
+
+- [**Language SDKs**](/use-cases/observability/clickstack/sdks) are responsible for collecting telemetry from within your application - most notably **traces** and **logs** - and exporting this data to the OpenTelemetry collector, via the OTLP endpoint, which handles ingestion into ClickHouse. For more details on the language SDKs available with ClickStack see [SDKs](/use-cases/observability/clickstack/sdks).
+
+- **Data collection agents** are agents deployed at the edge — on servers, Kubernetes nodes, or alongside applications. They collect infrastructure telemetry (e.g. logs, metrics) or receive events directly from applications instrumented with SDKs. In this case, the agent runs on the same host as the application, often as a sidecar or DaemonSet. These agents forward data to the central ClickStack OTel collector, which acts as a [gateway](/use-cases/observability/clickstack/ingesting-data/otel-collector#collector-roles), typically deployed once per cluster, data center, or region. The [gateway](/use-cases/observability/clickstack/ingesting-data/otel-collector#collector-roles) receives OTLP events from agents or applications and handles ingestion into ClickHouse. See [OTel collector](/use-cases/observability/clickstack/ingesting-data/otel-collector) for more details. These agents can be other instances of the OTel collector or alternative technologies such as [Fluentd](https://www.fluentd.org/) or [Vector](https://vector.dev/).
+
+:::note OpenTelemetry compatibility
+While ClickStack offers its own language SDKs and a custom distribution of the OpenTelemetry collector with enhanced telemetry and features, users can also use their existing OpenTelemetry SDKs and agents seamlessly.
+:::
+
+
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/schemas.md b/docs/use-cases/observability/clickstack/ingesting-data/schemas.md
new file mode 100644
index 00000000000..b7c8fca4be6
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/schemas.md
@@ -0,0 +1,328 @@
+---
+slug: /use-cases/observability/clickstack/ingesting-data/schemas
+pagination_prev: null
+pagination_next: null
+description: 'Tables and schemas used by ClickStack - The ClickHouse Observability Stack'
+sidebar_label: 'Tables and Schemas'
+title: 'Tables and schemas used by ClickStack'
+---
+
+The ClickStack OpenTelemetry (OTel) collector uses the [ClickHouse exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/clickhouseexporter/README.md) to create tables in ClickHouse and insert data.
+
+The following tables are created for each data type in the `default` database. Users can change this target database by modifying the environment variable `HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE` for the image hosting the OTel collector.
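+
+For example, when running the collector container with Docker, you might override the target database as follows. This is a sketch only; the image name and database name are illustrative:
+
+```bash
+# Illustrative: substitute the OTel collector image/tag used by your ClickStack deployment
+docker run -e HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE=observability <otel-collector-image>
+```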
+
+## Logs {#logs}
+
+```sql
+CREATE TABLE otel_logs
+(
+ `Timestamp` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `TimestampTime` DateTime DEFAULT toDateTime(Timestamp),
+ `TraceId` String CODEC(ZSTD(1)),
+ `SpanId` String CODEC(ZSTD(1)),
+ `TraceFlags` UInt8,
+ `SeverityText` LowCardinality(String) CODEC(ZSTD(1)),
+ `SeverityNumber` UInt8,
+ `ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
+ `Body` String CODEC(ZSTD(1)),
+ `ResourceSchemaUrl` LowCardinality(String) CODEC(ZSTD(1)),
+ `ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `ScopeSchemaUrl` LowCardinality(String) CODEC(ZSTD(1)),
+ `ScopeName` String CODEC(ZSTD(1)),
+ `ScopeVersion` LowCardinality(String) CODEC(ZSTD(1)),
+ `ScopeAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `LogAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ INDEX idx_trace_id TraceId TYPE bloom_filter(0.001) GRANULARITY 1,
+ INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_log_attr_key mapKeys(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_log_attr_value mapValues(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_body Body TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 8
+)
+ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
+PARTITION BY toDate(TimestampTime)
+PRIMARY KEY (ServiceName, TimestampTime)
+ORDER BY (ServiceName, TimestampTime, Timestamp)
+```
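+
+Because the table is ordered by `(ServiceName, TimestampTime, Timestamp)`, queries that filter on a service name and a time window can be served efficiently from the primary index. As a quick sanity check after ingestion, a sketch using `clickhouse-client` against the default `otel_logs` table:
+
+```bash
+clickhouse-client --query "
+  SELECT Timestamp, SeverityText, Body
+  FROM otel_logs
+  WHERE ServiceName = 'my-service'
+    AND TimestampTime >= now() - INTERVAL 1 HOUR
+  ORDER BY Timestamp DESC
+  LIMIT 20"
+```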
+
+## Traces {#traces}
+
+```sql
+CREATE TABLE otel_traces
+(
+ `Timestamp` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `TraceId` String CODEC(ZSTD(1)),
+ `SpanId` String CODEC(ZSTD(1)),
+ `ParentSpanId` String CODEC(ZSTD(1)),
+ `TraceState` String CODEC(ZSTD(1)),
+ `SpanName` LowCardinality(String) CODEC(ZSTD(1)),
+ `SpanKind` LowCardinality(String) CODEC(ZSTD(1)),
+ `ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
+ `ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `ScopeName` String CODEC(ZSTD(1)),
+ `ScopeVersion` String CODEC(ZSTD(1)),
+ `SpanAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `Duration` UInt64 CODEC(ZSTD(1)),
+ `StatusCode` LowCardinality(String) CODEC(ZSTD(1)),
+ `StatusMessage` String CODEC(ZSTD(1)),
+ `Events.Timestamp` Array(DateTime64(9)) CODEC(ZSTD(1)),
+ `Events.Name` Array(LowCardinality(String)) CODEC(ZSTD(1)),
+ `Events.Attributes` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
+ `Links.TraceId` Array(String) CODEC(ZSTD(1)),
+ `Links.SpanId` Array(String) CODEC(ZSTD(1)),
+ `Links.TraceState` Array(String) CODEC(ZSTD(1)),
+ `Links.Attributes` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
+ INDEX idx_trace_id TraceId TYPE bloom_filter(0.001) GRANULARITY 1,
+ INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_span_attr_key mapKeys(SpanAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_span_attr_value mapValues(SpanAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_duration Duration TYPE minmax GRANULARITY 1
+)
+ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
+PARTITION BY toDate(Timestamp)
+ORDER BY (ServiceName, SpanName, toDateTime(Timestamp))
+```
+
+## Metrics {#metrics}
+
+### Gauge metrics {#gauge}
+
+```sql
+CREATE TABLE otel_metrics_gauge
+(
+ `ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `ResourceSchemaUrl` String CODEC(ZSTD(1)),
+ `ScopeName` String CODEC(ZSTD(1)),
+ `ScopeVersion` String CODEC(ZSTD(1)),
+ `ScopeAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `ScopeDroppedAttrCount` UInt32 CODEC(ZSTD(1)),
+ `ScopeSchemaUrl` String CODEC(ZSTD(1)),
+ `ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
+ `MetricName` String CODEC(ZSTD(1)),
+ `MetricDescription` String CODEC(ZSTD(1)),
+ `MetricUnit` String CODEC(ZSTD(1)),
+ `Attributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `StartTimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `TimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `Value` Float64 CODEC(ZSTD(1)),
+ `Flags` UInt32 CODEC(ZSTD(1)),
+ `Exemplars.FilteredAttributes` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
+ `Exemplars.TimeUnix` Array(DateTime64(9)) CODEC(ZSTD(1)),
+ `Exemplars.Value` Array(Float64) CODEC(ZSTD(1)),
+ `Exemplars.SpanId` Array(String) CODEC(ZSTD(1)),
+ `Exemplars.TraceId` Array(String) CODEC(ZSTD(1)),
+ INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
+)
+ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
+PARTITION BY toDate(TimeUnix)
+ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
+```
+
+### Sum metrics {#sum}
+
+```sql
+CREATE TABLE otel_metrics_sum
+(
+ `ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `ResourceSchemaUrl` String CODEC(ZSTD(1)),
+ `ScopeName` String CODEC(ZSTD(1)),
+ `ScopeVersion` String CODEC(ZSTD(1)),
+ `ScopeAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `ScopeDroppedAttrCount` UInt32 CODEC(ZSTD(1)),
+ `ScopeSchemaUrl` String CODEC(ZSTD(1)),
+ `ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
+ `MetricName` String CODEC(ZSTD(1)),
+ `MetricDescription` String CODEC(ZSTD(1)),
+ `MetricUnit` String CODEC(ZSTD(1)),
+ `Attributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `StartTimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `TimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `Value` Float64 CODEC(ZSTD(1)),
+ `Flags` UInt32 CODEC(ZSTD(1)),
+ `Exemplars.FilteredAttributes` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
+ `Exemplars.TimeUnix` Array(DateTime64(9)) CODEC(ZSTD(1)),
+ `Exemplars.Value` Array(Float64) CODEC(ZSTD(1)),
+ `Exemplars.SpanId` Array(String) CODEC(ZSTD(1)),
+ `Exemplars.TraceId` Array(String) CODEC(ZSTD(1)),
+ `AggregationTemporality` Int32 CODEC(ZSTD(1)),
+ `IsMonotonic` Bool CODEC(Delta(1), ZSTD(1)),
+ INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
+)
+ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
+PARTITION BY toDate(TimeUnix)
+ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
+```
+
+### Histogram metrics {#histogram}
+
+```sql
+CREATE TABLE otel_metrics_histogram
+(
+ `ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `ResourceSchemaUrl` String CODEC(ZSTD(1)),
+ `ScopeName` String CODEC(ZSTD(1)),
+ `ScopeVersion` String CODEC(ZSTD(1)),
+ `ScopeAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `ScopeDroppedAttrCount` UInt32 CODEC(ZSTD(1)),
+ `ScopeSchemaUrl` String CODEC(ZSTD(1)),
+ `ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
+ `MetricName` String CODEC(ZSTD(1)),
+ `MetricDescription` String CODEC(ZSTD(1)),
+ `MetricUnit` String CODEC(ZSTD(1)),
+ `Attributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `StartTimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `TimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `Count` UInt64 CODEC(Delta(8), ZSTD(1)),
+ `Sum` Float64 CODEC(ZSTD(1)),
+ `BucketCounts` Array(UInt64) CODEC(ZSTD(1)),
+ `ExplicitBounds` Array(Float64) CODEC(ZSTD(1)),
+ `Exemplars.FilteredAttributes` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
+ `Exemplars.TimeUnix` Array(DateTime64(9)) CODEC(ZSTD(1)),
+ `Exemplars.Value` Array(Float64) CODEC(ZSTD(1)),
+ `Exemplars.SpanId` Array(String) CODEC(ZSTD(1)),
+ `Exemplars.TraceId` Array(String) CODEC(ZSTD(1)),
+ `Flags` UInt32 CODEC(ZSTD(1)),
+ `Min` Float64 CODEC(ZSTD(1)),
+ `Max` Float64 CODEC(ZSTD(1)),
+ `AggregationTemporality` Int32 CODEC(ZSTD(1)),
+ INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
+)
+ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
+PARTITION BY toDate(TimeUnix)
+ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
+SETTINGS index_granularity = 8192
+```
+
+### Exponential histograms {#exponential-histograms}
+
+```sql
+CREATE TABLE otel_metrics_exponential_histogram
+(
+ `ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `ResourceSchemaUrl` String CODEC(ZSTD(1)),
+ `ScopeName` String CODEC(ZSTD(1)),
+ `ScopeVersion` String CODEC(ZSTD(1)),
+ `ScopeAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `ScopeDroppedAttrCount` UInt32 CODEC(ZSTD(1)),
+ `ScopeSchemaUrl` String CODEC(ZSTD(1)),
+ `ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
+ `MetricName` String CODEC(ZSTD(1)),
+ `MetricDescription` String CODEC(ZSTD(1)),
+ `MetricUnit` String CODEC(ZSTD(1)),
+ `Attributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `StartTimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `TimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `Count` UInt64 CODEC(Delta(8), ZSTD(1)),
+ `Sum` Float64 CODEC(ZSTD(1)),
+ `Scale` Int32 CODEC(ZSTD(1)),
+ `ZeroCount` UInt64 CODEC(ZSTD(1)),
+ `PositiveOffset` Int32 CODEC(ZSTD(1)),
+ `PositiveBucketCounts` Array(UInt64) CODEC(ZSTD(1)),
+ `NegativeOffset` Int32 CODEC(ZSTD(1)),
+ `NegativeBucketCounts` Array(UInt64) CODEC(ZSTD(1)),
+ `Exemplars.FilteredAttributes` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
+ `Exemplars.TimeUnix` Array(DateTime64(9)) CODEC(ZSTD(1)),
+ `Exemplars.Value` Array(Float64) CODEC(ZSTD(1)),
+ `Exemplars.SpanId` Array(String) CODEC(ZSTD(1)),
+ `Exemplars.TraceId` Array(String) CODEC(ZSTD(1)),
+ `Flags` UInt32 CODEC(ZSTD(1)),
+ `Min` Float64 CODEC(ZSTD(1)),
+ `Max` Float64 CODEC(ZSTD(1)),
+ `AggregationTemporality` Int32 CODEC(ZSTD(1)),
+ INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
+)
+ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
+PARTITION BY toDate(TimeUnix)
+ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
+```
+
+### Summary table {#summary-table}
+
+```sql
+CREATE TABLE otel_metrics_summary
+(
+ `ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `ResourceSchemaUrl` String CODEC(ZSTD(1)),
+ `ScopeName` String CODEC(ZSTD(1)),
+ `ScopeVersion` String CODEC(ZSTD(1)),
+ `ScopeAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `ScopeDroppedAttrCount` UInt32 CODEC(ZSTD(1)),
+ `ScopeSchemaUrl` String CODEC(ZSTD(1)),
+ `ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
+ `MetricName` String CODEC(ZSTD(1)),
+ `MetricDescription` String CODEC(ZSTD(1)),
+ `MetricUnit` String CODEC(ZSTD(1)),
+ `Attributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `StartTimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `TimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `Count` UInt64 CODEC(Delta(8), ZSTD(1)),
+ `Sum` Float64 CODEC(ZSTD(1)),
+ `ValueAtQuantiles.Quantile` Array(Float64) CODEC(ZSTD(1)),
+ `ValueAtQuantiles.Value` Array(Float64) CODEC(ZSTD(1)),
+ `Flags` UInt32 CODEC(ZSTD(1)),
+ INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
+)
+ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
+PARTITION BY toDate(TimeUnix)
+ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
+```
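+
+All metric tables share the `(ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))` ordering key, which keeps per-series time scans cheap. For example, charting a gauge per minute with `clickhouse-client`; this is a sketch, and the metric name is illustrative:
+
+```bash
+clickhouse-client --query "
+  SELECT toStartOfMinute(TimeUnix) AS minute, avg(Value) AS avg_value
+  FROM otel_metrics_gauge
+  WHERE ServiceName = 'my-service'
+    AND MetricName = 'process.memory.usage'  -- illustrative metric name
+  GROUP BY minute
+  ORDER BY minute"
+```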
+
+## Sessions {#sessions}
+
+```sql
+CREATE TABLE hyperdx_sessions
+(
+ `Timestamp` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `TimestampTime` DateTime DEFAULT toDateTime(Timestamp),
+ `TraceId` String CODEC(ZSTD(1)),
+ `SpanId` String CODEC(ZSTD(1)),
+ `TraceFlags` UInt8,
+ `SeverityText` LowCardinality(String) CODEC(ZSTD(1)),
+ `SeverityNumber` UInt8,
+ `ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
+ `Body` String CODEC(ZSTD(1)),
+ `ResourceSchemaUrl` LowCardinality(String) CODEC(ZSTD(1)),
+ `ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `ScopeSchemaUrl` LowCardinality(String) CODEC(ZSTD(1)),
+ `ScopeName` String CODEC(ZSTD(1)),
+ `ScopeVersion` LowCardinality(String) CODEC(ZSTD(1)),
+ `ScopeAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `LogAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ INDEX idx_trace_id TraceId TYPE bloom_filter(0.001) GRANULARITY 1,
+ INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_log_attr_key mapKeys(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_log_attr_value mapValues(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_body Body TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 8
+)
+ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
+PARTITION BY toDate(TimestampTime)
+PRIMARY KEY (ServiceName, TimestampTime)
+ORDER BY (ServiceName, TimestampTime, Timestamp)
+```
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/sdks/browser.md b/docs/use-cases/observability/clickstack/ingesting-data/sdks/browser.md
new file mode 100644
index 00000000000..e19db263a5a
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/sdks/browser.md
@@ -0,0 +1,193 @@
+---
+slug: /use-cases/observability/clickstack/sdks/browser
+pagination_prev: null
+pagination_next: null
+sidebar_position: 0
+description: 'Browser SDK for ClickStack - The ClickHouse Observability Stack'
+title: 'Browser JS'
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+The ClickStack browser SDK allows you to instrument your frontend application to
+send events to ClickStack, letting you view network
+requests and exceptions alongside backend events in a single timeline.
+
+Additionally, it'll automatically capture and correlate session replay data, so
+you can visually step through and debug what a user was seeing while using your
+application.
+
+This guide integrates the following:
+
+- **Console Logs**
+- **Session Replays**
+- **XHR/Fetch/Websocket Requests**
+- **Exceptions**
+
+## Getting Started {#getting-started}
+
+
+
+
+
+
+**Install via package import (Recommended)**
+
+Use the following command to install the [browser package](https://www.npmjs.com/package/@hyperdx/browser).
+
+```bash
+npm install @hyperdx/browser
+```
+
+**Initialize ClickStack**
+
+```js
+import HyperDX from '@hyperdx/browser';
+
+HyperDX.init({
+ url: 'http://localhost:4318',
+ apiKey: 'YOUR_INGESTION_API_KEY',
+ service: 'my-frontend-app',
+ tracePropagationTargets: [/api.myapp.domain/i], // Set to link traces from frontend to backend requests
+ consoleCapture: true, // Capture console logs (default false)
+ advancedNetworkCapture: true, // Capture full HTTP request/response headers and bodies (default false)
+});
+```
+
+
+
+
+**Install via Script Tag (Alternative)**
+
+You can also include and install the script via a script tag as opposed to
+installing via NPM. This will expose the `HyperDX` global variable and can be
+used in the same way as the NPM package.
+
+This is recommended if your site is not currently built using a bundler.
+
+```html
+<!-- Sketch: load the UMD bundle from a CDN; the path/version are illustrative - pin a real version in production -->
+<script src="//www.unpkg.com/@hyperdx/browser@latest/build/index.js"></script>
+<script>
+  window.HyperDX.init({
+    url: 'http://localhost:4318',
+    apiKey: 'YOUR_INGESTION_API_KEY',
+    service: 'my-frontend-app',
+  });
+</script>
+```
+
+
+
+
+
+### Options {#options}
+
+- `apiKey` - Your ClickStack Ingestion API Key.
+- `service` - The service name events will show up as in HyperDX UI.
+- `tracePropagationTargets` - A list of regex patterns to match against HTTP
+  requests to link frontend and backend traces. A `traceparent` header is added
+  to all requests matching any of the patterns. This should be set to your
+  backend API domain (ex. `api.yoursite.com`).
+- `consoleCapture` - (Optional) Capture all console logs (default `false`).
+- `advancedNetworkCapture` - (Optional) Capture full request/response headers
+  and bodies (default `false`).
+- `url` - (Optional) The OpenTelemetry collector URL, only needed for
+ self-hosted instances.
+- `maskAllInputs` - (Optional) Whether to mask all input fields in session
+ replay (default `false`).
+- `maskAllText` - (Optional) Whether to mask all text in session replay (default
+ `false`).
+- `disableIntercom` - (Optional) Whether to disable Intercom integration (default `false`)
+- `disableReplay` - (Optional) Whether to disable session replay (default `false`)
+
+## Additional configuration {#additional-configuration}
+
+### Attach user information or metadata {#attach-user-information-or-metadata}
+
+Attaching user information will allow you to search/filter sessions and events
+in the HyperDX UI. This can be called at any point during the client session. The
+current client session and all events sent after the call will be associated
+with the user information.
+
+`userEmail`, `userName`, and `teamName` will populate the sessions UI with the
+corresponding values, but can be omitted. Any other additional values can be
+specified and used to search for events.
+
+```js
+HyperDX.setGlobalAttributes({
+ userId: user.id,
+ userEmail: user.email,
+ userName: user.name,
+ teamName: user.team.name,
+ // Other custom properties...
+});
+```
+
+### Auto capture React error boundary errors {#auto-capture-react-error-boundary-errors}
+
+If you're using React, you can automatically capture errors that occur within
+React error boundaries by passing your error boundary component
+into the `attachToReactErrorBoundary` function.
+
+```js
+// Import your ErrorBoundary (we're using react-error-boundary as an example)
+import { ErrorBoundary } from 'react-error-boundary';
+
+// This will hook into the ErrorBoundary component and capture any errors that occur
+// within any instance of it.
+HyperDX.attachToReactErrorBoundary(ErrorBoundary);
+```
+
+### Send custom actions {#send-custom-actions}
+
+To explicitly track a specific application event (ex. sign up, submission,
+etc.), you can call the `addAction` function with an event name and optional
+event metadata.
+
+Example:
+
+```js
+HyperDX.addAction('Form-Completed', {
+ formId: 'signup-form',
+ formName: 'Signup Form',
+ formType: 'signup',
+});
+```
+
+### Enable network capture dynamically {#enable-network-capture-dynamically}
+
+To enable or disable network capture dynamically, simply invoke the `enableAdvancedNetworkCapture` or `disableAdvancedNetworkCapture` function as needed.
+
+```js
+HyperDX.enableAdvancedNetworkCapture();
+```
+
+### Enable resource timing for CORS requests {#enable-resource-timing-for-cors-requests}
+
+If your frontend application makes API requests to a different domain, you can
+optionally enable the [`Timing-Allow-Origin`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Timing-Allow-Origin) header to be sent with the response. This will allow ClickStack to capture fine-grained
+resource timing information for the request, such as DNS lookup, response
+download, etc., via [`PerformanceResourceTiming`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceResourceTiming).
+
+If you're using `express` with `cors` packages, you can use the following
+snippet to enable the header:
+
+```js
+var cors = require('cors');
+var onHeaders = require('on-headers');
+
+// ... all your stuff
+
+app.use(function (req, res, next) {
+ onHeaders(res, function () {
+ var allowOrigin = res.getHeader('Access-Control-Allow-Origin');
+ if (allowOrigin) {
+ res.setHeader('Timing-Allow-Origin', allowOrigin);
+ }
+ });
+ next();
+});
+app.use(cors());
+```
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/sdks/deno.md b/docs/use-cases/observability/clickstack/ingesting-data/sdks/deno.md
new file mode 100644
index 00000000000..394245a30d4
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/sdks/deno.md
@@ -0,0 +1,51 @@
+---
+slug: /use-cases/observability/clickstack/sdks/deno
+pagination_prev: null
+pagination_next: null
+sidebar_position: 6
+description: 'Deno SDK for ClickStack - The ClickHouse Observability Stack'
+title: 'Deno'
+---
+
+This guide integrates the following:
+
+- **Logs**
+
+:::note
+Currently only supports OpenTelemetry Logging. For tracing support, [see the following guide](https://dev.to/grunet/leveraging-opentelemetry-in-deno-45bj#a-minimal-interesting-example).
+:::
+
+## Logging {#logging}
+
+Logging is supported by exporting a custom logger for the `std/log` module.
+
+**Example usage:**
+
+```typescript
+import * as log from 'https://deno.land/std@0.203.0/log/mod.ts';
+import { OpenTelemetryHandler } from 'npm:@hyperdx/deno';
+
+log.setup({
+ handlers: {
+ otel: new OpenTelemetryHandler('DEBUG'),
+ },
+
+ loggers: {
+ 'my-otel-logger': {
+ level: 'DEBUG',
+ handlers: ['otel'],
+ },
+ },
+});
+
+log.getLogger('my-otel-logger').info('Hello from Deno!');
+```
+
+### Run the application {#run-the-application}
+
+```sh
+OTEL_EXPORTER_OTLP_HEADERS="authorization=<YOUR_INGESTION_API_KEY>" \
+OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
+OTEL_SERVICE_NAME="<YOUR_APP_NAME>" \
+deno run --allow-net --allow-env --allow-read --allow-sys --allow-run app.ts
+```
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/sdks/elixir.md b/docs/use-cases/observability/clickstack/ingesting-data/sdks/elixir.md
new file mode 100644
index 00000000000..e14dd5ca4c1
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/sdks/elixir.md
@@ -0,0 +1,59 @@
+---
+slug: /use-cases/observability/clickstack/sdks/elixir
+pagination_prev: null
+pagination_next: null
+sidebar_position: 1
+description: 'Elixir SDK for ClickStack - The ClickHouse Observability Stack'
+title: 'Elixir'
+---
+
+| Logs | Metrics | Traces |
+|:----:|:-------:|:------:|
+| ✅ | ✖️ | ✖️ |
+
+_🚧 OpenTelemetry metrics & tracing instrumentation coming soon!_
+
+## Getting started {#getting-started}
+
+### Install ClickStack logger backend package {#install-hyperdx-logger-backend-package}
+
+The package can be installed by adding `hyperdx` to your list of dependencies in
+`mix.exs`:
+
+```elixir
+def deps do
+ [
+ {:hyperdx, "~> 0.1.6"}
+ ]
+end
+```
+
+### Configure logger {#configure-logger}
+
+Add the following to your `config.exs` file:
+
+```elixir
+# config/releases.exs
+
+config :logger,
+ level: :info,
+ backends: [:console, {Hyperdx.Backend, :hyperdx}]
+```
+
+### Configure environment variables {#configure-environment-variables}
+
+Afterwards you'll need to configure the following environment variables in your
+shell to ship telemetry to ClickStack:
+
+```bash
+export HYPERDX_API_KEY='<YOUR_INGESTION_API_KEY>' \
+OTEL_SERVICE_NAME='<YOUR_APP_NAME>'
+```
+
+_The `OTEL_SERVICE_NAME` environment variable is used to identify your service
+in the HyperDX app; it can be any name you want._
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/sdks/golang.md b/docs/use-cases/observability/clickstack/ingesting-data/sdks/golang.md
new file mode 100644
index 00000000000..ba49cbf7bd0
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/sdks/golang.md
@@ -0,0 +1,244 @@
+---
+slug: /use-cases/observability/clickstack/sdks/golang
+pagination_prev: null
+pagination_next: null
+sidebar_position: 2
+description: 'Golang SDK for ClickStack - The ClickHouse Observability Stack'
+title: 'Golang'
+---
+
+ClickStack uses the OpenTelemetry standard for collecting telemetry data (logs and
+traces). Traces are auto-generated with automatic instrumentation, so manual
+instrumentation isn't required to get value out of tracing.
+
+**This guide integrates:**
+
+| Logs | Metrics | Traces |
+|:----:|:-------:|:------:|
+| ✅ | ✅ | ✅ |
+
+## Getting started {#getting-started}
+
+### Install OpenTelemetry instrumentation packages {#install-opentelemetry}
+
+To install the OpenTelemetry and HyperDX Go packages, use the command below. It is recommended to check out the [current instrumentation packages](https://github.com/open-telemetry/opentelemetry-go-contrib/tree/v1.4.0/instrumentation#instrumentation-packages) and install the necessary packages to ensure that the trace information is attached correctly.
+
+```bash
+go get -u go.opentelemetry.io/otel
+go get -u github.com/hyperdxio/otel-config-go
+go get -u github.com/hyperdxio/opentelemetry-go
+go get -u github.com/hyperdxio/opentelemetry-logs-go
+```
+
+### Native HTTP server example (net/http) {#native-http-server-example}
+
+For this example, we will be using `net/http/otelhttp`.
+
+```sh
+go get -u go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp
+```
+
+Refer to the commented sections to learn how to instrument your Go application.
+
+```go
+
+package main
+
+import (
+ "context"
+ "io"
+ "log"
+ "net/http"
+ "os"
+
+ "github.com/hyperdxio/opentelemetry-go/otelzap"
+ "github.com/hyperdxio/opentelemetry-logs-go/exporters/otlp/otlplogs"
+ "github.com/hyperdxio/otel-config-go/otelconfig"
+ "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
+ "go.opentelemetry.io/otel/trace"
+ "go.uber.org/zap"
+ sdk "github.com/hyperdxio/opentelemetry-logs-go/sdk/logs"
+ semconv "go.opentelemetry.io/otel/semconv/v1.21.0"
+ "go.opentelemetry.io/otel/sdk/resource"
+)
+
+// configure common attributes for all logs
+func newResource() *resource.Resource {
+ hostName, _ := os.Hostname()
+ return resource.NewWithAttributes(
+ semconv.SchemaURL,
+ semconv.ServiceVersion("1.0.0"),
+ semconv.HostName(hostName),
+ )
+}
+
+// attach trace id to the log
+func WithTraceMetadata(ctx context.Context, logger *zap.Logger) *zap.Logger {
+ spanContext := trace.SpanContextFromContext(ctx)
+ if !spanContext.IsValid() {
+ // ctx does not contain a valid span.
+ // There is no trace metadata to add.
+ return logger
+ }
+ return logger.With(
+ zap.String("trace_id", spanContext.TraceID().String()),
+ zap.String("span_id", spanContext.SpanID().String()),
+ )
+}
+
+func main() {
+ // Initialize otel config and use it across the entire app
+ otelShutdown, err := otelconfig.ConfigureOpenTelemetry()
+ if err != nil {
+ log.Fatalf("error setting up OTel SDK - %v", err)
+ }
+ defer otelShutdown()
+
+ ctx := context.Background()
+
+ // configure opentelemetry logger provider
+ logExporter, _ := otlplogs.NewExporter(ctx)
+ loggerProvider := sdk.NewLoggerProvider(
+ sdk.WithBatcher(logExporter),
+ )
+ // gracefully shutdown logger to flush accumulated signals before program finish
+ defer loggerProvider.Shutdown(ctx)
+
+ // create new logger with opentelemetry zap core and set it globally
+ logger := zap.New(otelzap.NewOtelCore(loggerProvider))
+ zap.ReplaceGlobals(logger)
+ logger.Warn("hello world", zap.String("foo", "bar"))
+
+ http.Handle("/", otelhttp.NewHandler(wrapHandler(logger, ExampleHandler), "example-service"))
+
+ port := os.Getenv("PORT")
+ if port == "" {
+ port = "7777"
+ }
+
+ logger.Info("** Service Started on Port " + port + " **")
+ if err := http.ListenAndServe(":"+port, nil); err != nil {
+ logger.Fatal(err.Error())
+ }
+}
+
+// Use this to wrap all handlers to add trace metadata to the logger
+func wrapHandler(logger *zap.Logger, handler http.HandlerFunc) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ logger := WithTraceMetadata(r.Context(), logger)
+ logger.Info("request received", zap.String("url", r.URL.Path), zap.String("method", r.Method))
+ handler(w, r)
+ logger.Info("request completed", zap.String("path", r.URL.Path), zap.String("method", r.Method))
+ }
+}
+
+func ExampleHandler(w http.ResponseWriter, r *http.Request) {
+ w.Header().Add("Content-Type", "application/json")
+ io.WriteString(w, `{"status":"ok"}`)
+}
+```
+
+
+### Gin application example {#gin-application-example}
+
+For this example, we will be using `gin-gonic/gin`.
+
+```sh
+go get -u go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin
+```
+
+Refer to the commented sections to learn how to instrument your Go application.
+
+```go
+
+package main
+
+import (
+ "context"
+ "log"
+ "net/http"
+
+ "github.com/gin-gonic/gin"
+ "github.com/hyperdxio/opentelemetry-go/otelzap"
+ "github.com/hyperdxio/opentelemetry-logs-go/exporters/otlp/otlplogs"
+ sdk "github.com/hyperdxio/opentelemetry-logs-go/sdk/logs"
+ "github.com/hyperdxio/otel-config-go/otelconfig"
+ "go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin"
+ "go.opentelemetry.io/otel/trace"
+ "go.uber.org/zap"
+)
+
+// attach trace id to the log
+func WithTraceMetadata(ctx context.Context, logger *zap.Logger) *zap.Logger {
+ spanContext := trace.SpanContextFromContext(ctx)
+ if !spanContext.IsValid() {
+ // ctx does not contain a valid span.
+ // There is no trace metadata to add.
+ return logger
+ }
+ return logger.With(
+ zap.String("trace_id", spanContext.TraceID().String()),
+ zap.String("span_id", spanContext.SpanID().String()),
+ )
+}
+
+func main() {
+ // Initialize otel config and use it across the entire app
+ otelShutdown, err := otelconfig.ConfigureOpenTelemetry()
+ if err != nil {
+ log.Fatalf("error setting up OTel SDK - %v", err)
+ }
+
+ defer otelShutdown()
+
+ ctx := context.Background()
+
+ // configure opentelemetry logger provider
+ logExporter, _ := otlplogs.NewExporter(ctx)
+ loggerProvider := sdk.NewLoggerProvider(
+ sdk.WithBatcher(logExporter),
+ )
+
+ // gracefully shutdown logger to flush accumulated signals before program finish
+ defer loggerProvider.Shutdown(ctx)
+
+ // create new logger with opentelemetry zap core and set it globally
+ logger := zap.New(otelzap.NewOtelCore(loggerProvider))
+ zap.ReplaceGlobals(logger)
+
+ // Create a new Gin router
+ router := gin.Default()
+
+ router.Use(otelgin.Middleware("service-name"))
+
+ // Define a route that responds to GET requests on the root URL
+ router.GET("/", func(c *gin.Context) {
+ _logger := WithTraceMetadata(c.Request.Context(), logger)
+ _logger.Info("Hello World!")
+ c.String(http.StatusOK, "Hello World!")
+ })
+
+ // Run the server on port 7777
+ router.Run(":7777")
+}
+```
+
+
+### Configure environment variables {#configure-environment-variables}
+
+Afterwards you'll need to configure the following environment variables in your shell to ship telemetry to ClickStack:
+
+```sh
+export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
+OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
+OTEL_SERVICE_NAME='<YOUR_APP_NAME>' \
+OTEL_EXPORTER_OTLP_HEADERS='authorization=<YOUR_INGESTION_API_KEY>'
+```
+
+The `OTEL_EXPORTER_OTLP_HEADERS` environment variable contains the API Key available via HyperDX app in `Team Settings → API Keys`.
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/sdks/index.md b/docs/use-cases/observability/clickstack/ingesting-data/sdks/index.md
new file mode 100644
index 00000000000..bbaf8fbe9ef
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/sdks/index.md
@@ -0,0 +1,67 @@
+---
+slug: /use-cases/observability/clickstack/sdks
+pagination_prev: null
+pagination_next: null
+description: 'Language SDKs for ClickStack - The ClickHouse Observability Stack'
+title: 'Language SDKs'
+---
+
+Users typically send data to ClickStack via the **OpenTelemetry (OTel) collector**, either directly from language SDKs or through intermediate OpenTelemetry collectors acting as agents, e.g. collecting infrastructure metrics and logs.
+
+Language SDKs are responsible for collecting telemetry from within your application - most notably **traces** and **logs** - and exporting this data via the OTLP endpoint to the OpenTelemetry collector, which handles ingestion into ClickHouse.
+
+In browser-based environments, SDKs may also be responsible for collecting **session data**, including UI events, clicks, and navigation, thus enabling replays of user sessions.
+
+## How It Works {#how-it-works}
+
+1. Your application uses a ClickStack SDK (e.g., Node.js, Python, Go). These SDKs are based on the OpenTelemetry SDKs with additional features and usability enhancements.
+2. The SDK collects and exports traces and logs via OTLP (HTTP or gRPC).
+3. The OpenTelemetry collector receives the telemetry and writes it to ClickHouse via the configured exporters.
+
+## Supported Languages {#supported-languages}
+
+:::note OpenTelemetry compatibility
+While ClickStack offers its own language SDKs with enhanced telemetry and features, users can also use their existing OpenTelemetry SDKs seamlessly.
+:::
+
+
+
+| Language | Description | Link |
+|----------|-------------|------|
+| Browser | JavaScript SDK for Browser-based applications | [Documentation](/use-cases/observability/clickstack/sdks/browser) |
+| Elixir | Elixir applications | [Documentation](/use-cases/observability/clickstack/sdks/elixir) |
+| Go | Go applications and microservices | [Documentation](/use-cases/observability/clickstack/sdks/golang) |
+| Java | Java applications | [Documentation](/use-cases/observability/clickstack/sdks/java) |
+| NestJS | NestJS applications | [Documentation](/use-cases/observability/clickstack/sdks/nestjs) |
+| Next.js | Next.js applications | [Documentation](/use-cases/observability/clickstack/sdks/nextjs) |
+| Node.js | JavaScript runtime for server-side applications | [Documentation](/use-cases/observability/clickstack/sdks/nodejs) |
+| Deno | Deno applications | [Documentation](/use-cases/observability/clickstack/sdks/deno) |
+| Python | Python applications and web services | [Documentation](/use-cases/observability/clickstack/sdks/python) |
+| React Native | React Native mobile applications | [Documentation](/use-cases/observability/clickstack/sdks/react-native) |
+| Ruby | Ruby on Rails applications and web services | [Documentation](/use-cases/observability/clickstack/sdks/ruby-on-rails) |
+
+## Securing with API Key {#securing-api-key}
+
+In order to send data to ClickStack via the OTel collector, SDKs will need to specify an ingestion API key. This can either be set using an `init` function in the SDK or an `OTEL_EXPORTER_OTLP_HEADERS` environment variable:
+
+```bash
+OTEL_EXPORTER_OTLP_HEADERS='authorization=<YOUR_INGESTION_API_KEY>'
+```
+
+This API key is generated by the HyperDX application, and is available via the app in `Team Settings → API Keys`.
+
+For most [language SDKs](/use-cases/observability/clickstack/sdks) and telemetry libraries that support OpenTelemetry, you can simply set the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable in your application or specify it during initialization of the SDK:
+
+```bash
+export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
+```
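+
+Putting both settings together, a typical shell configuration for an OpenTelemetry-compatible SDK pointing at a local ClickStack deployment might look like the following; the values are illustrative:
+
+```bash
+export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
+export OTEL_EXPORTER_OTLP_HEADERS='authorization=<YOUR_INGESTION_API_KEY>'
+export OTEL_SERVICE_NAME='my-service'
+```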
+
+## Kubernetes Integration {#kubernetes-integration}
+
+All SDKs support automatic correlation with Kubernetes metadata (pod name, namespace, etc.) when running in a Kubernetes environment. This allows you to:
+
+- View Kubernetes metrics for pods and nodes associated with your services
+- Correlate application logs and traces with infrastructure metrics
+- Track resource usage and performance across your Kubernetes cluster
+
+To enable this feature, configure the OpenTelemetry collector to forward resource tags to pods. See the [Kubernetes integration guide](/use-cases/observability/clickstack/ingesting-data/kubernetes#forwarding-resouce-tags-to-pods) for detailed setup instructions.
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/sdks/java.md b/docs/use-cases/observability/clickstack/ingesting-data/sdks/java.md
new file mode 100644
index 00000000000..8b9908a257b
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/sdks/java.md
@@ -0,0 +1,66 @@
+---
+slug: /use-cases/observability/clickstack/sdks/java
+pagination_prev: null
+pagination_next: null
+sidebar_position: 3
+description: 'Java SDK for ClickStack - The ClickHouse Observability Stack'
+title: 'Java'
+---
+
+ClickStack uses the OpenTelemetry standard for collecting telemetry data (logs and
+traces). Traces are auto-generated with automatic instrumentation, so manual
+instrumentation isn't required to get value out of tracing.
+
+**This guide integrates:**
+
+| Logs | Metrics | Traces |
+|:----:|:-------:|:------:|
+| ✅ | ✅ | ✅ |
+
+## Getting started {#getting-started}
+
+:::note
+At present, the integration is compatible exclusively with **Java 8+**
+:::
+
+### Download OpenTelemetry Java agent {#download-opentelemtry-java-agent}
+
+Download [`opentelemetry-javaagent.jar`](https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar)
+and place the JAR in your preferred directory. The JAR file contains the agent
+and instrumentation libraries. You can also use the following command to
+download the agent:
+
+```bash
+curl -L -O https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar
+```
+
+### Configure environment variables {#configure-environment-variables}
+
+Afterwards you'll need to configure the following environment variables in your shell to ship telemetry to ClickStack:
+
+```bash
+export JAVA_TOOL_OPTIONS="-javaagent:PATH/TO/opentelemetry-javaagent.jar" \
+OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
+OTEL_EXPORTER_OTLP_HEADERS='authorization=<YOUR_INGESTION_API_KEY>' \
+OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
+OTEL_LOGS_EXPORTER=otlp \
+OTEL_SERVICE_NAME='<YOUR_APP_NAME>'
+```
+
+_The `OTEL_SERVICE_NAME` environment variable is used to identify your service in the HyperDX app; it can be any name you want._
+
+The `OTEL_EXPORTER_OTLP_HEADERS` environment variable contains the API Key available via HyperDX app in `Team Settings → API Keys`.
+
+### Run the application with OpenTelemetry Java agent {#run-the-application-with-otel-java-agent}
+
+```sh
+java -jar target/<YOUR_APPLICATION_JAR>.jar
+```
+
+Read more about Java OpenTelemetry instrumentation here: [https://opentelemetry.io/docs/instrumentation/java/](https://opentelemetry.io/docs/instrumentation/java/)
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/sdks/nestjs.md b/docs/use-cases/observability/clickstack/ingesting-data/sdks/nestjs.md
new file mode 100644
index 00000000000..336f762267f
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/sdks/nestjs.md
@@ -0,0 +1,127 @@
+---
+slug: /use-cases/observability/clickstack/sdks/nestjs
+pagination_prev: null
+pagination_next: null
+sidebar_position: 4
+description: 'NestJS SDK for ClickStack - The ClickHouse Observability Stack'
+title: 'NestJS'
+---
+
+The ClickStack NestJS integration allows you to create a logger or use the default
+logger to send logs to ClickStack (powered by [nest-winston](https://www.npmjs.com/package/nest-winston?activeTab=readme)).
+
+**This guide integrates:**
+
+| Logs | Metrics | Traces |
+|:----:|:-------:|:------:|
+| ✅ | ✖️ | ✖️ |
+
+_To send over metrics or APM/traces, you'll need to add the corresponding language
+integration to your application as well._
+
+## Getting started {#getting-started}
+
+Import `HyperDXNestLoggerModule` into the root `AppModule` and use the `forRoot()`
+method to configure it.
+
+```js
+import { Module } from '@nestjs/common';
+import { HyperDXNestLoggerModule } from '@hyperdx/node-logger';
+
+@Module({
+ imports: [
+ HyperDXNestLoggerModule.forRoot({
+ apiKey: '<YOUR_INGESTION_API_KEY>',
+ maxLevel: 'info',
+ service: 'my-app',
+ }),
+ ],
+})
+export class AppModule {}
+```
+
+Afterward, the winston instance will be available to inject across the entire
+project using the `HDX_LOGGER_MODULE_PROVIDER` injection token:
+
+```js
+import { Controller, Inject } from '@nestjs/common';
+import { HyperDXNestLoggerModule, HyperDXNestLogger } from '@hyperdx/node-logger';
+
+@Controller('cats')
+export class CatsController {
+ constructor(
+ @Inject(HyperDXNestLoggerModule.HDX_LOGGER_MODULE_PROVIDER)
+ private readonly logger: HyperDXNestLogger,
+ ) { }
+
+ meow() {
+ this.logger.info({ message: '🐱' });
+ }
+}
+```
+
+### Replacing the Nest logger (also for bootstrapping) {#replacing-the-nest-logger}
+
+:::note Important
+By doing this, you give up the dependency injection, meaning that `forRoot` and `forRootAsync` are not needed and shouldn't be used. Remove them from your main module.
+:::
+
+Using the dependency injection has one minor drawback. Nest has to bootstrap the
+application first (instantiating modules and providers, injecting dependencies,
+etc.) and during this process the instance of `HyperDXNestLogger` is not yet
+available, which means that Nest falls back to the internal logger.
+
+One solution is to create the logger outside of the application lifecycle, using
+the `createLogger` function, and pass it to `NestFactory.create`. Nest will then
+wrap our custom logger (the same instance returned by the `createLogger` method)
+into the Logger class, forwarding all calls to it:
+
+Create the logger in the `main.ts` file
+
+```js
+import { HyperDXNestLoggerModule } from '@hyperdx/node-logger';
+
+async function bootstrap() {
+ const app = await NestFactory.create(AppModule, {
+ logger: HyperDXNestLoggerModule.createLogger({
+ apiKey: '<YOUR_INGESTION_API_KEY>',
+ maxLevel: 'info',
+ service: 'my-app',
+ })
+ });
+ await app.listen(3000);
+}
+bootstrap();
+```
+
+Change your main module to provide the Logger service:
+
+```js
+import { Logger, Module } from '@nestjs/common';
+
+@Module({
+ providers: [Logger],
+})
+export class AppModule {}
+```
+
+Then inject the logger simply by type hinting it with the Logger from `@nestjs/common`:
+
+```js
+import { Controller, Logger } from '@nestjs/common';
+
+@Controller('cats')
+export class CatsController {
+ constructor(private readonly logger: Logger) {}
+
+ meow() {
+ this.logger.log({ message: '🐱' });
+ }
+}
+```
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/sdks/nextjs.md b/docs/use-cases/observability/clickstack/ingesting-data/sdks/nextjs.md
new file mode 100644
index 00000000000..391550b1bce
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/sdks/nextjs.md
@@ -0,0 +1,107 @@
+---
+slug: /use-cases/observability/clickstack/sdks/nextjs
+pagination_prev: null
+pagination_next: null
+sidebar_position: 4
+description: 'Next.js SDK for ClickStack - The ClickHouse Observability Stack'
+title: 'Next.js'
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+ClickStack can ingest native OpenTelemetry traces from your
+[Next.js serverless functions](https://nextjs.org/docs/pages/building-your-application/optimizing/open-telemetry#manual-opentelemetry-configuration)
+in Next 13.2+.
+
+This guide integrates:
+
+- **Console Logs**
+- **Traces**
+
+:::note
+If you're looking for session replay/browser-side monitoring, you'll want to install the [Browser integration](/use-cases/observability/clickstack/sdks/browser) instead.
+:::
+
+## Installing {#installing}
+
+### Enable instrumentation hook (required for v15 and below) {#enable-instrumentation-hook}
+
+To get started, you'll need to enable the Next.js instrumentation hook by setting `experimental.instrumentationHook = true;` in your `next.config.js`.
+
+**Example:**
+
+```js
+const nextConfig = {
+ experimental: {
+ instrumentationHook: true,
+ },
+ // Ignore otel pkgs warnings
+ // https://github.com/open-telemetry/opentelemetry-js/issues/4173#issuecomment-1822938936
+ webpack: (
+ config,
+ { buildId, dev, isServer, defaultLoaders, nextRuntime, webpack },
+ ) => {
+ if (isServer) {
+ config.ignoreWarnings = [{ module: /opentelemetry/ }];
+ }
+ return config;
+ },
+};
+
+module.exports = nextConfig;
+```
+
+### Install ClickHouse OpenTelemetry SDK {#install-sdk}
+
+
+
+
+```bash
+npm install @hyperdx/node-opentelemetry
+```
+
+
+
+
+```bash
+yarn add @hyperdx/node-opentelemetry
+```
+
+
+
+
+### Create instrumentation files {#create-instrumentation-files}
+
+Create a file called `instrumentation.ts` (or `.js`) in your Next.js project root with the following contents:
+
+```js
+export async function register() {
+ if (process.env.NEXT_RUNTIME === 'nodejs') {
+ const { init } = await import('@hyperdx/node-opentelemetry');
+ init({
+ apiKey: '', // optionally configure via `HYPERDX_API_KEY` env var
+ service: '', // optionally configure via `OTEL_SERVICE_NAME` env var
+ additionalInstrumentations: [], // optional, default: []
+ });
+ }
+}
+```
+
+This will allow Next.js to import the OpenTelemetry instrumentation for any serverless function invocation.
+
+
+### Configure environment variables {#configure-environment-variables}
+
+If you're sending traces directly to ClickStack, you'll need to start your Next.js
+server with the following environment variables to point spans towards the OTel collector:
+
+```sh copy
+HYPERDX_API_KEY=<YOUR_INGESTION_API_KEY> \
+OTEL_SERVICE_NAME=<YOUR_APP_NAME> \
+OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
+npm run dev
+```
+
+If you're deploying in Vercel, ensure that all the environment variables above are configured
+for your deployment.
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/sdks/nodejs.md b/docs/use-cases/observability/clickstack/ingesting-data/sdks/nodejs.md
new file mode 100644
index 00000000000..4a34944035c
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/sdks/nodejs.md
@@ -0,0 +1,397 @@
+---
+slug: /use-cases/observability/clickstack/sdks/nodejs
+pagination_prev: null
+pagination_next: null
+sidebar_position: 5
+description: 'Node.js SDK for ClickStack - The ClickHouse Observability Stack'
+title: 'Node.js'
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+ClickStack uses the OpenTelemetry standard for collecting telemetry data (logs, metrics,
+traces and exceptions). Traces are auto-generated with automatic instrumentation, so manual
+instrumentation isn't required to get value out of tracing.
+
+This guide integrates:
+
+- **Logs**
+- **Metrics**
+- **Traces**
+- **Exceptions**
+
+## Getting started {#getting-started}
+
+### Install HyperDX OpenTelemetry instrumentation package {#install-hyperdx-opentelemetry-instrumentation-package}
+
+Use the following command to install the [ClickStack OpenTelemetry package](https://www.npmjs.com/package/@hyperdx/node-opentelemetry).
+
+
+
+
+```bash
+npm install @hyperdx/node-opentelemetry
+```
+
+
+
+
+```bash
+yarn add @hyperdx/node-opentelemetry
+```
+
+
+
+
+### Initializing the SDK {#initializin-the-sdk}
+
+To initialize the SDK, you'll need to call the `init` function at the top of the entry point of your application.
+
+
+
+
+```js
+const HyperDX = require('@hyperdx/node-opentelemetry');
+
+HyperDX.init({
+ apiKey: 'YOUR_INGESTION_API_KEY',
+ service: 'my-service'
+});
+```
+
+
+
+
+```js
+import * as HyperDX from '@hyperdx/node-opentelemetry';
+
+HyperDX.init({
+ apiKey: 'YOUR_INGESTION_API_KEY',
+ service: 'my-service'
+});
+```
+
+
+
+
+This will automatically capture tracing, metrics, and logs from your Node.js application.
+
+### Setup log collection {#setup-log-collection}
+
+By default, `console.*` logs are collected. If you're using a logger
+such as `winston` or `pino`, you'll need to add a transport to your logger to
+send logs to ClickStack. If you're using another type of logger,
+[reach out](mailto:support@clickhouse.com) or explore one of our platform
+integrations if applicable (such as [Kubernetes](/use-cases/observability/clickstack/ingesting-data/kubernetes)).
+
+
+
+
+If you're using `winston` as your logger, you'll need to add the following transport to your logger.
+
+```typescript
+ import winston from 'winston';
+ import * as HyperDX from '@hyperdx/node-opentelemetry';
+
+ const logger = winston.createLogger({
+ level: 'info',
+ format: winston.format.json(),
+ transports: [
+ new winston.transports.Console(),
+ HyperDX.getWinstonTransport('info', { // Send logs info and above
+ detectResources: true,
+ }),
+ ],
+ });
+
+ export default logger;
+```
+
+
+
+
+If you're using `pino` as your logger, you'll need to add the following transport to your logger and specify a `mixin` to correlate logs with traces.
+
+```typescript
+import pino from 'pino';
+import * as HyperDX from '@hyperdx/node-opentelemetry';
+
+const logger = pino(
+ pino.transport({
+ mixin: HyperDX.getPinoMixinFunction,
+ targets: [
+ HyperDX.getPinoTransport('info', { // Send logs info and above
+ detectResources: true,
+ }),
+ ],
+ }),
+);
+
+export default logger;
+```
+
+
+
+
+By default, `console.*` methods are supported out of the box. No additional configuration is required.
+
+You can disable this by setting the `HDX_NODE_CONSOLE_CAPTURE` environment variable to 0 or by passing `consoleCapture: false` to the `init` function.
+
+
+
+
+### Setup error collection {#setup-error-collection}
+
+The ClickStack SDK can automatically capture uncaught exceptions and errors in your application with full stack trace and code context.
+
+To enable this, you'll need to add the following code to the end of your application's error handling middleware, or manually capture exceptions using the `recordException` function.
+
+
+
+
+
+```js
+const HyperDX = require('@hyperdx/node-opentelemetry');
+HyperDX.init({
+ apiKey: 'YOUR_INGESTION_API_KEY',
+ service: 'my-service'
+});
+const app = express();
+
+// Add your routes, etc.
+
+// Add this after all routes,
+// but before any other error-handling middlewares are defined
+HyperDX.setupExpressErrorHandler(app);
+
+app.listen(3000);
+```
+
+
+
+
+```js
+const Koa = require("koa");
+const Router = require("@koa/router");
+const HyperDX = require('@hyperdx/node-opentelemetry');
+HyperDX.init({
+ apiKey: 'YOUR_INGESTION_API_KEY',
+ service: 'my-service'
+});
+
+const router = new Router();
+const app = new Koa();
+
+HyperDX.setupKoaErrorHandler(app);
+
+// Add your routes, etc.
+
+app.listen(3030);
+```
+
+
+
+
+```js
+const HyperDX = require('@hyperdx/node-opentelemetry');
+
+function myErrorHandler(error, req, res, next) {
+ // This can be used anywhere in your application
+ HyperDX.recordException(error);
+}
+```
+
+
+
+
+
+## Troubleshooting {#troubleshooting}
+
+If you're having trouble with the SDK, you can enable verbose logging by setting
+the `OTEL_LOG_LEVEL` environment variable to `debug`.
+
+```sh
+export OTEL_LOG_LEVEL=debug
+```
+
+## Advanced instrumentation configuration {#advanced-instrumentation-configuration}
+
+### Capture console logs {#capture-console-logs}
+
+By default, the ClickStack SDK will capture console logs. You can disable it by
+setting `HDX_NODE_CONSOLE_CAPTURE` environment variable to 0.
+
+```sh copy
+export HDX_NODE_CONSOLE_CAPTURE=0
+```
+
+### Attach user information or metadata {#attach-user-information-or-metadata}
+
+To easily tag all events related to a given attribute or identifier (ex. user id
+or email), you can call the `setTraceAttributes` function, which will tag every
+log/span associated with the current trace after the call with the declared
+attributes. It's recommended to call this function as early as possible within a
+given request/trace (ex. at the start of your Express middleware stack).
+
+This is a convenient way to ensure all logs/spans are automatically tagged with
+the right identifiers to be searched on later, instead of needing to manually
+tag and propagate identifiers yourself.
+
+`userId`, `userEmail`, `userName`, and `teamName` will populate the sessions UI
+with the corresponding values, but can be omitted. Any other additional values
+can be specified and used to search for events.
+
+```ts
+import * as HyperDX from '@hyperdx/node-opentelemetry';
+
+app.use((req, res, next) => {
+ // Get user information from the request...
+
+ // Attach user information to the current trace
+ HyperDX.setTraceAttributes({
+ userId,
+ userEmail,
+ });
+
+ next();
+});
+```
+
+Make sure to enable beta mode by setting `HDX_NODE_BETA_MODE` environment
+variable to 1 or by passing `betaMode: true` to the `init` function to
+enable trace attributes.
+
+```sh
+export HDX_NODE_BETA_MODE=1
+```
+
+### Google Cloud Run {#google-cloud-run}
+
+If you're running your application on Google Cloud Run, Cloud Trace
+automatically injects sampling headers into incoming requests, currently
+restricting traces to be sampled at 0.1 requests per second for each instance.
+
+The `@hyperdx/node-opentelemetry` package overrides the sample rate, setting it
+to 1.0 by default.
+
+To change this behavior, or to configure other OpenTelemetry installations, you
+can manually configure the environment variables
+`OTEL_TRACES_SAMPLER=parentbased_always_on` and `OTEL_TRACES_SAMPLER_ARG=1` to
+achieve the same result.
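+
+For example, as a sketch - both are standard OpenTelemetry sampler variables:
+
+```sh
+export OTEL_TRACES_SAMPLER=parentbased_always_on
+export OTEL_TRACES_SAMPLER_ARG=1
+```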
+
+To learn more, and to force tracing of specific requests, please refer to the
+[Google Cloud Run documentation](https://cloud.google.com/run/docs/trace).
+
+### Auto-instrumented libraries {#auto-instrumented-libraries}
+
+The following libraries will be automatically instrumented (traced) by the SDK:
+
+- [`dns`](https://nodejs.org/dist/latest/docs/api/dns.html)
+- [`express`](https://www.npmjs.com/package/express)
+- [`graphql`](https://www.npmjs.com/package/graphql)
+- [`hapi`](https://www.npmjs.com/package/@hapi/hapi)
+- [`http`](https://nodejs.org/dist/latest/docs/api/http.html)
+- [`ioredis`](https://www.npmjs.com/package/ioredis)
+- [`knex`](https://www.npmjs.com/package/knex)
+- [`koa`](https://www.npmjs.com/package/koa)
+- [`mongodb`](https://www.npmjs.com/package/mongodb)
+- [`mongoose`](https://www.npmjs.com/package/mongoose)
+- [`mysql`](https://www.npmjs.com/package/mysql)
+- [`mysql2`](https://www.npmjs.com/package/mysql2)
+- [`net`](https://nodejs.org/dist/latest/docs/api/net.html)
+- [`pg`](https://www.npmjs.com/package/pg)
+- [`pino`](https://www.npmjs.com/package/pino)
+- [`redis`](https://www.npmjs.com/package/redis)
+- [`winston`](https://www.npmjs.com/package/winston)
+
+## Alternative installation {#alternative-installation}
+
+### Run the Application with ClickStack OpenTelemetry CLI {#run-the-application-with-cli}
+
+Alternatively, you can auto-instrument your application without any code changes by using the `opentelemetry-instrument` CLI or using the
+Node.js `--require` flag. The CLI installation exposes a wider range of auto-instrumented libraries and frameworks.
+
+
+
+
+
+```bash
+HYPERDX_API_KEY='<YOUR_INGESTION_API_KEY>' OTEL_SERVICE_NAME='<YOUR_APP_NAME>' npx opentelemetry-instrument index.js
+```
+
+Or using the Node.js `--require` flag (shown here with `ts-node`):
+
+```bash
+HYPERDX_API_KEY='' OTEL_SERVICE_NAME='' ts-node -r '@hyperdx/node-opentelemetry/build/src/tracing' index.js
+```
+
+Or by importing the tracing SDK at the very top of your application's entry point:
+
+```js
+// Import this at the very top of the first file loaded in your application
+// You'll still specify your API key via the `HYPERDX_API_KEY` environment variable
+import { initSDK } from '@hyperdx/node-opentelemetry';
+
+initSDK({
+ consoleCapture: true, // optional, default: true
+ additionalInstrumentations: [], // optional, default: []
+});
+```
+
+_The `OTEL_SERVICE_NAME` environment variable is used to identify your service in the HyperDX app; it can be any name you want._
+
+### Enabling exception capturing {#enabling-exception-capturing}
+
+To enable uncaught exception capturing, you'll need to set the `HDX_NODE_EXPERIMENTAL_EXCEPTION_CAPTURE` environment variable to 1.
+
+```sh
+export HDX_NODE_EXPERIMENTAL_EXCEPTION_CAPTURE=1
+```
+
+Afterwards, to automatically capture exceptions from Express, Koa, or to manually catch exceptions, follow the instructions in the [Setup Error Collection](#setup-error-collection) section above.
+
+### Auto-instrumented libraries {#auto-instrumented-libraries-2}
+
+The following libraries will be automatically instrumented (traced) via the above installation methods:
+
+- [`amqplib`](https://www.npmjs.com/package/amqplib)
+- [`AWS Lambda Functions`](https://docs.aws.amazon.com/lambda/latest/dg/nodejs-handler.html)
+- [`aws-sdk`](https://www.npmjs.com/package/aws-sdk)
+- [`bunyan`](https://www.npmjs.com/package/bunyan)
+- [`cassandra-driver`](https://www.npmjs.com/package/cassandra-driver)
+- [`connect`](https://www.npmjs.com/package/connect)
+- [`cucumber`](https://www.npmjs.com/package/@cucumber/cucumber)
+- [`dataloader`](https://www.npmjs.com/package/dataloader)
+- [`dns`](https://nodejs.org/dist/latest/docs/api/dns.html)
+- [`express`](https://www.npmjs.com/package/express)
+- [`fastify`](https://www.npmjs.com/package/fastify)
+- [`generic-pool`](https://www.npmjs.com/package/generic-pool)
+- [`graphql`](https://www.npmjs.com/package/graphql)
+- [`grpc`](https://www.npmjs.com/package/@grpc/grpc-js)
+- [`hapi`](https://www.npmjs.com/package/@hapi/hapi)
+- [`http`](https://nodejs.org/dist/latest/docs/api/http.html)
+- [`ioredis`](https://www.npmjs.com/package/ioredis)
+- [`knex`](https://www.npmjs.com/package/knex)
+- [`koa`](https://www.npmjs.com/package/koa)
+- [`lru-memoizer`](https://www.npmjs.com/package/lru-memoizer)
+- [`memcached`](https://www.npmjs.com/package/memcached)
+- [`mongodb`](https://www.npmjs.com/package/mongodb)
+- [`mongoose`](https://www.npmjs.com/package/mongoose)
+- [`mysql`](https://www.npmjs.com/package/mysql)
+- [`mysql2`](https://www.npmjs.com/package/mysql2)
+- [`nestjs-core`](https://www.npmjs.com/package/@nestjs/core)
+- [`net`](https://nodejs.org/dist/latest/docs/api/net.html)
+- [`pg`](https://www.npmjs.com/package/pg)
+- [`pino`](https://www.npmjs.com/package/pino)
+- [`redis`](https://www.npmjs.com/package/redis)
+- [`restify`](https://www.npmjs.com/package/restify)
+- [`socket.io`](https://www.npmjs.com/package/socket.io)
+- [`winston`](https://www.npmjs.com/package/winston)
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/sdks/python.md b/docs/use-cases/observability/clickstack/ingesting-data/sdks/python.md
new file mode 100644
index 00000000000..ccfc61f70fa
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/sdks/python.md
@@ -0,0 +1,143 @@
+---
+slug: /use-cases/observability/clickstack/sdks/python
+pagination_prev: null
+pagination_next: null
+sidebar_position: 7
+description: 'Python for ClickStack - The ClickHouse Observability Stack'
+title: 'Python'
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+
+ClickStack uses the OpenTelemetry standard for collecting telemetry data (logs and
+traces). Traces are auto-generated with automatic instrumentation, so manual
+instrumentation isn't required to get value out of tracing.
+
+This guide integrates:
+
+- **Logs**
+- **Metrics**
+- **Traces**
+
+## Getting started {#getting-started}
+
+### Install ClickStack OpenTelemetry instrumentation package {#install-clickstack-otel-instrumentation-package}
+
+Use the following command to install the [ClickStack OpenTelemetry package](https://pypi.org/project/hyperdx-opentelemetry/).
+
+```bash
+pip install hyperdx-opentelemetry
+```
+
+Install the OpenTelemetry automatic instrumentation libraries for the packages used by your Python application. We recommend that you use the
+`opentelemetry-bootstrap` tool that comes with the OpenTelemetry Python SDK to scan your application packages and generate the list of available libraries.
+
+```bash
+opentelemetry-bootstrap -a install
+```
+
+### Configure environment variables {#configure-environment-variables}
+
+Afterwards you'll need to configure the following environment variables in your shell to ship telemetry to ClickStack:
+
+```bash
+export HYPERDX_API_KEY='' \
+OTEL_SERVICE_NAME='' \
+OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
+```
+
+_The `OTEL_SERVICE_NAME` environment variable is used to identify your service in the HyperDX app; it can be any name you want._
+
+### Run the application with OpenTelemetry Python agent {#run-the-application-with-otel-python-agent}
+
+Now you can run the application with the OpenTelemetry Python agent (`opentelemetry-instrument`).
+
+```bash
+opentelemetry-instrument python app.py
+```
+
+#### If you are using `Gunicorn`, `uWSGI` or `uvicorn` {#using-uvicorn-gunicorn-uwsgi}
+
+In this case, the OpenTelemetry Python agent will require additional changes to work.
+
+To configure OpenTelemetry for application servers using the pre-fork web server mode, make sure to call the `configure_opentelemetry` method within the post-fork hook.
+
+For Gunicorn, define the hook in your Gunicorn configuration file (ex.
+`gunicorn.conf.py`):
+
+```python
+from hyperdx.opentelemetry import configure_opentelemetry
+
+def post_fork(server, worker):
+ configure_opentelemetry()
+```
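+
+You can then start Gunicorn against this configuration file as usual - a
+sketch, where `app:app` is an illustrative module and application name:
+
+```bash
+gunicorn -c gunicorn.conf.py app:app
+```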
+
+For uWSGI, use the `postfork` decorator:
+
+```python
+from hyperdx.opentelemetry import configure_opentelemetry
+from uwsgidecorators import postfork
+
+@postfork
+def init_tracing():
+ configure_opentelemetry()
+```
+
+OpenTelemetry [currently does not work](https://github.com/open-telemetry/opentelemetry-python-contrib/issues/385) with `uvicorn` when run with the `--reload`
+flag or with multiple workers (`--workers`). We recommend disabling those flags while testing, or using Gunicorn.
+
+## Advanced configuration {#advanced-configuration}
+
+### Network capture {#network-capture}
+
+Enabling network capture lets you debug HTTP request headers and body
+payloads. To enable it, set the `HYPERDX_ENABLE_ADVANCED_NETWORK_CAPTURE`
+flag to `1`.
+
+```bash
+export HYPERDX_ENABLE_ADVANCED_NETWORK_CAPTURE=1
+```
+
+## Troubleshooting {#troubleshooting}
+
+### Logs not appearing due to log level {#logs-not-appearing-due-to-log-level}
+
+By default, the OpenTelemetry logging handler uses the `logging.NOTSET` level,
+which defaults to the WARNING level. You can specify the logging level when you
+create a logger:
+
+```python
+import logging
+
+logger = logging.getLogger(__name__)
+logger.setLevel(logging.DEBUG)
+```
+
+### Exporting to the console {#exporting-to-the-console}
+
+The OpenTelemetry Python SDK usually displays errors in the console when they
+occur. However, if you don't encounter any errors but notice that your data is
+not appearing in HyperDX as expected, you can enable debug mode.
+When debug mode is activated, all telemetry is printed to the console,
+allowing you to verify that your application is properly instrumented with the
+expected data.
+
+```bash
+export DEBUG=true
+```
+
+Read more about Python OpenTelemetry instrumentation here:
+[https://opentelemetry.io/docs/instrumentation/python/manual/](https://opentelemetry.io/docs/instrumentation/python/manual/)
+
+
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/sdks/react-native.md b/docs/use-cases/observability/clickstack/ingesting-data/sdks/react-native.md
new file mode 100644
index 00000000000..265ac6fffd9
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/sdks/react-native.md
@@ -0,0 +1,131 @@
+---
+slug: /use-cases/observability/clickstack/sdks/react-native
+pagination_prev: null
+pagination_next: null
+sidebar_position: 7
+description: 'React Native SDK for ClickStack - The ClickHouse Observability Stack'
+title: 'React Native'
+---
+
+The ClickStack React Native SDK allows you to instrument your React Native
+application to send events to ClickStack. This allows you to see mobile network
+requests and exceptions alongside backend events in a single timeline.
+
+This guide integrates:
+
+- **XHR/Fetch Requests**
+
+## Getting started {#getting-started}
+
+### Install via NPM {#install-via-npm}
+
+Use the following command to install the [ClickStack React Native package](https://www.npmjs.com/package/@hyperdx/otel-react-native).
+
+```bash
+npm install @hyperdx/otel-react-native
+```
+
+### Initialize ClickStack {#initialize-clickstack}
+
+Initialize the library as early in your app lifecycle as possible:
+
+```js
+import { HyperDXRum } from '@hyperdx/otel-react-native';
+
+HyperDXRum.init({
+ service: 'my-rn-app',
+ apiKey: '',
+ tracePropagationTargets: [/api.myapp.domain/i], // Set to link traces from frontend to backend requests
+});
+```
+
+### Attach user information or metadata (Optional) {#attach-user-information-metadata}
+
+Attaching user information will allow you to search/filter sessions and events
+in HyperDX. This can be called at any point during the client session. The
+current client session and all events sent after the call will be associated
+with the user information.
+
+`userEmail`, `userName`, and `teamName` will populate the sessions UI with the
+corresponding values, but can be omitted. Any other additional values can be
+specified and used to search for events.
+
+```js
+HyperDXRum.setGlobalAttributes({
+ userId: user.id,
+ userEmail: user.email,
+ userName: user.name,
+ teamName: user.team.name,
+ // Other custom properties...
+});
+```
+
+### Instrument lower versions {#instrument-lower-versions}
+
+To instrument applications running on React Native versions lower than 0.68,
+edit your `metro.config.js` file to force Metro to use browser-specific
+packages. For example:
+
+```js
+const defaultResolver = require('metro-resolver');
+
+module.exports = {
+ resolver: {
+ resolveRequest: (context, realModuleName, platform, moduleName) => {
+ const resolved = defaultResolver.resolve(
+ {
+ ...context,
+ resolveRequest: null,
+ },
+ moduleName,
+ platform,
+ );
+
+ if (
+ resolved.type === 'sourceFile' &&
+ resolved.filePath.includes('@opentelemetry')
+ ) {
+ resolved.filePath = resolved.filePath.replace(
+ 'platform\\node',
+ 'platform\\browser',
+ );
+ return resolved;
+ }
+
+ return resolved;
+ },
+ },
+ transformer: {
+ getTransformOptions: async () => ({
+ transform: {
+ experimentalImportSupport: false,
+ inlineRequires: true,
+ },
+ }),
+ },
+};
+```
+
+## View navigation {#view-navigation}
+
+[react-navigation](https://github.com/react-navigation/react-navigation) versions 5 and 6 are supported.
+
+The following example shows how to instrument navigation:
+
+```js
+import { startNavigationTracking } from '@hyperdx/otel-react-native';
+import {
+  NavigationContainer,
+  useNavigationContainerRef,
+} from '@react-navigation/native';
+
+export default function App() {
+  const navigationRef = useNavigationContainerRef();
+  return (
+    <NavigationContainer
+      ref={navigationRef}
+      onReady={() => {
+        startNavigationTracking(navigationRef);
+      }}
+    >
+      {/* ... */}
+    </NavigationContainer>
+  );
+}
+```
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/sdks/ruby.md b/docs/use-cases/observability/clickstack/ingesting-data/sdks/ruby.md
new file mode 100644
index 00000000000..8eaf9df0b14
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/ingesting-data/sdks/ruby.md
@@ -0,0 +1,90 @@
+---
+slug: /use-cases/observability/clickstack/sdks/ruby-on-rails
+pagination_prev: null
+pagination_next: null
+sidebar_position: 7
+description: 'Ruby on Rails SDK for ClickStack - The ClickHouse Observability Stack'
+title: 'Ruby on Rails'
+---
+
+This guide integrates:
+
+- ✖️ **Logs**
+- ✖️ **Metrics**
+- ✅ **Traces**
+
+_To send logs to ClickStack, please send logs via the [OpenTelemetry collector](/use-cases/observability/clickstack/ingesting-data/otel-collector)._
+
+## Getting started {#getting-started}
+
+### Install OpenTelemetry packages {#install-otel-packages}
+
+Use the following command to install the OpenTelemetry packages.
+
+```bash
+bundle add opentelemetry-sdk opentelemetry-instrumentation-all opentelemetry-exporter-otlp
+```
+
+### Configure OpenTelemetry + logger formatter {#configure-otel-logger-formatter}
+
+Next, you'll need to initialize the OpenTelemetry tracing instrumentation
+and configure the log message formatter for the Rails logger so that logs can be
+tied back to traces automatically. Without the custom formatter, logs will not
+be automatically correlated with traces in ClickStack.
+
+In the `config/initializers` folder, create a file called `hyperdx.rb` and add the
+following to it:
+
+```ruby
+# config/initializers/hyperdx.rb
+
+require 'opentelemetry-exporter-otlp'
+require 'opentelemetry/instrumentation/all'
+require 'opentelemetry/sdk'
+
+OpenTelemetry::SDK.configure do |c|
+ c.use_all() # enables all trace instrumentation!
+end
+
+Rails.application.configure do
+ Rails.logger = Logger.new(STDOUT)
+ # Rails.logger.level = Logger::INFO # default is DEBUG, but you might want INFO or above in production
+ Rails.logger.formatter = proc do |severity, time, progname, msg|
+ span_id = OpenTelemetry::Trace.current_span.context.hex_span_id
+ trace_id = OpenTelemetry::Trace.current_span.context.hex_trace_id
+ if defined? OpenTelemetry::Trace.current_span.name
+ operation = OpenTelemetry::Trace.current_span.name
+ else
+ operation = 'undefined'
+ end
+
+ { "time" => time, "level" => severity, "message" => msg, "trace_id" => trace_id, "span_id" => span_id,
+ "operation" => operation }.to_json + "\n"
+ end
+
+ Rails.logger.info "Logger initialized !! 🐱"
+end
+```
+
+### Configure environment variables {#configure-environment-variables}
+
+Afterwards you'll need to configure the following environment variables in your shell to ship telemetry to ClickStack:
+
+```bash
+export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
+OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
+OTEL_SERVICE_NAME='' \
+OTEL_EXPORTER_OTLP_HEADERS='authorization='
+```
+
+_The `OTEL_SERVICE_NAME` environment variable is used to identify your service
+in the HyperDX app; it can be any name you want._
+
+The `OTEL_EXPORTER_OTLP_HEADERS` environment variable contains the API key, available via the HyperDX app in `Team Settings → API Keys`.
diff --git a/docs/use-cases/observability/clickstack/overview.md b/docs/use-cases/observability/clickstack/overview.md
new file mode 100644
index 00000000000..4cc36bb5d2d
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/overview.md
@@ -0,0 +1,97 @@
+---
+slug: /use-cases/observability/clickstack/overview
+title: 'ClickStack - The ClickHouse Observability Stack'
+sidebar_label: 'Overview'
+pagination_prev: null
+pagination_next: use-cases/observability/clickstack/getting-started
+description: 'Overview for ClickStack - The ClickHouse Observability Stack'
+---
+
+import Image from '@theme/IdealImage';
+import architecture from '@site/static/images/use-cases/observability/clickstack-simple-architecture.png';
+import landing_image from '@site/static/images/use-cases/observability/hyperdx-landing.png';
+
+
+
+**ClickStack** is a production-grade observability platform built on ClickHouse, unifying logs, traces, metrics, and sessions in a single high-performance solution. Designed for monitoring and debugging complex systems, ClickStack enables developers and SREs to trace issues end-to-end without switching between tools or manually stitching together data using timestamps or correlation IDs.
+
+At the core of ClickStack is a simple but powerful idea: all observability data should be ingested as wide, rich events. These events are stored in ClickHouse tables by data type - logs, traces, metrics, and sessions - but remain fully queryable and cross-correlatable at the database level.
+
+ClickStack is built to handle high-cardinality workloads efficiently by leveraging ClickHouse's column-oriented architecture, native JSON support, and fully parallelized execution engine. This enables sub-second queries across massive datasets, fast aggregations over wide time ranges, and deep inspection of individual traces. JSON is stored in a compressed, columnar format, allowing schema evolution without manual intervention or upfront definitions.
+
+## Features {#features}
+
+The stack includes several key features designed for debugging and root cause analysis:
+
+- Correlate/search logs, metrics, session replays, and traces all in one place
+- Schema agnostic, works on top of your existing ClickHouse schema
+- Blazing-fast searches & visualizations optimized for ClickHouse
+- Intuitive full-text search and property search syntax (ex. `level:err`), SQL optional!
+- Analyze trends in anomalies with event deltas
+- Set up alerts in just a few clicks
+- Dashboard high cardinality events without a complex query language
+- Native JSON string querying
+- Live tail logs and traces to always get the freshest events
+- OpenTelemetry (OTel) supported out of the box
+- Monitor health and performance from HTTP requests to DB queries (APM)
+- Event deltas for identifying anomalies and performance regressions
+- Log pattern recognition
+
+## Components {#components}
+
+ClickStack consists of three core components:
+
+1. **HyperDX UI** – a purpose-built frontend for exploring and visualizing observability data
+2. **OpenTelemetry collector** – a custom-built, preconfigured collector with an opinionated schema for logs, traces, and metrics
+3. **ClickHouse** – the high-performance analytical database at the heart of the stack
+
+These components can be deployed independently or together. A browser-hosted version of the HyperDX UI is also available, allowing users to connect to existing ClickHouse deployments without additional infrastructure.
+
+To get started, visit the [Getting started guide](/use-cases/observability/clickstack/getting-started) before loading a [sample dataset](/use-cases/observability/clickstack/sample-datasets). You can also explore documentation on [deployment options](/use-cases/observability/clickstack/deployment) and [production best practices](/use-cases/observability/clickstack/production).
+
+## Principles {#clickstack-principles}
+
+ClickStack is designed with a set of core principles that prioritize ease of use, performance, and flexibility at every layer of the observability stack:
+
+### Easy to set up in minutes {#clickstack-easy-to-setup}
+
+ClickStack works out of the box with any ClickHouse instance and schema, requiring minimal configuration. Whether you're starting fresh or integrating with an existing setup, you can be up and running in minutes.
+
+### User-friendly and purpose-built {#user-friendly-purpose-built}
+
+The HyperDX UI supports both SQL and Lucene-style syntax, allowing users to choose the query interface that fits their workflow. Purpose-built for observability, the UI is optimized to help teams identify root causes quickly and navigate complex data without friction.
+
+### End-to-end observability {#end-to-end-observability}
+
+ClickStack provides full-stack visibility, from front-end user sessions to backend infrastructure metrics, application logs, and distributed traces. This unified view enables deep correlation and analysis across the entire system.
+
+### Built for ClickHouse {#built-for-clickhouse}
+
+Every layer of the stack is designed to make full use of ClickHouse's capabilities. Queries are optimized to leverage ClickHouse's analytical functions and columnar engine, ensuring fast search and aggregation over massive volumes of data.
+
+### OpenTelemetry-native {#open-telemetry-native}
+
+ClickStack is natively integrated with OpenTelemetry, ingesting all data through an OpenTelemetry collector endpoint. For advanced users, it also supports direct ingestion into ClickHouse using native file formats, custom pipelines, or third-party tools like Vector.
+
+### Open source and fully customizable {#open-source-and-customizable}
+
+ClickStack is fully open source and can be deployed anywhere. The schema is flexible and user-modifiable, and the UI is designed to be configurable to custom schemas without requiring changes. All components - including collectors, ClickHouse, and the UI - can be scaled independently to meet ingestion, query, or storage demands.
+
+## Architectural overview {#architectural-overview}
+
+
+
+ClickStack consists of three core components:
+
+1. **HyperDX UI**
+ A user-friendly interface built for observability. It supports both Lucene-style and SQL queries, interactive dashboards, alerting, trace exploration, and more—all optimized for ClickHouse as the backend.
+
+2. **OpenTelemetry collector**
+ A custom-built collector configured with an opinionated schema optimized for ClickHouse ingestion. It receives logs, metrics, and traces via OpenTelemetry protocols and writes them directly to ClickHouse using efficient batched inserts.
+
+3. **ClickHouse**
+ The high-performance analytical database that serves as the central data store for wide events. ClickHouse powers fast search, filtering, and aggregation at scale, leveraging its columnar engine and native support for JSON.
+
+In addition to these three components, ClickStack uses a **MongoDB instance** to store application state such as dashboards, user accounts, and configuration settings.
+
+A full architectural diagram and deployment details can be found in the [Architecture section](/use-cases/observability/clickstack/architecture).
diff --git a/docs/use-cases/observability/clickstack/production.md b/docs/use-cases/observability/clickstack/production.md
new file mode 100644
index 00000000000..d81abc8991e
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/production.md
@@ -0,0 +1,197 @@
+---
+slug: /use-cases/observability/clickstack/production
+title: 'Going to Production'
+sidebar_label: 'Production'
+pagination_prev: null
+pagination_next: null
+description: 'Going to production with ClickStack'
+---
+
+import Image from '@theme/IdealImage';
+import connect_cloud from '@site/static/images/use-cases/observability/connect-cloud.png';
+import hyperdx_cloud from '@site/static/images/use-cases/observability/hyperdx-cloud.png';
+import ingestion_key from '@site/static/images/use-cases/observability/ingestion-keys.png';
+import hyperdx_login from '@site/static/images/use-cases/observability/hyperdx-login.png';
+
+When deploying ClickStack in production, there are several additional considerations to ensure security, stability, and correct configuration.
+
+## Network and Port Security {#network-security}
+
+By default, Docker Compose exposes ports on the host, making them accessible from outside the container - even if tools like `ufw` (Uncomplicated Firewall) are enabled. This behavior is due to the Docker networking stack, which can bypass host-level firewall rules unless explicitly configured.
+
+**Recommendation:**
+
+Only expose the ports necessary for production use - typically the OTLP endpoints, the API server, and the frontend.
+
+For example, remove or comment out unnecessary port mappings in your `docker-compose.yml` file:
+
+```yaml
+ports:
+ - "4317:4317" # OTLP gRPC
+ - "4318:4318" # OTLP HTTP
+ - "8080:8080" # Only if needed for the API
+# Avoid exposing internal ports like ClickHouse 8123 or MongoDB 27017.
+```
+
+Refer to the [Docker networking documentation](https://docs.docker.com/network/) for details on isolating containers and hardening access.
+
+## Session Secret Configuration {#session-secret}
+
+In production, you must set a strong, random value for the `EXPRESS_SESSION_SECRET` environment variable to protect session data and prevent tampering.
+
+Here's how to add it to your `docker-compose.yml` file for the app service:
+
+```yaml
+ app:
+ image: ${IMAGE_NAME_HDX}:${IMAGE_VERSION}
+ ports:
+ - ${HYPERDX_API_PORT}:${HYPERDX_API_PORT}
+ - ${HYPERDX_APP_PORT}:${HYPERDX_APP_PORT}
+ environment:
+ FRONTEND_URL: ${HYPERDX_APP_URL}:${HYPERDX_APP_PORT}
+ HYPERDX_API_KEY: ${HYPERDX_API_KEY}
+ HYPERDX_API_PORT: ${HYPERDX_API_PORT}
+ HYPERDX_APP_PORT: ${HYPERDX_APP_PORT}
+ HYPERDX_APP_URL: ${HYPERDX_APP_URL}
+ HYPERDX_LOG_LEVEL: ${HYPERDX_LOG_LEVEL}
+ MINER_API_URL: 'http://miner:5123'
+ MONGO_URI: 'mongodb://db:27017/hyperdx'
+ NEXT_PUBLIC_SERVER_URL: http://127.0.0.1:${HYPERDX_API_PORT}
+ OTEL_SERVICE_NAME: 'hdx-oss-api'
+ USAGE_STATS_ENABLED: ${USAGE_STATS_ENABLED:-true}
+ EXPRESS_SESSION_SECRET: "super-secure-random-string"
+ networks:
+ - internal
+ depends_on:
+ - ch-server
+ - db1
+```
+
+You can generate a strong secret using `openssl`:
+
+```bash
+openssl rand -hex 32
+```
+
+Avoid committing secrets to source control. In production, consider using environment variable management tools (e.g. Docker Secrets, HashiCorp Vault, or environment-specific CI/CD configs).
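+
+For example, rather than hard-coding the value, you can reference a host
+environment variable in `docker-compose.yml`:
+
+```yaml
+    environment:
+      EXPRESS_SESSION_SECRET: ${EXPRESS_SESSION_SECRET}
+```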
+
+## Secure ingestion {#secure-ingestion}
+
+All ingestion should occur via the OTLP ports exposed by the ClickStack distribution of the OpenTelemetry (OTel) collector. By default, this requires a secure ingestion API key generated at startup. This key is required when sending data to the OTel ports, and can be found in the HyperDX UI under `Team Settings → API Keys`.
+
+
+
+Additionally, we recommend enabling TLS for OTLP endpoints and creating a [dedicated user for ClickHouse ingestion](#database-ingestion-user).
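+
+For example, OpenTelemetry SDKs and agents can pass this key via the standard
+OTLP headers environment variable, using the same `authorization` header shown
+in the SDK guides:
+
+```bash
+export OTEL_EXPORTER_OTLP_ENDPOINT=https://<your-collector-host>:4318 \
+OTEL_EXPORTER_OTLP_HEADERS='authorization=<YOUR_INGESTION_API_KEY>'
+```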
+
+## ClickHouse {#clickhouse}
+
+For production deployments, we recommend using [ClickHouse Cloud](https://clickhouse.com/cloud), which applies industry-standard [security practices](/cloud/security) by default - including [enhanced encryption](/cloud/security/cmek), [authentication and connectivity](/cloud/security/connectivity), and [managed access controls](/cloud/security/cloud-access-management). See ["ClickHouse Cloud"](#clickhouse-cloud-production) for a step-by-step guide of using ClickHouse Cloud with best practices.
+
+### User Permissions {#user-permissions}
+
+#### HyperDX user {#hyperdx-user}
+
+The ClickHouse user for HyperDX only needs to be a `readonly` user with access to change the following settings:
+
+- `max_rows_to_read` (at least up to 1 million)
+- `read_overflow_mode`
+- `cancel_http_readonly_queries_on_client_close`
+- `wait_end_of_query`
+
+By default, the `default` user in both OSS and ClickHouse Cloud has these permissions available, but we recommend you create a new user with only these permissions.
+
+#### Database and ingestion user {#database-ingestion-user}
+
+We recommend creating a dedicated user for the OTel collector for ingestion into ClickHouse, and ensuring ingestion is sent to a specific database, e.g. `otel`. See ["Creating an ingestion user"](/use-cases/observability/clickstack/ingesting-data/otel-collector#creating-an-ingestion-user) for further details.
+
+### Self-managed security {#self-managed-security}
+
+If you are managing your own ClickHouse instance, it's essential to enable **SSL/TLS**, enforce authentication, and follow best practices for hardening access. See [this blog post](https://www.wiz.io/blog/clickhouse-and-wiz) for context on real-world misconfigurations and how to avoid them.
+
+ClickHouse OSS provides robust security features out of the box. However, these require configuration:
+
+- **Use SSL/TLS** via `tcp_port_secure` and the `<openSSL>` section in `config.xml`. See [guides/sre/configuring-ssl](/guides/sre/configuring-ssl).
+- **Set a strong password** for the `default` user or disable it.
+- **Avoid exposing ClickHouse externally** unless explicitly intended. By default, ClickHouse binds only to `localhost` unless `listen_host` is modified.
+- **Use authentication methods** such as passwords, certificates, SSH keys, or [external authenticators](/operations/external-authenticators).
+- **Restrict access** using IP filtering and the `HOST` clause. See [sql-reference/statements/create/user#user-host](/sql-reference/statements/create/user#user-host).
+- **Enable Role-Based Access Control (RBAC)** to grant granular privileges. See [operations/access-rights](/operations/access-rights).
+- **Enforce quotas and limits** using [quotas](/operations/quotas), [settings profiles](/operations/settings/settings-profiles), and read-only modes.
+- **Encrypt data at rest** and use secure external storage. See [operations/storing-data](/operations/storing-data) and [cloud/security/CMEK](/cloud/security/cmek).
+- **Avoid hard coding credentials.** Use [named collections](/operations/named-collections) or IAM roles in ClickHouse Cloud.
+- **Audit access and queries** using [system logs](/operations/system-tables/query_log) and [session logs](/operations/system-tables/session_log).
+
+See also [external authenticators](/operations/external-authenticators) and [query complexity settings](/operations/settings/query-complexity) for managing users and ensuring query/resource limits.
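+
+As an illustrative sketch combining several of the recommendations above
+(the user, role, database, and IP range are examples only):
+
+```sql
+-- Grant granular read access via a role rather than directly to users
+CREATE ROLE observability_reader;
+GRANT SELECT ON otel.* TO observability_reader;
+
+-- Create a password-authenticated user restricted to an internal IP range
+CREATE USER hyperdx IDENTIFIED WITH sha256_password BY '<password>'
+    HOST IP '10.0.0.0/8';
+GRANT observability_reader TO hyperdx;
+```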
+
+## MongoDB Guidelines {#mongodb-guidelines}
+
+Follow the official [MongoDB security checklist](https://www.mongodb.com/docs/manual/administration/security-checklist/).
+
+## ClickHouse Cloud {#clickhouse-cloud-production}
+
+The following represents a simple deployment of ClickStack using ClickHouse Cloud that follows best practices.
+
+
+
+### Create a service {#create-a-service}
+
+Follow the [getting started guide for ClickHouse Cloud](/cloud/get-started/cloud-quick-start#1-create-a-clickhouse-service) to create a service.
+
+### Copy connection details {#copy-connection-details}
+
+To find the connection details for HyperDX, navigate to the ClickHouse Cloud console and click the `Connect` button in the sidebar, recording the HTTP connection details - specifically the URL.
+
+**While you may use the default username and password shown in this step to connect HyperDX, we recommend creating a dedicated user - see below**
+
+
+
+### Create a HyperDX user {#create-a-user}
+
+We recommend you create a dedicated user for HyperDX. Run the following SQL commands in the [Cloud SQL console](/cloud/get-started/sql-console), providing a secure password which meets complexity requirements:
+
+```sql
+CREATE USER hyperdx IDENTIFIED WITH sha256_password BY '' SETTINGS PROFILE 'readonly';
+GRANT sql_console_read_only TO hyperdx;
+```
+
+### Prepare for ingestion user {#prepare-for-ingestion}
+
+Create an `otel` database for data and a `hyperdx_ingest` user for ingestion with limited permissions.
+
+```sql
+CREATE DATABASE otel;
+CREATE USER hyperdx_ingest IDENTIFIED WITH sha256_password BY 'ClickH0u3eRocks123!';
+GRANT SELECT, INSERT, CREATE TABLE, CREATE VIEW ON otel.* TO hyperdx_ingest;
+```
+
+### Deploy ClickStack {#deploy-clickstack}
+
+Deploy ClickStack - the [Helm](/use-cases/observability/clickstack/deployment/helm) or [Docker Compose](/use-cases/observability/clickstack/deployment/docker-compose) (modified to exclude ClickHouse) deployment models are preferred.
+
+:::note Deploying components separately
+Advanced users can deploy the [OTel collector](/use-cases/observability/clickstack/ingesting-data/opentelemetry#standalone) and [HyperDX](/use-cases/observability/clickstack/deployment/hyperdx-only) separately with their respective standalone deployment modes.
+:::
+
+Instructions for using ClickHouse Cloud with the Helm chart can be found [here](/use-cases/observability/clickstack/deployment/helm#using-clickhouse-cloud). Equivalent instructions for Docker Compose can be found [here](/use-cases/observability/clickstack/deployment/docker-compose).
+
+### Navigate to the HyperDX UI {#navigate-to-hyperdx-ui}
+
+Visit [http://localhost:8080](http://localhost:8080) to access the HyperDX UI.
+
+Create a user, providing a username and password which meet the requirements.
+
+
+
+On clicking `Create` you'll be prompted for connection details.
+
+### Connect to ClickHouse Cloud {#connect-to-clickhouse-cloud}
+
+Using the credentials created earlier, complete the connection details and click `Create`.
+
+
+
+### Send data to ClickStack {#send-data}
+
+To send data to ClickStack see ["Sending OpenTelemetry data"](/use-cases/observability/clickstack/ingesting-data/opentelemetry#sending-otel-data).
+
+
diff --git a/docs/use-cases/observability/clickstack/search.md b/docs/use-cases/observability/clickstack/search.md
new file mode 100644
index 00000000000..8d896367312
--- /dev/null
+++ b/docs/use-cases/observability/clickstack/search.md
@@ -0,0 +1,63 @@
+---
+slug: /use-cases/observability/clickstack/search
+title: 'Search with ClickStack'
+sidebar_label: 'Search'
+pagination_prev: null
+pagination_next: null
+description: 'Search with ClickStack'
+---
+
+import Image from '@theme/IdealImage';
+import hyperdx_27 from '@site/static/images/use-cases/observability/hyperdx-27.png';
+
+ClickStack allows you to do a full-text search on your events (logs and traces). You can get started searching by just typing keywords that match your events. For example, if your log contains "Error", you can find it by just typing in "Error" in the search bar.
+
+This same search syntax is used for filtering events with Dashboards and Charts
+as well.
+
+## Natural Language Search Syntax {#natural-language-syntax}
+
+- Searches are not case sensitive
+- Searches match by whole word by default (ex. `Error` will match `Error here`
+ but not `Errors here`). You can surround a word by wildcards to match partial
+ words (ex. `*Error*` will match `AnyError` and `AnyErrors`)
+- Search terms are searched in any order (ex. `Hello World` will match logs that
+ contain `Hello World` and `World Hello`)
+- You can exclude keywords by using `NOT` or `-` (ex. `Error NOT Exception` or
+ `Error -Exception`)
+- You can use `AND` and `OR` to combine multiple keywords (ex.
+ `Error OR Exception`)
+- Exact matches can be done via double quotes (ex. `"Error tests not found"`)
+
+
+
+### Column/Property Search {#column-search}
+
+- You can search columns and JSON/map properties by using `column:value` (ex. `level:Error`,
+ `service:app`)
+- You can search for a range of values by using comparison operators (`>`, `<`,
+ `>=`, `<=`) (ex. `Duration:>1000`)
+- You can search for the existence of a property by using `property:*` (ex.
+ `duration:*`)
+
+## Time Input {#time-input}
+
+- Time input accepts natural language inputs (ex. `1 hour ago`, `yesterday`,
+ `last week`)
+- Specifying a single point in time will result in searching from that point in
+ time up until now.
+- The time range will always be converted into the parsed time range upon
+  search, for easy debugging of time queries.
+- You can highlight a histogram bar to zoom into a specific time range as well.
+
+## SQL Search Syntax {#sql-syntax}
+
+You can optionally toggle search inputs to be in SQL mode. This will accept any valid
+SQL WHERE clause for searching. This is useful for complex queries that cannot be
+expressed in Lucene syntax.
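+
+For example, assuming the default OTel logs schema with `SeverityText`,
+`ServiceName`, and `Body` columns, a SQL-mode search might look like:
+
+```sql
+SeverityText = 'error' AND ServiceName = 'api' AND Body LIKE '%timeout%'
+```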
+
+## SELECT Statement {#select-statement}
+
+To specify the columns to display in the search results, you can use the `SELECT`
+input. This is a SQL SELECT expression for the columns to select in the search page.
+Aliases are not supported at this time (ex. you cannot use `column as "alias"`).
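+
+For example, with the same assumed schema, a `SELECT` expression might be:
+
+```sql
+Timestamp, ServiceName, SeverityText, Body
+```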
diff --git a/docs/use-cases/observability/demo-application.md b/docs/use-cases/observability/demo-application.md
deleted file mode 100644
index dbb678122fa..00000000000
--- a/docs/use-cases/observability/demo-application.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: 'Demo Application'
-description: 'Demo application for observability'
-slug: /observability/demo-application
-keywords: ['observability', 'logs', 'traces', 'metrics', 'OpenTelemetry', 'Grafana', 'OTel']
----
-
-The Open Telemetry project includes a [demo application](https://opentelemetry.io/docs/demo/). A maintained fork of this application with ClickHouse as a data source for logs and traces can be found [here](https://github.com/ClickHouse/opentelemetry-demo). The [official demo instructions](https://opentelemetry.io/docs/demo/docker-deployment/) can be followed to deploy this demo with docker. In addition to the [existing components](https://opentelemetry.io/docs/demo/collector-data-flow-dashboard/), an instance of ClickHouse will be deployed and used for the storage of logs and traces.
diff --git a/docs/use-cases/observability/index.md b/docs/use-cases/observability/index.md
index aee17d5eeaa..17c73a27d75 100644
--- a/docs/use-cases/observability/index.md
+++ b/docs/use-cases/observability/index.md
@@ -7,15 +7,36 @@ description: 'Landing page for the Observability use case guide'
keywords: ['observability', 'logs', 'traces', 'metrics', 'OpenTelemetry', 'Grafana', 'OTel']
---
-Welcome to our Observability use case guide. In this guide you'll learn how you can get setup and use ClickHouse for Observability.
-
-Navigate to the pages below to explore the different sections of this guide.
-
-| Page | Description |
-|-------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| [Introduction](./introduction.md) | This guide is designed for users looking to build their own SQL-based Observability solution using ClickHouse, focusing on logs and traces. |
-| [Schema design](./schema-design.md) | Learn why users are recommended to create their own schema for logs and traces, along with some best practices for doing so. |
-| [Managing data](./managing-data.md) | Deployments of ClickHouse for Observability invariably involve large datasets, which need to be managed. ClickHouse offers a number of features to assist with data management. |
-| [Integrating OpenTelemetry](./integrating-opentelemetry.md) | Any Observability solution requires a means of collecting and exporting logs and traces. For this purpose, ClickHouse recommends the OpenTelemetry (OTel) project. Learn more about how to integrate it with ClickHouse. |
-| [Using Grafana](./grafana.md) | Learn how to use Grafana, the preferred visualization tool for Observability data in ClickHouse, with ClickHouse.
-| [Demo Application](./demo-application.md) | The Open Telemetry project includes a demo application. A maintained fork of this application with ClickHouse as a data source for logs and traces can be found linked on this page.|
+ClickHouse offers unmatched speed, scale, and cost-efficiency for observability. This guide provides two paths depending on your needs:
+
+## ClickStack - The ClickHouse Observability Stack {#clickstack}
+
+The ClickHouse Observability Stack is our **recommended approach** for most users.
+
+**ClickStack** is a production-grade observability platform built on ClickHouse and OpenTelemetry (OTel), unifying logs, traces, metrics, and sessions in a single high-performance, scalable solution that works from single-node deployments to **multi-petabyte** scale.
+
+| Section | Description |
+|---------|-------------|
+| [Overview](/use-cases/observability/clickstack/overview) | Introduction to ClickStack and its key features |
+| [Getting Started](/use-cases/observability/clickstack/getting-started) | Quick start guide and basic setup instructions |
+| [Example Datasets](/use-cases/observability/clickstack/sample-datasets) | Sample datasets and use cases |
+| [Architecture](/use-cases/observability/clickstack/architecture) | System architecture and components overview |
+| [Deployment](/use-cases/observability/clickstack/deployment) | Deployment guides and options |
+| [Configuration](/use-cases/observability/clickstack/config) | Detailed configuration options and settings |
+| [Ingesting Data](/use-cases/observability/clickstack/ingesting-data) | Guidelines for ingesting data to ClickStack |
+| [Search](/use-cases/observability/clickstack/search) | How to search and query your observability data |
+| [Production](/use-cases/observability/clickstack/production) | Best practices for production deployment |
+
+
+## Build-Your-Own Stack {#build-your-own-stack}
+
+For users with **custom requirements** — such as highly specialized ingestion pipelines, schema designs, or extreme scaling needs — we provide guidance to build a custom observability stack with ClickHouse as the core database.
+
+| Page | Description |
+|-------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [Introduction](/use-cases/observability/introduction) | This guide is designed for users looking to build their own observability solution using ClickHouse, focusing on logs and traces. |
+| [Schema design](/use-cases/observability/schema-design) | Learn why users are recommended to create their own schema for logs and traces, along with some best practices for doing so. |
+| [Managing data](/observability/managing-data) | Deployments of ClickHouse for observability invariably involve large datasets, which need to be managed. ClickHouse offers features to assist with data management. |
+| [Integrating OpenTelemetry](/observability/integrating-opentelemetry) | Collecting and exporting logs and traces using OpenTelemetry with ClickHouse. |
+| [Using Visualization Tools](/observability/grafana) | Learn how to use observability visualization tools for ClickHouse, including HyperDX and Grafana. |
+| [Demo Application](/observability/demo-application) | Explore the OpenTelemetry demo application forked to work with ClickHouse for logs and traces. |
diff --git a/scripts/aspell-ignore/en/aspell-dict.txt b/scripts/aspell-ignore/en/aspell-dict.txt
index 2bebaa3b063..bf82ebcb026 100644
--- a/scripts/aspell-ignore/en/aspell-dict.txt
+++ b/scripts/aspell-ignore/en/aspell-dict.txt
@@ -190,6 +190,7 @@ ClickPipe
ClickPipes
ClickPipes's
ClickPy
+ClickStack
ClickVisual
ClickableSquare
CloudAvailableBadge
@@ -246,6 +247,7 @@ DDLs
DECRYPT
Decrypter
DELETEs
+Deno
DESC
DIEs
DOGEFI
@@ -414,6 +416,7 @@ Grafana
GraphQL
GraphiteMergeTree
Greenwald
+Gunicorn
HANA
HDDs
HHMM
@@ -442,6 +445,8 @@ Hostname
HouseOps
Hudi
HudiCluster
+HyperDX
+HyperDX's
HyperLogLog
Hypertable
Hypot
@@ -634,6 +639,7 @@ LowCardinality
LpDistance
LpNorm
LpNormalize
+Lucene
Luebbe
Luzmo
Lyft
@@ -661,6 +667,7 @@ MarkCacheBytes
MarkCacheFiles
MarksLoaderThreads
MarksLoaderThreadsActive
+Mastercard
MaterializedMySQL
MaterializedPostgreSQL
MaterializedView
@@ -750,6 +757,7 @@ Namenode
NamesAndTypesList
Nano
Nesterov
+NestJS
NetFlow
NetworkReceive
NetworkReceiveBytes
@@ -841,6 +849,7 @@ Ok
Omni
OnTime
Ookla
+OpAMP
OpenAI
OpenAPI
OpenCelliD
@@ -1285,6 +1294,7 @@ Vadim
Valgrind
Vectorization
Vectorized
+Vercel
VersionBadge
VersionInteger
VersionedCollapsingMergeTree
@@ -1295,6 +1305,7 @@ Vose
WALs
WSFG
WarpStream
+Websocket
Welch's
Werror
Wether
@@ -1479,6 +1490,7 @@ atomicity
auth
authenticator
authenticators
+autocomplete
autocompletion
autodetect
autodetected
@@ -1580,6 +1592,7 @@ bugfix
buildId
buildable
builtins
+bundler
burstable
byteHammingDistance
byteSize
@@ -1691,6 +1704,7 @@ const
contrib
convertCharset
coroutines
+correlatable
corrMatrix
corrStable
corrmatrix
@@ -2004,6 +2018,7 @@ fromModifiedJulianDayOrNull
fromUTCTimestamp
fromUnixTimestamp
fromUnixTimestampInJodaSyntax
+frontend
fsync
func
fuzzBits
@@ -2318,6 +2333,7 @@ libpq
libpqxx
librdkafka
libs
+lifecycle
libunwind
libuv
libvirt
@@ -2409,6 +2425,7 @@ metrica
metroHash
mfedotov
mflix
+microservice
microservices
middleware
minMap
@@ -2423,6 +2440,7 @@ minmap
minmax
mins
misconfiguration
+misconfigurations
misconfigured
mispredictions
mlock
@@ -3148,6 +3166,7 @@ throughputs
throwIf
throwif
timeDiff
+timespan
timeSeriesData
timeSeriesMetrics
timeSeriesTags
@@ -3389,6 +3408,7 @@ userver
utils
uuid
uuidv
+uvicorn
vCPU
vCPUs
varPop
@@ -3424,6 +3444,7 @@ vruntime
walkthrough
wchc
wchs
+webhook
webpage
webserver
weekyear
@@ -3435,6 +3456,7 @@ whitespace
whitespaces
wikistat
windowFunnel
+winston
wordShingleMinHash
wordShingleMinHashArg
wordShingleMinHashArgCaseInsensitive
diff --git a/sidebars.js b/sidebars.js
index 58228e22fd7..cf1b9ab776e 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -107,16 +107,33 @@ const sidebars = {
{
type: "category",
label: "Observability",
- collapsed: true,
+ collapsed: false,
collapsible: true,
link: { type: "doc", id: "use-cases/observability/index" },
items: [
- "use-cases/observability/introduction",
- "use-cases/observability/schema-design",
- "use-cases/observability/managing-data",
- "use-cases/observability/integrating-opentelemetry",
- "use-cases/observability/grafana",
- "use-cases/observability/demo-application",
+ {
+ type: "category",
+ label: "ClickStack",
+ collapsed: true,
+ collapsible: true,
+ link: { type: "doc", id: "use-cases/observability/clickstack/index" },
+ items: []
+ },
+ {
+ type: "category",
+ label: "Build Your Own",
+ collapsed: true,
+ collapsible: true,
+ link: { type: "doc", id: "use-cases/observability/build-your-own/index" },
+ items: [
+ "use-cases/observability/build-your-own/introduction",
+ "use-cases/observability/build-your-own/schema-design",
+ "use-cases/observability/build-your-own/managing-data",
+ "use-cases/observability/build-your-own/integrating-opentelemetry",
+ "use-cases/observability/build-your-own/grafana",
+ "use-cases/observability/build-your-own/demo-application",
+ ]
+ }
]
},
{
@@ -1536,6 +1553,78 @@ const sidebars = {
],
+ clickstack: [
+ {
+ type: "category",
+ label: "ClickStack",
+ collapsed: false,
+ collapsible: false,
+ link: { type: "doc", id: "use-cases/observability/clickstack/index" },
+ items: [
+ "use-cases/observability/clickstack/overview",
+ "use-cases/observability/clickstack/getting-started",
+ {
+ type: "category",
+ label: "Sample Datasets",
+ collapsed: true,
+ collapsible: true,
+ link: { type: "doc", id: "use-cases/observability/clickstack/example-datasets/index" },
+ items: [
+ {
+ type: "autogenerated",
+ dirName: "use-cases/observability/clickstack/example-datasets",
+ }
+ ]
+ },
+ "use-cases/observability/clickstack/architecture",
+ {
+ type: "category",
+ label: "Deployment",
+ collapsed: true,
+ collapsible: true,
+ link: { type: "doc", id: "use-cases/observability/clickstack/deployment/index" },
+ items: [
+ {
+ type: "autogenerated",
+ dirName: "use-cases/observability/clickstack/deployment",
+ }
+ ]
+ },
+ {
+ type: "category",
+ label: "Ingesting Data",
+ collapsed: true,
+ collapsible: true,
+ link: { type: "doc", id: "use-cases/observability/clickstack/ingesting-data/index" },
+ items: [
+ "use-cases/observability/clickstack/ingesting-data/overview",
+ "use-cases/observability/clickstack/ingesting-data/opentelemetry",
+ "use-cases/observability/clickstack/ingesting-data/collector",
+ "use-cases/observability/clickstack/ingesting-data/kubernetes",
+ "use-cases/observability/clickstack/ingesting-data/schemas",
+ {
+ type: "category",
+ label: "SDKs",
+ collapsed: true,
+ collapsible: true,
+ link: { type: "doc", id: "use-cases/observability/clickstack/ingesting-data/sdks/index" },
+ items: [
+ {
+ type: "autogenerated",
+ dirName: "use-cases/observability/clickstack/ingesting-data/sdks",
+ }
+ ]
+ }
+ ]
+ },
+ "use-cases/observability/clickstack/config",
+ "use-cases/observability/clickstack/search",
+ "use-cases/observability/clickstack/alerts",
+ "use-cases/observability/clickstack/production",
+ ]
+ },
+ ],
+
// Used for generating the top nav menu and secondary nav mobile menu (DocsCategoryDropdown) AND top navigation menu
dropdownCategories: [
{
@@ -1814,6 +1903,59 @@ const sidebars = {
}
]
},
+ {
+ type: "category",
+ label: "ClickStack",
+ description: "ClickStack - The ClickHouse Observability Stack",
+ customProps: {
+ href: "/use-cases/observability/clickstack/overview",
+ sidebar: "clickstack"
+ },
+ items: [
+ {
+ type: "link",
+ label: "Getting Started",
+ description: "Get started with ClickStack",
+ href: "/use-cases/observability/clickstack/getting-started"
+ },
+ {
+ type: "link",
+ label: "Sample Datasets",
+ description: "Learn ClickStack with sample datasets",
+ href: "/use-cases/observability/clickstack/sample-datasets"
+ },
+ {
+ type: "link",
+ label: "Architecture",
+ description: "Familiarize yourself with the ClickStack architecture",
+ href: "/use-cases/observability/clickstack/sample-datasets"
+ },
+ {
+ type: "link",
+ label: "Deployment",
+ description: "Choose a ClickStack deployment mode",
+ href: "/use-cases/observability/clickstack/deployment"
+ },
+ {
+ type: "link",
+ label: "Ingesting Data",
+ description: "Ingest data into ClickStack",
+ href: "/use-cases/observability/clickstack/ingesting-data"
+ },
+ {
+ type: "link",
+ label: "Configuration Options",
+ description: "Deploy ClickStack in production",
+ href: "/use-cases/observability/clickstack/production"
+ },
+ {
+ type: "link",
+ label: "Production",
+ description: "Deploy ClickStack in production",
+ href: "/use-cases/observability/clickstack/production"
+ }
+ ]
+ },
{
type: "category",
label: "chDB",
diff --git a/src/css/custom.scss b/src/css/custom.scss
index 5211983f004..b77201d2fb9 100644
--- a/src/css/custom.scss
+++ b/src/css/custom.scss
@@ -494,15 +494,15 @@ li.theme-doc-sidebar-item-category-level-1 {
}
.theme-doc-sidebar-item-link-level-5 {
- padding-left: 1.9rem;
+ padding-left: 1.2rem;
}
.theme-doc-sidebar-item-category-level-5 {
- padding-left: 1.9rem;
+ padding-left: 1.2rem;
}
.theme-doc-sidebar-item-link-level-6 {
- padding-left: 2.2rem;
+ padding-left: 1.2rem;
}
.theme-doc-sidebar-item-category-level-6 {
diff --git a/static/images/use-cases/observability/add_connection.png b/static/images/use-cases/observability/add_connection.png
new file mode 100644
index 00000000000..3a07f09f529
Binary files /dev/null and b/static/images/use-cases/observability/add_connection.png differ
diff --git a/static/images/use-cases/observability/clickstack-architecture.png b/static/images/use-cases/observability/clickstack-architecture.png
new file mode 100644
index 00000000000..97390f1a4f4
Binary files /dev/null and b/static/images/use-cases/observability/clickstack-architecture.png differ
diff --git a/static/images/use-cases/observability/clickstack-simple-architecture.png b/static/images/use-cases/observability/clickstack-simple-architecture.png
new file mode 100644
index 00000000000..8946d40e2d6
Binary files /dev/null and b/static/images/use-cases/observability/clickstack-simple-architecture.png differ
diff --git a/static/images/use-cases/observability/clickstack-with-gateways.png b/static/images/use-cases/observability/clickstack-with-gateways.png
new file mode 100644
index 00000000000..5259c7f3614
Binary files /dev/null and b/static/images/use-cases/observability/clickstack-with-gateways.png differ
diff --git a/static/images/use-cases/observability/clickstack-with-kafka.png b/static/images/use-cases/observability/clickstack-with-kafka.png
new file mode 100644
index 00000000000..6759321fdb1
Binary files /dev/null and b/static/images/use-cases/observability/clickstack-with-kafka.png differ
diff --git a/static/images/use-cases/observability/connect-cloud-creds.png b/static/images/use-cases/observability/connect-cloud-creds.png
new file mode 100644
index 00000000000..5bb5e50d4e1
Binary files /dev/null and b/static/images/use-cases/observability/connect-cloud-creds.png differ
diff --git a/static/images/use-cases/observability/connect-cloud.png b/static/images/use-cases/observability/connect-cloud.png
new file mode 100644
index 00000000000..22ca86b7de2
Binary files /dev/null and b/static/images/use-cases/observability/connect-cloud.png differ
diff --git a/static/images/use-cases/observability/copy_api_key.png b/static/images/use-cases/observability/copy_api_key.png
new file mode 100644
index 00000000000..dd4911ba75f
Binary files /dev/null and b/static/images/use-cases/observability/copy_api_key.png differ
diff --git a/static/images/use-cases/observability/create_cloud_connection.png b/static/images/use-cases/observability/create_cloud_connection.png
new file mode 100644
index 00000000000..d791fc4cde9
Binary files /dev/null and b/static/images/use-cases/observability/create_cloud_connection.png differ
diff --git a/static/images/use-cases/observability/created_sources.png b/static/images/use-cases/observability/created_sources.png
new file mode 100644
index 00000000000..9ba9f1f526d
Binary files /dev/null and b/static/images/use-cases/observability/created_sources.png differ
diff --git a/static/images/use-cases/observability/delete_connection.png b/static/images/use-cases/observability/delete_connection.png
new file mode 100644
index 00000000000..4a4660d231b
Binary files /dev/null and b/static/images/use-cases/observability/delete_connection.png differ
diff --git a/static/images/use-cases/observability/delete_source.png b/static/images/use-cases/observability/delete_source.png
new file mode 100644
index 00000000000..d3a3f3b97d1
Binary files /dev/null and b/static/images/use-cases/observability/delete_source.png differ
diff --git a/static/images/use-cases/observability/edit_cloud_connection.png b/static/images/use-cases/observability/edit_cloud_connection.png
new file mode 100644
index 00000000000..5108a4d369b
Binary files /dev/null and b/static/images/use-cases/observability/edit_cloud_connection.png differ
diff --git a/static/images/use-cases/observability/edit_connection.png b/static/images/use-cases/observability/edit_connection.png
new file mode 100644
index 00000000000..cebba38c222
Binary files /dev/null and b/static/images/use-cases/observability/edit_connection.png differ
diff --git a/static/images/use-cases/observability/hyperdx-1.png b/static/images/use-cases/observability/hyperdx-1.png
new file mode 100644
index 00000000000..abf7e4fdbe1
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-1.png differ
diff --git a/static/images/use-cases/observability/hyperdx-10.png b/static/images/use-cases/observability/hyperdx-10.png
new file mode 100644
index 00000000000..7de703ab623
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-10.png differ
diff --git a/static/images/use-cases/observability/hyperdx-11.png b/static/images/use-cases/observability/hyperdx-11.png
new file mode 100644
index 00000000000..7d2f18c16ae
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-11.png differ
diff --git a/static/images/use-cases/observability/hyperdx-12.png b/static/images/use-cases/observability/hyperdx-12.png
new file mode 100644
index 00000000000..ad5e97045a0
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-12.png differ
diff --git a/static/images/use-cases/observability/hyperdx-13.png b/static/images/use-cases/observability/hyperdx-13.png
new file mode 100644
index 00000000000..7661352de61
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-13.png differ
diff --git a/static/images/use-cases/observability/hyperdx-14.png b/static/images/use-cases/observability/hyperdx-14.png
new file mode 100644
index 00000000000..ee60e8783cc
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-14.png differ
diff --git a/static/images/use-cases/observability/hyperdx-15.png b/static/images/use-cases/observability/hyperdx-15.png
new file mode 100644
index 00000000000..287e29ca159
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-15.png differ
diff --git a/static/images/use-cases/observability/hyperdx-16.png b/static/images/use-cases/observability/hyperdx-16.png
new file mode 100644
index 00000000000..4412c746d9e
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-16.png differ
diff --git a/static/images/use-cases/observability/hyperdx-17.png b/static/images/use-cases/observability/hyperdx-17.png
new file mode 100644
index 00000000000..9684d52ee30
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-17.png differ
diff --git a/static/images/use-cases/observability/hyperdx-18.png b/static/images/use-cases/observability/hyperdx-18.png
new file mode 100644
index 00000000000..0225b98fec8
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-18.png differ
diff --git a/static/images/use-cases/observability/hyperdx-19.png b/static/images/use-cases/observability/hyperdx-19.png
new file mode 100644
index 00000000000..7101146112e
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-19.png differ
diff --git a/static/images/use-cases/observability/hyperdx-2.png b/static/images/use-cases/observability/hyperdx-2.png
new file mode 100644
index 00000000000..6c9c0bc4eee
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-2.png differ
diff --git a/static/images/use-cases/observability/hyperdx-20.png b/static/images/use-cases/observability/hyperdx-20.png
new file mode 100644
index 00000000000..e6692e068f0
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-20.png differ
diff --git a/static/images/use-cases/observability/hyperdx-21.png b/static/images/use-cases/observability/hyperdx-21.png
new file mode 100644
index 00000000000..13667a0d3ba
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-21.png differ
diff --git a/static/images/use-cases/observability/hyperdx-22.png b/static/images/use-cases/observability/hyperdx-22.png
new file mode 100644
index 00000000000..3da563a3f81
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-22.png differ
diff --git a/static/images/use-cases/observability/hyperdx-23.png b/static/images/use-cases/observability/hyperdx-23.png
new file mode 100644
index 00000000000..9f9f42fea17
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-23.png differ
diff --git a/static/images/use-cases/observability/hyperdx-24.png b/static/images/use-cases/observability/hyperdx-24.png
new file mode 100644
index 00000000000..90ebf53c7e8
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-24.png differ
diff --git a/static/images/use-cases/observability/hyperdx-25.png b/static/images/use-cases/observability/hyperdx-25.png
new file mode 100644
index 00000000000..9fcc65c0c88
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-25.png differ
diff --git a/static/images/use-cases/observability/hyperdx-26.png b/static/images/use-cases/observability/hyperdx-26.png
new file mode 100644
index 00000000000..1ab65282139
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-26.png differ
diff --git a/static/images/use-cases/observability/hyperdx-27.png b/static/images/use-cases/observability/hyperdx-27.png
new file mode 100644
index 00000000000..40b7a40e6f4
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-27.png differ
diff --git a/static/images/use-cases/observability/hyperdx-3.png b/static/images/use-cases/observability/hyperdx-3.png
new file mode 100644
index 00000000000..4f793788b12
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-3.png differ
diff --git a/static/images/use-cases/observability/hyperdx-4.png b/static/images/use-cases/observability/hyperdx-4.png
new file mode 100644
index 00000000000..9f75f337c44
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-4.png differ
diff --git a/static/images/use-cases/observability/hyperdx-5.png b/static/images/use-cases/observability/hyperdx-5.png
new file mode 100644
index 00000000000..600c0a2c22d
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-5.png differ
diff --git a/static/images/use-cases/observability/hyperdx-6.png b/static/images/use-cases/observability/hyperdx-6.png
new file mode 100644
index 00000000000..96600bdcb0b
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-6.png differ
diff --git a/static/images/use-cases/observability/hyperdx-7.png b/static/images/use-cases/observability/hyperdx-7.png
new file mode 100644
index 00000000000..b5cd2a4da87
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-7.png differ
diff --git a/static/images/use-cases/observability/hyperdx-8.png b/static/images/use-cases/observability/hyperdx-8.png
new file mode 100644
index 00000000000..f445df863eb
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-8.png differ
diff --git a/static/images/use-cases/observability/hyperdx-9.png b/static/images/use-cases/observability/hyperdx-9.png
new file mode 100644
index 00000000000..d5ccca07fa3
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-9.png differ
diff --git a/static/images/use-cases/observability/hyperdx-cloud.png b/static/images/use-cases/observability/hyperdx-cloud.png
new file mode 100644
index 00000000000..1b8ddd4c700
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-cloud.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/architecture.png b/static/images/use-cases/observability/hyperdx-demo/architecture.png
new file mode 100644
index 00000000000..b2316a2c9cc
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/architecture.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/demo_connection.png b/static/images/use-cases/observability/hyperdx-demo/demo_connection.png
new file mode 100644
index 00000000000..eb091a4fb41
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/demo_connection.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/demo_sources.png b/static/images/use-cases/observability/hyperdx-demo/demo_sources.png
new file mode 100644
index 00000000000..c20c5ad4cd3
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/demo_sources.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/edit_demo_connection.png b/static/images/use-cases/observability/hyperdx-demo/edit_demo_connection.png
new file mode 100644
index 00000000000..d53df815395
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/edit_demo_connection.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/edit_demo_source.png b/static/images/use-cases/observability/hyperdx-demo/edit_demo_source.png
new file mode 100644
index 00000000000..35796f81e62
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/edit_demo_source.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_10.png b/static/images/use-cases/observability/hyperdx-demo/step_10.png
new file mode 100644
index 00000000000..aa551f37081
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_10.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_11.png b/static/images/use-cases/observability/hyperdx-demo/step_11.png
new file mode 100644
index 00000000000..469fd95a541
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_11.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_12.png b/static/images/use-cases/observability/hyperdx-demo/step_12.png
new file mode 100644
index 00000000000..012c2fb0241
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_12.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_13.png b/static/images/use-cases/observability/hyperdx-demo/step_13.png
new file mode 100644
index 00000000000..957914f9d44
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_13.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_14.png b/static/images/use-cases/observability/hyperdx-demo/step_14.png
new file mode 100644
index 00000000000..d0ae0e9aa2e
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_14.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_15.png b/static/images/use-cases/observability/hyperdx-demo/step_15.png
new file mode 100644
index 00000000000..38adbb907bc
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_15.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_16.png b/static/images/use-cases/observability/hyperdx-demo/step_16.png
new file mode 100644
index 00000000000..13d23f3bd63
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_16.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_17.png b/static/images/use-cases/observability/hyperdx-demo/step_17.png
new file mode 100644
index 00000000000..8a3c061aae4
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_17.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_18.png b/static/images/use-cases/observability/hyperdx-demo/step_18.png
new file mode 100644
index 00000000000..eb556a08c8e
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_18.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_19.png b/static/images/use-cases/observability/hyperdx-demo/step_19.png
new file mode 100644
index 00000000000..2321228c5d1
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_19.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_1a.png b/static/images/use-cases/observability/hyperdx-demo/step_1a.png
new file mode 100644
index 00000000000..9b0d7991dd7
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_1a.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_2.png b/static/images/use-cases/observability/hyperdx-demo/step_2.png
new file mode 100644
index 00000000000..b338429f89f
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_2.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_20.png b/static/images/use-cases/observability/hyperdx-demo/step_20.png
new file mode 100644
index 00000000000..3bc9d62502e
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_20.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_21.png b/static/images/use-cases/observability/hyperdx-demo/step_21.png
new file mode 100644
index 00000000000..c133a5bdd72
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_21.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_22.png b/static/images/use-cases/observability/hyperdx-demo/step_22.png
new file mode 100644
index 00000000000..1e78e85b631
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_22.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_23.png b/static/images/use-cases/observability/hyperdx-demo/step_23.png
new file mode 100644
index 00000000000..0421672c007
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_23.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_24.png b/static/images/use-cases/observability/hyperdx-demo/step_24.png
new file mode 100644
index 00000000000..cc43b7a0687
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_24.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_3.png b/static/images/use-cases/observability/hyperdx-demo/step_3.png
new file mode 100644
index 00000000000..185df1df07f
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_3.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_4.png b/static/images/use-cases/observability/hyperdx-demo/step_4.png
new file mode 100644
index 00000000000..71ccc9d9ec4
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_4.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_5.png b/static/images/use-cases/observability/hyperdx-demo/step_5.png
new file mode 100644
index 00000000000..79357d44490
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_5.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_6.png b/static/images/use-cases/observability/hyperdx-demo/step_6.png
new file mode 100644
index 00000000000..02e86a1b031
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_6.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_7.png b/static/images/use-cases/observability/hyperdx-demo/step_7.png
new file mode 100644
index 00000000000..717d9174da4
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_7.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_8.png b/static/images/use-cases/observability/hyperdx-demo/step_8.png
new file mode 100644
index 00000000000..6107e63ac9c
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_8.png differ
diff --git a/static/images/use-cases/observability/hyperdx-demo/step_9.png b/static/images/use-cases/observability/hyperdx-demo/step_9.png
new file mode 100644
index 00000000000..a685eab3772
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-demo/step_9.png differ
diff --git a/static/images/use-cases/observability/hyperdx-landing.png b/static/images/use-cases/observability/hyperdx-landing.png
new file mode 100644
index 00000000000..aae64ec4282
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-landing.png differ
diff --git a/static/images/use-cases/observability/hyperdx-login.png b/static/images/use-cases/observability/hyperdx-login.png
new file mode 100644
index 00000000000..dd34c2f2620
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-login.png differ
diff --git a/static/images/use-cases/observability/hyperdx-logs.png b/static/images/use-cases/observability/hyperdx-logs.png
new file mode 100644
index 00000000000..7f771d98a44
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx-logs.png differ
diff --git a/static/images/use-cases/observability/hyperdx.png b/static/images/use-cases/observability/hyperdx.png
new file mode 100644
index 00000000000..d17c586cd55
Binary files /dev/null and b/static/images/use-cases/observability/hyperdx.png differ
diff --git a/static/images/use-cases/observability/ingestion-keys.png b/static/images/use-cases/observability/ingestion-keys.png
new file mode 100644
index 00000000000..7570971c4ac
Binary files /dev/null and b/static/images/use-cases/observability/ingestion-keys.png differ
diff --git a/static/images/use-cases/observability/search_alert.png b/static/images/use-cases/observability/search_alert.png
new file mode 100644
index 00000000000..7c38e9f58c1
Binary files /dev/null and b/static/images/use-cases/observability/search_alert.png differ
diff --git a/static/images/use-cases/observability/simple-architecture-with-flow.png b/static/images/use-cases/observability/simple-architecture-with-flow.png
new file mode 100644
index 00000000000..0a57ce07001
Binary files /dev/null and b/static/images/use-cases/observability/simple-architecture-with-flow.png differ