Commit

Merge branch 'main' into oss-version
TomaszGaweda authored Nov 14, 2024
2 parents 967e0de + e1da166 commit ed1a2f4
Showing 67 changed files with 559 additions and 332 deletions.
2 changes: 1 addition & 1 deletion .github/CODEOWNERS
Original file line number Diff line number Diff line change
@@ -1,3 +1,3 @@
# These owners will be the default owners for everything in
# the repo
* @oliverhowell @amandalindsay
* @oliverhowell @amandalindsay @Rob-Hazelcast
2 changes: 1 addition & 1 deletion .github/workflows/action-updater.yml
@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v4.1.4
- uses: actions/checkout@v4
with:
# [Required] Access token with `workflow` scope.
token: ${{ secrets.ACTION_UPDATER }}
4 changes: 2 additions & 2 deletions .github/workflows/adoc-html.yml
@@ -9,8 +9,8 @@ jobs:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v4.1.4
- uses: actions/setup-node@v4.0.2
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
- name: Convert adoc
2 changes: 1 addition & 1 deletion .github/workflows/backport-5-0.yml
@@ -12,7 +12,7 @@ jobs:
steps:

- name: checkout
uses: actions/checkout@v4.1.4
uses: actions/checkout@v4
with:
fetch-depth: 0

2 changes: 1 addition & 1 deletion .github/workflows/backport-5-1.yml
@@ -12,7 +12,7 @@ jobs:
steps:

- name: checkout
uses: actions/checkout@v4.1.4
uses: actions/checkout@v4
with:
fetch-depth: 0

2 changes: 1 addition & 1 deletion .github/workflows/backport-5-2.yml
@@ -12,7 +12,7 @@ jobs:
steps:

- name: checkout
uses: actions/checkout@v4.1.4
uses: actions/checkout@v4
with:
fetch-depth: 0

2 changes: 1 addition & 1 deletion .github/workflows/backport-5-3.yml
@@ -12,7 +12,7 @@ jobs:
steps:

- name: checkout
uses: actions/checkout@v4.1.4
uses: actions/checkout@v4
with:
fetch-depth: 0

2 changes: 1 addition & 1 deletion .github/workflows/backport-5-4.yml
@@ -12,7 +12,7 @@ jobs:
steps:

- name: checkout
uses: actions/checkout@v4.1.4
uses: actions/checkout@v4
with:
fetch-depth: 0

2 changes: 1 addition & 1 deletion .github/workflows/backport.yml
@@ -12,7 +12,7 @@ jobs:
steps:

- name: checkout
uses: actions/checkout@v4.1.4
uses: actions/checkout@v4
with:
fetch-depth: 0

2 changes: 1 addition & 1 deletion .github/workflows/forwardport.yml
@@ -12,7 +12,7 @@ jobs:
steps:

- name: checkout
uses: actions/checkout@v4.1.4
uses: actions/checkout@v4
with:
fetch-depth: 0

2 changes: 1 addition & 1 deletion .github/workflows/to-plain-html.yml
@@ -8,7 +8,7 @@ jobs:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v4.1.4
- uses: actions/checkout@v4
with:
token: ${{ secrets.TO_HTML }}
- name: Asciidoc to html
4 changes: 2 additions & 2 deletions .github/workflows/validate.yml
@@ -13,8 +13,8 @@ jobs:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v4.1.4
- uses: actions/setup-node@v4.0.2
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
- name: Check for broken internal links
3 changes: 2 additions & 1 deletion antora-playbook-local.yml
@@ -7,6 +7,7 @@ site:
docsearch_id: 'QK2EAH8GB0'
docsearch_api: 'ef7bd9485eafbd75d6e8425949eda1f5'
docsearch_index: 'prod_hazelcast_docs'
ai_search_id: '6b326171-dd1e-40c6-a948-1f9bb6c0ed52'
urls:
html_extension_style: drop
content:
@@ -19,7 +20,7 @@ content:
# start_path: docs/rest
ui:
bundle:
url: ../hazelcast-docs-ui/build/ui-bundle.zip
url: https://github.com/hazelcast/hazelcast-docs-ui/releases/latest/download/ui-bundle.zip #../hazelcast-docs-ui/build/ui-bundle.zip
snapshot: true
antora:
extensions:
1 change: 1 addition & 0 deletions antora-playbook.yml
@@ -7,6 +7,7 @@ site:
docsearch_id: 'QK2EAH8GB0'
docsearch_api: 'ef7bd9485eafbd75d6e8425949eda1f5'
docsearch_index: 'prod_hazelcast_docs'
ai_search_id: '6b326171-dd1e-40c6-a948-1f9bb6c0ed52'
content:
sources:
- url: .
Binary file added docs/modules/ROOT/images/Ask_AI_JDK.png
Binary file added docs/modules/ROOT/images/Ask_AI_demo.gif
Binary file added docs/modules/ROOT/images/ask_ai.png
Binary file added docs/modules/ROOT/images/ask_ai_search.png
1 change: 1 addition & 0 deletions docs/modules/ROOT/nav.adoc
@@ -1,6 +1,7 @@
.Get started
* xref:whats-new.adoc[What's new]
* xref:what-is-hazelcast.adoc[What is Hazelcast Platform]
* xref:ask-ai.adoc[]
* xref:getting-started:editions.adoc[Available versions]
* Start a local cluster
** xref:getting-started:get-started-docker.adoc[Docker]
33 changes: 33 additions & 0 deletions docs/modules/ROOT/pages/ask-ai.adoc
@@ -0,0 +1,33 @@
= Ask AI
:description: Use our Ask AI feature to get instant answers to technical questions.

== Overview
image:ask_ai.png[Ask AI,role="related thumb right"] You can use the *Ask AI* feature available on every docs page to get answers to your technical questions. The chatbot has indexed all Hazelcast documentation, including standalone API docs microsites, and selected hazelcast.com and support content. The AI uses the information from the knowledge sources to answer questions and does not make up information if it's not listed in the sources.

To use Ask AI, click the *Ask AI* button in the bottom right of any docs.hazelcast.com page.

Ask AI currently indexes the following sources:

- docs.hazelcast.com (the /latest version for all products and tools)
- API docs for languages and clients (Javadoc, C++, .NET/C#, Node.js, Python, Go)
- hazelcast.com/blog (from the last two years)
- hazelcast.com/developers/clients
- support.hazelcast.com
- Hazelcast Code Samples repository

IMPORTANT: This is an experimental custom LLM for answering technical questions about Hazelcast. Answers are based *only* on Hazelcast documentation and support sources, but may not be fully accurate so please use your best judgement.

== Tips

- Rate the answers using the voting buttons, and give detailed feedback if an answer is incorrect or could be improved; this feedback helps us improve Ask AI!
- You can ask follow-up questions or clarify the scope or use case to focus the conversation and get the information you're looking for.
- You can also ask the chatbot to identify problems in code or configuration files, or create, for example, a pom.xml with certain features enabled.
- Answers are all based on English-language content (see the sources listed above), but you can ask and receive answers in other languages.
- If the answer is not documented in the indexed sources, Ask AI will be unable to provide one (feedback is especially welcome where you would expect the answer to be documented and indexed).

== Search

You can also use the *Search* tab within Ask AI to search the same sources, and toggle between Search and Ask AI mode.

image:ask_ai_search.png[Search with Ask AI]

2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/production-checklist.adoc
@@ -53,7 +53,7 @@ Consider the following for VMWare ESX:
* Network performance issues, including timeouts, might occur with LRO (Large Receive Offload)
enabled on Linux virtual machines and ESXi/ESX hosts. We have specifically had
this reported in VMware environments, but it could potentially impact other environments as well.
We strongly recommend disabling LRO when running in virtualized environments, see https://kb.vmware.com/s/article/1027511.
Although this issue is observed only under certain conditions, we strongly recommend either using the e1000 device driver or disabling LRO when running in virtualized environments. For more information, see https://kb.vmware.com/s/article/1027511.

=== Windows

2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/system-properties.adoc
@@ -1068,7 +1068,7 @@ These operations log a warning and are shown in the Management Center with detai
|`hazelcast.socket.bind.any`
| true
| bool
| Bind both server-socket and client-sockets to any local interface.
| Bind both server-socket and client-sockets to any local interface. For ZIP and TAR distributions, this is overridden by false in configuration files.

|`hazelcast.socket.buffer.direct`
| false
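As a sketch, a property like this can be overridden declaratively in `hazelcast.yaml`, or on the JVM command line with `-Dhazelcast.socket.bind.any=false` (illustrative; check the configuration files packaged with your distribution):

```yaml
hazelcast:
  properties:
    # Bind sockets only to the configured interface instead of all local interfaces
    hazelcast.socket.bind.any: false
```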
8 changes: 8 additions & 0 deletions docs/modules/ROOT/pages/whats-new.adoc
@@ -6,6 +6,14 @@ NOTE: The What's New page for Hazelcast Platform 6.0-SNAPSHOT will be available

{description}

== Get instant answers with new Hazelcast Ask AI

On every docs page you can now click the *Ask AI* button in the bottom right and get instant answers to all your questions about Hazelcast Platform, and our tools and clients. Ask AI is powered by the entire suite of Hazelcast documentation, including the latest docs from docs.hazelcast.com, the various API docs microsites, the latest official blogs, and a selection of code samples and support knowledgebase articles.

image:Ask_AI_JDK.png[Ask AI example]

Give it a try now. For more information, see xref:ask-ai.adoc[].

== New Vector Collection for building semantic search (BETA)
[.enterprise]*Enterprise*

7 changes: 3 additions & 4 deletions docs/modules/clients/pages/memcache.adoc
@@ -1,12 +1,11 @@
= Memcache Client

NOTE: Hazelcast Memcache Client only supports ASCII protocol. Binary Protocol is not supported.

A Memcache client written in any language can talk directly to a Hazelcast cluster.
No additional configuration is required.

To be able to use a Memcache client, you must enable
the Memcache client request listener service using either one of the following configuration options:
NOTE: Hazelcast Memcache Client only supports ASCII protocol. Binary Protocol is not supported.

To be able to use a Memcache client, you must enable the Memcache client request listener service using either one of the following configuration options:

1 - Using the `network` configuration element:

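As an illustrative sketch of the `network` option (element names follow Hazelcast's declarative configuration; verify against your version's configuration reference), enabling the Memcache request listener in `hazelcast.yaml` looks like:

```yaml
hazelcast:
  network:
    # Accept Memcache text (ASCII) protocol connections on this member
    memcache-protocol:
      enabled: true
```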
3 changes: 1 addition & 2 deletions docs/modules/cluster-performance/pages/data-affinity.adoc
@@ -1,6 +1,5 @@
= Data Affinity
:description: Data affinity ensures that related entries exist on the same member. If related data is on the same member, operations can
be executed without the cost of extra network calls and extra wire data. This feature is provided by using the same partition keys for related data.
:description: Data affinity ensures that related entries exist on the same member. If related data is on the same member, operations can be executed without the cost of extra network calls and extra wire data. This feature is provided by using the same partition keys for related data.

{description}

2 changes: 1 addition & 1 deletion docs/modules/clusters/partials/ucn-migrate-tip.adoc
@@ -1 +1 @@
CAUTION: {ucd} has been deprecated and will be removed in the next major version. To continue deploying your user code after this time, {open-source-product-name} users can either upgrade to {enterprise-product-name}, or add their resources to the Hazelcast member class paths. Hazelcast recommends that {enterprise-product-name} users migrate their user code to use {ucn}. For further information on migrating from {ucd} to {ucn}, see the xref:clusters:ucn-migrate-ucd.adoc[] topic.
CAUTION: {ucd} has been deprecated and will be removed in the next major version. To continue deploying your user code after this time, {open-source-product-name} users can either upgrade to {enterprise-product-name}, or add their resources to the Hazelcast member class paths. Hazelcast recommends that {enterprise-product-name} users migrate their user code to use {ucn} for all purposes other than Jet stream processing. For further information on migrating from {ucd} to {ucn}, see xref:clusters:ucn-migrate-ucd.adoc[].
2 changes: 1 addition & 1 deletion docs/modules/configuration/pages/dynamic-config.adoc
@@ -80,7 +80,7 @@ Dynamic configuration is supported for the following data structures:
- CardinalityEstimator
- PNCounter
- FlakeIdGenerator
- ExternalDataStore
- DataConnection

[[persistence]]
== Persisting Dynamic Configuration
2 changes: 1 addition & 1 deletion docs/modules/data-structures/pages/backing-up-maps.adoc
@@ -143,7 +143,7 @@ hazelcast:

Hazelcast offers several features for backing up your in-memory maps to files located on the local cluster member disk, in persistent memory, or to a system of record such as an external database.

* xref:storage:persistence.adoc[Persistence] provides for data recovery in the event of a planned or unplanned complete cluster shutdown. When enabled, each cluster member periodically writes a copy of all local map data to either the local disk drive or to persistent memory. When the cluster is restarted, each member reads the stored data back into memory. If all cluster members successfully recover the stored data, cluster operations resume as usual.
* xref:storage:persistence.adoc[Persistence] provides for data recovery in the event of a planned or unplanned complete cluster shutdown. When enabled, each cluster member periodically writes a copy of all local map data to the local disk drive. When the cluster is restarted, each member reads the stored data back into memory. If all cluster members successfully recover the stored data, cluster operations resume as usual.

* xref:mapstore:working-with-external-data.adoc[MapStore] provides for automatic write-through of map changes to an external system, and automatic loading of data from that external system when an application calls a map. Although this can function as a data safety feature, the primary purpose of MapStore is to maintain synchronization between a system of record and the in-memory map.

18 changes: 16 additions & 2 deletions docs/modules/data-structures/pages/map-config.adoc
@@ -4,6 +4,7 @@

{description}

[[map-configuration-defaults]]
== Hazelcast Map Configuration Defaults

The `hazelcast.xml`/`hazelcast.yaml` configuration included with your Hazelcast distribution includes the following default settings for maps.
@@ -33,9 +34,22 @@ For details on map backups, refer to xref:backing-up-maps.adoc[].

For details on in-memory format, refer to xref:setting-data-format.adoc[].

== Modifying the Default Configuration
== The Default (Fallback) Map Configuration
When a map is created, if the map name matches an entry in the `hazelcast.xml`/`hazelcast.yaml` file, the values in the matching entry are used to overwrite the initial values
discussed in the <<map-configuration-defaults,Map Configuration Defaults>> section.

You can create a default configuration for all maps for your environment by modifying the map configuration block named "default" in your `hazelcast.xml`/`hazelcast.yaml` file. In the following example, we set expiration timers for map entries. Map entries that are idle for an hour will be marked as eligible for removal if the cluster begins to run out of memory. Any map entry older than six hours will be marked as eligible for removal.
Maps that do not have any configuration defined use the default configuration. If you want to set a configuration that is valid for all maps, name your configuration `default`. A user-defined default configuration applies to every map that does not have a specific custom map configuration defined with the map's name. You can also use wildcards to associate your configuration with multiple map names. See the link:https://docs.hazelcast.com/hazelcast/5.5/configuration/using-wildcards[configuration documentation] for more information about wildcards.

When a map name does not match any entry in the `hazelcast.xml`/`hazelcast.yaml` file then:

- If the `default` map configuration exists, the values under this entry are used to overwrite initial values. Therefore, `default` serves as a fallback.

- If a `default` map configuration does not exist, the map is created with initial values as discussed in <<map-configuration-defaults,Map Configuration Defaults>>.
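
A minimal sketch of this fallback plus a wildcard entry in `hazelcast.yaml` (map names and values here are illustrative, not recommendations):

```yaml
hazelcast:
  map:
    default:                    # fallback for any map without a matching entry
      backup-count: 1
      max-idle-seconds: 3600    # idle entries become eviction-eligible after 1 hour
    session-*:                  # wildcard: matches session-cache, session-tokens, ...
      time-to-live-seconds: 21600
```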


== Modifying the Default (Fallback) Configuration

In the following example, we set expiration timers for dynamically created maps that lack a named configuration block. Map entries that are idle for an hour will be marked as eligible for removal if the cluster begins to run out of memory. Any map entry older than six hours will be marked as eligible for removal.

For more on entry expiration, go to xref:managing-map-memory.adoc[Managing Map Memory].

2 changes: 1 addition & 1 deletion docs/modules/getting-started/pages/support.adoc
@@ -18,7 +18,7 @@ A support subscription from Hazelcast will allow you to get the most value out o
selection of Hazelcast. Our customers benefit from rapid response times to technical
support inquiries, access to critical software patches, and other services which
will help you achieve increased productivity and quality. Learn more about Hazelcast support subscriptions:
https://hazelcast.com/pricing/?utm_source=docs-website[hazelcast.com/pricing]
https://hazelcast.com/pricing/?utm_source=docs-website[https://hazelcast.com/pricing/]

If your organization subscribes to Hazelcast support,
and you already have an account setup, you can login to your account and open
8 changes: 4 additions & 4 deletions docs/modules/integrate/pages/cdc-connectors.adoc
@@ -230,7 +230,7 @@ Each database type has its own database type-to-struct type mappings. For specif
|io.debezium.time.Date|java.time.LocalDate / java.util.Date / String `yyyy-MM-dd`
|io.debezium.time.Time|java.time.Duration / String ISO-8601 `PnDTnHnMn.nS`

.5+|INT64
.6+|INT64
|-|long/Long
|io.debezium.time.Timestamp|java.time.Instant / String `yyyy-MM-dd HH:mm:ss.SSS`
|io.debezium.time.MicroTimestamp|java.time.Instant / String `yyyy-MM-dd HH:mm:ss.SSS`
@@ -243,6 +254,8 @@ Each database type has its own database type-to-struct type mappings. For specif
|BOOLEAN|-|boolean/Boolean / String
|STRING|-|String

|===

The `RecordPart#value` field contains Debezium's message in a JSON format. This JSON format uses string as date representation,
instead of ints, which are standard in Debezium but harder to work with.

@@ -252,13 +254,11 @@ We strongly recommend using `time.precision.mode=adaptive` (default).
Using `time.precision.mode=connect` uses `java.util.Date` to represent dates, time, etc. and is less precise.
====

|===

== Migration tips

Hazelcast {open-source-product-name} has a Debezium CDC connector, but it's based on an older version of Debezium.
Migration to the new connector is straightforward but be aware of the following changes:

* You should use the `com.hazelcast.enterprise.jet.cdc` package instead of `com.hazelcast.jet.cdc`.
* Artifact names are now `hazelcast-enterprise-cdc-debezium`, `hazelcast-enterprise-cdc-mysql` and `hazelcast-enterprise-cdc-postgres` (instead of `hazelcast-jet-...`).
* Debezium renamed certain terms, which we have also replicated in our code. For example, `include list` replaces `whitelist`, `exclude list` replaces `blacklist`. This means, for example, you need to use `setTableIncludeList` instead of `setTableWhitelist`. For more detail on new Debezium names, see their link:https://debezium.io/documentation/reference/stable/connectors/mysql.html#mysql-connector-properties[MySQL] and link:https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-connector-properties[PostgreSQL] documentation.
* Debezium renamed certain terms, which we have also replicated in our code. For example, `include list` replaces `whitelist`, `exclude list` replaces `blacklist`. This means, for example, you need to use `setTableIncludeList` instead of `setTableWhitelist`. For more detail on new Debezium names, see their link:https://debezium.io/documentation/reference/stable/connectors/mysql.html#mysql-connector-properties[MySQL] and link:https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-connector-properties[PostgreSQL] documentation.
Original file line number Diff line number Diff line change
@@ -16,7 +16,7 @@ image:ROOT:feast_batch.png[Feast batch workflow]

You will need the following ready before starting the tutorial:

* Hazelcast CLC. link:https://docs.hazelcast.com/clc/latest/install-clc[Installation instructions]
* Hazelcast CLC (see link:https://docs.hazelcast.com/clc/latest/install-clc[Install CLC])
* A recent version of Docker and Docker Compose

To set up your project, complete the following steps:
4 changes: 2 additions & 2 deletions docs/modules/integrate/pages/integrate-with-feast.adoc
@@ -114,5 +114,5 @@ To use Feast with Hazelcast, you must do the following:

You can also work through the following tutorials:

* Get Started with Feature Store
* Feature Compute and Transformation
* xref:integrate:feature-engineering-with-feast.adoc[Get started with Feast feature engineering]
* xref:integrate:streaming-features-with-feast.adoc[Get started with Feast streaming]
8 changes: 4 additions & 4 deletions docs/modules/integrate/pages/jdbc-connector.adoc
@@ -45,11 +45,11 @@ p.readFrom(Sources.jdbc(
)).writeTo(Sinks.logger());
```

You can also use a configured xref:external-data-stores:external-data-stores.adoc#defining-external-data-stores[external data store] as a JDBC Source:
You can also use a configured xref:data-connections:data-connections-configuration.adoc[data connection] as a JDBC Source:
```java
Pipeline p = Pipeline.create();
p.readFrom(Sources.jdbc(
externalDataStoreRef("my-database"),
dataConnectionRef("my-database"),
(con, parallelism, index) -> {
PreparedStatement stmt = con.prepareStatement(
"SELECT * FROM person WHERE MOD(id, ?) = ?");
@@ -88,14 +88,14 @@ p.readFrom(KafkaSources.<Person>kafka(.., "people"))
));
```

You can also use a configured xref:external-data-stores:external-data-stores.adoc#defining-external-data-stores[external data store] as a JDBC sink:
You can also use a configured xref:data-connections:data-connections-configuration.adoc[data connection] as a JDBC sink:

```java
Pipeline p = Pipeline.create();
p.readFrom(KafkaSources.<Person>kafka(.., "people"))
.writeTo(Sinks.jdbc(
"REPLACE INTO PERSON (id, name) values(?, ?)",
externalDataStoreRef(JDBC_DATA_STORE),
dataConnectionRef(JDBC_DATA_STORE),
(stmt, item) -> {
stmt.setInt(1, item.id);
stmt.setString(2, item.name);