
chore(deps): update dependency llvm_zstd to v1.5.6 #469

Open · wants to merge 1 commit into base: main
Conversation

renovate bot commented on Apr 1, 2024

This PR contains the following updates:

Package   | Type         | Update | New value | References | Sourcegraph
llvm_zstd | http_archive | patch  | v1.5.6    | source     | code search for "llvm_zstd"

Test plan: CI should pass with updated dependencies. No review required: this is an automated dependency update PR.


Release Notes

facebook/zstd (llvm_zstd)

v1.5.6: Zstandard v1.5.6 - Chrome Edition

Compare Source

This release accompanies the deployment of Google Chrome 123, which introduces zstd-encoding for Web traffic as a preferred option for compressing dynamic content. Because zstd-encoding is still new, Web server support remains limited, so we are publishing an updated Zstandard version to facilitate broader adoption.

Improved latency (time to first byte) for web pages

When zstd compression is used for large documents over the Internet, data is segmented into smaller blocks of up to 128 KB for incremental updates. This matters for applications like Chrome that process parts of documents as they arrive. However, on slow or congested networks, transmission can briefly stall in the middle of a block, delaying updates. To mitigate such scenarios, libzstd introduces the new parameter ZSTD_c_targetCBlockSize, which divides blocks into even smaller segments to speed up delivery of the first bytes. Activating this feature has a cost, both in runtime (roughly -2% speed at level 8) and in a slight compression efficiency decrease (<0.1%), but it offers a desirable latency reduction, notably beneficial in areas with more congested network infrastructure.
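
As a rough illustration of the new parameter (this sketch is not from the release notes), the following C snippet sets ZSTD_c_targetCBlockSize on a compression context before a one-shot ZSTD_compress2 call; the 16 KB target and the tiny payload are placeholder values chosen only for illustration.

```c
/* Hedged sketch: cap emitted compressed block size via ZSTD_c_targetCBlockSize.
 * The 16 KB target and the sample payload are illustrative, not recommendations. */
#include <stdio.h>
#include <stdlib.h>
#include <zstd.h>

int main(void)
{
    ZSTD_CCtx* const cctx = ZSTD_createCCtx();
    if (cctx == NULL) return 1;

    ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 8);
    /* Ask the encoder to keep compressed blocks around 16 KB or less,
     * trading a little speed and ratio for earlier byte delivery. */
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_targetCBlockSize, 16 * 1024);

    const char src[] = "example payload to compress";
    size_t const dstCap = ZSTD_compressBound(sizeof(src));
    void* const dst = malloc(dstCap);
    if (dst == NULL) { ZSTD_freeCCtx(cctx); return 1; }

    size_t const written = ZSTD_compress2(cctx, dst, dstCap, src, sizeof(src));
    if (ZSTD_isError(written))
        fprintf(stderr, "compression error: %s\n", ZSTD_getErrorName(written));
    else
        printf("compressed to %zu bytes\n", written);

    free(dst);
    ZSTD_freeCCtx(cctx);
    return 0;
}
```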

Improved compression ratio at high levels

The highest compression levels (typically 18+) receive a compression ratio improvement. The gain is especially noticeable for 32-bit structures, such as arrays of int. A real-world example is the .debug_str_offsets section of DWARF debug info within ELF executables, mentioned in #2832, for which compression effectiveness increases by +35%. It is not rare for files or objects to contain sections of 32-bit structures, resulting in corresponding compression ratio improvements.
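
To make the "arrays of int" case concrete, here is a hedged C sketch (not part of the release notes) that compresses a synthetic array of 32-bit integers at level 19; the array contents and sizes are invented purely for illustration.

```c
/* Sketch: high-level (19) compression of a synthetic array of 32-bit integers,
 * mimicking the kind of data described above. Values and sizes are arbitrary. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <zstd.h>

int main(void)
{
    enum { N = 1 << 16 };
    uint32_t* const values = malloc(N * sizeof *values);
    if (values == NULL) return 1;
    for (size_t i = 0; i < N; i++)
        values[i] = (uint32_t)(1000 + i * 4);   /* slowly varying 32-bit offsets */

    size_t const srcSize = N * sizeof *values;
    size_t const dstCap  = ZSTD_compressBound(srcSize);
    void* const dst = malloc(dstCap);
    if (dst == NULL) { free(values); return 1; }

    size_t const written = ZSTD_compress(dst, dstCap, values, srcSize, 19);
    if (!ZSTD_isError(written))
        printf("ratio: %.2f\n", (double)srcSize / (double)written);

    free(dst);
    free(values);
    return 0;
}
```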

Granular binary size selection

libzstd provides build customization, including options to compile only the compression or decompression modules, minimizing binary size. Enhanced in v1.5.6 (source), it now allows for even finer control by enabling selective inclusion or exclusion of specific components within these modules. This advancement aids applications needing precise binary size management.

Miscellaneous Enhancements

This release includes various minor enhancements and bug fixes that improve the user experience. Key updates include an expanded list of recognized compressed file suffixes for the --exclude-compressed flag, improving efficiency by skipping content presumed incompressible. Furthermore, compatibility has been broadened to additional chipsets (sparc64, ARM64EC, risc-v) and operating systems (QNX, AIX, Solaris, HP-UX).

Change Log

api: Promote ZSTD_c_targetCBlockSize to Stable API by @​felixhandte
api: new experimental ZSTD_d_maxBlockSize parameter, to reduce streaming decompression memory, by @terrelln (a usage sketch follows this list)
perf: improve performance of param ZSTD_c_targetCBlockSize, by @​Cyan4973
perf: improved compression of arrays of integers at high compression, by @​Cyan4973
lib: reduce binary size with selective built-time exclusion, by @​felixhandte
lib: improved huffman speed on small data and linux kernel, by @​terrelln
lib: accept dictionaries with partial literal tables, by @​terrelln
lib: fix CCtx size estimation with external sequence producer, by @​embg
lib: fix corner case decoder behaviors, by @​Cyan4973 and @​aimuz
lib: fix zdict prototype mismatch in static_only mode, by @​ldv-alt
lib: fix several bugs in magicless-format decoding, by @​embg
cli: add common compressed file types to --exclude-compressed by @​daniellerozenblit (requested by @​dcog989)
cli: fix mixing -c and -o commands with --rm, by @​Cyan4973
cli: fix erroneous exclusion of hidden files with --output-dir-mirror by @​felixhandte
cli: improved time accuracy on BSD, by @​felixhandte
cli: better errors on argument parsing, by @​KapJI
tests: better compatibility with older versions of grep, by @​Cyan4973
tests: lorem ipsum generator as default content generator, by @​Cyan4973
build: cmake improvements by @​terrelln, @​sighingnow, @​gjasny, @​JohanMabille, @​Saverio976, @​gruenich, @​teo-tsirpanis
build: bazel support, by @​jondo2010
build: fix cross-compiling for AArch64 with lld by @​jcelerier
build: fix Apple platform compatibility, by @​nidhijaju
build: fix Visual 2012 and lower compatibility, by @​Cyan4973
build: improve win32 support, by @​DimitriPapadopoulos
build: better C90 compliance for zlibWrapper, by @​emaste
port: make: fat binaries on macos, by @​mredig
port: ARM64EC compatibility for Windows, by @​dunhor
port: QNX support by @​klausholstjacobsen
port: MSYS2 and Cygwin makefile installation and test support, by @​QBos07
port: risc-v support validation in CI, by @​Cyan4973
port: sparc64 support validation in CI, by @​Cyan4973
port: AIX compatibility, by @​likema
port: HP-UX compatibility, by @​likema
doc: Improved specification accuracy, by @​elasota
bug: Fix and deprecate ZSTD_generateSequences (#​3981), by @​terrelln
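
Regarding the experimental ZSTD_d_maxBlockSize item above, the sketch below is an assumption about typical usage rather than code from the release: experimental parameters sit behind ZSTD_STATIC_LINKING_ONLY and are set through ZSTD_DCtx_setParameter, and the 32 KB cap is an arbitrary illustrative value.

```c
/* Hedged sketch: create a decompression context with a reduced maximum block size.
 * ZSTD_d_maxBlockSize is experimental, hence ZSTD_STATIC_LINKING_ONLY.
 * The 32 KB cap is illustrative only. */
#define ZSTD_STATIC_LINKING_ONLY
#include <zstd.h>

ZSTD_DCtx* make_small_footprint_dctx(void)   /* hypothetical helper name */
{
    ZSTD_DCtx* const dctx = ZSTD_createDCtx();
    if (dctx == NULL) return NULL;

    /* Limit the block size the decoder accepts, which lets streaming
     * decompression allocate smaller internal buffers. */
    size_t const err = ZSTD_DCtx_setParameter(dctx, ZSTD_d_maxBlockSize, 32 * 1024);
    if (ZSTD_isError(err)) { ZSTD_freeDCtx(dctx); return NULL; }

    return dctx;
}
```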

Full change list (auto-generated)

New Contributors

Full Changelog: facebook/zstd@v1.5.5...v1.5.6

v1.5.5: Zstandard v1.5.5

Compare Source

This is a quick fix release. The primary focus is to correct a rare corruption bug in high compression mode, detected by @danlark1. The probability of generating such a scenario by random chance is extremely low. It evaded months of continuous fuzzer tests due to the number and complexity of simultaneous conditions required to trigger it. Nevertheless, @danlark1 from Google shepherds such a humongous amount of data that he managed to detect a reproduction case (corruptions are detected thanks to the checksum), making it possible for @terrelln to investigate and fix the bug. Thanks!
While the probability might be very small, corruption issues are nonetheless very serious, so an update to this version is highly recommended, especially if you employ high compression modes (levels 16+).

When the issue was detected, there were a number of other improvements and minor fixes already in the making, hence they are also present in this release. Let’s detail the main ones.

Improved memory usage and speed for the --patch-from mode

v1.5.5 introduces memory-mapped dictionaries, by @daniellerozenblit, for both POSIX (#3486) and Windows (#3557).

This feature allows zstd to memory-map large dictionaries, rather than requiring them to be loaded into memory. This can make a big difference for memory-constrained environments applying patches to large data sets.
It is mostly visible under memory pressure, since mmap can release less-used memory and continue working.
But even when memory is plentiful, there are still measurable memory benefits, as shown in the graph below, especially when the reference turns out to be not entirely relevant to the patch.

[Figure: mmap memory usage comparison]

This feature is automatically enabled for --patch-from compression/decompression when the dictionary is larger than the user-set memory limit. It can also be manually enabled/disabled using --mmap-dict or --no-mmap-dict respectively.

Additionally, @​daniellerozenblit introduces significant speed improvements for --patch-from.

An I/O optimization in #​3486 greatly improves --patch-from decompression speed on Linux, typically by +50% on large files (~1GB).

[Figure: --patch-from I/O optimization, decompression speed]

Compression speed is also taken care of, with a dictionary-indexing speed optimization introduced in #3545. It dramatically accelerates --patch-from compression, typically doubling speed on large files (~1 GB), sometimes even more depending on the exact scenario.

[Figure: --patch-from compression speed optimization]

This speed improvement comes at the cost of a slight regression in compression ratio, and is therefore enabled only on non-ultra compression strategies.

Speed improvements of middle-level compression for specific scenarios

The row-hash match finder introduced in version 1.5.0 for levels 5-12 has been improved in version 1.5.5, enhancing its speed in specific corner-case scenarios.

The first optimization (#​3426) accelerates streaming compression using ZSTD_compressStream on small inputs by removing an expensive table initialization step. This results in remarkable speed increases for very small inputs.
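
For context, here is a minimal C sketch of the call pattern this optimization targets: a single small input pushed through the streaming API. The level-9 setting matches the benchmark below, while the buffer sizes and payload are illustrative assumptions rather than benchmark code from the release.

```c
/* Sketch: one small input through the streaming API (ZSTD_compressStream),
 * the pattern whose per-call setup cost the optimization above reduces. */
#include <stdio.h>
#include <zstd.h>

int main(void)
{
    const char src[] = "small message";
    char dst[1024];                      /* ample room for such a tiny input */

    ZSTD_CStream* const zcs = ZSTD_createCStream();
    if (zcs == NULL) return 1;
    ZSTD_initCStream(zcs, 9);            /* level 9, as in the benchmark below */

    ZSTD_inBuffer  in  = { src, sizeof(src), 0 };
    ZSTD_outBuffer out = { dst, sizeof(dst), 0 };

    size_t ret = ZSTD_compressStream(zcs, &out, &in);
    if (!ZSTD_isError(ret))
        ret = ZSTD_endStream(zcs, &out); /* flush the frame epilogue */

    if (!ZSTD_isError(ret) && ret == 0)
        printf("compressed %zu -> %zu bytes\n", in.pos, out.pos);

    ZSTD_freeCStream(zcs);
    return 0;
}
```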

The following scenario measures the compression speed of ZSTD_compressStream at level 9 for different sample sizes on a Linux platform running an i7-9700K CPU.

sample size | v1.5.4 (MB/s) | v1.5.5 (MB/s) | improvement
100         | 1.4           | 44.8          | x32
200         | 2.8           | 44.9          | x16
500         | 6.5           | 60.0          | x9.2
1K          | 12.4          | 70.0          | x5.6
2K          | 25.0          | 111.3         | x4.4
4K          | 44.4          | 139.4         | x3.2
...         | ...           | ...           | ...
1M          | 97.5          | 99.4          | +2%

The second optimization (#3552) speeds up compression of incompressible data by a large multiplier. This is achieved by increasing the step size and reducing the frequency of match attempts when no matches are found, with negligible impact on the compression ratio. It makes mid-level compression essentially inexpensive when processing incompressible data, typically data that is already compressed (note: this was already the case for fast compression levels).

The following scenario measures the compression speed of ZSTD_compress compiled with gcc-9 for a ~10 MB incompressible sample on a Linux platform running an i7-9700K CPU.

level | v1.5.4 (MB/s) | v1.5.5 (MB/s) | improvement
3     | 3500          | 3500          | not a row-hash level (control)
5     | 400           | 2500          | x6.2
7     | 380           | 2200          | x5.8
9     | 176           | 1880          | x10
11    | 67            | 1130          | x16
13    | 89            | 89            | not a row-hash level (control)

Miscellaneous

There are other welcome speed improvements in this package.

For example, @felixhandte managed to increase the processing speed of small files by carefully reducing the number of system calls (#3479). This can easily translate into +10% speed when processing many small files in batch.

The Seekable format received a bit of care. It is now much faster when splitting data into very small blocks (#3544). In an extreme scenario reported by @P-E-Meunier, it improves processing speed by x90. Even for more "common" settings, such as using 4 KB blocks on "normally" compressible data like enwik, it still provides a healthy x2 processing speed benefit. Moreover, @dloidolt merged an optimization that reduces the number of I/O seek() events during reads (decompression), which is also beneficial for speed.

The release is not limited to speed improvements; several loose ends and corner cases were also fixed. For a more detailed list of changes, please take a look at the changelog.

Change Log

Full change list (auto-generated)


Configuration

📅 Schedule: Branch creation - "on the 1st through 7th day of the month" in timezone America/Los_Angeles, Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

renovate bot added the bot label on Apr 1, 2024