# Huff0 entropy compression
This package provides Huff0 encoding and decoding as used in zstd.
Huff0 is a Huffman codec designed for modern CPUs, featuring out-of-order (OoO) operation on multiple ALUs (Arithmetic Logic Units) to achieve extremely fast compression and decompression speeds.
This can be used for compressing input with many similar values into the smallest number of bytes. It does not perform any multi-byte dictionary coding like LZ coders, but it can be used as a secondary step for compressors (like Snappy) that do not do entropy encoding.
# News
This is used as part of the zstandard compression and decompression package.
This ensures that most functionality is well tested.
# Usage
This package provides a low-level interface that allows you to compress single independent blocks.
Each block is separate, and there are no built-in integrity checks. This means that the caller should keep track of block sizes and also perform checksums if needed.
Compressing a block is done via the `Compress1X` and `Compress4X` functions. You must provide input and will receive the output and possibly an error.
These error values can be returned:
| Error | Description |
|---|---|
| `<nil>` | Everything ok, output is returned |
| `ErrIncompressible` | Returned when input is judged to be too hard to compress |
| `ErrUseRLE` | Returned from the compressor when the input is a single byte value repeated |
| `ErrTooBig` | Returned if the input block exceeds the maximum allowed size (128 KiB) |
| `(error)` | An internal error occurred |
As can be seen above, some of these errors will be returned even under normal operation, so it is important to handle them.
To reduce allocations you can provide a `Scratch` object that can be re-used for successive calls. Both compression and decompression accept a `Scratch` object, and the same object can be used for both.

Be aware that when re-using a `Scratch` object, the output buffer is also re-used, so if you are still using it you must set the `Out` field in the scratch to nil. The same buffer is used for compression and decompression output.

The `Scratch` object will retain state that allows re-use of previous tables for encoding and decoding.
# Tables and re-use
Huff0 allows for reusing tables from the previous block to save space if that is expected to give better/faster results.
The `Scratch` object allows you to set a `ReusePolicy` that controls this behaviour. See the documentation for details. This can be altered between each block.

Note, however, that this information is not stored in the output block, and it is up to the users of the package to record whether `ReadTable` should be called, based on the boolean reported back from the `CompressXX` call.
If you want to store the table separately from the data, you can access them as `OutData` and `OutTable` on the `Scratch` object.
# Decompressing
The first part of decoding is to initialize the decoding table through `ReadTable`. You can supply the complete block to `ReadTable` and it will return the data part of the block, which can be given to the decompressor.
Decompressing is done by calling the `Decompress1X` or `Decompress4X` function.
For concurrently decompressing content with a fixed table, a stateless `Decoder` can be requested, which will remain correct as long as the scratch is unchanged. The capacity of the provided slice indicates the expected output size.
You must provide the output from the compression stage, at exactly the size you got back. If you receive an error back your input was likely corrupted.
It is important to note that a successful decoding does not mean your output matches your original input. There are no integrity checks, so relying on errors from the decompressor does not assure your data is valid.
# Contributing
Contributions are always welcome. Be aware that adding public functions will require good justification, and breaking changes will likely not be accepted. If in doubt, open an issue before writing the PR.