Charset Normalizer Versions

Truly universal encoding detector in pure Python

3.0.0b1 (2022-08-15)

Changed

  • Optional: The module md.py can be compiled using Mypyc to provide an extra speedup of up to 4x compared to v2.1

Removed

  • Breaking: Class aliases CharsetDetector, CharsetDoctor, CharsetNormalizerMatch and CharsetNormalizerMatches
  • Breaking: Top-level function normalize (a migration sketch follows this list)
  • Breaking: Properties chaos_secondary_pass, coherence_non_latin and w_counter from CharsetMatch
  • Support for the backport unicodedata2
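
For code written against these removed entry points, here is a minimal migration sketch. It assumes from_bytes() is the intended replacement and that str() on a CharsetMatch returns the decoded payload; the removed 2.x names appear only in comments.

```python
# Minimal migration sketch, assuming from_bytes() replaces the removed
# normalize() function and the CharsetNormalizerMatches/CharsetDetector aliases.
from charset_normalizer import from_bytes

payload = "Přirozený jazyk je zrádný.".encode("utf_8")

# 2.x, removed in 3.0:  CharsetNormalizerMatches.from_bytes(payload).best()
# 2.x, removed in 3.0:  normalize(payload)
results = from_bytes(payload)    # CharsetMatches container
best_guess = results.best()      # a CharsetMatch, or None when undecidable

if best_guess is not None:
    print(best_guess.encoding)   # e.g. "utf_8"
    print(str(best_guess))       # decoded text (assumed behaviour of __str__)
```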

2.1.0 (2022-06-19)

Added

  • Output the Unicode table version when running the CLI with --version (PR #194)

Changed

  • Re-use the decoded buffer for single-byte character sets, from @nijel (PR #175)
  • Fixing some performance bottlenecks from @deedy5 (PR #183)

Fixed

  • Work around a potential bug in CPython: the Zero Width No-Break Space located in Arabic Presentation Forms-B (Unicode 1.1) is not acknowledged as a space (PR #175)
  • CLI default threshold aligned with the API threshold from @oleksandr-kuzmenko (PR #181)

Removed

  • Support for Python 3.5 (PR #192)

Deprecated

  • Use of the unicodedata backport from unicodedata2, as Python is quickly catching up; scheduled for removal in 3.0 (PR #194)

2.0.12 (2022-02-12)

Fixed

  • ASCII mis-detection in rare cases (PR #170)

2.0.11 (2022-01-30)

Added

  • Explicit support for Python 3.11 (PR #164)

Changed

  • The logging behavior has been completely reviewed; it now uses only the TRACE and DEBUG levels (PRs #163, #165). See the sketch below.
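
A minimal sketch of surfacing those records with the standard logging module. The logger name "charset_normalizer" and a custom TRACE level sitting below logging.DEBUG are assumptions drawn from this entry, not a documented contract.

```python
# Sketch: expose charset_normalizer's DEBUG and (assumed) sub-DEBUG TRACE
# records through the standard logging module.
import logging

from charset_normalizer import from_bytes

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s | %(message)s"))

logger = logging.getLogger("charset_normalizer")  # assumed logger name
logger.addHandler(handler)
logger.setLevel(5)  # numeric level below DEBUG (10) so TRACE records pass

from_bytes("héllo wörld".encode("utf-8"))  # detection steps are now printed
```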

2.0.10 (2022-01-04)

Fixed

  • Fallback match entries might lead to a UnicodeDecodeError for large byte sequences (PR #154)

Changed

  • Skipping the language-detection (CD) on ASCII (PR #155)

2.0.9 (2021-12-03)

Changed

  • Moderating the logging impact (since 2.0.8) for specific environments (PR #147)

Fixed

  • Wrong logging level applied when setting kwarg explain to True (PR #146)

2.0.8

Changed

  • Improvement over Vietnamese detection (PR #126)
  • MD improvement on trailing data and long foreign (non-pure Latin) data (PR #124)
  • Efficiency improvements in cd/alphabet_languages from @adbar (PR #122)
  • Call sum() without an intermediary list, following PEP 289 recommendations, from @adbar (PR #129)
  • Code style as refactored by Sourcery-AI (PR #131)
  • Minor adjustment of the MD around European words (PR #133)
  • Remove and replace SRTs from assets / tests (PR #139)
  • Initialize the library logger with a NullHandler by default, from @nmaynes (PR #135)
  • Setting kwarg explain to True will provisionally add a specific stream handler (bound to the function's lifespan) (PR #135)

Fixed

  • Fix a large (misleading) sequence giving a UnicodeDecodeError (PR #137)
  • Avoid using chunks that are too insignificant (PR #137)

Added

  • Add and expose the function set_logging_handler to configure a specific StreamHandler, from @nmaynes (PR #135); a usage sketch follows this list
  • Add CHANGELOG.md entries; the format is based on Keep a Changelog (PR #141)
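
A short sketch of both options as described in PR #135. The exact set_logging_handler() signature (a level keyword, top-level export) is an assumption; explain=True on from_bytes is taken from the entry above.

```python
# Sketch, assuming set_logging_handler() is exported at the package top level
# and accepts a level keyword; explain=True scopes verbose output to one call.
import logging

from charset_normalizer import from_bytes, set_logging_handler

# Option 1: attach a persistent StreamHandler to the library logger.
set_logging_handler(level=logging.DEBUG)

# Option 2: verbose output bound to the lifespan of this single call.
best = from_bytes("Bonjour tout le monde".encode("utf-8"), explain=True).best()
print(best.encoding if best else None)
```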

2.0.7

We have arrived at a pretty stable state.

Changes:

  • Addition: :bento: Add support for Kazakh (Cyrillic) language detection #109
  • Improvement: :sparkle: Further improve inferring the language from a given code page (single-byte) #112
  • Removed: :fire: Remove redundant logging entry about detected language(s) #115
  • Miscellaneous: :wrench: Trying to leverage PEP 263 when PEP 3120 is not supported #116
    • While I do not think that this (#116) will actually fix anything, it will instead raise a SyntaxError (rather than an ASCII decoding error) for those trying to install this package with an unsupported Python version
  • Improvement: :zap: Refactoring for potential performance improvements in loops #113 @adbar
  • Improvement: :sparkles: Various detection improvement (MD+CD) #117
  • Bugfix: :bug: Fix a minor inconsistency between Python 3.5 and other versions regarding language detection #117 #102

This version pushes the detection coverage forward to 98%! https://github.com/Ousret/charset_normalizer/runs/3863881150 With the current dataset, the ceiling (it cannot get better than this) is 99%, to be reached in future releases.

2.0.6

Changes:

  • Bugfix: :bug: Unforeseen regression causing the loss of backward compatibility with some older minor versions of Python 3.5.x #100
  • Bugfix: :bug: Fix CLI crash when using --minimal output in certain cases #103
  • Improvement: :sparkles: Minor improvement to the detection efficiency (less than 1%) #106 #101

2.0.5

Changes:

  • Internal: :art: The project now complies with flake8, mypy, isort and black to ensure a better overall quality #81
  • Internal: :art: The MANIFEST.in was not exhaustive #78
  • Improvement: :sparkles: The BC-support with v1.x was improved; the old staticmethods are restored #82
  • Remove: :fire: The project no longer raises a warning on tiny content given for detection; it is simply logged as a warning instead #92
  • Improvement: :sparkles: The Unicode detection is slightly improved, see #93
  • Bugfix: :bug: In some rare cases, the chunks extractor could cut in the middle of a multi-byte character and mislead the mess detection #95
  • Bugfix: :bug: Some rare 'space' characters could trip up the UnprintablePlugin/Mess detection #96
  • Improvement: :art: Add the syntax sugar __bool__ for the CharsetMatches results list-container, see #91 (a sketch follows below)

This release pushes the detection coverage further, to 97%!
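
A minimal sketch of that __bool__ sugar, assuming from_bytes() returns the CharsetMatches container described above.

```python
# Sketch: CharsetMatches is truthy when it holds at least one match,
# so the container can be used directly in a condition.
from charset_normalizer import from_bytes

results = from_bytes("Comment ça va ?".encode("cp1252"))
if results:                  # no need for len(results) > 0
    print(results.best().encoding)
else:
    print("no plausible encoding found")
```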