Commit Graph

4 Commits

Author SHA1 Message Date
65ec7ec524 Cleanup: remove redundant, invalid info from headers
BF-admins agreed to remove header information that isn't useful,
to reduce noise.

- BEGIN/END license blocks

  Developers should add non-license comments as separate comment blocks.
  No need for separator text.

- Contributors

  This is often invalid, outdated or misleading,
  especially when files are split.

  It's more useful to use git-blame to find out who developed the code.

See P901 for the script to perform these edits.
2019-02-02 01:36:28 +11:00
cfdd27381c GHash - code reorganization, performance enhancements, add a few missing utils to API.
This patch is the root of the GHash rework; all other diffs will be based on it:

Reduce average load from 3.0 to 0.75
----------------------------------

This is the big performance booster of the patch; e.g. it makes tracing a dyntopo stroke between 25% and 30% faster.

Not much to say about it, aside from the fact that it obviously increases the memory footprint (by about 25%-30% as well).
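
To make "average load" concrete, here is a minimal sketch of how a load-factor threshold drives bucket growth (the constant and function names are illustrative, not the actual BLI_ghash internals):

```
/* Illustrative only: how an average-load threshold triggers growing.
 * Names below are hypothetical, not the actual BLI_ghash internals. */
#include <stdbool.h>

#define LOAD_FACTOR_MAX 0.75  /* was effectively 3.0 before this patch */

static bool ghash_should_grow(unsigned int nentries, unsigned int nbuckets)
{
  /* Average load = entries per bucket. Growing earlier (at 0.75 instead of
   * 3.0) keeps collision chains shorter, at the cost of a bigger bucket
   * array, hence the ~25%-30% extra memory. */
  return (double)nentries / (double)nbuckets > LOAD_FACTOR_MAX;
}
```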

Add optional shrinking
----------------------------------

I.e. ghashes/gsets can now shrink their bucket array when enough entries are removed. This remains optional and is OFF by default.
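
For reference, a minimal usage sketch; the exact names (`BLI_ghash_flag_set`, `GHASH_FLAG_ALLOW_SHRINK`) are assumptions here, check BLI_ghash.h for the final API:

```
/* Sketch: opting a ghash into bucket-array shrinking.
 * BLI_ghash_flag_set() / GHASH_FLAG_ALLOW_SHRINK are assumed names. */
#include "BLI_ghash.h"

static void example_allow_shrink(void)
{
  GHash *gh = BLI_ghash_str_new(__func__);

  /* Shrinking is OFF by default; enable it explicitly. */
  BLI_ghash_flag_set(gh, GHASH_FLAG_ALLOW_SHRINK);

  /* ... insert and later remove many entries; the bucket array may now
   * shrink again once enough entries have been removed ... */

  BLI_ghash_free(gh, NULL, NULL);
}
```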

Add code to use masking instead of modulo
----------------------------------

Bucket indices are obtained from hashes by “reducing” the hash value into the valid bucket range. This can be done either by bit-masking or by a modulo operation.
The former is quicker, but requires real hashes, while the latter is slower (average 10% impact on ghash operations) but can also be used as 'fake' hashing of raw values, e.g. indices.

In Blender, not all ghash usages currently hash their keys, so we stick with modulo for now (masking is ifdef’ed out). We may, however, investigate the benefits of switching to masking with systematic very basic hashing later…
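
As a generic illustration of the two reductions (not the actual internal code):

```
/* Generic sketch of both bucket-index reductions discussed above. */
#include <stdint.h>

/* Modulo: works with any bucket count and tolerates 'fake' hashes
 * (e.g. raw indices used directly as hash values), but is slower. */
static uint32_t bucket_index_modulo(uint32_t hash, uint32_t nbuckets)
{
  return hash % nbuckets;
}

/* Masking: cheaper, but nbuckets must be a power of two and the hash must
 * be well distributed, since only the low bits are kept. */
static uint32_t bucket_index_mask(uint32_t hash, uint32_t nbuckets_pow2)
{
  return hash & (nbuckets_pow2 - 1);
}
```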

Add various missing API helpers
----------------------------------

I.e. a way to deep-copy a ghash/gset, and a way to (re-)reserve entries (i.e. manually grow or shrink the ghash after its creation).
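
A rough usage sketch of the two helpers (the signatures are assumptions here, check BLI_ghash.h for the actual API):

```
/* Sketch: deep-copy and reserve helpers; signatures are assumed. */
#include "BLI_ghash.h"

static GHash *example_copy_and_reserve(GHash *gh_src)
{
  /* Deep-copy: NULL copy callbacks mean keys/values are reused as-is. */
  GHash *gh_copy = BLI_ghash_copy(gh_src, NULL, NULL);

  /* Reserve: pre-size the bucket array for an expected number of entries,
   * so many following insertions do not trigger repeated growing. */
  BLI_ghash_reserve(gh_copy, 1024);

  return gh_copy;
}
```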

Various code refactoring
----------------------------------

* Get rid of the 'hack' regarding ghash size when used as gset (it’s simpler and safer to have two structs defined here and cast pointers as needed; see the sketch after this list).
* Various reshuffling and factorization of low-level internal code.
* Some work on hashing helpers, introducing some murmur2a-based hashing too.
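
Regarding the first point, the pattern looks roughly like this (illustrative definitions only, not the actual internal ones):

```
/* Illustrative sketch of the 'two structs, cast as needed' pattern:
 * a GSet entry is laid out like a GHash entry without the value slot,
 * so internal code can share the bucket-handling logic via casts.
 * These definitions are not Blender's actual ones. */
typedef struct GHashEntry {
  struct GHashEntry *next;
  void *key;
  void *val;
} GHashEntry;

typedef struct GSetEntry {
  struct GSetEntry *next;
  void *key;  /* same leading members as GHashEntry, minus 'val' */
} GSetEntry;
```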

Thanks a bunch to Campbell for the extensive review work. :)

Reviewers: sergey, campbellbarton

Subscribers: psy-fi, lukastoenne

Projects: #bf_blender

Maniphest Tasks: T43766

Differential Revision: https://developer.blender.org/D1178
2015-03-19 17:37:54 +01:00
14795baf21 Cleanup: headers 2014-11-14 11:49:45 +01:00
64c0c13e6e Add Murmur2A hashing feature to BLI
Murmur2a is a very fast hashing function generating int32 hashes.
It also features a very good distribution of the generated hashes.

However, it is not endianness-agnostic, meaning it will usually generate
different hashes for the same key on big- and little-endian architectures.
Consequently, **it shall not be used to generate persistent hashes**
(e.g. never store them in a .blend file).

This implementation supports incremental hashing, and is a direct
adaptation of the reference implementation (in C++):
https://smhasher.googlecode.com/svn-history/r130/trunk/MurmurHash2.cpp

That C++ code was also used to generate the reference values in the gtests file.
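
A minimal usage sketch of the incremental API; the `BLI_hash_mm2a_*` names follow the usual BLI convention but are assumptions here, see BLI_hash_mm2a.h for the actual declarations:

```
/* Sketch: incremental Murmur2A hashing; the BLI_hash_mm2a_* names are
 * assumed, check BLI_hash_mm2a.h. */
#include "BLI_hash_mm2a.h"

#include <stdint.h>
#include <string.h>

static uint32_t example_hash_two_chunks(const char *a, const char *b)
{
  BLI_HashMurmur2A mm2;

  BLI_hash_mm2a_init(&mm2, 0);  /* seed */
  BLI_hash_mm2a_add(&mm2, (const unsigned char *)a, strlen(a));
  BLI_hash_mm2a_add(&mm2, (const unsigned char *)b, strlen(b));

  /* The result depends on endianness, so never store it persistently
   * (e.g. in a .blend file). */
  return BLI_hash_mm2a_end(&mm2);
}
```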

Reviewers: sergey, campbellbarton

Reviewed By: campbellbarton

Projects: #bf_blender

Differential Revision: https://developer.blender.org/D892
2014-11-14 11:00:26 +01:00