Specify the number of bits used for networked variables #1549
base: master
Conversation
src/public/dt_common.h
```cpp
inline int NumBitsForRange( int nMinElements, int nMaxElements )
{
	Assert( nMinElements < nMaxElements );
	return NumBitsForCount( nMaxElements - nMinElements );
}
#endif
```
TL;DR: If we wanna have this utility function, it should strictly accept unsigned (nonnegative) integers only, because the NumBitsForCount assumption fails for ranges that include negative values.
The long-winded version:
I'm not terribly familiar with the underlying Valve in-house serialization format ("BitBuffers") that the engine datatables are using, so I may be wrong, but I think this is not quite right for encoding ranges that include negative values.
Before C++20, the representation of integral types was implementation-defined, which would complicate things further. For example, an excerpt from the C++17 spec:
> [basic.fundamental] Fundamental types
> Types bool, char, char16_t, char32_t, wchar_t, and the signed and unsigned integer types are collectively called integral types. A synonym for integral type is integer type. The representations of integral types shall define values by use of a pure binary numeration system. [ Example: This document permits two's complement, ones' complement and signed magnitude representations for integral types. — end example ]
However, this is somewhat simplified in C++20 (which we use), which dictates that the representation must be (or at least behave as if it were) two's complement:
> [basic.fundamental] Fundamental types
> An unsigned integer type has the same object representation, value representation, and alignment requirements (6.7.6) as the corresponding signed integer type. For each value x of a signed integer type, the value of the corresponding unsigned integer type congruent to x modulo 2^N has the same value of corresponding bits in its value representation. (This is also known as two's complement representation.)
So for our code base, we can assume two's complement for this encoding.
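For a concrete illustration of that congruence rule, a trivial standalone check (plain C++, not engine code):

```cpp
#include <cassert>
#include <cstdint>

int main()
{
	// Under two's complement, a signed value maps to the unsigned value
	// congruent to it modulo 2^32, so -1 becomes the all-ones pattern.
	assert( static_cast<uint32_t>( -1 ) == 0xFFFFFFFFu );
	assert( static_cast<uint32_t>( -8 ) == 0xFFFFFFF8u );
	return 0;
}
```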
With two's complement, an arbitrary negative integral value may require the entire width of the type to encode; for example, an int32 of value -1 is encoded as 0xFFFFFFFF. So if we called NumBitsForRange(-1, 3) and it returned NumBitsForCount( 3 - (-1) /*== 4*/ ) //== 3, we might think we can encode the value -1 using only 3 bits (000, 001, 010, 011, 100). And while it is true that those 3 bits can cover the five values of the range as nonnegative offsets, a raw 3-bit field cannot carry the signedness of the fundamental type itself, so -1 as such does not fit.

We would therefore need some modification of the implementation, e.g. one that lets us record which offset marks the zero value of our range, so that negative values are encoded as a bit-offset from the range minimum (see the sketch below), or possibly some other scheme entirely. Whatever the case, this might be a nontrivial refactor, and we might be fighting against engine expectations in ways that make it not worth it, unless there is some clear need for putting resources towards making this change.
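A minimal sketch of that offset-based scheme, outside the engine's BitBuffer machinery; the function names are hypothetical and this only shows the shape of the fix, not a drop-in implementation:

```cpp
#include <cassert>
#include <cstdint>

// Shift the range so its minimum maps to 0, encode the unsigned offset in
// NumBitsForCount( nMax - nMin ) bits, and undo the shift on decode.
// (Overflow at the extremes of int is ignored for brevity.)
inline uint32_t EncodeRangedInt( int value, int nMin, int nMax )
{
	assert( nMin <= value && value <= nMax );
	return static_cast<uint32_t>( value - nMin );
}

inline int DecodeRangedInt( uint32_t encoded, int nMin )
{
	return static_cast<int>( encoded ) + nMin;
}

// EncodeRangedInt( -1, -1, 3 ) == 0 fits in the 3 bits computed above, and
// DecodeRangedInt( 0, -1 ) == -1 recovers the negative value because the
// receiver knows the range minimum.
```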
An alternative to rolling our own custom flavor of the engine's BitBuffers implementation would be to adopt an existing, more robust serialization framework such as Protobuf (which actually has some existing Source SDK integration, IIRC). But again, unless there is some concrete problem that needs this more robust solution, I'm not sure it's worth putting our limited resources towards right now.
What do you suggest the best course of action is, then? Most other places in the SDK just hard-code the value. That's fine until we start adding more to these enums, be it classes, squads, etc., and someone forgets to change the value or sets it incorrectly when the range is available to us.
Or we can just forget about the networked unsigned ints and leave them at the default (32 bits, I believe). At least they are correctly flagged as SPROP_UNSIGNED now.
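For illustration, here is roughly what those two options look like in a SendTable; the property name is made up for this example, and I'm assuming the usual SendPropInt convention where a negative bit count falls back to the default width:

```cpp
// Option A: leave the width at the default, but keep the unsigned flag.
SendPropInt( SENDINFO( m_nSquadCount ), -1, SPROP_UNSIGNED ),

// Option B: the hard-coded width most of the SDK uses today; it silently
// truncates if the enum ever outgrows the chosen bit count.
SendPropInt( SENDINFO( m_nSquadCount ), 4, SPROP_UNSIGNED ),
```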
I think the lowest-friction solution for now would be to add Asserts to NumBitsForRange demanding that both nMinElements and nMaxElements are >= 0 (since negative values would break the assumptions of the subsequent call to NumBitsForCount), and then fix any places where that dynamic assertion fails for this PR.
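Concretely, something like this (a sketch of the guarded version, not a final patch):

```cpp
inline int NumBitsForRange( int nMinElements, int nMaxElements )
{
	// Negative bounds would break the assumptions of NumBitsForCount,
	// which only deals in unsigned counts.
	Assert( nMinElements >= 0 );
	Assert( nMaxElements >= 0 );
	Assert( nMinElements < nMaxElements );
	return NumBitsForCount( nMaxElements - nMinElements );
}
```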
I got mixed up between signed and unsigned. But yeah, if NumBitsForRange isn't accurate with negatives then I'll just remove it, because it's no use and NumBitsForCount should be used instead.
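For reference, that direction would look something like this in a SendTable (the property and enum names are hypothetical):

```cpp
// Derive the width from the enum's element count instead of hard-coding it,
// so the networked width stays correct as entries are added.
SendPropInt( SENDINFO( m_iClass ), NumBitsForCount( MAX_PLAYER_CLASSES ), SPROP_UNSIGNED ),
```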
Description
Avoid using more bits than necessary for some networked variables, and flag them as unsigned where applicable.
Also changes some SendPropFloats to SendPropTime.
CONFIRM: Is NumBitsForRange safe and does it actually do what I want it to? (Removed)
Toolchain
Linked Issues