Description
lsb_mask = (uint8_t)( (2 << lsb_used) - 1 );
msb_mask = (uint8_t)( (2 << bits_used_msb) - 1 );
Why are we shifting 2 here? It does not seem correct.
Some examples: 2 is 00000010b in binary
Let's say lsb_used is 6, i.e. we want bits 7..2 of the LSB, with bits 0 and 1 unused. But this calculation gives us:
00000010b << 6 = 10000000b
10000000b - 1 = 01111111b, i.e. bits 6..0 instead
Let's say bits_used_msb is 4, i.e. we want bits 3..0 of the MSB, but the calculation gives us:
00000010b << 4 = 00100000b
00100000b - 1 = 00011111b, i.e. bits 4..0 instead
This is just wrong. It was probably never noticed for two reasons. First, the present calculation, though incorrect, still yields the correct LSB mask when all 8 bits are used. Second, although the MSB mask is wrong, the AS5600 appears to always return 0 in the incorrectly included bits, so the error has had no visible effect.
The correct calculation should be:
lsb_mask = (uint8_t)( ( (0x1 << lsb_used) - 1 ) << (8-lsb_used) );
msb_mask = (uint8_t)( (0x1 << bits_used_msb) - 1 );