Fix for 0-padding in recovery data when using NumPy
If you recreate an archive with Protect+ data with NumPy enabled, rescene widens each recovery byte to a 64-bit int, which pads every byte with 7 null bytes:
A7 00 00 00 00 00 00 00 6B 00 00 00 00 00 00 00 67 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 6A 00 00 00 00 00 00 00 D8 00 00 00 00 00 00 00 87 00 00 00 00 00 00 00
00 00 00 00 F5 00 00 00 00 00 00 00 39 00 00 00 00 00 00 00 AB 00 00 00 00 00 00 00
has to be
A7 6B 67 00 6A D8 87 F5 39 AB
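A minimal sketch of where the padding comes from (hypothetical values, not the rescene code itself): when a sector ends up as a 64-bit NumPy array, its raw buffer carries 7 null bytes per value, while downcasting to uint8 restores one byte per value.

```python
import numpy as np

# Hypothetical recovery bytes widened to 64-bit ints; little-endian is
# forced via '<i8' so the byte layout is deterministic.
sector = np.array([0xA7, 0x6B, 0x67, 0x00], dtype='<i8')

padded = sector.tobytes()                  # 8 bytes per value: A7 00 00 00 00 00 00 00 ...
fixed = sector.astype(np.uint8).tobytes()  # 1 byte per value: A7 6B 67 00

print(padded.hex())
print(fixed.hex())
```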
This can be fixed by writing a bytearray instead of the sector, or more cleanly by converting the XOR result back to a bytearray before storing it in the rs array:
rarfs.write(sector)
=>
rarfs.write(bytearray(sector))
rs[rs_slice] = bitwise_xor(rs[rs_slice], bytearray(sector))
=>
rs[rs_slice] = bytearray(bitwise_xor(rs[rs_slice], bytearray(sector)))
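The second variant can be sketched in isolation like this (a toy rs buffer and sector, not the real rescene state):

```python
import numpy as np

# Toy stand-ins for rescene's recovery buffer and one archive sector.
rs = bytearray(b'\x00\xff\x0f')
sector = b'\xa7\x6b\x67'
rs_slice = slice(0, 3)

# XOR in NumPy, then convert the uint8 result back to a bytearray so the
# slice assignment stores plain bytes instead of widened ints.
xored = np.bitwise_xor(
    np.frombuffer(bytes(rs[rs_slice]), dtype=np.uint8),
    np.frombuffer(sector, dtype=np.uint8),
)
rs[rs_slice] = bytearray(xored)

print(rs.hex())  # a79468
```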
Comments (4)
-
repo owner -
I suggest you send me a pull request of the first option: https://bitbucket.org/Komplanar/pyrescene/commits/e277dadc27d05b5f5187657fd58e7a5ce53c7afb (the change I had already made to my local files too).
This is the most performance-intensive part of the code. That is why I think applying bytearray only once, when writing the result, is faster than applying it to all the intermediate results as well.
-
Account Deleted -
Ah, nice that my fork and pull request were accepted, even though the Bitbucket web interface reported some unknown errors and said this repository could not be forked.
I also thought applying the bytearray at the end would be faster, but if you don't use NumPy, it applies bytearray to something that is already a bytearray. Python should recognise that and return the original bytearray; either way, it writes the correct data in both cases.
I'll test and benchmark both options tomorrow.
-
repo owner - changed status to resolved
Closing issue
#7: removal of 7 null-byte padding when using NumPy → <<cset deff8fb70816>>
-
repo owner - changed version to 0.5
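The performance trade-off discussed above can be checked with a rough micro-benchmark (hypothetical buffer sizes and random data, not the rescene hot loop itself): converting every intermediate XOR result back to a bytearray versus staying in NumPy and converting only once at the end.

```python
import timeit
import numpy as np

rng = np.random.default_rng(0)
sectors = [rng.integers(0, 256, 4096, dtype=np.uint8) for _ in range(64)]

def xor_convert_each():
    # Convert every intermediate XOR result back to a bytearray.
    rs = bytearray(4096)
    for s in sectors:
        rs = bytearray(np.bitwise_xor(np.frombuffer(bytes(rs), dtype=np.uint8), s))
    return bytes(rs)

def xor_convert_once():
    # Stay in NumPy for the whole loop and convert a single time at the end.
    rs = np.zeros(4096, dtype=np.uint8)
    for s in sectors:
        rs = np.bitwise_xor(rs, s)
    return bytes(bytearray(rs))

# Both strategies must produce identical recovery bytes.
assert xor_convert_each() == xor_convert_once()
print('convert each:', timeit.timeit(xor_convert_each, number=100))
print('convert once:', timeit.timeit(xor_convert_once, number=100))
```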