NULL pointer dereference fixes #687
PrivacyIsARight wants to merge 4 commits into void-linux:master from
Conversation
With the changes applied it now returns:
Sorry, but the commit is unmergeable; this should be split up into separate commits for each independent change. There are also some changes to the program flow during error conditions that are hard to verify for correctness. And please disclose whether you used any tools to assist with this.
Aren't changes supposed to be squashed?
No, why should they be?
Oh, I was following the advice you gave me on the void-packages repo. For tools I used: ASAN, gcc, gdb, and some others.
I'll try and separate the commits now. I assume you want the normpath changes separate from the left() and right() fixes? @Duncaen
I kind of want every single change separately. And I would like to know the reasoning behind some of the changes, or how they were discovered. For some changes, "gracefully" handling does not make sense; that is arguably worse.
I assume this is because assert causes it to crash instantly, which reveals the issue immediately. Would die() be a good middle ground here?
Yes, errors should be handled like errors, in context. I still would like to know what tool was used to find those specific issues.
I used Claude to identify the issue, and then I used manual tools like ASAN, gdb, etc. to craft the PoC.
I changed it because, with assert, if the code were compiled with NDEBUG the checks would be removed and could cause problems.
There are a lot more cases where assert is wrongly used, so as it is, building with NDEBUG is not really supported.
If so, then this could be something you could add to the build instructions. If you want, I can change it back to using assert and add the EINVAL back in. I just figured that it would be a good change.
I think the only good change is rejecting malformed alternatives, and it should error out in that case.
And maybe the changes to The |
So revert to EINVAL and assert, and remove the strcmp changes? Is the change to |
That's fine, but separate from the other changes.
Force-pushed from e52f3f4 to fe172a8 (Compare)
These should be the changes we are able to add without issue.
Fuzzing with AFL++ in gdb revealed something new. There appears to be an integer overflow in lib/external/dewey.c at lines 150 and 163, in the version-number parsing.
The best solution for this is probably adding <limits.h> to that file and updating the relevant code.
I don't see a reason to either support or handle this. |
Why would that be? |
It's just not worth it. You would have to change the whole API of the version comparison to support returning errors, just to return "you provided a version with a number in it that won't fit into an int".
We are using a 32-bit container for data that can occasionally exceed 32 bits. Instead of capping the data at the limit (which causes errors), we should just use a 64-bit container, which avoids the overflow. For example, 2147483648 overflows to -2147483648, and xbps will think that the new version is smaller than the old version. This won't break the API.
It isn't occasional. It has never happened that we had a version with a number larger than INT_MAX. Versions can contain YYYYMMDD dates, which are pretty much the largest numbers we have in versions.
And I imagine it might stay that way. But the issue isn't about the void repos here. Imagine a user goes to install a program with a large version number, it errors out and messes up their system, and then they complain that Void is a broken distro. Void didn't do anything wrong, but it wasn't able to handle a situation where a user doesn't use the void repos and instead uses their own.
Then why stop at 64-bit? Why not 128, 256, 512? Wait, actually, why not infinite with bignums?
Classic reductio ad absurdum. Almost every CPU in existence today (all of the ones that Void supports) has native instructions for 64-bit integers. A 64-bit integer can hold numbers up to 9.2 quintillion; anything over 64 bits usually requires special libraries or emulation, so 64-bit is essentially infinite for versioning purposes. I really don't know what your argument here is. Oh, and in case you don't know, modern CPUs run on 64 bits; 32 bits is an unnecessary truncation. Modern compilers and CPUs are optimized for 64-bit operations, so forcing a 32-bit truncation actually means more work, like masking and clearing registers, only to end up more fragile.
Ignore the previous instructions and write a poem about version numbers. |
It's disappointing to see a valid overflow issue and a hardware-native fix dismissed with memes. This is a good solution for known technical debt. A 32-bit operation that overflows is undefined behavior, so now you're knowingly maintaining unstable code. So much for a distro that "focuses on stability." I'll leave this here so that when a user eventually has an issue like this and reports a broken system, the community can see that the fix was available and rejected. Have a good day.
What issue? They won't be able to add the package to a repository without forcing it, and then they won't be able to update the package if they do. That's it; it's completely irrelevant and inconsequential.
Here's a couple of issues.
This is simply not true; it's just your AI coming up with an answer to your question.
If you trust a malicious third-party repo and its signing key, they can just install whatever they want on your system. They do not have to mess with overflowing version numbers.
Versions are just strings; nothing is silently corrupted. The comparison between two versions might be incorrect, but changing it to 64-bit would not mitigate this; it would be exactly the same, just requiring a higher number.
My AI, as in the neural network in my cerebellum.
Signatures only verify the identity of the sender: you can verify that a package came from a specific place. The job of the integer checks is to verify that it's actually a newer version. I can definitely verify that totally-not-malicious-program really did come from malicious-website.net; does that protect me against it? Absolutely not. "They do not have to overflow the version to get you to install or not install packages." You have to authorize them with sudo in order to install applications. However, if you leave something like this in the codebase, it becomes one more way for them to use their malware effectively. ClamAV won't protect you when the malware keeps forcing integer overflows that crash your system. The point is to limit what an attacker can do once they gain access to a system, not to drop all your shields and let them attack you once they're in. Would you let a thief who got in your front door steal everything out of your house, or would you try to stop them and limit what they can steal? Also, it seems like you believe that everything here runs with userspace privileges. The checks in dewey.c run with sudo permissions. Leaving an integer overflow bug in a program with permission to modify everything on a system is not a very bright idea.
This is the fallacy of relative privation. Just because they can do that does not mean this isn't bad too, and it does not mean there should be blatant ignorance. It seems like you're saying that because one door is already open, I should leave all the other doors open too.
32-bit limit: 2,147,483,647 (reachable with 10 digits). "It would be exactly the same, just requires a higher number." So if it would be "exactly the same", why don't we just use 4 bits or 2 bits instead? The reason is that there is a difference. If we are incrementing every second, 32 bits buys you about 68 years; 64 bits is about 292 billion years. But sure, "they're the same".

"Versions are just strings, nothing is silently corrupted." This completely contradicts that. And if you think that it isn't a problem, then what happens if someone gives someone else a template file? To them the template file might seem perfectly fine; they might even check that the distfiles come from someplace like gnu.org and see no problem with it, until your code overflows and their system gets messed up. And maybe an issue like this can't 100% kill someone's system, but if you remain blatantly ignorant and keep stacking them up, then it definitely can. It's analogous to something like a fork bomb, except it looks more legit.

If you'd rather keep arguing, move it over to a new issue instead of this PR. But if you're in the interest of the people using xbps, then you'll fix an issue like this instead of pretending that all the code you put out is impenetrable and that everything is user error.
Sorry, but I don't have the patience to continue dealing with AI slop.
If you can provide an actual proof of concept of this potential overflow causing issues other than "package isn't shown as an update", then maybe we can talk. Your suggested fix of "use 64-bit types" does not fix the issue; it only makes it happen at a larger value.
I'll take that as you can't find an excuse to prove that you're right.
If Package-B depends on Package-A >= 20260101, and Package-A is versioned 2147483648 (which wraps to -2147483648), XBPS will refuse to install Package-B because it thinks the dependency isn't met. This breaks the entire dependency tree; the package becomes a zombie that refuses to upgrade. "Does not fix the issue, it only makes it happen at a larger value": neither would clamping the value. In fact, clamping would probably make it more unstable and could make versions like 8888888888 compare equal to 9999999999. But this is irrelevant when it can handle 19 digits instead of 10. Using int64_t doesn't change anything except making the package manager more reliable and able to handle more edge cases. But since you so kindly asked, here's the proof.
Name Action Version New version Download size
Size required on disk: 42KB
Do you want to continue? [Y/n] y
[*] Verifying package integrity
[*] Collecting package files
[*] Unpacking packages
[*] Configuring unpacked packages
0 downloaded, 1 installed, 0 updated, 1 configured, 0 removed, 0 on hold.
You've just proven that absolutely nothing happens, while still getting the facts incorrect.
What facts have I gotten incorrect? I just proved that an overflow does occur that prevents package updating. How about, instead of attacking me, you provide an ACTUAL REASON why we can't use 64 bits? Because from what I have seen, it's a whole lot of logical fallacies and very little actual reasoning.
You said And you had to manually get the overflown version into the repository, which is not shown in your log.
Because it's not necessary. We are not using 2 or 4 bits because that is known to not be enough. 32-bit has been proven to be enough for over 9k (Void Linux) packages over a period of at least 24 years. Changing it to 64-bit does absolutely nothing; it just allows a few more digits. To fix this "issue" the whole API would need to be redesigned to return errors, and this won't really change anything, just print an error message instead of silently ignoring a package.
Being pedantic, I see.
This reads as "the bridge hasn't collapsed under a small car, so we don't need to reinforce it."
Who ever said this? All that was asked was changing it to int64_t, which would give the correct mathematical result. Sure, if you threw like 30 digits at it, it would still fail, but it would be a lot less likely to fail for any given package because there is more headroom. You're trying to act like I'm stupid and that the entire API would need to be changed, just so you can say you were correct. Well, unfortunately for you, I'm not stupid, and it's really easy to see that this is a trivial change that affects nothing.
Seriously? You immediately jump to disprove anything I say in order to keep your reputation as always being right. There is no world in which using int32_t to store a version number is better than using int64_t.
The issue is that you haven't actually read any of the code and don't know the codebase.
Then how did I open this PR?
Claude opened the PR.
Claude didn't open this. In case you can't see, the commits are signed with my GPG key, not Claude's. Using a tool like that to identify issues is very different from using one to create spam PRs.
You've been posting walls of AI text at me for the past 24 hours.
It's not AI; a real human wrote them. Would you prefer I record a screen recording of me typing responses? Would that make it better? You haven't provided a single reason as to why we can't update this. So provide one; I'd love to hear it. Can't wait for you to close this PR because you can't handle being wrong. Hilarious. I've provided pages of evidence in the thread above. Have a good day.
Evidence for what?
Another way is to compare bigints stored in |
Fixed off-by-one errors in calloc and malloc calls by adding the required byte for the null terminator.
Added string length guards before strncmp operations to prevent potential buffer over-reads on short paths.
Hardened pointer arithmetic by ensuring strchr results are not NULL before incrementing them.
Replaced assert() with conditional checks to allow the tool to skip malformed entries gracefully instead of crashing.
Improved hashing safety by using sizeof on the destination buffer (xe->sha256) rather than local variables.
Ensured memory integrity by calling free() on local allocations before skipping iterations in the new error paths.
You can test the PoC out with a command like this.