Update Utilities.sol: Optimized For loops #31
Conversation
* Add JSONs generated by pp-spartan
* Separate path (src/blocks/) for low-level crypto building blocks
* Add e2e verification contract skeleton
* Move FieldLib and PolyLib code to Utilities.sol file
* Step 1
* Step 2
* CI adjustments
* Update README
* Update integration testing infrastructure
…ter#27)
* Add PolyEvalInstance building block
* Refactor building blocks
* Add step 3 of verification to the e2e contract
* Remove data contracts for step 2 and step 3
* Adjust CI
* KeccakTranscript and Sumcheck blocks adjustments
* Requested changes
@Aryan9592, thank you for your contribution! PRs that aim to decrease the gas cost are always welcome. Can we apply this approach to the rest of the for loops in the codebase?
The current roadmap for this project is moving the e2e contract from the pp-spartan branch into main. So for now, I would propose postponing this merge until we merge the whole contract from pp-spartan and enforce an overall gas cost estimation on CI (in order to have a comparable gas cost value and prevent merging PRs that increase it without strong reasons).
You can still extend this PR to the rest of the for loops in the codebase. Considering what I said above, we will then be able to actually compare the gas cost of the e2e flow that involves your optimisation with the initial one.
P.S.: we also need to figure out why integration tests fail on CI
@@ -252,7 +256,7 @@ library PolyLib {
     }

     function evalAtOne(UniPoly memory poly, uint256 mod) public pure returns (uint256 result) {
-        for (uint256 i = 0; i < poly.coeffs.length; i++) {
+        for (uint256 i; i < poly.coeffs.length; i++) {
Why not use the unchecked incrementing approach for the index here?
Actually, Solidity version 0.8 introduced underflow/overflow checks, which obviously cost some amount of gas, and they are quite important. unchecked discards these checks, so the code inside an unchecked block becomes prone to underflow/overflow bugs. Thus, we really can't use unchecked everywhere.
Now, coming back to your question: there were mainly two types of for loops used in this Solidity file:
// 1. This one has a fixed limit of 32, which makes me pretty sure
// that 'i' will never overflow
for (uint i; i < 32;){
// code
unchecked {
++i;
}
}
// 2. But this one doesn't have a fixed limit,
// since we can't say what size of array the user will provide.
// That's why I was doubtful here and avoided making any changes.
for (uint i; i < poly.coeffs.length; i++){
// code
}
Still, I feel that there won't be any problem with ++i; but if you can assure me that poly.coeffs.length doesn't cause any bugs in the code after adding an unchecked block, then it would be worth making these changes here too!
This also answers your previous question, 'Can we apply this approach to the rest of the for loops in the codebase?'. Yes, we can apply this approach to the rest of the for loops in this codebase, but we need to be careful about where the underflow/overflow checks are really needed and where they aren't.
Hence, I will make these changes once you allow it. But do make sure to double-check them.
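For reference, a minimal sketch of the dynamic-bound pattern under discussion (the library and function names are illustrative, not code from this repository). The argument for safety: the loop condition guarantees i < xs.length before every increment, and an array length can never exceed type(uint256).max, so ++i cannot wrap:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative sketch only, not part of this PR.
library UncheckedLoopSketch {
    function sum(uint256[] memory xs) internal pure returns (uint256 total) {
        for (uint256 i; i < xs.length; ) {
            total += xs[i]; // the addition itself stays checked and reverts on overflow
            unchecked {
                ++i; // safe: `i < xs.length` held just before this increment
            }
        }
    }
}
```

Only the counter increment is placed inside unchecked; the body of the loop keeps the 0.8 overflow protection.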
Yeah, that makes sense. Let me ask @huitseeker's opinion here.
Offhand, I can't imagine a situation where a loop counter with an unfixed limit may overflow. That would mean we need to iterate over an extremely large array of uint256s (which we use most of all), and I believe that would cause an OutOfGasException.
True, I agree with that. Iterating over large arrays might cause an OutOfGasException. Still, many repositories and projects avoid using unchecked in many scenarios; then again, we should also say that those repositories and projects lack these gas optimization fixes too.
However, I too feel that there won't be any problem in making these changes, even with the arrays. Let's wait for @huitseeker's opinion.
- I have no issue with removing checks from increments that can be locally shown to have no overflow, such as increments bounded by a loop constant;
- the increments bounded by a user-controlled variable, however, are more problematic, essentially because they lead to increased complexity: we don't know if much of the code we have right now will still be there in 3 months, due to the alpha nature of the software.
How about this? We merge the PR as-is today and keep the link to the optimization page you suggested in an issue, which will remind us to revisit whether we can extend the technique we're implementing here to more loops once the code is more stable.
#32 has been merged and I think we can now merge this. My suspicion regarding the failing CI is that the integration tests rely on a private key and URL for our cloud-based Anvil node, which are not accessible by forks, so we can target this PR (and subsequent ones) to some branch other than main.
Perfect!
     }
     offset += 32;

-    for (uint256 i = 2; i < poly.coeffs.length; i++) {
+    for (uint256 i = 2; i < poly.coeffs.length;) {
Pursuant to the comment above: is i bounded by a constant?
I don't think i is bounded by any constant. Its range goes from 2 to poly.coeffs.length, where poly.coeffs refers to a dynamic array whose length is not fixed (i.e. it depends on the number of elements the array contains). So I feel that here i is bounded by a user-controlled variable, like you mentioned in this comment.
How about this? We merge the PR as-is today, keep the link to the optimization page you suggested in an issue, which will remind us to revisit if we can extend the technique we're implementing here to more loops once the code is more stable.
Nice one; better we go with it and refer back to this PR when the code reaches its stable stage.
@Aryan9592, please switch the target branch to a different branch (not main), and we will merge it. Then I will be able to check whether the integration tests work as expected.
Changed from main to spark. Is that okay?
I suspect that you couldn't create a new branch from main due to permission issues, right? So I'm merging it to spark and will manually resolve the conflicts.
@Aryan9592, the integration tests passed, which is great! I think we will need to revisit our policy for working with third-party contributors and somehow allow PRs from forks to run the integration tests. For now, I'm creating a branch where we will accumulate contributions and periodically merge them into the main branch on our own. Please use it as the target for your next PRs.
Regarding your further work (if you are going to contribute more) related to gas cost optimisation: so far we have a meta-issue, #29, dedicated specifically to gas cost reduction. We are considering a systematic approach of turning our existing "correct" contracts (compatible with proof verification by the reference Rust implementation of Nova) into assembly ones using Yul (on the first iteration) and probably involving Huff insertions for more low-level optimisations eventually (on the second iteration). This roadmap implies that all the building blocks we have so far (Poseidon, KeccakTranscript, EqPolynomial, etc.) are going to be re-implemented in pure Yul (where it makes sense and is possible), together with all the step libraries. The work being performed in this context is in the gas-optimizing branch. You can play with the existing assembly code and take any building block that does not yet exist in assembly form. Or you can review the existing ones (Poseidon / KeccakTranscript) and try to improve them.
@storojs72 That would be fun! I'll try whenever I get some free time. Thanks!
So, these for loops are optimized based on this guide: https://gist.github.com/grGred/9bab8b9bad0cd42fc23d4e31e7347144#for-loops-improvement
Moreover, there's no need to initialize unsigned integers with the value 0, as they already default to 0 when declared like this:
uint256 i;
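Taken together, a minimal sketch of both optimizations from the guide (the default-zero counter declaration and the unchecked increment under a fixed bound); the library name and function are illustrative, not code from this repository:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative sketch only, not part of this PR.
library FixedLoopSketch {
    function sumFixed(uint256[32] memory xs) internal pure returns (uint256 total) {
        // `i` defaults to 0, so no explicit `= 0` initializer is needed;
        // with the constant bound 32 the counter provably cannot overflow,
        // so its increment can safely sit in an `unchecked` block.
        for (uint256 i; i < 32; ) {
            total += xs[i]; // the addition itself stays checked
            unchecked {
                ++i;
            }
        }
    }
}
```

This matches the constant-bound case (loop type 1) discussed earlier in the thread, where the safety argument is local and does not depend on user input.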