This is a note for reference in performance evaluation: the amortized cost of the function-independent phase can be significantly smaller than its non-amortized cost.
Example: if the parties are going to run many small circuits (e.g., run the AES circuit one million times), they can either:
use amortization: run one large function-independent phase sufficient for all one million invocations (or for one thousand, which is also sufficient for amortization). Note that these circuits would then need to use the same global key (delta).
not use amortization: run a separate function-independent phase for each invocation.
The latter will show suboptimal performance numbers.
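The gap between the two options can be sketched with a toy cost model. All numbers below are illustrative assumptions, not measurements from emp-agmpc: a hypothetical fixed round cost per phase run, the 3.9 us asymptotic per-triple figure from later in this note, and a rough triple count per AES evaluation.

```python
# Hypothetical cost model for the function-independent phase (assumed, not
# measured): cost(batch) = rounds * RTT + per_triple * batch.

ROUNDS = 20            # assumed number of protocol rounds per phase run
RTT = 0.020            # 20 ms round-trip time, in seconds
PER_TRIPLE = 3.9e-6    # asymptotic per-triple cost in seconds (from the note)
TRIPLES_PER_AES = 6800 # rough AND-triple count for one AES-128 circuit

def phase_cost(batch: int) -> float:
    """Total time for one function-independent phase producing `batch` triples."""
    return ROUNDS * RTT + PER_TRIPLE * batch

invocations = 1_000_000

# Without amortization: one phase run per invocation pays the round cost each time.
total_naive = invocations * phase_cost(TRIPLES_PER_AES)

# With amortization: one big phase covering every invocation pays it once.
total_amortized = phase_cost(invocations * TRIPLES_PER_AES)
```

Under these assumed parameters the per-invocation setup is dominated by the fixed round cost, so the amortized run is an order of magnitude cheaper in total.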
Benefits of amortization: evaluations suggest that amortization affects performance in several ways.
First, and most obviously and severely, network latency, as already noted in [WRK17].
The function-independent phase has many rounds. To generate 10k AND triples using this repo (i.e., IKNP) over a 2 Gbit/s network with two parties:
For RTT = 20 ms, the cost per AND triple is 38 us.
For RTT = 4 ms, the cost per AND triple is 9 us.
If one instead generates 10^7 AND triples, the per-triple cost becomes insensitive to network latency: about 3.9 us.
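Why the large batch hides latency can be seen from a simple model (an assumption for illustration, not a formula from [WRK17] or this repo): the round-trip cost is paid per phase run, not per triple, so it is divided across the batch.

```python
# Per-triple cost model: compute floor plus amortized latency term.
# `compute_us` and `rounds` are hypothetical parameters, not measured constants.

def per_triple_us(batch: int, rtt_ms: float,
                  compute_us: float = 3.9, rounds: int = 20) -> float:
    """Per-AND-triple cost in microseconds for a batch of `batch` triples."""
    return compute_us + rounds * (rtt_ms * 1000.0) / batch

# Small batch: the latency term dominates and is very sensitive to RTT.
small_20ms = per_triple_us(10_000, 20.0)
small_4ms = per_triple_us(10_000, 4.0)

# Large batch: the latency term vanishes; cost approaches the compute floor.
large_20ms = per_triple_us(10_000_000, 20.0)
large_4ms = per_triple_us(10_000_000, 4.0)
```

With a 10^7-triple batch the model's two RTT settings differ by well under 0.1 us per triple, matching the qualitative behavior reported above.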
Second, and also obvious and severe when bandwidth is limited, the authenticated-bit cost when using Ferret.
Ferret generates a large batch of AND triples at a time, so one misses its benefit if not all of the triples are consumed. Ferret's advantages are most visible when the machines are powerful and the network is slow.
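The waste from unconsumed batches can be illustrated with a toy pool (hypothetical, not emp-agmpc's or Ferret's API): triples are produced in fixed-size batches, so a fresh pool per invocation throws away almost a whole batch each time, while a shared pool wastes at most one batch overall.

```python
# Toy model of batched triple generation (hypothetical; not a real API).

class TriplePool:
    def __init__(self, batch_size: int):
        self.batch_size = batch_size
        self.available = 0  # triples generated but not yet consumed
        self.generated = 0  # total triples ever generated

    def consume(self, n: int) -> None:
        """Draw n triples, generating whole new batches on demand."""
        while self.available < n:
            self.available += self.batch_size
            self.generated += self.batch_size
        self.available -= n

# One pool shared across invocations: unused triples carry over.
shared = TriplePool(batch_size=10_000_000)
for _ in range(1000):
    shared.consume(6800)            # e.g., one AES-sized circuit each time

# A fresh pool per invocation: nearly the whole batch is wasted every time.
fresh_generated = 0
for _ in range(1000):
    p = TriplePool(batch_size=10_000_000)
    p.consume(6800)
    fresh_generated += p.generated
```

In this sketch the shared pool generates a single batch for all 1000 invocations, while the per-invocation pools generate 1000 batches for the same workload.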
Third, the bucket size needed to remove the leakage from leaky AND triples.
Most common circuits use more than 3100 AND gates, which gives bucket_size = 4. But with a batch of 280000 triples or more, bucket_size = 3, and this is a non-negligible saving (once network latency is handled).
This means that if one runs a batched function-independent phase, which does not need to be huge (10^7 AND gates are not that expensive), one can always enjoy a bucket size of 3.
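The saving can be quantified directly from the thresholds stated above (taken from this note for the usual statistical security level; this is not the general formula from [WRK17]): since each final AND triple is combined from bucket_size leaky triples, going from bucket size 4 to 3 cuts the leaky-triple cost by 25%.

```python
# Bucket size rule using the batch-size thresholds stated in this note
# (3100 -> bucket of 4, 280000 -> bucket of 3); not a general formula.

def bucket_size(batch: int) -> int:
    if batch >= 280_000:
        return 3
    if batch >= 3_100:
        return 4
    raise ValueError("batch below the thresholds covered by this note")

b_small = bucket_size(10_000)       # e.g., one small-circuit batch
b_big = bucket_size(10_000_000)     # one big amortized batch

# Each output triple consumes bucket_size leaky triples, so the relative
# saving in leaky triples from amortizing is:
saving = 1 - b_big / b_small
```

Under this rule, a 10^7-triple amortized batch needs 3 leaky triples per output triple instead of 4, a 25% reduction in the dominant preprocessing cost.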
Implications for benchmarks: benchmark results from a single invocation of a small circuit need to be treated with significant caution, as network latency may dominate.
Furthermore, the function-dependent phase shows a similar pattern: if the circuit is very small, network latency may dominate there as well.
Implications for the platform: for benchmarking purposes, people want representative numbers, so reasonable amortization should be supported.
Currently, emp-agmpc provides neither a separation between the online and offline phases nor a benchmarking tool designed with amortization in mind. This may be an interesting topic on its own in the future, as it would benefit paper writers. For example, how to store the offline-phase results efficiently is a related problem (#11).