Jpeg downscaling decoding #2076
Conversation
Only works for 444
Fixed warnings & added intermediate benchmark results
Codecov Report

```
@@          Coverage Diff          @@
##            main    #2076   +/-  ##
=====================================
  Coverage       ?      88%
=====================================
  Files          ?      997
  Lines          ?    53874
  Branches       ?     6891
=====================================
  Hits           ?    47444
  Misses         ?     5250
  Partials       ?     1180
```
@JimBobSquarePants it's actually done and the downscaling IDCT is covered by tests. Do we need tests with actual image comparison between the downscaled decode and a full decode followed by a resize? The only blocking thing left is the API; what's the best approach to discuss it?
I've had a little idea on how we could handle this as an api... basically I think we could handle this by convention during load. The premise is, we add an overload of Image.Load that accepts the mutation operations to apply while loading. The limitation here would be that the resize would have to be the first operation, but apart from that it would work... and given that this feels like an optimisation target, I don't think that's too hard a limitation. The benefit of this approach is that it would allow us to add the feature without adding lots of new APIs (only new overloads of existing ones, exposing more existing APIs). Here is a spike on how it could be implemented.
so instead of calling

```csharp
using var img = Image.Load("/path.jpg");
img.Mutate(ctx => ctx.Resize(120, 100));
img.Save("/out.png");
```

you can instead call

```csharp
using var img = Image.Load("/path.jpg", ctx => ctx.Resize(120, 100));
img.Save("/out.png");
```

but this would also work for any mutation set applied, i.e.

```csharp
using var img = Image.Load("/path.jpg", ctx => ctx
    .Resize(120, 100)
    .Pixelate(...)
    .DrawText(...)
    .Resize(20, 10));
img.Save("/out.png");
```

which would functionally translate to

```csharp
using var img = Image.Load("/path.jpg", new ResizeProcessor(120, 100));
img.Mutate(ctx => ctx
    .Pixelate(...)
    .DrawText(...)
    .Resize(20, 10)); // notice only the first resize is stripped off and inlined into loading
img.Save("/out.png");
```
I thought we would just provide an overload for
Decoding to a specific size is a very specific and unique function. I can't ever see us implementing other mutations on load, so I think it's best to keep things locked down and specific. I wouldn't even allow setting a custom resampler.
I don't think it's a problem. Interface:

```csharp
Image LoadResize(Stream stream, Size size, IResampler resampler);
```

Jpeg implementation:

```csharp
Image LoadResize(Stream stream, Size size, IResampler resampler)
{
    Image img;

    // jpeg idct resizing is very close to the Box resampler
    if (resampler == KnownResamplers.Box)
    {
        img = this.DecodeResizeImplementation(stream, size);
    }
    else
    {
        img = this.Decode(stream);
    }

    // post-decode resize if decoding didn't resize due to an unsupported resampler
    // or didn't resize to the exact size the user asked for
    if (img.Size != size)
    {
        // Note: MagicScaler provides a way to use two different resamplers.
        // For example, it can use the low-quality Box resampler for rough downscaling
        // and then use something better for the final downscaling to the target size.
        //
        // IMO this is a very cool niche thingy but it's so specific to the jpeg decoder
        // that it would be better to cut it, or at most put it into JpegDecoder as a specific method.
        img.Mutate(ctx => ctx.Resize(size, resampler));
    }

    return img;
}
```

If any other format provides a 'native' resizing operation, we can reuse this boilerplate, swapping in that format's check for native scaling support. And if there's no such support from the format:

```csharp
Image img = this.Decode(stream);
img.Mutate(ctx => ctx.Resize(size, resampler));
return img;
```

The only limitation is the obligation to provide a resampler; providing anything but KnownResamplers.Box would simply fall back to the full decode + post-decode resize path shown above.
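For clarity, here's what calling that shape could look like from the user's side. This is a hypothetical sketch built on the proposal above; LoadResize is the proposed method, not an existing ImageSharp API:

```csharp
// Hypothetical usage of the LoadResize proposal above; LoadResize is the
// proposed method, not an existing ImageSharp API.
var decoder = new JpegDecoder();
using var stream = File.OpenRead("/path.jpg");

// Box hits the native IDCT downscaling path; any other resampler falls
// back to a full decode followed by a post-decode resize.
using Image thumb = decoder.LoadResize(stream, new Size(120, 100), KnownResamplers.Box);
thumb.Save("/thumb.png");
```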
Soooo how do we decide?
```
@@ -326,11 +325,13 @@ private void ParseBaselineData()

        if (this.scanComponentCount != 1)
        {
            this.spectralConverter.PrepareForDecoding();
```
Okay, we got a little problem here. The ITU spec allows markers to be placed between the frame and scan markers, which can screw up the current architecture.
I've decided to separate the SOF scan callback, which now simply saves references to the frame and jpeg data variables, and add a separate PrepareForDecoding() callback which is called explicitly for single-scan images and implicitly, via SpectralConverter, for multi-scan images. HuffmanScanDecoder and ArithmeticScanDecoder are affected by this change.
I'm planning to do a refactoring PR for the entire jpeg decoder after this is merged. The arithmetic decoding PR and this scaling decoding PR contain rather large changes, so it's the perfect time to clean it up! :)
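To make the new flow concrete, here is a minimal sketch of the decoupling described above. PrepareForDecoding and SpectralConverter are named in the comment, but the surrounding shape, fields, and the frame-injection method are hypothetical simplifications, not the actual decoder code:

```csharp
// Hypothetical simplification of the decoupled flow, not the PR's actual code.
internal abstract class SpectralConverter
{
    private JpegFrame frame;        // assumed frame type
    private IRawJpegData jpegData;  // assumed raw-data type
    private bool prepared;

    // SOF callback: now only stores references, so markers appearing
    // between the frame and scan markers can't break preparation.
    public void InjectFrameData(JpegFrame frame, IRawJpegData jpegData)
    {
        this.frame = frame;
        this.jpegData = jpegData;
    }

    // Called explicitly by the scan decoders for single-scan images,
    // and implicitly before spectral conversion for multi-scan images.
    public void PrepareForDecoding()
    {
        if (!this.prepared)
        {
            // allocate spectral buffers, create component processors, etc.
            this.prepared = true;
        }
    }
}
```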
@tocsoft you might be interested in this: Build log: https://github.com/SixLabors/ImageSharp/runs/6247356714
Well, it's finally done and ready for final review and merging, thanks everyone for the initial feedback! Current changes are internal so users won't be able to use them. Updated results for the load-resize-save benchmark:

Nothing has really changed since the first benchmark for this PR. There's a lot to work on which should bring a noticeable performance boost:
You said it's internal so users can't use it. Is there a way I can pick up an alpha build and try it?
That's what we decided above, as this API is highly experimental. Personally I don't have anything against making it public if @antonfirsov and @JimBobSquarePants agree :)
Update: I've deleted some old jpeg-specific benchmarks as they were never used and we have a Load-Resize-Save benchmark across different .NET libs anyway.
Played a bit with parsing only the DC component, without changing the internal structure of how we allocate and store spectral blocks, and got this beauty:

NetVips is still very fast, but we are already only 20% slower in the multithreaded scenario.
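For context on why DC-only parsing is enough here: with every AC coefficient treated as zero, the 8x8 inverse DCT collapses to a constant block, so 1/8 scaling needs just one value per block. A rough illustration (hypothetical helper, not the PR's code):

```csharp
// With only the DC coefficient, the 2D IDCT of an 8x8 block is constant:
// f(x, y) = F(0, 0) / 8 for every pixel. So each block shrinks to a single
// pixel with one multiply: dequantize DC, divide by 8, undo the level shift.
static float ScaleBlock8To1(short dc, float dequantizer)
    => (dc * dequantizer) / 8f + 128f;
```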
Very nice speed up! Can’t review til Thursday night cos I'm on holiday, but during the review process we’ll likely make the bits needed public.
What I mean is that if we manage to figure out how to build a simplified, codec-invariant LoadResize API in the 3.0 timeframe (which I still believe we can do), that may impact the shape of the Jpeg-specific API in ways we cannot predict now. I think this is a very good use-case for
Played a bit more with some possible optimizations; Avx spectral -> color conversion and skipping huffman values got us to almost 10% slower in the multi-threaded scenario compared to NetVips:

And I don't really understand why the multithreaded scenario is faster in terms of 'how close we are to NetVips'; I thought it could only be the other way around :D

EDIT: Now I get it, the current benchmark setup contains a progressive jpeg which NetVips doesn't handle well. With only baseline jpegs we have the expected gap of 25%:

EDIT 2: Ran a progressive-only thumbnail benchmark:

Yep, that's it, we are almost the fastest .NET lib in the multithreaded scenario for progressive jpeg decoding and resizing. But baseline is not that cool unfortunately. It's gonna be a very tough thing to optimize.
@br3aker I'll review this next.
@br3aker If I'm reading this correctly, your decoder is simply decoding to the closest possible scaled size to the target, yeah? If so I'm happy to get this merged once the conflict is fixed. We can figure out naming and make things public when we figure out the public API. I think we can provide a best-of-both-worlds scenario btw.
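For readers following the thread: "closest possible scaled size" means picking the coarsest supported IDCT scale whose output still covers the target, then finishing with a regular resize if needed. A hypothetical illustration of that selection, not the PR's actual code (it also assumes the 1/2 and 1/4 scales planned in the description):

```csharp
// Hypothetical sketch: choose how many output pixels each 8-pixel block
// decodes to (1, 2, 4 or 8) so the decoded image still covers the target.
static int ChooseBlockScale(int sourceWidth, int targetWidth)
{
    foreach (int scale in new[] { 1, 2, 4 })
    {
        int decodedWidth = (sourceWidth * scale + 7) / 8;
        if (decodedWidth >= targetWidth)
        {
            return scale; // coarsest scale that doesn't undershoot the target
        }
    }

    return 8; // target is too close to the source size; decode at full size
}
```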
@JimBobSquarePants if I recall correctly - yes, but I'll re-check and respond just to be sure. Will resolve conflicts tomorrow.
Yes.
Resolved.
I like it!
Yes, that is #2117 haunting us again. I will try restarting the CI.
Very excited about this! I have plans for a general API I've already started working on.
Prerequisites
TODO
ProfilingSandbox/Program.cs
Description
This PR implements built-in scaling in the Jpeg decoder via a scaled IDCT. Currently only 8-to-8 and 8-to-1 (1/8) scalings are supported; I'll implement 1/2 and 1/4 scalings as part of this PR a bit later. The current code is rather stable in terms of internal API; the 1/2 and 1/4 scalings will be based on the implemented abstract class.
This code is WIP and contains development leftovers.
API
I'm proposing two new methods for the IImageDecoder interface, and a new method on Image&lt;TPixel&gt;. I've certainly missed other endpoints affected by the proposed new API; the point of this proposal is to start a discussion :)
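The proposed signatures themselves were not preserved in this thread. Purely as a hypothetical illustration of the shape under discussion (names and parameters are guesses, not the PR's actual proposal):

```csharp
// Hypothetical illustration only; the real proposed signatures are not shown above.
public interface IImageDecoder
{
    // Existing decode entry point.
    Image<TPixel> Decode<TPixel>(Configuration configuration, Stream stream)
        where TPixel : unmanaged, IPixel<TPixel>;

    // Proposed pair: decode directly to (approximately) the target size.
    Image<TPixel> DecodeResize<TPixel>(Configuration configuration, Stream stream, Size targetSize)
        where TPixel : unmanaged, IPixel<TPixel>;

    Image DecodeResize(Configuration configuration, Stream stream, Size targetSize);
}

// And a matching convenience entry point on the image API, e.g.:
// Image<TPixel> Image.LoadResize<TPixel>(string path, Size targetSize);
```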
Why are new methods needed?
For code versatility. As ImageSharp already supports quite a few image formats, I'd expect this piece of code to be very popular:

To use the new Jpeg resizing API directly, users would need to somehow identify the image format first and only then call the decoder, parsing the image stream twice. The proposed API eliminates this performance loss:
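The snippets this description originally pointed at were lost here; as a hedged reconstruction of the comparison being made (LoadResize is the hypothetical name used earlier in the thread):

```csharp
// The popular pattern: decode the full image, then resize it.
using var img = Image.Load("/path.jpg");
img.Mutate(ctx => ctx.Resize(120, 100));
img.Save("/out.png");

// Proposed shape (hypothetical name): one pass over the stream, letting a
// format that supports native scaling (jpeg here) downscale during decode.
using var small = Image.LoadResize("/path.jpg", new Size(120, 100));
small.Save("/out.png");
```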
Benchmarks
Master
This PR