Version 0.2.0 - Transformers and Anomaly Detection
What's Changed
- Adds likelihood computation by @marksgraham in #122
- Add classifier-free guidance tutorial by @Warvito in #131
- Add missing scale by @Warvito in #147
- Fix 2d-ldm-tutorial by @JessyD in #151
- Add 3d perceptual loss by @Warvito in #158
- Adds inpainting tutorials by @marksgraham in #161
- Fix set_timesteps by @Warvito in #163
- Add stable diffusion v2.0 x4 upscaler tutorial by @Warvito in #148
- Added the MMD Metric and tests by @danieltudosiu in #152
- Add v_prediction and update docstrings by @Warvito in #165
- Add RadImageNet to Perceptual Loss by @Warvito in #153
- Add option num_head_channels as Sequence by @Warvito in #172
- Fix kernel_size in quant_conv and post_quant_conv layers by @Warvito in #170
- Fix "medicalnet_..." network_type used with spatial_dims==2 by @Warvito in #167
- Adds context error by @marksgraham in #175
- Remove unused predict_epsilon by @Warvito in #184
- Fix TestDiffusionSamplingInferer by @Warvito in #180
- Add option to use residual blocks for up/downsampling by @Warvito in #176
- Optimise Attention Mechanisms by @Warvito in #145
- Fix F.interpolate usage with bfloat16 by @Warvito in #157
- Pretrained DDPM by @marksgraham in #177
- Add full precision attention by @Warvito in #189
- Add FID by @Warvito in #40
- Update tests, CI and pre-commit by @Warvito in #193
- Fix missing init.py by @Warvito in #200
- Replace FeedForward with MLPBlock by @Warvito in #201
- Remove python3.8 as default_language_version by @Warvito in #209
- Refactor code with new pre commit configuration by @Warvito in #207
- Fixes by @marksgraham in #211
- Modify sample function to divide by scale factor before passing to th… by @virginiafdez in #214
- Addition of is_fake_3d setting condition to error in PerceptualLoss by @virginiafdez in #215
- Add verification code by @Warvito in #221
- Suspend CI by @Warvito in #224
- Sequence Ordering class by @danieltudosiu in #168
- Add AutoregressiveTransformer by @Warvito in #225
- Remove ch_mult from AutoencoderKL by @Warvito in #220
- Fix print messages for MS-SSIM by @JessyD in #230
- 228 update pretrained ddpm by @marksgraham in #233
- WIP by @OeslleLucena in #202
- Add annotations from future by @Warvito in #239
- Change num_res_blocks to Sequence[int] | int by @Warvito in #238
- Change num_res_channels and num_channels to Sequence[int] | int by @Warvito in #237
- Initialise inference_timesteps to train_timesteps by @marksgraham in #240
- Fix eval mode for stage_2 by @Warvito in #246
- Fix format by @Warvito in #250
- Fix format by @Warvito in #251
- Use no_grad decorator for sample method by @Warvito in #244
- Add VQVAE + Transformer inferer by @Warvito in #242
- Fix TypeError by @Warvito in #254
- Harmonise VQVAE with AutoencoderKL by @Warvito in #248
- Adds transformer get_likelihood by @marksgraham in #257
- Changed PatchAdversarialLoss to allow for least-squares criterion to … by @virginiafdez in #262
- 258 fix diffusion inferer by @marksgraham in #265
- Fixes type by @marksgraham in #264
- Change default values by @Warvito in #267
- 106 vqvae transformer tutorial by @Ashayp31 in #236
- 203 add scale factor to the ldm training tutorials by @virginiafdez in #271
- Mednist Bundle by @ericspod in #263
- Add cache_dir by @Warvito in #278
- Add option to use flash attention by @Warvito in #222
- Set param.requires_grad = False by @Warvito in #273
- Update loss imports by @marksgraham in #279
- Add Brain LDM to model-zoo by @Warvito in #188
- Fix 3D DDPM tutorial by @Warvito in #277
- Add use_flash_attention argument by @Warvito in #284
- Fixes transformer max_sequence_length training/inference by @marksgraham in #282
- Add MIMIC pretrained model by @Warvito in #286
- Fix validation data in tutorials by @Warvito in #291
- Fix prediction_type="sample" by @Warvito in #300
- Moving DiffusionPrepareBatch by @ericspod in #305
- Change transformer number of tokens by @Warvito in #303
- Add tutorial performing anomaly detection based on likelihoods from generative models by @Warvito in #298
- Add causal self-attention block by @Warvito in #218
- Remove TODOs by @Warvito in #310
- Fix num_head_channels by @Warvito in #316
- Update AutoencoderKL and add more content by @Warvito in #315
- Fixed typo in anomaly detection tutorial by @vacmar01 in #321
- Fix generate function by @Warvito in #322
- Clip image_data values before casting to uint8 by @Warvito in #324
- 314 fix transformer training by @marksgraham in #318
- Fix xtransformer error by @Warvito in #327
- Update README.md by @Warvito in #328
- 150 - Diff-scm by @SANCHES-Pedro in #306
- Update tutorial by @Warvito in #329
- Add README file to tutorial dir by @Warvito in #330
- Add more content to tutorial by @Warvito in #331
- Fix dependencies and license by @Warvito in #332
- Update README.md by @marksgraham in #333
- Update anomaly detection notebook by @marksgraham in #334
- Initial commit anomaly detection with gradient guidance by @JuliaWolleb in #190
- Fix formatting by @Warvito in #340
- Fix transformer max_sequence_length by @marksgraham in #335
- Fix typo by @Warvito in #341
- Remove dependency on x transformers by @Warvito in #325
- Add flash attention to Transformers by @Warvito in #342
- Makes use of inferer classes for training and sampling by @marksgraham in #347
- Fix MS-SSIM by @Warvito in #348
- Initial commit for segmentation with diffusion models by @JuliaWolleb in #292
- Fix format by @Warvito in #349
- Use double in FID computation by @Warvito in #350
- Adopt exact computation of FID by @Warvito in #355
- Fixing vulnerable version of setuptools by @ericspod in #360
- Diffusion Autoencoder tutorial by @SANCHES-Pedro in #361
- Fix typo by @Warvito in #367
- Fix number of attention heads by @Warvito in #369
- Update CXR model by @Warvito in #336
- Add sample figures by @Warvito in #370
- Fix CUDA device mismatch by @ycremar in #371
- Fix grad strides warning when using ddp by @marksgraham in #375
- Fix MMD Metric by @Warvito in #368
- Add ControlNet by @Warvito in #358
- Evaluate the performance of generative models (realism and diversity) by @JessyD in #227
- Remove asserts by @Warvito in #383
- Bump to version 0.2 by @Warvito in #384
New Contributors
- @OeslleLucena made their first contribution in #202
- @Ashayp31 made their first contribution in #236
- @ericspod made their first contribution in #263
- @vacmar01 made their first contribution in #321
- @JuliaWolleb made their first contribution in #190
- @ycremar made their first contribution in #371
Full Changelog: 0.1.0...0.2.0