OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox. Unlock the magic 🪄: Generative AI (AIGC), easy-to-use APIs, an awesome model zoo, and diffusion models for text-to-image generation, image/video restoration and enhancement, and more.
Highlights
New Features & Improvements
Bug Fixes
New Contributors
Full Changelog: https://github.com/open-mmlab/mmagic/compare/v1.1.0...v1.2.0
Highlights
In this new version of MMagic, we have added support for five new algorithms.
New Features & Improvements
CodeCamp Contributions
Bug Fixes
New Contributors
Full Changelog: https://github.com/open-mmlab/mmagic/compare/v1.0.2...v1.1.0
Highlights
1. More detailed documentation
Thank you to the community contributors for helping us improve the documentation. We have improved many documents, including both Chinese and English versions. Please refer to the documentation for more details.
2. New algorithms
From right to left: original image, DDIM inversion, Null-text inversion
Prompt-to-prompt Editing
New Features & Improvements
Support `BaseModule` for some models by @LeoXing1996 in https://github.com/open-mmlab/mmagic/pull/1543
Bug Fixes
New Contributors
We are excited to announce the release of MMagic v1.0.0 that inherits from MMEditing and MMGeneration.
Since its inception, MMEditing has been the preferred algorithm library for many super-resolution, editing, and generation tasks, helping research teams win more than 10 top international competitions and supporting over 100 GitHub ecosystem projects. After iterative updates with OpenMMLab 2.0 framework and merged with MMGeneration, MMEditing has become a powerful tool that supports low-level algorithms based on both GAN and CNN.
Today, MMEditing embraces Generative AI and transforms into a more advanced and comprehensive AIGC toolkit: MMagic (Multimodal Advanced, Generative, and Intelligent Creation).
In MMagic, we support 53+ models in multiple tasks such as fine-tuning for Stable Diffusion, text-to-image generation, image and video restoration, super-resolution, editing, and generation. With excellent training and experiment management support from MMEngine, MMagic provides agile and flexible experimental support for researchers and AIGC enthusiasts, helping you on your AIGC exploration journey. With MMagic, experience more magic in generation! Let's open a new era beyond editing together. More than Editing, Unlock the Magic!
We support 11 new models in 4 new tasks.
For diffusion models, we provide the following "magic":
Support image generation based on Stable Diffusion and Disco Diffusion (see the inference sketch after this list).
Support fine-tuning methods such as DreamBooth and DreamBooth LoRA.
Support controllability in text-to-image generation using ControlNet.
Support acceleration and optimization strategies based on xFormers to improve training and inference efficiency.
Support video generation based on MultiFrame Render. MMagic supports the generation of long videos in various styles through ControlNet and MultiFrame Render.
prompt keywords: a handsome man, silver hair, smiling, play basketball
prompt keywords: a girl, black hair, white pants, smiling, play basketball
prompt keywords: a handsome man
Support calling basic models and sampling strategies through DiffuserWrapper.
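As a quick taste of these features, text-to-image generation runs in a few lines through the unified inference API. A minimal sketch modeled on MMagic's inference interface; the exact model name and arguments may differ across versions:

```python
from mmagic.apis import MMagicInferencer

# Build an inferencer for Stable Diffusion text-to-image generation.
sd_inferencer = MMagicInferencer(model_name='stable_diffusion')

# Generate an image from a text prompt and save it to disk.
text_prompts = 'A panda is having dinner at KFC'
sd_inferencer.infer(text=text_prompts, result_out_dir='output/sd_res.png')
```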
SAM + MMagic = Generate Anything! SAM (Segment Anything Model) is hugely popular these days and can be combined with MMagic for even more creative power. If you want to create your own animation, head over to OpenMMLab PlayGround.
To improve your "spellcasting" efficiency, we have made the following adjustments to the "magic circuit":
We are excited to announce the release of MMEditing 1.0.0rc7. This release supports 51+ models, 226+ configs and 212+ checkpoints in MMGeneration and MMEditing. We highlight the following new features:
A total of 8 developers contributed to this release. Thanks @LeoXing1996, @Z-Fran, @plyfager, @zengyh1900, @liuwenran, @ryanxingql, @HAOCHENYE, @VongolaWu
Full Changelog: https://github.com/open-mmlab/mmediting/compare/v1.0.0rc6...v1.0.0rc7
We are excited to announce the release of MMEditing 1.0.0rc6. This release supports 50+ models, 222+ configs and 209+ checkpoints in MMGeneration and MMEditing. We highlight the following new features:
`GenValLoop` and `MultiValLoop` have been merged into `EditValLoop`, and `GenTestLoop` and `MultiTestLoop` have been merged into `EditTestLoop`. Use cases:
Case 1: metrics on a single dataset
```python
# add the following lines in your config
# 1. use `EditValLoop` instead of `ValLoop` in MMEngine
val_cfg = dict(type='EditValLoop')
# 2. specify `EditEvaluator` instead of `Evaluator` in MMEngine
val_evaluator = dict(
    type='EditEvaluator',
    metrics=[
        dict(type='PSNR', crop_border=2, prefix='Set5'),
        dict(type='SSIM', crop_border=2, prefix='Set5'),
    ])
# 3. define the dataloader
val_dataloader = dict(...)
```
Case 2: different metrics on different datasets
```python
# add the following lines in your config
# 1. use `EditValLoop` instead of `ValLoop` in MMEngine
val_cfg = dict(type='EditValLoop')
# 2. specify a list of `EditEvaluator`s;
#    do not forget to add a prefix for each metric group
div2k_evaluator = dict(
    type='EditEvaluator',
    metrics=dict(type='SSIM', crop_border=2, prefix='DIV2K'))
set5_evaluator = dict(
    type='EditEvaluator',
    metrics=[
        dict(type='PSNR', crop_border=2, prefix='Set5'),
        dict(type='SSIM', crop_border=2, prefix='Set5'),
    ])
# define the evaluator config
val_evaluator = [div2k_evaluator, set5_evaluator]
# 3. specify a list of dataloaders, one for each metric group
div2k_dataloader = dict(...)
set5_dataloader = dict(...)
# define the dataloader config
val_dataloader = [div2k_dataloader, set5_dataloader]
```
Support `stack` and `split` for `EditDataSample`. Use case:
```python
from copy import deepcopy

from mmedit.structures import EditDataSample

# Example for `split`: `outputs`, `noise`, `sample_kwargs` and
# `sample_model` are assumed to come from a preceding generation step.
gen_sample = EditDataSample()
gen_sample.fake_img = outputs  # tensor
gen_sample.noise = noise  # tensor
gen_sample.sample_kwargs = deepcopy(sample_kwargs)  # dict
gen_sample.sample_model = sample_model  # string
# set `allow_nonseq_value=True` to copy non-sequential data
# (`sample_kwargs` and `sample_model` in this example) to each sample
batch_sample_list = gen_sample.split(allow_nonseq_value=True)
```
```python
import torch
from mmedit.structures import EditDataSample

# Example for `stack`: merge two samples into one batched sample
data_sample1 = EditDataSample()
data_sample1.set_gt_label(1)
data_sample1.set_tensor_data({'img': torch.randn(3, 4, 5)})
data_sample1.set_data({'mode': 'a'})
data_sample1.set_metainfo({
    'channel_order': 'rgb',
    'color_flag': 'color'
})

data_sample2 = EditDataSample()
data_sample2.set_gt_label(2)
data_sample2.set_tensor_data({'img': torch.randn(3, 4, 5)})
data_sample2.set_data({'mode': 'b'})
data_sample2.set_metainfo({
    'channel_order': 'rgb',
    'color_flag': 'color'
})

data_sample_merged = EditDataSample.stack([data_sample1, data_sample2])
```
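As a hedged usage note, `stack` and `split` are designed as a pair, so splitting the merged sample should recover the individual samples. A minimal sketch of the assumed round-trip behavior, not verified against a specific release:

```python
# Assumed round-trip between `stack` and `split`.
samples = data_sample_merged.split(allow_nonseq_value=True)
assert len(samples) == 2
```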
`GenDataPreprocessor` has been merged into `EditDataPreprocessor`; select it via the `type` field in the config. You no longer need to set `input_view` and `output_view`, since we infer the shape of `mean` automatically. The defaults are `BGR` channel order (for three-channel images) and a `[0, 255]` data range. `PixelData` has been removed. For BaseGAN/CondGAN models, real images are passed from `data_samples.gt_img` instead of `inputs['img']`.
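A minimal config sketch of the merged preprocessor (the `mean`/`std` values here are illustrative assumptions, not release defaults):

```python
# No `input_view`/`output_view` needed: the shape of `mean` is inferred.
data_preprocessor = dict(
    type='EditDataPreprocessor',
    mean=[127.5, 127.5, 127.5],  # three-channel images default to BGR order
    std=[127.5, 127.5, 127.5],   # inputs assumed in the [0, 255] range
)
```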
Fix `momentum` in EMA. #1581
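For context, here is a sketch of the standard exponential moving average update; libraries differ on whether `momentum` weights the old average or the new value, which is a common source of such bugs:

```python
def ema_update(avg: float, new: float, momentum: float = 0.999) -> float:
    # Here `momentum` weights the previous average; some frameworks use
    # the opposite convention, so check which one your library expects.
    return momentum * avg + (1.0 - momentum) * new
```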
A total of 17 developers contributed to this release. Thanks @plyfager, @LeoXing1996, @Z-Fran, @zengyh1900, @VongolaWu, @liuwenran, @austinmw, @dienachtderwelt, @liangzelong, @i-aki-y, @xiaomile, @Li-Qingyun, @vansin, @Luo-Yihang, @ydengbi, @ruoningYu, @triple-Mu
Full Changelog: https://github.com/open-mmlab/mmediting/compare/v1.0.0rc5...v1.0.0rc6
Support `pixel-unshuffle`. #1637
A total of 10 developers contributed to this release. Thanks @LeoXing1996, @Z-Fran, @zengyh1900, @liuky74, @KKIEEK, @zeakey, @Sqhttwl, @yhna940, @gihwan-kim, @vansin
Full Changelog: https://github.com/open-mmlab/mmediting/compare/v0.16.0...0.16.1
We are excited to announce the release of MMEditing 1.0.0rc5. This release supports 49+ models, 180+ configs and 177+ checkpoints in MMGeneration and MMEditing. We highlight the following new features:
A total of 16 developers contributed to this release. Thanks @plyfager, @LeoXing1996, @Z-Fran, @zengyh1900, @VongolaWu, @liuwenran, @AlexZou14, @lvhan028, @xiaomile, @ldr426, @austin273, @whu-lee, @willaty, @curiosity654, @Zdafeng, @Taited
Full Changelog: https://github.com/open-mmlab/mmediting/compare/v1.0.0rc4...v1.0.0rc5
Highlights
We are excited to announce the release of MMEditing 1.0.0rc4. This release supports 45+ models, 176+ configs and 175+ checkpoints in MMGeneration and MMEditing. We highlight the following new features:
New Features & Improvements
Bug Fixes
Contributors
A total of 14 developers contributed to this release. Thanks @plyfager, @LeoXing1996, @Z-Fran, @zengyh1900, @VongolaWu, @gaoyang07, @ChangjianZhao, @zxczrx123, @jackghosts, @liuwenran, @CCODING04, @RoseZhao929, @shaocongliu, @liangzelong.
New Contributors