1 file changed: +6 −2 lines changed

```diff
@@ -374,7 +374,9 @@ def load_lora_into_unet(
             adapter_name (`str`, *optional*):
                 Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
                 `default_{i}` where i is the total number of adapters being loaded.
-            metadata: TODO
+            metadata (`dict`):
+                Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived
+                from the state dict.
             low_cpu_mem_usage (`bool`, *optional*):
                 Speed up model loading only loading the pretrained LoRA weights and not initializing the random
                 weights.
@@ -856,7 +858,9 @@ def load_lora_into_unet(
             adapter_name (`str`, *optional*):
                 Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
                 `default_{i}` where i is the total number of adapters being loaded.
-            metadata: TODO
+            metadata (`dict`):
+                Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived
+                from the state dict.
             low_cpu_mem_usage (`bool`, *optional*):
                 Speed up model loading only loading the pretrained LoRA weights and not initializing the random
                 weights.
```
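The documented behavior of the new `metadata` argument can be sketched in a minimal, standalone form. This is not diffusers' actual implementation; `resolve_lora_config_kwargs` and the key naming are illustrative assumptions, showing only the documented rule: if `metadata` is supplied it is used as-is, otherwise `LoraConfig`-style arguments (here just the rank `r`) are derived from the state dict.

```python
import numpy as np


def resolve_lora_config_kwargs(state_dict, metadata=None):
    """Illustrative sketch (not diffusers' code) of the documented rule:
    when `metadata` is supplied, nothing is derived from the state dict."""
    if metadata is not None:
        # Metadata wins verbatim.
        return dict(metadata)
    # Otherwise derive the rank `r` from the lora_A matrices,
    # whose first dimension is the adapter rank (shape: r x in_features).
    ranks = {k: v.shape[0] for k, v in state_dict.items() if "lora_A" in k}
    return {"r": max(ranks.values())}


# Usage with dummy weights of rank 4:
sd = {
    "unet.lora_A.weight": np.zeros((4, 16)),
    "unet.lora_B.weight": np.zeros((16, 4)),
}
print(resolve_lora_config_kwargs(sd))                    # derived from shapes
print(resolve_lora_config_kwargs(sd, metadata={"r": 8}))  # metadata overrides
```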