Commit f78c6f9 (1 parent: 461d2bd)

Commit message: todo: 3.

File tree: 1 file changed (15 additions, 5 deletions)


src/diffusers/loaders/lora_pipeline.py

Lines changed: 15 additions & 5 deletions
@@ -441,7 +441,9 @@ def load_lora_into_text_encoder(
         adapter_name (`str`, *optional*):
             Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
             `default_{i}` where i is the total number of adapters being loaded.
-        metadata: TODO
+        metadata (`dict`):
+            Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived
+            from the state dict.
         low_cpu_mem_usage (`bool`, *optional*):
             Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
             weights.
@@ -926,7 +928,9 @@ def load_lora_into_text_encoder(
         adapter_name (`str`, *optional*):
             Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
             `default_{i}` where i is the total number of adapters being loaded.
-        metadata: TODO
+        metadata (`dict`):
+            Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived
+            from the state dict.
         low_cpu_mem_usage (`bool`, *optional*):
             Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
             weights.
@@ -1383,7 +1387,9 @@ def load_lora_into_text_encoder(
         adapter_name (`str`, *optional*):
             Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
             `default_{i}` where i is the total number of adapters being loaded.
-        metadata: TODO
+        metadata (`dict`):
+            Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived
+            from the state dict.
         low_cpu_mem_usage (`bool`, *optional*):
             Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
             weights.
@@ -2320,7 +2326,9 @@ def load_lora_into_text_encoder(
         adapter_name (`str`, *optional*):
            Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
            `default_{i}` where i is the total number of adapters being loaded.
-        metadata: TODO
+        metadata (`dict`):
+            Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived
+            from the state dict.
         low_cpu_mem_usage (`bool`, *optional*):
             Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
             weights.
@@ -2861,7 +2869,9 @@ def load_lora_into_text_encoder(
         adapter_name (`str`, *optional*):
             Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
             `default_{i}` where i is the total number of adapters being loaded.
-        metadata: TODO
+        metadata (`dict`):
+            Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived
+            from the state dict.
         low_cpu_mem_usage (`bool`, *optional*):
             Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
             weights.

0 commit comments