Commit 170b079
Parent(s): 522db09

Update README.md
README.md CHANGED

@@ -13,5 +13,14 @@ inference: true
# controlnet-CiaraRowles/TemporalNetXL

-
+This is TemporalNet1XL, a retrain of the TemporalNet1 ControlNet for Stable Diffusion XL.
+
+It does not use the control mechanism of TemporalNet2, as that would require some additional work to adapt the diffusers pipeline to a 6-channel input.
+
+To run it, install the normal diffusers requirements and use the script "runtemporalnetxl.py" with the following command line arguments (an example invocation is sketched below the diff):
+
+--prompt: does what it says on the tin
+--video_path: the path to your input video. The script splits the frames out if they are not already there; if you want a different resolution or frame rate, preprocess them yourself and put them into the ./frames folder.
+--frames_dir: (optional) a different path for the input frames
+--output_frames_dir: (optional) the output directory
+--init_image_path: (optional) it is recommended that you take the first frame, modify it to a good starting look with Stable Diffusion, and use that as the first generated frame; if unspecified, the raw first video frame is used (not recommended)
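Putting the flags above together, a typical call might look like the sketch below. Only the script name, the flag names, and the ./frames convention come from the README text in this commit; the video and image file names (input.mp4, init_frame.png) and the ./frames_out directory are placeholders chosen for illustration.

```bash
# Illustrative invocation only: input.mp4 and init_frame.png are placeholder
# paths, and ./frames_out is an assumed name for the output directory.
python runtemporalnetxl.py \
  --prompt "a watercolor painting of a city street at night" \
  --video_path ./input.mp4 \
  --frames_dir ./frames \
  --output_frames_dir ./frames_out \
  --init_image_path ./init_frame.png
```

Since --frames_dir and --output_frames_dir are optional, they can be omitted to fall back on the script's defaults.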