---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: audio-to-audio
---
# AgnesTachyon So-vits-svc 4.1 Model

A so-vits-svc 4.1 singing-voice-conversion model of Agnes Tachyon from Uma Musume: Pretty Derby.

## Model Details

### Model Description

This is a so-vits-svc 4.1 voice model of Agnes Tachyon, a character from Uma Musume: Pretty Derby.

- **Developed by:** [svc-develop-team](https://github.com/svc-develop-team)
- **Shared by:** [70295](https://space.bilibili.com/700776013)
- **Model type:** Audio-to-Audio
- **License:** [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0)

## Uses

- Clone the [so-vits-svc repository](https://github.com/svc-develop-team/so-vits-svc) and install all of its dependencies.
- Create a folder named `models` in the repository root and place the `AgnesTachyon` folder inside it.
- From the `so-vits-svc` directory, run the following command, replacing `xxx.wav` with the name of your source audio file and `x` with the number of semitones to raise or lower the pitch:
```
python inference_main.py -m "models/AgnesTachyon/AgnesTachyon.pth" -c "models/AgnesTachyon/config.json" -n "xxx.wav" -t x -s "AgnesTachyon"
```
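Assuming the files in this repository are downloaded into an `AgnesTachyon` folder, the command expects roughly the following layout (a sketch: the model file names come from the command itself; the `raw/` folder is where the upstream project reads source audio from by default):

```
so-vits-svc/
├── inference_main.py
├── raw/                      # place source audio such as xxx.wav here
└── models/
    └── AgnesTachyon/
        ├── AgnesTachyon.pth
        └── config.json
```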
A shallow diffusion model, a cluster model, and a feature index model are also provided; see the [README of the so-vits-svc project](https://github.com/svc-develop-team/so-vits-svc/blob/4.1-Stable/README.md) for how to use them.
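Per the upstream documentation, the `-t` value is a transposition in semitones, so the resulting frequency ratio follows the equal-tempered scale: a shift of `t` semitones multiplies the pitch by 2^(t/12). A quick standalone sanity check (plain Python, not part of so-vits-svc):

```python
def semitone_ratio(t: float) -> float:
    """Frequency ratio produced by a pitch shift of t semitones."""
    return 2 ** (t / 12)

# +12 semitones doubles the frequency (one octave up); -12 halves it.
octave_up = semitone_ratio(12)    # 2.0
octave_down = semitone_ratio(-12) # 0.5
```

This is useful for picking `-t`: for example, converting a male vocal to Agnes Tachyon's range typically needs a positive shift.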

## Training Details

### Training Data

All of the training data was extracted from the Windows client of Uma Musume: Pretty Derby using [umamusume-voice-text-extractor](https://github.com/chinosk6/umamusume-voice-text-extractor).
The copyright of the training data belongs to Cygames.
Only character voice lines are used; live music tracks are not included in the training dataset.

### Training Procedure

#### Preprocessing

Navigate to the `so-vits-svc` directory and run `python resample.py --skip_loudnorm`.

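The resample step converts the dataset to the project's working sample rate (44.1 kHz for so-vits-svc 4.x). As a rough sketch of the equivalent operation, here is an anti-aliased polyphase resample with SciPy; this is an illustration, not the project's actual `resample.py`:

```python
from math import gcd

import numpy as np
from scipy.signal import resample_poly

def resample_audio(audio: np.ndarray, orig_sr: int, target_sr: int = 44100) -> np.ndarray:
    """Resample a mono signal to target_sr using a polyphase (anti-aliased) filter."""
    g = gcd(orig_sr, target_sr)
    return resample_poly(audio, target_sr // g, orig_sr // g)

# One second of a 440 Hz tone recorded at 48 kHz...
sr = 48000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
# ...becomes exactly 44100 samples after resampling to 44.1 kHz.
out = resample_audio(tone, sr)
```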
#### Training Hyperparameters

*See `config.json` and `diffusion.yaml` for the training hyperparameters.*

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** RTX 3090
- **Hours used:** 41.6
- **Provider:** Self-hosted (private hardware)
- **Compute Region:** Mainland China
- **Carbon Emitted:** ~16.02 kg CO2eq
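The calculator's estimate boils down to energy used times grid carbon intensity. The power draw and intensity values below are assumptions for illustration only, not the exact inputs behind the figure above:

```python
def co2_kg(power_kw: float, hours: float, intensity_kg_per_kwh: float) -> float:
    """Emissions estimate: energy used (kWh) times grid carbon intensity (kg CO2eq/kWh)."""
    return power_kw * hours * intensity_kg_per_kwh

# Illustration with assumed inputs: an RTX 3090's 350 W TDP over the
# 41.6 hours reported, at a hypothetical 0.6 kg CO2eq/kWh grid intensity.
estimate = co2_kg(0.35, 41.6, 0.6)  # about 8.7 kg
# The reported ~16.02 kg implies higher assumed system power and/or intensity.
```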