RCV/i18n/locale/en_US.json
github-actions[bot] 47a3882b3a
🎨 Sync locale (#1117)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-08-30 00:02:34 +08:00

{
">=3则使用对harvest音高识别的结果使用中值滤波数值为滤波半径使用可以削弱哑音": "If >=3: apply median filtering to the harvested pitch results. The value represents the filter radius and can reduce breathiness.",
"A模型权重": "Weight (w) for Model A:",
"A模型路径": "Path to Model A:",
"B模型路径": "Path to Model B:",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0 curve file (optional). One pitch per line. Replaces the default F0 and pitch modulation:",
"Index Rate": "Index Rate",
"Onnx导出": "Export Onnx",
"Onnx输出路径": "Onnx Export Path:",
"RVC模型路径": "RVC Model Path:",
"ckpt处理": "ckpt Processing",
"harvest进程数": "Number of CPU processes used for harvest pitch algorithm",
"index文件路径不可包含中文": "index文件路径不可包含中文",
"pth文件路径不可包含中文": "pth文件路径不可包含中文",
"rmvpe卡号配置以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程": "Enter the GPU index(es) separated by '-', e.g., 0-0-1 to use 2 processes in GPU0 and 1 process in GPU1",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "Step 1: Fill in the experimental configuration. Experimental data is stored in the 'logs' folder, with each experiment having a separate folder. Manually enter the experiment name path, which contains the experimental configuration, logs, and trained model files.",
"step1:正在处理数据": "Step 1: Processing data",
"step2:正在提取音高&正在提取特征": "step2:Pitch extraction & feature extraction",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "Step 2a: Automatically traverse all files in the training folder that can be decoded into audio and perform slice normalization. Generates 2 wav folders in the experiment directory. Currently, only single-singer/speaker training is supported.",
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "Step 2b: Use CPU to extract pitch (if the model has pitch), use GPU to extract features (select GPU index):",
"step3: , ": "Step 3: Fill in the training settings and start training the model and index",
"step3a:": "Step 3a: Model training started",
"": "One-click training",
", , ": ", , ",
" 使UVR5 <br> E:\\codes\\py39\\vits_vc_gpu\\() <br> <br>1HP5HP2HP3HP3HP2 <br>2HP5 <br> 3by FoxJoy<br>(1)MDX-Net(onnx_dereverb):<br>&emsp;(234)DeEcho:AggressiveNormalDeReverb<br>/<br>1DeEcho-DeReverb2DeEcho2<br>2MDX-Net-Dereverb<br>3MDX-NetDeEcho-Aggressive": "Batch processing for vocal accompaniment separation using the UVR5 model.<br>Example of a valid folder path format: D:\\path\\to\\input\\folder (copy it from the file manager address bar).<br>The model is divided into three categories:<br>1. Preserve vocals: Choose this option for audio without harmonies. It preserves vocals better than HP5. It includes two built-in models: HP2 and HP3. HP3 may slightly leak accompaniment but preserves vocals slightly better than HP2.<br>2. Preserve main vocals only: Choose this option for audio with harmonies. It may weaken the main vocals. It includes one built-in model: HP5.<br>3. De-reverb and de-delay models (by FoxJoy):<br>(1) MDX-Net: The best choice for stereo reverb removal but cannot remove mono reverb;<br>&emsp;(234) DeEcho: Removes delay effects. Aggressive mode removes more thoroughly than Normal mode. DeReverb additionally removes reverb and can remove mono reverb, but not very effectively for heavily reverberated high-frequency content.<br>De-reverb/de-delay notes:<br>1. The processing time for the DeEcho-DeReverb model is approximately twice as long as the other two DeEcho models.<br>2. The MDX-Net-Dereverb model is quite slow.<br>3. The recommended cleanest configuration is to apply MDX-Net first and then DeEcho-Aggressive.",
"-使, 0-1-2 使012": "Enter the GPU index(es) separated by '-', e.g., 0-1-2 to use GPU 0, 1, and 2:",
"&&": "Vocals/Accompaniment Separation & Reverberation Removal",
"": "Save name:",
", ": "Save file name (default: same as the source file):",
"": "Saved model name (without extension):",
"save_every_epoch": "Save frequency (save_every_epoch):",
"artifact0.5": "Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. Decrease the value to increase protection, but it may reduce indexing accuracy:",
"": "Modify",
"(weights)": "Modify model information (only supported for small model files extracted from the 'weights' folder)",
"": "Stop audio conversion",
"": "All processes have been completed!",
"": "Refresh voice list and index path",
"": "Load model",
"D": "Load pre-trained base model D path:",
"G": "Load pre-trained base model G path:",
"": "Unload voice to save GPU memory:",
"(, , 12-12)": "Transpose (integer, number of semitones, raise by an octave: 12, lower by an octave: -12):",
"0": "Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling:",
"": "No",
"": "Response threshold",
"": "Process data",
"Onnx": "Export Onnx Model",
"": "Export file format",
"": "FAQ (Frequently Asked Questions)",
"": "General settings",
"": "Start audio conversion",
"": "Unfortunately, there is no compatible GPU available to support your training.",
"": "Performance settings",
"total_epoch": "Total training epochs (total_epoch):",
", , , (opt). ": "Batch conversion. Enter the folder containing the audio files to be converted or upload multiple audio files. The converted audio will be output in the specified folder (default: 'opt').",
"": "Specify the output folder for vocals:",
"": "Specify output folder:",
"": "Specify the output folder for accompaniment:",
"(ms):": "Inference time (ms):",
"": "Inferencing voice:",
"": "Extract",
"使CPU": "Number of CPU processes used for pitch extraction and data processing:",
"": "Yes",
"ckpt": "Save only the latest '.ckpt' file to save disk space:",
"weights": "Save a small final model to the 'weights' folder at each save point:",
". 10min, ": "Cache all training sets to GPU memory. Caching small datasets (less than 10 minutes) can speed up training, but caching large datasets will consume a lot of GPU memory and may not provide much speed improvement:",
"": "GPU Information",
"MIT, , 使. <br>, 使. <b>LICENSE</b>.": "This software is open source under the MIT license. The author does not have any control over the software. Users who use the software and distribute the sounds exported by the software are solely responsible. <br>If you do not agree with this clause, you cannot use or reference any codes and files within the software package. See the root directory <b>Agreement-LICENSE.txt</b> for details.",
"": "View",
"(weights)": "View model information (only supported for small model files extracted from the 'weights' folder)",
"": "Search feature ratio (controls accent strength, too high has artifacting):",
"": "Model",
"": "Model Inference",
"(logs),,": "Model extraction (enter the path of the large file model under the 'logs' folder). This is useful if you want to stop training halfway and manually extract and save a small model file, or if you want to test an intermediate model:",
"": "Whether the model has pitch guidance:",
"(, )": "Whether the model has pitch guidance (required for singing, optional for speech):",
",10": "Whether the model has pitch guidance (1: yes, 0: no):",
"": "Model architecture version:",
", ": "Model fusion, can be used to test timbre fusion",
"": "Path to Model:",
"batch_size": "Batch size per GPU:",
"": "Fade length",
"": "Version",
"": "Feature extraction",
",使": "Path to the feature index file. Leave blank to use the selected result from the dropdown:",
"+12key, -12key, . ": "Recommended +12 key for male to female conversion, and -12 key for female to male conversion. If the sound range goes too far and the voice is distorted, you can also adjust it to the appropriate range by yourself.",
"": "Target sample rate:",
"index,(dropdown)": "Auto-detect index path and select from the dropdown:",
"": "Fusion",
"": "Model information to be modified:",
"": "Model information to be placed:",
"": "Train",
"": "Train model",
"": "Train feature index",
", train.log": "Training complete. You can check the training logs in the console or the 'train.log' file under the experiment folder.",
"id": "Please specify the speaker/singer ID:",
"index": "index",
"pth": "pth",
"id": "Select Speaker/Singer ID:",
"": "Convert",
"": "Enter the experiment name:",
"": "Enter the path of the audio folder to be processed:",
"()": "Enter the path of the audio folder to be processed (copy it from the address bar of the file manager):",
"()": "Enter the path of the audio file to be processed (default is the correct format example):",
"1使": "Adjust the volume envelope scaling. Closer to 0, the more it mimicks the volume of the original vocals. Can help mask noise and make volume sound more natural when set relatively low. Closer to 1 will be more of a consistently loud volume:",
"": "Enter the path of the training folder:",
"": "Input device",
"": "Input noise reduction",
"": "Output information",
"": "Output device",
"": "Output noise reduction",
"(,)": "Export audio (click on the three dots in the lower right corner to download)",
".index": "Select the .index file",
".pth": "Select the .pth file",
",pm,harvest,crepeGPU": ",pm,harvest,crepeGPU",
",pm,harvest,crepeGPU,rmvpeGPU": "Select the pitch extraction algorithm ('pm': faster extraction but lower-quality speech; 'harvest': better bass but extremely slow; 'crepe': better quality but GPU intensive), 'rmvpe': best quality, and little GPU requirement",
":pm,CPUdio,harvest,rmvpeCPU/GPU": ":pm,CPUdio,harvest,rmvpeCPU/GPU",
"": "Sample length",
"": "Reload device list",
"": "Pitch settings",
"(使)": "Audio device (please use the same type of driver)",
"": "pitch detection algorithm",
"": "Extra inference time"
}