diff --git a/Conda/01 - Miniconda install instructions.md b/Conda/01 - Miniconda install instructions.md
index f574eef..45682fb 100644
--- a/Conda/01 - Miniconda install instructions.md
+++ b/Conda/01 - Miniconda install instructions.md
@@ -43,6 +43,6 @@ conda update conda
That's it! You have successfully installed Miniconda on your system.
-Now when you open up a terminal you should see a prompt and (base) to indicate no conda enviroment is active.
+Now when you open up a terminal you should see a prompt and (base) to indicate no conda environment is active.

diff --git a/README.md b/README.md
index 1f9ecc6..079fcbc 100644
--- a/README.md
+++ b/README.md
@@ -127,7 +127,7 @@ Follow these sparkly steps to get your TensorRT ready for action! 🛠️✨
You can use one of the .engine models we supply. But if it doesn't work, then you will need to re-export it. Grab the `.pt` file here for the model you want. We recommend `yolov5s.pt` or `yolov5m.pt` [HERE 🔗](https://github.com/ultralytics/yolov5/releases/tag/v7.0).
12. **Run the Export Script** 🏃♂️💻
- Time to execute `export.py` with the following command. Patience is key; it might look frozen, but it's just concentrating hard! Can take up to 20 mintues.
+ Time to execute `export.py` with the following command. Patience is key; it might look frozen, but it's just concentrating hard! Can take up to 20 minutes.
```
python .\export.py --weights ./yolov5s.pt --include engine --half --imgsz 320 320 --device 0
diff --git a/customScripts/AimAssist/readme.md b/customScripts/AimAssist/readme.md
index e075ed1..ffa97c1 100644
--- a/customScripts/AimAssist/readme.md
+++ b/customScripts/AimAssist/readme.md
@@ -1,7 +1,7 @@
# Performance optimizations
This version aims to achieve the best performance possible on AMD hardware.
-To achieve this, the script acts more as an aim assist insted of a full fledged aimbot.
+To achieve this, the script acts more as an aim assist instead of a full fledged aimbot.
The user will still need to do most of the aiming.
Changes that have been made:
diff --git a/ultralytics1/utils/augmentations.py b/ultralytics1/utils/augmentations.py
index 7ab75f1..4d49513 100644
--- a/ultralytics1/utils/augmentations.py
+++ b/ultralytics1/utils/augmentations.py
@@ -330,7 +330,7 @@ def classify_albumentations(
if vflip > 0:
T += [A.VerticalFlip(p=vflip)]
if jitter > 0:
- color_jitter = (float(jitter),) * 3 # repeat value for brightness, contrast, satuaration, 0 hue
+ color_jitter = (float(jitter),) * 3 # repeat value for brightness, contrast, saturation, 0 hue
T += [A.ColorJitter(*color_jitter, 0)]
else: # Use fixed crop for eval set (reproducibility)
T = [A.SmallestMaxSize(max_size=size), A.CenterCrop(height=size, width=size)]
diff --git a/ultralytics1/utils/loggers/wandb/README.md b/ultralytics1/utils/loggers/wandb/README.md
index d78324b..b966a35 100644
--- a/ultralytics1/utils/loggers/wandb/README.md
+++ b/ultralytics1/utils/loggers/wandb/README.md
@@ -68,7 +68,7 @@ Run information streams from your environment to the W&B cloud console as you tr
You can leverage W&B artifacts and Tables integration to easily visualize and manage your datasets, models and training evaluations. Here are some quick examples to get you started.
- 1: Train and Log Evaluation simultaneousy
+ 1: Train and Log Evaluation simultaneously
This is an extension of the previous section, but it'll also train after uploading the dataset. It also logs an Evaluation Table.
Evaluation table compares your predictions and ground truths across the validation set for each epoch. It uses the references to the already uploaded datasets,
so no images will be uploaded from your system more than once.
@@ -102,7 +102,7 @@ You can leverage W&B artifacts and Tables integration to easily visualize and ma