Compare commits


43 Commits
v3.0.0 ... main

Author SHA1 Message Date
Ethan Snow
3af624949a
Merge pull request #294 from KernFerm/patch-1
Update README.md
2024-12-31 23:19:31 -05:00
Bubbles The Dev
144405b471
Update README.md 2024-12-31 22:50:17 -05:00
Voxlinou
f1c52c562a
Update README.md (#196)
Fixed clear error from
- Work solo or alone
to
- Work alone or with friends
2024-09-29 21:47:39 -04:00
Takumi Muraishi
fccb5de13c
fix some typos (#182) 2024-05-03 13:48:06 -07:00
Elijah Harmon
0768cf8139
Update README.md 2024-04-22 10:40:33 -07:00
Elijah Harmon
29c2091384
Update README.md 2024-02-03 13:16:42 -08:00
Elijah Harmon
84101933c2
Update README.md 2024-02-03 13:16:09 -08:00
Elijah Harmon
90c31024e4
Update README.md 2024-01-18 22:05:15 -08:00
Elijah Harmon
c199da0824
Update README.md 2024-01-08 09:01:49 -08:00
Villageslayer
3033919f3b
My Custom scripts added (#153)
* Add files via upload

* Add files via upload

* Update readme.md
2024-01-06 19:07:41 -05:00
Elijah Harmon
7457e51875
Update README.md 2024-01-04 10:42:53 -08:00
Elijah Harmon
c6a2ab350b
Update README.md 2024-01-01 09:09:19 -07:00
Elijah Harmon
34638cde48
Update README.md 2023-12-31 08:08:03 -07:00
Elijah Harmon
77d8669576
Update README.md 2023-12-30 09:11:08 -07:00
Elijah Harmon
72502fb6b2
Update README.md (#146) 2023-12-28 15:14:27 -07:00
Elijah Harmon
34f458fb44
Update README.md 2023-12-25 20:26:52 -08:00
Elijah Harmon
ebde7b64bf
Update README.md 2023-12-24 10:40:15 -08:00
Elijah Harmon
84293a5a6e
Update README.md 2023-12-23 16:12:02 -08:00
Elijah Harmon
ddc6bd19e6
Update README.md 2023-12-20 01:37:24 -08:00
Elijah Harmon
157a3ad673
Update README.md 2023-12-18 23:34:19 -08:00
Elijah Harmon
8cf4e8c424
Update README.md 2023-12-18 23:33:47 -08:00
JinxTheCat
333a015acf
Add support for changing mask side (#141)
* Add mask config options

* Mask side support

* Mask side support

* Mask side support

* Update main.py

* Add maskSide support

* Add maskSide support

* revert

* double revert :(
2023-12-14 21:21:18 -08:00
afy
e51fda35eb
Create afy_raspberry_pi_pico_w_tensorrt.py (#138)
* Create afy_raspberry_pi_pico_w_tensorrt.py

* Update afy_raspberry_pi_pico_w_tensorrt.py
2023-12-11 00:19:33 -05:00
Elijah Harmon
25a411b676
Update README.md 2023-12-10 18:00:01 -05:00
Elijah Harmon
be14306b3f
Update README.md (#137) 2023-12-09 18:51:31 -08:00
Charlie Mac
451dcffd31
Custom rust tensorrt model and weights (#132)
* rust custom model

* new model engine and weights

* Conda installation
2023-12-06 19:10:02 -08:00
Elijah Harmon
73e8ba921d Merge branch 'main' of https://github.com/RootKit-Org/AI-Aimbot 2023-12-05 13:38:22 -08:00
Elijah Harmon
425a3d7f5d changed typing for better support 2023-12-05 13:38:14 -08:00
Elijah Harmon
9f8e6dc25a
Update README.md (#136) 2023-12-03 20:20:43 -05:00
Elijah Harmon
88bca0c5d4
Update README.md (#135) 2023-12-03 13:55:52 -08:00
Elijah Harmon
dc5552d069
Update README.md (#134) 2023-12-03 13:54:23 -08:00
Elijah Harmon
e34f5ec81a launcher release 2023-12-02 02:33:47 -08:00
Elijah Harmon
f0a8602e92 turned off visuals by default 2023-12-01 15:32:27 -08:00
Elijah Harmon
0c20f3a5bd readded trollatko model 2023-11-30 22:56:39 -08:00
Elijah Harmon
c5618aa4ab fixed tensor context is none issue 2023-11-29 23:08:02 -08:00
Elijah Harmon
b605e59718 moved ad to top 2023-11-28 22:24:50 -08:00
Elijah Harmon
d32f66cd86 launcher 2023-11-28 10:35:37 -08:00
Elijah Harmon
b91dd1cbc2 movementamp value returned 2023-11-26 17:06:20 -08:00
Elijah Harmon
eb15b7bd44 readme updated for mask 2023-11-26 16:58:53 -08:00
Elijah Harmon
a43e16432c masking update 2023-11-26 16:31:45 -08:00
Elijah Harmon
8429fe0441 bettercam is now a package 2023-11-22 00:29:20 -08:00
Elijah Harmon
459dece619 wording 2023-11-16 09:37:34 -08:00
Elijah Harmon
9583a6b58a won't use cp unless using nvidia 2023-11-15 08:18:34 -08:00
148 changed files with 2039 additions and 386 deletions

4
.gitignore vendored
View File

@ -17,3 +17,7 @@ utils/__pycache__
models/__pycache__
venv
*.old
elite
!Robloxmodel.pt
!rust.pt
!rust.engine

24
Conda/00 - Conda.md Normal file
View File

@ -0,0 +1,24 @@
# Conda: A Package and Environment Manager
## What is Conda?
Conda is an open-source package management system and environment management system. It was created for Python programs, but it can package and distribute software for any language.
## Key Features of Conda
1. **Package Management**: Conda helps you manage and keep track of packages in your projects. It can install packages from the Conda package repository and other sources.
2. **Environment Management**: Conda allows you to create separate environments containing files, packages, and their dependencies that will not interfere with each other. This can be extremely useful when working on projects with different requirements.
3. **Cross-Platform**: Conda is a cross-platform tool, which means it works on Windows, macOS, and Linux.
4. **Language Agnostic**: Originally, Conda was created for Python. Now, it can handle packages from any language, which is a big advantage over pip, which is Python-specific.
## Benefits of Using Conda
- **Simplicity**: Conda simplifies package management and deployment.
- **Reproducibility**: Conda allows you to share your environments with others, which helps in reproducing research.
- **Isolation**: With Conda, you can easily create isolated environments to separate different projects.
- **Wide Package Support**: Conda supports a wide array of packages, and it's not limited to Python.
In conclusion, Conda is a powerful tool for managing packages and environments, making it easier to manage projects and their dependencies.
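As a quick illustration of the workflow described above, a minimal Conda session might look like this (the environment and package names below are placeholders, not part of this repo):
```bash
# Create an isolated environment, activate it, and install a package into it
conda create --name demo python=3.11
conda activate demo
conda install numpy

# Capture the environment so others can reproduce it
conda env export > environment.yml
```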

View File

@ -0,0 +1,48 @@
# Installing Miniconda
Follow these steps to install Miniconda on your system:
## Step 1: Download Miniconda
First, download the appropriate Miniconda installer for your system from the [official Miniconda website](https://docs.conda.io/en/latest/miniconda.html).
## Step 2: Run the Installer
- **Windows**: Double click the `.exe` file and follow the instructions.
- **macOS and Linux**: Open a terminal, navigate to the directory where you downloaded the installer, and run the following command:
```bash
bash Miniconda3-latest-Linux-x86_64.sh
```
Replace `Miniconda3-latest-Linux-x86_64.sh` with the name of the file you downloaded.
## Step 3: Follow the Prompts
The installer will prompt you to review the license agreement, choose the install location, and optionally allow the installer to initialize Miniconda3 by appending it to your `PATH`.
## Step 4: Verify the Installation
To verify that the installation was successful, open a new terminal window and type:
```bash
conda list
```
If Miniconda has been installed and added to your `PATH`, this should display a list of installed packages.
## Step 5: Update Conda to the Latest Version
It's a good practice to make sure you're running the latest version of Conda. You can update it by running:
```bash
conda update conda
```
That's it! You have successfully installed Miniconda on your system.
Now when you open a terminal, you should see `(base)` in the prompt, indicating that the default `base` environment (and no project-specific environment) is active.
![Your console](imgs/console.jpg)

View File

@ -0,0 +1,41 @@
# Creating a New Conda Environment with Python 3.11
Follow these steps to create a new Conda environment with Python 3.11:
## Step 1: Open a Terminal
Open a terminal window. This could be Git Bash, Terminal on macOS, or Command Prompt on Windows.
## Step 2: Create a New Conda Environment
To create a new Conda environment with Python 3.11, use the following command:
```bash
conda create --name RootKit python=3.11
```
In this command, `RootKit` is the name of the new environment, and `python=3.11` specifies that we want Python 3.11 in this environment.
## Step 3: Activate the New Environment
After creating the new environment, you need to activate it using the following command:
```bash
conda activate RootKit
```
Now, `RootKit` is your active environment.
## Step 4: Verify Python Version
To verify that the correct version of Python is installed in your new environment, use the following command:
```bash
python --version
```
This should return `Python 3.11.x`.
That's it! You have successfully created a new Conda environment with Python 3.11.

View File

@ -0,0 +1,46 @@
# Cloning a GitHub Repository
Cloning a GitHub repository creates a local copy of the remote repo. This allows you to save all files from the repository on your local computer. Here's how you can do it:
## Step 1: Copy the Repository URL
Navigate to the main page of the repository on GitHub and click the "Code" button. Then click the "copy to clipboard" button to copy the repository URL.
## Step 2: Open a Terminal
Open a terminal window on your computer. If you're using Windows, you can use Git Bash or Command Prompt. On macOS, you can use the Terminal app.
## Step 3: Navigate to the Directory
Navigate to the directory where you want to clone the repository using the `cd` (change directory) command. For example:
```bash
cd /path/to/your/directory
```
## Step 4: Clone the Repository
Now, run the `git clone` command followed by the URL of the repository that you copied in step 1:
```bash
git clone https://github.com/RootKit-Org/AI-Aimbot.git
```
Replace `https://github.com/RootKit-Org/AI-Aimbot.git` with the URL you copied.
## Step 5: Verify the Cloning Process
Navigate into the cloned repository and list its files to verify that the cloning process was successful:
```bash
cd AI-Aimbot
ls
```
Replace `AI-Aimbot` with the name of your repository if you called it something else. The `ls` command will list all the files in the directory.
That's it! You have successfully cloned a GitHub repository to your local machine.
Because you cloned the repo, you can pull in any later changes with `git pull`, as shown below.
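For example, updating your local copy later is just:
```bash
cd AI-Aimbot
git pull
```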

View File

@ -0,0 +1,39 @@
# Installing Requirements
Follow these steps to install all the requirements to your system:
## Step 1: Activate your environment:
## Step 2: Only if you have an NVIDIA graphics card - Download and Install CUDA:
Nvidia CUDA Toolkit 11.8 [DOWNLOAD HERE](https://developer.nvidia.com/cuda-11-8-0-download-archive)
## Step 3: Install PyTorch:
- For NVIDIA GPU: `pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113`
- For AMD or CPU only: `pip install torch torchvision torchaudio`
## Step 4: Install requirements.txt:
`pip install -r requirements.txt`
## Step 5: Install additional modules:
Because you are using Conda, you need to install additional requirements in your environment.
`pip install -r Conda/additionalRequirements.txt`
## Step 6: Test your installation:
To test your installation, run the following command:
`python main.py`
You should now have a working AI AIMBOT. If you want to use the fastest version, continue with the installation steps in the RootKit AI-Aimbot README.md.
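Taken together, the steps above amount to roughly the following sequence (a sketch assuming an NVIDIA GPU, CUDA 11.8 already installed, and the `RootKit` environment from the earlier guide):
```bash
conda activate RootKit
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt
pip install -r Conda/additionalRequirements.txt
python main.py
```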

View File

@ -0,0 +1,20 @@
argilla
bettercam
datasets
fastapi
langchainplus-sdk
langsmith
markdownlit
onnx
onnxruntime
opencv-python
panel
pygetwindow
sentence-transformers
streamlit
streamlit-camera-input-live
streamlit-extras
streamlit-faker
streamlit-image-coordinates
streamlit-keyup
transformers

View File

@ -2,8 +2,12 @@
![World's Best AI Aimbot Banner](imgs/banner.png)
[![Pull Requests Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat)](http://makeapullrequest.com)
[![Pull Requests Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat)](https://makeapullrequest.com)
Want to make your own bot? Then use the [Starter Code Pack](https://github.com/RootKit-Org/AI-Aimbot-Starter-Code)!
--
--
## 🙌 Welcome Aboard!
We're a charity on a mission to educate and certify the upcoming wave of developers in the world of Computer Engineering 🌍. Need assistance? Hop into our [Discord](https://discord.gg/rootkitorg) and toss your questions at `@Wonder` in the *#ai-aimbot channel* (be sure to stick to this channel or face the consequences! 😬). Type away your query and include `@Wonder` in there.
@ -35,7 +39,7 @@ Intended for educational use 🎓, our aim is to highlight the vulnerability of
- 🛑 Is it a `pip is not recognized...` error? [WATCH THIS!](https://youtu.be/zWYvRS7DtOg)
3. Fire up `PowerShell` or `Command Prompt` on Windows 🔍.
4. To install `PyTorch`, select the appropriate command based on your GPU.
- Nvidia `pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118`
- Nvidia `pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118`
- AMD or CPU `pip install torch torchvision torchaudio`
5. 📦 Run the command below to install the required Open Source packages:
```
@ -85,30 +89,33 @@ Follow these sparkly steps to get your TensorRT ready for action! 🛠️✨
5. **CUDNN Installation** 🧩
Click to install [CUDNN 📥](https://developer.nvidia.com/downloads/compute/cudnn/secure/8.9.6/local_installers/11.x/cudnn-windows-x86_64-8.9.6.50_cuda11-archive.zip/). You'll need an Nvidia account to proceed. Don't worry, it's free.
6. **Get TensorRT 8.6 GA** 🔽
6. **Unzip and Relocate** 📁➡️
Open the .zip CuDNN file and move all the folders/files to where the CUDA Toolkit is on your machine, usually at `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8`.
7. **Get TensorRT 8.6 GA** 🔽
Fetch [`TensorRT 8.6 GA 🛒`](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/secure/8.6.1/zip/TensorRT-8.6.1.6.Windows10.x86_64.cuda-11.8.zip).
7. **Unzip and Relocate** 📁➡️
8. **Unzip and Relocate** 📁➡️
Open the .zip TensorRT file and move all the folders/files to where the CUDA Toolkit is on your machine, usually at `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8`.
8. **Python TensorRT Installation** 🎡
9. **Python TensorRT Installation** 🎡
Once you have all the files copied over, you should have a folder at `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\python`. If you do, good, then run the following command to install TensorRT in python.
```
pip install C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\python\tensorrt-8.6.1-cp311-none-win_amd64.whl
pip install "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\python\tensorrt-8.6.1-cp311-none-win_amd64.whl"
```
🚨 If the previous step didn't work, don't stress out! 😅 The labeling of the files corresponds with the Python version you have installed on your machine. We're not looking for the 'lean' or 'dispatch' versions. 🔍 Just locate the correct file and replace the path with your new one. 🔄 You've got this! 💪
9. **Set Your Environmental Variables** 🌎
10. **Set Your Environmental Variables** 🌎
Add these paths to your environment:
- `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\lib`
- `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\libnvvp`
- `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin`
10. **Download Pre-trained Models** 🤖
11. **Download Pre-trained Models** 🤖
You can use one of the .engine models we supply. But if it doesn't work, then you will need to re-export it. Grab the `.pt` file here for the model you want. We recommend `yolov5s.pt` or `yolov5m.pt` [HERE 🔗](https://github.com/ultralytics/yolov5/releases/tag/v7.0).
11. **Run the Export Script** 🏃‍♂️💻
Time to execute `export.py` with the following command. Patience is key; it might look frozen, but it's just concentrating hard! Can take up to 20 mintues.
12. **Run the Export Script** 🏃‍♂️💻
Time to execute `export.py` with the following command. Patience is key; it might look frozen, but it's just concentrating hard! Can take up to 20 minutes.
```
python .\export.py --weights ./yolov5s.pt --include engine --half --imgsz 320 320 --device 0
@ -124,7 +131,11 @@ If you've followed these steps, you should be all set with TensorRT! ⚙️🚀
*Default settings are generally great for most scenarios. Check out the comments in the code for more insights. 🔍 The configuration settings are now located in the `config.py` file!<br>
**CAPS_LOCK is the default for flipping the switch on the autoaim superpower! ⚙️ 🎯**
`aaRightShift` - Might need a tweak in 3rd person games like Fortnite and New World. 🎮 Typically, a setting of `100` or `150` should hit the mark. 🎯👌
`useMask` - Set to `True` or `False` to turn on and off 🎭
`maskWidth` - The width of the mask to use. Only used when `useMask` is `True` 📐
`maskHeight` - The height of the mask to use. Only used when `useMask` is `True` 📐
`aaQuitKey` - The go-to key is `q`, but if it clashes with your game style, swap it out! ⌨️♻️
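For context, these mask settings are applied to the captured frame roughly as follows (a sketch based on the masking code added to `main.py` later in this diff; `apply_mask` is just an illustrative name, not a function in the repo):
```python
import numpy as np

from config import useMask, maskSide, maskWidth, maskHeight

def apply_mask(frame: np.ndarray) -> np.ndarray:
    # Zero out a maskWidth x maskHeight block at the bottom of the frame so
    # your own player model (or a large weapon) is not detected as a target.
    if useMask:
        if maskSide.lower() == "right":
            frame[-maskHeight:, -maskWidth:, :] = 0
        elif maskSide.lower() == "left":
            frame[-maskHeight:, :maskWidth, :] = 0
        else:
            raise ValueError('maskSide must be "left" or "right"')
    return frame
```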
@ -174,7 +185,7 @@ Show off your work or new models via Pull Requests in `customScripts` or `custom
## 🌠 Future Ideas
- [ ] Mask Player to avoid false positives
- [x] Mask Player to avoid false positives
Happy Coding and Aiming! 🎉👾

View File

@ -2,9 +2,11 @@
screenShotHeight = 320
screenShotWidth = 320
# For use in games that are 3rd person and character model interferes with the autoaim
# EXAMPLE: Fortnite and New World
aaRightShift = 0
# Use "left" or "right" for the mask side depending on where the interfering object is; useful for 3rd person player models or large guns
useMask = False
maskSide = "left"
maskWidth = 80
maskHeight = 200
# Autoaim mouse movement amplifier
aaMovementAmp = .4
@ -31,4 +33,4 @@ centerOfScreen = True
# 1 - CPU
# 2 - AMD
# 3 - NVIDIA
onnxChoice = 3
onnxChoice = 1
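For reference, the ONNX variant added in this diff uses `onnxChoice` to select the ONNX Runtime execution provider, roughly like this (the model path below is a placeholder):
```python
import onnxruntime as ort

from config import onnxChoice

# 1 - CPU, 2 - AMD (DirectML), 3 - NVIDIA (CUDA)
providers = {1: "CPUExecutionProvider", 2: "DmlExecutionProvider", 3: "CUDAExecutionProvider"}

so = ort.SessionOptions()
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
ort_sess = ort.InferenceSession("your_model.onnx", sess_options=so,
                                providers=[providers[onnxChoice]])
```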

View File

@ -0,0 +1,14 @@
# Explain your model
Rust dataset. 6k images - 10/10/80 split. Included weights file - best.pt
Tell the community about your model
- What data was it trained on?
- Rust Images
- How much data was it trained on?
- 6k Images
- How many models do you have?
- 1
- Are they for pytorch, onnx, tensorrt, something else?
- tensorrt
- Any set up info

Binary file not shown.

BIN
customModels/rust/rust.pt Normal file

Binary file not shown.

Binary file not shown.

After

Width:  |  Height:  |  Size: 238 KiB

Binary file not shown.

View File

@ -1,7 +1,7 @@
# Performance optimizations
This version aims to achieve the best performance possible on AMD hardware.
To achieve this, the script acts more as an aim assist insted of a full fledged aimbot.
To achieve this, the script acts more as an aim assist instead of a full fledged aimbot.
The user will still need to do most of the aiming.
Changes that have been made:

View File

@ -0,0 +1,180 @@
import torch
import numpy as np
import cv2
import time
import win32api
import win32con
import pandas as pd
import gc
from utils.general import (cv2, non_max_suppression, xyxy2xywh)
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, useMask, maskWidth, maskHeight, aaQuitKey, screenShotHeight, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen
import gameSelection
def main():
# External Function for running the game selection menu (gameSelection.py)
camera, cWidth, cHeight = gameSelection.gameSelection()
# Used for forcing garbage collection
count = 0
sTime = time.time()
# Loading Yolo5 Small AI Model, for better results use yolov5m or yolov5l
model = torch.hub.load('ultralytics/yolov5', 'yolov5s',
pretrained=True, force_reload=True)
stride, names, pt = model.stride, model.names, model.pt
if torch.cuda.is_available():
model.half()
# Used for colors drawn on bounding boxes
COLORS = np.random.uniform(0, 255, size=(1500, 3))
# Main loop Quit if Q is pressed
last_mid_coord = None
with torch.no_grad():
while win32api.GetAsyncKeyState(ord(aaQuitKey)) == 0:
# Getting Frame
npImg = np.array(camera.get_latest_frame())
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[-maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[-maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
# Normalizing Data
im = torch.from_numpy(npImg)
if im.shape[2] == 4:
# If the image has an alpha channel, remove it
im = im[:, :, :3,]
im = torch.movedim(im, 2, 0)
if torch.cuda.is_available():
im = im.half()
im /= 255
if len(im.shape) == 3:
im = im[None]
# Detecting all the objects
results = model(im, size=screenShotHeight)
# Suppressing results that don't meet thresholds
pred = non_max_suppression(
results, confidence, confidence, 0, False, max_det=1000)
# Converting output to usable coords
targets = []
for i, det in enumerate(pred):
s = ""
gn = torch.tensor(im.shape)[[0, 0, 0, 0]]
if len(det):
for c in det[:, -1].unique():
n = (det[:, -1] == c).sum() # detections per class
s += f"{n} {names[int(c)]}, " # add to string
for *xyxy, conf, cls in reversed(det):
targets.append((xyxy2xywh(torch.tensor(xyxy).view(
1, 4)) / gn).view(-1).tolist() + [float(conf)]) # normalized xywh
targets = pd.DataFrame(
targets, columns=['current_mid_x', 'current_mid_y', 'width', "height", "confidence"])
center_screen = [cWidth, cHeight]
# If there are people in the center bounding box
if len(targets) > 0:
if (centerOfScreen):
# Compute the distance from the center
targets["dist_from_center"] = np.sqrt((targets.current_mid_x - center_screen[0])**2 + (targets.current_mid_y - center_screen[1])**2)
# Sort the data frame by distance from center
targets = targets.sort_values("dist_from_center")
# Get the last persons mid coordinate if it exists
if last_mid_coord:
targets['last_mid_x'] = last_mid_coord[0]
targets['last_mid_y'] = last_mid_coord[1]
# Take distance between current person mid coordinate and last person mid coordinate
targets['dist'] = np.linalg.norm(
targets.iloc[:, [0, 1]].values - targets.iloc[:, [4, 5]], axis=1)
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
if headshot_mode:
headshot_offset = box_height * 0.38
else:
headshot_offset = box_height * 0.2
mouseMove = [xMid - cWidth, (yMid - headshot_offset) - cHeight]
# Moving the mouse: 0x14 is VK_CAPITAL, so auto-aim is only applied while Caps Lock is toggled on
if win32api.GetKeyState(0x14):
win32api.mouse_event(win32con.MOUSEEVENTF_MOVE, int(
mouseMove[0] * aaMovementAmp), int(mouseMove[1] * aaMovementAmp), 0, 0)
last_mid_coord = [xMid, yMid]
else:
last_mid_coord = None
# See what the bot sees
if visuals:
# Loops over every item identified and draws a bounding box
for i in range(0, len(targets)):
halfW = round(targets["width"][i] / 2)
halfH = round(targets["height"][i] / 2)
midX = targets['current_mid_x'][i]
midY = targets['current_mid_y'][i]
(startX, startY, endX, endY) = int(
midX + halfW), int(midY + halfH), int(midX - halfW), int(midY - halfH)
idx = 0
# draw the bounding box and label on the frame
label = "{}: {:.2f}%".format(
"Human", targets["confidence"][i] * 100)
cv2.rectangle(npImg, (startX, startY), (endX, endY),
COLORS[idx], 2)
y = startY - 15 if startY - 15 > 15 else startY + 15
cv2.putText(npImg, label, (startX, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# Forced garbage cleanup every second
count += 1
if (time.time() - sTime) > 1:
if cpsDisplay:
print("CPS: {}".format(count))
count = 0
sTime = time.time()
# Uncomment if you keep running into memory issues
# gc.collect(generation=0)
# See visually what the Aimbot sees
if visuals:
cv2.imshow('Live Feed', npImg)
if (cv2.waitKey(1) & 0xFF) == ord('q'):
exit()
camera.stop()
if __name__ == "__main__":
try:
main()
except Exception as e:
import traceback
traceback.print_exception(e)
print("ERROR: " + str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")

View File

@ -0,0 +1,203 @@
import onnxruntime as ort
import numpy as np
import gc
import numpy as np
import cv2
import time
import win32api
import win32con
import pandas as pd
from utils.general import (cv2, non_max_suppression, xyxy2xywh)
from mouse_driver.MouseMove import mouse_move as ghub_move
import torch
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, useMask, maskHeight, maskWidth, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, onnxChoice, centerOfScreen
import gameSelection
def main():
# External Function for running the game selection menu (gameSelection.py)
camera, cWidth, cHeight = gameSelection.gameSelection()
# Used for forcing garbage collection
count = 0
sTime = time.time()
# Choosing the correct ONNX Provider based on config.py
onnxProvider = ""
if onnxChoice == 1:
onnxProvider = "CPUExecutionProvider"
elif onnxChoice == 2:
onnxProvider = "DmlExecutionProvider"
elif onnxChoice == 3:
import cupy as cp
onnxProvider = "CUDAExecutionProvider"
so = ort.SessionOptions()
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
ort_sess = ort.InferenceSession('RRRR.onnx', sess_options=so, providers=[
onnxProvider])
# Used for colors drawn on bounding boxes
COLORS = np.random.uniform(0, 255, size=(1500, 3))
# Main loop Quit if Q is pressed
last_mid_coord = None
while win32api.GetAsyncKeyState(ord(aaQuitKey)) == 0:
# Getting Frame
npImg = np.array(camera.get_latest_frame())
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[-maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[-maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
# If Nvidia, do this
if onnxChoice == 3:
# Normalizing Data
im = torch.from_numpy(npImg).to('cuda')
if im.shape[2] == 4:
# If the image has an alpha channel, remove it
im = im[:, :, :3,]
im = torch.movedim(im, 2, 0)
im = im.half()
im /= 255
if len(im.shape) == 3:
im = im[None]
# If AMD or CPU, do this
else:
# Normalizing Data
im = np.array([npImg])
if im.shape[3] == 4:
# If the image has an alpha channel, remove it
im = im[:, :, :, :3]
im = im / 255
im = im.astype(np.half)
im = np.moveaxis(im, 3, 1)
# If Nvidia, do this
if onnxChoice == 3:
outputs = ort_sess.run(None, {'images': cp.asnumpy(im)})
# If AMD or CPU, do this
else:
outputs = ort_sess.run(None, {'images': np.array(im)})
im = torch.from_numpy(outputs[0]).to('cpu')
pred = non_max_suppression(
im, confidence, confidence, 0, False, max_det=10)
targets = []
for i, det in enumerate(pred):
s = ""
gn = torch.tensor(im.shape)[[0, 0, 0, 0]]
if len(det):
for c in det[:, -1].unique():
n = (det[:, -1] == c).sum() # detections per class
s += f"{n} {int(c)}, " # add to string
for *xyxy, conf, cls in reversed(det):
targets.append((xyxy2xywh(torch.tensor(xyxy).view(
1, 4)) / gn).view(-1).tolist() + [float(conf)]) # normalized xywh
targets = pd.DataFrame(
targets, columns=['current_mid_x', 'current_mid_y', 'width', "height", "confidence"])
center_screen = [cWidth, cHeight]
# If there are people in the center bounding box
if len(targets) > 0:
if (centerOfScreen):
# Compute the distance from the center
targets["dist_from_center"] = np.sqrt((targets.current_mid_x - center_screen[0])**2 + (targets.current_mid_y - center_screen[1])**2)
# Sort the data frame by distance from center
targets = targets.sort_values("dist_from_center")
# Get the last persons mid coordinate if it exists
if last_mid_coord:
targets['last_mid_x'] = last_mid_coord[0]
targets['last_mid_y'] = last_mid_coord[1]
# Take distance between current person mid coordinate and last person mid coordinate
targets['dist'] = np.linalg.norm(
targets.iloc[:, [0, 1]].values - targets.iloc[:, [4, 5]], axis=1)
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
if headshot_mode:
headshot_offset = box_height * 0.38
else:
headshot_offset = box_height * 0.2
mouseMove = [xMid - cWidth, (yMid - headshot_offset) - cHeight]
# Moving the mouse: 0x02 is VK_RBUTTON, so aiming is only applied while the right mouse button is held
#imagine recalculating everything to find out you have a drop in replacement
if win32api.GetKeyState(0x02) < 0:
ghub_move(mouseMove[0],mouseMove[1])
last_mid_coord = [xMid, yMid]
else:
last_mid_coord = None
# See what the bot sees
if visuals:
# Loops over every item identified and draws a bounding box
for i in range(0, len(targets)):
halfW = round(targets["width"][i] / 2)
halfH = round(targets["height"][i] / 2)
midX = targets['current_mid_x'][i]
midY = targets['current_mid_y'][i]
(startX, startY, endX, endY) = int(midX + halfW), int(midY +
halfH), int(midX - halfW), int(midY - halfH)
idx = 0
# draw the bounding box and label on the frame
label = "{}: {:.2f}%".format(
"Character", targets["confidence"][i] * 100)
cv2.rectangle(npImg, (startX, startY), (endX, endY),
COLORS[idx], 2)
y = startY - 15 if startY - 15 > 15 else startY + 15
cv2.putText(npImg, label, (startX, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# Forced garbage cleanup every second
count += 1
if (time.time() - sTime) > 1:
if cpsDisplay:
print("CPS: {}".format(count))
count = 0
sTime = time.time()
# Uncomment if you keep running into memory issues
# gc.collect(generation=0)
# See visually what the Aimbot sees
if visuals:
cv2.imshow('Live Feed', npImg)
if (cv2.waitKey(1) & 0xFF) == ord('q'):
exit()
camera.stop()
if __name__ == "__main__":
try:
main()
except Exception as e:
import traceback
traceback.print_exception(e)
print("ERROR: " + str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")

View File

@ -0,0 +1,172 @@
import torch
import numpy as np
import cv2
import time
import win32api
import win32con
import pandas as pd
from utils.general import (cv2, non_max_suppression, xyxy2xywh)
from models.common import DetectMultiBackend
from mouse_driver.MouseMove import mouse_move as ghub_move
import cupy as cp
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, useMask, maskHeight, maskWidth, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen, screenShotWidth
import gameSelection
def main():
# External Function for running the game selection menu (gameSelection.py)
camera, cWidth, cHeight = gameSelection.gameSelection()
# Used for forcing garbage collection
count = 0
sTime = time.time()
# Loading Yolo5 Small AI Model
model = DetectMultiBackend('RRRR320half.engine', device=torch.device(
'cuda'), dnn=False, data='', fp16=True)
stride, names, pt = model.stride, model.names, model.pt
# Used for colors drawn on bounding boxes
COLORS = np.random.uniform(0, 255, size=(1500, 3))
# Main loop Quit if exit key is pressed
last_mid_coord = None
with torch.no_grad():
while win32api.GetAsyncKeyState(ord(aaQuitKey)) == 0:
npImg = cp.array([camera.get_latest_frame()])
if npImg.shape[3] == 4:
# If the image has an alpha channel, remove it
npImg = npImg[:, :, :, :3]
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[:, -maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[:, -maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
im = npImg / 255
im = im.astype(cp.half)
im = cp.moveaxis(im, 3, 1)
im = torch.from_numpy(cp.asnumpy(im)).to('cuda')
# Detecting all the objects
results = model(im)
pred = non_max_suppression(
results, confidence, confidence, 0, False, max_det=2)
targets = []
for i, det in enumerate(pred):
s = ""
gn = torch.tensor(im.shape)[[0, 0, 0, 0]]
if len(det):
for c in det[:, -1].unique():
n = (det[:, -1] == c).sum() # detections per class
s += f"{n} {names[int(c)]}, " # add to string
for *xyxy, conf, cls in reversed(det):
targets.append((xyxy2xywh(torch.tensor(xyxy).view(
1, 4)) / gn).view(-1).tolist() + [float(conf)]) # normalized xywh
targets = pd.DataFrame(
targets, columns=['current_mid_x', 'current_mid_y', 'width', "height", "confidence"])
center_screen = [cWidth, cHeight]
# If there are people in the center bounding box
if len(targets) > 0:
if (centerOfScreen):
# Compute the distance from the center
targets["dist_from_center"] = np.sqrt((targets.current_mid_x - center_screen[0])**2 + (targets.current_mid_y - center_screen[1])**2)
# Sort the data frame by distance from center
targets = targets.sort_values("dist_from_center")
# Get the last persons mid coordinate if it exists
if last_mid_coord:
targets['last_mid_x'] = last_mid_coord[0]
targets['last_mid_y'] = last_mid_coord[1]
# Take distance between current person mid coordinate and last person mid coordinate
targets['dist'] = np.linalg.norm(
targets.iloc[:, [0, 1]].values - targets.iloc[:, [4, 5]], axis=1)
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
if headshot_mode:
headshot_offset = box_height * 0.38
else:
headshot_offset = box_height * 0.2
mouseMove = [xMid - cWidth, (yMid - headshot_offset) - cHeight]
if win32api.GetKeyState(0x91):# Moving the mouse
if win32api.GetKeyState(0x02) < 0 or win32api.GetKeyState(0x01) < 0:
ghub_move(mouseMove[0],mouseMove[1])
time.sleep(0.01)
last_mid_coord = [xMid, yMid]
else:
last_mid_coord = None
# See what the bot sees
if visuals:
npImg = cp.asnumpy(npImg[0])
# Loops over every item identified and draws a bounding box
for i in range(0, len(targets)):
halfW = round(targets["width"][i] / 2)
halfH = round(targets["height"][i] / 2)
midX = targets['current_mid_x'][i]
midY = targets['current_mid_y'][i]
(startX, startY, endX, endY) = int(
midX + halfW), int(midY + halfH), int(midX - halfW), int(midY - halfH)
idx = 0
# draw the bounding box and label on the frame
label = "{}: {:.2f}%".format(
"Character", targets["confidence"][i] * 100)
cv2.rectangle(npImg, (startX, startY), (endX, endY),
COLORS[idx], 2)
y = startY - 15 if startY - 15 > 15 else startY + 15
cv2.putText(npImg, label, (startX, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# Forced garbage cleanup every second
count += 1
if (time.time() - sTime) > 1:
if cpsDisplay:
print("CPS: {}".format(count))
count = 0
sTime = time.time()
# Uncomment if you keep running into memory issues
# gc.collect(generation=0)
# See visually what the Aimbot sees
if visuals:
cv2.imshow('Live Feed', npImg)
if (cv2.waitKey(1) & 0xFF) == ord('q'):
exit()
camera.stop()
if __name__ == "__main__":
try:
main()
except Exception as e:
import traceback
traceback.print_exception(e)
print("ERROR: " + str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")

View File

@ -0,0 +1,5 @@
# Explain your model
switched aimkey to RMB
added scrollock as a toggle key
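In code terms, that key scheme (as used in the TensorRT variant elsewhere in this diff) boils down to a check like the following before any mouse movement; this is a sketch of the condition only, not the full loop:
```python
import win32api

SCROLL_LOCK = 0x91  # VK_SCROLL: toggles the bot on and off
RIGHT_MOUSE = 0x02  # VK_RBUTTON: hold to aim
LEFT_MOUSE = 0x01   # VK_LBUTTON: also accepted as an aim key

# Move the mouse only while Scroll Lock is toggled on and a mouse button is held.
if win32api.GetKeyState(SCROLL_LOCK):
    if win32api.GetKeyState(RIGHT_MOUSE) < 0 or win32api.GetKeyState(LEFT_MOUSE) < 0:
        pass  # aim toward the selected target here
```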

View File

@ -0,0 +1,188 @@
from unittest import result
import torch
import numpy as np
import cv2
import time
import win32api
import win32con
import pandas as pd
from utils.general import (cv2, non_max_suppression, xyxy2xywh)
from models.common import DetectMultiBackend
import cupy as cp
import socket
ip = '' # raspberry board ip
port = 50123 # raspberry port
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(f'Connecting to {ip}:{port}...')
try:
client.connect((ip, port))
except TimeoutError as e:
print(f'ERROR: Could not connect. {e}')
client.close()
exit(1)
def moveafy(x, y):
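    # Send a relative mouse-move command to the Raspberry Pi Pico over TCP, formatted as 'M<x>,<y>\r'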
x = int(np.floor(x))
y = int(np.floor(y))
if x != 0 or y != 0:
command = (f'M{x},{y}\r')
client.sendall(command.encode())
get_response()
def get_response():
return f'Socket: {client.recv(4).decode()}'
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, useMask, maskHeight, maskWidth, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen
import gameSelection
def main():
# External Function for running the game selection menu (gameSelection.py)
camera, cWidth, cHeight = gameSelection.gameSelection()
# Used for forcing garbage collection
count = 0
sTime = time.time()
# Loading Yolo5 Small AI Model
model = DetectMultiBackend('afyfort.engine', device=torch.device('cuda'), dnn=False, data='', fp16=True)
stride, names, pt = model.stride, model.names, model.pt
# Used for colors drawn on bounding boxes
COLORS = np.random.uniform(0, 255, size=(1500, 3))
# Main loop Quit if Q is pressed
last_mid_coord = None
with torch.no_grad():
while win32api.GetAsyncKeyState(ord(aaQuitKey)) == 0:
npImg = cp.array([camera.get_latest_frame()])
if npImg.shape[3] == 4:
# If the image has an alpha channel, remove it
npImg = npImg[:, :, :, :3]
if useMask:
npImg[:, -maskHeight:, :maskWidth, :] = 0
im = npImg / 255
im = im.astype(cp.half)
im = cp.moveaxis(im, 3, 1)
im = torch.from_numpy(cp.asnumpy(im)).to('cuda')
# Detecting all the objects
results = model(im)
pred = non_max_suppression(
results, confidence, confidence, 0, False, max_det=10)
targets = []
for i, det in enumerate(pred):
s = ""
gn = torch.tensor(im.shape)[[0, 0, 0, 0]]
if len(det):
for c in det[:, -1].unique():
n = (det[:, -1] == c).sum() # detections per class
s += f"{n} {names[int(c)]}, " # add to string
for *xyxy, conf, cls in reversed(det):
targets.append((xyxy2xywh(torch.tensor(xyxy).view(
1, 4)) / gn).view(-1).tolist() + [float(conf)]) # normalized xywh
targets = pd.DataFrame(
targets, columns=['current_mid_x', 'current_mid_y', 'width', "height", "confidence"])
center_screen = [cWidth, cHeight]
# If there are people in the center bounding box
if len(targets) > 0:
if (centerOfScreen):
# Compute the distance from the center
targets["dist_from_center"] = np.sqrt((targets.current_mid_x - center_screen[0])**2 + (targets.current_mid_y - center_screen[1])**2)
# Sort the data frame by distance from center
targets = targets.sort_values("dist_from_center")
# Get the last persons mid coordinate if it exists
if last_mid_coord:
targets['last_mid_x'] = last_mid_coord[0]
targets['last_mid_y'] = last_mid_coord[1]
# Take distance between current person mid coordinate and last person mid coordinate
targets['dist'] = np.linalg.norm(
targets.iloc[:, [0, 1]].values - targets.iloc[:, [4, 5]], axis=1)
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
if headshot_mode:
headshot_offset = box_height * 0.38
else:
headshot_offset = box_height * 0.2
mouseMove = [xMid - cWidth, (yMid - headshot_offset) - cHeight]
# Moving the mouse
if win32api.GetAsyncKeyState(0x02) < 0:
# win32api.mouse_event(win32con.MOUSEEVENTF_MOVE, int(mouseMove[0] * aaMovementAmp), int(mouseMove[1] * aaMovementAmp), 0, 0)
moveafy(int(mouseMove[0] * aaMovementAmp), int(mouseMove[1] * aaMovementAmp))
last_mid_coord = [xMid, yMid]
else:
last_mid_coord = None
# See what the bot sees
if visuals:
npImg = cp.asnumpy(npImg[0])
# Loops over every item identified and draws a bounding box
for i in range(0, len(targets)):
halfW = round(targets["width"][i] / 2)
halfH = round(targets["height"][i] / 2)
midX = targets['current_mid_x'][i]
midY = targets['current_mid_y'][i]
(startX, startY, endX, endY) = int(
midX + halfW), int(midY + halfH), int(midX - halfW), int(midY - halfH)
idx = 0
# draw the bounding box and label on the frame
label = "{}: {:.2f}%".format(
"Human", targets["confidence"][i] * 100)
cv2.rectangle(npImg, (startX, startY), (endX, endY),
COLORS[idx], 2)
y = startY - 15 if startY - 15 > 15 else startY + 15
cv2.putText(npImg, label, (startX, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# Forced garbage cleanup every second
count += 1
if (time.time() - sTime) > 1:
if cpsDisplay:
print("CPS: {}".format(count))
count = 0
sTime = time.time()
# Uncomment if you keep running into memory issues
# gc.collect(generation=0)
# See visually what the Aimbot sees
if visuals:
cv2.imshow('Live Feed', npImg)
if (cv2.waitKey(1) & 0xFF) == ord('q'):
exit()
camera.stop()
if __name__ == "__main__":
try:
main()
except Exception as e:
import traceback
traceback.print_exception(e)
print(str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")

180
customScripts/main.py Normal file
View File

@ -0,0 +1,180 @@
import torch
import numpy as np
import cv2
import time
import win32api
import win32con
import pandas as pd
import gc
from utils.general import (cv2, non_max_suppression, xyxy2xywh)
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, useMask, maskWidth, maskHeight, aaQuitKey, screenShotHeight, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen
import gameSelection
def main():
# External Function for running the game selection menu (gameSelection.py)
camera, cWidth, cHeight = gameSelection.gameSelection()
# Used for forcing garbage collection
count = 0
sTime = time.time()
# Loading Yolo5 Small AI Model, for better results use yolov5m or yolov5l
model = torch.hub.load('ultralytics/yolov5', 'yolov5s',
pretrained=True, force_reload=True)
stride, names, pt = model.stride, model.names, model.pt
if torch.cuda.is_available():
model.half()
# Used for colors drawn on bounding boxes
COLORS = np.random.uniform(0, 255, size=(1500, 3))
# Main loop Quit if Q is pressed
last_mid_coord = None
with torch.no_grad():
while win32api.GetAsyncKeyState(ord(aaQuitKey)) == 0:
# Getting Frame
npImg = np.array(camera.get_latest_frame())
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[-maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[-maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
# Normalizing Data
im = torch.from_numpy(npImg)
if im.shape[2] == 4:
# If the image has an alpha channel, remove it
im = im[:, :, :3,]
im = torch.movedim(im, 2, 0)
if torch.cuda.is_available():
im = im.half()
im /= 255
if len(im.shape) == 3:
im = im[None]
# Detecting all the objects
results = model(im, size=screenShotHeight)
# Suppressing results that don't meet thresholds
pred = non_max_suppression(
results, confidence, confidence, 0, False, max_det=1000)
# Converting output to usable coords
targets = []
for i, det in enumerate(pred):
s = ""
gn = torch.tensor(im.shape)[[0, 0, 0, 0]]
if len(det):
for c in det[:, -1].unique():
n = (det[:, -1] == c).sum() # detections per class
s += f"{n} {names[int(c)]}, " # add to string
for *xyxy, conf, cls in reversed(det):
targets.append((xyxy2xywh(torch.tensor(xyxy).view(
1, 4)) / gn).view(-1).tolist() + [float(conf)]) # normalized xywh
targets = pd.DataFrame(
targets, columns=['current_mid_x', 'current_mid_y', 'width', "height", "confidence"])
center_screen = [cWidth, cHeight]
# If there are people in the center bounding box
if len(targets) > 0:
if (centerOfScreen):
# Compute the distance from the center
targets["dist_from_center"] = np.sqrt((targets.current_mid_x - center_screen[0])**2 + (targets.current_mid_y - center_screen[1])**2)
# Sort the data frame by distance from center
targets = targets.sort_values("dist_from_center")
# Get the last persons mid coordinate if it exists
if last_mid_coord:
targets['last_mid_x'] = last_mid_coord[0]
targets['last_mid_y'] = last_mid_coord[1]
# Take distance between current person mid coordinate and last person mid coordinate
targets['dist'] = np.linalg.norm(
targets.iloc[:, [0, 1]].values - targets.iloc[:, [4, 5]], axis=1)
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
if headshot_mode:
headshot_offset = box_height * 0.38
else:
headshot_offset = box_height * 0.2
mouseMove = [xMid - cWidth, (yMid - headshot_offset) - cHeight]
# Moving the mouse
if win32api.GetKeyState(0x14):
win32api.mouse_event(win32con.MOUSEEVENTF_MOVE, int(
mouseMove[0] * aaMovementAmp), int(mouseMove[1] * aaMovementAmp), 0, 0)
last_mid_coord = [xMid, yMid]
else:
last_mid_coord = None
# See what the bot sees
if visuals:
# Loops over every item identified and draws a bounding box
for i in range(0, len(targets)):
halfW = round(targets["width"][i] / 2)
halfH = round(targets["height"][i] / 2)
midX = targets['current_mid_x'][i]
midY = targets['current_mid_y'][i]
(startX, startY, endX, endY) = int(
midX + halfW), int(midY + halfH), int(midX - halfW), int(midY - halfH)
idx = 0
# draw the bounding box and label on the frame
label = "{}: {:.2f}%".format(
"Human", targets["confidence"][i] * 100)
cv2.rectangle(npImg, (startX, startY), (endX, endY),
COLORS[idx], 2)
y = startY - 15 if startY - 15 > 15 else startY + 15
cv2.putText(npImg, label, (startX, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# Forced garbage cleanup every second
count += 1
if (time.time() - sTime) > 1:
if cpsDisplay:
print("CPS: {}".format(count))
count = 0
sTime = time.time()
# Uncomment if you keep running into memory issues
# gc.collect(generation=0)
# See visually what the Aimbot sees
if visuals:
cv2.imshow('Live Feed', npImg)
if (cv2.waitKey(1) & 0xFF) == ord('q'):
exit()
camera.stop()
if __name__ == "__main__":
try:
main()
except Exception as e:
import traceback
traceback.print_exception(e)
print("ERROR: " + str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")

203
customScripts/main_onnx.py Normal file
View File

@ -0,0 +1,203 @@
import onnxruntime as ort
import numpy as np
import gc
import numpy as np
import cv2
import time
import win32api
import win32con
import pandas as pd
from utils.general import (cv2, non_max_suppression, xyxy2xywh)
from mouse_driver.MouseMove import mouse_move as ghub_move
import torch
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, useMask, maskHeight, maskWidth, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, onnxChoice, centerOfScreen
import gameSelection
def main():
# External Function for running the game selection menu (gameSelection.py)
camera, cWidth, cHeight = gameSelection.gameSelection()
# Used for forcing garbage collection
count = 0
sTime = time.time()
# Choosing the correct ONNX Provider based on config.py
onnxProvider = ""
if onnxChoice == 1:
onnxProvider = "CPUExecutionProvider"
elif onnxChoice == 2:
onnxProvider = "DmlExecutionProvider"
elif onnxChoice == 3:
import cupy as cp
onnxProvider = "CUDAExecutionProvider"
so = ort.SessionOptions()
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
ort_sess = ort.InferenceSession('RRRR.onnx', sess_options=so, providers=[
onnxProvider])
# Used for colors drawn on bounding boxes
COLORS = np.random.uniform(0, 255, size=(1500, 3))
# Main loop Quit if Q is pressed
last_mid_coord = None
while win32api.GetAsyncKeyState(ord(aaQuitKey)) == 0:
# Getting Frame
npImg = np.array(camera.get_latest_frame())
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[-maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[-maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
# If Nvidia, do this
if onnxChoice == 3:
# Normalizing Data
im = torch.from_numpy(npImg).to('cuda')
if im.shape[2] == 4:
# If the image has an alpha channel, remove it
im = im[:, :, :3,]
im = torch.movedim(im, 2, 0)
im = im.half()
im /= 255
if len(im.shape) == 3:
im = im[None]
# If AMD or CPU, do this
else:
# Normalizing Data
im = np.array([npImg])
if im.shape[3] == 4:
# If the image has an alpha channel, remove it
im = im[:, :, :, :3]
im = im / 255
im = im.astype(np.half)
im = np.moveaxis(im, 3, 1)
# If Nvidia, do this
if onnxChoice == 3:
outputs = ort_sess.run(None, {'images': cp.asnumpy(im)})
# If AMD or CPU, do this
else:
outputs = ort_sess.run(None, {'images': np.array(im)})
im = torch.from_numpy(outputs[0]).to('cpu')
pred = non_max_suppression(
im, confidence, confidence, 0, False, max_det=10)
targets = []
for i, det in enumerate(pred):
s = ""
gn = torch.tensor(im.shape)[[0, 0, 0, 0]]
if len(det):
for c in det[:, -1].unique():
n = (det[:, -1] == c).sum() # detections per class
s += f"{n} {int(c)}, " # add to string
for *xyxy, conf, cls in reversed(det):
targets.append((xyxy2xywh(torch.tensor(xyxy).view(
1, 4)) / gn).view(-1).tolist() + [float(conf)]) # normalized xywh
targets = pd.DataFrame(
targets, columns=['current_mid_x', 'current_mid_y', 'width', "height", "confidence"])
center_screen = [cWidth, cHeight]
# If there are people in the center bounding box
if len(targets) > 0:
if (centerOfScreen):
# Compute the distance from the center
targets["dist_from_center"] = np.sqrt((targets.current_mid_x - center_screen[0])**2 + (targets.current_mid_y - center_screen[1])**2)
# Sort the data frame by distance from center
targets = targets.sort_values("dist_from_center")
# Get the last persons mid coordinate if it exists
if last_mid_coord:
targets['last_mid_x'] = last_mid_coord[0]
targets['last_mid_y'] = last_mid_coord[1]
# Take distance between current person mid coordinate and last person mid coordinate
targets['dist'] = np.linalg.norm(
targets.iloc[:, [0, 1]].values - targets.iloc[:, [4, 5]], axis=1)
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
if headshot_mode:
headshot_offset = box_height * 0.38
else:
headshot_offset = box_height * 0.2
mouseMove = [xMid - cWidth, (yMid - headshot_offset) - cHeight]
# Moving the mouse
#imagine recalculating everything to find out you have a drop in replacement
if win32api.GetKeyState(0x02) < 0:
ghub_move(mouseMove[0],mouseMove[1])
last_mid_coord = [xMid, yMid]
else:
last_mid_coord = None
# See what the bot sees
if visuals:
# Loops over every item identified and draws a bounding box
for i in range(0, len(targets)):
halfW = round(targets["width"][i] / 2)
halfH = round(targets["height"][i] / 2)
midX = targets['current_mid_x'][i]
midY = targets['current_mid_y'][i]
(startX, startY, endX, endY) = int(midX + halfW), int(midY +
halfH), int(midX - halfW), int(midY - halfH)
idx = 0
# draw the bounding box and label on the frame
label = "{}: {:.2f}%".format(
"Character", targets["confidence"][i] * 100)
cv2.rectangle(npImg, (startX, startY), (endX, endY),
COLORS[idx], 2)
y = startY - 15 if startY - 15 > 15 else startY + 15
cv2.putText(npImg, label, (startX, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# Forced garbage cleanup every second
count += 1
if (time.time() - sTime) > 1:
if cpsDisplay:
print("CPS: {}".format(count))
count = 0
sTime = time.time()
# Uncomment if you keep running into memory issues
# gc.collect(generation=0)
# See visually what the Aimbot sees
if visuals:
cv2.imshow('Live Feed', npImg)
if (cv2.waitKey(1) & 0xFF) == ord('q'):
exit()
camera.stop()
if __name__ == "__main__":
try:
main()
except Exception as e:
import traceback
traceback.print_exception(e)
print("ERROR: " + str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")

View File

@ -0,0 +1,172 @@
import torch
import numpy as np
import cv2
import time
import win32api
import win32con
import pandas as pd
from utils.general import (cv2, non_max_suppression, xyxy2xywh)
from models.common import DetectMultiBackend
from mouse_driver.MouseMove import mouse_move as ghub_move
import cupy as cp
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, useMask, maskHeight, maskWidth, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen, screenShotWidth
import gameSelection
def main():
# External Function for running the game selection menu (gameSelection.py)
camera, cWidth, cHeight = gameSelection.gameSelection()
# Used for forcing garbage collection
count = 0
sTime = time.time()
# Loading Yolo5 Small AI Model
model = DetectMultiBackend('RRRR320half.engine', device=torch.device(
'cuda'), dnn=False, data='', fp16=True)
stride, names, pt = model.stride, model.names, model.pt
# Used for colors drawn on bounding boxes
COLORS = np.random.uniform(0, 255, size=(1500, 3))
# Main loop Quit if exit key is pressed
last_mid_coord = None
with torch.no_grad():
while win32api.GetAsyncKeyState(ord(aaQuitKey)) == 0:
npImg = cp.array([camera.get_latest_frame()])
if npImg.shape[3] == 4:
# If the image has an alpha channel, remove it
npImg = npImg[:, :, :, :3]
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[:, -maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[:, -maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
im = npImg / 255
im = im.astype(cp.half)
im = cp.moveaxis(im, 3, 1)
im = torch.from_numpy(cp.asnumpy(im)).to('cuda')
# Detecting all the objects
results = model(im)
pred = non_max_suppression(
results, confidence, confidence, 0, False, max_det=2)
targets = []
for i, det in enumerate(pred):
s = ""
gn = torch.tensor(im.shape)[[0, 0, 0, 0]]
if len(det):
for c in det[:, -1].unique():
n = (det[:, -1] == c).sum() # detections per class
s += f"{n} {names[int(c)]}, " # add to string
for *xyxy, conf, cls in reversed(det):
targets.append((xyxy2xywh(torch.tensor(xyxy).view(
1, 4)) / gn).view(-1).tolist() + [float(conf)]) # normalized xywh
targets = pd.DataFrame(
targets, columns=['current_mid_x', 'current_mid_y', 'width', "height", "confidence"])
center_screen = [cWidth, cHeight]
# If there are people in the center bounding box
if len(targets) > 0:
if (centerOfScreen):
# Compute the distance from the center
targets["dist_from_center"] = np.sqrt((targets.current_mid_x - center_screen[0])**2 + (targets.current_mid_y - center_screen[1])**2)
# Sort the data frame by distance from center
targets = targets.sort_values("dist_from_center")
# Get the last persons mid coordinate if it exists
if last_mid_coord:
targets['last_mid_x'] = last_mid_coord[0]
targets['last_mid_y'] = last_mid_coord[1]
# Take distance between current person mid coordinate and last person mid coordinate
targets['dist'] = np.linalg.norm(
targets.iloc[:, [0, 1]].values - targets.iloc[:, [4, 5]], axis=1)
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
if headshot_mode:
headshot_offset = box_height * 0.38
else:
headshot_offset = box_height * 0.2
mouseMove = [xMid - cWidth, (yMid - headshot_offset) - cHeight]
if win32api.GetKeyState(0x91):# Moving the mouse
if win32api.GetKeyState(0x02) < 0 or win32api.GetKeyState(0x01) < 0:
ghub_move(mouseMove[0],mouseMove[1])
time.sleep(0.01)
last_mid_coord = [xMid, yMid]
else:
last_mid_coord = None
# See what the bot sees
if visuals:
npImg = cp.asnumpy(npImg[0])
# Loops over every item identified and draws a bounding box
for i in range(0, len(targets)):
halfW = round(targets["width"][i] / 2)
halfH = round(targets["height"][i] / 2)
midX = targets['current_mid_x'][i]
midY = targets['current_mid_y'][i]
(startX, startY, endX, endY) = int(
midX + halfW), int(midY + halfH), int(midX - halfW), int(midY - halfH)
idx = 0
# draw the bounding box and label on the frame
label = "{}: {:.2f}%".format(
"Character", targets["confidence"][i] * 100)
cv2.rectangle(npImg, (startX, startY), (endX, endY),
COLORS[idx], 2)
y = startY - 15 if startY - 15 > 15 else startY + 15
cv2.putText(npImg, label, (startX, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# Forced garbage cleanup every second
count += 1
if (time.time() - sTime) > 1:
if cpsDisplay:
print("CPS: {}".format(count))
count = 0
sTime = time.time()
# Uncomment if you keep running into memory issues
# gc.collect(generation=0)
# See visually what the Aimbot sees
if visuals:
cv2.imshow('Live Feed', npImg)
if (cv2.waitKey(1) & 0xFF) == ord('q'):
exit()
camera.stop()
if __name__ == "__main__":
try:
main()
except Exception as e:
import traceback
traceback.print_exception(e)
print("ERROR: " + str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")

231
export.py
View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
"""
Export a YOLOv5 PyTorch model to other formats. TensorFlow exports authored by https://github.com/zldrobit
@ -77,6 +77,25 @@ from utils.torch_utils import select_device, smart_inference_mode
MACOS = platform.system() == 'Darwin' # macOS environment
class iOSModel(torch.nn.Module):

    def __init__(self, model, im):
        super().__init__()
        b, c, h, w = im.shape  # batch, channel, height, width
        self.model = model
        self.nc = model.nc  # number of classes
        if w == h:
            self.normalize = 1. / w
        else:
            self.normalize = torch.tensor([1. / w, 1. / h, 1. / w, 1. / h])  # broadcast (slower, smaller)
            # np = model(im)[0].shape[1]  # number of points
            # self.normalize = torch.tensor([1. / w, 1. / h, 1. / w, 1. / h]).expand(np, 4)  # explicit (faster, larger)

    def forward(self, x):
        xywh, conf, cls = self.model(x)[0].squeeze().split((4, 1, self.nc), 1)
        return cls * conf, xywh * self.normalize  # confidence (3780, 80), coordinates (3780, 4)
def export_formats():
# YOLOv5 export formats
x = [
@ -136,7 +155,7 @@ def export_onnx(model, im, file, opset, dynamic, simplify, prefix=colorstr('ONNX
import onnx
LOGGER.info(f'\n{prefix} starting export with onnx {onnx.__version__}...')
f = file.with_suffix('.onnx')
f = str(file.with_suffix('.onnx'))
output_names = ['output0', 'output1'] if isinstance(model, SegmentationModel) else ['output0']
if dynamic:
@ -186,23 +205,68 @@ def export_onnx(model, im, file, opset, dynamic, simplify, prefix=colorstr('ONNX
@try_export
def export_openvino(file, metadata, half, prefix=colorstr('OpenVINO:')):
def export_openvino(file, metadata, half, int8, data, prefix=colorstr('OpenVINO:')):
# YOLOv5 OpenVINO export
check_requirements('openvino-dev') # requires openvino-dev: https://pypi.org/project/openvino-dev/
import openvino.inference_engine as ie
check_requirements('openvino-dev>=2023.0') # requires openvino-dev: https://pypi.org/project/openvino-dev/
import openvino.runtime as ov # noqa
from openvino.tools import mo # noqa
LOGGER.info(f'\n{prefix} starting export with openvino {ie.__version__}...')
f = str(file).replace('.pt', f'_openvino_model{os.sep}')
LOGGER.info(f'\n{prefix} starting export with openvino {ov.__version__}...')
f = str(file).replace(file.suffix, f'_openvino_model{os.sep}')
f_onnx = file.with_suffix('.onnx')
f_ov = str(Path(f) / file.with_suffix('.xml').name)
if int8:
check_requirements('nncf>=2.4.0') # requires at least version 2.4.0 to use the post-training quantization
import nncf
import numpy as np
from openvino.runtime import Core
args = [
'mo',
'--input_model',
str(file.with_suffix('.onnx')),
'--output_dir',
f,
'--data_type',
('FP16' if half else 'FP32'),]
subprocess.run(args, check=True, env=os.environ) # export
from utils.dataloaders import create_dataloader
core = Core()
onnx_model = core.read_model(f_onnx) # export
def prepare_input_tensor(image: np.ndarray):
input_tensor = image.astype(np.float32) # uint8 to fp16/32
input_tensor /= 255.0 # 0 - 255 to 0.0 - 1.0
if input_tensor.ndim == 3:
input_tensor = np.expand_dims(input_tensor, 0)
return input_tensor
def gen_dataloader(yaml_path, task='train', imgsz=640, workers=4):
data_yaml = check_yaml(yaml_path)
data = check_dataset(data_yaml)
dataloader = create_dataloader(data[task],
imgsz=imgsz,
batch_size=1,
stride=32,
pad=0.5,
single_cls=False,
rect=False,
workers=workers)[0]
return dataloader
def transform_fn(data_item):  # noqa: F811
"""
Quantization transform function. Extracts and preprocesses input data from the dataloader item for quantization.
Parameters:
data_item: Tuple with data item produced by DataLoader during iteration
Returns:
input_tensor: Input data for quantization
"""
img = data_item[0].numpy()
input_tensor = prepare_input_tensor(img)
return input_tensor
ds = gen_dataloader(data)
quantization_dataset = nncf.Dataset(ds, transform_fn)
ov_model = nncf.quantize(onnx_model, quantization_dataset, preset=nncf.QuantizationPreset.MIXED)
else:
ov_model = mo.convert_model(f_onnx, model_name=file.stem, framework='onnx', compress_to_fp16=half) # export
ov.serialize(ov_model, f_ov) # save
yaml_save(Path(f) / file.with_suffix('.yaml').name, metadata) # add metadata.yaml
return f, None
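With this change, OpenVINO INT8 export builds a calibration dataloader from the --data YAML and quantizes through NNCF instead of shelling out to the mo CLI. A hedged sketch of how the new path might be driven from Python (weights and dataset paths are placeholders; the keyword names mirror the diff and export.py's run() signature):

from export import run  # yolov5 export.py

run(weights='yolov5s.pt',        # PyTorch checkpoint to convert
    include=('openvino',),       # export format(s)
    int8=True,                   # enable NNCF post-training quantization
    data='data/coco128.yaml',    # calibration dataset for the quantizer
    imgsz=(640, 640))

The equivalent CLI call would be along the lines of python export.py --weights yolov5s.pt --include openvino --int8 --data data/coco128.yaml.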
@ -223,7 +287,7 @@ def export_paddle(model, im, file, metadata, prefix=colorstr('PaddlePaddle:')):
@try_export
def export_coreml(model, im, file, int8, half, prefix=colorstr('CoreML:')):
def export_coreml(model, im, file, int8, half, nms, prefix=colorstr('CoreML:')):
# YOLOv5 CoreML export
check_requirements('coremltools')
import coremltools as ct
@ -231,6 +295,8 @@ def export_coreml(model, im, file, int8, half, prefix=colorstr('CoreML:')):
LOGGER.info(f'\n{prefix} starting export with coremltools {ct.__version__}...')
f = file.with_suffix('.mlmodel')
if nms:
model = iOSModel(model, im)
ts = torch.jit.trace(model, im, strict=False) # TorchScript model
ct_model = ct.convert(ts, inputs=[ct.ImageType('image', shape=im.shape, scale=1 / 255, bias=[0, 0, 0])])
bits, mode = (8, 'kmeans_lut') if int8 else (16, 'linear') if half else (32, None)
@ -506,6 +572,129 @@ def add_tflite_metadata(file, metadata, num_outputs):
tmp_file.unlink()
def pipeline_coreml(model, im, file, names, y, prefix=colorstr('CoreML Pipeline:')):
# YOLOv5 CoreML pipeline
import coremltools as ct
from PIL import Image
print(f'{prefix} starting pipeline with coremltools {ct.__version__}...')
batch_size, ch, h, w = list(im.shape) # BCHW
t = time.time()
# YOLOv5 Output shapes
spec = model.get_spec()
out0, out1 = iter(spec.description.output)
if platform.system() == 'Darwin':
img = Image.new('RGB', (w, h)) # img(192 width, 320 height)
# img = torch.zeros((*opt.img_size, 3)).numpy() # img size(320,192,3) iDetection
out = model.predict({'image': img})
out0_shape, out1_shape = out[out0.name].shape, out[out1.name].shape
else: # linux and windows can not run model.predict(), get sizes from pytorch output y
s = tuple(y[0].shape)
out0_shape, out1_shape = (s[1], s[2] - 5), (s[1], 4) # (3780, 80), (3780, 4)
# Checks
nx, ny = spec.description.input[0].type.imageType.width, spec.description.input[0].type.imageType.height
na, nc = out0_shape
# na, nc = out0.type.multiArrayType.shape # number anchors, classes
assert len(names) == nc, f'{len(names)} names found for nc={nc}' # check
# Define output shapes (missing)
out0.type.multiArrayType.shape[:] = out0_shape # (3780, 80)
out1.type.multiArrayType.shape[:] = out1_shape # (3780, 4)
# spec.neuralNetwork.preprocessing[0].featureName = '0'
# Flexible input shapes
# from coremltools.models.neural_network import flexible_shape_utils
# s = [] # shapes
# s.append(flexible_shape_utils.NeuralNetworkImageSize(320, 192))
# s.append(flexible_shape_utils.NeuralNetworkImageSize(640, 384)) # (height, width)
# flexible_shape_utils.add_enumerated_image_sizes(spec, feature_name='image', sizes=s)
# r = flexible_shape_utils.NeuralNetworkImageSizeRange() # shape ranges
# r.add_height_range((192, 640))
# r.add_width_range((192, 640))
# flexible_shape_utils.update_image_size_range(spec, feature_name='image', size_range=r)
# Print
print(spec.description)
# Model from spec
model = ct.models.MLModel(spec)
# 3. Create NMS protobuf
nms_spec = ct.proto.Model_pb2.Model()
nms_spec.specificationVersion = 5
for i in range(2):
decoder_output = model._spec.description.output[i].SerializeToString()
nms_spec.description.input.add()
nms_spec.description.input[i].ParseFromString(decoder_output)
nms_spec.description.output.add()
nms_spec.description.output[i].ParseFromString(decoder_output)
nms_spec.description.output[0].name = 'confidence'
nms_spec.description.output[1].name = 'coordinates'
output_sizes = [nc, 4]
for i in range(2):
ma_type = nms_spec.description.output[i].type.multiArrayType
ma_type.shapeRange.sizeRanges.add()
ma_type.shapeRange.sizeRanges[0].lowerBound = 0
ma_type.shapeRange.sizeRanges[0].upperBound = -1
ma_type.shapeRange.sizeRanges.add()
ma_type.shapeRange.sizeRanges[1].lowerBound = output_sizes[i]
ma_type.shapeRange.sizeRanges[1].upperBound = output_sizes[i]
del ma_type.shape[:]
nms = nms_spec.nonMaximumSuppression
nms.confidenceInputFeatureName = out0.name # 1x507x80
nms.coordinatesInputFeatureName = out1.name # 1x507x4
nms.confidenceOutputFeatureName = 'confidence'
nms.coordinatesOutputFeatureName = 'coordinates'
nms.iouThresholdInputFeatureName = 'iouThreshold'
nms.confidenceThresholdInputFeatureName = 'confidenceThreshold'
nms.iouThreshold = 0.45
nms.confidenceThreshold = 0.25
nms.pickTop.perClass = True
nms.stringClassLabels.vector.extend(names.values())
nms_model = ct.models.MLModel(nms_spec)
# 4. Pipeline models together
pipeline = ct.models.pipeline.Pipeline(input_features=[('image', ct.models.datatypes.Array(3, ny, nx)),
('iouThreshold', ct.models.datatypes.Double()),
('confidenceThreshold', ct.models.datatypes.Double())],
output_features=['confidence', 'coordinates'])
pipeline.add_model(model)
pipeline.add_model(nms_model)
# Correct datatypes
pipeline.spec.description.input[0].ParseFromString(model._spec.description.input[0].SerializeToString())
pipeline.spec.description.output[0].ParseFromString(nms_model._spec.description.output[0].SerializeToString())
pipeline.spec.description.output[1].ParseFromString(nms_model._spec.description.output[1].SerializeToString())
# Update metadata
pipeline.spec.specificationVersion = 5
pipeline.spec.description.metadata.versionString = 'https://github.com/ultralytics/yolov5'
pipeline.spec.description.metadata.shortDescription = 'https://github.com/ultralytics/yolov5'
pipeline.spec.description.metadata.author = 'glenn.jocher@ultralytics.com'
pipeline.spec.description.metadata.license = 'https://github.com/ultralytics/yolov5/blob/master/LICENSE'
pipeline.spec.description.metadata.userDefined.update({
'classes': ','.join(names.values()),
'iou_threshold': str(nms.iouThreshold),
'confidence_threshold': str(nms.confidenceThreshold)})
# Save the model
f = file.with_suffix('.mlmodel') # filename
model = ct.models.MLModel(pipeline.spec)
model.input_description['image'] = 'Input image'
model.input_description['iouThreshold'] = f'(optional) IOU Threshold override (default: {nms.iouThreshold})'
model.input_description['confidenceThreshold'] = \
f'(optional) Confidence Threshold override (default: {nms.confidenceThreshold})'
model.output_description['confidence'] = 'Boxes × Class confidence (see user-defined metadata "classes")'
model.output_description['coordinates'] = 'Boxes × [x, y, width, height] (relative to image size)'
model.save(f) # pipelined
print(f'{prefix} pipeline success ({time.time() - t:.2f}s), saved as {f} ({file_size(f):.1f} MB)')
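Once the pipeline is saved, iouThreshold and confidenceThreshold become optional per-call overrides of the NMS defaults baked in above. A rough usage sketch (macOS only, since model.predict() is unavailable elsewhere, as the code notes; file names and sizes below are placeholders):

import coremltools as ct
from PIL import Image

model = ct.models.MLModel('yolov5s.mlmodel')
img = Image.open('bus.jpg').resize((640, 640))  # must match the export image size
out = model.predict({'image': img,
                     'iouThreshold': 0.45,           # optional override of nms.iouThreshold
                     'confidenceThreshold': 0.25})   # optional override of nms.confidenceThreshold
print(out['confidence'].shape, out['coordinates'].shape)  # per-box class scores, normalized xywh boxes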
@smart_inference_mode()
def run(
data=ROOT / 'data/coco128.yaml', # 'dataset.yaml path'
@ -582,9 +771,11 @@ def run(
if onnx or xml: # OpenVINO requires ONNX
f[2], _ = export_onnx(model, im, file, opset, dynamic, simplify)
if xml: # OpenVINO
f[3], _ = export_openvino(file, metadata, half)
f[3], _ = export_openvino(file, metadata, half, int8, data)
if coreml: # CoreML
f[4], _ = export_coreml(model, im, file, int8, half)
f[4], ct_model = export_coreml(model, im, file, int8, half, nms)
if nms:
pipeline_coreml(ct_model, im, file, model.names, y)
if any((saved_model, pb, tflite, edgetpu, tfjs)): # TensorFlow formats
assert not tflite or not tfjs, 'TFLite and TF.js models must be exported separately, please pass only one type.'
assert not isinstance(model, ClassificationModel), 'ClassificationModel export to TF formats not yet supported.'
@ -640,7 +831,7 @@ def parse_opt(known=False):
parser.add_argument('--inplace', action='store_true', help='set YOLOv5 Detect() inplace=True')
parser.add_argument('--keras', action='store_true', help='TF: use Keras')
parser.add_argument('--optimize', action='store_true', help='TorchScript: optimize for mobile')
parser.add_argument('--int8', action='store_true', help='CoreML/TF INT8 quantization')
parser.add_argument('--int8', action='store_true', help='CoreML/TF/OpenVINO INT8 quantization')
parser.add_argument('--dynamic', action='store_true', help='ONNX/TF/TensorRT: dynamic axes')
parser.add_argument('--simplify', action='store_true', help='ONNX: simplify model')
parser.add_argument('--opset', type=int, default=17, help='ONNX: opset version')

View File

@ -1,13 +1,14 @@
import pygetwindow
import time
import bettercam
from typing import Union
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaRightShift, screenShotHeight, screenShotWidth
from config import screenShotHeight, screenShotWidth
def gameSelection() -> (bettercam.BetterCam, int, int | None):
def gameSelection() -> (bettercam.BetterCam, int, Union[int, None]):
# Selecting the correct game window
try:
videoGameWindows = pygetwindow.getAllWindows()
@ -55,18 +56,8 @@ def gameSelection() -> (bettercam.BetterCam, int, int | None):
return None
print("Successfully activated the game window...")
# Setting up the screen shots
sctArea: dict[str, int] = {"mon": 1, "top": videoGameWindow.top + (videoGameWindow.height - screenShotHeight) // 2,
"left": aaRightShift + ((videoGameWindow.left + videoGameWindow.right) // 2) - (screenShotWidth // 2),
"width": screenShotWidth,
"height": screenShotHeight}
#! Uncomment if you want to view the entire screen
# sctArea = {"mon": 1, "top": 0, "left": 0, "width": 1920, "height": 1080}
# Starting screenshoting engine
left = aaRightShift + \
((videoGameWindow.left + videoGameWindow.right) // 2) - (screenShotWidth // 2)
left = ((videoGameWindow.left + videoGameWindow.right) // 2) - (screenShotWidth // 2)
top = videoGameWindow.top + \
(videoGameWindow.height - screenShotHeight) // 2
right, bottom = left + screenShotWidth, top + screenShotHeight
@ -74,8 +65,10 @@ def gameSelection() -> (bettercam.BetterCam, int, int | None):
region: tuple = (left, top, right, bottom)
# Calculating the center Autoaim box
cWidth: int = sctArea["width"] / 2
cHeight: int = sctArea["height"] / 2
cWidth: int = screenShotWidth // 2
cHeight: int = screenShotHeight // 2
print(region)
camera = bettercam.create(region=region, output_color="BGRA", max_buffer_len=512)
if camera is None:
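After dropping aaRightShift, the capture region is simply a screenShotWidth x screenShotHeight box centered on the game window, and the autoaim center is half of those constants. A compact restatement of that geometry (the window coordinates below are made-up examples):

screenShotWidth, screenShotHeight = 320, 320                   # from config.py
win_left, win_right, win_top, win_height = 100, 1820, 50, 980  # example window geometry

left = ((win_left + win_right) // 2) - (screenShotWidth // 2)  # horizontally centered on the window
top = win_top + (win_height - screenShotHeight) // 2           # vertically centered on the window
region = (left, top, left + screenShotWidth, top + screenShotHeight)

cWidth, cHeight = screenShotWidth // 2, screenShotHeight // 2  # center of the autoaim box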

BIN imgs/console.jpg (new binary file, 3.8 KiB; content not shown)

16
main.py
View File

@ -11,7 +11,7 @@ from utils.general import (cv2, non_max_suppression, xyxy2xywh)
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, aaRightShift, aaQuitKey, screenShotHeight, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen
from config import aaMovementAmp, useMask, maskWidth, maskHeight, aaQuitKey, screenShotHeight, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen
import gameSelection
def main():
@ -41,6 +41,16 @@ def main():
# Getting Frame
npImg = np.array(camera.get_latest_frame())
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[-maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[-maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
# Normalizing Data
im = torch.from_numpy(npImg)
if im.shape[2] == 4:
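The mask block zeroes out one bottom corner of the frame before inference so the player's own view-model is not picked up as a target. All four values come from config.py; an illustrative (not canonical) set of settings:

# config.py (illustrative values only)
useMask = True        # zero out a corner of each frame before inference
maskSide = "left"     # "left" or "right" bottom corner, matching the check above
maskWidth = 80        # mask width in pixels
maskHeight = 200      # mask height in pixels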
@ -99,7 +109,7 @@ def main():
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x + aaRightShift
xMid = targets.iloc[0].current_mid_x
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
@ -166,5 +176,5 @@ if __name__ == "__main__":
except Exception as e:
import traceback
traceback.print_exception(e)
print(str(e))
print("ERROR: " + str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")

View File

@ -1,6 +1,5 @@
import onnxruntime as ort
import numpy as np
import cupy as cp
import gc
import numpy as np
import cv2
@ -14,7 +13,7 @@ import torch
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, aaRightShift, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, onnxChoice, centerOfScreen
from config import aaMovementAmp, useMask, maskHeight, maskWidth, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, onnxChoice, centerOfScreen
import gameSelection
def main():
@ -32,6 +31,7 @@ def main():
elif onnxChoice == 2:
onnxProvider = "DmlExecutionProvider"
elif onnxChoice == 3:
import cupy as cp
onnxProvider = "CUDAExecutionProvider"
so = ort.SessionOptions()
@ -49,6 +49,16 @@ def main():
# Getting Frame
npImg = np.array(camera.get_latest_frame())
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[-maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[-maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
# If Nvidia, do this
if onnxChoice == 3:
# Normalizing Data
@ -122,7 +132,7 @@ def main():
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x + aaRightShift
xMid = targets.iloc[0].current_mid_x
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
@ -189,5 +199,5 @@ if __name__ == "__main__":
except Exception as e:
import traceback
traceback.print_exception(e)
print(str(e))
print("ERROR: " + str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")

View File

@ -1,4 +1,3 @@
from unittest import result
import torch
import numpy as np
import cv2
@ -13,7 +12,7 @@ import cupy as cp
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, aaRightShift, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen
from config import aaMovementAmp, useMask, maskHeight, maskWidth, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen, screenShotWidth
import gameSelection
def main():
@ -32,7 +31,7 @@ def main():
# Used for colors drawn on bounding boxes
COLORS = np.random.uniform(0, 255, size=(1500, 3))
# Main loop Quit if Q is pressed
# Main loop Quit if exit key is pressed
last_mid_coord = None
with torch.no_grad():
while win32api.GetAsyncKeyState(ord(aaQuitKey)) == 0:
@ -41,6 +40,17 @@ def main():
if npImg.shape[3] == 4:
# If the image has an alpha channel, remove it
npImg = npImg[:, :, :, :3]
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[:, -maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[:, -maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
im = npImg / 255
im = im.astype(cp.half)
@ -90,7 +100,7 @@ def main():
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x + aaRightShift
xMid = targets.iloc[0].current_mid_x
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
@ -157,5 +167,5 @@ if __name__ == "__main__":
except Exception as e:
import traceback
traceback.print_exception(e)
print(str(e))
print("ERROR: " + str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
"""
Common modules
"""
@ -24,12 +24,24 @@ import torch.nn as nn
from PIL import Image
from torch.cuda import amp
# Import 'ultralytics' package or install it if missing
try:
import ultralytics
assert hasattr(ultralytics, '__version__') # verify package is not directory
except (ImportError, AssertionError):
import os
os.system('pip install -U ultralytics')
import ultralytics
from ultralytics.utils.plotting import Annotator, colors, save_one_box
from utils import TryExcept
from utils.dataloaders import exif_transpose, letterbox
from utils.general import (LOGGER, ROOT, Profile, check_requirements, check_suffix, check_version, colorstr,
increment_path, is_jupyter, make_divisible, non_max_suppression, scale_boxes, xywh2xyxy,
xyxy2xywh, yaml_load)
from utils.plots import Annotator, colors, save_one_box
from utils.torch_utils import copy_attr, smart_inference_mode
@ -333,7 +345,7 @@ class DetectMultiBackend(nn.Module):
super().__init__()
w = str(weights[0] if isinstance(weights, list) else weights)
pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle, triton = self._model_type(w)
fp16 &= pt or jit or onnx or engine # FP16
fp16 &= pt or jit or onnx or engine or triton # FP16
nhwc = coreml or saved_model or pb or tflite or edgetpu # BHWC formats (vs torch BCHW)
stride = 32 # default stride
cuda = torch.cuda.is_available() and device.type != 'cpu' # use CUDA
@ -353,7 +365,8 @@ class DetectMultiBackend(nn.Module):
model.half() if fp16 else model.float()
if extra_files['config.txt']: # load metadata dict
d = json.loads(extra_files['config.txt'],
object_hook=lambda d: {int(k) if k.isdigit() else k: v
object_hook=lambda d: {
int(k) if k.isdigit() else k: v
for k, v in d.items()})
stride, names = int(d['stride']), d['names']
elif dnn: # ONNX OpenCV DNN
@ -372,18 +385,18 @@ class DetectMultiBackend(nn.Module):
stride, names = int(meta['stride']), eval(meta['names'])
elif xml: # OpenVINO
LOGGER.info(f'Loading {w} for OpenVINO inference...')
check_requirements('openvino') # requires openvino-dev: https://pypi.org/project/openvino-dev/
check_requirements('openvino>=2023.0') # requires openvino-dev: https://pypi.org/project/openvino-dev/
from openvino.runtime import Core, Layout, get_batch
ie = Core()
core = Core()
if not Path(w).is_file(): # if not *.xml
w = next(Path(w).glob('*.xml')) # get *.xml file from *_openvino_model dir
network = ie.read_model(model=w, weights=Path(w).with_suffix('.bin'))
if network.get_parameters()[0].get_layout().empty:
network.get_parameters()[0].set_layout(Layout('NCHW'))
batch_dim = get_batch(network)
ov_model = core.read_model(model=w, weights=Path(w).with_suffix('.bin'))
if ov_model.get_parameters()[0].get_layout().empty:
ov_model.get_parameters()[0].set_layout(Layout('NCHW'))
batch_dim = get_batch(ov_model)
if batch_dim.is_static:
batch_size = batch_dim.get_length()
executable_network = ie.compile_model(network, device_name='CPU') # device_name="MYRIAD" for Intel NCS2
ov_compiled_model = core.compile_model(ov_model, device_name='AUTO') # AUTO selects best available device
stride, names = self._load_metadata(Path(w).with_suffix('.yaml')) # load metadata
elif engine: # TensorRT
LOGGER.info(f'Loading {w} for TensorRT inference...')
@ -523,7 +536,7 @@ class DetectMultiBackend(nn.Module):
y = self.session.run(self.output_names, {self.session.get_inputs()[0].name: im})
elif self.xml: # OpenVINO
im = im.cpu().numpy() # FP32
y = list(self.executable_network([im]).values())
y = list(self.ov_compiled_model(im).values())
elif self.engine: # TensorRT
if self.dynamic and im.shape != self.bindings['images'].shape:
i = self.model.get_binding_index('images')
@ -540,7 +553,7 @@ class DetectMultiBackend(nn.Module):
elif self.coreml: # CoreML
im = im.cpu().numpy()
im = Image.fromarray((im[0] * 255).astype('uint8'))
# im = im.resize((192, 320), Image.ANTIALIAS)
# im = im.resize((192, 320), Image.BILINEAR)
y = self.model.predict({'image': im}) # coordinates are xywh normalized
if 'confidence' in y:
box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]]) # xyxy pixels

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
"""
Experimental modules
"""
@ -87,11 +87,11 @@ def attempt_load(weights, device=None, inplace=True, fuse=True):
model.append(ckpt.fuse().eval() if fuse and hasattr(ckpt, 'fuse') else ckpt.eval()) # model in eval mode
# Module compatibility updates
# Module updates
for m in model.modules():
t = type(m)
if t in (nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model):
m.inplace = inplace # torch 1.7.0 compatibility
m.inplace = inplace
if t is Detect and not isinstance(m.anchor_grid, list):
delattr(m, 'anchor_grid')
setattr(m, 'anchor_grid', [torch.zeros(1)] * m.nl)

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Default anchors for COCO data

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
"""
TensorFlow, Keras and TFLite versions of YOLOv5
Authored by https://github.com/zldrobit in PR https://github.com/ultralytics/yolov5/pull/1127

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
"""
YOLO-specific modules
@ -21,8 +21,8 @@ if str(ROOT) not in sys.path:
if platform.system() != 'Windows':
ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
from models.common import *
from models.experimental import *
from models.common import * # noqa
from models.experimental import * # noqa
from utils.autoanchor import check_anchor_order
from utils.general import LOGGER, check_version, check_yaml, make_divisible, print_args
from utils.plots import feature_visualization

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# Parameters
nc: 80 # number of classes

View File

@ -13,4 +13,4 @@ ipython
psutil
dxcam
onnxruntime_directml
git+https://github.com/RootKit-Org/BetterCam
bettercam

View File

@ -330,7 +330,7 @@ def classify_albumentations(
if vflip > 0:
T += [A.VerticalFlip(p=vflip)]
if jitter > 0:
color_jitter = (float(jitter),) * 3 # repeat value for brightness, contrast, satuaration, 0 hue
color_jitter = (float(jitter),) * 3 # repeat value for brightness, contrast, saturation, 0 hue
T += [A.ColorJitter(*color_jitter, 0)]
else: # Use fixed crop for eval set (reproducibility)
T = [A.SmallestMaxSize(max_size=size), A.CenterCrop(height=size, width=size)]

View File

@ -68,7 +68,7 @@ Run information streams from your environment to the W&B cloud console as you tr
You can leverage W&B artifacts and Tables integration to easily visualize and manage your datasets, models and training evaluations. Here are some quick examples to get you started.
<details open>
<h3> 1: Train and Log Evaluation simultaneousy </h3>
<h3> 1: Train and Log Evaluation simultaneously </h3>
This is an extension of the previous section, but it'll also run training after uploading the dataset. <b>This also logs an evaluation Table.</b>
Evaluation table compares your predictions and ground truths across the validation set for each epoch. It uses the references to the already uploaded datasets,
so no images will be uploaded from your system more than once.
@ -102,7 +102,7 @@ You can leverage W&B artifacts and Tables integration to easily visualize and ma
</details>
<h3> 4: Save model checkpoints as artifacts </h3>
To enable saving and versioning checkpoints of your experiment, pass `--save_period n` with the base cammand, where `n` represents checkpoint interval.
To enable saving and versioning checkpoints of your experiment, pass `--save_period n` with the base command, where `n` represents checkpoint interval.
You can also log both the dataset and model checkpoints simultaneously. If not passed, only the final model will be logged
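For instance, a run that versions a checkpoint artifact after every epoch could be launched from Python roughly like this (dataset, weights, and epoch count are only examples; train.run() forwards keyword arguments to the usual CLI options):

import train  # yolov5 train.py

# equivalent to passing --save_period 1 on the command line
train.run(data='coco128.yaml', weights='yolov5s.pt', epochs=10, save_period=1)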
<details>

Some files were not shown because too many files have changed in this diff.