Compare commits


No commits in common. "main" and "v3.0.0" have entirely different histories.
main ... v3.0.0

148 changed files with 386 additions and 2039 deletions

.gitignore vendored
View File

@ -17,7 +17,3 @@ utils/__pycache__
models/__pycache__
venv
*.old
elite
!Robloxmodel.pt
!rust.pt
!rust.engine

View File

@ -1,24 +0,0 @@
# Conda: A Package and Environment Manager
## What is Conda?
Conda is an open-source package management system and environment management system. It was created for Python programs, but it can package and distribute software for any language.
## Key Features of Conda
1. **Package Management**: Conda helps you manage and keep track of packages in your projects. It can install packages from the Conda package repository and other sources.
2. **Environment Management**: Conda allows you to create separate environments containing files, packages, and their dependencies that will not interfere with each other. This can be extremely useful when working on projects with different requirements.
3. **Cross-Platform**: Conda is a cross-platform tool, which means it works on Windows, macOS, and Linux.
4. **Language Agnostic**: Originally, Conda was created for Python. Now, it can handle packages from any language, which is a big advantage over pip, which is Python-specific.
## Benefits of Using Conda
- **Simplicity**: Conda simplifies package management and deployment.
- **Reproducibility**: Conda allows you to share your environments with others, which helps in reproducing research.
- **Isolation**: With Conda, you can easily create isolated environments to separate different projects.
- **Wide Package Support**: Conda supports a wide array of packages, and it's not limited to Python.
In conclusion, Conda is a powerful tool for managing packages and environments, making it easier to keep projects and their dependencies organized.
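Not part of the original notes: a quick way to see the isolation described above for yourself. Inside any activated environment, the interpreter and package paths point into that environment's own folder (the only assumption here is a working Python install).
```python
# Minimal illustration (not from the repo): every Conda environment carries its
# own interpreter and site-packages, which is what keeps projects isolated.
import sys

print("Interpreter:", sys.executable)  # lives inside the active environment
print("Packages under:", sys.prefix)   # e.g. .../miniconda3/envs/<env-name>
```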

View File

@ -1,48 +0,0 @@
# Installing Miniconda
Follow these steps to install Miniconda on your system:
## Step 1: Download Miniconda
First, download the appropriate Miniconda installer for your system from the [official Miniconda website](https://docs.conda.io/en/latest/miniconda.html).
## Step 2: Run the Installer
- **Windows**: Double click the `.exe` file and follow the instructions.
- **macOS and Linux**: Open a terminal, navigate to the directory where you downloaded the installer, and run the following command:
```bash
bash Miniconda3-latest-Linux-x86_64.sh
```
Replace `Miniconda3-latest-Linux-x86_64.sh` with the name of the file you downloaded.
## Step 3: Follow the Prompts
The installer will prompt you to review the license agreement, choose the install location, and optionally allow the installer to initialize Miniconda3 by appending it to your `PATH`.
## Step 4: Verify the Installation
To verify that the installation was successful, open a new terminal window and type:
```bash
conda list
```
If Miniconda has been installed and added to your `PATH`, this should display a list of installed packages.
## Step 5: Update Conda to the Latest Version
It's a good practice to make sure you're running the latest version of Conda. You can update it by running:
```bash
conda update conda
```
That's it! You have successfully installed Miniconda on your system.
Now, when you open a terminal, you should see `(base)` at the start of the prompt, indicating that the base Conda environment (and no project-specific environment) is active.
![Your console](imgs/console.jpg)

View File

@ -1,41 +0,0 @@
# Creating a New Conda Environment with Python 3.11
Follow these steps to create a new Conda environment with Python 3.11:
## Step 1: Open a Terminal
Open a terminal window. This could be Git Bash, Terminal on macOS, or Command Prompt on Windows.
## Step 2: Create a New Conda Environment
To create a new Conda environment with Python 3.11, use the following command:
```bash
conda create --name RootKit python=3.11
```
In this command, `RootKit` is the name of the new environment, and `python=3.11` specifies that we want Python 3.11 in this environment.
## Step 3: Activate the New Environment
After creating the new environment, you need to activate it using the following command:
```bash
conda activate RootKit
```
Now, `RootKit` is your active environment.
## Step 4: Verify Python Version
To verify that the correct version of Python is installed in your new environment, use the following command:
```bash
python --version
```
This should return `Python 3.11.x`.
That's it! You have successfully created a new Conda environment with Python 3.11.

View File

@ -1,46 +0,0 @@
# Cloning a GitHub Repository
Cloning a GitHub repository creates a local copy of the remote repo. This allows you to save all files from the repository on your local computer. Here's how you can do it:
## Step 1: Copy the Repository URL
Navigate to the main page of the repository on GitHub and click the "Code" button. Then click the "copy to clipboard" button to copy the repository URL.
## Step 2: Open a Terminal
Open a terminal window on your computer. If you're using Windows, you can use Git Bash or Command Prompt. On macOS, you can use the Terminal app.
## Step 3: Navigate to the Directory
Navigate to the directory where you want to clone the repository using the `cd` (change directory) command. For example:
```bash
cd /path/to/your/directory
```
## Step 4: Clone the Repository
Now, run the `git clone` command followed by the URL of the repository that you copied in step 1:
```bash
git clone https://github.com/RootKit-Org/AI-Aimbot.git
```
Replace `https://github.com/RootKit-Org/AI-Aimbot.git` with the URL you copied.
## Step 5: Verify the Cloning Process
Navigate into the cloned repository and list its files to verify that the cloning process was successful:
```bash
cd AI-Aimbot
ls
```
Replace `AI-Aimbot` with the name of your repository if you called it something else. The `ls` command will list all the files in the directory.
That's it! You have successfully cloned a GitHub repository to your local machine.
Because you cloned the repo, you can pull in any later changes with `git pull`.

View File

@ -1,39 +0,0 @@
# Installing Requirements
Follow these steps to install all of the requirements on your system:
## Step 1: Activate your environment:
Activate the environment you created earlier, for example `conda activate RootKit`.
## Step 2: Only if you have an NVIDIA graphics card - Download and Install CUDA:
Nvidia CUDA Toolkit 11.8 [DOWNLOAD HERE](https://developer.nvidia.com/cuda-11-8-0-download-archive)
## Step 3: Install PYTORCH:
- For NVIDIA GPU:
`pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113`
- For AMD or CPU only:
`pip install torch torchvision torchaudio`
## Step 4: Install requirements.txt:
`pip install -r requirements.txt`
## Step 5: Install additional modules:
Because you are using Conda, you need to install additional requirements in your environment.
`pip install -r Conda/additionalRequirements.txt`
## Step 6: Test your installation:
To test your installation, run the following command:
`python main.py`
You should now have a working AI Aimbot. If you want to use the fastest version, continue with the installation steps in the RootKit AI-Aimbot README.md.
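Not part of the original guide: a small optional sanity check, assuming the PyTorch install from Step 3 completed, that shows whether the bot will run on your NVIDIA GPU or fall back to the CPU before you launch `main.py`.
```python
# Hypothetical check (not from the repo): confirms which device PyTorch will use.
import torch

print("PyTorch version:", torch.__version__)
if torch.cuda.is_available():
    # CUDA build installed and an NVIDIA GPU is visible
    print("CUDA available:", torch.cuda.get_device_name(0))
else:
    # CPU-only (or AMD) build; inference will run on the CPU
    print("CUDA not available - running on CPU")
```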

View File

@ -1,20 +0,0 @@
argilla
bettercam
datasets
fastapi
langchainplus-sdk
langsmith
markdownlit
onnx
onnxruntime
opencv-python
panel
pygetwindow
sentence-transformers
streamlit
streamlit-camera-input-live
streamlit-extras
streamlit-faker
streamlit-image-coordinates
streamlit-keyup
transformers

View File

@ -2,12 +2,8 @@
![World's Best AI Aimbot Banner](imgs/banner.png)
[![Pull Requests Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat)](https://makeapullrequest.com)
[![Pull Requests Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat)](http://makeapullrequest.com)
Want to make your own bot? Then use the [Starter Code Pack](https://github.com/RootKit-Org/AI-Aimbot-Starter-Code)!
--
--
## 🙌 Welcome Aboard!
We're a charity on a mission to educate and certify the upcoming wave of developers in the world of Computer Engineering 🌍. Need assistance? Hop into our [Discord](https://discord.gg/rootkitorg) and toss your questions at `@Wonder` in the *#ai-aimbot channel* (be sure to stick to this channel or face the consequences! 😬). Type away your query and include `@Wonder` in there.
@ -39,7 +35,7 @@ Intended for educational use 🎓, our aim is to highlight the vulnerability of
- 🛑 Is it a `pip is not recognized...` error? [WATCH THIS!](https://youtu.be/zWYvRS7DtOg)
3. Fire up `PowerShell` or `Command Prompt` on Windows 🔍.
4. To install `PyTorch`, select the appropriate command based on your GPU.
- Nvidia `pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118`
- Nvidia `pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118`
- AMD or CPU `pip install torch torchvision torchaudio`
5. 📦 Run the command below to install the required Open Source packages:
```
@ -89,33 +85,30 @@ Follow these sparkly steps to get your TensorRT ready for action! 🛠️✨
5. **CUDNN Installation** 🧩
Click to install [CUDNN 📥](https://developer.nvidia.com/downloads/compute/cudnn/secure/8.9.6/local_installers/11.x/cudnn-windows-x86_64-8.9.6.50_cuda11-archive.zip/). You'll need an Nvidia account to proceed. Don't worry, it's free.
6. **Unzip and Relocate** 📁➡️
Open the .zip CuDNN file and move all the folders/files to where the CUDA Toolkit is on your machine, usually at `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8`.
7. **Get TensorRT 8.6 GA** 🔽
6. **Get TensorRT 8.6 GA** 🔽
Fetch [`TensorRT 8.6 GA 🛒`](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/secure/8.6.1/zip/TensorRT-8.6.1.6.Windows10.x86_64.cuda-11.8.zip).
8. **Unzip and Relocate** 📁➡️
7. **Unzip and Relocate** 📁➡️
Open the .zip TensorRT file and move all the folders/files to where the CUDA Toolkit is on your machine, usually at `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8`.
9. **Python TensorRT Installation** 🎡
8. **Python TensorRT Installation** 🎡
Once you have all the files copied over, you should have a folder at `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\python`. If you do, good, then run the following command to install TensorRT in python.
```
pip install "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\python\tensorrt-8.6.1-cp311-none-win_amd64.whl"
pip install C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\python\tensorrt-8.6.1-cp311-none-win_amd64.whl
```
🚨 If the step above didn't work, don't stress out! 😅 The wheel's file name corresponds to the Python version you have installed on your machine. We're not looking for the 'lean' or 'dispatch' versions. 🔍 Just locate the correct file and replace the path with your new one. 🔄 You've got this! 💪
10. **Set Your Environmental Variables** 🌎
9. **Set Your Environmental Variables** 🌎
Add these paths to your environment:
- `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\lib`
- `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\libnvvp`
- `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin`
11. **Download Pre-trained Models** 🤖
10. **Download Pre-trained Models** 🤖
You can use one of the .engine models we supply. But if it doesn't work, then you will need to re-export it. Grab the `.pt` file here for the model you want. We recommend `yolov5s.pt` or `yolov5m.pt` [HERE 🔗](https://github.com/ultralytics/yolov5/releases/tag/v7.0).
12. **Run the Export Script** 🏃‍♂️💻
Time to execute `export.py` with the following command. Patience is key; it might look frozen, but it's just concentrating hard! Can take up to 20 minutes.
11. **Run the Export Script** 🏃‍♂️💻
Time to execute `export.py` with the following command. Patience is key; it might look frozen, but it's just concentrating hard! Can take up to 20 mintues.
```
python .\export.py --weights ./yolov5s.pt --include engine --half --imgsz 320 320 --device 0
@ -131,11 +124,7 @@ If you've followed these steps, you should be all set with TensorRT! ⚙️🚀
*Default settings are generally great for most scenarios. Check out the comments in the code for more insights. 🔍 The configuration settings are now located in the `config.py` file!<br>
**CAPS_LOCK is the default for flipping the switch on the autoaim superpower! ⚙️ 🎯**
`useMask` - Set to `True` or `False` to turn on and off 🎭
`maskWidth` - The width of the mask to use. Only used when `useMask` is `True` 📐
`maskHeight` - The height of the mask to use. Only used when `useMask` is `True` 📐
`aaRightShift` - Might need a tweak in 3rd person games like Fortnite and New World. 🎮 Typically, a setting of `100` or `150` should hit the mark. 🎯👌
`aaQuitKey` - The go-to key is `q`, but if it clashes with your game style, swap it out! ⌨️♻️
@ -185,7 +174,7 @@ Show off your work or new models via Pull Requests in `customScripts` or `custom
## 🌠 Future Ideas
- [x] Mask Player to avoid false positives
- [ ] Mask Player to avoid false positives
Happy Coding and Aiming! 🎉👾

View File

@ -2,11 +2,9 @@
screenShotHeight = 320
screenShotWidth = 320
# Use "left" or "right" for the mask side depending on where the interfering object is, useful for 3rd player models or large guns
useMask = False
maskSide = "left"
maskWidth = 80
maskHeight = 200
# For use in games that are 3rd person and character model interferes with the autoaim
# EXAMPLE: Fortnite and New World
aaRightShift = 0
# Autoaim mouse movement amplifier
aaMovementAmp = .4
@ -33,4 +31,4 @@ centerOfScreen = True
# 1 - CPU
# 2 - AMD
# 3 - NVIDIA
onnxChoice = 1
onnxChoice = 3
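A side note that isn't in the repo: `onnxChoice` selects the ONNX Runtime execution provider (the same 1/2/3 mapping used by the ONNX script later in this diff), so you can list which providers your `onnxruntime` build actually supports before picking a value.
```python
# Hypothetical helper (not part of config.py): shows the execution providers your
# onnxruntime install ships with, so onnxChoice can be set to match your hardware.
import onnxruntime as ort

available = ort.get_available_providers()
print("Available providers:", available)

# Mapping mirrored from the ONNX script: 1 -> CPU, 2 -> DirectML (AMD), 3 -> CUDA (NVIDIA)
providers = {1: "CPUExecutionProvider", 2: "DmlExecutionProvider", 3: "CUDAExecutionProvider"}
onnxChoice = 3
if providers[onnxChoice] not in available:
    print(f"Warning: {providers[onnxChoice]} is not available; consider onnxChoice = 1")
```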

View File

@ -1,14 +0,0 @@
# Explain your model
Rust dataset. 6k images - 10/10/80 split. Included weights file - best.pt
Tell the community about your model
- What data was it trained on?
- Rust Images
- How much data was it trained on?
- 6k Images
- How many models do you have?
- 1
- Are they for pytorch, onnx, tensorrt, something else?
- tensorrt
- Any set up info

Binary file not shown.

Binary file not shown.

Binary file not shown.


View File

@ -1,7 +1,7 @@
# Performance optimizations
This version aims to achieve the best performance possible on AMD hardware.
To achieve this, the script acts more as an aim assist instead of a full fledged aimbot.
To achieve this, the script acts more as an aim assist insted of a full fledged aimbot.
The user will still need to do most of the aiming.
Changes that have been made:

View File

@ -1,180 +0,0 @@
import torch
import numpy as np
import cv2
import time
import win32api
import win32con
import pandas as pd
import gc
from utils.general import (cv2, non_max_suppression, xyxy2xywh)
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, useMask, maskWidth, maskHeight, aaQuitKey, screenShotHeight, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen
import gameSelection
def main():
# External Function for running the game selection menu (gameSelection.py)
camera, cWidth, cHeight = gameSelection.gameSelection()
# Used for forcing garbage collection
count = 0
sTime = time.time()
# Loading Yolo5 Small AI Model, for better results use yolov5m or yolov5l
model = torch.hub.load('ultralytics/yolov5', 'yolov5s',
pretrained=True, force_reload=True)
stride, names, pt = model.stride, model.names, model.pt
if torch.cuda.is_available():
model.half()
# Used for colors drawn on bounding boxes
COLORS = np.random.uniform(0, 255, size=(1500, 3))
# Main loop Quit if Q is pressed
last_mid_coord = None
with torch.no_grad():
while win32api.GetAsyncKeyState(ord(aaQuitKey)) == 0:
# Getting Frame
npImg = np.array(camera.get_latest_frame())
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[-maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[-maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
# Normalizing Data
im = torch.from_numpy(npImg)
if im.shape[2] == 4:
# If the image has an alpha channel, remove it
im = im[:, :, :3,]
im = torch.movedim(im, 2, 0)
if torch.cuda.is_available():
im = im.half()
im /= 255
if len(im.shape) == 3:
im = im[None]
# Detecting all the objects
results = model(im, size=screenShotHeight)
# Suppressing results that don't meet thresholds
pred = non_max_suppression(
results, confidence, confidence, 0, False, max_det=1000)
# Converting output to usable cords
targets = []
for i, det in enumerate(pred):
s = ""
gn = torch.tensor(im.shape)[[0, 0, 0, 0]]
if len(det):
for c in det[:, -1].unique():
n = (det[:, -1] == c).sum() # detections per class
s += f"{n} {names[int(c)]}, " # add to string
for *xyxy, conf, cls in reversed(det):
targets.append((xyxy2xywh(torch.tensor(xyxy).view(
1, 4)) / gn).view(-1).tolist() + [float(conf)]) # normalized xywh
targets = pd.DataFrame(
targets, columns=['current_mid_x', 'current_mid_y', 'width', "height", "confidence"])
center_screen = [cWidth, cHeight]
# If there are people in the center bounding box
if len(targets) > 0:
if (centerOfScreen):
# Compute the distance from the center
targets["dist_from_center"] = np.sqrt((targets.current_mid_x - center_screen[0])**2 + (targets.current_mid_y - center_screen[1])**2)
# Sort the data frame by distance from center
targets = targets.sort_values("dist_from_center")
# Get the last persons mid coordinate if it exists
if last_mid_coord:
targets['last_mid_x'] = last_mid_coord[0]
targets['last_mid_y'] = last_mid_coord[1]
# Take distance between current person mid coordinate and last person mid coordinate
targets['dist'] = np.linalg.norm(
targets.iloc[:, [0, 1]].values - targets.iloc[:, [4, 5]], axis=1)
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
if headshot_mode:
headshot_offset = box_height * 0.38
else:
headshot_offset = box_height * 0.2
mouseMove = [xMid - cWidth, (yMid - headshot_offset) - cHeight]
# Moving the mouse
if win32api.GetKeyState(0x14):
win32api.mouse_event(win32con.MOUSEEVENTF_MOVE, int(
mouseMove[0] * aaMovementAmp), int(mouseMove[1] * aaMovementAmp), 0, 0)
last_mid_coord = [xMid, yMid]
else:
last_mid_coord = None
# See what the bot sees
if visuals:
# Loops over every item identified and draws a bounding box
for i in range(0, len(targets)):
halfW = round(targets["width"][i] / 2)
halfH = round(targets["height"][i] / 2)
midX = targets['current_mid_x'][i]
midY = targets['current_mid_y'][i]
(startX, startY, endX, endY) = int(
midX + halfW), int(midY + halfH), int(midX - halfW), int(midY - halfH)
idx = 0
# draw the bounding box and label on the frame
label = "{}: {:.2f}%".format(
"Human", targets["confidence"][i] * 100)
cv2.rectangle(npImg, (startX, startY), (endX, endY),
COLORS[idx], 2)
y = startY - 15 if startY - 15 > 15 else startY + 15
cv2.putText(npImg, label, (startX, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# Forced garbage cleanup every second
count += 1
if (time.time() - sTime) > 1:
if cpsDisplay:
print("CPS: {}".format(count))
count = 0
sTime = time.time()
# Uncomment if you keep running into memory issues
# gc.collect(generation=0)
# See visually what the Aimbot sees
if visuals:
cv2.imshow('Live Feed', npImg)
if (cv2.waitKey(1) & 0xFF) == ord('q'):
exit()
camera.stop()
if __name__ == "__main__":
try:
main()
except Exception as e:
import traceback
traceback.print_exception(e)
print("ERROR: " + str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")

View File

@ -1,203 +0,0 @@
import onnxruntime as ort
import numpy as np
import gc
import numpy as np
import cv2
import time
import win32api
import win32con
import pandas as pd
from utils.general import (cv2, non_max_suppression, xyxy2xywh)
from mouse_driver.MouseMove import mouse_move as ghub_move
import torch
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, useMask, maskHeight, maskWidth, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, onnxChoice, centerOfScreen
import gameSelection
def main():
# External Function for running the game selection menu (gameSelection.py)
camera, cWidth, cHeight = gameSelection.gameSelection()
# Used for forcing garbage collection
count = 0
sTime = time.time()
# Choosing the correct ONNX Provider based on config.py
onnxProvider = ""
if onnxChoice == 1:
onnxProvider = "CPUExecutionProvider"
elif onnxChoice == 2:
onnxProvider = "DmlExecutionProvider"
elif onnxChoice == 3:
import cupy as cp
onnxProvider = "CUDAExecutionProvider"
so = ort.SessionOptions()
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
ort_sess = ort.InferenceSession('RRRR.onnx', sess_options=so, providers=[
onnxProvider])
# Used for colors drawn on bounding boxes
COLORS = np.random.uniform(0, 255, size=(1500, 3))
# Main loop Quit if Q is pressed
last_mid_coord = None
while win32api.GetAsyncKeyState(ord(aaQuitKey)) == 0:
# Getting Frame
npImg = np.array(camera.get_latest_frame())
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[-maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[-maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
# If Nvidia, do this
if onnxChoice == 3:
# Normalizing Data
im = torch.from_numpy(npImg).to('cuda')
if im.shape[2] == 4:
# If the image has an alpha channel, remove it
im = im[:, :, :3,]
im = torch.movedim(im, 2, 0)
im = im.half()
im /= 255
if len(im.shape) == 3:
im = im[None]
# If AMD or CPU, do this
else:
# Normalizing Data
im = np.array([npImg])
if im.shape[3] == 4:
# If the image has an alpha channel, remove it
im = im[:, :, :, :3]
im = im / 255
im = im.astype(np.half)
im = np.moveaxis(im, 3, 1)
# If Nvidia, do this
if onnxChoice == 3:
outputs = ort_sess.run(None, {'images': cp.asnumpy(im)})
# If AMD or CPU, do this
else:
outputs = ort_sess.run(None, {'images': np.array(im)})
im = torch.from_numpy(outputs[0]).to('cpu')
pred = non_max_suppression(
im, confidence, confidence, 0, False, max_det=10)
targets = []
for i, det in enumerate(pred):
s = ""
gn = torch.tensor(im.shape)[[0, 0, 0, 0]]
if len(det):
for c in det[:, -1].unique():
n = (det[:, -1] == c).sum() # detections per class
s += f"{n} {int(c)}, " # add to string
for *xyxy, conf, cls in reversed(det):
targets.append((xyxy2xywh(torch.tensor(xyxy).view(
1, 4)) / gn).view(-1).tolist() + [float(conf)]) # normalized xywh
targets = pd.DataFrame(
targets, columns=['current_mid_x', 'current_mid_y', 'width', "height", "confidence"])
center_screen = [cWidth, cHeight]
# If there are people in the center bounding box
if len(targets) > 0:
if (centerOfScreen):
# Compute the distance from the center
targets["dist_from_center"] = np.sqrt((targets.current_mid_x - center_screen[0])**2 + (targets.current_mid_y - center_screen[1])**2)
# Sort the data frame by distance from center
targets = targets.sort_values("dist_from_center")
# Get the last persons mid coordinate if it exists
if last_mid_coord:
targets['last_mid_x'] = last_mid_coord[0]
targets['last_mid_y'] = last_mid_coord[1]
# Take distance between current person mid coordinate and last person mid coordinate
targets['dist'] = np.linalg.norm(
targets.iloc[:, [0, 1]].values - targets.iloc[:, [4, 5]], axis=1)
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
if headshot_mode:
headshot_offset = box_height * 0.38
else:
headshot_offset = box_height * 0.2
mouseMove = [xMid - cWidth, (yMid - headshot_offset) - cHeight]
# Moving the mouse
#imagine recalculating everything to find out you have a drop in replacement
if win32api.GetKeyState(0x02) < 0:
ghub_move(mouseMove[0],mouseMove[1])
last_mid_coord = [xMid, yMid]
else:
last_mid_coord = None
# See what the bot sees
if visuals:
# Loops over every item identified and draws a bounding box
for i in range(0, len(targets)):
halfW = round(targets["width"][i] / 2)
halfH = round(targets["height"][i] / 2)
midX = targets['current_mid_x'][i]
midY = targets['current_mid_y'][i]
(startX, startY, endX, endY) = int(midX + halfW), int(midY +
halfH), int(midX - halfW), int(midY - halfH)
idx = 0
# draw the bounding box and label on the frame
label = "{}: {:.2f}%".format(
"Character", targets["confidence"][i] * 100)
cv2.rectangle(npImg, (startX, startY), (endX, endY),
COLORS[idx], 2)
y = startY - 15 if startY - 15 > 15 else startY + 15
cv2.putText(npImg, label, (startX, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# Forced garbage cleanup every second
count += 1
if (time.time() - sTime) > 1:
if cpsDisplay:
print("CPS: {}".format(count))
count = 0
sTime = time.time()
# Uncomment if you keep running into memory issues
# gc.collect(generation=0)
# See visually what the Aimbot sees
if visuals:
cv2.imshow('Live Feed', npImg)
if (cv2.waitKey(1) & 0xFF) == ord('q'):
exit()
camera.stop()
if __name__ == "__main__":
try:
main()
except Exception as e:
import traceback
traceback.print_exception(e)
print("ERROR: " + str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")

View File

@ -1,172 +0,0 @@
import torch
import numpy as np
import cv2
import time
import win32api
import win32con
import pandas as pd
from utils.general import (cv2, non_max_suppression, xyxy2xywh)
from models.common import DetectMultiBackend
from mouse_driver.MouseMove import mouse_move as ghub_move
import cupy as cp
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, useMask, maskHeight, maskWidth, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen, screenShotWidth
import gameSelection
def main():
# External Function for running the game selection menu (gameSelection.py)
camera, cWidth, cHeight = gameSelection.gameSelection()
# Used for forcing garbage collection
count = 0
sTime = time.time()
# Loading Yolo5 Small AI Model
model = DetectMultiBackend('RRRR320half.engine', device=torch.device(
'cuda'), dnn=False, data='', fp16=True)
stride, names, pt = model.stride, model.names, model.pt
# Used for colors drawn on bounding boxes
COLORS = np.random.uniform(0, 255, size=(1500, 3))
# Main loop Quit if exit key is pressed
last_mid_coord = None
with torch.no_grad():
while win32api.GetAsyncKeyState(ord(aaQuitKey)) == 0:
npImg = cp.array([camera.get_latest_frame()])
if npImg.shape[3] == 4:
# If the image has an alpha channel, remove it
npImg = npImg[:, :, :, :3]
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[:, -maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[:, -maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
im = npImg / 255
im = im.astype(cp.half)
im = cp.moveaxis(im, 3, 1)
im = torch.from_numpy(cp.asnumpy(im)).to('cuda')
# Detecting all the objects
results = model(im)
pred = non_max_suppression(
results, confidence, confidence, 0, False, max_det=2)
targets = []
for i, det in enumerate(pred):
s = ""
gn = torch.tensor(im.shape)[[0, 0, 0, 0]]
if len(det):
for c in det[:, -1].unique():
n = (det[:, -1] == c).sum() # detections per class
s += f"{n} {names[int(c)]}, " # add to string
for *xyxy, conf, cls in reversed(det):
targets.append((xyxy2xywh(torch.tensor(xyxy).view(
1, 4)) / gn).view(-1).tolist() + [float(conf)]) # normalized xywh
targets = pd.DataFrame(
targets, columns=['current_mid_x', 'current_mid_y', 'width', "height", "confidence"])
center_screen = [cWidth, cHeight]
# If there are people in the center bounding box
if len(targets) > 0:
if (centerOfScreen):
# Compute the distance from the center
targets["dist_from_center"] = np.sqrt((targets.current_mid_x - center_screen[0])**2 + (targets.current_mid_y - center_screen[1])**2)
# Sort the data frame by distance from center
targets = targets.sort_values("dist_from_center")
# Get the last persons mid coordinate if it exists
if last_mid_coord:
targets['last_mid_x'] = last_mid_coord[0]
targets['last_mid_y'] = last_mid_coord[1]
# Take distance between current person mid coordinate and last person mid coordinate
targets['dist'] = np.linalg.norm(
targets.iloc[:, [0, 1]].values - targets.iloc[:, [4, 5]], axis=1)
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
if headshot_mode:
headshot_offset = box_height * 0.38
else:
headshot_offset = box_height * 0.2
mouseMove = [xMid - cWidth, (yMid - headshot_offset) - cHeight]
if win32api.GetKeyState(0x91):# Moving the mouse
if win32api.GetKeyState(0x02) < 0 or win32api.GetKeyState(0x01) < 0:
ghub_move(mouseMove[0],mouseMove[1])
time.sleep(0.01)
last_mid_coord = [xMid, yMid]
else:
last_mid_coord = None
# See what the bot sees
if visuals:
npImg = cp.asnumpy(npImg[0])
# Loops over every item identified and draws a bounding box
for i in range(0, len(targets)):
halfW = round(targets["width"][i] / 2)
halfH = round(targets["height"][i] / 2)
midX = targets['current_mid_x'][i]
midY = targets['current_mid_y'][i]
(startX, startY, endX, endY) = int(
midX + halfW), int(midY + halfH), int(midX - halfW), int(midY - halfH)
idx = 0
# draw the bounding box and label on the frame
label = "{}: {:.2f}%".format(
"Character", targets["confidence"][i] * 100)
cv2.rectangle(npImg, (startX, startY), (endX, endY),
COLORS[idx], 2)
y = startY - 15 if startY - 15 > 15 else startY + 15
cv2.putText(npImg, label, (startX, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# Forced garbage cleanup every second
count += 1
if (time.time() - sTime) > 1:
if cpsDisplay:
print("CPS: {}".format(count))
count = 0
sTime = time.time()
# Uncomment if you keep running into memory issues
# gc.collect(generation=0)
# See visually what the Aimbot sees
if visuals:
cv2.imshow('Live Feed', npImg)
if (cv2.waitKey(1) & 0xFF) == ord('q'):
exit()
camera.stop()
if __name__ == "__main__":
try:
main()
except Exception as e:
import traceback
traceback.print_exception(e)
print("ERROR: " + str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")

View File

@ -1,5 +0,0 @@
# Explain your model
switched aimkey to RMB
added scrollock as a toggle key

View File

@ -1,188 +0,0 @@
from unittest import result
import torch
import numpy as np
import cv2
import time
import win32api
import win32con
import pandas as pd
from utils.general import (cv2, non_max_suppression, xyxy2xywh)
from models.common import DetectMultiBackend
import cupy as cp
import socket
ip = '' # raspberry board ip
port = 50123 # raspberry port
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(f'Connecting to {ip}:{port}...')
try:
client.connect((ip, port))
except TimeoutError as e:
print(f'ERROR: Could not connect. {e}')
client.close()
exit(1)
def moveafy(x, y):
x = int(np.floor(x))
y = int(np.floor(y))
if x != 0 or y != 0:
command = (f'M{x},{y}\r')
client.sendall(command.encode())
get_response()
def get_response():
return f'Socket: {client.recv(4).decode()}'
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, useMask, maskHeight, maskWidth, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen
import gameSelection
def main():
# External Function for running the game selection menu (gameSelection.py)
camera, cWidth, cHeight = gameSelection.gameSelection()
# Used for forcing garbage collection
count = 0
sTime = time.time()
# Loading Yolo5 Small AI Model
model = DetectMultiBackend('afyfort.engine', device=torch.device('cuda'), dnn=False, data='', fp16=True)
stride, names, pt = model.stride, model.names, model.pt
# Used for colors drawn on bounding boxes
COLORS = np.random.uniform(0, 255, size=(1500, 3))
# Main loop Quit if Q is pressed
last_mid_coord = None
with torch.no_grad():
while win32api.GetAsyncKeyState(ord(aaQuitKey)) == 0:
npImg = cp.array([camera.get_latest_frame()])
if npImg.shape[3] == 4:
# If the image has an alpha channel, remove it
npImg = npImg[:, :, :, :3]
if useMask:
npImg[:, -maskHeight:, :maskWidth, :] = 0
im = npImg / 255
im = im.astype(cp.half)
im = cp.moveaxis(im, 3, 1)
im = torch.from_numpy(cp.asnumpy(im)).to('cuda')
# Detecting all the objects
results = model(im)
pred = non_max_suppression(
results, confidence, confidence, 0, False, max_det=10)
targets = []
for i, det in enumerate(pred):
s = ""
gn = torch.tensor(im.shape)[[0, 0, 0, 0]]
if len(det):
for c in det[:, -1].unique():
n = (det[:, -1] == c).sum() # detections per class
s += f"{n} {names[int(c)]}, " # add to string
for *xyxy, conf, cls in reversed(det):
targets.append((xyxy2xywh(torch.tensor(xyxy).view(
1, 4)) / gn).view(-1).tolist() + [float(conf)]) # normalized xywh
targets = pd.DataFrame(
targets, columns=['current_mid_x', 'current_mid_y', 'width', "height", "confidence"])
center_screen = [cWidth, cHeight]
# If there are people in the center bounding box
if len(targets) > 0:
if (centerOfScreen):
# Compute the distance from the center
targets["dist_from_center"] = np.sqrt((targets.current_mid_x - center_screen[0])**2 + (targets.current_mid_y - center_screen[1])**2)
# Sort the data frame by distance from center
targets = targets.sort_values("dist_from_center")
# Get the last persons mid coordinate if it exists
if last_mid_coord:
targets['last_mid_x'] = last_mid_coord[0]
targets['last_mid_y'] = last_mid_coord[1]
# Take distance between current person mid coordinate and last person mid coordinate
targets['dist'] = np.linalg.norm(
targets.iloc[:, [0, 1]].values - targets.iloc[:, [4, 5]], axis=1)
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
if headshot_mode:
headshot_offset = box_height * 0.38
else:
headshot_offset = box_height * 0.2
mouseMove = [xMid - cWidth, (yMid - headshot_offset) - cHeight]
# Moving the mouse
if win32api.GetAsyncKeyState(0x02) < 0:
# win32api.mouse_event(win32con.MOUSEEVENTF_MOVE, int(mouseMove[0] * aaMovementAmp), int(mouseMove[1] * aaMovementAmp), 0, 0)
moveafy(int(mouseMove[0] * aaMovementAmp), int(mouseMove[1] * aaMovementAmp))
last_mid_coord = [xMid, yMid]
else:
last_mid_coord = None
# See what the bot sees
if visuals:
npImg = cp.asnumpy(npImg[0])
# Loops over every item identified and draws a bounding box
for i in range(0, len(targets)):
halfW = round(targets["width"][i] / 2)
halfH = round(targets["height"][i] / 2)
midX = targets['current_mid_x'][i]
midY = targets['current_mid_y'][i]
(startX, startY, endX, endY) = int(
midX + halfW), int(midY + halfH), int(midX - halfW), int(midY - halfH)
idx = 0
# draw the bounding box and label on the frame
label = "{}: {:.2f}%".format(
"Human", targets["confidence"][i] * 100)
cv2.rectangle(npImg, (startX, startY), (endX, endY),
COLORS[idx], 2)
y = startY - 15 if startY - 15 > 15 else startY + 15
cv2.putText(npImg, label, (startX, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# Forced garbage cleanup every second
count += 1
if (time.time() - sTime) > 1:
if cpsDisplay:
print("CPS: {}".format(count))
count = 0
sTime = time.time()
# Uncomment if you keep running into memory issues
# gc.collect(generation=0)
# See visually what the Aimbot sees
if visuals:
cv2.imshow('Live Feed', npImg)
if (cv2.waitKey(1) & 0xFF) == ord('q'):
exit()
camera.stop()
if __name__ == "__main__":
try:
main()
except Exception as e:
import traceback
traceback.print_exception(e)
print(str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")

View File

@ -1,180 +0,0 @@
import torch
import numpy as np
import cv2
import time
import win32api
import win32con
import pandas as pd
import gc
from utils.general import (cv2, non_max_suppression, xyxy2xywh)
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, useMask, maskWidth, maskHeight, aaQuitKey, screenShotHeight, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen
import gameSelection
def main():
# External Function for running the game selection menu (gameSelection.py)
camera, cWidth, cHeight = gameSelection.gameSelection()
# Used for forcing garbage collection
count = 0
sTime = time.time()
# Loading Yolo5 Small AI Model, for better results use yolov5m or yolov5l
model = torch.hub.load('ultralytics/yolov5', 'yolov5s',
pretrained=True, force_reload=True)
stride, names, pt = model.stride, model.names, model.pt
if torch.cuda.is_available():
model.half()
# Used for colors drawn on bounding boxes
COLORS = np.random.uniform(0, 255, size=(1500, 3))
# Main loop Quit if Q is pressed
last_mid_coord = None
with torch.no_grad():
while win32api.GetAsyncKeyState(ord(aaQuitKey)) == 0:
# Getting Frame
npImg = np.array(camera.get_latest_frame())
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[-maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[-maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
# Normalizing Data
im = torch.from_numpy(npImg)
if im.shape[2] == 4:
# If the image has an alpha channel, remove it
im = im[:, :, :3,]
im = torch.movedim(im, 2, 0)
if torch.cuda.is_available():
im = im.half()
im /= 255
if len(im.shape) == 3:
im = im[None]
# Detecting all the objects
results = model(im, size=screenShotHeight)
# Suppressing results that don't meet thresholds
pred = non_max_suppression(
results, confidence, confidence, 0, False, max_det=1000)
# Converting output to usable cords
targets = []
for i, det in enumerate(pred):
s = ""
gn = torch.tensor(im.shape)[[0, 0, 0, 0]]
if len(det):
for c in det[:, -1].unique():
n = (det[:, -1] == c).sum() # detections per class
s += f"{n} {names[int(c)]}, " # add to string
for *xyxy, conf, cls in reversed(det):
targets.append((xyxy2xywh(torch.tensor(xyxy).view(
1, 4)) / gn).view(-1).tolist() + [float(conf)]) # normalized xywh
targets = pd.DataFrame(
targets, columns=['current_mid_x', 'current_mid_y', 'width', "height", "confidence"])
center_screen = [cWidth, cHeight]
# If there are people in the center bounding box
if len(targets) > 0:
if (centerOfScreen):
# Compute the distance from the center
targets["dist_from_center"] = np.sqrt((targets.current_mid_x - center_screen[0])**2 + (targets.current_mid_y - center_screen[1])**2)
# Sort the data frame by distance from center
targets = targets.sort_values("dist_from_center")
# Get the last persons mid coordinate if it exists
if last_mid_coord:
targets['last_mid_x'] = last_mid_coord[0]
targets['last_mid_y'] = last_mid_coord[1]
# Take distance between current person mid coordinate and last person mid coordinate
targets['dist'] = np.linalg.norm(
targets.iloc[:, [0, 1]].values - targets.iloc[:, [4, 5]], axis=1)
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
if headshot_mode:
headshot_offset = box_height * 0.38
else:
headshot_offset = box_height * 0.2
mouseMove = [xMid - cWidth, (yMid - headshot_offset) - cHeight]
# Moving the mouse
if win32api.GetKeyState(0x14):
win32api.mouse_event(win32con.MOUSEEVENTF_MOVE, int(
mouseMove[0] * aaMovementAmp), int(mouseMove[1] * aaMovementAmp), 0, 0)
last_mid_coord = [xMid, yMid]
else:
last_mid_coord = None
# See what the bot sees
if visuals:
# Loops over every item identified and draws a bounding box
for i in range(0, len(targets)):
halfW = round(targets["width"][i] / 2)
halfH = round(targets["height"][i] / 2)
midX = targets['current_mid_x'][i]
midY = targets['current_mid_y'][i]
(startX, startY, endX, endY) = int(
midX + halfW), int(midY + halfH), int(midX - halfW), int(midY - halfH)
idx = 0
# draw the bounding box and label on the frame
label = "{}: {:.2f}%".format(
"Human", targets["confidence"][i] * 100)
cv2.rectangle(npImg, (startX, startY), (endX, endY),
COLORS[idx], 2)
y = startY - 15 if startY - 15 > 15 else startY + 15
cv2.putText(npImg, label, (startX, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# Forced garbage cleanup every second
count += 1
if (time.time() - sTime) > 1:
if cpsDisplay:
print("CPS: {}".format(count))
count = 0
sTime = time.time()
# Uncomment if you keep running into memory issues
# gc.collect(generation=0)
# See visually what the Aimbot sees
if visuals:
cv2.imshow('Live Feed', npImg)
if (cv2.waitKey(1) & 0xFF) == ord('q'):
exit()
camera.stop()
if __name__ == "__main__":
try:
main()
except Exception as e:
import traceback
traceback.print_exception(e)
print("ERROR: " + str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")

View File

@ -1,203 +0,0 @@
import onnxruntime as ort
import numpy as np
import gc
import numpy as np
import cv2
import time
import win32api
import win32con
import pandas as pd
from utils.general import (cv2, non_max_suppression, xyxy2xywh)
from mouse_driver.MouseMove import mouse_move as ghub_move
import torch
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, useMask, maskHeight, maskWidth, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, onnxChoice, centerOfScreen
import gameSelection
def main():
# External Function for running the game selection menu (gameSelection.py)
camera, cWidth, cHeight = gameSelection.gameSelection()
# Used for forcing garbage collection
count = 0
sTime = time.time()
# Choosing the correct ONNX Provider based on config.py
onnxProvider = ""
if onnxChoice == 1:
onnxProvider = "CPUExecutionProvider"
elif onnxChoice == 2:
onnxProvider = "DmlExecutionProvider"
elif onnxChoice == 3:
import cupy as cp
onnxProvider = "CUDAExecutionProvider"
so = ort.SessionOptions()
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
ort_sess = ort.InferenceSession('RRRR.onnx', sess_options=so, providers=[
onnxProvider])
# Used for colors drawn on bounding boxes
COLORS = np.random.uniform(0, 255, size=(1500, 3))
# Main loop Quit if Q is pressed
last_mid_coord = None
while win32api.GetAsyncKeyState(ord(aaQuitKey)) == 0:
# Getting Frame
npImg = np.array(camera.get_latest_frame())
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[-maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[-maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
# If Nvidia, do this
if onnxChoice == 3:
# Normalizing Data
im = torch.from_numpy(npImg).to('cuda')
if im.shape[2] == 4:
# If the image has an alpha channel, remove it
im = im[:, :, :3,]
im = torch.movedim(im, 2, 0)
im = im.half()
im /= 255
if len(im.shape) == 3:
im = im[None]
# If AMD or CPU, do this
else:
# Normalizing Data
im = np.array([npImg])
if im.shape[3] == 4:
# If the image has an alpha channel, remove it
im = im[:, :, :, :3]
im = im / 255
im = im.astype(np.half)
im = np.moveaxis(im, 3, 1)
# If Nvidia, do this
if onnxChoice == 3:
outputs = ort_sess.run(None, {'images': cp.asnumpy(im)})
# If AMD or CPU, do this
else:
outputs = ort_sess.run(None, {'images': np.array(im)})
im = torch.from_numpy(outputs[0]).to('cpu')
pred = non_max_suppression(
im, confidence, confidence, 0, False, max_det=10)
targets = []
for i, det in enumerate(pred):
s = ""
gn = torch.tensor(im.shape)[[0, 0, 0, 0]]
if len(det):
for c in det[:, -1].unique():
n = (det[:, -1] == c).sum() # detections per class
s += f"{n} {int(c)}, " # add to string
for *xyxy, conf, cls in reversed(det):
targets.append((xyxy2xywh(torch.tensor(xyxy).view(
1, 4)) / gn).view(-1).tolist() + [float(conf)]) # normalized xywh
targets = pd.DataFrame(
targets, columns=['current_mid_x', 'current_mid_y', 'width', "height", "confidence"])
center_screen = [cWidth, cHeight]
# If there are people in the center bounding box
if len(targets) > 0:
if (centerOfScreen):
# Compute the distance from the center
targets["dist_from_center"] = np.sqrt((targets.current_mid_x - center_screen[0])**2 + (targets.current_mid_y - center_screen[1])**2)
# Sort the data frame by distance from center
targets = targets.sort_values("dist_from_center")
# Get the last persons mid coordinate if it exists
if last_mid_coord:
targets['last_mid_x'] = last_mid_coord[0]
targets['last_mid_y'] = last_mid_coord[1]
# Take distance between current person mid coordinate and last person mid coordinate
targets['dist'] = np.linalg.norm(
targets.iloc[:, [0, 1]].values - targets.iloc[:, [4, 5]], axis=1)
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
if headshot_mode:
headshot_offset = box_height * 0.38
else:
headshot_offset = box_height * 0.2
mouseMove = [xMid - cWidth, (yMid - headshot_offset) - cHeight]
# Moving the mouse
#imagine recalculating everything to find out you have a drop in replacement
if win32api.GetKeyState(0x02) < 0:
ghub_move(mouseMove[0],mouseMove[1])
last_mid_coord = [xMid, yMid]
else:
last_mid_coord = None
# See what the bot sees
if visuals:
# Loops over every item identified and draws a bounding box
for i in range(0, len(targets)):
halfW = round(targets["width"][i] / 2)
halfH = round(targets["height"][i] / 2)
midX = targets['current_mid_x'][i]
midY = targets['current_mid_y'][i]
(startX, startY, endX, endY) = int(midX + halfW), int(midY +
halfH), int(midX - halfW), int(midY - halfH)
idx = 0
# draw the bounding box and label on the frame
label = "{}: {:.2f}%".format(
"Character", targets["confidence"][i] * 100)
cv2.rectangle(npImg, (startX, startY), (endX, endY),
COLORS[idx], 2)
y = startY - 15 if startY - 15 > 15 else startY + 15
cv2.putText(npImg, label, (startX, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# Forced garbage cleanup every second
count += 1
if (time.time() - sTime) > 1:
if cpsDisplay:
print("CPS: {}".format(count))
count = 0
sTime = time.time()
# Uncomment if you keep running into memory issues
# gc.collect(generation=0)
# See visually what the Aimbot sees
if visuals:
cv2.imshow('Live Feed', npImg)
if (cv2.waitKey(1) & 0xFF) == ord('q'):
exit()
camera.stop()
if __name__ == "__main__":
try:
main()
except Exception as e:
import traceback
traceback.print_exception(e)
print("ERROR: " + str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")

View File

@ -1,172 +0,0 @@
import torch
import numpy as np
import cv2
import time
import win32api
import win32con
import pandas as pd
from utils.general import (cv2, non_max_suppression, xyxy2xywh)
from models.common import DetectMultiBackend
from mouse_driver.MouseMove import mouse_move as ghub_move
import cupy as cp
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, useMask, maskHeight, maskWidth, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen, screenShotWidth
import gameSelection
def main():
# External Function for running the game selection menu (gameSelection.py)
camera, cWidth, cHeight = gameSelection.gameSelection()
# Used for forcing garbage collection
count = 0
sTime = time.time()
# Loading Yolo5 Small AI Model
model = DetectMultiBackend('RRRR320half.engine', device=torch.device(
'cuda'), dnn=False, data='', fp16=True)
stride, names, pt = model.stride, model.names, model.pt
# Used for colors drawn on bounding boxes
COLORS = np.random.uniform(0, 255, size=(1500, 3))
# Main loop Quit if exit key is pressed
last_mid_coord = None
with torch.no_grad():
while win32api.GetAsyncKeyState(ord(aaQuitKey)) == 0:
npImg = cp.array([camera.get_latest_frame()])
if npImg.shape[3] == 4:
# If the image has an alpha channel, remove it
npImg = npImg[:, :, :, :3]
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[:, -maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[:, -maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
im = npImg / 255
im = im.astype(cp.half)
im = cp.moveaxis(im, 3, 1)
im = torch.from_numpy(cp.asnumpy(im)).to('cuda')
# Detecting all the objects
results = model(im)
pred = non_max_suppression(
results, confidence, confidence, 0, False, max_det=2)
targets = []
for i, det in enumerate(pred):
s = ""
gn = torch.tensor(im.shape)[[0, 0, 0, 0]]
if len(det):
for c in det[:, -1].unique():
n = (det[:, -1] == c).sum() # detections per class
s += f"{n} {names[int(c)]}, " # add to string
for *xyxy, conf, cls in reversed(det):
targets.append((xyxy2xywh(torch.tensor(xyxy).view(
1, 4)) / gn).view(-1).tolist() + [float(conf)]) # normalized xywh
targets = pd.DataFrame(
targets, columns=['current_mid_x', 'current_mid_y', 'width', "height", "confidence"])
center_screen = [cWidth, cHeight]
# If there are people in the center bounding box
if len(targets) > 0:
if (centerOfScreen):
# Compute the distance from the center
targets["dist_from_center"] = np.sqrt((targets.current_mid_x - center_screen[0])**2 + (targets.current_mid_y - center_screen[1])**2)
# Sort the data frame by distance from center
targets = targets.sort_values("dist_from_center")
# Get the last persons mid coordinate if it exists
if last_mid_coord:
targets['last_mid_x'] = last_mid_coord[0]
targets['last_mid_y'] = last_mid_coord[1]
# Take distance between current person mid coordinate and last person mid coordinate
targets['dist'] = np.linalg.norm(
targets.iloc[:, [0, 1]].values - targets.iloc[:, [4, 5]], axis=1)
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
if headshot_mode:
headshot_offset = box_height * 0.38
else:
headshot_offset = box_height * 0.2
mouseMove = [xMid - cWidth, (yMid - headshot_offset) - cHeight]
if win32api.GetKeyState(0x91):# Moving the mouse
if win32api.GetKeyState(0x02) < 0 or win32api.GetKeyState(0x01) < 0:
ghub_move(mouseMove[0],mouseMove[1])
time.sleep(0.01)
last_mid_coord = [xMid, yMid]
else:
last_mid_coord = None
# See what the bot sees
if visuals:
npImg = cp.asnumpy(npImg[0])
# Loops over every item identified and draws a bounding box
for i in range(0, len(targets)):
halfW = round(targets["width"][i] / 2)
halfH = round(targets["height"][i] / 2)
midX = targets['current_mid_x'][i]
midY = targets['current_mid_y'][i]
(startX, startY, endX, endY) = int(
midX + halfW), int(midY + halfH), int(midX - halfW), int(midY - halfH)
idx = 0
# draw the bounding box and label on the frame
label = "{}: {:.2f}%".format(
"Character", targets["confidence"][i] * 100)
cv2.rectangle(npImg, (startX, startY), (endX, endY),
COLORS[idx], 2)
y = startY - 15 if startY - 15 > 15 else startY + 15
cv2.putText(npImg, label, (startX, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# Forced garbage cleanup every second
count += 1
if (time.time() - sTime) > 1:
if cpsDisplay:
print("CPS: {}".format(count))
count = 0
sTime = time.time()
# Uncomment if you keep running into memory issues
# gc.collect(generation=0)
# See visually what the Aimbot sees
if visuals:
cv2.imshow('Live Feed', npImg)
if (cv2.waitKey(1) & 0xFF) == ord('q'):
exit()
camera.stop()
if __name__ == "__main__":
try:
main()
except Exception as e:
import traceback
traceback.print_exception(e)
print("ERROR: " + str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")

export.py
View File

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
Export a YOLOv5 PyTorch model to other formats. TensorFlow exports authored by https://github.com/zldrobit
@ -77,25 +77,6 @@ from utils.torch_utils import select_device, smart_inference_mode
MACOS = platform.system() == 'Darwin' # macOS environment
class iOSModel(torch.nn.Module):
def __init__(self, model, im):
super().__init__()
b, c, h, w = im.shape # batch, channel, height, width
self.model = model
self.nc = model.nc # number of classes
if w == h:
self.normalize = 1. / w
else:
self.normalize = torch.tensor([1. / w, 1. / h, 1. / w, 1. / h]) # broadcast (slower, smaller)
# np = model(im)[0].shape[1] # number of points
# self.normalize = torch.tensor([1. / w, 1. / h, 1. / w, 1. / h]).expand(np, 4) # explicit (faster, larger)
def forward(self, x):
xywh, conf, cls = self.model(x)[0].squeeze().split((4, 1, self.nc), 1)
return cls * conf, xywh * self.normalize # confidence (3780, 80), coordinates (3780, 4)
def export_formats():
# YOLOv5 export formats
x = [
@ -110,7 +91,7 @@ def export_formats():
['TensorFlow Lite', 'tflite', '.tflite', True, False],
['TensorFlow Edge TPU', 'edgetpu', '_edgetpu.tflite', False, False],
['TensorFlow.js', 'tfjs', '_web_model', False, False],
['PaddlePaddle', 'paddle', '_paddle_model', True, True], ]
['PaddlePaddle', 'paddle', '_paddle_model', True, True],]
return pd.DataFrame(x, columns=['Format', 'Argument', 'Suffix', 'CPU', 'GPU'])
@ -155,7 +136,7 @@ def export_onnx(model, im, file, opset, dynamic, simplify, prefix=colorstr('ONNX
import onnx
LOGGER.info(f'\n{prefix} starting export with onnx {onnx.__version__}...')
f = str(file.with_suffix('.onnx'))
f = file.with_suffix('.onnx')
output_names = ['output0', 'output1'] if isinstance(model, SegmentationModel) else ['output0']
if dynamic:
@ -205,68 +186,23 @@ def export_onnx(model, im, file, opset, dynamic, simplify, prefix=colorstr('ONNX
@try_export
def export_openvino(file, metadata, half, int8, data, prefix=colorstr('OpenVINO:')):
def export_openvino(file, metadata, half, prefix=colorstr('OpenVINO:')):
# YOLOv5 OpenVINO export
check_requirements('openvino-dev>=2023.0') # requires openvino-dev: https://pypi.org/project/openvino-dev/
import openvino.runtime as ov # noqa
from openvino.tools import mo # noqa
check_requirements('openvino-dev') # requires openvino-dev: https://pypi.org/project/openvino-dev/
import openvino.inference_engine as ie
LOGGER.info(f'\n{prefix} starting export with openvino {ov.__version__}...')
f = str(file).replace(file.suffix, f'_openvino_model{os.sep}')
f_onnx = file.with_suffix('.onnx')
f_ov = str(Path(f) / file.with_suffix('.xml').name)
if int8:
check_requirements('nncf>=2.4.0') # requires at least version 2.4.0 to use the post-training quantization
import nncf
import numpy as np
from openvino.runtime import Core
LOGGER.info(f'\n{prefix} starting export with openvino {ie.__version__}...')
f = str(file).replace('.pt', f'_openvino_model{os.sep}')
from utils.dataloaders import create_dataloader
core = Core()
onnx_model = core.read_model(f_onnx) # export
def prepare_input_tensor(image: np.ndarray):
input_tensor = image.astype(np.float32) # uint8 to fp16/32
input_tensor /= 255.0 # 0 - 255 to 0.0 - 1.0
if input_tensor.ndim == 3:
input_tensor = np.expand_dims(input_tensor, 0)
return input_tensor
def gen_dataloader(yaml_path, task='train', imgsz=640, workers=4):
data_yaml = check_yaml(yaml_path)
data = check_dataset(data_yaml)
dataloader = create_dataloader(data[task],
imgsz=imgsz,
batch_size=1,
stride=32,
pad=0.5,
single_cls=False,
rect=False,
workers=workers)[0]
return dataloader
# noqa: F811
def transform_fn(data_item):
"""
Quantization transform function. Extracts and preprocess input data from dataloader item for quantization.
Parameters:
data_item: Tuple with data item produced by DataLoader during iteration
Returns:
input_tensor: Input data for quantization
"""
img = data_item[0].numpy()
input_tensor = prepare_input_tensor(img)
return input_tensor
ds = gen_dataloader(data)
quantization_dataset = nncf.Dataset(ds, transform_fn)
ov_model = nncf.quantize(onnx_model, quantization_dataset, preset=nncf.QuantizationPreset.MIXED)
else:
ov_model = mo.convert_model(f_onnx, model_name=file.stem, framework='onnx', compress_to_fp16=half) # export
ov.serialize(ov_model, f_ov) # save
args = [
'mo',
'--input_model',
str(file.with_suffix('.onnx')),
'--output_dir',
f,
'--data_type',
('FP16' if half else 'FP32'),]
subprocess.run(args, check=True, env=os.environ) # export
yaml_save(Path(f) / file.with_suffix('.yaml').name, metadata) # add metadata.yaml
return f, None
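As a usage illustration (not part of the diff), a minimal sketch of loading an exported OpenVINO model for inference; the model path and input size here are assumptions:

```python
import numpy as np
from openvino.runtime import Core

core = Core()
# Hypothetical path produced by export_openvino(); adjust to your own *_openvino_model directory.
ov_model = core.read_model('yolov5s_openvino_model/yolov5s.xml')
compiled = core.compile_model(ov_model, device_name='AUTO')  # AUTO picks the best available device

im = np.zeros((1, 3, 640, 640), dtype=np.float32)  # NCHW dummy input; real inputs are normalized 0-1
outputs = compiled(im)                              # CompiledModel is callable and returns its outputs
print([o.shape for o in outputs.values()])
```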
@ -287,7 +223,7 @@ def export_paddle(model, im, file, metadata, prefix=colorstr('PaddlePaddle:')):
@try_export
def export_coreml(model, im, file, int8, half, nms, prefix=colorstr('CoreML:')):
def export_coreml(model, im, file, int8, half, prefix=colorstr('CoreML:')):
# YOLOv5 CoreML export
check_requirements('coremltools')
import coremltools as ct
@ -295,8 +231,6 @@ def export_coreml(model, im, file, int8, half, nms, prefix=colorstr('CoreML:')):
LOGGER.info(f'\n{prefix} starting export with coremltools {ct.__version__}...')
f = file.with_suffix('.mlmodel')
if nms:
model = iOSModel(model, im)
ts = torch.jit.trace(model, im, strict=False) # TorchScript model
ct_model = ct.convert(ts, inputs=[ct.ImageType('image', shape=im.shape, scale=1 / 255, bias=[0, 0, 0])])
bits, mode = (8, 'kmeans_lut') if int8 else (16, 'linear') if half else (32, None)
@ -501,7 +435,7 @@ def export_edgetpu(file, prefix=colorstr('Edge TPU:')):
'10',
'--out_dir',
str(file.parent),
f_tfl, ], check=True)
f_tfl,], check=True)
return f, None
@ -522,7 +456,7 @@ def export_tfjs(file, int8, prefix=colorstr('TensorFlow.js:')):
'--quantize_uint8' if int8 else '',
'--output_node_names=Identity,Identity_1,Identity_2,Identity_3',
str(f_pb),
str(f), ]
str(f),]
subprocess.run([arg for arg in args if arg], check=True)
json = Path(f_json).read_text()
@ -572,129 +506,6 @@ def add_tflite_metadata(file, metadata, num_outputs):
tmp_file.unlink()
def pipeline_coreml(model, im, file, names, y, prefix=colorstr('CoreML Pipeline:')):
# YOLOv5 CoreML pipeline
import coremltools as ct
from PIL import Image
print(f'{prefix} starting pipeline with coremltools {ct.__version__}...')
batch_size, ch, h, w = list(im.shape) # BCHW
t = time.time()
# YOLOv5 Output shapes
spec = model.get_spec()
out0, out1 = iter(spec.description.output)
if platform.system() == 'Darwin':
img = Image.new('RGB', (w, h)) # img(192 width, 320 height)
# img = torch.zeros((*opt.img_size, 3)).numpy() # img size(320,192,3) iDetection
out = model.predict({'image': img})
out0_shape, out1_shape = out[out0.name].shape, out[out1.name].shape
else: # linux and windows can not run model.predict(), get sizes from pytorch output y
s = tuple(y[0].shape)
out0_shape, out1_shape = (s[1], s[2] - 5), (s[1], 4) # (3780, 80), (3780, 4)
# Checks
nx, ny = spec.description.input[0].type.imageType.width, spec.description.input[0].type.imageType.height
na, nc = out0_shape
# na, nc = out0.type.multiArrayType.shape # number anchors, classes
assert len(names) == nc, f'{len(names)} names found for nc={nc}' # check
# Define output shapes (missing)
out0.type.multiArrayType.shape[:] = out0_shape # (3780, 80)
out1.type.multiArrayType.shape[:] = out1_shape # (3780, 4)
# spec.neuralNetwork.preprocessing[0].featureName = '0'
# Flexible input shapes
# from coremltools.models.neural_network import flexible_shape_utils
# s = [] # shapes
# s.append(flexible_shape_utils.NeuralNetworkImageSize(320, 192))
# s.append(flexible_shape_utils.NeuralNetworkImageSize(640, 384)) # (height, width)
# flexible_shape_utils.add_enumerated_image_sizes(spec, feature_name='image', sizes=s)
# r = flexible_shape_utils.NeuralNetworkImageSizeRange() # shape ranges
# r.add_height_range((192, 640))
# r.add_width_range((192, 640))
# flexible_shape_utils.update_image_size_range(spec, feature_name='image', size_range=r)
# Print
print(spec.description)
# Model from spec
model = ct.models.MLModel(spec)
# 3. Create NMS protobuf
nms_spec = ct.proto.Model_pb2.Model()
nms_spec.specificationVersion = 5
for i in range(2):
decoder_output = model._spec.description.output[i].SerializeToString()
nms_spec.description.input.add()
nms_spec.description.input[i].ParseFromString(decoder_output)
nms_spec.description.output.add()
nms_spec.description.output[i].ParseFromString(decoder_output)
nms_spec.description.output[0].name = 'confidence'
nms_spec.description.output[1].name = 'coordinates'
output_sizes = [nc, 4]
for i in range(2):
ma_type = nms_spec.description.output[i].type.multiArrayType
ma_type.shapeRange.sizeRanges.add()
ma_type.shapeRange.sizeRanges[0].lowerBound = 0
ma_type.shapeRange.sizeRanges[0].upperBound = -1
ma_type.shapeRange.sizeRanges.add()
ma_type.shapeRange.sizeRanges[1].lowerBound = output_sizes[i]
ma_type.shapeRange.sizeRanges[1].upperBound = output_sizes[i]
del ma_type.shape[:]
nms = nms_spec.nonMaximumSuppression
nms.confidenceInputFeatureName = out0.name # 1x507x80
nms.coordinatesInputFeatureName = out1.name # 1x507x4
nms.confidenceOutputFeatureName = 'confidence'
nms.coordinatesOutputFeatureName = 'coordinates'
nms.iouThresholdInputFeatureName = 'iouThreshold'
nms.confidenceThresholdInputFeatureName = 'confidenceThreshold'
nms.iouThreshold = 0.45
nms.confidenceThreshold = 0.25
nms.pickTop.perClass = True
nms.stringClassLabels.vector.extend(names.values())
nms_model = ct.models.MLModel(nms_spec)
# 4. Pipeline models together
pipeline = ct.models.pipeline.Pipeline(input_features=[('image', ct.models.datatypes.Array(3, ny, nx)),
('iouThreshold', ct.models.datatypes.Double()),
('confidenceThreshold', ct.models.datatypes.Double())],
output_features=['confidence', 'coordinates'])
pipeline.add_model(model)
pipeline.add_model(nms_model)
# Correct datatypes
pipeline.spec.description.input[0].ParseFromString(model._spec.description.input[0].SerializeToString())
pipeline.spec.description.output[0].ParseFromString(nms_model._spec.description.output[0].SerializeToString())
pipeline.spec.description.output[1].ParseFromString(nms_model._spec.description.output[1].SerializeToString())
# Update metadata
pipeline.spec.specificationVersion = 5
pipeline.spec.description.metadata.versionString = 'https://github.com/ultralytics/yolov5'
pipeline.spec.description.metadata.shortDescription = 'https://github.com/ultralytics/yolov5'
pipeline.spec.description.metadata.author = 'glenn.jocher@ultralytics.com'
pipeline.spec.description.metadata.license = 'https://github.com/ultralytics/yolov5/blob/master/LICENSE'
pipeline.spec.description.metadata.userDefined.update({
'classes': ','.join(names.values()),
'iou_threshold': str(nms.iouThreshold),
'confidence_threshold': str(nms.confidenceThreshold)})
# Save the model
f = file.with_suffix('.mlmodel') # filename
model = ct.models.MLModel(pipeline.spec)
model.input_description['image'] = 'Input image'
model.input_description['iouThreshold'] = f'(optional) IOU Threshold override (default: {nms.iouThreshold})'
model.input_description['confidenceThreshold'] = \
f'(optional) Confidence Threshold override (default: {nms.confidenceThreshold})'
model.output_description['confidence'] = 'Boxes × Class confidence (see user-defined metadata "classes")'
model.output_description['coordinates'] = 'Boxes × [x, y, width, height] (relative to image size)'
model.save(f) # pipelined
print(f'{prefix} pipeline success ({time.time() - t:.2f}s), saved as {f} ({file_size(f):.1f} MB)')
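For context, a minimal sketch of how a pipelined *.mlmodel with embedded NMS could be queried (macOS only; the path and input size are assumptions, and the input/output names match the pipeline defined above):

```python
import coremltools as ct
from PIL import Image

mlmodel = ct.models.MLModel('yolov5s.mlmodel')  # hypothetical pipelined model path
img = Image.new('RGB', (192, 320))              # width x height must match the exported image input

out = mlmodel.predict({
    'image': img,
    'iouThreshold': 0.45,          # optional overrides; defaults are baked in above
    'confidenceThreshold': 0.25,
})
print(out['confidence'].shape, out['coordinates'].shape)  # (boxes, classes), (boxes, 4)
```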
@smart_inference_mode()
def run(
data=ROOT / 'data/coco128.yaml', # 'dataset.yaml path'
@ -771,11 +582,9 @@ def run(
if onnx or xml: # OpenVINO requires ONNX
f[2], _ = export_onnx(model, im, file, opset, dynamic, simplify)
if xml: # OpenVINO
f[3], _ = export_openvino(file, metadata, half, int8, data)
f[3], _ = export_openvino(file, metadata, half)
if coreml: # CoreML
f[4], ct_model = export_coreml(model, im, file, int8, half, nms)
if nms:
pipeline_coreml(ct_model, im, file, model.names, y)
f[4], _ = export_coreml(model, im, file, int8, half)
if any((saved_model, pb, tflite, edgetpu, tfjs)): # TensorFlow formats
assert not tflite or not tfjs, 'TFLite and TF.js models must be exported separately, please pass only one type.'
assert not isinstance(model, ClassificationModel), 'ClassificationModel export to TF formats not yet supported.'
@ -831,7 +640,7 @@ def parse_opt(known=False):
parser.add_argument('--inplace', action='store_true', help='set YOLOv5 Detect() inplace=True')
parser.add_argument('--keras', action='store_true', help='TF: use Keras')
parser.add_argument('--optimize', action='store_true', help='TorchScript: optimize for mobile')
parser.add_argument('--int8', action='store_true', help='CoreML/TF/OpenVINO INT8 quantization')
parser.add_argument('--int8', action='store_true', help='CoreML/TF INT8 quantization')
parser.add_argument('--dynamic', action='store_true', help='ONNX/TF/TensorRT: dynamic axes')
parser.add_argument('--simplify', action='store_true', help='ONNX: simplify model')
parser.add_argument('--opset', type=int, default=17, help='ONNX: opset version')


@ -1,14 +1,13 @@
import pygetwindow
import time
import bettercam
from typing import Union
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import screenShotHeight, screenShotWidth
from config import aaRightShift, screenShotHeight, screenShotWidth
def gameSelection() -> (bettercam.BetterCam, int, Union[int, None]):
def gameSelection() -> (bettercam.BetterCam, int, int | None):
# Selecting the correct game window
try:
videoGameWindows = pygetwindow.getAllWindows()
@ -56,8 +55,18 @@ def gameSelection() -> (bettercam.BetterCam, int, Union[int, None]):
return None
print("Successfully activated the game window...")
# Setting up the screen shots
sctArea: dict[str, int] = {"mon": 1, "top": videoGameWindow.top + (videoGameWindow.height - screenShotHeight) // 2,
"left": aaRightShift + ((videoGameWindow.left + videoGameWindow.right) // 2) - (screenShotWidth // 2),
"width": screenShotWidth,
"height": screenShotHeight}
#! Uncomment if you want to view the entire screen
# sctArea = {"mon": 1, "top": 0, "left": 0, "width": 1920, "height": 1080}
# Starting the screenshotting engine
left = ((videoGameWindow.left + videoGameWindow.right) // 2) - (screenShotWidth // 2)
left = aaRightShift + \
((videoGameWindow.left + videoGameWindow.right) // 2) - (screenShotWidth // 2)
top = videoGameWindow.top + \
(videoGameWindow.height - screenShotHeight) // 2
right, bottom = left + screenShotWidth, top + screenShotHeight
@ -65,10 +74,8 @@ def gameSelection() -> (bettercam.BetterCam, int, Union[int, None]):
region: tuple = (left, top, right, bottom)
# Calculating the center Autoaim box
cWidth: int = screenShotWidth // 2
cHeight: int = screenShotHeight // 2
print(region)
cWidth: int = sctArea["width"] / 2
cHeight: int = sctArea["height"] / 2
camera = bettercam.create(region=region, output_color="BGRA", max_buffer_len=512)
if camera is None:

Binary file not shown (image, 3.8 KiB).

16 main.py

@ -11,7 +11,7 @@ from utils.general import (cv2, non_max_suppression, xyxy2xywh)
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, useMask, maskWidth, maskHeight, aaQuitKey, screenShotHeight, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen
from config import aaMovementAmp, aaRightShift, aaQuitKey, screenShotHeight, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen
import gameSelection
def main():
@ -41,16 +41,6 @@ def main():
# Getting Frame
npImg = np.array(camera.get_latest_frame())
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[-maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[-maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
# Normalizing Data
im = torch.from_numpy(npImg)
if im.shape[2] == 4:
@ -109,7 +99,7 @@ def main():
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x
xMid = targets.iloc[0].current_mid_x + aaRightShift
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
@ -176,5 +166,5 @@ if __name__ == "__main__":
except Exception as e:
import traceback
traceback.print_exception(e)
print("ERROR: " + str(e))
print(str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")


@ -1,5 +1,6 @@
import onnxruntime as ort
import numpy as np
import cupy as cp
import gc
import numpy as np
import cv2
@ -13,7 +14,7 @@ import torch
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, useMask, maskHeight, maskWidth, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, onnxChoice, centerOfScreen
from config import aaMovementAmp, aaRightShift, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, onnxChoice, centerOfScreen
import gameSelection
def main():
@ -31,7 +32,6 @@ def main():
elif onnxChoice == 2:
onnxProvider = "DmlExecutionProvider"
elif onnxChoice == 3:
import cupy as cp
onnxProvider = "CUDAExecutionProvider"
so = ort.SessionOptions()
@ -49,16 +49,6 @@ def main():
# Getting Frame
npImg = np.array(camera.get_latest_frame())
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[-maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[-maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
# If Nvidia, do this
if onnxChoice == 3:
# Normalizing Data
@ -132,7 +122,7 @@ def main():
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x
xMid = targets.iloc[0].current_mid_x + aaRightShift
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
@ -199,5 +189,5 @@ if __name__ == "__main__":
except Exception as e:
import traceback
traceback.print_exception(e)
print("ERROR: " + str(e))
print(str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")


@ -1,3 +1,4 @@
from unittest import result
import torch
import numpy as np
import cv2
@ -12,7 +13,7 @@ import cupy as cp
# Could be done with
# from config import *
# But we are writing it out for clarity for new devs
from config import aaMovementAmp, useMask, maskHeight, maskWidth, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen, screenShotWidth
from config import aaMovementAmp, aaRightShift, aaQuitKey, confidence, headshot_mode, cpsDisplay, visuals, centerOfScreen
import gameSelection
def main():
@ -31,7 +32,7 @@ def main():
# Used for colors drawn on bounding boxes
COLORS = np.random.uniform(0, 255, size=(1500, 3))
# Main loop Quit if exit key is pressed
# Main loop Quit if Q is pressed
last_mid_coord = None
with torch.no_grad():
while win32api.GetAsyncKeyState(ord(aaQuitKey)) == 0:
@ -40,17 +41,6 @@ def main():
if npImg.shape[3] == 4:
# If the image has an alpha channel, remove it
npImg = npImg[:, :, :, :3]
from config import maskSide # "temporary" workaround for bad syntax
if useMask:
maskSide = maskSide.lower()
if maskSide == "right":
npImg[:, -maskHeight:, -maskWidth:, :] = 0
elif maskSide == "left":
npImg[:, -maskHeight:, :maskWidth, :] = 0
else:
raise Exception('ERROR: Invalid maskSide! Please use "left" or "right"')
im = npImg / 255
im = im.astype(cp.half)
@ -100,7 +90,7 @@ def main():
targets.sort_values(by="dist", ascending=False)
# Take the first person that shows up in the dataframe (Recall that we sort based on Euclidean distance)
xMid = targets.iloc[0].current_mid_x
xMid = targets.iloc[0].current_mid_x + aaRightShift
yMid = targets.iloc[0].current_mid_y
box_height = targets.iloc[0].height
@ -167,5 +157,5 @@ if __name__ == "__main__":
except Exception as e:
import traceback
traceback.print_exception(e)
print("ERROR: " + str(e))
print(str(e))
print("Ask @Wonder for help in our Discord in the #ai-aimbot channel ONLY: https://discord.gg/rootkitorg")


@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
Common modules
"""
@ -24,24 +24,12 @@ import torch.nn as nn
from PIL import Image
from torch.cuda import amp
# Import 'ultralytics' package or install it if missing
try:
import ultralytics
assert hasattr(ultralytics, '__version__') # verify package is not directory
except (ImportError, AssertionError):
import os
os.system('pip install -U ultralytics')
import ultralytics
from ultralytics.utils.plotting import Annotator, colors, save_one_box
from utils import TryExcept
from utils.dataloaders import exif_transpose, letterbox
from utils.general import (LOGGER, ROOT, Profile, check_requirements, check_suffix, check_version, colorstr,
increment_path, is_jupyter, make_divisible, non_max_suppression, scale_boxes, xywh2xyxy,
xyxy2xywh, yaml_load)
from utils.plots import Annotator, colors, save_one_box
from utils.torch_utils import copy_attr, smart_inference_mode
@ -345,7 +333,7 @@ class DetectMultiBackend(nn.Module):
super().__init__()
w = str(weights[0] if isinstance(weights, list) else weights)
pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle, triton = self._model_type(w)
fp16 &= pt or jit or onnx or engine or triton # FP16
fp16 &= pt or jit or onnx or engine # FP16
nhwc = coreml or saved_model or pb or tflite or edgetpu # BHWC formats (vs torch BCWH)
stride = 32 # default stride
cuda = torch.cuda.is_available() and device.type != 'cpu' # use CUDA
@ -365,8 +353,7 @@ class DetectMultiBackend(nn.Module):
model.half() if fp16 else model.float()
if extra_files['config.txt']: # load metadata dict
d = json.loads(extra_files['config.txt'],
object_hook=lambda d: {
int(k) if k.isdigit() else k: v
object_hook=lambda d: {int(k) if k.isdigit() else k: v
for k, v in d.items()})
stride, names = int(d['stride']), d['names']
elif dnn: # ONNX OpenCV DNN
@ -385,18 +372,18 @@ class DetectMultiBackend(nn.Module):
stride, names = int(meta['stride']), eval(meta['names'])
elif xml: # OpenVINO
LOGGER.info(f'Loading {w} for OpenVINO inference...')
check_requirements('openvino>=2023.0') # requires openvino-dev: https://pypi.org/project/openvino-dev/
check_requirements('openvino') # requires openvino-dev: https://pypi.org/project/openvino-dev/
from openvino.runtime import Core, Layout, get_batch
core = Core()
ie = Core()
if not Path(w).is_file(): # if not *.xml
w = next(Path(w).glob('*.xml')) # get *.xml file from *_openvino_model dir
ov_model = core.read_model(model=w, weights=Path(w).with_suffix('.bin'))
if ov_model.get_parameters()[0].get_layout().empty:
ov_model.get_parameters()[0].set_layout(Layout('NCHW'))
batch_dim = get_batch(ov_model)
network = ie.read_model(model=w, weights=Path(w).with_suffix('.bin'))
if network.get_parameters()[0].get_layout().empty:
network.get_parameters()[0].set_layout(Layout('NCHW'))
batch_dim = get_batch(network)
if batch_dim.is_static:
batch_size = batch_dim.get_length()
ov_compiled_model = core.compile_model(ov_model, device_name='AUTO') # AUTO selects best available device
executable_network = ie.compile_model(network, device_name='CPU') # device_name="MYRIAD" for Intel NCS2
stride, names = self._load_metadata(Path(w).with_suffix('.yaml')) # load metadata
elif engine: # TensorRT
LOGGER.info(f'Loading {w} for TensorRT inference...')
@ -536,7 +523,7 @@ class DetectMultiBackend(nn.Module):
y = self.session.run(self.output_names, {self.session.get_inputs()[0].name: im})
elif self.xml: # OpenVINO
im = im.cpu().numpy() # FP32
y = list(self.ov_compiled_model(im).values())
y = list(self.executable_network([im]).values())
elif self.engine: # TensorRT
if self.dynamic and im.shape != self.bindings['images'].shape:
i = self.model.get_binding_index('images')
@ -553,7 +540,7 @@ class DetectMultiBackend(nn.Module):
elif self.coreml: # CoreML
im = im.cpu().numpy()
im = Image.fromarray((im[0] * 255).astype('uint8'))
# im = im.resize((192, 320), Image.BILINEAR)
# im = im.resize((192, 320), Image.ANTIALIAS)
y = self.model.predict({'image': im}) # coordinates are xywh normalized
if 'confidence' in y:
box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]]) # xyxy pixels


@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
Experimental modules
"""
@ -87,11 +87,11 @@ def attempt_load(weights, device=None, inplace=True, fuse=True):
model.append(ckpt.fuse().eval() if fuse and hasattr(ckpt, 'fuse') else ckpt.eval()) # model in eval mode
# Module updates
# Module compatibility updates
for m in model.modules():
t = type(m)
if t in (nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model):
m.inplace = inplace
m.inplace = inplace # torch 1.7.0 compatibility
if t is Detect and not isinstance(m.anchor_grid, list):
delattr(m, 'anchor_grid')
setattr(m, 'anchor_grid', [torch.zeros(1)] * m.nl)


@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Default anchors for COCO data

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes

(The same one-line license-header change, AGPL-3.0 to GPL-3.0, repeats verbatim across many more YOLOv5 model configuration YAML files in this diff.)

@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
TensorFlow, Keras and TFLite versions of YOLOv5
Authored by https://github.com/zldrobit in PR https://github.com/ultralytics/yolov5/pull/1127
@ -310,7 +310,7 @@ class TFDetect(keras.layers.Layer):
y = tf.concat([xy, wh, tf.sigmoid(y[..., 4:5 + self.nc]), y[..., 5 + self.nc:]], -1)
z.append(tf.reshape(y, [-1, self.na * ny * nx, self.no]))
return tf.transpose(x, [0, 2, 1, 3]) if self.training else (tf.concat(z, 1), )
return tf.transpose(x, [0, 2, 1, 3]) if self.training else (tf.concat(z, 1),)
@staticmethod
def _make_grid(nx=20, ny=20):
@ -486,7 +486,7 @@ class TFModel:
iou_thres,
conf_thres,
clip_boxes=False)
return (nms, )
return (nms,)
return x # output [1,6300,85] = [xywh, conf, class0, class1, ...]
# x = x[0] # [x(1,6300,85), ...] to x(6300,85)
# xywh = x[..., :4] # x(6300,4) boxes


@ -1,4 +1,4 @@
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
YOLO-specific modules
@ -21,8 +21,8 @@ if str(ROOT) not in sys.path:
if platform.system() != 'Windows':
ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
from models.common import * # noqa
from models.experimental import * # noqa
from models.common import *
from models.experimental import *
from utils.autoanchor import check_anchor_order
from utils.general import LOGGER, check_version, check_yaml, make_divisible, print_args
from utils.plots import feature_visualization
@ -76,7 +76,7 @@ class Detect(nn.Module):
y = torch.cat((xy, wh, conf), 4)
z.append(y.view(bs, self.na * nx * ny, self.no))
return x if self.training else (torch.cat(z, 1), ) if self.export else (torch.cat(z, 1), x)
return x if self.training else (torch.cat(z, 1),) if self.export else (torch.cat(z, 1), x)
def _make_grid(self, nx=20, ny=20, i=0, torch_1_10=check_version(torch.__version__, '1.10.0')):
d = self.anchors[i].device
@ -126,7 +126,7 @@ class BaseModel(nn.Module):
def _profile_one_layer(self, m, x, dt):
c = m == self.model[-1] # is final layer, copy input as inplace fix
o = thop.profile(m, inputs=(x.copy() if c else x, ), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs
o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs
t = time_sync()
for _ in range(10):
m(x.copy() if c else x)
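As an aside, a self-contained sketch of the thop-based FLOPs estimate used in the line above (the layer and input here are illustrative):

```python
import torch
import torch.nn as nn

try:
    import thop  # optional profiling dependency, as in the code above
except ImportError:
    thop = None

layer = nn.Conv2d(3, 16, kernel_size=3, padding=1)
x = torch.zeros(1, 3, 64, 64)

# thop.profile returns (MACs, params); doubling MACs and dividing by 1e9 gives GFLOPs.
flops = thop.profile(layer, inputs=(x,), verbose=False)[0] / 1e9 * 2 if thop else 0
print(f'{flops:.4f} GFLOPs')
```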

(The same AGPL-3.0 to GPL-3.0 license-header change also repeats verbatim for a further set of model configuration YAML files.)

@ -13,4 +13,4 @@ ipython
psutil
dxcam
onnxruntime_directml
bettercam
git+https://github.com/RootKit-Org/BetterCam


@ -330,7 +330,7 @@ def classify_albumentations(
if vflip > 0:
T += [A.VerticalFlip(p=vflip)]
if jitter > 0:
color_jitter = (float(jitter),) * 3 # repeat value for brightness, contrast, saturation, 0 hue
color_jitter = (float(jitter),) * 3 # repeat value for brightness, contrast, satuaration, 0 hue
T += [A.ColorJitter(*color_jitter, 0)]
else: # Use fixed crop for eval set (reproducibility)
T = [A.SmallestMaxSize(max_size=size), A.CenterCrop(height=size, width=size)]


@ -68,7 +68,7 @@ Run information streams from your environment to the W&B cloud console as you tr
You can leverage W&B artifacts and Tables integration to easily visualize and manage your datasets, models and training evaluations. Here are some quick examples to get you started.
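As a generic illustration of the underlying W&B API (not a line from this diff; the project name and metric are placeholders):

```python
import wandb  # assumes `pip install wandb` and `wandb login` have been run

run = wandb.init(project='YOLOv5')   # streams run information to the W&B console
run.log({'metrics/mAP_0.5': 0.42})   # placeholder metric name and value, for illustration only
run.finish()
```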
<details open>
<h3> 1: Train and Log Evaluation simultaneously </h3>
<h3> 1: Train and Log Evaluation simultaneousy </h3>
This is an extension of the previous section, but it will also run training after uploading the dataset. <b>This also generates an evaluation Table.</b>
Evaluation table compares your predictions and ground truths across the validation set for each epoch. It uses the references to the already uploaded datasets,
so no images will be uploaded from your system more than once.
@ -102,7 +102,7 @@ You can leverage W&B artifacts and Tables integration to easily visualize and ma
</details>
<h3> 4: Save model checkpoints as artifacts </h3>
To enable saving and versioning checkpoints of your experiment, pass `--save_period n` with the base command, where `n` represents checkpoint interval.
To enable saving and versioning checkpoints of your experiment, pass `--save_period n` with the base cammand, where `n` represents checkpoint interval.
You can also log both the dataset and model checkpoints simultaneously. If not passed, only the final model will be logged
<details>

Some files were not shown because too many files have changed in this diff.