# Chessbot

A Raspberry Pi that uses a camera to look at the board and tells you its move on a monitor.

## Installation

1. Have a Raspberry Pi (preferably a Pi 5) with a Raspberry Pi camera module and a monitor hooked up. Have Raspberry Pi OS with the desktop installed.
2. Update the Raspberry Pi. This step may be required if your camera feed is all messed up. (I wasted an entire day on this.)
3. Use Python 3.11; at the time of writing, some of the packages used are not compatible with Python 3.12.
4. Clone this repo.
5. Install Stockfish, either by downloading it from the website or by running `sudo apt install stockfish` on the Pi. (Although the packaged version is several major versions behind, it is better than compiling it yourself.)
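Since the dependencies currently require Python 3.11, a small startup guard can catch the wrong interpreter early. This is a hypothetical helper, not something in the repo:

```python
import sys

def check_python_version(required=(3, 11)):
    """Return True when the interpreter's major.minor matches `required`
    (the project currently pins Python 3.11)."""
    return sys.version_info[:2] == required

# Example: warn (rather than crash) on a mismatched interpreter.
if not check_python_version():
    print(f"Warning: Python 3.11 expected, running "
          f"{sys.version_info.major}.{sys.version_info.minor}")
```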

There is an (untested) Makefile that performs the rest of the installation steps. On the Pi, run `make install-for-pi`; on Windows, run `make install-for-windows`. (If you don't have `make` installed, you can run the commands manually.)

## Model training

The bot requires two models: one for segmenting the board and another for classifying the pieces found on the squares. This requires two Roboflow datasets and two trained ML models.
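Together, the two models ultimately produce a label for each of the 64 squares, and turning that grid into a FEN piece-placement string (the format a chess engine like Stockfish consumes) is plain bookkeeping. A minimal sketch, assuming the classifier emits labels like `"wK"` (white king), `"bp"` (black pawn), or `None` for an empty square:

```python
def grid_to_fen(grid):
    """Convert an 8x8 grid of piece labels into the piece-placement
    field of a FEN string. grid[0] is rank 8 (top of the board)."""
    ranks = []
    for row in grid:
        fen_row, empty = "", 0
        for square in row:
            if square is None:
                empty += 1          # count consecutive empty squares
            else:
                if empty:
                    fen_row += str(empty)
                    empty = 0
                color, piece = square[0], square[1]
                # FEN: white pieces uppercase, black pieces lowercase
                fen_row += piece.upper() if color == "w" else piece.lower()
        if empty:
            fen_row += str(empty)
        ranks.append(fen_row)
    return "/".join(ranks)

start = [
    ["br", "bn", "bb", "bq", "bk", "bb", "bn", "br"],
    ["bp"] * 8,
    [None] * 8, [None] * 8, [None] * 8, [None] * 8,
    ["wp"] * 8,
    ["wr", "wn", "wb", "wq", "wk", "wb", "wn", "wr"],
]
print(grid_to_fen(start))  # rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR
```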

The camera should face the board down from above, centered on the four central squares. See the board segmentation dataset for the setup I used.

### Training board segmentation

View dataset on Roboflow View dataset on Kaggle Train on Colab

After configuring your dataset on Roboflow, use `gather_board_images.py` to gather board images, which get uploaded to Roboflow automatically. Afterward, create a dataset version and train with `train_board_segmentation.ipynb`. Download `best.pt` to the `src/models` directory and rename it to `board_segmentation_best.pt`.

Afterward, run `export_board_segmentation.py`. This will export the model to NCNN format. To preview, run `test_board_segmentation.py`.
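Once the segmentation model has located the board, the 64 square regions follow from its bounding box with simple arithmetic. A hypothetical helper (the repo may do this differently), assuming an axis-aligned box:

```python
def square_boxes(x0, y0, x1, y1):
    """Split the board bounding box (x0, y0)-(x1, y1) into 64 square
    boxes, returned row by row starting from the top-left corner."""
    w, h = (x1 - x0) / 8, (y1 - y0) / 8
    boxes = []
    for row in range(8):
        for col in range(8):
            boxes.append((x0 + col * w, y0 + row * h,
                          x0 + (col + 1) * w, y0 + (row + 1) * h))
    return boxes

boxes = square_boxes(0, 0, 800, 800)
print(len(boxes))  # 64 boxes, each 100x100 for an 800x800 board
```

Each box can then be cropped out of the frame and fed to the piece classifier.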

### Training piece classification

View dataset on Roboflow View dataset on Kaggle Train on Colab

After configuring your dataset on Roboflow, use `gather_piece_images.py` or `gather_piece_images2.py` to gather piece images. Upload the directory to Roboflow. Afterward, create a dataset version and train with `train_piece_classification.ipynb`. Download `best.pt` to the `src/models` directory and rename it to `piece_classification_best.pt`.

Afterward, run `export_piece_classification.py`. This will export the model to NCNN format. To preview, run `test_piece_classification.py`.

## Usage

WIP

```shell
python src/main.py --verbose
```

## Debugging

Check the Makefile for more `run-*` commands for development.

If you don't or can't use a camera, use the `--debug-use-image-dir` flag to read a directory of static images instead, so you can run the bot on any computer. (The only Raspberry Pi-specific code is the camera access, which is replaced with image-reading code.)

```shell
python src/main.py --debug-use-image-dir "test/scholars mate game white pov" --verbose
```

The images should be 800x606; see `test/starting pos white pov` for an example. The images are ordered by name, so it's best to name them with zero-padded numbers (e.g. `01.png`, `02.png`, ...). Use `w` or `d` to advance to the next image, `s` or `a` to go back to the previous image, and `q` to quit.
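The zero-padding matters because the names are compared lexicographically: without it, `10.png` sorts before `2.png`. A quick illustration:

```python
# Unpadded names sort in the wrong order...
names = ["10.png", "2.png", "1.png"]
print(sorted(names))   # ['1.png', '10.png', '2.png'] -- 10 before 2!

# ...zero-padded names sort as intended.
padded = ["10.png", "02.png", "01.png"]
print(sorted(padded))  # ['01.png', '02.png', '10.png']
```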

You can use the `--debug-play-image-dir` flag alongside `--debug-use-image-dir` to play the specified directory of images like a slideshow, to save some key pressing. Press any key to stop the slideshow.

You can capture your own images on the Raspberry Pi with a Picamera using `test_camera.py` and transfer them to your computer:

```shell
python src/train/test_camera.py -d "/home/pi/Chessbot/test/new test"
```

Press `c` to save a numbered image to the directory specified with `-d`. (Numbering starts at `001.jpg`, then `002.jpg`, etc.)
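The numbering scheme amounts to finding the next free zero-padded name in the directory. A hypothetical reimplementation (not the script's actual code):

```python
from pathlib import Path

def next_image_name(directory, ext=".jpg"):
    """Return the next free zero-padded filename (001.jpg, 002.jpg, ...)
    in `directory`, matching the scheme test_camera.py uses."""
    directory = Path(directory)
    existing = [int(p.stem) for p in directory.glob(f"*{ext}")
                if p.stem.isdigit()]
    return directory / f"{max(existing, default=0) + 1:03d}{ext}"
```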