Project Description:
In this project we use concepts of AI and image processing to develop an Air Keyboard.
The source code repository is linked at the end of the article. You need to generate your own dataset (the process is covered in this article), then train the model and run the Python code.
Libraries needed:
- numpy
- opencv-python (imported as cv2)
- mediapipe
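The three libraries can be installed with pip. A sketch of the install commands (exact package versions are not specified by the article, so any recent versions are assumed to work):

```shell
# numpy and mediapipe are installed under their own names;
# the cv2 module is provided by the opencv-python package
pip install numpy opencv-python mediapipe
```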
How to run the project:
In short: first create a dataset with generate_dataset.py, storing the images of each drawn letter in its respective folder; then train the model with train.py; finally, test the model by drawing letters in the air with test.py.
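The overall workflow can be sketched as the following command sequence (script names are taken from the article; any command-line arguments the scripts may accept are not covered here):

```shell
python generate_dataset.py   # record drawings of each letter
python train.py              # train the model on the saved images
python test.py               # recognize letters drawn in the air
```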
1) Clone the GitHub repository to your local system.
2) Create a folder for each letter of the alphabet inside a separate parent folder (e.g. a, b, c, …, z).
3) In generate_dataset.py, change the path to your alphabet folder.
(Generate at least 50 images for each letter.)
4) While generate_dataset.py is running, press 'A' to clear the screen and 'S' to save the drawing in the corresponding folder.
(When the camera opens, draw the letter with your finger, then press 'S' to save it.)
5) After you have generated the full dataset, run train.py and then test.py to output text as you draw gestures on the screen.
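Steps 2 and 4 amount to a simple folder layout and save scheme. A minimal sketch in Python (the helper names and the 0.png, 1.png numbering convention are illustrative assumptions, not taken from the repository):

```python
import os
import string

def make_letter_folders(root):
    """Create one sub-folder per letter (a-z) under `root`, as in step 2."""
    for letter in string.ascii_lowercase:
        os.makedirs(os.path.join(root, letter), exist_ok=True)

def next_image_path(root, letter):
    """Return a numbered path (0.png, 1.png, ...) for the next saved
    drawing of `letter`, so repeated saves never overwrite each other."""
    folder = os.path.join(root, letter)
    count = sum(1 for name in os.listdir(folder) if name.endswith(".png"))
    return os.path.join(folder, f"{count}.png")
```

Inside generate_dataset.py, the 'S' key handler would presumably write the current canvas to such a path, for example with cv2.imwrite(next_image_path(root, letter), canvas); the exact variable names in the repository may differ.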