Training a Custom Object Detector with Dlib & Making Gesture Controlled Applications

In this article, you will learn how to build Python-based, gesture-controlled applications using AI. We will guide you all the way with step-by-step instructions. I'm sure you will have loads of fun and learn many useful concepts following the tutorial.

Specifically, you will learn the following:

  • How to train a custom Hand Detector with Dlib.
  • How to cleverly automate the data collection & annotation step with image processing so we don’t have to label anything.
  • How to adapt normal PC applications, like games and video players, to be controlled via hand gestures.

Here’s a demo of what we’ll be building in this tutorial:

Excited yet? If so, keep reading.

Most of you are probably familiar with the dlib library, a popular computer vision library best known for landmark detection. But if you’re a longtime user of Dlib, you’ll know that the library is much more than that.

Dlib contains many interesting application-specific algorithms; for example, it includes methods for face recognition, tracking, landmark detection, and others. Of course, landmark detection itself can be used to build a variety of other applications, like face morphing, emotion recognition, facial manipulation, etc. You can already find plenty of examples of these online, so today I’m going to show you a lesser-known but really interesting capability of Dlib: I’m going to show you, step by step, how to train a custom object detector with Dlib.


Dlib contains a HOG + SVM based detection pipeline.

Note: OpenCV also contains a HOG + SVM detection pipeline, but personally speaking, I find the dlib implementation a lot cleaner, although the OpenCV version gives you more control over the parameters.

What is HOG and SVM?

HOG, or Histogram of Oriented Gradients, is a type of feature descriptor.

What is a feature descriptor?

Feature descriptors are vectors (arrays of numbers). These vectors may look ordinary to you, but to a computer they encode useful information about the image. You can think of a feature descriptor as a representation of an image (or an image patch) that captures useful information about the image’s content.

For example, a good feature descriptor of a person on a blue background will be quite similar to a feature descriptor of the same person on a different background. With such descriptors you can match images containing the same content, which lets you classify images, cluster similar ones, and more.

Before the rise of deep learning, we used feature descriptors to extract useful information from images. (They are still used today when deep learning is not an option.)

Now, HOG is one of the most powerful feature descriptors out there. Satya has written a great explanation of the details of the HOG feature descriptor; you can read it here.

With feature descriptors you get useful vectors, but you still need a machine learning model to make sense of those vectors and give you a prediction. This is where SVMs, or Support Vector Machines, come in.

SVM is a really strong ML classifier. You can read more about this algorithm here.

So if deep learning is not an option, HOG + SVM is the best machine learning approach you have.

So this is what our approach will look like:

By using HOG as the feature descriptor and SVM as the learning algorithm, we’ve got ourselves a robust ML image classifier.

But wait! Weren’t we going to build an object detector, one that also outputs the location (bounding box coordinates) of the detected class?

Yes, and it’s pretty easy to convert this classifier into a detector. All you need is a sliding window. If you don’t already know, a sliding window is exactly what the name suggests: a window that slides over the whole image. You can think of it as a kernel or a filter passing over the image. Take a look at the illustration below, in which the window moves over the image from left to right by a stride (an amount we set); after it reaches the end of a row, it moves down by the stride amount and returns to the start of the next row.

There’s one more thing you need to make it a complete detector: image pyramids. They make your detector scale-invariant, allowing the sliding window to detect your target object at different sizes. You can learn more about image pyramids here.
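
For intuition, here’s a minimal sketch of both ideas; the window size, stride, and scale factor below are illustrative values, not dlib’s internals:

```python
import cv2

def sliding_window(image, win_size=128, stride=32):
    """Yield (x, y, patch), scanning left to right, top to bottom."""
    h, w = image.shape[:2]
    for y in range(0, h - win_size + 1, stride):
        for x in range(0, w - win_size + 1, stride):
            yield x, y, image[y:y + win_size, x:x + win_size]

def pyramid(image, scale=1.5, min_size=128):
    """Yield progressively smaller copies of the image."""
    while min(image.shape[:2]) >= min_size:
        yield image
        image = cv2.resize(image, None, fx=1 / scale, fy=1 / scale)

# A detector feeds every window at every pyramid level to the HOG + SVM classifier.
```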

Why use this approach over a Deep Learning based Detector?

Yes, this is a valid question, since you’ll probably end up with a better detector using DL-based approaches, but the major benefit here is that this approach trains the detector in a few seconds, using just your CPU and a handful of data samples.


If you have seen the demo, then you’ll know that our intention in this tutorial is two-fold, so we can split the tutorial into two parts:

  • Part 1: Training a Custom Hand Detector with DLIB
  • Part 2: Integrating Gesture controls with Applications.

Let’s start coding the first part.

Part 1: Training a Custom Hand Detector with DLIB

This part can be split into the following steps:

  • Step 1: Data Generation & Automatic Annotation.
  • Step 2: Preprocessing Data.
  • Step 3: Display Images (Optional)
  • Step 4: Train the Detector.
  • Step 5: Save & Evaluate the Detector.
  • Step 6: Test the Trained Detector on Live Webcam.
  • Step 7: How to do Multi-Object Detection (Optional)

Let’s start by importing the required libraries.
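
A likely set of imports for this tutorial; the exact list depends on your setup, and pyautogui is only needed for Part 2:

```python
import os
import time

import cv2          # video capture, drawing, resizing
import dlib         # HOG + SVM detector training and inference
import numpy as np  # image and label arrays
import pyautogui    # programmatic key presses (Part 2)
```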

Step 1: Data Generation & Automatic Annotation.

Normally, when you’re training a hand detector, you need several images of the hand, and then you must annotate them, meaning you have to draw a bounding box over the hand in each image.

So now you have two options:

Option 1: Annotate Images Manually

Record a video of yourself in which you wave your hand, move it around, rotate it a bit, and so on, but don’t deform your hand (the palm should face the camera the whole time). After the recording, split the video into images and then download an annotation tool. You can install labelImg (a popular annotation tool) with just: pip install labelImg. After that, you have to annotate each image with a bounding box. Depending on the number of images, this could take several hours.

Option 2: Automate the Annotation Process

A smarter way to go about it is to automate the annotation process while you’re collecting the training images.

How are we going to do that?

Well, all you need is a sliding window; I’ve already explained above what a sliding window is.

What we’re going to do is put our hand inside the window and move it along whenever the window moves; we then save each image, and the window box becomes our annotated box.

This way, we automate the annotation process. How cool is that?

The script below does just that: it saves the images in a folder named training_images and appends the window box locations to a Python list.
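
Here’s a minimal sketch of such a collection script; the window size, stride, file naming, and the boxes.npy label file are assumptions for illustration, not necessarily the exact original code:

```python
import os
import cv2
import numpy as np

box_size, stride = 150, 2            # window edge length and per-frame shift
save_dir, label_file = 'training_images', 'boxes.npy'
clear_images = False                 # True wipes previously collected samples

os.makedirs(save_dir, exist_ok=True)
if clear_images:
    for name in os.listdir(save_dir):
        os.remove(os.path.join(save_dir, name))
    boxes = []
else:
    boxes = list(np.load(label_file)) if os.path.exists(label_file) else []

cap = cv2.VideoCapture(0)
x, y = 0, 0
counter = len(boxes)                 # continue numbering across runs

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    frame = cv2.flip(frame, 1)
    h, w = frame.shape[:2]

    # Save the frame; the window position is its annotation.
    cv2.imwrite(os.path.join(save_dir, str(counter) + '.png'), frame)
    boxes.append((x, y, x + box_size, y + box_size))
    counter += 1

    display = frame.copy()
    cv2.rectangle(display, (x, y), (x + box_size, y + box_size), (0, 255, 0), 2)
    cv2.imshow('Keep your palm inside the green box', display)
    if cv2.waitKey(30) == ord('q'):
        break

    # Slide the window: left to right, then down to the next row.
    x += stride
    if x + box_size > w:
        x, y = 0, y + stride
    if y + box_size > h:
        break                        # the window has covered the whole frame

np.save(label_file, boxes)           # persist the labels so a second run can append
cap.release()
cv2.destroyAllWindows()
```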

Note: The code above is structured so that when you run it a second time, it appends the new images to the previous ones. This is done so you can gather more samples in different places with different backgrounds and end up with a diverse dataset. You can delete all previous images by setting the clear_images variable to True.

Before we move forward, you should also know that the detector we’re training is not a full-fledged hand detector but actually a hand palm detector, because a HOG + SVM model is not robust enough to capture the deformations of an object like a hand. If we were training a deep learning based detector, this wouldn’t be much of an issue, but in our case, make sure you’re not collecting images of deformed hands in varying poses; keep the palm facing the camera.

Step 2: Preprocessing Data.

Before you start training, you need to load and lightly preprocess the data (images and labels) so it’s in the required format.

First, we extract all the image names from the images directory. Then we use each image’s index to look up its associated bounding box. The bounding box is converted to a dlib rectangle, and the image and its box are stored together in a dictionary in the format index: (image, bounding_box). At training time we’ll separate the images from the bounding boxes; for now, we keep them together.

Note: You could get away with reading all the images and labels directly into a list, but that’s bad practice: if you delete a single image from the directory after recording, the images and labels go out of sync. Ideally, you should remove any image you’re not happy with from the training_images directory before training.

Let’s also check the total number of images and boxes present.

Number of Images and Boxes Present: 148

Step 3: Display Images (Optional)

You can optionally display the images along with their bounding boxes; this way you can verify that the boxes were drawn properly.
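
For example, something like this (showing the first five samples is an arbitrary choice):

```python
import cv2

# Show a few samples with their recorded boxes to sanity-check the labels.
for index in list(data)[:5]:
    image, rect = data[index]
    canvas = image.copy()
    cv2.rectangle(canvas, (rect.left(), rect.top()),
                  (rect.right(), rect.bottom()), (0, 255, 0), 2)
    cv2.imshow('annotation check', canvas)
    cv2.waitKey(0)                   # press any key for the next sample
cv2.destroyAllWindows()
```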

Step 4: Train the Detector.

You can start training the detector by calling dlib.train_simple_object_detector and passing in a list of images and a list of associated dlib rectangles. First, we will extract the images and bounding box rectangles from our dictionary and then pass them to the training function.

Before you start training, you can also specify some training options.
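
A sketch of the training step, using the data dictionary from Step 2; the C value and the other option settings here are assumptions worth tuning:

```python
import time
import dlib

# dlib wants a list of images and, for each image, a list of rectangles.
images = [image for image, _ in data.values()]
rects = [[rect] for _, rect in data.values()]

# Hold out 20% of the samples for testing (the 80/20 split used below).
split = int(0.8 * len(images))

options = dlib.simple_object_detector_training_options()
options.C = 5                               # SVM regularization; an assumed value
options.num_threads = 4
options.be_verbose = True
options.add_left_right_image_flips = False  # set True to double the data with mirrored copies

start = time.time()
detector = dlib.train_simple_object_detector(images[:split], rects[:split], options)
print(f'Training Completed, Total Time taken: {time.time() - start:.2f} seconds')
```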

Training Completed, Total Time taken: 22.48 seconds

Step 5: Save & Evaluate the Detector

Save The Trained Detector

You should now save the detector so you don’t have to retrain it the next time you want to use it. The extension of the model file is .svm, as in Support Vector Machine.
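
Saving is a one-liner (the file name here is just an example):

```python
detector.save('hand_detector.svm')   # file name is illustrative
```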

Check the HOG Descriptor:

You can even inspect the final HOG descriptor using the code below; the learned filter should look something like the target object. After running this code, a window will pop up.
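
A sketch using dlib’s built-in image window:

```python
import dlib

# Display the learned HOG filter; it should loosely resemble a palm.
win = dlib.image_window()
win.set_image(detector)
dlib.hit_enter_to_continue()
```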

Check Training Metrics

You can call dlib.test_simple_object_detector() to test your model on the training data.
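
Using the same 80% split from the training step:

```python
print('Training Metrics:',
      dlib.test_simple_object_detector(images[:split], rects[:split], detector))
```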

Training Metrics: precision: 0.991379, recall: 0.974576, average precision: 0.974576

Check Testing Metrics:

Similarly, we can check the testing metrics using the remaining 20% of the data.
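
Again reusing the split from the training step:

```python
print('Testing Metrics:',
      dlib.test_simple_object_detector(images[split:], rects[split:], detector))
```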

Testing Metrics: precision: 1, recall: 0.933333, average precision: 0.933333

Train the Final Detector

We trained the model on 80% of the data. If you’re satisfied with the metrics above, you can now retrain the detector on 100% of the data.
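
Retraining reuses the same call, now on the full dataset (the final file name is illustrative):

```python
# Same training call as before, now on all the samples.
detector = dlib.train_simple_object_detector(images, rects, options)
detector.save('hand_detector_final.svm')   # file name is illustrative
```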

One thing you may have noticed is that the precision of the model is pretty high. This is really good, since we don’t want any false positives when we gesture-control games.

Step 6: Test the Trained Detector on Live Webcam.

Finally, let’s test our detector by running inference with it on a live webcam. You can load the detector by calling detector = dlib.simple_object_detector(filename). After loading it, you can pass in a frame with detector(frame), and it will return the bounding box location of the hand if one is detected.
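
A sketch of the inference loop (the detector file name is illustrative; the downsizing is explained right after):

```python
import cv2
import dlib

detector = dlib.simple_object_detector('hand_detector_final.svm')
scale_factor = 2.0                  # detect on a half-size frame for speed

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    frame = cv2.flip(frame, 1)

    # Run detection on the downsized copy, then map boxes back to full size.
    small = cv2.resize(frame, None, fx=1 / scale_factor, fy=1 / scale_factor)
    for rect in detector(small):
        x1, y1 = int(rect.left() * scale_factor), int(rect.top() * scale_factor)
        x2, y2 = int(rect.right() * scale_factor), int(rect.bottom() * scale_factor)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

    cv2.imshow('Hand Detector', frame)
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```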

If you noticed above, I actually downsized the frame by a factor of 2 before passing it to the detector.


Why did I do that?

Well, remember that our HOG + SVM model is actually a classifier; the things that turn it into a detector are sliding windows and image pyramids. We’ve already seen that the bigger an image is, the more time a sliding window takes to cover its rows and columns, so it’s safe to say that our detector will run faster on a smaller frame.

This is the reason we downsize the frames. If you downsize too much, though, the detector’s accuracy will suffer, so you’ll need to figure out the right scaling factor. In my case, a factor of 2.0, which halves each dimension, gives the best balance of speed and accuracy.

After you have resized the image and performed detection, you must rescale the detected coordinates back to the original image.


Step 7: Using Multiple Object Detectors Together

Now, if you wanted to train a single detector on multiple classes, not just hands, then unfortunately you can’t do that with this pipeline. The only way is to train multiple detectors and run them together, which will of course reduce your speed.

There is a silver lining: dlib comes with a function called dlib.fhog_object_detector.run_multiple() that lets you run multiple object detectors simultaneously in an efficient way. Of course, the more detectors you add, the slower your code gets. This method also gives you a confidence score for each detection.

Here’s an example of me using two detectors, one trained on my hand and one on my face.
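
A sketch of how run_multiple() is called; the detector file names and the test image path are assumptions:

```python
import cv2
import dlib

# Load both trained detectors (file names are illustrative).
detectors = [dlib.fhog_object_detector('hand_detector_final.svm'),
             dlib.fhog_object_detector('face_detector.svm')]
names = ['Hand', 'Face']

frame = cv2.imread('test_frame.png')    # any test image

# One call runs every detector and reports which one fired for each box.
rects, confidences, detector_idxs = dlib.fhog_object_detector.run_multiple(
    detectors, frame, upsample_num_times=0, adjust_threshold=0.0)

for rect, score, idx in zip(rects, confidences, detector_idxs):
    print(f'{names[idx]} detected with confidence {score:.2f} at {rect}')
```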

Part 2: Integrating Gesture controls with Applications.

Now that we’ve learned how to train single and multiple object detectors, let’s move on to the fun part, where we automate a game and a video player via hand gestures.

For this tutorial, I’ll be controlling the following two applications:

First, I’ll control the VLC media player to pause/play the video or skip forward/backward. Then, using the same code, I’ll control a clone of the famous Temple Run game.

Of course, you can feel free to control other applications too.

So how are we going to accomplish this?

Well, it’s pretty simple: we are going to use the pyautogui library, which lets you control the keyboard and mouse cursor programmatically. You can learn more about the library here.
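
For a quick taste of the API:

```python
import pyautogui

pyautogui.press('left')    # tap the left arrow key once
pyautogui.keyDown('up')    # hold a key down...
pyautogui.keyUp('up')      # ...and release it
```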

So we’ll write a program that presses the left arrow key when we move our hand to the left side of the screen, and the right arrow key when we move it to the right.

Similarly, if our hand is closer to the screen, we’ll want to press the Up key, and if it’s farther from the screen, we’ll want to press the Down key.

This can easily be accomplished by measuring the size of the hand’s bounding box, since it grows and shrinks as the hand moves closer to or farther from the camera.

Let’s start with the code.

Tuning Our Distance Thresholds:

Before we start controlling the game, we need to visualize how the buttons will be triggered and check that our defined thresholds are correct. This script draws lines at the thresholds and displays the button that would be pressed. If the default thresholds don’t work for you, change them, especially size_up_th and size_down_th.
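
A sketch of that visualization script; the size_up_th and size_down_th names come from the text, while left_th, right_th, and all the numeric values are assumptions to tune for your own camera:

```python
import cv2
import dlib

detector = dlib.simple_object_detector('hand_detector_final.svm')
scale_factor = 2.0

# x-boundaries for left/right, and box sizes that trigger up (close) / down (far).
left_th, right_th = 200, 440
size_up_th, size_down_th = 210, 160

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    frame = cv2.flip(frame, 1)
    h, w = frame.shape[:2]
    key = 'none'

    small = cv2.resize(frame, None, fx=1 / scale_factor, fy=1 / scale_factor)
    detections = detector(small)
    if detections:
        rect = detections[0]
        x1, y1 = int(rect.left() * scale_factor), int(rect.top() * scale_factor)
        x2, y2 = int(rect.right() * scale_factor), int(rect.bottom() * scale_factor)
        center_x, size = (x1 + x2) // 2, x2 - x1

        if center_x < left_th:
            key = 'left'
        elif center_x > right_th:
            key = 'right'
        elif size > size_up_th:
            key = 'up'
        elif size < size_down_th:
            key = 'down'
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

    # Draw the threshold lines and the key that would fire.
    cv2.line(frame, (left_th, 0), (left_th, h), (255, 0, 0), 2)
    cv2.line(frame, (right_th, 0), (right_th, h), (255, 0, 0), 2)
    cv2.putText(frame, 'Key: ' + key, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
    cv2.imshow('Threshold tuning', frame)
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```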

Make sure you’re satisfied with the results above and that the keys are triggered correctly based on the location of your hand; if not, change the thresholds. If you’re having trouble detecting the hand itself, you’ll have to retrain the detector with more examples.

Now that we have configured our thresholds, we’ll write a script that presses the required button based on them.

Main Function

This is our main script, which controls the keyboard keys based on the hand’s movement. With this script, I’ve controlled both the Temple Run game and the VLC media player.

When controlling the media player, I set the variable player = True.
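
A condensed sketch of the main loop, reusing the threshold logic from above; the player behavior (a single tap per gesture for the media player, repeated presses for games) is an assumption, and the video-recording part of my full version is omitted here:

```python
import cv2
import dlib
import pyautogui

detector = dlib.simple_object_detector('hand_detector_final.svm')
scale_factor = 2.0
player = False                      # True when driving a media player like VLC
left_th, right_th = 200, 440        # same thresholds as the tuning script
size_up_th, size_down_th = 210, 160
last_key = 'none'

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    frame = cv2.flip(frame, 1)
    key = 'none'

    small = cv2.resize(frame, None, fx=1 / scale_factor, fy=1 / scale_factor)
    detections = detector(small)
    if detections:
        rect = detections[0]
        center_x = int((rect.left() + rect.right()) * scale_factor) // 2
        size = int((rect.right() - rect.left()) * scale_factor)
        if center_x < left_th:
            key = 'left'
        elif center_x > right_th:
            key = 'right'
        elif size > size_up_th:
            key = 'up'
        elif size < size_down_th:
            key = 'down'

    if key != 'none':
        if player:
            if key != last_key:     # one tap per gesture is enough for a media player
                pyautogui.press(key)
        else:
            pyautogui.press(key)    # games need repeated presses
    last_key = key

    cv2.imshow('Gesture control', frame)
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```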

In the full version, I also save the output as a video recording; before saving, I attach the camera output to the left corner of the screen. This is done just for the demo and doesn’t impact the performance.

Now let’s run this script on the Temple Run game; make sure to set player = False. I’ve included a link to the game in the source code.

One thing I will admit is that playing the Temple Run game with this simple approach was hard. Some games are really time-sensitive, and hand gestures just can’t match the effectiveness of pressing keys in rapid succession with your fingers. Plus, your hand gets tired really fast. The only advantage I see is that it looks cool.

Summary:

So today we learned how easy it is to use dlib to train a simple object detector, and not only that, we also learned how to automate the tiresome data collection and annotation process. The sliding window technique I used is just one of many you can cook up once you know enough image processing. If you want to fully master image processing and computer vision fundamentals, then OpenCV’s CV 1 course is a must-take.


One drawback of the data collection and annotation method I used is that the final detector overfits to the training background, so it performs poorly on other backgrounds. If you want a background-agnostic model, run the data generation script multiple times on different backgrounds, making sure to set the clear_images variable to False after the first run.

We also learned how to use multiple detectors efficiently with dlib, and then we learned to gesture-control some applications. Now you can get really creative and control all sorts of other applications or games; I would really love to see what you can do with this 🙂

