We're super excited that our latest video made the front page of Reddit this week! The project shows you how to build a smart security camera. The camera uses object detection (with OpenCV) to send you an email whenever it spots an intruder, and it also streams live video from a webcam so you can check in while you're away.
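The core idea of the alert is simple: compare consecutive frames and fire when enough pixels change. The project itself uses OpenCV's detection pipeline; the sketch below is just a minimal frame-differencing check in NumPy, and the names (`frame_has_motion`, `DIFF_THRESHOLD`, `MIN_CHANGED_FRACTION`) are illustrative, not from the original code.

```python
import numpy as np

DIFF_THRESHOLD = 25          # per-pixel intensity change to count as "changed"
MIN_CHANGED_FRACTION = 0.01  # fraction of the frame that must change to trigger

def frame_has_motion(prev_gray, curr_gray):
    """Return True when two grayscale frames differ enough to suggest motion."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    changed = (diff > DIFF_THRESHOLD).mean()
    return changed > MIN_CHANGED_FRACTION

if __name__ == "__main__":
    quiet = np.zeros((120, 160), dtype=np.uint8)
    intruder = quiet.copy()
    intruder[40:80, 60:100] = 255  # a bright "intruder" blob appears
    print(frame_has_motion(quiet, quiet))     # False
    print(frame_has_motion(quiet, intruder))  # True
```

In the real project, a positive check is what would trigger the email alert (e.g. via Python's `smtplib`).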
View the original post here. As always, thanks for all the support!
I just got a chance to upload a new coding tutorial on our YouTube channel. It explores a couple of interesting concepts. I use Google's TensorFlow machine learning framework to develop a simple image classifier using object recognition and neural networks. The model developed in this tutorial can be trained without much background knowledge of TensorFlow and deployed to other devices like Android, iOS, and even a Raspberry Pi.
This machine learning model is surprisingly accurate and can be used for a bunch of different applications. Let us know what you create!
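The tutorial covers the TensorFlow training pipeline itself, which isn't reproduced here. As a sketch of what happens at inference time, this snippet converts a model's raw scores (logits) into probabilities with a softmax and picks the top label; the class names and scores below are made up for illustration.

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_label(labels, logits):
    """Return the highest-probability label and its probability."""
    probs = softmax(logits)
    best = max(range(len(labels)), key=lambda i: probs[i])
    return labels[best], probs[best]

labels = ["cat", "dog", "bird"]  # hypothetical classes
logits = [2.0, 0.5, 0.1]         # hypothetical model output
name, p = top_label(labels, logits)
print(name)  # cat
```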
We recently released our autonomous cooler video on the YouTube channel. The cooler follows you by streaming GPS coordinates from your smartphone and comparing them to its own location to determine which direction to move. For a future project, we want to extend this code to create a fully autonomous RC car. Our plan is to use more advanced technologies like LIDAR and computer vision to navigate more accurately.
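The "which way to move" step boils down to computing a compass bearing from the cooler's GPS fix to the phone's. This is the standard forward-azimuth formula; the function name and coordinates are illustrative, and the actual project code may do this differently.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees (0 = north, 90 = east) from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

# Cooler at the origin, phone due east: the bearing should be 90 degrees.
print(round(bearing_deg(0.0, 0.0, 0.0, 1.0)))  # 90
```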
In my search for LIDAR modules, I found an interesting Kickstarter project that aims to make LIDAR sensors more accessible. The module, called Sweep, has built-in 360-degree panning, which is an amazing feature at its $349.00 price point.
With a range of 40 meters and 1000 samples/second, Sweep boasts some pretty impressive specs. The team at Scanse also built out a bunch of libraries for a few of the major consumer level single-chip boards. They're not on sale yet, but preorders have already begun. You can check out more details on their website.
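A sensor like this reports each sample as an angle and a distance. To use a scan for navigation, you typically convert those polar readings into x/y points in the sensor's frame, as in this minimal sketch (the sample data is made up, not real Sweep output).

```python
import math

def polar_to_xy(angle_deg, distance_m):
    """Convert one (angle, distance) LIDAR sample to x/y in the sensor frame."""
    theta = math.radians(angle_deg)
    return (distance_m * math.cos(theta), distance_m * math.sin(theta))

scan = [(0.0, 2.0), (90.0, 1.5), (180.0, 3.0)]  # hypothetical (angle, distance) samples
points = [polar_to_xy(a, d) for a, d in scan]
for x, y in points:
    print(round(x, 2), round(y, 2))
```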
Here's a cool project that uses a Raspberry Pi and TensorFlow to classify objects for recycling. A Raspberry Pi camera is used to capture an image of the item on the tray. Based on the output of the model, the robot will choose one of three bins and use a couple of stepper motors to drop it in. It's surprisingly fast for a single-board computer!
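The classify-then-sort step amounts to mapping the model's predicted class to one of the three bins. The class names and bin layout below are guesses for illustration; the project's actual categories may differ.

```python
# Hypothetical mapping from predicted class to bin index.
BIN_FOR_CLASS = {
    "plastic": 0,
    "metal": 1,
    "other": 2,
}

def choose_bin(class_probs):
    """Pick the bin for the highest-probability class."""
    predicted = max(class_probs, key=class_probs.get)
    return BIN_FOR_CLASS.get(predicted, BIN_FOR_CLASS["other"])

probs = {"plastic": 0.10, "metal": 0.85, "other": 0.05}  # hypothetical model output
print(choose_bin(probs))  # 1 -> rotate the steppers toward the metal bin
```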
There is also an "active training" mode that uploads images to a server as future training data. It looks like there is some sort of UI for manual data annotation. Since training takes quite a bit of computing power, it's no surprise that he chose to do it on a powerful machine in the cloud.
I was curious how he was able to load TensorFlow on the Pi, so I did some digging and found a couple of cool projects on GitHub.
Google recently announced their new "video intelligence" API at the Google Next conference in San Francisco on March 7th. The API aims to provide shot-by-shot annotations of objects in videos.
While other object classification APIs already exist, they are mainly geared towards images. This is the first API I have seen that classifies objects in videos.
Possible applications include video spam filters, automated classification/filtering, annotations for the blind, and shot change detection. I could also see the object detection feature being useful for DIY autonomous vehicles if this API gains the ability to stream and process live video in the future. We will see if we can utilize this tool for something interesting in one of our upcoming videos.
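To get a feel for what shot-by-shot annotations enable, here's a sketch of turning per-shot labels into a simple queryable timeline. The exact response schema belongs to Google's API, so the data structure below is invented for illustration only.

```python
# Invented stand-in for shot-level label annotations (not the real API response).
shot_labels = [
    {"label": "dog",   "start_s": 0.0, "end_s": 4.2},
    {"label": "beach", "start_s": 0.0, "end_s": 4.2},
    {"label": "car",   "start_s": 4.2, "end_s": 9.8},
]

def labels_between(annotations, start_s, end_s):
    """Return the labels whose shot overlaps the given time window."""
    return sorted(
        a["label"]
        for a in annotations
        if a["start_s"] < end_s and a["end_s"] > start_s
    )

print(labels_between(shot_labels, 0.0, 4.0))  # ['beach', 'dog']
print(labels_between(shot_labels, 5.0, 6.0))  # ['car']
```

A filter like this is the basic building block for the spam-filtering and content-classification use cases mentioned above.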