Drogon Quadcopter Update : Still Alive

Drogon’s not dead! While it’s been over a year since my last post on the topic, I have been working away. Although I went most of the year without touching it, the pace has picked back up over the last few months and I’ve worked through a number of changes and improvements.

The only hardware change was swapping out the frame for a new Q450 V3 Fiberglass Quadcopter Frame (450mm) from Hobby King. It’s a bit smaller and lighter. So far I really like it. All the electronics are the same.


First, I had attempted to ditch the transmitter/receiver a bit too early in development, relying completely on WiFi and the Raspberry Pi. This required a lot of support software that took away from the more important, and fun, flight control software. It ended up being too much of a distraction, so back on the receiver went. I re-integrated the receiver code and got back on track. On the plus side, I now have video streaming from the Raspberry Pi’s camera to an Android app. I also replaced the Java code on the Pi with a new Python app (it’s all in my GitHub repos). All it’s really doing now is capturing logs from the Arduino and providing video streaming.
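
Since the Pi’s role is now just log capture and video streaming, the logging half is tiny. Here is a minimal Python sketch of that idea; the serial port name, baud rate, and log file name are assumptions for illustration, not necessarily what’s in my repo.

# Minimal sketch of the Pi-side logger: read lines of sensor/control data
# from the Arduino over serial and append them to a log file.
# The port, baud rate, and file name below are assumptions.
import time
import serial  # pyserial

PORT = "/dev/ttyACM0"   # typical Arduino device path on the Pi (assumption)
BAUD = 115200

with serial.Serial(PORT, BAUD, timeout=1) as ser, open("drogon.log", "a") as log:
    while True:
        line = ser.readline().decode("ascii", errors="replace").strip()
        if line:
            log.write("%f %s\n" % (time.time(), line))
            log.flush()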

Next, I developed a self-tuning addition to the PID algorithm. This should save time and produce a better-tuned PID. It still requires manual tuning to get it started, which I’m working through now, but progress is being made. I also re-calibrated the accelerometer-gyroscope relationship and how the data is combined into a single position estimate, so I am getting much better and cleaner data. Finally, I incorporated rotational (yaw) correction. This uses the gyroscope’s Z-axis reading, increasing one pair of opposite motors and decreasing the other pair, to control and take advantage of the motor/propeller torque.
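
To make the rotational correction concrete, here is a minimal Python sketch of the idea; the gain, function names, and motor labels are mine for illustration, not from the actual Drogon code.

# Illustrative yaw correction for a quadcopter, not the actual Drogon code.
# The motors on one diagonal spin clockwise and the other diagonal
# counter-clockwise, so raising one pair and lowering the other produces a
# net torque about the Z axis.

KP_YAW = 2.0  # proportional gain on yaw rate (hypothetical, needs tuning)

def yaw_correction(yaw_rate_dps, target_yaw_rate_dps=0.0):
    """Speed adjustment from the gyroscope Z-axis reading (degrees/second)."""
    return KP_YAW * (target_yaw_rate_dps - yaw_rate_dps)

def apply_yaw(base_speed, adjustment):
    """Add the adjustment to the CW pair and subtract it from the CCW pair."""
    front_left = rear_right = base_speed + adjustment   # CW pair
    front_right = rear_left = base_speed - adjustment   # CCW pair
    return front_left, front_right, rear_left, rear_right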

The last major area was test rigs. I’ve gone through a few iterations. The first rig, which appears in my first videos, tethered the quadcopter to the floor. It didn’t give the quadcopter enough freedom of motion and significantly impacted flight. The second rig hung the quadcopter from the ceiling. It provided a better range of motion, but it allowed too much freedom while still interfering with flight. My current test rig is a balance beam: basically two 2x4s perpendicular to each other, with one raised slightly. This works well with my current frame. When I get the PID tuning working well, I may graduate back to the hanging rig (hanging, with pulleys and a small weight to take up the slack).

Here is a video of the testing I’m working on:

 


Deep Neural Networks Don’t Need To Be All That Brain-Like

I came across an interesting post on Deep Learning / Deep Neural Networks related to my previous post. The point that stood out, and that I had in the back of my mind while writing that post, was that Deep Neural Networks aren’t designed to replicate the full extent of human intelligence. What they are getting really good at is replicating only a part of what the human brain does, but a part which is important and incredibly useful and effective for specific engineering problems such as classification.

One specific thing that Deep Neural Networks do well is classification at very large scale. The human brain is pretty slow: even though it packs in an enormous number of neurons that work in parallel, far more than any computer or cluster of computers is able to model, it needs a lot of neurons to classify one image, and it can only classify a small number of images at once. Deep Neural Networks can sift through millions of images per day without getting tired or having to sleep, and the process can be cheaply replicated across tens of thousands of computers. They do need a steady diet of electricity, but that is in relative abundance (though limited and with its own set of issues).

Deep Neural Networks aren’t going to give us Strong AI anytime soon, but they are beginning to be able to perform certain tasks faster and more accurately than humans.


Neural Networks aren’t so Brain-like

I came across an article about a paper which details the discovery of an interesting property of feature encoding in a Neural Network and an inherent flaw in the discontinuity of feature learning. The flaw has to do with the finding that specific new classification examples are universally misclassified by (classical and deep) Neural Networks. The classification errors follow a specific blind spot in how individual neurons or sets of neurons can be trained. As an example, certain images that a human would instantly and correctly classify as positive, a Neural Network would incorrectly classify as negative; and certain images that a human would correctly classify as negative, a Neural Network would incorrectly classify as positive. This flaw permeates many different Neural Network types, configurations, and training regimes.

The flaw is not necessarily a flaw in how Neural Networks (in software) are implemented, and it is not necessarily the case that the human brain does not suffer from the same flaw. Rather, it is a limitation of modelling one specific operation of the neurons of the human brain, and of how that singular operation is abstracted and exploited in machine learning problems. Since there is no reason yet to believe that the same flaw does not exist in how neurons in the human brain model and classify data, the real limitation of software Neural Networks is presumably that they capture only a small subset of how the human brain uses neurons.

There are two critical functions of the human brain that (classical and deep) Neural Networks ignore, and that limit how flexible and robust they can be at performing intelligent tasks, even ones as mundane as classification: time and prediction. Yes, a critical function of the brain is to learn and classify information, but it does so in the context of time and with the aid of prediction; without either, the brain cannot function. You do not experience life in frames the way a computer vision system typically does. Even when you stare as still as possible at an image, you cannot identify it without your eyes, nearly imperceptibly to you, moving across the image (a movement called a saccade). Granted, your eye can only physically see a small area in the center of your vision in clear detail, but the movement and time dimensions are important nonetheless. Just as you move your fingers over a surface to identify it through touch, your eyes function in much the same way, creating a constantly changing temporal pattern. Computer vision systems do not (typically) do this.

Prediction, too, is critical for filling in the messy and incomplete stream of information you experience. You may not realize it, but you have a blind spot in each eye where the optic nerve exits the eye. It is possible to detect it, in a way similar to how we can detect a black hole, but in your normal behavior and experience it’s imperceptible. Through time-varying and predictive input processing, however, you experience a continuous and consistent reality. Prediction fills in the gaps left by the missing input from your eyes’ blind spots and other missing or variable features. Markov models can form predictions, but most Neural Networks ignore prediction or don’t fully take advantage of it.

Time and prediction, I believe, allow us to completely overcome adversarial examples or neural blind spots like those described in the above-mentioned paper. Adding a time dimension not only lets time act as a constraint in a model, so that the same sequence of a pattern is treated differently when important distinctions exist in the timing between its parts; it also spreads the data over a wider swath, and not necessarily one that resembles what a single computer image captures. By manipulating and absorbing a still image over time, we are able to overcome major deficits in its quality or other deficiencies. Computer vision systems are starting to take cues from how the human eye and its saccades process images, but it’s only beginning to become common practice. Prediction, too, lets us fill in gaps, correct for small errors or discrepancies, and know what to expect and do next, which is particularly critical in real-time systems.

The human brain is a highly complex, highly structured, dynamical system. Neural Networks do a good job of emulating the modelling characteristics of the human brain in order to perform classification tasks. But thanks to books like On Intelligence by Jeff Hawkins and How to Create a Mind by Ray Kurzweil, among others, I have come to realize, at a high level, the depth and breadth of how the brain implements intelligence. The trivial wiring of standard Neural Networks barely begins to explore its capability. Deep Neural Networks take it a step further, but there are so many more ways in which it can be extended.


LinkedIn’s Endorsement System is Broken

There is a flaw in LinkedIn‘s skill endorsement system, at least insofar as how mine has evolved: my endorsements do not match my actual relative levels of expertise. The key issue with my LinkedIn profile is the disproportionately high number of endorsements for MySQL, more than twice that of my next highest skill, PostgreSQL. The reality is I know much more about PostgreSQL than MySQL, and I hope anyone I have worked with would know that. For some reason this has bothered me, so here is my explanation of why I think it is happening and how I think it could be fixed.

My sample size of endorsements is really small, so it’s not very statistically significant on its own, but I see a pattern emerging that is likely the result of flawed logic in the way LinkedIn promotes making endorsements. When you view someone’s profile on LinkedIn, it may show a box at the top of that user’s profile with a few of their skills, asking you to endorse them. How does LinkedIn come up with that short list from the possibly large list of available skills?

I assume that as a user gains endorsements, the skills for which they have been endorsed become more likely to be presented to new users to endorse. But what happens when a user has no or few endorsements? I can’t say for sure, but I would bet they use many indicators from LinkedIn’s ecosystem of available data, such as the number of users having that skill or the total endorsements for that skill. They may also use factors such as that skill’s relevance (i.e. occurrences) in job listings or job searches. The idea is that, with no user-specific indicators of which skills are important for that user, skills considered more important or relevant across the community at large, or those most likely to lead to targetable skill sets for job searches, should be favored for endorsement promotion. The problem with that assumption, however, is that what the crowd is good at or what most recruiters are looking for isn’t necessarily what you are good at. If endorsements are meant as a way for recruiters or others to gauge your level of expertise or proficiency in a skill, then LinkedIn’s logic for promoting endorsements is flawed.

What I think would be more valuable is for users to first rate their own expertise in their skills, then let other users endorse those skills. When prompting other users for endorsements, the short list of skills to present should be based on a combination of the user’s own rating, the endorsements from others, and indicators from the LinkedIn ecosystem about which skills are important to the community. If others organically endorse skills in a way that is disproportionate to how the user rated them, that is OK and potentially interesting information. At least the user has some influence on how that process evolves.
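
As a rough illustration of what I mean, here is a toy Python sketch of how such a blended ranking might look. The weights, field names, and signals are entirely hypothetical; I have no insight into LinkedIn’s actual logic.

# Toy ranking of which skills to promote for endorsement, blending the user's
# self-rating, existing endorsements, and a community-importance signal.
# All weights and fields are hypothetical, not LinkedIn's actual algorithm.

def rank_skills(skills, w_self=0.5, w_endorse=0.3, w_community=0.2, top_n=3):
    """skills: list of dicts with 'name', 'self_rating' (0-1),
    'endorsements' (a count), and 'community_weight' (0-1)."""
    max_endorse = max((s["endorsements"] for s in skills), default=0) or 1

    def score(s):
        return (w_self * s["self_rating"]
                + w_endorse * s["endorsements"] / max_endorse
                + w_community * s["community_weight"])

    return sorted(skills, key=score, reverse=True)[:top_n]

skills = [
    {"name": "PostgreSQL", "self_rating": 0.9, "endorsements": 5, "community_weight": 0.6},
    {"name": "MySQL", "self_rating": 0.4, "endorsements": 12, "community_weight": 0.9},
]
print([s["name"] for s in rank_skills(skills)])  # PostgreSQL ranks first despite fewer endorsements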


LIDAR-Lite Optical Distance Sensor

I came across a project, the LIDAR-Lite by PulsedLight, on the crowdfunding site Dragon Innovation (not to be confused with Drogon, the quadcopter). It is an optical distance measurement device for accurate, high-performance sensing, with many advantages over IR or sonar range sensors. It looks pretty impressive, and perfect for quadcopters among many other applications.


DrogonQuad.com

I decided to create a Wiki for the Drogon Quadcopter project to have something better suited for that type of content. Blog posts on the topic will stay at joemonti.org, but all information, documentation, etc. will be at drogonquad.com. Just in case you forgot:

DrogonQuad.com

Check it out!


Gremlin : A new little robot

I started working on a new little robot called Gremlin. It is based on a Parallax Boe-Bot base with an Erector set and plexiglass frame. For electronics it has a Raspberry Pi w/ WiFi adapter, 16-channel I²C Servo controller and the Raspberry Pi Camera Module. Power comes from 10 AA batteries (4 for servo controller and 6 for Raspberry Pi), but I will likely upgrade to LiPos. Here’s a short video showing the Android control app (sorry for the crummy production quality):

It’s currently missing the 6 AA batteries for the Raspberry Pi (I’m waiting on a few parts), so the tether is just there to power the Pi.

The Android app is a little something I wrote which connects to the robot over the network (WiFi). It has a live video stream and virtual joystick controls. Once I get the whole mobile power assembly hooked up I’ll be able to use it for telepresence.

With my primary robotics project, the Drogon Quadcopter, grounded for the winter, I’ve started Gremlin to keep some of the work going. The goal is really to have a smaller, easier-to-work-with mobile robot for building a general-purpose robotics software platform for Drogon and any other robotics projects I pursue. I’d also like to use it as a test bed for building learning algorithms and working with the camera, both of which are also applicable to Drogon.

Here are a few more photos:



Maker Faire NYC 2013

Had an awesome time at Maker Faire NYC 2013!! Tons of cool projects and cool tech.

I posted my photos on my Google+ page:

Maker Faire NYC 2013 Google+ Album

Here are a few samples:



Drogon Quadcopter Update : Flight Control


It has been too long since my last update, so here is what’s been going on. Up to my last update, where I was testing under full manual control, I was in a hurry to get the robot running. In that, I was successful: I got the hardware assembled, got access to it via the Arduino, and verified it all worked. The next big step is to develop the flight control software to keep the robot stable in flight and to move it where I want it to move. That is what I am working on now.

I have made a few attempts at flight, mostly tethered. There was one untethered test with unsuccessful results, although I did get off the ground for the first time, which I was not able to do under manual control:

The flight control algorithm approach I am using has three components:

  1. Position — Determines the position of the robot from the sensors.
  2. PID Control — Uses the current position versus the desired position to make adjustments to the motors using a PID feedback control algorithm (a minimal sketch follows this list).
  3. Continual Learning — Measures success of PID Control to make adjustments to PID algorithm parameters.
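
Here is a minimal Python sketch of the PID feedback loop in item 2. The gains and structure are illustrative, not the actual Drogon code or values.

# Minimal PID controller in the spirit of the flight control loop described
# above. Gains and structure are illustrative, not the actual Drogon values.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        """error: desired position minus measured position; dt: seconds elapsed."""
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative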

I am approaching these components in order, with as much flight testing as I consider safe during the process to work out bugs, see how it performs, and gather data. I plan on developing a relatively simple Position algorithm so I can develop the PID Control algorithm and model the best PID algorithm parameters I am able to manually tune. Hopefully with that I will get more flight data to improve the Position algorithm and gather data for the Continual Learning algorithm. Then I will tackle the software for the Continual Learning algorithm. I don’t yet have a full plan for that, but I have many ideas on where to start.

The first challenge is simply getting accurate positional data from the accelerometer and gyroscope. First, the sensors measure different things: the accelerometer measures acceleration, which can give some indication of orientation because the constant acceleration due to gravity provides a reference, while the gyroscope measures rotational velocity in degrees per second. Second, both are fairly noisy, especially once the motors are running and the copter starts moving. Right now I have a very simple algorithm that combines the two sensors into a single angular offset for pitch and roll (ignoring yaw for now), smoothed with a running average filter. I have also developed the main PID control algorithm, with some potential improvements to try once I get it running.
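
For illustration, here is roughly what a simple accelerometer/gyroscope fusion for one axis could look like in Python. The blend factor, window size, and axis conventions are assumptions; the actual Drogon filter may differ.

import math
from collections import deque

# Rough sketch of fusing accelerometer and gyroscope readings into a pitch
# angle, then smoothing with a running average. The constants are hypothetical.

ALPHA = 0.98   # weight given to the integrated gyro angle (assumption)
WINDOW = 8     # running-average window size (assumption)

history = deque(maxlen=WINDOW)
pitch = 0.0    # current pitch estimate in degrees

def update_pitch(accel_x, accel_z, gyro_y_dps, dt):
    """accel_* in g, gyro_y_dps in degrees/second, dt in seconds."""
    global pitch
    accel_pitch = math.degrees(math.atan2(accel_x, accel_z))  # gravity as reference
    gyro_pitch = pitch + gyro_y_dps * dt                      # integrate rotation rate
    pitch = ALPHA * gyro_pitch + (1.0 - ALPHA) * accel_pitch  # blend the two sources
    history.append(pitch)
    return sum(history) / len(history)                        # running-average smoothing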

To tie these pieces together and turn them into final motor adjustments, a series of translation steps is needed. First I have to translate from the coordinate space of the robot, which is how the robot’s position is tracked, to the coordinate space of the motors, because the motors are offset 45 degrees from the robot’s axes (called an “x” configuration quadcopter, as opposed to a “+” configuration). This translates the position of the robot, through rotation matrices, into the position of each motor. To determine the error value for the PID algorithm, I use the arc length of each motor’s offset from the zero position (to achieve robot motion, the offset would instead be from a target position that causes the desired motion). Two error adjustments are calculated, and each is applied with opposite sign to the two motors of an opposing pair, by adding it to (or subtracting it from) a target motor speed. The target motor speed is currently determined by the receiver’s channel 3 value.
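
As a sketch of those translation steps, here is one way the rotation into motor axes, the arc-length error, and the opposite-pair mixing might look in Python. The arm length, motor labels, and exact mixing are illustrative assumptions, not the Drogon implementation.

import math

# Sketch: rotate the robot's pitch/roll into the two motor axes of an "x"
# configuration, convert each axis offset into an arc-length error, and apply
# each adjustment oppositely to an opposing motor pair. Values are hypothetical.

MOTOR_OFFSET = math.radians(45.0)  # motor axes are rotated 45 degrees from the body axes
ARM_LENGTH_M = 0.225               # distance from center to motor (assumed)

def body_to_motor_axes(pitch_deg, roll_deg):
    """Rotate the (pitch, roll) vector into the two motor-pair axes."""
    c, s = math.cos(MOTOR_OFFSET), math.sin(MOTOR_OFFSET)
    return (c * pitch_deg - s * roll_deg,   # axis of motor pair A
            s * pitch_deg + c * roll_deg)   # axis of motor pair B

def arc_length_error(angle_deg, target_deg=0.0):
    """Arc length of a motor's offset from its target position."""
    return math.radians(angle_deg - target_deg) * ARM_LENGTH_M

def mix(base_speed, adj_a, adj_b):
    """Apply each adjustment oppositely to the two motors of its pair."""
    return (base_speed + adj_a, base_speed - adj_a,   # pair A
            base_speed + adj_b, base_speed - adj_b)   # pair B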

I am currently modelling the PID algorithm to guess at good parameter values by feeding position changes into it and analyzing the results. More on this later.
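
Concretely, the kind of offline run I have in mind looks something like the sketch below, reusing the PID class from the earlier sketch. The plant model, gains, and numbers are made up purely for illustration.

# Toy offline run: feed a synthetic tilt into the PID and watch it recover.
# The "plant" below (output drives angular acceleration) is a made-up stand-in
# for the real quadcopter dynamics.

pid = PID(kp=1.2, ki=0.05, kd=0.3)   # PID class from the earlier sketch; gains are guesses
angle, rate, dt = 10.0, 0.0, 0.02    # start tilted 10 degrees, 50 Hz loop

for step in range(500):
    output = pid.update(error=0.0 - angle, dt=dt)
    rate += output * dt               # pretend output directly drives angular acceleration
    angle += rate * dt
    if step % 50 == 0:
        print("t=%5.2fs angle=%7.3f" % (step * dt, angle))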


Drogon Quadcopter Update : Manual Control

As part of the development process for Drogon, I wanted the first attempt at flight to be under full manual control. I did not expect to be able to fly; I wanted to see what happened. At the very least, I hoped to validate things like the motors providing enough lift, and to collect sensor data to help build the stabilization algorithms.

During the manual control testing I am only working from the Arduino. The Raspberry Pi is only acting as a logger, reading the stream of sensor and control data from the Arduino over the serial connection. The Raspberry Pi won’t really be used until I get further along with the stabilization, control, and management software.

The first set of tests were tethered to a wood pallet. The pallet is large and heavy enough to give the quadcopter a safe and restricted environment to test flight controls. In my first attempt, the tether lines were too long so it ended up just tipping over and the propellers slammed into the pallet. Fortunately it came away with no real damage.

Here’s a picture:


And here’s a short video clip:

The next tethered test was more successful. I shortened the tether lines and was able to lift off the ground and even verify that the manual direction controls worked.

Now that the controls were validated with tethered testing, I moved on to outdoor un-tethered flight testing. As expected, full manual control did not achieve flight. Just a lot of tipping over. Below is a shorter clip from the test session.

Next steps are to begin work on the stabilization algorithm. I expect that to be a gradual process as I build out, tweak, redesign, tweak some more, and optimize the code and algorithm constants. As of this post I have made some progress on the stabilization algorithm and even made a few test flight attempts. I hope to write up a post on it soon. Check out my YouTube playlist with all of the test videos:

http://www.youtube.com/playlist?list=PLclL3kp0O7XUoYn2oOVWTNV-DFnEPoXih

Check out the GitHub project for the work in progress:

https://github.com/joemonti/drogon-arduino
