

Learning ROS

I’ve been learning ROS for the past several weeks. Not-so-pro tip: start with their recommended tutorials. Don’t try to start by running it under Docker and cross-compiling for a multi-computer system.

Make published an article Smooth servo control with ROS. It looks pretty easy… how hard can it be?

Good resources:

Posted in Make, Robotics.



Adafruit servo hat

After some brief struggles with a cheap servo controller, I bought the Adafruit 16-Channel PWM / Servo HAT for Raspberry Pi – Mini Kit from RobotShop. This is easy to set up and works well! And it isn’t much more expensive than the really cheap one.

Tutorial: Adafruit 16-Channel PWM/Servo HAT for Raspberry Pi. Make sure you use the up-to-date code rather than the original mentioned in some of the tutorials… It can be found on GitHub: Adafruit_Python_PCA9685.

The Adafruit controller is pretty basic, but that shouldn’t be a problem with ROS and a capable main processor behind it. Other servo controllers can remember poses and specify speeds and delays… here those functions will have to be performed by the main processor. That is a design risk: we might end up having to replace the controller with a more expensive and complex one later… but this should get us started easily.
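For reference, here is a minimal sketch of what driving a servo through this library looks like, with the motion paced in software on the Pi. The 60 Hz frequency and the 150–600 tick pulse limits follow the library’s simpletest example and are only starting values that need tuning for a specific servo.

# Minimal sketch: drive one servo on the PCA9685 HAT and do a slow sweep
# in software, since the HAT itself has no notion of speed or poses.
# Pulse limits (150-600 ticks at 60 Hz) follow the library's simpletest
# example and will need tuning for a specific servo.
import time

import Adafruit_PCA9685

pwm = Adafruit_PCA9685.PCA9685()   # default I2C address 0x40
pwm.set_pwm_freq(60)               # update rate used in the Adafruit example

SERVO_MIN = 150                    # pulse length out of 4096 ticks
SERVO_MAX = 600

def set_position(channel, fraction):
    """Move a servo to a position between 0.0 and 1.0 of its range."""
    ticks = int(SERVO_MIN + fraction * (SERVO_MAX - SERVO_MIN))
    pwm.set_pwm(channel, 0, ticks)

# Sweep channel 0 slowly from one end to the other; the Pi, not the HAT,
# is responsible for pacing the motion.
for step in range(101):
    set_position(0, step / 100.0)
    time.sleep(0.02)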

Posted in Make, Robotics.



Agent Smart

In a previous post, I looked at OpenFace. Even though it needs “little training data”, this is still way more than one would want to collect manually.

So I’ve begun researching web agents. There seem to be two good solutions:

  1. BeautifulSoup – great if the page is agent-friendly. Quick and reliable.
  2. Selenium – good if the page attempts to block agents. It drives a full web browser and can be used to run scripts, fake cursor movements, scroll, etc.

Good sample: StackOverflow: Using Python and BeautifulSoup (Saved webpage source codes into a local file)
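As a rough sketch of the BeautifulSoup route (the URL is just a placeholder; real pages need per-site handling):

# Rough sketch: pull image URLs out of a page with requests + BeautifulSoup.
# The target URL is a placeholder; real pages need per-site handling.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

url = "https://example.com/gallery"          # placeholder
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# Resolve relative src attributes against the page URL.
image_urls = [urljoin(url, img["src"]) for img in soup.find_all("img") if img.get("src")]
for image_url in image_urls:
    print(image_url)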

I’ve hacked up some code that can slurp a bunch of data and put it into a database. It is mostly focused on image / media data, and it reduces duplication by means of content-addressed storage using SHA-256 as the ID. It also has a tag system where any content can be tagged, which would normally happen through content identifiers. So roughly:

name * -> 1 ID 1 -> 1 content

tag * -> * content
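A toy sketch of the idea, with in-memory dicts standing in for the real database tables: content is stored once under its SHA-256 digest, any number of names can point at that ID, and tags map many-to-many onto content.

# Sketch of the content-addressed idea. The dicts stand in for database tables.
import hashlib

store = {}        # sha256 hex digest -> content bytes
names = {}        # human-readable name -> sha256 hex digest
tags = {}         # tag -> set of sha256 hex digests

def put(name, content):
    digest = hashlib.sha256(content).hexdigest()
    store[digest] = content          # duplicate content collapses onto one entry
    names[name] = digest
    return digest

def tag(label, digest):
    tags.setdefault(label, set()).add(digest)

photo_id = put("IMG_0001.jpg", b"...image bytes...")
put("copy_of_IMG_0001.jpg", b"...image bytes...")   # same content, same ID
tag("robot", photo_id)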

It turns out that with anything but the most basic data, knowledge representation (KR) is a problem that is still heavily researched. After some initial prototyping, I’ve decided to put this on hold until we get more of the physical robotics working. Let me know if you’d like to help out here…

Posted in Make, Robotics.



Looking at OpenFace

OpenCV has a good facial detection capability using Haar Cascades. Dlib uses HOG (histogram-of-oriented-gradients) based object detectors, which offer better facial detection than Haar Cascades at the cost of computational power. Both can point out where faces are in an image. This is a good first step, in that it can be used to get the animatronic creature to “look at” a person in front of it.
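The Haar cascade route looks roughly like this in Python (assuming an opencv-python install that ships the bundled cascades under cv2.data.haarcascades):

# Quick sketch of Haar-cascade face detection in OpenCV. The cascade path
# assumes an opencv-python install that provides cv2.data.haarcascades.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("person.jpg")                     # placeholder image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # The rectangle centre is what the creature would "look at".
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("person_faces.jpg", frame)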

But what if we want it to identify the person? We can’t get the animatronic character to say “Don’t do this to me Dave!” unless either it can identify Dave or it thinks everyone is named Dave.

OpenFace is used to identify faces. Bonus: it is easy to set up with Docker. It’s less than a year old at this point, yet already quite powerful and well documented. The gist: OpenFace first warps each image so the eyes and bottom lip are in standard positions, using OpenCV or Dlib. Then it uses a deep neural net to embed the face on a 128-dimensional unit hypersphere. It also includes a demo that uses an SVM to classify the resulting vectors.

This approach means it can be trained quickly with little data compared to training a DNN from scratch. Here “little data” might mean 10 or 20 images of each subject rather than thousands.
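The classification step on its own is tiny. Assuming the 128-dimensional embeddings have already been computed by OpenFace, a plain scikit-learn SVM along these lines is enough to put names on faces (the arrays below are random stand-ins for real embeddings):

# Sketch of the classification step only: the embeddings are random
# stand-ins for vectors that OpenFace would produce.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
embeddings = rng.randn(40, 128)                 # ~20 images each for 2 people
labels = ["dave"] * 20 + ["not_dave"] * 20

clf = SVC(kernel="linear", probability=True)
clf.fit(embeddings, labels)

new_embedding = rng.randn(1, 128)               # embedding of a new face
print(clf.predict(new_embedding), clf.predict_proba(new_embedding))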

Links

OpenFace
OpenFace on GitHub

Posted in Make, Robotics.



Animatronics

One really interesting thing about robotics is watching people interact with the robot… but for this, having anthropomorphic qualities is essential. Animatronics is the engineering and art of robotic animation, and has been used for years in the movie industry. The project we’re embarking on this year is an animatronic creature. Since we aren’t experts, and since we want to go much further than mere puppetry, I’m anticipating this will be a series of integrated projects that will eventually fit together into one system. Please let me know if you are interested in helping out!

I just finished the Coursera course on Machine Learning. Hopefully, some of the concepts will come in handy for teaching the robot how to behave and for helping it learn its behaviour. It’s a great course!

Concept

The goal is to create a robot that will interact with one or two humans for entertainment. It will be a creepy creature. The alien appearance and remotely anthropomorphic interactions will hopefully amplify the creepiness. Depending on what we manage to accomplish, it may act like a simple pet or be capable of very simple language. It will sit at a table, which eliminates problems of power, stability, locomotion, and navigation, and helps to constrain its domain of interactions.

Eyes

There is an intro video from the Stan Winston School for Character Arts: how to make an eye mechanism. I haven’t seen the full video yet…

The pair of eyes will be about 2x the size of human eyes: 50mm. These will be 3D printed, and will contain about 12 motors and 2-3 cameras.

Tentacles

The tentacles will hopefully each be about the length of an arm, and each controlled by 4 servos. It isn’t clear at the moment whether heavy-grade hobby servos will do, or whether we’d want to make our own servo controllers. Each servo will be responsible for one of the 2 degrees of freedom in each of the 2 tentacle sections.

There is a really cool set of posts on Hackaday:

  1. Bootup guide
  2. Cable controller
  3. Putting it all together

There is also an intro video from the Stan Winston School for Character Arts: cable basics. I haven’t seen the full video yet…

Posted in Make, Robotics.



DrawBot 11 – cleanup

After a couple of weeks of not being able to work on the project, I’m back at it.

The main things this week are code cleanup and unique filenames for the output.

Filtering based on contour length gets rid of small noise lines.
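The filter itself is just a threshold on the contour perimeter; something like this sketch, where the threshold value is a guess to tune against real drawings:

# Sketch of the noise filter: drop any contour whose perimeter is below a
# threshold before it ever becomes pen strokes.
import cv2

MIN_CONTOUR_LENGTH = 40.0     # in pixels, tune to taste

edges = cv2.Canny(cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE), 100, 200)
# findContours returns (contours, hierarchy) in OpenCV 4 and
# (image, contours, hierarchy) in OpenCV 3; [-2] works for both.
contours = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]

kept = [c for c in contours if cv2.arcLength(c, False) >= MIN_CONTOUR_LENGTH]
print("kept %d of %d contours" % (len(kept), len(contours)))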


New parts

Dave delivered a pile of 3D printed parts! Thanks Dave!

[Photo: IMG_20160804_1924489]

Thoughts on other gondolas:

Polar Drawbot – This has many parts that are finicky. I’ve never used a 3D printer that could print accurately enough to make this thing easy to assemble. Though with a high-end stereolithographic printer you might be able to…

Screwless Sharpie Holding Gondola for Drawbot – This is great! I’m using it for now until Neil comes up with his own design…

Drawing

I immediately put on the new spools and gondola:

[Photo: IMG_20160804_2055485]

The drawing looks way better than the “MacGyver” version since the hardware is much more precise. You can actually recognize that there are 2 squares, one within the other.

The drawing from my own g-code doesn’t look like much yet… better debug this output. The main problems are the lack of pen up / down commands and the lack of route optimization.

[Photo: IMG_20160804_2114456]


Posted in Make, Robotics.



DrawBot 9 – git

The main accomplishment this week was getting the code cleaned up and into git for posting on GitHub.

Posted in Make, Robotics.



DrawBot 8 – i code g code

I hacked together some Python to write out g-code in the format that the Makelangelo can read. I haven’t figured out all the values yet, but as long as the paper size stays constant, I should be fine. 🙂 The goal is to hook this up to the OpenCV code from last week and draw images from photos…
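A toy version of the idea, turning polylines (in mm) into simple G0/G1 moves. The pen up / down commands here are placeholders; the real commands depend on the Makelangelo firmware configuration.

# Toy g-code writer: turn a list of polylines (in mm) into G0/G1 moves.
PEN_UP = "M300 S50"       # placeholder pen-up command
PEN_DOWN = "M300 S30"     # placeholder pen-down command

def polylines_to_gcode(polylines):
    lines = ["G21 ; millimetres", "G90 ; absolute positioning"]
    for polyline in polylines:
        (x0, y0), rest = polyline[0], polyline[1:]
        lines.append(PEN_UP)
        lines.append("G0 X%.2f Y%.2f" % (x0, y0))    # travel move, pen up
        lines.append(PEN_DOWN)
        for (x, y) in rest:
            lines.append("G1 X%.2f Y%.2f" % (x, y))  # drawing move, pen down
    lines.append(PEN_UP)
    return "\n".join(lines)

# Two nested squares, roughly like the spool test drawing.
square = lambda s: [(-s, -s), (s, -s), (s, s), (-s, s), (-s, -s)]
with open("nested_squares.gcode", "w") as f:
    f.write(polylines_to_gcode([square(40), square(20)]))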


Dave is working on 3D printing a better gondola to replace the popsicle sticks. There are a couple possibilities on Thingiverse:

Posted in Make, Robotics.



Jenkins pre-commit target for Gradle

Jenkins does a terrible job of managing job configuration. Sure, there is the JobConfigHistory Plugin, but it is weak and mostly exposes configuration management problems in Jenkins (why does a one-line change create 4 versions?).

One easy trick to keep your Jenkins jobs clean, and to make it easy for developers to understand what is being built, is to create a pseudo-target in Gradle. Then your regular source control (git) can manage changes when tools are added or changed. Here’s an example:

// Pseudo-target that the pre-commit Jenkins job invokes; the actual work
// is done by the tasks it depends on.
task jenkinsBuild {
    group = "Jenkins"
    description = "Runs all targets the main pre-commit Jenkins job needs."

    // Lazily resolved list of everything the job should run.
    dependsOn {[
        ":app:assembleRelease",
        ":app:assembleDebug",
        ":app:lintDebug",
        ":app:testDebug",
        ":app:jacocoReportUnitTest",
        ":app:findbugs",
        ":app:check",
    ]}
}

This has tons of advantages: different branches can easily have different requirements, a new tool can be integrated without Jenkins trying to run it on old branches that aren’t configured for it, and so on.

For simple projects, simply relying on the Gradle check task may be sufficient. However, as the number of variants (build type × product flavor) grows, you may not want the pre-commit Jenkins jobs to build all of them.

This technique also applies to a setup where several Jenkins jobs run different sets of checks in parallel, e.g. jenkinsFunctionalTest, jenkinsStaticAnalysis, jenkinsStressTest, etc.

In conclusion, moving much of the build decision logic into source control and out of Jenkins greatly simplifies traceability and configurability for your project.

Posted in Uncategorized.



DrawBot 6 – MacGyver edition

Today, we totally hacked it together with electrical tape, pipe cleaners, fishing line, twist ties, and popsicle sticks. It works! (Sort-of).

[Photo: IMG_20160623_2139117]

[Photo: IMG_20160630_2150250]

Posted in Make, Robotics.
