
Crushing a Dream

When I saw the posting for Project Remake on Makezine in early April, I knew that I had to submit Exuro. She’s an excellent project made from mostly recycled materials that just happens to throw fire. What I didn’t do was read the fine print, and that turned out to be a rather important oversight.

I was notified toward the end of May that I was a finalist and might have won a MakerBot Replicator 3D printer. As a finalist, I might also win an all-expenses-paid trip to New York for the World Maker Faire. A trip to New York would be cool (I’m originally from there), but a Replicator 3D printer, now that’s just awesome!

I really didn’t want to get too excited, but a 3D printer? Most amazing!

There was paperwork included with the notification. Read it. Read the rules. Signed pages and had them notarized. Sent everything back before the deadline.

Checked back on Wednesday, when voting for the trip to New York was supposed to start. The public was being invited to vote on their favorite of the finalists to see who goes to New York. The contest organizers (O’Reilly Media and Energizer Personal Care) had changed the site to say “vote soon.” Something was up.

In the meantime, I’m starting to get excited about the printer. I would wake up in the wee hours of the morning with ideas on things to print. Boxes, models, parts, small-scale sculpture ideas, and more. Started making plans with Nick, the head of the foundry at The Crucible, to make patterns for use in teaching foundry classes.

Check the web site again; it still says “vote soon.”

Make more plans, create virtual models of some ideas to print. Get more enthusiastic about having been picked as a finalist. Discover 3D scanning to create models of real-world things. More planning, more ideas. Wondering if a motor can be added to the turntable in the wax studio to allow for taking pictures of objects to create 3D models.

And then the letter arrives. Yesterday, a FedEx package delivered a letter stating that my project had been disqualified because of a clause in the fine print about trademarks:

“Contest Entries cannot, as determined in Sponsors sole discretion… (d) contain trademarks, logos or trade dress owned by others, or advertise or promote any brand or product other than Sponsors products”

“As such, Sponsors have determined your entry submission containing the mention of a Kinect which is a trademark owned by another brand, violates this clause of the Official Rules.”

Exuro uses a Kinect to sense what’s going on around her, and Kinect is a Microsoft trademark. Poof, dream crushed. Do not pass go, do not collect two hundred dollars. No longer eligible, the contest sponsors have no further obligation to me, etc.

Yes, I should have read the rules in the first place, but even after I did, I didn’t realize that Kinect was trademarked. If I had, I could have changed the submission to use the word “sensors,” or even changed Exuro so that she behaves a little differently without a Kinect. Sigh.

Yes, it sucks to be me.

Still, out of this whole experience what strikes me the most is that I keep thinking about printing objects, parts, and patterns. Looks like I just may have the 3D printing bug. Now to adjust my plans to include funding a 3D printer for myself.

Greenbacks and an Arduino

One of the additions that I want to make to Exuro is the ability to accept money and then interact in different ways with the person who shelled out some of their hard-earned dollars. In order to do that, I first need to figure out how to accept money and process the amount given via an Arduino.

I’m working with a Pyramid Technologies Apex 5000 bill acceptor and an Arduino Uno. The folks at Pyramid Technologies seem to be completely open to helping people work with their products. I purchased a used Apex 5000 on eBay, and their support folks were quite willing to answer configuration questions and help me with the project. That’s pretty cool.

It turns out that the Apex 5000 is pretty simple to interface with an Arduino. There’s a configuration option whereby the bill acceptor sends N 50-millisecond pulses per $1 of the bill’s value. The configuration I’m currently working with sends 1 pulse per dollar, so a $1 bill sends 1 pulse, a $5 sends 5 pulses, and so on.

The output line on the Apex 5000 is an open collector, which makes things pretty simple. The bill acceptor’s output line is connected to a pin on the Arduino that’s used to count pulses. On the Arduino Uno I’m using pin 2, since that’s one of the two pins that support external interrupts. A 2K pull-up resistor connects the bill acceptor output / Arduino pin 2 junction to the +5 V pin on the Arduino, and the grounds of the bill acceptor and the Arduino are also connected.

With this hardware configuration, I’m able to read the pulses from the bill acceptor. My first attempt actually had the acceptor configured for 10 pulses per $1, but this resulted in completely bogus readings because the whole sequence of pulses could take up to 20,000 milliseconds for a $20 bill. For starters, that’s way too long to wait before determining the bill denomination. And the code was slipping sideways trying to count the pulses. So I reset the configuration to 1 pulse per dollar, which results in a maximum of 2,000 milliseconds for the largest bill allowed.

Right now, the code is still a little wonky: there’s a problem with determining the actual dollar value. I suspect it has to do with the interplay between the interrupt and the simplistic logic in the loop that determines the value. Some more debugging is needed.
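The idea I’m chasing is simple enough, though: count pulses until the line has been quiet for longer than the gap between pulses, then treat the count as the dollar amount. Here’s that logic sketched in plain Python so it can run anywhere; the quiet-gap constant and the sample pulse times are made up for illustration, and in the real thing the counting happens in an interrupt handler tied to pin 2.

    # Decode Apex 5000 pulse trains: 1 pulse per dollar, pulses roughly
    # 50 ms wide. Once the line has been quiet for longer than QUIET_MS,
    # the train is over and the pulse count is the bill's value.
    QUIET_MS = 250  # assumed gap; would need tuning against the real hardware

    def decode_pulses(pulse_times_ms):
        """Group pulse timestamps (in ms) into bills and return their values."""
        bills = []
        count = 0
        last = None
        for t in pulse_times_ms:
            if last is not None and t - last > QUIET_MS and count:
                bills.append(count)  # quiet gap: the previous train is complete
                count = 0
            count += 1
            last = t
        if count:
            bills.append(count)      # flush the final train
        return bills

    # A $5 bill (5 pulses about 100 ms apart) followed later by a $1 bill.
    print(decode_pulses([0, 100, 200, 300, 400, 2000]))  # -> [5, 1]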

There’s a code repository to share whatever way I manage to make this work. Feel free to poke at it with a sharp stick.

Kinect, Python and Pushing Through

There’s a tangent project that I’m working on that involves robotics, Arduinos, a Kinect, and fire. It’s a small robot called Exuro with a pair of stainless steel eyes that are meant to track a person coming up to a donation box and, when they make a donation, set off a small poofer. The idea is to track people using a Kinect and have the eyes move as if they’re watching the closest person to the donation box. Working with an Arduino to control external systems is pretty straightforward for me; it’s something I’ve done before. But pulling sensor data from something like a Kinect and interpreting it is something I’ve never done. It’s rather intimidating. Processing video data at something like 30 frames per second is not something I’m used to doing. But it sounds like fun!

There’s an open source driver for accessing the Kinect called libfreenect that’s available from openkinect.org. Included are wrappers for using the library from Python, which is most definitely my preferred programming language. That works.

Getting libfreenect to build on an Ubuntu 10.10 system was pretty straightforward: just follow the instructions in the README.asciidoc file. Getting the Python wrappers to work took a bit more effort. Cython is used to create the bindings between libfreenect and Python, and unfortunately the version currently included with Ubuntu 10.10 isn’t up to the task. Once I removed the Ubuntu package and installed Cython from the latest source, the Python bindings built and worked just fine. I’m sure the fine folks maintaining Ubuntu will make a newer version available at some point; I’m just not willing to put this project on hold till they do ;-)

There are a few demo files included with the wrapper so you can start to play with the interface, the library, and the Kinect data. Two of them, demo_cv_sync.py and demo_cv_thresh_sweep.py, make for a good starting point. The first opens two windows and shows a live video feed from the RGB camera in one and the depth camera in the other. The second shows video from the depth camera but sweeps through the data, showing what’s seen at different depths. These are really interesting demos to help wrap your head around what’s available from the Kinect.
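Stripped of the OpenCV display code, the heart of those demos is just the wrapper’s synchronous calls. This isn’t one of the bundled demos, but a minimal sketch of grabbing one frame from each camera looks something like this:

    import freenect

    # Each sync call returns a (numpy array, timestamp) tuple.
    depth, _ = freenect.sync_get_depth()  # 480x640 array of 11-bit depth values
    rgb, _ = freenect.sync_get_video()    # 480x640x3 array of 8-bit RGB values

    print(depth.shape, depth.dtype)
    print(rgb.shape, rgb.dtype)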

I got to wondering about the depth data and whether there wasn’t a way to combine the two demos so I could slide through the depth manually and see what’s there. The result is demo_cv_threshold.py. It allows you to slide along to any depth to see what’s there, and then contract or expand the threshold to see what’s around that depth. Here’s a sample video showing my hand pushing through a virtual wall:

The depth slider sets the focal point for what data to display, and the threshold provides a +/- tolerance for how much data to display. A depth of 535 and a threshold of 0 would show just the data at 535, while a depth of 535 and a threshold of 6 would show the data from 529 through 541.
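Under the hood that selection is just a numpy mask over the raw depth values. Roughly, it comes down to something like this (variable names here are mine, not the demo’s):

    import freenect
    import numpy as np

    depth, _ = freenect.sync_get_depth()

    # Keep only readings within +/- threshold of the chosen depth; e.g.
    # depth 535 with threshold 6 keeps raw values 529 through 541.
    target, threshold = 535, 6
    mask = np.logical_and(depth >= target - threshold, depth <= target + threshold)

    # Selected pixels become white, everything else black.
    slice_img = np.where(mask, 255, 0).astype(np.uint8)
    print(mask.sum(), "pixels fall inside the selected slice")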

It’s an interesting application to play with to gain a basic understanding of the data being returned and possible ways to use it. I’ve submitted a pull request on GitHub to the maintainers of libfreenect to see if they’re willing to include it in the next release. Here’s hoping that they will.

There’s a lot more work I need to do for this project. The next steps will be to find the closest person in the data stream and calculate their location in the real world relative to the location of the Kinect. And I have almost no idea how to go about doing that. Time to read up on numpy and OpenCV.
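As a zeroth step, numpy at least makes it cheap to find the single nearest reading in a frame; turning that into “the closest person,” and then into real-world coordinates, is the part I still have to figure out. A rough sketch (a real tracker would want a blob of pixels, not one; 2047 is the raw “no reading” value in the 11-bit depth data):

    import freenect
    import numpy as np

    depth, _ = freenect.sync_get_depth()

    # Push the "no reading" pixels (raw value 2047) out of the way so they
    # can't win the argmin, then find the nearest remaining reading.
    usable = np.where(depth == 2047, 10000, depth.astype(np.int32))
    row, col = np.unravel_index(usable.argmin(), usable.shape)
    print("nearest raw reading:", usable[row, col], "at pixel", (col, row))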