Build log day 21:
I started the day by updating the control scheme for the limbs to take the extra actuators into account. With the experience from two days ago it was comparatively easy. When that was done I set about programming and testing the movement sequences. End result: I got it to move forwards, backwards, side to side, and to turn. This is a big milestone for me; the next step is to stop it crashing into stuff.
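
To give an idea of what these movement sequences look like: each one is basically a table of limb poses that gets stepped through in order. Here’s a minimal Python sketch of the idea; the joint names, angles, and the set_actuator stand-in are placeholders for illustration, not my actual values.

```python
import time

# Hypothetical keyframe tables: each sequence is a list of poses, and each
# pose maps an actuator name to a target angle in degrees. The names and
# angles here are placeholders, not my real calibration.
SEQUENCES = {
    "forward": [
        {"left_hip": 20, "left_knee": 40, "right_hip": -10, "right_knee": 10},
        {"left_hip": -10, "left_knee": 10, "right_hip": 20, "right_knee": 40},
    ],
    "turn_left": [
        {"left_hip": -15, "left_knee": 20, "right_hip": 25, "right_knee": 20},
    ],
}

def set_actuator(name, angle):
    """Stand-in for the real actuator command sent over the tether."""
    print(f"{name} -> {angle} deg")

def play(sequence, cycles=4, step_time=0.5):
    """Step through the keyframes of one movement sequence."""
    for _ in range(cycles):
        for pose in SEQUENCES[sequence]:
            for joint, angle in pose.items():
                set_actuator(joint, angle)
            time.sleep(step_time)  # give the limbs time to reach the pose

play("forward")
```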

Build log day 22:
I’ve realized that it doesn’t make sense to add sensors for obstacle avoidance while the Man is still tethered to my computer. For this reason I have started searching for an appropriate controller. My first instinct was to simply use the Arduino, but I find it’s a bit too basic for my purposes. I was in the middle of researching other single-board computers when I stumbled on an interesting article about networked intelligence. I studied it a bit further, and while it’s a bit overkill for my current purposes, I want to try it out. The proper way to do it would be to connect several single-board computers together and write all the communication software completely from scratch. However, I found a company that sells complete hardware solutions, called Reactive Analytic Information Networks, for use at universities. This suits my needs perfectly, so I ordered one of their basic models, or B.R.A.I.Ns. It has some very good basic functionality and plenty of I/Os for monitoring sensors and controlling the actuators. Advanced features are rather limited, but that shouldn’t be a problem.

Build log day 23:
I’ve installed the B.R.A.I.N in a separate little box outside the main body to make it easier to access for reprogramming. I converted my limb control software to the proper format, then loaded it into the B.R.A.I.N. With a bit of tweaking I got the limbs to move as intended. I then repeated the process for the limb synchronization and movement control. I ran some quick tests before finishing up for the day, and the result is a success: the Man can move without tethering.

Build log day 24:
Since I got untethered movement working, the next step is obstacle detection and collision avoidance. I was thinking of just using some simple range sensors to detect any object within a certain distance. Due to the rather narrow search field, that would work well for large objects more or less straight in front of the Man, but it would most likely miss any small objects off to the side or down on the ground. I could of course solve that by arranging several sensors in a cluster, but I believe a vision system could achieve better results. A camera can view a much larger area than a sensor, or even a cluster of sensors, and it should easily be able to recognize any obstacles inside its field of view. However, it cannot judge the distance to the objects it finds. By combining the images from two cameras I should be able to create an algorithm that makes reasonably accurate estimates of the distance. I found some cameras from Ocular Inc. that should fit my purposes and placed an order for two of them.
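
The math behind the two-camera distance estimate is the standard stereo relation: distance = focal length × baseline / disparity, where disparity is how far a feature shifts between the two images. A quick Python sketch with made-up camera numbers (not the Ocular Inc. specs):

```python
def stereo_distance(x_left, x_right, focal_px, baseline_m):
    """Textbook two-camera distance estimate.

    x_left / x_right: horizontal pixel position of the same feature
    in each camera image. focal_px: focal length in pixels.
    baseline_m: distance between the two camera centres in metres.
    """
    disparity = x_left - x_right  # shift of the feature between images
    if disparity <= 0:
        return None  # feature at infinity, or a bad match
    return focal_px * baseline_m / disparity

# Illustrative numbers only: a feature at pixel 400 in the left image
# and 370 in the right, a 700 px focal length, cameras 6 cm apart.
print(stereo_distance(400, 370, focal_px=700, baseline_m=0.06))  # ~1.4 m
```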

Build log day 25:
The Ocular Inc. cameras arrived this morning and I installed them on the Man. To avoid inducing noise in the signal, I mounted them next to the B.R.A.I.N in the separate little box. I ran a few tests: I get about a 110–120 degree field of view covered by both cameras, and off to the sides I get another 40–50 degrees for each individual camera. I’m focusing right now on the central field of view, where the two cameras can be combined. I have developed a basic algorithm for detecting walls within a reasonable distance. It assumes flat, level ground; I might have to deal with that in the future. On top of that I have added an algorithm for recognizing smaller objects that differ from the walls and judging their relative size and distance. All my testing shows positive results.
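
The flat-ground assumption is doing real work here: if the floor is level, the image row where something meets the floor already tells you how far away it is, even from a single camera. A rough sketch of that geometry; the camera height and focal length are placeholder numbers, not my calibration:

```python
def ground_distance(base_row, horizon_row, cam_height_m, focal_px):
    """Distance to where an object touches a flat, level floor.

    base_row: image row (pixels, top = 0) of the object's bottom edge.
    horizon_row: image row of the horizon for a level camera.
    cam_height_m: camera height above the floor, in metres.
    focal_px: focal length in pixels.
    Only valid while the floor really is flat and level.
    """
    pixels_below_horizon = base_row - horizon_row
    if pixels_below_horizon <= 0:
        return None  # base at or above the horizon: not on the floor
    return cam_height_m * focal_px / pixels_below_horizon

# Illustrative numbers: camera 0.4 m up, 700 px focal length, object
# base 100 rows below the horizon -> about 2.8 m away.
print(ground_distance(340, 240, cam_height_m=0.4, focal_px=700))
```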

Build log day 26:
Today I’ve been running a bunch of tests, and obstacle avoidance is working fine as long as the obstacle is in the cameras’ field of view. When obstacles are outside the field of view, however, the Man can still crash into them or trip over them. Finding a solution for that will be the goal for tomorrow.

Build log day 27:
In order to improve obstacle avoidance I have mounted the controller and sensor box on a ball joint and attached two actuators to it. This way it can pan side to side and tilt up and down, which greatly increases the area the cameras can view. I’ve also improved the obstacle detection algorithm so that any object visible to only one camera will still be detected. The Man can then use the new pan-tilt function to focus on it and judge its size and distance.
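
The focusing step is simple in principle: measure how far the object sits from the image centre and nudge the pan and tilt by a fraction of that error each update. A sketch of one such step; the field-of-view numbers and the gain are guesses, not my tuned values:

```python
def center_object(obj_x, obj_y, img_w, img_h, pan_deg, tilt_deg,
                  fov_h_deg=50.0, fov_v_deg=40.0, gain=0.5):
    """One update step of pointing the sensor box at an object.

    obj_x / obj_y: object position in the image (pixels, top-left origin).
    pan_deg / tilt_deg: current angles of the ball-joint actuators.
    Returns new pan/tilt angles that move the object towards the
    image centre. FOV values and gain are illustrative guesses.
    """
    # Pixel offset from the image centre, converted to degrees.
    err_pan = (obj_x - img_w / 2) / img_w * fov_h_deg
    err_tilt = (obj_y - img_h / 2) / img_h * fov_v_deg
    # Proportional step: correct a fraction of the error each update.
    return pan_deg + gain * err_pan, tilt_deg + gain * err_tilt

# Object seen at (600, 200) in a 640x480 frame, box currently centred:
pan, tilt = center_object(600, 200, 640, 480, pan_deg=0.0, tilt_deg=0.0)
print(pan, tilt)  # pans towards the object, small tilt correction
```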

Build log day 28:
I’ve spent the whole day testing the improved obstacle avoidance system. I programmed the Man to walk in a straight line until it got close to a big obstacle or wall, then turn in a random direction and walk in a straight line again. If it encountered a small obstacle along the way, it would step over or around it and continue straight. I set it loose in a large empty room and let this algorithm run. Over time I made the environment more complex by adding obstacles of varying size and shape. It passed the test with flying colors, only stopping when there was almost more obstacle than empty space. With that I have reached the main goal of this project. The secondary goal would be to have it perform some kind of task that forces it to interact with the environment in some way. So far I haven’t figured out what that task could be, so I’m closing the project for now and calling it a success.
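
For reference, the test behaviour boils down to a small decision loop. A simplified Python sketch, with illustrative thresholds rather than the values I actually used:

```python
import random

def wander_step(obstacles):
    """One decision of the test behaviour, given detected obstacles.

    obstacles: list of (size, distance_m) tuples from the vision
    system. The distance thresholds are illustrative, not my tuning.
    """
    for size, dist in obstacles:
        if size == "large" and dist < 0.5:
            # Big obstacle or wall ahead: pick a random new heading.
            return ("turn", random.uniform(60, 180))
        if size == "small" and dist < 0.3:
            # Small obstacle: step over it (or sidestep around it).
            return ("step_over", None)
    return ("walk_forward", None)

print(wander_step([("small", 0.25), ("large", 2.0)]))  # ('step_over', None)
```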
