This is a continuation of God’s Build Log, a kind of story-esque thought experiment asking the question “if God were an engineer, how would he go about building the first human being?” I want to reiterate the disclaimer I made in my original post that while this has some basis in the Bible – and is somewhat satirical in nature – it is not meant as an insult against God, Christianity, or religion in general. With that out of the way, let’s continue where I left off last time.

Build Log Day 37:
It’s been a little over a week since I completed the main goal of the project. Since then I’ve been thinking about a suitable task for the Man to perform and thus complete the secondary goal. Last night while lying in bed I figured it out. Since the Man is dependent on consuming biomatter that it converts to energy for powering the rest of the system, I decided that the task should be to search out and collect suitable biomatter that it can then insert into the energy converter. My first step towards reaching this goal was to program an object recognition algorithm for the Ocular cameras. The easiest, but least robust, way would be to use some simple pattern matching based on a single image of each type of object I want it to recognize. However, this would completely defeat the purpose of having the B.R.A.I.N; I could just use a standard single board computer. What I ended up doing instead was to build a database containing a couple of hundred photos of common objects, where each photo is categorized as a certain object type, for example 100 photos labeled as “apple”. I then set the B.R.A.I.N to process these overnight; we will see how it works tomorrow.
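The labeled-photo database described above can be sketched roughly like this in Python. The filenames, labels, and the `build_database` helper are all illustrative; the real B.R.A.I.N training interface isn’t documented in the log:

```python
# A minimal sketch of the labeled training database (all names hypothetical).
# Each category maps to a list of photo files; training pairs of
# (photo, label) are what the recognizer would process one by one.

def build_database(categories):
    """categories: dict mapping a label to a list of photo identifiers.
    Returns a flat list of (photo, label) training pairs."""
    pairs = []
    for label, photos in categories.items():
        for photo in photos:
            pairs.append((photo, label))
    return pairs

# Roughly "100 photos labeled as apple", plus a second example category.
db = build_database({
    "apple": [f"apple_{i:03d}.jpg" for i in range(100)],
    "stone": [f"stone_{i:03d}.jpg" for i in range(100)],
})
```

An overnight training session would then just be one or more passes over `db`, showing each photo together with its label.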

Build Log Day 38:
I started the day with a quick test of the object recognition algorithm. Initial results are promising, but as soon as the objects I was showing it were at strange angles, in bad lighting, or in a very cluttered environment, it had problems recognizing them. I think this is a matter of further training for the B.R.A.I.N. I need to add more photos to the database and run another training session. Since that takes time I postponed it until tonight. Meanwhile I added the object recognition algorithm on top of the obstacle avoidance algorithm. I then set up a kind of maze with a mix of obstacles as well as a few objects I know the Man can recognize. I programmed it to move around randomly in this room, staying away from any obstacles. Whenever it recognized an object, I set it to stop moving so I could connect the little diagnostic display that comes with the B.R.A.I.N and read out the object category. I did a few test runs, and in my controlled environment it worked roughly 88% of the time. There were a few instances of it misrecognizing the object type, and a few where the Man simply recognized the object as an obstacle and avoided it instead of stopping. Again, I think this can be fixed with further B.R.A.I.N training. That’s it for this log entry. Now I will add a bunch more photos to the database, start the training, then head to bed.
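The layered behavior in the maze test (recognition on top of obstacle avoidance) amounts to a simple priority rule per camera frame. A sketch, assuming a hypothetical `perception` dict since the actual B.R.A.I.N output format isn’t described:

```python
def step(perception):
    """Decide the next action from one frame's analysis.
    perception: hypothetical dict with 'object' (a label, or None if
    nothing was recognized) and 'obstacle_near' (bool)."""
    if perception.get("object") is not None:
        # Recognized object wins: stop so the diagnostic display
        # can be connected and the category read out.
        return ("stop", perception["object"])
    if perception.get("obstacle_near"):
        return ("turn", None)      # obstacle avoidance takes over
    return ("wander", None)        # otherwise keep moving randomly
```

Note that the priority order also explains the observed failure mode: if recognition misses and only the obstacle check fires, the Man avoids the object instead of stopping.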

Build Log Day 40:
I’ve spent two days building up the database with more pictures and training the algorithm over and over again. Now it’s able to correctly recognize objects 99.9% of the time, even when the visual environment is complex, i.e. cluttered and with sub-optimal lighting. The only time it fails is when it mistakes an object for a similar one, which I find acceptable. The only real problem now is that the database only contains a few object types.

Build Log Day 41:
I added a classification in the database for consumable biomatter and non-consumable objects. I also installed a temporary signaling system consisting of a green and a red LED on top of the control box. I then set up the maze with several objects spread out around the room, some consumable and some not. I set the Man to move around randomly in this room. Whenever it identified an object it would classify it and switch on the corresponding LED, green for consumable and red for not. I would then acknowledge the result and let it continue moving. I’ve been running this test for the last couple of hours, increasing complexity little by little. The overall result is promising, although it did start making mistakes when the complexity got too high.
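The consumable/non-consumable classification plus LED signaling reduces to a set lookup. A minimal sketch; the category names are invented examples, since the log doesn’t list the actual database contents:

```python
# Hypothetical contents of the "consumable biomatter" classification.
CONSUMABLE = {"apple", "bread", "carrot"}

def signal_for(label):
    """Map a recognized object's label to the LED that should light up:
    green for consumable biomatter, red for everything else."""
    return "green" if label in CONSUMABLE else "red"
```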

Build Log Day 42:
I think the object recognition algorithm is good enough for now. The next step is to get the Man to pick up objects it has identified. To this end I built a simple gripper with a single joint. It worked fine for the first test piece, an oblong block of wood I had lying around. When I tested it on differently shaped objects, however, it didn’t work. I can easily design a gripper for almost any object shape, but that’s not good enough; it has to be usable for a number of different shapes. I tried for many hours, but try as I might I couldn’t construct a gripper that covers the whole range of shapes I have in mind. I am now working on a multi-jointed gripper that can adapt better to different shapes.

Build Log Day 45:
I’ve spent the last three days iterating through different designs for the object gripper. Listing all the failed designs here would be a waste of time. Instead I will describe the most successful one. It consists of a circular base with a total of four multi-jointed rods that I call “fingers” spread out along the circumference. These fingers can form a variety of grips depending on the object the Man wants to grip. It’s all controlled by the Muscle Inc. actuators I’ve been using.

Build Log Day 46:
I replaced one of the balancing feet on the Man with my gripper. When I went to test it, I immediately ran into a problem I hadn’t thought of while working on the bench: having the actuators directly on the gripper made it too big and bulky to be effectively angled the way the balancing feet are. I ended up spending the rest of the day repositioning the actuators for the fingers, as well as the ones controlling the angle of the ball joint, higher up on the limb. Basically, the lower half of each limb now houses all the actuators for the gripper. This way the gripper can be slimmed down by a lot. Normally the Muscle actuators are connected directly to the limbs they actuate, but repositioning them means I need to find a way to connect them to the fingers. It’s late now, so that will be my task for tomorrow.

Build Log Day 47:
This morning I started connecting the gripper actuators to the fingers using steel rods. The first finger was fine, but already on the second finger I realized it wasn’t going to work. Sooner or later the rods were bound to interfere with each other. I made a quick trip down to the nearby hardware store and found some flexible steel wire that I could use instead. I got the wire from a brand called Tend-On, but I’m sure there must be others out there that are just as good. Anyway, using wires instead of rods meant pulling on the fingers rather than pushing, so I had to flip all the actuators by 180 degrees. Once that was done I set about connecting the steel wires. Since they are flexible I could route them quite nicely inside the fingers, out of the way. I only had time for some quick tests, but this setup shows promise.

Build Log Day 48:
I spent the day writing a control algorithm for my gripper. I started by calculating the actuator output needed to set the angle of any one finger joint. I then manually built a big table containing all the finger joint angles for each position the gripper should be able to take. Finally, I made a simple algorithm that looks up the required angles from the table and feeds them through the actuator output function. I admit it’s a rather inelegant solution, but it works. The main downside is that all the finger joint angles have to be manually adjusted.
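The table-lookup approach described above can be sketched like this. The grip names, the two-joints-per-finger layout, and the linear angle-to-actuator scaling are all assumptions for illustration:

```python
# Hypothetical grip table: grip name -> four fingers, each with
# two joint angles in degrees. Every entry was tuned by hand,
# which is the inelegant part.
GRIP_TABLE = {
    "open":  [(0, 0)] * 4,
    "pinch": [(45, 30)] * 4,
    "wrap":  [(70, 60)] * 4,
}

def actuator_output(angle_deg, counts_per_degree=10):
    """Convert a joint angle to an actuator command.
    Assumes a linear scaling; the real calibration isn't documented."""
    return angle_deg * counts_per_degree

def commands_for(grip):
    """Look up all joint angles for a grip position and feed each
    through the actuator output function."""
    return [tuple(actuator_output(a) for a in finger)
            for finger in GRIP_TABLE[grip]]
```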

Build Log Day 49:
Last night I had a brilliant idea for how to control my grippers. Instead of having fixed positions, I can make the gripper far more dynamic by letting it adapt to the object it’s trying to grip. This means that all the work I did yesterday is essentially useless, but I really think throwing it out and starting over will be worth it. In order for the gripper to be able to adapt, it needs some kind of feedback. I spent several hours today researching and found some piezoelectric pressure sensors from Nerve Inc. that I think can work. I put in an order for a bunch of them; I just hope they will deliver quickly.

Build Log Day 51:
While waiting for the Nerve pressure sensors to arrive, I’ve started working on the next step: targeting an object with the gripper. I first programmed what I call a grasping stance. This is a position where three of the limbs are placed in such a way that the Man can balance with the fourth limb up in the air. I did a couple of simple tests moving the fourth limb around to see if it would throw off the balance. Happy to report that the Man stayed upright no matter how I moved the limb. I then made an algorithm that uses the Ocular cameras to estimate the position of the target object. The gripper moves at high speed to an area close to the target, based on this estimate. It then homes in on the target at reduced speed, adjusting its position based on feedback from the cameras, until the pressure sensors on the gripper register contact. At this point the gripper can close around the object with the desired force. I wired up a simple pushbutton as a stand-in for the pressure sensors, just to register contact, then ran a few tests. My method shows promise, but it’s far slower than I hoped. It seems the position estimated by the cameras isn’t very precise, and it has to do a lot of adjusting during the final approach towards the object.
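The two-phase approach (fast coarse move, then slow camera-guided homing until contact) can be sketched in one dimension. The positions, step size, and contact radius are illustrative stand-ins for the real hardware:

```python
def approach(estimate, target, contact_radius=0.5, fine_step=1.0):
    """Two-phase approach, simplified to 1-D for the sketch.
    estimate: the (imprecise) camera estimate of the target position.
    target:   where the object actually is.
    Jumps to the estimate at high speed, then creeps toward the target
    in small steps until the contact sensor would trigger.
    Returns the path of positions visited."""
    position = estimate            # high-speed move to the estimated area
    path = [position]
    while abs(target - position) > contact_radius:   # no contact yet
        direction = 1 if target > position else -1
        # Slow, camera-guided correction toward the object.
        position += direction * min(fine_step, abs(target - position))
        path.append(position)
    return path
```

The slowness observed in the log falls out of this structure: the worse the camera estimate, the longer the slow fine-approach phase takes.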

Build Log Day 52:
Still waiting for the pressure sensors to arrive. Meanwhile I’ve been sketching out an idea for how to improve the speed of object targeting. The basic concept is to train the B.R.A.I.N just like with object recognition, but I haven’t worked out all the details yet.

Build Log Day 54:
Yesterday I worked out the details of my improved object targeting concept. The key is to get the B.R.A.I.N to coordinate the cameras and the gripper so that the cameras continually track the gripper while it moves, and simultaneously adjust the gripper’s movement to make it reach a desired target. I started training the B.R.A.I.N to do this, but so far it seems it’s not making any progress.

Build Log Day 55:
I’ve figured out why the accuracy of the camera-gripper coordination wasn’t improving yesterday. Object recognition is a very simple task for the B.R.A.I.N; it can learn to distinguish an object type by simply processing (or “looking at”) enough photos of such objects. Tracking the gripper is a much more complex task to learn. It’s not enough for the B.R.A.I.N to estimate the gripper’s position relative to the target object, a task which is difficult enough in itself; it also has to adjust the gripper’s trajectory to close the gap between gripper and object. To learn that, it needs to know whether a certain adjustment brings the gripper closer to the object or not. Simply put, it wasn’t improving because it didn’t know if the trajectory adjustments were good or bad. To solve this problem I’ve reactivated one of the built-in functions of the B.R.A.I.N that I had disabled because I wasn’t using it. The Reward Center lets me give the B.R.A.I.N a reward for making a “good” adjustment and punish it for a “bad” one. For now I need to manually judge if an adjustment is good or bad, but if this strategy works I plan to automate it in the future.
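Conceptually, the Reward Center is a reinforcement signal: adjustments followed by a reward get preferred, adjustments followed by a punishment get avoided. A deliberately tiny sketch, not the real B.R.A.I.N internals, with a hypothetical preference table:

```python
def train_step(adjustment, rewarded, weights):
    """One Reward Center update (conceptual sketch only).
    adjustment: label of the trajectory adjustment just made.
    rewarded:   True for a "good" judgement, False for a "bad" one.
    weights:    dict mapping adjustment -> preference score, which
                a policy would later sample from."""
    if rewarded:
        weights[adjustment] = weights.get(adjustment, 0) + 1   # reward
    else:
        weights[adjustment] = weights.get(adjustment, 0) - 1   # punish
    return weights
```

With the two physical buttons from Day 57, `rewarded` is simply which button the operator pressed after watching the adjustment.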

Build Log Day 56:
This morning the Nerve pressure sensors finally arrived! I did a small test on one to find the best way to connect it, then set about attaching them to the gripper and wiring them in. Their small size meant it took a bit of fiddling, but now they’re all in place and connected. That took me the whole day, so I will work on the upgraded gripper algorithm tomorrow.

Build Log Day 57:
I’ve decided to focus on targeting first and work on the gripper once that’s done. This morning I started training the B.R.A.I.N using the reward / punishment method I came up with two days ago. I hooked up two buttons to the B.R.A.I.N’s digital I/O interface, one for reward and one for punishment. I then set an object in front of the Man and gave it the instruction to target it. Once it had reached the target, I would manually reward or punish the B.R.A.I.N according to my own judgement. I lost count after a while, but I managed to complete roughly 150 such targeting runs before lunch. I could already see a slight improvement in its performance, so I decided to stick with this method, but I realized it will need many more training runs. I spent the afternoon trying to come up with a way to hand out rewards and punishments automatically. I have a few ideas but haven’t settled on one yet.

Build Log Day 58:
When I woke up this morning I had it figured out. If gripper targeting works well, the gripper should reach the object within a short period of time. The simplest way to train the B.R.A.I.N must be to reward it for reaching the target fast and punish it for being slow. I made a small training loop where it measures the time from reaching out until the Nerve pressure sensors only just register contact with the object. If the latest time is shorter than the previous attempt’s, it gets a reward; if the time is longer, it gets punished. It then retracts the limb back to the original position and starts over. I’ve set it to run this loop 1000 times, then I manually move the object and start again. Right now it’s at roughly 700 repetitions on the first object position. Targeting speed is slowly increasing, so it looks promising. Running this many repetitions does take time, however, so we will see how many I can get through before the end of the day.
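The automated compare-against-previous-attempt rule can be replayed offline over a list of measured reach times. A sketch, with the signal encoding (+1 reward, -1 punishment) as an assumed convention:

```python
def run_training(attempt_times):
    """Replay the automated training loop over measured reach times
    (seconds from reach-out to first sensor contact).
    Reward (+1) when an attempt beats the previous one, punish (-1)
    when it is slower or equal. The first attempt has no baseline,
    so it produces no signal."""
    signals = []
    previous = None
    for t in attempt_times:
        if previous is not None:
            signals.append(1 if t < previous else -1)
        previous = t
    return signals
```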

Build Log Day 58, Continued:
Everything was going smoothly for the first three object positions. On object position number 4, however, I started seeing some unforeseen behavior. The limb was moving so fast that the gripper wouldn’t just touch the object like I intended, but would actually knock it over or even send it flying a short distance. After analyzing the problem I came to the conclusion that at those speeds, the limb is unable to decelerate quickly enough after contacting the object. I then added a rule to my training algorithm so that the B.R.A.I.N gets punished if it knocks the object over. Right after implementing this rule, targeting speeds went down dramatically. I’ve run the training loop for 1000 repetitions since then and it’s still a lot slower than before. I think it needs further tweaking.

Build Log Day 59:
When I started using rewards and punishments for training, it was strictly binary, i.e. the B.R.A.I.N was either punished or rewarded. Today I improved the system by creating reward and punishment points that can be handed out according to performance. Applying this to the gripper targeting training, I can give out reward points for high speed and punishment points both for low speed and for knocking the object over. The B.R.A.I.N will then be forced to optimize the gripper trajectory so that it reaches the object quickly without knocking it down. I’ve run 1000 repetitions of the training loop with this improved system, and the targeting performance is improving quickly. It still needs more training before I’m completely satisfied, however.
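The graded point system combines the two objectives into one score. A sketch of such a scoring function; the time budget, scaling factor, and knock-over penalty are illustrative constants, not values from the log:

```python
def score_attempt(reach_time, knocked_over,
                  time_budget=2.0, speed_scale=10, knock_penalty=20):
    """Graded reward/punishment points for one targeting attempt.
    Positive points for finishing under the time budget, negative
    points (proportionally) for being slow, and a large flat penalty
    for knocking the object over. All constants are assumptions."""
    points = (time_budget - reach_time) * speed_scale
    if knocked_over:
        points -= knock_penalty
    return points
```

Because the knock-over penalty outweighs a moderate speed gain, the B.R.A.I.N is pushed toward trajectories that are fast but still decelerate in time, which is exactly the trade-off Day 58’s binary rule couldn’t express.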
