CORTEX: 3DOF Robotic Arm Controller Software

We wanted to build a robotic arm that could make our lives as engineering students a little easier: one that could analyze pictures of mechanical drawing homework and draw them.

Though the arm never became a fully functional homework-copying machine, we ended up building a Python-based robotic arm controller which is capable of controlling almost any 3DOF robotic arm.

CORTEX in action

Here’s how we built it…

Hardware and Electronics

For the hardware and electronics, the assembly was quite simple. We thought one Arduino Uno and three TowerPro MG995 servo motors would be enough to make the robot draw things quite smoothly, since I (Abrar) had built a similar type of robot three years earlier with three SG90 servos, which could… technically… draw 🙂

Robotic arm with SG90 servos that I (Abrar) built in 2019

So… we thought that if SG90s could produce this kind of drawing, MG995s should be able to draw well enough to copy homework. But we were wrong 🙂. Anyway, we proceeded with the design, and the result looked like this.

Hardware design for 3DOF robotic arm for drawing using MG995 servos

The electronics design was very simple: just three servo motors connected to the Arduino's PWM pins, with a suitable power supply.

Circuit Diagram for 3DOF robotic arm for drawing

But after failing to make the arm draw precisely, we shifted to a design where the robot could perform pick-and-place tasks with the help of object detection. The hardware design looked like this.

Hardware Design for 3DOF robotic arm for pick & place

We just had to replace the pen with a gripper, so an extra servo was needed to operate it. The circuit diagram looked like this.

Circuit Diagram for 3DOF robotic arm for pick & place

The best hardware design for drawing would have been a Cartesian robot with stepper motors, but we wanted the robot to be cheaper and multi-functional. That's why we chose this hardware design.

Software

The GUI of the software was programmed using Python's Tkinter library. The link to the GitHub repository is given at the end of this article. The software.py file inside the src directory contains all the code related to the GUI. The software currently has three modes (a minimal sketch of this kind of mode switching follows the list):

  • Writing Mode (Plotter Functionality)
  • Pick & Place Mode
  • Manual Control Mode
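
Here's a minimal, hypothetical sketch of that kind of mode switching in Tkinter; the widget layout here is ours, and the actual GUI in software.py is much richer:

import tkinter as tk

# Hypothetical skeleton of a mode-switching window; the real GUI in
# software.py is far more elaborate than this.
root = tk.Tk()
root.title("CORTEX")

modes = ("Writing Mode", "Pick & Place Mode", "Manual Control Mode")
currentMode = tk.StringVar(value=modes[0])

for mode in modes:
    tk.Radiobutton(root, text=mode, variable=currentMode, value=mode).pack(anchor="w")

tk.Label(root, textvariable=currentMode).pack()
root.mainloop()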

The plotter functionality is the most complicated part, and the working principles of the writing mode carry over to the other two modes. So, we'll discuss the writing mode in detail.

Coordinate extraction from images:

For copying homework, we need some references, i.e., other students' drawings 😜. We then analyze the image to extract coordinates, feed those coordinates into the inverse kinematics solver functions, and make the robot draw. Engineering drawings are basically grey pencil marks on white paper, so we thought it would be quite easy to get the coordinates of the darker pixels from a sample image. But when we collected the darker pixels with OpenCV, there was a huge number of them, and many coordinates were clustered together because of the thickness of the lines. So, we came up with a solution: skeletonizing!

Skeletonizing thins those thick lines down to one pixel wide. You can read more about skeletonizing here.

Result of skeletonize function
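
A minimal sketch of this step, assuming OpenCV for thresholding and scikit-image's skeletonize function (the file name drawing.png is a placeholder; the actual code lives in skeletonize.py):

import cv2
import numpy as np
from skimage.morphology import skeletonize

# Load the drawing in grayscale and binarize it so that dark pencil
# marks become the foreground.
image = cv2.imread("drawing.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY_INV)

# Thin every stroke down to a one-pixel-wide skeleton.
skeleton = skeletonize(binary > 0)

# Collect the (x, y) coordinates of the remaining dark pixels.
ys, xs = np.nonzero(skeleton)
coordinates = list(zip(xs.tolist(), ys.tolist()))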

After skeletonizing and collecting the coordinates of the dark pixels, we needed to sort the coordinates into an order suitable for the arm to draw. The unsorted order was fine for a printer, which scans line by line, but not for a robotic arm. This was also a challenge. We first found the coordinate closest to the (0, 0) point, then searched for the coordinate closest to that one, and repeated the same process for the remaining coordinates.

Coordinate sorting algorithm of CORTEX

If the distance between two consecutive coordinates was considerably long, we made the robot lift the pencil a little above the paper while it moved to the next coordinate. The code is in the skeletonize.py file in the src folder of the GitHub repository.
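
A sketch of that greedy nearest-neighbor sort (the function name is ours; the repository's implementation may differ):

import math

def sort_coordinates(coordinates, start=(0, 0)):
    # Greedy nearest-neighbor ordering: start from the point closest to
    # `start`, then repeatedly jump to the closest unvisited point.
    remaining = list(coordinates)
    ordered = []
    current = start
    while remaining:
        nearest = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nearest)
        ordered.append(nearest)
        current = nearest
    return ordered

While drawing, any jump longer than a small threshold then triggers the pen lift described above.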

Inverse Kinematics for Drawing (Plotter Functionality):

If we want to make the robot draw, we need to rotate the three servo motors to precise angles so that the tip of the pencil touches specific points on the paper. How do we know the angles needed? Let's discuss.

Marked image of the hardware

The above is a marked image of the important parts of the robotic arm, which will be needed in the explanation that follows.

Determination of the angle for the base servo

If we think of a point B(x, y) in the XY plane, with the robot's base at the A(0, 0, 0) position in 3D space, the angle needed for the servo at the base position (S1), marked by α, can be derived like this:
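
In symbols (in code, atan2 keeps the quadrant right):

\alpha = \arctan\left(\frac{y}{x}\right), \qquad r = \sqrt{x^2 + y^2}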

Now that we've got α and r, we can start working on the other angles. When the tip of the pencil touches the B(x, y, 0) point, the robot forms a geometric shape like this when seen from the side.

Here, AF = b is the height of the base, FG = a is the length of the upper arm, GB = f is the length of the forearm, and the tip of the pencil is connected at point B(x, y, 0). Looking at the marked image of the hardware might help in understanding this.

Now, to determine the angle β for the servo (S2) at the shoulder joint, we need to connect F and B, which makes β = θ + δ.

Determination of the angle for the shoulder and elbow joint servo

θ and δ are determined like this:
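
One consistent reading of the geometry, with g = FB being the straight-line distance from the shoulder joint to the pencil tip and θ measured from the base column, is:

g = \sqrt{r^2 + b^2}, \qquad \theta = \arctan\left(\frac{r}{b}\right), \qquad \delta = \arccos\left(\frac{a^2 + g^2 - f^2}{2ag}\right)

(δ comes from the law of cosines in triangle FGB.)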

Now, determining the angle γ for the servo (S3) at the elbow joint is also very simple:
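
Applying the law of cosines once more to the same triangle FGB:

\gamma = \arccos\left(\frac{a^2 + f^2 - g^2}{2af}\right)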

These equations are used only for drawing, where the Z coordinate is always 0. But for full inverse kinematic control, where the Z coordinate is not 0, we need to add another step to find g and θ.

Here, H(x, y, z) is the desired point where we want our end effector to be. So the process of finding g and θ becomes:
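
A sketch, assuming the target stays below the shoulder height b (otherwise atan2 is needed for θ):

g = \sqrt{r^2 + (b - z)^2}, \qquad \theta = \arctan\left(\frac{r}{b - z}\right)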

But this step has not been integrated into the software yet, because we're planning to shift to the Denavit-Hartenberg method of calculating inverse kinematics for more efficient control, as we're planning on controlling 6DOF robotic arms. The equations used to calculate the motor angles for coordinates in the XY plane are simple high-school trigonometric formulas and are easy to understand. So, even though they're less efficient, we implemented them, because at least we understood what we were doing 🙂.

The calculation explained above can determine the three servo motor angles needed to put the end effector at any position (inside the robot's working range) in 3D space. We just have to input the necessary parameters (the lengths of the three arm segments) on the settings page of the GUI.
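
Putting the pieces together, here's a minimal Python sketch of such a solver; the function and parameter names are ours, not the repository's, and the angle conventions follow the reading above:

import math

def solve_ik(x, y, z, a, b, f):
    # a: upper-arm length, b: base height, f: forearm length.
    # Returns (alpha, beta, gamma) in degrees for a reachable target point.
    alpha = math.atan2(y, x)        # base rotation
    r = math.hypot(x, y)            # horizontal reach
    g = math.hypot(r, b - z)        # shoulder-to-target distance FB
    theta = math.atan2(r, b - z)    # angle of FB from the base column
    # Law of cosines in triangle FGB for the shoulder and elbow angles.
    delta = math.acos((a**2 + g**2 - f**2) / (2 * a * g))
    gamma = math.acos((a**2 + f**2 - g**2) / (2 * a * f))
    beta = theta + delta
    return math.degrees(alpha), math.degrees(beta), math.degrees(gamma)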

Settings page of CORTEX

Pick and Place Functionality:

GUI interface for pick and place mode

As we know, a video is a sequence of frames, so we loop through every frame of the camera feed. For each frame, we apply a blur function to remove noise, then convert the image into HSV format so that we can use color detection, and filter for specific colors to detect the object. After that, we use OpenCV's built-in contour function to find the contours in the image, and apply a minimum-area filter to pick out the object and get its center point. Finally, we use OpenCV's moments function to get the center coordinates. The following code was used for getting the center coordinates.

# The centroid of a contour comes from its image moments: (m10/m00, m01/m00).
moments = cv2.moments(contour)
cx = int(moments["m10"] / moments["m00"])
cy = int(moments["m01"] / moments["m00"])

After getting the center coordinates, we had to make some adjustments to map the coordinates returned from the image to real-world coordinates. We then passed those coordinates to the inverse kinematics solver function and moved the robot joints accordingly. But the object detection is not always perfect, so we also programmed a manual coordinate input with which pick-and-place tasks can be performed by hand. The code for this object detection is inside the objectdetector.py file.
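
A condensed sketch of that per-frame pipeline (the HSV bounds and minimum area here are illustrative placeholders, not the values from objectdetector.py):

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
# Example HSV range for a blue object; tune per object color.
lower, upper = np.array([100, 150, 50]), np.array([130, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Blur to suppress noise, then switch to HSV for color filtering.
    blurred = cv2.GaussianBlur(frame, (11, 11), 0)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)
    # Find contours and keep only blobs above a minimum area.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) < 500:
            continue
        moments = cv2.moments(contour)
        cx = int(moments["m10"] / moments["m00"])
        cy = int(moments["m01"] / moments["m00"])
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()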

If you watch the video of the pick-and-place functionality, you'll notice that the movement is comparatively smooth and stable considering the cheap servo motors. We used a sine-based motion profile to move the motors from the initial point to the desired point.

The program used for the smooth movement of the motors is the following.

# Sweep i from 0° to 90°: sin(i) eases each servo from its previous
# angle toward (previous angle + distance), decelerating smoothly as it
# approaches the target.
for i in range(0, 91, 1):
    servo1Angle = int(previousAngles[0] + distances[0] * math.sin(math.radians(i)))
    servo2Angle = int(previousAngles[1] + distances[1] * math.sin(math.radians(i)))
    servo3Angle = int(previousAngles[2] + distances[2] * math.sin(math.radians(i)))
    sendData(servo1Angle, servo2Angle, servo3Angle)
    time.sleep(0.01)

The code related to servo motor control is in the servocontroller.py file.

Manual Control:

GUI for Manual Control

The manual control mode is a very simple functionality where every motor can be controlled individually with sliders. From the angles of the servo motors, we can then calculate the position of the end effector with simple trigonometry as well, as sketched below.
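
A minimal sketch of that forward kinematics, assuming the same angle conventions as the inverse kinematics sketch above (β measured from the base column, γ as the interior elbow angle):

import math

def forward_kinematics(alpha, beta, gamma, a, b, f):
    # Angles in degrees; a, b, f as defined in the drawing section.
    alpha, beta, gamma = map(math.radians, (alpha, beta, gamma))
    # Horizontal reach and height of the end effector in the side plane.
    r = a * math.sin(beta) - f * math.sin(beta + gamma)
    z = b - a * math.cos(beta) + f * math.cos(beta + gamma)
    # Rotate the reach into 3D with the base angle.
    return r * math.cos(alpha), r * math.sin(alpha), z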

Conclusion

The software is open source under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. We wish to make this software capable of controlling 6DOF robotic arms with better hardware, since the robot could not draw because the MG995 servos failed to produce accurate angles. We will start working on that very soon.
