1. General Information
2. Introduction Video
4. List of 3D-Printed Parts
5. Hardware Shopping List
6. Assembly Instructions
7. Connecting Electronics
8. Servo Calibration with Pololu Maestro Control Center
9. Software Installation and Configuration
9.1 The App: Introduction
9.2 App Download Links
9.3 What's New in Version 2.0 vs. 1.0
9.4 Installing the App on Raspberry PI
9.5 Installing the App on a Regular Windows 10 PC
9.6 Configuring and Running the App
9.7 Fine-Tuning the Configuration Parameters
10. Frequently Asked Questions
1. General Information
This 3D-printed Raspberry Pi-powered Rubik's Cube solving robot has everything any serious robot does -- arms, servos, gears, vision, artificial intelligence and a task to complete. If you want to introduce robotics to your kids or your students, this is the perfect machine for it.
This one-eyed four-armed giant stands 35cm (14") tall. 70 hours of print time and an entire spool of filament are needed to print it, not to mention over $200 worth of hardware, but once fully operational, it will surely wow your friends and neighbors. Scramble your Rubik's cube, place it in the robot's grippers, press a button, and then sit back and watch this amazingly smart and stunningly beautiful machine put it back together. Watch the video below!
This robot is fully 3D-printable. Other than the servos, servo horns, camera, electronics and a few dozen bolts and nuts, it has no traditionally manufactured parts. Absolutely no soldering or breadboarding is required.
This product is distributed under the following license:
Creative Commons - Attribution - Non-Commercial - No Derivatives
For educational and commercial licensing, please contact us.
4. List of 3D-Printed Parts
Item                 Quantity   Print Time (min.)   Total Time (min.)   Filament (gr.)   Total Filament (gr.)
rcr_arm              4          186                 744                 54.44            217.76
rcr_slider           4          145                 580                 32.26            129.04
rcr_pinion           4          21                  84                  5.03             20.12
rcr_rack             4          46                  184                 13.96            55.84
rcr_corner           8          55                  440                 15.68            125.44
rcr_gripper          4          38                  152                 7.16             28.64
rcr_leg              2          271                 542                 82.37            164.74
rcr_nut              4          15                  60                  1.65             6.60
rcr_long_bolt        2          88                  176                 6.13             12.26
rcr_short_bolt       2          84                  168                 6.17             12.34
rcr_rod              8          79                  632                 4.96             39.68
rcr_clamp_half1      4          13                  52                  2.98             11.92
rcr_clamp_half2      4          13                  52                  2.91             11.64
rcr_camera_holder    1          132                 132                 37.69            37.69
rcr_camera_cover     1          30                  30                  8.64             8.64
Total Print Time: 4,028 min. (67 hours 08 min.)
Total Filament Required: 882 gr.
- To use the servo motor TowerPro MG996R instead of Hitec HS-311 (see hardware list below), print rcr_pinion2.stl instead of rcr_pinion.stl.
- For the "wide" camera (see hardware list below), print rcr_camera_holder2.stl instead of rcr_camera_holder.stl.
- For the "wide" camera, print 4 of rcr_rod_short.stl and 4 of rcr_rod.stl instead of 8 of rcr_rod.stl.
As of Jan 03, 2018, the 3D-printed parts for the robot are available for sale. The price for the entire set is $449.00. Shipping is extra. Hardware is not included. Contact us for details if you are interested.
5. Hardware Shopping List
Quantity   Item                                                              Price Per Item (approx.)
4          DS3218 Servo Motor with Horn                                      $20.00
4          150 mm Servo Extension Lead, Male-to-Female                       $1.00
4          Hitec HS-311 Servo Motor (or TowerPro MG996R Servo Motor)         $5.00
1          Raspberry Pi 3 Model B Quad-Core (optional)                       $35.00
           Note: This part is optional. The app can also run on a regular Windows 10 PC.
1          Pololu Mini Maestro 12-Channel USB Servo Controller (Assembled)   $24.00 - $30.00
1          USB HD 12 Megapixel Webcam                                        $4.00
           Note: As of December 11, 2017, we recommend the "wide" webcam shown here instead of the old "round" one shown below. The wide webcam has much better color reproduction than the old one, even in poor lighting conditions. Search eBay, Amazon, etc. for "webcam" and look for this distinctive shape. Print rcr_camera_holder2.stl instead of rcr_camera_holder.stl for this camera.
           Old camera: USB 5 MP or 12 MP Webcam with 6 LEDs. Do NOT buy the TechNet brand; it does not seem to work with our app.
1          6V, 3A (3000 mA) power source, wall-plugged or rechargeable       $8.00
           Note: Our robot uses the SMAKN power supply adapter (shown here), with the round plug replaced by two female connectors that plug into the Pololu servo controller. The replacement was done by a competent technician proficient in soldering. Use a wall plug at your own risk. See the Connecting Electronics section below for more information.
1          Standard-Size Rubik's Cube                                        $12.00
           Note: Without this item, the robot is completely useless. We recommend the stickerless, smooth-operation variety. Our color recognition code was only tested with the standard (original) colors shown here. Please do NOT use a speed cube!
76         Metric M3-12 Phillips-head Countersunk Bolts                      $0.06
36         Metric M3 Nuts                                                    $0.06
10         Small 2mm Wood Screws or Metric M2x8 Bolts
           Note: Eight of these attach the HS-311 horns to the servos, and two more attach the Pololu servo controller to the back side of the camera holder.
Total Cost of Hardware (approx.): $150.00 - $200.00.
6. Assembly Instructions
Attach the round horn that comes with the HS-311 servo to pinion with two small 2mm wood screws or two metric M2x8 bolts. If you are using the TowerPro MG996R servos, use rcr_pinion2.stl instead of rcr_pinion.stl as the black round servo horn that comes with the MG996R is smaller and has the mounting holes closer to the center.
Insert the single-armed horn that comes with the DS3218 servo into gripper. Secure with two metric bolts. Screw in the bolt closer to the center first.
Insert the DS3218 servo into slider. Secure with 4 metric bolts and nuts.
Insert rack into slider, with the servo cable running in the triangular recess in the bottom of the rack. Align the holes. Secure with 6 metric bolts.
Insert the HS-311 servo into arm. The servo's shaft must be aligned with the round hole on the other side of the arm. Secure with 4 metric bolts and nuts.
To secure the slider in place, install the pinion onto the HS-311 servo's shaft and secure it with an axis bolt that came with the servo. Note that during the calibration phase the pinion may need to be removed, slider adjusted, and pinion replaced.
Repeat Steps 1 to 6 to assemble the other three arms.
Using the 8 corners, assemble the 4 arms into a single unit.
Set the assembly obtained in Step 7 onto the two legs and align the holes. Insert the pair of long_bolts into the bottom holes and the short_bolts into the top holes. Secure all 4 bolts with nuts. The heads of the bolts must be on the same side as the HS-311 servos (the far side in the picture below), with the nuts on the opposite side (the near side in the picture below.)
Screw four rods into the heads of the long and short bolts tightly. For the "wide" camera, use rcr_rod_short.stl instead of rcr_rod.stl to bring the camera closer to the cube.
Screw four other rods (rcr_rod.stl) into the slots of the camera holder tightly. If you are using the new wide camera, use camera_holder2.stl:
If you are using the old round camera, use camera_holder.stl instead:
Position the camera holder in such a way that the ends of the rods attached to it are in close proximity to the ends of the rods attached to the main unit. The slit in the camera holder must point downwards. Using the clamp_halves, connect the 4 pairs of rod ends. Secure the clamp halves with the metric bolts and nuts.
New wide camera: Detach the camera body from the clip: remove two small round stickers covering a pivot bolt connecting the camera to the clip, and then unscrew the bolt with a screwdriver.
Old round camera: Remove the stand and semi-circular ring from the camera using a small screwdriver.
Insert the camera into the niche in the camera holder. Run the camera cable through the slit in the camera holder. Secure the old round camera with camera_cover. The new wide camera does not need a cover. Plug the camera into the Raspberry PI's USB port.
Install the grippers onto the DS3218 servos, secure with axis bolts that came with the servos. Do not tighten the servo horn clamps just yet as the positions of the grippers may need to be adjusted during the calibration phase.
7. Connecting Electronics
Thanks to the Pololu Mini Maestro servo controller, there is absolutely no need for PCBs or breadboarding. You connect the 8 servos to the Maestro, and the Maestro to your PC via a USB cable for calibration (and later to the Raspberry PI for the actual cube solving.)
The servos can be connected to any of the 12 channels of the Mini Maestro arbitrarily. The image below shows the channel assignment used by our robot. A white number in a red circle next to a servo denotes the Maestro channel number for this servo. Even channels are used for the gripper servos, and odd channels for the rack-and-pinion servos. Channels 4 and 5 are skipped for spacing.
For the power supply for the Maestro, you have a choice between a rechargeable 6V battery pack and a modified 6V, 3A (3000 mA) wall charger. The 1st option is safe but servos are power-hungry, and the battery pack drains quickly. Make sure you buy a high-capacity pack.
The 2nd option requires that the charger's standard round connector be removed and replaced, or extended, by two wires ending in the standard female connectors. The work has to be completed by a competent technician proficient at soldering. The charger must be rated at 3A or higher. Our robot is powered by this power supply adapter extended as shown below. Use this option at your own risk.
The servo controller should be attached to the back of the camera holder using two small wood screws (camera_holder2.stl) or a single screw (camera_holder.stl). The image below shows the controller in its working position, with all the servo cables and power wires attached to it. The green and yellow wires in the lower-right corner of the image are power wires.
8. Servo Calibration with Pololu Maestro Control Center
The purpose of the servo calibration is to find two key target settings for each servo's channel. For the gripper servos, the two target signals are for the neutral position and the 90° position. For the rack-and-pinion servos, the two target signals are for the "near" position (hugging the cube) and the "far" position (releasing the cube.) These numbers are determined experimentally using the Maestro Control Center software available on the Pololu web site and installed on your PC.
After firing up the Maestro Control Center, select the controller from the "Connected to" drop-down box. Go to the Serial Settings tab and select USB Dual Port for the serial mode. Then press Apply Settings.
Then return to the main Status tab to calibrate your servos.
Also, for all the gripper servos (0, 2, 6 and 8 in our example) set the Acceleration value to 110, as follows:
Begin the calibration by putting the sliders in the "far" position in which the front of the slider is flush with the arm, and put the grippers in the "neutral" position, as shown on the image below. Write down the "far" target values of the rack-and-pinion servos, and "neutral" target values for the gripper servos.
Next, determine the target value for a 90° rotation for each gripper servo. On the image below, the right arm's gripper servo has been turned 90° relative to its neutral position. Do this for each gripper servo, and write down the target values for all four.
It is absolutely critical that all the gripper servos move from their neutral to 90° positions in the directions marked by red arrows on the image below:
Finally, put the gripper servos back in the neutral position, and insert a Rubik's cube in the bottom gripper. Determine the "near" positions of all four rack-and-pinion servos in which the cube is tightly hugged and centered, as shown on the image below. Write down these target values.
Once acceptable target values for all servos have been determined, they need to be transferred to our application via its own user interface. The values can later be adjusted, if necessary.
Note that during calibration, the gripper and slider positions may need to be adjusted to allow a proper movement range. To adjust the position of a gripper, it needs to be removed from the gripper servo's shaft and then re-attached in a different position. Once the final position is found, the servo horn's clamp needs to be tightened with an Allen hex key. To adjust the slider's position, the pinion needs to be removed, the slider shifted as necessary, and then the pinion re-attached.
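Once recorded, target values can also be sent to the Maestro programmatically using Pololu's documented compact serial protocol, whose Set Target command expects the target in quarter-microseconds. Below is a minimal sketch; the serial device name is a placeholder that varies by OS, and actually writing to the port requires the third-party pyserial package.

```python
# Sketch: building a Pololu compact-protocol "Set Target" packet.
# Command byte 0x84, then the channel number, then the target split
# into two 7-bit bytes (low bits first), in quarter-microseconds.

def set_target_command(channel: int, target_us: float) -> bytes:
    """Pack a Set Target command for the given channel and target in microseconds."""
    quarter_us = int(round(target_us * 4))
    return bytes([0x84, channel, quarter_us & 0x7F, (quarter_us >> 7) & 0x7F])

# Example: move the gripper servo on channel 0 to a neutral 1500 us position.
packet = set_target_command(0, 1500)

# To actually send it (requires the third-party pyserial package):
# import serial
# with serial.Serial("/dev/ttyACM0") as port:   # device name is an assumption
#     port.write(packet)
```

In USB Dual Port mode (selected during calibration above), these bytes go to the Maestro's command port.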
For more information on using the Maestro servo controller and its Control Center software, please refer to the Pololu web site.
9. Software Installation and Configuration
9.1 The App: Introduction
The robot is driven by a Universal Windows Platform (UWP) application of our own creation called RubiksCubeRobot. By sending control signals to the robot's 8 servos and webcam, the application photographs the cube, performs the image and color recognition on the photos, determines the initial position of the cube, computes the sequence of rotations necessary to solve the cube, and then executes the sequence. The app can be used on Raspberry PI running Windows 10 IoT Core, or a regular PC running Windows 10.
The app can be evaluated for free for 30 days. The evaluation version is fully functional. Please contact us to obtain your evaluation key. Permanent keys are available for a small one-off fee. The license fees, which include life-time upgrades and maintenance, are as follows:
Personal use: $40.00
Educational use: $60.00
Commercial use: $120.00
9.2 App Download Links
Please download the Raspberry PI, x64 and x86 versions of the app from the links below:
Download Link for Raspberry PI
    Filename: RubiksCubeRobot_184.108.40.206_ARM.zip
    Version: 220.127.116.11
    Size: 13.3 MB
    Last Updated: 2017-12-05
    Platform: ARM (Raspberry PI)

Download Link for x64
    Filename: RubiksCubeRobot_18.104.22.168_x64.zip
    Version: 22.214.171.124
    Size: 13.4 MB
    Last Updated: 2017-12-05
    Platform: x64

Download Link for x86
    Filename: RubiksCubeRobot_126.96.36.199_x86.zip
    Version: 188.8.131.52
    Size: 13.2 MB
    Last Updated: 2017-12-05
    Platform: x86
9.3 What's New in Version 2.0 vs. 1.0
- The zone recognition engine has been completely rewritten. The new algorithm is more adaptive and robust.
- The two photos of a cube's face are processed immediately after they are taken. The processing is performed by a background thread while the cube is being rotated to have the next face photographed, which speeds up the photographing phase considerably.
- A new Camera button allows you to take a picture of the cube at any time, not just during a run. This makes it easier to adjust the camera position and focus.
- The app can be exited by pressing the OFF button and holding it down for 3 seconds. This is useful if the app runs on Raspberry PI as it allows the PI to be properly shut down.
- When the debug mode is enabled, the new version no longer creates .pix and config.txt files. Instead it creates a single PDF document per run, which contains the debug and color training information. There is no need to upload anything to "the Eye" tool anymore.
- Color training is performed automatically for every run, and the results are put on the last page of the PDF file. The Training button has been eliminated.
- The white color detection algorithm has been improved by allowing the user to specify a wide range of hues for the white color.
- The red and orange colors can be separated not only by hue but also by brightness.
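The pipelined photographing described above can be sketched with a single background worker: the main thread keeps driving the servos and camera while the previous face's photos are processed. The function names here are hypothetical stand-ins, not the app's actual code.

```python
# Sketch of overlapping photo processing with cube rotation.
from concurrent.futures import ThreadPoolExecutor

def photograph_face(face):          # stand-in for the servo/camera step
    return f"photos-of-{face}"

def process_face(photos):           # stand-in for zone/color recognition
    return f"colors-from-{photos}"

def photograph_cube(faces):
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = []
        for face in faces:
            photos = photograph_face(face)                     # main thread
            pending.append(pool.submit(process_face, photos))  # background thread
        return [f.result() for f in pending]

print(photograph_cube(["white", "red", "yellow", "orange", "blue", "green"]))
```

While one face's photos are being analyzed, the next face is already being rotated into view, which is what shortens the photographing phase.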
9.4 Installing the App on Raspberry PI

The following instructions assume that you have already installed the Windows 10 IoT Core Dashboard on your local PC, installed the Windows 10 IoT Core operating system on your Class-10 Micro SD card, booted your Raspberry PI from it, and connected the PI to your local network via WiFi and/or an Ethernet cable. Your Raspberry PI device should be showing in the My devices list of the IoT Dashboard:
To install the RubiksCubeRobot onto your PI, please follow these easy steps:
- Download the .zip archive for RubiksCubeRobot from the link above. Unzip it to a temporary directory of your PC's hard drive, such as c:\tmp.
- Select Open in Device Portal from the IoT Dashboard. In the Windows Device Portal, go to Apps, Apps manager.
- Under Install app, for App package, select the file with the extension .appx in the temporary directory, and for Certificate, select the file with the extension .cer.
- Click on Add dependency three times. For the three Dependency boxes, select the three files in the \Dependencies\ARM subfolder of the temporary directory.
- Click on Go under Deploy.
That's it! RubiksCubeRobot should now appear under Apps. You can start the application by choosing "Start" in the Actions drop-down box, and mark it as startup by clicking on the radio button in the Startup column.
9.5 Installing the App on a Regular Windows 10 PC
Unzip the content of the download to a temporary directory such as "c:\temp". Prior to installing the app on your PC, you need to install the certificate in the Trusted Root Certification Authorities of both the Current User and Local Machine sections of the certificate store. This only needs to be done once.
Double-click on the .cer file in the temporary directory, click Install Certificate, select Current User, then select "Place all certificates in the following store", and select the Trusted Root Certificate Authorities folder. Repeat this procedure but this time select Local Machine instead of Current User.
Once the certificate is installed, double-click on the .appx file in the temporary directory to install the app on your PC.
9.6 Configuring and Running the App
The main app screen looks as follows:
The blue and red buttons on the main page perform the following functions:
- calibration -- takes you to the Calibration Center where the servo target values are entered.
- configuration -- takes you to the Configuration Center where the parameters responsible for image and color recognition can be viewed, and changed if necessary.
- analysis -- performs image and color recognition of the most recent set of photographs stored in a PDF file on the device's hard drive. Use this function to instantly test changes in the configuration parameters.
- key -- allows you to enter your registration key, and also activate your paid-for key permanently on the device.
- OPEN -- brings the rack-and-pinion servos to the full-back position and gripper servos to the neutral position so that the cube can be inserted.
- RUN -- starts the work sequence after the cube has been inserted.
- STOP -- performs an emergency stop.
- OFF -- switches all servos off. If this button is pressed and held down for 3 seconds, the app exits.
- 📷 -- puts the grippers in the "full-forward" position, takes a picture, displays it, and then retracts the grippers. This button is useful for adjusting camera position and focus.
The 1st required step is to click the key button and enter your registration key in the form XXXXX-XXXXX. Please contact us to obtain your free 30-day evaluation key. During evaluation, the application performs run-time key validation over the Internet, so your Raspberry PI (or PC) must be connected to the Internet for the application to function. Once the license has been purchased, the registration key can be activated on the device permanently. Afterwards, the application no longer needs an Internet connection.
The 2nd required step is to enter all the servo target values in the Calibration Center:
The screenshot above shows the settings for our robot, but you must obtain your own numbers during calibration using the Pololu Maestro Control Center software.
If your PC has more than one camera attached (laptops almost always have their own built-in camera), there is a 3rd required step: press the configuration button, and in the Configuration Center, select the robot's webcam via the Camera Name drop-down box. The other configuration parameters will be covered in detail in the next section. Go with the default parameters for your first test run.
Before clicking the RUN button, make sure all 8 servos are plugged into the Maestro servo controller, both the Maestro and webcam are plugged into the Raspberry PI's (or your PC's) USB ports, and the power source is connected to the Maestro.
Insert the cube and click the RUN button. Be prepared to hit STOP immediately if the cube slips out of the grippers, or some other malfunction occurs.
The robot will perform the necessary manipulations of the arms and grippers and take 2 photographs of each face, which are displayed immediately. If image processing or color recognition fails, an error is displayed, and the arms and grippers are returned to the open position.
9.7 Fine-Tuning the Configuration Parameters
During initial testing, it is very common for the app to generate errors such as ERROR_CENTERSQUARE_MISIDENT, ERROR_SIDECUBIE_MISIDENT or ERROR_TOOMANY_MISIDENTS. These errors occur because each lighting environment is unique, and without fine-tuning, the app may and does mistake one color panel on the cube for another. This section describes in detail the color identification algorithm used by the app, and the steps necessary to reduce the number of mis-identification errors to a minimum or even eliminate them altogether.
9.7.1 Configuration Center Parameters
The parameters on the Configuration Center screen shown above have the following meaning:
- Area Cutoff - the default value of 0.02 specifies that contours with an area of less than 2% of the overall image area should be discarded.
- Squareness - the default value of 0.95 specifies that a contour with a bounding box dimension ratio of less than 95% should be discarded as not square enough.
- Angle Deviation - the default value of 2.5 specifies that a contour with tilting of more than 2.5 degrees from the prevailing tilting should be discarded.
- Size Deviation - the default value of 0.9 specifies that all contours with a size of less than 90% or greater than 111% of the prevailing contour size should be discarded as too small or too large.
- Glare Threshold (added in 184.108.40.206) - if set to a non-zero value (such as 0.9), excludes all white pixels in a center zone from computing the color for this zone. If the number of excluded pixels exceeds the specified percentage (such as 90%) the color is ruled to be white. This parameter is useful for mitigating the effect of strong specular reflection of the camera's LEDs off the center zones.
- Debugging - if this box is checked, the app creates a PDF file containing important debugging and color-training information for every run in the Pictures folder of your device. On Raspberry PI, the folder location is \Data\Users\DefaultAccount\Pictures. On a PC, the folder location is This PC\Pictures.
- Camera Name - you must select your robot's webcam from the drop-down list if your device has more than one camera attached to it. Laptops almost always have their own built-in cameras, so if the app is running on a laptop, this step is required.
- Red/Orange/Yellow/Green/Blue Hue - the perceived hue value (in the range [0, 359]) of the cube's red/orange/yellow/green/blue panels. These values vary from one lighting condition to another.
- White Hue Range - a pair of hue values, each in the range [0, 359], that specifies the hues of the cube's white panels. Even in good lighting conditions, the white panels seldom come off as pure white; they usually have a bluish or reddish color tint.
- White Saturation Threshold - the value in the range [0, 1] which separates the white panels from non-white ones based on saturation. If a color falls within the specified White Hue Range and its saturation is below the specified threshold, the color is ruled to be white.
- Red/Orange Overlap - specifies a hue range where the red and orange panels may overlap. When a color falls within this range, the red/orange separation is performed based on brightness instead of hue.
- Red/Orange Brightness Threshold - specifies the brightness threshold (in the range [0, 1]) to be used to separate red and orange colors. If a color falls within the Red/Orange Overlap range, and its brightness is below the threshold value, the color is ruled to be red, otherwise orange. If the threshold value is 0, the red/orange brightness-based separation is not performed.

9.7.2 Hue/Saturation/Brightness (HSB) Color Space
A pixel color is usually represented by three values: its red, green and blue (RGB) components. Alternatively, it can be represented by its hue, saturation and brightness (HSB) values.
Under the HSB scheme, the colors are arranged into a circle, often referred to as a color wheel, and each color is assigned an angular value on that circle in the range [0, 359], called a hue. The red color is assigned the hue value of 0°, yellow 60°, etc. The white color has no place on the color wheel, and therefore has no meaningful hue value of its own.
Saturation measures how gray-free the color is. Pure colors such as red, orange, yellow, green and blue have no grayness in them, and therefore their saturation values are 1. Gray colors, on the other hand, contain nothing but grayness, so their saturation value is 0; this includes the white color too.
Brightness, obviously, measures how bright a color is. Like saturation, brightness is a number between 0 and 1.
The colors on the photographs taken by the robot's camera are rarely pure. The white panels usually do not come off as pure white, and their saturation is greater than 0, but still significantly lower than that of red, blue and other non-white color panels. This observation enables us to use saturation for white vs. non-white color separation.
Color hues can be used quite reliably to separate the non-white colors among each other, except possibly the red and orange colors. The red and orange panels of the classic Rubik's Cube are very close to each other on the color wheel, and in certain lighting conditions can become virtually indistinguishable from each other. That is where brightness may come in handy: red panels usually have lower brightness than orange ones.
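For illustration, the RGB-to-HSB conversion described above is available in Python's standard colorsys module (which calls brightness "value"). Its hues come back in [0, 1) and need scaling by 360 to match the hue values used in this section.

```python
# Converting 0-255 RGB values to the (hue, saturation, brightness) scheme
# used throughout this section.
import colorsys

def rgb_to_hsb(r, g, b):
    """r, g, b in [0, 255]; returns (hue in degrees, saturation, brightness)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return (h * 360, s, v)

# A pure red panel: hue 0, full saturation.
print(rgb_to_hsb(255, 0, 0))        # (0.0, 1.0, 1.0)

# A slightly bluish "white" panel: a bluish hue (around 240) but very low
# saturation, which is exactly what the White Saturation Threshold exploits.
print(rgb_to_hsb(230, 230, 255))
```

Note how the near-white pixel keeps a meaningful hue while its saturation stays close to 0, making saturation the reliable signal for white vs. non-white separation.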
9.7.3 Color Detection Algorithm
Once the camera has photographed a cube face, the two photos are analyzed and color panel zones are identified on each photo. A median RGB color for each zone is obtained. The RGB values are then converted to the Hue/Saturation/Brightness (HSB) color space.
The following simple algorithm is then used to convert the HSB values to color names:
- If both the hue and saturation are 0, we are dealing with a pure white color, and WHITE is returned right away.
- If the hue is within the White Hue Range and the saturation is below the White Saturation Threshold, WHITE is returned.
- If the Red/Orange Overlap range and Brightness Threshold are non-zero, and the hue falls within the overlap range, then RED is returned if the brightness is below the Red/Orange Brightness Threshold, and ORANGE otherwise.
- Otherwise, the hue is compared with the Red, Orange, Yellow, Green and Blue Hue values, and the color with the shortest angular distance is returned.
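The steps above can be sketched as follows. The parameter values are illustrative defaults for this sketch, not the app's tuned numbers, and hue distances are measured around the color wheel.

```python
# Sketch of the four-step HSB-to-color-name classification described above.

NAMED_HUES = {"RED": 10, "ORANGE": 25, "YELLOW": 55, "GREEN": 120, "BLUE": 230}
WHITE_HUE_RANGE = (0, 359)          # White Hue Range (wide open by default)
WHITE_SAT_THRESHOLD = 0.3           # White Saturation Threshold
RED_ORANGE_OVERLAP = (6, 15)        # Red/Orange Overlap
RED_ORANGE_BRIGHTNESS = 0.6         # Red/Orange Brightness Threshold

def hue_distance(h1, h2):
    """Shortest angular distance between two hues on the color wheel."""
    d = abs(h1 - h2) % 360
    return min(d, 360 - d)

def in_range(hue, lo_hi):
    """Hue-range test that also handles ranges wrapping past 359."""
    lo, hi = lo_hi
    return lo <= hue <= hi if lo <= hi else hue >= lo or hue <= hi

def classify(hue, saturation, brightness):
    if hue == 0 and saturation == 0:                         # step 1: pure white
        return "WHITE"
    if in_range(hue, WHITE_HUE_RANGE) and saturation < WHITE_SAT_THRESHOLD:
        return "WHITE"                                       # step 2
    if RED_ORANGE_BRIGHTNESS > 0 and in_range(hue, RED_ORANGE_OVERLAP):
        return "RED" if brightness < RED_ORANGE_BRIGHTNESS else "ORANGE"  # step 3
    return min(NAMED_HUES, key=lambda c: hue_distance(hue, NAMED_HUES[c]))  # step 4

print(classify(12, 0.8, 0.4))   # in the overlap and dim -> RED
print(classify(12, 0.8, 0.9))   # in the overlap and bright -> ORANGE
print(classify(235, 0.7, 0.8))  # nearest named hue -> BLUE
```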
9.7.4 Debug PDF Files
The success of the color detection algorithm described above depends on how well the app's configuration parameters, such as color hues, white saturation threshold and others, reflect the lighting conditions in which your robot is operating. To help you pick the right parameters, the app is designed to provide an insight into its thinking process.
When the Debugging checkbox is checked in the Configuration Center, the app creates a PDF file in the /Pictures folder for every run, successful or unsuccessful. This PDF file is essentially a log where every major step of the color recognition algorithm taken by the app is documented and illustrated. On Raspberry PI, the PDF files can be found in the folder \Data\Users\DefaultAccount\Pictures, and on a regular PC under This PC\Pictures.
Here is what a typical page from the PDF file looks like:
The top of the page displays the two photographs taken per cube's face. The recognized color panel zones are marked with gray outlines. The outline marking the center zone is dashed.
Below the photographs is a color chart displaying the median colors obtained from their respective color panel zones. Each color square shows its HSB values for easy reference:
Below the color chart is the final result: the 9 recognized colors of a cube face. To the right of that, there are some log entries intended mostly for Otvinta's support personnel.
The last page of the PDF file contains the word "Success" and an encoded cube position, or an error code. Below that, there is a table with all the configuration parameters listed. And at the bottom, there is a box with the color training information. Color training is described below.
Let us consider several typical cases where the debug PDF files offer instant help.
Case 1: On the images below, the orange and red center panels are mis-identified as white due to a low saturation (0.27 and 0.28) caused by reflected camera LED lights. Also two opposite corner panels on the red face are mis-identified as orange because their hues (13 and 14) are closer to the default orange hue of 25 than to the default red hue of 0.
To fix both problems, lower White Saturation Threshold from 0.3 to 0.2 and change Red Hue from 0 to 10.
Case 2: On the image below, the two white panels are mis-identified as blue because of their bluish hues of 231 and 230, and relatively high saturation values of 0.28 and 0.38 which are above the current White Saturation Threshold of 0.2.
Increasing the White Saturation Threshold to 0.4 alone will fix the white panels but break the yellow one, because its saturation is 0.34, and the threshold of 0.4 will cause it to be mis-identified as white. Therefore, we also need to narrow down the White Hue Range from the default [0, 359] to around [220, 240], which will exclude the yellow panels from the white saturation test entirely.
Case 3: On the images below, the red and orange colors are so close that their hues overlap: the red hue range is [353, 13] and the orange hue range is [6, 19]. As a result, some red panels are mis-identified as orange and some orange ones as red. It is virtually impossible to separate these colors based solely on hue. However, the brightness of the reds is quite a bit lower than that of the oranges, which enables us to separate the colors based on brightness. With the Red/Orange Overlap range set to [6, 15] and the Red/Orange Brightness Threshold to 0.6, the colors are detected correctly.
Case 4: On the images below, strong specular reflection of the camera's LEDs off the center zone causes the color of the zone to be mis-identified as white. As of version 220.127.116.11, the Glare Threshold parameter helps mitigate this problem by excluding all pure-white pixels from the color computation. Setting this parameter to 0.9 causes the center zone to be correctly identified as red.
Note that the new "wide" camera (see hardware list above) is not equipped with LEDs and does not cause glare-related problems.
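The glare mitigation from Case 4 can be sketched as follows. This is a rough illustration: the near-white cutoff of 240 is an assumed value for the sketch, not the app's actual criterion.

```python
# Sketch of the Glare Threshold idea: drop near-pure-white pixels from a
# center zone before computing its median color; if the dropped fraction
# exceeds the threshold, rule the whole zone white.
from statistics import median

def center_zone_color(pixels, glare_threshold=0.9, white_level=240):
    """pixels: list of (r, g, b) tuples in 0-255 from one center zone."""
    glare = [p for p in pixels if min(p) >= white_level]   # near-white pixels
    if len(glare) / len(pixels) > glare_threshold:
        return "WHITE"                                     # zone is mostly glare
    kept = [p for p in pixels if min(p) < white_level]
    return tuple(median(c[i] for c in kept) for i in range(3))
```

With most of the LED hotspot excluded, the median of the remaining pixels reflects the panel's true color.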
9.7.5 Instant Testing of New Parameters
There is no need to begin a new robot run just to test a change in the configuration parameters. The analyze button performs the image and color recognition procedure on the photographs of the previous run by retrieving them from the most recent PDF file. This way you can test your new settings almost instantly. Debugging must be enabled for the analyze button to work.
9.7.6 Color Training
The app is capable of computing the approximate color configuration parameters if the cube is inserted in the robot fully solved and positioned in a certain way -- with the white face towards the camera and red face pointing upwards.
The app actually performs color training computation for every run regardless of whether the cube was properly prepared for training or not, and places the results at the bottom of the last page of the PDF document. However this information is meaningless and should be discarded unless the cube was inserted as described above.
The values from the Suggested Color Parameters box need to be manually transferred to the Configuration Center.
Occasionally, you may see the phrase "could not be determined" for the White Saturation or Red/Orange Brightness thresholds. This usually means the lighting conditions are too dark.
9.7.7 Other Troubleshooting Tips
Camera not Centered
If the camera is not properly mounted or not pointing at the middle of the cube, a photo may look like the image below. If the app can't see the entire face of the cube, it can't process it. Make sure the center of the cube is roughly in the middle of the photograph. Use the Camera button to take test photos before a run. Adjust the camera's position in the camera holder if necessary. Also make sure the grippers hug the cube tightly to avoid shifting while the cube is being photographed. Adjust servo calibration values if necessary.
Not Enough Light or Camera out of Focus
If the photographs are too dark, or the camera is badly out of focus, as in the picture below, the app's contour detection subroutine may fail, and the error ERROR_ZONE_DETECTION_ERROR will likely ensue. The camera model we use is equipped with a manual-focus lens; rotate the lens until the photos taken by the camera are sharp.
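Both failure modes can be screened for before a run. A minimal sketch of such a pre-check on a grayscale pixel grid (the function name and thresholds are illustrative assumptions, not part of the app):

```python
def photo_looks_usable(gray, min_brightness=60.0, min_edge_var=100.0):
    """gray: 2-D list of 0..255 grayscale values.
    Rejects photos that are too dark (low mean brightness) or too blurry
    (low variance of horizontal gradients, a crude sharpness measure).
    Thresholds are illustrative assumptions."""
    pixels = [p for row in gray for p in row]
    mean = sum(pixels) / len(pixels)
    if mean < min_brightness:
        return False, "too dark"
    grads = [row[i + 1] - row[i] for row in gray for i in range(len(row) - 1)]
    g_mean = sum(grads) / len(grads)
    var = sum((g - g_mean) ** 2 for g in grads) / len(grads)
    if var < min_edge_var:
        return False, "too blurry"
    return True, "ok"
```

A sharp, well-lit photo has both high mean brightness and strong local gradients; a defocused one loses the gradients first.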
The Colors are Identified Correctly but the Cube is Not Being Solved
It is critical that your robot photograph the cube's faces in a particular order. Please refer to our introduction video for the correct sequence of rotations during photographing. For example, if you insert the cube with the white center square pointing towards the camera and the red center square pointing upwards, then the correct order in which the center squares should appear on the photographs is:
White -> Red -> Yellow -> Orange -> Blue -> Green
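The sequence above lends itself to a quick sanity check against the center colors actually detected in the six photograph pairs. A small hypothetical helper (name and usage are illustrative, not part of the app):

```python
# Expected center-square order for a cube inserted white-towards-camera, red-up.
EXPECTED_CENTERS = ["white", "red", "yellow", "orange", "blue", "green"]

def first_center_mismatch(detected):
    """detected: the six detected center colors, in photographing order.
    Returns the index of the first out-of-order center, or -1 if the
    order is correct. An early mismatch usually points to a servo turning
    in the wrong direction or a rotated camera."""
    for i, (got, want) in enumerate(zip(detected, EXPECTED_CENTERS)):
        if got != want:
            return i
    return -1
```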
If some or all of the gripper servos are calibrated incorrectly and turn from the neutral to the 90° position in the wrong direction, the app won't be able to correctly reconstruct the cube's initial state and, therefore, won't be able to solve it. The app may consistently report the error ERROR_CENTERSQUARE_MISIDENT even though all the colors are correctly identified. The app may also hang in the "Solving..." mode, or execute a sequence of turns that leaves the cube unsolved. The same can happen if the camera is mounted upside down or sideways.
Grippers Retract in the Wrong Order
The app relies not only on a particular order in which the cube faces are photographed, but also on a particular order in which the grippers are retracted. The first photograph of a pair must have the top and bottom grippers retracted, and the second photograph the left and right ones. If the retraction order is reversed due to an incorrect servo channel assignment, the app won't be able to correctly identify the colors of the side cubies, as the grippers will be in the way.
9.7.8 Still Having Problems?
Please do not hesitate to contact us if you continue to experience misidentification or any other problems. We will do our best to help you resolve them.
- Is Raspberry PI required?
No, any device with two USB ports can be used (one for the Maestro, the other for the webcam). The software is currently available for Raspberry PI running Windows 10 IoT, as well as for regular Windows 10 PCs.
- Why was this particular webcam model chosen?
This webcam was chosen because it produces small photos (640x480), which results in faster processing; it is also equipped with LED lights, nicely shaped, and inexpensive.
- Why use a separate servo controller, why not control the servos directly with the Raspberry PI?
Initially we tried to do just that, but failed to achieve satisfactory results even with 4 servos; breadboarding with 8 servos does not seem reliable enough. However, we admit it may still be doable.
- Why does the robot take two pictures per cube face instead of just one?
The grippers, when engaged, cover a significant portion of the cube's face, which makes it difficult to accurately identify the colors of the side cubies. As a workaround, the robot photographs each face twice: first with the vertical grippers retracted, and then with the horizontal ones retracted.
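Combining the two photos amounts to merging two partial reads of the same 3x3 face, where each sticker is occluded in at most one of them. A sketch of the idea (the grid layout and function name are illustrative assumptions, not the app's actual code):

```python
def merge_face_reads(read_a, read_b):
    """read_a, read_b: 3x3 grids of sticker colors, with None where a
    gripper occluded the sticker in that photo. Every sticker must be
    visible in at least one of the two reads."""
    face = []
    for row_a, row_b in zip(read_a, read_b):
        row = []
        for a, b in zip(row_a, row_b):
            if a is None and b is None:
                raise ValueError("sticker occluded in both photos")
            row.append(a if a is not None else b)
        face.append(row)
    return face
```

This is also why the retraction order matters (see "Grippers Retract in the Wrong Order" above): if both photos occlude the same stickers, the merge cannot succeed.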
- Why does the robot make three clockwise 90° turns instead of a single counterclockwise 90° turn?
Because of slack between the gripper and the cube, it takes a greater-than-90° turn of the gripper to produce a 90° turn of the cube's face. Therefore, a 180° servo such as the one we use cannot perform both the +90° and -90° turns with sufficient precision. 270° servos would probably work better, and future versions of our app may support them.
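The substitution works because face orientations are taken modulo 360°, so three +90° turns land exactly where one -90° turn would. A one-line sketch of the arithmetic (helper name is illustrative):

```python
def three_cw_quarter_turns(start=0):
    """Apply three clockwise 90-degree turns to a face orientation.
    The result equals a single counterclockwise 90-degree turn,
    since orientations wrap modulo 360."""
    angle = start
    for _ in range(3):        # three clockwise quarter turns
        angle = (angle + 90) % 360
    return angle
```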