a. Location and Bearing Analysis (WIP)
On the page Moving Around the Park we established a list of requirements for moving around the competition area, i.e. Institute Park. A subset of those requirements is:
1. Initial bearing
2. Initial location
3. Dynamic bearing, i.e. as it moves
4. Dynamic location
5. Navigate to a specific point
6. Search coverage
We also showed that these are related, since the results obtained for requirements 1 through 4 provide the values needed for requirements 5 and 6. Now we analyze possible ways of meeting requirements 1 through 4, specifically through the use of vision processing.
Initial Bearing and Location
The initial bearing and location are the first pieces of information the rover must determine after it initializes when power is applied. The location is known to be one of the three starting platforms. The platform assigned to a robot is selected by the challenge organizers and provided to the entrants the day prior to the competition, which narrows the location to within a 10 meter circle. The three starting platforms fall within a 10 meter by approximately 25 meter area facing nearly south. [The competition compass directions are used throughout these pages and are approximations.]
The competition identifies some landmarks in the park: a few buildings, lampposts, and columns. Identifying landmarks is one way a rover can determine its approximate location and bearing. From the starting platforms, a very visible landmark is a white bandshell to the southwest. By locating this landmark through vision processing, a rover at the start can establish its approximate bearing to within 10-20 degrees. There are also some columns to the NNW of the starting platform that can help refine the bearing, although they may be difficult to discern.
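As a rough illustration, the sketch below turns a bandshell sighting into a bearing estimate. It assumes the bandshell has already been located in the camera frame (for example by its white color); the field of view, image width, and the map bearing from the platform to the bandshell are placeholder values, not measured ones.

```python
def heading_from_landmark(landmark_pixel_x,
                          image_width=640,          # placeholder camera resolution
                          hfov_deg=60.0,            # placeholder horizontal field of view
                          landmark_bearing=225.0):  # assumed map bearing to the bandshell (SW)
    """Estimate the rover's heading from the horizontal pixel position of a
    known landmark in the camera image."""
    degrees_per_pixel = hfov_deg / image_width
    # Angular offset of the landmark from the image center; positive means
    # the landmark appears to the right of center.
    offset_deg = (landmark_pixel_x - image_width / 2) * degrees_per_pixel
    # The camera (and thus the rover) points at the landmark bearing minus that offset.
    return (landmark_bearing - offset_deg) % 360.0

# Example: the bandshell detected 80 pixels right of center gives ~217.5 degrees.
print(heading_from_landmark(400))
```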
Therefore, at startup, the robot knows its location within 10 meters and its bearing within, say, 15 degrees. This is not great but it appears to be the best that can be done with the available information.
Dynamic Bearing and Location
The bearing and location of the rover while it is moving are critical information and, unfortunately, very difficult to maintain using only vision processing. The Moving Around the Park page discussed other sensors so this discussion will only consider vision processing.
In order to know these dynamic values the rover needs to know the direction it is traveling and the distance traveled in that direction. Or, it needs to use a known reference, a landmark, to determine its location.
In some portions of the roving area, i.e. the park, there are landmarks that were mapped by NASA and provided to the entrants. As you can see in the image from 2013, the landmark information is sparse. The SRR also provided photos of the landmarks. There is usually only a single photo for each of the three buildings, so if the rover approaches from a different direction it may not be able to identify the structure. The structure outlined by points 1010-1014 (don't know what happened to 1012) is a wooden park pavilion that looks pretty much the same from all directions.
It may be difficult, but using landmark information appears possible, so it will be considered as one technique for determining the rover's location.
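A minimal sketch of how the provided photos might be used, assuming OpenCV is available: it counts ORB feature matches between the current camera frame and a reference photo. The file names and match thresholds here are hypothetical and would need tuning against the actual SRR photos.

```python
import cv2

def matches_landmark(frame_gray, reference_gray, min_matches=25):
    """Return True if the current camera frame shows enough ORB feature
    matches against a reference photo of a landmark."""
    orb = cv2.ORB_create(nfeatures=1000)
    _, frame_des = orb.detectAndCompute(frame_gray, None)
    _, ref_des = orb.detectAndCompute(reference_gray, None)
    if frame_des is None or ref_des is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(frame_des, ref_des)
    # Keep only reasonably close descriptor matches before counting.
    good = [m for m in matches if m.distance < 50]
    return len(good) >= min_matches

# Hypothetical file names standing in for a camera frame and an SRR photo.
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
pavilion = cv2.imread("landmark_pavilion.jpg", cv2.IMREAD_GRAYSCALE)
print(matches_landmark(frame, pavilion))
```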
Dead-reckoning based on bearing and distance traveled is a time-honored approach to navigation. It depends on knowing the angle of each turn made and the distance traveled between turns. The rovers do not have a direct method of measuring distance, i.e. no wheel encoders. The only measurement available is the time of travel, which can be converted to distance. Clearly, this will be somewhat inaccurate.
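A minimal sketch of the resulting dead-reckoning update is below. The heading input would come from the bearing estimate, and the speed value is a placeholder that would have to be calibrated, since distance is inferred from travel time alone.

```python
import math

def dead_reckon(x, y, heading_deg, travel_time_s, speed_m_per_s=0.5):
    """Update an estimated (x, y) position in meters after driving straight
    for travel_time_s seconds at the current heading. Distance comes from
    elapsed time only (no wheel encoders); speed_m_per_s is a placeholder
    that would need calibration on the actual rover."""
    distance = speed_m_per_s * travel_time_s
    heading = math.radians(heading_deg)
    # Compass convention: 0 degrees = north (+y), 90 degrees = east (+x).
    return x + distance * math.sin(heading), y + distance * math.cos(heading)

# Example: from the platform at (0, 0), heading 200 degrees, driving for 30 s.
print(dead_reckon(0.0, 0.0, 200.0, 30.0))
```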
Angle measurement is somewhat better, since vision processing can determine the number of pixels the image shifts during a turn. That pixel count is approximately proportional to the turn angle, with the scale set by the camera's field of view.
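A sketch of that conversion is below; the field of view and image width are placeholder values that depend on the rover's actual camera.

```python
HFOV_DEG = 60.0     # placeholder horizontal field of view of the camera
IMAGE_WIDTH = 640   # placeholder image width in pixels

def turn_angle_from_pixels(pixel_shift):
    """Convert the horizontal pixel shift of the scene during a turn into an
    approximate turn angle in degrees. For modest angles the shift is roughly
    proportional to the angle, scaled by degrees-per-pixel."""
    return pixel_shift * (HFOV_DEG / IMAGE_WIDTH)

# Example: a 160 pixel shift corresponds to roughly 15 degrees of turn.
print(turn_angle_from_pixels(160))
```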
Dead-reckoning also requires that the rover's straight-line movement actually be in a straight line. Vision processing can report the horizontal shift in the image, just as with angle measurement, and that shift can be used to correct drift to the side.
A more indirect method is known as Optical Flow. To illustrate, assume you are moving in an automobile along a straight road with a dashed dividing line. The dashed line appears to come toward you and pass to the left, assuming you are in a country where you drive on the right. In other words, the dashed line flows past you to the left. All the other objects you see behave similarly: they move toward the bottom of your vision and off to the side. If something does not move to the side, you are about to hit it!
It may be possible to use Optical Flow to maintain movement in a straight line with the advantage that it is also useful for obstacle avoidance.
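A minimal sketch of measuring that sideways flow, assuming OpenCV's dense (Farneback) optical flow is available: the frame file names are placeholders, and the sign convention for applying a heading correction would have to be verified on the actual vehicle.

```python
import cv2
import numpy as np

def mean_horizontal_flow(prev_gray, curr_gray):
    """Compute dense (Farneback) optical flow between two consecutive frames
    and return the mean horizontal component. A persistent non-zero value
    while driving 'straight' suggests the rover is drifting sideways."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return float(np.mean(flow[..., 0]))  # flow[..., 0] is the horizontal component

# Hypothetical frame files; in practice these come straight from the camera.
prev = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)
print(f"mean horizontal flow: {mean_horizontal_flow(prev, curr):+.2f} px/frame")
```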
Let's put some numbers in place to determine the required accuracy. One long dimension in the park is approximately 300 m. If the rover travels that distance, how far off might it be if it drifts at various angles? The trigonometry is based on the tangent of the drift angle. The distance traveled is the 'adjacent' value, so the 'opposite' value is the drift. Specifically the formula is:
adjacent * tangent(angle) = opposite
For 300 meters the drift at various angles can be computed directly from the formula.
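A minimal sketch that computes the drift for a few illustrative angles (the specific angles are examples, not competition figures):

```python
import math

distance_m = 300.0
for angle_deg in (1, 2, 5, 10, 15):
    drift_m = distance_m * math.tan(math.radians(angle_deg))
    print(f"{angle_deg:>3} deg of drift over {distance_m:.0f} m -> {drift_m:5.1f} m off course")
# Roughly: 1 deg is already about 5 m off, and 15 deg, the initial bearing
# uncertainty, is about 80 m off.
```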
This, to say the least, is not promising.