
An Algorithm for Affordable Vision-Based GNSS-Denied Strapdown Celestial Navigation

A modular celestial navigation system should be mounted rigidly with respect to the AHRS. The orientation of the aircraft, C_{l/b}, is provided by the autopilot. The orientation of the camera, C_{b/c}, is unknown and must be calibrated to obtain accurate positional information. For the purposes of this study, we assume that the precise orientation of the camera remains unknown, and we consequently accept that the individual position estimates will be erroneous. It will be shown later that this misalignment may be overcome through the use of averaging. This is particularly useful given that the camera itself may be mounted separately from the autopilot, thus being subjected to factors such as vibration, aerodynamic loading, or changes in AHRS biases, which cause the relative orientation between the AHRS and the camera to change over time.

3.1. Star Detection and Tracking

Stars are detected within an image using a basic binary thresholding operation. Given a mean pixel value within the frame, μ , and standard deviation, σ , a binary threshold is set at μ + 5 σ , as advised in [17]. The contours are extracted, and an agglomerative clustering algorithm is applied to cluster redundant detections.
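A minimal sketch of this detection step, using NumPy only; the greedy centroid merge below stands in for the agglomerative clustering of redundant detections, and the function name and merge radius are illustrative assumptions:

```python
import numpy as np

def detect_stars(image, k=5.0, merge_radius=4.0):
    """Threshold at mu + k*sigma, then greedily merge nearby detections
    (a simple stand-in for agglomerative clustering)."""
    mu, sigma = image.mean(), image.std()
    ys, xs = np.nonzero(image > mu + k * sigma)
    clusters = []  # each entry is [sum_x, sum_y, count]
    for x, y in zip(xs, ys):
        for c in clusters:
            cx, cy = c[0] / c[2], c[1] / c[2]
            if (cx - x) ** 2 + (cy - y) ** 2 <= merge_radius ** 2:
                c[0] += x; c[1] += y; c[2] += 1
                break
        else:
            clusters.append([float(x), float(y), 1])
    # return cluster centroids as (x, y) image coordinates
    return [(c[0] / c[2], c[1] / c[2]) for c in clusters]
```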
Once detected, a standard Kalman filter is used to track individual stars between frames. Each star is defined by its state vector:

\mathbf{x} = \begin{bmatrix} u & v & \dot{u} & \dot{v} \end{bmatrix}^{T}

and a constant-sized bounding box. The states u, v, u̇, and v̇ describe the x position, y position, x velocity, and y velocity of the star on the image plane, respectively. The position of the star is taken to be the subpixel maximum, computed as a weighted average of the 3 × 3 Region of Interest (ROI) centered on the peak pixel value, as described by [18]:

(u, v) = \left( \frac{\sum_{i,j} I_{i,j}\, u_i}{\sum_{i,j} I_{i,j}},\; \frac{\sum_{i,j} I_{i,j}\, v_j}{\sum_{i,j} I_{i,j}} \right)
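A minimal NumPy sketch of this weighted-average centroid over the 3 × 3 ROI (the function name is illustrative, and it assumes the peak pixel is not on the image border):

```python
import numpy as np

def subpixel_centroid(image, peak_row, peak_col):
    """Intensity-weighted centroid of the 3x3 ROI around a peak pixel;
    returns (u, v) in image coordinates. Assumes the peak is not on
    the image border."""
    roi = image[peak_row - 1:peak_row + 2, peak_col - 1:peak_col + 2].astype(float)
    total = roi.sum()
    rows, cols = np.mgrid[peak_row - 1:peak_row + 2, peak_col - 1:peak_col + 2]
    u = (roi * cols).sum() / total  # weighted x position
    v = (roi * rows).sum() / total  # weighted y position
    return u, v
```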

The state transition matrix F is defined assuming a constant velocity as follows:

F = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

The position and velocity of the stars are observed directly; therefore, the observation matrix H is defined as the 4 × 4 identity, such that the observation equation is simply given by

z_k = H x_k

Following the standard Kalman filter equations, the a priori state and covariance estimates are computed:

\hat{x}_{k+1} = F x_k

\hat{P}_{k+1} = F P_k F^{T} + Q

for the covariance matrix P_k with process noise Q. The innovation is calculated as follows:

y_k = z_k - \hat{x}_{k+1}

with covariance, S_k, given by

S_k = \hat{P}_{k+1} + R_k

where R_k is the measurement noise. The Kalman gain is computed as follows:

K_k = \hat{P}_{k+1} S_k^{-1}

and the posterior updates are applied to the a priori estimates:

x_{k+1} = \hat{x}_{k+1} + K_k y_k

P_{k+1} = (I - K_k) \hat{P}_{k+1}

The measurement noise, R_k, is defined as a diagonal matrix with elements [0.5 0.5 1.0 1.0], and the process noise, Q, is also defined as a diagonal matrix, with elements [4.0 4.0 2.0 2.0]. The measurement noise was selected based on the calibration accuracy of the camera, and the process noise was experimentally tuned to minimize the occurrence of lost tracks under high-motion conditions. The a priori state estimate is used to propagate the bounding box containing the region of interest, within which the peak is detected using Equation (12). The bounding box is centered on [u, v], rounded to the nearest pixel. The width and height of the bounding box were fixed at 21 pixels for this imaging system, given an image resolution of 1936 × 1216 and a field of view of 53.5°. A visual snapshot of the tracking system can be seen in Figure 2.
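The predict/update cycle described above can be sketched with NumPy; the frame interval Δt = 1 is an assumption, and the noise values are the diagonals quoted in the text:

```python
import numpy as np

dt = 1.0  # frame interval (assumed)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.eye(4)                      # position and velocity observed directly
Q = np.diag([4.0, 4.0, 2.0, 2.0])  # process noise from the text
R = np.diag([0.5, 0.5, 1.0, 1.0])  # measurement noise from the text

def kf_step(x, P, z):
    """One predict/update cycle of the constant-velocity star tracker."""
    x_pred = F @ x                    # a priori state
    P_pred = F @ P @ F.T + Q          # a priori covariance
    y = z - x_pred                    # innovation (H = I)
    S = P_pred + R                    # innovation covariance
    K = P_pred @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y            # posterior state
    P_new = (np.eye(4) - K) @ P_pred  # posterior covariance
    return x_new, P_new
```

Because H is the identity, it drops out of the innovation and gain; a partial observation model would reintroduce H in those terms.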

3.3. Position Estimation

The star tracker in Section 3.1 operates on distorted images. The pixel location from each star track is extracted, rectified, and subsequently converted to a unit vector in NED coordinates according to Section 2.1. Given an observation X^{l} = [x\ y\ z]^{T}, the elevation of a star above the horizon is calculated as

el = \tan^{-1}\left( \frac{-z}{\sqrt{x^{2} + y^{2}}} \right)

and, subsequently, the zenith angle \zeta is calculated as 90^{\circ} - el.
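A small sketch of this conversion, assuming the NED convention (z positive down), so the vertical component is negated to obtain elevation above the horizon (function name is illustrative):

```python
import math

def elevation_zenith(x, y, z):
    """Elevation above the horizon and zenith angle (degrees) of a
    line-of-sight vector [x, y, z] in NED coordinates (z positive down)."""
    el = math.degrees(math.atan2(-z, math.hypot(x, y)))
    zenith = 90.0 - el
    return el, zenith
```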

In the initial case, the camera orientation C_{b/c} is not known to a high level of precision. Assuming that an attempt is made to mount the camera in alignment with the autopilot, an initial DCM may be formulated from some combination of 90° Euler rotations. It will be seen that, as long as this orientation is accurate to within a hemisphere of tolerance, the exact value does not matter, as it will be recalculated in flight.

If a minimum of six stars are visible within the frame, a Random Sample Consensus (RANSAC) approach to position estimation may be used to remove bias from false detections. RANSAC is a technique used to identify outliers in sample data and omit them from the estimation process. This is useful for detecting misidentified stars or for removing non-star detections (such as satellites or overhead planes). In the context of position estimation, the RANSAC algorithm randomly selects three stars to generate a position estimate. Subsequently, the algorithm compares the location of the remaining stars against the position estimate. If the remaining stars are within some heuristic tolerance, they are considered inliers. If they are outside of tolerance, they are considered outliers. The position estimate which minimizes the mean squared error of the inliers is chosen as the estimate. A pseudocode implementation of this RANSAC position estimation is outlined in Algorithm 1.

Algorithm 1 Position-RANSAC

Require: C_{l/b}, C_{b/c}
GHA_Γ ← getGHAAries()
As ← [ ]
Ps ← [ ]
for obs in observations do
    az, el ← computeAzEl(obs, C_{l/b}, C_{b/c})
    a, b, c, p ← computePositionCoefficients(GHA_Γ, az, el, obs.ra, obs.dec)
    As.append([a, b, c])
    Ps.append([p])
end for
best_error ← inf
x ← [0, 0, 0]
for 0 < i < iterations do
    inliers ← 0
    outliers ← 0
    inlier_error ← 0
    j ← 3 randomly selected indexes
    build A and p using As[j] and Ps[j]
    x̂ ← (AᵀA)⁻¹Aᵀp
    for all remaining indexes k do
        E ← |Ps[k] − As[k]x̂|
        if E < tolerance then
            inliers += 1
            inlier_error += E
        else
            outliers += 1
        end if
    end for
    if inliers : outliers > acceptable_ratio then
        mean_error ← inlier_error / inliers
        if mean_error < best_error then
            best_error ← mean_error
            x ← x̂
        end if
    end if
end for
φ ← tan⁻¹( x[2] / √(x[0]² + x[1]²) )
λ ← tan⁻¹( x[1] / x[0] )
return φ, λ
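The core consensus loop of Algorithm 1 can be sketched generically in Python. The construction of the coefficient rows (computePositionCoefficients) is not reproduced here, so an arbitrary overdetermined linear system A x = p stands in for the star-derived coefficients; the function name, tolerance, and acceptance ratio are illustrative:

```python
import numpy as np

def ransac_lstsq(A, p, iterations=100, tolerance=0.1,
                 acceptable_ratio=2.0, seed=0):
    """RANSAC over a linear system A x = p: fit x from random 3-row
    subsets and keep the hypothesis with the lowest mean inlier
    residual, provided the inlier:outlier ratio is acceptable."""
    rng = np.random.default_rng(seed)
    n = len(p)
    best_error, best_x = np.inf, None
    for _ in range(iterations):
        j = rng.choice(n, size=3, replace=False)
        x_hat, *_ = np.linalg.lstsq(A[j], p[j], rcond=None)
        rest = np.setdiff1d(np.arange(n), j)
        residuals = np.abs(p[rest] - A[rest] @ x_hat)
        inliers = residuals < tolerance
        outliers = (~inliers).sum()
        if inliers.sum() > acceptable_ratio * max(outliers, 1):
            mean_error = residuals[inliers].mean() if inliers.any() else np.inf
            if mean_error < best_error:
                best_error, best_x = mean_error, x_hat
    return best_x
```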

We first demonstrate that, in the general case, the camera DCM C_{b/c} is not fixed during flight. In an integrated solution, the celestial system would provide attitude measurements to the EKF, and the offset from this attitude would be estimated. As a modular solution, however, this is infeasible. Taking the output from the EKF to be the true orientation of the aircraft, we can use the true GPS position to compute C_{b/c}. We use the Kabsch-RANSAC methodology presented in [19] to calculate the ideal rotation between each star's theoretical location and its actual location. In conjunction with the autopilot output, this rotation yields C_{b/c}. Testing over an 89 s section of flat and level flight, it can be seen in Figure 5 that the camera roll angle shifts by approximately 0.2°, the pitch angle by approximately 0.05°, and the yaw angle by approximately 0.3°. In the context of celestial navigation, these are significant perturbations which, had the offsets been attributed to position as opposed to camera orientation, would have been interpreted as around 30 km of positional offset. Indeed, by assuming that C_{b/c} is fixed, we can see in Figure 6 that the latitude drifts by approximately 0.2° over the same short window of flat and level flight. Over an extended period (hours), the orientation may shift such that positional errors are in the range of hundreds of kilometers.

It is clear that a stand-alone celestial module cannot function in the conventional manner with consumer-grade hardware unless the autopilot is capable of integrating the attitude output from the celestial system into its own filter and estimating the camera orientation. We now demonstrate a method which, given a very rough initial estimate of the camera orientation, is capable of returning position estimates to within 4 km. This is significant in the context of long flights, where alternative dead-reckoning solutions would experience positional drift that is either linear (assuming velocity measurements are available) or quadratic (assuming only acceleration measurements) as a function of time.

By performing a rotation through 360° of compass heading at an approximately constant yaw rate, the position estimates can be averaged to find an approximation of the orbital center. Provided n images throughout an orbit are available, there exist n latitude and longitude estimates, which may be expressed as unit vectors p_i^{ecef} in the Earth-Centred Earth-Fixed (ECEF) frame. The mean position is taken to be the mean of these vectors:

\bar{p}^{ecef} = \frac{1}{n} \sum_{i=1}^{n} p_i^{ecef}

The latitude and longitude are simply calculated as

\phi = \sin^{-1}(z), \qquad \lambda = \tan^{-1}\left( \frac{y}{x} \right)

where x, y, and z are the elements of the mean position vector, normalised to unit length.
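A minimal sketch of this averaging step; note that the mean of unit vectors is not itself of unit length, so the sketch renormalises before converting back to latitude and longitude (function name is illustrative):

```python
import math

def mean_lat_lon(positions_ecef):
    """Average unit position vectors in the ECEF frame and convert the
    renormalised mean back to latitude/longitude (degrees)."""
    n = len(positions_ecef)
    x = sum(p[0] for p in positions_ecef) / n
    y = sum(p[1] for p in positions_ecef) / n
    z = sum(p[2] for p in positions_ecef) / n
    norm = math.sqrt(x * x + y * y + z * z)  # mean of unit vectors is sub-unit
    x, y, z = x / norm, y / norm, z / norm
    lat = math.degrees(math.asin(z))
    lon = math.degrees(math.atan2(y, x))
    return lat, lon
```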

Once an initial position estimate has been formulated, it may be used to calculate a more precise estimate of the camera orientation C_{b/c}. The aim of orientation estimation is to find the rotation matrix R that minimizes the residual error between a set of observations u^{c}, represented in the camera frame of reference, and a set of theoretical unit vectors v^{l} in the NED frame of reference:

E = \sum_{i=1}^{n} \left\| v_i^{l} - R u_i^{c} \right\|^{2}

In this case, the rotation matrix R is composed of the aircraft DCM, C_{l/b}, and the camera DCM, C_{b/c}:

R = C_{l/b} C_{b/c}

where we accept C_{l/b} as a deterministic output from the autopilot, and so the camera orientation may be found given R and C_{l/b}:

C_{b/c} = C_{l/b}^{T} R

Each observation u_i^{c} is correlated with a database star. This correlation is obtained through star identification. During the instantiation of the star tracker, a lost-in-space log-polar star identification algorithm is used to determine the ID of each star in the frame [20]. The theoretical position vectors v_i^{l} in NED coordinates are generated from the star's right ascension and declination, the time of observation, and the estimated latitude and longitude [13]. This allows for the use of the Kabsch algorithm [21] to find the rotation R between the observed stars and the database stars. Following the implementation in [19], the algorithm is provided in Algorithm 2, where A is the [n × 3] matrix of n observation vectors and B is the [n × 3] matrix of n theoretical vectors. The resulting rotation R is used in conjunction with the most recent autopilot attitude data to find the camera orientation C_{b/c}.

Algorithm 2 Kabsch

Translate the vectors in A and B such that their centroids are at the origin
Compute the matrix C = AᵀB
Compute the singular value decomposition U, Σ, V = SVD(C) such that C = U Σ Vᵀ
Set diagonal elements Σ₁ through Σₙ₋₁ to 1
if det(U Vᵀ) > 0 then
    Σₙ = 1
else
    Σₙ = −1
end if
Compute the rotation matrix R = V Σ Uᵀ
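A minimal NumPy sketch of Algorithm 2, assuming the rows of A and B are the paired observed and theoretical unit vectors (function name is illustrative):

```python
import numpy as np

def kabsch(A, B):
    """Optimal rotation R with R @ a_i ≈ b_i for paired vectors given as
    rows of A and B. The centroid-translation step is kept for
    generality, although unit star vectors are already roughly centred."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    C = A.T @ B                       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(C)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])        # guard against reflections
    return Vt.T @ D @ U.T
```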

Once the camera orientation has been found, two options are presented, depending on the computational power of the flight computer. If the computer is capable of parallel processing, then the set of theoretical star locations v l may be re-calculated using the updated latitude and longitude, and the estimated positions may be re-calculated using the updated camera DCM C b / c . This enables the system to recursively converge on a position estimate from a single set of orbital data. Alternatively, it is possible to repeat an orbit and calculate a new set of latitudes and longitudes. This method does not require post hoc processing and may be better suited to real-time applications. For the purposes of this study, we use the former method, as this allows us to treat each orbit as an independent sample, yielding an independent position estimate, thus giving us deeper insight into how the characteristics of a given orbit affect the position estimate. An example of this process can be seen in Figure 7.
