Vision assisted landing


Vision assisted landing

Paulo Neves
Hi,
I am doing my master's thesis on a system that automatically lands a quadrotor on a station, swaps its battery, and takes off again.

My station visual-recognition algorithm is ready to test, but I would like to know the best way to communicate the landing navigation procedure to Paparazzi. The visual-recognition algorithm would run on a device like a Gumstix and communicate over a serial interface.

I can think of two ways to communicate the navigation procedure:

Option 1
1. Gather flight data (attitude/position) from Paparazzi.
2. Gather the computed position from the visual algorithm.
3. Set the quadrotor to a stabilized mode (in Paparazzi I think it is called hold).
4. With the above, roll my own Kalman filter.
5. Create a PID controller that outputs movement on the three axes x, y, z; attitude control would still be done by Paparazzi.
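
The PID step above could start from three independent loops on the position error, one per axis. This is only a sketch under my own assumptions; the gains, time step, and axis conventions are illustrative placeholders, not Paparazzi's:

```python
# Minimal per-axis PID sketch: turn a position error (offset from the
# landing station) into x/y/z velocity commands. Gains are placeholders.
class AxisPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per axis; Paparazzi would keep handling attitude.
pids = {axis: AxisPID(kp=0.8, ki=0.05, kd=0.2, dt=0.1) for axis in "xyz"}

def velocity_command(position_error):
    # position_error: dict with 'x', 'y', 'z' offsets to the station
    return {axis: pids[axis].update(position_error[axis]) for axis in "xyz"}
```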

Option 2
1. Gather flight data (attitude/position).
2. Compute a position offset from the visual algorithm's position and the gathered flight data.
3. Feed the offset to the flight-data estimator.
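
The core operation in this second option is a measurement update: the estimator's predicted position is corrected toward the vision-derived position, weighted by how much each source is trusted. A scalar sketch (the variance numbers are made up; this is not Paparazzi's estimator internals):

```python
# Scalar Kalman-style measurement update: correct the estimator's position
# with the vision measurement, weighted by the two variances.
def fuse(est_pos, est_var, vis_pos, vis_var):
    k = est_var / (est_var + vis_var)        # Kalman gain
    fused_pos = est_pos + k * (vis_pos - est_pos)
    fused_var = (1.0 - k) * est_var
    return fused_pos, fused_var

# Example: estimator says 1.50 m (var 0.25), vision says 1.20 m (var 0.05);
# the fused estimate leans toward the lower-variance vision measurement.
pos, var = fuse(1.50, 0.25, 1.20, 0.05)
```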

What would be the best approach to follow?

Either of these two approaches requires me to write an Ivy parser. Unfortunately, I had already started writing a C++ MAVLink message library that would let me execute the navigation procedure of option 1. I understand Paparazzi uses the Ivy message system, and it looks like Ivy has a much higher-level API with most of the event-handling work included.
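
Since Ivy delivers each message as a line of text and callbacks are bound to regular expressions, the "parser" may be little more than a regex applied to the telemetry line. A sketch under my assumptions: the field layout shown for ROTORCRAFT_FP is illustrative, not checked against messages.xml, and with the real ivy-python bindings the regexp would go to IvyBindMsg() instead of being applied by hand:

```python
import re

# Ivy delivers each message as one text line: "<sender> <MSG_NAME> <fields...>".
# Field order below (east, north, up, as raw integers) is an assumption for
# this sketch -- check Paparazzi's messages.xml for the real definition.
FP_RE = re.compile(r"^(\S+) ROTORCRAFT_FP (-?\d+) (-?\d+) (-?\d+)")

def parse_position(line):
    """Return (sender, (east, north, up)) or None if the line doesn't match."""
    m = FP_RE.match(line)
    if m is None:
        return None
    sender = m.group(1)
    east, north, up = (int(m.group(i)) for i in (2, 3, 4))
    return sender, (east, north, up)
```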

Would it be much effort to implement the messaging system required for either of the two navigation procedures?

Thank you
Paulo Neves


_______________________________________________
Paparazzi-devel mailing list
[hidden email]
https://lists.nongnu.org/mailman/listinfo/paparazzi-devel

Re: Vision assisted landing

Reto Büttner


2014-06-03 15:52 GMT+02:00 Paulo Neves <[hidden email]>:
[...]


