Spring Semester Poster

Advanced CubeSat Imaging Payload
Joseph Mayer and Nathan Bossart
ECE 491 – Senior Design II – Department of Electrical and Computer Engineering
RASCAL: A two-spacecraft mission to demonstrate key technologies for proximity
operations. One spacecraft will use its propulsion system in conjunction with the
imaging payload to facilitate orbiting and docking
Imaging Payload: To achieve the goals of the RASCAL mission, each spacecraft
must identify the other and estimate parameters such as relative distance and angle.
The goal of this project is to design and implement an imaging payload that
obtains raw image data and converts it into actionable high-level data.
Figure 1: RASCAL mission diagram
Figure 2: Test CubeSat faces
Identification Strategy:
 Uses light/color patterns on the faces of a CubeSat for identification
 A to-scale model CubeSat with closed-polygon patterns was created for testing purposes
 Each pattern has the same height and width and is recognizable with a basic vertex-based shape-detection strategy
 Relative distance is obtained from the apparent scale of the object in the frame
 Relative angles are obtained from the position of the object in the frame combined with the relative distance
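The distance and angle estimates above follow directly from the pinhole camera model: apparent size shrinks linearly with range, and offset from the optical center gives bearing. A minimal sketch in plain Python (the function name, the focal-length parameter, and the 640x480 default frame size matching the OV7670 test camera are illustrative assumptions, not the flight code):

```python
import math

def estimate_range_and_bearing(width_px, cx_px, cy_px,
                               pattern_width_m, focal_px,
                               frame_w=640, frame_h=480):
    """Estimate relative distance and bearing of a detected pattern.

    width_px        -- apparent width of the pattern in pixels
    (cx_px, cy_px)  -- centroid of the pattern in the image
    pattern_width_m -- true pattern width in meters (known by design)
    focal_px        -- camera focal length in pixels (from calibration)
    """
    # Pinhole model: apparent size shrinks linearly with distance.
    distance_m = focal_px * pattern_width_m / width_px
    # Offset from the optical center gives the bearing angles.
    azimuth_rad = math.atan2(cx_px - frame_w / 2, focal_px)
    elevation_rad = math.atan2(frame_h / 2 - cy_px, focal_px)
    return distance_m, azimuth_rad, elevation_rad
```

For example, a 0.1 m wide pattern spanning 50 pixels with a 500-pixel focal length sits 1.0 m away; halving its apparent width doubles the estimated range.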
Software Verification:
 Before building the video pipeline on the Zynq-7000, we implemented and verified our image-processing
algorithms with the OpenCV libraries on commodity desktop hardware
 Additionally, much of the code for contour simplification and vertex detection could be reused in the software
domain of the final system
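The contour-simplification and vertex-detection step can be sketched in plain Python with the Ramer-Douglas-Peucker algorithm; the actual verification code used the OpenCV equivalents (e.g. approxPolyDP), so the function names and the epsilon default here are illustrative:

```python
import math

def _point_line_dist(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len = math.hypot(dx, dy)
    if seg_len == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / seg_len

def simplify_contour(points, epsilon):
    """Ramer-Douglas-Peucker: drop points closer than epsilon to the chord."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the chord between the endpoints.
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= epsilon:
        return [points[0], points[-1]]
    # Keep the farthest point and recurse on both halves.
    left = simplify_contour(points[:idx + 1], epsilon)
    right = simplify_contour(points[idx:], epsilon)
    return left[:-1] + right

def count_vertices(closed_contour, epsilon=2.0):
    """Vertex count classifies the pattern (3 = triangle, 4 = quad, ...)."""
    closed = closed_contour + [closed_contour[0]]
    return len(simplify_contour(closed, epsilon)) - 1  # endpoint repeated
```

Because every pattern on the test CubeSat shares the same height and width, the surviving vertex count alone is enough to tell the faces apart.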
Processing Image Data:
 A 640x480 test camera (OV7670) was purchased and interfaced with the ZedBoard development board
 A video pipeline was generated using Xilinx’s Vivado tool chain for image processing in the software domain
 Vivado HLS was used to generate hardware blocks to offload processing tasks better suited for hardware
 Xilinx SDK was used to create Linux applications to initialize the video pipeline and carry out tasks such as
contour detection and angle/distance calculations
 Completion of software verification with light and color patterns using the OpenCV libraries
 Creation of a test CubeSat with six distinct patterns (all with the same height and width) that are easily
recognizable using computer vision and vertex-based shape detection
 Initialization of and interfacing with an OV7670 camera from hardware on a Zynq-7000 FPGA
 Creation and verification of a basic video pipeline and initialization software (using Xilinx AXI video in/out,
video timing controller, VDMA, and Zynq processing IP cores)
 Creation of a boot image with a custom device tree that loads custom programmable logic (the video pipeline
hardware) before booting into a PetaLinux OS ramdisk
 Creation of customized HLS blocks to perform Sobel filtering and light thresholding
 Work towards integration of the HLS blocks into the video pipeline to provide hardware acceleration
 Work towards creation of Linux drivers for interacting with and configuring the AXI hardware blocks via the
PetaLinux OS
 Work towards creation of a Linux application to carry out device initialization (using custom Linux drivers)
and to complete the remainder of the image processing tasks using cross-compiled OpenCV libraries
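As a software-domain reference for what the Sobel-filter and light-thresholding HLS blocks compute, a minimal sketch in plain Python (2-D lists stand in for the AXI video stream; the |Gx| + |Gy| magnitude approximation and the zeroed-border handling are illustrative assumptions, not the HLS source):

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| with 3x3 Sobel kernels.

    img is a 2-D list of grayscale values; border pixels are left at 0,
    mirroring the simple behavior of a streaming hardware filter.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            # Saturate to the 8-bit range a hardware pipeline would output.
            out[y][x] = min(255, abs(gx) + abs(gy))
    return out

def light_threshold(img, cutoff=128):
    """Binarize: pixels at or above the cutoff become 255, the rest 0."""
    return [[255 if p >= cutoff else 0 for p in row] for row in img]
```

Both operations are per-pixel or small-window computations with no data dependencies across the frame, which is what makes them well suited to offloading into HLS-generated hardware blocks.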
Future Work
 Integration of HLS blocks into the video pipeline*
 Linux driver creation for Xilinx AXI peripherals (VDMA,
video timing controllers, and custom HLS blocks)*
 Cross-compilation of OpenCV program to carry out
recognition tasks on Zynq core running Linux*
 Creation and integration of a higher-resolution flight
camera (potentially a FLIR)
 In-depth calibration of algorithms and enabling of
processing system in the context of the RASCAL system
 Custom board creation and satellite integration
* Denotes that a significant amount of work towards this portion has already been completed
Figure 4: Software Verification Demonstration
References & Thanks
Special thanks to Dr. Kyle Mitchell, Dr. Jason Fritts, and Dr. Will Ebel.
Figure 3: Zedboard Development Board and OV7670 Camera
[1] Jan Erik Solem, Programming Computer Vision with Python. Creative Commons.
[2] Milan Sonka, Vaclav Hlavac, Roger Boyle, Image Processing, Analysis, and Machine Vision. Cengage Learning, 3rd edition.
[3] http://www.cs.columbia.edu/~jebara/htmlpapers/UTHESIS/node14.html (accessed October 29, 2013)
[4] http://cubesat.slu.edu/AstroLab/SLU-03__Rascal.html (accessed October 31, 2013)
[5] http://docs.opencv.org (accessed November 11, 2013)
[6] Gary Bradski, Adrian Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media, Inc., 1st edition.
[7] Matthew Russell, Scott Fischaber, "OpenCV Based Road Sign Recognition on Zynq," 2013 11th IEEE International Conference on Industrial Informatics (INDIN), pp. 596-601, July 2013.
[8] http://hamsterworks.co.nz/mediawiki/index.php/OV7670_camera (accessed February 10, 2014)
[9] FMC-IMAGEON Building a Video Design from Scratch Tutorial, Avnet, Version 1.3, March 15, 2014.
[10] Fernando Martinez Vallina, Processor Control of Vivado HLS Designs, Xilinx XAPP745, April 7, 2014.
[11] ZedBoard (Zynq Evaluation and Development) Hardware User's Guide, Avnet, Version 1.1, August 1, 2012.
Figure 5: Gantt Chart (as of 21 April 2014), tracking milestones from camera acquisition and interfacing through block-level processing, integration, and project wrap-up
Figure 6: Team Photo – Joe Mayer, Bob Urberger, and Nathan Bossart
