# Case Study

## 1. cpp_task

### What is the code doing?
- Start two threads which both increment a counter.
- The first thread increments every 2 seconds, indefinitely.
- The second thread increments every 1 second, but only 5 times, then sets `my_running` to false, which stops both threads.
- Both threads stop the next time they finish their current iteration.
- Finally, the counters are printed.
### Issues

Parameters of the `StartThread` function which are passed to the `std::thread` lambda by reference (`&`) are not guaranteed to outlive the thread returned from the function:

- `running`: declared in `main`, outlives the thread -> safe to use
- `Process`: temporary lambda (defined in the function call) -> needs to be copied into the lambda
- `timeout`: function-local variable (passed by value), does not outlive the function -> copy into the lambda (trivially copyable)
### How it was fixed

The default capture was changed to by-copy (`=`), and the `running` variable is captured by reference (`&running`).
The solution can be found in ./solution/cpp_task.cpp. The important part is:

```cpp
void StartThread(
    std::thread& thread,
    std::atomic<bool>& running,
    const std::function<bool(void)>& Process,
    const std::chrono::seconds timeout)
{
    thread = std::thread(
        [=, &running] () // <-- changed here
        {
            ...
```
### Further thoughts

The timeout is not enforced if the `Process` function never returns. However, there is no standard C++ way to forcibly kill a `std::thread`. It can still be achieved with OS-level threads, but this risks leaking memory allocated in the thread function, or leaving mutexes locked and thereby causing a deadlock.
## 2. python_task

The solution can be found in ./solution/python_task.py. The 90° rotation was implemented using a transpose followed by reversing the entries in each row.
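The transpose-then-reverse approach can be sketched as follows (the actual solution file is not reproduced here, so the function name is illustrative):

```python
def rotate_90_clockwise(matrix):
    """Rotate a 2D list 90° clockwise: transpose, then reverse each row."""
    # zip(*matrix) yields the columns of `matrix` (i.e. the transpose);
    # reversing each resulting row completes the clockwise rotation.
    return [list(row)[::-1] for row in zip(*matrix)]


grid = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
rotated = rotate_90_clockwise(grid)
# First row of the rotated grid is the first column read bottom-up: [7, 4, 1]
```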
## 3. Catch a UAV

### What kind of algorithm could be used to track (in 3D, i.e. XYZ axes) and capture the drone?
Assumptions:

- The 3D model of the UAV is known.
- The camera is calibrated: the intrinsic matrix (focal length, optical center) and the distortion coefficients are known.
- The 3D pose (position and rotation) of the camera is known.
- No ArUco markers are available on the UAV from which we could directly compute the 2D image coordinates of the marker corner points.
Tracking algorithm (can be implemented using e.g. OpenCV):
- Feature detection and matching:
  - Keypoint detection with e.g. the ORB (Oriented FAST and Rotated BRIEF) algorithm
  - Matching of the features in a reference image and the camera image using e.g. a FLANN matcher
  - Note: In case feature detection fails, a Deep Learning algorithm can be applied to extract the features, e.g. YOLO DNNs
  - Output: 2D image coordinates of distinctive points that we can match to the 3D model
- Perspective-n-Point (PnP) pose computation
  - Input:
    - List of 3D object points in body-frame coordinates taken from the 3D model
    - 2D image points matching the 3D points, from the detection algorithm
    - Camera parameters
  - Output:
    - Translation vector `tvec` (XYZ distance of the object's origin relative to the camera)
    - Rotation vector `rvec` as a Rodrigues vector, which can be transformed into a rotation matrix
    - Together these give the 3D pose of the UAV
- State estimation using a Kalman filter to get position, velocity and attitude of the UAV
Capture algorithm:
- Compute a smooth path to the UAV, e.g. with a spline, which can be used by the robot-arm control algorithm. Ensure the UAV stays centered in the camera view so it is not lost.
- Define a "docking" procedure together with the UAV specialist (depending on the capabilities of the UAV to hover, ...)
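The smooth-path idea can be sketched with SciPy's `CubicSpline` (an assumed dependency; the waypoints and timing below are invented, with the last waypoint standing in for the predicted interception point):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Invented waypoints from the gripper's current pose towards the
# predicted UAV interception point (XYZ in metres), with timestamps.
t = np.array([0.0, 1.0, 2.0, 3.0])  # seconds
waypoints = np.array([
    [0.0, 0.0, 0.5],
    [0.3, 0.1, 0.8],
    [0.6, 0.1, 1.2],
    [0.8, 0.0, 1.5],  # predicted interception point
])

# One cubic spline per axis gives a C2-continuous path the arm
# controller can sample at its own rate.
path = CubicSpline(t, waypoints, axis=0)
samples = path(np.linspace(0.0, 3.0, 50))  # 50 poses along the path
```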
### What are the challenges of using a single camera as the sensor, and how can they be resolved?
- Limited FOV
  - Problem: When the UAV is outside the image, it cannot be tracked.
  - Solution:
    - Initial position: Without other sensors to track the UAV, we can implement a search algorithm with the robot arm until the UAV is found.
    - When the UAV is already tracked but briefly hidden behind e.g. a tree, its state can be propagated with a Kalman filter to bridge the gap.
- Depth
  - Problem: The depth error (in the camera Z-axis) is significantly higher than in the X and Y axes. The distance of the UAV to the robot arm is, however, extremely important.
  - Solution: The Kalman filter as a last step will significantly reduce this error. Furthermore, we can move the camera to create a stereo baseline.
- Symmetry of the UAV
  - Problem: The UAV is very symmetrical, which can flip the solution of PnP algorithms.
  - Solution: Add some asymmetric markers, or use a Deep Learning model that recognizes the front/up direction.
- Motion blur
  - Problem: When the UAV is moving fast, it gets blurry, which is very bad for feature detection algorithms.
  - Solution: A short exposure time yields a sharper but noisier image. Additionally, optical-flow tracking can be used (matching motion between frames).
- Lighting
  - Problem: The reference images used for the feature detection or Deep Learning algorithms can differ in brightness globally and locally (e.g. a UAV directly in the direction of the sun can appear completely dark).
  - Solution: Use e.g. the CLAHE algorithm to adjust contrast locally, or use contour-based tracking for a very rough tracking of the shape and orientation.
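The Kalman-filter bridging above (propagating the state when the UAV is briefly occluded, and smoothing the noisy depth estimate) can be sketched per axis with a constant-velocity model; all noise values and timing here are illustrative tuning guesses:

```python
import numpy as np

dt = 0.1  # camera frame interval in seconds (assumed)

# Constant-velocity model for one axis: state x = [position, velocity].
F = np.array([[1.0, dt],
              [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])   # we only measure position
Q = np.eye(2) * 1e-4         # process noise (tuning guess)
R = np.array([[1e-2]])       # measurement noise (larger on camera Z)

def kf_step(x, P, z=None):
    """One predict step; if a measurement z is available, also update.

    Passing z=None models an occluded frame: the state is propagated
    without a correction, bridging the gap in the track."""
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:
        y = z - H @ x                   # innovation
        S = H @ P @ H.T + R             # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a target moving at 1 m/s, then coast through an occluded frame.
x, P = np.array([0.0, 0.0]), np.eye(2)
for k in range(1, 51):
    x, P = kf_step(x, P, z=np.array([k * dt]))  # measured positions
x, P = kf_step(x, P)  # occluded frame: predict only
```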
### What other approaches (sensors, etc.) than a single camera would you use?
- Data link to the UAV to get its navigation solution (probably GNSS/IMU based)
- Ultra-Wideband (UWB) sensor to get an additional distance measurement (a LIDAR could be used for the same purpose, but is probably more expensive)
- Radar to measure distance and velocity of the UAV