Teaching 3D Machine Vision at Université Côte d’Azur
During my PhD at Université Côte d’Azur, I had the opportunity to serve as a Teaching Assistant for the 3D Machine Vision Master’s course at Polytech Nice Sophia. It was one of the most rewarding experiences of my academic journey.
What the Course Covered
The course was designed to give students practical, hands-on experience with real 3D sensing hardware and robotics middleware. The core workflow involved:
- Azure Kinect DK — Microsoft’s depth-sensing camera for capturing RGB-D data
- ROS (Robot Operating System) — the standard middleware for robotics applications
- RViz — for real-time 3D point cloud visualization
Students went from unboxing a sensor to streaming live 3D point clouds in ROS — all within a few practical sessions.
The Hands-On Pipeline
The practical sessions followed a clear progression:
- Setting up the Azure Kinect SDK on Ubuntu, including installing `libk4a` and `k4a-tools`
- Verifying the sensor with `k4aviewer` to inspect RGB and depth streams
- Installing ROS Melodic and configuring the workspace
- Integrating the Kinect with ROS using Microsoft’s official `Azure_Kinect_ROS_Driver`
- Visualizing 3D data in RViz by subscribing to `PointCloud2` topics
This pipeline gave students a real end-to-end understanding of how depth data flows from a sensor into a robotics system — something textbooks alone can’t teach.
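To make that data flow concrete: a `PointCloud2` message is essentially a flat byte buffer with a fixed stride per point, and each field (x, y, z) lives at a known byte offset within that stride. The sketch below decodes such a buffer with only the standard library; the 16-byte stride and field offsets are illustrative assumptions that mirror a common driver layout, not values taken from the course materials.

```python
import struct

def unpack_xyz(data, point_step, x_offset=0, y_offset=4, z_offset=8):
    """Unpack little-endian float32 x/y/z fields from a PointCloud2-style
    byte buffer. The offsets are assumed here for illustration; a real
    message declares them in its `fields` metadata."""
    points = []
    for start in range(0, len(data), point_step):
        x, = struct.unpack_from("<f", data, start + x_offset)
        y, = struct.unpack_from("<f", data, start + y_offset)
        z, = struct.unpack_from("<f", data, start + z_offset)
        points.append((x, y, z))
    return points

# Build a tiny two-point buffer: 16-byte stride = 3 float32 + 4 pad bytes.
buf = b"".join(struct.pack("<fff4x", *p)
               for p in [(0.1, 0.2, 1.5), (0.0, -0.3, 2.0)])
print(unpack_xyz(buf, point_step=16))
```

In a live ROS node the same unpacking is normally delegated to helper utilities rather than done by hand, but seeing the raw layout once helps students understand what RViz is actually rendering.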
What I Learned from Teaching
Teaching reinforced concepts I thought I already knew. Explaining depth-guided sampling or coordinate transforms to students forced me to think about these topics more precisely. It also sharpened my ability to communicate technical ideas clearly — a skill that’s proved valuable in both research and industry.
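The coordinate transform I found myself explaining most often is the pinhole back-projection that turns a depth pixel into a 3D camera-frame point. A minimal sketch, with made-up intrinsic values rather than an actual Azure Kinect calibration:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth Z into a 3D point in the
    camera frame, using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy.
    The intrinsics (fx, fy, cx, cy) below are illustrative, not real
    calibration output."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Sanity check: the principal-point pixel always lands on the optical axis.
print(backproject(320.0, 240.0, 1.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0))
# (0.0, 0.0, 1.0)
```

Walking through this two-line formula on a whiteboard, and then watching the same math produce a point cloud in RViz, is exactly the kind of loop that made the course click for students.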
The course materials and setup instructions are available on the companion GitHub repository.
Looking Back
This teaching experience directly complemented my PhD research on neural rendering and 3D human digitization. Understanding the fundamentals of 3D sensing — calibration, depth maps, point clouds — was essential groundwork for the NeRF and Gaussian Splatting work that followed.
If you’re a student or researcher getting started with 3D vision, I’d highly recommend getting your hands on a depth sensor and working through a pipeline like this. There’s no substitute for seeing real point clouds streaming in real time.