At the center of the Self-Driving Car Research Studio is the QCar, an open-architecture, scaled model vehicle powered by an NVIDIA® Jetson™ TX2 supercomputer and equipped with a wide range of sensors, cameras, encoders, and user-expandable IO.
Built around software tools including Simulink®, Python™, TensorFlow, and ROS, the studio enables researchers to build high-level applications and reconfigure low-level processes, all supported by pre-built modules and libraries. Using these building blocks, you can explore topics such as machine learning and artificial intelligence training, augmented/mixed reality, smart transportation, multi-vehicle scenarios and traffic management, cooperative autonomy, navigation, mapping and control, and more.
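As a concrete illustration, a minimal perception building block of this kind could use Python and OpenCV (both listed under the supported software below) to extract lane markings from a camera frame. This is only a sketch: the image file name, thresholds, and output path are hypothetical placeholders, not part of the QCar software.

# Minimal lane-marking detection sketch using Python and OpenCV.
# File names and threshold values are illustrative placeholders.
import cv2
import numpy as np

frame = cv2.imread("camera_frame.png")              # hypothetical saved camera frame
if frame is None:
    raise SystemExit("camera_frame.png not found")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)      # convert to grayscale
edges = cv2.Canny(gray, 50, 150)                    # edge detection
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,  # probabilistic Hough transform
                        minLineLength=40, maxLineGap=10)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)  # overlay detected segments

cv2.imwrite("lanes_overlay.png", frame)             # save the annotated frame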
Supported Software and APIs
QUARC Autonomous Software License
Quanser APIs
TensorFlow
TensorRT
Python™ 2.7 & 3
ROS 1 & 2
CUDA®
cuDNN
OpenCV
DeepStream SDK
VisionWorks®
VPI™
GStreamer
Jetson Multimedia APIs
Docker containers with GPU support
Simulink® with Simulink Coder
Simulation and virtual training environments (Gazebo, QuanserSim)
Multi-language development supported with Quanser Stream APIs for inter-process communication (see the sketch after this list)
Unreal Engine
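The Quanser Stream APIs noted above follow a connect/send/receive pattern for moving data between processes written in different languages. The sketch below illustrates that pattern using Python's standard socket and multiprocessing modules as a stand-in; the local endpoint, message format, and sensor fields are assumptions for illustration, not the actual Quanser Stream interface.

# Inter-process communication sketch: one process publishes a (hypothetical)
# sensor reading, another receives it. Python's standard library stands in
# for the Quanser Stream API here.
import json
import socket
import time
from multiprocessing import Process

HOST, PORT = "127.0.0.1", 18000   # hypothetical local endpoint

def vehicle_process():
    # Publishes one hypothetical sensor reading, then exits.
    time.sleep(0.5)               # sketch-level synchronization: let the listener start
    with socket.create_connection((HOST, PORT)) as conn:
        reading = {"speed_mps": 0.42, "steering_rad": 0.05}
        conn.sendall(json.dumps(reading).encode())

def controller_process():
    # Accepts one connection and prints the received reading.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # allow quick re-runs
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            print("received:", json.loads(data.decode()))

if __name__ == "__main__":
    receiver = Process(target=controller_process)
    sender = Process(target=vehicle_process)
    receiver.start()
    sender.start()
    sender.join()
    receiver.join()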