AI Summary
Want to run the latest Python 3.12 on the Jetson Orin platform, only to find that the official TensorRT wheels don't support it yet? Don't worry: this step-by-step tutorial walks you through compiling your own TensorRT Python wheel on an Orin device from scratch. Beyond listing the required environment parameters and build steps, it helps you avoid common pitfalls such as dependency paths and architecture compatibility, so you can install the wheel and verify its core functionality.
— AI-generated article summary

## Environment

| Category | Key parameter | Recorded value | Notes |
| --- | --- | --- | --- |
| Base tools | CMake / GCC | 4.2.1 / 11.4.0 | CMake > 3.20 recommended |
| Python | Version / Path | 3.12.12 / .venv | Build inside the virtual environment |
| GPU environment | CUDA / cuDNN | 12.6 / 9.3.0 | |
| GPU architecture | CUDA_ARCH | sm_87 (Orin) | Determines hardware compatibility |
| TensorRT | Version | 10.3.0.26 | Build artifact |
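
Before starting, the recorded versions above can be cross-checked on the device with a short loop. This is only a convenience sketch; `nvcc` will show as missing if CUDA is not on `PATH` even when it is installed under `/usr/local/cuda`.

```shell
# Print one version line per tool the table above records; tools that
# are not on PATH are reported rather than aborting the loop.
for tool in cmake gcc python3 nvcc; do
    if command -v "$tool" >/dev/null 2>&1; then
        printf '%s: ' "$tool"
        "$tool" --version | head -n 1
    else
        echo "$tool: not found"
    fi
done
```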

## Build steps

### Clone the source

```bash
mkdir -p ~/makenv && cd ~/makenv
git clone --recursive --branch release/10.3 https://github.com/NVIDIA/TensorRT.git
cd ~/makenv/TensorRT/
```

### Create the virtual environment

```bash
uv venv .venv --python 3.12
source .venv/bin/activate
uv pip install setuptools
```
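
It may be worth confirming which interpreter the activated venv resolves to, since the build script is later invoked with `PYTHON_MAJOR_VERSION=3 PYTHON_MINOR_VERSION=12` and a mismatch would produce a wheel for the wrong ABI. A minimal check:

```shell
# Report the active interpreter's version; warn (rather than fail) if it
# is not the 3.12 that the build flags below assume.
python3 - <<'PY'
import sys

major, minor = sys.version_info[:2]
print(f"active interpreter: {major}.{minor}")
if (major, minor) != (3, 12):
    print("warning: not 3.12; adjust PYTHON_MINOR_VERSION accordingly")
PY
```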

### Set up pybind11 and the Python headers

```bash
export PYTHON_INCLUDE_DIR=$(python3 -c "from sysconfig import get_paths; print(get_paths()['include'])")
export EXT_PATH=~/makenv/external
git clone https://github.com/pybind/pybind11.git $EXT_PATH/pybind11
mkdir -p $EXT_PATH/python3.12/include
ln -s $PYTHON_INCLUDE_DIR/* $EXT_PATH/python3.12/include/
```
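
A quick sanity check (paths as set above) that both header trees actually resolve before starting the build; if `Python.h` is not reachable through the symlinks, the configure step inside `build.sh` fails much later with a less obvious error:

```shell
# Both include trees must resolve, or the build's configure step will
# fail later with a harder-to-read error. Paths match the exports above.
EXT_PATH=~/makenv/external
for f in "$EXT_PATH/pybind11/include/pybind11/pybind11.h" \
         "$EXT_PATH/python3.12/include/Python.h"; do
    if [ -f "$f" ]; then
        echo "OK: $f"
    else
        echo "MISSING: $f"
    fi
done
```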

### Build

```bash
export TENSORRT_LIBPATH=/usr/lib/aarch64-linux-gnu
export TRT_OSSPATH=~/makenv/TensorRT
cd $TRT_OSSPATH/python
TENSORRT_MODULE=tensorrt PYTHON_MAJOR_VERSION=3 PYTHON_MINOR_VERSION=12 TARGET_ARCHITECTURE=aarch64 ./build.sh
```

### Wheel output location

```
~/makenv/TensorRT/python/build/bindings_wheel/dist/tensorrt-10.3.0-cp312-none-linux_aarch64.whl
```

### Install

```bash
uv pip install ~/makenv/TensorRT/python/build/bindings_wheel/dist/tensorrt-10.3.0-cp312-none-linux_aarch64.whl
```

## Verification

### Minimal smoke test

```bash
python -c "
import tensorrt as trt

print('TensorRT version:', trt.__version__)

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)

# EXPLICIT_BATCH is deprecated and a no-op in TensorRT 10 (explicit batch
# is the only mode); the flag is kept here for readability
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 20)

print('Builder OK')
print('Network OK')
print('Config OK')
"
```