
Commit c224bb3

linting error fixes and rebase fix
1 parent: 1059d85

File tree

2 files changed: +0, -4 lines


py/torch_tensorrt/dynamo/_compiler.py

Lines changed: 0 additions & 3 deletions
@@ -518,11 +518,8 @@ def compile(
         enable_weight_streaming (bool): Enable weight streaming.
         tiling_optimization_level (str): The optimization level of tiling strategies. A higher level allows TensorRT to spend more time searching for better tiling strategy. We currently support ["none", "fast", "moderate", "full"].
         l2_limit_for_tiling (int): The target L2 cache usage limit (in bytes) for tiling optimization (default is -1 which means no limit).
-<<<<<<< HEAD
         offload_module_to_cpu (bool): Offload the module to CPU. This is useful when we need to minimize GPU memory usage.
-=======
         use_distributed_mode_trace (bool): Using aot_autograd to trace the graph. This is enabled when DTensors or distributed tensors are present in distributed model
->>>>>>> c3b62d239 (TensorRT-LLM import fix and aot_joint_export specify as explicit setting in dynamo.compile)
         **kwargs: Any,
     Returns:
         torch.fx.GraphModule: Compiled FX Module, when run it will execute via TensorRT
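The options named in this hunk are keyword arguments of `torch_tensorrt.dynamo.compile`. Below is a minimal sketch of how they might be passed, assuming a CUDA-capable torch_tensorrt build; the model, input shapes, and chosen flag values are illustrative placeholders and are not part of this commit:

```python
# Sketch only: the flag names come from the docstring in this diff; the
# model and inputs are placeholders. Flag interactions may vary by release.
import torch
import torch_tensorrt

model = torch.nn.Linear(64, 64).eval().cuda()
example_inputs = (torch.randn(8, 64).cuda(),)
exported = torch.export.export(model, example_inputs)

trt_module = torch_tensorrt.dynamo.compile(
    exported,
    inputs=example_inputs,
    tiling_optimization_level="moderate",  # "none", "fast", "moderate", or "full"
    l2_limit_for_tiling=-1,                # -1 means no L2 cache usage limit
    offload_module_to_cpu=True,            # trade GPU memory for host memory
)
```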

py/torch_tensorrt/dynamo/conversion/converter_utils.py

Lines changed: 0 additions & 1 deletion
@@ -1048,4 +1048,3 @@ def promote_trt_tensors_to_same_dtype(
     rhs_cast = cast_trt_tensor(ctx, rhs, promoted_dtype, f"{name_prefix}rhs_cast")

     return lhs_cast, rhs_cast
-
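For context, `promote_trt_tensors_to_same_dtype` casts both operands to a common dtype before an elementwise op. A minimal sketch of the promote-then-cast idea using PyTorch's standard promotion table; `torch.promote_types` here is a stand-in for whatever rule the real helper applies, not its actual implementation:

```python
# Sketch of the promotion step, not torch_tensorrt's actual helper:
# pick a common dtype for two operands; both sides are then cast to it.
import torch

def promoted_dtype_sketch(lhs: torch.dtype, rhs: torch.dtype) -> torch.dtype:
    return torch.promote_types(lhs, rhs)

assert promoted_dtype_sketch(torch.float16, torch.float32) == torch.float32
assert promoted_dtype_sketch(torch.int32, torch.float16) == torch.float16
```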