Open3D: A Modern Library for 3D Data Processing
Happy new year to all! The Open3D team and many amazing community members bring to you the latest and best Open3D with many new features and bug fixes. Here are the highlights of this release:
:warning: Warning: :warning: Due to incompatibilities in the cxx11_abi on Linux between PyTorch and TensorFlow, the official Python wheels on Linux support only PyTorch, not TensorFlow. See #6288 for details. If you would like to use Open3D with TensorFlow on Linux, you can build the Open3D wheel from source in Docker with TensorFlow (but not PyTorch) support as follows:
cd docker
# Build open3d and open3d-cpu wheels for Python 3.10 with TensorFlow support
export BUILD_PYTORCH_OPS=OFF BUILD_TENSORFLOW_OPS=ON
./docker_build.sh cuda_wheel_py310
New Doppler ICP registration for FMCW LiDARs (contributed by @heethesh).

Comparison of tunnel reconstructions using point-to-plane ICP (left) and Doppler ICP (right), with measurements collected by an FMCW LiDAR (image from the original Doppler-ICP repo).
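Conceptually, Doppler ICP augments the usual geometric residual with a per-point Doppler velocity residual: each point's measured radial velocity is compared against the radial velocity predicted from the sensor's own motion. The sketch below is illustrative only (the function name and sign convention are ours, not Open3D's API; the full objective follows the Doppler-ICP paper):

```python
import numpy as np

def doppler_residuals(points, doppler, v_sensor):
    """Per-point Doppler residual: measured radial velocity minus the
    radial velocity predicted from an (assumed known) sensor velocity.
    Illustrative sketch only; sign conventions vary between implementations."""
    dirs = points / np.linalg.norm(points, axis=1, keepdims=True)  # unit ray directions
    predicted = -dirs @ v_sensor  # closing speed induced by sensor motion
    return doppler - predicted

# A static world seen from a sensor moving at 2 m/s along +x:
pts = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
v = np.array([2.0, 0.0, 0.0])
meas = np.array([-2.0, 0.0])  # the point straight ahead closes in at 2 m/s
print(doppler_residuals(pts, meas, v))  # → [0. 0.]
```

For a static scene and the correct sensor velocity, the residuals vanish; minimizing them jointly with the point-to-plane term is what makes the method robust in feature-poor environments such as tunnels.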
Added in-memory support for XYZ files (#5866) (contributed by @samypr100)
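To illustrate what in-memory support means for the simple ASCII XYZ format, here is a sketch that parses a byte buffer without touching disk. `parse_xyz` is a hypothetical helper for illustration, not the Open3D API:

```python
import io
import numpy as np

def parse_xyz(buf: bytes) -> np.ndarray:
    """Parse an ASCII XYZ buffer (one 'x y z' triple per line) held in memory.
    Hypothetical helper, illustrating the format rather than Open3D's API."""
    return np.loadtxt(io.BytesIO(buf), dtype=np.float64).reshape(-1, 3)

pts = parse_xyz(b"0 0 0\n1 0 0\n0 1 0\n")
print(pts.shape)  # → (3, 3)
```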
A modern, more user-friendly Furo theme for the Open3D documentation (contributed by @saurabheights)
The master branch has been renamed to main
Here is the full set of updates:
Added add_voxel and remove_voxel in pybind for VoxelGrid::AddVoxel and VoxelGrid::RemoveVoxel (#6023) (contributed by @ohkwon718)

We would like to thank all of our community contributors for their true labor of love for this release! Also thanks to the many others who helped the Open3D community by reporting as well as resolving issues.
We are happy to bring you the best Open3D yet! This is a "tick" release focused on resolving existing issues and eliminating bugs. We resolved over 150 issues for Open3D and Open3D-ML since the last release.
Here are the main highlights of this release:
Here we use Mitsuba to recover the albedo map from an input image for an object in Open3D.
pip install open3d-cpu.

dataset = o3d.data.MonkeyModel()
mesh = o3d.t.io.read_triangle_mesh(dataset.path)
mesh_center = mesh.get_axis_aligned_bounding_box().get_center()
mesh.material.set_default_properties()
mesh.material.material_name = 'defaultLit'
mesh.material.scalar_properties['metallic'] = 1.0
mesh.material.texture_maps['albedo'] = o3d.t.io.read_image(dataset.path_map['albedo'])
mesh.material.texture_maps['roughness'] = o3d.t.io.read_image(dataset.path_map['roughness'])
mesh.material.texture_maps['metallic'] = o3d.t.io.read_image(dataset.path_map['metallic'])
mi_mesh = mesh.to_mitsuba('monkey')
img = render_mesh(mi_mesh, mesh_center.numpy())
mi.Bitmap(img).write('test.exr')
bsdf_rough_plastic = mi.load_dict({
'type': 'roughplastic',
'diffuse_reflectance': {
'type': 'rgb',
'value': [0.1, 0.1, 0.1]
},
'alpha': 0.1,
'int_ior': 'bk7',
'ext_ior': 'air',
})
mi_mesh = mesh.to_mitsuba('monkey', bsdf=bsdf_rough_plastic)
img = render_mesh(mi_mesh, mesh_center.numpy())
mi.Bitmap(img).write('test3.exr')
See examples/python/visualization/to_mitsuba.py for more details.
(remotehost) $ open3d draw_web --bind_all /path/to/3D/file
and then open the browser to http://remotehost:8888
[New] Added a function to extract faces in a triangle mesh with a binary mask.
[New] UV MAPS Tutorial (contributed by @samliozge).
[Update] Added voting and ray jitter to RaycastingScene to improve robustness of signed distance queries.
[Update] Improved speed of CreateFromTriangleMeshWithinBounds() by > 100x (contributed by @Hodong-Hwang).
[Update] Added parallelization to UV atlas computation by partitioning the mesh.
[Update] Function CreateFromVoxelGrid is made static (contributed by @mjaein).
[Fix] Fixed wrong voxel center calculation in CreateFromTriangleMeshWithinBounds() (contributed by @plusk01).
[Fix] Replaced Vectors from internal Embree headers with Eigen.
[Fix] Use the same beta value in SamplePointsPoissonDisk() as in the paper.
[Fix] Fixed Python Image calculations (pybind Image::To argument order).
[Fix] Consistent face orientation for generated alpha shapes.
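The mask-based face extraction above can be pictured with a short numpy sketch: keep the faces whose vertices are all selected by a boolean vertex mask, then reindex. The helper name and signature are ours for illustration, not Open3D's actual API:

```python
import numpy as np

def select_faces_by_vertex_mask(vertices, triangles, mask):
    """Keep faces fully inside a boolean vertex mask and compact the vertex
    array. Illustrative sketch, not Open3D's actual signature."""
    keep = mask[triangles].all(axis=1)           # faces with all vertices selected
    tris = triangles[keep]
    used = np.unique(tris)                       # vertices still referenced
    remap = -np.ones(len(vertices), dtype=np.int64)
    remap[used] = np.arange(len(used))           # old index -> new index
    return vertices[used], remap[tris]

V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [2, 2, 2]], dtype=float)
F = np.array([[0, 1, 2], [1, 2, 3]])
mask = np.array([True, True, True, False])
v2, f2 = select_faces_by_vertex_mask(V, F, mask)
print(f2)  # → [[0 1 2]]
```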
from open3d.t.geometry import TriangleMesh
from open3d.ml.torch.ops import fixed_radius_search # pybind symbol
from open3d.ml.tf.models import KPFCNN # Python code
from open3d.visualization import gui # pybind symbol
from open3d.visualization import draw # Python code
from open3d.visualization.gui import Color
from open3d.visualization.rendering import Camera
No need for awkward shortcuts such as:
import open3d as o3d
Tensor = o3d.core.Tensor
Image = o3d.t.geometry.Image
CUDA_CALL without the open3d namespace (contributed by @yuecideng).

Install the CPU-only wheel with pip install open3d-cpu.

[v0.17.0-1fix6008] The Python wheel may crash when run on Apple Silicon systems, especially on M2. (#5951)

We would like to thank all of our community contributors for their true labor of love for this release!
@bialasjaroslaw @Birkenpapier @cansik @cielavenir @ClarytyLLC @friendship1 @geppi @Hodong-Hwang @jdavidberger @johnthagen @ligerlac @mariusud @MartinEekGerhardsen @micsc12 @mjaein @NobuoTsukamoto @PieroV @plusk01 @roehling @samliozge @theNded @UnadXiao @yuecideng @yxlao
Also thanks to the many others who helped the Open3D community by reporting as well as resolving issues.
The fall brings a new "tock" release of Open3D, packed with new features and updates! Here are the highlights:
Watch the release video here.
Open3D had a successful Google Summer of Code 2022 with many new features added, and more in the works for the next release. Here are the features that are part of this release:
Mesh creation functions: create_arrow(), create_box(), create_cone(), create_coordinate_frame(), create_cylinder(), create_icosahedron(), create_mobius(), create_octahedron(), create_sphere(), create_tetrahedron(), create_torus().
png/jpg texture loading in glb (binary glTF) files.
float instead of double by default.
The draw_plotly method brings interactive 3D visualization for Open3D to Jupyter notebooks and cloud environments (e.g., Colab).
pkg-config files, now available for Linux and macOS in the Open3D binary packages.
Separate build (requirements_build.txt) and runtime (requirements.txt) dependencies (with help from @johnthagen).
make_docs.py --parallel option.
option.import open3d as o3d
o3d.utility.random.seed(42)
#include <iostream>

#include "open3d/utility/Random.h"

int main() {
    using namespace open3d;
    utility::random::Seed(42);  // Globally seed.
    std::cout << utility::random::RandUint32() << std::endl;  // Simply get a random number.
    return 0;
}
SizeVector improvements; minimum() and maximum() ops (contributed by @yuecideng); Any(), All(), RemoveNonFinite().
int64 index dtype in NearestNeighborSearch (contributed by @chrockey).
RadiusSearch for EstimateNormals() for the Tensor PointCloud (contributed by @yuecideng).
Attribute-style access for tensor geometry: pcd.point["colors"] is pcd.point.colors, and tmesh.triangle["normals"] is tmesh.triangle.normals.
Geometry classes of the Tensor API also add new functionality based on VTK:
Further, we added functionality for parametrizing meshes with the UVAtlas library, and functions for baking vertex and triangle attributes to textures.
New functionality for PointCloud: ClusterDBScan, ConvexHull, RemoveDuplicatedPoints, PaintUniformColor, FarthestPointDownSample, HiddenPointRemoval, and SegmentPlane.

(Figure: PointCloud | Boundaries.)
RemoveDuplicatedPoints() for PointCloud (Eigen API) (contributed by @scimad).
SelectByIndex and minor improvements to SelectByMask (contributed by @yuecideng).
AxisAlignedBoundingBox (contributed by @yuecideng).
EstimateRange() update (contributed by @jdavidberger).
float and uint16 texture formats now work correctly.
point_width and line_width parameters in the Material class.
Set OPEN3D_CPU_RENDERING=true before importing Open3D in Python or running a C++ program to enable CPU rendering. See the tutorial for full details.
VisualizerWithVertexSelection point-picking functions added to the Python API (contributed by @d-walsh and @cansik).
VisualizerWithEditing class (contributed by @yuecideng).

(Figure: Original | With noise + distortion | Difference.)
Read ply, stl, obj, off, gltf, glb, and fbx file formats directly into a Tensor TriangleMesh.
InitializePointCloudForColoredICP efficiency improvement (contributed by @Xiang-Zeng).
Removed usage of torch._six (for compatibility with future PyTorch versions) (contributed by @krshrimali).

We would like to thank all of our community contributors for their true labor of love for this release!
@ntw-au, @jdavidberger, @Xiang-Zeng, @jamesdi1993, @brentyi, @jjabo, @jbotsch-fy, @scimad, @cansik, @NobuoTsukamoto, @theNded, @chunibyo-wly, @jmherzog-de, @luzpaz, @code-review-doctor, @d-walsh, @johnthagen, @pmokeev, @erbensley, @hanzheteng, @chrockey, @agrellRepli, @bchretien, @nigels-com, @forrestjgq, @equant, @naruarjun, @ajprax, @INF800, @ntw-au, @tejaswid, @Krupal09, @krshrimali
Also thanks to the many others who helped the Open3D community by reporting as well as resolving issues.
We are excited to bring you the best Open3D yet - version 0.15. Watch the release video here.
Starting from this release, we adopt a "tick-tock" model for balancing resolving issues vs. adding new features. In a nutshell, the "tick" releases are focused on resolving existing issues and eliminating bugs, while the "tock" releases mainly focus on developing new features. Open3D 0.15 is a "tick" release. We resolved over 500 issues for Open3D and Open3D-ML, as the infographic below illustrates.
Open3D has applied for the Google Summer of Code 2022 to increase community participation. Check out details and our project ideas here. Please help in making Open3D better for all.
Install with pip install open3d.
Use -DGLIBCXX_USE_CXX11_ABI=OFF in cmake if you need the old ABI, e.g. to work with PyTorch / TensorFlow libraries.
You can now pip install open3d inside a Conda virtual environment.

The following snippet loads a Dataset object, extracts its path, and displays it in the Open3D Visualizer:
import open3d as o3d
if __name__ == "__main__":
dataset = o3d.data.EaglePointCloud()
pcd = o3d.io.read_point_cloud(dataset.path)
o3d.visualization.draw(pcd)
#include <string>
#include <memory>
#include "open3d/Open3D.h"
int main() {
using namespace open3d;
data::EaglePointCloud dataset;
auto pcd = io::CreatePointCloudFromFile(dataset.GetPath());
visualization::Draw({pcd});
return 0;
}
[New] Open3D-dedicated Command Line Interface (CLI) for visualization and running Python examples. Below is a code snippet to get started with Open3D and its examples.
# Install Open3D pip package
pip install open3d
# Print help
open3d --help
# List all runnable examples
open3d example --list
# Print source code of an example
open3d example --show [category]/[example_name]
# Run an example
open3d example [category]/[example_name]
# Run Open3D Viewer
open3d draw
# Open a mesh or point cloud file in Open3D Viewer
open3d draw [filename]
[Update] Python examples directory has been refactored for better namespace consistency and new examples have been added.
Preload the Mesa library with LD_PRELOAD from the command line:
LD_PRELOAD=/home/open3d/development/mesa-21.3.4/libGL.so python examples/python/visualization/draw.py
Or load the library from Python before importing Open3D:

import ctypes
ctypes.cdll.LoadLibrary('/home/open3d/development/mesa-21.3.4/libGL.so')
import open3d as o3d
mesh = o3d.io.read_triangle_model('/home/open3d/development/FlightHelmet/FlightHelmet.gltf')
o3d.visualization.draw(mesh)
[Fix] OrientedBoundingBox was mirrored.
[Update] GetSelfIntersectingTriangles() and related functions like IsWatertight(), GetVolume(), etc. are now more than 4 times faster.
[Fix] io::AddTrianglesByEarClipping() could fail for concave polygons.

We would like to thank all of our community contributors for their true labor of love for this release!
@ajinkyakhoche @ceroytres @chunibyo-wly @dkurt @forrestjgq @Fuhrmann-sep @jeertmans @junha-l @mag-sruehl @maxim0815 @Nicholas-Mitchell @nigels-com @NobuoTsukamoto @ntw-au @roehling @theNded
Also thanks to the many others who helped the Open3D community by reporting as well as resolving issues.
We are excited to present the new Open3D version 0.14!
In this release, you will find:
New RayCastingScene class.
Install with pip install open3d. We recommend installing Open3D with pip inside a conda virtual environment.
You can now git clone https://github.com/isl-org/Open3D.git without the --recursive flag. Also please note the updated GitHub URL.
Open3D now builds in Release mode by default if CMAKE_BUILD_TYPE is not specified. Python is no longer required for building Open3D for C++ users.

Now you can use Open3D within TensorBoard for interactive 3D visualization! At a glance, any custom property of a PointCloud, from scalar to vector, can be easily visualized.

To get started, write some sample geometry data to a TensorBoard summary with this snippet:
from torch.utils.tensorboard import SummaryWriter # TensorFlow also works, see docs.
import open3d as o3d
from open3d.visualization.tensorboard_plugin import summary
from open3d.visualization.tensorboard_plugin.util import to_dict_batch
writer = SummaryWriter("demo_logs/")
cube = o3d.geometry.TriangleMesh.create_box(1, 2, 4)
cube.compute_vertex_normals()
colors = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
for step in range(3):
cube.paint_uniform_color(colors[step])
writer.add_3d('cube', to_dict_batch([cube]), step=step)
Now you can visualize this in TensorBoard with tensorboard --logdir demo_logs
. For more details on how to use TensorBoard with Open3D, check out this tutorial.
Further enhancements have been added to the GUI viewer. Now you can:
Directly visualize tensor-based geometry classes including PointCloud
, TriangleMesh
, and LineSet
.
Use physically based rendering (PBR) materials that deliver appealing appearance.
New default lighting environment and skybox improves visual appeal
Use all the functionality in Tensorboard!
import open3d as o3d
import open3d.visualization as vis
a_sphere = o3d.geometry.TriangleMesh.create_sphere(2.5, create_uv_map=True)
a_sphere.compute_vertex_normals()
a_sphere = o3d.t.geometry.TriangleMesh.from_legacy(a_sphere)
# Compare this...
vis.draw(a_sphere)
a_sphere.material = vis.Material('defaultLit')
a_sphere.material.texture_maps['albedo'] = o3d.t.io.read_image('examples/test_data/demo_scene_assets/Tiles074_Color.jpg')
a_sphere.material.texture_maps['roughness'] = o3d.t.io.read_image('examples/test_data/demo_scene_assets/Tiles074_Roughness.jpg')
a_sphere.material.texture_maps['normal'] = o3d.t.io.read_image('examples/test_data/demo_scene_assets/Tiles074_NormalDX.jpg')
# With this!
vis.draw(a_sphere)
A complete, complex demo scene can be found at examples/python/gui/demo-scene.py
The Open3D Tensor class received a major performance boost with the help of the Intel ISPC compiler and optimization for the contiguous code path.
(See python/benchmarks/core
for the benchmark scripts. For each operation, the geometric mean of run times with different data types is reported. The time is measured with an Intel i9-10980XE CPU.)
A major upgrade of the parallel HashMap is done. Now you can choose between a multi-valued HashMap and a HashSet depending on your value types. A comprehensive tutorial is also available.
Linear algebra performance has been optimized for small matrices, especially 4x4 transformations.
Semantics for tensors and tensor-based geometry have been improved, especially for device selection.
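The 4x4 fast path matters because applying a homogeneous transform to a point set is at the heart of registration and rendering. In plain numpy terms, the operation being optimized is:

```python
import numpy as np

def transform_points(T, pts):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    return pts @ T[:3, :3].T + T[:3, 3]

# A 90-degree rotation about z plus a translation:
T = np.eye(4)
T[:3, :3] = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
T[:3, 3] = [1.0, 2.0, 3.0]
print(transform_points(T, np.array([[1.0, 0.0, 0.0]])))  # → [[1. 3. 3.]]
```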
Functions expecting a Tensor now accept Numpy arrays and Python lists. For example:
import open3d as o3d
import numpy as np
mesh = o3d.t.geometry.TriangleMesh()
mesh.vertex['positions'] = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0]], dtype=np.float32)
mesh.vertex['colors'] = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.float32)
mesh.triangle['indices'] = [[0, 1, 2]]
o3d.visualization.draw(mesh)
.npz and .npy format support for Open3D tensors and tensor maps. It is now easier to convert between Open3D geometry classes and Numpy properties.
Faster loading of .ply, .pcd, and .pts files. Geometry loading time is hence improved for the stand-alone visualizer app.

We introduce a new class RaycastingScene with basic ray intersection functions and distance transforms for meshes, utilizing the award-winning Intel Embree library.
Example code for rendering a depth map:
import open3d as o3d
import matplotlib.pyplot as plt
# Create scene and add a cube
cube = o3d.t.geometry.TriangleMesh.from_legacy(o3d.geometry.TriangleMesh.create_box())
scene = o3d.t.geometry.RaycastingScene()
scene.add_triangles(cube)
# Use a helper function to create rays for a pinhole camera.
rays = scene.create_rays_pinhole(fov_deg=60, center=[0.5,0.5,0.5], eye=[-1,-1,-1], up=[0,0,1],
width_px=320, height_px=240)
# Compute the ray intersections and visualize the hit distance (depth)
ans = scene.cast_rays(rays)
plt.imshow(ans['t_hit'].numpy())
Distance transform generated with RaycastingScene:
See the tutorials for more information (Ray casting, Distance queries).
Normal estimation for tensor PointCloud
is supported with the tensor-compatible nearest neighbor search modules.
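Under the hood, normal estimation fits a plane to each point's neighborhood: the normal is the eigenvector of the neighborhood covariance with the smallest eigenvalue. A minimal numpy sketch of that per-point step (conceptual, not Open3D's implementation):

```python
import numpy as np

def estimate_normal(neighbors):
    """Normal of a local neighborhood: the direction of least variance,
    i.e. the eigenvector with the smallest eigenvalue of the covariance."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # eigenvector of the smallest one

# A small planar patch in the z = 0 plane:
patch = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.5, 0]], dtype=float)
n = estimate_normal(patch)
print(np.abs(n))  # → [0. 0. 1.]
```

In the real pipeline, the neighborhood comes from the nearest neighbor search module (KNN or hybrid search), and normal orientations are made consistent afterwards.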
Customizable tensor-based TriangleMesh, VoxelBlockGrid, and LineSet classes have been implemented, allowing user-defined properties. For example:
import open3d as o3d
import open3d.core as o3c
mesh = o3d.t.geometry.TriangleMesh()
mesh.vertex["positions"] = o3c.Tensor([[0.0, 0.0, 1.0],
[0.0, 1.0, 0.0],
[1.0, 0.0, 0.0],
[1.0, 1.0, 1.0]], dtype=o3c.float32)
mesh.vertex["my_custom_labels"] = o3c.Tensor([0, 1, 2, 4], dtype=o3c.int32)
mesh.triangle["indices"] = o3c.Tensor([[0, 1, 2],
[1, 2, 3]], dtype=o3c.int32)
The Open3D-ML library welcomes more state-of-the-art models and operators that are ready to use for advanced 3D perception, especially semantic segmentation, including SparseConvolution and SparseConvolutionTranspose with PyTorch support. Refer to the tutorials for training and inference on new models (PyTorch, TensorFlow).
pip install open3d-xxx.whl
# Test the new visualizer
python -c "import open3d as o3d; c = o3d.geometry.TriangleMesh.create_box(); o3d.visualization.draw([c])"
# Test the traditional visualizer
python -c "import open3d as o3d; c = o3d.geometry.TriangleMesh.create_box(); o3d.visualization.draw_geometries([c])"
We thank all the community contributors for this release! (alphabetical order) @cclauss @chrockey @chunibyo-wly @cosama @forrestjgq @gsakkis @junha-l @ktsujister @leomariga @li6in9muyou @marcov868 @michaelbeale-IL @muskie82 @nachovizzo @NobuoTsukamoto @plusk01 @ShreyanshDarshan @ShubhamAgarwal12 @SoftwareApe @stanleyshly @stotko @theNded @zhengminxu
We welcome you to the 0.13.0 release of Open3D. This release is full of exciting new features with a strong emphasis on real-time pipelines, but also full of bug fixes and usability improvements. The big highlights of this release are as follows:
Click in the image above to watch the presentation video, or visit:
https://www.youtube.com/watch?v=pLCVCH7ypI4
We introduce a new CUDA accelerated pipeline including RGBD odometry, frame-to-model tracking, and volumetric integration.
Figure 1. Example of 3D reconstruction from an RGB-D sensor.
We introduce the tensor-based real-time RGBD Odometry pipeline. In addition to the legacy Hybrid and Intensity-based methods, we support the popular point-to-plane method.
We further accelerate volumetric integration and introduce fast ray casting for rendering.
Based on the accelerated RGBD odometry and ray casting, we present the fully functional VoxelHashing system. It performs dense volumetric reconstruction with fast frame-to-model tracking. We present an easy-to-use GUI that also shows real-time interactive surface reconstruction.
We have further enhanced our legacy offline reconstruction system by introducing the Simultaneous Localization and Calibration (SLAC) algorithm. This algorithm applies advanced dense multi-way registration along with non-rigid deformation to create highly-accurate reconstructions.
We present a high-performance implementation of ICP using Open3D's Tensor library. This module is one of the first to leverage the new Neighbor Search module and the newly crafted parallel kernels. This implementation brings support for multi-scale ICP, which iterates at progressively finer resolutions to accelerate convergence while keeping computation low.
Figure 2. ICP registration of multiple point clouds from a driving dataset.
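For a given set of correspondences, each ICP iteration solves for the best rigid transform in closed form. A numpy sketch of that inner step (the classical Kabsch/SVD solution for point-to-point ICP; not Open3D code):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form rigid alignment of corresponded point sets (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

rng = np.random.default_rng(0)
src = rng.standard_normal((50, 3))
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = best_rigid_transform(src, dst)
print(np.allclose(R, R_true))  # → True
```

Multi-scale ICP wraps a loop like this around downsampled copies of the clouds, re-estimating correspondences at each scale.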
Neighbor search is at the core of many 3D algorithms. Therefore, it is critical to have access to a fast implementation able to execute a large number of queries in a fraction of a second. After months of development, the Open3D team is proud to present the new Neighbor Search module!
This module brings support for core search algorithms, such as KNN, Radius search, and Hybrid search. All these algorithms are provided with support for both CPU and GPU, through a common and easy-to-use interface. Write your code once and support multiple devices! Moreover, we have not sacrificed a single flop of computation, making this module one of the fastest neighbor search libraries ever created.
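For reference, the three search modes reduce to the following brute-force numpy definitions (the module itself uses accelerated index structures and batched queries on CPU and GPU):

```python
import numpy as np

def knn(points, query, k):
    """The k nearest neighbors of a single query point."""
    d = np.linalg.norm(points - query, axis=1)
    idx = np.argsort(d)[:k]
    return idx, d[idx]

def radius_search(points, query, r):
    """All neighbors within radius r."""
    d = np.linalg.norm(points - query, axis=1)
    idx = np.flatnonzero(d <= r)
    return idx, d[idx]

def hybrid_search(points, query, r, k):
    """At most k neighbors, all within radius r."""
    idx, d = knn(points, query, k)
    keep = d <= r
    return idx[keep], d[keep]

pts = np.array([[0.0, 0, 0], [1, 0, 0], [3, 0, 0]])
print(knn(pts, np.zeros(3), 2)[0])              # → [0 1]
print(radius_search(pts, np.zeros(3), 1.5)[0])  # → [0 1]
print(hybrid_search(pts, np.zeros(3), 0.5, 2)[0])  # → [0]
```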
The need for visualizing complex 3D data in web environments has surged considerably in the past few years, in part thanks to the proliferation of sensors like LIDAR and RGBD cameras. New use cases, such as online dataset inspection and remote visualization are now an integral part of many tasks, requiring the crafting of ad-hoc tools, which often are cumbersome to use.
Figure 3. Standalone visualization of a semantic segmentation model in a browser.
In order to improve this situation, we introduce our new web-based visualization module, which enables 3D visualization from any browser in any location. This module lets users run advanced rendering and visualization pipelines, both remotely and locally, through a web browser. All the power of Open3D's rendering engine (including support for PBR materials, multiple lighting systems, 3D ML visualization, and many other features) is now available in your browser. This module also includes a Jupyter extension for interactive web-based visualization! This new feature allows you to run compute-intensive 3D processing on a dedicated server while visualizing the results remotely on any device through your browser.
Figure 4. Visualization of a 3D model on a Jupyter notebook.
In this release, we introduce a new point cloud semantic segmentation architecture based on a Sparse Convolution-based UNet model. This architecture leverages the new sparse convolution operators provided by Open3D, and achieves state-of-the-art performance for semantic segmentation on the ScanNet dataset. We have also added support for PointRCNN for the task of 3D object detection. To enable PointRCNN we have added new operators, such as furthest_point_sampling, three_interpolate, and ball_query, which are available through Open3D for TensorFlow and PyTorch.
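As an illustration of one of these operators, greedy furthest-point sampling can be sketched in a few lines of numpy (a reference CPU version, not the accelerated op shipped with Open3D):

```python
import numpy as np

def furthest_point_sampling(points, m):
    """Greedily pick m points, each maximally far from the set picked so far.
    Reference O(N*M) implementation for illustration."""
    selected = [0]                                    # seed with the first point
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(m - 1):
        nxt = int(np.argmax(dist))                    # furthest from the selection
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(selected)

pts = np.array([[0.0, 0], [0.1, 0], [5, 0], [10, 0]])
print(furthest_point_sampling(pts, 3))  # → [0 3 2]
```

Note how the nearly duplicate point at x = 0.1 is skipped in favor of points that spread the sample over the data, which is exactly why FPS is used to choose anchor points in networks like PointRCNN.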
Figure 5. Example of 3D semantic segmentation using a SparseConvUNet model on ScanNet.
Figure 6. Example of 3D object detection using a PointRCNN on KITTI.
All these models are provided with their respective training and inference pipelines, with support for TensorFlow and PyTorch. Pre-trained models are also provided (check out the following link).
This release brings the following datasets: ScanNet and SunRGBD.
We now support all models on newer versions of TensorFlow (2.4.1) and PyTorch (1.7.1), on CUDA 11.0.
Open3D 0.13.0 brings a cascade of improvements and fixes to the renderer and GUI modules.
Our pip packages now include support for CUDA 11.0, PyTorch 1.7.1, and TensorFlow 2.4.1 to enable RTX 3000 series devices. Please note that we provide custom PyTorch wheels for Linux to work around an incompatibility between CUDA 11, PyTorch, and extension modules such as Open3D-ML.
This release also brings new improved support for CUDA on Windows. Users can now build CUDA accelerated Python wheels for Windows. Open3D is now built with security options enabled by default.
We hope you find Open3D 0.13.0 exciting and useful. Happy coding!
Remember that you can reach out with questions, requests, or feedback through the following channels:
Find the full change log here.
The Open3D team
Open3D 0.12.0 is out, and it comes with new 3D object detection pipelines and datasets, the newest versions of some of your preferred classic tools, and many bug fixes. Check out our 0.12.0 release video:
The previous release of Open3D introduced an exciting new module dedicated to 3D Machine Learning Open3D-ML, featuring support for 3D semantic segmentation workflows. In this release, we have extended Open3D-ML with the task of 3D object detection. This extension introduces support for new datasets, such as the Waymo Open dataset, Lyft level 5 open data, Argoverse, nuScenes, and KITTI. As always, all these datasets can be visualized out-of-the-box using our visualization tool, from Python or C++. The visualization tool is now equipped with the capability to render 3D bounding boxes along with all the previously existing modalities, e.g. semantic labels, XYZRGB, depth, normals, etc.
This extension introduces PointPillars, the first of many object detection models to come in the near future. To enable the implementation of PointPillars, we have added a set of new ML operators in Open3D, such as grid_sampling, NMS, and IOU. These operators are available to the community and can be used to build new models, using our Python and C++ APIs.
import os
import open3d.ml as _ml3d
import open3d.ml.torch as ml3d
cfg_file = "ml3d/configs/pointpillars_kitti.yml"
cfg = _ml3d.utils.Config.load_from_file(cfg_file)
model = ml3d.models.PointPillars(**cfg.model)
cfg.dataset['dataset_path'] = "/path/to/your/dataset"
dataset = ml3d.datasets.KITTI(cfg.dataset.pop('dataset_path', None), **cfg.dataset)
pipeline = ml3d.pipelines.ObjectDetection(model, dataset=dataset, device="gpu", **cfg.pipeline)
...
# run inference on a single example.
result = pipeline.run_inference(data)
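The IOU operator mentioned above reduces, for axis-aligned 2D boxes, to a few lines of numpy (the real operator also handles rotated 3D boxes and runs on GPU):

```python
import numpy as np

def iou_aabb(a, b):
    """Intersection over union for axis-aligned boxes [xmin, ymin, xmax, ymax]."""
    lo = np.maximum(a[:2], b[:2])                  # intersection lower corner
    hi = np.minimum(a[2:], b[2:])                  # intersection upper corner
    inter = np.prod(np.clip(hi - lo, 0, None))     # zero if boxes don't overlap
    area = lambda q: (q[2] - q[0]) * (q[3] - q[1])
    return inter / (area(a) + area(b) - inter)

print(iou_aabb(np.array([0, 0, 2, 2]), np.array([1, 1, 3, 3])))  # → 0.14285714285714285 (1/7)
```

NMS then repeatedly keeps the highest-scoring box and suppresses all others whose IoU with it exceeds a threshold.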
We have also updated our model zoo, providing new pretrained models on KITTI for the task of 3D object detection, and new semantic segmentation models on Paris-Lille3D and Semantic3D.
Remember that all the tools provided in Open3D-ML are compatible with PyTorch and TensorFlow!
RealSense sensor support has been upgraded to leverage the RealSense SDK v2. Users can now capture crisp 3D data from L515 devices. As part of this upgrade, we include support for the Bag file format (RSBagReader) and direct streaming from sensors. These operations can now be done through a new sensor class, RealSenseSensor, offering a simple and intuitive way to control your sensors.
import open3d as o3d
bag_reader = o3d.t.io.RSBagReader()
bag_reader.open(bag_filename)
while not bag_reader.is_eof():
im_rgbd = bag_reader.next_frame()
# process im_rgbd.depth and im_rgbd.color
bag_reader.close()
import json
import open3d as o3d
with open(config_filename) as cf:
rs_cfg = o3d.t.io.RealSenseSensorConfig(json.load(cf))
rs = o3d.t.io.RealSenseSensor()
rs.init_sensor(rs_cfg, 0, bag_filename)
rs.start_capture(True) # true: start recording with capture
for fid in range(150):
im_rgbd = rs.capture_frame(True, True) # wait for frames and align them
# process im_rgbd.depth and im_rgbd.color
rs.stop_capture()
For further information, check this tutorial.
Open3D 0.12 brings exciting CORE upgrades, including a new Neighbor Search module. This module supports typical neighbor search methods, such as KNN, radius search, and hybrid search, on both CPUs and GPUs, under a common interface!
Furthermore, we have created a new version of the TSDF integration algorithm accelerated on GPU. This version is able to achieve outstanding computational performance, requiring between 2 and 4 ms to integrate a pair of frames.
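At its core, TSDF integration maintains a per-voxel weighted running average of truncated signed distances across frames; this is the update the GPU kernels parallelize over all voxels. A numpy sketch (omitting the projection of voxels into the depth image and the truncation step):

```python
import numpy as np

def integrate(tsdf, weight, new_sdf, new_w=1.0, max_weight=64.0):
    """Fuse one frame's truncated signed distances into the running average.
    Sketch of the standard TSDF update; parameter names are illustrative."""
    tsdf = (tsdf * weight + new_sdf * new_w) / (weight + new_w)
    weight = np.minimum(weight + new_w, max_weight)  # cap to stay responsive
    return tsdf, weight

tsdf, w = np.zeros(4), np.zeros(4)
for frame_sdf in ([1.0, 0.5, -0.5, -1.0], [1.0, 0.3, -0.7, -1.0]):
    tsdf, w = integrate(tsdf, w, np.array(frame_sdf))
print(tsdf)  # averages the two frames: [1.0, 0.4, -0.6, -1.0]
```

The reconstructed surface is then the zero crossing of the fused field, extracted by marching cubes or rendered directly by ray casting.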
We have made a significant effort over the last months to put out a modern, real-time rendering API. This effort is still ongoing, and we are committed to bringing top-tier rendering capabilities with a strong emphasis on performance, versatility, ease of use, and beauty. As part of our commitment, in this release we have added relevant extensions to this API:
import open3d as o3d
from open3d.visualization import rendering

grey = rendering.Material()  # a plain material for the box ('grey' was undefined in the original snippet)
box = o3d.geometry.TriangleMesh.create_box(2, 2, 1)
render = rendering.OffscreenRenderer(640, 480)
render.scene.add_geometry("box", box, grey)
render.scene.camera.look_at([0, 0, 0], [0, 10, 0], [0, 0, 1])
img = render.render_to_image()
Camera::SetProjection(const Eigen::Matrix3d& intrinsics,
double near,
double far,
double width,
double height)
Label3D::Label3D(const Eigen::Vector3f& pos, const char* text)
class ColorGradingParams {
public:
ColorGradingParams(Quality q, ToneMapping algorithm);
void SetTemperature(float temperature);
void SetTint(float tint);
void SetContrast(float contrast);
void SetVibrance(float vibrance);
void SetSaturation(float saturation);
void SetChannelMixer(const Eigen::Vector3f& red,
const Eigen::Vector3f& green,
const Eigen::Vector3f& blue);
void SetShadowMidtoneHighlights(const Eigen::Vector4f& shadows,
const Eigen::Vector4f& midtones,
const Eigen::Vector4f& highlights,
const Eigen::Vector4f& ranges);
void SetSlopeOffsetPower(const Eigen::Vector3f& slope,
const Eigen::Vector3f& offset,
const Eigen::Vector3f& power);
void SetCurves(const Eigen::Vector3f& shadow_gamma,
const Eigen::Vector3f& midpoint,
const Eigen::Vector3f& highlight_scale);
};
Control shadow behaviors and post-processing effects:
class View
{
void SetPostProcessing(bool enabled);
void SetAmbientOcclusion(bool enabled, bool ssct_enabled);
void SetAntiAliasing(bool enabled, bool temporal);
void SetShadowing(bool enabled, ShadowType type);
};
The visualization module has been extended, using the new rendering capabilities and the GUI API, to create a unified visualizer displaying all the features contained in previous Open3D visualizers, e.g., camera animation, data selection, support for callbacks, and multiple shading modes.
This new visualizer, codenamed O3DViewer, will be the official visualization tool in Open3D starting with Open3D 0.14. At that time, previous visualizers will be deprecated.
We hope you find Open3D 0.12.0 exciting and useful. Happy coding!
Remember that you can reach out with questions, requests, or feedback through the following channels:
Find the full changelog here.
We are excited to present Open3D 0.11.0!
Open3D 0.11.0 introduces a brand new 3D Machine Learning module, nicknamed Open3D-ML. Open3D-ML is an extension of your favorite library to bring support for 3D domain-specific operators, models, algorithms, and datasets. In a nutshell, users can now create new applications combining the power of 3D data and state-of-the-art neural networks! Open3D-ML is included in all the binary releases of Open3D 0.11.0.
Open3D-ML comes with support for PyTorch 1.4+ and TensorFlow 2.2+, the two most popular machine learning frameworks. The first iteration of this module features a 3D semantic segmentation toolset, including training and inference capabilities for RandLA-Net and KPConv. The toolset supports popular datasets such as SemanticKITTI, Semantic3D, S3DIS (3D Semantic Parsing of Large-Scale Indoor Spaces), Toronto3D, and Paris-Lille-3D. Open3D-ML also provides a new model zoo compatible with PyTorch and TensorFlow, so that users can enjoy state-of-the-art semantic segmentation models without hassle.
We have endowed the new Open3D-ML module with a new data viewer tool. Users can now inspect their datasets and their models' predictions in an intuitive and simple way. This visualization tool includes support for PyTorch and TensorFlow frameworks and is fully customizable due to its Pythonic nature.
This viewer has been built upon the new visualization API, integrating the new Rendering and GUI modules. Thanks to the new visualization API, users can perform advanced rendering, fully programmatically from Python and C++. Users can also create slick GUIs with a few lines of Python code. Check out how to do this here.
The Open3D app has also been extended to include the following features:
Open3D 0.11 includes, for the first time, support for Linux ARM (64-bit) platforms. This has been a long-requested feature that finally made it into the release. You can now enjoy all Open3D features, including our new rendering and visualization pipelines, on OpenGL-enabled ARM platforms.
[Breaking] Please note that the API and the structure of Open3D have changed considerably after an intense refactoring process. You will need to update your code to use the new namespaces. Check the full changelog and the documentation for further information.
We hope you find Open3D 0.11.0 exciting and useful. Happy coding!
Remember that you can reach out with questions, requests, or feedback through the following channels:
Find the full changelog here.
The Open3D team