MIT uses deep learning to process point clouds for self-driving cars

A self-driving car's lidar sensor sends out pulses of infrared light and measures how long they take to bounce off surrounding objects. From these returns the sensor builds a point cloud, a 3D snapshot of the car's surroundings that helps the vehicle drive. Making sense of raw point cloud data is difficult, and before the machine learning era, trained engineers had to identify the features they wanted to capture by hand. According to media reports, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have recently published a series of papers showing that deep learning can be used to automatically process point clouds for 3D imaging applications.
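To make the time-of-flight idea concrete, here is a minimal sketch of how pulse returns become 3D points: each round-trip time converts to a range, and the beam's angles place that range in space. The function name and inputs are hypothetical, not part of any lidar vendor's API.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def returns_to_points(round_trip_s, azimuth_rad, elevation_rad):
    """Convert lidar time-of-flight returns to 3D points.

    round_trip_s, azimuth_rad, elevation_rad: 1-D arrays of equal length,
    one entry per emitted pulse. Returns an (N, 3) array of xyz points.
    """
    r = C * round_trip_s / 2.0            # the pulse travels out and back
    x = r * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = r * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = r * np.sin(elevation_rad)
    return np.stack([x, y, z], axis=1)    # the raw "point cloud"

# e.g. a pulse returning after ~66.7 ns hit something roughly 10 m away
cloud = returns_to_points(np.array([66.7e-9]), np.array([0.1]), np.array([0.0]))
```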

Justin Solomon, a professor at MIT and one of the senior authors of the papers, said, "At present, 90% of computer vision and machine learning involves only 2D images. Our work is designed to help better represent the 3D world, and it is not limited to autonomous driving; it applies to any field where 3D shapes need to be understood."

Previously, most methods were not particularly successful at extracting the patterns from point cloud data that are needed to get useful information out of 3D points in space. In one of the team's papers, the researchers showed that their point cloud analysis method, EdgeConv, can classify and segment individual objects using a dynamic graph convolutional neural network. Wadim Kehl, a machine learning scientist at the Toyota Research Institute, said, "By constructing a graph of neighboring points, the algorithm can capture hierarchical patterns and thereby infer multiple types of general information that can be used for a variety of downstream tasks."
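As a rough illustration of the idea, not the authors' implementation, the sketch below builds a k-nearest-neighbor graph around each point and max-pools edge features over its neighbors, as the dynamic graph CNN paper describes. A random linear map stands in for the learned MLP, and the real network also rebuilds the graph dynamically in feature space at every layer.

```python
import numpy as np

def edge_conv(points, weights, k=4):
    """One EdgeConv-style layer (simplified sketch).

    points:  (N, F) per-point features (xyz for the first layer).
    weights: (2*F, F_out) stand-in for the learned MLP h_theta.
    Returns (N, F_out) features aggregated over each point's k-NN graph.
    """
    n = points.shape[0]
    # Pairwise squared distances -> k nearest neighbors per point.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]           # skip self at index 0

    out = np.empty((n, weights.shape[1]))
    for i in range(n):
        neighbors = points[knn[i]]                     # (k, F)
        # Edge features concatenate the center point with relative offsets.
        edges = np.concatenate(
            [np.repeat(points[i][None], k, axis=0), neighbors - points[i]],
            axis=1)                                    # (k, 2*F)
        # A bare linear map + ReLU stands in for h_theta; max-pool over edges.
        out[i] = np.maximum(edges @ weights, 0.0).max(axis=0)
    return out

rng = np.random.default_rng(0)
cloud = rng.normal(size=(128, 3))
features = edge_conv(cloud, rng.normal(size=(6, 64)))  # (128, 64)
```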

The team also studied other aspects of point cloud processing. For example, most sensors change their viewpoint as they move through the 3D world; each time the same object is rescanned, its position may differ from the previous scan. Merging multiple point clouds into a detailed view of the world requires aligning multiple sets of 3D points, a process called "registration". Yue Wang, one of the authors of the papers, said, "Registration allows us to integrate 3D data from different sources into a common coordinate system. Otherwise, we could not obtain meaningful information from these sources."
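Once corresponding points between two scans are known, the rigid motion that best aligns them has a classic closed-form solution via the SVD (the Kabsch/Procrustes step that many registration pipelines, DCP included, use as their final stage). A minimal sketch:

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid transform (R, t) mapping source onto target.

    source, target: (N, 3) arrays of corresponding points.
    """
    src_c, tgt_c = source.mean(0), target.mean(0)
    H = (source - src_c).T @ (target - tgt_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Aligning a rotated, shifted copy of a cloud recovers the motion exactly.
rng = np.random.default_rng(1)
src = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
R_est, t_est = rigid_align(src, src @ R_true.T + np.array([1.0, 0.0, 2.0]))
```

The hard part in practice, and what the learned methods below address, is finding the correspondences in the first place.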

A second paper by Solomon and Wang demonstrates a new registration algorithm called Deep Closest Point (DCP), which is better at finding a point cloud's distinguishing patterns, points, and edges so that it can be aligned with other point clouds. This is particularly important for autonomous vehicles when determining their position in the environment (localization).
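A hedged sketch of the soft-matching structure the DCP paper describes: per-point embeddings score source-target similarity, a row-wise softmax turns the scores into soft correspondences, and the SVD step above closes the loop. Random features stand in here for the embeddings DCP learns with a DGCNN encoder and attention.

```python
import numpy as np

def soft_correspondences(src_feats, tgt_feats, tgt_points, temperature=1.0):
    """DCP-style soft matching: each source point is paired with a
    softmax-weighted average of target points, scored by feature similarity.

    src_feats: (N, F), tgt_feats: (M, F) per-point embeddings.
    tgt_points: (M, 3). Returns (N, 3) "virtual" matched points that can
    be fed to an SVD solver such as rigid_align above.
    """
    scores = src_feats @ tgt_feats.T / temperature   # (N, M) similarities
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)        # row-wise softmax
    return probs @ tgt_points                        # soft-assigned matches

rng = np.random.default_rng(2)
matched = soft_correspondences(rng.normal(size=(100, 32)),
                               rng.normal(size=(100, 32)),
                               rng.normal(size=(100, 3)))
# R, t = rigid_align(src_points, matched)   # close the loop with the SVD step
```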

One limitation of DCP is that it assumes the entire shape is visible, rather than just one side. This means DCP cannot align partial views of a shape (so-called "partial-to-partial registration"). In a third paper, the researchers therefore proposed an improved algorithm called the Partial Registration Network (PRNet).
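One way to picture partial-to-partial registration, sketched below with a stand-in heuristic: match only the most salient keypoints of each scan, so that regions visible in just one view are less likely to corrupt the alignment. PRNet's actual keypoint scoring and matching are learned; feature norm is used here purely for illustration.

```python
import numpy as np

def select_keypoints(points, feats, num_keypoints=64):
    """Keep only the most salient points before matching.

    points: (N, 3), feats: (N, F). Saliency here is a simple feature-norm
    heuristic standing in for a learned score.
    """
    saliency = np.linalg.norm(feats, axis=1)          # (N,) per-point score
    idx = np.argsort(saliency)[-num_keypoints:]       # top-k most salient
    return points[idx], feats[idx]

rng = np.random.default_rng(3)
pts, fts = rng.normal(size=(500, 3)), rng.normal(size=(500, 32))
key_pts, key_fts = select_keypoints(pts, fts)
# Matching + SVD then runs on the keypoints only, and the loop can be
# iterated, re-scoring keypoints after each partial alignment improves.
```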

Solomon said that compared with 2D images and photos, existing 3D data tends to be messy and unstructured. His team set out to extract meaningful information from this chaotic 3D data without relying on the controlled environments that much machine learning technology requires. DCP and PRNet show that a key aspect of point cloud processing is context: the geometric features needed to align point cloud A with point cloud B may differ from those needed to align it with point cloud C. For example, in partial-to-partial registration, part of one point cloud's shape may not be visible in the other clouds and therefore cannot be used for registration.

Wang said that the team's tools are already being used by many researchers in computer vision and other fields. Next, the researchers hope to apply these algorithms to real-world data, including data collected from autonomous vehicles. Wang added that they also plan to explore training their systems with self-supervised learning, to minimize the amount of human annotation required. (Author: Rozanne)
