Fitting spherical objects in 3D point cloud using the geometrical constraints

Abstract

Estimating the parameters of a primitive shape from a 3-D point cloud usually encounters difficulties: the data are noisy and the estimation requires a large amount of computational time. Real point clouds of objects in a 3-D environment contain considerable noise and may be partially occluded when captured with a Kinect version 1 sensor. In this paper, we utilize and analyse the spherical estimation steps applied to point cloud data in Schnabel et al. [1]. From this, we propose a geometrical constraint for searching 'good' samples when estimating a spherical model and apply it in a new robust estimator named GCSAC (Geometrical Constraint SAmple Consensus). The proposed GCSAC algorithm takes geometrical constraints into account to construct qualified samples. Instead of randomly drawing minimal sample subsets, we observe that the explicit geometrical constraints of the spherical model can drive the sampling procedure. At each iteration of GCSAC, the minimal sample subset is selected by two criteria: (1) it ensures consistency with the estimated model via a rough inlier-ratio evaluation; (2) it satisfies the geometrical constraints of the objects of interest. Based on the qualified samples, the model estimation and verification procedures of a robust estimator are deployed in GCSAC. Compared with common robust estimators of the RANSAC family (RANSAC, PROSAC, MLESAC, MSAC, LO-RANSAC and NAPSAC), GCSAC outperforms them in terms of both the precision of the estimated model and the computational time. The implementations and evaluation datasets of the proposed method are made publicly available.

in four scenes; each scene includes 500 frames and each frame contains a ball on a table. This dataset is named the 'second sphere' dataset. The setup of our experiment is similar to that of [22] and is illustrated in Fig. 3.
To separate the data of a ball on the table, we implement the following steps. First, the table plane is detected using the method in [23]. After that, the original coordinate system is rotated and translated so that the y-axis is parallel with the normal vector of the table plane, as in Fig. 3. The point cloud data of the ball is then separated from the point cloud data of the table plane, as illustrated in Fig. 5.
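The sketch below illustrates only the last two steps in a minimal way: rotating the cloud so that the estimated table normal becomes the y-axis, then splitting table points from ball candidates by their height above the plane. It assumes the plane normal and a point on the plane have already been obtained (e.g., by the method of [23], which is not reproduced here); the function names and the 1 cm threshold are illustrative only.

```python
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix mapping unit vector a onto unit vector b (Rodrigues' formula)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v, c = np.cross(a, b), float(np.dot(a, b))
    if np.isclose(c, -1.0):
        # a and b are opposite: rotate 180 degrees about any axis orthogonal to a
        helper = np.array([1.0, 0.0, 0.0]) if abs(a[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        axis = np.cross(a, helper)
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def separate_ball(points, plane_normal, plane_point, dist_thresh=0.01):
    """Split an Nx3 cloud into (ball candidates, table points) after aligning the table normal with the y-axis."""
    R = rotation_aligning(np.asarray(plane_normal, dtype=float), np.array([0.0, 1.0, 0.0]))
    pts = (points - np.asarray(plane_point, dtype=float)) @ R.T
    # In the rotated frame the y-coordinate is the signed distance to the table plane.
    on_table = np.abs(pts[:, 1]) < dist_thresh
    return pts[~on_table], pts[on_table]
```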
4.2. Evaluation Measurements
Let us denote the ground-truth model by M_t(x_t, y_t, z_t, r_t) and the estimated one by M_e(x_e, y_e, z_e, r_e), where (x_t, y_t, z_t) and (x_e, y_e, z_e) are the coordinates of the center points and r_t, r_e are the radii.
Fig. 3. Illustration of our setup for collecting the dataset.
[Fig. 4 panels: Set1: center (0,0,0); Set2: center (0,0,0), a half of the object; Set3: center (0,0,0), occluded object]
Fig. 4. Point clouds of dC1, dC2, dC3 of the synthesized sphere dataset in the case of a 50% inlier ratio. The red points are inliers, whereas the blue points are outliers.
[Fig. 5 blocks: point cloud data of a scene → table plane detection and separation of a ball on the table → point cloud data of a ball]
Fig. 5. Illustration of separating the point cloud data of a ball from the scene.
To evaluate the performance of the proposed method, we use the following measurements:
Table 1. The characteristics of the generated sphere dataset (synthesized dataset)

Dataset     Radius   Spatial distribution of inliers   Spread of outliers
dC1, dC4    1        Around the full sphere            [-3, 3], [-4, 4]
dC2, dC5    1        Around the full sphere            [-3, 3], [-4, 4]
dC3, dC6    1        One half of the sphere            [-3, 3], [-4, 4]

- Let E_w denote the relative error of the estimated inlier ratio; the smaller E_w is, the better the algorithm is. It is defined by
E_w = \frac{|w - w_{gt}|}{w_{gt}} \times 100 \qquad (3)
where w_{gt} is the defined ground-truth inlier ratio and w is the inlier ratio of the estimated model:
w = \frac{\#\text{inliers}}{\#\text{samples}} \qquad (4)
- The total distance error S_d [24] is calculated as the sum of the distances from every point p_j to the estimated model M_e. S_d is defined by
S_d = \sum_{j=1}^{N} d(p_j, M_e) \qquad (5)
- The processing time t_p is measured in milliseconds (ms). The smaller t_p is, the faster the algorithm is.
- The error of the estimated center E_d (only for the synthesized datasets) is the Euclidean distance between the estimated center E_e and the ground-truth one E_t. It is defined by

E_d = \|E_e - E_t\| \qquad (6)
- The relative error of the estimated radius E_r is the relative difference between the estimated radius r_e and the ground-truth one r_t. It is defined by

E_r = \frac{|r_e - r_t|}{r_t} \times 100\% \qquad (7)
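As a numerical illustration of the measurements above, the following sketch computes E_w, S_d, E_d and E_r for a sphere model, assuming that the point-to-model distance d(p_j, M_e) is the absolute difference between the point-to-center distance and the radius; the function and variable names are ours, not taken from the released implementation.

```python
import numpy as np

def sphere_distances(points, center, radius):
    # d(p_j, M_e) for a sphere: | ||p_j - center|| - radius |
    return np.abs(np.linalg.norm(points - np.asarray(center), axis=1) - radius)

def evaluate(points, est, gt, w_gt, T=0.05):
    """est and gt are (center, radius) tuples; w_gt is the ground-truth inlier ratio."""
    d = sphere_distances(points, *est)
    w = np.count_nonzero(d < T) / len(points)                # inlier ratio, Eq. (4)
    E_w = abs(w - w_gt) / w_gt * 100.0                       # relative inlier-ratio error, Eq. (3)
    S_d = float(d.sum())                                     # total distance error, Eq. (5)
    E_d = float(np.linalg.norm(np.asarray(est[0]) - np.asarray(gt[0])))  # center error, Eq. (6)
    E_r = abs(est[1] - gt[1]) / gt[1] * 100.0                # relative radius error, Eq. (7)
    return {"w": 100.0 * w, "E_w": E_w, "S_d": S_d, "E_d": E_d, "E_r": E_r}
```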
The proposed method (GCSAC) is compared with six common estimators of the RANSAC family: the original RANSAC, PROSAC, MLESAC, MSAC, NAPSAC and LO-RANSAC. For the parameter settings, we fixed the thresholds of the estimators at T = 0.05 (i.e., 5 cm), w_t = 0.1 and s_r = 3 cm, where T is the distance threshold that decides whether a data point is an inlier or an outlier, and s_r is the radius of the sampling sphere used by the NAPSAC algorithm. For a fair evaluation, T is set to the same value for all seven fitting methods.
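To make the role of T concrete, the sketch below shows a plain RANSAC-style loop for sphere fitting: a minimal sample of four points defines a candidate sphere, and T decides which points count as inliers. The geometric-constraint hook is only a placeholder; it does not reproduce GCSAC's actual constraint or its rough inlier-ratio check, and all names are illustrative.

```python
import numpy as np

def sphere_from_points(p):
    # Exact sphere through 4 non-coplanar points: x^2 + y^2 + z^2 + Dx + Ey + Fz + G = 0
    A = np.hstack([p, np.ones((4, 1))])
    b = -np.sum(p ** 2, axis=1)
    D, E, F, G = np.linalg.solve(A, b)
    center = -0.5 * np.array([D, E, F])
    radius = float(np.sqrt(center @ center - G))
    return center, radius

def fit_sphere_ransac(points, T=0.05, iterations=500, constraint=None, seed=0):
    rng = np.random.default_rng(seed)
    best_model, best_count = None, -1
    for _ in range(iterations):
        sample = points[rng.choice(len(points), size=4, replace=False)]
        try:
            center, radius = sphere_from_points(sample)
        except np.linalg.LinAlgError:
            continue                        # coplanar sample: no unique sphere
        if constraint is not None and not constraint(center, radius):
            continue                        # e.g., reject radii outside an expected range
        d = np.abs(np.linalg.norm(points - center, axis=1) - radius)
        count = int(np.count_nonzero(d < T))
        if count > best_count:
            best_model, best_count = (center, radius), count
    return best_model, best_count
```

For instance, passing constraint=lambda c, r: 0.04 <= r <= 0.08 would restrict candidate spheres to roughly ball-sized radii (in metres); this is only an illustrative stand-in for the geometrical constraints proposed in the paper.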
4.3. The evaluation results
The performance of each method on the synthesized datasets is reported in Tab. 2. For all three datasets, GCSAC obtains the highest accuracy and the lowest computational time. More notably, even when using the same criteria as MLESAC, the proposed GCSAC obtains a better estimated model, as shown by the E_w measurements. The experimental results also confirm that the proposed constraints work well with different primitive shapes. Although E_w on the sphere dataset is high (E_w = 19.44%), this result is still better than those of the compared methods. Among the compared RANSAC variants, it is interesting that the original RANSAC generally gives stable results for estimating a sphere; however, it requires a high computational time. The proposed GCSAC estimates the models slightly better than the original RANSAC.
[Fig. 6 panels, left to right: RANSAC, PROSAC, MLESAC, MSAC, LO-RANSAC, NAPSAC, GCSAC]
Fig. 6. Sphere fitting by GCSAC and several RANSAC variants on the synthesized datasets with a 15% inlier ratio. Red points are inliers, blue points are outliers, and the estimated sphere is shown in green.
Table 2. The average evaluation results on the synthesized datasets. The experiments were repeated 50 times for statistically representative results.

Dataset          Measure    RANSAC    PROSAC    MLESAC    MSAC      LO-RANSAC   NAPSAC    GCSAC
'first sphere'   E_w (%)    23.01     31.53     85.65     33.43     23.63       57.76     19.44
                 S_d        3801.95   3803.62   3774.77   3804.27   3558.06     3904.22   3452.88
                 t_p (ms)   10.68     23.45     1728.21   9.46      31.57       2.96      6.48
                 E_d (cm)   0.05      0.07      1.71      0.08      0.21        0.97      0.05
                 E_r (%)    2.92      4.12      203.60    5.15      17.52       63.60     2.61
Table 3 also shows that the fitting results of the GCSAC method are more accurate than those of the RANSAC variants. The results on the 'second sphere' dataset are high for all of the methods;
Table 3. The average evaluation results on the 'second sphere' dataset. The experiments on the real dataset were repeated 20 times for statistically representative results.

Dataset           Measure    RANSAC   PROSAC   MLESAC   MSAC     LO-RANSAC   NAPSAC   GCSAC
'second sphere'   w (%)      99.77    99.98    99.83    99.80    99.78       98.20    100.00
                  S_d        29.60    26.62    29.38    29.37    28.77       35.55    11.31
                  t_p (ms)   3.44     3.43     4.17     2.97     7.82        4.11     2.93
                  E_r (%)    30.56    26.55    30.36    30.38    31.05       33.72    14.08
Fig. 7. Results of the ball and cone fittings. The point cloud data of a ball is shown as red points; the estimated sphere is marked by green points.
for example, the w measure of GCSAC reaches 100%, as illustrated in Fig. 7(a). This is because the ball data has a small noise ratio and the threshold T used to determine inlier points is large (0.05, i.e., 5 cm), while the radius of a ball is only 5 cm to 7 cm. Overall, the results show that the performance of GCSAC is better than that of the RANSAC variants when estimating primitive shapes from point cloud data with a low inlier ratio (less than 50%). They also show that the GCSAC algorithm can be used for detecting and finding spherical objects in real scenarios, for example when a visually impaired person enters a shared room or a kitchen to find a ball on the floor.
5. Conclusions
In this paper, we proposed GCSAC, a new RANSAC-based robust estimator for fitting primitive shapes to point clouds. The key idea of GCSAC is to combine consistency with the estimated model, checked via a rough inlier-ratio evaluation, with the geometrical constraints of the shapes of interest; together they help to select good samples for estimating a model. The proposed method was examined with common shapes (e.g., a sphere). The experimental results on the synthesized and real datasets confirm that it works well even on point clouds with a low inlier ratio, and its results were compared with those of the variants in the RANSAC family. In the future, we
will continue to validate GCSAC on other geometrical structures and evaluate the proposed method in real scenarios involving the detection of multiple objects.
Acknowledgment
This research is funded by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 102.01-2017.315.
References
[1] R. Schnabel, R. Wahl, and R. Klein. Efficient RANSAC for point-cloud shape detection. Computer Graphics Forum, 26(2):214–226, 2007.
[2] B. Dorit, J. E. Kai Lingemann, and N. Andreas. Real-time table plane detection using accelerometer information and organized point cloud data from Kinect sensor. Journal of Computer Science and Cybernetics, pages 243–258, 2016.
[3] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.
[4] P. H. S. Torr and A. Zisserman. MLESAC: A new robust estimator with application to estimating image geometry. Computer Vision and Image Understanding, 78(1):138–156, 2000.
[5] O. Chum and J. Matas. Matching with PROSAC – progressive sample consensus. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), pages 220–226, 2005.
[6] O. Chum, J. Matas, and J. Kittler. Locally optimized RANSAC. In DAGM-Symposium, volume 2781 of Lecture Notes in Computer Science, pages 236–243. Springer, 2003.
[7] S. Choi, T. Kim, and W. Yu. Performance evaluation of RANSAC family. In Proceedings of the British Machine Vision Conference 2009, pages 1–12. British Machine Vision Association, 2009.
[8] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, ISBN: 0521540518, second edition, 2004.
[9] D. R. Myatt, P. H. S. Torr, S. J. Nasuto, J. M. Bishop, and R. Craddock. NAPSAC: High noise, high dimensional robust estimation. In Proceedings of the British Machine Vision Conference (BMVC'02), pages 458–467, 2002.
[10] R. Raguram, J.-M. Frahm, and M. Pollefeys. A comparative analysis of RANSAC techniques leading to adaptive real-time random sample consensus. In Proceedings of the European Conference on Computer Vision (ECCV'08), pages 500–513, 2008.
[11] K. Lebeda, J. Matas, and O. Chum. Fixing the locally optimized RANSAC. In Proceedings of the British Machine Vision Conference 2012, pages 3–7, 2012.
[12] R. Raguram, O. Chum, M. Pollefeys, J. Matas, and J.-M. Frahm. USAC: A universal framework for random sample consensus. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):2022–2038, Aug. 2013.
[13] M. Kohei, U. Yusuke, S. Shigeyuki, and S. Shinichi. Geometric verification using semi-2D constraints for 3D object retrieval. In Proceedings of the International Conference on Pattern Recognition (ICPR), pages 2339–2344, 2016.
[14] C. S. Chen, Y. P. Hung, and J. B. Cheng. RANSAC-based DARCES: A new approach to fast automatic registration of partially overlapping range images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(11):1229–1234, 1999.
[15] D. Aiger, N. J. Mitra, and D. Cohen-Or. 4-points congruent sets for robust surface registration. ACM Transactions on Graphics, 27(3), 2008.
[16] K. Alhamzi and M. Elmogy. 3D object recognition based on image features: A survey. International Journal of Computer and Information Technology (ISSN: 2279-0764), 03(03):651–660, 2014.
[17] K. Duncan, S. Sarkar, R. Alqasemi, and R. Dubey. Multiscale superquadric fitting for efficient shape and pose recovery of unknown objects. In Proceedings of the International Conference on Robotics and Automation (ICRA'2013), 2013.
[18] G. Vosselman, B. Gorte, G. Sithole, and T. Rabbani. Recognising structure in laser scanner point clouds. In International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, pages 33–38, 2004.
[19] C. Marco, V. Roberto, and C. Rita. 3D Hough transform for sphere recognition on point clouds. Machine Vision and Applications, pages 1877–1891, 2014.
[20] A. Anas, N. Mark, and C. John. Sphere detection in Kinect point clouds via the 3D Hough transform. In International Conference on Computer Analysis of Images and Patterns, 2013.
[21] S. Garcia. Fitting primitive shapes to point clouds for robotic grasping. Master thesis in Computer Science (30 ECTS credits), School of Electrical Engineering, Royal Institute of Technology, 2009.
[22] Le V.H., Vu H., Nguyen T.T., Le T.L., Tran T.H., Da T.C., and Nguyen H.Q. Geometry-based 3-D object fitting and localization in grasping aid for visually impaired. In The Sixth International Conference on Communications and Electronics (IEEE-ICCE), 2016.
[23] Le V.H., M. Vlaminck, Vu H., Nguyen T.T., Le T.L., Tran T.H., Luong Q.H., P. Veelaert, and P. Wilfried. Real-time table plane detection using accelerometer information and organized point cloud data from Kinect sensor. Journal of Computer Science and Cybernetics, pages 243–258, 2016.
[24] P. Faber and R. B. Fisher. A buyer's guide to Euclidean elliptical cylindrical and conical surface fitting. In Proceedings of the British Machine Vision Conference 2001, number 1, pages 54.1–54.10, 2001.
Manuscript received 13-07-2017; accepted 21-03-2018.

Le Van Hung received his M.Sc. from the Faculty of Information Technology, Hanoi National University of Education, in 2013. He is now a PhD student at the International Research Institute MICA, HUST-CNRS/UMI-2954-INP Grenoble. His research interests include computer vision, RANSAC and its variants, and 3D object detection and recognition.
Vu Hai received the B.E. degree in Electronics and Telecommunications in 1999 and the M.E. degree in Information Processing and Communication in 2002, both from Hanoi University of Science and Technology (HUST). He received his Ph.D. in Computer Science from Osaka University in 2009. He has been a lecturer at the MICA International Research Institute since 2012. He is interested in computer vision techniques for medical imaging, HCI, and robotics.
Nguyen Thi Thuy received her B.Sc. degree in Mathematics & Informatics in 1994, her M.Sc. in Information Technology from HUST in 2002, and her Ph.D. in Computer Science from Graz University of Technology, Austria, in 2009. She has been a lecturer at the Faculty of Information Technology, Vietnam National University of Agriculture, since 1998 and head of the Department of Computer Science since 2011. Her research interests include object recognition, visual learning, video understanding, and statistical methods for computer vision and machine learning.
Le Thi Lan graduated in Information Technology from HUST, Vietnam. She obtained her M.S. degree in Signal Processing and Communication from HUST, Vietnam. In 2009, she received her Ph.D. degree from INRIA Sophia Antipolis, France, in video retrieval. She is currently a lecturer/researcher at the Computer Vision Department, HUST, Vietnam. Her research interests include computer vision, content-based indexing and retrieval, video understanding, and human-robot interaction.
Tran Thi Thanh Hai graduated in Information Technology from Hanoi University of Science and Technology in 2001. She received her M.E. and then her Ph.D. in Imagery, Vision and Robotics from INPG in 2002 and 2006, respectively. Since 2009, she has been a lecturer/researcher in the Computer Vision group, International Research Institute MICA, Hanoi University of Science and Technology. Her main research interests are visual object recognition, video understanding, human-robot interaction, and text detection for applications in computer vision.
