
Machine Vision and Navigation

✍ Scribed by Sergiyenko O. (ed.)


Publisher: Springer
Year: 2020
Tongue: English
Leaves: 862
Category: Library


✦ Table of Contents


Preface......Page 4
An Overview of Machine Vision and Navigation......Page 5
Acknowledgment......Page 14
Contents......Page 15
Contributors......Page 19
Abbreviations......Page 25
Part I Image and Signal Sensors......Page 34
Acronyms......Page 35
1.1.1 Image Sensing in Machine Vision Systems......Page 36
1.1.2 Image Capture by Digital Cameras......Page 37
1.1.3 Performance Metrics of Image Sensor Photodiodes......Page 39
1.2 Limitations of Current Inorganic-Based Imaging Systems......Page 40
1.2.2 Low Dynamic Range......Page 42
1.2.4 Inability to Cope with Illuminant Variation......Page 43
1.3 Overcoming Limitations of Conventional Imaging Systems Using Alternative Photosensing Materials......Page 44
1.3.1 Organic Photodetectors in Image Sensing......Page 45
1.3.1.1 OPDs Beyond Photodetection......Page 47
1.3.2 Metal Halide Perovskite (MHP)/Organohalide Perovskite (OHP) Photodetectors......Page 52
1.4 Phototransistors......Page 53
1.5 Conclusions and Outlook......Page 56
References......Page 57
2.1 Introduction......Page 65
2.2 Related Work......Page 68
2.3 Hardware of the Sensor......Page 69
2.4 Basic Software and Calibration......Page 70
2.4.1 Calibration of the Subsystems......Page 71
2.4.2 Panoramic Images......Page 72
2.4.3 Virtual Camera......Page 74
2.4.4 Calibration Between the Subsystems......Page 75
2.5.1 Detection of Objects......Page 76
2.5.2 Tracking of Objects......Page 77
2.5.3 Avoiding Obstacles......Page 78
2.6 Central Vision in the Hybrid Sensor......Page 81
2.7.1 Peripheral Vision......Page 82
2.7.2 Central Vision......Page 84
2.8 Conclusions......Page 87
References......Page 88
Abbreviations......Page 91
3.1 Introduction......Page 92
3.2 3D Image Construction......Page 93
3.2.2 Stereo Vision......Page 94
3.2.3 Shape from Shading......Page 98
3.2.4 Dynamic Vision......Page 100
3.3 Active 3D Imaging......Page 102
3.3.1 Time of Flight......Page 103
3.3.2 Structured Light......Page 105
3.3.3 Shape from Motion......Page 110
3.4 Deep Learning Approaches to 3D Vision......Page 111
3.5 Conclusion......Page 112
References......Page 113
Acronyms......Page 119
4.1 Introduction......Page 120
4.2.1 Substantiation of the Need to Design Devices for Parallel Nonlinear Image Intensity Transformations in Self-Learning Equivalent Convolutional Neural Structures (SLECNS)......Page 124
4.2.2 Brief Review and Background of Mathematical Operators, Which Are Implemented by Neurons......Page 128
4.2.3 Mathematical Models of Nonlinear Transformations of Image Intensities......Page 129
4.2.4.1 Simulation of Image Intensity Transformation with Mathcad......Page 130
4.2.4.2 Design and Simulation of Array Cells for Image Intensity Transformation Using OrCad PSpice......Page 132
4.2.4.3 Simulation of Nonlinear Transformation in Analog 64-Input and 81-Input Neuron Equivalentor......Page 138
4.3.1 Basic Theoretical Foundations, Equivalence Models, and Their Modification for SMC_CL_ADC......Page 139
4.3.2 Design of CL ADC CM-6 (8) (G): iv (the Iteration Variant) Based on DC-(G) (with Gray Code)......Page 143
4.3.3 Simulating Parallel Conveyor CL_ADC (P_C) Based on Eight 8-DC-(G) with Parallel Serial Output......Page 149
4.4 Conclusions......Page 151
References......Page 157
Part II Detection, Tracking and Stereoscopic Vision Systems......Page 162
Abbreviations......Page 163
5.1 Introduction......Page 164
5.2 Principles of Robotic Image-Assisted Total Stations......Page 166
5.2.1 Working Principles of Standard Total Station......Page 169
5.2.1.1 Electronic Distance Measurement......Page 170
5.2.1.2 Electronic Angle Measurement......Page 172
5.3.1.1 Rough Pointing/Coarse Search......Page 174
5.3.1.2 Fine Pointing/Fine Aiming......Page 175
5.3.2 Target Tracking......Page 177
5.4 Image-Based Object Recognition, Position Determination, and Tracking......Page 178
5.4.1 Image Processing Fundamentals......Page 179
5.4.2 Image Processing Algorithms for Feature Extraction......Page 180
5.4.2.2 SURF (Speeded-Up Robust Feature) Algorithm......Page 181
5.4.3 Object Recognition and Matching......Page 183
5.4.4 Object Position Determination......Page 185
5.5.1 Example of Static Object Recognition and Positioning......Page 186
5.5.2 Example of Kinematic Image-Based Object Tracking......Page 191
5.6 Quality Control of Total Stations in Kinematic Mode Using a Laser Tracker......Page 192
5.7 Conclusion......Page 197
References......Page 198
6.1 Introduction......Page 200
6.2 The Navigation Problem of Mobile Autonomous Robots......Page 201
6.3 EMW Reflection from the Surrounding Area in Different Frequency Ranges......Page 205
6.4 Mathematical Models of Random Processes Describing the Amplitudes of Echo Signals from the Distributed Objects......Page 208
6.5 Mathematical Models of Random Processes Describing Amplitudes of Echo Signals from Concentrated Objects......Page 216
6.6 Measurement of Amplitude Jump of Signals for Landmark Detection by Mobile Autonomous Robots......Page 219
References......Page 224
Abbreviations......Page 226
7.1 Introduction......Page 227
7.2.2 Image Acquisition......Page 229
7.2.3.3 Feature Extraction......Page 230
7.3 Agricultural Machine Vision Applications......Page 231
7.3.1 Plant Identification......Page 232
7.3.2 Process Control......Page 233
7.3.3 Machine Guidance and Control......Page 234
7.4.1 Image Processing for Blossom Isolation......Page 236
7.4.1.1 Methods of Data Transformations......Page 237
7.4.1.2 Testing Blossom Isolation......Page 241
7.4.1.3 Tree Isolation......Page 243
7.4.1.4 Blossom Isolation and Counting for Apple Trees......Page 245
7.4.1.5 Blossom Isolation and Counting for Peach Trees......Page 247
7.4.2 Results of Yield Estimation......Page 248
7.4.2.1 Transition from Blossom Count to Yield Estimation......Page 249
7.4.2.2 Derivation of Weight Values......Page 250
7.4.2.3 Statistical and Probabilistic Results......Page 255
7.4.3.1 Potential Problems with Over-Constraining the Sample Data......Page 256
7.5.1 Introduction......Page 258
7.5.2 Spatial Mapping......Page 259
7.5.3 Stereo Camera Operation......Page 260
7.5.4 Difficulties of Using Spatial Mapping to Isolate Objects......Page 261
7.6.1 Introduction......Page 262
7.6.2 Visual Feedback System for Navigation......Page 263
7.6.3 Experimental Ground Vehicle Platform......Page 265
7.7 Conclusion......Page 266
References......Page 267
Acronyms......Page 270
8.1 Introduction......Page 271
8.2.1 Artificial Biological Vision Model......Page 272
8.2.2 Other Binocular Vision Model......Page 277
8.3.1.1 Right Triangular Model......Page 279
8.3.1.2 Parallel Model......Page 280
8.3.1.4 Divergent Model......Page 282
8.3.2 Multi-Camera Models......Page 283
8.4.1.1 Artificial Biological SVS Applications......Page 285
8.4.2.1 Trinocular SVS Applications......Page 287
8.4.2.2 Multivision SVS Applications......Page 288
8.5 Conclusion......Page 289
References......Page 290
Acronyms......Page 295
9.1 Introduction......Page 296
9.2 Kalman Filter Framework-Based Probabilistic Inference......Page 297
9.2.1 Maximum Likelihood Estimator (MLE)......Page 298
9.2.2 Probabilistic Inference and Bayesian Rule......Page 299
9.2.3 Bayes Filter and Belief Update......Page 301
9.2.3.1 KF Framework......Page 302
9.3 Stereo Vision System......Page 303
9.3.1 Perspective Projection and Collinearity Constraint......Page 306
9.3.2 Epipolar Geometry and Coplanarity Constraint......Page 308
9.4 Uncertainties in Stereo Vision System......Page 310
9.5.1 Pose Tracking Using UKF and Stereo Vision......Page 312
9.5.2 Localization Approach-Based 2D Landmark Map......Page 314
References......Page 315
Part III Pose Estimation, Avoidance of Objects, Control and Data Exchange for Navigation......Page 318
10.1 Introduction......Page 319
10.2 Small Rotations and Angular Velocity......Page 320
10.3 Exponential Expression of Rotation......Page 322
10.4 Lie Algebra of Infinitesimal Rotations......Page 323
10.5 Optimization of Rotation......Page 326
10.6 Rotation Estimation by Maximum Likelihood......Page 329
10.7 Fundamental Matrix Computation......Page 335
10.8 Bundle Adjustment......Page 340
References......Page 344
11.1 Introduction......Page 346
11.2 General Aspects Associated with a Path-Generation and Tracking Maneuver......Page 348
11.3 Camera-Space Kinematics......Page 350
11.4 Characterization of Surface......Page 354
11.5 Path Tracking......Page 357
11.6 Experimental Validation......Page 362
References......Page 369
12.1 Introduction......Page 371
12.2 System Models......Page 373
12.2.3 Dynamic Model of the Mobile Manipulator......Page 374
12.2.4 Kinematic Model of the Vision System......Page 375
12.3.1 Passivity Property of the Vision System......Page 376
12.3.2 Design of the Kinematic-Based Controller......Page 377
12.3.2.2 Particular Consideration for the Mobile Manipulator......Page 379
12.3.3 Dynamic Compensation Controller......Page 380
12.3.4 Robustness Analysis......Page 382
12.4 Simulation and Experimental Results......Page 383
12.4.1 Mobile Robot......Page 384
12.4.2 Mobile Manipulator......Page 386
12.4.3 Robotic Manipulator......Page 395
12.5 Conclusions......Page 400
A.1 Appendix 1......Page 403
A.1.1 Mobile Robot Model......Page 404
A.1.2 Feature Selection......Page 405
B.1 Appendix 2......Page 408
References......Page 409
Acronyms......Page 412
13.2.1 Nature Swarm Adaption......Page 413
13.2.2 Tasks of Swarm Robotics......Page 415
13.2.3 Swarm Robotics Projects......Page 417
13.3 Robotics Vision Systems......Page 420
13.3.1.1 Historical Background......Page 421
13.3.1.2 Structure and Working Principles......Page 422
13.3.1.3 Data Reduction......Page 423
13.4 Path Planning Methods......Page 425
13.4.1 Path Planning Using Technical Vision System......Page 426
13.4.2 Secondary Objectives Placement for Surface Mapping......Page 428
13.5 Data Transferring Networks and Local Exchange of Information for Robotic Group......Page 429
13.5.1 Spanning Tree Forming for Swarm Robotics......Page 430
13.5.2 Leader Based Communication......Page 431
13.5.3 Feedback Implementation......Page 433
13.6.1 Simulation Frameworks......Page 436
13.6.2 Modeling System Structure......Page 438
13.6.3 Influence of Data Exchange on Path Planning......Page 439
13.6.4 Objects Extraction......Page 443
13.6.5 Effectiveness of Robotic Group......Page 446
13.7 Conclusions......Page 447
References......Page 448
Acronyms......Page 454
14.1 Introduction......Page 455
14.2.1 Planning in Discrete Spaces......Page 457
14.2.2 Planning in Continuous Spaces......Page 459
14.3 Local Planning......Page 460
14.3.1 Moving to 3D Environment Representation......Page 462
14.4 Neuroscience Research Related to Navigation......Page 463
14.5 PiPS: An Ego-Centric Navigation Framework......Page 465
14.5.1 Collision Checking in Perception Space......Page 468
14.5.2 Egocylindrical Perception Space for Enhanced Awareness......Page 474
14.5.3 Egocircular Representation and Trajectory Scoring......Page 477
14.5.3.1 Egocircle Trajectory-Based Cost Functions......Page 480
14.5.4 Working with Stereo Cameras......Page 486
14.6.1 World Synthesis......Page 488
14.6.2 Scenario Configuration......Page 489
14.6.3 Benchmarking......Page 491
14.7 Navigation Experiments......Page 492
14.7.1 Sector World with Laser Safe and Unsafe Obstacles......Page 494
14.7.2 Campus World and Office World......Page 495
14.7.3 Review of Outcomes......Page 496
14.7.4 Implementation Using Stereo Camera......Page 498
References......Page 501
Acronyms......Page 508
15.1 Introduction......Page 509
15.2.1 Levels of Automation......Page 510
15.2.2 Main Components......Page 512
15.2.3.1 Automated Storage and Retrieval System (AS/RS)......Page 517
15.2.3.3 Commercial Autonomous Vehicles......Page 518
15.2.3.4 Autonomous Vacuum Cleaners......Page 519
15.3.1.1 LiDAR Sensor......Page 520
15.3.1.2 Kinect Sensor......Page 522
15.3.2.1 Probabilistic Occupancy Map......Page 523
15.3.2.3 Scene Flow Segmentation......Page 524
15.3.3 Traffic Signs......Page 525
15.3.4 Landmarks......Page 526
15.4 Localization and Map Building......Page 527
15.4.1.1 LiDAR Sensor......Page 530
15.4.1.2 Kinect Sensor......Page 531
15.4.2.1 Wheel Encoders......Page 533
15.4.2.2 Global Positioning System (GPS)......Page 535
15.4.3.1 Simultaneous Localization and Mapping (SLAM)......Page 536
15.5.1.1 A* Algorithm......Page 542
15.5.1.2 Field D* Algorithm......Page 543
15.6 Case Study: Intelligent Transportation Scheme for Autonomous Vehicles in Smart Campus......Page 545
15.6.1 Applied Simultaneous Localization and Mapping (SLAM)......Page 546
15.6.2 Mechanical Design and Kinematic Model......Page 548
15.7 How Innovation in Business Models Will Change the Future of Cars......Page 549
15.7.2 Generation Z Consumer Profile and the Future of Vehicle......Page 550
15.7.3 Business Model Canvas for Car to Go......Page 551
15.8 Conclusions......Page 552
References......Page 553
Part IV Aerial Imagery Processing......Page 557
Abbreviations......Page 558
16.1 Introduction......Page 559
16.1.1 Autonomous and Noise-Cancel FR Navigation Systems......Page 560
16.1.2.1 The Main Objectives and Model of Signal Processing in the RM Channel CENS......Page 562
16.1.3 Analysis of Factors That Lead to Distortions in a Decisive Function Formed by a Correlation-Extreme Navigation System......Page 567
16.1.4 The Changes Impact Analysis in the FR Spatial Position on the CI Formation......Page 569
16.2.1 Formation of Reference Images of Three-Dimensional Form Object Binding......Page 576
16.2.2 Formation of Unimodal Decision Function of the Radiometric CENS......Page 581
16.3.1 Models of Current and Reference Images: Statement of the Task of Developing a Method for Localizing an Object Binding on Image......Page 586
16.3.2 The Solution of the Detection Problem and Multi-Threshold Selection of the OB in a Current Image with Bright False Objects......Page 589
16.3.3 Solution to the Problem of Forming a Unimodal Decision Function......Page 593
16.4 Conclusions......Page 595
References......Page 596
17.1 Introduction......Page 599
17.1.1 Related Work......Page 601
17.3 Imaging Model......Page 602
17.4 Optimization......Page 604
17.5 Experiments......Page 608
17.6 Conclusions......Page 612
References......Page 613
18.1 Introduction......Page 616
18.2 UAV Models......Page 618
18.2.2 Dynamic Model......Page 619
18.3.1 Image Processing......Page 620
18.3.2 Kinematics of the Vision System......Page 621
18.4.1 Position Based Controller......Page 624
18.4.1.1 Controller Analysis......Page 625
18.4.2 Image Based Controller......Page 626
18.4.3 Passivity Based Controller......Page 627
18.5 Compensation of UAV Dynamics......Page 628
18.5.1 Controller Analysis......Page 629
18.6 Simulation Results......Page 631
18.7 Conclusions......Page 638
Passive Property of the UAV Dynamic Model......Page 639
Passive Property of the Vision System......Page 640
Robustness Analysis of the Passivity Based Controller......Page 641
References......Page 643
Part V Machine Vision for Scientific, Industrial and Civil Applications......Page 645
Acronyms......Page 646
19.1 Introduction......Page 647
19.2 Data Compression......Page 649
19.2.1 Fovea Centralis......Page 652
19.3 Wavelet Transforms......Page 653
19.4 Image Compression......Page 656
19.4.1 Foveated Images......Page 658
19.5 Video Compression......Page 659
19.6 An Approach to Image Compression Based on ROI and Fovea Centralis......Page 660
19.6.2 Simulation Results......Page 661
19.7.1 Adaptive Binary Arithmetic Coding......Page 663
19.7.2 AFV-SPECK Algorithm......Page 665
19.7.3 Simulation Results......Page 666
References......Page 668
Acronyms......Page 673
20.1 Introduction......Page 674
20.2.1 Convolutional Neural Network Model Description......Page 675
20.2.2 Stairway Detections......Page 676
20.3 Experimental Results......Page 681
20.4 Conclusions......Page 683
References......Page 688
21.1 Introduction......Page 690
21.2 The Steady Method for Decoding Phase Images with Arbitrary Phase Shifts......Page 691
21.3 Method for Nonlinearity Compensation of the Source–Receiver Path of Optical Radiation in 3D Measurements Based on Phase Triangulation......Page 699
21.4 Comparing Methods of Structured Image Decoding at Nonlinearity of the Source–Receiver Path of Optical Radiation......Page 705
21.5 Methods for Expanding the Dynamic Range of Phase Triangulation Measurements......Page 715
21.6 Method for Estimating the Optimal Frequency of Spatial Modulation in Phase Triangulation Measurements......Page 718
21.7 Conclusion......Page 722
References......Page 723
22.1 Introduction......Page 725
22.2 Influence of Feedback Systems......Page 728
22.2.1 Energy Management System......Page 729
22.2.2 Height Control System......Page 731
22.2.3 Variation in the High Temperature Region......Page 734
22.3 Melt Pool Identification......Page 737
22.3.1 Sensitivity and Repeatability......Page 742
22.4 Conclusions......Page 743
References......Page 744
Acronyms......Page 747
23.1 Introduction......Page 748
23.2 Image Processing Median Filters......Page 751
23.3 Gas Path Measurement Images......Page 752
23.3.1 Objective Function......Page 755
23.4 Ant Colony Optimization......Page 756
23.4.1 Ant Colony Algorithm......Page 758
23.4.2 Filter Weight Optimization......Page 759
23.5 Numerical Experiments......Page 760
References......Page 762
Acronyms......Page 764
24.1.1 Context......Page 765
24.1.2 Short Description of the SEM......Page 766
24.2.1 Central Idea of This Study......Page 768
24.2.2.1 Identification of the Open-Loop Transfer Function of the Nanopositioner......Page 769
24.2.2.2 Nanopositioning in Closed Loop......Page 772
24.2.3 Control with LabVIEW™......Page 774
24.3 Angular Control: Feasibility Study with Matlab™......Page 776
24.4.1 Detecting the Patterns......Page 777
24.4.2 Detecting a Point to Reach......Page 778
References......Page 780
Abbreviations......Page 782
25.1 Introduction......Page 784
25.2 Design and Training Application for DCNNs and SVMs......Page 785
25.3 Review of Back Propagation Algorithm for Implementation......Page 788
25.4.1 Test Trial of Design and Training of a DCNN for Binary Classification......Page 790
25.4.2 Test Trial of Design and Training for Five Categories......Page 793
25.5 Support Vector Machines Based on Trained DCNNs......Page 794
References......Page 798
26.1 Introduction......Page 800
26.2.1 Bridge Introduction......Page 803
26.2.3 Collision Incidents Analysis......Page 804
26.3 Ship–Bridge Anti-Collision System......Page 806
26.3.1 Monitoring and Tracking System......Page 807
26.3.2.1 Warning Area Division......Page 810
26.3.2.3 Early Warning Event Risk Assessment......Page 811
26.4 Field Test......Page 813
26.4.2 Monitoring Interface......Page 814
26.4.3 Warning System......Page 815
26.4.4 Ship Identification......Page 816
References......Page 818
About the Authors......Page 821
Further Readings......Page 851
Index......Page 852


📜 SIMILAR VOLUMES


Machine Vision and Navigation
✍ Oleg Sergiyenko, Wendy Flores-Fuentes, Paolo Mercorelli 📂 Library 📅 2020 🏛 Springer International Publishing 🌐 English

This book presents a variety of perspectives on vision-based applications. These contributions are focused on optoelectronic sensors, 3D & 2D machine vision technologies, robot navigation, control schemes, motion controllers, intelligent algorithms, and vision systems. The authors focus

Human and Machine Vision
✍ Conference on Human and Machine Vision (1981 : Denver, Colo.), Barbara Hope, Azri 📂 Library 📅 1983 🏛 Elsevier Inc, Academic Press 🌐 English
Machine Vision: Automated Visual Inspect
✍ David Vernon 📂 Library 📅 1991 🏛 Prentice Hall 🌐 English

Machine vision is a multi-disciplinary subject, utilizing techniques drawn from optics, electronics, mechanical engineering, computer science and artificial intelligence. This book provides an introduction to the fundamental principles of machine vision for students. Emphasis is laid on providing th

Machine vision. Automated Visual Inspect
✍ Vernon D. 📂 Library 🌐 English

Published by Prentice Hall, 1991, 262 pp. Machine vision is a multi-disciplinary subject, utilizing techniques drawn from optics, electronics, mechanical engineering, computer science, and artificial intelligence. This book is intended to be an in-depth introduction to Mach

Machine Vision - Applications and System
✍ Fabio Solari, Manuela Chessa and Silvio P. Sabatini 📂 Library 📅 2012 🏛 Intech 🌐 English

Vision plays a fundamental role for living beings by allowing them to interact with the environment in an effective and efficient way. The ultimate goal of Machine Vision is to endow artificial systems with adequate capabilities to cope with not a priori predetermined situations. To this end, we hav

Machine Vision and Image Recognition
✍ Johan Pehcevski 📂 Library 📅 2020 🏛 Arcler Press 🌐 English

Machine Vision and Image Recognition informs readers about behavior fusion for visually guided service robots and the approaches and limitations in achieving vision in machines. Readers are informed about the use of a beacon tracker for dynamic omnidirectional vision localization and explain